The PCP Theorem by Gap Amplification

1 The PCP Theorem by Gap Amplification
Proven by Irit Dinur Exposition by Radhakrishnan-Sudan Lecture by Nick Harvey

2 How to give a proof?
“Is prime?” The Prover writes a proof; the Verifier checks all of it. If the theorem is true, the Verifier is convinced. If the theorem is false, no proof can convince her.

3 Probabilistically Checkable Proofs
Lazy verifier: “Is prime?” The Prover writes a proof; the Verifier rolls dice and checks only a few bits of the proof. If the theorem is true, the Verifier is convinced. If the theorem is false, Pr[ Verifier is fooled ] < ½. (This probability is called the soundness.)

4 PCP Classes What theorems can be proven by PCPs?

5 PCP Classes What languages can be decided by PCPs?
Depends on # dice, # queries, alphabet size… The PCP Theorem [Arora, Lund, Motwani, Sudan, Szegedy ‘92] NP = PCP[ O(log n) dice, O(1) queries, alphabet size 2, soundness 1/2 ]

6 Toy PCP Example
Lazy verifier: “Is this graph 3-colorable?” Prover

7 Toy PCP Example
Lazy verifier: “Is this graph 3-colorable?” Only one bad edge! The Prover produces a coloring; the Verifier picks a random edge and checks its two colors. If the graph is 3-colorable, the Verifier is always convinced. If the graph is not 3-colorable, the Verifier can be fooled: Pr[ Verifier is fooled ] = 1 − O(1/n). Why 1/n rather than 1/n²? We can assume G is planar, so it has only O(n) edges.
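The toy verifier above can be sketched in a few lines of Python (the tiny graph and coloring are hypothetical examples, not from the talk):

```python
import random

def lazy_verifier(edges, coloring, rng):
    """Check one uniformly random edge of a claimed 3-coloring;
    accept iff its endpoints get different colors."""
    u, v = rng.choice(edges)
    return coloring[u] != coloring[v]

# Hypothetical toy graph: a triangle plus the edge (2, 3).
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
coloring = {0: "R", 1: "G", 2: "B", 3: "B"}  # exactly one bad edge: (2, 3)

# The verifier is fooled whenever it misses the bad edge: 3/4 of the time
# here, and 1 - O(1/n) in general.
rng = random.Random(1)
trials = 10000
accepts = sum(lazy_verifier(edges, coloring, rng) for _ in range(trials))
print(accepts / trials)  # close to 0.75
```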

8 Toy PCP Example
Lazy verifier: “Is this graph 3-colorable?” Only one bad edge! Pr[ Verifier is fooled ] = 1 − O(1/n). This soundness is much too low! We want Ω(n) bad edges, i.e., we want the GAP to be Ω(1)!

9 Ideal Reduction
A transformation T from {0,1}* (any NP language) to graphs on poly(n) vertices: strings in L map to 3-colorable graphs, and strings not in L map to “far from 3-colorable” graphs, i.e., graphs in which every 3-coloring leaves a constant fraction of edges bad. No such transformation is known. But we can do something similar!

10 Generalized Graph Coloring
G = ( V, E, Σ, C ), where Σ is the “alphabet” of colors and C = { ce : e ∈ E } is a set of constraints, each a predicate ce : Σ×Σ → {0,1} on the colors of e’s endpoints. Let GK = { instances with |Σ|=K }.

11 Generalized Graph Coloring
G is satisfiable if some coloring satisfies all constraints. G is ε-far from satisfiable if every coloring leaves at least ε·|E| constraints unsatisfied.
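These definitions can be made concrete with a tiny sketch (the instance and the helper `unsat_fraction` are illustrative assumptions, not from the talk):

```python
from itertools import product

# A tiny generalized-coloring instance G = (V, E, Sigma, C):
# each constraint is an arbitrary predicate c_e : Sigma x Sigma -> {0,1}.
V = [0, 1, 2]
E = [(0, 1), (1, 2), (0, 2)]
Sigma = ["a", "b"]
C = {
    (0, 1): lambda x, y: x != y,  # "not equal" constraint
    (1, 2): lambda x, y: x != y,
    (0, 2): lambda x, y: x != y,  # odd cycle over 2 colors: unsatisfiable
}

def unsat_fraction(coloring):
    bad = sum(1 for e in E if not C[e](coloring[e[0]], coloring[e[1]]))
    return bad / len(E)

# G is eps-far from satisfiable if EVERY coloring leaves >= eps*|E|
# constraints unsatisfied; brute-force the best coloring.
eps = min(unsat_fraction(dict(zip(V, c))) for c in product(Sigma, repeat=len(V)))
print(eps)  # 1/3: every 2-coloring of a "not-equal" triangle breaks an edge
```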

12 Main Theorem There exists a constant ε>0 and a polynomial-time transformation T from {0,1}* (any NP language) to G16 (generalized graph coloring instances with |Σ|=16) such that strings in L map to satisfiable instances and strings not in L map to ε-far-from-satisfiable instances.

13 Main Theorem Under T, 3-colorable graphs map to satisfiable instances and non-3-colorable graphs map to ε-far-from-satisfiable instances. It’s trivial but worth mentioning that ordinary graph 3-coloring is already a special case of generalized graph coloring in G16. Why? Because we can choose the constraints to ban use of all colors other than the first 3 and to check inequality of the first 3 colors.

14 Main Theorem ⇒ Corollary
There exists a constant ε>0 and a polynomial-time transformation T : graphs → G16 s.t. 3-colorable graphs map to satisfiable instances and non-3-colorable graphs map to ε-far-from-satisfiable instances. Corollary: NP ⊆ PCP[ O(log n) dice, 2 queries, alphabet size 16, soundness 1−ε ]. The corollary really says nothing new. Don’t view PCPs as scary objects: these generalized graph colorings ARE PCPs. If the coloring is valid, the verifier accepts; if it is not, then most edges are invalid, so the inconsistency is detected by random queries. Note: this is not the best known version of the PCP theorem; it is roughly comparable to the Arora et al. theorem. The soundness here is weak, with ε roughly 1/billion; alternatively, we could make a billion queries and drive the soundness down to ½.

15 How To Prove Main Theorem?
Call ε the gap; the theorem is trivial for ε=O(1/n). Idea: find a transformation T that increases the gap by a constant factor, increases # vertices & edges by only a constant factor, and maintains alphabet size 16. Apply T only O(log n) times: the gap grows 1/n → ε’ → ε’’ → ε’’’ → … until it reaches a constant, while satisfiable (3-colorable) instances stay satisfiable and unsatisfiable ones become ε-far from satisfiable in G16. Done! Note: the transformation stops working once ε gets fairly big, say ε = 1/billion.
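The amplification loop can be sketched as a toy model (the factor c, the target gap, and the name `amplify` are illustrative assumptions):

```python
# Gap amplification as iteration: each application of T multiplies the gap
# by a constant c (and the instance size by a constant), so a gap of 1/n
# reaches a constant after O(log n) applications.
def amplify(eps0, c, eps_star):
    eps, rounds = eps0, 0
    while eps < eps_star:
        eps = min(c * eps, eps_star)  # T stops helping once the gap is constant
        rounds += 1
    return rounds

n = 1_000_000
rounds = amplify(1.0 / n, c=2.0, eps_star=1e-3)
print(rounds)  # 10: about log2(n * eps_star) applications, i.e. O(log n)
```

Since each round also multiplies the size by a constant, O(log n) rounds keep the final instance polynomial in n.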

16 How To Prove Main Theorem?
Find a transformation T that increases the gap by a constant factor c, increases # vertices & edges by a constant factor, and maintains alphabet size 16.
Handy Lemma [Arora et al.]: There exists a constant α>0 such that for any constant alphabet size K, there exists a map M : GK → G16 that shrinks the gap by a factor α. Pro: the alphabet size shrinks. Con: the gap also shrinks. We can ignore this by using a bigger constant c; the crucial point is that α is independent of K! The “Handy Lemma” is really a baby PCP theorem. One could prove it using the Long Code, or the degree-2 polynomials version in Radhakrishnan–Sudan.

17 Dinur’s Goal Find a transformation T that:
Maps G16 → GK, where K is a constant; increases the gap by a large constant factor c; increases # vertices & edges by a constant factor. Note: 0 ≤ gap ≤ 1, so the transformation necessarily fails once the gap is large enough.

18 What to do?
Setup: Given G=(V,E,Σ,C), construct G’=(V’,E’,Σ’,C’). We want 2 queries in G’ to somehow test the constraints of many edges in G.
Silly Approach: Let G’ be the complete graph and increase the alphabet to Σ’=Σ^n: each vertex stores an “opinion” of every other vertex’s color. The constraint in C’ for edge {u,v} checks that u’s opinions all match v’s opinions, and that all constraints in C are satisfied.

19 What to do?
Silly Approach, continued: the complete graph with alphabet Σ’=Σ^n is much too large!

20 What to do?
Dinur’s Approach: Let t be a constant. Each vertex stores “opinions” of the colors of all its neighbors at distance ≤ t. We want the storage at each vertex to be a constant, so vertices should have constant degree d; then the storage is at most 16^(d^t). The Verifier queries two nearby vertices and checks some of their nearby edges. (Rather than describing the constraints in G’ explicitly, we describe them as a deterministic algorithm performed by the verifier; this algorithm can easily be encoded as constraints.) So are we done? As long as all vertices’ opinions match, yes. But the prover (who chooses the colors) might try to lie to us, deliberately giving vertices inconsistent opinions. So we need to ensure that inconsistent opinions are detected by the verifier with good probability.

21 Implementing Dinur’s Approach
Let t be a large constant. Pick a vertex a at random and do a random walk a=v0, v1, v2, …, vT=b, halting at each step w.p. 1/t, so E[length of walk]=t. Both a and b have opinions about the colors of the vertices in B(a,t) ⋂ B(b,t) (the balls of radius t around a and b). The Verifier tests whether a’s and b’s opinions agree, and whether the constraints are satisfied on those edges.

22 When does the Verifier reject?
Failure mode 1: conflicting opinions. Let t be a large constant; pick a vertex a at random and do a random walk, stopping at each step w.p. 1/t. Both a and b have opinions about the vertices in B(a,t) ⋂ B(b,t); the Verifier rejects if a’s and b’s opinions disagree.

23 When does the Verifier reject?
Failure mode 2: a violated constraint. As before, the Verifier also rejects if some constraint on the checked edges is unsatisfied by the (agreeing) opinions.

24 Analysis of Dinur’s Approach
If G is satisfiable, then the prover supplies a valid proof and the verifier accepts with probability 1.
Main Lemma: Let G be ε-far from satisfiable. Then the verifier rejects every coloring with probability Ω(ε·t) (until ε reaches 1/t). Caveats: (1) G must have constant degree d; (2) G must have good expansion.
Corollary: Dinur’s goal is achieved (modulo the caveats)! We have a transformation T that maps G16 → GK where K ≤ 16^(d^t), increases the gap by a factor Ω(t), and increases # vertices & edges by a constant factor. Emphasize: the “random-walk verifier” can just be encoded as a bunch of constraints.

25 View of Dinur’s Transformation
G → G’: new edges connect vertices at distance ≤ t; each possible random walk of the verifier corresponds to a new edge. The “tests” performed by the verifier can be encoded as constraints, but for us it is more convenient to think of the verifier as an algorithm. Note: the random walk has unbounded length, so in principle there should be an edge between ANY pair of vertices. This is undesirable: the graph would grow too quickly. So we actually truncate the random walk after 100·t steps; the truncated distribution is not very different from the untruncated one, so the analysis goes through.

26 Remaining tasks for this talk
Proof of Main Lemma. Transforming graphs to constant degree. Transforming graphs to expanders. (The last two are postponed until the end.)

27 Breaktime Puzzle! I have two (biased) dice Da and Db.
Say their outcomes have probabilities a1 ≥ a2 ≥ … ≥ a6 with Σi ai = 1, and b1 ≥ b2 ≥ … ≥ b6 with Σi bi = 1 (the ordering is irrelevant). Roll Da and Db once each, independently. What is Pr[ outcomes differ ]? (I want a very simple lower bound, depending on the ai’s & bi’s.) E.g., if both dice are uniform, the probability is 5/6; but if a1=b1=1 then the probability is 0. SOLUTION: a simple lower bound is max{ 1−a1, 1−b1 }.
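The puzzle and its bound can be checked exactly (a small sketch; the `loaded` die is a made-up example):

```python
def pr_differ(pa, pb):
    """Exact Pr[two independent biased dice show different values]."""
    return 1.0 - sum(x * y for x, y in zip(pa, pb))

def lower_bound(pa, pb):
    """The puzzle's answer: Pr[same] = sum_i a_i*b_i <= max_i a_i, and
    symmetrically <= max_i b_i, so Pr[differ] >= max(1-max a, 1-max b)."""
    return max(1.0 - max(pa), 1.0 - max(pb))

uniform = [1 / 6] * 6
loaded = [0.5, 0.3, 0.1, 0.05, 0.03, 0.02]  # hypothetical biased die
print(pr_differ(uniform, uniform))  # 5/6 for two fair dice
assert pr_differ(loaded, loaded) >= lower_bound(loaded, loaded)
assert pr_differ(loaded, uniform) >= lower_bound(loaded, uniform) - 1e-12
```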

28 Review of first half
Map graph 3-coloring to generalized graph coloring in G16; the gap starts at 1/n and grows ε’ → ε’’ → ε’’’ → … until it reaches a constant. We want a transformation T that maps G16 → GK where K is a constant, increases the gap by a large constant factor c, and increases # vertices & edges by a constant factor.

29 Analysis of Dinur’s Approach
Suppose G is ε-far from satisfiable, and let A’ be any coloring of G’. Want to show: Pr[ verifier detects a fault in G’ ] ≥ Ω(ε·t). How? Show that A’ induces a coloring A of G; A has many faulty edges by assumption; then show that faults in A also cause A’ to have faults. Formally: let N be the random variable counting faulty edges of G detected by the verifier. Want to show Pr[ N>0 ] ≥ Ω(ε·t), via the second-moment method: Pr[ N>0 ] ≥ E[ N ]² / E[ N² ]. We will show E[ N ] is large and E[ N² ] is small. How does one show a random variable is non-zero? Concentration. Can we use Chernoff bounds? No; the next best thing is the second-moment method, and our inequality for Pr[ N>0 ] is just a handy reformulation of Chebyshev.

30 Remaining Tasks
Define the coloring A of G induced by the coloring A’ of G’. Let F = { faulty edges of coloring A } and let N be the RV counting edges of F detected by the verifier. Show E[ N ] ≥ Ω( t·|F| / |E| ) and E[ N² ] ≤ O( t·|F| / |E| ). Thus Pr[ N>0 ] ≥ E[ N ]² / E[ N² ] ≥ Ω( t·|F| / |E| ) ≥ Ω(t·ε). (End of Main Lemma.)

31 Induced Coloring A
Notation: A’u(v) = u’s opinion of v’s color in coloring A’; A(v) = v’s color in A. Intuition for defining A: the color A(v) should be a “majority opinion” of v’s color in A’, and the definition should be easy to analyze alongside the verifier’s behavior; since the verifier uses a random walk, we define A via a random walk. Formally: perform a random walk of length ≤ t−1 from v in G’; let the final vertex be u; then u votes for the color A’u(v). Let A(v) be the majority vote.

32 How to pick A(v)?
Example: walks from v end at various vertices s, u, w, x, y, z, and each endpoint casts a vote for its opinion of v’s color (A’s(v), A’z(v), …). If Red receives the most votes, set A(v) = Red. Formally: perform a random walk of length ≤ t−1 from v in G’; let the final vertex be u; then u votes for color A’u(v). Let A(v) be the majority vote.
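The majority-vote definition can be approximated by sampling walks (a sketch; the sampling, the toy triangle, and the `opinion` table are illustrative assumptions, since the real definition takes an exact majority over the walk distribution):

```python
import random
from collections import Counter

def induced_color(v, neighbors, opinion, t, rng, samples=2000):
    """Estimate A(v): plurality vote over endpoints u of lazy walks from v
    of length <= t-1; endpoint u votes for its opinion opinion[u][v]."""
    votes = Counter()
    for _ in range(samples):
        u, steps = v, 0
        # Continue the walk w.p. 1 - 1/t, capped at t-1 steps.
        while steps < t - 1 and rng.random() >= 1.0 / t:
            u = rng.choice(neighbors[u])
            steps += 1
        votes[opinion[u][v]] += 1
    return votes.most_common(1)[0][0]

neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
opinion = {u: {0: "R"} for u in neighbors}  # everyone thinks vertex 0 is "R"
print(induced_color(0, neighbors, opinion, t=3, rng=random.Random(0)))  # "R"
```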

33 Remaining Tasks
Define the coloring A of G induced by the coloring A’ of G’. ✓ Let F = { faulty edges of coloring A } and let N be the RV counting edges of F detected by the verifier. Show E[ N ] ≥ Ω( t·|F| / |E| ) and E[ N² ] ≤ O( t·|F| / |E| ). Thus Pr[ N>0 ] ≥ E[ N ]² / E[ N² ] ≥ Ω( t·|F| / |E| ) ≥ Ω(t·ε). (End of Main Lemma.)

34 E[ N ] is large
Let F = { faulty edges of coloring A } and let N be the RV counting edges of F detected by the verifier. Focus on a single edge e={u,v}∈F, and suppose the walk traverses e with {u,v} ⊆ B(a,t) ⋂ B(b,t). The Verifier performs 3 tests: (1) Is A’a(u) = A’b(u)? (2) Is A’a(v) = A’b(v)? (3) Is ce( A’a(u), A’b(v) )=1? Intuition: if the opinions of nearby vertices differ, then Tests 1 and 2 will fail; if the opinions of nearby vertices match, then Test 3 will fail.

35 E[ N ] is large
As before, focus on e∈F and the 3 tests; Pr[ verifier detects fault ] ≥ maxi Pr[ Test i detects fault ]. Define the unknown parameters pu = Pr[ A’a(u) = A(u) | d(a,u)<t ] and pv = Pr[ A’b(v) = A(v) | d(b,v)<t ]. (pu asks: assuming a has an opinion about u’s color, does it match the majority opinion?) We want to show the detection probability is large. We don’t know the values of these parameters, but we will argue that no matter what values they have, one of the tests will catch the fault.

36 E[ N ] is large
Verifier performs 3 tests: (1) Is A’a(u) = A’b(u)? (2) Is A’a(v) = A’b(v)? (3) Is ce( A’a(u), A’b(v) )=1? Define pu = Pr[ A’a(u) = A(u) | d(a,u)<t ] (unknown). Then
Pr[ Test 1 detects fault ] ≥ Pr[ d(a,u)<t ∧ d(b,v)<t ∧ Test 1 detects fault ]
≥ (1−1/e)² · Pr[ Test 1 detects fault | d(a,u)<t ∧ d(b,v)<t ]
= (1−1/e)² · Pr[ A’a(u) ≠ A’b(u) | d(a,u)<t ∧ d(b,v)<t ].
Now recall the puzzle: roll two independent, biased dice Da and Db. Pr[ values differ ] ≥ 1 − (prob of the most likely value for Da).

37 E[ N ] is large
Continuing: Pr[ Test 1 detects fault ] ≥ (1−1/e)² · Pr[ A’a(u) ≠ A’b(u) | d(a,u)<t ∧ d(b,v)<t ] ≥ (1−1/e)² · (1 − (prob of the most likely value for A’a(u))). By definition, A(u) is the most likely value for A’a(u)! So Pr[ Test 1 detects fault ] ≥ (1−1/e)²·(1−pu).

38 E[ N ] is large
For Test 3: Pr[ Test 3 detects fault ] ≥ Pr[ d(a,u)<t ∧ d(b,v)<t ∧ Test 3 detects fault ] ≥ (1−1/e)² · Pr[ Test 3 detects fault | d(a,u)<t ∧ d(b,v)<t ] ≥ (1−1/e)² · Pr[ A’a(u)=A(u) ∧ A’b(v)=A(v) | d(a,u)<t ∧ d(b,v)<t ] = (1−1/e)²·pu·pv, since a and b are independent. (If both opinions match the majority colors, Test 3 fails because e is faulty under A, i.e., ce( A(u), A(v) )=0.)

39 E[ N ] is large
Summary: Pr[ Test 1 detects fault ] ≥ (1−1/e)²·(1−pu); Pr[ Test 2 detects fault ] ≥ (1−1/e)²·(1−pv); Pr[ Test 3 detects fault ] ≥ (1−1/e)²·pu·pv. Analyzing the max is just simple calculus: WLOG pu = pv, and the worst case has 1−pu = pu². So
Pr[ Verifier detects fault ] ≥ maxi Pr[ Test i detects fault ] ≥ (1−1/e)² · max{ 1−pu, 1−pv, pu·pv } ≥ (1−1/e)² · (3−√5)/2 ≥ 1/8.

40 E[ N ] is large
Punchline: e is a faulty edge of G under coloring A, and if the verifier’s random walk traverses e, then the verifier detects a fault with probability at least 1/8.

41 Wrapup: E[ N ] is large
Notation: F = { faulty edges of coloring A }; N = # edges of F detected as faulty by the verifier. Focus on a single edge e∈F. Let Me = # times e is traversed by the verifier’s random walk, and Ne = # times e is traversed and detected to be faulty; thus N = Σe∈F Ne. Now
E[ Ne ] = E[ Ne | Me=0 ]·Pr[ Me=0 ] + E[ Ne | Me≥1 ]·Pr[ Me≥1 ] ≥ (1/8)·E[ Me | Me≥1 ]·Pr[ Me≥1 ] = (1/8)·E[ Me ].
Since G is regular, the uniform distribution is stationary, so the i-th edge of the walk is e with probability 1/|E|. Hence E[ Me ] = E[ length of random walk ] / |E| = t/|E|, and
E[ N ] = Σe∈F E[ Ne ] ≥ t·|F| / 8|E| = ε·t/8.

42 Remaining Tasks
Define the coloring A of G induced by the coloring A’ of G’. ✓ Let F = { faulty edges of coloring A }, N = # edges of F detected by the verifier. Show E[ N ] ≥ Ω( t·|F| / |E| ). ✓ Show E[ N² ] ≤ O( t·|F| / |E| ). Thus Pr[ N>0 ] ≥ E[ N ]² / E[ N² ] ≥ Ω( t·|F| / |E| ) ≥ Ω(t·ε). (End of Main Lemma.)

43 E[ N² ] is small
Let e1, e2, … be the edges chosen by the verifier, and let χi be the indicator that ≥ i edges are chosen and ei ∈ F. Simplifying assumption: the edges are chosen independently, not by a random walk. Then Pr[χj=1 | χi=1] = (1−1/t)^(j−i) · |F|/|E| for j>i, so
E[N²] = E[ (Σi≥1 χi)² ] ≤ 2 Σj≥i E[ χi χj ] = 2 Σi≥1 Pr[χi=1] Σj≥i Pr[χj=1 | χi=1]
= 2 ( t·|F|/|E| ) ( 1 + Σk≥1 (1−1/t)^k |F|/|E| ) < 2 ( t·|F|/|E| ) ( 1 + t·|F|/|E| ) < 4 ( t·|F|/|E| ),
so long as ε = |F|/|E| < 1/t.
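The second-moment inequality Pr[N>0] ≥ E[N]²/E[N²] holds for any nonnegative random variable and can be checked numerically; here a binomial stands in for N (an assumption for illustration, not the verifier’s actual distribution):

```python
import random

# Second-moment method: for a nonnegative RV N,
# Pr[N > 0] >= E[N]^2 / E[N^2]  (Cauchy-Schwarz; a reformulation of Chebyshev).
rng = random.Random(0)
samples = [sum(rng.random() < 0.1 for _ in range(20)) for _ in range(50000)]

p_pos = sum(1 for n in samples if n > 0) / len(samples)
e_n = sum(samples) / len(samples)
e_n2 = sum(n * n for n in samples) / len(samples)

print(p_pos, e_n**2 / e_n2)  # the first value exceeds the second
assert p_pos >= e_n**2 / e_n2
```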

44 E[ N² ] is small
Let e1, e2, … be the edges chosen by the verifier, and χi the indicator that ≥ i edges are chosen and ei ∈ F.
Claim: If G is an expander, then the ei’s on the random walk are almost pairwise independent: Pr[χj=1 | χi=1] ≤ (1−1/t)^(j−i) · ( |F|/|E| + (1−η²/d²)^(j−i−1) ) for j>i, where the extra term reflects the expansion of G. This expansion term just percolates through the previous computation:
E[N²] ≤ 2 Σi≥1 Pr[χi=1] Σj≥i Pr[χj=1 | χi=1]
≤ 2 ( t·|F|/|E| ) ( 1 + Σk≥1 (1−1/t)^k ( |F|/|E| + (1−η²/d²)^(k−1) ) )
< 2 ( t·|F|/|E| ) ( 1 + t·|F|/|E| + d²/η² ).
If G is a good expander, then d²/η² is only a constant, so E[N²] = O( t·|F|/|E| ) and we’re done.

45 E[ N² ] is small
Claim: If G is an expander, then the ei’s on the random walk are almost pairwise independent: Pr[ej∈F | e1∈F] ≤ |F|/|E| + (λ2/d)^(j−2) for j>1, where λ2 is the second-largest eigenvalue of the adjacency matrix. Plugging this into the computation on the previous slide again gives E[N²] = O( t·|F|/|E| ).

46 Remaining Tasks
Define the coloring A of G induced by the coloring A’ of G’. ✓ Let F = { faulty edges of coloring A }, N = # edges of F detected by the verifier. Show E[ N ] ≥ Ω( t·|F| / |E| ). ✓ Show E[ N² ] ≤ O( t·|F| / |E| ), ✓ using the Claim: if G is an expander, then the ei’s on the random walk are almost pairwise independent, i.e., Pr[ej∈F | e1∈F] ≤ |F|/|E| + (λ2/d)^(j−2) for j>1. Thus Pr[ N>0 ] ≥ E[ N ]² / E[ N² ] ≥ Ω( t·|F| / |E| ) ≥ Ω(t·ε). (End of Main Lemma.)

47 Random Walks on Expanders
Claim: Pr[ej∈F | e1∈F] ≤ |F|/|E| + (λ2/d)^(j−2) for j>1.
Notation: λ2 ≤ d − η²/d is the second-largest eigenvalue (η is the edge expansion); F(v) = { edges in F incident on v }, also used for their number; v1, v2, … are the vertices on the random walk; πi is the distribution of vi conditioned on e1∈F; M is the transition matrix of the random walk.
Preliminaries: (1) π_1(v) = F(v) / 2|F|. (2) π_1 = (1/n)·1 + F’/2|F|, where F’ is the component of the vector F = (F(v))_v orthogonal to 1. (3) π_{j−1} = M^{j−2} π_1 = M^{j−2} ( (1/n)·1 + F’/2|F| ). (4) Pr[ej∈F | e1∈F] = Σv (F(v)/d)·π_{j−1}(v) = (Fᵀ π_{j−1})/d.
Facts that we need: G’s adjacency matrix is symmetric, so all eigenvalues are real. We give G d/2 self-loops (making it a “positive expander”); this makes all eigenvalues non-negative (the matrix becomes diagonally dominant, hence positive semidefinite). The maximum eigenvalue is the degree d, with the all-1s eigenvector; the second-largest eigenvalue satisfies λ2 ≤ d − η²/d.

48 Random Walks on Expanders
Recall: (1) π_1(v) = F(v)/2|F|; (2) π_1 = (1/n)·1 + F’/2|F|; (3) π_{j−1} = M^{j−2} π_1; (4) Pr[ej∈F | e1∈F] = (Fᵀ π_{j−1})/d. Then:
Pr[ej∈F | e1∈F] = (Fᵀ π_{j−1})/d (from (4))
= (Fᵀ M^{j−2} ( (1/n)·1 + F’/2|F| ))/d (from (3))
= (1/nd)·Fᵀ1 + (1/2|F|d)·Fᵀ M^{j−2} F’ (linearity)
≤ |F|/|E| + (1/2|F|d)·‖F‖·‖M^{j−2} F’‖ (Cauchy–Schwarz)
≤ |F|/|E| + (1/2|F|d)·‖F‖·(λ2/d)^{j−2}·‖F‖ (since F’ ⟂ 1)
= |F|/|E| + (1/2|F|d)·(λ2/d)^{j−2}·(Σv F(v)²)
≤ |F|/|E| + (1/2|F|d)·(λ2/d)^{j−2}·(d·Σv F(v))
= |F|/|E| + (λ2/d)^{j−2}. (End of Claim.)

49 Remaining tasks for this talk
Proof of Main Lemma. ✓ Transforming graphs to constant degree. Transforming graphs to expanders.

50 Transforming Graphs to Constant Degree
Introduce dummy vertices! Replace each high-degree vertex v by a cloud of dummy vertices, one per edge incident on v.

51 Transforming Graphs to Constant Degree
We want to force v’s dummies to all have the same color.

52 Transforming Graphs to Constant Degree
Connect the dummies in a clique, with constraints forcing equality? The degree is too large: too many edges.

53 Transforming Graphs to Constant Degree
A cycle? Half the vertices can have the wrong color while only two edges are violated. We want a graph in which every cut has many edges.

54 Transforming Graphs to Constant Degree
Expander? Just right!
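The vertex-cloud construction of the last few slides can be sketched as follows (the helper `vertex_cloud` and the toy data are illustrative assumptions; a cycle H is shown only for simplicity, since slide 53 explains an expander is what is actually needed):

```python
# Degree reduction sketch: replace a vertex v of degree k by k clones,
# one per incident edge, tied together by equality constraints on the
# edges of a constant-degree graph H over the clones.
def vertex_cloud(v, incident_edges, H_edges):
    clones = [(v, i) for i in range(len(incident_edges))]
    equality = [((v, i), (v, j)) for i, j in H_edges]  # force the same color
    rewired = [(clones[i], e) for i, e in enumerate(incident_edges)]
    return clones, equality, rewired

incident = [("v", "a"), ("v", "b"), ("v", "c"), ("v", "d")]
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]  # H = 4-cycle over the clones
clones, eq, rew = vertex_cloud("v", incident, cycle)
print(len(clones), len(eq))  # 4 4: each clone now has degree at most 3
```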

55 Transforming Graphs to Expanders
G is not an expander; H is an expander.

56 Transforming Graphs to Expanders
G ∪ H (superimposing H’s edges on G) is an expander.

