1
The PCP Theorem via gap amplification. Irit Dinur, Hebrew University.
2
The PCP Theorem [Arora-Safra, Arora-Lund-Motwani-Sudan-Szegedy, 1992]. Probabilistically Checkable Proofs: given a SAT instance φ, a verifier reads φ and a proof. If sat(φ) = 1 then there exists a proof for which Pr[Verifier accepts] = 1. If sat(φ) < 1 then for every proof, Pr[Verifier accepts] < ½.
3
The PCP Theorem [Arora-Safra, Arora-Lund-Motwani-Sudan-Szegedy, 1992], viewed as a reduction from SAT to gap-CSP: given a constraint graph G (vertices = variables x1,…,xn; edges = constraints), it is NP-hard to decide between 1. gap(G) = 0 and 2. gap(G) > ε.
4
This talk: a new proof of the PCP Theorem. Given a constraint graph G, it is NP-hard to decide between 1. gap(G) = 0 and 2. gap(G) > ε. Based on gap amplification, inspired by Reingold's SL = L proof. Also: "very" short PCPs.
5
Constraint Systems and Constraint Graphs. C = {c1,…,cn} constraints, each over 2 variables from x1,…,xm, over a finite alphabet Σ. Constraint graph: vertices = variables, edges = constraints. gap(C) = smallest fraction of unsatisfied constraints over all assignments, i.e. gap(C) = min_a Pr_i[c_i rejects a]. PCP Thm: NP-hard to decide gap(C) = 0 or gap(C) > ε.
6
Step 0: Constraint Graph SAT is NP-hard. Given a constraint graph G, it is NP-hard to decide if gap(G) = 0 or gap(G) > 0. Proof: reduction from 3-coloring. Σ = {1,2,3}, inequality constraints on the edges. Clearly, G is 3-colorable iff gap(G) = 0. PCP Thm: given a constraint graph G, it is NP-hard to decide if gap(G) = 0 or gap(G) > ε.
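To make these two slides concrete, here is a minimal brute-force sketch (illustrative, not from the talk): a constraint graph is a list of edges with binary predicates over a small alphabet, gap is computed by enumerating all assignments, and the step-0 reduction is just "inequality constraints over a 3-letter alphabet". The names gap and neq are hypothetical.

```python
from itertools import product

def gap(num_vars, alphabet, constraints):
    """Brute-force gap of a binary constraint system.

    constraints: list of (i, j, pred) where pred(a_i, a_j) is True when the
    constraint accepts that pair of values.  Returns the minimum, over all
    assignments, of the fraction of rejecting constraints.
    """
    best = 1.0
    for a in product(alphabet, repeat=num_vars):
        rejected = sum(1 for (i, j, pred) in constraints if not pred(a[i], a[j]))
        best = min(best, rejected / len(constraints))
    return best

neq = lambda x, y: x != y                  # step-0 constraint: endpoints get different colors

triangle = [(0, 1, neq), (1, 2, neq), (0, 2, neq)]
print(gap(3, range(3), triangle))          # 0.0  -- the triangle is 3-colorable

k4 = [(u, v, neq) for u in range(4) for v in range(u + 1, 4)]
print(gap(4, range(3), k4))                # 1/6  -- K4 is not 3-colorable, so gap > 0
```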
7
Basic Plan. Start with a constraint graph G (from 3-coloring). G = G_0 → G_1 → G_2 → … → G_k = final output of the reduction. Main Thm: gap(G_{i+1}) ≥ 2 · gap(G_i) (if not already too large); size(G_{i+1}) = const · size(G_i); degree, alphabet, and expansion all remain the same. Conclusion: NP-hard to distinguish between gap(G_k) = 0 and gap(G_k) > (constant).
8
Main Step. Standard transformations: make G a regular, constant-degree expander with self-loops; compose with P, a "constant-size" PCP (P can be as inefficient as possible). G_i → G_{i+1}: G_{i+1} = (prep(G_i))^t ∘ P. 1. Preprocess G. 2. Raise to power t. 3. Compose with P = constant-size PCP. Key step: the powering G → G^t multiplies the gap by ~√t while keeping the size linear!
9
Powering a constraint graph. Vertices: same. Edges: length-t paths (= powering of the adjacency matrix). Alphabet: Σ^{d^t}, reflecting each vertex's "opinions" about its neighbors. Constraints: check everything you can!
10
Powering a constraint graph. Vertices: same. Edges: length-t paths (= powering of the adjacency matrix). Alphabet: Σ^{d^t}, reflecting each vertex's "opinions" about its neighbors. Constraints: check everything you can! Observations: 1. New degree = d^t. 2. New size = O(size) (the number of edges is multiplied by d^{t−1}). 3. If gap(G) = 0 then gap(G^t) = 0. 4. The alphabet increases from Σ to Σ^{d^t}. Amplification Lemma: gap(G^t) ≥ √t · gap(G).
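A small sketch (not from the talk) of the "edges = length-t paths" bookkeeping: the t-th power of the adjacency matrix counts length-t walks, which is exactly the number of parallel edges between each pair of vertices in G^t; the alphabet/constraint part is omitted. The use of numpy is an assumption.

```python
import numpy as np

def power_edges(adj, t):
    """Edge multiplicities of G^t: the (u, v) entry of A^t counts the
    length-t walks from u to v, i.e. the parallel (u, v)-edges of G^t.
    adj: 0/1 adjacency matrix of a d-regular graph (self-loops allowed)."""
    return np.linalg.matrix_power(np.asarray(adj), t)

def enumerate_walks(adj, t):
    """Explicitly list the length-t walks (v0, ..., vt) of a tiny 0/1 graph."""
    n = len(adj)
    walks = [(v,) for v in range(n)]
    for _ in range(t):
        walks = [w + (v,) for w in walks for v in range(n) if adj[w[-1]][v]]
    return walks

# 4-cycle with a self-loop on every vertex (d = 3):
n, t = 4, 3
A = [[1 if (i == j or (i - j) % n in (1, n - 1)) else 0 for j in range(n)]
     for i in range(n)]
At = power_edges(A, t)
print(At)                                    # (u, v) entry = # of length-3 walks u -> v
print(At.sum(), len(enumerate_walks(A, t)))  # both equal n * d^t = 4 * 27 = 108
```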
11
Amplification Lemma: gap(G^t) ≥ √t · gap(G). Intuition: spread the information, so that inconsistencies are detected more often. Assumption: G is d-regular with d = O(1), an expander, with self-loops.
12
Amplification Lemma: gap(G^t) ≥ √t · gap(G). Given A : V → Σ^{d^t}, the "best" assignment for G^t, extract a : V → Σ by taking, for each vertex, the most popular value seen on a random t/2-step walk.
13
Extracting a : V → Σ. Given A : V → Σ^{d^t}, the "best" assignment, extract a : V → Σ by taking the most popular value seen on a random t/2-step walk.
14
Extracting a : V → Σ. Given A : V → Σ^{d^t}, the "best" assignment, extract a : V → Σ and consider F = { edges rejecting a }. Note: |F|/|E| ≥ gap(G).
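A Monte Carlo sketch (illustrative, not from the talk) of this extraction step: for each vertex u, sample random t/2-step walks from u, read off the endpoint's "opinion" about u, and take the plurality. Encoding each Σ^{d^t} label as a dict of opinions (A[w][u]) is an assumption made for readability.

```python
import random
from collections import Counter

def extract(A, neighbors, t, samples=200, rng=random):
    """Extract a: V -> Sigma from a big-alphabet assignment A on G^t.

    A[w] is assumed to be a dict mapping nearby vertices u to w's "opinion"
    of u's value (a stand-in for the Sigma^(d^t) label); neighbors[v] lists
    v's neighbors in G.  For each u, a(u) is the most popular opinion about
    u seen at the endpoints of random t/2-step walks started at u.
    """
    a = {}
    for u in neighbors:
        votes = Counter()
        for _ in range(samples):
            w = u
            for _ in range(t // 2):
                w = rng.choice(neighbors[w])  # one step of the random walk
            if u in A[w]:
                votes[A[w][u]] += 1           # w's opinion about u
        a[u] = votes.most_common(1)[0][0] if votes else None
    return a
```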
15
Amplification Lemma: gap(G^t) ≥ √t · gap(G). Plan: relate the fraction of rejecting paths to the fraction of rejecting edges ( = |F|/|E| ).
16
Two Definitions. Write π = (v_0, v_1, …, u, v, …, v_t) and π_j = (v_{j−1}, v_j). Definition: the j-th edge strikes π if 1. |j − t/2| < √t, 2. π_j = (u,v) ∈ F, i.e., (u,v) rejects (a(u), a(v)), and 3. A(v_0) agrees with a(u) on u and A(v_t) agrees with a(v) on v. Definition: N(π) = # of edges that strike π; 0 ≤ N(π) < 2√t. If N(π) > 0 then π rejects, so gap(G^t) ≥ Pr_π[N(π) > 0].
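The two definitions, transcribed literally into code (illustrative; the dict-of-opinions encoding of A is the same assumption as in the previous sketch):

```python
import math

def strikes(path, j, F, A, a):
    """Does the j-th edge of path = (v0, ..., vt) strike the path?

    F: set of edges (u, v) that reject the extracted assignment a.
    A[w]: dict of w's opinions (big-alphabet label); a[u]: small-alphabet value.
    """
    t = len(path) - 1
    u, v = path[j - 1], path[j]
    return (abs(j - t / 2) < math.sqrt(t)          # 1. j lies in the middle window
            and ((u, v) in F or (v, u) in F)       # 2. the edge rejects (a(u), a(v))
            and A[path[0]].get(u) == a[u]          # 3. the path's endpoints' opinions
            and A[path[-1]].get(v) == a[v])        #    agree with a on u and v

def N(path, F, A, a):
    """Number of edges that strike the path; the path rejects iff N > 0."""
    return sum(strikes(path, j, F, A, a) for j in range(1, len(path)))
```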
17
We will prove: Pr_π[N(π) > 0] > √t · |F|/|E|. Lemma 1: E[N] > √t · |F|/|E| · const(d, |Σ|). Intuition: if N(π) were always 0 or 1, then Pr[N > 0] = E[N]. Lemma 2: E[N²] < √t · |F|/|E| · const(d, λ). Standard second-moment bound: Pr[N > 0] ≥ (E[N])² / E[N²]. Proof: (E[N])² = (E[N | N>0])² · Pr[N>0]² ≤ E[N² | N>0] · Pr[N>0]² = E[N²] · Pr[N>0]. Therefore Pr[N > 0] > (√t · |F|/|E|)² / (√t · |F|/|E|) = √t · |F|/|E| (up to constants), and gap(G^t) ≥ Pr_π[N(π) > 0] ≥ √t · gap(G).
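The second-moment step written out in full (a standard Cauchy-Schwarz argument, spelled out here for completeness):

```latex
\begin{align*}
\mathbb{E}[N]^2
  &= \bigl(\mathbb{E}[N \mid N>0]\,\Pr[N>0]\bigr)^2 \\
  &\le \mathbb{E}[N^2 \mid N>0]\,\Pr[N>0]^2
   \;=\; \mathbb{E}[N^2]\,\Pr[N>0],
\end{align*}
\[
  \Pr[N>0] \;\ge\; \frac{\mathbb{E}[N]^2}{\mathbb{E}[N^2]}
  \;\gtrsim\; \frac{\bigl(\sqrt{t}\,|F|/|E|\bigr)^2}{\sqrt{t}\,|F|/|E|}
  \;=\; \sqrt{t}\,\frac{|F|}{|E|}.
\]
```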
18
Lemma 1: E[N] ≈ √t · |F|/|E|. Let N_i(π) be the indicator of the event "the i-th edge strikes π", so N = Σ_{i∈J} N_i where J = { i : |i − t/2| < √t }. Claim: if i ∈ J then E[N_i] ≈ (1/|Σ|²) · |F|/|E|. π can be chosen by the following process: 1. Select a random edge (u,v) ∈ E and let π_i = (u,v). 2. Select a random (i−1)-step path from u. 3. Select a random (t−i)-step path from v. Clearly, Pr[π_i ∈ F] = |F|/|E|. What is the probability that A(v_0) agrees with a(u) and A(v_t) agrees with a(v)?
19
Claim: if i ∈ J then E[N_i] ≈ (1/|Σ|²) · |F|/|E|. π is chosen by: 1. Select a random edge (u,v) ∈ E and let π_i = (u,v). 2. Select a random (i−1)-step path from u. 3. Select a random (t−i)-step path from v. For i−1 = t/2: the walk from u reaches a vertex v_0 whose opinion of u is a(u) with probability ≥ 1/|Σ| (a(u) is the plurality value). For general i ∈ J: roughly the same!! (because of the self-loops)
20
Analyzing i ∈ J … recall the self-loops. A random walk from u can be described by: 1. select which steps stay put (take a self-loop) by flipping i random coins; let k = # of heads, i.e. the number of real steps. 2. select a length-k random walk on the non-self-loop edges. For all i ∈ J, the number of real steps k is distributed almost the same.
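A sketch (illustrative) of this decomposition: an i-step walk on a graph with a self-loop at every vertex is sampled by first flipping the stay-put coins and then walking k real steps; since any two i ∈ J differ by less than 2√t, the resulting Binomial distributions of k are close to each other. The loop probability 1/d (one self-loop per vertex of a d-regular graph) is an assumption.

```python
import random

def lazy_walk(start, real_neighbors, i, p_loop, rng=random):
    """Sample an i-step lazy walk: at each step, with probability p_loop stay
    put (take the self-loop), otherwise move along a random non-loop edge.
    Returns (endpoint, k) where k is the number of real (non-lazy) steps."""
    v, k = start, 0
    for _ in range(i):
        if rng.random() < p_loop:
            continue                       # stay-put step (self-loop)
        v = rng.choice(real_neighbors[v])  # one step of the non-lazy walk
        k += 1
    return v, k

# For a d-regular graph with one self-loop per vertex, p_loop = 1/d; the number
# of real steps k ~ Binomial(i, 1 - p_loop), and these distributions are nearly
# identical for all i in the window J.
```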
21
Back to E[N]. Fix i ∈ J and select π by the following process: 1. Select a random edge (u,v) and let π_i = (u,v). 2. Select a random (i−1)-step path from u. 3. Select a random (t−i)-step path from v. Then: 1. Pr[π_i ∈ F] = |F|/|E|. 2. Pr[A(v_0) agrees with a on u | π_i = (u,v)] > 1/(2|Σ|). 3. Pr[A(v_t) agrees with a on v | (v_0,…,u,v)] > 1/(2|Σ|). So E[N_i] = Pr[N_i = 1] > |F|/|E| · (1/|Σ|²) · const, and E[N] = Σ_{i∈J} E[N_i] > √t · |F|/|E| · const. QED
22
We will prove: Pr_π[N(π) > 0] > √t · |F|/|E|. Lemma 1: E[N] > √t · |F|/|E| · const(d, |Σ|). Lemma 2: E[N²] < √t · |F|/|E| · const(d, λ) — read: "most struck paths see at most a constant number of striking edges". By Pr[N > 0] ≥ (E[N])² / E[N²]: Pr[N > 0] > (√t · |F|/|E|)² / (√t · |F|/|E|) = √t · |F|/|E| (up to constants), and gap(G^t) ≥ Pr[N > 0] ≥ √t · gap(G).
23
Lemma 2: Upper-bounding E[N²]. Observe: N(π) ≤ N'(π) := # of middle intersections of π with F. Claim: if G = (V,E) is an expander and F ⊂ E is any (small) fixed set of edges, then E[(N')²] < √t · |F|/|E| · (√t · |F|/|E| + const). Proof sketch: compute Σ_{i<j} E[N'_i N'_j]; conditioned on π_i ∈ F, the expected number of remaining steps in F is still ≤ constant.
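A Monte Carlo sanity check (illustrative; the graph model — d random permutations, which is expander-like with high probability — and all parameters are assumptions): pick a small fixed edge set F, sample random t-step walks, and count the middle steps that land in F. Both moments come out on the order of √t · |F|/|E|, which is what the claim needs.

```python
import math
import random

def random_regular_digraph(n, d, rng):
    """d random permutations of range(n): a d-out graph, expander-like w.h.p."""
    return [rng.sample(range(n), n) for _ in range(d)]

def middle_hits(perms, F, t, rng):
    """Walk t steps; count steps j with |j - t/2| < sqrt(t) whose edge lies in F."""
    n, d = len(perms[0]), len(perms)
    v, hits = rng.randrange(n), 0
    for j in range(1, t + 1):
        k = rng.randrange(d)
        u, v = v, perms[k][v]
        if abs(j - t / 2) < math.sqrt(t) and (k, u) in F:
            hits += 1
    return hits

rng = random.Random(0)
n, d, t, trials = 2000, 8, 100, 20000
perms = random_regular_digraph(n, d, rng)
# F = a small random edge set, each edge identified by (permutation index, tail):
F = {(rng.randrange(d), rng.randrange(n)) for _ in range(n * d // 100)}  # ~1% of edges
samples = [middle_hits(perms, F, t, rng) for _ in range(trials)]
EN = sum(samples) / trials
EN2 = sum(x * x for x in samples) / trials
print(EN, EN2, math.sqrt(t) * len(F) / (n * d))
# On an expander, E[N'] and E[N'^2] are both ~ sqrt(t) * |F|/|E| up to constants,
# so Pr[N' > 0] >= E[N']^2 / E[N'^2] is also ~ sqrt(t) * |F|/|E|.
```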
24
The full inductive step. G_i → G_{i+1}: G_{i+1} = (prep(G_i))^t ∘ P. 1. Preprocess G. 2. Raise to power t. 3. Compose with P = constant-size PCP.
25
Preprocessing. G → H = prep(G) such that: H is d-regular with d = O(1); H is an expander and has self-loops; size(H) = O(size(G)); gap(G) ≈ gap(H), i.e. 1. gap(G) = 0 ⇒ gap(H) = 0, and 2. gap(G)/const ≤ gap(H). How: [PY] blow up every vertex u into a cloud of deg(u) vertices and interconnect the cloud via an expander; add expander edges; add self-loops.
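An illustrative sketch of this degree-reduction step (not the real construction: the cloud's internal expander is stubbed out as a cycle, and the global expander edges are omitted): inter-cloud edges keep the original constraints, while intra-cloud edges and self-loops carry equality constraints, so the gap is preserved up to a constant.

```python
from collections import defaultdict

def preprocess(edges):
    """[PY]-style degree reduction: blow up each vertex u into a cloud with one
    cloud-vertex per incident edge, connect each cloud internally (here by a
    cycle, as a stand-in for an expander), and add a self-loop everywhere.
    Returns (cloud_vertex, cloud_vertex, kind) triples, where kind is
    'constraint' (original edge), 'equality' (intra-cloud), or 'loop'."""
    slots = defaultdict(list)               # u -> its cloud vertices
    new_edges = []
    for idx, (u, v) in enumerate(edges):    # one cloud vertex per edge endpoint
        cu, cv = (u, idx), (v, idx)
        slots[u].append(cu)
        slots[v].append(cv)
        new_edges.append((cu, cv, 'constraint'))
    for cloud in slots.values():            # intra-cloud cycle (expander stand-in)
        for a, b in zip(cloud, cloud[1:] + cloud[:1]):
            if a != b:
                new_edges.append((a, b, 'equality'))
    for cloud in slots.values():            # self-loops on every cloud vertex
        new_edges.extend((c, c, 'loop') for c in cloud)
    return new_edges

print(len(preprocess([(0, 1), (1, 2), (0, 2)])))  # the size stays linear in |E|
```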
26
Reducing the alphabet from Σ^{d^t} back to Σ. Consider the constraints {C_1,…,C_n} (and forget the graph structure). For each i, we replace C_i by {c_ij} = constraints over the smaller alphabet Σ. P = an algorithm that takes C to {c_j}, each c_j over Σ, such that: if C is "satisfiable" then gap({c_j}) = 0; if C is "unsatisfiable" then gap({c_j}) > const. Composition Lemma [BGHSV, DR]: the system C' = ∪_i P(C_i) has gap(C') ≈ gap(C). Such P are called Assignment Testers [DR] / PCPPs [BGHSV].
27
Composition. If P is any AT / PCPP then this composition works. P can be Hadamard-based, long-code-based, or found via exhaustive search (existence must be ensured, though). P's running time only affects the constants.
28
Summary: Main theorem. G_i → G_{i+1}: G_{i+1} = (prep(G_i))^t ∘ P; gap(G_{i+1}) > 2 · gap(G_i) and the other parameters stay the same. [alphabet, gap]: 1. G: [Σ, ε]. 2. G → prep(G): [Σ, ε/const]. 3. G → G^t: [Σ^{d^t}, √t · ε/const]. 4. G → G ∘ P: [Σ, √t · ε/const'] = [Σ, 2ε]. G = G_0 → G_1 → G_2 → … → G_k = final output of the reduction. After k = log n steps: if gap(G_0) = 0 then gap(G_k) = 0; if gap(G_0) > 0 then gap(G_k) > const.
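A schematic of the overall loop (pure sketch: prep, power, and compose are stand-in stubs for the three steps above, not real implementations):

```python
import math

# Stand-ins for the three steps (assumptions, not real implementations):
prep = lambda G: G        # degree reduction + expanderize + self-loops
power = lambda G, t: G    # raise to the t-th power
compose = lambda G: G     # alphabet reduction via an assignment tester P

def pcp_reduction(G, n_edges, t):
    """Iterate the main step ~log(n_edges) times: each round multiplies the size
    by a constant and doubles the gap of unsatisfiable instances, so the final
    gap is a constant and the final size is polynomial."""
    gap_bound = 1 / n_edges                      # an unsat instance has gap >= 1/|E|
    for _ in range(math.ceil(math.log2(n_edges))):
        G = compose(power(prep(G), t))
        gap_bound = min(1 / 2, 2 * gap_bound)    # doubling, capped at a constant
    return G, gap_bound

print(pcp_reduction("G0", n_edges=1024, t=100)[1])  # 0.5 after log2(1024) = 10 rounds
```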
29
Application: short PCPs … [PS, HS, GS, BSVW, BGHSV, BS]. [BS'05]: NP ⊆ PCP_{1, 1−1/polylog}[ log(n · polylog n), O(1) ], i.e. there is a reduction taking a constraint graph G to G' such that: |G'| = |G| · polylog|G|; if gap(G) = 0 then gap(G') = 0; if gap(G) > 0 then gap(G') > 1/polylog|G|. Applying our main step loglog|G| times to G', we get a new constraint graph G'' such that: if gap(G) = 0 then gap(G'') = 0; if gap(G) > 0 then gap(G'') > const. I.e., NP ⊆ PCP_{1, 1/2}[ log(n · polylog n), O(1) ].
30
Final remarks. Main point: gradual amplification. Compare to Raz's parallel repetition theorem. Open question: can the gap be pushed up to 1 − o(1)?