The PCP Theorem via gap amplification. Irit Dinur, Hebrew University.



The PCP Theorem [AroraSafra, AroraLundMotwaniSudanSzegedy, 1992]
Probabilistically checkable proofs: a verifier, given a SAT instance φ, queries a proof in a few random locations.
If sat(φ) = 1 then ∃ proof, Pr[Verifier accepts] = 1.
If sat(φ) < 1 then ∀ proof, Pr[Verifier accepts] < ½.

The PCP Theorem [AroraSafra, AroraLundMotwaniSudanSzegedy, 1992]
[figure: a constraint graph on variables x_1,…,x_n, with constraints V_1, V_2, … on pairs of variables]
PCP Thm as a reduction from SAT to gap-CSP: given a constraint graph G, it is NP-hard to decide between
1. gap(G) = 0
2. gap(G) > ε

This talk
A new proof of the PCP Theorem: given a constraint graph G, it is NP-hard to decide between
1. gap(G) = 0
2. gap(G) > ε
Based on gap amplification, inspired by Reingold's SL = L proof.
Also: "very" short PCPs.

Constraint Systems and Constraint Graphs
C = {c_1,…,c_n} constraints, each over 2 variables from x_1,…,x_m, over a finite alphabet Σ.
Constraint graph: vertices = variables, edges = constraints.
gap(C) = smallest fraction of unsatisfied constraints, i.e. gap(C) = min_a Pr_i[c_i rejects a].
PCP Thm: NP-hard to decide gap(C) = 0 or gap(C) > ε.

Step 0: Constraint graph SAT is NP-hard
Given a constraint graph G, it is NP-hard to decide if gap(G) = 0 or gap(G) > 0.
Proof: reduction from 3-coloring. Σ = {1,2,3}, inequality constraints on the edges. Clearly, G is 3-colorable iff gap(G) = 0.
PCP Thm: given a constraint graph G, it is NP-hard to decide if gap(G) = 0 or gap(G) > ε.
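As a concrete companion to this slide, here is a brute-force sketch (ours, not the talk's) of the "step 0" reduction: gap() computed by exhaustive search over assignments, with inequality constraints over Σ = {1,2,3} on the edges of a graph. Function and variable names are illustrative; this is feasible only for toy instances.

```python
from itertools import product

def gap(num_vars, alphabet, edges):
    """Smallest fraction of violated inequality constraints over all
    assignments (brute force; feasible only for toy instances)."""
    best = 1.0
    for a in product(alphabet, repeat=num_vars):
        violated = sum(1 for u, v in edges if a[u] == a[v])
        best = min(best, violated / len(edges))
    return best

# The triangle is 3-colorable, K4 is not:
triangle = [(0, 1), (1, 2), (2, 0)]
k4 = [(u, v) for u in range(4) for v in range(u)]
print(gap(3, [1, 2, 3], triangle))  # 0.0, i.e. gap = 0 iff 3-colorable
print(gap(4, [1, 2, 3], k4))        # 1/6: some edge is always monochromatic
```

K4 has 6 edges and is not 3-colorable, so every assignment violates at least one constraint, giving gap 1/6 > 0, exactly the gap(G) > 0 case of the slide.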

Basic Plan
Start with a constraint graph G (from 3-coloring).
G → G_1 → G_2 → … → G_k = final output of the reduction.
Main Thm: gap(G_{i+1}) ≥ 2 · gap(G_i) (if not already too large); size(G_{i+1}) = const · size(G_i); degree, alphabet, and expansion all remain the same.
Conclusion: NP-hard to distinguish between gap(G_k) = 0 and gap(G_k) > ε (a constant).

Main Step
G_i → G_{i+1}: G_{i+1} = (prep(G_i))^t ∘ P
1. Preprocess G: standard transformations making G a regular, constant-degree expander with self-loops.
2. Raise to power t: the key step. G → G^t multiplies the gap by √t and keeps the size linear!
3. Compose with P, a "constant-size" PCP. P can be as inefficient as possible.

Powering a constraint graph
Vertices: same.
Edges: length-t paths (= powering of the adjacency matrix).
Alphabet: Σ^{d^t}, each value reflecting "opinions" about the vertex's neighborhood.
Constraints: check everything you can!
Observations:
1. New degree = d^t.
2. New size = O(size) (the number of edges is multiplied by d^{t−1}).
3. If gap(G) = 0 then gap(G^t) = 0.
4. The alphabet increases from Σ to Σ^{d^t}.
Amplification Lemma: gap(G^t) ≥ √t · gap(G).
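A small sketch (ours, not from the talk) of the powering observation: the edges of G^t are the length-t walks of G, so the t-th power of the adjacency matrix gives the edge multiplicities of G^t, and in a d-regular graph every vertex gains exactly d^t outgoing walks, matching "new degree = d^t". The example graph is a 4-cycle with a self-loop at each vertex (3-regular, as the preprocessing step would ensure).

```python
def mat_mult(A, B):
    """Multiply two square integer matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def walk_counts(adj, t):
    """adj: adjacency matrix of G (self-loops allowed).
    Returns M with M[u][v] = number of length-t walks from u to v,
    i.e. the edge multiplicities of the powered graph G^t."""
    result = adj
    for _ in range(t - 1):
        result = mat_mult(result, adj)
    return result

# 4-cycle plus a self-loop at every vertex: 3-regular.
adj = [[1, 1, 0, 1],
       [1, 1, 1, 0],
       [0, 1, 1, 1],
       [1, 0, 1, 1]]
M = walk_counts(adj, 2)
print([sum(row) for row in M])  # every vertex has 3^2 = 9 length-2 walks
```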

Amplification Lemma: gap(G^t) > √t · gap(G)
Intuition: spread the information; inconsistencies will be detected more often.
Assumption: G is d-regular with d = O(1), an expander, with self-loops.

Extracting a: V → Σ
Given the "best" assignment A: V → Σ^{d^t}, extract a: V → Σ by taking, at each vertex, the most popular value seen over a random t/2-step walk.
Consider F = { edges rejecting a }. Note: |F|/|E| ≥ gap(G).

Amplification Lemma: gap(G^t) > √t · gap(G)
Plan: relate the fraction of rejecting paths to the fraction of rejecting edges (= |F|/|E|).

Two Definitions
π = (v_0, v_1, …, u, v, …, v_t); π_j = (v_{j−1}, v_j).
Definition: the j-th edge strikes π if
1. |j − t/2| < √t,
2. (u,v) ∈ F, i.e., (u,v) rejects a(u), a(v),
3. A(v_0) agrees with a(u) on u, and A(v_t) agrees with a(v) on v.
Definition: N(π) = # edges that strike π. Note 0 ≤ N(π) < 2√t.
If N(π) > 0 then π rejects, so gap(G^t) ≥ Pr_π[N(π) > 0].

We will prove: Pr_π[N(π) > 0] > √t · |F|/|E|
Lemma 1: E[N] > √t · |F|/|E| · const(d, λ).
Intuition: if N(π) were always 0 or 1, then Pr[N > 0] = E[N].
Lemma 2: E[N²] < √t · |F|/|E| · const(d, λ).
Standard second-moment bound: Pr[N > 0] ≥ (E[N])² / E[N²].
pf: E[N²] · Pr[N > 0] = E[N² | N > 0] · Pr[N > 0]² ≥ (E[N | N > 0])² · Pr[N > 0]² = (E[N])².
Hence Pr[N > 0] > (√t · |F|/|E|)² / (√t · |F|/|E|) = √t · |F|/|E| (up to constants), so gap(G^t) ≥ Pr[N > 0] ≥ √t · gap(G).
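A tiny numeric sanity check (ours, not the talk's) of the "standard" second-moment inequality Pr[N > 0] ≥ (E[N])²/E[N²], on an explicit made-up distribution for a nonnegative integer variable:

```python
from fractions import Fraction

# An explicit distribution for N: value -> probability (illustrative only).
dist = {0: Fraction(7, 10), 1: Fraction(2, 10), 3: Fraction(1, 10)}

e_n = sum(v * p for v, p in dist.items())        # E[N]   = 1/2
e_n2 = sum(v * v * p for v, p in dist.items())   # E[N^2] = 11/10
p_pos = sum(p for v, p in dist.items() if v > 0)  # Pr[N > 0] = 3/10

# The second-moment bound: Pr[N > 0] >= E[N]^2 / E[N^2] = 5/22.
assert p_pos >= e_n ** 2 / e_n2
print(p_pos, e_n ** 2 / e_n2)  # 3/10 >= 5/22
```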

Lemma 1: E[N] ≈ √t · |F|/|E|
N_i(π) = indicator of the event "the i-th edge strikes π".
N = Σ_{i∈J} N_i, where J = { i : |i − t/2| < √t }.
Claim: if i ∈ J then E[N_i] ≈ (1/|Σ|²) · |F|/|E|.
π can be chosen by the following process:
1. Select a random edge (u,v) ∈ E, and let π_i = (u,v).
2. Select a random (i−1)-step path from u.
3. Select a random (t−i)-step path from v.
Clearly, Pr_π[π_i ∈ F] = |F|/|E|.
What is the probability that A(v_0) agrees with a(u) and A(v_t) agrees with a(v)?

Claim: if i ∈ J then E[N_i] ≈ (1/|Σ|²) · |F|/|E|
π chosen by: 1. a random edge (u,v) ∈ E with π_i = (u,v); 2. a random (i−1)-step path from u; 3. a random (t−i)-step path from v.
If i − 1 = t/2: the walk from u reaches a vertex v_0 for which A(v_0) "thinks" a(u) of u with probability ≥ 1/|Σ| (a(u) is the most popular value).
For general i ∈ J: roughly the same, because of the self-loops!

Analyzing i ∈ J
Recall the self-loops: a random walk of length i from u can be described by
1. selecting which steps stay put, by flipping i random coins; let k = # heads (the real steps);
2. selecting a length-k random walk on the non-self-loop edges.
For all i ∈ J, the number of real steps k is distributed almost the same.
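The last point can be illustrated numerically (our sketch, not the talk's): if each step stays put with probability 1/2, a length-i lazy walk makes k ~ Binomial(i, 1/2) real steps, and for any i, i' ∈ J the two binomials are at total-variation distance bounded away from 1, which is all the "roughly the same" constant-factor argument needs.

```python
from math import comb, isqrt

def binom_pmf(i, k):
    """Pr[Binomial(i, 1/2) = k]."""
    return comb(i, k) / 2 ** i

def tv_distance(i1, i2):
    """Total variation distance between Binomial(i1,1/2) and Binomial(i2,1/2)."""
    hi = max(i1, i2)
    return 0.5 * sum(abs(binom_pmf(i1, k) - binom_pmf(i2, k))
                     for k in range(hi + 1))

t = 400
# Both lengths lie in J = { i : |i - t/2| < sqrt(t) }:
i1, i2 = t // 2, t // 2 + isqrt(t)
print(tv_distance(i1, i2))  # a constant bounded away from 1
```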

Back to E[N]
Fix i ∈ J. Select π by the following process:
1. Select a random edge (u,v), and let π_i = (u,v).
2. Select a random (i−1)-step path from u.
3. Select a random (t−i)-step path from v.
Then:
1. Pr_π[π_i ∈ F] = |F|/|E|.
2. Pr[A(v_0) agrees with a on u | (u,v)] > const/|Σ|.
3. Pr[A(v_t) agrees with a on v | (v_0,…,u,v)] > const/|Σ|.
So E[N_i] = Pr[N_i = 1] > |F|/|E| · (1/|Σ|²) · const, and E[N] = Σ_{i∈J} E[N_i] > √t · |F|/|E| · const. QED

We will prove: Pr_π[N(π) > 0] > √t · |F|/|E|
Lemma 1: E[N] > √t · |F|/|E| · const(d, λ).
Lemma 2: E[N²] < √t · |F|/|E| · const(d, λ).
Read: "most struck paths see at most a constant number of striking edges."
By Pr[N > 0] ≥ (E[N])² / E[N²]:
Pr[N > 0] > (√t · |F|/|E|)² / (√t · |F|/|E|) = √t · |F|/|E| (up to constants),
so gap(G^t) ≥ Pr[N > 0] ≥ √t · gap(G).

Lemma 2: Upper bounding E[N²]
Observe: N(π) ≤ N'(π) := # middle intersections of π with F.
Claim: if G = (V,E) is an expander and F ⊆ E is any (small) fixed set of edges, then E[(N')²] < √t · |F|/|E| · (√t · |F|/|E| + const).
Proof sketch: compute Σ_{i<j} E[N'_i N'_j]. Conditioned on π_i ∈ F, the expected number of remaining steps in F is still ≤ a constant, by expansion.

The full inductive step
G_i → G_{i+1}: G_{i+1} = (prep(G_i))^t ∘ P
1. Preprocess G.
2. Raise to power t.
3. Compose with P = a constant-size PCP.

Preprocessing G
H = prep(G) such that: H is d-regular, d = O(1); H is an expander and has self-loops.
Maintained: size(H) = O(size(G)), and gap(G) ≈ gap(H), i.e.,
1. gap(G) = 0 ⟹ gap(H) = 0
2. gap(H) ≥ gap(G)/const
How: [PY] blow up every vertex u into a cloud of deg(u) vertices and interconnect the cloud via an expander; add expander edges; add self-loops.
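A simplified sketch (ours) of the [PY]-style blow-up described above: each vertex becomes a cloud with one "port" vertex per incident edge; equality constraints run inside clouds and the original constraint sits on the single inter-cloud edge per original edge. For simplicity the cloud is wired as a cycle, a stand-in for the constant-degree expander the real construction uses.

```python
from collections import defaultdict

def degree_reduce(edges):
    """edges: list of (u, v) pairs of a multigraph G.
    Returns the edge list of H: one port per edge endpoint, clouds wired
    as cycles (equality constraints), originals across clouds.
    Every new vertex has degree <= 3, and |E(H)| = O(|E(G)|)."""
    ports = defaultdict(list)        # u -> its cloud of port vertices
    new_edges = []
    for idx, (u, v) in enumerate(edges):
        pu, pv = (u, 2 * idx), (v, 2 * idx + 1)
        ports[u].append(pu)
        ports[v].append(pv)
        new_edges.append((pu, pv))   # carries the original constraint
    for cloud in ports.values():     # cycle inside each cloud
        if len(cloud) > 1:           # (equality constraints)
            for i in range(len(cloud)):
                new_edges.append((cloud[i], cloud[(i + 1) % len(cloud)]))
    return new_edges
```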

Reducing Σ^{d^t} to Σ
Consider the constraints {C_1,…,C_n} (and forget the graph structure). For each i, we replace C_i by {c_ij}, a set of constraints over the smaller alphabet Σ.
P = an algorithm that takes C to {c_j}, each c_j over Σ, such that:
if C is "satisfiable", then gap({c_j}) = 0;
if C is "unsatisfiable", then gap({c_j}) > ε.
Composition Lemma [BGHSV, DR]: the system C' = ∪_i P(C_i) has gap(C') ≈ gap(C).
(P is an assignment tester [DR] / PCPP [BGHSV].)

Composition
If P is any AT / PCPP then this composition works. P can be:
Hadamard-based,
long-code-based, or
found via exhaustive search (existence must be ensured, though).
P's running time only affects the constants.

Summary: Main theorem
G_i → G_{i+1}: G_{i+1} = (prep(G_i))^t ∘ P
gap(G_{i+1}) > 2 · gap(G_i), and the other parameters stay the same. Tracking [alphabet, gap]:
1. G: [Σ, ε]
2. G → prep(G): [Σ, ε/const]
3. G → G^t: [Σ^{d^t}, √t · ε/const]
4. G → G ∘ P: [Σ, √t · ε/const'] = [Σ, 2ε] (choosing the constant t large enough)
G = G_0 → G_1 → G_2 → … → G_k = final output of the reduction.
After k = log n steps: if gap(G_0) = 0 then gap(G_k) = 0; if gap(G_0) > 0 then gap(G_k) > const.

Application: short PCPs
…[PS, HS, GS, BSVW, BGHSV, BS]
[BS'05]: NP ⊆ PCP_{1, 1−1/polylog}[log(n · polylog n), O(1)].
There is a reduction taking a constraint graph G to G' such that:
|G'| = |G| · polylog|G|;
if gap(G) = 0 then gap(G') = 0;
if gap(G) > 0 then gap(G') > 1/polylog|G|.
Applying our main step loglog|G| times to G', we get a new constraint graph G'' such that:
if gap(G) = 0 then gap(G'') = 0;
if gap(G) > 0 then gap(G'') > const.
I.e., NP ⊆ PCP_{1, 1/2}[log(n · polylog n), O(1)].

Final remarks
Main point: gradual amplification.
Compare to Raz's parallel repetition theorem.
Open question: push the gap up to 1 − o(1).