1
Probabilistically Checkable Proofs and Hardness of Approximation. S. Safra (some slides borrowed from Dana Moshkovits)
2
The Crazy Tea Party. Problem: To seat all guests at a round table, so that people who sit in adjacent seats like each other. (Guests: John, Mary, Bob, Jane, Alice)
3
Solution for the Example
Problem: To seat all guests at a round table, so that people who sit in adjacent seats like each other. (Seating: John, Jane, Mary, Alice, Bob)
4
Naive Algorithm: For each ordering of the guests around the table,
verify that each guest likes the guest sitting in the next seat.
5
How Much Time Should This Take? (worst case)

guests | steps
n      | (n-1)!
5      | 24
15     | 14! ≈ 8.7·10^10
100    | 9·10^155

Say our computer is capable of 10^10 instructions per second; for 100 guests this would still take 3·10^138 years!
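The naive algorithm above can be sketched in a few lines. Fixing one guest in seat 0 skips equivalent rotations, leaving the (n-1)! orderings counted in the table (`find_seating` and the `likes` relation are illustrative names, not from the slides):

```python
from itertools import permutations

def find_seating(guests, likes):
    """Naive algorithm: try every ordering of the guests around the table
    and verify each guest likes the guest sitting in the next seat."""
    first, rest = guests[0], tuple(guests[1:])
    # Fixing one guest in seat 0 avoids counting rotations: (n-1)! orderings.
    for order in permutations(rest):
        table = (first,) + order
        n = len(table)
        if all(likes(table[i], table[(i + 1) % n]) for i in range(n)):
            return table
    return None
```

This is exactly the (n-1)! blow-up from the table above, so it is hopeless beyond a handful of guests.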
6
Tours. Problem: Plan a trip that visits every site exactly once.
7
Solution for the Example
Problem Plan a trip that visits every site exactly once.
8
Is a Problem Tractable? YES! And here's an efficient algorithm for it.
NO! And I can prove it. …And what if neither is the case?
9
Growth Rate: Sketch. n! = 2^O(n lg n). (Plot: time vs. input length, with curves n!, 2^n, n^2, 10n.)
10
The World According to Complexity
Reasonable: polynomial, n^O(1). Unreasonable: exponential, 2^(n^O(1)).
11
Could one be Fundamentally Harder than the Other?
Seating Tour
12
Relations Between Problems
Assuming an efficient procedure for problem A, there is an efficient procedure for problem B. Hence B cannot be radically harder than A.
13
Reductions. B ≤p A: B cannot be radically harder than A. In other words, A is at least as hard as B.
14
Which One is Harder? ? Seating Tour
15
Reduce Tour to Seating. First Observation: the problems aren't so different. site ↔ guest; "directly reachable from…" ↔ "liked by…"
16
Reduce Tour to Seating Second Observation: Completing the circle
Let's invite to our party a very popular guest, i.e. one who can sit next to everybody else.
17
Reduce Tour to Seating: If there is a tour, there is also a way to seat all the imagined guests around the table (the popular guest closes the circle).
18
Reduce Tour to Seating: If there is a seating, we can easily find a tour path (no tour, no seating).
19
Bottom Line The seating problem is at least as hard as the tour problem
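The reduction in the last few slides can be written down directly. This is a sketch with illustrative names (`tour_to_seating`, `reachable`): add one "popular guest" who likes and is liked by everyone, so a seating of the enlarged party exists iff the original sites admit a tour path.

```python
def tour_to_seating(sites, reachable):
    """Reduce the tour problem to the seating problem by inviting one
    extra 'popular guest' who can sit next to everybody."""
    popular = object()  # a fresh guest, distinct from every site
    guests = list(sites) + [popular]

    def likes(u, v):
        # The popular guest likes everyone; otherwise reuse the
        # "directly reachable from" relation as "liked by".
        return u is popular or v is popular or reachable(u, v)

    return guests, likes
```

Cutting the circle at the popular guest turns a valid seating back into a tour, which is the two-sided argument the slides give.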
20
What have we shown? Although we could neither come up with an efficient algorithm for these problems nor prove that none exists, we managed to show a very powerful claim regarding the relation between their hardness.
21
Furthermore Interestingly, we can also reduce the seating problem to the tour problem. Moreover, there is a whole class of problems, which can be pair-wise efficiently reduced to each other.
22
NPC contains thousands of distinct problems, each reducible to all the others. Do they admit efficient algorithms, or only exponential ones?
23
How can Studying P vs NP Make You a Millionaire?
This is the most fundamental open question of computer science. Resolving it would grant the solver great honor… as well as substantial fortune… It also has huge philosophical implications: no need for human ingenuity! No need for mathematicians!!!
24
Constraint Satisfaction
Def: Constraint Satisfaction Problem (CSP):
Instance: a set of constraints Φ = {φ1, …, φl} over two sets of variables, X of range RX and Y of range RY.
Determinate: each constraint determines the value of a variable y∈Y according to the value of some x∈X: φx→y : RX → RY, satisfied if φx→y(x) = y.
Uniform: each x∈X appears in dX of Φ, and each y∈Y appears in dY of Φ, for some global dX and dY.
Optimize: define ρ(Φ) = the maximum, over all assignments A: X → RX; Y → RY, of the fraction of satisfied constraints.
25
Cook’s Characterization of NP
Thm: For any language L in NP, testing membership in L can be reduced to CSP. Hence it is NP-hard to distinguish between ρ(Φ) = 1 and ρ(Φ) < 1.
26
Showing hardness: From now on, to show a problem NP-hard, we merely need to reduce CSP to it. (Any NP problem can be reduced to CSP by Cook's Thm; reducing CSP to a new, hard problem implies the new problem is NP-hard.)
27
Max Independent-Set. Instance: A graph G=(V,E) and a threshold k.
Problem: To decide if there is a set of vertices I = {v1,...,vk} ⊆ V, s.t. for any u,v∈I: (u,v)∉E.
28
Max I.S. is NP-hard. Proof: We'll show CSP ≤p Max I.S.
29
The reduction: Co-Partite Graph.
G comprises k=|X| cliques of size |RX|: a vertex for each plausible assignment to x. An edge joins two assignments that determine a different value for the same y. In particular, E ⊇ {(⟨i,j1⟩, ⟨i,j2⟩) | i∈[k], j1≠j2∈RX}.
30
Proof of Correctness: An I.S. of size k must contain exactly one vertex in every clique. A satisfying assignment implies an I.S. of size k; an I.S. of size k corresponds to a consistent, satisfying assignment.
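The co-partite construction can be sketched concretely. This is an illustrative implementation (the names `csp_to_graph` and the dict encoding of constraints are assumptions, not from the slides): one vertex per (variable, value) pair, clique edges inside each variable's block, and conflict edges between assignments that give some shared y two different values.

```python
from itertools import combinations

def csp_to_graph(X, Rx, constraints):
    """Sketch of the co-partite reduction.  `constraints` maps a pair
    (x, y) to a dict phi: Rx-value -> Ry-value (phi determines y's value
    from x's value, as in the CSP definition).  Returns |X| cliques of
    size |Rx| plus edges between conflicting assignments."""
    vertices = [(x, a) for x in X for a in Rx]
    constrainers = {}                       # y -> set of x constraining y
    for (x, y) in constraints:
        constrainers.setdefault(y, set()).add(x)
    edges = set()
    for (x1, a1), (x2, a2) in combinations(vertices, 2):
        if x1 == x2:
            edges.add(frozenset([(x1, a1), (x2, a2)]))   # inside a clique
            continue
        for y, xs in constrainers.items():
            if x1 in xs and x2 in xs and \
               constraints[(x1, y)][a1] != constraints[(x2, y)][a2]:
                edges.add(frozenset([(x1, a1), (x2, a2)]))
    return vertices, edges
```

An independent set of size |X| must pick one vertex per clique, and the absence of conflict edges makes those picks a consistent assignment.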
31
Generalized Tour Problem
Add prices to the roads of the tour problem; ask for the least costly tour. (Figure: a map with road prices.)
32
Approximation How about approximating the optimal tour?
I.e., finding a tour which costs, say, no more than twice as much as the least costly one.
33
Hardness of Approximation
34
Promise Problems Sometimes you can promise something about the input
It doesn't matter what you answer for infeasible inputs. "I know my graph has a clique of size n/4! Does it have a clique of size n/2?"
35
Promise Problems & Approximation
We’ll see promise problems of a certain type, called gap problems, can be utilized to prove hardness of approximation.
36
Gap Problems (Max Version)
Instance: … Problem: to distinguish between the following two cases: YES: the maximal solution is ≥ B. NO: the maximal solution is ≤ A.
37
Idea: We've shown "standard" problems are NP-hard by reductions from CSP. We want to prove gap-problems are NP-hard. Why not prove some canonical gap-problem is NP-hard and reduce from it? If a reduction reduces one gap-problem to another, we refer to it as gap-preserving.
38
Gap-CSP[ε]. Instance: same as CSP.
Problem: to distinguish between the following two cases: YES: there exists an assignment that satisfies all constraints. NO: no assignment satisfies more than an ε fraction of the constraints.
39
PCP (Without Proof). Theorem [FGLSS, AS, ALMSS]: For any ε>0, Gap-CSP[ε] is NP-hard, as long as |RX|, |RY| ≥ ε^−O(1).
40
Why Is It Called PCP? (Probabilistically Checkable Proofs)
CSP has a polynomial-size membership proof, checkable in polynomial time. "My formula is satisfiable!" "Prove it!" "This assignment satisfies it!" (Figure: an assignment to the x and y variables.)
41
Why Is It Called PCP? (Probabilistically Checkable Proofs)
…Now our verifier has to check the assignment satisfies all constraints…
42
Why Is It Called PCP? (Probabilistically Checkable Proofs)
In a NO instance of gap-CSP, 1−ε of the constraints are not satisfied! So for gap-CSP the verifier is right with high probability even if it merely picks a constant number of constraints at random and checks only those.
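The sampling idea above can be sketched as a toy verifier (the name `sampling_verifier` and the predicate encoding are illustrative assumptions): in a NO instance at most an ε fraction of constraints hold, so each sampled check passes with probability at most ε, and all k independent checks pass with probability at most ε^k.

```python
import random

def sampling_verifier(constraints, assignment, k=10, rng=None):
    """Toy PCP-style verifier: instead of reading the whole proof, pick
    k constraints uniformly at random and check only those against the
    given assignment."""
    rng = rng or random.Random(0)
    return all(rng.choice(constraints)(assignment) for _ in range(k))
```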
43
Why Is It Called PCP? (Probabilistically Checkable Proofs)
Since gap-CSP is NP-hard, all NP problems have probabilistically checkable proofs.
44
Hardness of Approximation
Do the reductions we've seen also work for the gap versions (i.e., are they approximation preserving)? We'll revisit the Max I.S. example.
45
The same Max I.S. Reduction
An I.S. of size k must contain exactly one vertex in every part. A satisfying assignment implies an I.S. of size k; an I.S. of size εk corresponds to a consistent assignment satisfying an ε fraction of Φ.
46
Corollary: for any ε>0, Independent-Set is hard to approximate to within any constant factor.
47
Chromatic Number Instance: a graph G=(V,E).
Problem: To minimize k, so that there exists a function f: V → {1,…,k} for which (u,v)∈E ⇒ f(u) ≠ f(v).
48
Chromatic Number Observation: Each color class is an independent set
49
Clique Cover Number (CCN)
Instance: a graph G=(V,E). Problem: To minimize k, so that there exists a function f: V → {1,…,k} for which f(u) = f(v) ⇒ (u,v)∈E.
50
Clique Cover Number (CCN)
51
Observation. Claim: The CCN problem on a graph G is the CHROMATIC-NUMBER problem on the complement graph G^c.
52
Reduction Idea: map the CLIQUE graph G (on m vertices) to a CCN graph G' (on q vertices) whose edge set is invariant under cyclic shift and which is clique preserving.
53
Correctness Given such transformation: MAX-CLIQUE(G) = m CCN(G’) = q
54
Transformation T: V → [q] s.t. for any v1,v2,v3,v4,v5,v6: T(v1)+T(v2)+T(v3) ≡ T(v4)+T(v5)+T(v6) (mod q) ⇒ {v1,v2,v3} = {v4,v5,v6}. T is unique for triplets.
55
Observations. Such T is unique for pairs and for single vertices as well: if T(x)+T(u) ≡ T(v)+T(w) (mod q), then {x,u} = {v,w}; if T(x) ≡ T(y) (mod q), then x = y.
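A transformation with the triplet-uniqueness property exists for q large enough, and for small instances one can even find it by brute force. This greedy sketch is NOT the poly-time construction the slides refer to; the function name and parameters are illustrative. (Uniqueness for pairs and singletons follows from triplets by padding both sides with a common vertex, as the slide notes.)

```python
from itertools import combinations_with_replacement

def build_triplet_unique_T(n, q):
    """Greedily pick T(v_0), ..., T(v_{n-1}) in Z_q so that two triples of
    vertices (with repetition) have equal value-sums mod q only if they
    are the same multiset.  Returns None if q is too small."""
    T = []
    for _ in range(n):
        for c in range(q):
            cand = T + [c]
            sums = {}
            ok = True
            for triple in combinations_with_replacement(range(len(cand)), 3):
                s = sum(cand[i] for i in triple) % q
                if sums.setdefault(s, triple) != triple:
                    ok = False          # two distinct multisets share a sum
                    break
            if ok:
                T.append(c)
                break
        else:
            return None                 # no value of c works for this vertex
    return T
```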
56
Using the Transformation
(Figure: vertices vi, vj of the CLIQUE graph are mapped to positions T(vi)=1 and T(vj)=4 among 0,…,q−1 in the CCN graph.)
57
Completing the CCN Graph Construction
(s,t) ∈ E_CLIQUE ⇒ (T(s),T(t)) ∈ E_CCN.
58
Completing the CCN Graph Construction
Close the set of edges under shift: for every (x,y)∈E, if x'−y' ≡ x−y (mod q), then (x',y')∈E.
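The shift-closure step is a one-liner to state in code. A small sketch (illustrative name `close_under_shift`): adding every cyclic shift of each edge is the same as adding every pair with the same difference mod q.

```python
def close_under_shift(edges, q):
    """Close an edge set over vertex set Z_q under cyclic shift: for every
    edge (x, y) and every shift d, (x+d, y+d) mod q is also an edge.
    Equivalently, (x', y') becomes an edge whenever x'-y' = x-y (mod q)
    for some original edge (x, y)."""
    closed = set()
    for (x, y) in edges:
        for d in range(q):
            closed.add(((x + d) % q, (y + d) % q))
    return closed
```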
59
Edge Origin Unique. First Observation: this edge comes only from (s,t).
60
Triangle Consistency. Second Observation: a triangle only comes from a triangle.
61
Clique Preservation. Corollary: {T(c1),…,T(ck)} is a clique in the CCN graph
iff {c1,…,ck} is a clique in the CLIQUE graph.
62
What Remains? It remains to show how to construct the transformation T in polynomial time.
63
Corollaries Theorem: CCN is NP-hard to approximate within any constant factor. Theorem: CHROMATIC-NUMBER is NP-hard to approximate within any constant factor.
64
Max-E3-Lin-2. Def:
Instance: a system of linear equations L = {E1, …, En} over Z2, each equation over exactly 3 variables (whose sum is required to equal either 0 or 1). Problem: Compute ρ(L), the maximum fraction of simultaneously satisfiable equations.
65
Main Theorem. Thm [Håstad]: gap-Max-E3-Lin-2(1−ε, ½+ε) is NP-hard.
That is, for every constant ε>0 it is NP-hard to distinguish between the case that 1−ε of the equations are satisfiable and the case that only ½+ε are. [It is therefore NP-hard to approximate Max-E3-Lin-2 to within 2−ε for any constant ε>0.]
66
This bound is tight! A random assignment satisfies half of the equations in expectation. And deciding whether a set of linear equations has a common solution is in P (Gaussian elimination).
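The "in P" half of the tightness claim can be sketched directly: Gaussian elimination over Z2, here in the compact XOR-basis form with bitmask rows (the function name and the `(vars, b)` encoding are illustrative assumptions).

```python
def solvable_mod2(equations):
    """Gaussian elimination over Z_2: decide whether a system of linear
    equations mod 2 has a common solution.  Each equation is (vars, b):
    the variables indexed in `vars` must sum to b (mod 2)."""
    pivots = {}                          # leading bit -> (row mask, rhs)
    for vars_, b in equations:
        mask = 0
        for v in vars_:
            mask ^= 1 << v               # repeated variables cancel mod 2
        while mask:
            top = mask.bit_length() - 1
            if top not in pivots:
                pivots[top] = (mask, b)  # record a new pivot row
                break
            pm, pb = pivots[top]
            mask ^= pm                   # eliminate the leading variable
            b ^= pb
        else:
            if b == 1:
                return False             # row reduced to 0 = 1: inconsistent
    return True
```

So exact satisfiability of E3-Lin-2 is easy; only the gap version (distinguishing 1−ε satisfiable from ½+ε) is hard.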
67
Proof Outline: The proof proceeds by reduction from gap-CSP[ε], known to be NP-hard for any constant ε>0. Given such an instance Φ, the proof shows a poly-time construction of an instance L of Max-E3-Lin-2 s.t.:
ρ(Φ) = 1 ⇒ ρ(L) ≥ 1 − εL
ρ(Φ) < ε ⇒ ρ(L) ≤ ½ + εL
Main Idea: Replace every x and every y with a set of variables representing a binary code of their assigned values. Then test consistency within the encoding, and of every φx→y, using linear equations over 3 bits.
68
Long-Code of R One bit for every subset of R
69
Long-Code of R One bit for every subset of R to encode an element eR
1 1 1
70
The Variables of L. Consider an instance Φ of CSP[ε], for a small constant ε (to be fixed later). L has 2 types of variables: a variable z[y,F] for every variable y∈Y and subset F ∈ P[RY]; a variable z[x,F] for every variable x∈X and subset F ∈ P[RX]. In fact we use a "folded" long-code, s.t. f(F) = 1 − f([n]\F).
71
Linearity of a Legal-Encoding
A Boolean function f: P[R] → Z2, if a legal long-code word, is a linear function; that is, for every F,G ∈ P[R]: f(F) + f(G) ≡ f(FΔG), where FΔG ∈ P[R] is the symmetric difference of F and G. Unfortunately, any linear function (a sum of a subset of variables) will pass this test.
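The linearity condition above is easy to exercise on concrete functions. A small sketch (illustrative names; subsets represented as frozensets so `^` is symmetric difference): check f(F) + f(G) ≡ f(FΔG) (mod 2) on random pairs. As the slide warns, any linear function passes, not just legal long-code words.

```python
import random

def linearity_test(f, R, trials=100, seed=0):
    """Probe random pairs of subsets F, G of R and check the long-code
    linearity condition f(F) + f(G) = f(F symmetric-difference G) mod 2."""
    rng = random.Random(seed)
    universe = list(R)

    def rand_subset():
        return frozenset(a for a in universe if rng.random() < 0.5)

    for _ in range(trials):
        F, G = rand_subset(), rand_subset()
        if (f(F) + f(G)) % 2 != f(F ^ G) % 2:
            return False
    return True
```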
72
The Distribution. Def: denote by μ the biased product distribution over P[RX], which assigns probability to a subset H as follows: independently, for each a∈RX, let a∉H with probability 1−ε and a∈H with probability ε. One should think of μ as a multiset of subsets in which every subset H appears with the appropriate probability.
73
The Linear Equations. L's linear equations are the union, over all φx→y, of the following set of equations: for every F ∈ P[RY], G ∈ P[RX] and H ∈ μ, denoting F* = φx→y⁻¹(F): z[y,F] + z[x,G] ≡ z[x, F* Δ G Δ H].
74
Correctness of Reduction
Prop: if ρ(Φ) = 1 then ρ(L) = 1−ε. Proof: let A be a satisfying assignment to Φ. Assign all of L's variables according to the legal encoding of A's values. A linear equation of L, corresponding to φx→y, F, G, H, is unsatisfied exactly if A(x)∈H, which occurs with probability ε over the choice of H.
LLC-Lemma: ρ(L) = ½ + δ/2 ⇒ ρ*(Φ) > 4δ², where δ = 2ρ(L) − 1. Note: δ is independent of ε! (Later we use that fact to set ε small enough for our needs.)
75
Denoting an Assignment to L
Given an assignment AL to L's variables: for any x∈X, denote by fx : P[RX] → {−1,1} the function comprising the values AL assigns to z[x,·] (corresponding to the long-code of the value assigned to x). For any y∈Y, denote by fy : P[RY] → {−1,1} the function comprising the values AL assigns to z[y,·] (corresponding to the long-code of the value assigned to y). (Replacing 1 by −1 and 0 by 1.)
76
Distributional Assignments
Consider a CSP instance Φ. Let Δ(R) be the set of all distributions over R. Def: a distributional assignment to Φ is A: X → Δ(RX); Y → Δ(RY). Denote by ρ*(Φ) the maximum, over distributional assignments A, of the average probability for φ∈Φ to be satisfied when variables' values are chosen according to A. Clearly ρ*(Φ) ≥ ρ(Φ). Moreover, Prop: ρ(Φ) ≥ ρ*(Φ).
77
The Distributional-Assignment A
Def: Let AL be a distributional assignment to Φ given by the following random processes: for any variable x∈X, choose a subset S⊆RX with probability f̂x(S)², then uniformly choose a random a∈S; for any variable y∈Y, choose a subset S⊆RY with probability f̂y(S)², then uniformly choose a random b∈S. (For such functions, the squares of the Fourier coefficients constitute a distribution.)
78
odd(φx→y(S)) = {b | #{a∈S | φx→y(a) = b} is odd}
What's to do: show that AL's expected success on φx→y is > 4δ², in two steps: first express AL's success probability for any φx→y; then bound that value from below by 4δ².
79
Claim 1: AL's success probability, for any φx→y, is bounded from below (by the expression on the slide).
Proof: write that success probability as a sum over subsets; taking the sum only over the cases in which Sy = odd(φx→y(Sx)) results in the claimed inequality.
80
High Success Probability
81
Related work. Thm (Friedgut): a Boolean function f with small average-sensitivity is an [ε,j]-junta. Thm (Bourgain): a Boolean function f with small high-frequency weight is an [ε,j]-junta. Thm (Kindler & Safra): a Boolean function f with small high-frequency weight in a p-biased measure is an [ε,j]-junta. Corollary: a Boolean function f with small noise-sensitivity is an [ε,j]-junta. [Dinur, S]: showing Vertex-Cover hard to approximate to within 10√5 − 21. Parameters: average-sensitivity [BL, KKL, F]; high-frequency weight [H, B]; noise-sensitivity [BKS].
82
Boolean Functions and Juntas
A Boolean function f: {0,1}ⁿ → {0,1}. Def: f is a j-junta if there exists J⊆[n], |J| ≤ j, s.t. for every x: f(x) = f(x∩J). f is an (ε,j)-junta if there exists a j-junta f' s.t. f and f' agree on all but an ε fraction of inputs.
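The j-junta definition can be checked by brute force on small n, which makes the definition concrete (illustrative name `is_junta`; inputs encoded as 0/1 tuples): f is a j-junta iff some set J of at most j coordinates determines its value.

```python
from itertools import combinations, product

def is_junta(f, n, j):
    """Brute-force check whether f: {0,1}^n -> {0,1} depends on at most
    j coordinates: try every candidate set J and verify that the
    restriction of x to J determines f(x)."""
    points = list(product([0, 1], repeat=n))
    for J in combinations(range(n), min(j, n)):
        seen = {}
        if all(seen.setdefault(tuple(x[i] for i in J), f(x)) == f(x)
               for x in points):
            return True
    return False
```

This is exponential in n, of course; the theorems quoted above give *sufficient conditions* (small sensitivity, small high-frequency weight) that avoid any such enumeration.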
83
Motivation – Testing Long-code
Def (a long-code test): given a code-word w, probe it in a constant number of entries, and: accept w.h.p. if w is a monotone dictatorship; reject w.h.p. if w is not close to any monotone dictatorship.
84
Motivation – Testing Long-code
Def (a long-code list-test): given a code-word w, probe it in a constant number of entries, and: accept w.h.p. if w is a monotone dictatorship; reject w.h.p. if there is no junta J⊆[n] s.t. f is close to some f' with f'(F) = f'(F∩J) for all F. Note: a long-code list-test distinguishes between the case that w is a dictatorship and the case that w is far from every junta.
85
Motivation – Testing Long-code
The long-code test and the long-code list-test are essential tools in proving hardness results. Examples … Hence finding simple sufficient conditions for a function to be a junta is important.
86
Noise-Sensitivity. Idea: check how the value of f changes when the input is changed not on one, but on several coordinates.
87
Noise-Sensitivity. Def ((ε,p)-perturbation): let 0<ε<1 and x∈P([n]). Then y ~ (ε,p,x) if y = (x\I) ∪ z, where I ~ ε[n] is a noise subset (each coordinate enters I independently w.p. ε) and z ~ μp(I) is a replacement. Def (ε-noise-sensitivity): ns_ε(f) = Pr[f(x) ≠ f(y)]. Note: the perturbation deletes a coordinate of x w.p. ε(1−p) and adds a coordinate to x w.p. εp. Hence, when p = ½, it is equivalent to flipping each coordinate of x w.p. ε/2.
88
Noise-Sensitivity – Cont.
Advantage: very efficiently testable (using only two queries) by a perturbation test. Def (perturbation-test): choose x ~ μp and y an (ε,p)-perturbation of x; check whether f(x) = f(y). The success probability is directly related to the noise-sensitivity of f. Prop: the ε-noise-sensitivity has a simple Fourier expression (formula on the slide).
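The two-query perturbation test can be simulated by Monte Carlo (illustrative names and parameters; this is an estimator, not the Fourier formula): draw x from the p-biased measure, re-randomize each coordinate independently with probability ε, and count disagreements of f.

```python
import random

def perturbation_test(f, n, eps, p=0.5, trials=2000, seed=1):
    """Estimate the eps-noise-sensitivity of f: {0,1}^n -> {0,1}: the
    probability that f(x) != f(y) when y re-randomizes each coordinate
    of x with probability eps using a fresh p-biased bit."""
    rng = random.Random(seed)
    disagree = 0
    for _ in range(trials):
        x = [1 if rng.random() < p else 0 for _ in range(n)]
        y = [(1 if rng.random() < p else 0) if rng.random() < eps else xi
             for xi in x]
        if f(x) != f(y):
            disagree += 1
    return disagree / trials
```

For p = ½ this matches the note above: each coordinate of x flips with probability ε/2, so a dictatorship has noise-sensitivity ε/2.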
89
Related Work. [Dinur, S]: showing Vertex-Cover hard to approximate to within 10√5 − 21. [Bourgain]: showing a Boolean function with weight < 1/k on characters of size larger than k is close to a junta of size exponential in k ([Kindler, S]: similar for biased product distributions).