1 The PCP starting point
2 Overview
In this lecture we'll present the Quadratic Solvability problem, see that this problem is closely related to PCP, and even use it to prove a (very, very weak...) PCP characterization of NP.
3 Quadratic Solvability
Def (QS[D, 𝔽]): Instance: a set of n quadratic equations over 𝔽 with at most D variables each. Problem: to decide if there is a common solution.
Or, equivalently: a set of D-variate, total-degree-2 polynomials; the problem is to decide if they have a common root.
Example (𝔽 = Z₂, D = 1):
y = 0
x² + x = 0
x² + 1 = 0
Here x = 1, y = 0 is a common solution.
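As a toy illustration, the slide's Z₂ example can be checked by brute force (a minimal sketch; the helper names are ours, not part of the lecture):

```python
from itertools import product

# Brute-force check of the slide's QS example over F = Z_2, D = 1:
# equations y = 0, x^2 + x = 0, x^2 + 1 = 0.
equations = [
    lambda x, y: y % 2,
    lambda x, y: (x * x + x) % 2,
    lambda x, y: (x * x + 1) % 2,
]

def common_solution(eqs, q=2):
    """Return an assignment (x, y) on which every equation is 0, or None."""
    for x, y in product(range(q), repeat=2):
        if all(eq(x, y) == 0 for eq in eqs):
            return (x, y)
    return None

print(common_solution(equations[:2]))  # (0, 0) satisfies the first two equations
print(common_solution(equations))      # (1, 0): x = 1 zeroes x^2+x and x^2+1 over Z_2
```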
4 Solvability
A generalization of this problem:
Def (Solvability[D, 𝔽]): Instance: a set of n polynomials over 𝔽 with at most D variables each. Each polynomial has degree bound n in each one of the variables. Problem: to decide if there is a common root.
5 Solvability is Reducible to QS
Example: y²x² + x²t + tlz + z + 1 = 0. Introduce new variables w₁ = y², w₂ = x², w₃ = tl; the equation becomes the quadratic w₁w₂ + w₂t + w₃z + z + 1 = 0, and each defining equation (w₁ − y² = 0, etc.) is itself quadratic.
Note that the parameters (D, 𝔽) don't change (assuming D > 2)!
Could we use the same "trick" to show Solvability is reducible to Linear Solvability? No: the defining equations of the form w = uv are themselves quadratic, so the substitution bottoms out at degree 2.
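The substitution can be sanity-checked by brute force over a small field (our own choice of 𝔽 = Z₂ for the check; the slide's equation is field-generic):

```python
from itertools import product

# Check that the slide's substitution preserves solvability over Z_2:
# the degree-5 equation  y^2 x^2 + x^2 t + t l z + z + 1 = 0  becomes a
# quadratic system via w1 = y^2, w2 = x^2, w3 = t*l.
q = 2

def orig_solvable():
    return any((y*y*x*x + x*x*t + t*l*z + z + 1) % q == 0
               for x, y, t, l, z in product(range(q), repeat=5))

def reduced_solvable():
    for x, y, t, l, z, w1, w2, w3 in product(range(q), repeat=8):
        eqs = [w1 - y*y, w2 - x*x, w3 - t*l,      # the defining equations
               w1*w2 + w2*t + w3*z + z + 1]       # the rewritten equation
        if all(e % q == 0 for e in eqs):
            return True
    return False

print(orig_solvable(), reduced_solvable())  # the two answers agree
```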
6 QS is NP-hard
Def (3-SAT): Instance: a 3CNF formula. Problem: to decide if this formula is satisfiable.
Let us prove that QS is NP-hard by reducing 3-SAT to it. A 3CNF formula with m/3 clauses over literals ℓ₁, ..., ℓ_m has the form
(ℓ₁ ∨ ℓ₂ ∨ ℓ₃) ∧ ... ∧ (ℓ_{m−2} ∨ ℓ_{m−1} ∨ ℓ_m),
where each literal ℓᵢ ∈ {xⱼ, ¬xⱼ} for some 1 ≤ j ≤ n.
7 QS is NP-hard
Given an instance of 3-SAT, use the following transformation on each literal:
Tr[ℓᵢ] = 1 − xⱼ if ℓᵢ = xⱼ, and Tr[ℓᵢ] = xⱼ if ℓᵢ = ¬xⱼ.
Each clause (ℓᵢ ∨ ℓᵢ₊₁ ∨ ℓᵢ₊₂) becomes the polynomial equation Tr[ℓᵢ] · Tr[ℓᵢ₊₁] · Tr[ℓᵢ₊₂] = 0.
The corresponding instance of Solvability is the set of all resulting polynomials (which, assuming the variables are only assigned Boolean values, is equivalent: the product vanishes exactly when one of its factors does, i.e. when some literal of the clause is satisfied).
8 QS is NP-hard
In order to remove the assumption we need to add, for every variable xᵢ, the equation:
xᵢ · (1 − xᵢ) = 0
This concludes the description of a reduction from 3-SAT to Solvability[O(1), 𝔽] for any field 𝔽.
What is the maximal degree of the resulting equations? (Three: each clause polynomial is a product of three linear factors.)
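The clause translation above can be sketched as follows (a minimal sketch; the encoding of literals as pairs and the helper names are ours):

```python
from itertools import product

# A literal is (var_index, negated).  Tr maps a literal to a polynomial factor:
# the positive literal x_j becomes (1 - x_j), the negated literal becomes x_j,
# so the clause product vanishes exactly when some literal is true.

def clause_poly(clause):
    """Return p(assignment) for clause = [(j, negated), ...] over Boolean values."""
    def p(a):
        prod = 1
        for j, negated in clause:
            prod *= a[j] if negated else (1 - a[j])
        return prod
    return p

# (x0 v ~x1 v x2): satisfied iff the polynomial evaluates to 0.
p = clause_poly([(0, False), (1, True), (2, False)])
for a in product([0, 1], repeat=3):
    sat = a[0] == 1 or a[1] == 0 or a[2] == 1
    assert (p(a) == 0) == sat
print("clause polynomial agrees with the clause on all Boolean assignments")
```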
9 QS is NP-hard
According to the two previous reductions: 3-SAT is reducible to Solvability[O(1), 𝔽] (with degree-3 polynomials), which in turn is reducible to QS[O(1), 𝔽]. Hence QS[O(1), 𝔽] is NP-hard.
10 Gap-QS
Def (Gap-QS[D, 𝔽, ε]): Instance: a set of n quadratic equations over 𝔽 with at most D variables each. Problem: to distinguish between the following two cases:
There is a common solution.
No more than an ε-fraction of the equations can be satisfied simultaneously.
11 Gap-QS and PCP
Def: L ∈ PCP[D, V, ε] if there is a polynomial-time algorithm which, for any input x, produces a set of efficient Boolean functions over variables of range 2^V, each depending on at most D variables, so that:
x ∈ L iff there exists an assignment to the variables which satisfies all the functions.
x ∉ L iff no assignment can satisfy more than an ε-fraction of the functions.
Claim: Gap-QS[D, 𝔽, ε] ∈ PCP[D, log|𝔽|, ε]. Given a Gap-QS[D, 𝔽, ε] quadratic equation system, for each quadratic polynomial pᵢ(x₁,...,x_D) add the Boolean function φᵢ(a₁,...,a_D) ≡ (pᵢ(a₁,...,a_D) = 0). The variables are those of the input system, taking values in 𝔽, hence of range |𝔽| = 2^V with V = log|𝔽|.
12 Gap-QS and PCP
Therefore, every language which is efficiently reducible to Gap-QS[D, 𝔽, ε] is also in PCP[D, log|𝔽|, ε]. Thus, proving Gap-QS[D, 𝔽, ε] NP-hard also proves the PCP[D, log|𝔽|, ε] characterization of NP. And indeed our goal henceforth will be proving Gap-QS[D, 𝔽, ε] NP-hard for the best (smallest) D we can.
13 Gap-QS[n, 𝔽, 2/|𝔽|] is NP-hard
Proof: by reduction from QS[O(1), 𝔽].
Instance of QS[O(1), 𝔽]: degree-2 polynomials p₁, p₂, p₃, ..., pₙ.
A satisfying assignment makes all of them evaluate to zero: (0, 0, 0, ..., 0).
A non-satisfying assignment, however, may still zero all but one of them, e.g. (0, 3, 7, ..., 0), so the system as it stands gives no gap.
14 Gap-QS[n, 𝔽, 2/|𝔽|] is NP-hard (cont.)
In order to have a gap we need an efficient degree-preserving transformation taking the degree-2 polynomials p₁, p₂, p₃, ..., pₙ to p₁', p₂', p₃', ..., p_m', so that under any non-satisfying assignment only few of the new polynomials evaluate to zero, e.g. (0, 2, 4, ..., 3).
15 Gap-QS[n, 𝔽, 2/|𝔽|] is NP-hard (cont.)
For such an efficient degree-preserving transformation E it must hold that every nonzero vector of values is mapped to a vector with only a small fraction of zero entries; that is, any two distinct value vectors are mapped far apart.
Thus E is an error-correcting code!
We shall now see examples of degree-preserving transformations which are also error-correcting codes:
16 The linear transformation: multiplication by a matrix
(p₁ p₂ ... pₙ) · C = (p·c¹ ... p·cᵐ), where C = (cᵢⱼ) is an n×m matrix of scalars, the pᵢ are polynomials, and p·cʲ denotes the inner product Σᵢ pᵢcᵢⱼ. Each new polynomial is a linear combination of the old ones, so degree is preserved; the transformation is poly-time as long as m = n^c for some constant c.
17 The linear transformation: multiplication by a matrix
The same matrix acts on values: (e₁ e₂ ... eₙ) · C = (e·c¹ ... e·cᵐ), where (e₁, ..., eₙ) are the values of the polynomials under some assignment, and the result is the values of the new polynomials under the same assignment. In particular e = 0ⁿ is mapped to the zero vector, so a satisfying assignment remains satisfying.
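The action on value vectors can be sketched directly (a minimal sketch; the matrix and field size are arbitrary choices of ours):

```python
# Applying the matrix to the *values* e = (p_1(a), ..., p_n(a)) gives the
# values of the combined polynomials under the same assignment, and the
# all-zero vector (a satisfying assignment) stays all-zero.

def vec_mat(v, C, q):
    """v (length n) times C (n x m), over Z_q."""
    n, m = len(C), len(C[0])
    return [sum(v[i] * C[i][j] for i in range(n)) % q for j in range(m)]

q = 5
C = [[1, 2, 3],
     [4, 0, 1]]                      # an arbitrary 2x3 matrix over Z_5
print(vec_mat([0, 0], C, q))         # satisfying values -> all-zero vector
print(vec_mat([0, 3], C, q))         # one nonzero value spreads to several outputs
```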
18 What's Ahead
We proceed with several examples of linear error-correcting codes:
Reed-Solomon code
Random matrix
And finally even a code which suits our needs...
19 Using Reed-Solomon Codes
Define the matrix A so that multiplication by it interpolates: one can prove that for any 0 ≤ i ≤ |𝔽|−1, (vA)ᵢ is P(i), where P is the unique degree-(n−1) univariate polynomial for which P(i) = vᵢ for all 0 ≤ i ≤ n−1. (That's really Lagrange's formula in disguise...)
Therefore, for any v ≠ 0ⁿ, the fraction of zeroes in vA is bounded by (n−1)/|𝔽|.
Using multivariate polynomials we can even get ε = O(log n/|𝔽|).
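The distance bound can be checked exhaustively for tiny parameters. Here we use the equivalent evaluation form of the Reed-Solomon matrix (column α holds the powers α⁰, ..., α^(n−1), so v acts as coefficients rather than values; the zero-fraction bound is the same), with our own choice of q and n:

```python
from itertools import product

# Reed-Solomon evaluation matrix over F = Z_q: vA evaluates the polynomial
# with coefficient vector v at every field element.  A nonzero polynomial of
# degree <= n-1 has at most n-1 roots, so vA has at most an (n-1)/q fraction
# of zeros for every v != 0.
q, n = 7, 3
A = [[pow(a, i, q) for a in range(q)] for i in range(n)]   # n x q matrix

def zero_fraction(v):
    out = [sum(v[i] * A[i][j] for i in range(n)) % q for j in range(q)]
    return out.count(0) / q

worst = max(zero_fraction(v) for v in product(range(q), repeat=n) if any(v))
print(worst, "<=", (n - 1) / q)   # the bound is attained, e.g. by (x-1)(x-2)
```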
20 Using a Random Matrix
Lemma: a random matrix A ∈ 𝔽^(n×m) satisfies w.h.p.: for every v ≠ 0ⁿ, the fraction of zeros in the output vector vA is at most 2|𝔽|⁻¹.
21 Using a Random Matrix
Proof (by the probabilistic method): Let v ≠ 0ⁿ. Because the inner product of v and a random vector is uniformly random over 𝔽, each entry of vA is 0 with probability |𝔽|⁻¹, independently across columns. Hence |{i : (vA)ᵢ = 0}| (denoted X_v) is a binomial random variable with parameters m and |𝔽|⁻¹. For this random variable we can bound the probability Pr[X_v ≥ 2m|𝔽|⁻¹] (the probability that the fraction of zeros exceeds 2|𝔽|⁻¹).
22 Using a Random Matrix
The Chernoff bound: for a binomial random variable X with parameters m and |𝔽|⁻¹ (so EX = m|𝔽|⁻¹):
Pr[X ≥ 2m|𝔽|⁻¹] ≤ e^(−m|𝔽|⁻¹/3)
Hence each fixed v ≠ 0ⁿ violates the desired bound with probability at most e^(−m/(3|𝔽|)).
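Numerically, the exact binomial tail sits well below the bound. A small sketch (our own parameters; we use one standard multiplicative form of Chernoff, Pr[X ≥ 2μ] ≤ e^(−μ/3) with μ = m/q):

```python
from math import comb, exp

# Compare the exact tail Pr[X >= 2m/q] for X ~ Bin(m, 1/q) with the
# Chernoff bound exp(-mu/3), mu = m/q.

def exact_tail(m, q):
    p = 1 / q
    thr = 2 * m // q
    return sum(comb(m, i) * p**i * (1 - p)**(m - i) for i in range(thr, m + 1))

m, q = 60, 5
mu = m / q
print(exact_tail(m, q), "<=", exp(-mu / 3))
```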
23 Using a Random Matrix
(The union bound: the probability of a union of events is smaller than or equal to the sum of their probabilities.)
Overall, the number of different vectors v is |𝔽|ⁿ. Hence, according to the union bound, we can multiply the previous probability by the number of different vectors v to obtain a bound on the probability that some v violates the lemma: |𝔽|ⁿ · e^(−m/(3|𝔽|)). And this probability is smaller than 1 for m = O(n|𝔽|log|𝔽|). Hence, for such m, a random matrix satisfies the lemma with positive probability.
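For tiny parameters the lemma can be checked exhaustively over all nonzero v (an empirical illustration with our own choice of n, q, m and seed, not a proof):

```python
import random
from itertools import product

# Sample a random n x m matrix over Z_q with m in the O(n*q*log q) regime and
# verify that every nonzero v has zero fraction at most 2/q in vA.
q, n = 3, 3
m = 6 * n * q                      # comfortably large for these tiny sizes
random.seed(1)
A = [[random.randrange(q) for _ in range(m)] for _ in range(n)]

def zero_fraction(v):
    out = [sum(v[i] * A[i][j] for i in range(n)) % q for j in range(m)]
    return out.count(0) / m

worst = max(zero_fraction(v) for v in product(range(q), repeat=n) if any(v))
print("worst zero fraction:", worst, " bound 2/q =", 2 / q)
```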
24 Deterministic Construction
Define a matrix A of dimensions n×m as follows:
Assume 𝔽 = Z_p. Let k = log_p n + 1 (assume w.l.o.g. k ∈ ℕ), and let Z_(p^k) be the dimension-k extension field of 𝔽.
Associate each row with an index 1 ≤ i ≤ p^(k−1); hence n = p^(k−1).
Associate each column with a pair (x, y) ∈ Z_(p^k) × Z_(p^k); hence m = p^(2k).
25 Deterministic Construction
A is thus of dimensions p^(k−1) × p^(2k). Define A(i, (x, y)) = ⟨xⁱ, y⟩, the inner product over Z_p of xⁱ and y, viewing elements of the extension field as k-vectors over Z_p.
26 Analysis
For any vector v ∈ 𝔽ⁿ and any column (x, y) ∈ Z_(p^k) × Z_(p^k),
(vA)_(x,y) = Σᵢ vᵢ⟨xⁱ, y⟩ = ⟨G(x), y⟩, where G(x) = Σᵢ₌₁ⁿ vᵢxⁱ.
The number of zeroes in vA, where v ≠ 0ⁿ, is
|{(x,y) : G(x) = 0}| + |{(x,y) : G(x) ≠ 0, ⟨G(x), y⟩ = 0}| ≤ n·p^k + p^k·p^(k−1),
since G is a nonzero polynomial of degree at most n (so it has at most n roots x), and for each x with G(x) ≠ 0 exactly p^(k−1) values of y have ⟨G(x), y⟩ = 0.
And thus the fraction of zeroes is at most (n·p^k + p^(2k−1))/p^(2k) = n/p^k + 1/p = 2/p = 2|𝔽|⁻¹.
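The construction can be verified exhaustively for tiny parameters. This sketch assumes the entry formula A(i, (x, y)) = ⟨xⁱ, y⟩ (our reading of the elided definition, chosen to match the analysis above), takes p = 3, k = 2 as a hypothetical instance, and realizes GF(9) as Z₃[t]/(t² + 1):

```python
from itertools import product

# Deterministic matrix over F = Z_3, k = 2: rows 1 <= i <= p^(k-1) = 3,
# columns (x, y) in GF(9) x GF(9), entry <x^i, y> = Z_3 inner product of
# x^i and y viewed as 2-vectors (a, b) = a + b*t, with t^2 = -1.
p, k = 3, 2
n = p ** (k - 1)
field = list(product(range(p), repeat=k))

def mul(u, v):
    a, b = u
    c, d = v
    return ((a * c - b * d) % p, (a * d + b * c) % p)   # uses t^2 = -1

def power(x, e):
    r = (1, 0)
    for _ in range(e):
        r = mul(r, x)
    return r

def inner(u, v):
    return sum(ui * vi for ui, vi in zip(u, v)) % p

cols = list(product(field, repeat=2))                    # m = p^(2k) = 81
A = [[inner(power(x, i), y) for (x, y) in cols] for i in range(1, n + 1)]

worst = 0.0
for v in product(range(p), repeat=n):
    if not any(v):
        continue
    out = [sum(v[i] * A[i][j] for i in range(n)) % p for j in range(len(cols))]
    worst = max(worst, out.count(0) / len(cols))
print("worst zero fraction:", worst, " bound 2/p =", 2 / p)
```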
27 Summary of the Reduction
Given an instance {p₁, ..., pₙ} of QS[O(1), 𝔽], we found a matrix A which satisfies: for all v ≠ 0ⁿ, |{j : (vA)ⱼ = 0}|/m < 2|𝔽|⁻¹.
Hence {p₁, ..., pₙ} ∈ QS[O(1), 𝔽] if and only if the transformed system {(pA)₁, ..., (pA)ₘ} is a yes-instance of Gap-QS[O(n), 𝔽, 2|𝔽|⁻¹]. (Each combined polynomial is a linear combination of n polynomials in O(1) variables each, hence depends on O(n) variables.)
This proves Gap-QS[O(n), 𝔽, 2|𝔽|⁻¹] is NP-hard.
28 Hitting the Road
This proves a PCP characterization with D = O(n) (hardly a "local" test...). Eventually we'll prove a characterization with D = O(1) ([DFKRS]), using the results presented here as our starting point.