1 The PCP starting point

2 Overview In this lecture we'll present the Quadratic Solvability problem. We'll see that this problem is closely related to PCP, and we'll even use it to prove a (very, very weak...) PCP characterization of NP.

3 Quadratic Solvability Definition (QS[D,𝔽]): Instance: a set of n quadratic equations over 𝔽 with at most D variables each (equivalently: a set of D-variate polynomials of total degree 2). Problem: to decide if there is a common solution. Example: 𝔽 = Z2, D = 1: y = 0, x² + x = 0, x² + 1 = 0. The assignment x = 1, y = 0 is a common solution.
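For such tiny parameters the definition can be checked directly. A minimal sketch, assuming we represent each equation as a Python function that should evaluate to 0 (the representation and names are illustrative, not from the lecture):

```python
from itertools import product

# The three equations from the example over Z_2: y = 0, x^2 + x = 0, x^2 + 1 = 0.
equations = [
    lambda x, y: y % 2,
    lambda x, y: (x * x + x) % 2,
    lambda x, y: (x * x + 1) % 2,
]

def common_solutions(equations, field_size=2):
    """Enumerate all assignments and keep those on which every equation is 0."""
    return [
        (x, y)
        for x, y in product(range(field_size), repeat=2)
        if all(eq(x, y) == 0 for eq in equations)
    ]

print(common_solutions(equations))  # x=1, y=0 is the only common solution
```

Of course brute force only works for constant-size instances; the point of the lecture is what happens when n grows.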

4 Solvability A generalization of this problem is the following: Definition (Solvability[D,𝔽]): Instance: a set of n polynomials over 𝔽 with at most D variables each, where each polynomial has degree at most n in each of its variables. Problem: to decide if there is a common root.

5 Solvability is Reducible to QS: Proof Idea Take, for example, the equation y²x² + x²t + tlz + z + 1 = 0. Introducing auxiliary variables w1 = y², w2 = x², w3 = tl (each definition is itself a quadratic equation) turns it into the quadratic equation w1w2 + w2t + w3z + z + 1 = 0. Repeating this for every high-degree monomial yields an equivalent quadratic system, and the parameters (D,𝔽) don't change (assuming D>2)! Could we use the same "trick" to show Solvability is reducible to Linear Solvability? No: the defining equations w = u·v are themselves quadratic, so quadratic is as low as this trick goes.
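The substitution step can be sketched mechanically. In this hypothetical representation a polynomial is a list of monomials, each monomial a tuple of variable names (the empty tuple is the constant term); whenever a monomial has more than two factors we name a pair of factors with a fresh variable wj, record the quadratic constraint wj = u·v, and substitute:

```python
def reduce_to_quadratic(monomials):
    """Rewrite monomials of degree > 2 using fresh variables w1, w2, ...
    Returns the rewritten monomials plus the added constraints (w, u, v),
    each meaning the quadratic equation w - u*v = 0."""
    fresh = iter(range(1, 10**6))       # supply of fresh variable indices
    defs = {}                           # (u, v) -> fresh variable name
    constraints = []
    reduced = []
    for mono in monomials:
        mono = tuple(sorted(mono))
        while len(mono) > 2:
            u, v, rest = mono[0], mono[1], mono[2:]
            if (u, v) not in defs:
                w = f"w{next(fresh)}"
                defs[(u, v)] = w
                constraints.append((w, u, v))
            mono = tuple(sorted((defs[(u, v)],) + rest))
        reduced.append(mono)
    return reduced, constraints

# y^2*x^2 + x^2*t + t*l*z + z + 1 = 0 from the slide:
poly = [('y','y','x','x'), ('x','x','t'), ('t','l','z'), ('z',), ()]
reduced, constraints = reduce_to_quadratic(poly)
print(reduced)
print(constraints)
```

The pairing order here differs from the slide's choice of w1, w2, w3, but every output monomial has degree at most 2 and every added constraint is quadratic, which is all the reduction needs.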

6 3-SAT For completeness we provide the definition of 3-SAT: Definition (3-SAT): Instance: a 3CNF formula (ℓ1 ∨ ℓ2 ∨ ℓ3) ∧ ... ∧ (ℓm-2 ∨ ℓm-1 ∨ ℓm), where each literal ℓi ∈ {xj, ¬xj} for some 1 ≤ j ≤ n. Problem: to decide if this formula is satisfiable. It is well known that 3-SAT is NP-complete.

7 3-SAT is Reducible to Solvability Given an instance of 3-SAT, use the following transformation on each clause: Tr[xi] = 1 - xi, Tr[¬xi] = xi, Tr[(ℓi ∨ ℓi+1 ∨ ℓi+2)] = Tr[ℓi] · Tr[ℓi+1] · Tr[ℓi+2]. A clause is satisfied exactly when its polynomial evaluates to 0. The corresponding instance of Solvability is the set of all resulting polynomials. For the time being, assume the variables are only assigned {0,1}.
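A minimal sketch of Tr, assuming the common encoding of literals as signed integers (+i for xi, -i for ¬xi; that encoding is our choice, not the lecture's):

```python
def tr_literal(lit):
    """Tr on a literal: a degree-1 polynomial that is 0 iff the literal is true."""
    i = abs(lit)
    if lit > 0:
        return lambda a: 1 - a[i]     # Tr[x_i] = 1 - x_i
    return lambda a: a[i]             # Tr[~x_i] = x_i

def tr_clause(clause):
    """Tr on a 3-clause: the product of its three literal translations."""
    ls = [tr_literal(l) for l in clause]
    return lambda a: ls[0](a) * ls[1](a) * ls[2](a)

# (x1 v ~x2 v x3), with assignments given as {variable index: 0/1, 1 = true}:
p = tr_clause([1, -2, 3])
print(p({1: 1, 2: 0, 3: 0}))   # clause true  -> polynomial is 0
print(p({1: 0, 2: 1, 3: 0}))   # clause false -> polynomial is 1
```

On {0,1}-assignments the product is 0 exactly when some factor is 0, i.e. exactly when some literal is true, which is the clause's satisfaction condition.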

8 3-SAT is Reducible to Solvability: Removing the Assumption In order to remove the assumption we add the equation xi · (1 - xi) = 0 for every variable xi, which forces every variable to take a value in {0,1}. This concludes the description of a reduction from 3-SAT to Solvability[O(1),𝔽] for any field 𝔽. What is the maximal dependency? Each polynomial depends on at most 3 variables.

9 QS is NP-hard Proof: compose the two reductions above: 3-SAT reduces to Solvability[O(1),𝔽], which in turn reduces to QS[O(1),𝔽].

10 Arithmetization To translate 3-SAT to Solvability we used the idea of arithmetization. This simple trick is widely used in PCP proofs, as well as in other fields.

11 Gap-QS Definition (Gap-QS[D,𝔽,ε]): Instance: a set of n quadratic equations over 𝔽 with at most D variables each. Problem: to distinguish between the two cases: (a) there is a common solution; (b) no more than an ε-fraction of the equations can be satisfied simultaneously.

12 Gap-QS and PCP Reminder: L ∈ PCP[D,V,ε] if there is an efficient algorithm which, for any input x, produces a set of efficiently computable Boolean functions over variables of range 2^V, each depending on at most D variables, such that: x ∈ L iff there exists an assignment to the variables which satisfies all the functions; x ∉ L iff no assignment can satisfy more than an ε-fraction of the functions. Claim: Gap-QS[D,𝔽,ε] ∈ PCP[D,log|𝔽|,ε]. Given a Gap-QS[D,𝔽,ε] quadratic equation system, for each quadratic polynomial pi(x1,...,xD) add the Boolean function φi(a1,...,aD) ≡ [pi(a1,...,aD) = 0]; the variables are those of the input system, taking values in 𝔽.

13 Proving PCP Characterizations of NP through Gap-QS Therefore, every language which is efficiently reducible to Gap-QS[D,𝔽,ε] is also in PCP[D,log|𝔽|,ε]. Thus, proving Gap-QS[D,𝔽,ε] is NP-hard also proves the PCP[D,log|𝔽|,ε] characterization of NP. And indeed our goal henceforth will be to prove Gap-QS[D,𝔽,ε] is NP-hard for the best D, ε and 𝔽 we can.

14 Some Gap-QS is NP-hard Proof: by reduction from QS[O(1),𝔽]. Proof idea: observe an instance of QS[O(1),𝔽], i.e. degree-2 polynomials p1, p2, p3, ..., pn. Under any assignment each pi evaluates to some value, and even for an unsatisfying assignment the evaluation vector might contain a lot of zeroes, that is, many satisfied equations. We need an efficient degree-preserving transformation on the polynomials, yielding new polynomials p1', p2', p3', ..., pm', which induces a transformation E on the evaluation vectors s.t.: 1) E(0^n) = 0^m; 2) ∀v ≠ 0^n, Δ(E(v), 0^m) is big, i.e. the new evaluation vector has not many zeroes.

15 Multiplication by a Matrix Preserves the Degree Arrange the polynomials as a row vector and multiply by a scalar matrix: (p1, ..., pn) · C = (p·c1, ..., p·cm), where c1, ..., cm are the columns of C = (cij). Each entry p·cj is an inner product, i.e. a linear combination of the polynomials with scalar coefficients, so the total degree does not increase. The multiplication is poly-time if m = n^c for a constant c.

16 How Does a Multiplication Affect the Evaluations Vector? The values of the polynomials under some assignment form a vector (e1, e2, ..., en); the values of the new polynomials under the same assignment are (e1, ..., en) · C = (e·c1, ..., e·cm). In particular, if e = 0^n (the assignment satisfies all the original equations), the new evaluation vector is the zero vector 0^m.
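A small self-contained check of both slides at once, with an assumed toy field Z7 and an arbitrary 3x2 matrix (both choices are illustrative): evaluating the polynomials and then multiplying the evaluation vector gives the same result as forming the combined polynomials first and evaluating them, and the combinations stay degree-2.

```python
P = 7
polys = [lambda x, y: (x * x + y) % P,      # degree-2 polynomials over Z_7
         lambda x, y: (x * y + 1) % P,
         lambda x, y: (3 * y) % P]

A = [[1, 2], [0, 5], [4, 4]]                # an arbitrary 3x2 scalar matrix

def eval_then_multiply(x, y):
    """Evaluate p_1..p_n, then multiply the evaluation vector by A."""
    e = [p(x, y) for p in polys]
    return [sum(e[i] * A[i][j] for i in range(3)) % P for j in range(2)]

def multiply_then_eval(x, y):
    """Evaluate the combined polynomials p*c_j (linear combinations of the p_i,
    hence still of total degree <= 2) directly."""
    return [sum(A[i][j] * polys[i](x, y) for i in range(3)) % P for j in range(2)]

for x in range(P):
    for y in range(P):
        assert eval_then_multiply(x, y) == multiply_then_eval(x, y)
print("multiplication by a scalar matrix commutes with evaluation")
```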

17 Suitable Matrices A matrix A ∈ 𝔽^(n×m) which satisfies Δ(vA, uA) ≥ 1-ε for every v ≠ u is (the generator matrix of) a linear code. Note that this is completely equivalent to saying A satisfies Δ(vA, 0^m) ≥ 1-ε for every v ≠ 0^n. That's because Δ(vA, uA) = Δ((v-u)A, 0^m). (Here Δ denotes the fraction of coordinates on which two vectors differ.)

18 What's Ahead We proceed with several examples of linear codes: –the Reed-Solomon code –a random matrix –and finally even a code which suits our needs: the "generic" code from the Encodings lecture.

19 Using Reed-Solomon Codes Define the matrix A so that for any 0 ≤ i ≤ |𝔽|-1, (vA)i = P(i), where P is the unique univariate polynomial of degree at most n-1 for which P(j) = vj for all 0 ≤ j ≤ n-1. (The entries of A are Lagrange interpolation coefficients; that's really Lagrange's formula in disguise...) For a nonzero v, P is a nonzero polynomial of degree at most n-1, so the fraction of zeroes in vA is bounded by (n-1)/|𝔽|. Using multivariate polynomials we can even get ε = O(log n/|𝔽|).
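A sketch of this matrix over a small prime field (p = 13 and n = 4 are assumed toy parameters): entry (i, j) is the i-th Lagrange basis polynomial evaluated at j, so that (vA)j = P(j), and a nonzero message hits at most n-1 zeroes.

```python
P_FIELD = 13
N = 4

def lagrange_coeff(i, j, n, p):
    """Coefficient of v_i in P(j): the i-th Lagrange basis polynomial at j."""
    num, den = 1, 1
    for t in range(n):
        if t != i:
            num = num * (j - t) % p
            den = den * (i - t) % p
    return num * pow(den, p - 2, p) % p      # Fermat inverse, p prime

A = [[lagrange_coeff(i, j, N, P_FIELD) for j in range(P_FIELD)]
     for i in range(N)]

def encode(v):
    """vA: the evaluations of the interpolating polynomial P at all of Z_p."""
    return [sum(v[i] * A[i][j] for i in range(N)) % P_FIELD
            for j in range(P_FIELD)]

v = [0, 5, 0, 1]                             # a nonzero message
zeros = encode(v).count(0)
print(zeros)                                 # at most N-1 = 3 zeroes out of 13
```

Since P agrees with v on the first N points, the code is systematic: the message can be read off the first N coordinates of the codeword.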

20 A Random Matrix Should Do Lemma: a random matrix A ∈ 𝔽^(n×m) satisfies w.h.p.: ∀v ≠ 0^n ∈ 𝔽^n, |{i : (vA)i = 0}| / m < 2|𝔽|⁻¹. Proof: let v ≠ 0^n ∈ 𝔽^n. For any 1 ≤ i ≤ m, Pr[(vA)i = 0] = |𝔽|⁻¹, since the inner product of a fixed nonzero v with a random vector is uniformly random. Hence Xv = |{i : (vA)i = 0}| is a binomial r.v. with parameters m and |𝔽|⁻¹. By the Chernoff bound, Pr[Xv ≥ 2m|𝔽|⁻¹] ≤ 2e^(-m/4|𝔽|).

21 A Random Matrix Should Do Every v ≠ 0^n disqualifies at most a 2e^(-m/4|𝔽|) fraction of the matrices in 𝔽^(n×m), so at most a 2|𝔽|^n·e^(-m/4|𝔽|) fraction of the matrices are disqualified overall. That is, Pr[∃v ≠ 0^n : Xv/m ≥ 2|𝔽|⁻¹] ≤ 2|𝔽|^n·e^(-m/4|𝔽|). For m = O(n|𝔽|log|𝔽|), the claim holds.
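An empirical sanity check of the lemma with assumed toy parameters (q = 5, n = 3, a generous m, and a fixed seed; all these choices are illustrative, not from the lecture). With such a small field we can enumerate every nonzero v:

```python
import random
from itertools import product

random.seed(1)
q, n = 5, 3
m = 4 * n * q                    # generous m, in the spirit of O(n*q*log q)
A = [[random.randrange(q) for _ in range(m)] for _ in range(n)]

def zero_fraction(v):
    """Fraction of coordinates of vA that are zero."""
    prods = [sum(v[i] * A[i][j] for i in range(n)) % q for j in range(m)]
    return prods.count(0) / m

# the worst (largest) zero fraction over all nonzero v:
worst = max(zero_fraction(v) for v in product(range(q), repeat=n) if any(v))
print(worst)
```

For a typical draw the worst fraction sits near the expected 1/q, well under the 2/q bound; the all-zero vector, by contrast, maps to the all-zero word, exactly as the reduction requires.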

22 Deterministic Construction Assume 𝔽 = Zp. Let k = log_p(n) + 1 (assume w.l.o.g. k ∈ ℕ), and let GF(p^k) be the dimension-k extension field of 𝔽. The matrix has n = p^(k-1) rows and p^(2k) columns: associate each row with an index 1 ≤ i ≤ p^(k-1) and each column with a pair (x,y) ∈ GF(p^k) × GF(p^k); the (i,(x,y)) entry is y·x^i.

23 Analysis For any v ∈ 𝔽^n and any (x,y) ∈ GF(p^k) × GF(p^k), the (x,y) coordinate of vA is y·G(x), where G(x) = Σi vi·x^i is a polynomial of degree at most p^(k-1). For v ≠ 0^n, G is nonzero, so the number of zeroes in vA is at most |{(x,y) : G(x) = 0}| + |{(x,y) : G(x) ≠ 0, y = 0}| ≤ p^(k-1)·p^k + p^k, and thus the fraction of zeroes is at most p^(k-1)/p^k + p^(-k) ≤ 2|𝔽|⁻¹.
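Reading the analysis off as code (a reconstruction: the entry at row i, column (x,y) is taken to be y·x^i, and for simplicity this sketch works over Zp itself rather than the extension field GF(p^k), with a small row count r standing in for p^(k-1)):

```python
p, r = 31, 3
cols = [(x, y) for x in range(p) for y in range(p)]   # p^2 columns

def encode(v):
    """(vA)_(x,y) = y * G(x), where G(x) = sum_i v[i-1] * x^i, degree <= r."""
    return [y * sum(v[i] * pow(x, i + 1, p) for i in range(r)) % p
            for (x, y) in cols]

v = [2, 0, 1]                    # nonzero, so G(x) = 2x + x^3 is nonzero
word = encode(v)
frac = word.count(0) / len(word)
# zeroes come from x with G(x) = 0 (at most r values of x, any y) or from y = 0:
assert frac <= r / p + 1 / p
print(frac)
```

The two sources of zeroes in the code comment are exactly the two terms in the slide's count, and their sum is what gives the 2|𝔽|⁻¹ bound.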

24 Summary of the Reduction Given an instance {p1,...,pn} of QS[O(1),𝔽], find a matrix A which satisfies ∀v ≠ 0^n: |{i : (vA)i = 0}|/m < 2|𝔽|⁻¹. Then {p1,...,pn} ∈ QS[O(1),𝔽] iff {(pA)1,...,(pA)m} ∈ Gap-QS[O(n),𝔽,2|𝔽|⁻¹] !!

25 Hitting the Road This proves a PCP characterization with D = O(n) (hardly a "local" test...). Eventually we'll prove a characterization with D = O(1) ([DFKRS]), using the results presented here as our starting point.

