1 The PCP starting point

2 Overview In this lecture we’ll present the Quadratic Solvability problem. We’ll see this problem is closely related to PCP. And even use it to prove a (very very weak...) PCP characterization of NP.

3 Quadratic Solvability Def (QS[D,Σ]): Instance: a set of n quadratic equations over Σ with at most D variables each. Problem: to find if there is a common solution. Or equivalently: a set of D-variable, total-degree-2 polynomials, asking whether they have a common root. Example (Σ = Z₂; D = 1): y = 0; x² + x = 0.
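As a concrete (if naive) illustration of the decision problem just defined, here is a brute-force sketch in Python; the helper name and representation are ours, and the search is of course exponential in the number of variables:

```python
from itertools import product

def common_solution(polys, n_vars, p=2):
    """Search all assignments in Z_p^n_vars for a common root.
    Each polynomial is given as a Python function of the assignment;
    this is exponential and only illustrates the decision problem."""
    for a in product(range(p), repeat=n_vars):
        if all(poly(a) % p == 0 for poly in polys):
            return a
    return None

# the slide's example over Sigma = Z_2 (variables x = a[0], y = a[1]):
system = [
    lambda a: a[1],              # y = 0
    lambda a: a[0] ** 2 + a[0],  # x^2 + x = 0
]
assert common_solution(system, n_vars=2) == (0, 0)
```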

4 Solvability A generalization of this problem: Def (Solvability[D,Σ]): Instance: a set of n polynomials over Σ with at most D variables each. Each polynomial has degree bound n in each one of the variables. Problem: to find if there is a common root.

5 Solvability is Reducible to QS Example: y²x² + x²t + tℓz + z + 1 = 0. Introduce new variables w₁ = y², w₂ = x², w₃ = tℓ, and the equation becomes the quadratic w₁w₂ + w₂t + w₃z + z + 1 = 0. The parameters (D,Σ) don’t change (assuming D > 2)! Could we use the same “trick” to show Solvability is reducible to Linear Solvability?
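The substitution trick can be mechanized. The sketch below uses our own representation (a monomial is a sorted tuple of variable names, one entry per occurrence) and pairs factors greedily, so the fresh w’s it picks may differ from the slide’s; the point is only that every added side condition w − u·v = 0 is itself quadratic:

```python
def reduce_to_quadratic(equations):
    """Lower every monomial to total degree <= 2 by repeatedly replacing
    a product of two variables u*v with a fresh variable w, adding the
    quadratic side condition w - u*v = 0.  An equation is a dict mapping
    a monomial (a sorted tuple of variable names, one entry per
    occurrence) to its coefficient; monomials that collide after the
    substitution are not merged, which is harmless for this sketch."""
    fresh, defs, out = 0, {}, []

    def substitute(mono):
        nonlocal fresh
        mono = tuple(sorted(mono))
        while len(mono) > 2:                 # total degree still > 2
            pair = mono[:2]
            if pair not in defs:
                fresh += 1
                defs[pair] = f"w{fresh}"
                out.append({(defs[pair],): 1, pair: -1})   # w - u*v = 0
            mono = tuple(sorted((defs[pair],) + mono[2:]))
        return mono

    for eq in equations:
        out.append({substitute(mo): c for mo, c in eq.items()})
    return out

# y^2*x^2 + x^2*t + t*l*z + z + 1 = 0, as on the slide
eq = {("y", "y", "x", "x"): 1, ("x", "x", "t"): 1,
      ("t", "l", "z"): 1, ("z",): 1, (): 1}
quadratic = reduce_to_quadratic([eq])
assert all(len(mo) <= 2 for e in quadratic for mo in e)
```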

6 QS is NP-hard Let us prove that QS is NP-hard by reducing 3-SAT to it: a 3CNF formula has the form (ℓ₁ ∨ ℓ₂ ∨ ℓ₃) ∧ … ∧ (ℓ_{m-2} ∨ ℓ_{m-1} ∨ ℓ_m), where each literal ℓᵢ ∈ {xⱼ, ¬xⱼ} for some 1 ≤ j ≤ n. Def (3-SAT): Instance: a 3CNF formula. Problem: to decide if this formula is satisfiable.

7 QS is NP-hard Given an instance of 3-SAT, use the following transformation Tr on each literal: Tr[xᵢ] = 1 − xᵢ and Tr[¬xᵢ] = xᵢ. Each clause (ℓᵢ ∨ ℓᵢ₊₁ ∨ ℓᵢ₊₂) is translated to the equation Tr[ℓᵢ] · Tr[ℓᵢ₊₁] · Tr[ℓᵢ₊₂] = 0; the product vanishes exactly when at least one literal is satisfied. The corresponding instance of Solvability is the set of all resulting polynomials (which, assuming the variables are only assigned Boolean values, is equivalent).

8 QS is NP-hard In order to remove the assumption we need to add the equation xᵢ · (1 − xᵢ) = 0 for every variable xᵢ. This concludes the description of a reduction from 3-SAT to Solvability[O(1),Σ] for any field Σ. What is the maximal degree of the resulting equations?
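Putting slides 7 and 8 together, the whole 3-SAT-to-Solvability reduction fits in a few lines of Python; the encoding (clauses as tuples of signed variable indices) and the brute-force check are illustrative only:

```python
from itertools import product

def clause_poly(clause):
    """Tr from the slides: literal x_i -> (1 - x_i), literal -x_i -> x_i;
    a clause becomes the product of its three translated literals, which
    vanishes exactly when some literal is satisfied."""
    def tr(lit, a):
        i = abs(lit) - 1
        return a[i] if lit < 0 else 1 - a[i]
    return lambda a: tr(clause[0], a) * tr(clause[1], a) * tr(clause[2], a)

def reduction_has_root(cnf, n_vars):
    # one polynomial per clause, plus x_i*(1-x_i)=0 forcing Boolean values
    polys = [clause_poly(c) for c in cnf]
    polys += [lambda a, i=i: a[i] * (1 - a[i]) for i in range(n_vars)]
    # brute force over {0,1} suffices: the Boolean constraints make any
    # field behave identically on this instance
    return any(all(p(a) == 0 for p in polys)
               for a in product((0, 1), repeat=n_vars))

cnf = [(1, 2, 3), (-1, 2, -3), (-2, 3, 1)]   # a satisfiable 3CNF
sat = any(all(any((a[abs(l) - 1] == 1) == (l > 0) for l in c) for c in cnf)
          for a in product((0, 1), repeat=3))
assert reduction_has_root(cnf, 3) == sat == True
```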

9 QS is NP-hard According to the two previous reductions: 3-SAT reduces to Solvability[O(1),Σ], which in turn reduces to QS[O(1),Σ]; hence QS[O(1),Σ] is NP-hard.

10 Gap-QS Def (Gap-QS[D,Σ,ε]): Instance: a set of n quadratic equations over Σ with at most D variables each. Problem: to distinguish between the following two cases: there is a common solution; or no more than an ε fraction of the equations can be satisfied simultaneously.

11 Def: L  PCP[D,V,  ] if there is a polynomial time algorithm, which for any input x, produces a set of efficient Boolean functions over variables of range 2 V, each depending on at most D variables so that: x  L iff there exits an assignment to the variables, which satisfies all the functions x  L iff no assignment can satisfy more than an  - fraction of the functions. Gap-QS and PCP Gap-QS[D, ,  ]  PCP[D,log|  |,  ] Gap-QS[D, ,  ] quadratic equations system For each quadratic polynomial p i (x 1,...,x D ), add the Boolean function  i (a 1,...,a D )  p i (a 1,...,a D )=0 the variables of the input system values in 

12 Gap-QS and PCP Therefore, every language which is efficiently reducible to Gap-QS[D,Σ,ε] is also in PCP[D,log|Σ|,ε]. Thus, proving that Gap-QS[D,Σ,ε] is NP-hard also proves the PCP[D,log|Σ|,ε] characterization of NP. And indeed our goal henceforth will be proving Gap-QS[D,Σ,ε] NP-hard for the best D, Σ and ε we can.

13 Gap-QS[n,Σ,2/|Σ|] is NP-hard Proof: by reduction from QS[O(1),Σ]. An instance of QS[O(1),Σ] is a list of degree-2 polynomials p₁, p₂, p₃, ..., pₙ. A satisfying assignment zeroes every pᵢ; a non-satisfying assignment leaves some pⱼ nonzero, but possibly only one, so there is no gap yet.

14 Gap-QS[O(1),Σ] is NP-hard In order to have a gap we need an efficient degree-preserving transformation on the polynomials, taking p₁, p₂, p₃, ..., pₙ to p₁’, p₂’, p₃’, ..., p_m’, so that any non-satisfying assignment (one with some pⱼ ≠ 0) results in few satisfied polynomials among p₁’, ..., p_m’.

15 Gap-QS[O(1),Σ] is NP-hard For such an efficient degree-preserving transformation E it must hold that any nonzero vector of values is mapped to a vector with only few zero entries; that is, E keeps distinct value vectors far apart. Thus E is an error-correcting code! We shall now see examples of degree-preserving transformations which are also error-correcting codes:

16 The linear transformation: multiplication by a matrix Multiply the row vector of polynomials (p₁ p₂ ... pₙ) by an n×m matrix of scalars C = (c₁₁ ... cₙₘ), obtaining (p·c₁ ... p·cₘ), where cⱼ denotes the j-th column. Each entry is an inner product, i.e. a linear combination of the polynomials, so degrees do not increase; the transformation is poly-time if m = n^c.

17 The linear transformation: multiplication by a matrix Under some assignment the polynomials take the values (e₁ e₂ ... eₙ); the new polynomials then take the values (e·c₁ ... e·cₘ) under the same assignment. In particular the output is the zero vector if e = 0ⁿ.
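A small sketch of this slide, with illustrative sizes: multiplying the value vector by a scalar matrix over Z_p is linear, so an assignment satisfying all the original polynomials (the all-zero value vector) maps to the all-zero output:

```python
import random
random.seed(0)

p, n, m = 5, 4, 8                   # illustrative sizes; Sigma = Z_5
C = [[random.randrange(p) for _ in range(m)] for _ in range(n)]

def encode(e):
    """Values (e_1,...,e_n) of the old polynomials |-> values of the
    new polynomials under the same assignment: the product e*C over Z_p."""
    return [sum(e[i] * C[i][j] for i in range(n)) % p for j in range(m)]

# the all-satisfied case (every old value 0) maps to the zero vector
assert encode([0] * n) == [0] * m
```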

18 What’s Ahead We proceed with several examples of linear error-correcting codes: the Reed-Solomon code; a random matrix; and finally even a code which suits our needs...

19 Using Reed-Solomon Codes Define the matrix as follows: the entry of row i (0 ≤ i ≤ n−1) and column j (a field element) is ℓᵢ(j), the i-th Lagrange basis polynomial for the points 0, ..., n−1 evaluated at j. That’s really Lagrange’s formula in disguise... One can prove that for any 0 ≤ i ≤ |Σ|−1, (vA)ᵢ is P(i), where P is the unique degree-(n−1) univariate polynomial for which P(i) = vᵢ for all 0 ≤ i ≤ n−1. Therefore for any v ≠ 0 the fraction of zeroes in vA is bounded by (n−1)/|Σ|. Using multivariate polynomials we can even get ε = O(log n/|Σ|).
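A worked instance of this construction, under the assumption that A is exactly the Lagrange-evaluation matrix described above (parameters illustrative): for every nonzero v, the codeword vA lists the values of the interpolating polynomial P, so the fraction of zeroes is at most (n−1)/|Σ|:

```python
from itertools import product

p, n = 11, 4        # illustrative: Sigma = Z_11, n = 4 values to encode

def lagrange_matrix(n, p):
    """A[i][j] = l_i(j): the i-th Lagrange basis polynomial for the
    interpolation points 0..n-1, evaluated at the field element j.
    Then (vA)_j = P(j), where P interpolates v on 0..n-1."""
    A = [[1] * p for _ in range(n)]
    for i in range(n):
        for j in range(p):
            for k in range(n):
                if k != i:       # multiply by (j-k)/(i-k) mod p
                    A[i][j] = A[i][j] * (j - k) * pow(i - k, p - 2, p) % p
    return A

A = lagrange_matrix(n, p)
worst = 0
for v in product(range(p), repeat=n):
    if any(v):
        enc = [sum(v[i] * A[i][j] for i in range(n)) % p for j in range(p)]
        assert enc[:n] == list(v)            # the code is systematic
        worst = max(worst, enc.count(0) / p)
assert worst <= (n - 1) / p   # a nonzero degree-(n-1) P has < n roots
```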

20 Using a Random Matrix Lem: A random matrix A ∈ Σⁿˣᵐ satisfies w.h.p.: for every v ≠ 0ⁿ, the fraction of zeros in the output vector vA is less than 2|Σ|⁻¹.

21 Using a Random Matrix Proof (by the probabilistic method): Let v ≠ 0ⁿ ∈ Σⁿ. Because the inner product of v and a uniformly random vector is uniformly distributed over Σ, each entry (vA)ᵢ is 0 with probability |Σ|⁻¹. Hence X_v := |{i : (vA)ᵢ = 0}| is a binomial random variable with parameters m and |Σ|⁻¹. For this random variable, we can compute the probability Pr[X_v ≥ 2m|Σ|⁻¹] (the probability that the fraction of zeros exceeds 2|Σ|⁻¹).

22 Using a Random Matrix The Chernoff bound: for a binomial random variable with parameters m and |Σ|⁻¹ (mean μ = m|Σ|⁻¹), Pr[X_v ≥ 2μ] ≤ e^(−μ/3). Hence: Pr[X_v ≥ 2m|Σ|⁻¹] ≤ e^(−m/(3|Σ|)).

23 Using a Random Matrix Overall, the number of different vectors v is ≤ |Σ|ⁿ. Hence, according to the union bound (the probability of a union of events is at most the sum of their probabilities), we can multiply the previous probability by the number of different vectors v to obtain a bound on the probability that the lemma fails: |Σ|ⁿ · e^(−m/(3|Σ|)). And this probability is smaller than 1 for m = O(n|Σ|log|Σ|). Hence, for such m, a random matrix satisfies the lemma with positive probability. ∎
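A quick empirical companion to the lemma (tiny illustrative parameters, fixed seed): every nonzero v should see fewer than a 2/|Σ| fraction of zeroes in vA, and with these sizes the Chernoff/union-bound slack is enormous:

```python
import random
from itertools import product

random.seed(7)
p, n, m = 3, 3, 200        # m comfortably within O(n*|Sigma|*log|Sigma|)
A = [[random.randrange(p) for _ in range(m)] for _ in range(n)]

def zero_fraction(v):
    """Fraction of zero entries in vA over Z_p."""
    return sum(sum(v[i] * A[i][j] for i in range(n)) % p == 0
               for j in range(m)) / m

# every nonzero v should see fewer than a 2/|Sigma| fraction of zeroes;
# the expected fraction is 1/p, so the slack at these sizes is huge
ok = all(zero_fraction(v) < 2 / p
         for v in product(range(p), repeat=n) if any(v))
assert ok
```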

24 Deterministic Construction We now define the matrix A ∈ Σⁿˣᵐ explicitly. Assume Σ = Z_p. Let k = log_p n + 1 (assume w.l.o.g. k ∈ N), and let Z_(p^k) be the dimension-k extension field of Σ. Associate each row with an index 1 ≤ i ≤ p^(k−1); hence n = p^(k−1). Associate each column with a pair (x,y) ∈ Z_(p^k) × Z_(p^k); hence m = p^(2k).

25 Deterministic Construction The matrix is thus p^(k−1) × p^(2k), and we define A(i,(x,y)) = ⟨xⁱ, y⟩, the inner product over Z_p of xⁱ and y viewed as vectors in Z_pᵏ.

26 Analysis For any vector v ∈ Σⁿ and any column (x,y) ∈ Z_(p^k) × Z_(p^k), (vA)_(x,y) = Σᵢ vᵢ⟨xⁱ,y⟩ = ⟨G(x),y⟩, where G(x) = Σᵢ vᵢxⁱ. The number of zeroes in vA where v ≠ 0ⁿ splits into the pairs with G(x) = 0 (every one of the p^k choices of y contributes) and the pairs with G(x) ≠ 0 (exactly p^(k−1) choices of y satisfy ⟨G(x),y⟩ = 0 for each such x). Since G is a nonzero polynomial of degree ≤ p^(k−1), it has at most p^(k−1) roots, so the fraction of zeroes is ≤ (p^(k−1)·p^k + p^k·p^(k−1)) / p^(2k) = 2/p = 2|Σ|⁻¹.
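A worked instance of the whole construction and its analysis, for p = 3 and k = 2 (so Σ = Z₃, n = 3 and m = 81); realizing the extension field GF(9) as Z₃[x]/(x²+1) is our choice of representation:

```python
from itertools import product

p = 3                               # Sigma = Z_3; k = 2, so n = 3, m = 81

def mul(u, v):
    """Multiply a+bx and c+dx in GF(9) = Z_3[x]/(x^2+1): use x^2 = -1."""
    (a, b), (c, d) = u, v
    return ((a * c - b * d) % p, (a * d + b * c) % p)

def power(u, e):
    r = (1, 0)                      # the unit of GF(9)
    for _ in range(e):
        r = mul(r, u)
    return r

def inner(u, v):                    # inner product over Z_3 of the pairs
    return (u[0] * v[0] + u[1] * v[1]) % p

field = list(product(range(p), repeat=2))          # the 9 field elements
rows = [1, 2, 3]                                   # 1 <= i <= p^(k-1) = 3
A = {(i, x, y): inner(power(x, i), y)              # A(i,(x,y)) = <x^i, y>
     for i in rows for x in field for y in field}

worst = 0
for v in product(range(p), repeat=len(rows)):      # all v in Z_3^3
    if any(v):
        zeros = sum(sum(c * A[(i, x, y)] for c, i in zip(v, rows)) % p == 0
                    for x in field for y in field)
        worst = max(worst, zeros / p ** 4)         # m = p^(2k) = 81 columns
assert worst <= 2 / p               # the 2|Sigma|^-1 bound from the analysis
```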

27 Summary of the Reduction Given an instance {p₁,...,pₙ} of QS[O(1),Σ], we found a matrix A which satisfies: ∀v ≠ 0ⁿ, |{i : (vA)ᵢ = 0}| / m < 2|Σ|⁻¹. Hence {p₁,...,pₙ} ∈ QS[O(1),Σ] if and only if the transformed system (p₁,...,pₙ)·A ∈ Gap-QS[O(n),Σ,2|Σ|⁻¹]. This proves Gap-QS[O(n),Σ,2|Σ|⁻¹] is NP-hard.

28 Hitting the Road This proves a PCP characterization with D = O(n) (hardly a “local” test...). Eventually we’ll prove a characterization with D = O(1) ([DFKRS]), using the results presented here as our starting point.