Presentation transcript: "Introduction. In this lecture we'll cover: the definition of PCP, some classical hardness of approximation results, and a review of some recent ones."

Slide 2: Introduction. In this lecture we'll cover: the definition of PCP; proofs of some classical hardness-of-approximation results; a review of some recent ones.

Slide 3: Review: Decision and Optimization Problems. A decision problem is a Boolean function ƒ(X), or alternatively a language L ⊆ {0,1}* comprising all strings for which ƒ is TRUE: L = { X ∈ {0,1}* | ƒ(X) }. An optimization problem is a function ƒ(X, Y) which, given X, is to be maximized (or minimized) over all possible Y's: max_Y [ƒ(X, Y)]. A threshold version of max-ƒ(X, Y) is the language L_t of all strings X for which there exists a Y such that ƒ(X, Y) ≥ t [transforming an optimization problem into a decision problem].
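To make the three notions concrete, here is a small illustrative sketch (my own, not from the slides) using MAX-CLIQUE as the optimization problem; the brute-force search is exponential and is only meant to pin down the definitions:

```python
from itertools import combinations

def f(G, S):
    """Objective for MAX-CLIQUE: |S| if S is a clique in graph G, else 0.
    G is a set of frozenset edges; S is an iterable of vertices."""
    S = list(S)
    if all(frozenset(e) in G for e in combinations(S, 2)):
        return len(S)
    return 0

def max_clique(G, vertices):
    """The optimization problem: maximize f(G, S) over all S (brute force)."""
    return max(f(G, S)
               for r in range(len(vertices) + 1)
               for S in combinations(vertices, r))

def threshold_version(G, vertices, t):
    """The threshold language L_t: accept iff some S has f(G, S) >= t."""
    return max_clique(G, vertices) >= t
```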

Slide 4: Review: The Class NP. The classical definition of the class NP: a language L ⊆ {0,1}* belongs to NP if there exists a Turing machine V_L [referred to as a verifier] such that X ∈ L ⟺ there exists a witness Y such that V_L(X, Y) accepts, in time |X|^O(1). That is, V_L can verify a membership-proof of X in L in time polynomial in the length of X.
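A minimal sketch of such a verifier for SAT (the encoding is my own assumption, DIMACS-style literals): the witness Y is a truth assignment, and verification runs in time linear in the formula:

```python
def verify_sat(X, Y):
    """V_SAT(X, Y): accept iff assignment Y satisfies every clause of X.
    X: list of clauses, each a list of nonzero ints (positive literal k
    means variable k, negative -k means its negation); Y: dict var -> bool."""
    return all(any(Y[abs(lit)] == (lit > 0) for lit in clause)
               for clause in X)

# X is in SAT  <=>  there exists a witness Y making verify_sat(X, Y) True.
```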

Slide 5: Review: NP-Hardness. A language L is said to be NP-hard if an efficient (polynomial-time) procedure for L can be utilized to obtain an efficient procedure for any NP language. This definition allows the more general, Cook reductions. An efficient algorithm translating any NP problem to a single instance of L, thereby showing that L is NP-hard, is referred to as a Karp reduction.

Slide 6: Review: Characterizing NP. Thm [Cook, Levin]: For every L ∈ NP there's an algorithm that, on input X, constructs, in time |X|^O(1), a set of local constraints (Boolean functions) Ψ_{L,X} = {ψ_1, ..., ψ_l} over variables y_1, ..., y_m such that: 1. each of ψ_1, ..., ψ_l depends on O(1) variables; 2. X ∈ L ⟺ there exists an assignment A: {y_1, ..., y_m} → {0, 1} satisfying all of Ψ_{L,X}. [Note that m and l must be at most polynomial in |X|.]

Slide 7: NP characterization. [Diagram: variables y_1, ..., y_m, each local test ψ_1, ..., ψ_l reading a few of them. If X ∈ L, all of the local tests are satisfied.]

Slide 8: Approximation: Some Definitions. Def (g-approximation): a g-approximation of a maximization function f (similarly for minimization) is an algorithm that, on input X, outputs f'(X) such that f'(X) ≥ f(X)/g(|X|) (and, being the value of an actual solution, f'(X) ≤ f(X)). Def (PTAS, polynomial-time approximation scheme): a maximization function f has a PTAS if for every constant g > 1 there is a polynomial p_g and a g-approximation for f whose running time is p_g(|X|).
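As a concrete example of a g-approximation with g = 2 (illustrative, not from the slides): local search for MAX-CUT. At a local optimum each vertex has at least half of its incident edges crossing the cut, so the returned cut has at least |E|/2 ≥ OPT/2 edges, and the algorithm runs in polynomial time since every flip strictly enlarges the cut:

```python
def maxcut_2_approx(vertices, edges):
    """2-approximation for MAX-CUT via local search.
    edges: list of (u, v) pairs. Returns the size of the found cut."""
    side = {v: False for v in vertices}
    improved = True
    while improved:                       # at most |E| improving flips
        improved = False
        for v in vertices:
            cross = sum(side[a] != side[b] for a, b in edges if v in (a, b))
            same = sum(side[a] == side[b] for a, b in edges if v in (a, b))
            if same > cross:              # flipping v strictly enlarges the cut
                side[v] = not side[v]
                improved = True
    return sum(side[a] != side[b] for a, b in edges)
```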

Slide 9: Approximation: NP-hard? We know that, by using Cook/Karp reductions, we can show many decision problems to be NP-hard. Can an approximation problem be NP-hard? One can easily show that if there is any g for which there is a g-approximation for TSP, then P = NP: given a graph G on n vertices, build a TSP instance where edges of G have length 1 and non-edges have length g·n + 1; a g-approximation then distinguishes graphs with a Hamiltonian cycle (tour of length n) from those without (tour of length greater than g·n).

Slide 10: Characterization of NP. Thm [Cook, Levin]: For every L ∈ NP there's an algorithm that, on input X, constructs in time |X|^O(1) a set of local constraints (Boolean functions) Ψ_{L,X} = {ψ_1, ..., ψ_l} over variables y_1, ..., y_m such that: 1. each of ψ_1, ..., ψ_l depends on O(1) variables; 2. X ∈ L ⟺ there exists an assignment A: {y_1, ..., y_m} → {0, 1} satisfying all of Ψ_{L,X}. The PCP theorem [AS, ALMSS] strengthens the second condition: X ∉ L ⟹ every assignment A: {y_1, ..., y_m} → {0, 1} satisfies less than a ½ fraction of Ψ_{L,X}.

Slide 11: PCP characterization of NP. [Diagram: as on slide 7, but now if X ∉ L, at least half of the local tests aren't satisfied, under any assignment!]

Slide 12: Probabilistically Checkable Proofs. Hence, the Cook-Levin theorem states that a verifier can efficiently verify membership-proofs for any NP language. The PCP characterization of NP, in contrast, states that a membership-proof can be verified probabilistically: choose one local constraint at random, access the small set of variables it depends on, and accept or reject accordingly, erroneously accepting a non-member only with small probability.
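In code, one round of this probabilistic verification might look as follows (a sketch; the constraint representation is my assumption, not from the slides):

```python
import random

def pcp_verifier_round(constraints, proof):
    """Pick one local constraint at random, read only the O(1) proof
    positions it depends on, and accept or reject accordingly.
    constraints: list of (variable_indices, predicate) pairs;
    proof: the full assignment, queried only at the sampled positions."""
    variable_indices, predicate = random.choice(constraints)
    local_view = [proof[i] for i in variable_indices]  # O(1) queries
    return predicate(*local_view)

# If X is in L, some proof makes every round accept; if X is not in L,
# every proof is rejected with probability at least 1/2 per round.
```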

Slide 13: Gap Problems. A gap problem is a maximization (or minimization) problem ƒ(X, Y) together with two thresholds t_1 > t_2: X must be accepted if max_Y [ƒ(X, Y)] ≥ t_1; X must be rejected if max_Y [ƒ(X, Y)] ≤ t_2; other X's may be accepted or rejected (don't care). (Almost a decision problem; relates to approximation.)

Slide 14: Reducing Gap Problems to Approximation Problems. Using an efficient approximation algorithm for ƒ(X, Y) to within a factor g, one can efficiently solve the corresponding gap problem gap-ƒ(X, Y), as long as t_1/t_2 > g. Simply run the approximation algorithm: since ƒ(X)/g ≤ ƒ'(X) ≤ ƒ(X), the outcome determines which side of the gap the given input falls in. (Hence, proving a gap problem NP-hard translates to NP-hardness of its approximation version, for appropriate factors.)
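The observation in code form (a sketch under the definitions above; `approx` stands for any g-approximation of the maximization problem f):

```python
def solve_gap(X, approx, t1, t2, g):
    """Decide gap-f given thresholds t1 > t2 with t1/t2 > g.
    approx(X) returns f'(X) with f(X)/g <= f'(X) <= f(X)."""
    assert t1 > t2 * g
    f_prime = approx(X)
    # Yes-instance: max f >= t1, so f' >= t1/g > t2.
    # No-instance:  max f <= t2, so f' <= t2.
    return f_prime > t2    # True = accept, False = reject
```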

Slide 15: gap-SAT. Def: gap-SAT[D, V, ε] is as follows. Instance: a set Ψ = {ψ_1, ..., ψ_l} of Boolean functions (local constraints) over variables y_1, ..., y_m of range 2^V. Locality: each of ψ_1, ..., ψ_l depends on at most D variables. Let the maximum-satisfied-fraction be the largest fraction of Ψ satisfied by an assignment A: {y_1, ..., y_m} → 2^V; if this fraction = 1, accept; if it is < ε, reject. D, V, and ε may be functions of l.

Slide 16: The PCP Hierarchy. Def: L ∈ PCP[D, V, ε] if L is efficiently reducible to gap-SAT[D, V, ε]. Thm [AS, ALMSS]: NP ⊆ PCP[O(1), 1, ½] [the PCP characterization theorem above]. Thm [RaSa]: NP ⊆ PCP[O(1), m, 2^(-m)] for m ≤ log^c n for some c > 0. Thm [DFKRS]: NP ⊆ PCP[O(1), m, 2^(-m)] for m ≤ log^c n for any c < 1. Conjecture [BGLR]: NP ⊆ PCP[O(1), m, 2^(-m)] for m ≤ log n.

Slide 17: Optimal Characterization. One cannot expect the error probability to be less than exponentially small in the number of bits each local test looks at, since a random assignment would satisfy such a fraction of the local tests. One cannot hope for smaller than polynomially small error probability, since that would imply fewer than one local test satisfied; hence each local test, being rather easy to compute, would completely determine the outcome. [The BGLR conjecture is hence optimal in that respect.]

Slide 18: Approximating MAX-IS is NP-hard. We will reduce gap-SAT to gap-Independent-Set. Given an instance Ψ = {ψ_1, ..., ψ_l} of Boolean functions over variables y_1, ..., y_m of range 2^V, each of ψ_1, ..., ψ_l depending on at most D variables, we must determine whether all the functions can be satisfied or only a fraction less than ε. We will construct a graph G_Ψ that has an independent set of size r ⟺ there exists an assignment satisfying r of the local constraints.

Slide 19: (q,r)-co-partite Graphs. G = (Q × R, E) comprises q = |Q| cliques of size r = |R|: E ⊇ { ((i, j_1), (i, j_2)) | i ∈ Q, j_1, j_2 ∈ R }.

Slide 20: Gap Independent-Set. Instance: a (q,r)-co-partite graph G = (Q × R, E). Problem: distinguish between Good: IS(G) = q, and Bad: every set I ⊆ V with |I| > εq contains an edge. Thm: gap-IS(r, ε) is NP-hard as long as r ≤ (1/ε)^c for some constant c.

Slide 21: gap-SAT ≤ gap-IS. Construct a graph G_Ψ that has one clique for each ψ_i ∈ Ψ, in which each vertex corresponds to one satisfying assignment for ψ_i.

Slide 22: gap-SAT ≤ gap-IS (continued). Two vertices are connected if the assignments they represent are inconsistent.

Slide 23: gap-SAT ≤ gap-IS. Lemma: G_Ψ has an independent set of size k ⟺ there is an assignment that satisfies k of the constraints. (⟸) Consider an assignment A satisfying k constraints. For each satisfied constraint ψ_i, take A's restriction to ψ_i's variables; the corresponding k vertices form an independent set in G_Ψ. (⟹) Any independent set of size k in G_Ψ picks at most one vertex per clique, and the partial assignments its vertices represent are pairwise consistent, so they merge into an assignment satisfying k of ψ_1, ..., ψ_l. Hence gap-IS is NP-hard, and IS is NP-hard to approximate!
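A sketch of the whole construction (the FGLSS-style graph), under an assumed constraint representation: vertices are (constraint, satisfying local assignment) pairs, grouped into one clique per constraint, with edges between inconsistent partial assignments:

```python
from itertools import product

def build_g_psi(constraints, domain=(0, 1)):
    """Build G_Psi from gap-SAT constraints.
    constraints: list of (variable_indices, predicate) pairs."""
    vertices = []               # (constraint index, partial assignment)
    for ci, (variables, predicate) in enumerate(constraints):
        for values in product(domain, repeat=len(variables)):
            if predicate(*values):
                vertices.append((ci, dict(zip(variables, values))))
    edges = []
    for u in range(len(vertices)):
        for v in range(u + 1, len(vertices)):
            (cu, au), (cv, av) = vertices[u], vertices[v]
            # Connect if same clique (same constraint) or the partial
            # assignments disagree on a shared variable.
            if cu == cv or any(au[x] != av[x] for x in au.keys() & av.keys()):
                edges.append((u, v))
    return vertices, edges
```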

Slide 24: Hardness of approximation of Max-IS. Each of the following theorems gives a hardness-of-approximation result for Max-IS: Thm [AS, ALMSS]: NP ⊆ PCP[O(1), 1, ½]. Thm [RaSa]: NP ⊆ PCP[O(1), m, 2^(-m)] for m ≤ log^c n for some c > 0. Thm [DFKRS]: NP ⊆ PCP[O(1), m, 2^(-m)] for m ≤ log^c n for any c < 1. Conjecture [BGLR]: NP ⊆ PCP[O(1), m, 2^(-m)] for m ≤ log n.

Slide 25: Hardness of approximation for Max-3SAT. Assuming the PCP theorem, we will show that if P ≠ NP, Max-3SAT does not have a PTAS. Theorem: there is a constant c > 0 such that computing a (1+c)-approximation to Max-3SAT is NP-hard.

Slide 26: Hardness of approximation for Max-3SAT. Given an instance Ψ = {ψ_1, ..., ψ_l} of gap-SAT, we will transform each of the ψ_i's into a 3SAT expression Φ_i. [Diagram: a SAT formula over variables y_1, ..., y_m; each test ψ_i becomes an equivalent 3SAT formula with clauses C_1, ..., C_k.]

Slide 27: Hardness of approximation for Max-3SAT. Given an instance Ψ = {ψ_1, ..., ψ_l} of gap-SAT, there are l = O(n) functions ψ_i, each depending on at most D = O(1) variables. Hence each function can be represented as a CNF formula Φ_i: a conjunction of at most 2^D clauses, each of width at most D. Note that the number of clauses per ψ_i is still constant. Overall, we build a CNF formula: the conjunction of the Φ_i (one for each local test).
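A sketch of the truth-table conversion just described (the representation is my own): for every falsifying assignment of ψ_i we emit one clause ruling that assignment out, giving at most 2^D clauses of width at most D:

```python
from itertools import product

def function_to_cnf(variables, predicate):
    """CNF equivalent of a Boolean function on O(1) variables.
    Returns clauses as lists of (variable, wanted_value) literals;
    at most 2**len(variables) clauses, each of width len(variables)."""
    clauses = []
    for values in product([False, True], repeat=len(variables)):
        if not predicate(*values):
            # This clause is false exactly on this falsifying assignment.
            clauses.append([(x, not v) for x, v in zip(variables, values)])
    return clauses
```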

Slide 28: Hardness of approximation for Max-3SAT. Now rewrite every D-clause as a group of 3-clauses (introducing auxiliary variables) to obtain a 3CNF, as in the sketch below. Note that this is still only a constant blow-up in the number of clauses.
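The rewriting step, sketched in code (auxiliary-variable names are hypothetical): a width-D clause (l_1 ∨ ... ∨ l_D) becomes the equisatisfiable chain (l_1 ∨ l_2 ∨ z_1) ∧ (¬z_1 ∨ l_3 ∨ z_2) ∧ ... ∧ (¬z_{D-3} ∨ l_{D-1} ∨ l_D), i.e. D - 2 clauses of width 3:

```python
from itertools import count

def split_clause(literals, fresh):
    """Split one clause of width D > 3 into D-2 width-3 clauses using
    fresh chaining variables. A literal is a (variable, polarity) pair;
    `fresh` is an iterator yielding unused variable names."""
    lits = list(literals)
    if len(lits) <= 3:
        return [lits]
    z = next(fresh)
    clauses = [[lits[0], lits[1], (z, True)]]
    rest = lits[2:]
    while len(rest) > 2:
        z_next = next(fresh)
        clauses.append([(z, False), rest[0], (z_next, True)])
        z, rest = z_next, rest[1:]
    clauses.append([(z, False)] + rest)   # final clause: (~z or l_{D-1} or l_D)
    return clauses

# Example: a width-5 clause becomes 3 clauses of width 3.
fresh = (f"z{i}" for i in count())
three_cnf = split_clause([("y1", True), ("y2", False), ("y3", True),
                          ("y4", True), ("y5", False)], fresh)
```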

Slide 29: Hardness of approximation for Max-3SAT. In case Ψ is NOT satisfiable, some constant fraction of the ψ_i are not satisfied under any assignment, and for each such ψ_i at least one clause of Φ_i isn't satisfied.

Slide 30: Hardness of approximation for Max-3SAT. Conclusion: in case the original gap-SAT instance Ψ isn't satisfiable, a constant fraction of the 3SAT formulas Φ_i are not satisfied, and for each at least one clause isn't satisfied. Because each Φ_i contains a constant number of clauses, altogether a constant fraction of the clauses in the resulting 3SAT formula aren't satisfied. This provides a gap, and hence Max-3SAT cannot be approximated to within some constant unless P = NP!

Slide 31: More Results Related to PCP. The PCP theorem has ushered in a new era of hardness-of-approximation results. Here we list a few: Max-Clique (and equivalently Max-Independent-Set) does not have a PTAS, as we showed. It is known, in addition, that approximating it to within a factor of n^(1-ε) is hard unless co-RP = NP. Chromatic Number: NP-hard to approximate within a factor of n^(1-ε) unless co-RP = NP; there is a simple reduction from Max-Clique which shows that it is NP-hard to approximate within a factor of n^ε. Chromatic Number of 3-colorable graphs: NP-hard to approximate within a factor of 5/3 - ε (i.e., to differentiate between 4 and 3 colors); can be approximated within O(n^c · log^O(1) n) colors for some constant c < 1.

Slide 32: More Results Related to PCP. Vertex Cover: very easy to approximate within a factor of 2; NP-hard to approximate within a factor of 4/3. Max-3SAT: known to be approximable within a factor of 8/7; NP-hard to approximate within a factor of 8/7 - ε for every ε > 0. Set Cover: NP-hard to approximate within a factor of c·ln n for some constant c > 0; cannot be approximated within a factor of (1-ε)·ln n unless NP ⊆ DTIME(n^(log log n)).

Slide 33: More Results Related to PCP. Maximum Satisfiable Linear Subsystem: given a linear system Ax = b (A an n × m matrix) over a field F, find the largest number of equations that can be satisfied by some x. If all equations can be satisfied, the problem is in P (Gaussian elimination). If F = Q: NP-hard to approximate within a factor of m^ε; can be approximated within O(m / log m). If F = GF(q): can be approximated within a factor of q (even a random assignment achieves this in expectation); NP-hard to approximate within q - ε; NP-hard even for equations with only 3 variables each. For equations with only 2 variables: NP-hard to approximate within 1.0909, but can be approximated within 1.383.

