
2 Introduction

In this lecture we'll cover:
- the definition of PCP
- proofs of some classical hardness of approximation results
- a review of some recent ones

3 Review: Decision and Optimization Problems

A decision problem is a Boolean function ƒ(X), or alternatively a language L ⊆ {0,1}* comprising all strings for which ƒ is TRUE: L = { X ∈ {0,1}* | ƒ(X) }.

An optimization problem is a function ƒ(X, Y) which, given X, is to be maximized (or minimized) over all possible Y's: max_Y [ ƒ(X, Y) ].

A threshold version of max-ƒ(X, Y) is the language L_t of all strings X for which there exists Y such that ƒ(X, Y) ≥ t [thus transforming an optimization problem into a decision problem].
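The threshold construction can be made concrete with a tiny brute-force sketch. Max-Cut is our own illustrative choice of ƒ here (X is a graph, Y is a bipartition), and the function names are ours; the exponential enumeration is for illustration only, not an efficient algorithm.

```python
from itertools import product

def max_cut(edges, n):
    """Optimization version: max over all Y (bipartitions of n vertices)
    of f(X, Y), where f counts the edges crossing the cut.
    Brute force over all 2^n bipartitions -- illustration only."""
    best = 0
    for side in product([0, 1], repeat=n):
        best = max(best, sum(1 for u, v in edges if side[u] != side[v]))
    return best

def threshold_max_cut(edges, n, t):
    """Threshold (decision) version L_t: does some Y achieve f(X, Y) >= t?"""
    return max_cut(edges, n) >= t

# A triangle: the best cut isolates one vertex and cuts 2 of the 3 edges.
triangle = [(0, 1), (1, 2), (0, 2)]
```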

4 Review: The Class NP

The classical definition of the class NP: a language L ⊆ {0,1}* belongs to NP if there exists a Turing machine V_L [referred to as a verifier] such that X ∈ L ⟺ there exists a witness Y such that V_L(X, Y) accepts, in time |X|^O(1). That is, V_L can verify a membership-proof of X in L in time polynomial in the length of X.

5 Review: NP-Hardness

A language L is said to be NP-hard if an efficient (polynomial-time) procedure for L can be utilized to obtain an efficient procedure for any NP language. This definition allows efficient reductions of the more general, Cook type. An efficient algorithm translating any NP problem to a single instance of L - thereby showing that L is NP-hard - is referred to as a Karp reduction.

6 Review: Characterizing NP

Thm [Cook, Levin]: For every L ∈ NP there is an algorithm that, on input X, constructs, in time |X|^O(1), a set of local-constraints (Boolean functions) φ_{L,X} = { φ_1, ..., φ_l } over variables y_1, ..., y_m such that:
1. each of φ_1, ..., φ_l depends on O(1) variables
2. X ∈ L ⟺ there exists an assignment A: { y_1, ..., y_m } → { 0, 1 } satisfying all of φ_{L,X}
[note that m and l must be at most polynomial in |X|]

7 NP Characterization

[Figure: the variables y_1, ..., y_m, each assigned T or F, feeding into the local tests φ_1, ..., φ_j, ..., φ_l, all of which evaluate to T.]

If X ∈ L, all of the local tests are satisfied.

8 Approximation - Some Definitions

Def: a g-approximation of a maximization (similarly for minimization) function f is an algorithm that, on input X, outputs f'(X) such that f'(X) ≥ f(X)/g(|X|).

Def: PTAS (poly-time approximation scheme). We say that a maximization function f has a PTAS if, for every constant g > 1, there is a polynomial p_g and a g-approximation for f whose running time is p_g(|X|).

9 Approximation - NP-hard?

We know that by using Cook/Karp reductions we can show many decision problems to be NP-hard. Can an approximation problem be NP-hard? One can easily show that if there is any g for which there is a g-approximation for (general) TSP, then P = NP.

10 Characterization of NP

Thm [Cook, Levin]: For every L ∈ NP there is an algorithm that, on input X, constructs, in time |X|^O(1), a set of local-constraints (Boolean functions) φ_{L,X} = { φ_1, ..., φ_l } over variables y_1, ..., y_m such that:
1. each of φ_1, ..., φ_l depends on O(1) variables
2. X ∈ L ⟺ there exists an assignment A: { y_1, ..., y_m } → { 0, 1 } satisfying all of φ_{L,X}

PCP [AS, ALMSS]: X ∉ L ⟹ every assignment A: { y_1, ..., y_m } → { 0, 1 } satisfies < ½ fraction of φ_{L,X}

11 PCP NP Characterization

[Figure: the same diagram of variables and local tests; here many of the tests evaluate to F.]

If X ∉ L, at least half of the local tests aren't satisfied!

12 Probabilistically Checkable Proofs

Hence, the Cook-Levin theorem states that a verifier can efficiently verify membership-proofs for any NP language. The PCP characterization of NP, in contrast, states that a membership-proof can be verified probabilistically:
- choose randomly one local-constraint,
- access the small set of variables it depends on,
- accept or reject accordingly,
erroneously accepting a non-member only with small probability.
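The probabilistic verification step is simple enough to sketch directly. This is an illustrative sketch, not any particular PCP system: we assume a constraint is a (variables, predicate) pair and the proof is a table of bits, and all names are ours.

```python
import random

def pcp_verify(constraints, proof, rng):
    """One round of the probabilistic verifier: choose one local-constraint
    at random, read only the few proof positions it depends on, and accept
    or reject accordingly."""
    variables, predicate = rng.choice(constraints)
    return predicate(*(proof[v] for v in variables))

# Two local constraints over proof positions 'x' and 'y'.
constraints = [(('x',), lambda x: x == 1), (('y',), lambda y: y == 0)]
good_proof = {'x': 1, 'y': 0}   # satisfies every constraint
bad_proof = {'x': 0, 'y': 1}    # satisfies no constraint
```

A proof satisfying every constraint is always accepted; one satisfying less than half the constraints is rejected with probability above ½ in a single round, and repeating the round drives the error probability down.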

13 Gap Problems

A gap-problem is a maximization (or minimization) problem ƒ(X, Y) together with two thresholds t_1 > t_2:
- X must be accepted if max_Y [ ƒ(X, Y) ] ≥ t_1
- X must be rejected if max_Y [ ƒ(X, Y) ] ≤ t_2
- other X's may be accepted or rejected (don't care)
(almost a decision problem; relates to approximation)

14 Reducing Gap Problems to Approximation Problems

Using an efficient approximation algorithm for ƒ(X, Y) to within a factor g, one can efficiently solve the corresponding gap problem gap-ƒ(X, Y), as long as t_1 / t_2 > g. Simply run the approximation algorithm: the outcome clearly determines which side of the gap the given input falls on. (Hence, proving a gap problem NP-hard translates to hardness of its approximation version, for appropriate factors.)
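This argument can be sketched in a few lines (function names ours; `approx` is assumed to return a value in [OPT/g, OPT] for a maximization problem):

```python
def gap_decide(approx, x, t1, t2, g):
    """Decide gap-f using a g-approximation algorithm, assuming t1/t2 > g.
    If OPT(x) >= t1 then approx(x) >= t1/g > t2; if OPT(x) <= t2 then
    approx(x) <= OPT(x) <= t2.  Comparing against t2 separates the cases."""
    assert t1 / t2 > g, "the gap must exceed the approximation factor"
    return approx(x) > t2  # True: accept (OPT >= t1); False: reject (OPT <= t2)

# Stand-in for testing: a "2-approximation" that happens to return OPT itself.
exact = lambda opt: opt
```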

15 gap-SAT

Def: gap-SAT[D, v, ε] is as follows:
- Instance: a set Ψ = { ψ_1, ..., ψ_l } of Boolean functions (local-constraints) over variables y_1, ..., y_m of range 2^v
- Locality: each of ψ_1, ..., ψ_l depends on at most D variables
- Let the maximum-satisfied-fraction be the largest fraction of Ψ satisfied by an assignment A: { y_1, ..., y_m } → 2^v; if this fraction is
  = 1 → accept
  < ε → reject
- D, v and ε may be functions of l

16 The PCP Hierarchy

Def: L ∈ PCP[D, v, ε] if L is efficiently reducible to gap-SAT[D, v, ε]

Thm [AS, ALMSS]: NP ⊆ PCP[O(1), 1, ½] [the PCP characterization theorem above]
Thm [RaSa]: NP ⊆ PCP[O(1), m, 2^-m] for m ≤ log^c n for some c > 0
Thm [DFKRS]: NP ⊆ PCP[O(1), m, 2^-m] for m ≤ log^c n for any c < 1
Conjecture [BGLR]: NP ⊆ PCP[O(1), m, 2^-m] for m ≤ log n

17 Optimal Characterization

One cannot expect the error-probability to be less than exponentially small in the number of bits each local-test looks at, since a random assignment would already satisfy such a fraction of the local-tests. One cannot hope for smaller than polynomially small error-probability either, since it would imply fewer than one local-test satisfied; each local-test, being rather easy to compute, would then completely determine the outcome. [The BGLR conjecture is hence optimal in that respect.]

18 Approximating Max-IS is NP-hard

We will reduce gap-SAT to gap-Independent-Set. Given a set Ψ = { ψ_1, ..., ψ_l } of Boolean functions over variables y_1, ..., y_m of range 2^v, each of ψ_1, ..., ψ_l depending on at most D variables, we must determine whether all the functions can be satisfied or only a fraction less than ε. We will construct a graph G_Ψ that has an independent set of size r ⟺ there exists an assignment satisfying r of the local-constraints.

19 (q, r)-co-partite Graphs

G = (Q × R, E) comprises q = |Q| cliques of size r = |R|:
E ⊇ { ((i, j_1), (i, j_2)) | i ∈ Q, j_1, j_2 ∈ R }
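The guaranteed edges of a (q, r)-co-partite graph are easy to enumerate; a small sketch (names ours):

```python
from itertools import combinations

def co_partite_edges(q, r):
    """The edges every (q, r)-co-partite graph must contain: the vertex set
    is {0..q-1} x {0..r-1}, and for each i the set {i} x {0..r-1} is a clique.
    Any independent set therefore picks at most one vertex per clique, so its
    size is at most q."""
    return {((i, j1), (i, j2))
            for i in range(q)
            for j1, j2 in combinations(range(r), 2)}
```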

20 Gap Independent-Set

Instance: a (q, r)-co-partite graph G = (Q × R, E)
Problem: distinguish between
- Good: IS(G) = q
- Bad: every set I ⊆ V s.t. |I| > εq contains an edge
Thm: gap-IS(r, ε) is NP-hard for r polynomial in 1/ε (r = (1/ε)^c for some constant c)

21 gap-SAT → gap-IS

[Figure: the local tests ψ_1, ..., ψ_l drawn over the variables y_1, ..., y_m.]

Construct a graph G_Ψ that has one clique per ψ_i ∈ Ψ, with one vertex per satisfying assignment for ψ_i.

22 gap-SAT → gap-IS

[Figure: the same construction, with edges drawn between vertices of different cliques.]

Two vertices are connected if the assignments they represent are inconsistent.

23 gap-SAT → gap-IS

Lemma: α(G_Ψ) ≥ k (there is an independent set of size k) ⟺ there is an assignment that satisfies k of ψ_1, ..., ψ_l.
⇐ Consider an assignment A satisfying k clauses. For each satisfied clause ψ_i, take the vertex corresponding to A's restriction to ψ_i's variables; the corresponding k vertices form an independent set in G_Ψ.
⇒ Any independent set of size k in G_Ψ is pairwise consistent, and hence yields an assignment satisfying k of ψ_1, ..., ψ_l.
Hence gap-IS is NP-hard, and IS is NP-hard to approximate!
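The whole reduction of the last three slides can be sketched as code. This is an illustrative Python rendering under our own representation (a constraint is a tuple of variable names plus a predicate); the brute-force independent-set routine exists only to exercise the lemma on toy instances.

```python
from itertools import combinations, product

def gap_sat_to_gap_is(constraints):
    """Build the reduction graph: one vertex per (constraint, satisfying
    partial assignment); connect two vertices if they lie in the same
    constraint's clique or if their partial assignments are inconsistent."""
    vertices = []
    for i, (variables, predicate) in enumerate(constraints):
        for values in product([0, 1], repeat=len(variables)):
            if predicate(*values):
                vertices.append((i, dict(zip(variables, values))))
    edges = set()
    for a, b in combinations(range(len(vertices)), 2):
        (i, asg_a), (j, asg_b) = vertices[a], vertices[b]
        clash = any(asg_a[v] != asg_b[v] for v in asg_a if v in asg_b)
        if i == j or clash:
            edges.add((a, b))
    return vertices, edges

def max_independent_set(n, edges):
    """Brute force, for checking the lemma on toy instances only."""
    for k in range(n, 0, -1):
        for subset in map(set, combinations(range(n), k)):
            if not any(a in subset and b in subset for a, b in edges):
                return k
    return 0
```

On a toy instance with constraints (x ∨ y) and (y = 0), both are satisfied by x = 1, y = 0, and indeed the independence number of the constructed graph is 2, matching the lemma.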

24 Hardness of Approximation of Max-IS

Each of the following theorems gives a hardness of approximation result for Max-IS:
Thm [AS, ALMSS]: NP ⊆ PCP[O(1), 1, ½]
Thm [RaSa]: NP ⊆ PCP[O(1), m, 2^-m] for m ≤ log^c n for some c > 0
Thm [DFKRS]: NP ⊆ PCP[O(1), m, 2^-m] for m ≤ log^c n for any c < 1
Conjecture [BGLR]: NP ⊆ PCP[O(1), m, 2^-m] for m ≤ log n

25 Hardness of Approximation for Max-3SAT

Assuming the PCP theorem, we will show that if P ≠ NP, Max-3SAT does not have a PTAS.
Theorem: There is a constant c > 0 such that computing (1+c)-approximations to Max-3SAT is NP-hard.

26 Hardness of Approximation for Max-3SAT

[Figure: each local test ψ_i over the variables y_1, ..., y_m is expanded into clauses C_1, ..., C_k of an equivalent 3SAT formula.]

Given an instance of gap-SAT, Ψ = { ψ_1, ..., ψ_l }, we will transform each of the ψ_i's into a 3-SAT expression ψ′_i.

27 Hardness of Approximation for Max-3SAT

Given an instance of gap-SAT, Ψ = { ψ_1, ..., ψ_l }, there are O(n) functions ψ_i, each depending on up to D = O(1) variables. Hence each function can be represented as a CNF formula ψ′_i: a conjunction of at most 2^D clauses, each of size at most D. Note that the number of clauses per function is still constant. Overall, we build a CNF formula: the conjunction of the ψ′_i (one for each local test).

28 Hardness of Approximation for Max-3SAT

Now rewrite every D-clause as a group of 3-clauses (introducing auxiliary variables) to obtain a 3-CNF. Note that this is still only a constant blow-up in the number of clauses.
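The standard rewriting of a wide clause into 3-clauses uses a chain of fresh auxiliary variables: (l_1 ∨ ... ∨ l_k) becomes (l_1 ∨ l_2 ∨ z_1) ∧ (¬z_1 ∨ l_3 ∨ z_2) ∧ ... ∧ (¬z_{k-3} ∨ l_{k-1} ∨ l_k), an equisatisfiable 3-CNF of k - 2 clauses. A sketch using DIMACS-style integer literals (a negative literal means the negated variable):

```python
from itertools import count

def clause_to_3cnf(clause, fresh):
    """Rewrite one clause (a list of integer literals) into an equisatisfiable
    chain of 3-clauses, drawing auxiliary variable indices from `fresh`."""
    if len(clause) <= 3:
        return [list(clause)]
    aux = next(fresh)
    out = [[clause[0], clause[1], aux]]
    for lit in clause[2:-2]:
        nxt = next(fresh)
        out.append([-aux, lit, nxt])  # link to the previous auxiliary variable
        aux = nxt
    out.append([-aux, clause[-2], clause[-1]])
    return out
```

Since each constraint's CNF has at most 2^D clauses of width at most D, this step multiplies the clause count by at most another constant factor (each D-clause becomes at most D - 2 three-clauses).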

29 Hardness of Approximation for Max-3SAT

[Figure: the same expansion, with some clauses C_j marked unsatisfied.]

In case Ψ is NOT satisfiable, some constant fraction of the ψ_i are not satisfied, and for each of these, at least one clause in ψ′_i isn't satisfied.

30 Hardness of Approximation for Max-3SAT

Conclusion: in case the original gap-SAT instance Ψ isn't satisfiable, a constant fraction of the 3SAT formulas ψ′_i are not satisfied, and for each at least one clause isn't satisfied. Because each ψ′_i contains a constant number of clauses, altogether a constant fraction of the clauses in the resulting 3SAT formula aren't satisfied. This provides a gap, and hence Max-3SAT cannot be approximated to within some constant unless P = NP!

31 More Results Related to PCP

The PCP theorem has ushered in a new era of hardness of approximation results. Here we list a few:
- We showed that Max-Clique (and, equivalently, Max-Independent-Set) does not have a PTAS. It is known, in addition, that approximating it within a factor of n^(1-ε) is hard unless co-RP = NP.
- Chromatic Number: NP-hard to approximate within a factor of n^(1-ε) unless co-RP = NP. There is a simple reduction from Max-Clique which shows that it is NP-hard to approximate within a factor of n^ε.
- Chromatic Number of 3-colorable graphs: NP-hard to approximate within a factor of 5/3 - ε (i.e., to distinguish between 4 and 3 colors). Can be approximated within O(n^α · log^O(1) n) for some constant α < 1.

32 More Results Related to PCP

- Vertex Cover: very easy to approximate within a factor of 2; NP-hard to approximate within a factor of 4/3.
- Max-3-Sat: known to be approximable within a factor of 8/7; NP-hard to approximate within a factor of 8/7 - ε for every ε > 0.
- Set Cover: can be approximated within a factor of ln n; cannot be approximated within a factor of (1-ε) · ln n unless NP ⊆ DTIME(n^(log log n)).

33 More Results Related to PCP

Maximum Satisfying Linear Sub-System. The problem: given a linear system Ax = b (A an n x m matrix) over a field F, find the largest number of equations that can be satisfied by some x.
- If all equations can be satisfied, the problem is in P.
- If F = Q: NP-hard to approximate within a factor of m^ε; can be approximated within O(m / log m).
- If F = GF(q): can be approximated within a factor of q (even a random assignment gives such a factor); NP-hard to approximate within q - ε. Also NP-hard for equations with only 3 variables.
- For equations with only 2 variables: still NP-hard to approximate within some constant factor, yet approximable within a constant factor.