Theorem 8.19 If algorithm A has absolute approximation ratio R_A, then the shifting algorithm has absolute approximation ratio (kR_A + 1)/(k + 1).

Proof. Let N be the number of disks in some optimal solution. Since A yields R_A-approximations, the number of disks returned by our algorithm for partition P_i is bounded by (1/(1-R_A))·∑_{j∈P_i} N_j, where N_j is the optimal number of disks needed to cover the points in vertical strip j of partition P_i and where j ranges over all such strips. Let O_i be the number of disks in the optimal solution that cover points in two adjacent strips of partition P_i. Our observation can be rewritten as ∑_{j∈P_i} N_j ≤ N + O_i. Because each partition has a different set of adjacent strips and each partition is shifted from the previous one by a full disk diameter, none of the disks that cover points in adjacent strips of P_i can cover points in adjacent strips of P_j, for i ≠ j.

Thus the total number of disks that can cover points in adjacent strips in any partition is at most N, the total number of disks in an optimal solution. Hence we can write

∑_{i=1}^{k} O_i ≤ N.

By summing our first inequality over all k partitions and substituting this second inequality, we obtain

∑_{i=1}^{k} ∑_{j∈P_i} N_j ≤ (k+1)·N,

min_{i=1,...,k} ∑_{j∈P_i} N_j ≤ (1/k)·∑_{i=1}^{k} ∑_{j∈P_i} N_j ≤ ((k+1)/k)·N.

Using now our first bound for our shifting algorithm, we conclude that the number of disks it returns is bounded by (1/(1-R_A))·((k+1)/k)·N, and thus that it has an absolute approximation ratio of (kR_A + 1)/(k + 1), as desired. This result generalizes easily to coverage by uniform convex shapes other than disks, with suitable modifications regarding the effective diameter of the shape.
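As a concrete illustration of the shifting strategy analyzed above, here is a minimal Python sketch; it assumes strips of width k·d (with d the disk diameter), consecutive partitions offset by one diameter, and a hypothetical subroutine cover_strip standing in for algorithm A applied to a single strip.

```python
import math

def shifting_cover(points, d, k, cover_strip):
    """Shifting strategy: try the k shifted partitions and keep the best cover.

    points      -- list of (x, y) pairs to be covered
    d           -- disk diameter
    k           -- shifting parameter
    cover_strip -- hypothetical stand-in for algorithm A: takes the points of
                   one vertical strip of width k*d and returns a list of disks
    """
    best = None
    for shift in range(k):                      # the k partitions P_1, ..., P_k
        offset = shift * d                      # partitions differ by one diameter
        strips = {}                             # group points by their strip
        for (x, y) in points:
            index = math.floor((x - offset) / (k * d))
            strips.setdefault(index, []).append((x, y))
        disks = []                              # run A independently on each strip
        for strip_points in strips.values():
            disks.extend(cover_strip(strip_points))
        if best is None or len(disks) < len(best):
            best = disks                        # keep the smallest of the k covers
    return best
```

The proof bounds the size of the cover produced for each partition P_i; taking the minimum over the k partitions is what yields the stated ratio.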

Theorem 8.20 There is an approximation scheme for Disk Covering such that, for every natural number k, the scheme provides an absolute approximation ratio of (2k+1)/(k+1)^2 and runs in O(k^4 · n^{O(k^2)}) time.

8.3.4 Fixed Ratio Approximations

A very large number of problems have some fixed-ratio approximation and thus belong to APX but do not appear to belong to PTAS, although they obey the necessary condition of simplicity. Examples include Vertex Cover, Maximum Cut, and the most basic problem of all, namely Maximum 3SAT (Max3SAT), the optimization version of 3SAT.

Theorem 8.21 MaxkSAT has a 2^{-k}-approximation.

Proof. Consider the following simple algorithm. Assign to each remaining clause c_i the weight 2^{-|c_i|}; thus every unassigned literal left in a clause halves the weight of that clause. (The weight of a clause is inversely proportional to the number of ways in which that clause could be satisfied.) Pick any variable x that appears in some remaining clause. Set x to true if the sum of the weights of the clauses in which x appears as an uncomplemented literal exceeds the sum of the weights of the clauses in which it appears as a complemented literal; set it to false otherwise. Update the clauses and their weights and repeat until all clauses have been satisfied or reduced to falsehood.
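The greedy procedure just described translates almost literally into code. The sketch below (in Python) represents a clause as a list of nonzero integers, where literal v stands for variable v and -v for its negation; it returns the truth assignment together with the number of clauses left unsatisfied.

```python
def greedy_maxsat(clauses):
    """Greedy weighting heuristic for MaxSAT described above (a sketch)."""
    remaining = [list(c) for c in clauses]   # clauses neither satisfied nor falsified yet
    assignment = {}
    unsatisfied = 0

    while remaining:
        # Clauses reduced to the empty clause can no longer be satisfied.
        unsatisfied += sum(1 for c in remaining if not c)
        remaining = [c for c in remaining if c]
        if not remaining:
            break

        # Pick any variable that still appears in some remaining clause.
        x = abs(remaining[0][0])

        # A clause with j remaining literals has weight 2^(-j); compare the
        # total weight of clauses containing x uncomplemented vs. complemented.
        pos = sum(2.0 ** -len(c) for c in remaining if x in c)
        neg = sum(2.0 ** -len(c) for c in remaining if -x in c)
        value = pos >= neg
        assignment[x] = value

        # Satisfied clauses disappear; clauses containing the falsified literal
        # lose that literal, which doubles their weight.
        satisfied_lit = x if value else -x
        falsified_lit = -satisfied_lit
        remaining = [[l for l in c if l != falsified_lit]
                     for c in remaining if satisfied_lit not in c]

    return assignment, unsatisfied
```

On instances in which every clause has exactly k literals, the argument below shows that this procedure leaves at most m·2^{-k} of the m clauses unsatisfied.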

We claim that this algorithm will leave at most m·2^{-k} unsatisfied clauses (where m is the number of clauses in the instance); since the best that any algorithm could do would be to satisfy all m clauses, this guarantees the ratio 2^{-k}. Note that m·2^{-k} is exactly the total weight of the m clauses of length k in the original instance; thus our claim is that the number of clauses left unsatisfied by the algorithm is bounded by ∑_{i=1}^{m} 2^{-|c_i|}, the total weight of the clauses in the instance. To prove our claim we use induction on the number of clauses. With a single clause, the algorithm clearly returns a satisfying truth assignment and thus meets the bound. Assume that the algorithm meets the bound on all instances of m or fewer clauses, and consider an instance of m+1 clauses. Let x be the first variable set by the algorithm, and denote by m_t the number of clauses satisfied by the assignment, by m_f the number of clauses losing a literal as a result of the assignment, and by m_u = m + 1 - m_t - m_f the number of clauses unaffected by the assignment.

Also let w_{m+1} denote the total weight of all the clauses in the original instance, w_t the total weight of the clauses satisfied by the assignment, w_u the total weight of the unaffected clauses, and w_f the total weight of the clauses losing a literal, measured before the loss of that literal; thus we can write w_{m+1} = w_t + w_u + w_f. Because we must have had w_t ≥ w_f in order to assign x as we did, we can write w_{m+1} = w_t + w_u + w_f ≥ w_u + 2w_f. The remaining m + 1 - m_t = m_u + m_f clauses now have total weight w_u + 2w_f, because the weight of every clause that loses a literal doubles. By the inductive hypothesis, our algorithm will leave at most w_u + 2w_f clauses unsatisfied among these clauses and thus also in the original problem; since we have, as noted above, w_{m+1} ≥ w_u + 2w_f, our claim is proved.
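In symbols, the induction step chains the two observations above (w_t ≥ w_f by the choice of the truth value for x, and the m_u + m_f remaining clauses have total weight w_u + 2w_f):

```latex
\#\{\text{clauses left unsatisfied}\}
  \;\le\; w_u + 2w_f
  \;\le\; w_t + w_u + w_f
  \;=\; w_{m+1}
  \;=\; \sum_{i=1}^{m+1} 2^{-|c_i|} .
```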

Definition 8.12 Let Π1 and Π2 be two problems in NPO. We say that Π1 PTAS-reduces to Π2 if there exist three functions, f, g, and h, such that: for any instance x of Π1, f(x) is an instance of Π2 and is computable in time polynomial in |x|; for any instance x of Π1, any solution y for instance f(x) of Π2, and any relative precision requirement ε (expressed as a fraction), g(x, y, ε) is a solution for x and is computable in time polynomial in |x| and |y|; h is a computable injective function on the set of rationals in the interval [0, 1); and, for any instance x of Π1, any solution y for instance f(x) of Π2, and any precision requirement ε (expressed as a fraction), if the value of y obeys the precision requirement h(ε), then the value of g(x, y, ε) obeys the precision requirement ε.
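Assuming the chapter's convention that a solution obeys precision requirement ε exactly when its approximation ratio (a value in [0, 1), smaller being better) is at most ε, the last condition of the definition can be restated compactly as:

```latex
R_{\Pi_2}\bigl(f(x),\,y\bigr) \;\le\; h(\varepsilon)
\quad\Longrightarrow\quad
R_{\Pi_1}\bigl(x,\,g(x,y,\varepsilon)\bigr) \;\le\; \varepsilon .
```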

Proposition 8.4 PTAS-reductions are reflexive and transitive; moreover, if Π1 PTAS-reduces to Π2 and Π2 belongs to APX (respectively, PTAS), then Π1 belongs to APX (respectively, PTAS).

Definition 8.13 The class OPTNP is exactly the class of problems that PTAS-reduce to Max3SAT.

The Maximum Weighted Satisfiability (MaxWSAT) problem has the same instances as Satisfiability, with the addition of a weight function mapping each variable to a natural number; the objective is to find a satisfying truth assignment that maximizes the total weight of the true variables. An instance of the Maximum Bounded Weighted Satisfiability problem is an instance of MaxWSAT together with a bound W such that the sum of the weights of all variables in the instance must lie in the interval [W, 2W].

Theorem 8.22 Maximum Weighted Satisfiability is NPO-complete; Maximum Bounded Weighted Satisfiability is APX-complete.

Proof. Let Π be a problem in NPO and let M be a nondeterministic machine that, for each instance of Π, guesses a solution, checks whether it is feasible, and computes its value. If the guess fails, M halts with a 0 on the tape; otherwise it halts with the value of the solution, written in binary and “in reverse,” with its least significant bit on square 1 and increasingly significant bits to the right of that position. By definition of NPO, M runs in polynomial time. For M and any instance x, the construction used in the proof of Cook’s theorem yields a Boolean formula of polynomial size that describes exactly those computation paths of M on input x and guess y that lead to a nonzero answer. We assign a weight of 0 to all variables used in the construction, except for those that denote that a tape square contains the character 1 at the end of the computation, and then only for squares to the right of position 0; the variable asserting that square i contains a 1 receives weight 2^{i-1}, so that the total weight of the true variables equals the value written on the tape. That is, only the tape squares that contain a 1 in the binary representation of the value of the solution for x will count toward

the weight of the MaxWSAT solution. This transformation between instances can easily be carried out in polynomial time; a solution for the original problem can be recovered by looking at the assignment of the variables describing the initial guess; and the precision-mapping function h is just the identity.

Definition 8.14 Let Π1 and Π2 be two maximization problems; denote the value of an optimal solution for an instance x by opt(x). A gap-preserving reduction from Π1 to Π2 is a polynomial-time map f from instances of Π1 to instances of Π2, together with two pairs of functions, (c1, r1) and (c2, r2), such that r1 and r2 return values no smaller than 1 and the following implications hold:

opt(x) ≥ c1(x) => opt(f(x)) ≥ c2(f(x))
opt(x) ≤ c1(x)/r1(x) => opt(f(x)) ≤ c2(f(x))/r2(f(x))

Theorem 8.23 For each problem Π in NP, there is a polynomial-time map f from instances of Π to instances of Max3SAT and a fixed ε > 0 such that, for any instance x of Π, the following implications hold:

x is a “yes” instance => opt(f(x)) = |f(x)|
x is a “no” instance => opt(f(x)) < (1 - ε)·|f(x)|

Proof. The gist of the alternate characterization of NP (the PCP theorem) is that a “yes” instance of a problem in NP has a certificate that can be verified probabilistically in polynomial time by inspecting only a constant number of bits of the certificate, chosen with the help of a logarithmic number of random bits. If x is a “yes” instance, then the verifier will accept it with probability 1

(that is, it will accept no matter what the random bits are); otherwise, the verifier will reject it with probability at least 1/2 (i.e., at least half of the random bit sequences will lead to rejection). Since Π is in NP, a “yes” instance of size n has a certificate that can be verified in polynomial time with the help of at most c1·log n random bits and by reading at most c2 bits from the certificate. All 2^{c2} possible outcomes that can result from looking up these c2 bits can be examined. Each outcome determines a computation path; some paths lead to acceptance and some to rejection, each in at most a polynomial number of steps. Because there is a constant number of paths and each path is of polynomial length, we can examine all of these paths, determine which are accepting and which rejecting, and write a formula of constant size that describes the accepting paths in terms of the bits of the certificate read during the computation. This formula is a disjunction of at most 2^{c2} conjuncts, where each conjunct describes one path and thus has at most c2 literals.
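Schematically, the constant-size formula built for one random string ρ is a small disjunction over the certificate bits that are read; the literal names ℓ_{p,q} below are placeholders for those bits or their negations:

```latex
\varphi_\rho \;=\; \bigvee_{p=1}^{P}\bigl(\ell_{p,1}\wedge\ell_{p,2}\wedge\cdots\wedge\ell_{p,c_2}\bigr),
\qquad P \le 2^{c_2},
```

with one conjunct for each accepting computation path.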

Each such formula is satisfiable if and only if the c2 bits of the certificate examined under the chosen sequence of random bits can assume values that lead the verifier to accept its input. We can then take all n^{c1} such formulae, one for each sequence of random bits, and place them into a single large conjunction; each constant-size formula can in turn be rewritten as a 3SAT formula with at most k clauses, for some constant k, so that the resulting Max3SAT instance has k·n^{c1} clauses. The resulting large conjunction is satisfiable if and only if there exists a certificate such that, for each choice of c1·log n random bits (i.e., for each choice of the c2 certificate bits to be read), the verifier accepts its input. If the verifier rejects its input, then it does so for at least one half of the possible choices of random bits. Therefore, at least one half of the constant-size formulae are unsatisfiable. But then at least one out of every k clauses must be false for these (1/2)·n^{c1} formulae, so that we must have at least (1/(2k))·k·n^{c1} unsatisfied clauses in any assignment. Thus if the verifier accepts its input, then all k·n^{c1} clauses are satisfied, whereas, if it rejects its input, then at most (k - 1/2)·n^{c1} = (1 - 1/(2k))·k·n^{c1} clauses can be satisfied.
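To summarize the count (writing the total number of clauses of f(x) as k·n^{c1}):

```latex
x \text{ a ``yes'' instance} \;\Longrightarrow\; \mathrm{opt}(f(x)) = k\,n^{c_1},
\qquad
x \text{ a ``no'' instance} \;\Longrightarrow\; \mathrm{opt}(f(x)) \le \Bigl(k - \tfrac12\Bigr)n^{c_1} = \Bigl(1 - \tfrac{1}{2k}\Bigr)k\,n^{c_1}.
```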

Since k is a fixed constant, we have obtained the desired gap, with ε = 1/(2k).

Corollary 8.3 No OPTNP-hard problem can be in PTAS unless P equals NP.

Theorem 8.24 Maximum Bounded Weighted Satisfiability PTAS-reduces to Max3SAT.

Corollary 8.4 OPTNP equals APX.

NP-Hardness of Approximation Schemes. If a problem is not p-simple or if its decision version is strongly NP-complete, then it is not in FPTAS unless P equals NP. If a problem is not simple or if it is OPTNP-hard, then it is not in PTAS unless P equals NP.

Thank You!!!