Approximation Schemes
Meera Mohan, Sudhir Koduri, Xiaolu Liu
Why Approximation Algorithms
For many problems, e.g. Set Cover and Bin Packing, we cannot find an optimal solution in polynomial time. Instead we look for a near-optimal solution, either with a heuristic or with an approximation algorithm: an algorithm that finds provably approximate solutions to an optimization problem.
Approximation Algorithms
An approximation algorithm finds approximate solutions to optimization problems with a guaranteed approximation ratio. Such algorithms are most often associated with NP-hard problems: since it is unlikely that efficient polynomial-time exact algorithms exist for them, we settle for polynomial-time non-optimal solutions. Ideally the approximation is optimal up to a small constant factor (for instance, within 5% of the optimal solution). Approximation algorithms are also increasingly used for problems where exact polynomial-time algorithms are known but are too expensive for the sizes of the data sets involved.
Computationally Hard Problems
You are given a computationally hard problem. Consider two scenarios.
With no knowledge of approximation, you spend a few months looking for an optimal solution, confess that you cannot find one, and get fired.
With knowledge of approximation, you show that the problem is NP-complete (or NP-hard), propose a good algorithm (heuristic or approximation) that finds a near-optimal solution, and establish its approximation ratio.
Application of Approximation Algorithms
Shortest path: given a graph with edge costs and a pair of nodes, find the least-cost path between them. This problem is solvable exactly in polynomial time.
Traveling salesman: given a complete graph with nonnegative edge costs, find a minimum-cost cycle visiting every vertex exactly once. This problem is NP-hard, so approximation is the natural recourse.
Approximation Algorithms (cont.)
An approximation algorithm for an optimization problem runs in time polynomial in the length of the input and outputs a solution guaranteed to be close to the optimal one. "Close" has a well-defined sense called the performance guarantee: the algorithm is guaranteed to find a solution at most (or at least, as appropriate) ρ times the optimum. The ratio ρ is the performance ratio, or relative performance guarantee, of the algorithm.
Approximation ratio ρ(n): let C* be the value of an optimal solution and C the value of the solution produced by the approximation algorithm; then max(C/C*, C*/C) ≤ ρ(n).
For a maximization problem, 0 < C ≤ C*, so C*/C measures how much larger C* is than C.
For a minimization problem, 0 < C* ≤ C, so C/C* measures how much larger C is than C*.
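The quantities C and C* can be made concrete with a classic minimization example: the matching-based 2-approximation for Vertex Cover, where the measured ratio C/C* is guaranteed to be at most ρ = 2. A small sketch (the 5-cycle instance and all names are ours):

```python
from itertools import combinations

def matching_vertex_cover(edges):
    """2-approximation for minimum Vertex Cover: take both endpoints of
    every edge of a greedily built maximal matching.  Any cover must
    pick at least one endpoint per matching edge, so C <= 2 * C*."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

def optimal_vertex_cover(vertices, edges):
    """Exact optimum C* by brute force (exponential; tiny instances only)."""
    vertices = list(vertices)
    for k in range(len(vertices) + 1):
        for subset in combinations(vertices, k):
            s = set(subset)
            if all(u in s or v in s for u, v in edges):
                return s
    return set(vertices)

# Ratio C / C* on a 5-cycle: the greedy cover pays 4, the optimum is 3.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
C = len(matching_vertex_cover(edges))
C_star = len(optimal_vertex_cover(range(5), edges))
assert C / C_star <= 2          # the guaranteed ratio rho = 2 holds
```

On this instance the measured ratio is 4/3, comfortably inside the proven bound of 2.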
Approximation Algorithms (cont.)
PTAS (Polynomial-Time Approximation Scheme): a family {A_ε}, ε > 0, of (1 + ε)-approximation algorithms for an NP-hard optimization problem Π, where the running time of each A_ε is bounded by a polynomial in the size of the instance I. Here ε is the precision requirement. The running time must be polynomial in |I| but may be exponential in 1/ε, e.g. O(|I|^(1/ε)).
Approximation Algorithms (cont.)
FPTAS (Fully Polynomial-Time Approximation Scheme): the same as a PTAS, except that the running time is bounded by a polynomial in both the size of the instance I and 1/ε, e.g. O(|I|/ε³).
Catch-22
We cannot compute C*, so how can we compare C to C*? How can we design an algorithm whose output can nevertheless be compared against C*? That is the objective of this presentation.
Optimization Problem
Instance: I, with length of description |I| = n.
Goal: minimize an objective function, yielding the optimal value Opt(I).
Complexity Class Relationships
PO ⊆ FPTAS ⊆ PTAS ⊆ APX ⊆ NPO
Strongly and Weakly NP-hard
If a problem is NP-hard even when the input is encoded in unary, it is called strongly NP-hard (NP-hard in the strong sense, or unary NP-hard). If a problem is polynomially solvable under a unary encoding, it is solvable in pseudo-polynomial time; such problems are NP-hard at most in the weak sense.
The p-approximable Class
Theorem 8.14: let Π be an optimization problem; if its decision version is strongly NP-complete, then Π is not fully p-approximable. In short: strong NP-hardness rules out an FPTAS.
Knapsack
Knapsack is p-approximable, indeed fully p-approximable: the Knapsack problem is solvable in pseudo-polynomial time.
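The pseudo-polynomial algorithm behind this claim is the dynamic program over total value, and scaling its input yields the well-known knapsack FPTAS. A hedged sketch (instance, names, and the particular scaling factor are ours):

```python
def knapsack_dp(values, weights, capacity):
    """Pseudo-polynomial exact knapsack: dynamic program over total
    value, where best[v] is the minimum weight achieving value v.
    Runs in O(n * V) time with V = sum(values); returns the chosen
    item indices."""
    V = sum(values)
    INF = float("inf")
    best = [0] + [INF] * V
    choice = [set()] + [None] * V       # item sets realizing best[v]
    for i, (val, wt) in enumerate(zip(values, weights)):
        for v in range(V, val - 1, -1):
            if best[v - val] + wt < best[v]:
                best[v] = best[v - val] + wt
                choice[v] = choice[v - val] | {i}
    v_opt = max(v for v in range(V + 1) if best[v] <= capacity)
    return choice[v_opt]

def knapsack_fptas(values, weights, capacity, eps):
    """FPTAS sketch: truncate values to multiples of K = eps*max/n and
    solve exactly on the scaled values, so the pseudo-polynomial DP now
    runs in time polynomial in n and 1/eps.  The returned true value of
    the chosen items is at least (1 - eps) times the optimum."""
    n = len(values)
    K = eps * max(values) / n
    scaled = [int(v / K) for v in values]
    items = knapsack_dp(scaled, weights, capacity)
    return sum(values[i] for i in items)
```

Shrinking ε improves the guarantee while the running time grows only polynomially in 1/ε, which is exactly what distinguishes an FPTAS from a plain PTAS.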
Properties of Optimization Problems
Let Π be an optimization problem with the following properties:
- f(I) and max(I) are polynomially related through len(I); that is, there exist bivariate polynomials p and q such that both f(I) ≤ p(len(I), max(I)) and max(I) ≤ q(len(I), f(I));
- the objective value of any feasible solution varies linearly with the parameters of the instance; and
- Π can be solved in pseudo-polynomial time.
Then Π is fully p-approximable.
Simple and p-simple Problems
An optimization problem is simple if, for each fixed bound B, the set of instances with optimal value not exceeding B is decidable in polynomial time; for example, Clique, Vertex Cover, and Set Cover are simple problems.
An optimization problem is p-simple if there exists a fixed bivariate polynomial q such that the set of instances I with optimal value not exceeding B is decidable in q(|I|, B) time; for example, Partition is a p-simple problem.
An Analogy: simple : p-simple :: PTAS : FPTAS
A problem is p-simple if it is simple and satisfies a uniformity condition, much as fully p-approximable refines p-approximable. Simplicity is necessary but not sufficient for membership in PTAS: for example, Clique, while simple, cannot be in PTAS unless P = NP.
Analogy (cont'd)
Let Π be an optimization problem. If Π is p-approximable (Π ∈ PTAS), then it is simple. If Π is fully p-approximable (Π ∈ FPTAS), then it is p-simple. In short: PTAS ⇒ simple; FPTAS ⇒ p-simple; and simple + P = NP ⇒ PTAS.
p-simple and Pseudo-Polynomial Time
If Π is an NP optimization problem with an NP-complete decision version, and for each instance I of Π, f(I) and max(I) are polynomially related through len(I), then Π is p-simple if and only if it can be solved in pseudo-polynomial time. A consequence is that the class PTAS is much richer than the class FPTAS.
Maximum Independent Subset Problem
An instance of the maximum independent subset problem is given by a collection of items, each with a value. The feasible solutions of the instance form an independence system; that is, every subset of a feasible solution is also a feasible solution. The goal is to maximize the sum of the values of the items included in the solution.
Summary
Approximability has its own hierarchy of complexity classes. Problems in NP have very different approximability properties:
- some are impossible to approximate (k-center, general TSP);
- some are hard, with a bound depending on the input size (Set Cover);
- some can be approximated within some constant ratio (Vertex Cover, k-center with triangle inequality, TSP with triangle inequality);
- and some can be approximated as closely as you like (Knapsack).
References
- Bernard M. Moret, The Theory of Computation, Addison-Wesley, 1998.
- Dexter Kozen, Theory of Computation, Springer, 2006.
- M. X. Goemans, "The Knapsack Problem and Fully Polynomial Time Approximation Schemes (FPTAS)," Seminar Series in Theoretical Computer Science, March 10, 2006.
- H. Shachnai and T. Tamir, "Polynomial Time Approximation Schemes: A Survey."
- V. Vazirani, Approximation Algorithms, Springer, pp. 68-72, 2003.
Thank You!
Approximation Schemes
Sudhir Koduri
Completion Algorithms and the Shifting Technique
Theorem 8.18: a maximum independent subset problem is in PTAS if and only if, for every k, it admits a polynomial-time k-completion algorithm. The hard part is proving that the completion algorithm is indeed well behaved.
The shifting technique decomposes a problem into suitably sized "adjacent" subpieces and creates subproblems by grouping a number of adjacent subpieces.
Example
Think of a linear array of kl subpieces, grouped into l groups of k consecutive subpieces each.
Disk covering problem: given n points in the plane and disks of fixed diameter D, cover all points with the smallest number of disks. The approximation algorithm divides the area in which the n points reside into vertical strips of width D.
Theorem 8.19: if algorithm A has absolute approximation ratio R_A, then the shifting algorithm has absolute approximation ratio k·R_A/(k + 1). Since A yields R_A-approximations, the number of disks used for a given partition is bounded by (1/R_A)·Σ_j N_j, where N_j is the optimal number of disks needed to cover the points of the j-th group.
Key observation: a disk cannot cover points lying in two elementary strips that are not adjacent. We therefore find locally optimal solutions within each strip; the union of these solutions exceeds the optimal number N by at most the number of disks that cover points in two adjacent strips.
Divide-and-Conquer Strategy
The new subproblem is to minimize the number of disks of diameter D needed to cover a collection of points placed in a vertical strip of width kD, and it is attacked with the same divide-and-conquer strategy.
Fixed-Ratio Approximations
A very large number of problems have fixed-ratio approximations: they belong to APX but do not appear to belong to PTAS. Examples: Vertex Cover, Maximum Cut, Maximum 3SAT.
Theorem 8.21: MAXkSAT has a 2^(-k) approximation. Consider the following simple algorithm. Assign to each remaining clause c_i the weight 2^(-|c_i|); thus every unassigned literal removed from a clause halves the weight of that clause. Pick any variable x that appears in some remaining clause and set x to true if the sum of the weights of the clauses in which x appears as an uncomplemented literal exceeds the sum of the weights of the clauses in which it appears as a complemented literal; set it to false otherwise. Update the clauses and their weights and repeat until every clause has been satisfied or reduced to a falsehood.
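The algorithm just described (Johnson's greedy, an instance of the method of conditional expectations) can be sketched directly; the clause encoding and all names below are our own choices:

```python
def greedy_maxsat(clauses):
    """Greedy MAX-SAT algorithm from the text: each remaining clause c
    carries weight 2**(-len(c)); each variable is set to the side whose
    clauses carry more weight.  When every clause has exactly k literals,
    at least a (1 - 2**(-k)) fraction of the clauses is satisfied.
    A literal is a nonzero int: +i means x_i, -i means its negation."""
    clauses = [list(c) for c in clauses]
    variables = sorted({abs(lit) for c in clauses for lit in c})
    satisfied = 0
    for x in variables:
        w_pos = sum(2.0 ** -len(c) for c in clauses if x in c)
        w_neg = sum(2.0 ** -len(c) for c in clauses if -x in c)
        sat_lit, unsat_lit = (x, -x) if w_pos >= w_neg else (-x, x)
        remaining = []
        for c in clauses:
            if sat_lit in c:
                satisfied += 1              # clause is now satisfied
            else:
                c = [lit for lit in c if lit != unsat_lit]
                if c:                       # clause still undecided
                    remaining.append(c)
                # a clause emptied here has been reduced to a falsehood
        clauses = remaining
    return satisfied
```

On the set of all eight 3-literal clauses over three variables, no assignment satisfies more than 7 of the 8 clauses, and the greedy achieves exactly that (1 - 2^(-3) = 7/8) bound.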
How do we classify problems within the classes NPO, APX, and PTAS? We need a suitable type of reduction, one that establishes a correspondence between solutions as well as between instances. The main reason for these requirements is that we must be able to retrieve a good approximation for a problem Π1 from a reduction to a problem Π2 for which we already have an approximation algorithm.
Fixed-Ratio Approximations
Xiaolu Liu, Mississippi State University
Nov 12, 2007, CSEET
Outline
- Optimization problems and approximation algorithms
- Fixed-ratio approximations: APX, PTAS, NPO
- Classifying NPO, APX, PTAS: the scheme of the reduction
- PTAS-reductions
- Gap-preserving reductions
Optimization Problems
Many hard problems (especially NP-hard ones) are optimization problems, either minimization or maximization: e.g. find the shortest TSP tour, the smallest vertex cover, the largest clique.
Approximation Algorithms
We are often happy with an approximately optimal solution, and we want an approximation algorithm with a guaranteed approximation ratio r: on every input x, the output is guaranteed to have value at most r·opt for a minimization problem, and at least opt/r for a maximization problem, where opt is the value of an optimal solution.
Fixed-Ratio Approximation
An algorithm is a c-approximation algorithm for some constant c if it can be proven that its solution is at most c times worse than the optimal solution; c is called the approximation ratio. Ideally the approximation is optimal up to a small constant factor, e.g. within 5% of the optimal solution.
APX and PTAS
PTAS: the problems admitting a polynomial-time approximation scheme, i.e. a polynomial-time algorithm achieving every fixed precision.
APX: the set of NPO problems that can be approximated within some constant factor, though not within every constant factor. Unless P = NP, there are problems in APX but not in PTAS. Saying that a problem is APX-hard is generally bad news, because it denies the existence of a PTAS, the most useful sort of approximation scheme.
Classifying NPO, APX, PTAS: the Scheme of the Reduction
We need a requisite style of reduction between approximation problems.
Scheme of the Reduction
The reduction must establish a correspondence between solutions as well as between instances, and must preserve approximation ratios. It consists of a map f between instances and a map g between solutions. Together with a known approximation algorithm A for the target problem, these yield a new approximation algorithm A': call in succession f, then A, then g. Using A, we thereby obtain a good approximation for the original problem.
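The composition f → A → g can be sketched as a higher-order function; the toy "minimum via maximum" example and all names are illustrative placeholders, not from the slides:

```python
def approx_via_reduction(f, A, g):
    """Builds the new approximation algorithm A' described above by
    composing the instance map f, a known approximation algorithm A for
    the target problem, and the solution map g."""
    def A_prime(x):
        y = A(f(x))        # solve the mapped instance with A
        return g(x, y)     # carry the solution back to the source problem
    return A_prime

# Toy illustration: reduce "find the minimum of a list" to "find the
# maximum" by negating every element (f), solving exactly (A = max),
# and negating the answer back (g).
A_prime = approx_via_reduction(lambda x: [-v for v in x], max, lambda x, y: -y)
```

When A is only approximate, the quality of A' depends on how well f and g preserve approximation ratios, which is exactly what the formal reduction definitions below pin down.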
Scheme of the Reduction (cont.)
To obtain sufficient generality to prove the separations among NPO, PTAS, and APX, we introduce a third map, which carries precision requirements for Π1 onto precision requirements for Π2.
Scheme of the Reduction (cont.)
The variants differ in their requirements for handling the precision requirement:
- PTAS-reduction: separates APX from PTAS
- Gap-preserving reduction: separates NPO from PTAS
PTAS-Reduction
Let Π1 and Π2 be two problems in NPO. Π1 PTAS-reduces to Π2 if there are three functions f, g, and h such that:
- for any instance x of Π1, f(x) is an instance of Π2, computable in time polynomial in |x|;
- for any instance x of Π1, any solution y for the instance f(x) of Π2, and any precision requirement ε, g(x, y, ε) is a solution for x, computable in time polynomial in |x| and |y|;
- h is a computable injective function on the set of rationals in [0, 1); and
- if the value of y meets precision requirement h(ε), then the value of g(x, y, ε) meets precision requirement ε.
Since both f and g are polynomial-time computable, a known approximation algorithm for Π2 combined with g yields a good approximation for Π1.
Properties of PTAS-Reductions
PTAS-reducibility is reflexive and transitive. If Π2 belongs to APX (respectively, PTAS) and Π1 PTAS-reduces to Π2, then Π1 belongs to APX (respectively, PTAS). An optimization problem is complete for NPO (respectively, APX) if it belongs to NPO (respectively, APX) and every problem in NPO (respectively, APX) PTAS-reduces to it.
Max3SAT
Max3SAT admits a 2^(-k) approximation (here k = 3). It is the optimization version of the most fundamental NP-complete problem, and a key problem in APX due to its nature.
OptNP
OptNP is the class of problems that PTAS-reduce to Max3SAT; OptNP ⊆ APX. We do not see natural problems that are complete for NPO and APX, whereas OptNP has at least one: Max3SAT itself. The standard complete problems for NPO and APX are generalizations of Max3SAT.
Why OptNP Is Interesting
OptNP includes many natural problems: bounded-degree Vertex Cover, bounded-degree Independent Set, Maximum Cut. Many of them are OptNP-hard, which can be shown by a PTAS-reduction from Max3SAT; an OptNP-hard problem cannot be in PTAS unless P = NP.
Gap-Preserving Reductions
Goal: prove that an NPO problem does not belong to PTAS unless P = NP. The idea is to create a gap when reducing a decision problem to an optimization problem: a "yes" instance maps to an instance with optimal value on one side of the gap, and a "no" instance to one on the other side.
Gap-Preserving Reductions (cont.)
Let Π1 and Π2 be two maximization problems. A gap-preserving reduction from Π1 to Π2 is a polynomial-time map f between instances, together with two pairs of functions (c1, r1) and (c2, r2), with r1(x) ≥ 1 and r2(f(x)) ≥ 1, such that both implications hold:
- if opt(x) ≥ c1(x), then opt(f(x)) ≥ c2(f(x)); and
- if opt(x) ≤ c1(x)/r1(x), then opt(f(x)) ≤ c2(f(x))/r2(f(x)).
Use of Gap-Preserving Reductions
A gap-preserving reduction is useful in combination with a gap-creating reduction. For a long time it was of limited interest: the problems for which a gap-creating reduction was known were few, so the technique was not used much in further transformations. It has since become possible to prove that none of the OptNP-hard problems can be in PTAS unless P = NP.
Conclusion: NP-Hardness of Approximation Schemes
If a problem is not p-simple, or if its decision version is strongly NP-complete, then it is not in FPTAS, unless P = NP.
Conclusion: NP-Hardness of Approximation Schemes (cont.)
If a problem is not simple, or if it is OptNP-hard, then it is not in PTAS, unless P = NP.
Conclusion
No OptNP-hard problem can be in PTAS unless P = NP. In fact, OptNP = APX.
Thanks! Any questions?