Approximation Algorithms
Spring 2007



Outline and Reading

Approximation Algorithms for NP-Complete Problems (§13.4)
- Approximation ratios
- Polynomial-Time Approximation Schemes (§13.4.1)
  - PTAS for 0/1 Knapsack
- 2-Approximation for Vertex Cover (§13.4.2)
- 2-Approximation for TSP special case (§13.4.3)
- Log n-Approximation for Set Cover (§13.4.4)

Approximation Ratios

Optimization problems: we have a problem instance x that has many feasible ``solutions'', and we are trying to minimize (or maximize) some cost function c(S) for a ``solution'' S to x. For example:
- finding a minimum spanning tree of a graph
- finding a smallest vertex cover of a graph
- finding a shortest traveling salesperson tour of a graph

Approximation Ratios

An approximation algorithm produces a solution T. We call T a δ-approximation to the optimal solution if c(T) ≤ δ·OPT (for a minimization problem), or OPT ≤ δ·c(T) (for a maximization problem). In both cases we assume δ ≥ 1.

Polynomial-Time Approximation Schemes

A problem L has a polynomial-time approximation scheme (PTAS) if, for any fixed ε > 0, it has a (1+ε)-approximation algorithm that runs in polynomial time. ``Polynomial time'' is measured in the input size n, though ε may also appear in the running-time expression. A problem has a fully polynomial-time approximation scheme (FPTAS) if the running time is polynomial in both n and 1/ε. 0/1 Knapsack has an FPTAS, with a running time of O(n³/ε).

PTAS for 0/1 Knapsack

Problem: a set S of items numbered 1, …, n, and a total size limit s. Item i has size s_i and worth w_i. We want to find a subset T of S whose total size is at most s and whose total worth is as large as possible.

PTAS for 0/1 Knapsack

Want: a PTAS that produces a (1+ε)-approximation for any given fixed constant ε > 0. That is, if OPT is the value (total worth) of the optimal solution to S, then our algorithm should find a subset T' satisfying the size constraint such that, if we define w' to be the total worth of T', then OPT ≤ (1+ε)·w'.

PTAS for 0/1 Knapsack

We actually show that w' ≥ (1 − ε/2)·OPT. This suffices, since for 0 < ε < 1 we have 1/(1 − ε/2) < 1 + ε, so w' ≥ (1 − ε/2)·OPT implies OPT ≤ w'/(1 − ε/2) < (1 + ε)·w'.

PTAS for 0/1 Knapsack

We leverage the rescalability of the problem. Assume the size of each item in S is at most s (any larger item can be thrown out without changing the solution). Let w_max denote the maximum worth of the items in S; clearly w_max ≤ OPT ≤ n·w_max. Let M = ε·w_max / (2n), and round each worth w_i down to w_i', the nearest smaller multiple of M. Let S' denote the rounded version of S, and OPT' the value of the optimal solution to S'.

PTAS for 0/1 Knapsack

Note: OPT ≤ n·w_max = n·(2nM/ε) = 2n²M/ε. Also, OPT' ≤ OPT, since we rounded all worth values down without changing the sizes; thus OPT' ≤ 2n²M/ε. So, look at solutions to S': every item worth in S' is a multiple of M, so any achievable worth is a multiple of M, and there are no more than N = ⌈2n²/ε⌉ of these to consider.

PTAS for 0/1 Knapsack

We now use dynamic programming to solve S'. Let s[i,j] be the size of the smallest set of items in {1, 2, …, j} worth exactly iM (j tells us which items we may choose from, and i tells us the multiple of M). Key observation: for i = 1, 2, …, N and j = 1, 2, …, n, item j either will or will not contribute to the smallest way of achieving worth iM from the items in {1, 2, …, j}.

Spring 2007Approximation Algorithms12 PTAS for 0/1 Knapsack min j-1j s i - (w j /M) i +s j

PTAS for 0/1 Knapsack

Define s[i,0] = +∞ for i = 1, 2, …, N (with no items available, a positive worth iM cannot be achieved) and s[0,j] = 0 for j = 1, 2, …, n. The optimal value is OPT' = max{ iM : s[i,n] ≤ s }. The cost of the algorithm is O(nN) = O(n³/ε).

PTAS for 0/1 Knapsack

How good an estimate is OPT'? We reduced the worth of each item by at most M, and the optimal solution contains at most n items, so OPT' ≥ OPT − nM = OPT − ε·w_max/2. But OPT ≥ w_max, so OPT' ≥ OPT − ε·OPT/2 = (1 − ε/2)·OPT, which is what we needed to prove.
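The scheme just described can be sketched in Python. This is an illustrative implementation under the slides' assumptions (positive worths, 0 < ε < 1); the function name and table layout are mine, not from the text. It returns the guaranteed value OPT'; the true worth of the corresponding item set is at least this, hence at least (1 − ε/2)·OPT.

```python
import math

def knapsack_fptas(sizes, worths, capacity, eps):
    """(1 - eps/2)-approximation for 0/1 knapsack via worth rounding.

    Discards items larger than the capacity (as on the slides) and
    assumes positive worths and 0 < eps < 1.  Returns OPT'.
    """
    items = [(s, w) for s, w in zip(sizes, worths) if s <= capacity]
    if not items:
        return 0
    n = len(items)
    w_max = max(w for _, w in items)
    M = eps * w_max / (2 * n)                  # rounding granularity
    rounded = [int(w // M) for _, w in items]  # w_i' = rounded[i] * M
    N = sum(rounded)                           # largest achievable multiple of M

    # s_table[i][j]: smallest total size of a subset of the first j items
    # whose rounded worth is exactly i*M  (s[0][j] = 0, s[i][0] = +inf)
    s_table = [[0] * (n + 1)] + [[math.inf] * (n + 1) for _ in range(N)]
    for i in range(1, N + 1):
        for j in range(1, n + 1):
            size_j = items[j - 1][0]
            skip = s_table[i][j - 1]           # item j not used
            take = math.inf
            if rounded[j - 1] <= i:            # item j used: pay its size
                take = s_table[i - rounded[j - 1]][j - 1] + size_j
            s_table[i][j] = min(skip, take)

    best_i = max(i for i in range(N + 1) if s_table[i][n] <= capacity)
    return best_i * M                          # OPT'
```

Note how the table has O(n²/ε) rows and n columns, giving the O(n³/ε) running time claimed above.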

Vertex Cover

A vertex cover of a graph G = (V,E) is a subset W of V such that, for every edge (a,b) in E, a is in W or b is in W. OPT-VERTEX-COVER: given a graph G, find a vertex cover of G of smallest size. OPT-VERTEX-COVER is NP-hard.

A 2-Approximation for Vertex Cover

Algorithm VertexCoverApprox(G):
    Input: graph G
    Output: a vertex cover C for G
    C ← empty set
    H ← G
    while H has edges do
        e ← H.removeEdge(H.anEdge())
        v ← H.origin(e)
        w ← H.destination(e)
        C.add(v)
        C.add(w)
        for each edge f incident to v or w do
            H.removeEdge(f)
    return C

Every chosen edge e has both of its ends in C, and the chosen edges share no endpoints. Each chosen edge e must be covered by an optimal cover, so at least one end of e is in OPT. Thus there are at most twice as many vertices in C as in OPT; that is, C is a 2-approximation of OPT. Running time: O(m).
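The algorithm above translates almost line for line into Python. In this sketch (function name mine), iterating over the edge list with a membership test plays the role of removing covered edges from H:

```python
def vertex_cover_approx(edges):
    """2-approximation for vertex cover.

    For each edge not yet covered, add BOTH endpoints to the cover;
    edges incident to a chosen vertex are implicitly skipped by the
    membership test.  `edges` is a list of (u, v) pairs.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:   # edge still present in H
            cover.add(u)
            cover.add(v)
    return cover
```

On a star with center 0, for instance, the first edge forces both of its endpoints into the cover, after which every remaining edge is already covered.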

Special Case of the Traveling Salesperson Problem

OPT-TSP: given a complete weighted graph, find a cycle of minimum cost that visits each vertex. OPT-TSP is NP-hard. Special case: the edge weights satisfy the triangle inequality (which is common in many applications): w(a,b) + w(b,c) ≥ w(a,c). For example, a triangle with edge weights 5, 4, and 7 satisfies it, since 5 + 4 ≥ 7.

Special Case of the Traveling Salesperson Problem

The algorithm has three steps:
1. Construct a minimum spanning tree M of G.
2. Construct an Euler tour traversal E of M (walk around M, always keeping edges to our left).
3. Construct the tour T from E by marching around E, except: each time we have (u,v) and (v,w) in E such that v has already been visited, we replace these two edges by the edge (u,w).

A 2-Approximation for the TSP Special Case

Algorithm TSPApprox(G):
    Input: weighted complete graph G satisfying the triangle inequality
    Output: a TSP tour T for G
    M ← a minimum spanning tree for G
    P ← an Euler tour traversal of M, starting at some vertex s
    T ← empty list
    for each vertex v in P (in traversal order) do
        if this is v's first appearance in P then
            T.insertLast(v)
    T.insertLast(s)
    return T

A 2-Approximation for the TSP Special Case: Proof

The optimal tour OPT is a spanning tour; removing any one edge from it yields a spanning tree, so |M| ≤ |OPT|. The Euler tour P visits each edge of M twice, so |P| = 2|M|. Each time we shortcut a vertex in the Euler tour we do not increase the total length, by the triangle inequality (w(a,b) + w(b,c) ≥ w(a,c)); hence |T| ≤ |P|. Therefore |T| ≤ |P| = 2|M| ≤ 2|OPT|.
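The two steps (MST, then shortcut the Euler tour) can be sketched in Python. This illustrative version (names mine) uses Prim's algorithm for the MST, though any MST algorithm works, and exploits the fact that a preorder walk of the rooted MST produces exactly the Euler tour with repeat visits shortcut:

```python
import math

def tsp_approx(dist):
    """2-approximation for metric TSP.

    `dist` is an n x n symmetric matrix obeying the triangle inequality.
    Returns a tour as a vertex order; the cycle closes back to tour[0].
    """
    n = len(dist)
    # Prim's algorithm for a minimum spanning tree rooted at vertex 0
    in_tree = [False] * n
    key = [math.inf] * n
    parent = [0] * n
    key[0] = 0
    for _ in range(n):
        u = min((v for v in range(n) if not in_tree[v]), key=lambda v: key[v])
        in_tree[u] = True
        for v in range(n):
            if not in_tree[v] and dist[u][v] < key[v]:
                key[v] = dist[u][v]
                parent[v] = u
    children = [[] for _ in range(n)]
    for v in range(1, n):
        children[parent[v]].append(v)
    # Preorder walk of the MST = Euler tour with repeat visits shortcut
    tour, stack = [], [0]
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    return tour
```

By the proof above, the returned cycle costs at most twice the optimal tour.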

Set Cover

OPT-SET-COVER: given a collection of m sets, find the smallest subcollection of them whose union is the same as the union of the whole collection. OPT-SET-COVER is NP-hard. A greedy approach produces an O(log n)-approximation algorithm; see §13.4.4 for details.

Algorithm SetCoverApprox(S_1, …, S_m):
    Input: a collection of sets S_1, …, S_m
    Output: a subcollection C with the same union
    F ← {S_1, S_2, …, S_m}
    C ← empty set
    U ← union of S_1, …, S_m
    while U is not empty do
        S_i ← a set in F with the most elements in U
        F.remove(S_i)
        C.add(S_i)
        remove all elements of S_i from U
    return C
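A direct Python rendering of the greedy rule (names mine; assumes at least one input set):

```python
def set_cover_approx(sets):
    """Greedy set cover: repeatedly pick the set covering the most
    still-uncovered elements.  Returns a subcollection with the same
    union; its size is within an O(log n) factor of optimal.
    """
    uncovered = set().union(*sets)   # U: elements still to be covered
    chosen = []
    while uncovered:
        best = max(sets, key=lambda s: len(s & uncovered))
        chosen.append(best)
        uncovered -= best            # remove newly covered elements
    return chosen
```

Each iteration covers at least one new element, so the loop always terminates.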

Backtracking and Branch-and-Bound

Back to NP-Completeness

Many hard problems are NP-complete, but we still need solutions to them, even if those solutions take a long time to generate. Backtracking and branch-and-bound are two design techniques that have proven useful for dealing with NP-complete problems, and they often find ``good enough'' solutions in a reasonable amount of time.

Backtracking

A backtracking algorithm searches through a large set of possibilities in a systematic way. It is typically optimized to avoid symmetries in problem instances (think, e.g., of TSP) and traverses the search space in a manner such that an ``easy'' solution is found if one exists. The technique takes advantage of an inherent structure possessed by many NP-complete problems.

NP-Complete Problem Structure

Recall: acceptance of a solution for an instance x of an NP-complete problem can be verified in polynomial time. Verification typically involves checking a set of choices (the output of the choice function) to see whether they satisfy a formula or demonstrate a successful configuration; e.g., values assigned to Boolean variables, vertices of a graph to include in a special set, or items to go in a knapsack.

Backtracking

Backtracking systematically searches for a solution to a problem by traversing possible ``search paths'' to locate solutions or dead ends. The configuration at the end of such a path consists of a pair (x,y), where x is the remaining subproblem to be solved and y is the set of choices that have been made to get to this subproblem from the original problem instance.

Backtracking

We start the search with the pair (x, ∅), where x is the original problem instance. Any time the backtracking algorithm discovers a configuration (x,y) that cannot lead to a valid solution, it cuts off all future searches from this configuration and backtracks to another configuration.

Backtracking Pseudocode
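The pseudocode figure did not survive in this transcript, so here is a generic Python sketch of the template just described (the function and parameter names are placeholders corresponding to the three problem-specific details the slides list): a frontier F of configurations (x, y), an expansion step, and a consistency check.

```python
def backtrack(initial, expand, check):
    """Generic backtracking template.

    `check(x, y)` returns "solution", "dead end", or "continue";
    `expand(x, y)` returns the subproblem configurations of (x, y).
    Popping from the end of a list makes the frontier a stack,
    i.e. a depth-first search of the solution space.
    """
    frontier = [(initial, ())]          # start from (x, empty choice set)
    while frontier:
        x, y = frontier.pop()
        status = check(x, y)
        if status == "solution":
            return y                    # the choices that solved x
        if status == "continue":
            frontier.extend(expand(x, y))
    return None                         # search space exhausted: no solution

# Toy instantiation: find a subset of `items` summing to `target`.
# A configuration is x = (remaining items, remaining target), y = choices.
def subset_sum(items, target):
    def check(x, y):
        rest, t = x
        if t == 0:
            return "solution"
        if t < 0 or not rest:
            return "dead end"
        return "continue"

    def expand(x, y):
        (head, *rest), t = x
        rest = tuple(rest)
        # either skip the first remaining item, or take it
        return [((rest, t), y), ((rest, t - head), y + (head,))]

    return backtrack((tuple(items), target), expand, check)
```

The subset-sum instantiation is only a toy, but it shows how the three details plug into the template.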

Backtracking: Required Details

1. Define a way of selecting the most promising candidate configuration from the frontier set F. If F is a stack, we get a depth-first search of the solution space; if F is a queue, a breadth-first search; and so on.
2. Specify a way of expanding a configuration (x,y) into subproblem configurations. This process should, in principle, be able to generate all feasible configurations, starting from (x, ∅).
3. Describe how to perform a simple consistency check for a configuration (x,y) that returns ``solution found'', ``dead end'', or ``continue''.

Example: CNF-SAT

Problem: given a Boolean formula S in CNF, we want to know whether S is satisfiable. High-level algorithm: systematically make tentative assignments to the variables in S, and check whether the assignments make S evaluate immediately to 0 or 1, or yield a new formula S' for which we can continue making tentative value assignments. A configuration in our algorithm is a pair (S',y), where S' is a Boolean formula in CNF and y is an assignment of values to the variables NOT in S' such that making these assignments in S yields S'.

Example: CNF-SAT (details)

Given the frontier F of configurations, the most promising subproblem S' is the one with the smallest clause. This formula is the most constrained of all the formulas in F, so we expect it to hit a dead end most quickly (if it is going to hit one). To generate new subproblems from S', locate a smallest clause in S', pick a variable x_i that appears in that clause, and create two new subproblems, associated with assigning x_i = 0 and x_i = 1 respectively.

Example: CNF-SAT (details)

Processing S' for the consistency check after assigning a variable x_i: first, reduce any clauses containing x_i based on the value assigned to x_i. If this reduction results in a new single-literal clause (x_j or ~x_j), assign the value to x_j that satisfies this clause; repeat until there are no single-literal clauses. If we discover a contradiction (single-literal clauses x_i and ~x_i, or an empty clause), return ``dead end''. If we reduce S' all the way to the constant 1, return ``solution found'' along with the assignments made to reach it. Otherwise, derive a new subformula S'' in which each clause has at least two literals, along with the value assignments that lead from S to S'' (this is the reduce operation).

Example: CNF-SAT (details)

Placing this in the backtracking template results in an algorithm with exponential worst-case running time that usually does well in practice. In fact, if each clause has no more than two literals, this algorithm runs in polynomial time.
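A compact, illustrative Python sketch of this CNF-SAT backtracking (not the slides' exact pseudocode): clauses are sets of nonzero integers, with −v standing for ~x_v; the branch variable comes from a smallest clause; and the reduce step performs the single-literal propagation described above. Assumes no input clause is empty.

```python
def sat_backtrack(clauses):
    """Backtracking CNF-SAT in the style described on the slides.

    Returns a satisfying assignment {var: bool} or None if unsatisfiable.
    """
    def reduce_by(cls, lit):
        """Apply literal `lit`; return the reduced clause list, or
        None if an empty clause (a contradiction) appears."""
        out = []
        for c in cls:
            if lit in c:
                continue              # clause satisfied: drop it
            rest = c - {-lit}         # falsified literal: remove it
            if not rest:
                return None           # empty clause: dead end
            out.append(rest)
        return out

    def solve(cls, assignment):
        # reduce: repeatedly satisfy single-literal clauses
        while True:
            unit = next((c for c in cls if len(c) == 1), None)
            if unit is None:
                break
            lit = next(iter(unit))
            cls = reduce_by(cls, lit)
            if cls is None:
                return None
            assignment = {**assignment, abs(lit): lit > 0}
        if not cls:
            return assignment         # formula reduced to constant 1
        # branch on a variable appearing in a smallest clause
        var = abs(next(iter(min(cls, key=len))))
        for lit in (var, -var):       # try x_var = 1, then x_var = 0
            rest = reduce_by(cls, lit)
            if rest is not None:
                result = solve(rest, {**assignment, var: lit > 0})
                if result is not None:
                    return result
        return None

    return solve([set(c) for c in clauses], {})
```

When every clause has at most two literals, each reduction step immediately creates unit clauses, which is the intuition behind the polynomial-time behavior noted above.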

Branch-and-Bound

Backtracking is not designed for optimization problems, where in addition to a feasibility condition that must be satisfied, we also have a cost f(x) to be optimized (we will assume minimized). We can extend backtracking to work with optimization problems; the result is branch-and-bound. When a solution is found, we continue processing to find the best solution, and we add a scoring mechanism to always choose the most promising configuration to explore in each iteration (best-first search).

Branch-and-Bound

In addition to the details required for the backtracking design pattern, we add one more to handle optimization: for any configuration (x,y), we assume we have a lower bound function lb(x,y) that returns a lower bound on the cost of any solution derived from this configuration. The only requirement on lb(x,y) is that it must be less than or equal to the cost of any derived solution; of course, a more accurate lower bound function results in a more efficient algorithm.

Branch-and-Bound Pseudocode
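As with backtracking, the pseudocode figure is missing from this transcript, so here is an illustrative best-first branch-and-bound template in Python (names are placeholders, not from the text). It keeps the frontier in a priority queue keyed by the lower bound and prunes any configuration whose bound cannot beat the best solution found so far.

```python
import heapq

def branch_and_bound(initial, expand, lb, solution_cost):
    """Best-first branch-and-bound for a minimization problem.

    `lb(config)` lower-bounds every solution derivable from config;
    `solution_cost(config)` gives the cost if config is a complete
    solution, else None; `expand(config)` yields child configurations.
    """
    best, best_cost = None, float("inf")
    counter = 0                      # tie-breaker so configs never compare
    heap = [(lb(initial), counter, initial)]
    while heap:
        bound, _, config = heapq.heappop(heap)
        if bound >= best_cost:
            continue                 # cannot beat the incumbent: prune
        cost = solution_cost(config)
        if cost is not None:
            if cost < best_cost:
                best, best_cost = config, cost
            continue
        for child in expand(config):
            if lb(child) < best_cost:
                counter += 1
                heapq.heappush(heap, (lb(child), counter, child))
    return best, best_cost
```

For instance, to pick one number from each of several lists while minimizing the total, a configuration is the tuple chosen so far, and lb adds the smallest remaining entry of each untouched list to the cost so far.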

Branch-and-Bound Algorithm for TSP

Problem: the optimization version of TSP: given a weighted graph G, we want a least-cost tour. For an edge e, let c(e) be the weight (cost) of e. For each edge e = (v,w) in G, we compute the minimum-cost path that begins at v, ends at w, and visits all other vertices of G along the way; together with e, such a path forms a tour. We generate the path from v to w in G − {e} by augmenting the current path by one vertex in each loop iteration of the branch-and-bound algorithm.

Branch-and-Bound Algorithm for TSP

Having built a partial path P, starting, say, at v, we only consider augmenting P with vertices not in P. We can classify a partial path P as a ``dead end'' if the vertices not in P are disconnected in G − {e}. We define the lower bound function to be the total cost of the edges in P plus c(e), which is certainly a lower bound for any tour built from e and P.

Branch-and-Bound Algorithm for TSP

After running the algorithm to completion for one edge of G, we can start from the best path found so far over all tested edges, rather than restarting the current best solution b at +∞. The running time can still be exponential, but this eliminates a considerable amount of redundancy. There are other heuristics for approximating TSP; we will not discuss them here.