Approximation Algorithms


Approximation Algorithms

What is an NP problem? Given an instance V of the problem and a 'certificate' C, we can verify in polynomial time that V is in the language. All problems in P are also NP problems. (Why? A polynomial-time algorithm can solve the instance directly, ignoring the certificate.)

What is NP-Complete? A problem is NP-Complete if: it is in NP, and every other NP problem has a polynomial-time reduction to it. NP-Complete problems include: 3-SAT, VERTEX-COVER, CLIQUE, HAMILTONIAN-PATH (HAMPATH).

Dilemma NP-hard problems need solutions in real life, but we only know exponential-time algorithms. What do we do?

A Solution There are many important NP-Complete problems with no known fast solution, but we still want the answer. Options: if the input is small, use backtracking; isolate special cases that are solvable in polynomial time; or find a near-optimal solution in polynomial time.

Accuracy NP-hard problems are often optimization problems, and it's hard to find the EXACT answer. Maybe it's enough to know that our answer is close to the exact one?

Approximation Algorithms Can be created for optimization problems. The exact answer for an instance is OPT; the approximate answer is guaranteed to be close to OPT. We CANNOT approximate decision problems: a yes/no answer is either right or wrong.

Performance ratios We are going to find a near-optimal solution for a given problem. We make two assumptions: each potential solution has a positive cost, and the problem is either a maximization or a minimization problem on that cost.

Performance ratios … If, for any input of size n, the cost C of the solution produced by the algorithm is within a factor ρ(n) of the cost C* of an optimal solution, i.e. max(C/C*, C*/C) ≤ ρ(n), then we call the algorithm a ρ(n)-approximation algorithm.

Performance ratios … In maximization problems: C*/ρ(n) ≤ C ≤ C*. In minimization problems: C* ≤ C ≤ ρ(n)·C*. ρ(n) is never less than 1, and a 1-approximation algorithm produces the optimal solution. The goal is to find a polynomial-time approximation algorithm with a small constant approximation ratio.

Approximation scheme An approximation scheme is an approximation algorithm that takes ε > 0 as an additional input such that, for any fixed ε > 0, the scheme is a (1 + ε)-approximation algorithm. A polynomial-time approximation scheme (PTAS) is such an algorithm that runs in time polynomial in the size of the input for each fixed ε. As ε decreases, the running time can increase rapidly: for example, it might be O(n^(2/ε)).

Approximation scheme We have a fully polynomial-time approximation scheme (FPTAS) when the running time is polynomial not only in n but also in 1/ε. For example, it could be O((1/ε)³·n²).

Some examples: the vertex-cover problem, the traveling salesman problem, the set-cover problem.

VERTEX-COVER Given a graph G, return a smallest set of vertices such that every edge has an endpoint in the set.

The vertex-cover problem A vertex cover of an undirected graph G is a subset of its vertices that includes at least one endpoint of every edge. The problem is to find a minimum-size vertex cover of the given graph. This problem is NP-Complete.

The vertex-cover problem … Finding the optimal solution is hard (it's NP-hard!), but finding a near-optimal solution is easy. There is a 2-approximation algorithm: it returns a vertex cover of at most twice the size of an optimal one.

The vertex-cover problem … APPROX-VERTEX-COVER(G)
1 C ← Ø
2 E′ ← E[G]
3 while E′ ≠ Ø
4    do let (u, v) be an arbitrary edge of E′
5       C ← C ∪ {u, v}
6       remove from E′ every edge incident on u or v
7 return C
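The pseudocode above translates almost directly into code. A minimal Python sketch (not from the slides; the edge-list representation is an assumption): scanning the edges in any order and skipping already-covered ones is equivalent to deleting every edge incident on u or v from E′.

```python
def approx_vertex_cover(edges):
    """2-approximation for vertex cover: take both endpoints of any
    edge neither of whose ends is covered yet; skipping covered edges
    plays the role of removing edges incident on u or v."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover
```

Because the selected edges form a matching, any optimal cover must contain at least one vertex per selected edge, giving the factor-2 guarantee.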

The vertex-cover problem … Example: the algorithm may return a near-optimal cover of size 6 on a graph whose optimal cover has size 3. (Figure.)

The vertex-cover problem … This is a polynomial-time 2-approximation algorithm. (Why?) Because: APPROX-VERTEX-COVER runs in O(V + E) time. Let A be the set of edges selected in line 4; no two of them share an endpoint, so any optimal cover C* needs a distinct vertex for each: |C*| ≥ |A|. The algorithm adds both endpoints of each selected edge, so |C| = 2|A|, hence |C| ≤ 2|C*|.

Minimum Spanning Tree Given a graph G, a spanning tree of G is an acyclic subgraph that connects all of its vertices. An MST is a spanning tree of minimum total weight.

Finding an MST An MST can be found in polynomial time using PRIM'S ALGORITHM or KRUSKAL'S ALGORITHM. Both are greedy algorithms.
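As an illustration, here is a sketch of Kruskal's greedy algorithm in Python (not from the slides); the (weight, u, v) edge format and the union-find structure with path halving are implementation choices.

```python
def kruskal(n, edges):
    """Kruskal's MST: scan edges by increasing weight, keeping an edge
    only when it joins two different components (no cycle is created)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            mst.append((u, v, w))
    return mst
```

The greedy choice is safe because the lightest edge crossing any cut belongs to some MST.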

HAMILTONIAN CYCLE Given a graph G, find a cycle that visits every vertex exactly once. TSP version: find the Hamiltonian cycle of minimum total weight.

MST vs HAM-CYCLE Any Hamiltonian cycle becomes a spanning tree when one edge is removed, so cost(MST) ≤ cost(min-HAM-CYCLE).

Traveling salesman problem Given an undirected complete weighted graph G, we are to find a minimum-cost Hamiltonian cycle. Whether or not the weights satisfy the triangle inequality, this problem is NP-Complete. When they do satisfy it (e.g., distances between points in the plane), the problem is called Euclidean (metric) TSP, and it can be approximated.

Traveling salesman problem Compared with exact methods, an approximation algorithm gives a near-optimal solution while being faster and easier to implement.

Euclidian Traveling Salesman Problem APPROX-TSP-TOUR(G, W)
1 select a vertex r ∈ V[G] to be the root
2 compute an MST T for G from root r using Prim's algorithm
3 L ← list of vertices in a preorder walk of T
4 return the Hamiltonian cycle H that visits the vertices in the order L
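A minimal Python sketch of this tour algorithm (not the slides' code; representing the instance as points in the plane, so the triangle inequality holds automatically, is an assumption):

```python
import math

def approx_tsp_tour(points):
    """Metric-TSP 2-approximation: grow an MST with Prim's algorithm,
    then output the vertices in a preorder walk of the tree."""
    n = len(points)
    dist = lambda i, j: math.dist(points[i], points[j])
    in_tree = [False] * n
    best = [math.inf] * n      # cheapest known connection cost to the tree
    parent = [-1] * n
    children = [[] for _ in range(n)]
    best[0] = 0.0
    for _ in range(n):
        u = min((v for v in range(n) if not in_tree[v]), key=lambda v: best[v])
        in_tree[u] = True
        if parent[u] != -1:
            children[parent[u]].append(u)
        for v in range(n):
            if not in_tree[v] and dist(u, v) < best[v]:
                best[v], parent[v] = dist(u, v), u
    tour, stack = [], [0]      # iterative preorder walk from the root
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    return tour
```

The O(n²) Prim loop matches the running time claimed for APPROX-TSP-TOUR below; the preorder walk shortcuts the doubled tree walk, which is where the triangle inequality is needed.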

Euclidian Traveling Salesman Problem (Figure: the MST rooted at r, its preorder walk, and the resulting Hamiltonian cycle.)

Traveling salesman problem This is a polynomial-time 2-approximation algorithm. (Why?) Because: APPROX-TSP-TOUR runs in O(V²) time; C(MST) ≤ C(H*), where H* is the optimal tour (deleting any edge of H* leaves a spanning tree); C(W) = 2·C(MST), where W is the full preorder walk (it traverses every tree edge twice); so C(W) ≤ 2·C(H*); and C(H) ≤ C(W), where H is the returned tour, because by the triangle inequality shortcutting repeated vertices never increases cost. Hence C(H) ≤ 2·C(H*).

EULER CYCLE Given a graph G, find a cycle that visits every edge exactly once. Necessary and sufficient conditions: G is connected and every vertex has even degree. Algorithm (O(n²)): repeatedly use DFS to find and remove a cycle from G, then merge all the cycles into one cycle.
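Here is a Python sketch of the same task using Hierholzer's stack-based algorithm rather than the slide's repeated-DFS formulation (a deliberate substitution: it runs in time linear in the number of edges); the adjacency-list-of-lists representation is an assumption.

```python
def euler_cycle(adj):
    """Hierholzer's algorithm for an Euler cycle.  adj maps each vertex to
    a list of neighbours (one entry per edge end); assumes the graph is
    connected and every vertex has even degree."""
    adj = {u: list(vs) for u, vs in adj.items()}   # work on a copy
    stack, cycle = [next(iter(adj))], []
    while stack:
        u = stack[-1]
        if adj[u]:                 # follow any unused edge out of u
            v = adj[u].pop()
            adj[v].remove(u)       # consume the reverse copy of the edge
            stack.append(v)
        else:                      # u is exhausted: emit it
            cycle.append(stack.pop())
    return cycle
```

The returned list starts and ends at the same vertex and uses each edge exactly once, which is all APPROX-TSP-TOUR2 needs.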

Min-Weight Matching Given a complete weighted graph G with an even number of vertices, find a perfect matching of minimum total weight. Algorithm (O(n³)): can be formulated as a linear programming problem, but is solved using a specialized combinatorial algorithm.

Euclidian Traveling Salesman Problem APPROX-TSP-TOUR2(G, c)
1 select a vertex r ∈ V[G] to be the root
2 compute an MST T for G from root r using Prim's algorithm
3 find a minimum-weight matching M for the vertices of odd degree in T
4 find an Euler cycle A in G′ = (V, T ∪ M)
5 L ← list of vertices in order of first appearance in A
6 return the Hamiltonian cycle H that visits the vertices in the order L

Euclidian Traveling Salesman Problem (Figure: the MST, the minimum matching on the odd-degree vertices, the Euler cycle, and the final Hamiltonian cycle.)

Time Complexity APPROX-TSP-TOUR2(G, c)
1 select a vertex r ∈ V[G] to be the root — O(1)
2 compute an MST T for G from root r using Prim's algorithm — O(n lg n)
3 find a minimum-weight matching M for the vertices of odd degree in T — O(n³)
4 find an Euler cycle A in G′ = (V, T ∪ M) — O(n²)
5 L ← list of vertices in order of first appearance in A — O(n)
6 return the Hamiltonian cycle H that visits the vertices in the order L
Total: O(n³).

Traveling salesman problem This is a polynomial-time 3/2-approximation algorithm. (Why?) Because: APPROX-TSP-TOUR2 runs in O(n³) time; C(MST) ≤ C(H*), where H* is the optimal tour; C(M) ≤ 0.5·C(H*), where M is the minimum matching (proved on the next slide); C(A) = C(MST) + C(M), where A is the Euler cycle; and C(H) ≤ C(A) by the triangle inequality, where H is the returned tour. Hence C(H) ≤ 1.5·C(H*).

Proof of C(M) ≤ 0.5·C(H*) Let the optimal tour be H*: j1…i1 j2…i2 j3…i2m, where {i1, i2, …, i2m} is the set of odd-degree vertices of T, listed in tour order. Define two matchings: M1 = {[i1,i2], [i3,i4], …, [i2m−1,i2m]} and M2 = {[i2,i3], [i4,i5], …, [i2m,i1]}. Since M is a minimum matching, C(M) ≤ C(M1) and C(M) ≤ C(M2). By the triangle inequality, C(H*) ≥ C(M1) + C(M2) ≥ 2·C(M), so C(M) ≤ ½·C(H*).

TSP In General Theorem: if P ≠ NP, then for any constant ρ ≥ 1 there is no polynomial-time ρ-approximation algorithm for the general TSP. Proof: suppose we had a polynomial-time ρ-approximation algorithm; we could then decide HAM-CYCLE. Given G = (V, E), build a complete graph on V with costs c(u,w) = 1 if (u,w) ∈ E, and ρ|V| + 1 otherwise (for edges not in E). If G has a Hamiltonian cycle, the optimal TSP tour has cost H* = |V|, so the approximation returns a tour of cost ≤ ρ|V|. If G has no Hamiltonian cycle, every tour uses at least one edge of cost ρ|V| + 1, so its total cost is at least (ρ|V| + 1) + (|V| − 1) > ρ|V|. Thus G has a Hamiltonian cycle iff the returned tour costs ≤ ρ|V|.

The Set-Cover Problem Instance (X, F): X is a finite set of elements; F is a family of subsets of X. A solution C is a subset of F whose members together include every element of X. Set-Cover is in NP, and it is NP-hard: it is a generalization of the vertex-cover problem.

An example: |X| = 12, |F| = 6; the minimum covering subfamily has size 3. (Figure.)

A Greedy Algorithm GREEDY-SET-COVER(X, F)
1 M ← X
2 C ← Ø
3 while M ≠ Ø do
4    select S ∈ F that maximizes |S ∩ M|
5    M ← M − S
6    C ← C ∪ {S}
7 return C
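The greedy rule above can be sketched in a few lines of Python (not the slides' code; representing F as a list of sets is an assumption):

```python
def greedy_set_cover(X, F):
    """Greedy set cover: repeatedly pick the set S in F that covers
    the most still-uncovered elements of X."""
    uncovered = set(X)
    cover = []
    while uncovered:
        best = max(F, key=lambda S: len(S & uncovered))
        if not best & uncovered:
            raise ValueError("F does not cover X")
        cover.append(best)
        uncovered -= best
    return cover
```

Each iteration removes the largest coverable chunk of M, which is exactly the choice in line 4 of the pseudocode.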

Not optimal … On the example above, the greedy choices (1st, 2nd, 3rd, 4th) produce a covering family of size 4, while the optimum has size 3. (Figure.)

Set-Cover … This greedy algorithm is a polynomial-time ρ(n)-approximation algorithm with ρ(n) = lg n.

The bin packing problem Given n items a1, a2, …, an with 0 < ai ≤ 1 for 1 ≤ i ≤ n, determine the minimum number of bins of unit capacity needed to accommodate all n items. E.g. n = 5, {0.3, 0.5, 0.8, 0.2, 0.4}. The bin packing problem is NP-hard.

APPROXIMATE BIN PACKING Problem: pack objects, each of size ≤ 1, into the minimum (optimal) number of bins, each of size 1 (NP-hard). Online problem: we do not have access to the full set; items must be placed incrementally as they arrive. Offline problem: we can reorder the set before starting. Theorem: no online algorithm can guarantee better than 4/3 of the optimal number of bins on every input set.

NEXT-FIT ONLINE BIN-PACKING If the current item fits in the current bin, put it there; otherwise move on to a new bin. Linear time with respect to the number of items: O(n) for n items. Theorem: suppose M optimal bins are needed for an input; next-fit never needs more than 2M bins. Proof: for consecutive bins, content(Bj) + content(Bj+1) > 1 (otherwise bin Bj+1's first item would have fit in Bj), so wastage(Bj) + wastage(Bj+1) < 2 − 1 = 1. On average less than half of each bin's space is wasted, so no more than 2M bins are needed.
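A minimal Python sketch of next-fit (not from the slides; the small tolerance guarding against floating-point round-off is an implementation choice):

```python
def next_fit(items, eps=1e-9):
    """Next-fit bin packing: only the current bin stays open; when an
    item does not fit, start a new bin.  O(n) for n items.  eps guards
    against floating-point round-off when sizes sum to exactly 1."""
    bins, space = [[]], 1.0
    for a in items:
        if a > space + eps:
            bins.append([])
            space = 1.0
        bins[-1].append(a)
        space -= a
    return bins
```

On the example input {0.3, 0.5, 0.8, 0.2, 0.4} this packs [0.3, 0.5], [0.8, 0.2], [0.4] into three bins.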

FIRST-FIT ONLINE BIN-PACKING Scan the existing bins, starting from the first, to find a place for the next item; if none exists, create a new bin. O(N²) naively; O(N log N) is possible, for N items. It obviously cannot need more than 2M bins, and it wastes less than next-fit. Theorem: first-fit never needs more than ⌈1.7M⌉ bins. (Proof: too complicated.) Empirically, for random (Gaussian) input sequences it takes about 2% more bins than optimal. Great!

BEST-FIT ONLINE BIN-PACKING Scan the bins to find the tightest spot for each item (reducing wastage even further than the previous algorithms); if none exists, create a new bin. It does not improve on first-fit's worst-case optimality, but it does not take more worst-case time either, and it is easy to code.

OFFLINE BIN-PACKING Sort the items into non-increasing order (larger to smaller) first, and then apply one of the same algorithms as before. Theorem: if M is the optimal number of bins, first-fit-decreasing will not take more than M + (1/3)M bins.
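First-fit and its offline (first-fit-decreasing) variant described above can be sketched together in Python (not from the slides; this is the naive O(N²) scan, and the round-off tolerance is an implementation choice):

```python
def first_fit(items, eps=1e-9):
    """First-fit: put each item into the lowest-indexed bin with room,
    opening a new bin only when no existing bin fits."""
    bins, space = [], []
    for a in items:
        for i, s in enumerate(space):
            if a <= s + eps:
                bins[i].append(a)
                space[i] -= a
                break
        else:                       # no existing bin had room
            bins.append([a])
            space.append(1.0 - a)
    return bins

def first_fit_decreasing(items):
    """Offline variant: sort items largest-first, then run first-fit."""
    return first_fit(sorted(items, reverse=True))
```

Sorting first places the large, hard-to-fit items while bins are still empty, which is why the offline variant has the better guarantee.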

Polynomial-Time Approximation Schemes A problem L has a fully polynomial-time approximation scheme (FPTAS) if it has a (1 + ε)-approximation algorithm that runs in time polynomial in both n and 1/ε, for any fixed ε > 0. 0/1 knapsack has an FPTAS with a running time that is O(n³/ε).

Knapsack Problem Knapsack problem. Given n objects and a "knapsack": item i has value vi > 0 and weight wi > 0, and the knapsack can carry weight up to W. Goal: fill the knapsack so as to maximize total value. (We'll assume every wi ≤ W.)
Item  Value  Weight
1     1      1
2     6      2
3     18     5
4     22     6
5     28     7
W = 11. Ex: {3, 4} has value 40.

Knapsack is NP-Complete KNAPSACK: given a finite set X, nonnegative weights wi, nonnegative values vi, a weight limit W, and a target value V, is there a subset S ⊆ X such that Σi∈S wi ≤ W and Σi∈S vi ≥ V? SUBSET-SUM: given a finite set X, nonnegative values ui, and an integer U, is there a subset S ⊆ X whose elements sum to exactly U? Claim. SUBSET-SUM ≤P KNAPSACK. Pf. Given an instance (u1, …, un, U) of SUBSET-SUM, create the KNAPSACK instance with wi = vi = ui and W = V = U.

Knapsack Problem: Dynamic Programming 1 Def. OPT(i, w) = max value of a subset of items 1, …, i with weight limit w. Case 1: OPT does not select item i; it selects the best of 1, …, i−1 using weight limit w. Case 2: OPT selects item i; the new weight limit is w − wi, and OPT selects the best of 1, …, i−1 using weight limit w − wi. So OPT(i, w) = max(OPT(i−1, w), vi + OPT(i−1, w − wi)) when wi ≤ w, and OPT(i−1, w) otherwise. Running time: O(nW), where W = weight limit. Not polynomial in input size!
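The recurrence can be sketched directly in Python (not the slides' code; the table layout is an implementation choice):

```python
def knapsack_by_weight(values, weights, W):
    """DP over weights: opt[i][w] = max value using items 1..i with
    weight limit w.  Runs in O(nW) time."""
    n = len(values)
    opt = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        v, w = values[i - 1], weights[i - 1]
        for cap in range(W + 1):
            opt[i][cap] = opt[i - 1][cap]          # case 1: skip item i
            if w <= cap:                           # case 2: take item i
                opt[i][cap] = max(opt[i][cap], v + opt[i - 1][cap - w])
    return opt[n][W]
```

On the example instance from the table above (W = 11), this returns 40, matching the optimal set {3, 4}.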

Knapsack Problem: Dynamic Programming II Def. OPT(i, v) = min weight of a subset of items 1, …, i that yields value exactly v. Case 1: OPT does not select item i; it selects the best of 1, …, i−1 that achieves exactly value v. Case 2: OPT selects item i; this consumes weight wi, and the value still needed becomes v − vi. So OPT(i, v) = min(OPT(i−1, v), wi + OPT(i−1, v − vi)).

Knapsack: FPTAS (Figure: successive animation frames filling in the OPT(i, v) table for i = 0, 1, …, 5 on the rounded example instance with W = 11; the completed table yields the solution S = {1, 2, 5}.)

Knapsack: FPTAS Tracing the solution:
// first call: pick_item(n, v), where v is the largest value with M[n, v] <= W
pick_item(i, v) {
    if (v == 0) return;
    if (M[i, v] == wi + M[i-1, v - vi]) {   // item i was taken
        print i;
        pick_item(i-1, v - vi);
    } else
        pick_item(i-1, v);                  // item i was skipped
}

Knapsack Problem: Dynamic Programming II Def. OPT(i, v) = min weight of a subset of items 1, …, i that yields value exactly v (cases as before). Running time: O(nV*) = O(n²·vmax), where V* = optimal value = the maximum v such that OPT(n, v) ≤ W, and V* ≤ n·vmax. Not polynomial in input size!

Knapsack: FPTAS Intuition for the approximation algorithm: round all values up to lie in a smaller range, run dynamic programming algorithm II on the rounded instance, and return the items that are optimal for the rounded instance.
Original instance (W = 11):
Item  Value       Weight
1     934,221     1
2     5,956,342   2
3     17,810,013  5
4     21,217,800  6
5     27,343,199  7
Rounded instance (W = 11):
Item  Value  Weight
1     1      1
2     1      2
3     3      5
4     4      6
5     6      7
S = {1, 2, 5}

Knapsack: FPTAS Knapsack FPTAS. Round up all values: v̄i = ⌈vi/θ⌉·θ and v̂i = ⌈vi/θ⌉, where vmax = the largest value in the original instance, ε = the precision parameter, and θ = the scaling factor = ε·vmax / n. Observation: optimal solutions to the problem with values v̄ and with values v̂ are equivalent. Intuition: v̄ is close to v, so an optimal solution using v̄ is nearly optimal; v̂ is small and integral, so the dynamic programming algorithm is fast. Running time: O(n³/ε). Dynamic program II runs in O(n²·v̂max) time, where v̂max = ⌈vmax/θ⌉ = ⌈n/ε⌉.
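Under the assumptions above (scale by θ = ε·vmax/n, run the min-weight-per-value DP on the scaled values), the whole scheme can be sketched in Python; the 0-based item indices and the frozenset bookkeeping are implementation choices, not from the slides:

```python
import math

def knapsack_fptas(values, weights, W, eps):
    """Knapsack FPTAS sketch: scale values by theta = eps * vmax / n,
    round up, run the min-weight-per-value DP on the small scaled
    values, and return the chosen item indices (0-based).  O(n^3 / eps)."""
    n = len(values)
    theta = eps * max(values) / n
    scaled = [math.ceil(v / theta) for v in values]
    V, INF = sum(scaled), float("inf")
    # best[v] = (min weight achieving scaled value exactly v, chosen items)
    best = [(INF, frozenset())] * (V + 1)
    best[0] = (0.0, frozenset())
    for i in range(n):
        sv, w = scaled[i], weights[i]
        for v in range(V, sv - 1, -1):     # backwards: each item used once
            pw, ps = best[v - sv]
            if pw + w < best[v][0]:
                best[v] = (pw + w, ps | {i})
    # largest scaled value whose minimum weight fits in the knapsack
    v_best = max(v for v in range(V + 1) if best[v][0] <= W)
    return best[v_best][1]
```

The scaled values are at most about n/ε, so the DP table stays small regardless of how large the original values are; that is the entire trick.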

Knapsack: FPTAS Knapsack FPTAS. Round up all values: v̄i = ⌈vi/θ⌉·θ. Theorem: if S is the solution found by our algorithm and S* is any other feasible solution of the original problem, then (1 + ε)·Σi∈S vi ≥ Σi∈S* vi. Pf. Let S* be any feasible solution satisfying the weight constraint. We always round up, so Σi∈S* vi ≤ Σi∈S* v̄i. We solve the rounded instance optimally, so Σi∈S* v̄i ≤ Σi∈S v̄i. We never round up by more than θ and |S| ≤ n, so Σi∈S v̄i ≤ Σi∈S vi + nθ. Finally, nθ = ε·vmax, and vmax ≤ Σi∈S vi (WLOG no individual item has weight exceeding the limit W all by itself, so the algorithm's solution is worth at least vmax). Combining: Σi∈S* vi ≤ Σi∈S vi + ε·vmax ≤ (1 + ε)·Σi∈S vi.