1 The instructor will be absent on March 29th. The class resumes on March 31st.
2 Section 9-7 PTAS (Polynomial-Time Approximation Scheme) and Error Ratio –A family of approximation algorithms –For any pre-specified error bound ε, the family contains an approximation algorithm with error ratio at most ε –For each fixed ε, the complexity is still polynomial in n (it may also depend on 1/ε)
3 Planar Graph Definition: A graph is said to be embedded on a surface S if it can be drawn on S so that its edges intersect only at their end vertices. A graph is a planar graph if it can be embedded on a plane.
4 Examples
5 Face Definition: A face is a region defined by a planar embedding. –The unbounded face is called the exterior face. –All other faces are called interior faces. (Examples are shown in the figures.)
6 k-outerplanar In a planar embedding, we can associate each node with a level. –Nodes on the exterior face are of level 1. –After removing all nodes of levels 1, …, i, the nodes on the exterior face of the remaining graph are of level i+1. A graph is k-outerplanar if it has no node with level greater than k.
7 Example
8 Level
9 Level
10 Level
11 Max Independent Set Problem (cf. maximum vs. maximal). The maximum independent set problem on planar graphs is NP-hard. For a k-outerplanar graph, an optimal solution of the maximum independent set problem can be found in O(8^k n) time, –through a dynamic programming approach –n is the number of vertices.
A Planar Graph with 9 Levels
Nodes on Level 1,4,7
Nodes on Level 2,5,8
Nodes on Level 3,6,9
Graph Obtained by Removing Nodes on Levels 3, 6, 9 The remaining graph is 2-outerplanar. Its maximum independent set problem can be solved in O(8^k n) time, where k = 2.
17 Algorithm 9-7 An Approximation Algorithm to Solve the Max Independent Set Problem on Planar Graphs Step 1. For all i = 0, 1, 2, …, k, do –(1.1) Let G_i be the graph obtained by deleting all nodes whose levels are congruent to i (mod k+1). The remaining subgraphs are all k-outerplanar graphs. –(1.2) For each k-outerplanar subgraph, find its maximum independent set. Let S_i denote the union of these solutions. Step 2. Among S_0, S_1, …, S_k, choose the S_j with the maximum size and let it be our approximate solution S_APX. Time Complexity: O(8^k kn)
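A minimal Python sketch of Algorithm 9-7. It assumes two helper routines that are not given in these slides and whose names are only illustrative: node_level(G), returning the level of every node in a planar embedding, and mis_k_outerplanar(H, k), the O(8^k n) dynamic-programming solver for a k-outerplanar graph.

import networkx as nx

def planar_mis_ptas(G, k, node_level, mis_k_outerplanar):
    """Approximate a maximum independent set of a planar graph G (Algorithm 9-7).

    node_level(G)            -- assumed helper: dict mapping node -> level
    mis_k_outerplanar(H, k)  -- assumed helper: exact MIS of a k-outerplanar graph
    """
    level = node_level(G)
    best = set()
    for i in range(k + 1):                       # Step 1: i = 0, 1, ..., k
        # (1.1) delete the nodes whose level is congruent to i (mod k+1)
        kept = [v for v in G.nodes if level[v] % (k + 1) != i]
        H = G.subgraph(kept)
        # (1.2) solve each remaining (k-outerplanar) piece exactly; take the union
        S_i = set()
        for comp in nx.connected_components(H):
            S_i |= set(mis_k_outerplanar(H.subgraph(comp), k))
        if len(S_i) > len(best):                 # Step 2: keep the largest S_i
            best = S_i
    return best

Choosing k = ceiling(1/ε) − 1 (slide 20) makes the returned set at least (1 − ε) times the optimum size.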
18 Analysis All nodes are divided into (k+1) classes –Each class corresponds to a level congruent to i (mod k+1), for i = 0, 1, …, k. For every independent set S, the average number of nodes of this set in each class is |S|/(k+1). There is at least one r such that at most |S_OPT|/(k+1) of the vertices in S_OPT are at a level congruent to r (mod k+1).
19 Analysis (cont.) Because at most |S_OPT|/(k+1) nodes are deleted, the set obtained by deleting the class-r nodes from S_OPT still has at least |S_OPT|(1 − 1/(k+1)) = |S_OPT|·k/(k+1) nodes, and it is an independent set of G_r. Since the algorithm solves G_r's k-outerplanar pieces optimally, |S_r| ≥ |S_OPT|·k/(k+1). According to our algorithm, |S_APX| ≥ |S_r|.
20 PTAS If we set k = ceiling(1/ε) − 1, then 1/(k+1) ≤ ε and the above formula becomes |S_APX| ≥ |S_OPT|·k/(k+1) ≥ (1 − ε)|S_OPT|, i.e. the error ratio (|S_OPT| − |S_APX|)/|S_OPT| is at most ε. Thus for every given error bound ε we have a corresponding k that guarantees the approximate solution differs from the optimum one within this error ratio. No matter how small ε is, the complexity of the algorithm is O(8^k kn), which is polynomial with respect to n.
21 0/1 Knapsack Problem n objects, each with a weight w_i > 0 and a profit p_i > 0; capacity of knapsack: M. Maximize Σ p_i x_i subject to Σ w_i x_i ≤ M, x_i = 0 or 1, 1 ≤ i ≤ n.
22 Greedy on Density A bad example with three items: p_1 = 2k+3, p_2 = p_3 = 2k and w_1 = k+1, w_2 = w_3 = k. With M = 2k, greedy by density picks only item 1, so APX = 2k+3 while OPT = 4k (items 2 and 3); the error ratio approaches 1/2 as k grows. With M = 2k+1, greedy picks items 1 and 2 and APX = OPT.
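A small sketch of greedy-by-density, run on the instance above with k = 10 to show the gap. The instance values follow the reconstruction on this slide; the function name is only illustrative.

def greedy_by_density(profits, weights, M):
    """Greedy 0/1 knapsack: scan items in nonincreasing profit/weight order."""
    order = sorted(range(len(profits)),
                   key=lambda i: profits[i] / weights[i], reverse=True)
    total_p, remaining = 0, M
    for i in order:
        if weights[i] <= remaining:      # take the item if it still fits
            remaining -= weights[i]
            total_p += profits[i]
    return total_p

k = 10
profits = [2 * k + 3, 2 * k, 2 * k]      # = [23, 20, 20]
weights = [k + 1, k, k]                  # = [11, 10, 10]
print(greedy_by_density(profits, weights, 2 * k))   # 23 (greedy)
print(2 * k + 2 * k)                                # 40 (optimum: items 2 and 3)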
PTAS of 0/1 Knapsack Problem We shall demonstrate how to obtain an approximation algorithm with error ratio ε, no matter how small ε is.
Step 1: Sort the items in nonincreasing order of density p_i/w_i. (Example instance: a table with columns i, p_i, w_i, p_i/w_i.)
25 Step 2: Calculate a number Q Find the largest d such that W = w_1 + w_2 + … + w_d ≤ M. If d = n or W = M, then –Set P_APX = p_1 + p_2 + … + p_d and INDICES = {1, 2, …, d} and stop. –In this case, P_OPT = P_APX. Otherwise, set Q = p_1 + p_2 + … + p_d + p_{d+1}. For our case, d = 3 and Q = 234.
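A short sketch of Step 2 on top of the greedy ordering; the function name and return convention are illustrative assumptions.

def compute_Q(profits, weights, M):
    """Step 2: items must already be sorted by nonincreasing density.

    Returns (d, Q).  If the greedy prefix uses every item or fills the
    knapsack exactly, the prefix profit itself is optimal and is
    returned in place of Q.
    """
    W, d = 0, 0
    while d < len(weights) and W + weights[d] <= M:
        W += weights[d]
        d += 1
    prefix_profit = sum(profits[:d])
    if d == len(profits) or W == M:
        return d, prefix_profit               # P_OPT = P_APX, stop here
    return d, prefix_profit + profits[d]      # Q = p_1 + ... + p_d + p_{d+1}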
26 Characteristics of Q 1. p_1 + p_2 + … + p_d ≤ P_OPT (the first d items form a feasible solution). 2. w_{d+1} ≤ M, therefore item d+1 alone is a feasible solution: p_{d+1} ≤ P_OPT. 3. Q = p_1 + p_2 + … + p_d + p_{d+1} ≤ 2P_OPT. 4. Since the items are sorted by nonincreasing density, no feasible solution can have profit larger than p_1 + … + p_d + p_{d+1}, so P_OPT ≤ Q. Hence Q/2 ≤ P_OPT ≤ Q.
27 Step 3: Calculate a normalizing factor δ = Q(ε/3)^2. Let ε be 0.6. –In our case, δ = 234(0.6/3)^2 = 234(0.2)^2 = 9.36. Also calculate the threshold T = Q(ε/3). –In our case, T = 234(0.6/3) = 46.8.
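A quick check of the Step 3 arithmetic (variable names are just illustrative):

Q, eps = 234, 0.6
delta = Q * (eps / 3) ** 2              # normalizing factor: 234 * 0.04 = 9.36
T = Q * (eps / 3)                       # SMALL/BIG threshold: 234 * 0.2 = 46.8
print(round(delta, 2), round(T, 2))     # 9.36 46.8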
28 Step 4.1: SMALL & BIG Let SMALL collect all items whose profits are smaller than or equal to T. Collect all other items into BIG. In our case, –SMALL = {4, 5, 6, 7, 8} –BIG = {1, 2, 3}
29 Step 4.2: Normalize the items in BIG: each profit is scaled down to p_i' = floor(p_i/δ). In our case, p_1' = 9, p_2' = 6, p_3' = 5.
30 Step 4.3: Initialize an array A. Array A has size g = Q/δ = (3/ε)^2 (25 in our case). Each entry corresponds to a combination of the normalized profits p_i'. Each entry A[i] consists of three fields –I: the indices of the items in the combination –P: the sum of profits –W: the sum of weights
31 Step 4.4: Run a dynamic programming procedure on the items in BIG. When i = 1, p_1' = 9.
32 When i = 2, p_2' = 6.
33 When i = 3, p_3' = 5.
34 Step 5: Add items in SMALL to each entry's remaining capacity using the greedy algorithm. Step 6: Pick the entry with the largest total profit as our approximate solution.
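A compact sketch of Steps 2 through 6 in one function. It follows the scheme described on these slides, but the dictionary-based layout of array A (keyed by the normalized-profit sum, keeping only the minimum-weight combination) and all names are illustrative assumptions rather than the textbook's exact bookkeeping.

import math

def knapsack_ptas_core(profits, weights, M, eps):
    """Approximate the 0/1 knapsack within error ratio eps (sketch).

    Items are assumed already sorted by nonincreasing density p_i/w_i (Step 1).
    """
    n = len(profits)
    # Step 2: greedy prefix and the bound Q.
    W, d = 0, 0
    while d < n and W + weights[d] <= M:
        W += weights[d]
        d += 1
    if d == n or W == M:
        return sum(profits[:d]), list(range(d))   # greedy prefix is optimal
    Q = sum(profits[:d + 1])

    # Step 3: normalizing factor and SMALL/BIG threshold.
    delta = Q * (eps / 3) ** 2
    T = Q * (eps / 3)

    # Step 4.1: split the items.
    BIG = [i for i in range(n) if profits[i] > T]
    SMALL = [i for i in range(n) if profits[i] <= T]

    # Steps 4.2-4.4: DP over BIG on normalized profits p_i' = floor(p_i/delta).
    # A maps a normalized-profit sum to (true profit, weight, chosen indices),
    # keeping only the minimum-weight combination for each normalized sum.
    A = {0: (0, 0, [])}
    for i in BIG:
        p_norm = math.floor(profits[i] / delta)
        for key, (P, Wt, I) in list(A.items()):
            if Wt + weights[i] > M:
                continue                          # infeasible extension, skip
            new = (P + profits[i], Wt + weights[i], I + [i])
            old = A.get(key + p_norm)
            if old is None or new[1] < old[1]:
                A[key + p_norm] = new

    # Steps 5-6: fill each entry with SMALL items greedily, keep the best.
    best_profit, best_items = 0, []
    for P, Wt, I in A.values():
        profit, capacity, chosen = P, M - Wt, list(I)
        for j in SMALL:                           # SMALL is already in density order
            if weights[j] <= capacity:
                capacity -= weights[j]
                profit += profits[j]
                chosen.append(j)
        if profit > best_profit:
            best_profit, best_items = profit, chosen
    return best_profit, best_items

Keeping only the minimum-weight combination for each normalized-profit value is what bounds the size of A by g (slide 35).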
35 Why is it polynomial? Step 4.4 is an exhaustive scanning step. Intuitively, it would take exponential time: 2^|BIG| combinations. Actually, the size of array A is never larger than g, because among combinations with the same normalized-profit sum only the lightest one needs to be kept.
36 Time Complexity Step 1: O(n log n) Step 2: O(n) Steps 4.1 to 4.2: O(n) Step 4.3: O(g) Step 4.4: O(ng) Step 5: O(ng) Step 6: O(g) Total: O(n log n) + O(ng) = O(n log n) + O(n(3/ε)^2).
37 Error Analysis
38 Exercise Use the example given on p. 493. –Run the PTAS with ε = 0.75. –What is the solution you obtain? –What is the error ratio? Due: April 14th. The instructor will be absent next week.
39 Exercise Write a program that runs the PTAS for the 0/1 knapsack problem. (Due: April 19th) Input format: