
1 Lower Bounds in Greedy Model Sashka Davis Advised by Russell Impagliazzo (Slides modified by Jeff) UC San Diego October 6, 2006

2 Suppose you have to solve a problem Π… Is there a Greedy algorithm that solves Π? Is there a Backtracking algorithm that solves Π? Is there a Dynamic Programming algorithm that solves Π? Eureka! I have a DP algorithm! Does no Backtracking algorithm exist, or did I just not think of one? Is my DP algorithm optimal, or does a better one exist? Does no Greedy algorithm exist, or did I just not think of one?

3 Suppose we have a formal model of each algorithmic paradigm Is there a Greedy algorithm that solves Π? No Greedy algorithm can solve Π exactly. Is there a Backtracking algorithm that solves Π? No Backtracking algorithm can solve Π exactly. Is there a Dynamic Programming alg. that solves Π? DP helps! Is my algorithm optimal, or does a better DP algorithm exist? Yes, it is! Because NO DP alg. can solve Π more efficiently.

4 The goal To build a formal model of each of the basic algorithmic design paradigms, capturing the strengths of the paradigm. To develop a lower bound technique, for each formal model, that can prove negative results for all algorithms in the class.

5 Using the framework we can answer the following questions 1. When solving problems exactly: Which algorithmic design paradigm can help? Either no algorithm within a given formal model can solve the problem exactly, or we find an algorithm that fits a given formal model. 2. Is a given algorithm optimal? Prove a lower bound matching the upper bound for all algorithms in the class. 3. When solving problems approximately: Which algorithmic paradigm can help? Is a given approximation scheme optimal within the formal model?

6 Some of our results The formal models and the paradigms they capture: Online; FIXED PRIORITY and ADAPTIVE PRIORITY (Greedy); pBT (Backtracking & Simple DP (tree)); pBP (Dynamic Programming).

7 On-line algorithms Γ is a set of data items; Σ is a set of options. Input: instance I = {γ1, γ2,…, γn}, I ⊆ Γ. Output: solution S = {(γi, σi) | i = 1, 2,…, d}, σi ∈ Σ. 1. Order: objects arrive in a worst-case order chosen by the adversary. 2. Loop, considering γi in order: make an irrevocable decision σi ∈ Σ.

8 Fixed priority algorithms Γ is a set of data items; Σ is a set of options. Input: instance I = {γ1, γ2,…, γn}, I ⊆ Γ. Output: solution S = {(γi, σi) | i = 1, 2,…, d}, σi ∈ Σ. 1. Order: the Algorithm chooses a fixed priority function π : Γ → R+ without looking at I. 2. Loop, considering γi in order: make an irrevocable decision σi ∈ Σ.

9 Adaptive priority algorithms Γ is a set of data items; Σ is a set of options. Input: instance I = {γ1, γ2,…, γn}, I ⊆ Γ. Output: solution S = {(γi, σi) | i = 1, 2,…, d}, σi ∈ Σ. 2. Loop: – Order: the Algorithm reorders, choosing a new π : Γ → R+ without looking at the rest of I. – Consider the next γi in the current order. – Make an irrevocable decision σi ∈ Σ.
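The adaptive priority loop above can be sketched as a generic template. This is only an illustration of the model, not its formal definition; the callbacks `priority` and `decide` are hypothetical names introduced here.

```python
def adaptive_priority(items, priority, decide):
    """Illustrative sketch of an adaptive priority algorithm.

    Each round: re-rank the remaining data items with a priority
    function that may depend on the partial solution, take the
    highest-priority item, and commit to an irrevocable decision.
    `priority` and `decide` are hypothetical problem-specific
    callbacks."""
    remaining = list(items)
    solution = []
    while remaining:
        # Adaptive step: the ordering may change between rounds.
        remaining.sort(key=lambda g: priority(g, solution))
        g = remaining.pop(0)  # highest-priority remaining item
        solution.append((g, decide(g, solution)))  # irrevocable
    return solution
```

Prim's and Dijkstra's algorithms fit this template: the priority of an item is re-evaluated from the partial solution in every round.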

10 Fixed priority “Back Tracking” Γ is a set of data items; Σ is a set of options. Input: instance I = {γ1, γ2,…, γn}, I ⊆ Γ. Output: solution S = {(γi, σi) | i = 1, 2,…, d}, σi ∈ Σ. 1. Order: the Algorithm chooses π : Γ → R+ without looking at I. 2. Loop, considering γi in order: make a set of decisions Σi ⊆ Σ (one of which will be the final decision).

11 Some of our results FIXED PRIORITY: Kruskal’s (Minimum Spanning Tree). ADAPTIVE PRIORITY: Prim’s (Minimum Spanning Tree); Dijkstra’s (Shortest Path in graphs with non-negative weights). In the stronger models (pBT, pBP): Bellman-Ford (Shortest Path in graphs with negative weights but no negative cycles); Flow Algorithms (Maximum Matching in Bipartite graphs).

12 Some of our results FIXED PRIORITY: Kruskal’s (Minimum Spanning Tree). ADAPTIVE PRIORITY: Prim’s (Minimum Spanning Tree); Dijkstra’s (Shortest Path in graphs with non-negative weights).

13 Kruskal’s algorithm for MST is a Fixed priority algorithm Input: (G=(V,E), ω: E → R) 1. Initialize an empty solution T. 2. L = list of edges sorted in non-decreasing order of weight. 3. while (L is not empty): e = next edge in L; add e to T as long as T remains a forest; remove e from L. 4. Output T.
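The pseudocode above can be sketched in Python; representing G as a list of (weight, u, v) tuples over vertices 0..n−1 is an assumption of this sketch. The one-time sort is the fixed priority order, and the forest test is the irrevocable accept/reject decision.

```python
def kruskal(n, edges):
    """Kruskal's MST as a fixed priority algorithm.
    n: number of vertices (labeled 0..n-1, an assumption of this
    sketch); edges: list of (weight, u, v) tuples."""
    parent = list(range(n))

    def find(x):  # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges):  # the fixed priority order
        ru, rv = find(u), find(v)
        if ru != rv:  # accept only if T remains a forest
            parent[ru] = rv
            tree.append((u, v, w))
    return tree
```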

14 Prim’s algorithm for MST is an adaptive priority algorithm Input: G=(V,E), ω: E → R 1. Initialize an empty tree T ← ∅; S ← ∅. 2. Pick a vertex u; S = {u}. 3. for (i = 1 to |V|−1): (u,v) = argmin over (u,v) ∈ cut(S, V−S) of ω(u,v); S ← S ∪ {v}; T ← T ∪ {(u,v)}. 4. Output T.
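A heap-based sketch of Prim's algorithm; the adjacency-map representation {u: [(w, v), …]} is an assumption of this sketch. Re-selecting the cheapest edge crossing the cut (S, V−S) each round is exactly the adaptive reordering.

```python
import heapq

def prim(adj, start=0):
    """Prim's MST as an adaptive priority algorithm: each round commits
    to the cheapest edge crossing the cut (S, V-S).
    adj: {u: [(w, v), ...]} for an undirected connected graph."""
    in_tree = {start}
    tree = []
    heap = [(w, start, v) for w, v in adj[start]]
    heapq.heapify(heap)
    while heap and len(in_tree) < len(adj):
        w, u, v = heapq.heappop(heap)
        if v in in_tree:
            continue  # stale entry: edge no longer crosses the cut
        in_tree.add(v)
        tree.append((u, v, w))
        for w2, x in adj[v]:
            if x not in in_tree:
                heapq.heappush(heap, (w2, v, x))
    return tree
```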

15 Dijkstra’s Shortest Paths alg. is an adaptive priority algorithm Dijkstra’s algorithm (G=(V,E), s ∈ V): T ← ∅; S ← {s}; while (S ≠ V): find e=(u,x) minimizing path(s,u)+ω(e) over e ∈ Cut(S, V−S); T ← T ∪ {e}; S ← S ∪ {x}.
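A sketch of Dijkstra in the same adjacency-map representation; returning distances rather than the tree T is a simplification of this sketch.

```python
import heapq

def dijkstra(adj, s):
    """Dijkstra as an adaptive priority algorithm: each round commits
    to the vertex x minimizing dist(s, u) + w(u, x) over edges (u, x)
    crossing the cut. adj: {u: [(w, v), ...]}, non-negative weights."""
    dist = {s: 0}
    heap = [(0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale entry
        for w, v in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```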

16 Some of our results FIXED PRIORITY: Kruskal’s (Minimum Spanning Tree). ADAPTIVE PRIORITY: Prim’s (Minimum Spanning Tree); Dijkstra’s (Shortest Path in graphs with non-negative weights).

17 Some of our results ShortPath Problem: Given a graph G=(V,E), ω: E → R+; s, t ∈ V. Find a directed tree of edges, rooted at s, such that the combined weight of the path from s to t is minimal. Theorem: No Fixed priority algorithm can achieve any constant approximation ratio for the ShortPath problem. Data items are edges of the graph; decision options Σ = {accept, reject}.

18 Fixed priority game [figure: Solver vs. Adversary] The Adversary chooses the initial set Γ0 of data items; the Solver fixes an order on Γ0. In each round the Solver considers the next remaining item and commits to a decision, building S_sol = {(γ_i2, σ_i2), (γ_i4, σ_i4), …}, while the Adversary may remove items from the remaining sets Γ1, Γ2, Γ3, … End Game: the Adversary outputs its own solution S_adv = {(γ_i2, σ*_i2), (γ_i4, σ*_i4)} and the Solver is awarded a payoff.

19 Adversary selects Γ0 [figure: graph on vertices s, a, b, t with weighted edges u(k), w(k), x(1), v(1), y(1), z(1)]

20 Solver selects an order on Γ0 If the Solver’s order considers y(1) before z(1), then the Adversary presents: [figure: graph on s, a, b, t with edges u(k), w(k), x(1), v(1), y(1), z(1)]

21 Adversary’s strategy Waits until the Solver considers edge y(1) (by the choice of instance, the Solver considers y(1) before z(1)). Event 1: σ_y = accept. Event 2: σ_y = reject.

22 Event 1: Solver accepts y(1) [figure: graph] The Solver constructs the path {u, y}, of weight k+1. The Adversary outputs the solution {x, z}, of weight 2.

23 Event 2: Solver rejects y(1) [figure: graph] The Solver fails to construct a path. The Adversary outputs the solution {u, y}.

24 The outcome of the game: The Solver either fails to output a solution or achieves an approximation ratio of (k+1)/2. The Adversary can set k arbitrarily large, and thus can force the Algorithm into an arbitrarily large approximation ratio.

25 Some of our results ADAPTIVE PRIORITY: Dijkstra’s (Shortest Path in graphs with non-negative weights).

26 Some of our results Interval Scheduling (value is width): Factor of 3 for Online and Factor of 3 for ADAPTIVE PRIORITY algorithms.

27 Interval scheduling on a single machine Instance: a set of intervals I=(i1, i2,…,in), where ij = [rj, dj]. Problem: schedule intervals on a single machine. Solution: S ⊆ I of pairwise non-overlapping intervals. Objective function: maximize Σ over ij ∈ S of (dj − rj).

28 A simple solution (LPT) Longest Processing Time algorithm Input: I=(i1, i2,…,in) 1. Initialize S ← ∅. 2. Sort the intervals in decreasing order of (dj − rj). 3. while (I is not empty): let ik be the next interval in the sorted order; if ik can be scheduled then S ← S ∪ {ik}; in either case I ← I \ {ik}. 4. Output S.

29 LPT is a 3-approximation LPT sorts the intervals in decreasing order according to their length. Claim: 3·LPT ≥ OPT. Each interval that LPT schedules is at least as long as any OPT interval it blocks, and the OPT intervals it blocks fall into three groups (those crossing its left endpoint ri, those inside it, and those crossing its right endpoint di), each group of total length at most its own length. [figure: OPT vs. LPT intervals]
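The LPT rule and the factor-3 claim can be sanity-checked with a small sketch plus a brute-force optimum. The test instance used below is an illustrative assumption, not one from the slides, and the brute force is exponential, for tiny inputs only.

```python
from itertools import combinations

def lpt(intervals):
    """Longest Processing Time: consider intervals [r, d] in decreasing
    order of length, accepting each one that overlaps nothing accepted
    so far."""
    chosen = []
    for r, d in sorted(intervals, key=lambda iv: iv[0] - iv[1]):
        if all(d <= r2 or d2 <= r for r2, d2 in chosen):
            chosen.append((r, d))
    return chosen

def value(S):
    """Proportional profit: total width of the scheduled intervals."""
    return sum(d - r for r, d in S)

def opt(intervals):
    """Brute-force optimum over all pairwise-disjoint subsets."""
    best = 0
    for k in range(len(intervals) + 1):
        for S in combinations(intervals, k):
            if all(d1 <= r2 or d2 <= r1
                   for (r1, d1), (r2, d2) in combinations(S, 2)):
                best = max(best, value(S))
    return best
```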

30 Example lower bound [BNR02] Theorem 1: No adaptive priority algorithm can achieve an approximation ratio better than 3 for the interval scheduling problem with proportional profit on a single machine.

31 Proof of Theorem 1 Adversary’s move: [figure: the initial set of intervals, with lengths 1, 2, 3, …, q−1, q]. Algorithm’s move: the Algorithm selects an ordering; let i be the interval with highest priority.

32 Adversary’s strategy If the Algorithm decides not to schedule i: during the next round the Adversary removes all remaining intervals and schedules interval i itself. [figure] Alg’s value = 0; Adv’s value = i.

33 Adversary’s strategy If i is one of the middle intervals and the Algorithm schedules i: during the next round the Adversary restricts the sequence: [figure: intervals of lengths i−1 and i+1, and three pieces of length i/3]. Alg’s value = i; Adv’s value = (i−1) + 3(i/3) + (i+1) = 3i.

34 Adversary’s strategy If i = 1 (the shortest interval) and the Algorithm schedules i: during the next round the Adversary restricts the sequence: [figure]. Alg’s value = 1; Adv’s value = 3(1/3) + 2 = 3.

35 Adversary’s strategy If i = q (the longest interval) and the Algorithm schedules i: during the next round the Adversary restricts the sequence: [figure]. Alg’s value = q; Adv’s value = (q−1) + 3(q/3) + (q−1) = 3q − 2. But q is big.

36 Adversary’s strategy If i is the interval shown and the Algorithm schedules i: during the next round the Adversary restricts the sequence: [figure]. Alg’s value = i; Adv’s value = 3i.

37 Some of our results Interval Scheduling (value is width): Factor of 3 for Online, FIXED and ADAPTIVE PRIORITY algorithms; pBT: ? The algorithm was messed up before it got a chance to reorder things.

38 Weighted Vertex Cover: Factor of 2 (ADAPTIVE PRIORITY).

39 Weighted Vertex Cover [Joh74]: greedy 2-approximation for WVC. Input: instance G with weights on nodes. Output: solution S ⊆ V that covers all edges and minimizes the total weight of the taken nodes. Repeat until all edges are covered: take the v minimizing ω(v)/(# uncovered adjacent edges).
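A sketch of the greedy rule above; the graph representation (a weight dictionary and an edge list) is an assumption of this sketch.

```python
def greedy_wvc(weights, edges):
    """Greedy weighted vertex cover: repeatedly take the vertex
    minimizing weight(v) / (number of its still-uncovered edges).
    weights: {v: w}; edges: list of (u, v) pairs."""
    uncovered = {frozenset(e) for e in edges}
    cover = set()
    while uncovered:
        def score(v):
            deg = sum(1 for e in uncovered if v in e)
            return weights[v] / deg if deg else float("inf")
        v = min(weights, key=score)
        cover.add(v)
        uncovered = {e for e in uncovered if v not in e}
    return cover
```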

40 Weighted Vertex Cover With Shortest Path, a data item is an edge of the graph: γ = (e, ω(e)). With weighted vertex cover, a data item is a vertex of the graph: γ = (v, ω(v), adj_list(v)). (Stronger than having the items be edges, because the algorithm gets more info from nodes.) Theorem: No Adaptive priority algorithm can achieve an approximation ratio better than 2.

41 Adaptive priority game [figure: Solver vs. Adversary] In each round the Solver chooses the highest-priority data item from the remaining set Γt and commits to a decision, building S_sol = {(γ7, σ7), (γ4, σ4), (γ2, σ2)}; the Adversary then removes data items from Γt. The Game Ends: 1. the Adversary outputs S_adv = {(γ7, σ*7), (γ4, σ*4), (γ2, σ*2)}; 2. the Solver is awarded payoff f(S_sol)/f(S_adv).

42 The Adversary chooses instances to be graphs K_n,n The weight function ω: V → {1, n²} [figure: K_n,n with vertex weights 1 and n²]

43 The game Data items: each node appears in Γ0 as two separate data items, with weights 1 and n². Solver’s move: chooses a data item and commits to a decision. Adversary’s move: removes from the next Γt the data item corresponding to the node just committed, and…

44 Adversary’s strategy is to wait until: Event 1: the Solver accepts a node of weight n². Event 2: the Solver rejects a node of any weight. Event 3: the Solver has committed to all but one node on either side of the bipartite graph. [figure]

45 Event 1: Solver accepts a node with ω(v)=n² The Adversary chooses part B of the bipartite graph as a cover, and incurs cost n. The cost of a cover for the Solver is at least n² + n − 1. [figure]

46 Event 2: Solver rejects a node of any weight The Adversary chooses part A of the bipartite graph as a cover. The Solver must choose part B of the bipartite graph as a cover. [figure]

47 Event 3: Solver commits to n−1 nodes with ω(v)=1, on either side of K_n,n The Adversary chooses part B of the bipartite graph as a cover, and incurs cost n. The cost of a cover for the Solver is 2n−1. [figure]

48 Some of our results Weighted Vertex Cover: Factor of 2 (ADAPTIVE PRIORITY).

49 Some of our results Facility Location: Factor of log n (ADAPTIVE PRIORITY).

50 Facility location problem Instance: a set of cities and a set of facilities. The set of cities is C={1,2,…,n}. Each facility fi has an opening cost cost(fi) and connection costs for each city: {ci1, ci2,…, cin}. Problem: open a collection of facilities such that each city is connected to at least one facility. Objective function: minimize the opening and connection costs: min over S of (Σ over fi ∈ S of cost(fi) + Σ over j ∈ C of min over fi ∈ S of cij).
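The objective function can be written out as a small evaluator; the dictionary shapes `opening = {f: cost}` and `conn = {f: {city: c_ij}}` are assumptions of this sketch.

```python
def fl_cost(cities, opening, conn, S):
    """Facility-location objective for a nonempty set S of open
    facilities: total opening cost plus, for each city, the cheapest
    connection to an open facility (infinite if a city is unreachable,
    matching the 1-or-infinity costs used later in the slides)."""
    open_cost = sum(opening[f] for f in S)
    conn_cost = sum(min(conn[f].get(j, float("inf")) for f in S)
                    for j in cities)
    return open_cost + conn_cost
```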

51 [AB02] result Theorem: No adaptive priority algorithm can achieve an approximation ratio better than log(n) for facility location in arbitrary spaces

52 Adversary presents the instance: Cities: C={1,2,…,n}, where n=2^k. Facilities: each facility fj has opening cost n; city connection costs are 1 or ∞; each facility covers exactly n/2 cities; cover(fj) = {i | i ∈ C, cji=1}. C_u denotes the set of cities not yet covered by the solution of the Algorithm.

53 Adversary’s strategy At the beginning of each round t: the Adversary chooses S_t to consist of facilities f such that f ∈ S_t iff |cover(f) ∩ C_u| = n/2^t; the number of uncovered cities |C_u| is n/2^(t−1). Two facilities are complementary if together they cover all cities in C. For every round t, S_t consists of complementary facilities.

54 The game [figure: the uncovered cities C_u and the facilities covering half of them]

55 End of the game Either the Algorithm opened log(n) facilities, or it failed to produce a valid solution. The cost of the Algorithm’s solution is n·log(n) + n. The Adversary opens two complementary facilities, incurring total cost 2n + n = 3n.

56 Some of our results Facility Location: Factor of log n (ADAPTIVE PRIORITY).

