
1 More On Intractability & Beyond CS161: Online Algorithms (Monday, August 11th)

2 Announcements 1. PS#6 due Wednesday at midnight 2. Project evaluation/competition results on Wednesday 3. Final exam information (later this lecture) 4. Evaluations for the class are open on Axess

3 Outline For Today 1.Approximate Set Cover 2.Approximate Vertex Cover 3.Final Exam Information 4.Beyond CS 161: Online Algorithms 3

4 Recap: Knapsack FPTAS. The algorithm takes the input (w_i, v_i, W) and an accuracy parameter ε (say 0.01) and returns a solution of value ≥ (1−ε)·OPT, i.e., a (1−ε)-approximation. Key Takeaway: We are approximating an NP-complete problem to arbitrary precision.

5 Note On Approximating NP-complete Problems. Knapsack is NP-complete ⇒ all NP problems (e.g., TSP) reduce to it ⇒ if we solved Knapsack exactly, we would solve all NP problems exactly. (Diagram: an instance Π1 ∈ TSP is converted by a poly-time converter into an instance Π2 ∈ KNPS; an exact algorithm for KNPS solves Π2; a poly-time converter turns the KNPS solution back into a solution to TSP.)

6 Note On Approximating NP-complete Problems. But there are NP problems that can't be approximated to any constant factor (e.g., TSP)! (Diagram: replacing the exact Knapsack algorithm with a poly-time approximation algorithm breaks the pipeline: an approximate solution to KNPS does not convert back into an approximate solution to TSP.)

7 Note On Approximating NP-complete Problems. Key Takeaway: exact solutions are preserved through reductions, but approximate solutions are not in general. In other words: the quality of an approximate solution can be lost across a reduction (though exact solutions are maintained)!

8 Outline For Today 1.Approximate Set Cover 2.Randomized Approximate Vertex Cover 3.Final Exam Information 4.Beyond CS 161: Online Algorithms 8

9 Set Cover Problem (Sec 11.3)  Input: U = {1, …, n} items and sets S1, …, Sm s.t. S1 ∪ S2 ∪ … ∪ Sm = U  Output: minimum # of sets required to cover U  Fact: Set Cover is NP-complete, one of Karp's 21 NP-complete problems (Vertex Cover ≤p Set Cover)

10 Set Cover Example. U = {1, …, 12}; S1 = {1, 2, 3, 4}; S2 = {3, 5, 6, 7, 8}; S3 = {9, 10, 11, 12}; S4 = {1, 2, 5, 6, 9, 10}; S5 = {6, 7, 10, 11}; S6 = {4, 8}

11 Set Cover Example (figure: elements 1–12 with the six sets S1, …, S6 drawn over them)

12 Set Cover Example. C_opt: S1 ∪ S2 ∪ S3 covers all of U with 3 sets.

13 Set Cover Example (figure: elements 1–12, all covered by C_opt)

14 Greedy Set-Cover Algorithm. Idea: Iteratively pick the set that covers the most "uncovered" elements.
procedure Greedy-SetCover(U, S1, …, Sn):
  C = ∅
  while U is not empty:
    pick Si that maximizes |Si ∩ U|
    C = C ∪ {Si}
    U = U \ Si
  return C
min(|U|, n) iterations, each iteration O(n·|U|). Total: O(n·|U|·min(|U|, n)) time.
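As a sanity check, the greedy rule can be sketched in a few lines of Python. The instance below is the one reconstructed from the example slides, so treat the set contents as assumptions inferred from the figures:

```python
# Greedy Set-Cover: repeatedly pick the set covering the most
# still-uncovered elements (ties broken by dictionary order).
def greedy_set_cover(universe, sets):
    uncovered = set(universe)
    cover = []                       # names of the picked sets, in order
    while uncovered:
        # pick the S_i that maximizes |S_i ∩ (uncovered part of U)|
        name, s = max(sets.items(), key=lambda kv: len(kv[1] & uncovered))
        cover.append(name)
        uncovered -= s
    return cover

# instance reconstructed from the example slides (an assumption)
sets = {
    "S1": {1, 2, 3, 4},
    "S2": {3, 5, 6, 7, 8},
    "S3": {9, 10, 11, 12},
    "S4": {1, 2, 5, 6, 9, 10},
    "S5": {6, 7, 10, 11},
    "S6": {4, 8},
}
print(greedy_set_cover(range(1, 13), sets))  # ['S4', 'S2', 'S3', 'S1']
```

On this instance greedy uses 4 sets while the optimum uses 3, matching the simulation on the following slides.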

15–22 Greedy Algorithm Simulation. Starting from C_greedy = ∅, greedy first picks S4 (6 new elements: 1, 2, 5, 6, 9, 10), then S2 (3 new: 3, 7, 8), then S3 (2 new: 11, 12), and finally S1 (1 new: 4). C_greedy: S4, S2, S3, S1. Size is 4, not optimal.

23 Thought Experiment. The cost of each set in the output is 1. Distribute the cost of each set Si equally over the new elements that Si covers when it's picked.
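This bookkeeping can be sketched directly: the sketch below augments the greedy loop with the cost distribution, using the same reconstructed instance from the example slides (set contents are assumptions) and exact fractions:

```python
from fractions import Fraction

# Thought-experiment bookkeeping: each picked set spreads its cost of 1
# equally over the elements it newly covers.
def greedy_with_costs(universe, sets):
    uncovered, cost = set(universe), {}
    while uncovered:
        _, s = max(sets.items(), key=lambda kv: len(kv[1] & uncovered))
        new = s & uncovered              # elements covered for the first time
        for e in new:
            cost[e] = Fraction(1, len(new))
        uncovered -= new
    return cost

# instance reconstructed from the example slides (an assumption)
sets = {
    "S1": {1, 2, 3, 4},
    "S2": {3, 5, 6, 7, 8},
    "S3": {9, 10, 11, 12},
    "S4": {1, 2, 5, 6, 9, 10},
    "S5": {6, 7, 10, 11},
    "S6": {4, 8},
}
costs = greedy_with_costs(range(1, 13), sets)
# total cost of the universe equals the number of sets greedy picked
print(sum(costs.values()))  # 4
```

The costs come out exactly as in the simulation: 1/6 for the elements S4 covers, 1/3 for S2's new elements, 1/2 for S3's, and 1 for element 4.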

24–31 Thought Experiment Simulation. S4 is picked first and covers 6 new elements, so 1, 2, 5, 6, 9, 10 each get cost 1/6; S2 then covers 3 new elements, so 3, 7, 8 each get cost 1/3; S3 then covers 2 new elements, so 11, 12 each get cost 1/2; finally S1 covers 1 new element, so 4 gets cost 1/1.

32 Q1: What is the total "cost of the universe" U, i.e., the sum of all the element costs?

33 A: |C_greedy|, b/c each time the greedy algorithm picks a new set, it distributes a total cost of 1 over the newly covered elements.

34 Q2: Sum of the "costs of the sets" in C_opt, where the cost of a set is the sum of the costs of its elements?

35 cost(S1) = 1/6 + 1/6 + 1/3 + 1/1

36 cost(S2) = 1/6 + 1/6 + 1/3 + 1/3 + 1/3

37 cost(S3) = 1/6 + 1/6 + 1/2 + 1/2

38 Q2: Sum of the "Costs of the Sets" in C_opt? A: It is ≥ the cost of the universe U = |C_greedy|, b/c C_opt is a set cover: every element's cost is counted at least once.

39 Goal: Bound the cost of each set in C_opt; then we can bound |C_greedy| in terms of |C_opt|, i.e., show that C_greedy is not much larger than C_opt. We currently have: |C_greedy| = total cost of U ≤ Σ_{S ∈ C_opt} cost(S).

40 "Cost of Each Set S" is ≤ H(|S|) = O(ln|S|). Claim: the cost of any set S (not just those in C_opt) is ≤ H(|S|) = 1 + 1/2 + … + 1/|S|. If the claim is true then: |C_greedy| ≤ Σ_{S ∈ C_opt} H(|S|) ≤ |C_opt|·H(n) = O(log n)·|C_opt|.

41 Proof of Claim: by picture. Take a set S = {a, b, c, d, e, f, g} with |S| = 7; we will charge its elements against the harmonic series 1, 1/2, 1/3, 1/4, 1/5, 1/6, 1/7.

42 Proof of Claim: by picture. Q: Suppose the first time S's elements are covered, 3 are covered: e, f, g. What can you assert about the costs they get?

43 Proof of Claim: by picture. A: Each gets cost ≤ 1/7, b/c S had 7 uncovered elements but S was not picked, so the set that was picked must have had at least 7 uncovered elements.

44 Proof of Claim: by picture. Q: Suppose the 2nd time S's elements are covered, 1 is covered: d. What can you assert about the cost of d?

45 Proof of Claim: by picture. A: ≤ 1/4 (by the same argument: 4 elements of S were still uncovered).

46 Proof of Claim: by picture. Q: Suppose the 3rd time S's elements are covered, 2 are covered: b and c. What can you assert about their costs?

47 Proof of Claim: by picture. A: ≤ 1/3 (by the same argument)

48 Proof of Claim: by picture. Q: What can you assert about the cost of a?

49 Proof of Claim: by picture. A: ≤ 1 (by the same argument)

50 Proof of Claim: by picture. Conclusion: cost(a) + cost(b) + … + cost(g) ≤ 1 + 1/2 + 1/3 + … + 1/7 = H(7) = O(ln 7). Q.E.D.

51 A More Formal Proof By Induction (Template). Let k = |S| and list the elements of S in the reverse order in which they are covered: e1, e2, …, ek. Claim: cost(e_i) ≤ 1/i. Proof by (reverse) induction: base case i = k (argue that it holds); assume the claim holds for k, k−1, …, i; show it holds for i−1 by the same argument as in the proof by picture.

52 Summary: Greedy is an O(log n)-approximation. 1. Assigned costs to each element by distributing the cost 1 of each new set S added by greedy equally over the new elements covered by S. 2. By construction: |C_greedy| = total cost of U. 3. B/c C_opt covers all elements: total cost of U ≤ Σ_{S ∈ C_opt} cost(S). 4. Put an H(|S|) = O(log|S|) bound on the "cost of each set S". 5. Concluded: |C_greedy| ≤ |C_opt|·H(n) = O(log n)·|C_opt|.

53 Key Takeaway 53 For NP-complete Problems the algorithmic tools in our toolbox can be used as is. But we have to give up something: (1)generality, (2) exactness, or (3) efficiency.

54 Outline For Today 1.Approximate Set Cover 2.Approximate Vertex Cover 3.Final Exam Information 4.Beyond CS 161: Online Algorithms 54

55 Recap: Vertex Cover 55  Input: Undirected Graph G(V, E)  Output: Minimum Vertex Cover of G  Vertex Cover: S ⊆ V, s.t. for each (u, v) ∈ E: either u ∈ S, or v ∈ S.  Fact: Vertex Cover is NP-complete:  3-SAT≤ p CLIQUE≤ p VERTEX-COVER

56 Vertex Cover Example 56 B D C A E F

57 Vertex Cover Example 57 B D C A E F

58 Vertex Cover Example. Min Vertex Cover: {A, B}

59 2-Approximation VC
procedure 2-Approx-VC(G(V, E)):
  VC-OUT = ∅
  for (u, v) ∈ E:
    if neither u nor v is in VC-OUT:
      VC-OUT = VC-OUT ∪ {u, v}
  return VC-OUT
Run-time: Can be done in O(n + m) time (exercise). Claim 1: 2-Approx-VC returns a Vertex Cover. Proof: By construction, each edge (u, v) is either already covered when we loop over it, or we cover it by adding both u and v.
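A minimal sketch of this procedure in Python. The slides' figure only fixes that the disjoint edges (A, C) and (B, F) get picked, so the edge list and its scan order below are assumptions chosen to reproduce that run:

```python
# 2-Approx-VC: scan the edges; whenever an edge is still uncovered,
# add BOTH of its endpoints to the cover.
def two_approx_vc(edges):
    vc = set()
    for u, v in edges:
        if u not in vc and v not in vc:
            vc.update((u, v))
    return vc

# hypothetical edge list for the example graph; the order makes the scan
# pick the disjoint edges (A, C) and (B, F), as on the slides
edges = [("A", "C"), ("A", "B"), ("B", "F"), ("B", "D"), ("A", "E")]
print(sorted(two_approx_vc(edges)))  # ['A', 'B', 'C', 'F'], 4 vertices vs. OPT's 2
```

Every edge of this graph touches A or B, so the optimum {A, B} has size 2 and the output is exactly a factor 2 larger.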

60 2-Approximation VC Example 60 B D C A E F

61 2-Approximation VC Example (figure: the algorithm adds the endpoints of edges (A, C) and (B, F))

62 2-Approximation VC Example. Output: {A, C, B, F}, 4 vertices

63 Output In Terms of Disjoint Edges. Output: {(A, C), (B, F)}, 2 disjoint edges. Proof idea: for each of these edges, any VC has to contain at least one of its endpoints.

64 Claim 2: 2-Approx-VC is a 2-approximation. Proof: The edges whose endpoints we add form a set of "disjoint edges" (u_i, v_i): no two of them share a vertex, and there are |VC-OUT|/2 of them. Since any VC has to contain u_i or v_i for each i:  any VC must have at least |VC-OUT|/2 vertices!  the optimal VC must have size ≥ |VC-OUT|/2, i.e., |VC-OUT| ≤ 2·|VC_opt|. Fact: this is the best approximation ratio known; whether a better one exists is open.

65 Randomized 2-Approx Vertex Cover
procedure Rand-2-Approx-VC(G(V, E)):
  VC-OUT = ∅
  for (u, v) ∈ E:
    if neither u nor v is in VC-OUT:
      put either u or v into VC-OUT uniformly at random
  return VC-OUT
Again this algorithm outputs a VC by construction. Exercise: Show that E[|VC-OUT|] ≤ 2·|VC_opt|.
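The randomized variant changes one line: add only a single endpoint, chosen uniformly at random. A sketch, run on a hypothetical edge list for the slides' example graph (the list is an assumption):

```python
import random

# Rand-2-Approx-VC: for each still-uncovered edge, add ONE endpoint chosen
# uniformly at random. The result is a vertex cover by construction;
# E[|VC-OUT|] <= 2|VC_opt| is the exercise from the slide.
def rand_two_approx_vc(edges, rng):
    vc = set()
    for u, v in edges:
        if u not in vc and v not in vc:
            vc.add(rng.choice((u, v)))
    return vc

# hypothetical edge list for the example graph
edges = [("A", "C"), ("A", "B"), ("B", "F"), ("B", "D"), ("A", "E")]
print(rand_two_approx_vc(edges, random.Random(0)))
```

Passing a seeded `random.Random` makes individual runs reproducible; the 2·OPT guarantee holds only in expectation over the random choices, not per run.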

66 Outline For Today 1.Approximate Set Cover 2.Randomized Approximate Vertex Cover 3.Final Exam Information 4.Beyond CS 161: Online Algorithms 66

67 Final Exam Information  This Saturday at 3:30pm, at Gates B01  Closed book/notes, etc. One double-sided A4 cheat-sheet is allowed.  140 points.  1 problem consisting of 10 T/F questions (no proofs required): +2 points for a correct answer, −2 for an incorrect one  1 problem testing the mathematical tools we've used  4 or 5 questions on designing and analyzing algorithms  You can use any algorithm we have covered as a subroutine without re-proving its run-time and correctness claims, but you have to know the run-times of the algorithms we covered.

68 Topics Covered 68  Cumulative until the first half of today’s lecture.  8 Category of Topics/Algorithms 1.Mathematical Tools: Big-oh Notation, Master Theorem, Substitution Method, Linearity of Expectation, Independence 2.Data Structures: Heaps, Union-Find, Hash Tables, Bloom Filters 3.Fund. Graph Primitives: BFS/DFS, Topological Sort of DAGs, Undirected Conn. Comp., Directed (Strongly) Connected Components

69 Topics Covered 69 4. DC & Algs: MergeSort, Strassen 5. Greedy & Algs: Dijkstra, Prim, Kruskal (for MST), Cut Property and Lemmas for MST, Huffman, Scheduling Problems (and others in PSs), Greedy proof techniques: Greedy Stays Ahead, Exchange Arguments 6. Randomized Algs: QuickSort/QuickSelect, Karger, Approximate Max- Cut/Vertex Cover 7. DP & Algs: DP Recipe, Linear Ind. Set, Sequence Alignment, Bellman-Ford, Floyd-Warshall, Pseudo-polynomial Knapsack Algorithms 8. Intractability: P, NP, NP-complete, reductions, Options for Confronting NP-complete Problems, Knapsack Greedy Approx, Knapsack FPTAS, Set Cover, TSP with Triangle Inequality

70 A Final Note About The Final 70 For most problems, we will give you a computational problem and ask you to solve it, just as in PSs.

71 Outline For Today 1.Approximate Set Cover 2.Randomized Approximate Vertex Cover 3.Final Exam Information 4.Beyond CS 161: Online Algorithms 71

72 CS 161's Computational Model Assumptions 1. Inputs to computational problems have a fixed size n. 2. The input is correct/error-free. 3. Computation is performed on a serial machine (single processor). 4. Computation is performed on a classical machine, i.e., each bit stores a 0 or a 1 (vs. quantum machines with qubits). And others, such as the Random Access Memory model. Different computational models drop one or more of these assumptions.

73 Streaming Applications  Input is a possibly infinite stream.  At each point in time, the application needs to make an algorithmic decision.  E.g., caching in an OS: an infinite stream of disk lookup requests. (Figure: OS cache.) Algorithmic decision: on a miss, what to evict? Question: how optimal is the algorithm's eviction strategy?

74 Streaming Applications  News feeds: FB, Twitter receiving continuous tweets/user updates.  At each point, these apps need to decide which news/update should appear in whose news feed.  CS 161 tools cannot analyze the algorithmic decisions these apps make.

75 Online Algorithms  Takes as input a possibly infinite stream.  At each point in time t, makes a decision based on  what has been seen so far  but without knowing the rest of the input  Type of optimality analysis: competitive ratio  the worst ratio of (cost of online algorithm)/(cost of OPT) over all input streams  where OPT is the best solution possible if we knew the entire input in advance

76 Example 1: Skiing in Tahoe 76  Buying equipment costs $500  Renting costs $50  Q: Should we buy or rent?  A: If we will go 9 times or fewer then rent, o.w. buy.  An online algorithm for this problem makes a decision of whether to buy or not each time we go to Tahoe.  Once the algorithm buys, there’s no other decision to make, everything is free.

77 Example 1: Skiing in Tahoe 77 An Online Skiing Algorithm t=1 RENT An Online Skiing Algorithm t=2 RENT An Online Skiing Algorithm t=k BUY … Observation: Any online algorithm is completely described by the time k it buys the equipment. Q: What’s the optimal choice of k?

78 Competitive Ratio If We Pick k = 1 (buy at t = 1). Q1: What's the cost of this algorithm? A1: $500. Q2: What's the competitive ratio of the algorithm that picks k = 1, i.e., what's the worst-case input for this algorithm? A: Going only once. Then the optimal solution would be to just rent for $50 => CR = $500/$50 = 10.

79 Competitive Ratio If We Pick k = 2 (rent at t = 1, buy at t = 2). Q1: What's the cost of this algorithm? A1: $550. Q2: What's the CR?

80 Competitive Ratio If We Pick k = 2. Case 1: If we go once: we pay $50, opt is $50, ratio = 1. Case 2: If we go twice: we pay $550, opt is $100, ratio = 5.5. Case 3: If we go three times: we pay $550, opt is $150, ratio ≈ 3.67. Case 4: If we go four times: we pay $550, opt is $200, ratio = 2.75. … A: CR is 5.5 (much better than the k = 1 algorithm).

81 Competitive Ratio If We Pick k < 10 (rent for t ≤ k−1, buy at t = k). Q1: What's the cost of this algorithm? A1: (k−1)·50 + 500. Q2: What's the CR? A: If we go ≤ k−1 times, we're optimal. If we go k (or more) times, the worst ratio is at t = k, where OPT pays 50k: ((k−1)·50 + 500)/(50k) = 1 + 9/k.

82 Competitive Ratio If We Pick k > 10. Q1: What's the cost of this algorithm? A1: (k−1)·50 + 500. Q2: What's the CR? A: If we go < 10 times, we're optimal. What if we go ≥ 10 times? Then OPT is $500. If we go t times with 10 ≤ t < k, the ratio is 50t/500 = t/10 (so increasing by 0.1 per trip). If we go exactly k (or more) times, the ratio is ((k−1)·50 + 500)/500 = (k + 9)/10.

84 Optimal k. Case 1: k < 10, CR = 1 + 9/k > 1.9. Case 2: k > 10, CR = (k + 9)/10 > 1.9. Case 3: k = 10, CR = (9·50 + 500)/500 = 1.9. **Optimal k = 10 => CR: 1.9** The best online strategy is to rent for the first 9 trips and buy on the 10th.
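The case analysis above can be checked numerically. A sketch that brute-forces the worst-case ratio over trip counts (a horizon of 1000 trips stands in for "arbitrarily many"):

```python
# Ski rental: the strategy "rent until trip k, then buy on trip k",
# with rent = $50 and buy = $500 (break-even at 10 trips).
def alg_cost(k, t, rent=50, buy=500):
    # what the online algorithm pays if we end up going t times
    return t * rent if t < k else (k - 1) * rent + buy

def opt_cost(t, rent=50, buy=500):
    # offline optimum: rent every time, or buy up front
    return min(t * rent, buy)

def competitive_ratio(k, horizon=1000):
    # worst-case ratio of algorithm cost to OPT over all trip counts
    return max(alg_cost(k, t) / opt_cost(t) for t in range(1, horizon))

print(competitive_ratio(1))   # 10.0
print(competitive_ratio(2))   # 5.5
print(competitive_ratio(10))  # 1.9, the optimal choice of k
```

Scanning k from 1 upward confirms that k = 10 minimizes the competitive ratio, exactly as in the slides' case analysis.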

85 Caching. If the requested page is in the cache (hit), reply directly from the cache. O.w. (miss), send the request to the slow disk and put the page into the cache. Q: Which page to evict?

86 Caching. Input: N pages on disk, and a stream of infinitely many page requests. Online algorithm: decide which page to evict from the cache when it's full and there's a miss. Goal: minimize the number of misses. Idea: LRU: remove the Least Recently Used page.

87–102 LRU with k = 3. Request stream: 4 1 2 1 5 3 4 4 1 1 3 2 4 5 1. Requests 4, 1, 2 all miss; the second request for 1 hits; 5, 3, 4 each miss (evicting the least recently used page: first 4, then 2, then 1); the second 4 hits; the following request for 1 misses; and so forth.
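The simulation can be reproduced with a small LRU sketch; an `OrderedDict` keeps the cached pages in recency order, and the stream is the one from the slides:

```python
from collections import OrderedDict

# LRU cache simulation: on a miss with a full cache, evict the least
# recently used page; record hit/miss for every request.
def lru_outcomes(requests, k):
    cache = OrderedDict()              # keys ordered oldest -> newest
    outcomes = []
    for p in requests:
        if p in cache:
            cache.move_to_end(p)       # hit: refresh recency
            outcomes.append("hit")
        else:
            if len(cache) == k:
                cache.popitem(last=False)  # evict the LRU page
            cache[p] = True
            outcomes.append("miss")
    return outcomes

stream = [4, 1, 2, 1, 5, 3, 4, 4, 1, 1, 3, 2, 4, 5, 1]  # the slides' stream
out = lru_outcomes(stream, 3)
print(out.count("miss"))  # 11 misses over the 15 requests
```

The first nine outcomes (miss, miss, miss, hit, miss, miss, miss, hit, miss) match the slide-by-slide trace.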

103 Competitive Ratio Claim. Claim: if the optimal sequence of eviction choices for a size-h cache causes m misses, then for the same sequence of requests, LRU with a size-k cache (k > h) causes at most (k/(k−h))·m misses. Interpretation: if LRU had twice the cache size (k = 2h) of an algorithm OPT that knew the future, it would have at most twice the misses of OPT. Note: we will prove the claim with the ratio k/(k−h).

104 Proof of Competitive Ratio. Recursively break the request sequence into phases. Let t be the time we see the (k+1)st distinct request: Phase 1 = a_1 … a_{t−1}. Let t′ be the time we see the (k+1)st distinct request starting from a_t: Phase 2 = a_t … a_{t′−1}. And so on.

105 Proof of Competitive Ratio. (Example: with k = 3, the stream 4 1 2 1 5 3 4 4 1 1 3 2 4 5 1 splits into Phases 1–4.) By construction, each phase contains exactly k distinct requests. Q: At most how many misses does LRU have in each phase? A: k, b/c each of the k distinct pages in a phase can miss at most once: once a page is brought into a size-k cache, it cannot become the least recently used page before k other distinct pages are requested, and the phase contains only k distinct pages in total.

106 Proof of Competitive Ratio. Q: What is the minimum number of misses that any size-h cache must have in a phase? A: k − h, b/c each phase contains k distinct requests and at most h of them can already be in the cache when the phase starts, so at least k − h of them must trigger misses. Therefore the CR is at most k/(k − h). Q.E.D.

107 Wednesday: More on Beyond CS 161: Parallel Algorithms

