Lecture 10. Dynamic Programming I


Lecture 10. Dynamic Programming I CS341, Feb. 6, Tuesday

Outline For Today What is dynamic programming Coin change problem revisit Recipe of a dynamic programming algorithm Linear independent set


What is dynamic programming Dynamic programming (DP) is a method for solving a complex problem by breaking it down into a collection of simpler subproblems, solving each of those subproblems just once, and storing their solutions. You all know and have used dynamic programming!

Remember Fibonacci numbers? 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, … F(0) = 0, F(1) = 1, F(n) = F(n-1) + F(n-2). If we are not careful and run the naive recursive algorithm on this, we end up with an exponential number of redundant recursive calls. However, how many of these calls are distinct?

Q: How many distinct recursive calls? (Recursion tree: F(n) calls F(n-1) and F(n-2); F(n-1) calls F(n-2) and F(n-3); F(n-2) calls F(n-3) and F(n-4); and so on.) Answer: only n, because the only possible calls are F(i) for i = 1, …, n. **The exponential time is due to large redundancy**

Fix 1: Memoization Store the solution to each subproblem the first time it is solved. Afterwards, just look up the solution instead of recomputing it.
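In Python, memoization is a one-line decorator; this sketch (my own naming) caches each F(i) the first time it is computed:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Memoized Fibonacci: each F(i) is computed once, then looked up."""
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55
```

With the cache, the call tree collapses to n distinct calls, so the running time drops from exponential to linear.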

Fix 2: Dynamic programming F(2) = F(0) + F(1), F(3) = F(2) + F(1), F(4) = F(3) + F(2), …, F(n) = F(n-1) + F(n-2). Yes, this is (trivial) dynamic programming! You already know how to do it: solve the smaller problems bottom up, storing them for reuse.
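The same computation, bottom up, as a short Python sketch (naming is mine):

```python
def fib_bottom_up(n):
    """Solve subproblems smallest-first, storing each for reuse."""
    if n <= 1:
        return n
    f = [0] * (n + 1)
    f[1] = 1
    for i in range(2, n + 1):
        f[i] = f[i - 1] + f[i - 2]  # F(i) from the two already-solved subproblems
    return f[n]

print(fib_bottom_up(12))  # 144
```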

You wonder what it means? So did Richard Bellman Richard Bellman: An interesting question is, ‘Where did the name, dynamic programming, come from?’ The 1950s were not good years for mathematical research. We had a very interesting gentleman in Washington named Wilson. He was Secretary of Defense, and he actually had a pathological fear and hatred of the word, research. I’m not using the term lightly; I’m using it precisely. His face would suffuse, he would turn red, and he would get violent if people used the term, research. You can imagine how he felt, then, about the term, mathematical. … Hence, I felt I had to do something to shield Wilson … from the fact that I was really doing mathematical research inside the RAND Corporation. What title, what name, could I choose? In the first place I was interested in planning…. But planning, is not a good word for various reasons. I decided therefore to use the word, ‘programming.’ … Then, I said let’s take a word that has an absolutely precise meaning, namely dynamic, in the classical physical sense. It also has a very interesting property as an adjective, and that is it’s impossible to use the word, dynamic, in a pejorative/negative sense. Try thinking of some combination that will possibly give it a pejorative meaning. It’s impossible. Thus, I thought dynamic programming was a good name. It was something not even a Congressman could object to. So I used it to hide my activities.

What does it do It often improves an otherwise exponential-time (exhaustive search) algorithm to a polynomial-time one. This is a very powerful method, and you need to master it well. Note: in the Fibonacci case there happens to be an even faster O(log n) algorithm by divide and conquer, but that is a special case due to the simplicity of the problem.

Outline For Today What is dynamic programming Coin change problem revisit Recipe of a dynamic programming algorithm Linear independent set

DP: optimal coin change We have seen this problem before: you are given an amount in cents, and you want to make change using the smallest number of coins possible. Sometimes a greedy algorithm gives the optimal solution. But sometimes it does not: for example, for the coin system (12, 5, 1), greedy gives 15 = 12 + 1 + 1 + 1, but a better answer is 15 = 5 + 5 + 5. Sometimes greedy cannot even make exact change at all: with coins (25, 10, 4), it gets stuck making change for 41 cents (it takes 25, then 10, and cannot cover the remaining 6). So how can we always find the optimal solution (fewest coins)? One way is dynamic programming.
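A quick sketch of the greedy strategy makes both failure modes easy to check (the function name is mine; returning None signals that greedy got stuck):

```python
def greedy_change(denoms, amount):
    """Repeatedly take the largest coin that still fits; may be suboptimal or fail."""
    coins = []
    for d in sorted(denoms, reverse=True):
        while amount >= d:
            coins.append(d)
            amount -= d
    return coins if amount == 0 else None  # None: greedy cannot finish

print(greedy_change([12, 5, 1], 15))   # [12, 1, 1, 1] -- 4 coins; optimal is 3
print(greedy_change([25, 10, 4], 41))  # None -- stuck at 6 after taking 25 and 10
```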

Optimal coin-changing The idea: again we go bottom up. But unlike the Fibonacci case (where we have just one option), here we have many: to make change for n cents, the optimal solution must use some denomination di. That is, the optimal solution is obtained by taking an optimal solution for n − di, for some di, and adding one coin of denomination di to it. We don't know which di is the right one, but some di must work, so we try them all, assuming we already know how to make optimal change for every amount < n cents. To formalize: letting opt[j] be the optimal number of coins to make change for j cents, we have: opt[j] = 1 + min over all di ≤ j of opt[j − di]

You can also draw a recursion tree (I will draw this tree in class). The tree size is exponential: roughly k^n nodes, where n is the total change and k is the number of denominations. But how many different nodes are there? Only n distinct subproblems (the amounts 1, …, n), each choosing among at most k options. All we need is to compute these n subproblems bottom up, at O(k) work each. So we already know how to do it!

Optimal coin-change:

coins(n, d[1..k])
/* returns optimal number of coins to make change for n
   using denominations d[1..k], with d[1] = 1 */
for j := 1 to n do                       // O(n) -- bottom up
    opt[j] := infinity
    for i := k downto 1 do               // O(k)
        if d[i] = j then
            opt[j] := 1; best[j] := j
        else if d[i] < j then
            a := 1 + opt[j - d[i]]       // opt[] of anything < j is already known
            if a < opt[j] then
                opt[j] := a; best[j] := d[i]   // best[j] remembers the path
return opt[n]

Running time: O(nk). Careful about the input size: n is written with N = log n bits, so in terms of the input size N this algorithm is pseudo-polynomial: T(N) = O(2^N · k).
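For concreteness, here is a runnable Python sketch of the same bottom-up table-filling, trying denominations from largest to smallest as in the pseudocode (the function name and the tuple return are my own choices; it assumes denomination 1 is present, as the pseudocode does):

```python
def coin_change(denoms, n):
    """opt[j] = fewest coins for amount j; best[j] = one coin used in an optimal solution."""
    INF = float("inf")
    opt = [0] + [INF] * n
    best = [0] * (n + 1)
    for j in range(1, n + 1):                    # bottom up over amounts
        for d in sorted(denoms, reverse=True):   # like "for i := k downto 1"
            if d <= j and 1 + opt[j - d] < opt[j]:
                opt[j] = 1 + opt[j - d]
                best[j] = d
    coins, j = [], n                             # walk best[] back to recover the coins
    while j > 0:
        coins.append(best[j])
        j -= best[j]
    return opt[n], coins

print(coin_change([1, 5, 18, 25], 29))  # (4, [18, 5, 5, 1])
```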

Example. Suppose we use the system of denominations (1, 5, 18, 25).

j:       1  2  3  4  5  6  7  8  9 10 11 12 13 14
opt[j]:  1  2  3  4  1  2  3  4  5  2  3  4  5  6
best[j]: 1  1  1  1  5  5  5  5  5  5  5  5  5  5

j:       15 16 17 18 19 20 21 22 23 24 25 26 27 28 29
opt[j]:   3  4  5  1  2  3  4  5  2  3  1  2  3  3  4
best[j]:  5  5  5 18 18 18 18 18 18 18 25 25 25 18 18

To represent 29, the greedy algorithm gives 25 + 1 + 1 + 1 + 1, with 5 coins. The DP algorithm: best[29] = 18, and 29 − 18 = 11; 11 has a representation of size 3, with best coin 5, and 11 − 5 = 6; 6 has a representation of size 2, with best coin 5, and 6 − 5 = 1. So DP gives 29 = 18 + 5 + 5 + 1, with 4 coins.

Outline For Today What is dynamic programming Coin change problem revisit Recipe of a dynamic programming algorithm Linear independent set

Recipe of a DP Algorithm 1: Identify a small number of subproblems (e.g., the F(i) in the Fibonacci case). 2: Gradually solve "larger" subproblems given the solutions to smaller, already-solved ones, usually via a recursive formula, e.g. F(k) = F(k-1) + F(k-2), or opt[j] = 1 + min over all coins di ≤ j of opt[j − di].

How to Recognize a DP Solution Step 1: Have a first-cut brute-force-like recursive algorithm, which expresses the larger solution in terms of solutions to smaller subproblems. Step 2: Recognize that different branches have a lot of overlapping work. Step 3: Recognize that there aren’t actually that many different subproblems.

Outline For Today What is dynamic programming Coin change problem revisit Recipe of a dynamic programming algorithm Linear independent set

Linear Independent Set Input: A line graph (a path) G(V, E), with a weight wv ≥ 0 on each vertex. Output: A max-weight independent set of vertices in G, i.e. a set of mutually non-adjacent vertices of maximum total weight. Running example: the path A–B–C–D with weights 1, 5, 6, 3.

Linear Independent Set Independent Set 1: {A, D}, weight 4.

Linear Independent Set Independent Set 2: {A, C}, weight 7.

Linear Independent Set (IS) **Max Independent Set**: {B, D}, weight 8.

Possible Approaches (1) Brute-force Search: exponential number of different independent sets

Possible Approaches (2) Greedy: Let S be ∅ while (some vertex can still be picked) let v be a max-weight vertex that is not adjacent to any vertex in S add v to S

On the example, the greedy solution is {A, C} (greedy picks C first, then A), with weight 7. Not correct! The optimum is {B, D}, with weight 8.

Dynamic Programming: First Steps Reason about what the optimal solution looks like **in terms of optimal solutions to subproblems**. Let S* be a max-weight IS of the path v1, v2, …, vn with weights w1, …, wn. Consider the last vertex vn. There are 2 possible cases: (1) vn ∉ S*, or (2) vn ∈ S*.

Case 1: vn ∉ S* Consider G′ = G − vn (the path v1, …, vn−1). Then obviously S* is a max-weight IS of G′: S* is optimal for the subproblem G′.

Case 2: vn ∈ S* Then vn−1 ∉ S* (otherwise the independence of S* would be violated). Let G′′ be G − {vn, vn−1} (the path v1, …, vn−2). Claim: S* − {vn} is optimal for the subproblem G′′.

Proof of S* − {vn}'s optimality in G′′ First note that S* − {vn} is an IS in G′′. Now assume it is not optimal: let S′′ be an IS of G′′ with w(S′′) > w(S* − {vn}). Then S′′ ∪ {vn} is an IS of G (vn's only neighbor, vn−1, is not in G′′), and it has weight > w(S*), contradicting S*'s optimality.

Summary of 2 Cases vn ∉ S* => S* is optimal for G′ = G − {vn} vn ∈ S* => S* − {vn} is optimal for G′′ = G − {vn, vn−1} If we knew which case we're in, we'd know how to recurse, and we'd be done!

A Possible Recursive Algorithm Recurse on both cases and return the better solution.

Recursive-Linear-IS-1(G(V, E) and weights):
    Let S1 = Recursive-Linear-IS-1(G′)
    Let S2 = Recursive-Linear-IS-1(G′′)
    return the better of S1 and S2 ∪ {vn}

Good news: the algorithm is correct. Problem: this looks like brute-force search. Note the resemblance to the Fibonacci case!
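To see the brute-force character concretely, here is a Python sketch of the two-case recursion, returning just the optimal weight (the function name and list representation are my own):

```python
def naive_linear_is(w):
    """Max weight of an IS of a path with vertex weights w[0..n-1], by plain recursion."""
    n = len(w)
    if n == 0:
        return 0
    if n == 1:
        return w[0]
    case1 = naive_linear_is(w[:-1])          # v_n not in S*: recurse on G'
    case2 = w[-1] + naive_linear_is(w[:-2])  # v_n in S*, so v_{n-1} is not: recurse on G''
    return max(case1, case2)

print(naive_linear_is([1, 5, 6, 3]))  # 8, the weight of {B, D}
```

Correct, but every call spawns two more, so the running time is exponential, exactly as the next slide analyzes.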

Why is The Runtime Exponential? T(n) = T(n-1) + T(n-2) + O(1) ≥ 2T(n-2) + O(1), so T(n) = Ω(2^(n/2)). (Recursion tree: level 0 has 1 node T(n); level 1 has 2 nodes T(n-2); level 2 has 4 nodes T(n-4); …; the depth is linear, about n/2, so the last level has about 2^(n/2) nodes.)

Q: How many distinct recursive calls? (Recursion tree: Alg(G) calls Alg(G′) and Alg(G′′); Alg(G′) calls Alg(G′′) and Alg(G′′′); and so on.) Answer: only n, because each input is a prefix of v1 … vn. **The exponential time is due to large redundancy**

DP: Bottom-up Iterative Reformulation Let Gi be the prefix of G on the first i vertices from the left. Let A be an array where A[i] = the weight of a max-weight IS of Gi.

procedure DP-Linear-IS(G(V,E)):
    Base cases: A[0] := 0, A[1] := w1
    for i = 2, 3, …, n:
        A[i] := max{ A[i-1], A[i-2] + wi }
    return A[n]

Note the resemblance to F(n) = F(n-1) + F(n-2)!
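DP-Linear-IS translates directly into Python; this sketch (my own naming and 0-based list indexing) returns the whole array A rather than just A[n], which the reconstruction step later will need:

```python
def dp_linear_is(w):
    """A[i] = weight of a max-weight IS of the prefix on the first i vertices."""
    n = len(w)
    A = [0] * (n + 1)           # A[0] = 0 (empty prefix)
    if n >= 1:
        A[1] = w[0]
    for i in range(2, n + 1):
        A[i] = max(A[i - 1], A[i - 2] + w[i - 1])  # case 1 vs case 2
    return A

print(dp_linear_is([1, 5, 6, 3])[-1])  # 8
```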

Runtime & Correctness of DP-Linear-IS Runtime: O(n) — we just loop once through the array. Correctness: by induction (exercise). Space: O(n), but constant space is enough (why?) unless we also want to reconstruct the actual IS. What if we do want to reconstruct the IS?

Reconstructing the Optimal IS (1) Option 1: Store with each A[i] the actual optimal IS of Gi => quadratic space (should avoid in practice). Option 2: Trace backwards through A and reconstruct the solution. Claim: vi ∈ opt IS for Gi iff wi + w(opt IS of Gi−2) ≥ w(opt IS of Gi−1). Proof: same arguments as for S*'s optimality in G′ and S* − {vn}'s optimality in G′′. Q: Given this claim, how can we reconstruct the opt IS?

Reconstructing the Optimal IS (2) Claim: vi ∈ opt IS for Gi iff wi + w(opt IS of Gi−2) ≥ w(opt IS of Gi−1), i.e. iff wi + A[i−2] ≥ A[i−1]. Example (middle entries elided on the slide): weights 1, 3, 6, …, wn−1 = 7, wn = 3, giving A = 1, 3, 7, …, A[n−3] = 83, A[n−2] = 85, A[n−1] = 90. Q: Is vn in the optimal set? A: No: wn + A[n−2] = 3 + 85 ≤ A[n−1] = 90 => we were in Case 1. Q: Is vn−1 in the optimal set? A: Yes: wn−1 + A[n−3] = 7 + 83 ≥ A[n−2] = 85 => we were in Case 2.

Reconstructing the Optimal IS (3)

procedure DP-Linear-IS-Reconstruct(G(V,E)):
    A := the array A[0..n] computed by DP-Linear-IS(G(V,E))
    let S := ∅
    i := n
    while i ≥ 1:
        if i = 1 or wi + A[i-2] ≥ A[i-1]:   // Case 2: vi is in the opt IS
            put vi into S; i := i - 2
        else:                                // Case 1: vi is not
            i := i - 1
    return S
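The backward trace can be sketched in Python as well; this version is self-contained (it recomputes the array A inline as DP-Linear-IS does, and the function name and 1-based vertex indices in the output are my own choices):

```python
def reconstruct_is(w):
    """Fill A bottom-up, then trace backwards to recover a max-weight IS."""
    n = len(w)
    A = [0] * (n + 1)
    if n >= 1:
        A[1] = w[0]
    for i in range(2, n + 1):
        A[i] = max(A[i - 1], A[i - 2] + w[i - 1])
    S, i = [], n
    while i >= 1:
        if i == 1 or w[i - 1] + A[i - 2] >= A[i - 1]:  # Case 2: v_i is in the IS
            S.append(i)                                 # record 1-based vertex index
            i -= 2
        else:                                           # Case 1: v_i is not
            i -= 1
    return sorted(S)

print(reconstruct_is([1, 5, 6, 3]))  # [2, 4], i.e. {B, D} with weight 8
```

The trace does O(1) work per step and i only decreases, so reconstruction adds only O(n) time on top of filling A.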