Lecture 10. Dynamic Programming I
CS341, Feb. 6, Tuesday
Outline For Today
What is dynamic programming
Coin change problem revisited
Recipe of a dynamic programming algorithm
Linear independent set
What is dynamic programming
Dynamic programming (DP) is a method for solving a complex problem by breaking it down into a collection of simpler subproblems, solving each of those subproblems just once, and storing their solutions. You all know and have used dynamic programming!
Remember Fibonacci numbers?
Talk at U of Maryland
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, …
F(0) = 0, F(1) = 1, F(n) = F(n-1) + F(n-2).
If we are not careful and implement this recurrence directly as a recursive algorithm, we end up with an exponential number of redundant recursive calls. However, how many of these calls are distinct?
Q: How many distinct recursive calls?
[Recursion tree: F(n) calls F(n-1) and F(n-2); those call F(n-2), F(n-3), F(n-3), F(n-4); …]
Answer: only n, because the only possible calls are F(i), i = 1 … n. **The exponential time is due to large redundancy**
Fix 1: Memoization
Store the solution to each subproblem the first time it is solved. Afterwards, just look the solution up instead of recomputing it.
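As a concrete illustration (a minimal Python sketch, not from the slides; the function name `fib` is mine), `functools.lru_cache` does the memoization bookkeeping for us:

```python
from functools import lru_cache

# Memoized Fibonacci: each F(i) is computed at most once;
# every later call is a table lookup, so total work is O(n).
@lru_cache(maxsize=None)
def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)
```

Without the `@lru_cache` line, this is exactly the exponential-time recursion from the previous slide.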
Fix 2: Dynamic programming
F(2) = F(0) + F(1)
F(3) = F(2) + F(1)
F(4) = F(3) + F(2)
…
F(n) = F(n-1) + F(n-2)
Yes, this is (trivial) dynamic programming! You already know how to do it: solve the smaller subproblems bottom up, storing each solution for reuse.
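The bottom-up version can be sketched in Python (the function name is illustrative): it fills in F(0), F(1), …, F(n) in increasing order, and since each value depends only on the previous two, it keeps just those:

```python
def fib_bottom_up(n):
    # Solve subproblems in increasing order; F(i) needs only
    # F(i-1) and F(i-2), so two variables suffice (O(1) space).
    prev, curr = 0, 1          # F(0), F(1)
    for _ in range(n):
        prev, curr = curr, prev + curr
    return prev                # now holds F(n)
```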
You wonder what the name means? So did Richard Bellman
Richard Bellman: An interesting question is, ‘Where did the name, dynamic programming, come from?’ The 1950s were not good years for mathematical research. We had a very interesting gentleman in Washington named Wilson. He was Secretary of Defense, and he actually had a pathological fear and hatred of the word, research. I’m not using the term lightly; I’m using it precisely. His face would suffuse, he would turn red, and he would get violent if people used the term, research. You can imagine how he felt, then, about the term, mathematical. … Hence, I felt I had to do something to shield Wilson … from the fact that I was really doing mathematical research inside the RAND Corporation. What title, what name, could I choose? In the first place I was interested in planning…. But planning, is not a good word for various reasons. I decided therefore to use the word, ‘programming.’ … Then, I said let’s take a word that has an absolutely precise meaning, namely dynamic, in the classical physical sense. It also has a very interesting property as an adjective, and that is it’s impossible to use the word, dynamic, in a pejorative/negative sense. Try thinking of some combination that will possibly give it a pejorative meaning. It’s impossible. Thus, I thought dynamic programming was a good name. It was something not even a Congressman could object to. So I used it to hide my activities.
What does it do
It often improves an otherwise exponential-time (exhaustive search) algorithm to a polynomial-time algorithm. This is a very powerful method; you really need to master it well. Note: in the Fibonacci case there happens to be an even faster O(log n) algorithm by divide and conquer, but that is a special case due to the simplicity of the problem.
Outline For Today
What is dynamic programming
Coin change problem revisited
Recipe of a dynamic programming algorithm
Linear independent set
DP: optimal coin change
We have seen this problem before: you are given an amount in cents, and you want to make change with the smallest number of coins possible. Sometimes a greedy algorithm gives the optimal solution, but sometimes it does not. For example, for the coin system (12, 5, 1), greedy gives 15 = 12 + 1 + 1 + 1 (4 coins), but a better answer is 15 = 5 + 5 + 5 (3 coins). Sometimes greedy cannot even find valid change at all: with (25, 10, 4), greedy gets stuck making change for 41 cents (25 + 10 leaves 6 = 4 + 2, and 2 cannot be made). So how can we always find the optimal solution (fewest coins)? One way is dynamic programming.
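A quick Python check of the two failure modes above (a throwaway sketch; `greedy_change` is a made-up name):

```python
def greedy_change(amount, denoms):
    # Repeatedly take the largest denomination that still fits.
    coins = []
    for d in sorted(denoms, reverse=True):
        while amount >= d:
            amount -= d
            coins.append(d)
    return coins if amount == 0 else None  # None: greedy got stuck

print(greedy_change(15, (12, 5, 1)))   # [12, 1, 1, 1]: 4 coins, optimum is 3
print(greedy_change(41, (25, 10, 4)))  # None: greedy cannot finish
```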
Optimal coin-changing
The idea: again we go bottom up. But unlike the Fibonacci case (where we had just one option), here we have many: to make change for n cents, the optimal solution must use some denomination d_i. That is, an optimal solution is obtained by taking an optimal solution for n − d_i, for some d_i, and adding one coin of d_i to it. We don't know which d_i to use, but some d_i must work, so we try them all, assuming we already know how to make optimal change for every amount < n cents. To formalize, letting opt[j] be the optimal number of coins to make change for j cents:
opt[j] = 1 + min over all d_i ≤ j of opt[j − d_i]
You can also draw a recursion tree:
I will draw this tree in class. The tree size is exponential, k^O(n), where n is the total change and k is the number of denominations. How many different nodes? Only n distinct subproblems, opt[1], …, opt[n], each of which tries at most k denominations. So all we need is to fill in these n values bottom up, with O(k) work each, and we already know how to do that!
Optimal coin-change: coins(n, d[1..k])
/* returns optimal number of coins to make change for n using denominations d[1..k], with d[1] = 1 */
for j := 1 to n do                      // O(n) -- bottom up
    opt[j] := infinity
    for i := k downto 1 do              // O(k)
        if d[i] = j then
            opt[j] := 1; best[j] := j
        else if d[i] < j then
            a := 1 + opt[j - d[i]]      // opt[j'] for every j' < j is already known
            if a < opt[j] then
                opt[j] := a; best[j] := d[i]   // best[j] remembers the path
return opt[n]
Running time: O(nk). Careful about the input size: n is written in N = log n bits, so as a function of the input size the running time is T(N) = O(2^N · k) -- pseudo-polynomial, not polynomial.
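The pseudocode above translates directly to Python (a sketch under my own naming; the slides' `best[]` array is used to recover one optimal set of coins):

```python
def coin_change(n, denoms):
    # opt[j] = fewest coins summing to j; best[j] = a coin used
    # in some optimal solution for j (for path reconstruction).
    INF = float("inf")
    opt = [0] + [INF] * n
    best = [0] * (n + 1)
    for j in range(1, n + 1):          # bottom up over amounts, O(n)
        for d in denoms:               # try every denomination, O(k)
            if d <= j and 1 + opt[j - d] < opt[j]:
                opt[j] = 1 + opt[j - d]
                best[j] = d
    coins, j = [], n
    while j > 0:                       # follow best[] back down to 0
        coins.append(best[j])
        j -= best[j]
    return opt[n], coins
```

On the earlier example, `coin_change(15, [12, 5, 1])` finds the 3-coin solution that greedy misses.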
Example
Suppose we use the system of denominations (1, 5, 18, 25). To represent 29, the greedy algorithm gives 29 = 25 + 1 + 1 + 1 + 1, with 5 coins. The DP algorithm: the best coin for 29 is 18, leaving 11; 11 has a representation of size 3, with best coin 5, leaving 6; 6 has a representation of size 2, with best coin 5, leaving 1. So DP gives 29 = 18 + 5 + 5 + 1, with 4 coins.
Outline For Today
What is dynamic programming
Coin change problem revisited
Recipe of a dynamic programming algorithm
Linear independent set
Recipe of a DP Algorithm
1: Identify a small number of subproblems (e.g., F(0), F(1), …, F(n) in the Fibonacci case).
2: Gradually solve "larger" subproblems given the solutions to smaller, already-solved subproblems, usually via a recursive formula, e.g.
F(k) = F(k-1) + F(k-2), or
opt[j] = 1 + min over all coins d_i ≤ j of opt[j − d_i]
How to Recognize a DP Solution
Step 1: Start from a first-cut, brute-force-like recursive algorithm that expresses the larger solution in terms of solutions to smaller subproblems.
Step 2: Recognize that different branches of the recursion repeat a lot of overlapping work.
Step 3: Recognize that there aren't actually that many distinct subproblems.
Outline For Today
What is dynamic programming
Coin change problem revisited
Recipe of a dynamic programming algorithm
Linear independent set
Linear Independent Set
Input: a line graph G(V, E), with a weight w_v ≥ 0 on each vertex.
Output: the max-weight independent set of vertices in G, i.e., a set of mutually non-adjacent vertices of maximum total weight.
[Figure: path A - B - C - D with weights 1, 5, 6, 3]
Linear Independent Set
Independent Set 1: {A, D}, weight 1 + 3 = 4.
Linear Independent Set
Independent Set 2: {A, C}, weight 1 + 6 = 7.
Linear Independent Set (IS)
**Max Independent Set**: {B, D}, weight 5 + 3 = 8.
Possible Approaches (1)
Brute-force search: there is an exponential number of different independent sets to check.
Possible Approaches (2)
Greedy:
let S be ∅
while some vertex not adjacent to any vertex in S remains:
    let v be a max-weight such vertex
    add v to S
Possible Approaches (2)
On the example (weights 1, 5, 6, 3), greedy first picks C, then A.
Greedy solution: {A, C}, weight 7. Not correct! (The optimum {B, D} has weight 8.)
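The greedy counterexample can be replayed in Python (an illustrative sketch; `greedy_linear_is` is my name, vertices are 0-indexed here):

```python
def greedy_linear_is(weights):
    # Path v1 - v2 - ... - vn. Repeatedly take the heaviest vertex
    # not adjacent to anything already chosen.
    chosen, blocked = [], set()
    for i in sorted(range(len(weights)), key=lambda i: -weights[i]):
        if i not in blocked:
            chosen.append(i)
            blocked.update({i - 1, i, i + 1})   # block i and its neighbors
    return sum(weights[i] for i in chosen), sorted(chosen)

# On weights (1, 5, 6, 3) for A, B, C, D, greedy picks C then A:
# weight 7, while the optimum {B, D} has weight 8.
```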
Dynamic Programming: First Steps
Reason about what the optimal solution looks like **in terms of optimal solutions to sub-problems**.
Let S* be a max-weight IS, and consider the last vertex v_n of the path v_1 - v_2 - … - v_{n-1} - v_n (weights w_1, …, w_n). There are 2 possible cases: (1) v_n ∉ S*, or (2) v_n ∈ S*.
Case 1: v_n ∉ S*
Consider G′ = G − v_n. Then obviously S* is the max-weight IS in G′: S* is optimal for the subproblem G′.
Case 2: v_n ∈ S*
Then v_{n-1} ∉ S* (otherwise the independence of S* would be violated). Let G″ = G − {v_n, v_{n-1}}.
Claim: S* − {v_n} is optimal for the subproblem G″.
Proof of S* − {v_n}'s optimality in G″
S* − {v_n} is clearly an IS in G″. Assume it is not optimal, and let S″ be an IS of G″ with w(S″) > w(S* − {v_n}). Then S″ ∪ {v_n} is an IS of G (G″ contains neither v_{n-1} nor v_n, so no vertex of S″ is adjacent to v_n), and it has weight > w(S*), contradicting S*'s optimality.
Summary of 2 Cases
v_n ∉ S* => S* is optimal for G′ = G − {v_n}
v_n ∈ S* => S* − {v_n} is optimal for G″ = G − {v_n, v_{n-1}}
If we knew which case we're in, we'd know how to recurse, and we'd be done!
A Possible Recursive Algorithm
Recurse on both cases and return the better solution.
Recursive-Linear-IS-1(G(V, E), weights):
    let S1 = Recursive-Linear-IS-1(G′)      // G′ = G − {v_n}
    let S2 = Recursive-Linear-IS-1(G″)      // G″ = G − {v_n, v_{n-1}}
    return the better of S1 and S2 ∪ {v_n}
Good news: the algorithm is correct. Problem: this looks like brute-force search. Note the resemblance to the Fibonacci case!
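A direct Python transcription of this recursion, returning just the optimal weight (a sketch; the name is mine), makes the exponential blow-up easy to observe:

```python
def recursive_linear_is(weights):
    # Case split on the last vertex: either it is excluded
    # (recurse on all but the last vertex), or included
    # (recurse on all but the last two, then add its weight).
    n = len(weights)
    if n == 0:
        return 0
    if n == 1:
        return weights[0]
    skip = recursive_linear_is(weights[:n - 1])                   # v_n not in S*
    take = weights[n - 1] + recursive_linear_is(weights[:n - 2])  # v_n in S*
    return max(skip, take)
```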
Why is The Runtime Exponential?
T(n) = T(n-1) + T(n-2) + O(1), so T(n) ≥ 2T(n-2) + O(1) = Ω(2^{n/2}).
[Recursion tree: 1 call at the top, 2 calls of size n-2, 4 calls of size n-4, …; the depth is linear (n/2 levels), giving 2^{n/2} calls at the bottom.]
Q: How many distinct recursive calls?
[Recursion tree: Alg(G) calls Alg(G′) and Alg(G″); those call Alg(G″), Alg(G‴), Alg(G‴), Alg(G⁗), …]
Answer: only n, because each input is a prefix of v_1 … v_n. **The exponential time is due to large redundancy**
DP: Bottom-up Iterative Reformulation
Let G_i be the prefix of G consisting of the first i vertices from the left, and let A be an array where A[i] = the max weight of an IS of G_i.
procedure DP-Linear-IS(G(V, E)):
    // base cases
    A[0] = 0
    A[1] = w_1
    for i = 2, 3, …, n:
        A[i] = max{ A[i-1], A[i-2] + w_i }
    return A[n]
Note the resemblance to F(n) = F(n-2) + F(n-1)!
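In Python, the bottom-up procedure might look like this (a sketch with my own naming; it returns the whole array A, since the backward trace on the next slides needs it):

```python
def dp_linear_is(weights):
    # A[i] = weight of a max-weight IS of the prefix G_i
    # (the first i vertices of the path).
    n = len(weights)
    A = [0] * (n + 1)
    if n >= 1:
        A[1] = weights[0]
    for i in range(2, n + 1):
        # either v_i is out (take A[i-1]) or in (take A[i-2] + w_i)
        A[i] = max(A[i - 1], A[i - 2] + weights[i - 1])
    return A
```

On the running example, `dp_linear_is([1, 5, 6, 3])[-1]` gives 8, the weight of {B, D}.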
Runtime & Correctness of DP-Linear-IS
Runtime: O(n) => a single loop through the array.
Correctness: by induction (exercise).
Space: O(n) => but constant space suffices (why?), unless we want to reconstruct the actual IS.
What if we also want to reconstruct the IS?
Reconstructing the Optimal IS (1)
Option 1: store with each A[i] the optimal IS_i itself => quadratic space (should avoid in practice).
Option 2: trace backward through A and reconstruct the solution.
Claim: v_i ∈ opt IS of G_i iff w_i + w(opt IS of G_{i-2}) ≥ w(opt IS of G_{i-1}).
Proof: the same argument as for S*'s optimality in G′ and S* − {v_n}'s optimality in G″.
Q: Given this claim, how can we reconstruct the opt IS?
Reconstructing the Optimal IS (2)
Claim: v_i ∈ opt IS of G_i iff w_i + w(opt IS of G_{i-2}) ≥ w(opt IS of G_{i-1}).
Example, with the last entries of A being …, A[n-3] = 83, A[n-2] = 85, A[n-1] = 90, and weights w_{n-1} = 7, w_n = 3:
Q: Is v_n in the optimal set? A: No: w_n + A[n-2] = 3 + 85 = 88 ≤ A[n-1] = 90 => we were in Case 1.
Q: Is v_{n-1} in the optimal set? A: Yes: w_{n-1} + A[n-3] = 7 + 83 = 90 ≥ A[n-2] = 85 => we were in Case 2.
Reconstructing the Optimal IS (3)
procedure DP-Linear-IS-Reconstruct(G(V, E)):
    compute the array A as in DP-Linear-IS(G(V, E))
    let S = ∅
    i = n
    while i ≥ 1:
        if w_i + A[i-2] ≥ A[i-1]:    // with A[j] = 0 for j < 0
            put v_i into S; i = i - 2
        else:
            i = i - 1
    return S
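The backward trace, combined with the table-filling loop, can be sketched in Python (names are mine; returned indices are 1-based to match the slides):

```python
def reconstruct_is(weights):
    # Fill A as in DP-Linear-IS, then trace backward:
    # v_i is taken exactly when w_i + A[i-2] >= A[i-1].
    n = len(weights)
    A = [0] * (n + 1)
    if n >= 1:
        A[1] = weights[0]
    for i in range(2, n + 1):
        A[i] = max(A[i - 1], A[i - 2] + weights[i - 1])
    S, i = [], n
    while i >= 1:
        prev2 = A[i - 2] if i >= 2 else 0   # A[j] = 0 for j < 0
        if weights[i - 1] + prev2 >= A[i - 1]:
            S.append(i)                      # v_i is in the optimal IS
            i -= 2
        else:
            i -= 1
    return sorted(S)
```

On the running example, `reconstruct_is([1, 5, 6, 3])` recovers {v_2, v_4}, i.e., {B, D} with weight 8.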