Published by Rosamund Lawson. Modified over 7 years ago.
Dynamic Programming
Typically applied to optimization problems:
characterize the structure of an optimal solution
recursively define the value of an optimal solution
compute the optimal value in a bottom-up fashion
construct the optimal solution from computed information (if more than the value is required)
Coverage:
example problem: assembly line scheduling
example problem: matrix chain multiplication
general characteristics of suitable problems
another problem: longest common subsequence
Assembly Line Scheduling
There are two “parallel” assembly lines
Each assembly line can perform any job
There is a cost to switch between assembly lines
For a special order item, what is the fastest route?
A brute force solution has complexity Θ(2^n) (why? each of the n stations can be done on either line, giving 2^n candidate routes)
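To see where the Θ(2^n) comes from, here is a brute-force sketch that enumerates every assignment of stations to lines. The entry, exit, station, and transfer times below are illustrative assumptions; the slides' figure with the actual values is not part of this text.

```python
from itertools import product

# Illustrative instance (assumed values; not taken from the slides' figure)
e = [2, 4]                      # entry times for lines 1 and 2
x = [3, 2]                      # exit times
a = [[7, 9, 3, 4, 8, 4],        # a[i][j]: processing time at station j on line i
     [8, 5, 6, 4, 5, 7]]
t = [[2, 3, 1, 3, 4],           # t[i][j]: cost to leave line i after station j
     [2, 1, 2, 2, 1]]           # and switch to the other line

def brute_force(e, a, t, x):
    n = len(a[0])
    best = float("inf")
    # Each of the 2^n tuples assigns every station to line 0 or line 1.
    for route in product((0, 1), repeat=n):
        cost = e[route[0]] + a[route[0]][0]
        for j in range(1, n):
            if route[j] != route[j - 1]:      # pay the transfer cost
                cost += t[route[j - 1]][j - 1]
            cost += a[route[j]][j]
        best = min(best, cost + x[route[-1]])
    return best

print(brute_force(e, a, t, x))   # 38 for these assumed values
```

With n = 6 this already examines 64 routes; the count doubles with every added station.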
A Particular Example
To develop and test our algorithm, we first choose a particular example with fixed values
The cost of switching is shown between the assembly lines
What is the cost of the minimal solution?
Optimal Substructure
An optimal solution to the entire problem depends on optimal solutions to subproblems
We formulate a solution for the assembly line problem stage-by-stage
A Recursive Solution
After some manipulation, we find the following recurrence relation
f1[1] = e1 + a1,1 and f2[1] = e2 + a2,1
f1[j] = min( f1[j-1] + a1,j , f2[j-1] + t2,j-1 + a1,j ) for j ≥ 2
f2[j] = min( f2[j-1] + a2,j , f1[j-1] + t1,j-1 + a2,j ) for j ≥ 2
f* = min( f1[n] + x1 , f2[n] + x2 )
Where fi[j] is the fastest possible time to Si,j
ti,j is the transfer time from line i after Si,j
ai,j is the processing time at Si,j
ei is the entry time for assembly line i
xi is the exit time from assembly line i
Recursion is not a good solution technique
The same subproblems are solved again & again
Let ri(j) be the number of references to fi[j]
It can be shown ri(j) = 2^(n-j)
So fi[1] is referenced 2^(n-1) times
Our improved strategy is to build a table to save previous results so that they don’t have to be recomputed each time
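The table-building strategy above can be sketched in a few lines: each f[i][j] is computed exactly once, left to right. As before, the instance values are illustrative assumptions rather than the slides' figure.

```python
def fastest_way(e, a, t, x):
    """Bottom-up assembly-line scheduling: each f[i][j] is computed once, O(n) total."""
    n = len(a[0])
    f = [[0] * n for _ in range(2)]   # f[i][j]: fastest time through station j on line i
    f[0][0] = e[0] + a[0][0]
    f[1][0] = e[1] + a[1][0]
    for j in range(1, n):
        for i in (0, 1):
            o = 1 - i                 # the other line
            f[i][j] = a[i][j] + min(f[i][j - 1],                # stay on line i
                                    f[o][j - 1] + t[o][j - 1])  # switch lines
    return min(f[0][n - 1] + x[0], f[1][n - 1] + x[1])

# Illustrative instance (assumed values; the slides' figure is not in the text)
e, x = [2, 4], [3, 2]
a = [[7, 9, 3, 4, 8, 4], [8, 5, 6, 4, 5, 7]]
t = [[2, 3, 1, 3, 4], [2, 1, 2, 2, 1]]
print(fastest_way(e, a, t, x))   # 38
```

The table replaces the 2^(n-j) repeated recursive references with a single stored value per station.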
The Fastest Way
Printing A Solution
The values for our particular problem are shown
This printout is “in reverse”; how can the print routine be changed to get a printout from Station 1 to Station 6?
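One answer to the slide's question: recurse all the way back to the first station before printing anything, so the output comes out in forward order. The names `line` and `last_line` here are hypothetical bookkeeping: `line[i][j]` is assumed to record the assembly line used at station j-1 on the fastest route ending at station j of line i.

```python
def stations_in_order(line, last_line, n):
    # Assumed bookkeeping: line[i][j] = assembly line used at station j-1
    # on the fastest route ending at station j of line i (0-based indices).
    out = []
    def walk(i, j):
        if j > 0:
            walk(line[i][j], j - 1)   # recurse toward station 1 first ...
        out.append((i + 1, j + 1))    # ... record on the way back out
    walk(last_line, n - 1)
    return out

# Tiny hand-made example: the route is line 1, then line 2, then line 2.
line = [[0, 0, 0], [0, 0, 1]]
for ln, st in stations_in_order(line, last_line=1, n=3):
    print(f"line {ln}, station {st}")
```

Printing on exit from the recursion reverses the reversed order, giving Station 1 first.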
Matrix Chain Multiplication
Matrix multiplication is associative, so given a chain of multiplications, many orderings are possible
The standard algorithm for a single multiply of a p × q matrix by a q × r matrix performs p·q·r scalar multiplications
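A minimal sketch of the single-pair multiply, showing where the p·q·r scalar multiplications come from:

```python
def matrix_multiply(A, B):
    """Textbook single-pair multiply: p*q*r scalar multiplications."""
    p, q, r = len(A), len(B), len(B[0])
    assert len(A[0]) == q, "inner dimensions must agree"
    C = [[0] * r for _ in range(p)]
    for i in range(p):          # p choices of row
        for j in range(r):      # r choices of column
            for k in range(q):  # q multiply-adds per entry
                C[i][j] += A[i][k] * B[k][j]
    return C

print(matrix_multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]]))   # [[19, 22], [43, 50]]
```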
Different Algorithm Speeds
A1 A2 A3 with sizes 10 x 100, 100 x 5, 5 x 50
Assume ( ( A1 A2 ) A3 ):
A1 A2 takes 10*100*5 = 5000 multiplications
the result times A3 takes 10*5*50 = 2500 multiplications
total multiplications is 7500
Assume ( A1 ( A2 A3 ) ):
A2 A3 takes 100*5*50 = 25000 multiplications
A1 times the result takes 10*100*50 = 50000 multiplications
total multiplications is 75000 (ten times more!)
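The two totals can be checked with a few lines of arithmetic:

```python
# Scalar-multiplication counts for the two parenthesizations above.
p = [10, 100, 5, 50]          # A1 is 10x100, A2 is 100x5, A3 is 5x50

left_first = p[0]*p[1]*p[2] + p[0]*p[2]*p[3]    # ((A1 A2) A3)
right_first = p[1]*p[2]*p[3] + p[0]*p[1]*p[3]   # (A1 (A2 A3))
print(left_first, right_first)                  # 7500 75000
```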
Number of Parenthesizations
The basic recurrence is
P(n) = 1 if n = 1
P(n) = sum for k = 1 to n-1 of P(k)·P(n-k) if n ≥ 2
The solution is the sequence of Catalan numbers, which grow as Ω(4^n / n^(3/2))
Since this is exponential in n, a brute force approach is not practical
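The recurrence can be evaluated directly (memoized here so it runs quickly); the values it produces are the Catalan numbers:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def parenthesizations(n):
    """P(n): number of ways to fully parenthesize a chain of n matrices."""
    if n == 1:
        return 1
    # Split into a prefix of k matrices and a suffix of n-k matrices.
    return sum(parenthesizations(k) * parenthesizations(n - k)
               for k in range(1, n))

print([parenthesizations(n) for n in range(1, 8)])   # [1, 1, 2, 5, 14, 42, 132]
```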
Structure of Optimal Parenthesization
An optimal parenthesization of A1..An has some final multiplication ( A1..Ak ) ( Ak+1..An )
The key observation is that the prefix chain A1..Ak must also be parenthesized optimally; the same is true of the postfix chain Ak+1..An
The optimal solution to the matrix-chain multiplication must contain optimal solutions to subproblem instances
This is a general characteristic of problems for which dynamic programming is well suited
A Recursive Solution
Let m[i,j] be the minimum number of multiplications needed to compute Ai..j
if i = j, there is only one matrix and 0 multiplications
if i < j, then we must find the k such that the count for Ai..k, plus the count for Ak+1..j, plus the cost of the final multiply, is minimal:
m[i,j] = min over i ≤ k < j of ( m[i,k] + m[k+1,j] + p_{i-1}·p_k·p_j )
For convenience we define s[i,j] to be a value of k achieving the minimum, so that m[i,j] = m[i,k] + m[k+1,j] + p_{i-1}·p_k·p_j with k = s[i,j]
Overlapping Subproblems
The total number of subproblems is relatively small: the number of pairs i, j such that 1 ≤ i ≤ j ≤ n is n(n+1)/2, which is Θ(n²)
A recursive algorithm would be exponential since it encounters each subproblem many times
The key idea is to solve each subproblem only once and build more complex solutions from the bottom up
The Basic Algorithm
Assume Ai has size p_{i-1} × p_i
The algorithm is given the n+1 values <p0, p1, .., pn>
A table m[1..n,1..n] stores each of the costs m[i,j]
An auxiliary table s[1..n,1..n] contains the index k that achieved the minimum
m[i,i] is set to 0 first; m[i,i+1] is computed next, then m[i,i+2], and the process continues in this fashion until all the m[i,j] are computed
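A sketch of this bottom-up computation, filling the table in the diagonal-by-diagonal order just described (1-indexed to match the slides):

```python
def matrix_chain_order(p):
    """Bottom-up matrix-chain order.
    p has n+1 entries; matrix Ai has size p[i-1] x p[i] (1-indexed)."""
    n = len(p) - 1
    INF = float("inf")
    m = [[0] * (n + 1) for _ in range(n + 1)]   # m[i][j]: min scalar mults for Ai..j
    s = [[0] * (n + 1) for _ in range(n + 1)]   # s[i][j]: best split point k
    for length in range(2, n + 1):              # chain length: 2, 3, ..., n
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = INF
            for k in range(i, j):               # try every final split
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j], s[i][j] = q, k
    return m, s

m, s = matrix_chain_order([10, 100, 5, 50])
print(m[1][3])   # 7500: the cheaper ordering from the earlier slide
```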
A Sample Problem
Complexity of Dynamic Solution
Matrix-chain-order has three nested loops, each index taking at most n values, so the complexity is O(n³)
The tables m and s each require Θ(n²) space
Overall the algorithm is much more efficient than the exponential-time method of enumerating all possible parenthesizations and checking each one
Constructing an Optimal Solution
The table s[1..n,1..n] helps reconstruct the solution; s[i,j] records the value k that splits Ai..j optimally
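A sketch of reconstructing the parenthesization from a split table s like the one just described. The table here is hand-built for the earlier three-matrix example rather than computed.

```python
def print_optimal_parens(s, i, j):
    """Recursively build the optimal parenthesization for Ai..Aj
    from a split table s where s[i][j] is the best final split point."""
    if i == j:
        return f"A{i}"                 # a single matrix needs no parentheses
    k = s[i][j]
    return (f"({print_optimal_parens(s, i, k)} "
            f"{print_optimal_parens(s, k + 1, j)})")

# Hand-built split table for the 3-matrix example (1-indexed):
# s[1][3] = 2 means the final multiply is (A1..A2)(A3).
s = [[0] * 4 for _ in range(4)]
s[1][2], s[2][3], s[1][3] = 1, 2, 2
print(print_optimal_parens(s, 1, 3))   # ((A1 A2) A3)
```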
Dynamic Programming Characteristics
Optimal Substructure:
the optimal solution to the problem contains within it optimal solutions to subproblems
the proof that the subproblem solutions must themselves be optimal is usually done by contradiction
you must then find a suitable space of subproblems
for the matrix chain problem, the subspace of all arbitrary subsequences of the input chain is unnecessarily large; chains of the form Ai through Aj where 1 ≤ i ≤ j ≤ n were adequate to solve the problem
Overlapping Subproblems - 1
A recursive solution will typically solve the same subproblem again and again
Overlapping Subproblems - 2
The total number of distinct subproblems is usually polynomial in the input size
Complexity of Recursive Solution
The recursive code has the complexity
T(1) ≥ 1
T(n) ≥ 1 + sum for k = 1 to n-1 of ( T(k) + T(n-k) + 1 ) for n > 1
Solving this recurrence relation shows T(n) = Ω(2^n)
Top Down with Memoization
The form of the program looks recursive, but once a subproblem is solved, the solution is stored so the same subproblem is never re-solved
Each of the Θ(n²) distinct calls to lookup-chain takes O(n) time, so the overall complexity is O(n³)
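A sketch of the memoized top-down version, in the spirit of lookup-chain; a dict plays the role of the lookup table:

```python
def memoized_matrix_chain(p):
    """Top-down matrix-chain cost with memoization.
    p has n+1 entries; matrix Ai has size p[i-1] x p[i] (1-indexed)."""
    n = len(p) - 1
    memo = {}
    def lookup(i, j):
        if (i, j) in memo:
            return memo[(i, j)]       # already solved: just look it up
        if i == j:
            result = 0
        else:
            result = min(lookup(i, k) + lookup(k + 1, j) + p[i - 1] * p[k] * p[j]
                         for k in range(i, j))
        memo[(i, j)] = result         # store so it is never re-solved
        return result
    return lookup(1, n)

print(memoized_matrix_chain([10, 100, 5, 50]))   # 7500, same as bottom-up
```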
Top-down vs. Bottom-up
If all subproblems must be solved to find an optimal solution, then the bottom-up approach of dynamic programming is usually faster
If some subproblems don’t need to be solved, then the top-down memoized solution can be faster (why? it only ever solves the subproblems that are actually reached)
Not All Problems Are Suitable - 1
Consider finding a shortest path between two vertices of a graph
This problem has optimal substructure: two shorter paths can be combined to give a shortest path
This works because the two shorter paths, say from q to r and from r to t, combine to give the shortest path from q to t
Not All Problems Are Suitable - 2
Now consider finding a longest simple path between two vertices
This problem does not have optimal substructure: combining two longest paths may produce an “illegal” path containing a cycle
The problem is that the subproblems are not independent; they may share edges
Longest Common Subsequence
A subsequence of a given sequence is the given sequence with some elements (maybe none) left out; for example, <B, M, K, L> is a subsequence of <A, B, M, G, P, K, D, L, R>
Longest Common Subsequence:
Given X = <A,B,C,B,D,A,B> and Y = <B,D,C,A,B,A>
<B,C,A> is a common subsequence of length 3
<B,C,B,A> is a longest common subsequence; it has length 4
Given two sequences we want to find the maximum length of any common subsequence
Optimal substructure of an LCS
For a sequence of length m, there are 2^m subsequences, so enumeration is impractical
Given a sequence X = <x1, x2, .., xm>, define the prefix Xi = <x1, x2, .., xi>
Let Z = <z1, .., zk> be any LCS of X and Y; then
if xm = yn, then zk = xm = yn and Zk-1 is an LCS of Xm-1 and Yn-1
if xm ≠ yn and zk ≠ xm, then Z is an LCS of Xm-1 and Y
if xm ≠ yn and zk ≠ yn, then Z is an LCS of X and Yn-1
This shows the LCS problem has an optimal-substructure property
A recursive solution to subproblems
Overlapping subproblems:
if xm = yn then we must find the LCS of Xm-1 and Yn-1 and then append xm = yn to this result
if xm ≠ yn then we must find the LCS of Xm-1 and Y and the LCS of X and Yn-1; whichever LCS is longer is the LCS of X and Y
A recursive formula for c[i,j], the length of the LCS for Xi and Yj:
c[i,j] = 0 if i = 0 or j = 0
c[i,j] = c[i-1,j-1] + 1 if i, j > 0 and xi = yj
c[i,j] = max( c[i,j-1], c[i-1,j] ) if i, j > 0 and xi ≠ yj
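The recurrence for c[i,j] transcribes almost directly into code; memoization keeps the overlapping subproblems from being solved more than once.

```python
from functools import lru_cache

def lcs_length_recursive(X, Y):
    """Direct transcription of the c[i,j] recurrence, memoized."""
    @lru_cache(maxsize=None)
    def c(i, j):
        if i == 0 or j == 0:
            return 0
        if X[i - 1] == Y[j - 1]:              # x_i = y_j (0-based strings)
            return c(i - 1, j - 1) + 1
        return max(c(i, j - 1), c(i - 1, j))  # drop the last of X or of Y
    return c(len(X), len(Y))

print(lcs_length_recursive("ABCBDAB", "BDCABA"))   # 4
```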
Finding the LCS Length
If one sequence is empty the LCS has length 0
The table b[1..m,1..n] points to the table entry corresponding to the optimal subproblem solution chosen when computing c[i,j]
The main loops compute the tables b and c in row-major order; the structure matches the three cases outlined (xi = yj, and two cases for xi ≠ yj)
The figure on the next slide shows how the calculations are performed
Since the table c is (m+1) x (n+1) and each entry takes O(1) to compute, the complexity of the algorithm is Θ(mn)
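A sketch of the bottom-up table computation, keeping both c and the arrow table b, with ties going up as the next slide describes:

```python
def lcs_length(X, Y):
    """Bottom-up LCS tables. Returns the length table c and arrow table b."""
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    b = [[None] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
                b[i][j] = "↖"                 # match: diagonal arrow
            elif c[i - 1][j] >= c[i][j - 1]:
                c[i][j] = c[i - 1][j]
                b[i][j] = "↑"                 # up wins, and ties go up
            else:
                c[i][j] = c[i][j - 1]
                b[i][j] = "←"
    return c, b

c, b = lcs_length("ABCBDAB", "BDCABA")
print(c[7][6])   # 4
```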
A Sample Solution
The answer is 4, found in the c[m,n] entry
If the letters match, the arrow points diagonally (↖) and the entry is one greater than its diagonal neighbor
If the letters do not match, the value is the maximum of the left and up entries, and the arrow points to the larger of left (←) or up (↑) (and up if they are equal)
Constructing an LCS
Intuitively you start at c[m,n] and follow the arrows; you print the character whenever the arrow is diagonal (↖)
This would print an LCS in reverse order, so the actual code is written recursively with the printing done on exit from recursion; this prints the LCS from left to right
The complexity of printing an LCS is Θ(m + n)
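A sketch of the reconstruction: build the c table, then walk backwards from (m, n), appending matched characters on exit from the recursion so the LCS comes out left to right.

```python
def lcs(X, Y):
    """Recover one LCS by walking the table backwards from (m, n)."""
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    out = []
    def walk(i, j):
        if i == 0 or j == 0:
            return
        if X[i - 1] == Y[j - 1]:          # diagonal arrow: part of the LCS
            walk(i - 1, j - 1)
            out.append(X[i - 1])          # append on exit: left-to-right order
        elif c[i - 1][j] >= c[i][j - 1]:  # up wins ties, matching the slides
            walk(i - 1, j)
        else:
            walk(i, j - 1)
    walk(m, n)
    return "".join(out)

print(lcs("ABCBDAB", "BDCABA"))   # BCBA
```

The walk moves up or left at every step, so it takes Θ(m + n) time once the table exists.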