Dynamic Programming
Expected Outcomes
Students should be able to:
- Write down the four steps of dynamic programming
- Compute a Fibonacci number and the binomial coefficients by dynamic programming
- Compute the longest common subsequence and the shortest common supersequence of two given sequences by dynamic programming
- Solve the investment problem by dynamic programming
Dynamic Programming
Dynamic programming is a general algorithm design technique, invented by the American mathematician Richard Bellman in the 1950s to solve optimization problems.
Main idea:
- solve several smaller (overlapping) subproblems
- record their solutions in a table so that each subproblem is solved only once
- the final state of the table will be (or will contain) the solution
Dynamic programming vs. divide-and-conquer: both solve a problem by dividing it into smaller subproblems, but they differ in whether the subproblems overlap and whether their solutions are stored. Divide-and-conquer partitions the problem into independent subproblems; dynamic programming applies when the subproblems are not independent, i.e., when subproblems share subsubproblems. In that case a divide-and-conquer algorithm does more work than necessary, repeatedly solving the common subproblems, while a dynamic programming algorithm solves every subsubproblem just once and saves its answer in a table, thereby avoiding the work of recomputing the answer every time the subproblem is encountered.
Frame of Dynamic Programming
Problems solved by dynamic programming have these properties:
- the solution can be expressed in a recursive way
- subproblems occur repeatedly
- a subsequence of an optimal solution is an optimal solution to the corresponding subproblem
The four-step frame:
1. Characterize the structure of an optimal solution
2. Recursively define the value of an optimal solution
3. Compute the value of an optimal solution in a bottom-up fashion
4. Construct an optimal solution from computed information
Three Basic Components
The development of a dynamic programming algorithm has three basic components:
- a recurrence relation (for defining the value/cost of an optimal solution);
- a tabular computation (for computing the value of an optimal solution);
- a backtracing procedure (for delivering an optimal solution).
Example: Fibonacci Numbers
Recall the definition of the Fibonacci numbers:
f(0) = 0
f(1) = 1
f(n) = f(n-1) + f(n-2)
Computing the nth Fibonacci number recursively (top-down) expands a call tree in which the same subproblems recur:
f(n) = f(n-1) + f(n-2)
f(n-1) = f(n-2) + f(n-3)
f(n-2) = f(n-3) + f(n-4)
...
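For contrast, the top-down recursion without a table can be sketched directly in Python (the function name is ours); it recomputes the same values over and over:

```python
def fib_naive(n):
    """Plain top-down recursion: f(n-2) is recomputed inside both the
    f(n-1) call and the f(n) call, so the number of calls grows
    exponentially in n -- exactly the waste dynamic programming avoids."""
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)
```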
Example: Fibonacci Numbers (bottom-up)
Computing the nth Fibonacci number using bottom-up iteration:
f(0) = 0
f(1) = 1
f(2) = 0 + 1 = 1
f(3) = 1 + 1 = 2
f(4) = 1 + 2 = 3
f(5) = 2 + 3 = 5
...
f(n) = f(n-1) + f(n-2)
ALGORITHM Fib(n)
  F[0] ← 0; F[1] ← 1
  for i ← 2 to n do
    F[i] ← F[i-1] + F[i-2]
  return F[n]
This takes O(n) time and O(n) extra space for the table.
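ALGORITHM Fib can be transcribed directly into Python (a minimal sketch):

```python
def fib(n):
    """Bottom-up DP: each table entry F[i] is computed exactly once from
    the two entries below it -- O(n) time, O(n) extra space."""
    if n < 2:
        return n
    F = [0] * (n + 1)
    F[1] = 1
    for i in range(2, n + 1):
        F[i] = F[i - 1] + F[i - 2]
    return F[n]
```

Since each step only reads the two previous entries, keeping just those two values instead of the whole table reduces the extra space to O(1).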
Examples of Dynamic Programming
- Computing binomial coefficients
- Computing the longest common subsequence
- Computing the shortest common supersequence
- Warshall's algorithm for transitive closure
- Floyd's algorithm for all-pairs shortest paths
- Some instances of difficult discrete optimization problems: knapsack
Computing Binomial Coefficients
A binomial coefficient, denoted C(n, k), is the number of combinations of k elements from an n-element set (0 ≤ k ≤ n).
Recurrence relation (a problem with 2 overlapping subproblems):
C(n, k) = C(n-1, k-1) + C(n-1, k) for n > k > 0, and C(n, 0) = C(n, n) = 1
Dynamic programming solution: record the values of the binomial coefficients in a table of n+1 rows and k+1 columns, numbered from 0 to n and 0 to k respectively. The table is Pascal's triangle:
1
1 1
1 2 1
1 3 3 1
1 4 6 4 1
1 5 10 10 5 1
…
Dynamic Binomial Coefficient Algorithm
for i ← 0 to n do
  for j ← 0 to min(i, k) do
    if j = 0 or j = i then
      BiCoeff[i, j] ← 1
    else
      BiCoeff[i, j] ← BiCoeff[i-1, j-1] + BiCoeff[i-1, j]
    end if
  end for j
end for i
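The same algorithm as a Python sketch (C plays the role of the BiCoeff table):

```python
def binomial(n, k):
    """Fill Pascal's triangle row by row:
    C[i][j] = C[i-1][j-1] + C[i-1][j], with C[i][0] = C[i][i] = 1.
    O(nk) time and space."""
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                C[i][j] = 1
            else:
                C[i][j] = C[i - 1][j - 1] + C[i - 1][j]
    return C[n][k]
```

For example, binomial(5, 2) returns 10, matching row 5 of the triangle above.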
Longest Common Subsequence (LCS) A subsequence of a sequence S is obtained by deleting zero or more symbols from S. For example, the following are all subsequences of “president”: pred, sdn, predent. The longest common subsequence problem is to find a maximum length common subsequence between two sequences.
LCS
For instance:
Sequence 1: president
Sequence 2: providence
Their LCS is priden.
LCS
Another example:
Sequence 1: algorithm
Sequence 2: alignment
One of its LCSs is algm.
How to compute an LCS?
Let A = a1 a2 … am and B = b1 b2 … bn.
Let len(i, j) be the length of an LCS between a1 a2 … ai and b1 b2 … bj.
With proper initializations, len(i, j) can be computed as follows:
len(i, j) = 0                               if i = 0 or j = 0,
len(i, j) = len(i-1, j-1) + 1               if i, j > 0 and ai = bj,
len(i, j) = max{len(i, j-1), len(i-1, j)}   if i, j > 0 and ai ≠ bj.
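The recurrence translates directly into a tabular computation; a minimal Python sketch (the function name is ours):

```python
def lcs_len(A, B):
    """len(i, j) table: length of an LCS of the first i symbols of A and
    the first j symbols of B.  Row 0 and column 0 stay 0 (initialization)."""
    m, n = len(A), len(B)
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if A[i - 1] == B[j - 1]:                    # ai = bj
                L[i][j] = L[i - 1][j - 1] + 1
            else:                                       # ai != bj
                L[i][j] = max(L[i][j - 1], L[i - 1][j])
    return L[m][n]
```

On the examples above, lcs_len("president", "providence") returns 6 (the length of priden) and lcs_len("algorithm", "alignment") returns 4.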
Running time and memory: O(mn) and O(mn).
The backtracing algorithm
To deliver an actual LCS (not just its length), backtrace through the table from entry (m, n): at each cell, follow the choice that produced its value, emitting ai whenever ai = bj.
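A self-contained Python sketch that fills the len table and then backtraces to recover one LCS (names are ours, not from the slides):

```python
def lcs(A, B):
    """Fill the len table for A and B, then backtrace from (m, n)
    to deliver one longest common subsequence."""
    m, n = len(A), len(B)
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if A[i - 1] == B[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1
            else:
                L[i][j] = max(L[i][j - 1], L[i - 1][j])
    # Backtrace: follow the choice that produced each cell's value.
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if A[i - 1] == B[j - 1]:      # matched symbol belongs to the LCS
            out.append(A[i - 1])
            i, j = i - 1, j - 1
        elif L[i - 1][j] >= L[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))
```

On the slides' example, lcs("president", "providence") returns "priden".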
Shortest Common Supersequence (SCS)
Definition: Let X and Y be two sequences. A sequence Z is a supersequence of X and Y if both X and Y are subsequences of Z.
Shortest common supersequence problem:
Input: two sequences X and Y.
Output: a shortest common supersequence of X and Y.
Example: X = abc and Y = abb. Both abbc and abcb are shortest common supersequences of X and Y.
How to compute an SCS?
Recursive equation: let len[i, j] be the length of an SCS of X[1..i] and Y[1..j]. len[i, j] can be computed as follows:
len[i, j] = j                                       if i = 0,
len[i, j] = i                                       if j = 0,
len[i, j] = len[i-1, j-1] + 1                       if i, j > 0 and xi = yj,
len[i, j] = min{len[i, j-1] + 1, len[i-1, j] + 1}   if i, j > 0 and xi ≠ yj.
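A Python sketch of the SCS recurrence plus a backtracing step that delivers one supersequence (function and variable names are ours):

```python
def scs(X, Y):
    """Compute len[i][j] = length of an SCS of X[1..i] and Y[1..j],
    then backtrace from (m, n) to deliver one shortest supersequence."""
    m, n = len(X), len(Y)
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        L[i][0] = i                      # len[i, 0] = i
    for j in range(n + 1):
        L[0][j] = j                      # len[0, j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:     # xi = yj: emit the symbol once
                L[i][j] = L[i - 1][j - 1] + 1
            else:                        # xi != yj: copy the cheaper symbol
                L[i][j] = min(L[i][j - 1], L[i - 1][j]) + 1
    # Backtrace; any leftover prefix of X or Y is copied verbatim.
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:
            out.append(X[i - 1]); i -= 1; j -= 1
        elif L[i - 1][j] <= L[i][j - 1]:
            out.append(X[i - 1]); i -= 1
        else:
            out.append(Y[j - 1]); j -= 1
    out.extend(reversed(X[:i]))
    out.extend(reversed(Y[:j]))
    return "".join(reversed(out))
```

On the slides' example, scs("abc", "abb") has length 4 (one answer is abbc, the other abcb; which one is returned depends on how ties are broken).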
Exercise
Taking the LCS algorithm as a model, write down the SCS algorithm and analyze it.
An Interesting Example: the Investment Problem
Suppose there are m dollars and n products. Let fi(x) be the profit of investing x dollars in product i. How should the investment be arranged so that the total profit f1(x1) + f2(x2) + … + fn(xn) is maximized?
Instance: 5 thousand dollars, 4 products
x    f1(x)  f2(x)  f3(x)  f4(x)
1    11     0      2      20
2    12     5      10     21
3    13     10     30     22
4    14     15     32     23
5    15     20     40     24
Let Fk(x) be the optimum profit from investing x thousand dollars in the first k products, and xk(x) the amount invested in product k to achieve Fk(x).
Dynamic programming table:
x    F1(x) x1(x)   F2(x) x2(x)   F3(x) x3(x)   F4(x) x4(x)
1    11    1       11    0       11    0       20    1
2    12    2       12    0       13    1       31    1
3    13    3       16    2       30    3       33    1
4    14    4       21    3       41    3       50    1
5    15    5       26    4       43    4       61    1
Solution: x1 = 1, x2 = 0, x3 = 3, x4 = 1; F4(5) = 61.
Algorithm for the Investment Problem
for y ← 1 to m
  F1(y) ← f1(y)
for k ← 2 to n
  for y ← 1 to m
    Fk(y) ← max{fk(xk) + Fk-1(y - xk) : 0 ≤ xk ≤ y}
return Fn(m)
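A Python sketch of the algorithm, extended with a choice table so the allocation (x1, …, xn) can be backtraced; the profit data below follows the slides' instance, with fi(0) = 0 assumed:

```python
def invest(f, m):
    """f[k][x]: profit of investing x (thousand) dollars in product k+1,
    for x = 0..m.  Returns (max total profit, allocation [x1, ..., xn])."""
    n = len(f)
    F = list(f[0][:m + 1])               # F1(y) = f1(y)
    choice = [list(range(m + 1))]        # x1(y) = y for the first product
    for k in range(1, n):
        newF = [0] * (m + 1)
        pick = [0] * (m + 1)
        for y in range(m + 1):
            best, arg = None, 0
            for xk in range(y + 1):      # 0 <= xk <= y
                v = f[k][xk] + F[y - xk]
                if best is None or v > best:
                    best, arg = v, xk
            newF[y], pick[y] = best, arg
        F = newF
        choice.append(pick)
    xs, y = [0] * n, m                   # backtrace the allocation
    for k in range(n - 1, -1, -1):
        xs[k] = choice[k][y]
        y -= xs[k]
    return F[m], xs

# Profit table from the slides' instance (column fi(0) = 0 added):
profits = [
    [0, 11, 12, 13, 14, 15],   # f1
    [0, 0, 5, 10, 15, 20],     # f2
    [0, 2, 10, 30, 32, 40],    # f3
    [0, 20, 21, 22, 23, 24],   # f4
]
```

invest(profits, 5) returns (61, [1, 0, 3, 1]), matching the DP table's solution.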
Time Complexity
For each Fk(x) (2 ≤ k ≤ n, 1 ≤ x ≤ m), there are x+1 additions and x comparisons. Summed over x = 1..m this is O(m²) operations per product, so the whole algorithm runs in O(nm²) time.