Dynamic Programming
Expected Outcomes
Students should be able to:
- solve the all-pairs shortest paths problem by dynamic programming (Floyd's algorithm)
- solve the transitive closure problem by dynamic programming (Warshall's algorithm)
- solve the knapsack problem by dynamic programming
- solve the matrix chain multiplication problem by dynamic programming
Transitive Closure
The transitive closure of a directed graph with n vertices can be defined as the n×n matrix T = {tij} in which the element in the ith row (1 ≤ i ≤ n) and the jth column (1 ≤ j ≤ n) is 1 if there exists a nontrivial directed path (i.e., a directed path of positive length) from the ith vertex to the jth vertex; otherwise tij is 0.
Two approaches: a graph traversal-based algorithm and Warshall's algorithm.
Traversal-based: perform a traversal (DFS or BFS) starting from each of the n vertices, so the same digraph is traversed n times; we should hope that a better algorithm can be found.
[Figure: a 4-vertex digraph with edges 1→3, 2→1, 2→4, 4→2]
Adjacency matrix:          Transitive closure:
0 0 1 0                    0 0 1 0
1 0 0 1                    1 1 1 1
0 0 0 0                    0 0 0 0
0 1 0 0                    1 1 1 1
Warshall's Algorithm
Main idea: use a bottom-up method to construct the transitive closure of a digraph with n vertices through a series of n×n Boolean matrices T(0), …, T(k-1), T(k), …, T(n), where tij(k) = 1 iff there is a path from i to j whose intermediate vertices (if any) are all among 1, 2, …, k. In other words, a path from i to j exists iff there is an edge from i to j; or there is a path from i to j going through vertex 1; or going through vertex 1 and/or 2; or …; or going through any of the other vertices.
The question is: how do we obtain T(k) from T(k-1)?
For the example digraph:
T(0) (no intermediate vertices allowed; this is the adjacency matrix):
0 0 1 0
1 0 0 1
0 0 0 0
0 1 0 0
T(1) (vertex 1 allowed as an intermediate):
0 0 1 0
1 0 1 1
0 0 0 0
0 1 0 0
T(2) (vertices 1, 2 allowed):
0 0 1 0
1 0 1 1
0 0 0 0
1 1 1 1
T(3) (vertices 1, 2, 3 allowed; unchanged, since vertex 3 has no outgoing edges):
0 0 1 0
1 0 1 1
0 0 0 0
1 1 1 1
T(4) (vertices 1, 2, 3, 4 allowed; this is the transitive closure):
0 0 1 0
1 1 1 1
0 0 0 0
1 1 1 1
Warshall's Algorithm
In the kth stage, determining T(k) means determining, for every pair of vertices i, j, whether a path exists from i to j using only vertices among 1, …, k as intermediates:
tij(k) = 1 iff
  tij(k-1) = 1 (a path from i to j using only 1, …, k-1), or
  tik(k-1) = 1 and tkj(k-1) = 1 (a path from i to k and a path from k to j, each using only 1, …, k-1).
Rule for determining whether tij(k) should be 1 in T(k):
  If the element tij is 1 in T(k-1), it remains 1 in T(k).
  If the element tij is 0 in T(k-1), it is changed to 1 in T(k) iff the element in its row i and column k and the element in its column j and row k are both 1's in T(k-1).
Equivalently: tij(k) = 1 either because a path from i to j exists whose highest-numbered intermediate vertex is less than k (tij(k-1) = 1), or because one exists whose highest-numbered intermediate vertex is exactly k, i.e., i, …, k, …, j (tik(k-1) = 1 and tkj(k-1) = 1).
Compute Transitive Closure
Time Complexity: Θ(n³) (three nested loops over the n vertices).
Space Complexity: Θ(n²) if only the current and previous matrices are kept. Less space? Yes: T(k) can overwrite T(k-1) in place, because row k and column k do not change during the kth stage.
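A minimal Python sketch of this in-place ("less space") version, tested on the 4-vertex digraph above; the function and variable names are illustrative, not from the slides:

```python
def warshall(adj):
    """Transitive closure of a digraph given as an n x n 0/1 adjacency matrix.

    A single Boolean matrix is updated in place, which is why Theta(n^2)
    space suffices instead of keeping all of T(0), ..., T(n).
    """
    n = len(adj)
    t = [row[:] for row in adj]           # T(0) is the adjacency matrix
    for k in range(n):                    # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                # tij stays 1, or becomes 1 if i reaches k and k reaches j
                t[i][j] = t[i][j] or (t[i][k] and t[k][j])
    return t                              # Theta(n^3) time

# The 4-vertex digraph from the example: edges 1->3, 2->1, 2->4, 4->2
A = [[0, 0, 1, 0],
     [1, 0, 0, 1],
     [0, 0, 0, 0],
     [0, 1, 0, 0]]
print(warshall(A))   # rows of T(4): 0 0 1 0 / 1 1 1 1 / 0 0 0 0 / 1 1 1 1
```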
Floyd's Algorithm: All-Pairs Shortest Paths
All-pairs shortest paths problem: in a weighted graph, find the shortest paths between every pair of vertices.
Applicable to: undirected and directed weighted graphs, provided they contain no cycle of negative length.
Same idea as Warshall's algorithm: construct the solution through a series of matrices D(0), D(1), …, D(n), where dij(k) = length of the shortest path from i to j whose intermediate vertices are all numbered no higher than k.
Why ∞ (rather than 0) for a missing edge: so that a genuine path always wins, e.g., for the pair (2, 3) in the example, 1 + 4 < 1 + ∞.
[Figure: a 4-vertex weighted digraph, its weight matrix (with ∞ for missing edges), and the resulting distance matrix]
Floyd's Algorithm
D(k): allows 1, 2, …, k to be intermediate vertices. In the kth stage, we determine whether introducing k as a new eligible intermediate vertex brings about a shorter path from i to j:
  dij(k) = min{ dij(k-1), dik(k-1) + dkj(k-1) }   for k ≥ 1,   with dij(0) = wij.
That is, deriving D(k) from D(k-1) amounts to checking, for every pair i, j, whether the path i → … → k → … → j of length dik(k-1) + dkj(k-1) is shorter than the best path found so far.
Floyd's Algorithm
Time Complexity: O(n³). Space?
Floyd algorithm (less space)
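A minimal Python sketch of the in-place ("less space") version; overwriting a single distance matrix is safe because row k and column k do not change during the kth stage. Names are illustrative, not from the slides:

```python
INF = float('inf')

def floyd(weight):
    """All-pairs shortest path lengths by Floyd's algorithm.

    weight[i][j] is the edge weight, INF if there is no edge, 0 on the diagonal.
    A single distance matrix is updated in place (the "less space" version);
    d[k][k] stays 0 as long as there is no cycle of negative length.
    """
    n = len(weight)
    d = [row[:] for row in weight]        # D(0) is the weight matrix
    for k in range(n):                    # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d                              # Theta(n^3) time, Theta(n^2) space
```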
Constructing a shortest path
Maintain a predecessor matrix Π along with D, where πij is the predecessor of vertex j on a shortest path from i.
For k = 0:   πij(0) = NIL if i = j or wij = ∞;   πij(0) = i otherwise.
For k ≥ 1:   πij(k) = πij(k-1) if dij(k-1) ≤ dik(k-1) + dkj(k-1);   πij(k) = πkj(k-1) otherwise.
Print all-pairs shortest paths
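A Python sketch of Floyd's algorithm extended with the predecessor matrix, plus a recursive routine that prints one shortest path; the names floyd_with_predecessors and print_path are illustrative, not from the slides:

```python
INF = float('inf')

def floyd_with_predecessors(weight):
    """Floyd's algorithm that also builds the predecessor matrix pi,
    following the recurrences for pi(0) and pi(k) given above."""
    n = len(weight)
    d = [row[:] for row in weight]
    # pi[i][j]: predecessor of j on a shortest path from i, or None (NIL)
    pi = [[None if i == j or weight[i][j] == INF else i for j in range(n)]
          for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
                    pi[i][j] = pi[k][j]
    return d, pi

def print_path(pi, i, j):
    """Print the vertices of a shortest path from i to j, using pi."""
    if i == j:
        print(i)
    elif pi[i][j] is None:
        print("no path from", i, "to", j, "exists")
    else:
        print_path(pi, i, pi[i][j])
        print(j)
```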
Example:
[Figure: a 5-vertex weighted digraph with both positive and negative edge weights (7, -4, 8, -5, 6, …)]
[Figure: the matrices D(0), Π(0) through D(5), Π(5) produced by Floyd's algorithm on the example digraph]
Shortest path from 1 to 2
[Figure: the shortest path from vertex 1 to vertex 2, traced on the example digraph using the predecessor matrix]
The Knapsack Problem
The problem: find the most valuable subset of the given n items that fits into a knapsack of capacity W.
Consider the following subproblem P(i, j): find the most valuable subset of the first i items that fits into a knapsack of capacity j, where 1 ≤ i ≤ n and 1 ≤ j ≤ W.
Let V[i, j] be the value of an optimal solution to the subproblem P(i, j), i.e., the value of the most valuable subset of the first i items that fit into a knapsack of capacity j. Goal: V[n, W].
The question: what is the recurrence relation that expresses a solution to this instance in terms of solutions to smaller subinstances?
The Knapsack Problem: The Recurrence
Two possibilities for the most valuable subset for the subproblem P(i, j):
  It does not include the ith item: V[i, j] = V[i-1, j].
  It includes the ith item: V[i, j] = vi + V[i-1, j - wi].
Therefore
  V[i, j] = max{ V[i-1, j], vi + V[i-1, j - wi] }   if j - wi ≥ 0,
  V[i, j] = V[i-1, j]                               if j - wi < 0,
with initial conditions V[0, j] = 0 for j ≥ 0 and V[i, 0] = 0 for i ≥ 0.
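A short Python sketch of this recurrence filled in bottom-up; the item data in the last line is an illustrative instance, not from the slides:

```python
def knapsack(values, weights, W):
    """0/1 knapsack by dynamic programming: returns V[n][W].

    V[i][j] = value of the most valuable subset of the first i items
    that fits into a knapsack of capacity j.
    """
    n = len(values)
    V = [[0] * (W + 1) for _ in range(n + 1)]      # V[0][j] = 0 and V[i][0] = 0
    for i in range(1, n + 1):
        vi, wi = values[i - 1], weights[i - 1]     # the ith item (1-based)
        for j in range(1, W + 1):
            if j - wi >= 0:
                V[i][j] = max(V[i - 1][j], vi + V[i - 1][j - wi])
            else:
                V[i][j] = V[i - 1][j]
    return V[n][W]

# Illustrative instance (not from the slides): 4 items, capacity W = 5
print(knapsack([12, 10, 20, 15], [2, 1, 3, 2], 5))   # -> 37
```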
Dynamic Matrix Multiplication
If we have a series of matrices of different sizes that we need to multiply, the order in which we do it can have a big impact on the number of operations. For example, if we need to multiply four matrices M1, M2, M3, and M4 of sizes 20×5, 5×35, 35×4, and 4×25, then depending on the order this can take between 3,100 and 24,500 multiplications.
Dynamic Matrix Multiplication Problem
Input: a series of matrices M1, M2, …, MN with sizes s1×s2, s2×s3, …, sN×sN+1.
Output: an order of multiplication of these matrices such that the total number of multiplications is minimized.
Can you give a solution to this problem? We could try all possible orders, but this is too time consuming: the number of possibilities is the Catalan number C(N-1), where C(n) = (2n)! / ((n+1)! n!), which grows exponentially in N.
Dynamic Matrix Multiplication: Idea
We look at the ways to pair the matrices, then combine these into groups of three, then four, and so on, keeping track of the cost of the best order for each of these partial results.
Characterize the optimal solution
Denote M[i:j] (i < j) as MiMi+1…Mj; our problem is to find the optimal multiplication order of M[1:N].
Consider M[i:j] (i < j) and a split point k with i ≤ k < j: the optimal cost for M[i:j] is the minimum, over all i ≤ k < j, of (the optimal cost for M[i:k]) + (the optimal cost for M[k+1:j]) + si·sk+1·sj+1.
A key property (optimal substructure): if M[i:j] is split into M[i:k] and M[k+1:j], then an optimal order for M[i:j] contains optimal orders for M[i:k] and M[k+1:j].
Construct the recurrence relation for the value of the optimal solution
Let cost[i:j] be the number of multiplications in the optimal solution for M[i:j]:
  cost[i:j] = min over i ≤ k < j of { cost[i:k] + cost[k+1:j] + si·sk+1·sj+1 }
  cost[i:i] = 0
cost[1:N] is the minimum number of multiplications for the whole problem.
Algorithm: bottom-up computation of cost[1:N]
(Here i is the length of the chain minus one, j is the first matrix of the chain, and loc = i + j is the last.)

for i = 1 to N do
    cost[i, i] = 0
end for
for i = 1 to N-1 do
    for j = 1 to N-i do
        loc = i + j
        tempCost = ∞
        for k = j to loc-1 do
            if tempCost > cost[j, k] + cost[k+1, loc] + s[j]*s[k+1]*s[loc+1] then
                tempCost = cost[j, k] + cost[k+1, loc] + s[j]*s[k+1]*s[loc+1]
                tempTrace = k
            end if
        end for k
        cost[j, loc] = tempCost
        trace[j, loc] = tempTrace
    end for j
end for i
Algorithm: construct an optimal solution

GetOrder( first, last, order )
    if first < last then
        middle = trace[first, last]
        GetOrder( first, middle, order )
        GetOrder( middle+1, last, order )
        order[position] = middle
        position = position + 1
    end if
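A Python sketch equivalent to the two pseudocode routines above, reformulated with an explicit chain-length loop; the names matrix_chain_order and get_order are illustrative, not from the slides. It reproduces the 3,100-multiplication figure for the 20×5, 5×35, 35×4, 4×25 example:

```python
def matrix_chain_order(s):
    """Minimum multiplication count and split points for M1..MN,
    where matrix Mi has size s[i-1] x s[i] (so len(s) == N + 1)."""
    N = len(s) - 1
    cost = [[0] * (N + 1) for _ in range(N + 1)]   # cost[i][j] for chain Mi..Mj (1-based)
    trace = [[0] * (N + 1) for _ in range(N + 1)]
    for length in range(2, N + 1):                 # number of matrices in the chain
        for i in range(1, N - length + 2):
            j = i + length - 1
            cost[i][j] = float('inf')
            for k in range(i, j):                  # split M[i:j] into M[i:k], M[k+1:j]
                c = cost[i][k] + cost[k + 1][j] + s[i - 1] * s[k] * s[j]
                if c < cost[i][j]:
                    cost[i][j] = c
                    trace[i][j] = k
    return cost, trace

def get_order(trace, first, last, order=None):
    """Recover the multiplication order from the trace table (cf. GetOrder)."""
    if order is None:
        order = []
    if first < last:
        k = trace[first][last]
        get_order(trace, first, k, order)
        get_order(trace, k + 1, last, order)
        order.append(k)        # split points listed in the order they are performed
    return order

# Example from the slides: sizes 20x5, 5x35, 35x4, 4x25
cost, trace = matrix_chain_order([20, 5, 35, 4, 25])
print(cost[1][4])              # 3100 multiplications for the best order
print(get_order(trace, 1, 4))  # [2, 1, 3]: M2*M3 first, then M1*(M2M3), then *M4
```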
Time Complexity: O(n³)