Presentation transcript:

Dynamic Programming (動態規劃) is frequently used to solve optimization problems.

Rod cutting problem
Given a rod of length N inches and a table of prices p[i] for i = 1..N, determine the maximum revenue r[N] obtainable by cutting up the rod and selling the pieces.

Length i :  1  2  3  4  5   6   7
Price p[i]: 1  5  8  9  10  17  17

E.g.: N = 7. Cutting 7 = 3 + 4 produces revenue 8 + 9 = 17; cutting 7 = 1 + 6 or 7 = 2 + 2 + 3 produces revenue 18.

Rod cutting problem
If an optimal solution cuts the rod into k pieces, then N = i_1 + i_2 + … + i_k and r[N] = p[i_1] + … + p[i_k] (the total revenue).
Recurrence: r[N] = max_{i=1..N} { p[i] + r[N-i] }, with r[0] = 0.

CUT-ROD(p, n)
1. if n == 0 return 0;
2. q = -∞;
3. for i = 1 to n
4.   q = max(q, p[i] + CUT-ROD(p, n - i));
5. return q;
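
A minimal Python sketch of the recursive CUT-ROD above; the price list is 1-indexed by padding with a leading 0 (an implementation choice, not from the slides). Its running time is exponential, which motivates the memoized and bottom-up versions that follow.

```python
# Naive recursive rod cutting: try every size i for the first piece.
def cut_rod(p, n):
    """Return the maximum revenue for a rod of length n (exponential time)."""
    if n == 0:
        return 0
    q = float("-inf")
    for i in range(1, n + 1):
        q = max(q, p[i] + cut_rod(p, n - i))
    return q

if __name__ == "__main__":
    p = [0, 1, 5, 8, 9, 10, 17, 17]   # prices for lengths 1..7 (example values)
    print(cut_rod(p, 7))              # -> 18, e.g. cut as 1 + 6 or 2 + 2 + 3
```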

Rod cutting problem
Memoized-CUT-ROD(p, n)
1. let r[0..n] be a new array
2. for i = 0 to n
3.   r[i] = -∞;
4. return Memoized-CUT-ROD-AUX(p, n, r)

Memoized-CUT-ROD-AUX(p, n, r)
1. if r[n] >= 0 return r[n];
2. if n == 0 q = 0;
3. else q = -∞;
4.   for i = 1 to n
5.     q = max(q, p[i] + Memoized-CUT-ROD-AUX(p, n - i, r));
6. r[n] = q;
7. return q;

Time = O(n^2). Why? Each subproblem r[j] is solved only once, and solving it scans j candidate first cuts, giving an arithmetic series.
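
The same idea as a Python sketch of Memoized-CUT-ROD; the helper name and the 0-padded price list are assumptions carried over from the previous sketch.

```python
# Top-down rod cutting with memoization: r[n] caches the best revenue for length n.
def memoized_cut_rod(p, n):
    r = [float("-inf")] * (n + 1)
    return _cut_rod_aux(p, n, r)

def _cut_rod_aux(p, n, r):
    if r[n] >= 0:                      # already solved
        return r[n]
    if n == 0:
        q = 0
    else:
        q = float("-inf")
        for i in range(1, n + 1):      # try every first cut of length i
            q = max(q, p[i] + _cut_rod_aux(p, n - i, r))
    r[n] = q
    return q
```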

BOTTOM-UP-CUT-ROD(p, n)
1. let r[0..n] be a new array
2. r[0] = 0;
3. for j = 1 to n
4.   q = -∞;
5.   for i = 1 to j
6.     q = max(q, p[i] + r[j-i]);
7.   r[j] = q;
8. return r[n];

PRINT-CUT-ROD-SOLUTION(p, n)
1. (r, s) = EXTENDED-BOTTOM-UP-CUT-ROD(p, n);
2. while n > 0
3.   print s[n];  n = n - s[n];

EXTENDED-BOTTOM-UP-CUT-ROD(p, n)
1. let r[0..n] and s[0..n] be new arrays
2. r[0] = 0;
3. for j = 1 to n
4.   q = -∞;
5.   for i = 1 to j
6.     if q < p[i] + r[j-i]
7.     { q = p[i] + r[j-i];
8.       s[j] = i; }
9.   r[j] = q;
10. return r and s;

(Example table with columns Length i, Price p[i], r[i], s[i].)
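
A Python sketch of the extended bottom-up version together with solution printing; array names follow the pseudocode, and the 0-padded price list is an assumption carried over from the earlier sketches.

```python
# Bottom-up rod cutting with solution reconstruction.
def extended_bottom_up_cut_rod(p, n):
    r = [0] * (n + 1)          # r[j]: best revenue for length j
    s = [0] * (n + 1)          # s[j]: size of the first piece in an optimal cut of length j
    for j in range(1, n + 1):
        q = float("-inf")
        for i in range(1, j + 1):
            if q < p[i] + r[j - i]:      # s[j] changes only on a strictly better cut
                q = p[i] + r[j - i]
                s[j] = i
        r[j] = q
    return r, s

def print_cut_rod_solution(p, n):
    r, s = extended_bottom_up_cut_rod(p, n)
    while n > 0:
        print(s[n])            # length of the next piece
        n -= s[n]

# e.g. with p = [0, 1, 5, 8, 9, 10, 17, 17], print_cut_rod_solution(p, 7) prints 1 then 6.
```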

Crazy eights puzzle
Given a sequence of cards c[0], c[1], …, c[n-1], e.g. 7H, 6H, 7D, 3D, 8C, JS, …
Find the longest subsequence c[i_1], c[i_2], …, c[i_k] (i_1 < i_2 < … < i_k) in which consecutive cards c[i_j] and c[i_{j+1}] have the same suit or the same rank, or one of them has rank 8 — a "match".
Let T[i] be the length of the longest such subsequence starting at c[i].
T[i] = 1 + max { T[j] : j > i and c[i], c[j] match }   (T[i] = 1 if no such j exists)
Optimal solution: max_i { T[i] }.
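
A possible Python rendering of this recurrence, filling T from right to left; the two-character card encoding (rank followed by suit, e.g. "7H") and the function names are assumptions for illustration.

```python
# T[i] = length of the longest matching subsequence starting at card i.
def matches(a, b):
    rank_a, suit_a = a[:-1], a[-1]
    rank_b, suit_b = b[:-1], b[-1]
    return suit_a == suit_b or rank_a == rank_b or rank_a == "8" or rank_b == "8"

def longest_matching_subsequence(cards):
    n = len(cards)
    T = [1] * n
    for i in range(n - 2, -1, -1):
        for j in range(i + 1, n):
            if matches(cards[i], cards[j]):
                T[i] = max(T[i], 1 + T[j])
    return max(T) if T else 0

print(longest_matching_subsequence(["7H", "6H", "7D", "3D", "8C", "JS"]))  # -> 5
```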

Matrix-chain multiplication
Given a sequence ⟨A_1, A_2, …, A_n⟩ of n matrices to be multiplied, where A_i has dimension p_{i-1} × p_i, fully parenthesize the product A_1 A_2 … A_n so that the number of scalar multiplications is minimized.
E.g.: A_1 × A_2 × A_3 × A_4 with dimensions p = ⟨13, 5, 89, 3, 34⟩. The five full parenthesizations are:
(A1(A2(A3A4))), (A1((A2A3)A4)), ((A1A2)(A3A4)), ((A1(A2A3))A4), (((A1A2)A3)A4).

Matrix-chain multiplication
With dimensions p = ⟨13, 5, 89, 3, 34⟩:
(A1(A2(A3A4))),  cost = 26418
(A1((A2A3)A4)),  cost = 4055
((A1A2)(A3A4)),  cost = 54201
((A1(A2A3))A4),  cost = 2856
(((A1A2)A3)A4),  cost = 10582
For example, (A1(A2(A3A4))) multiplies A3 × A4, then A2 × (A3A4), then A1 × (A2A3A4):
cost = 13*5*34 + 5*89*34 + 89*3*34 = 2210 + 15130 + 9078 = 26418.
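
As a sanity check of these numbers, a small Python sketch that evaluates the cost of a given parenthesization written as nested tuples of matrix indices (the tuple encoding is an assumption for illustration).

```python
# p[i-1] x p[i] is the dimension of A_i; p is the example dimension sequence.
def cost(tree, p):
    """Return (rows, cols, #scalar multiplications) for a parenthesization."""
    if isinstance(tree, int):                 # a single matrix A_i
        return p[tree - 1], p[tree], 0
    left, right = tree
    r1, c1, m1 = cost(left, p)
    r2, c2, m2 = cost(right, p)
    assert c1 == r2, "inner dimensions must agree"
    return r1, c2, m1 + m2 + r1 * c1 * c2     # cost of multiplying the two results

p = [13, 5, 89, 3, 34]
print(cost((1, (2, (3, 4))), p)[2])   # 26418
print(cost(((1, 2), (3, 4)), p)[2])   # 54201
```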

Catalan Number
For any n, the number of ways to fully parenthesize the product of a chain of n+1 matrices
= # binary trees with n nodes
= # permutations of 1 2 … n that can be generated through a stack
= # sequences of n pairs of fully matched parentheses
= the n-th Catalan number = C(2n, n)/(n+1) = Θ(4^n / n^{3/2}).
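
A quick Python illustration of how fast this count grows, using the closed form above (math.comb requires Python 3.8+); the exponential growth is why brute-force enumeration of parenthesizations is hopeless.

```python
from math import comb

def catalan(n):
    # n-th Catalan number: C(2n, n) / (n + 1)
    return comb(2 * n, n) // (n + 1)

print([catalan(n) for n in range(1, 11)])
# [1, 2, 5, 14, 42, 132, 429, 1430, 4862, 16796]
```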

Multiplication tree
Each full parenthesization of A1 × A2 × A3 × A4 corresponds to a binary multiplication tree whose leaves are A1, …, A4 (figure shows the five trees):
(A1(A2(A3A4))), (A1((A2A3)A4)), ((A1A2)(A3A4)), ((A1(A2A3))A4), (((A1A2)A3)A4).

Matrix-chain multiplication (optimal substructure)
If T is an optimal multiplication tree for A_1, A_2, …, A_n whose root splits the chain at position k into a subtree T_1 over A_1, …, A_k and a subtree T_2 over A_{k+1}, …, A_n, then T_1 (resp. T_2) is an optimal solution for A_1, A_2, …, A_k (resp. A_{k+1}, A_{k+2}, …, A_n).

Matrix-chain multiplication
Let m[i, j] be the minimum number of scalar multiplications needed to compute the product A_i … A_j, for 1 ≤ i ≤ j ≤ n.
If the optimal solution splits the product as A_i … A_j = (A_i … A_k) × (A_{k+1} … A_j) for some k, i ≤ k < j, then m[i, j] = m[i, k] + m[k+1, j] + p_{i-1} p_k p_j. Hence we have:
m[i, j] = 0                                                          if i = j
m[i, j] = min_{i ≤ k < j} { m[i, k] + m[k+1, j] + p_{i-1} p_k p_j }  if i < j

MATRIX-CHAIN-ORDER(p)
1. n = p.length - 1;
2. let m[1..n, 1..n] and s[1..n-1, 2..n] be new tables;
3. for i = 1 to n  m[i, i] = 0;
4. for l = 2 to n                      // l is the chain length
5. {  for i = 1 to n - l + 1
6.    {  j = i + l - 1;
7.       m[i, j] = ∞;
8.       for k = i to j-1
9.       {  q = m[i, k] + m[k+1, j] + p_{i-1} p_k p_j;
10.         if q < m[i, j]
11.         {  m[i, j] = q;  s[i, j] = k; }
12.      } } }
13. return m and s

Time = O(n^3)
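
A Python sketch of MATRIX-CHAIN-ORDER with the same 1-based indexing (row/column 0 is unused padding, an implementation choice); applied to the earlier example p = ⟨13, 5, 89, 3, 34⟩ it reproduces the minimum cost 2856.

```python
# m[i][j]: minimum scalar multiplications for A_i..A_j; s[i][j]: best split point k.
INF = float("inf")

def matrix_chain_order(p):
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for l in range(2, n + 1):               # chain length
        for i in range(1, n - l + 2):
            j = i + l - 1
            m[i][j] = INF
            for k in range(i, j):
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j], s[i][j] = q, k
    return m, s

m, s = matrix_chain_order([13, 5, 89, 3, 34])
print(m[1][4])   # 2856 -- the minimum among the five costs listed earlier
```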

Example of filling the m table:
l = 3:  m[3,5] = min { m[3,4] + m[5,5] + 15*10*20 = 750 + 0 + 3000 = 3750,
                       m[3,3] + m[4,5] + 15*5*20  = 0 + 1000 + 1500 = 2500 } = 2500
l = 2:  m[5,6] = 10*20*25 = 5000,  m[2,3] = 35*15*5 = 2625

Matrix-chain multiplication
Consider an example with the sequence of dimensions p = ⟨30, 35, 15, 5, 10, 20, 25⟩ (n = 6 matrices). The m table is filled diagonal by diagonal using
m[i, j] = min_{i ≤ k < j} { m[i, k] + m[k+1, j] + p_{i-1} p_k p_j }.

Constructing an optimal solution
- Each entry s[i, j] = k records that an optimal parenthesization of A_i A_{i+1} … A_j splits the product between A_k and A_{k+1}.
- A_{i..j} → (A_{i..s[i,j]}) (A_{s[i,j]+1..j})
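
A short Python sketch of this reconstruction rule; it assumes the s table produced by the matrix_chain_order sketch above.

```python
# Recover the parenthesization by following s[i][j] recursively.
def print_optimal_parens(s, i, j):
    if i == j:
        return "A%d" % i
    k = s[i][j]
    return "(" + print_optimal_parens(s, i, k) + print_optimal_parens(s, k + 1, j) + ")"

# Using m, s from matrix_chain_order([13, 5, 89, 3, 34]) above:
# print_optimal_parens(s, 1, 4) -> "((A1(A2A3))A4)"
```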

Matrix-chain multiplication
m[i, j] = min_{i ≤ k < j} { m[i, k] + m[k+1, j] + p_{i-1} p_k p_j }
s[i, j] = a value of k that gives the minimum.
Example split tree read from s for a 6-matrix chain (figure): [1,6] splits into A1 and [2,6], [2,6] into [2,5] and A6, [2,5] into [2,4] and A5, [2,4] into [2,3] and A4, [2,3] into A2 and A3, giving A1((((A2A3)A4)A5)A6).

Matrix-chain multiplication
Filling the entry m[i, j] = min_{i ≤ k < j} { m[i, k] + m[k+1, j] + p_{i-1} p_k p_j } takes Θ(j - i) operations. Hence the execution time of the algorithm is
Time: Θ(n^3)   Space: Θ(n^2)

MEMOIZED-MATRIX-CHAIN(p)
1. n = p.length - 1;
2. let m[1..n, 1..n] be a new table;
3. for i = 1 to n
4.   for j = i to n
5.     m[i, j] = ∞;
6. return LOOKUP-CHAIN(m, p, 1, n)

LOOKUP-CHAIN(m, p, i, j)
1. if m[i, j] < ∞ return m[i, j];
2. if i == j m[i, j] = 0;
3. else for k = i to j-1
4. { q = LOOKUP-CHAIN(m, p, i, k) + LOOKUP-CHAIN(m, p, k+1, j) + p_{i-1} p_k p_j;
5.   if q < m[i, j] m[i, j] = q; }
6. return m[i, j];

Time: O(n^3)   Space: Θ(n^2)

Designing a DP algorithm
1. Characterize the structure of an optimal solution.
2. Derive a recursive formula for computing the values of optimal solutions.
3. Compute the value of an optimal solution in a bottom-up fashion (top-down is also applicable).
4. Construct an optimal solution in a top-down fashion.

Elements of Dynamic Programming
- Optimal substructure (a problem exhibits optimal substructure if an optimal solution to the problem contains within it optimal solutions to subproblems)
- Overlapping subproblems
- Reconstructing an optimal solution
- Memoization

Given a directed graph G = (V, E) and vertices u, v ∈ V:
- Unweighted shortest path: find a path from u to v with the fewest edges. Such a path must be simple (no cycle). Optimal substructure? YES.
- Unweighted longest simple path: find a simple path from u to v with the most edges. Optimal substructure? NO. In the example graph, q → r → t is a longest simple path, but q → r is not a longest simple path from q to r.
(Figure: a small example directed graph.)

Printing neatly
Given a sequence of n words of lengths l[1], l[2], …, l[n], measured in characters, we want to print it neatly on a number of lines, each holding at most M characters. If a line contains words i through j, i ≤ j, with exactly one space between words, the number of extra space characters at the end of the line is M - (j - i) - (l[i] + l[i+1] + … + l[j]), and the line's cost is its cube:
B[i, j] = (M - j + i - (l[i] + l[i+1] + … + l[j]))^3.
We want to minimize the sum, over all lines except the last, of these costs (the last line costs nothing).
Let c[i] denote the minimum cost of printing words i through n, with c[n+1] = 0. Then
c[i] = min_{i ≤ j ≤ i+p-1} ( B[i, j] + c[j+1] ),
where p is the maximum number of words, starting from word i, that fit on one line, and B[i, j] is taken as 0 when j = n.
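
A Python sketch of this recurrence, filled from right to left; the 1-based padding and the running-width bookkeeping are implementation choices, not from the slides.

```python
# c[i]: minimum cost of printing words i..n; the last line is free.
def print_neatly(lengths, M):
    n = len(lengths)
    l = [0] + lengths                     # 1-based word lengths
    c = [0] * (n + 2)                     # c[n+1] = 0
    for i in range(n, 0, -1):
        c[i] = float("inf")
        width = -1                        # width of words i..j plus separating spaces
        for j in range(i, n + 1):
            width += l[j] + 1
            if width > M:
                break                     # words i..j no longer fit on one line
            extra = 0 if j == n else (M - width) ** 3
            c[i] = min(c[i], extra + c[j + 1])
    return c[1]
```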

Longest Common Subsequence
Given two sequences X = ⟨x_1, x_2, …, x_m⟩ and Y = ⟨y_1, y_2, …, y_n⟩, find a maximum-length common subsequence of X and Y.
Eg 1: Input: X = ABCBDAB, Y = BDCABA. Common subsequences: AB, ABA, BCB, BCAB, BCBA, … Longest: BCAB, BCBA, … Length = 4.
Eg 2: vintner, writers.

Step 1: Characterize a longest common subsequence
Let Z = ⟨z_1, …, z_k⟩ be an LCS of X = ⟨x_1, …, x_m⟩ and Y = ⟨y_1, …, y_n⟩.
1. If x_m = y_n, then z_k = x_m = y_n and Z_{k-1} is an LCS of X_{m-1} and Y_{n-1}.
2. If z_k ≠ x_m, then Z is an LCS of X_{m-1} and Y.
3. If z_k ≠ y_n, then Z is an LCS of X and Y_{n-1}.

Step 2: A recursive solution
Let C[i, j] be the length of an LCS of the prefixes X_i = ⟨x_1, …, x_i⟩ and Y_j = ⟨y_1, …, y_j⟩, for 0 ≤ i ≤ m and 0 ≤ j ≤ n. We have:
C[i, j] = 0                               if i = 0 or j = 0
C[i, j] = C[i-1, j-1] + 1                 if i, j > 0 and x_i = y_j
C[i, j] = max(C[i, j-1], C[i-1, j])       if i, j > 0 and x_i ≠ y_j

Step 3: Computing the length of an LCS
LCS-Length(X, Y)
1. m = X.length;
2. n = Y.length;
3. let b[1..m, 1..n] and c[0..m, 0..n] be new tables;
4. for i = 1 to m  c[i, 0] = 0;
5. for j = 1 to n  c[0, j] = 0;
6. for i = 1 to m
7.   for j = 1 to n
8.   { if x_i == y_j
9.     { c[i, j] = c[i-1, j-1] + 1;  b[i, j] = "↖"; }
10.    else if c[i-1, j] >= c[i, j-1]
11.    { c[i, j] = c[i-1, j];  b[i, j] = "↑"; }
12.    else { c[i, j] = c[i, j-1];  b[i, j] = "←"; }
13.  }
14. return b, c

Step 4: Constructing an LCS
Print-LCS(b, X, i, j)
1. if i == 0 or j == 0 return;
2. if b[i, j] == "↖"
3. { Print-LCS(b, X, i-1, j-1);
4.   print x_i;
5. }
6. else if b[i, j] == "↑"
7.   Print-LCS(b, X, i-1, j);
8. else Print-LCS(b, X, i, j-1);
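
A compact Python sketch that combines Steps 3 and 4; instead of storing the arrow table b, it retraces the same choices directly on the c table (an equivalent reconstruction, not the slide's exact procedure).

```python
# c[i][j]: LCS length of prefixes X[:i] and Y[:j].
def lcs(X, Y):
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i][j - 1], c[i - 1][j])
    # reconstruct one LCS by retracing the choices made above
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:
            out.append(X[i - 1]); i -= 1; j -= 1
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

print(lcs("ABCBDAB", "BDCABA"))   # -> "BCBA", one LCS of length 4
```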

Longest Common Subsequence
C[i, j] = 0                               if i = 0 or j = 0
C[i, j] = C[i-1, j-1] + 1                 if i, j > 0 and x_i = y_j
C[i, j] = max(C[i, j-1], C[i-1, j])       if i, j > 0 and x_i ≠ y_j
Time: Θ(mn)   Space: Θ(mn)
Example (table for X = ABCBDAB, Y = BDCABA): LCS = BCBA.

Optimal Polygon Triangulation
Given a polygon with vertices v_0, v_1, …, v_6 (figure), find a triangulation such that the sum of the weights of the triangles in the triangulation is minimized.

Optimal Polygon Triangulation
If T is an optimal triangulation of ⟨v_0, v_1, …, v_n⟩ in which the edge v_0 v_n belongs to the triangle v_0 v_k v_n (figure), then T_1 (resp. T_2), the part of T covering ⟨v_0, v_1, …, v_k⟩ (resp. ⟨v_k, v_{k+1}, …, v_n⟩), is an optimal triangulation of that subpolygon, 1 ≤ k < n.

Optimal Polygon Triangulation
Let t[i, j] be the weight of an optimal triangulation of the polygon ⟨v_{i-1}, v_i, …, v_j⟩, for 1 ≤ i < j ≤ n. If the triangulation splits the polygon into ⟨v_{i-1}, v_i, …, v_k⟩ and ⟨v_k, v_{k+1}, …, v_j⟩ for some k, then
t[i, j] = t[i, k] + t[k+1, j] + w(△ v_{i-1} v_k v_j).
Hence we have:
t[i, j] = 0                                                               if i = j
t[i, j] = min_{i ≤ k < j} { t[i, k] + t[k+1, j] + w(△ v_{i-1} v_k v_j) }  if i < j
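
A Python sketch of this recurrence; the weight of a triangle is taken to be its perimeter here, which is only an assumed example of a weight function w (math.dist requires Python 3.8+).

```python
from math import dist   # Euclidean distance between two points

def optimal_triangulation(v):
    """v: list of polygon vertices v_0..v_n as (x, y) tuples; returns t[1][n]."""
    n = len(v) - 1

    def w(a, b, c):                       # example weight: perimeter of the triangle
        return dist(a, b) + dist(b, c) + dist(c, a)

    # t[i][j]: minimum weight for the subpolygon <v_{i-1}, ..., v_j>
    t = [[0.0] * (n + 1) for _ in range(n + 1)]
    for gap in range(1, n):               # gap = j - i
        for i in range(1, n - gap + 1):
            j = i + gap
            t[i][j] = min(t[i][k] + t[k + 1][j] + w(v[i - 1], v[k], v[j])
                          for k in range(i, j))
    return t[1][n]
```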

Optimal binary search trees
- K = (k_1, k_2, …, k_n): n distinct keys in sorted order (k_1 < k_2 < … < k_n).
- Each key k_i has a probability p_i that a search will be for k_i.
- Some searches may fail, so we also have n+1 dummy keys d_0, d_1, …, d_n, where d_0 < k_1, k_n < d_n, and k_i < d_i < k_{i+1} for i = 1, …, n-1.
- For each dummy key d_i, we have a probability q_i that a search will correspond to d_i.

Optimal binary search trees
Example (n = 5): a table of the probabilities p_i and q_i, and a BST with root k_2, children k_1 and k_4, keys k_3 and k_5 below k_4, and the dummy keys d_0, …, d_5 as leaves (figure). Expected search cost of this tree: 2.8.

Goal: given a set of probabilities, construct a binary search tree whose expected search cost is smallest.
Step 1: The structure of an optimal BST. Any subtree of a BST must contain contiguous keys k_i, …, k_j for some 1 ≤ i ≤ j ≤ n, and must also have as its leaves the dummy keys d_{i-1}, …, d_j. The optimal-BST problem has the optimal-substructure property.

Step 2: A recursive solution
- Find an optimal BST for the keys k_i, …, k_j, where j ≥ i-1, i ≥ 1, j ≤ n. (When j = i-1 there is no actual key, only the dummy key d_{i-1}.)
- Define e[i, j] as the expected cost of searching an optimal BST containing k_i, …, k_j. We want to find e[1, n]. For the dummy key d_{i-1}, e[i, i-1] = q_{i-1}.
- For j ≥ i, we need to select a root k_r among k_i, …, k_j; the left subtree then holds k_i, …, k_{r-1} and the right subtree holds k_{r+1}, …, k_j (figure).
- For a subtree with keys k_i, …, k_j, denote the total probability mass it contains by w(i, j) = Σ_{l=i}^{j} p_l + Σ_{l=i-1}^{j} q_l.

- If k_r is chosen as the root of an optimal BST for k_i, …, k_j:
  e[i, j] = p_r + (e[i, r-1] + w(i, r-1)) + (e[r+1, j] + w(r+1, j))
          = e[i, r-1] + e[r+1, j] + w(i, j)
- Thus
  e[i, j] = q_{i-1}                                                if j = i-1
  e[i, j] = min_{i ≤ r ≤ j} { e[i, r-1] + e[r+1, j] + w(i, j) }    if i ≤ j
- Step 3: Computing the expected search cost of an optimal BST
- Use a table w[1..n+1, 0..n] for the w(i, j)'s:
  w[i, i-1] = q_{i-1}
  w[i, j] = w[i, j-1] + p_j + q_j

OPTIMAL-BST(p, q, n)
1. let e[1..n+1, 0..n], w[1..n+1, 0..n], and root[1..n, 1..n] be new tables;
2. for i = 1 to n+1
3.   e[i, i-1] = q_{i-1};
4.   w[i, i-1] = q_{i-1};
5. for ℓ = 1 to n
6.   for i = 1 to n-ℓ+1
7.   { j = i + ℓ - 1;
8.     e[i, j] = ∞;
9.     w[i, j] = w[i, j-1] + p_j + q_j;
10.    for r = i to j
11.    { t = e[i, r-1] + e[r+1, j] + w[i, j];
12.      if t < e[i, j]
13.      { e[i, j] = t;
14.        root[i, j] = r; } } }
15. return e and root

Time: Θ(n^3)
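
A Python sketch of OPTIMAL-BST with the same index conventions: p[1..n] holds key probabilities and q[0..n] dummy-key probabilities, with p[0] as unused padding (an implementation choice, not from the slides).

```python
# e[i][j]: expected search cost of an optimal BST over k_i..k_j; e[1][n] is the answer.
INF = float("inf")

def optimal_bst(p, q, n):
    e = [[0.0] * (n + 1) for _ in range(n + 2)]      # rows 1..n+1, cols 0..n
    w = [[0.0] * (n + 1) for _ in range(n + 2)]
    root = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 2):
        e[i][i - 1] = q[i - 1]                       # empty subtree: just the dummy key
        w[i][i - 1] = q[i - 1]
    for length in range(1, n + 1):
        for i in range(1, n - length + 2):
            j = i + length - 1
            e[i][j] = INF
            w[i][j] = w[i][j - 1] + p[j] + q[j]
            for r in range(i, j + 1):                # try each key as the root
                t = e[i][r - 1] + e[r + 1][j] + w[i][j]
                if t < e[i][j]:
                    e[i][j] = t
                    root[i][j] = r
    return e, root
```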