Chapter 15 Dynamic Programming Lee, Hsiu-Hui Ack: This presentation is based on the lecture slides from Hsu, Lih-Hsing, as well as various materials from the web.

Introduction: Dynamic programming is typically applied to optimization problems. In such problems there can be many solutions. Each solution has a value, and we wish to find a solution with the optimal value.

The development of a dynamic programming algorithm can be broken into a sequence of four steps:
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution in a bottom-up fashion.
4. Construct an optimal solution from computed information.

Assembly-line scheduling

A manufacturing problem: find the fastest way through a factory.
a_{i,j}: the assembly time required at station S_{i,j}
t_{i,j}: the time to transfer a chassis away from assembly line i after having gone through station S_{i,j}
e_i, x_i: the entry time onto and the exit time from line i


[Figure: an instance of the assembly-line problem with costs]

Step 1: The structure of the fastest way through the factory

Optimal substructure: an optimal solution to a problem (a fastest way through S_{1,j}) contains within it optimal solutions to subproblems (a fastest way through S_{1,j-1} or S_{2,j-1}). We use optimal substructure to construct an optimal solution to the problem from optimal solutions to subproblems: to find a fastest way through S_{1,j} and S_{2,j}, solve the subproblems of finding a fastest way through S_{1,j-1} and S_{2,j-1}.

Step 2: A recursive solution
f_i[j]: the fastest possible time to get a chassis from the starting point through station S_{i,j}
f*: the fastest time to get a chassis all the way through the factory

f_1[j] = e_1 + a_{1,1} if j = 1, and min(f_1[j-1] + a_{1,j}, f_2[j-1] + t_{2,j-1} + a_{1,j}) if j ≥ 2
f_2[j] = e_2 + a_{2,1} if j = 1, and min(f_2[j-1] + a_{2,j}, f_1[j-1] + t_{1,j-1} + a_{2,j}) if j ≥ 2
f* = min(f_1[n] + x_1, f_2[n] + x_2)

l_i[j] = the line number (1 or 2) whose station j − 1 is used in a fastest way through S_{i,j}; that is, S_{l_i[j], j-1} precedes S_{i,j}. Defined for i = 1, 2 and j = 2, ..., n. l* = the line number whose station n is used.

Step 3: Computing an optimal solution
Let r_i(j) be the number of references made to f_i[j] in a recursive algorithm.
r_1(n) = r_2(n) = 1
r_1(j) = r_2(j) = r_1(j+1) + r_2(j+1)
The total number of references to all f_i[j] values is Θ(2^n). We can do much better if we compute the f_i[j] values in a different order from the recursive way. Observe that for j ≥ 2, each value of f_i[j] depends only on the values of f_1[j-1] and f_2[j-1].
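As a sanity check on the Θ(2^n) count, the recurrence for r_i(j) can be evaluated directly. A minimal sketch; `total_refs` is an illustrative helper, not part of the slides:

```python
def total_refs(n):
    """Total references made to all f_i[j] values by the naive
    top-down recursion, for an n-station line."""
    # r[j] holds r1(j) = r2(j), the number of references to f_i[j]
    r = {n: 1}                          # r1(n) = r2(n) = 1
    for j in range(n - 1, 0, -1):
        r[j] = r[j + 1] + r[j + 1]      # r_i(j) = r1(j+1) + r2(j+1) = 2^(n-j)
    # both lines contribute, so the total is 2 * (2^n - 1) = Theta(2^n)
    return 2 * sum(r.values())
```

For n = 3 this gives 2 · (4 + 2 + 1) = 14 references, and the count doubles with each added station.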

FASTEST-WAY(a, t, e, x, n)
 1  f1[1] ← e1 + a1,1
 2  f2[1] ← e2 + a2,1
 3  for j ← 2 to n
 4      do if f1[j-1] + a1,j ≤ f2[j-1] + t2,j-1 + a1,j
 5            then f1[j] ← f1[j-1] + a1,j
 6                 l1[j] ← 1
 7            else f1[j] ← f2[j-1] + t2,j-1 + a1,j
 8                 l1[j] ← 2
 9         if f2[j-1] + a2,j ≤ f1[j-1] + t1,j-1 + a2,j
10            then f2[j] ← f2[j-1] + a2,j
11                 l2[j] ← 2
12            else f2[j] ← f1[j-1] + t1,j-1 + a2,j
13                 l2[j] ← 1
14  if f1[n] + x1 ≤ f2[n] + x2
15     then f* ← f1[n] + x1
16          l* ← 1
17     else f* ← f2[n] + x2
18          l* ← 2

Step 4: Constructing the fastest way through the factory
PRINT-STATIONS(l, l*, n)
1  i ← l*
2  print "line" i ", station" n
3  for j ← n downto 2
4      do i ← l_i[j]
5         print "line" i ", station" j − 1

Output:
line 1, station 6
line 2, station 5
line 2, station 4
line 1, station 3
line 2, station 2
line 1, station 1
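The two procedures translate directly into Python. A minimal sketch; the cost values below are an assumption (the figure with the instance's costs did not survive in this transcript, so the standard textbook instance is used), chosen to match the station list printed above:

```python
def fastest_way(a, t, e, x, n):
    """Bottom-up FASTEST-WAY. a[i][j]: assembly time at station j+1 of
    line i+1; t[i][j]: transfer time after station j+1 of line i+1;
    e, x: entry/exit times. Returns (f*, l*, l1, l2)."""
    f1 = [e[0] + a[0][0]]
    f2 = [e[1] + a[1][0]]
    l1, l2 = [None], [None]
    for j in range(1, n):
        # fastest way to S_{1,j}: stay on line 1 or transfer from line 2
        stay, switch = f1[j-1] + a[0][j], f2[j-1] + t[1][j-1] + a[0][j]
        f1.append(min(stay, switch)); l1.append(1 if stay <= switch else 2)
        # fastest way to S_{2,j}: stay on line 2 or transfer from line 1
        stay, switch = f2[j-1] + a[1][j], f1[j-1] + t[0][j-1] + a[1][j]
        f2.append(min(stay, switch)); l2.append(2 if stay <= switch else 1)
    if f1[n-1] + x[0] <= f2[n-1] + x[1]:
        return f1[n-1] + x[0], 1, l1, l2
    return f2[n-1] + x[1], 2, l1, l2

def print_stations(l1, l2, l_star, n):
    """Reconstruct the station sequence (PRINT-STATIONS), last station first."""
    i, path = l_star, [(l_star, n)]
    for j in range(n - 1, 0, -1):
        i = (l1 if i == 1 else l2)[j]
        path.append((i, j))
    return path

# Assumed instance (the standard textbook costs):
a = [[7, 9, 3, 4, 8, 4], [8, 5, 6, 4, 5, 7]]
t = [[2, 3, 1, 3, 4], [2, 1, 2, 2, 1]]
e, x = [2, 4], [3, 2]
f_star, l_star, l1, l2 = fastest_way(a, t, e, x, 6)
```

With these costs the sketch yields f* = 38 and exactly the station sequence printed by the slide's output.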

15.2 Matrix-chain multiplication
A product of matrices is fully parenthesized if it is either a single matrix, or a product of two fully parenthesized matrix products, surrounded by parentheses.

How to compute the product A_1 A_2 ⋯ A_n, where A_i is a matrix for every i? Example: for n = 4, the product A_1 A_2 A_3 A_4 can be fully parenthesized in five distinct ways: (A_1(A_2(A_3A_4))), (A_1((A_2A_3)A_4)), ((A_1A_2)(A_3A_4)), ((A_1(A_2A_3))A_4), (((A_1A_2)A_3)A_4).

MATRIX-MULTIPLY(A, B)
1  if columns[A] ≠ rows[B]
2     then error "incompatible dimensions"
3     else for i ← 1 to rows[A]
4             do for j ← 1 to columns[B]
5                   do C[i, j] ← 0
6                      for k ← 1 to columns[A]
7                         do C[i, j] ← C[i, j] + A[i, k] · B[k, j]
8  return C
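A direct Python rendering of MATRIX-MULTIPLY on plain nested lists (a sketch):

```python
def matrix_multiply(A, B):
    """Multiply a p x q matrix A by a q x r matrix B,
    using exactly p*q*r scalar multiplications."""
    if len(A[0]) != len(B):
        raise ValueError("incompatible dimensions")
    p, q, r = len(A), len(B), len(B[0])
    C = [[0] * r for _ in range(p)]
    for i in range(p):
        for j in range(r):
            for k in range(q):
                C[i][j] += A[i][k] * B[k][j]
    return C
```

For example, `matrix_multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]])` returns `[[19, 22], [43, 50]]`.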

Complexity: let A be a p × q matrix and B be a q × r matrix. Then the complexity of MATRIX-MULTIPLY(A, B) is Θ(pqr): it performs pqr scalar multiplications.

Example: A_1 is a 10 × 100 matrix, A_2 is a 100 × 5 matrix, and A_3 is a 5 × 50 matrix. Then ((A_1A_2)A_3) takes 10·100·5 + 10·5·50 = 7500 scalar multiplications. However, (A_1(A_2A_3)) takes 100·5·50 + 10·100·50 = 75000 scalar multiplications.

The matrix-chain multiplication problem: given a chain ⟨A_1, A_2, …, A_n⟩ of n matrices, where for i = 1, 2, …, n, matrix A_i has dimension p_{i-1} × p_i, fully parenthesize the product A_1A_2⋯A_n in a way that minimizes the number of scalar multiplications.

Counting the number of parenthesizations:
P(n) = 1 if n = 1
P(n) = Σ_{k=1}^{n-1} P(k) · P(n−k) if n ≥ 2
The solution is P(n) = C(n−1), where C(n) = (1/(n+1)) · binom(2n, n) is the nth Catalan number, which grows as Ω(4^n / n^{3/2}).
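A short check that the recurrence for P(n) produces the Catalan numbers (a sketch; `P` is an illustrative helper):

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def P(n):
    """Number of fully parenthesized forms of a chain of n matrices."""
    if n == 1:
        return 1
    # split the chain into a k-prefix and an (n-k)-suffix
    return sum(P(k) * P(n - k) for k in range(1, n))

# P(n) equals the Catalan number C(n-1) = binom(2n-2, n-1) / n
```

For instance P(4) = 5, matching the five parenthesizations of A_1A_2A_3A_4 listed earlier.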

Step 1: The structure of an optimal parenthesization
An optimal parenthesization of A_iA_{i+1}⋯A_j splits the product between A_k and A_{k+1} for some i ≤ k < j, and the parenthesizations of the subchains A_i⋯A_k and A_{k+1}⋯A_j within it must themselves be optimal.

Step 2: A recursive solution
Define m[i, j] = the minimum number of scalar multiplications needed to compute the matrix A_{i..j} = A_iA_{i+1}⋯A_j:
m[i, j] = 0 if i = j
m[i, j] = min_{i ≤ k < j} { m[i, k] + m[k+1, j] + p_{i-1} p_k p_j } if i < j
Record in s[i, j] a value of k achieving the minimum. The goal is m[1, n].
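The recurrence can be evaluated top-down with memoization, where the cache plays the role of the m table. A sketch; the test dimensions below are assumptions:

```python
from functools import lru_cache

def matrix_chain_cost(p):
    """m(i, j) = min scalar multiplications to compute A_i .. A_j,
    where A_i has dimension p[i-1] x p[i]."""
    @lru_cache(maxsize=None)
    def m(i, j):
        if i == j:
            return 0
        # try every split point k between A_k and A_{k+1}
        return min(m(i, k) + m(k + 1, j) + p[i - 1] * p[k] * p[j]
                   for k in range(i, j))
    return m(1, len(p) - 1)
```

For example, with the three-matrix dimensions ⟨10, 100, 5, 50⟩ the minimum is 7500 scalar multiplications.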

Step 3: Computing the optimal costs

MATRIX-CHAIN-ORDER(p)
 1  n ← length[p] − 1
 2  for i ← 1 to n
 3      do m[i, i] ← 0
 4  for l ← 2 to n
 5      do for i ← 1 to n − l + 1
 6            do j ← i + l − 1
 7               m[i, j] ← ∞
 8               for k ← i to j − 1
 9                   do q ← m[i, k] + m[k+1, j] + p_{i-1} p_k p_j
10                      if q < m[i, j]
11                         then m[i, j] ← q
12                              s[i, j] ← k
13  return m and s
Complexity: O(n^3) time, Θ(n^2) space.
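A direct Python rendering of MATRIX-CHAIN-ORDER (a sketch; the dimension sequence `p` below is an assumption matching the six-matrix example used in the following slides):

```python
import math

def matrix_chain_order(p):
    """Bottom-up MATRIX-CHAIN-ORDER.
    Returns 1-indexed tables m (costs) and s (split points)."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for l in range(2, n + 1):              # l = chain length
        for i in range(1, n - l + 2):
            j = i + l - 1
            m[i][j] = math.inf
            for k in range(i, j):          # try each split point
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j] = q
                    s[i][j] = k
    return m, s

p = [30, 35, 15, 5, 10, 20, 25]  # assumed dimensions of the six-matrix example
m, s = matrix_chain_order(p)
```

With these dimensions the sketch reproduces the table entries discussed below, e.g. m[2][5] = 7125 and the overall optimum m[1][6] = 15125.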

Example: six matrices with dimensions
A_1: 30 × 35, A_2: 35 × 15, A_3: 15 × 5, A_4: 5 × 10, A_5: 10 × 20, A_6: 20 × 25
(so p = ⟨30, 35, 15, 5, 10, 20, 25⟩)

The m and s tables computed by MATRIX-CHAIN-ORDER for n = 6.

m[2,5] = min{
  m[2,2] + m[3,5] + p_1 p_2 p_5 = 0 + 2500 + 35·15·20 = 13000,
  m[2,3] + m[4,5] + p_1 p_3 p_5 = 2625 + 1000 + 35·5·20 = 7125,
  m[2,4] + m[5,5] + p_1 p_4 p_5 = 4375 + 0 + 35·10·20 = 11375
} = 7125

Step 4: Constructing an optimal solution

MATRIX-CHAIN-MULTIPLY(A, s, i, j)
1  if j > i
2     then X ← MATRIX-CHAIN-MULTIPLY(A, s, i, s[i, j])
3          Y ← MATRIX-CHAIN-MULTIPLY(A, s, s[i, j] + 1, j)
4          return MATRIX-MULTIPLY(X, Y)
5     else return A_i
Example: the call MATRIX-CHAIN-MULTIPLY(A, s, 1, 6) computes the product according to the optimal parenthesization ((A_1(A_2A_3))((A_4A_5)A_6)).
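Rather than multiplying actual matrices, the same recursion over the split table s can emit the optimal parenthesization as a string. A sketch; `matrix_chain_order` and the dimensions `p` are assumptions matching the six-matrix example:

```python
import math

def matrix_chain_order(p):
    """Bottom-up cost/split tables, as in MATRIX-CHAIN-ORDER (1-indexed)."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for l in range(2, n + 1):
        for i in range(1, n - l + 2):
            j = i + l - 1
            m[i][j] = math.inf
            for k in range(i, j):
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j], s[i][j] = q, k
    return m, s

def optimal_parens(s, i, j):
    """Mirror MATRIX-CHAIN-MULTIPLY's recursion on s, but emit the
    parenthesization instead of performing the multiplications."""
    if i == j:
        return f"A{i}"
    k = s[i][j]
    return "(" + optimal_parens(s, i, k) + optimal_parens(s, k + 1, j) + ")"

p = [30, 35, 15, 5, 10, 20, 25]  # assumed example dimensions
_, s = matrix_chain_order(p)
```

Calling `optimal_parens(s, 1, 6)` on this instance yields the parenthesization `((A1(A2A3))((A4A5)A6))`.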