Dynamic Programming Let's begin by looking at the Fibonacci sequence.

The Fibonacci sequence is defined by the recursive formula Fib(x) = Fib(x-1) + Fib(x-2), with base cases Fib(0) = Fib(1) = 1. However, actually using recursion to calculate Fibonacci numbers is extremely inefficient. Does anyone remember why? Consider what happens if we attempt to calculate Fib(100) using recursion. We get a binary tree of calls with leaf nodes at a minimum depth of 100/2 and a maximum depth of 100, so more than 2^50 calculations will be needed.
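A direct transcription of the definition into Python shows why:

def fib(x):
    # Each call spawns two more calls, so the call tree roughly
    # doubles at every level: exponential running time.
    if x < 2:
        return 1
    return fib(x - 1) + fib(x - 2)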

[Recursion tree for Fib(100): Fib(100) branches into Fib(99) and Fib(98); those branch into Fib(98), Fib(97), Fib(97), and Fib(96); and so on for up to 97 more levels (48 more along the shortest branches), ending in leaves of Fib(0) and Fib(1).]

Clearly, the recursive approach gives us an Ω(2^(n/2)) algorithm. It is much faster to use the recursive formula to build from the bottom up: Fib(0) = 1, Fib(1) = 1, Fib(2) = 2, Fib(3) = 3, Fib(4) = 5, Fib(5) = 8, and so on up to Fib(100). This approach gives us a Θ(n) algorithm. If we are building a whole table of Fibonacci numbers instead of just one, each new entry costs only Θ(1), since it requires a single addition of the two previous values.
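The same computation, built bottom up (a minimal sketch):

def fib_table(n):
    # Fill the table in increasing order; each entry needs only the
    # two entries just before it, a single addition apiece.
    table = [1] * (n + 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_table(100))   # 573147844013817084101, computed instantly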

The Fibonacci sequence is NOT a dynamic programming problem, since it does not involve optimization. However, it does illustrate an important point: Sometimes it is much faster to use a recursive formula to work from the bottom up, building a table of values as you go. Each new value depends on the values that were calculated previously. This is similar to the dynamic programming approach.

Colonel Motors Problem, from Cormen, Leiserson, Rivest, and Stein, Introduction to Algorithms, Second Edition, Chapter 15, pages 324 to 330.

Problem: Two assembly lines both produce the same cars, but take different times at each station. A partially completed car can move to the next station on the same line, or cross over to the next station on the other line. Crossing over takes extra time. The problem is to find the fastest path through the entire line. If there are n stations and 2 lines, how many different ways can we select a path? Brute force is exponential, since there are 2^n possible ways to select stations. We will use dynamic programming instead.

The first step is to define the structure of an optimal solution. Suppose we have taken the fastest path through station S[1, j]. There are only two ways we could have arrived there: from S[1, j-1] on the same line, or by crossing over from S[2, j-1]. If the fastest path came through S[1, j-1], then the car must have taken the fastest path from the start to S[1, j-1]: if there were a faster path to S[1, j-1], we could substitute it to get a faster path to S[1, j] as well. But this is a contradiction, since we assumed we already had the fastest path to S[1, j].

We can use similar reasoning for the path crossing over from S[2, j-1]. This property is called optimal substructure, and a problem must have it for a dynamic programming solution to be applicable. The next step is to define the recursive formula. Denote the fastest total time as f*, and let f[i, j] be the fastest time to get from the start through station S[i, j]. Writing e[i] for the entry time onto line i, a[i, j] for the assembly time at station S[i, j], t[i, j] for the transfer time after station j on line i, and x[i] for the exit time, the recursive formula (developed on the board) is:

f[1, 1] = e[1] + a[1, 1]    (base case)
f[1, j] = min( f[1, j-1] + a[1, j], f[2, j-1] + t[2, j-1] + a[1, j] )    for j > 1

and symmetrically for line 2; finally f* = min( f[1, n] + x[1], f[2, n] + x[2] ). Now fill the total cost array using the recursive formulas.

f[1, 1] = (2 + 7) = 9 from the base case (entry time 2, assembly time 7)
f[2, 1] = (4 + 8) = 12 from the base case

j         1     2     3     4     5     6    exit
f[1, j]   9
f[2, j]  12

f[1, 2] = min(9 + 9, 12 + 2 + 9) = min(18, 23) = 18
f[2, 2] = min(12 + 5, 9 + 2 + 5) = min(17, 16) = 16

j         1     2     3     4     5     6    exit
f[1, j]   9    18
f[2, j]  12    16

Continue in this fashion until the entire table is filled. The last column just adds the exit times. Are we done? No. We still do not know the optimum selection of assembly stations. With some problems (e.g. edit distance) we only need the optimum number, not the sequence of steps needed to achieve it. Here, we need the sequence.

j         1     2     3     4     5     6    exit
f[1, j]   9    18    20    24    32    35    38
f[2, j]  12    16    22    25    30    37    39

Can anyone suggest a method of retrieving the station sequence from the total cost array? There are two ways. One: maintain a system of pointers that tells us where each optimum was obtained from. For example, use an up arrow ↑ to indicate the value was obtained by moving from the previous station on the same line, and a right arrow → to indicate it was obtained by crossing over from the other line.

j         1     2     3     4     5     6    exit
f[1, j]   9   ↑18   →20   ↑24   ↑32   →35    38
f[2, j]  12   →16   ↑22   →25   ↑30   ↑37    39

Trace back using the arrows to recover the path that is shaded on the slide: the exit time 38 comes from f[1, 6] = 35, which crossed over from f[2, 5] = 30; that came from f[2, 4] = 25 on the same line, which crossed over from f[1, 3] = 20, which crossed over from f[2, 2] = 16, which crossed over from f[1, 1] = 9. So the fastest path uses line 1 at stations 1, 3, and 6, and line 2 at stations 2, 4, and 5.

The other alternative is to trace back through the total cost array using the recursive formula.

j         1     2     3     4     5     6    exit
f[1, j]   9    18    20    24    32    35    38
f[2, j]  12    16    22    25    30    37    39

For example, f[1, 6] = min(32 + 4, 30 + 1 + 4), so 35 = min(36, 35). Since the second argument is the smaller one, the value 35 was obtained by crossing over from the other line. Repeat this process back to the start.

Dynamic programming steps:
1. Verify the problem has optimal substructure.
2. Define the recursive formula.
3. Use the recursive formula to build a table of partial solutions from the bottom up.
4. If required, trace back to obtain the sequence of steps comprising the optimal solution.
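All four steps for this problem in one place (a sketch: the individual entry, exit, assembly, and transfer times are not all spelled out on the slides, so the values below are assumed from the CLRS example; they reproduce the table above):

e = [2, 4]                       # entry times onto lines 1 and 2
x = [3, 2]                       # exit times from lines 1 and 2
a = [[7, 9, 3, 4, 8, 4],         # a[i][j]: assembly time at station j+1 on line i+1
     [8, 5, 6, 4, 5, 7]]
t = [[2, 3, 1, 3, 4],            # t[i][j]: transfer time after station j+1 on line i+1
     [2, 1, 2, 2, 1]]

n = len(a[0])                               # 6 stations
f = [[0] * n for _ in range(2)]             # f[i][j]: fastest time through station j on line i
came_from = [[0] * n for _ in range(2)]     # predecessor line, for the trace-back

for i in range(2):
    f[i][0] = e[i] + a[i][0]                # base cases: 9 and 12
for j in range(1, n):
    for i in range(2):
        stay = f[i][j - 1] + a[i][j]                          # previous station, same line
        cross = f[1 - i][j - 1] + t[1 - i][j - 1] + a[i][j]   # cross over from the other line
        f[i][j], came_from[i][j] = min((stay, i), (cross, 1 - i))

line = 0 if f[0][n - 1] + x[0] <= f[1][n - 1] + x[1] else 1   # better exit
best = f[line][n - 1] + x[line]
path = []
for j in range(n - 1, -1, -1):              # step 4: trace back
    path.append((line + 1, j + 1))
    line = came_from[line][j]
path.reverse()
print(best, path)    # 38 [(1, 1), (2, 2), (1, 3), (2, 4), (2, 5), (1, 6)]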

Matrix Chain Multiplication. Review of matrix multiplication: matrix multiplication is not commutative, AB ≠ BA in general, but it is associative, so ABC = (AB)C = A(BC). For AB to be defined, the number of columns in A must equal the number of rows in B. If A is p x q and B is q x r, AB will be p x r.

If A is p x q and B is q x r, then C = AB is given by the following formula:

C[i][j] = sum over k = 1 to q of A[i][k] * B[k][j]

C has pr elements, and each element takes q multiplications to compute, so matrix multiplication is Θ(pqr).
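A sketch of that formula as code, with the loop structure showing where the Θ(pqr) count comes from:

def mat_mul(A, B):
    # A is p x q, B is q x r; C = AB is p x r.
    p, q, r = len(A), len(B), len(B[0])
    C = [[0] * r for _ in range(p)]
    for i in range(p):
        for j in range(r):          # p*r entries in C...
            for k in range(q):      # ...each costing q multiplications
                C[i][j] += A[i][k] * B[k][j]
    return C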

Consider the multiplication ABC, where A is 10x100, B is 100x5, and C is 5x50. There are two possible parenthesizations.

1. ((AB)C): first AB = 10*100*5 = 5000 multiplications, with the result a 10x5 matrix. Second, (AB)C = 10*5*50 = 2500 multiplications. Total = 7500.
2. (A(BC)): first BC = 100*5*50 = 25,000 multiplications, with the result a 100x50 matrix. Second, A(BC) = 10*100*50 = 50,000 multiplications. Total = 75,000.

Brute force over all parenthesizations is a poor strategy, since their number grows as Ω(4^n / n^(3/2)).
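Checking the two totals directly:

# A is 10x100, B is 100x5, C is 5x50.
cost_AB_first = 10 * 100 * 5 + 10 * 5 * 50     # 5000 + 2500 = 7500
cost_BC_first = 100 * 5 * 50 + 10 * 100 * 50   # 25000 + 50000 = 75000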

Step 1. Demonstrate that the problem has optimal substructure. Suppose (A1....Ak)(Ak+1....An) is an optimum parenthesization of A1....An for some k. Then (A1....Ak) and (Ak+1....An) must themselves be parenthesized optimally. (Use the usual proof by contradiction here.) Step 2. Define a recursive solution. Let m[i][j] be the minimum number of multiplications needed to compute Ai....Aj, where i ≤ j. The cost of the best overall solution is then m[1][n]. If i = j, then m[i][j] = 0, since there is only one matrix.

Suppose we are looking for the optimum way to parenthesize Ai....Aj. We will place the outermost parenthesis between Ak and Ak+1, where i ≤ k < j. The trick is to find the best value of k to minimize m[i][j]:

m[i][j] = 0                                                    for i = j
m[i][j] = min( m[i][k] + m[k+1][j] + P[i-1] * P[k] * P[j] )    over i ≤ k < j

where P[i-1] * P[k] * P[j] is the cost of multiplying the two resulting matrices, of sizes P[i-1] x P[k] and P[k] x P[j]. Here P[i] refers to the number of columns in Ai (or, equivalently, the number of rows in Ai+1).
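Transcribed directly, the recurrence becomes a short memoized function. This is a top-down sketch rather than the bottom-up fill described next, and it borrows the dimension list from the four-matrix example a few slides ahead:

from functools import lru_cache

P = (5, 4, 6, 2, 7)    # dimensions P[0]..P[4] for the example below

@lru_cache(maxsize=None)
def m(i, j):
    # Minimum multiplications needed to compute Ai..Aj.
    if i == j:
        return 0
    return min(m(i, k) + m(k + 1, j) + P[i - 1] * P[k] * P[j]
               for k in range(i, j))

print(m(1, 4))   # 158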

It is important to note that when we use the recursive formula, we must already have calculated m[i][k] and m[k+1][j]. Thus we should fill the cost array in increasing order of the length of the chain:

1. m[1][2], m[2][3], m[3][4], ... (all chains of length 2)
2. m[1][3], m[2][4], m[3][5], ... (all chains of length 3)
3. m[1][4], m[2][5], m[3][6], ... (all chains of length 4)
4. ...
5. m[1][n] (the only chain of length n)

[Figure: the n x n cost table, with rows i = 1 to n and columns j = 1 to n. The diagonal m[i][i] is filled first; each successive diagonal follows, moving toward the upper-right corner, which holds the final result m[1][n].]

Example with 4 matrices: A1 = 5x4, A2 = 4x6, A3 = 6x2, A4 = 2x7. This makes P[0] = 5, P[1] = 4, P[2] = 6, P[3] = 2, P[4] = 7.

        j=1   j=2   j=3   j=4
i=1      0
i=2            0
i=3                  0
i=4                        0

m[1][2] = min( m[1][k] + m[k+1][2] + P[0] * P[k] * P[2] ) for 1 ≤ k < 2
        = m[1][1] + m[2][2] + P[0] * P[1] * P[2]    (k = 1)
        = 0 + 0 + 5*4*6 = 120

        j=1   j=2   j=3   j=4
i=1      0    120
i=2            0
i=3                  0
i=4                        0

m[2][3] = min( m[2][k] + m[k+1][3] + P[1] * P[k] * P[3] ) for 2 ≤ k < 3
        = m[2][2] + m[3][3] + P[1] * P[2] * P[3]    (k = 2)
        = 0 + 0 + 4*6*2 = 48

        j=1   j=2   j=3   j=4
i=1      0    120
i=2            0    48
i=3                  0
i=4                        0

m[3][4] = min( m[3][k] + m[k+1][4] + P[2] * P[k] * P[4] ) for 3 ≤ k < 4
        = m[3][3] + m[4][4] + P[2] * P[3] * P[4]    (k = 3)
        = 0 + 0 + 6*2*7 = 84

        j=1   j=2   j=3   j=4
i=1      0    120
i=2            0    48
i=3                  0    84
i=4                        0

m[1][3] = min( m[1][k] + m[k+1][3] + P[0] * P[k] * P[3] ) for 1 ≤ k < 3
        = min( m[1][1] + m[2][3] + P[0] * P[1] * P[3],    (k = 1)
               m[1][2] + m[3][3] + P[0] * P[2] * P[3] )   (k = 2)
        = min( 0 + 48 + 5*4*2, 120 + 0 + 5*6*2 )
        = min( 88, 180 ) = 88

        j=1   j=2   j=3   j=4
i=1      0    120    88
i=2            0     48
i=3                   0    84
i=4                         0

m[2][4] = min( m[2][k] + m[k+1][4] + P[1] * P[k] * P[4] ) for 2 ≤ k < 4
        = min( m[2][2] + m[3][4] + P[1] * P[2] * P[4],    (k = 2)
               m[2][3] + m[4][4] + P[1] * P[3] * P[4] )   (k = 3)
        = min( 0 + 84 + 4*6*7, 48 + 0 + 4*2*7 )
        = min( 252, 104 ) = 104

        j=1   j=2   j=3   j=4
i=1      0    120    88
i=2            0     48   104
i=3                   0    84
i=4                         0

m[1][4] = min( m[1][k] + m[k+1][4] + P[0] * P[k] * P[4] ) for 1 ≤ k < 4
        = min( m[1][1] + m[2][4] + P[0] * P[1] * P[4],    (k = 1)
               m[1][2] + m[3][4] + P[0] * P[2] * P[4],    (k = 2)
               m[1][3] + m[4][4] + P[0] * P[3] * P[4] )   (k = 3)
        = min( 0 + 104 + 5*4*7, 120 + 84 + 5*6*7, 88 + 0 + 5*2*7 )
        = min( 244, 414, 158 ) = 158

        j=1   j=2   j=3   j=4
i=1      0    120    88   158
i=2            0     48   104
i=3                   0    84
i=4                         0

Are we done? No. Just like the Colonel Motors problem, we have the lowest cost, but we don't know the sequence needed to achieve it. The best way is to maintain a separate array s[n][n] that stores the value of k that produced the optimal split in each entry of the total cost array. The correct placement of parentheses can then be recovered with the recursive algorithm below.
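Filling both the cost table m and the split table s first (a sketch, using 1-based indexing via padded lists and the dimension list from the example above):

P = [5, 4, 6, 2, 7]              # dimensions: A1 = 5x4, A2 = 4x6, A3 = 6x2, A4 = 2x7
n = len(P) - 1                   # number of matrices

# m[i][j]: minimum multiplications for Ai..Aj; s[i][j]: the k achieving it.
m = [[0] * (n + 1) for _ in range(n + 1)]
s = [[0] * (n + 1) for _ in range(n + 1)]

for length in range(2, n + 1):               # fill by increasing chain length
    for i in range(1, n - length + 2):
        j = i + length - 1
        m[i][j], s[i][j] = min(
            (m[i][k] + m[k + 1][j] + P[i - 1] * P[k] * P[j], k)
            for k in range(i, j))

print(m[1][n])                               # 158, matching the table above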

def print_brackets(i, j):
    if i == j:
        print("A" + str(i), end="")
    else:
        print("(", end="")
        print_brackets(i, s[i][j])
        print_brackets(s[i][j] + 1, j)
        print(")", end="")
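With the s table computed in the sketch above, print_brackets(1, 4) prints ((A1(A2A3))A4): multiply A2 by A3 first, then A1 by that result, and finally the whole product by A4.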