Dynamic Programming Carrie Williams

What is Dynamic Programming?
- A method of breaking a problem into smaller, simpler sub-problems
- Start with the smallest sub-problems and combine their solutions to build a solution to the whole problem
- Similar to Divide and Conquer, but uses a bottom-up approach

When is it used?
- Most often used for optimization problems
- Used when the brute-force approach becomes too time-consuming
  - Brute force means generating all possible solutions and comparing them

Example: Matrix-Chain Multiplication
- Given: a sequence of matrices (A_1, A_2, ..., A_n), where A_i has dimensions m_{i-1} x m_i
- Cost of a solution: the number of scalar multiplications needed to compute A_1 * A_2 * ... * A_n
- Optimal solution: the solution with minimal cost

MCM Example
- We have the dimension sequence (90, 20, 15, 50, 180)
- So we have matrices of the following sizes:
  - A_1 = 90 x 20
  - A_2 = 20 x 15
  - A_3 = 15 x 50
  - A_4 = 50 x 180

MCM Example Cont.
- We want to compute A_1 * A_2 * A_3 * A_4
- There are 5 possible parenthesizations:
  - (A_1 (A_2 (A_3 A_4)))
  - (A_1 ((A_2 A_3) A_4))
  - ((A_1 A_2) (A_3 A_4))
  - ((A_1 (A_2 A_3)) A_4)
  - (((A_1 A_2) A_3) A_4)
- Recall that multiplying an m x n matrix by an n x p matrix takes m*n*p scalar multiplications and yields an m x p matrix
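The five orderings above can be generated and costed directly. This is a brute-force sketch of my own (the function name and output format are not from the slides), using only the m*n*p counting rule:

```python
# Enumerate every parenthesization of the chain A_i..A_j with its cost.
# dims = (m_0, ..., m_n), so A_i is dims[i-1] x dims[i].
def all_orders(dims, i, j):
    """Yield (cost, expression) for every way to parenthesize A_i..A_j."""
    if i == j:
        yield 0, f"A{i}"            # a single matrix costs nothing
        return
    for k in range(i, j):           # split the chain after A_k
        for lcost, lexpr in all_orders(dims, i, k):
            for rcost, rexpr in all_orders(dims, k + 1, j):
                # the top-level product is (m_{i-1} x m_k) times (m_k x m_j)
                top = dims[i - 1] * dims[k] * dims[j]
                yield lcost + rcost + top, f"({lexpr}{rexpr})"

dims = (90, 20, 15, 50, 180)
for cost, expr in sorted(all_orders(dims, 1, 4)):
    print(f"{cost:>7,}  {expr}")
```

For this chain the enumeration prints five lines, from 405,000 for ((A1A2)(A3A4)) up to 915,000 for ((A1(A2A3))A4), which is the comparison the brute-force approach performs explicitly.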

MCM Example Cont. We could try all possible solutions and see what solution gives us the least cost We could try all possible solutions and see what solution gives us the least costOR We could use the method of dynamic programming We could use the method of dynamic programming –Finding the optimal solution by finding the optimal solution of sub-problems

Finding the Solution
- Start with the computation of two matrices
- Let M[i,j] be the cost of the optimal solution for computing A_i * ... * A_j
- M[i,j] = min over i <= k < j of { M[i,k] + M[k+1,j] + m_{i-1} * m_k * m_j }
- Now we compute M[i,j] for all 1 <= i < j <= n, from the bottom up
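A bottom-up implementation of this recurrence can be sketched as follows (the function and variable names are mine; tables are 1-indexed to match the slides):

```python
# Bottom-up matrix-chain DP for the recurrence:
#   M[i][j] = min over i <= k < j of  M[i][k] + M[k+1][j] + m_{i-1}*m_k*m_j
# dims = (m_0, ..., m_n).
def matrix_chain(dims):
    n = len(dims) - 1                                 # number of matrices
    M = [[0] * (n + 1) for _ in range(n + 1)]         # M[i][i] = 0
    split = [[0] * (n + 1) for _ in range(n + 1)]     # best k for each (i, j)
    for length in range(2, n + 1):                    # chain length, bottom up
        for i in range(1, n - length + 2):
            j = i + length - 1
            M[i][j] = float("inf")
            for k in range(i, j):
                cost = M[i][k] + M[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                if cost < M[i][j]:
                    M[i][j], split[i][j] = cost, k
    return M, split

M, split = matrix_chain((90, 20, 15, 50, 180))
print(M[1][4])   # -> 405000, the minimal cost for the whole chain
```

Filling the table by increasing chain length guarantees that M[i,k] and M[k+1,j] are already known when M[i,j] is computed, which is exactly the bottom-up order the slides walk through next.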

Finding the Solution Cont.
- Using the previous example, we need to compute M[i,j] for 1 <= i < j <= 4
- M[i,i] = 0, for all i
- The solutions for two matrices are the following:
  - M[1,2] = 90*20*15 = 27,000
  - M[2,3] = 20*15*50 = 15,000
  - M[3,4] = 15*50*180 = 135,000

Chart of Optimal Solutions

M[i,j]    j=1       j=2       j=3       j=4
i=1         0    27,000
i=2                    0    15,000
i=3                              0   135,000
i=4                                        0

Computing the Cost of Three Matrices

M[1,3] = min { M[1,2] + M[3,3] + m_0*m_2*m_3,  M[1,1] + M[2,3] + m_0*m_1*m_3 }
  k=2:  27,000 + 0 + 90*15*50 = 94,500
  k=1:  0 + 15,000 + 90*20*50 = 105,000

M[2,4] = min { M[2,3] + M[4,4] + m_1*m_3*m_4,  M[2,2] + M[3,4] + m_1*m_2*m_4 }
  k=3:  15,000 + 0 + 20*50*180 = 195,000
  k=2:  0 + 135,000 + 20*15*180 = 189,000

Minimum for M[1,3] = 94,500, from ((A_1 A_2) A_3)
Minimum for M[2,4] = 189,000, from (A_2 (A_3 A_4))

Chart of Optimal Solutions

M[i,j]    j=1       j=2       j=3       j=4
i=1         0    27,000    94,500
i=2                    0    15,000   189,000
i=3                              0   135,000
i=4                                        0

Computing the Cost of Four Matrices

M[1,4] = min { M[1,3] + M[4,4] + m_0*m_3*m_4,
               M[1,2] + M[3,4] + m_0*m_2*m_4,
               M[1,1] + M[2,4] + m_0*m_1*m_4 }
  k=3:  94,500 + 0 + 90*50*180 = 904,500
  k=2:  27,000 + 135,000 + 90*15*180 = 405,000
  k=1:  0 + 189,000 + 90*20*180 = 513,000

Hence, the minimum cost is 405,000, achieved by ((A_1 A_2)(A_3 A_4))
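The three candidate sums can be checked mechanically. This snippet is only a verification sketch (labels and values are taken from the computations in this walkthrough, with M[1,3] = 94,500 and M[2,4] = 189,000):

```python
# Check the three candidate splits for M[1,4].
# m = (m_0, ..., m_4) is the dimension sequence from the example.
m = [90, 20, 15, 50, 180]
candidates = {
    "k=3: ((A1 A2 A3) A4)":  94_500 + 0 + m[0] * m[3] * m[4],
    "k=2: ((A1 A2)(A3 A4))": 27_000 + 135_000 + m[0] * m[2] * m[4],
    "k=1: (A1 (A2 A3 A4))":  0 + 189_000 + m[0] * m[1] * m[4],
}
for name, cost in candidates.items():
    print(f"{name}: {cost:,}")
print(min(candidates.values()))   # -> 405000
```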

Chart of Optimal Solutions

M[i,j]    j=1       j=2       j=3       j=4
i=1         0    27,000    94,500   405,000
i=2                    0    15,000   189,000
i=3                              0   135,000
i=4                                        0

Conclusion
- We used the method of dynamic programming to solve the matrix-chain multiplication problem
- We broke the problem into three stages of sub-problems:
  - the minimal cost of two matrices, then three matrices, and finally four matrices
- We found the overall solution by building on the sub-problems' optimal solutions