Dynamic Programming.


Dynamic Programming

Dynamic Programming Dynamic programming, like the divide-and-conquer method, solves problems by combining the solutions to subproblems. "Programming" in this context refers to a tabular method, not to writing computer code. We typically apply dynamic programming to optimization problems: such problems can have many possible solutions, each solution has a value, and we wish to find a solution with the optimal (minimum or maximum) value.

Dynamic Programming When developing a dynamic-programming algorithm, we follow a sequence of four steps:
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution, typically in a bottom-up fashion.
4. Construct an optimal solution from computed information.
If we need only the value of an optimal solution, and not the solution itself, then we can omit step 4.

Knapsack problem Given some items, pack the knapsack to get the maximum total value. Each item has some weight and some value. The total weight that we can carry is no more than some fixed number W, so we must consider weights of items as well as their values.

Item #  Weight  Value
1       1       8
2       3       6
3       5       5

Knapsack problem There are two versions of the problem:
1. "0-1 knapsack problem": items are indivisible; you either take an item or not. This version can be solved with dynamic programming.
2. "Fractional knapsack problem": items are divisible; you can take any fraction of an item.

0-1 Knapsack problem Given a knapsack with maximum capacity W, and a set S consisting of n items Each item i has some weight wi and benefit value bi (all wi and W are integer values) Problem: How to pack the knapsack to achieve maximum total value of packed items?

0-1 Knapsack problem The problem, in other words, is to find

    max sum of bi*xi for i = 1..n, subject to sum of wi*xi <= W, with each xi in {0, 1}

The problem is called a "0-1" problem, because each item must be entirely accepted or rejected.

0-1 Knapsack problem: brute-force approach Let's first solve this problem with a straightforward algorithm. Since there are n items, there are 2^n possible combinations of items. We go through all combinations and find the one with maximum value and total weight less than or equal to W. Running time will be O(2^n).
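The brute-force search described above can be sketched in Python (a sketch, not part of the original slides; the function name and the (weight, benefit) item representation are mine):

```python
from itertools import combinations

def knapsack_brute_force(items, W):
    """Try all 2^n subsets of items; items is a list of (weight, benefit) pairs."""
    best = 0
    n = len(items)
    for r in range(n + 1):
        for subset in combinations(items, r):
            weight = sum(w for w, b in subset)
            if weight <= W:
                best = max(best, sum(b for w, b in subset))
    return best

print(knapsack_brute_force([(2, 3), (3, 4), (4, 5), (5, 6)], 5))  # 7
```

Even for modest n this enumerates every subset, which is exactly why the O(n*W) dynamic-programming table below is attractive.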

0-1 Knapsack problem: dynamic programming approach We can do better with an algorithm based on dynamic programming We need to carefully identify the subproblems

Defining a Subproblem Given a knapsack with maximum capacity W, and a set S consisting of n items Each item i has some weight wi and benefit value bi (all wi and W are integer values) Problem: How to pack the knapsack to achieve maximum total value of packed items?

Defining a Subproblem We can do better with an algorithm based on dynamic programming We need to carefully identify the subproblems Let’s try this: If items are labeled 1..n, then a subproblem would be to find an optimal solution for Sk = {items labeled 1, 2, .. k}

Defining a Subproblem If items are labeled 1..n, then a subproblem would be to find an optimal solution for Sk = {items labeled 1, 2, .. k} This is a reasonable subproblem definition. The question is: can we describe the final solution (Sn ) in terms of subproblems (Sk)? Unfortunately, we can’t do that.

Defining a Subproblem A counterexample with five items:

Item #  Weight (wi)  Benefit (bi)
1       2            3
2       4            5
3       5            8
4       3            4
5       9            10

Max weight: W = 20. For S4 = {items 1..4}: total weight 14, maximum benefit 20. For S5 = {items 1..5}: total weight 20, maximum benefit 26 (items 1, 2, 3, 5). The solution for S4 is not part of the solution for S5!!!

Defining a Subproblem As we have seen, the solution for S4 is not part of the solution for S5 So our definition of a subproblem is flawed and we need another one!

Defining a Subproblem Let’s add another parameter: w, which will represent the maximum weight for each subset of items The subproblem then will be to compute V[k,w], i.e., to find an optimal solution for Sk = {items labeled 1, 2, .. k} in a knapsack of size w

Recursive Formula for subproblems The subproblem will then be to compute V[k,w], i.e., to find an optimal solution for Sk = {items labeled 1, 2, .. k} in a knapsack of size w. Assuming we know V[i, j] for all i = 0, 1, 2, …, k-1 and j = 0, 1, 2, …, w, how can we derive V[k,w]?

Recursive Formula for subproblems (continued) It means that the best subset of Sk that has total weight <= w is either: 1) the best subset of Sk-1 that has total weight <= w, or 2) the best subset of Sk-1 that has total weight <= w - wk, plus item k.

Recursive Formula The best subset of Sk that has total weight <= w either contains item k or not:

V[k, w] = V[k-1, w]                                  if wk > w
V[k, w] = max(V[k-1, w], V[k-1, w-wk] + bk)          if wk <= w

First case: wk > w. Item k can't be part of the solution, since if it were, the total weight would be > w, which is unacceptable. Second case: wk <= w. Then item k can be in the solution, and we choose the case with the greater value.

0-1 Knapsack Algorithm

for w = 0 to W
    V[0,w] = 0
for i = 1 to n
    V[i,0] = 0
for i = 1 to n
    for w = 1 to W
        if wi <= w                          // item i can be part of the solution
            if bi + V[i-1,w-wi] > V[i-1,w]
                V[i,w] = bi + V[i-1,w-wi]
            else
                V[i,w] = V[i-1,w]
        else
            V[i,w] = V[i-1,w]               // wi > w
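The table-filling pseudocode above translates directly to Python (an illustrative sketch; the function name and list-based item representation are mine):

```python
def knapsack_01(weights, benefits, W):
    """Bottom-up 0-1 knapsack: V[i][w] = best value using items 1..i with capacity w."""
    n = len(weights)
    V = [[0] * (W + 1) for _ in range(n + 1)]   # row 0 and column 0 are already 0
    for i in range(1, n + 1):
        wi, bi = weights[i - 1], benefits[i - 1]
        for w in range(1, W + 1):
            if wi <= w and bi + V[i - 1][w - wi] > V[i - 1][w]:
                V[i][w] = bi + V[i - 1][w - wi]   # item i is part of the solution
            else:
                V[i][w] = V[i - 1][w]             # item i is left out
    return V

V = knapsack_01([2, 3, 4, 5], [3, 4, 5, 6], 5)
print(V[4][5])  # 7
```

The inputs here are the example used later in the slides: items (2,3), (3,4), (4,5), (5,6) with W = 5.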

Running time

for w = 0 to W
    V[0,w] = 0                  // O(W)
for i = 1 to n                  // repeated n times
    V[i,0] = 0
    < the rest of the code >    // O(W)

What is the running time of this algorithm? O(n*W). Remember that the brute-force algorithm takes O(2^n).

Example Let’s run our algorithm on the following data: n = 4 (# of elements) W = 5 (max weight) Elements (weight, benefit): (2,3), (3,4), (4,5), (5,6)

Example (2) Initialization: for w = 0 to W, V[0,w] = 0, and for i = 1 to n, V[i,0] = 0, so row 0 and column 0 of the table are all zeros.

Example (3) Row i = 1 (item 1: w1 = 2, b1 = 3). For w = 1, w1 > w, so V[1,1] = V[0,1] = 0. For w = 2..5, w1 <= w and b1 + V[0,w-w1] = 3 > V[0,w] = 0, so V[1,w] = 3.

Example (4) Row i = 2 (item 2: w2 = 3, b2 = 4). For w = 1, 2 we copy the row above: 0, 3. For w = 3: b2 + V[1,0] = 4 > 3, so V[2,3] = 4. For w = 4: 4 + V[1,1] = 4 > 3, so V[2,4] = 4. For w = 5: 4 + V[1,2] = 7 > 3, so V[2,5] = 7.

Example (5) Row i = 3 (item 3: w3 = 4, b3 = 5). For w = 1..3 we copy the row above: 0, 3, 4. For w = 4: 5 + V[2,0] = 5 > 4, so V[3,4] = 5. For w = 5: 5 + V[2,1] = 5 < 7, so V[3,5] = 7.

Example (6) Row i = 4 (item 4: w4 = 5, b4 = 6). For w = 1..4 we copy the row above. For w = 5: 6 + V[3,0] = 6 < 7, so V[4,5] = 7. The completed table:

i\w  0  1  2  3  4  5
0    0  0  0  0  0  0
1    0  0  3  3  3  3
2    0  0  3  4  4  7
3    0  0  3  4  5  7
4    0  0  3  4  5  7

How to find actual Knapsack Items All of the information we need is in the table: V[n,W] is the maximal value of items that can be placed in the knapsack. Let i = n and k = W:

while i, k > 0
    if V[i,k] != V[i-1,k] then
        mark the ith item as in the knapsack
        i = i-1, k = k - wi
    else
        i = i-1                 // the ith item is not in the knapsack
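A sketch of the full procedure in Python, table fill plus the backtracking walk just described (function name and list-based item representation are mine):

```python
def knapsack_items(weights, benefits, W):
    """Fill the DP table, then walk back from V[n][W] to recover the chosen items."""
    n = len(weights)
    V = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        wi, bi = weights[i - 1], benefits[i - 1]
        for w in range(1, W + 1):
            if wi <= w and bi + V[i - 1][w - wi] > V[i - 1][w]:
                V[i][w] = bi + V[i - 1][w - wi]
            else:
                V[i][w] = V[i - 1][w]
    items, k = [], W
    for i in range(n, 0, -1):
        if V[i][k] != V[i - 1][k]:    # value changed, so item i is in the knapsack
            items.append(i)
            k -= weights[i - 1]
    return sorted(items)

print(knapsack_items([2, 3, 4, 5], [3, 4, 5, 6], 5))  # [1, 2]
```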

Finding the Items Start at i = 4, k = 5: V[4,5] = 7 = V[3,5], so item 4 is not in the knapsack; i = 3.

Finding the Items (2) i = 3, k = 5: V[3,5] = 7 = V[2,5], so item 3 is not in the knapsack; i = 2.

Finding the Items (3) i = 2, k = 5: V[2,5] = 7 != V[1,5] = 3, so item 2 is in the knapsack; i = 1, k = 5 - w2 = 2.

Finding the Items (4) i = 1, k = 2: V[1,2] = 3 != V[0,2] = 0, so item 1 is in the knapsack; i = 0, k = 2 - w1 = 0, and we stop. The optimal knapsack contains items {1, 2}, with total weight 5 and total benefit 7.

Memoization (Memory Function Method) Goal: solve only the subproblems that are necessary, and solve each of them only once. Memoization is another way to deal with overlapping subproblems in dynamic programming. With memoization, we implement the algorithm recursively: if we encounter a new subproblem, we compute and store the solution; if we encounter a subproblem we have seen, we look up the answer. It is most useful when the algorithm is easiest to implement recursively, especially if we do not need solutions to all subproblems.

0-1 Knapsack Memory Function Algorithm

for i = 1 to n
    for w = 1 to W
        V[i,w] = -1
for w = 0 to W
    V[0,w] = 0
for i = 1 to n
    V[i,0] = 0

MFKnapsack(i, w)
    if V[i,w] < 0
        if w < wi
            value = MFKnapsack(i-1, w)
        else
            value = max(MFKnapsack(i-1, w),
                        bi + MFKnapsack(i-1, w - wi))
        V[i,w] = value
    return V[i,w]
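A memoized version can be sketched in Python; here functools.lru_cache plays the role of the V table initialized to -1, so only the subproblems that are actually reached get computed (a sketch; the function names are mine):

```python
import functools

def mf_knapsack(weights, benefits, W):
    """Top-down memoized 0-1 knapsack: only needed subproblems are computed."""
    @functools.lru_cache(maxsize=None)
    def mf(i, w):
        if i == 0 or w == 0:            # base cases: no items or no capacity
            return 0
        if weights[i - 1] > w:          # item i cannot fit
            return mf(i - 1, w)
        return max(mf(i - 1, w),
                   benefits[i - 1] + mf(i - 1, w - weights[i - 1]))
    return mf(len(weights), W)

print(mf_knapsack([2, 3, 4, 5], [3, 4, 5, 6], 5))  # 7
```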

Matrix-chain multiplication We are given a sequence (chain) (A1, A2, …, An) of n matrices to be multiplied, and we wish to compute the product A1·A2·…·An. We can solve this by using the standard algorithm for multiplying pairs of matrices as a subroutine, once we have parenthesized the chain to resolve all ambiguities in how the matrices are multiplied together. A product of matrices is fully parenthesized if it is either a single matrix or the product of two fully parenthesized matrix products, surrounded by parentheses.

Matrix-chain multiplication If we have four matrices A1, A2, A3, A4, then there are five distinct full parenthesizations: (A1(A2(A3A4))), (A1((A2A3)A4)), ((A1A2)(A3A4)), ((A1(A2A3))A4), (((A1A2)A3)A4).

Standard Algorithm for Multiplication
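The standard pairwise algorithm multiplies a p x q matrix by a q x r matrix using p·q·r scalar multiplications. A sketch in Python (the function name is mine):

```python
def matrix_multiply(A, B):
    """Standard O(p*q*r) multiplication of a p x q matrix A by a q x r matrix B."""
    p, q, r = len(A), len(B), len(B[0])
    assert len(A[0]) == q, "incompatible dimensions"
    C = [[0] * r for _ in range(p)]
    for i in range(p):
        for j in range(r):
            for k in range(q):
                C[i][j] += A[i][k] * B[k][j]
    return C

print(matrix_multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```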

Order of Multiplication Suppose we have 3 matrices A1, A2, A3 of sizes 10 x 100, 100 x 5, and 5 x 50 respectively. We have two options: ((A1A2)A3) and (A1(A2A3)). For option 1 we do 10·100·5 + 10·5·50 = 5,000 + 2,500 = 7,500 operations. For option 2 we do 100·5·50 + 10·100·50 = 25,000 + 50,000 = 75,000 operations. Option 1 is 10 times faster than option 2.
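The two operation counts can be checked directly:

```python
# A1 is 10 x 100, A2 is 100 x 5, A3 is 5 x 50.
# Multiplying a p x q matrix by a q x r matrix costs p*q*r scalar multiplications.
cost_option1 = 10 * 100 * 5 + 10 * 5 * 50      # ((A1 A2) A3)
cost_option2 = 100 * 5 * 50 + 10 * 100 * 50    # (A1 (A2 A3))
print(cost_option1, cost_option2)  # 7500 75000
```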

matrix-chain multiplication problem Given a chain (A1, A2, …, An) of n matrices, where for i = 1, 2, …, n matrix Ai has dimension pi-1 x pi, fully parenthesize the product A1A2…An in a way that minimizes the number of scalar multiplications. Counting the number of parenthesizations shows that exhaustive search is hopeless: the number of full parenthesizations of a chain of n matrices is given by the Catalan numbers and grows exponentially in n.

Applying dynamic programming Recall Characterize the structure of an optimal solution. Recursively define the value of an optimal solution. Compute the value of an optimal solution, typically in a bottom-up fashion. Construct an optimal solution from computed information.

Step 1: The structure of an optimal parenthesization Let us use the notation Ai..j for the matrix that results from the product Ai Ai+1 … Aj. An optimal parenthesization of the product A1A2…An splits the product between Ak and Ak+1 for some integer k where 1 ≤ k < n. First compute matrices A1..k and Ak+1..n; then multiply them to get the final matrix A1..n. Example, k = 4: (A1A2A3A4)(A5A6). Total cost of A1..6 = cost of A1..4 plus cost of A5..6 plus the cost of multiplying these two matrices together.

Step 1: The structure of an optimal parenthesization Key observation: parenthesizations of the subchains A1A2…Ak and Ak+1Ak+2…An must also be optimal if the parenthesization of the chain A1A2…An is optimal (why?) That is, the optimal solution to the problem contains within it the optimal solution to subproblems We must ensure that when we search for the correct place to split the product, we have considered all possible places, so that we are sure of having examined the optimal one.

Step 2: A recursive solution We define the cost of an optimal solution recursively in terms of the optimal solutions to subproblems. We have to find the minimum cost of computing Ai Ai+1 … Aj for 1 ≤ i ≤ j ≤ n. Let m[i, j] be the minimum number of scalar multiplications necessary to compute Ai..j; the minimum cost to compute A1..n is then m[1, n]. Suppose the optimal parenthesization of Ai..j splits the product between Ak and Ak+1 for some integer k where i ≤ k < j.

Step 2: A recursive solution We can define m[i, j] recursively as follows. If i = j, the problem is trivial. To compute m[i, j] for i < j, we observe that Ai..j = (Ai Ai+1 … Ak)·(Ak+1 Ak+2 … Aj) = Ai..k · Ak+1..j. Cost of computing Ai..j = cost of computing Ai..k + cost of computing Ak+1..j + cost of multiplying Ai..k and Ak+1..j. The cost of multiplying Ai..k and Ak+1..j is pi-1·pk·pj. So:

m[i, j] = m[i, k] + m[k+1, j] + pi-1·pk·pj    for i ≤ k < j
m[i, i] = 0                                   for i = 1, 2, …, n

Step 2: A recursive solution But the optimal parenthesization occurs at one value of k among all possible i ≤ k < j; we check all of them and select the best one:

m[i, j] = 0                                                      if i = j
m[i, j] = min { m[i, k] + m[k+1, j] + pi-1·pk·pj : i ≤ k < j }   if i < j

To keep track of how to construct an optimal solution, we use a table s: s[i, j] = the value of k at which Ai Ai+1 … Aj is split in an optimal parenthesization.

Step 3: Computing the optimal costs Input: array p[0..n] containing matrix dimensions, and n. Result: minimum-cost table m and split table s.

MATRIX-CHAIN-ORDER(p, n)
    for i ← 1 to n
        m[i, i] ← 0
    for l ← 2 to n                      // l is the chain length
        for i ← 1 to n - l + 1
            j ← i + l - 1
            m[i, j] ← ∞
            for k ← i to j - 1
                q ← m[i, k] + m[k+1, j] + p[i-1]·p[k]·p[j]
                if q < m[i, j]
                    m[i, j] ← q
                    s[i, j] ← k
    return m and s

Takes O(n^3) time and requires O(n^2) space.
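MATRIX-CHAIN-ORDER translates to Python as follows (a sketch; indices follow the 1-based pseudocode, so row and column 0 of m and s are unused):

```python
import math

def matrix_chain_order(p):
    """m[i][j] = min scalar multiplications for A_i..A_j; s[i][j] = optimal split k.
    p has length n+1: matrix A_i is p[i-1] x p[i]."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for l in range(2, n + 1):               # l is the chain length
        for i in range(1, n - l + 2):
            j = i + l - 1
            m[i][j] = math.inf
            for k in range(i, j):
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j], s[i][j] = q, k
    return m, s

# The 3-matrix example from the slides: 10 x 100, 100 x 5, 5 x 50.
m, s = matrix_chain_order([10, 100, 5, 50])
print(m[1][3], s[1][3])  # 7500 2
```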

Step 3: Computing the optimal costs How many subproblems are there in total? One for each choice of i and j satisfying 1 ≤ i ≤ j ≤ n: Θ(n^2). MATRIX-CHAIN-ORDER(p) takes as input a sequence p = <p0, p1, p2, …, pn> (length[p] = n+1). It fills in the table m in a manner that corresponds to solving the parenthesization problem on matrix chains of increasing length: first m[i, i+1] for all i, then m[i, i+2], and so on.

Step 4: Constructing an optimal solution Each entry s[i, j] records the value of k such that the optimal parenthesization of AiAi+1…Aj splits the product between Ak and Ak+1. So A1..n → A1..s[1,n] · As[1,n]+1..n, then A1..s[1,n] → A1..s[1,s[1,n]] · As[1,s[1,n]]+1..s[1,n], and so on recursively.

Step 4: Constructing an optimal solution (example with a chain of six matrices A1 … A6)

Matrix Chain Multiply Algorithm

Matrix-Chain-Multiply(A, s, i, j)
    if j > i then
        x = Matrix-Chain-Multiply(A, s, i, s[i, j])
        y = Matrix-Chain-Multiply(A, s, s[i, j]+1, j)
        return Matrix-Multiply(x, y)
    else
        return Ai
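The same recursion can print the optimal parenthesization instead of performing the multiplications (a sketch; the function name is mine, and s here is filled in by hand with the values MATRIX-CHAIN-ORDER produces for the 3-matrix example above):

```python
def optimal_parens(s, i, j):
    """Recursively rebuild the optimal parenthesization from the split table s."""
    if i == j:
        return f"A{i}"
    k = s[i][j]
    return "(" + optimal_parens(s, i, k) + optimal_parens(s, k + 1, j) + ")"

# Split table for A1 (10 x 100), A2 (100 x 5), A3 (5 x 50): split A1..3 at k=2.
s = {1: {2: 1, 3: 2}, 2: {3: 2}}
print(optimal_parens(s, 1, 3))  # ((A1A2)A3)
```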

Optimal binary search trees Given a sequence K = k1, k2, …, kn of n distinct keys in sorted order (k1 < k2 < … < kn), we want to build a binary search tree from the keys. For each key ki, we have a probability pi that a search is for ki. We want the BST with minimum expected search cost.
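Under the model above (successful searches only; the full CLRS treatment also adds dummy keys for unsuccessful searches, omitted here), the minimum expected cost has the same shape as matrix-chain ordering: try every key kr as the root of the subtree over keys i..j. A sketch (function name and index layout are mine):

```python
def optimal_bst_cost(p):
    """Expected search cost of an optimal BST over keys k1 < ... < kn with
    search probabilities p[0..n-1]; simplified model with no unsuccessful searches.
    e[i][j] = cost of an optimal BST on keys i..j (1-indexed); e[i][i-1] = 0."""
    n = len(p)
    e = [[0.0] * (n + 2) for _ in range(n + 2)]
    w = [[0.0] * (n + 2) for _ in range(n + 2)]   # w[i][j] = p[i] + ... + p[j]
    for length in range(1, n + 1):
        for i in range(1, n - length + 2):
            j = i + length - 1
            w[i][j] = w[i][j - 1] + p[j - 1]
            # choosing root k_r adds one level to both subtrees, i.e. + w[i][j]
            e[i][j] = min(e[i][r - 1] + e[r + 1][j] for r in range(i, j + 1)) + w[i][j]
    return e[1][n]

print(optimal_bst_cost([0.2, 0.6, 0.2]))  # ~1.4 (k2 as root, k1 and k3 as leaves)
```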

Optimal binary search trees: Example

Example

Optimal substructure

Optimal substructure (1)

Recursive solution

Recursive solution (1)

Computing an optimal solution

Computing an optimal solution(1)

Computing an optimal solution(2)

Computing an optimal solution(3)

Construct an optimal solution

Longest common subsequence Biological applications often need to compare the DNA of two (or more) different organisms. A strand of DNA consists of a string of molecules called bases, where the possible bases are adenine, guanine, cytosine, and thymine. Representing each of these bases by its initial letter, we can express a strand of DNA as a string over the finite set {A, C, G, T}. For example, the DNA of one organism may be S1 = ACCGGTCGAGTGCGCGGAAGCCGGCCGAA, and the DNA of another organism may be S2 = GTCGTTCGGAATGCCGTTGCTCTGTAAA. One longest common subsequence of these two strands is GTCGTCGGAAGCCGGCCGAA.

Longest common subsequence Formally, given a sequence X = (x1, x2, …, xm), another sequence Z = (z1, z2, …, zk) is a subsequence of X if there exists a strictly increasing sequence (i1, i2, …, ik) of indices of X such that for all j = 1, 2, …, k, we have xij = zj. For example, Z = (B, C, D, B) is a subsequence of X = (A, B, C, B, D, A, B) with corresponding index sequence (2, 3, 5, 7). Given two sequences X and Y, we say that a sequence Z is a common subsequence of X and Y if Z is a subsequence of both X and Y. For X = (A, B, C, B, D, A, B) and Y = (B, D, C, A, B, A), a longest common subsequence (LCS) is (B, C, B, A).

longest-common-subsequence problem In the longest-common-subsequence problem, we are given two sequences X = (x1, x2, …, xm) and Y = (y1, y2, …, yn) and wish to find a maximum-length common subsequence of X and Y.

Step 1: Characterizing a longest common subsequence To be precise, given a sequence X = (x1, x2, …, xm), we define the ith prefix of X, for i = 0, 1, …, m, as Xi = (x1, x2, …, xi). For example, if X = (A, B, C, B, D, A, B), then X4 = (A, B, C, B) and X0 is the empty sequence.

Step 2: A recursive solution We can readily see the overlapping-subproblems property in the LCS problem. To find an LCS of X and Y , we may need to find the LCSs of X and Yn-1 and of Xm-1 and Y . But each of these subproblems has the subsubproblem of finding an LCS of Xm-1 and Yn-1. Many other subproblems share subsubproblems.

Step 3: Computing the length of an LCS Procedure LCS-LENGTH takes two sequences X = (x1, x2, …, xm) and Y = (y1, y2, …, yn) as inputs. It stores the c[i, j] values in a table c[0..m, 0..n], and it computes the entries in row-major order. The procedure also maintains a table b[1..m, 1..n] to help us construct an optimal solution.
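A sketch of LCS-LENGTH in Python; instead of maintaining the separate b table, this version reconstructs one LCS directly by re-examining c, a common simplification (the function name is mine):

```python
def lcs(X, Y):
    """LCS-LENGTH plus reconstruction: returns (length, one LCS string)."""
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    # Walk back from c[m][n] to build one LCS.
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:
            out.append(X[i - 1]); i -= 1; j -= 1
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return c[m][n], "".join(reversed(out))

print(lcs("ABCBDAB", "BDCABA"))  # (4, 'BCBA')
```

The inputs here are the X and Y from the CLRS example used in the earlier slides; both the table fill and the walk back take O(m*n) and O(m+n) time respectively.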

Step 4: Constructing an LCS