Dynamic Programming
Andreas Klappenecker [partially based on slides by Prof. Welch]

Dynamic Programming

Optimal substructure: an optimal solution to the problem contains within it optimal solutions to subproblems.

Overlapping subproblems: the space of subproblems is "small", so that a recursive algorithm has to solve the same subproblems over and over.

Giving Optimal Change

Motivation

We have discussed a greedy algorithm for giving change. However, the greedy algorithm is not optimal for all denominations. Can we design an algorithm that will give the minimum number of coins as change for any given amount? Answer: yes, using dynamic programming.

Dynamic Programming Task

For dynamic programming, we have to find some subproblems that might help in solving the coin-change problem. Idea:
- vary the amount
- restrict the available coins

Initial Set Up

Suppose we want to compute the minimum number of coins with values v[1] > v[2] > … > v[n] = 1 to give change for an amount C. Let us call the (i,j)-problem the problem of computing the minimum number of coins with values v[i] > v[i+1] > … > v[n] = 1 to give change for an amount j, where 1 <= j <= C. The original problem is the (1,C)-problem.

Tabulation

Let m[i][j] denote the solution to the (i,j)-problem. Thus, m[i][j] denotes the minimum number of coins to make change for the amount j using coins with values v[i], …, v[n].

Tabulation Example

Denominations v[1]=10, v[2]=6, v[3]=1. [Table of m[i][j] values shown on the slide.]

A Simple Observation

In calculating m[i][j], notice that:
a) Suppose we do not use the coin with value v[i] in the solution of the (i,j)-problem; then m[i][j] = m[i+1][j].
b) Suppose we use the coin with value v[i] in the solution of the (i,j)-problem; then m[i][j] = 1 + m[i][j - v[i]].


Recurrence

We either use a coin with value v[i] in the solution or we don't:

  m[i][j] = m[i+1][j]                               if v[i] > j
  m[i][j] = min{ m[i+1][j], 1 + m[i][j - v[i]] }    otherwise

DP Coin-Change Algorithm

Dynamic_Coin_Change(C, v, n)
  allocate array m[1..n][0..C];
  for (j = 0; j <= C; j++)
    m[n][j] = j;
  for (i = n-1; i >= 1; i--) {
    for (j = 0; j <= C; j++) {
      if (v[i] > j || m[i+1][j] < 1 + m[i][j - v[i]])
        m[i][j] = m[i+1][j];
      else
        m[i][j] = 1 + m[i][j - v[i]];
    }
  }
  return m;
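The algorithm above can be transcribed into Python as a sketch (not part of the slides; the code uses 0-based indexing, so the coin values live in v[0..n-1] and the base row is the last row):

```python
def dynamic_coin_change(C, v):
    """Return the table m, where m[i][j] is the minimum number of coins
    with values v[i], ..., v[n-1] needed to make change for amount j.
    Assumes v is strictly decreasing and ends with 1."""
    n = len(v)
    m = [[0] * (C + 1) for _ in range(n)]
    # Base row: only the coin of value 1 is available, so amount j needs j coins.
    for j in range(C + 1):
        m[n - 1][j] = j
    # Fill the remaining rows bottom-up, following the recurrence.
    for i in range(n - 2, -1, -1):
        for j in range(C + 1):
            # 'or' short-circuits, so m[i][j - v[i]] is never read when v[i] > j.
            if v[i] > j or m[i + 1][j] < 1 + m[i][j - v[i]]:
                m[i][j] = m[i + 1][j]          # do not use coin v[i]
            else:
                m[i][j] = 1 + m[i][j - v[i]]   # use coin v[i]
    return m

m = dynamic_coin_change(12, [10, 6, 1])
print(m[0][12])  # prints 2: 12 = 6 + 6, whereas greedy would use 10 + 1 + 1
```

For the slide's denominations 10, 6, 1 and amount 12, the table confirms that dynamic programming beats the greedy choice.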

Question

The previous algorithm allows us to find the minimum number of coins. How can you modify the algorithm to actually compute the change (i.e., the multiplicities of the coins)?

Matrix Chain Algorithm

Matrices

An n x m matrix A over the real numbers is a rectangular array of nm real numbers that are arranged in n rows and m columns. For example, a 3 x 2 matrix A has 6 entries:

        [ a11  a12 ]
    A = [ a21  a22 ]
        [ a31  a32 ]

where each of the entries aij is a real number.

Definition of Matrix Multiplication

Let A be an n x m matrix and B an m x p matrix. The product of A and B is the n x p matrix AB whose (i,j)-th entry is

  ∑_{k=1}^{m} a_{ik} b_{kj}

In other words, we multiply the entries of the i-th row of A with the entries of the j-th column of B and add them up.
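The definition above can be written out directly as a short Python sketch (hypothetical helper, not from the slides): the (i,j) entry of AB is the dot product of row i of A with column j of B.

```python
def mat_mul(A, B):
    """Naive matrix product of an n x m matrix A and an m x p matrix B,
    represented as lists of rows."""
    n, m, p = len(A), len(B), len(B[0])
    assert len(A[0]) == m, "inner dimensions must agree"
    # Entry (i, j) sums a_ik * b_kj over k, exactly as in the definition.
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

A = [[1, 2], [3, 4], [5, 6]]    # 3 x 2
B = [[7, 8, 9], [10, 11, 12]]   # 2 x 3
print(mat_mul(A, B))  # prints [[27, 30, 33], [61, 68, 75], [95, 106, 117]]
```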

Matrix Multiplication

[Illustrations of matrix multiplication; images courtesy of Wikipedia.]

Complexity of Naïve Matrix Multiplication

Multiplying non-square matrices: A is n x m, B is m x p, and AB is the n x p matrix whose (i,j)-th entry is ∑_{k=1}^{m} a_{ik} b_{kj}. With the basic matrix multiplication algorithm, computing the product AB takes nmp scalar multiplications and n(m-1)p scalar additions.

Matrix Chain Order Problem

Matrix multiplication is associative, meaning that (AB)C = A(BC). Therefore, we have a choice in forming the product of several matrices. What is the least expensive way to form the product of several matrices if the naïve matrix multiplication algorithm is used? [Use the number of scalar multiplications as the cost.]

Why Order Matters

Suppose we have 4 matrices:
  A: 30 x 1
  B: 1 x 40
  C: 40 x 10
  D: 10 x 25

((AB)(CD)) requires 41,200 multiplications.
(A((BC)D)) requires 1,400 multiplications.
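The two totals above can be checked mechanically (a small sketch, not from the slides): each pairwise product of an (n x m) and an (m x p) matrix costs n·m·p scalar multiplications.

```python
def cost(n, m, p):
    # scalar multiplications for an (n x m) times (m x p) product
    return n * m * p

# ((AB)(CD)): AB is 30 x 40, CD is 40 x 25, final product is 30 x 40 times 40 x 25
total1 = cost(30, 1, 40) + cost(40, 10, 25) + cost(30, 40, 25)

# (A((BC)D)): BC is 1 x 10, (BC)D is 1 x 25, final product is 30 x 1 times 1 x 25
total2 = cost(1, 40, 10) + cost(1, 10, 25) + cost(30, 1, 25)

print(total1, total2)  # prints 41200 1400
```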

Matrix Chain Order Problem

Given matrices A_1, A_2, …, A_n, where A_i is a d_{i-1} x d_i matrix:
[1] What is the minimum number of scalar multiplications required to compute the product A_1 · A_2 · … · A_n?
[2] What order of matrix multiplications achieves this minimum?

We focus on question [1]; we will briefly sketch an answer to [2].

A Possible Solution

Try all possibilities and choose the best one. Drawback: there are too many of them (exponential in the number of matrices to be multiplied). We need to be more clever: try dynamic programming!

Step 1: Develop a Recursive Solution

Define M(i,j) to be the minimum number of multiplications needed to compute A_i · A_{i+1} · … · A_j.
Goal: find M(1,n).
Basis: M(i,i) = 0.
Recursion: how can one define M(i,j) recursively?

Defining M(i,j) Recursively

Consider all possible ways to split A_i through A_j into two pieces. Compare the costs of all these splits: the best-case cost for computing the product of each of the two pieces, plus the cost of multiplying the two resulting products. Take the best one:

  M(i,j) = min_k ( M(i,k) + M(k+1,j) + d_{i-1} d_k d_j )
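The recurrence can be evaluated top-down with memoization, which handles the overlapping subproblems automatically. A sketch (not from the slides; the dimension list d has length n+1, so that A_i is d[i-1] x d[i]):

```python
from functools import lru_cache

def matrix_chain_cost(d):
    """Minimum number of scalar multiplications to compute A_1 ... A_n,
    where A_i is a d[i-1] x d[i] matrix."""
    n = len(d) - 1

    @lru_cache(maxsize=None)
    def M(i, j):
        if i == j:
            return 0  # a single matrix needs no multiplications
        # Try every split point k with i <= k < j and take the cheapest.
        return min(M(i, k) + M(k + 1, j) + d[i - 1] * d[k] * d[j]
                   for k in range(i, j))

    return M(1, n)

print(matrix_chain_cost([30, 1, 40, 10, 25]))  # prints 1400
```

On the four matrices from the earlier slide (30x1, 1x40, 40x10, 10x25) this reproduces the 1,400-multiplication optimum.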

Defining M(i,j) Recursively

Split the product as (A_i · … · A_k) · (A_{k+1} · … · A_j), and call the two pieces P_1 and P_2.
The minimum cost to compute P_1 is M(i,k).
The minimum cost to compute P_2 is M(k+1,j).
The cost to compute P_1 · P_2 is d_{i-1} d_k d_j.

Step 2: Find Dependencies Among Subproblems

[Table diagram of M: entries below the main diagonal are not applicable; the goal M(1,n) sits in the upper right corner. Computing an entry (the pink square on the slide) requires the entries to its left and below (the purple ones).]

Defining the Dependencies

Computing M(i,j) uses everything in the same row to the left:
  M(i,i), M(i,i+1), …, M(i,j-1)
and everything in the same column below:
  M(i+1,j), M(i+2,j), …, M(j,j)

Step 3: Identify Order for Solving Subproblems

Recall the dependencies between subproblems just found. Solve the subproblems (i.e., fill in the table entries) this way:
- go along the diagonals
- start just above the main diagonal
- end in the upper right corner (the goal)

Order for Solving Subproblems

[Table diagram of M: the entries are filled in diagonal by diagonal, moving toward the upper right corner.]

Pseudocode

for i := 1 to n do
  M[i,i] := 0
for diag := 1 to n-1 do            // diagonals
  for i := 1 to n-diag do          // rows with an entry on the diag-th diagonal
    j := i + diag                  // column corresponding to row i on the diag-th diagonal
    M[i,j] := infinity
    for k := i to j-1 do
      M[i,j] := min(M[i,j], M[i,k] + M[k+1,j] + d_{i-1} d_k d_j)
    endfor
  endfor
endfor

The running time is O(n^3). Note that this computes only the cost; we will see shortly how to remember the actual sequence of multiplications.
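A bottom-up transcription of this pseudocode in Python (a sketch, not from the slides; indices are kept 1-based to match the slides, so row and column 0 of the table are unused):

```python
def matrix_chain_table(d):
    """Fill in the full table M diagonal by diagonal.
    M[i][j] is the minimum cost to compute A_i ... A_j, with A_i of size d[i-1] x d[i]."""
    n = len(d) - 1
    M = [[0] * (n + 1) for _ in range(n + 1)]  # 1-indexed; row/col 0 unused
    for diag in range(1, n):                   # diagonals above the main one
        for i in range(1, n - diag + 1):
            j = i + diag
            M[i][j] = float("inf")
            for k in range(i, j):              # split between A_k and A_{k+1}
                M[i][j] = min(M[i][j],
                              M[i][k] + M[k + 1][j] + d[i - 1] * d[k] * d[j])
    return M

M = matrix_chain_table([30, 1, 40, 10, 25])
print(M[1][4])  # prints 1400
```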

Example

1: A is 30 x 1
2: B is 1 x 40
3: C is 40 x 10
4: D is 10 x 25

M:      j=1    j=2    j=3     j=4
i=1       0   1200    700    1400
i=2     n/a      0    400     650
i=3     n/a    n/a      0  10,000
i=4     n/a    n/a    n/a       0

Keeping Track of the Order

It's fine to know the cost of the cheapest order, but what is that cheapest order? Keep another array S, and update it when computing the minimum cost in the inner loop. After M and S have been filled in, call a recursive algorithm on S to print out the actual order.

Modified Pseudocode

for i := 1 to n do
  M[i,i] := 0
for diag := 1 to n-1 do
  for i := 1 to n-diag do
    j := i + diag
    M[i,j] := infinity
    for k := i to j-1 do
      M[i,j] := min(M[i,j], M[i,k] + M[k+1,j] + d_{i-1} d_k d_j)
      if the previous line changed the value of M[i,j] then S[i,j] := k
    endfor
  endfor
endfor

S[i,j] keeps track of the cheapest split point found so far: between A_k and A_{k+1}.

Example

1: A is 30 x 1
2: B is 1 x 40
3: C is 40 x 10
4: D is 10 x 25

M:      j=1    j=2    j=3     j=4
i=1       0   1200    700    1400
i=2     n/a      0    400     650
i=3     n/a    n/a      0  10,000
i=4     n/a    n/a    n/a       0

S:      j=2    j=3    j=4
i=1       1      1      1
i=2              2      3
i=3                     3

Using S to Print the Best Ordering

Call Print(S,1,n) to get the entire ordering.

Print(S, i, j):
  if i = j then
    return "A" + i                // + is string concatenation
  else
    k := S[i,j]
    return "(" + Print(S,i,k) + Print(S,k+1,j) + ")"
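Putting the modified pseudocode and Print together gives a complete Python sketch (not from the slides; again 1-indexed to match them):

```python
def matrix_chain_order(d):
    """Return (M, S): M[i][j] is the minimal cost, S[i][j] the best split point."""
    n = len(d) - 1
    M = [[0] * (n + 1) for _ in range(n + 1)]
    S = [[0] * (n + 1) for _ in range(n + 1)]
    for diag in range(1, n):
        for i in range(1, n - diag + 1):
            j = i + diag
            M[i][j] = float("inf")
            for k in range(i, j):
                c = M[i][k] + M[k + 1][j] + d[i - 1] * d[k] * d[j]
                if c < M[i][j]:
                    M[i][j] = c
                    S[i][j] = k  # cheapest split found so far: between A_k and A_{k+1}
    return M, S

def print_order(S, i, j):
    # Recursively reconstruct the fully parenthesized product from S.
    if i == j:
        return "A" + str(i)
    k = S[i][j]
    return "(" + print_order(S, i, k) + print_order(S, k + 1, j) + ")"

M, S = matrix_chain_order([30, 1, 40, 10, 25])
print(M[1][4], print_order(S, 1, 4))  # prints: 1400 (A1((A2A3)A4))
```

For the running example, the recovered ordering multiplies B and C first, then by D, and finally by A, matching the 1,400-multiplication optimum from the earlier slide.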