TU/e Algorithms (2IL15) – Lecture 3 1 DYNAMIC PROGRAMMING

TU/e Algorithms (2IL15) – Lecture 3 2 Optimization problems
- for each instance there are (possibly) multiple valid solutions
- goal is to find an optimal solution
  - minimization problem: associate a cost to every solution, find a minimum-cost solution
  - maximization problem: associate a profit to every solution, find a maximum-profit solution

TU/e Algorithms (2IL15) – Lecture 3 3 Techniques for optimization
Optimization problems typically involve making choices.
backtracking: just try all solutions
- can be applied to almost all problems, but gives very slow algorithms
- try all options for the first choice; for each option, recursively make the other choices
greedy algorithms: construct the solution iteratively, always making the choice that seems best
- can be applied to few problems, but gives fast algorithms
- only try the option that seems best for the first choice (the greedy choice), then recursively make the other choices
dynamic programming
- in between: not as fast as greedy, but works for more problems

TU/e Algorithms (2IL15) – Lecture 3 4 Algorithms for optimization: how to improve on backtracking
For greedy algorithms:
- try to discover the structure of optimal solutions: what properties do optimal solutions have?
  - what are the choices that need to be made?
  - do we have optimal substructure? optimal solution = first choice + optimal solution for subproblem
  - do we have the greedy-choice property for the first choice?
What if we do not have a good greedy choice? Then we can often use dynamic programming.

TU/e Algorithms (2IL15) – Lecture 3 Dynamic programming: examples
- maximum-weight independent set (non-adjacent vertices) in a path graph
- optimal matrix-chain multiplication: today
- more examples in the book and next week

TU/e Algorithms (2IL15) – Lecture 3 6 Matrix-chain multiplication
Input: n matrices A_1, A_2, …, A_n
Required output: the product A_1 A_2 … A_n
Multiplying two matrices: an a x b matrix times a b x c matrix gives an a x c matrix;
cost = a∙b∙c scalar multiplications = Θ(a∙b∙c) time.
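To make the a∙b∙c cost concrete, here is a minimal Python sketch (my own illustration, not part of the slides) of the standard triple-loop multiplication; the innermost statement executes exactly a∙b∙c times.

def multiply(A, B):
    """Multiply an a x b matrix A by a b x c matrix B (lists of lists).
    The innermost line runs a*b*c times, i.e. the number of scalar
    multiplications counted on the slide."""
    a, b, c = len(A), len(B), len(B[0])
    assert all(len(row) == b for row in A), "inner dimensions must match"
    C = [[0] * c for _ in range(a)]
    for i in range(a):
        for j in range(c):
            for k in range(b):
                C[i][j] += A[i][k] * B[k][j]
    return C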

TU/e Algorithms (2IL15) – Lecture 3 7 Matrix-chain multiplication
Multiplying three matrices: A (2 x 10), B (10 x 50), C (50 x 20).
Does it matter in which order we multiply?
Matrix multiplication is associative, so (AB)C = A(BC) and the result is the same. What about the cost?
- (AB)C costs (2∙10∙50) + (2∙50∙20) = 3,000 scalar multiplications
- A(BC) costs (10∙50∙20) + (2∙10∙20) = 10,400 scalar multiplications
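As a quick sanity check (my own illustration, not from the slides), the two costs can be recomputed from the dimension vector p = [2, 10, 50, 20]:

p = [2, 10, 50, 20]          # A is p[0] x p[1], B is p[1] x p[2], C is p[2] x p[3]

cost_AB_C = p[0]*p[1]*p[2] + p[0]*p[2]*p[3]   # (AB)C
cost_A_BC = p[1]*p[2]*p[3] + p[0]*p[1]*p[3]   # A(BC)

print(cost_AB_C)   # 3000
print(cost_A_BC)   # 10400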

TU/e Algorithms (2IL15) – Lecture 3 8 Matrix-chain multiplication: the optimization problem
Input: n matrices A_1, A_2, …, A_n, where A_i is a p_{i-1} x p_i matrix.
In fact we only need p_0, …, p_n as input.
Required output: an order to multiply the matrices minimizing the cost; in other words, the best way to place brackets.
So we have:
- valid solution = complete parenthesization, for example ((A_1 A_2)(A_3 (A_4 (A_5 A_6))))
- cost = total number of scalar multiplications needed to multiply according to the given parenthesization

TU/e Algorithms (2IL15) – Lecture 3 9
Let's study the structure of an optimal solution:
- what are the choices that need to be made?
- what are the subproblems that remain? do we have optimal substructure?
Example of an optimal solution: (A_1 (A_2 A_3)) ((A_4 A_5) (A_6 A_7))
- choice: the final multiplication (the outermost product, here splitting between A_3 and A_4)
- remaining subproblems: the optimal way to multiply A_1, …, A_3 and the optimal way to multiply A_4, …, A_7
Optimal substructure: the optimal solution is composed of optimal solutions to subproblems of the same type as the original problem. This is necessary for dynamic programming (and for greedy).

TU/e Algorithms (2IL15) – Lecture 3 10
Let's study the structure of an optimal solution:
- what are the choices that need to be made?
- what are the subproblems that remain? do we have optimal substructure?
Optimal solution: (A_1 (A_2 A_3)) ((A_4 A_5) (A_6 A_7)); choice: the final multiplication.
Do we have a good greedy choice? No … then what to do?
Try all possible options for the position of the final multiplication.

TU/e Algorithms (2IL15) – Lecture 3 11
Approach:
- try all possible options for the final multiplication
- for each option, recursively solve the subproblems
We need to specify exactly what the subproblems are.
- initial problem: compute A_1 … A_n in the cheapest way; splitting as (A_1 … A_k)(A_{k+1} … A_n) gives
- subproblem: compute A_1 … A_k in the cheapest way; splitting as (A_1 … A_i)(A_{i+1} … A_k) gives
- subproblem: compute A_{i+1} … A_k in the cheapest way
- general form: given i, j with 1 ≤ i < j ≤ n, compute A_i … A_j in the cheapest way

TU/e Algorithms (2IL15) – Lecture 3 12
Now let's prove that we indeed have optimal substructure.
Subproblem: given i, j with 1 ≤ i < j ≤ n, compute A_i … A_j in the cheapest way.
Define m[i,j] = value of an optimal solution to the subproblem = minimum cost to compute A_i … A_j.
Lemma:
m[i,j] = 0                                                          if i = j
m[i,j] = min_{i ≤ k < j} { m[i,k] + m[k+1,j] + p_{i-1} p_k p_j }    if i < j
The optimal solution is composed of optimal solutions to subproblems, but we don't know the best way to decompose into subproblems (hence the minimum over all k).

TU/e Algorithms (2IL15) – Lecture 3 13
Lemma:
m[i,j] = 0                                                          if i = j
m[i,j] = min_{i ≤ k < j} { m[i,k] + m[k+1,j] + p_{i-1} p_k p_j }    if i < j
Proof:
i = j: trivial.
i < j: Consider an optimal solution OPT for computing A_i … A_j and let k* be the index such that OPT has the form (A_i … A_{k*})(A_{k*+1} … A_j). Then
  m[i,j] = (cost in OPT for A_i … A_{k*}) + (cost in OPT for A_{k*+1} … A_j) + p_{i-1} p_{k*} p_j
         ≥ m[i,k*] + m[k*+1,j] + p_{i-1} p_{k*} p_j
         ≥ min_{i ≤ k < j} { m[i,k] + m[k+1,j] + p_{i-1} p_k p_j }.
On the other hand, for any i ≤ k < j there is a solution of cost m[i,k] + m[k+1,j] + p_{i-1} p_k p_j, so m[i,j] ≤ min_{i ≤ k < j} { m[i,k] + m[k+1,j] + p_{i-1} p_k p_j }.
The first inequality is a "cut-and-paste" argument: replace the solution that OPT uses for a subproblem by an optimal solution to that subproblem, without making the overall solution worse.

TU/e Algorithms (2IL15) – Lecture 3 14
Lemma:
m[i,j] = 0                                                          if i = j
m[i,j] = min_{i ≤ k < j} { m[i,k] + m[k+1,j] + p_{i-1} p_k p_j }    if i < j

RMC(p, i, j)
\\ RMC (RecursiveMatrixChain) determines the minimum cost for computing A_i … A_j
\\ p[0..n] is an array such that A_i is a p[i-1] x p[i] matrix
1. if i = j
2.   then
3.     cost ← 0
4.   else
5.     cost ← ∞
6.     for k ← i to j-1
7.       do cost ← min(cost, RMC(p,i,k) + RMC(p,k+1,j) + p[i-1]∙p[k]∙p[j])
8. return cost

initial call: RMC(p[0..n], 1, n)
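For reference, a direct Python transcription of RMC (my own sketch, mirroring the pseudocode above); it is exponentially slow, as the next slide analyses.

def rmc(p, i, j):
    """Minimum number of scalar multiplications to compute A_i ... A_j,
    where A_k is a p[k-1] x p[k] matrix (1-indexed, as on the slide).
    Plain recursion, no memoization: exponential running time."""
    if i == j:
        return 0
    return min(rmc(p, i, k) + rmc(p, k + 1, j) + p[i - 1] * p[k] * p[j]
               for k in range(i, j))

# initial call for the example of slide 7: p = [2, 10, 50, 20], so n = 3
print(rmc([2, 10, 50, 20], 1, 3))   # 3000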

TU/e Algorithms (2IL15) – Lecture 3 15
Correctness: by induction, using the lemma on optimal substructure.
Analysis of running time:
Define T(m) = worst-case running time of RMC(i,j), where m = j - i + 1. Then:
T(m) = Θ(1)                                     if m = 1
T(m) = Θ(1) + ∑_{1 ≤ s < m} (T(s) + T(m-s))     if m > 1
Claim: T(m) = Ω(2^m)
Proof: by induction (exercise).
The algorithm is faster than completely brute force, but still very slow …
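To see the blow-up concretely, here is a small counting sketch (my own, not from the slides): it counts the recursive calls made by the plain recursion on a chain of m matrices, which is consistent with the Ω(2^m) claim.

def count_calls(m):
    """Number of calls made by plain RMC on a chain of m matrices
    (the matrix dimensions do not affect the call count)."""
    if m == 1:
        return 1
    return 1 + sum(count_calls(s) + count_calls(m - s) for s in range(1, m))

for m in range(1, 8):
    print(m, count_calls(m))   # 1, 3, 9, 27, 81, ... = 3^(m-1), certainly Ω(2^m)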

TU/e Algorithms (2IL15) – Lecture 3 16
Why is the algorithm so slow?
[Figure: the tree of recursive calls starting at (1,n), with children (1,1)+(2,n), (1,2)+(3,n), …, (1,n-1)+(n,n), each branching further, e.g. (2,n) into (2,2)+(3,n), (2,3)+(4,n), …, (2,n-1)+(n,n).]
Overlapping subproblems, so the same subproblem is solved many times.
Optimal substructure + overlapping subproblems: then you can often use dynamic programming.

TU/e Algorithms (2IL15) – Lecture 3 17
How many different subproblems are there?
[Figure: the same tree of recursive calls as on the previous slide.]
Subproblem: given i, j with 1 ≤ i < j ≤ n, compute A_i … A_j in the cheapest way.
Number of subproblems = Θ(n^2)

TU/e Algorithms (2IL15) – Lecture 3 18
Lemma:
m[i,j] = 0                                                          if i = j
m[i,j] = min_{i ≤ k < j} { m[i,k] + m[k+1,j] + p_{i-1} p_k p_j }    if i < j
Idea: fill in the values m[i,j] in a table (that is, an array) m[1..n, 1..n].
[Figure: an n x n table with rows indexed by i and columns by j; the diagonal entries m[i,i] are 0, only the part with i ≤ j is used, and the entry m[1,n] is the value of an optimal solution to the original problem.]

TU/e Algorithms (2IL15) – Lecture 3 19
Lemma:
m[i,j] = 0                                                          if i = j
m[i,j] = min_{i ≤ k < j} { m[i,k] + m[k+1,j] + p_{i-1} p_k p_j }    if i < j

MatrixChainDP(p[0..n])
\\ the algorithm uses a two-dimensional table (array) m[1..n,1..n] to store the values of solutions to subproblems
1. for i ← 1 to n
2.   do m[i,i] ← 0
3. for i ← n-1 downto 1
4.   do for j ← i+1 to n
5.     do m[i,j] ← min_{i ≤ k < j} { m[i,k] + m[k+1,j] + p[i-1]∙p[k]∙p[j] }
6. return m[1,n]
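A bottom-up Python sketch of MatrixChainDP (my own transcription of the pseudocode above, with the tables padded so that the indices stay 1-based):

def matrix_chain_dp(p):
    """Bottom-up DP for matrix-chain multiplication.
    p[0..n]: A_i is a p[i-1] x p[i] matrix; returns the minimum number
    of scalar multiplications to compute A_1 ... A_n."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n - 1, 0, -1):          # i = n-1 downto 1
        for j in range(i + 1, n + 1):      # j = i+1 to n
            m[i][j] = min(m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                          for k in range(i, j))
    return m[1][n]

print(matrix_chain_dp([2, 10, 50, 20]))    # 3000 for the three-matrix example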

TU/e Algorithms (2IL15) – Lecture 3 20
Analysis of running time of MatrixChainDP(p[0..n]):
- lines 1-2 (initializing m[i,i] ← 0): Θ(n)
- line 6 (return m[1,n]): Θ(1)
- lines 3-5: computing m[i,j] takes Θ(j-i) time, so the total is
  ∑_{1≤i≤n-1} ∑_{i+1≤j≤n} Θ(j-i) = ∑_{1≤i≤n-1} ∑_{1≤j≤n-i} Θ(j) = ∑_{1≤i≤n-1} Θ((n-i)^2) = ∑_{1≤i≤n-1} Θ(i^2) = Θ(n^3)
In short: for each of the O(n^2) cells we spend O(n) time.

TU/e Algorithms (2IL15) – Lecture 3 21 Designing dynamic-programming algorithms
- Investigate the structure of an optimal solution: do we have optimal substructure, no greedy choice, and overlapping subproblems?
- Give a recursive formula for the value of an optimal solution
  - define the subproblem in terms of a few parameters
    (here: given i, j with 1 ≤ i < j ≤ n, compute A_i … A_j in the cheapest way)
  - define a variable m[..] = value of an optimal solution for the subproblem
    (here: m[i,j] = minimum cost to compute A_i … A_j)
  - give a recursive formula for m[..]
    (here: m[i,j] = 0 if i = j, and m[i,j] = min_{i ≤ k < j} { m[i,k] + m[k+1,j] + p_{i-1} p_k p_j } if i < j)
- Algorithm: fill in the table for m[..] in a suitable order
- Extend the algorithm so that it computes an actual solution that is optimal, and not just the value of an optimal solution

TU/e Algorithms (2IL15) – Lecture 3 22
Extend the algorithm so that it computes an actual solution that is optimal, and not just the value of an optimal solution:
- introduce an extra table that remembers the choices leading to an optimal solution
  - m[i,j] = minimum cost to compute A_i … A_j   (value of opt)
  - s[i,j] = index k such that there is an optimal solution of the form (A_i … A_k)(A_{k+1} … A_j)   (choice leading to opt)
- modify the algorithm such that it also fills in the new table
- write another algorithm that computes an optimal solution by walking "backwards" through the new table

TU/e Algorithms (2IL15) – Lecture 3 23
MatrixChainDP(p[0..n])
1. for i ← 1 to n
2.   do m[i,i] ← 0
3. for i ← n-1 downto 1
4.   do for j ← i+1 to n
5.     do m[i,j] ← min_{i ≤ k < j} { m[i,k] + m[k+1,j] + p[i-1]∙p[k]∙p[j] }
6. return m[1,n]

Replace line 5 so that the algorithm also fills in s[i,j]:
  m[i,j] ← ∞
  for k ← i to j-1
    do begin
         cost ← m[i,k] + m[k+1,j] + p[i-1]∙p[k]∙p[j]
         if cost < m[i,j]
           then begin m[i,j] ← cost; s[i,j] ← k end
       end
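A Python sketch of this extended version (my own; function and variable names are mine), filling both the cost table m and the choice table s:

def matrix_chain_dp_with_choices(p):
    """Bottom-up DP that also records the optimal split points.
    Returns (m, s): m[i][j] is the minimum cost to compute A_i ... A_j and
    s[i][j] is a split index k with an optimal solution (A_i..A_k)(A_{k+1}..A_j).
    Tables are (n+1) x (n+1) and 1-indexed to match the pseudocode."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n - 1, 0, -1):
        for j in range(i + 1, n + 1):
            m[i][j] = float('inf')
            for k in range(i, j):
                cost = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if cost < m[i][j]:
                    m[i][j] = cost
                    s[i][j] = k
    return m, s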

TU/e Algorithms (2IL15) – Lecture 3 24
PrintParentheses(i, j)
1. if i = j
2.   then print "A_i"
3.   else begin
4.     print "("
5.     PrintParentheses(i, s[i,j])
6.     PrintParentheses(s[i,j]+1, j)
7.     print ")"
8.   end

Recall: s[i,j] = index k such that there is an optimal solution of the form (A_i … A_k)(A_{k+1} … A_j) to compute A_i … A_j.
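In Python this walk "backwards" through s can return the parenthesization as a string (my own sketch, built on the matrix_chain_dp_with_choices function above):

def parenthesization(s, i, j):
    """Optimal parenthesization of A_i ... A_j as a string,
    using the split table s (mirrors PrintParentheses)."""
    if i == j:
        return "A%d" % i
    k = s[i][j]
    return "(" + parenthesization(s, i, k) + " " + parenthesization(s, k + 1, j) + ")"

m, s = matrix_chain_dp_with_choices([2, 10, 50, 20])
print(m[1][3], parenthesization(s, 1, 3))   # 3000 ((A1 A2) A3)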

TU/e Algorithms (2IL15) – Lecture 3 25
Alternative to filling in the table in a suitable order: memoization. Keep the recursive algorithm, but remember which subproblems have been solved already.

RMC(i,j)
1. if i = j
2.   then return 0
3.   else
4.     cost ← ∞
5.     for k ← i to j-1
6.       do cost ← min(cost, RMC(i,k) + RMC(k+1,j) + p[i-1]∙p[k]∙p[j])
7.     return cost

Memoized version:
- initialize array m[i,j] (a global variable): m[i,j] = ∞ for all 1 ≤ i,j ≤ n
- in the else branch, first check the table: if m[i,j] < ∞ then return m[i,j], else execute lines 4-7 as before, storing the result with m[i,j] ← cost just before returning it
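A memoized Python sketch (my own; it uses a dictionary instead of the ∞-initialized global array, which has the same effect):

def rmc_memo(p, i, j, memo=None):
    """Memoized recursive matrix-chain cost: the same recursion as RMC,
    but each subproblem (i, j) is solved only once."""
    if memo is None:
        memo = {}
    if i == j:
        return 0
    if (i, j) in memo:                       # subproblem solved already
        return memo[(i, j)]
    cost = min(rmc_memo(p, i, k, memo) + rmc_memo(p, k + 1, j, memo)
               + p[i - 1] * p[k] * p[j]
               for k in range(i, j))
    memo[(i, j)] = cost
    return cost

print(rmc_memo([2, 10, 50, 20], 1, 3))       # 3000, now in polynomial time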

TU/e Algorithms (2IL15) – Lecture 3 26 Summary: dynamic programming
- Investigate the structure of an optimal solution: do we have optimal substructure, no greedy choice, and overlapping subproblems?
- Give a recursive formula for the value of an optimal solution
  - define the subproblem in terms of a few parameters
  - define a variable m[..] = value of an optimal solution for the subproblem
  - give a recursive formula for m[..]
- Algorithm: fill in the table for m[..] in a suitable order
- Extend the algorithm so that it computes an actual solution that is optimal, and not just the value of an optimal solution
Next week: more examples of dynamic programming.