Chapter 15-1: Dynamic Programming I

About this lecture
The divide-and-conquer strategy allows us to solve a big problem by handling only smaller sub-problems. Some problems may be solved using a stronger strategy: dynamic programming. We will see some examples today.

Assembly Line Scheduling
You are the boss of a company that assembles Gundam models for customers.

Assembly Line Scheduling
Normally, assembling a Gundam model involves the sequential steps 0 through n:
Step 0: getting the model
Step 1: assembling the body
Step 2: assembling the legs
…
Step n-1: polishing
Step n: packaging

Assembly Line Scheduling
To improve efficiency, there are two separate assembly lines, Line 1 and Line 2.
[Figure: two parallel assembly lines, Line 1 and Line 2, each running Steps 0 through n]

Assembly Line Scheduling
Since different lines hire different people, processing speed is not the same. E.g., Line 1 may need 34 mins, and Line 2 may need 38 mins.
[Figure: the two lines annotated with per-step processing times]

Assembly Line Scheduling
With some transportation cost, after a step on one line, we can process the model on the other line during the next step.
[Figure: the two lines with transfer arrows between consecutive steps]

Assembly Line Scheduling
When there is an urgent request, we may finish faster if we can make use of both lines plus the transportation in between. E.g., processing Step 0 on Line 2 and then Step 1 on Line 1 may be better than processing both steps on Line 1.

Assembly Line Scheduling
Question: How do we compute the fastest assembly time?
Let
p_{1,k} = Step k's processing time on Line 1
p_{2,k} = Step k's processing time on Line 2
t_{1,k} = transportation cost from Step k on Line 1 (to Step k+1 on Line 2)
t_{2,k} = transportation cost from Step k on Line 2 (to Step k+1 on Line 1)

Assembly Line Scheduling
Let
f_{1,j} = fastest time to finish Steps 0 to j, ending at Line 1
f_{2,j} = fastest time to finish Steps 0 to j, ending at Line 2
So we have:
f_{1,0} = p_{1,0},  f_{2,0} = p_{2,0}
fastest time = min { f_{1,n}, f_{2,n} }

Assembly Line Scheduling
How can we get f_{1,j}? Intuition: let (1,j) denote the j-th step of Line 1. The fastest way to get to (1,j) must be: first get to the (j-1)-th step of each line using the fastest way, then choose whichever of the two goes on to (1,j) faster. Is our intuition correct?

Assembly Line Scheduling
Lemma: For any j > 0,
f_{1,j} = min { f_{1,j-1} + p_{1,j},  f_{2,j-1} + t_{2,j-1} + p_{1,j} }
f_{2,j} = min { f_{2,j-1} + p_{2,j},  f_{1,j-1} + t_{1,j-1} + p_{2,j} }
Proof: by induction plus contradiction.
Here, the optimal solution to a problem (e.g., f_{1,j}) is based on optimal solutions to subproblems (e.g., f_{1,j-1} and f_{2,j-1}): the optimal substructure property.

Define a function Compute_F(i,j) as follows:

Compute_F(i, j)  /* Finding f_{i,j}, where i = 1 or 2 */
1. if (j == 0) return p_{i,0};
2. g = Compute_F(i, j-1) + p_{i,j};
3. h = Compute_F(3-i, j-1) + t_{3-i,j-1} + p_{i,j};
4. return min { g, h };

Calling Compute_F(1,n) and Compute_F(2,n) gives the fastest assembly time.
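For concreteness, the recursion transcribes directly into Python. The following is a minimal sketch: the instance data p and t are made-up illustrative values (not from the slides), and lines are 0-indexed, so 1-i plays the role of 3-i.

p = [[7, 9, 3, 4, 8], [8, 5, 6, 4, 5]]  # hypothetical: p[i][k] = Step k's time on Line i (0-indexed lines)
t = [[2, 3, 1, 3], [2, 1, 2, 2]]        # hypothetical: t[i][k] = cost to leave Line i after Step k

def compute_f(i, j):
    """Fastest time to finish Steps 0..j ending on Line i (exponential-time recursion)."""
    if j == 0:
        return p[i][0]
    g = compute_f(i, j - 1) + p[i][j]                        # stay on the same line
    h = compute_f(1 - i, j - 1) + t[1 - i][j - 1] + p[i][j]  # switch from the other line
    return min(g, h)

n = len(p[0]) - 1
print(min(compute_f(0, n), compute_f(1, n)))  # fastest assembly time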

Assembly Line Scheduling
Question: What is the running time of Compute_F(i,n)? Let T(n) denote its running time. Then T(n) = 2T(n-1) + Θ(1), so by the recursion-tree method, T(n) = Θ(2^n).
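To see where the Θ(2^n) bound comes from, write the Θ(1) term as a constant c and unroll:
T(n) = 2T(n-1) + c = 4T(n-2) + 2c + c = … = 2^n T(0) + (2^n - 1)c = Θ(2^n)
Each level of the recursion tree doubles the number of calls, and the tree has n levels.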

Assembly Line Scheduling
To improve the running time, observe that Compute_F(1,j) and Compute_F(2,j) both require the SAME subproblems: Compute_F(1,j-1) and Compute_F(2,j-1). So our recursive algorithm has many repeating subproblems, which create redundant computations! Question: Can we avoid it?

Bottom-Up Approach (Method I)
We notice that f_{i,j} depends only on f_{1,k} or f_{2,k} with k < j. Let us create a 2D table F to store all f_{i,j} values once they are computed. Then, let us compute the f_{i,j} from j = 0 to n.

Bottom-Up Approach (Method I)

BottomUp_F()  /* Finding the fastest time */
1. F[1,0] = p_{1,0};  F[2,0] = p_{2,0};
2. for (j = 1, 2, …, n) {
       F[1,j] = min { F[1,j-1] + p_{1,j},  F[2,j-1] + t_{2,j-1} + p_{1,j} };
       F[2,j] = min { F[2,j-1] + p_{2,j},  F[1,j-1] + t_{1,j-1} + p_{2,j} };
   }
3. return min { F[1,n], F[2,n] };

Running time = Θ(n)
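A Python sketch of the same bottom-up computation, reusing the hypothetical p and t arrays from the earlier sketch (0-indexed lines):

def bottom_up_f(p, t):
    """Fastest assembly time, filling table F in order j = 0..n: Theta(n) time."""
    n = len(p[0]) - 1
    F = [[0] * (n + 1) for _ in range(2)]
    F[0][0], F[1][0] = p[0][0], p[1][0]
    for j in range(1, n + 1):
        for i in (0, 1):
            F[i][j] = min(F[i][j - 1] + p[i][j],
                          F[1 - i][j - 1] + t[1 - i][j - 1] + p[i][j])
    return min(F[0][n], F[1][n])

print(bottom_up_f(p, t))  # same answer as the exponential recursion, in linear time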

Memoization (Method II)
Similar to the bottom-up approach, we create a table F to store each f_{i,j} once it is computed. However, we modify the recursive algorithm a bit, so that we still compute the fastest time in a top-down manner. Assume: entries of F are initialized empty. (Memoization comes from the word "memo".)

Original Recursive Algorithm

Compute_F(i, j)  /* Finding f_{i,j} */
1. if (j == 0) return p_{i,0};
2. g = Compute_F(i, j-1) + p_{i,j};
3. h = Compute_F(3-i, j-1) + t_{3-i,j-1} + p_{i,j};
4. return min { g, h };

Memoized Version

Memo_F(i, j)  /* Finding f_{i,j} */
1. if (j == 0) return p_{i,0};
2. if (F[i,j-1] is empty) F[i,j-1] = Memo_F(i, j-1);
3. if (F[3-i,j-1] is empty) F[3-i,j-1] = Memo_F(3-i, j-1);
4. g = F[i,j-1] + p_{i,j};
5. h = F[3-i,j-1] + t_{3-i,j-1} + p_{i,j};
6. return min { g, h };
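The memoized version maps to Python in the same way. Here is a sketch, again on the hypothetical 0-indexed data, with None standing in for an "empty" table entry:

def memo_f(p, t):
    """Top-down memoized fastest assembly time: Theta(n), each entry computed once."""
    n = len(p[0]) - 1
    F = [[None] * (n + 1) for _ in range(2)]  # None = "empty", as in the slides

    def go(i, j):
        if j == 0:
            return p[i][0]
        if F[i][j - 1] is None:
            F[i][j - 1] = go(i, j - 1)
        if F[1 - i][j - 1] is None:
            F[1 - i][j - 1] = go(1 - i, j - 1)
        g = F[i][j - 1] + p[i][j]
        h = F[1 - i][j - 1] + t[1 - i][j - 1] + p[i][j]
        return min(g, h)

    return min(go(0, n), go(1, n))

print(memo_f(p, t))  # matches bottom_up_f(p, t)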

Memoized Version (Running Time)
To find Memo_F(1, n):
1. Memo_F(i, j) is only called when F[i,j] is empty (it becomes nonempty afterwards) ⇒ Θ(n) calls
2. Each Memo_F(i, j) call needs only Θ(1) time apart from its recursive calls
Running time = Θ(n)

Dynamic Programming
The previous strategy that applies "tables" is called dynamic programming (DP). [Here, programming means: a good way to plan things / to optimize the steps.]
A problem that can be solved efficiently by DP often has the following properties:
1. Optimal substructure (allows recursion)
2. Overlapping subproblems (allows speed-up)

Assembly Line Scheduling
Challenge: We now know how to compute the fastest assembly time. How do we get the exact sequence of steps that achieves this time?
Answer: When we compute f_{i,j}, we remember whether its value is based on f_{1,j-1} or f_{2,j-1}; it is then easy to modify the code to recover the sequence (see the sketch below).
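One way to realize this in Python, extending the earlier hypothetical bottom_up_f sketch: record which line each optimal value came from, then walk the remembered choices backwards.

def fastest_with_path(p, t):
    """Return (fastest time, line used for each Step 0..n), with 0-indexed lines."""
    n = len(p[0]) - 1
    F = [[0] * (n + 1) for _ in range(2)]
    choice = [[0] * (n + 1) for _ in range(2)]  # line the model came from at step j-1
    F[0][0], F[1][0] = p[0][0], p[1][0]
    for j in range(1, n + 1):
        for i in (0, 1):
            stay = (F[i][j - 1] + p[i][j], i)
            switch = (F[1 - i][j - 1] + t[1 - i][j - 1] + p[i][j], 1 - i)
            F[i][j], choice[i][j] = min(stay, switch)
    line = 0 if F[0][n] <= F[1][n] else 1
    path = [line]
    for j in range(n, 0, -1):  # walk the remembered choices backwards
        line = choice[line][j]
        path.append(line)
    path.reverse()             # path[j] = line used for Step j
    return min(F[0][n], F[1][n]), path

print(fastest_with_path(p, t))  # fastest time plus one optimal line assignment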

Sharing Gold Coins
Five lucky pirates have discovered a treasure box with 1000 gold coins…

Sharing Gold Coins
There are rankings among the pirates (Rank 1 to Rank 5), and they decide to share the gold coins in the following way:

Sharing Gold Coins
First, the Rank-1 pirate proposes how to share the coins…
If at least half of the pirates agree, go with the proposal.
Else, the Rank-1 pirate is out of the game.
("Hehe, I am going to make the first proposal… but there is a danger that I cannot share any coins.")

Sharing Gold Coins
If the Rank-1 pirate is out, then the Rank-2 pirate proposes how to share the coins…
If at least half of the remaining pirates agree, go with the proposal.
Else, the Rank-2 pirate is out of the game.
("Hehe, I get a chance to propose if the Rank-1 pirate is out of the game.")

Sharing Gold Coins
In general, if the Rank-1, Rank-2, …, Rank-k pirates are out, then the Rank-(k+1) pirate proposes how to share the coins…
If at least half of the remaining pirates agree, go with the proposal.
Else, the Rank-(k+1) pirate is out of the game.
Question: If all the pirates are smart, who will get the most coins? Why?

Homework
Exercise: Practice at home: …