DYNAMIC PROGRAMMING.



Optimization Problems If a problem has only one correct solution, then optimization is not required. For example, there is only one sorted sequence containing a given set of numbers. Optimization problems, in contrast, have many feasible solutions, and we want to compute an optimal one, e.g. one with minimal cost or maximal gain. There may be several solutions that achieve the optimal value. Dynamic programming is a very effective technique for such problems. The development of a dynamic programming algorithm can be broken into a sequence of steps, described next.

Why Dynamic Programming? Dynamic programming, like the divide-and-conquer method, solves problems by combining the solutions to subproblems. Divide-and-conquer algorithms: partition the problem into independent subproblems, solve the subproblems recursively, and combine their solutions to solve the original problem. In contrast, dynamic programming is applicable when the subproblems are not independent. Dynamic programming is typically applied to optimization problems.

Dynamic programming Dynamic programming is a way of improving on inefficient divide-and-conquer algorithms. By “inefficient”, we mean that the same recursive call is made over and over. If the same subproblem is solved several times, we can use a table to store the result of a subproblem the first time it is computed and thus never have to recompute it. Dynamic programming is applicable when the subproblems are dependent, that is, when subproblems share subsubproblems. “Programming” refers to a tabular method.
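To make the table idea concrete, here is a minimal Python sketch (mine, not from the slides): a naive recursive Fibonacci beside a memoized version that stores each subproblem's answer the first time it is computed.

```python
from functools import lru_cache

# Naive divide-and-conquer: the same recursive call is made over and
# over, so the running time is exponential in n.
def fib_naive(n):
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

# Memoized version: each subproblem's answer is stored in a table
# (here, lru_cache's dictionary) the first time it is computed, so
# every subproblem is solved only once and the total cost is O(n).
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)
```

fib_memo(50) returns essentially instantly, while fib_naive(50) would make on the order of tens of billions of recursive calls.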

Difference between DP and Divide-and-Conquer Using divide-and-conquer on these problems is inefficient because the same common subproblems have to be solved many times. DP solves each of them only once and stores the answers in a table for future use.

Elements of Dynamic Programming (DP) DP is used to solve problems with the following characteristics: Simple subproblems: we should be able to break the original problem into smaller subproblems that have the same structure. Optimal substructure: an optimal solution to the problem contains within it optimal solutions to its subproblems. Overlapping subproblems: the same subproblem is solved more than once.

Steps to Designing a Dynamic Programming Algorithm 1. Characterize the optimal substructure. 2. Recursively define the value of an optimal solution. 3. Compute the value bottom-up. 4. (If needed) Construct an optimal solution.

Fibonacci Numbers Fn = Fn-1 + Fn-2 for n ≥ 2, with F0 = 0 and F1 = 1: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, … The straightforward recursive procedure is slow! Let’s draw the recursion tree.

Fibonacci Numbers (figure: the recursion tree, in which the same subproblems appear many times)

Fibonacci Numbers We can calculate Fn in linear time by remembering solutions to already-solved subproblems: dynamic programming. Compute the solution in a bottom-up fashion. In this case, only two values need to be remembered at any time.
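A sketch of that bottom-up computation (mine, not from the slides), keeping only the two most recent values:

```python
def fib(n):
    # Bottom-up dynamic programming: iterate from F0 upward, remembering
    # only the two most recent values, so this is O(n) time and O(1) space.
    prev, curr = 0, 1  # F0 and F1
    for _ in range(n):
        prev, curr = curr, prev + curr
    return prev  # after n steps, prev holds Fn
```

For example, fib(10) returns 55.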

Assembly-line scheduling

Assembly-line scheduling An automobile factory with two assembly lines. Each line has the same number n of stations, numbered j = 1, 2, ..., n. We denote the jth station on line i (where i is 1 or 2) by Si,j. The jth station on line 1 (S1,j) performs the same function as the jth station on line 2 (S2,j). The time required at each station varies, even between stations at the same position on the two lines, because each assembly line uses different technology. The time required at station Si,j is ai,j. There is also an entry time ei for the auto to enter assembly line i and an exit time xi for the completed auto to exit assembly line i.

Assembly-line scheduling After going through station Si,j, the auto can either stay on the same line, in which case the next station is Si,j+1 and there is no transfer cost, or transfer to the other line, in which case the next station is S3-i,j+1 and the transfer cost from Si,j to S3-i,j+1 is ti,j (j = 1, …, n-1). There is no ti,n, because the assembly is complete after Si,n.

Assembly-line Scheduling (figure: the auto enters, passes through stations Si,1 … Si,n on the two lines with possible transfers between them, and the completed auto exits) Stations Si,j: 2 assembly lines, i = 1, 2; n stations, j = 1, ..., n. ai,j = assembly time at Si,j; ti,j = transfer time to the other line after station Si,j; ei = entry time for line i; xi = exit time from line i.

Problem Definition Which stations should be chosen from line 1 and which from line 2 in order to minimize the total time through the factory for one car?

Brute Force Enumerate all possibilities of selecting stations, compute how long each takes, and choose the best one. At each station j = 1, 2, ..., n we choose either line 1 or line 2, so there are 2^n possible ways to choose stations, and the enumeration requires examining Ω(2^n) possibilities. Infeasible when n is large!
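A brute-force sketch (mine, not from the slides) that literally tries all 2^n line assignments. The e, a, t, x instance below is the one used in the worked example later in these slides; the hypothetical helper brute_force finds the optimum, 38, but only by inspecting every one of the 2^6 = 64 paths.

```python
from itertools import product

# Instance from the worked example (0-based indices): entry times e,
# exit times x, assembly times a[i][j] for station j of line i, and
# transfer times t[i][j] charged after leaving station j of line i.
e = [2, 4]
x = [3, 2]
a = [[7, 9, 3, 4, 8, 4],
     [8, 5, 6, 4, 5, 7]]
t = [[2, 3, 1, 3, 4],
     [2, 1, 2, 2, 1]]

def brute_force(e, a, t, x):
    """Enumerate all 2^n station choices; return the minimum total time."""
    n = len(a[0])
    best = float("inf")
    for path in product((0, 1), repeat=n):  # path[j] = line used at station j
        time = e[path[0]] + a[path[0]][0]
        for j in range(1, n):
            if path[j] != path[j - 1]:       # switching lines costs a transfer
                time += t[path[j - 1]][j - 1]
            time += a[path[j]][j]
        best = min(best, time + x[path[-1]])
    return best
```

brute_force(e, a, t, x) returns 38, matching the DP result computed step by step below.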

Step 1: Find Optimal Structure An optimal solution to a problem contains within it optimal solutions to subproblems. The fastest way through station Si,j contains within it the fastest way through station S1,j-1 or S2,j-1. Thus we can construct an optimal solution to a problem from optimal solutions to subproblems.

Step 1: Optimal Solution Structure Optimal substructure: choosing the best path to Si,j. The structure of the fastest way through the factory (from the starting point): for the fastest possible way to get through Si,1 (i = 1, 2) there is only one choice, from the entry point to Si,1, and it takes the entry time ei plus the assembly time ai,1.

Step 1: Optimal Solution Structure The fastest possible way to get through Si,j (i = 1, 2; j = 2, 3, ..., n). Two choices: stay on the same line, Si,j-1 → Si,j, taking time fi[j-1] + ai,j; or transfer from the other line, S2,j-1 → S1,j or S1,j-1 → S2,j, taking time f2[j-1] + t2,j-1 + a1,j or f1[j-1] + t1,j-1 + a2,j, respectively.

Step 2: Recursive Solution Define the value of an optimal solution recursively in terms of the optimal solutions to subproblems. The subproblem here: finding the fastest way through station j on both lines (i = 1, 2). Let fi[j] be the fastest possible time to go from the starting point through Si,j. The fastest time to go all the way through the factory is f* = min(f1[n] + x1, f2[n] + x2), where x1 and x2 are the exit times from lines 1 and 2, respectively.

Step 2: Recursive Solution The fastest time to go through Si,j, where e1 and e2 are the entry times for lines 1 and 2: f1[1] = e1 + a1,1; f2[1] = e2 + a2,1; f1[j] = min (f1[j-1] + a1,j, f2[j-1] + t2,j-1 + a1,j) for j ≥ 2; f2[j] = min (f2[j-1] + a2,j, f1[j-1] + t1,j-1 + a2,j) for j ≥ 2.

Step 2: Recursive Solution Overlapping subproblems: the fastest way through station S1,j is either through S1,j-1 and then S1,j, or through S2,j-1, then a transfer to line 1, and then S1,j. fi[j] (i = 1, 2; j = 1, 2, …, n) records the optimal values of the subproblems: the fastest time from the starting point through Si,j. The fastest time all the way through the factory is f* = min(f1[n] + x1, f2[n] + x2).

Step 2: Recursive Solution To keep track of the fastest way, introduce li[j] to record the line number (1 or 2) whose station j-1 is used in a fastest way through Si,j, and introduce l* to be the line whose station n is used in a fastest way through the factory. We avoid defining li[1] because no station precedes station 1 on either line.

Step 2: Recursive Solution Base Cases f1[1] = e1 + a1,1 f2[1] = e2 + a2,1

Step 2: Recursive Solution Two possible ways of computing f1[j], for j = 2, 3, ..., n: f1[j] = min (f1[j-1] + a1,j, f2[j-1] + t2,j-1 + a1,j). Symmetrically for f2[j], for j = 2, 3, ..., n: f2[j] = min (f2[j-1] + a2,j, f1[j-1] + t1,j-1 + a2,j). Objective function: f* = min(f1[n] + x1, f2[n] + x2).
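The base cases and recurrences translate directly into a table-filling loop. A Python sketch (mine, not from the slides; 0-based indices, so lines are numbered 0 and 1), run on the instance from the worked example that follows:

```python
e = [2, 4]                 # entry times for lines 0 and 1
x = [3, 2]                 # exit times
a = [[7, 9, 3, 4, 8, 4],   # a[i][j]: assembly time at station j of line i
     [8, 5, 6, 4, 5, 7]]
t = [[2, 3, 1, 3, 4],      # t[i][j]: transfer time after station j of line i
     [2, 1, 2, 2, 1]]

def fastest_way(e, a, t, x):
    """Fill the f and l tables bottom-up; return (f*, l*, l)."""
    n = len(a[0])
    f = [[0] * n for _ in range(2)]
    l = [[None] * n for _ in range(2)]  # l[i][j]: line used just before Si,j
    for i in (0, 1):
        f[i][0] = e[i] + a[i][0]        # base cases
    for j in range(1, n):
        for i in (0, 1):
            stay = f[i][j - 1] + a[i][j]
            switch = f[1 - i][j - 1] + t[1 - i][j - 1] + a[i][j]
            f[i][j], l[i][j] = (stay, i) if stay <= switch else (switch, 1 - i)
    if f[0][n - 1] + x[0] <= f[1][n - 1] + x[1]:
        return f[0][n - 1] + x[0], 0, l
    return f[1][n - 1] + x[1], 1, l

f_star, l_star, l = fastest_way(e, a, t, x)  # f_star = 38, l_star = 0 (line 1)
```

Each fi[j] is computed once from two already-filled entries, so the whole table costs O(n) time rather than the Ω(2^n) of brute force.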

Example: Computation of f1[1] and f2[1] Using the instance with e1 = 2, e2 = 4, a1,1 = 7, a2,1 = 8: f1[1] = e1 + a1,1 = 2 + 7 = 9; f2[1] = e2 + a2,1 = 4 + 8 = 12.

Example: Computation of f1[2] Given f1[1] = 9 and f2[1] = 12: f1[j] = min (f1[j-1] + a1,j, f2[j-1] + t2,j-1 + a1,j), so for j = 2, f1[2] = min (f1[1] + a1,2, f2[1] + t2,1 + a1,2) = min (9 + 9, 12 + 2 + 9) = min (18, 23) = 18, and l1[2] = 1.

Example: Computation of f2[2] Given f1[1] = 9 and f2[1] = 12: f2[j] = min (f2[j-1] + a2,j, f1[j-1] + t1,j-1 + a2,j), so for j = 2, f2[2] = min (f2[1] + a2,2, f1[1] + t1,1 + a2,2) = min (12 + 5, 9 + 2 + 5) = min (17, 16) = 16, and l2[2] = 1.

Example: Computation of f1[3] Given f1[2] = 18 and f2[2] = 16: f1[j] = min (f1[j-1] + a1,j, f2[j-1] + t2,j-1 + a1,j), so for j = 3, f1[3] = min (f1[2] + a1,3, f2[2] + t2,2 + a1,3) = min (18 + 3, 16 + 1 + 3) = min (21, 20) = 20, and l1[3] = 2.

Example: Computation of f2[3] Given f1[2] = 18 and f2[2] = 16: f2[j] = min (f2[j-1] + a2,j, f1[j-1] + t1,j-1 + a2,j), so for j = 3, f2[3] = min (f2[2] + a2,3, f1[2] + t1,2 + a2,3) = min (16 + 6, 18 + 3 + 6) = min (22, 27) = 22, and l2[3] = 2.

Example: Computation of f1[4] Given f1[3] = 20 and f2[3] = 22: f1[j] = min (f1[j-1] + a1,j, f2[j-1] + t2,j-1 + a1,j), so for j = 4, f1[4] = min (f1[3] + a1,4, f2[3] + t2,3 + a1,4) = min (20 + 4, 22 + 1 + 4) = min (24, 27) = 24, and l1[4] = 1.

Example: Computation of f2[4] Given f1[3] = 20 and f2[3] = 22: f2[j] = min (f2[j-1] + a2,j, f1[j-1] + t1,j-1 + a2,j), so for j = 4, f2[4] = min (f2[3] + a2,4, f1[3] + t1,3 + a2,4) = min (22 + 4, 20 + 1 + 4) = min (26, 25) = 25, and l2[4] = 1.

Example: Computation of f1[5] Given f1[4] = 24 and f2[4] = 25: f1[j] = min (f1[j-1] + a1,j, f2[j-1] + t2,j-1 + a1,j), so for j = 5, f1[5] = min (f1[4] + a1,5, f2[4] + t2,4 + a1,5) = min (24 + 8, 25 + 2 + 8) = min (32, 35) = 32, and l1[5] = 1.

Example: Computation of f2[5] Given f1[4] = 24 and f2[4] = 25: f2[j] = min (f2[j-1] + a2,j, f1[j-1] + t1,j-1 + a2,j), so for j = 5, f2[5] = min (f2[4] + a2,5, f1[4] + t1,4 + a2,5) = min (25 + 5, 24 + 3 + 5) = min (30, 32) = 30, and l2[5] = 2.

Example: Computation of f1[6] Given f1[5] = 32 and f2[5] = 30: f1[j] = min (f1[j-1] + a1,j, f2[j-1] + t2,j-1 + a1,j), so for j = 6, f1[6] = min (f1[5] + a1,6, f2[5] + t2,5 + a1,6) = min (32 + 4, 30 + 1 + 4) = min (36, 35) = 35, and l1[6] = 2.

Example: Computation of f2[6] Given f1[5] = 32 and f2[5] = 30: f2[j] = min (f2[j-1] + a2,j, f1[j-1] + t1,j-1 + a2,j), so for j = 6, f2[6] = min (f2[5] + a2,6, f1[5] + t1,5 + a2,6) = min (30 + 7, 32 + 4 + 7) = min (37, 43) = 37, and l2[6] = 2.

Example: Computation of f* Given f1[6] = 35 and f2[6] = 37: f* = min (f1[6] + x1, f2[6] + x2) = min (35 + 3, 37 + 2) = min (38, 39) = 38, and l* = 1.

Entire Solution Set: Assembly-Line Scheduling

j:      1   2   3   4   5   6
f1[j]:  9  18  20  24  32  35
f2[j]: 12  16  22  25  30  37
l1[j]:  -   1   2   1   1   2
l2[j]:  -   1   2   1   2   2

f* = 38, l* = 1

Fastest Way: Assembly-Line Scheduling Let li[j] be the line number, 1 or 2, whose station j-1 is used in the solution; e.g. l1[5] = 2 means that the fastest way to reach station 5 of line 1 passes through station 4 of line 2. Let l* be the line whose station n is used in the solution; e.g. l* = 2 would mean that the fastest way exits from station n of line 2. Here: l* = 1 => station S1,6; l1[6] = 2 => station S2,5; l2[5] = 2 => station S2,4; l2[4] = 1 => station S1,3; l1[3] = 2 => station S2,2; l2[2] = 1 => station S1,1.

Step 3: Optimal Solution Value What we are doing is: starting at the bottom level, continuously filling in the tables (f1[], f2[] and l1[], l2[]), and looking up the tables to compute new results for the next higher level. A naive recursion would visit most of the subproblems more than once; the tables are the clever method that handles them. Exercise: what is the complexity of this algorithm?

Step 3: Optimal Solution Value Compute the initial values of f1 and f2. Compute the values of f1[j] and l1[j]. Compute the values of f2[j] and l2[j]. This takes O(n) time. Finally, compute the value of the fastest time through the entire factory.


Step 4: Construct an optimal solution Construct an optimal solution from the computed information. Sample:

PRINT-STATIONS()
1  i ← l*
2  print "line " i ", station " n
3  for j ← n downto 2
4      do i ← li[j]
5         print "line " i ", station " j-1

Output for the example:
line 1, station 6
line 2, station 5
line 2, station 4
line 1, station 3
line 2, station 2
line 1, station 1
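The same reconstruction as a Python sketch (mine, not from the slides; 0-based lines 0/1 correspond to lines 1/2, and the l table and l* are hard-coded from the worked example). Instead of printing in reverse like PRINT-STATIONS, the hypothetical helper print_stations collects the (line, station) pairs and returns them in forward order:

```python
# l[i][j] (0-based j) = line used at 1-based station j on the fastest
# way through 1-based station j+1 of line i+1; values from the example.
l = [[None, 0, 1, 0, 0, 1],   # slides: l1[2..6] = 1, 2, 1, 1, 2
     [None, 0, 1, 0, 1, 1]]   # slides: l2[2..6] = 1, 2, 1, 2, 2
l_star = 0                    # l* = 1: the fastest way exits from line 1
n = 6

def print_stations(l, l_star, n):
    """Trace the l table backwards; return 1-based (line, station) pairs."""
    i = l_star
    path = [(i + 1, n)]               # last station on the winning line
    for j in range(n - 1, 0, -1):
        i = l[i][j]                   # line used at the previous station
        path.append((i + 1, j))
    return list(reversed(path))

# print_stations(l, l_star, n) ->
# [(1, 1), (2, 2), (1, 3), (2, 4), (2, 5), (1, 6)]
```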

Assembly-line Scheduling What we have done is indeed dynamic programming. Now it is time to test your memory: Step 1: Characterize the structure of an optimal solution (e.g. study the structure of the fastest way through the factory). Step 2: Recursively define the value of an optimal solution. Step 3: Compute the value of an optimal solution in a bottom-up fashion. Step 4: Construct an optimal solution from computed information.