Analysis of Algorithms CS 477/677


Analysis of Algorithms CS 477/677 Instructor: Monica Nicolescu Lecture 16

Mid-Term Results – CS 477
Max = 104, Min = 35, Avg. = 80.25
CS 477/677 - Lecture 16

Mid-Term Results – CS 677
Max = 114, Min = 61, Avg. = 99

Dynamic Programming
- Used for optimization problems: find a solution with the optimal value (minimum or maximum)
- A set of choices must be made to get an optimal solution
- There may be many solutions that achieve the optimal value: we want to find one of them

Assembly Line Scheduling
- Automobile factory with two assembly lines
- Each line has n stations: S1,1, . . . , S1,n and S2,1, . . . , S2,n
- Corresponding stations S1,j and S2,j perform the same function but can take different amounts of time a1,j and a2,j
- Times to enter the lines are e1 and e2; times to exit are x1 and x2

Assembly Line
After going through a station, the car can either:
- stay on the same line at no cost, or
- transfer to the other line: the cost after station Si,j is ti,j, for i = 1, 2 and j = 1, . . . , n-1

Assembly Line Scheduling
Problem: which stations should be chosen from line 1 and which from line 2 in order to minimize the total time through the factory for one car?

One Solution: Brute Force
- Enumerate all possibilities of selecting stations
- Compute how long each takes and choose the best one
- A selection can be encoded as a bit string b1 b2 … bn, where bj = 1 if line 1 is chosen at station j and bj = 0 if line 2 is chosen
- There are 2^n possible ways to choose stations: infeasible when n is large
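The brute-force approach can be sketched directly by enumerating all 2^n bit strings (a minimal sketch; the input encoding — 0-indexed nested tuples for a and t, pairs for e and x — is an assumption, not the slide's notation):

```python
from itertools import product

def brute_force_fastest(a, t, e, x):
    """Try all 2^n assignments of stations to lines (lines indexed 0/1).

    a[i][j]: processing time at station j on line i
    t[i][j]: transfer cost after station j when leaving line i
    e[i], x[i]: entry / exit times for line i
    """
    n = len(a[0])
    best = float("inf")
    for choice in product((0, 1), repeat=n):  # choice[j] = line used at station j
        time = e[choice[0]]
        for j in range(n):
            time += a[choice[j]][j]
            if j + 1 < n and choice[j + 1] != choice[j]:
                time += t[choice[j]][j]  # switching lines after station j costs t
        time += x[choice[-1]]
        best = min(best, time)
    return best

# Hypothetical 3-station instance (illustrative values only)
print(brute_force_fastest(((7, 9, 3), (8, 5, 6)),
                          ((2, 3), (2, 1)),
                          (2, 4), (3, 2)))  # 23
```

Even at this tiny size the loop examines 2^3 = 8 candidate paths; the count doubles with every added station, which is exactly why the slide calls the approach infeasible for large n.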

1. Structure of the Optimal Solution
How do we compute the minimum time of going through a station?

1. Structure of the Optimal Solution
Let's consider all possible ways to get from the starting point through station S1,j. We have two choices of how to get to S1,j:
- through S1,j-1, then directly to S1,j
- through S2,j-1, then transfer over to S1,j
[Diagram: S1,j-1 and S2,j-1 lead into S1,j, with costs a1,j-1, a2,j-1, t2,j-1, a1,j]

1. Structure of the Optimal Solution
- Suppose that the fastest way through S1,j is through S1,j-1. Then we must have taken the fastest way from the entry through S1,j-1: if there were a faster way through S1,j-1, we would use it instead
- Similarly for S2,j-1

Optimal Substructure
- Generalization: an optimal solution to the problem "find the fastest way through S1,j" contains within it an optimal solution to the subproblems "find the fastest way through S1,j-1 or S2,j-1"
- This is referred to as the optimal substructure property
- We use this property to construct an optimal solution to a problem from optimal solutions to subproblems

2. A Recursive Solution
- Define the value of an optimal solution in terms of the optimal solutions to subproblems
- Assembly line subproblems: finding the fastest way through station j on each line i (i = 1, 2; j = 1, 2, …, n)

2. A Recursive Solution
- f* = the fastest time to get through the entire factory
- fi[j] = the fastest time to get from the starting point through station Si,j
- f* = min(f1[n] + x1, f2[n] + x2)

2. A Recursive Solution
Base case, j = 1 (getting through station 1):
- f1[1] = e1 + a1,1
- f2[1] = e2 + a2,1

2. A Recursive Solution
Compute fi[j] for j = 2, 3, …, n and i = 1, 2. The fastest way through S1,j is either:
- the way through S1,j-1, then directly through S1,j: f1[j-1] + a1,j, or
- the way through S2,j-1, a transfer from line 2 to line 1, then through S1,j: f2[j-1] + t2,j-1 + a1,j
Therefore: f1[j] = min(f1[j-1] + a1,j, f2[j-1] + t2,j-1 + a1,j)

2. A Recursive Solution
f1[j] = e1 + a1,1                                              if j = 1
        min(f1[j-1] + a1,j, f2[j-1] + t2,j-1 + a1,j)           if j ≥ 2

f2[j] = e2 + a2,1                                              if j = 1
        min(f2[j-1] + a2,j, f1[j-1] + t1,j-1 + a2,j)           if j ≥ 2
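The recurrence can be transcribed directly as naive recursion (a sketch; the 1-indexed dict inputs a, t, e are hypothetical and chosen only to mirror the slide's subscripts):

```python
def f(i, j, a, t, e):
    """Fastest time from the entry through station Si,j, computed
    directly from the recurrence above (no memoization)."""
    if j == 1:
        return e[i] + a[i][1]
    other = 2 if i == 1 else 1  # the other line
    return min(f(i, j - 1, a, t, e) + a[i][j],
               f(other, j - 1, a, t, e) + t[other][j - 1] + a[i][j])

# Hypothetical 3-station instance (illustrative values, not the slide's)
a = {1: {1: 7, 2: 9, 3: 3}, 2: {1: 8, 2: 5, 3: 6}}  # a[i][j]
t = {1: {1: 2, 2: 3}, 2: {1: 2, 2: 1}}              # t[i][j], j = 1..n-1
e = {1: 2, 2: 4}
print(f(1, 3, a, t, e))  # 20
print(f(2, 3, a, t, e))  # 22
```

Note that this direct transcription recomputes the same fi[j] values over and over; the next slides show why that is exponential and how a bottom-up table avoids it.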

3. Computing the Optimal Value
f* = min(f1[n] + x1, f2[n] + x2)
f1[j] = min(f1[j-1] + a1,j, f2[j-1] + t2,j-1 + a1,j)
Solving top-down by plain recursion would result in exponential running time: the same subproblems are recomputed repeatedly (each fi[n-1] is needed 2 times, each fi[n-2] 4 times, and so on, doubling at every level).

3. Computing the Optimal Value
- For j ≥ 2, each value fi[j] depends only on the values of f1[j-1] and f2[j-1]
- Compute the values of fi[j] in increasing order of j
- Bottom-up approach: first find optimal solutions to subproblems, then find an optimal solution to the problem from the subproblems

4. Construct the Optimal Solution
We need to record which line has been used at each station:
- li[j] – the line number (1 or 2) whose station j - 1 has been used to get in the fastest time through Si,j, for j = 2, 3, …, n
- l* – the line number (1 or 2) whose station n has been used to get in the fastest time through the exit point

FASTEST-WAY(a, t, e, x, n)
    // Compute initial values of f1 and f2
    f1[1] ← e1 + a1,1
    f2[1] ← e2 + a2,1
    for j ← 2 to n
        do // Compute the values of f1[j] and l1[j]
           if f1[j - 1] + a1,j ≤ f2[j - 1] + t2,j-1 + a1,j
              then f1[j] ← f1[j - 1] + a1,j
                   l1[j] ← 1
              else f1[j] ← f2[j - 1] + t2,j-1 + a1,j
                   l1[j] ← 2
           // Compute the values of f2[j] and l2[j]
           if f2[j - 1] + a2,j ≤ f1[j - 1] + t1,j-1 + a2,j
              then f2[j] ← f2[j - 1] + a2,j
                   l2[j] ← 2
              else f2[j] ← f1[j - 1] + t1,j-1 + a2,j
                   l2[j] ← 1

FASTEST-WAY(a, t, e, x, n) (cont.)
    // Compute the fastest time through the entire factory
    if f1[n] + x1 ≤ f2[n] + x2
       then f* ← f1[n] + x1
            l* ← 1
       else f* ← f2[n] + x2
            l* ← 2
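Put together, FASTEST-WAY translates almost line for line into Python (a sketch; the 1-indexed dict inputs mirroring the pseudocode's subscripts are an assumed encoding):

```python
def fastest_way(a, t, e, x, n):
    """Bottom-up translation of FASTEST-WAY. All containers are
    1-indexed dicts so the code mirrors the pseudocode's subscripts."""
    f1 = {1: e[1] + a[1][1]}          # initial values of f1 and f2
    f2 = {1: e[2] + a[2][1]}
    l1, l2 = {}, {}
    for j in range(2, n + 1):
        # fastest way to S1,j: stay on line 1, or transfer from line 2
        if f1[j - 1] + a[1][j] <= f2[j - 1] + t[2][j - 1] + a[1][j]:
            f1[j], l1[j] = f1[j - 1] + a[1][j], 1
        else:
            f1[j], l1[j] = f2[j - 1] + t[2][j - 1] + a[1][j], 2
        # fastest way to S2,j: stay on line 2, or transfer from line 1
        if f2[j - 1] + a[2][j] <= f1[j - 1] + t[1][j - 1] + a[2][j]:
            f2[j], l2[j] = f2[j - 1] + a[2][j], 2
        else:
            f2[j], l2[j] = f1[j - 1] + t[1][j - 1] + a[2][j], 1
    # fastest time through the entire factory
    if f1[n] + x[1] <= f2[n] + x[2]:
        return f1[n] + x[1], 1, l1, l2
    return f2[n] + x[2], 2, l1, l2
```

The loop runs n - 1 times with constant work per iteration, so the whole algorithm is Θ(n), versus 2^n for brute force.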

Example
f1[j] = e1 + a1,1 if j = 1; min(f1[j-1] + a1,j, f2[j-1] + t2,j-1 + a1,j) if j ≥ 2

j        1     2      3      4      5
f1[j]    9     18[1]  20[2]  24[1]  32[1]
f2[j]    12    16[1]  22[2]  25[1]  30[2]
f* = 35[1]
(bracketed values are li[j]: the line whose station j - 1 was used)
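The table can be reproduced by running the recurrence bottom-up. The specific values of a, t, e, x below are assumptions reconstructed to be consistent with the table; the slide itself does not list them:

```python
# Hypothetical inputs reconstructed to match the example table.
e = (2, 4)
x = (3, 6)
a = ((7, 9, 3, 4, 8), (8, 5, 6, 4, 5))  # a[i][j]: time at station j+1 on line i+1
t = ((2, 3, 1, 3), (2, 1, 2, 2))        # t[i][j]: transfer cost after station j+1

f1, f2 = [e[0] + a[0][0]], [e[1] + a[1][0]]
for j in range(1, 5):
    f1.append(min(f1[j - 1] + a[0][j], f2[j - 1] + t[1][j - 1] + a[0][j]))
    f2.append(min(f2[j - 1] + a[1][j], f1[j - 1] + t[0][j - 1] + a[1][j]))
fstar = min(f1[4] + x[0], f2[4] + x[1])

print(f1)     # [9, 18, 20, 24, 32]
print(f2)     # [12, 16, 22, 25, 30]
print(fstar)  # 35
```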

4. Construct an Optimal Solution
Alg.: PRINT-STATIONS(l, n)
    i ← l*
    print "line " i ", station " n
    for j ← n downto 2
        do i ← li[j]
           print "line " i ", station " j - 1

Output for the example:
line 1, station 5
line 1, station 4
line 1, station 3
line 2, station 2
line 1, station 1

j        1     2      3      4      5
f1[j]    9     18[1]  20[2]  24[1]  32[1]    l* = 1
f2[j]    12    16[1]  22[2]  25[1]  30[2]
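PRINT-STATIONS can be sketched in Python as follows; the l1/l2 values used below are taken from the bracketed entries in the example table (the dict encoding is an assumption):

```python
def print_stations(l1, l2, lstar, n):
    """Walk the l tables backwards to recover the chosen stations.
    Returns the lines in the order the slide prints them (last station first)."""
    i = lstar
    out = [f"line {i}, station {n}"]
    for j in range(n, 1, -1):
        i = (l1 if i == 1 else l2)[j]  # i <- l_i[j]: line used at station j - 1
        out.append(f"line {i}, station {j - 1}")
    return out

# l values from the bracketed table entries; l* = 1
l1 = {2: 1, 3: 2, 4: 1, 5: 1}
l2 = {2: 1, 3: 2, 4: 1, 5: 2}
for line in print_stations(l1, l2, 1, 5):
    print(line)
```

Running this reproduces the five output lines on the slide, from "line 1, station 5" down to "line 1, station 1".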

Dynamic Programming Algorithm
1. Characterize the structure of an optimal solution: the fastest time through a station depends on the fastest times through previous stations
2. Recursively define the value of an optimal solution: f1[j] = min(f1[j-1] + a1,j, f2[j-1] + t2,j-1 + a1,j)
3. Compute the value of an optimal solution in a bottom-up fashion: fill in the fastest-time table in increasing order of j (station number)
4. Construct an optimal solution from computed information: use an additional table to help reconstruct the optimal solution

Readings
Chapters 14, 15