Dynamic Programming Examples By Alexandra Stefan and Vassilis Athitsos.



Generate Fibonacci numbers – 3 solutions: inefficient recursion, memoization (top-down dynamic programming (DP)), bottom-up DP. – Not an optimization problem, but it has overlapping subproblems => DP eliminates recomputing the same subproblem over and over again.
Weighted interval scheduling
Matrix multiplication

Fibonacci Numbers Fibonacci(0) = 0 Fibonacci(1) = 1 If N >= 2: Fibonacci(N) = Fibonacci(N-1) + Fibonacci(N-2) How can we write a function that computes Fibonacci numbers?

Fibonacci Numbers Fibonacci(0) = 0 Fibonacci(1) = 1 If N >= 2: Fibonacci(N) = Fibonacci(N-1) + Fibonacci(N-2) Consider this function: what is its running time?

int Fib(int i) {
    if (i < 1) return 0;
    if (i == 1) return 1;
    return Fib(i-1) + Fib(i-2);
}

Fibonacci Numbers Fibonacci(0) = 0 Fibonacci(1) = 1 If N >= 2: Fibonacci(N) = Fibonacci(N-1) + Fibonacci(N-2) Consider this function: what is its running time? – g(N) = g(N-1) + g(N-2) + constant – g(N) = O(Fibonacci(N)) = O(1.618^N) – We cannot even compute Fibonacci(40) in a reasonable amount of time. – See how many times this function is executed.

int Fib(int i) {
    if (i < 1) return 0;
    if (i == 1) return 1;
    return Fib(i-1) + Fib(i-2);
}

Fibonacci Numbers Fibonacci(0) = 0 Fibonacci(1) = 1 If N >= 2: Fibonacci(N) = Fibonacci(N-1) + Fibonacci(N-2) Alternative: remember values we have already computed.

exponential version:
int Fib(int i) {
    if (i < 1) return 0;
    if (i == 1) return 1;
    return Fib(i-1) + Fib(i-2);
}

linear version:
int Fib_iter(int i) {
    if (i < 1) return 0;
    int *F = (int *)malloc(sizeof(int) * (i + 1));
    F[0] = 0;
    F[1] = 1;
    int j;
    for (j = 2; j <= i; j++)
        F[j] = F[j-1] + F[j-2];
    int result = F[i];
    free(F);   /* avoid leaking the buffer */
    return result;
}

Fibonacci Numbers Fibonacci(0) = 0 Fibonacci(1) = 1 If N >= 2: Fibonacci(N) = Fibonacci(N-1) + Fibonacci(N-2) Alternative: remember values we have already computed.

exponential version:
int Fib(int i) {
    if (i < 1) return 0;
    if (i == 1) return 1;
    return Fib(i-1) + Fib(i-2);
}

memoized version (helper function only):
int Fib_aux(int i, int *sol) {
    if (sol[i] != -1) return sol[i];
    int res;
    if (i < 1) res = 0;
    else if (i == 1) res = 1;
    else res = Fib_aux(i-1, sol) + Fib_aux(i-2, sol);
    sol[i] = res;
    return res;
}
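The memoized helper assumes a caller that allocates sol and initializes every entry to -1 (a sentinel meaning "not yet computed"). The slides do not show that caller; here is a minimal sketch of one (the wrapper name Fib_memo is an assumption, and the helper is repeated so the sketch is self-contained):

```c
#include <stdlib.h>

/* Memoized helper, as on the slide: sol[i] caches Fibonacci(i); -1 = unknown. */
int Fib_aux(int i, int *sol) {
    if (sol[i] != -1) return sol[i];
    int res;
    if (i < 1) res = 0;
    else if (i == 1) res = 1;
    else res = Fib_aux(i - 1, sol) + Fib_aux(i - 2, sol);
    sol[i] = res;
    return res;
}

/* Hypothetical wrapper: allocates the memo table, fills it with -1, cleans up. */
int Fib_memo(int i) {
    if (i < 1) return 0;
    int *sol = malloc(sizeof(int) * (i + 1));
    for (int j = 0; j <= i; j++) sol[j] = -1;
    int result = Fib_aux(i, sol);
    free(sol);
    return result;
}
```

Each Fibonacci value is now computed only once, so Fib_memo(40) runs in linear time instead of the exponential time of the plain recursion.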

Weighted Interval Scheduling (WIS)

Suppose you are a plumber. You are offered N jobs. Each job has the following attributes: – start: the start time of the job. – finish: the finish time of the job. – value: the amount of money you get paid for that job. What is the best set of jobs you can take up? – You want to make the most money possible. Why can't you just take up all the jobs?

Weighted Interval Scheduling (WIS) Suppose you are a plumber. You are offered N jobs. Each job has the following attributes: – start: the start time of the job. – finish: the finish time of the job. – value: the amount of money you get paid for that job. What is the best set of jobs you can take up? – You want to make the most money possible. Why can't you just take up all the jobs? Because you cannot take up two jobs that overlap.

Example WIS Input We assume, for simplicity, that jobs have been sorted in ascending order of finish time. – We have not yet covered good sorting methods that we could use. If we take job A, we cannot take any other job that starts BEFORE job A finishes. Can we do both job 0 and job 1? Can we do both job 0 and job 2?
[Table: job ID, start, finish, value]

Example WIS Input We assume, for simplicity, that jobs have been sorted in ascending order of finish time. – We have not yet covered good sorting methods that we could use. If we take job A, we cannot take any other job that starts BEFORE job A finishes. Can we do both job 0 and job 1? – Yes. Can we do both job 0 and job 2? – No (they overlap).
[Table: job ID, start, finish, value]

Example WIS Input A possible set of jobs we could take: 0, 1, 5, 10, 13. What is the value? – Their total value = 25. Can you propose any algorithm (even horribly slow) for finding the best set of jobs?
[Table: job ID, start, finish, value]

Example WIS Input Simplest algorithm for finding the best subset of jobs: – Consider all possible subsets of jobs. – Ignore subsets with overlapping jobs. – Find the subset with the best total value. Time complexity? If we have N jobs, what is the total number of subsets of jobs?
[Table: job ID, start, finish, value]

Example WIS Input Simplest algorithm for finding the best subset of jobs: – Consider all possible subsets of jobs. – Ignore subsets with overlapping jobs. – Find the subset with the best total value. Time complexity? If we have N jobs, what is the total number of subsets of jobs? – Total number of subsets: 2^N. – Exponential time complexity.
[Table: job ID, start, finish, value]
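The brute-force idea above can be sketched with a bitmask over subsets. Since the slide's job table did not survive the transcript, the Job struct and the data used in the test are hypothetical:

```c
/* Brute-force WIS: examine all 2^N subsets, skip any subset containing
   two overlapping jobs, and keep the best total value. */
typedef struct { int start, finish, value; } Job;

static int overlaps(Job a, Job b) {
    return a.start < b.finish && b.start < a.finish;
}

int wis_brute_force(const Job *jobs, int n) {
    int best = 0;
    for (int mask = 0; mask < (1 << n); mask++) {
        int value = 0, ok = 1;
        for (int i = 0; i < n && ok; i++) {
            if (!(mask & (1 << i))) continue;
            value += jobs[i].value;
            for (int j = i + 1; j < n && ok; j++)
                if ((mask & (1 << j)) && overlaps(jobs[i], jobs[j]))
                    ok = 0;
        }
        if (ok && value > best) best = value;
    }
    return best;
}
```

Trying all subsets costs Θ(2^N) times a polynomial validity check, which is exactly the exponential blowup the slide warns about.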

Solving WIS With Dynamic Programming To use dynamic programming, we must relate the solution to our problem to solutions to smaller problems. For example, consider job 14. What kind of problems that exclude job 14 would be relevant to solving the original problem, which includes job 14?
[Table: job ID, start, finish, value]

Solving WIS With Dynamic Programming We can easily solve the problem for jobs 0-14, given solutions to these two smaller problems: Problem 1: best set using jobs 0-13. – When job 14 is available, the best set using jobs 0-13 is still an option to us, although not necessarily the best one. Problem 2: best set using jobs 0-10. – Why is this problem relevant?
[Table: job ID, start, finish, value]

Solving WIS With Dynamic Programming We can easily solve the problem for jobs 0-14, given solutions to these two smaller problems: Problem 1: best set using jobs 0-13. – When job 14 is available, the best set using jobs 0-13 is still an option to us, although not necessarily the best one. Problem 2: best set using jobs 0-10. – Why is this problem relevant? – Because job 10 is the last job before job 14 that does NOT overlap with job 14. – Thus, job 14 can be ADDED to the solution for jobs 0-10.
[Table: job ID, start, finish, value]

Solving WIS With Dynamic Programming We can easily solve the problem for jobs 0-14, given solutions to these two smaller problems: Problem 1: best set using jobs 0-13. Problem 2: best set using jobs 0-10. The solution for jobs 0-14 is simply the best of these two options: – Best set using jobs 0-13. – Best set using jobs 0-10, plus job 14. How can we write this solution in pseudocode?
[Table: job ID, start, finish, value]

Solving WIS With Dynamic Programming Step 1: to make our life easier, we will insert a zero job at the beginning. The zero job: – Starts at time zero. – Finishes at time zero. – Has zero value. – Causes job IDs to increment by 1. Step 2: we need to preprocess jobs, so that for each job i we compute: – last[i] = the index of the last job preceding job i that does NOT overlap with job i.
[Table: job ID, start, finish, value]

Solving WIS With Dynamic Programming Step 1: to make our life easier, we will insert a zero job at the beginning. The zero job: – Starts at time zero. – Finishes at time zero. – Has zero value. – Causes job IDs to increment by 1. Step 2: we need to preprocess jobs, so that for each job i we compute: – last[i] = the index of the last job preceding job i that does NOT overlap with job i.
[Table: job ID, start, finish, value, last]

Solving WIS With Dynamic Programming

float wis(jobs, last)
N = number of jobs. Initialize solutions array.
solutions[0] = 0.
For (i = 1 to N)
– S1 = solutions[i-1].
– L = last[i].
– SL = solutions[L].
– S2 = SL + jobs[i].value.
– solutions[i] = max(S1, S2).
Return solutions[N];
[Table: job ID, start, finish, value, last]
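The wis() pseudocode above can be sketched in C as follows. The job data in the test is hypothetical (the slide's table is not in the transcript); jobs[0] is the zero job, and last[] is computed with a simple linear scan, which works because jobs are sorted by finish time:

```c
typedef struct { int start, finish, value; } Job;

/* DP over jobs 0..n (jobs[0] is the zero job), sorted by finish time.
   Returns the best achievable total value. */
int wis_value(const Job *jobs, int n) {
    int last[n + 1], solutions[n + 1];
    /* last[i] = index of the last job before i that does NOT overlap job i. */
    for (int i = 0; i <= n; i++) {
        last[i] = 0;
        for (int j = i - 1; j >= 0; j--)
            if (jobs[j].finish <= jobs[i].start) { last[i] = j; break; }
    }
    solutions[0] = 0;
    for (int i = 1; i <= n; i++) {
        int s1 = solutions[i - 1];                    /* best without job i */
        int s2 = solutions[last[i]] + jobs[i].value;  /* best with job i */
        solutions[i] = s1 > s2 ? s1 : s2;
    }
    return solutions[n];
}
```

The DP itself is O(N); the quadratic part here is the naive last[] preprocessing, which could be sped up with binary search over finish times.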

Backtracking As in our solution to the knapsack problem, the pseudocode we just saw returns a number: – The best total value we can achieve. In addition to the best value, we also want to know the set of jobs that achieves that value. This is a general issue in dynamic programming. How can we address it?

Backtracking As in our solution to the knapsack problem, the pseudocode we just saw returns a number: – The best total value we can achieve. In addition to the best value, we also want to know the set of jobs that achieves that value. This is a general issue in dynamic programming. There is a general solution, called backtracking. The key idea is: – In DP the final solution is always built from smaller solutions. – At each smaller problem, we have to choose which (even smaller) solutions to use for solving that problem. – We must record, for each smaller problem, the choice we made. – At the end, we backtrack and recover the individual decisions that led to the best solution.

Backtracking for the WIS Solution First of all, what should the function return?

Backtracking for the WIS Solution First of all, what should the function return? – The best value we can achieve. – The set of intervals that achieves that value. How can we make the function return both these things? The solution that will be preferred throughout the course: – Define a Result structure containing as many member variables as we need to store in the result. – Make the function return an object of that structure.

Backtracking for the WIS Solution First of all, what should the function return? – The best value we can achieve. – The set of intervals that achieves that value.

struct WIS_result {
    float value;
    list set;
};

struct WIS_result wis(struct Intervals intervals)

Backtracking Solution

Result wis(jobs, last)
N = number of jobs.
solutions[0] = 0.
For (i = 1 to N)
– L = last[i].
– SL = solutions[L].
– S1 = solutions[i-1].
– S2 = SL + jobs[i].value.
– solutions[i] = max(S1, S2).

How can we keep track of the decisions we make?
[Table: job ID, start, finish, value, last]

Backtracking Solution

Result wis(jobs, last)
N = number of jobs.
solutions[0] = 0.
For (i = 1 to N)
– L = last[i].
– SL = solutions[L].
– S1 = solutions[i-1].
– S2 = SL + jobs[i].value.
– solutions[i] = max(S1, S2).

How can we keep track of the decisions we make? Remember the last job of each solution.
[Table: job ID, start, finish, value, last]

Backtracking Solution

Result wis(jobs, last)
N = number of jobs.
solutions[0] = 0. used[0] = 0.
For (i = 1 to N)
– L = last[i].
– SL = solutions[L].
– S1 = solutions[i-1].
– S2 = SL + jobs[i].value.
– solutions[i] = max(S1, S2).
– If S2 > S1 then used[i] = i.
– Else used[i] = used[i-1].
[Table: job ID, start, finish, value, last]

Backtracking Solution

// backtracking part
list set = new List.
counter = used[N].
while (counter != 0)
– job = jobs[counter]. // job info
– insertAtBeginning(set, job).
– counter = ???

WIS_result result.
result.value = solutions[N].
result.set = set.
return result.
[Table: job ID, start, finish, value, last]

Backtracking Solution

// backtracking part
list set = new List.
counter = used[N].
while (counter != 0)
– job = jobs[counter]. // job info
– insertAtBeginning(set, job).
– counter = used[last[counter]].

WIS_result result.
result.value = solutions[N].
result.set = set.
return result.
[Table: job ID, start, finish, value, last]
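Putting the used[] bookkeeping and the backtracking loop together, a C sketch might look like the following (again with hypothetical job data, since the slide's table is missing; jobs[0] is the zero job):

```c
typedef struct { int start, finish, value; } Job;

/* Runs the DP with used[] bookkeeping, then backtracks. Writes the chosen
   job indices into chosen[] in ascending order and returns how many there are. */
int wis_backtrack(const Job *jobs, int n, int *chosen) {
    int last[n + 1], solutions[n + 1], used[n + 1];
    for (int i = 0; i <= n; i++) {
        last[i] = 0;
        for (int j = i - 1; j >= 0; j--)
            if (jobs[j].finish <= jobs[i].start) { last[i] = j; break; }
    }
    solutions[0] = 0; used[0] = 0;
    for (int i = 1; i <= n; i++) {
        int s1 = solutions[i - 1];
        int s2 = solutions[last[i]] + jobs[i].value;
        if (s2 > s1) { solutions[i] = s2; used[i] = i; }
        else         { solutions[i] = s1; used[i] = used[i - 1]; }
    }
    /* Backtracking: take the last used job, then jump past everything
       that overlaps it via last[]. */
    int count = 0, counter = used[n];
    while (counter != 0) {
        chosen[count++] = counter;
        counter = used[last[counter]];
    }
    /* Jobs were collected last-to-first; reverse into ascending order. */
    for (int a = 0, b = count - 1; a < b; a++, b--) {
        int t = chosen[a]; chosen[a] = chosen[b]; chosen[b] = t;
    }
    return count;
}
```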

Matrix Multiplication

Matrix Multiplication: Review Suppose that A1 is of size S1 x S2, and A2 is of size S2 x S3. What is the time complexity of computing A1 * A2? What is the size of the result?

Matrix Multiplication: Review Suppose that A1 is of size S1 x S2, and A2 is of size S2 x S3. What is the time complexity of computing A1 * A2? What is the size of the result? S1 x S3. Each number in the result is computed in O(S2) time by: – multiplying S2 pairs of numbers. – adding S2 numbers. Overall time complexity: O(S1 * S2 * S3).
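As a concrete illustration of the O(S1 * S2 * S3) count, here is a straightforward triple-loop multiply over flat row-major arrays (a generic sketch, not code from the slides):

```c
/* C (S1 x S3) = A (S1 x S2) * B (S2 x S3). The inner loop performs
   S2 multiply-adds per output entry, giving S1 * S2 * S3 in total. */
void mat_mul(const int *A, const int *B, int *C, int s1, int s2, int s3) {
    for (int i = 0; i < s1; i++)
        for (int k = 0; k < s3; k++) {
            int sum = 0;
            for (int j = 0; j < s2; j++)
                sum += A[i * s2 + j] * B[j * s3 + k];
            C[i * s3 + k] = sum;
        }
}
```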

Optimal Ordering for Matrix Multiplication Suppose that we need to do a sequence of matrix multiplications: – result = A1 * A2 * A3 * ... * AK. The number of columns of Ai must equal the number of rows of Ai+1. What is the time complexity for performing this sequence of multiplications?

Optimal Ordering for Matrix Multiplication Suppose that we need to do a sequence of matrix multiplications: – result = A1 * A2 * A3 * ... * AK. The number of columns of Ai must equal the number of rows of Ai+1. What is the time complexity for performing this sequence of multiplications? The answer is: it depends on the order in which we perform the multiplications.

An Example Suppose: – A1 is 17x2. – A2 is 2x35. – A3 is 35x4. (A1 * A2) * A3: A1 * (A2 * A3):

An Example Suppose: – A1 is 17x2. – A2 is 2x35. – A3 is 35x4. (A1 * A2) * A3: – 17*2*35 = 1190 multiplications and additions to compute A1 * A2. – 17*35*4 = 2380 multiplications and additions to multiply the result of (A1 * A2) with A3. – Total: 3570 multiplications and additions. A1 * (A2 * A3): – 2*35*4 = 280 multiplications and additions to compute A2 * A3. – 17*2*4 = 136 multiplications and additions to multiply A1 with the result of (A2 * A3). – Total: 416 multiplications and additions.
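The two totals follow directly from the cost formula rows * common-dimension * cols; the function below is just that arithmetic for the slide's dimensions (it is an illustration, not part of the slides):

```c
/* Scalar-operation counts for the two parenthesizations of A1 * A2 * A3,
   with A1: 17x2, A2: 2x35, A3: 35x4. */
void chain_costs(int *left, int *right) {
    *left  = 17 * 2 * 35 + 17 * 35 * 4;  /* (A1 * A2) * A3 */
    *right = 2 * 35 * 4  + 17 * 2 * 4;   /* A1 * (A2 * A3) */
}
```

The second ordering is almost nine times cheaper, even though both compute the same 17x4 result.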

Adaptation to Dynamic Programming Suppose that we need to do a sequence of matrix multiplications: – result = A1 * A2 * A3 * ... * AK. To figure out if and how we can use dynamic programming, we must address the standard two questions we always need to address for dynamic programming: 1. Can we define a set of smaller problems, such that the solutions to those problems make it easy to solve the original problem? 2. Can we arrange those smaller problems in a sequence of reasonable size, so that each problem in that sequence only depends on problems that come earlier in the sequence?

Defining Smaller Problems 1. Can we define a set of smaller problems, whose solutions make it easy to solve the original problem? – Original problem: optimal ordering for A1 * A2 * A3 * ... * AK. Yes! Suppose that, for every i between 1 and K-1, we know: – The best order (and best cost) for multiplying matrices A1, ..., Ai. – The best order (and best cost) for multiplying matrices Ai+1, ..., AK. Then, for every such i, we obtain a possible solution for our original problem: – Multiply matrices A1, ..., Ai in the best order. Let C1 be the cost of that. – Multiply matrices Ai+1, ..., AK in the best order. Let C2 be the cost of that. – Compute (A1 * ... * Ai) * (Ai+1 * ... * AK). Let C3 be the cost of that. C3 = rows of (A1 * ... * Ai) * cols of (A1 * ... * Ai) * cols of (Ai+1 * ... * AK) = rows of A1 * cols of Ai * cols of AK. – Total cost of this solution = C1 + C2 + C3.

Defining Smaller Problems 1. Can we define a set of smaller problems, whose solutions make it easy to solve the original problem? – Original problem: optimal ordering for A1 * A2 * A3 * ... * AK. Yes! Suppose that, for every i between 1 and K-1, we know: – The best order (and best cost) for multiplying matrices A1, ..., Ai. – The best order (and best cost) for multiplying matrices Ai+1, ..., AK. Then, for every such i, we obtain a possible solution. We just need to compute the cost of each of those solutions, and choose the smallest cost. Next question: 2. Can we arrange those smaller problems in a sequence of reasonable size, so that each problem in that sequence only depends on problems that come earlier in the sequence?

Defining Smaller Problems 2. Can we arrange those smaller problems in a sequence of reasonable size, so that each problem in that sequence only depends on problems that come earlier in the sequence? To compute the answer for A1 * A2 * A3 * ... * AK: For i = 1, ..., K-1, we had to consider solutions for: – A1, ..., Ai. – Ai+1, ..., AK. So, what is the set of all problems we must solve?

Defining Smaller Problems 2. Can we arrange those smaller problems in a sequence of reasonable size, so that each problem in that sequence only depends on problems that come earlier in the sequence? To compute the answer for A1 * A2 * A3 * ... * AK: For i = 1, ..., K-1, we had to consider solutions for: – A1, ..., Ai. – Ai+1, ..., AK. So, what is the set of all problems we must solve? For M = 1, ..., K: – For N = 1, ..., M: Compute the best ordering for AN * ... * AM. What is the number of problems we need to solve? Is the size reasonable? – We must solve Θ(K^2) problems. We consider this a reasonable number.

Defining Smaller Problems The set of all problems we must solve: For M = 1, ..., K: – For N = 1, ..., M: Compute the best ordering for AN * ... * AM. What is the order in which we must solve these problems?

Defining Smaller Problems The set of all problems we must solve, in the correct order: For M = 1, ..., K: – For N = M, ..., 1: Compute the best ordering for AN * ... * AM. N must go from M to 1, NOT the other way around. Why? Because, given M, the larger N is, the smaller the problem of computing the best ordering for AN * ... * AM is.

Solving These Problems For M = 1, ..., K: – For N = M, ..., 1: Compute the best ordering for AN * ... * AM. What are the base cases? N = M: – costs[N][M] = 0. N = M - 1: – costs[N][M] = rows(AN) * cols(AN) * cols(AM). Solution for the recursive case:

Solving These Problems For M = 1, ..., K: – For N = M, ..., 1: Compute the best ordering for AN * ... * AM. Solution for the recursive case:

minimum_cost = infinity
For R = N, ..., M-1:
– cost1 = costs[N][R]
– cost2 = costs[R+1][M]
– cost3 = rows(AN) * cols(AR) * cols(AM)
– cost = cost1 + cost2 + cost3
– if (cost < minimum_cost) minimum_cost = cost
costs[N][M] = minimum_cost
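The loop structure and the recursive case combine into a complete cost computation. In this sketch, dims has K+1 entries and matrix A_i (1-indexed) is dims[i-1] x dims[i], so rows(AN) = dims[N-1], cols(AR) = dims[R], and cols(AM) = dims[M]; "infinity" becomes INT_MAX:

```c
#include <limits.h>

/* Matrix-chain ordering, following the pseudocode above.
   Returns the minimum number of scalar multiplications for A1 * ... * AK. */
int matrix_chain_cost(const int *dims, int K) {
    int costs[K + 1][K + 1];
    for (int M = 1; M <= K; M++) {
        for (int N = M; N >= 1; N--) {      /* N goes from M down to 1 */
            if (N == M) { costs[N][M] = 0; continue; }
            int minimum_cost = INT_MAX;
            for (int R = N; R <= M - 1; R++) {
                int cost = costs[N][R] + costs[R + 1][M]
                         + dims[N - 1] * dims[R] * dims[M];
                if (cost < minimum_cost) minimum_cost = cost;
            }
            costs[N][M] = minimum_cost;
        }
    }
    return costs[1][K];
}
```

On the earlier example (A1: 17x2, A2: 2x35, A3: 35x4), this finds the 416-operation ordering. The N = M - 1 base case from the previous slide is covered by the loop, since it has a single split point R = N.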