Dynamic Programming (Spring 2008)

Outline and Reading
Matrix Chain-Product (§5.3.1)
The General Technique (§5.3.2)
0-1 Knapsack Problem (§5.3.3)

Matrix Chain-Products
Dynamic Programming is a general algorithm design paradigm. Rather than give the general structure, let us first give a motivating example: Matrix Chain-Products.
Review: matrix multiplication. C = A*B, where A is d × e and B is e × f. Computing C takes O(d·e·f) time, since each of the d·f entries of C is an inner product of length e.
[Figure: the d × e matrix A times the e × f matrix B gives the d × f matrix C; entry C[i,j] combines row i of A with column j of B.]
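As a quick illustration (a minimal sketch, not from the original slides), the triple loop behind the O(d·e·f) bound, in Python:

def mat_mult(A, B):
    # A is d x e, B is e x f; returns the d x f product C
    d, e, f = len(A), len(B), len(B[0])
    assert len(A[0]) == e, "inner dimensions must agree"
    C = [[0] * f for _ in range(d)]
    for i in range(d):          # d choices of row
        for j in range(f):      # f choices of column
            for k in range(e):  # one inner product of length e
                C[i][j] += A[i][k] * B[k][j]
    return C

The d·f entries each cost e multiplications, giving d·e·f scalar multiplications in total.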

Matrix Chain-Products
Compute A = A0*A1*…*An-1, where Ai is a di × di+1 matrix.
Problem: how do we parenthesize the chain?
Example:
B is 3 × 100
C is 100 × 5
D is 5 × 5
(B*C)*D takes 1500 + 75 = 1575 ops
B*(C*D) takes 1500 + 2500 = 4000 ops

An Enumeration Approach
Matrix Chain-Product algorithm: try all possible ways to parenthesize A = A0*A1*…*An-1, calculate the number of ops for each one, and pick the one that is best.
Running time: the number of parenthesizations equals the number of binary trees with n leaves, since each parenthesization corresponds to an expression tree with the matrices at the leaves and a multiplication at each internal node. This count is the Catalan number, which grows roughly as 4^n. This is exponential, so this is a terrible algorithm!
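To make the growth concrete, a small sketch (not from the slides) that prints the Catalan numbers alongside 4^n; catalan(n - 1) counts the parenthesizations of a chain of n matrices:

from math import comb

def catalan(m):
    # the m-th Catalan number: C(m) = binom(2m, m) / (m + 1)
    return comb(2 * m, m) // (m + 1)

for n in [5, 10, 15, 20]:
    print(n, catalan(n - 1), 4 ** n)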

A Greedy Approach
Idea #1: repeatedly select the product that uses (up) the most operations.
Counter-example:
A is 10 × 5
B is 5 × 10
C is 10 × 5
D is 5 × 10
Greedy idea #1 gives (A*B)*(C*D), which takes 500 + 1000 + 500 = 2000 ops.
A*((B*C)*D) takes 500 + 250 + 250 = 1000 ops.

Another Greedy Approach
Idea #2: repeatedly select the product that uses the fewest operations.
Counter-example:
A is 101 × 11
B is 11 × 9
C is 9 × 100
D is 100 × 99
Greedy idea #2 gives A*((B*C)*D), which takes 109989 + 9900 + 108900 = 228789 ops.
(A*B)*(C*D) takes 9999 + 89991 + 89100 = 189090 ops.
So the greedy approach does not give us the optimal value.

A “Recursive” Approach
Define subproblems: find the best parenthesization of Ai*Ai+1*…*Aj, and let Ni,j denote the number of operations done by this subproblem. The optimal solution for the whole problem is N0,n-1.
Subproblem optimality: the optimal solution can be defined in terms of optimal subproblems. There has to be a final multiplication (the root of the expression tree) in the optimal solution. Say the final multiply is at index i: (A0*…*Ai)*(Ai+1*…*An-1). Then the optimal solution N0,n-1 is the sum of two optimal subproblems, N0,i and Ni+1,n-1, plus the time for the last multiply. If the global optimum did not have these optimal subproblems, we could define an even better “optimal” solution, a contradiction.

A Characterizing Equation
The global optimum has to be defined in terms of optimal subproblems, depending on the position of the final multiply. Consider all possible places for that final multiply, and assume that Ai is a di × di+1 matrix. A characterizing equation for Ni,j is then:

Ni,i = 0
Ni,j = min over i ≤ k < j of { Ni,k + Nk+1,j + di·dk+1·dj+1 }, for i < j

Note that the subproblems are not independent; the subproblems overlap.
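A direct, memoized transcription of this recurrence (a sketch, not from the slides; the next slide replaces the recursion with a bottom-up table):

from functools import lru_cache

def matrix_chain_recursive(d):
    # d[0..n] holds the dimensions: matrix Ai is d[i] x d[i+1]
    n = len(d) - 1  # number of matrices

    @lru_cache(maxsize=None)
    def N(i, j):
        if i == j:
            return 0  # a single matrix needs no multiplications
        # try every position k for the final multiply
        return min(N(i, k) + N(k + 1, j) + d[i] * d[k + 1] * d[j + 1]
                   for k in range(i, j))

    return N(0, n - 1)

# The slides' example: B is 3x100, C is 100x5, D is 5x5.
print(matrix_chain_recursive([3, 100, 5, 5]))  # 1575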

A Dynamic Programming Algorithm
Since the subproblems overlap, we don't use plain recursion. Instead, we construct optimal subproblems “bottom-up.” The Ni,i's are easy (they are all 0), so start with them; then do the subproblems of length 2, 3, and so on.
Running time: O(n^3).

Algorithm matrixChain(S):
  Input: sequence S of n matrices to be multiplied
  Output: number of operations in an optimal parenthesization of S
  for i ← 0 to n-1 do
    Ni,i ← 0
  for b ← 1 to n-1 do
    for i ← 0 to n-b-1 do
      j ← i+b
      Ni,j ← +∞
      for k ← i to j-1 do
        Ni,j ← min{Ni,j, Ni,k + Nk+1,j + di·dk+1·dj+1}
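A runnable Python version of this pseudocode (a sketch, not from the slides); as above, d is the dimension list with Ai of size d[i] × d[i+1]:

import math

def matrix_chain(d):
    n = len(d) - 1  # number of matrices
    # N[i][j] = min ops to compute Ai * ... * Aj; the diagonal starts at 0
    N = [[0] * n for _ in range(n)]
    for b in range(1, n):            # b = j - i, the subproblem length
        for i in range(n - b):
            j = i + b
            N[i][j] = math.inf
            for k in range(i, j):    # position of the final multiply
                cost = N[i][k] + N[k + 1][j] + d[i] * d[k + 1] * d[j + 1]
                N[i][j] = min(N[i][j], cost)
    return N[0][n - 1]

print(matrix_chain([3, 100, 5, 5]))  # 1575, matching the B, C, D example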

A Dynamic Programming Algorithm Visualization
The bottom-up construction fills in the N array by diagonals. Ni,j gets its values from previous entries in the i-th row and j-th column. Filling in each entry in the N table takes O(n) time, so the total running time is O(n^3). The actual parenthesization can be recovered by remembering the best “k” for each N entry.
[Figure: the n × n table N, filled diagonal by diagonal from the main diagonal up to the single answer entry N0,n-1 in the top-right corner.]
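To recover the actual parenthesization, one can keep a second table of the best split points; a sketch (not from the slides) extending matrix_chain above:

import math

def matrix_chain_with_splits(d):
    n = len(d) - 1
    N = [[0] * n for _ in range(n)]
    K = [[0] * n for _ in range(n)]  # K[i][j] = best split point k
    for b in range(1, n):
        for i in range(n - b):
            j = i + b
            N[i][j] = math.inf
            for k in range(i, j):
                cost = N[i][k] + N[k + 1][j] + d[i] * d[k + 1] * d[j + 1]
                if cost < N[i][j]:
                    N[i][j], K[i][j] = cost, k

    def paren(i, j):
        # rebuild a string like ((A0*A1)*A2) from the stored split points
        if i == j:
            return "A%d" % i
        k = K[i][j]
        return "(" + paren(i, k) + "*" + paren(k + 1, j) + ")"

    return N[0][n - 1], paren(0, n - 1)

print(matrix_chain_with_splits([3, 100, 5, 5]))  # (1575, '((A0*A1)*A2)')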

Determining the Run Time
It’s not easy to see directly that filling in each square takes O(n) time, so let’s look at it another way. The entry Ni,j with j - i = b requires b min-operations, and there are n - b such entries, which gives the total count worked out below; the sum is O(n^3).
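The counting argument written out (a reconstruction of the omitted summation; the original slide showed only the final bound):

\[
\sum_{b=1}^{n-1} b\,(n-b)
  = n\sum_{b=1}^{n-1} b - \sum_{b=1}^{n-1} b^{2}
  = \frac{n^{2}(n-1)}{2} - \frac{n(n-1)(2n-1)}{6}
  = \frac{n(n-1)(n+1)}{6}
  = O(n^{3}).
\]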

The General Dynamic Programming Technique
Applies to a problem that at first seems to require a lot of time (possibly exponential), provided we have:
Simple subproblems: the subproblems can be defined in terms of a few variables, such as j, k, l, m, and so on.
Subproblem optimality: the global optimum value can be defined in terms of optimal subproblems.
Subproblem overlap: the subproblems are not independent, but instead they overlap (hence, they should be constructed bottom-up).

The 0/1 Knapsack Problem
Given: a set S of n items, with each item i having
bi - a positive benefit
wi - a positive weight
Goal: choose items with maximum total benefit but with weight at most W.
If we are not allowed to take fractional amounts, then this is the 0/1 knapsack problem. In this case, we let T denote the set of items we take.
Objective: maximize Σ_{i∈T} bi
Constraint: Σ_{i∈T} wi ≤ W

Example
Given: a set S of n items, with each item i having a positive benefit bi and a positive weight wi.
Goal: choose items with maximum total benefit but with weight at most W = 9 in (the “knapsack” capacity).

Items:    1      2      3      4      5
Weight:   4 in   2 in   2 in   6 in   2 in
Benefit:  $20    $3     $6     $25    $80

Solution: items 5 (2 in), 3 (2 in), and 1 (4 in), for total weight 8 in and total benefit $106.

A 0/1 Knapsack Algorithm, First Attempt
Let Sk be the set of items numbered 1 to k, and define B[k] = the best selection from Sk.
Problem: this does not have subproblem optimality. Consider S = {(3,2), (5,4), (8,5), (4,3), (10,9)} (weight, benefit) pairs: the best selection from S5 need not extend the best selection from S4, so knowing only B[4] is not enough to compute B[5]. (The check below makes this concrete.)
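A small brute-force check (not from the slides; the slide's weight limit is not preserved, so W = 20 is an assumption chosen for illustration) showing that the two optima differ:

from itertools import combinations

S = [(3, 2), (5, 4), (8, 5), (4, 3), (10, 9)]  # (weight, benefit)
W = 20  # assumed capacity, for illustration only

def best(items, cap):
    # brute force: try every subset, keep the best feasible one
    return max((sub for r in range(len(items) + 1)
                for sub in combinations(items, r)
                if sum(w for w, _ in sub) <= cap),
               key=lambda sub: sum(b for _, b in sub))

print(best(S[:4], W))  # best for S4: all four items, benefit 14
print(best(S, W))      # best for S5: a different selection, benefit 16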

A 0/1 Knapsack Algorithm, Second Attempt
Again let Sk be the set of items numbered 1 to k, but now define B[k,w] = the best selection from Sk with weight exactly equal to w.
Good news: this does have subproblem optimality. That is, the best subset of Sk with weight exactly w is either the best subset of Sk-1 with weight w (item k not taken) or the best subset of Sk-1 with weight w - wk plus item k.

The 0/1 Knapsack Algorithm
Recall the definition of B[k,w]:

B[k,w] = B[k-1,w]                            if wk > w
B[k,w] = max{ B[k-1,w], B[k-1,w-wk] + bk }   otherwise

Since B[k,w] is defined in terms of B[k-1,*], we can reuse the same one-dimensional array, sweeping w downward.
Running time: O(nW). This is not a polynomial-time algorithm if W is large; it is a pseudo-polynomial time algorithm.

Algorithm 01Knapsack(S, W):
  Input: set S of n items with benefit bi and weight wi; maximum weight W
  Output: benefit of the best subset with weight at most W
  for w ← 0 to W do
    B[w] ← 0
  for k ← 1 to n do
    for w ← W downto wk do
      if B[w-wk] + bk > B[w] then
        B[w] ← B[w-wk] + bk
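The same algorithm as runnable Python (a sketch, not from the slides), tried on the earlier example with the inch weights as plain integers:

def knapsack_01(items, W):
    # items: list of (weight, benefit) pairs; W: maximum total weight
    B = [0] * (W + 1)  # B[w] = best benefit achievable with weight at most w
    for wk, bk in items:
        # sweep w downward so each item is used at most once per pass
        for w in range(W, wk - 1, -1):
            B[w] = max(B[w], B[w - wk] + bk)
    return B[W]

items = [(4, 20), (2, 3), (2, 6), (6, 25), (2, 80)]  # the example items
print(knapsack_01(items, 9))  # 106: items 1, 3, and 5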

Pseudo-Polynomial Time Algorithms
The O(nW) bound on the last slide is not strictly polynomial in the input size. For example, W could be 2^n, which makes the running time O(n·2^n), no better than brute-force enumeration of all 2^n subsets. In practice, though, such algorithms should do much better than brute force. We’ll see later, when we discuss NP-completeness, that it is very unlikely that anyone will ever find a truly polynomial-time algorithm for the 0/1 knapsack problem.