CPSC 411 Design and Analysis of Algorithms
Set 5: Dynamic Programming
Prof. Jennifer Welch, Spring 2011



Drawback of Divide & Conquer
Divide-and-conquer can sometimes be inefficient. Fibonacci numbers: F(0) = 0, F(1) = 1, and F(n) = F(n-1) + F(n-2) for n > 1. The sequence is 0, 1, 1, 2, 3, 5, 8, 13, …

Computing Fibonacci Numbers
Obvious recursive algorithm:

Fib(n):
  if n = 0 or 1 then return n
  else return (Fib(n-1) + Fib(n-2))
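The pseudocode above can be transcribed directly into Python (a hypothetical rendering; the slides themselves use pseudocode):

```python
def fib(n):
    # Obvious recursive algorithm from the slide: correct, but exponential
    # time, because the same subproblems are recomputed over and over.
    if n == 0 or n == 1:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55
```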

Recursion Tree for Fib(5)

Fib(5)
├─ Fib(4)
│  ├─ Fib(3)
│  │  ├─ Fib(2)
│  │  │  ├─ Fib(1)
│  │  │  └─ Fib(0)
│  │  └─ Fib(1)
│  └─ Fib(2)
│     ├─ Fib(1)
│     └─ Fib(0)
└─ Fib(3)
   ├─ Fib(2)
   │  ├─ Fib(1)
   │  └─ Fib(0)
   └─ Fib(1)

How Many Recursive Calls?
If all leaves had the same depth, then there would be about 2^n recursive calls. But this is over-counting. With more careful counting it can be shown that the number of calls is Θ((1.6)^n). Still exponential!
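The exponential bound can be checked empirically by counting calls; this small Python sketch (function name is mine) counts the calls the naive algorithm makes:

```python
def count_calls(n):
    # Number of calls naive Fib(n) makes, including the initial one:
    # C(0) = C(1) = 1 and C(n) = C(n-1) + C(n-2) + 1.
    if n == 0 or n == 1:
        return 1
    return count_calls(n - 1) + count_calls(n - 2) + 1

# The ratio of successive counts approaches the golden ratio (~1.618),
# consistent with the Theta((1.6)^n) claim on the slide.
print(count_calls(5))                      # 15 calls just to compute Fib(5)
print(count_calls(20) / count_calls(19))
```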

Save Work
The wasteful approach repeats work unnecessarily: Fib(2) is computed three times. Instead, compute Fib(2) once, store the result in a table, and look it up when needed.

More Efficient Recursive Alg
F[0] := 0; F[1] := 1; F[2..n] := NIL
F[n] := Fib(n)

Fib(n):
  if n = 0 or 1 then return F[n]
  if F[n-1] = NIL then F[n-1] := Fib(n-1)
  if F[n-2] = NIL then F[n-2] := Fib(n-2)
  return (F[n-1] + F[n-2])

This computes each F[i] only once.
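A minimal Python sketch of the memoized algorithm (a dict plays the role of the slide's array, with "key absent" standing in for NIL):

```python
def fib_memo(n, F=None):
    # F is the table; the base cases are filled in up front.
    if F is None:
        F = {0: 0, 1: 1}
    if n not in F:                  # "F[n] = NIL" on the slide
        F[n] = fib_memo(n - 1, F) + fib_memo(n - 2, F)
    return F[n]

print(fib_memo(50))  # fast, since each F[i] is computed only once
```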

Example of Memoized Fib
Initially F = [0, 1, NIL, NIL, NIL, NIL]. The call Fib(5) recurses down through Fib(4) and Fib(3) to Fib(2). Fib(2) returns 0+1 = 1; Fib(3) fills in F[2] with 1 and returns 1+1 = 2; Fib(4) fills in F[3] with 2 and returns 2+1 = 3; Fib(5) fills in F[4] with 3 and returns 3+2 = 5.

Get Rid of the Recursion
Recursion adds overhead: extra time for function calls, and extra space on the runtime stack for information about each currently active call. Avoid the recursion overhead by filling in the table entries bottom up, instead of top down.

Subproblem Dependencies
Figure out which subproblems rely on which other subproblems. Example: F(0), F(1), F(2), F(3), …, F(n-2), F(n-1), F(n), where each value depends on the two before it.

Order for Computing Subproblems
Then figure out an order for computing the subproblems that respects the dependencies: when you are solving a subproblem, you have already solved all the subproblems on which it depends. Example: just solve them in the order F(0), F(1), F(2), F(3), …

DP Solution for Fibonacci

Fib(n):
  F[0] := 0; F[1] := 1
  for i := 2 to n do
    F[i] := F[i-1] + F[i-2]
  return F[n]

Can perform application-specific optimizations, e.g., save space by keeping only the last two numbers computed.
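Both the bottom-up table version and the space-saving optimization mentioned above can be sketched in Python (function names are mine):

```python
def fib_dp(n):
    # Bottom-up fill of the table, as in the slide's pseudocode.
    if n < 2:
        return n
    F = [0] * (n + 1)
    F[1] = 1
    for i in range(2, n + 1):
        F[i] = F[i - 1] + F[i - 2]
    return F[n]

def fib_dp_small(n):
    # Application-specific optimization: keep only the last two values,
    # so the space drops from O(n) to O(1).
    if n < 2:
        return n
    prev, cur = 0, 1
    for _ in range(2, n + 1):
        prev, cur = cur, prev + cur
    return cur

print(fib_dp(10), fib_dp_small(10))  # 55 55
```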

Matrix Chain Order Problem
Multiplying non-square matrices: if A is n x m and B is m x p (the inner dimensions must be equal), then AB is n x p, and its (i,j) entry is Σ_k a_ik·b_kj. Computing AB takes nmp scalar multiplications and n(m-1)p scalar additions (using the basic algorithm). Suppose we have a sequence of matrices to multiply. What is the best order?

Why Order Matters
Suppose we have 4 matrices: A (30 x 1), B (1 x 40), C (40 x 10), D (10 x 25).
((AB)(CD)) requires 41,200 scalar multiplications.
(A((BC)D)) requires only 1,400.
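The two counts follow from summing the n·m·p cost of each pairwise product; a quick Python check (variable names are mine):

```python
def cost(n, m, p):
    # Multiplying an n x m matrix by an m x p matrix: n*m*p scalar mults.
    return n * m * p

# ((AB)(CD)): AB is 30x40 (1200 mults), CD is 40x25 (10000), final 30x25 (30000).
order1 = cost(30, 1, 40) + cost(40, 10, 25) + cost(30, 40, 25)

# (A((BC)D)): BC is 1x10 (400), (BC)D is 1x25 (250), final 30x25 (750).
order2 = cost(1, 40, 10) + cost(1, 10, 25) + cost(30, 1, 25)

print(order1, order2)  # 41200 1400
```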

Matrix Chain Order Problem
Given matrices A1, A2, …, An, where Ai is d(i-1) x d(i):
[1] What is the minimum number of scalar multiplications required to compute A1 · A2 · … · An?
[2] What order of matrix multiplications achieves this minimum?
Focus on [1]; see the textbook for how to do [2].

A Possible Solution
Try all possibilities and choose the best one. The drawback is that there are too many of them (exponential in the number of matrices to be multiplied). We need to be more clever: try dynamic programming!

Step 1: Develop a Recursive Solution
Define M(i,j) to be the minimum number of multiplications needed to compute Ai · Ai+1 · … · Aj.
Goal: find M(1,n).
Basis: M(i,i) = 0.
Recursion: how to define M(i,j) recursively?

Defining M(i,j) Recursively
Consider all possible ways to split Ai through Aj into two pieces. Compare the costs of all these splits: the best-case cost for computing the product of each piece, plus the cost of multiplying the two resulting products. Take the best one:
M(i,j) = min over i ≤ k < j of ( M(i,k) + M(k+1,j) + d(i-1)·d(k)·d(j) )

Defining M(i,j) Recursively
Split at k: (Ai · … · Ak) · (Ak+1 · … · Aj) = P1 · P2.
The minimum cost to compute P1 is M(i,k); the minimum cost to compute P2 is M(k+1,j); the cost to compute P1 · P2 is d(i-1)·d(k)·d(j).

Step 2: Find Dependencies Among Subproblems
[Figure: the upper-triangular table M, with the goal M(1,n) in the upper-right corner and the lower triangle marked n/a. Computing the highlighted entry requires the entries to its left in the same row and below it in the same column.]

Defining the Dependencies
Computing M(i,j) uses everything in the same row to the left: M(i,i), M(i,i+1), …, M(i,j-1), and everything in the same column below: M(i+1,j), M(i+2,j), …, M(j,j).

Step 3: Identify Order for Solving Subproblems
Recall the dependencies between subproblems just found. Solve the subproblems (i.e., fill in the table entries) this way: go along the diagonals, starting just above the main diagonal and ending in the upper-right corner (the goal).

Order for Solving Subproblems
[Figure: the table M is filled in diagonal by diagonal, from the diagonal just above the main one up to the goal entry M(1,n) in the upper-right corner.]

Pseudocode

for i := 1 to n do
  M[i,i] := 0
for diag := 1 to n-1 do  // diagonals
  for i := 1 to n-diag do  // rows with an entry on the diag-th diagonal
    j := i + diag  // column corresponding to row i on the diag-th diagonal
    M[i,j] := infinity
    for k := i to j-1 do
      M[i,j] := min(M[i,j], M[i,k] + M[k+1,j] + d(i-1)·d(k)·d(j))
    endfor

Running time: O(n^3). (The inner loop is where to pay attention in order to remember the actual sequence of multiplications.)
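The pseudocode translates to Python roughly as follows (a sketch; the 1-indexing convention and the name `dims`, where Ai is dims[i-1] x dims[i], are my own choices):

```python
def matrix_chain_cost(dims):
    # dims has length n+1; matrix A_i is dims[i-1] x dims[i].
    n = len(dims) - 1
    M = [[0] * (n + 1) for _ in range(n + 1)]   # M[i][j], 1-indexed; diagonal is 0
    for diag in range(1, n):                    # fill diagonal by diagonal
        for i in range(1, n - diag + 1):
            j = i + diag
            # Best split point k, per the recurrence on the earlier slide.
            M[i][j] = min(M[i][k] + M[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                          for k in range(i, j))
    return M[1][n]

print(matrix_chain_cost([30, 1, 40, 10, 25]))  # 1400, matching the ABCD example
```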

Example
Matrices: 1: A is 30x1, 2: B is 1x40, 3: C is 40x10, 4: D is 10x25 (so d0..d4 = 30, 1, 40, 10, 25).

M:        j=1    j=2    j=3     j=4
  i=1       0   1200    700    1400
  i=2     n/a      0    400     650
  i=3     n/a    n/a      0   10000
  i=4     n/a    n/a    n/a       0

Keeping Track of the Order
It's fine to know the cost of the cheapest order, but what is that cheapest order? Keep another array S and update it when computing the minimum cost in the inner loop. After M and S have been filled in, call a recursive algorithm on S to print out the actual order.

Modified Pseudocode

for i := 1 to n do
  M[i,i] := 0
for diag := 1 to n-1 do  // diagonals
  for i := 1 to n-diag do  // rows with an entry on the diag-th diagonal
    j := i + diag  // column corresponding to row i on the diag-th diagonal
    M[i,j] := infinity
    for k := i to j-1 do
      M[i,j] := min(M[i,j], M[i,k] + M[k+1,j] + d(i-1)·d(k)·d(j))
      if the previous line changed the value of M[i,j] then S[i,j] := k
    endfor

S[i,j] keeps track of the cheapest split point found so far: between A(k) and A(k+1).

Example
Matrices: 1: A is 30x1, 2: B is 1x40, 3: C is 40x10, 4: D is 10x25.

M:        j=1    j=2    j=3     j=4        S:       j=2   j=3   j=4
  i=1       0   1200    700    1400          i=1      1     1     1
  i=2     n/a      0    400     650          i=2            2     3
  i=3     n/a    n/a      0   10000          i=3                  3
  i=4     n/a    n/a    n/a       0

Using S to Print Best Ordering
Call Print(S,1,n) to get the entire ordering.

Print(S,i,j):
  if i = j then output "A" + i  // + is string concatenation
  else
    k := S[i,j]
    output "(" + Print(S,i,k) + Print(S,k+1,j) + ")"
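Putting the pieces together, a Python sketch that fills both M and S and then rebuilds the parenthesization (indexing and names are mine, not the slides'):

```python
def matrix_chain_order(dims):
    # Fill M (costs) and S (best split points), then rebuild the ordering.
    n = len(dims) - 1
    M = [[0] * (n + 1) for _ in range(n + 1)]
    S = [[0] * (n + 1) for _ in range(n + 1)]
    for diag in range(1, n):
        for i in range(1, n - diag + 1):
            j = i + diag
            M[i][j] = float("inf")
            for k in range(i, j):
                c = M[i][k] + M[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                if c < M[i][j]:                # cheaper split found: record it
                    M[i][j], S[i][j] = c, k

    def pr(i, j):                              # the slide's Print(S, i, j)
        if i == j:
            return "A" + str(i)
        k = S[i][j]
        return "(" + pr(i, k) + pr(k + 1, j) + ")"

    return M[1][n], pr(1, n)

print(matrix_chain_order([30, 1, 40, 10, 25]))  # (1400, '(A1((A2A3)A4))')
```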

Example
Tracing Print(S,1,4) on the S table from the previous slide yields the ordering (A1((A2A3)A4)). <<draw recursion tree on board>>