Dynamic Programming Steps.

Presentation transcript:

Dynamic Programming Steps.
1. View the problem solution as the result of a sequence of decisions.
2. Obtain a formulation for the problem state.
3. Verify that the principle of optimality holds.
4. Set up the dynamic programming recurrence equations.
5. Solve these equations for the value of the optimal solution.
6. Perform a traceback to determine the optimal solution.
Fibonacci sequence example (see the sketch below).
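The slides illustrate the steps with the Fibonacci sequence. A minimal sketch of the recurrence F(i) = F(i-1) + F(i-2), with F(0) = 0 and F(1) = 1, solved recursively with memoization (the method and cache names are illustrative, not from the slides, and the members are assumed to live inside a class):

import java.util.HashMap;
import java.util.Map;

// cache of already-computed Fibonacci values
static Map<Integer, Long> memo = new HashMap<>();

static long fib(int n) {
    if (n <= 1) return n;                   // F(0) = 0, F(1) = 1
    Long cached = memo.get(n);
    if (cached != null) return cached;      // reuse, never recompute a state
    long value = fib(n - 1) + fib(n - 2);   // the DP recurrence
    memo.put(n, value);
    return value;
}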

Dynamic Programming When solving the dynamic programming recurrence recursively, be sure to avoid the recomputation of the optimal value for the same problem state. To minimize run time overheads, and hence to reduce actual run time, dynamic programming recurrences are almost always solved iteratively (no recursion).
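Applied to Fibonacci, the iterative form fills the table from the bottom up, with no recursion and no cache checks (again a sketch with an illustrative name):

static long fibIterative(int n) {
    if (n <= 1) return n;
    long[] f = new long[n + 1];
    f[0] = 0; f[1] = 1;                 // base cases
    for (int i = 2; i <= n; i++)
        f[i] = f[i - 1] + f[i - 2];     // each state computed exactly once
    return f[n];
}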

0/1 Knapsack Recurrence
f(n,y) = p_n if w_n <= y, and f(n,y) = 0 otherwise.
When i < n:
f(i,y) = f(i+1,y) whenever y < w_i.
f(i,y) = max{f(i+1,y), f(i+1,y-w_i) + p_i}, y >= w_i.
Assume the weights and the capacity are integers. Only the f(i,y) with 1 <= i <= n and 0 <= y <= c are of interest.
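A direct recursive rendering of this recurrence, with memoization so no state is computed twice; the slides solve the same recurrence iteratively below. This is a sketch: the memo array, assumed to be initialized to -1, is an illustrative helper, not from the slides.

// returns f(i,y); memo[i][y] == -1 marks a state not yet computed
static int f(int i, int y, int[] w, int[] p, int n, int[][] memo) {
    if (memo[i][y] >= 0) return memo[i][y];          // already known
    int result;
    if (i == n)
        result = (w[n] <= y) ? p[n] : 0;             // base case
    else if (y < w[i])
        result = f(i + 1, y, w, p, n, memo);         // item i cannot fit
    else
        result = Math.max(f(i + 1, y, w, p, n, memo),                // skip item i
                          f(i + 1, y - w[i], w, p, n, memo) + p[i]); // take item i
    return memo[i][y] = result;
}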

Iterative Solution Example
n = 5, c = 8, w = [4,3,5,6,2], p = [9,8,10,9,3]
The f[i][y] table is filled in on the slides that follow, one row at a time from i = 5 down to i = 1.
Note: with no recomputation, the recursive version computes f(1,c) and some (maybe even all) f(i,y), i > 1, 0 <= y <= c. The iterative version computes f(1,c) and all f(i,y), i > 1, 0 <= y <= c.

Compute f[5][*]
n = 5, c = 8, w = [4,3,5,6,2], p = [9,8,10,9,3]

i \ y   0   1   2   3   4   5   6   7   8
  5     0   0   3   3   3   3   3   3   3

Compute f[4][*]
n = 5, c = 8, w = [4,3,5,6,2], p = [9,8,10,9,3]

i \ y   0   1   2   3   4   5   6   7   8
  5     0   0   3   3   3   3   3   3   3
  4     0   0   3   3   3   3   9   9  12

f(i,y) = max{f(i+1,y), f(i+1,y-w_i) + p_i}, y >= w_i

Compute f[3][*]
n = 5, c = 8, w = [4,3,5,6,2], p = [9,8,10,9,3]

i \ y   0   1   2   3   4   5   6   7   8
  5     0   0   3   3   3   3   3   3   3
  4     0   0   3   3   3   3   9   9  12
  3     0   0   3   3   3  10  10  13  13

f(i,y) = max{f(i+1,y), f(i+1,y-w_i) + p_i}, y >= w_i

Compute f[2][*]
n = 5, c = 8, w = [4,3,5,6,2], p = [9,8,10,9,3]

i \ y   0   1   2   3   4   5   6   7   8
  5     0   0   3   3   3   3   3   3   3
  4     0   0   3   3   3   3   9   9  12
  3     0   0   3   3   3  10  10  13  13
  2     0   0   3   8   8  11  11  13  18

f(i,y) = max{f(i+1,y), f(i+1,y-w_i) + p_i}, y >= w_i

Compute f[1][c]
n = 5, c = 8, w = [4,3,5,6,2], p = [9,8,10,9,3]

i \ y   0   1   2   3   4   5   6   7   8
  5     0   0   3   3   3   3   3   3   3
  4     0   0   3   3   3   3   9   9  12
  3     0   0   3   3   3  10  10  13  13
  2     0   0   3   8   8  11  11  13  18
  1     -   -   -   -   -   -   -   -  18

f(i,y) = max{f(i+1,y), f(i+1,y-w_i) + p_i}, y >= w_i

Iterative Implementation

// initialize f[n][]
int yMax = Math.min(w[n] - 1, c);
for (int y = 0; y <= yMax; y++)
    f[n][y] = 0;
for (int y = w[n]; y <= c; y++)
    f[n][y] = p[n];

Iterative Implementation

// compute f[i][y], 1 < i < n
for (int i = n - 1; i > 1; i--) {
    yMax = Math.min(w[i] - 1, c);
    // y < w[i]: item i cannot fit, so f[i][y] = f[i+1][y]
    for (int y = 0; y <= yMax; y++)
        f[i][y] = f[i + 1][y];
    // y >= w[i]: best of skipping and taking item i
    for (int y = w[i]; y <= c; y++)
        f[i][y] = Math.max(f[i + 1][y], f[i + 1][y - w[i]] + p[i]);
}

Iterative Implementation

// compute f[1][c]; only this one entry of row 1 is needed
f[1][c] = f[2][c];
if (c >= w[1])
    f[1][c] = Math.max(f[1][c], f[2][c - w[1]] + p[1]);
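Stitched together, the three fragments above form one method. A self-contained sketch, assuming 1-indexed arrays with index 0 unused and n >= 2 (the class name and main driver are illustrative, not from the slides):

public class Knapsack {
    // w[1..n], p[1..n]; returns f[1][c] as on the slides (requires n >= 2)
    static int knapsack(int[] w, int[] p, int n, int c) {
        int[][] f = new int[n + 1][c + 1];
        // initialize f[n][]
        int yMax = Math.min(w[n] - 1, c);
        for (int y = 0; y <= yMax; y++) f[n][y] = 0;
        for (int y = w[n]; y <= c; y++) f[n][y] = p[n];
        // compute f[i][y], 1 < i < n
        for (int i = n - 1; i > 1; i--) {
            yMax = Math.min(w[i] - 1, c);
            for (int y = 0; y <= yMax; y++) f[i][y] = f[i + 1][y];
            for (int y = w[i]; y <= c; y++)
                f[i][y] = Math.max(f[i + 1][y], f[i + 1][y - w[i]] + p[i]);
        }
        // compute f[1][c] only
        f[1][c] = f[2][c];
        if (c >= w[1])
            f[1][c] = Math.max(f[1][c], f[2][c - w[1]] + p[1]);
        return f[1][c];
    }

    public static void main(String[] args) {
        int[] w = {0, 4, 3, 5, 6, 2};   // index 0 unused
        int[] p = {0, 9, 8, 10, 9, 3};
        System.out.println(knapsack(w, p, 5, 8));   // prints 18, as in the table
    }
}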

Time Complexity O(cn). Same as for the recursive version with no recomputations. The iterative version is expected to run faster because of its lower overheads: there are no checks to see whether an f[i][y] has already been computed (but all f[i][y] are computed), and method calls are replaced by for loops.

Traceback (x_i = 1 iff item i is in the optimal knapsack)
n = 5, c = 8, w = [4,3,5,6,2], p = [9,8,10,9,3]
(f[i][y] table as computed above)
f[1][8] = f[2][8] => x_1 = 0

Traceback
n = 5, c = 8, w = [4,3,5,6,2], p = [9,8,10,9,3]
(f[i][y] table as computed above)
f[2][8] != f[3][8] => x_2 = 1; the remaining capacity drops to y = 8 - w_2 = 5

Traceback
n = 5, c = 8, w = [4,3,5,6,2], p = [9,8,10,9,3]
(f[i][y] table as computed above)
f[3][5] != f[4][5] => x_3 = 1; the remaining capacity drops to y = 5 - w_3 = 0

Traceback
n = 5, c = 8, w = [4,3,5,6,2], p = [9,8,10,9,3]
(f[i][y] table as computed above)
f[4][0] = f[5][0] => x_4 = 0

Traceback
n = 5, c = 8, w = [4,3,5,6,2], p = [9,8,10,9,3]
(f[i][y] table as computed above)
f[5][0] = 0 => x_5 = 0

Complexity Of Traceback O(n)
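A sketch of the traceback over the completed table, assuming the rows f[2..n][0..c] from the iterative computation are still available (the method and variable names are illustrative):

// recover x[1..n] from the f table in O(n) time
static int[] traceback(int[][] f, int[] w, int n, int c) {
    int[] x = new int[n + 1];
    int y = c;                                    // remaining capacity
    for (int i = 1; i < n; i++) {
        if (f[i][y] == f[i + 1][y]) x[i] = 0;     // item i not in the solution
        else { x[i] = 1; y -= w[i]; }             // item i taken
    }
    x[n] = (f[n][y] > 0) ? 1 : 0;                 // last item directly from f[n][y]
    return x;
}

On the example instance this yields x = [0, 1, 1, 0, 0], matching the slides.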

Matrix Multiplication Chains
Multiply an m x n matrix A and an n x p matrix B to get an m x p matrix C:
C(i,j) = sum_{k=1}^{n} A(i,k) * B(k,j)
We shall use the number of multiplications as our complexity measure. n multiplications are needed to compute one C(i,j), so mnp multiplications are needed to compute all mp terms of C.
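The definition translates directly into the familiar triple loop; the innermost statement performs the n multiplications for one C(i,j), mnp in total. A sketch with illustrative names:

// multiply an m x n matrix a by an n x p matrix b; uses mnp scalar multiplications
static double[][] multiply(double[][] a, double[][] b, int m, int n, int p) {
    double[][] c = new double[m][p];             // initialized to 0
    for (int i = 0; i < m; i++)
        for (int j = 0; j < p; j++)
            for (int k = 0; k < n; k++)
                c[i][j] += a[i][k] * b[k][j];    // one multiplication per k
    return c;
}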

Matrix Multiplication Chains Suppose that we are to compute the product X*Y*Z of three matrices X, Y and Z. The matrix dimensions are: X:(100 x 1), Y:(1 x 100), Z:(100 x 1) Multiply X and Y to get a 100 x 100 matrix T. 100 * 1 * 100 = 10,000 multiplications. Multiply T and Z to get the 100 x 1 answer. 100 * 100 * 1 = 10,000 multiplications. Total cost is 20,000 multiplications. 10,000 units of space are needed for T.

Matrix Multiplication Chains The matrix dimensions are: X:(100 x 1) Y:(1 x 100) Z:(100 x 1) Multiply Y and Z to get a 1 x 1 matrix T. 1 * 100 * 1 = 100 multiplications. Multiply X and T to get the 100 x 1 answer. 100 * 1 * 1 = 100 multiplications. Total cost is 200 multiplications. 1 unit of space is needed for T.

Product Of 5 Matrices
Some of the ways in which the product of 5 matrices may be computed:
A*(B*(C*(D*E)))   (right to left)
(((A*B)*C)*D)*E   (left to right)
(A*B)*((C*D)*E)
(A*B)*(C*(D*E))
(A*(B*C))*(D*E)
((A*B)*C)*(D*E)

Find Best Multiplication Order
The number of ways to compute the product of q matrices is O(4^q / q^1.5). Evaluating all of these ways takes O(4^q / q^0.5) time.
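Dynamic programming finds the best order without enumerating all of them. A sketch, under the standard formulation (not spelled out on these slides): with matrix i of shape dims[i-1] x dims[i], let m(i,j) be the minimum number of multiplications to compute the product of matrices i through j; then m(i,i) = 0 and m(i,j) = min over i <= k < j of m(i,k) + m(k+1,j) + dims[i-1]*dims[k]*dims[j], solvable by an O(q^3)-time table fill:

// dims has length q + 1; matrix i is dims[i-1] x dims[i], 1 <= i <= q
static int matrixChainCost(int[] dims) {
    int q = dims.length - 1;
    int[][] m = new int[q + 1][q + 1];          // m[i][i] = 0 by default
    for (int len = 2; len <= q; len++)          // chain length
        for (int i = 1; i + len - 1 <= q; i++) {
            int j = i + len - 1;
            m[i][j] = Integer.MAX_VALUE;
            for (int k = i; k < j; k++)         // last multiplication splits at k
                m[i][j] = Math.min(m[i][j],
                    m[i][k] + m[k + 1][j] + dims[i - 1] * dims[k] * dims[j]);
        }
    return m[1][q];
}

For the X*Y*Z example, matrixChainCost(new int[]{100, 1, 100, 1}) returns 200, the right-to-left cost.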

An Application Registration of pre- and post-operative 3D brain MRI images to determine the volume of the removed tumor.

3D Registration

3D Registration Each image has 256 x 256 x 256 voxels. In each iteration of the registration algorithm, the product of three matrices is computed at each voxel: (12 x 3) * (3 x 3) * (3 x 1). Left to right computation => 12 * 3 * 3 + 12 * 3 * 1 = 144 multiplications per voxel per iteration. The algorithm takes 100 iterations to converge.

3D Registration With left to right computation, the total number of multiplications is about 2.4 * 10^11. Right to left computation => 3 * 3 * 1 + 12 * 3 * 1 = 45 multiplications per voxel per iteration, for a total of about 7.5 * 10^10. At 10^8 multiplications per second, the time is 40 min vs 12.5 min.
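As a quick check, the matrixChainCost sketch from the matrix-chain slides reproduces these per-voxel counts (a usage fragment, assuming that method is in scope):

int[] dims = {12, 3, 3, 1};          // (12 x 3) * (3 x 3) * (3 x 1)
matrixChainCost(dims);               // returns 45: right to left is optimal here
// left to right: 12*3*3 + 12*3*1 = 144 multiplications per voxel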