Dynamic Programming
Binhai Zhu, Computer Science Department, Montana State University

Idea You have a large problem to solve, and you can divide it into smaller sub-problems. (1) Solutions of the sub-problems might be interrelated and might be re-used again (this is different from Divide & Conquer). (2) So it is better to store those smaller solutions somewhere. (3) Of course, sometimes it might not work.

Example 1. World Series Odds Two teams A and B play to see who is the first to win n games. In the World Series, n = 4. Assumption: A and B are equally competent, so each has a 50% chance of winning any particular game.

Example 1. World Series Odds Let P(i,j) be the probability that A wins the series when A still needs i more games and B still needs j more games. (Throughout, we talk about team A winning or losing.)
P(i,j) = 1, if i = 0 and j > 0
P(i,j) = 0, if i > 0 and j = 0
P(i,j) = ( P(i-1,j) + P(i,j-1) ) / 2, if i > 0 and j > 0
What is the cost of computing P(i,j) recursively?

Example 1. World Series Odds Let i+j = n and let T(n) be the cost of computing P(i,j) recursively.
T(1) = O(1) = c
T(n) = 2T(n-1) + O(1) = 2T(n-1) + d, where c and d are constants.
Solving the recurrence gives T(n) = c·2^(n-1) + d(2^(n-1) - 1) ∈ O(2^n) = O(2^(i+j)). So when n is large, the plain recursive solution is not efficient.
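For concreteness, here is a minimal Python sketch (my own illustration, not from the slides) of the naive recursive computation; it recomputes the same sub-problems over and over, which is exactly the O(2^(i+j)) behaviour analyzed above.

def p_recursive(i, j):
    # Base cases: A has already won (i = 0) or already lost (j = 0).
    if i == 0 and j > 0:
        return 1.0
    if i > 0 and j == 0:
        return 0.0
    # A wins or loses the next game with probability 1/2 each.
    return (p_recursive(i - 1, j) + p_recursive(i, j - 1)) / 2

print(p_recursive(2, 2))   # 0.5: with two games needed by each team, the odds are even
print(p_recursive(1, 4))   # 0.9375 = 15/16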

Example 1. World Series Odds How do we solve this problem with dynamic programming? (1) Define a table P[-,-] and fill it bottom-up:

Odds(i,j)
  for t = 1 to i+j do
    P[0,t] = 1
    P[t,0] = 0
    for k = 1 to t-1 do
      P[k,t-k] = ( P[k-1,t-k] + P[k,t-k-1] ) / 2
    endfor
  endfor
  return P[i,j]   // running time?

The filled table of P[i,j] values (rows indexed by j, columns by i):

  j=4 |  1   15/16  13/16  21/32   1/2
  j=3 |  1    7/8   11/16   1/2   11/32
  j=2 |  1    3/4    1/2    5/16   3/16
  j=1 |  1    1/2    1/4    1/8    1/16
      +---------------------------------
       i=0   i=1    i=2    i=3    i=4
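A Python sketch of the Odds procedure above (an illustration, not code from the slides); it fills the table anti-diagonal by anti-diagonal, where t is the current value of i + j:

def odds(i, j):
    n = i + j
    # P[a][b] = probability that A wins when A needs a more games and B needs b more.
    P = [[0.0] * (n + 1) for _ in range(n + 1)]
    for t in range(1, n + 1):
        P[0][t] = 1.0
        P[t][0] = 0.0
        for k in range(1, t):
            P[k][t - k] = (P[k - 1][t - k] + P[k][t - k - 1]) / 2
    return P[i][j]          # O((i+j)^2) time, answering the running-time question

print(odds(4, 4))   # 0.5
print(odds(1, 4))   # 0.9375 = 15/16, matching the table
print(odds(3, 2))   # 0.3125 = 5/16, matching the table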

Example 2. Matrix Chain Multiplication Given n matrices M1, M2, …, Mn, compute the product M1M2M3…Mn, where Mi has dimension di-1 x di (i.e., di-1 rows and di columns), for i = 1,…,n. What is the objective?

Fact 1. Given a matrix A of dimension p x q and a matrix B of dimension q x r, computing AB takes pqr scalar multiplications (see handouts).

So the objective is to compute M1M2M3…Mn with the minimum number of scalar multiplications.

Example 2. Matrix Chain Multiplication Problem: Parenthesize the product M1M2…Mn in a way that minimizes the number of scalar multiplications.
Example. M1: 20 x 10, M2: 10 x 50, M3: 50 x 5, M4: 5 x 30
(M1(M2(M3M4))) --- 28500 multiplications
(M1((M2M3)M4)) --- 10000 multiplications
((M1M2)(M3M4)) --- 47500 multiplications
((M1(M2M3))M4) --- 6500 multiplications
(((M1M2)M3)M4) --- 18000 multiplications
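These counts are easy to check with a small Python helper (hypothetical, for illustration only): a chain is written as a nested tuple of matrix indices, and dims[i-1] x dims[i] is the shape of Mi.

def chain_cost(expr, dims):
    """Return (scalar multiplications, resulting shape) for a parenthesized chain."""
    if isinstance(expr, int):                     # a single matrix Mi
        return 0, (dims[expr - 1], dims[expr])
    left, right = expr
    cl, (p, q) = chain_cost(left, dims)
    cr, (q2, r) = chain_cost(right, dims)
    return cl + cr + p * q * r, (p, r)            # Fact 1: AB costs p*q*r

dims = [20, 10, 50, 5, 30]
for expr in [(1, (2, (3, 4))), (1, ((2, 3), 4)), ((1, 2), (3, 4)),
             ((1, (2, 3)), 4), (((1, 2), 3), 4)]:
    print(expr, chain_cost(expr, dims)[0])        # 28500, 10000, 47500, 6500, 18000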

Example 2. Matrix Chain Multiplication Problem: Parenthesize the product M1M2…Mn so as to minimize the number of scalar multiplications. However, exhaustive search is not efficient. Let P(n) be the number of alternative parenthesizations of n matrices.
P(n) = 1, if n = 1
P(n) = ∑k=1 to n-1 P(k)P(n-k), if n ≥ 2
One can show P(n) ≥ 4^(n-1)/(2n^2 - n). For example, for n = 20 this is already greater than 2^28.
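A quick Python sketch (illustrative, not from the slides) that evaluates this recurrence for P(n); it confirms the five parenthesizations listed above for n = 4 and the explosion at n = 20.

def num_parenthesizations(n):
    # P[m] = number of ways to fully parenthesize a chain of m matrices
    P = [0] * (n + 1)
    P[1] = 1
    for m in range(2, n + 1):
        P[m] = sum(P[k] * P[m - k] for k in range(1, m))
    return P[n]

print(num_parenthesizations(4))            # 5, the five parenthesizations above
print(num_parenthesizations(20))           # 1767263190
print(num_parenthesizations(20) > 2**28)   # True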

Example 2. Matrix Chain Multiplication So let's use dynamic programming. Let m[i,j] be the minimum number of scalar multiplications performed using an optimal parenthesization of MiMi+1…Mj-1Mj.
m[i,i] = 0
m[i,j] = min over i ≤ k < j of { m[i,k] + m[k+1,j] + di-1 dk dj }, for 1 ≤ i < j ≤ n

Example 2. Matrix Chain Multiplication Now you see another difference between dynamic programming and Divide & Conquer --- dynamic programming is always bottom-up! The table is filled diagonal by diagonal: pass 0 sets m[i,i] = 0, pass 1 computes the entries for chains of two matrices, pass 2 for chains of three, and so on. For the example above (M1: 20x10, M2: 10x50, M3: 50x5, M4: 5x30):

         j=1    j=2    j=3    j=4
  i=1     0    10000   3500   6500
  i=2            0     2500   4000
  i=3                    0    7500
  i=4                           0

m[1,4] = 6500 contains the value of the optimal solution.
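A Python sketch of this bottom-up table fill (the function name is my own), with d = [20, 10, 50, 5, 30] so that Mi has dimension d[i-1] x d[i]:

def matrix_chain_order(d):
    n = len(d) - 1                                  # number of matrices
    m = [[0] * (n + 1) for _ in range(n + 1)]       # m[i][j] for 1 <= i <= j <= n
    for length in range(2, n + 1):                  # pass: chains of `length` matrices
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = min(m[i][k] + m[k + 1][j] + d[i - 1] * d[k] * d[j]
                          for k in range(i, j))
    return m

m = matrix_chain_order([20, 10, 50, 5, 30])
print(m[1][4])   # 6500, the optimal cost from the table above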

Example 3. Optimal Polygon Triangulation Given a convex polygon P = <v0,v1,…,vn-1>, a chord vivj divides P into two polygons <vi,vi+1,…,vj> and <vj,vj+1,…,vi> (assume vn = v0, or more generally, vk = v(k mod n)). [Figure: a convex polygon on vertices v0,…,v8 with a chord.]

Example 3. Optimal Polygon Triangulation Given a convex polygon P = <v0,v1,…,vn-1>, it can always be divided into n-2 non-overlapping triangles using n-3 chords.

Example 3. Optimal Polygon Triangulation Given a convex polygon P = <v0,v1,…,vn-1>, there can be a lot of triangulations (in fact, an exponential number of them).

Example 3. Optimal Polygon Triangulation Given a convex polygon P = <v0,v1,…,vn-1>, we want to compute an optimal triangulation. Optimal with respect to what?

Example 3. Optimal Polygon Triangulation Given a convex polygon P = <v0,v1,…,vn-1>, we want to compute a triangulation whose weight is minimized. The weight of a triangulation is the sum of the weights of all its triangles, and the weight of a triangle is the sum of its 3 edge lengths. For example, the weight of ∆v2v4v6 is |v2v4| + |v4v6| + |v2v6|.

Example 3. Optimal Polygon Triangulation Dynamic programming solution: Let t[i,j] be the weight of an optimal triangulation of the sub-polygon <vi-1,vi,…,vj>.
t[i,i] = 0
t[i,j] = min over i ≤ k < j of { t[i,k] + t[k+1,j] + w(∆vi-1vkvj) }, for i < j
This is almost identical to the matrix chain multiplication problem! [Figure: sub-polygon <vi-1,…,vj> split by the triangle ∆vi-1vkvj into <vi-1,…,vk> and <vk,…,vj>.]
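Here is a Python sketch of this recurrence (my own illustration, not from the slides), assuming the vertices are given as 2-D coordinates and the weight of a triangle is the sum of its edge lengths:

from math import dist

def optimal_triangulation(v):
    """v: vertices of a convex polygon in order; returns the minimum total weight,
    where t[i][j] covers the sub-polygon <v[i-1], v[i], ..., v[j]>."""
    n = len(v)
    def w(a, b, c):                          # weight of triangle v[a] v[b] v[c]
        return dist(v[a], v[b]) + dist(v[b], v[c]) + dist(v[a], v[c])
    t = [[0.0] * n for _ in range(n)]
    for gap in range(1, n - 1):              # sub-polygons of increasing size
        for i in range(1, n - gap):
            j = i + gap
            t[i][j] = min(t[i][k] + t[k + 1][j] + w(i - 1, k, j)
                          for k in range(i, j))
    return t[1][n - 1]

# Hypothetical example: a unit square has two triangulations of equal weight,
# the perimeter plus twice the diagonal = 4 + 2*sqrt(2).
print(optimal_triangulation([(0, 0), (1, 0), (1, 1), (0, 1)]))   # about 6.8284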

Example 4. Longest Common Subsequence X = <A,B,C,B,D>, Y = <B,A,D,C>; then <B,C> is a common subsequence of X and Y. The problem has applications in bioinformatics and communications.

Example 4. Longest Common Subsequence Given a sequence X = <x1,x2,…,xm>, another sequence Z = <z1,z2,…,zk> is a subsequence of X if there exists a strictly increasing sequence of indices <i1,i2,…,ik> such that xij = zj for j = 1,2,…,k. Example. X = <A,B,C,B,D>; then Z = <A,D> is a subsequence of X: choose i1 = 1 and i2 = 5, so that xi1 = z1 = A and xi2 = z2 = D. By definition, then, an LCS of X and Y is a common subsequence of X and Y whose length is largest.
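The definition is easy to turn into a one-pass check; the following small Python helper (hypothetical, for illustration) scans X from left to right looking for the symbols of Z in order:

def is_subsequence(z, x):
    it = iter(x)
    # `c in it` advances the iterator past the match, so the chosen indices increase
    return all(c in it for c in z)

print(is_subsequence("AD", "ABCBD"))   # True  (choose i1 = 1, i2 = 5)
print(is_subsequence("DA", "ABCBD"))   # False (D never appears before A)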

Example 4. Longest Common Subsequence Example. X = <A,B,C,B,D>, Y = <B,A,C,A,D>; then Z = <A,C,D> is an LCS of X and Y. Can you find another LCS in this case? Z' = <B,C,D> is another LCS, so the longest common subsequence is not unique!

Example 4. Longest Common Subsequence Dynamic programming solution. (For simplicity, let's say we are only interested in finding the length.)
- Define a table c[-,-].
- c[i,j] stores the length of an LCS of the sequences X[1..i] and Y[1..j].
c[i,j] = 0, if i = 0 or j = 0
c[i,j] = c[i-1,j-1] + 1, if i,j > 0 and xi = yj
c[i,j] = max{ c[i-1,j], c[i,j-1] }, if i,j > 0 and xi ≠ yj

Example 4. Longest Common Subsequence The table is filled row by row for X = <A,C,B,D> and Y = <B,A,C,D>:

m ← length[X]
n ← length[Y]
for i = 1 to m do c[i,0] ← 0
for j = 0 to n do c[0,j] ← 0
for i = 1 to m do
  for j = 1 to n do
    if xi = yj then c[i,j] ← c[i-1,j-1] + 1
    else if c[i-1,j] ≥ c[i,j-1] then c[i,j] ← c[i-1,j]
    else c[i,j] ← c[i,j-1]

         j=1   j=2   j=3   j=4
  i=1     0     1     1     1
  i=2     0     1     2     2
  i=3     1     1     2     2
  i=4     1     1     2     3

c[4,4] = 3 is the length of an LCS of X and Y (for example, <A,C,D>).
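A direct Python translation (an illustrative sketch, not code from the slides) of the length-only computation:

def lcs_length(x, y):
    m, n = len(x), len(y)
    # c[i][j] = length of an LCS of x[0:i] and y[0:j]; row 0 and column 0 stay 0
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    return c[m][n]

print(lcs_length("ACBD", "BACD"))    # 3, matching c[4,4] in the table above
print(lcs_length("ABCBD", "BACAD"))  # 3, e.g. <A,C,D> or <B,C,D> from the earlier example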