
1 Dynamic Programming 2012/11/20

P.2 Dynamic Programming (DP) Dynamic programming is typically applied to optimization problems. Problems that can be solved by dynamic programming satisfy the principle of optimality.

3 Principle of optimality Suppose that in solving a problem, we have to make a sequence of decisions D_1, D_2, ..., D_{n-1}, D_n. If this sequence of decisions is optimal, then the last k decisions, 1 <= k <= n, must be optimal under the condition caused by the first n-k decisions.

4 Dynamic method vs. greedy method Comparison: in the greedy method, every decision is made locally optimally, in the hope that these locally optimal choices will add up to a globally optimal solution. The next two slides show when this works and when it does not.

5 The Greedy Method E.g. find a shortest path from v_0 to v_3 (graph shown on the slide). The greedy method can solve this problem: the greedily chosen path is in fact the shortest path, of total length 7.

6 The Greedy Method E.g. find a shortest path from v_0 to v_3 in the multistage graph. Greedy method: v_0 -> v_{1,2} -> v_{2,1} -> v_3, length 23. Optimal: v_0 -> v_{1,1} -> v_{2,2} -> v_3, length 7. The greedy method does not work for this problem, because decisions at different stages influence one another.

7 Multistage graph A multistage graph G = (V, E) is a directed graph in which the vertices are partitioned into k >= 2 disjoint sets V_i, 1 <= i <= k. In addition, if (u, v) is an edge in E, then u in V_i and v in V_{i+1} for some i, 1 <= i < k. The sets V_1 and V_k are such that |V_1| = |V_k| = 1. The multistage graph problem is to find a minimum-cost path from s in V_1 to t in V_k. Each set V_i defines a stage in the graph.

8 Greedy Method vs. Multistage graph E.g. the greedy method fails on this case: it picks S -> A -> D -> T with cost 1 + 4 + 18 = 23. The shortest path is S -> C -> F -> T with cost 5 + 2 + 2 = 9.

9 Dynamic Programming Dynamic programming approach: d(S, T) = min{1+d(A, T), 2+d(B, T), 5+d(C, T)}

10 Dynamic Programming d(A, T) = min{4+d(D, T), 11+d(E, T)} = min{4+18, 11+13} = 22.

11 Dynamic Programming
d(B, T) = min{9+d(D, T), 5+d(E, T), 16+d(F, T)} = min{9+18, 5+13, 16+2} = 18.
d(C, T) = min{2+d(F, T)} = 2+2 = 4.
d(S, T) = min{1+d(A, T), 2+d(B, T), 5+d(C, T)} = min{1+22, 2+18, 5+4} = 9.
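This backward, stage-by-stage computation is easy to express with memoization. Below is a minimal Python sketch (the dictionary layout and the function name d are my own; the edge weights are exactly the ones used in the example above):

```python
from functools import lru_cache

# Edge weights of the example multistage graph, taken from the slides.
GRAPH = {
    'S': {'A': 1, 'B': 2, 'C': 5},
    'A': {'D': 4, 'E': 11},
    'B': {'D': 9, 'E': 5, 'F': 16},
    'C': {'F': 2},
    'D': {'T': 18},
    'E': {'T': 13},
    'F': {'T': 2},
    'T': {},
}

@lru_cache(maxsize=None)
def d(u):
    """Shortest distance from vertex u to the sink T."""
    if u == 'T':
        return 0
    # d(u) = min over successors v of (weight(u, v) + d(v)).
    # Memoization ensures each subproblem d(v) is solved only once.
    return min(w + d(v) for v, w in GRAPH[u].items())

print(d('S'))  # 9, matching d(S, T) computed above
```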

12 Save computation For example, we never calculate the length of the path S -> B -> D -> T as a whole (namely d(S,B) + d(B,D) + d(D,T)), because while computing d(B, T) we already found d(B,E) + d(E,T) = 5 + 13 < d(B,D) + d(D,T) = 9 + 18, so every path through B -> D is discarded at that stage. There are more examples like this. Compare with the brute-force method, which evaluates every complete path from scratch.

13 The advantages of the dynamic programming approach:
- It avoids exhaustively searching the entire solution space (it eliminates impossible solutions early and saves computation).
- It solves the problem stage by stage, systematically.
- It stores intermediate solutions in a table (array) so that they can be retrieved in later stages of the computation.

14 Comment If a problem can be described by a multistage graph, then it can be solved by dynamic programming.

15 The longest common subsequence (LCS or LCSS) problem A sequence of symbols: A = b a c a d. A subsequence of A is obtained by deleting 0 or more symbols (not necessarily consecutive) from A, e.g., ad, ac, bac, acad, bacad, bcd. Common subsequences of A = b a c a d and B = a c c b a d c b include: ad, ac, bac, acad. The longest common subsequence of A and B is a c a d.

P.16 DNA Matching DNA = {A|C|G|T}* S1 = ACCGGTCGAGTGCGGCCGAAGCCGGCCGAA S2 = GTCGTTCGGAATGCCGTTGCTGTAAA Are S1 and S2 similar DNA strands? The question can be answered by computing their longest common subsequence.

Networked virtual environments (NVEs) Virtual worlds full of numerous virtual objects that simulate a variety of real-world scenes, allowing multiple geographically distributed users to assume avatars and interact with one another concurrently over network connections. E.g., MMOGs: World of Warcraft (WoW), Second Life (SL).

Avatar Path Clustering Because of similar personalities, interests, or habits, users may exhibit similar behavior patterns, which in turn lead to similar avatar paths within the virtual world. We would like to group similar avatar paths into clusters and find a representative path (RP) for each.

19 How similar are two paths in the Freebies island of Second Life?

20 LCSS-DC: a path is transformed into a sequence of cells. SeqA: C60.C61.C62.C63.C55.C47.C39.C31.C32

21 LCSS-DC: similar path thresholds.
SeqA: C60.C61.C62.C63.C55.C47.C39.C31.C32
SeqB: C60.C61.C62.C54.C62.C63.C64
LCSS_AB: C60.C61.C62.C63

P.22 Longest-common-subsequence problem: We are given two sequences X = <x_1, x_2, ..., x_m> and Y = <y_1, y_2, ..., y_n> and wish to find a maximum-length common subsequence of X and Y. We define the prefixes X_i = <x_1, x_2, ..., x_i> and Y_j = <y_1, y_2, ..., y_j>.

23 Brute-force solution: enumerate every subsequence of one sequence and check it against the other: m * 2^n = O(2^n) or n * 2^m = O(2^m) time.
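To make the exponential cost concrete, here is a brute-force sketch in Python (the helper names are mine): it enumerates subsequences of the first string, longest first, and scans the second string for each, which is the n * 2^m-style behavior described above.

```python
from itertools import combinations

def is_subsequence(s, t):
    """True if s can be obtained by deleting symbols from t (one O(|t|) scan)."""
    it = iter(t)
    return all(ch in it for ch in s)

def lcs_brute_force(x, y):
    """Try all 2^len(x) subsequences of x, longest first; exponential time."""
    for length in range(len(x), -1, -1):
        for idx in combinations(range(len(x)), length):
            candidate = ''.join(x[i] for i in idx)
            if is_subsequence(candidate, y):
                return candidate
    return ''

print(lcs_brute_force('bacad', 'accbadcb'))  # 'acad', as on slide 15
```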

P.24 A recursive solution to the subproblem Define c[i, j] as the length of an LCS of the prefixes X_i and Y_j. Then

c[i, j] = 0                                 if i = 0 or j = 0
c[i, j] = c[i-1, j-1] + 1                   if i, j > 0 and x_i = y_j
c[i, j] = max{ c[i-1, j], c[i, j-1] }       if i, j > 0 and x_i != y_j
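The recurrence translates directly into a top-down memoized function. A minimal sketch, assuming 1-based indices as in the recurrence (Python strings are 0-based, hence the i-1/j-1 lookups); the deck's bottom-up version follows on the next slides:

```python
from functools import lru_cache

def lcs_length(x, y):
    @lru_cache(maxsize=None)
    def c(i, j):
        if i == 0 or j == 0:           # base case: an empty prefix
            return 0
        if x[i - 1] == y[j - 1]:       # x_i = y_j: extend the common subsequence
            return c(i - 1, j - 1) + 1
        return max(c(i - 1, j), c(i, j - 1))
    return c(len(x), len(y))

print(lcs_length('bacad', 'accbadcb'))  # 4, the length of 'acad'
```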

P.25 Computing the length of an LCS

LCS_LENGTH(X, Y)
1  m <- length[X]
2  n <- length[Y]
3  for i <- 1 to m
4      do c[i, 0] <- 0
5  for j <- 0 to n
6      do c[0, j] <- 0

P.26
 7  for i <- 1 to m
 8      do for j <- 1 to n
 9             do if x_i = y_j
10                    then c[i, j] <- c[i-1, j-1] + 1
11                         b[i, j] <- "↖"
12                    else if c[i-1, j] >= c[i, j-1]
13                        then c[i, j] <- c[i-1, j]
14                             b[i, j] <- "↑"
15                        else c[i, j] <- c[i, j-1]
16                             b[i, j] <- "←"
17  return c and b

P.27 Complexity: O(mn), rather than the O(2^m) or O(2^n) of the brute-force method.

P.28 PRINT_LCS

PRINT_LCS(b, X, i, j)
1  if i = 0 or j = 0
2      then return
3  if b[i, j] = "↖"
4      then PRINT_LCS(b, X, i-1, j-1)
5           print x_i
6  else if b[i, j] = "↑"
7      then PRINT_LCS(b, X, i-1, j)
8  else PRINT_LCS(b, X, i, j-1)

Complexity: O(m+n). Call PRINT_LCS(b, X, length[X], length[Y]) to print the LCS.
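For reference, a runnable Python port of LCS_LENGTH and PRINT_LCS (a sketch: I encode the three arrows as the strings 'diag', 'up', and 'left' in the b table; everything else follows the pseudocode):

```python
def lcs_tables(x, y):
    m, n = len(x), len(y)
    c = [[0] * (n + 1) for _ in range(m + 1)]   # c[i][j]: LCS length of prefixes
    b = [[''] * (n + 1) for _ in range(m + 1)]  # b[i][j]: arrow for reconstruction
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
                b[i][j] = 'diag'                # the "↖" arrow
            elif c[i - 1][j] >= c[i][j - 1]:
                c[i][j] = c[i - 1][j]
                b[i][j] = 'up'                  # the "↑" arrow
            else:
                c[i][j] = c[i][j - 1]
                b[i][j] = 'left'                # the "←" arrow
    return c, b

def print_lcs(b, x, i, j):
    if i == 0 or j == 0:
        return
    if b[i][j] == 'diag':
        print_lcs(b, x, i - 1, j - 1)
        print(x[i - 1], end='')
    elif b[i][j] == 'up':
        print_lcs(b, x, i - 1, j)
    else:
        print_lcs(b, x, i, j - 1)

x, y = 'bacad', 'accbadcb'
c, b = lcs_tables(x, y)
print_lcs(b, x, len(x), len(y))  # prints: acad
print()
```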

P.29 Matrix-chain multiplication How to compute the product A_1 A_2 ... A_n, where each A_i is a matrix? Example: for n = 3, (A_1 A_2) A_3 and A_1 (A_2 A_3) yield the same product but can have very different costs (see P.32).

P.30 MATRIX MULTIPLY

MATRIX_MULTIPLY(A, B)
1  if columns[A] != rows[B]
2      then error "incompatible dimensions"
3      else for i <- 1 to rows[A]
4               do for j <- 1 to columns[B]
5                      do C[i, j] <- 0
6                         for k <- 1 to columns[A]
7                             do C[i, j] <- C[i, j] + A[i, k] * B[k, j]
8  return C

P.31 Complexity: Let A be a p x q matrix and B be a q x r matrix. Then computing A x B takes pqr scalar multiplications, i.e., O(pqr) time.

P.32 Example: suppose A_1 is a p x q matrix, A_2 a q x r matrix, and A_3 an r x s matrix. Then (A_1 A_2) A_3 takes pqr + prs scalar multiplications, whereas A_1 (A_2 A_3) takes qrs + pqs. The two orders compute the same product at possibly very different costs.

P.33 The matrix-chain multiplication problem: Given a chain <A_1, A_2, ..., A_n> of n matrices, where for i = 1, 2, ..., n, matrix A_i has dimension p_{i-1} x p_i, fully parenthesize the product A_1 A_2 ... A_n in a way that minimizes the number of scalar multiplications. A product of matrices is fully parenthesized if it is either a single matrix, or a product of two fully parenthesized matrix products, surrounded by parentheses.

P.34 Counting the number of parenthesizations: Let P(n) denote the number of alternative parenthesizations of a chain of n matrices. Then

P(1) = 1
P(n) = sum_{k=1}^{n-1} P(k) P(n-k)   for n >= 2

This recurrence generates the Catalan numbers: P(n) = C(n-1) = (1/n) * binom(2(n-1), n-1) = Omega(4^n / n^{3/2}), so checking all parenthesizations exhaustively is hopeless.
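A short sketch that tabulates P(n) from this recurrence (function name mine); the printed values are the Catalan numbers, growing exponentially:

```python
def num_parenthesizations(n):
    """P(n): number of full parenthesizations of a chain of n matrices."""
    p = [0] * (n + 1)
    p[1] = 1
    for m in range(2, n + 1):
        # The outermost split puts k matrices on the left, m-k on the right.
        p[m] = sum(p[k] * p[m - k] for k in range(1, m))
    return p[n]

for n in (1, 2, 3, 4, 6, 10, 15):
    print(n, num_parenthesizations(n))
# 1 1, 2 1, 3 2, 4 5, 6 42, 10 4862, 15 2674440 -- the Catalan numbers C(n-1)
```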

P.35 Step 1: The structure of an optimal parenthesization An optimal parenthesization of A_i A_{i+1} ... A_j splits the product between A_k and A_{k+1} for some i <= k < j, and the parenthesizations of the two subchains A_i ... A_k and A_{k+1} ... A_j must themselves be optimal (optimal substructure).

P.36 Step 2: A recursive solution Define m[i, j] = the minimum number of scalar multiplications needed to compute the matrix A_{i..j} = A_i A_{i+1} ... A_j; the goal is m[1, n]. Then

m[i, j] = 0                                                       if i = j
m[i, j] = min_{i <= k < j} { m[i, k] + m[k+1, j] + p_{i-1} p_k p_j }   if i < j

P.37 Step 3: Computing the optimal costs Instead of computing the solution to the recurrence recursively, we compute the optimal cost with a tabular, bottom-up approach. The procedure uses an auxiliary table m[1..n, 1..n] for storing the m[i, j] costs and an auxiliary table s[1..n, 1..n] that records which index k achieved the optimal cost in computing m[i, j].

P.38 MATRIX_CHAIN_ORDER

MATRIX_CHAIN_ORDER(p)
 1  n <- length[p] - 1
 2  for i <- 1 to n
 3      do m[i, i] <- 0
 4  for l <- 2 to n
 5      do for i <- 1 to n - l + 1
 6             do j <- i + l - 1
 7                m[i, j] <- infinity
 8                for k <- i to j - 1
 9                    do q <- m[i, k] + m[k+1, j] + p_{i-1} p_k p_j
10                       if q < m[i, j]
11                           then m[i, j] <- q
12                                s[i, j] <- k
13  return m and s

Complexity: O(n^3).
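A Python port of MATRIX_CHAIN_ORDER (a sketch; the tables are kept 1-based by padding row and column 0, and p is the dimension vector, so A_i is p[i-1] x p[i]):

```python
import math

def matrix_chain_order(p):
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]  # m[i][j]: minimum scalar mults
    s = [[0] * (n + 1) for _ in range(n + 1)]  # s[i][j]: k of the optimal split
    for l in range(2, n + 1):                  # l = chain length
        for i in range(1, n - l + 2):
            j = i + l - 1
            m[i][j] = math.inf
            for k in range(i, j):
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j] = q
                    s[i][j] = k
    return m, s

p = [30, 35, 15, 5, 10, 20, 25]   # the dimensions from the example below
m, s = matrix_chain_order(p)
print(m[2][5])  # 7125, as worked out on slide P.41
print(m[1][6])  # 15125, the optimal cost for the whole chain
```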

P.39 Example: a chain of six matrices with dimensions A_1: 30 x 35, A_2: 35 x 15, A_3: 15 x 5, A_4: 5 x 10, A_5: 10 x 20, A_6: 20 x 25, i.e., p = (30, 35, 15, 5, 10, 20, 25).

P.40 The m and s tables computed by MATRIX_CHAIN_ORDER for n = 6. [tables shown as a figure on the slide]

P.41
m[2,5] = min{
  m[2,2] + m[3,5] + p_1 p_2 p_5 = 0 + 2500 + 35*15*20 = 13000,
  m[2,3] + m[4,5] + p_1 p_3 p_5 = 2625 + 1000 + 35*5*20 = 7125,
  m[2,4] + m[5,5] + p_1 p_4 p_5 = 4375 + 0 + 35*10*20 = 11375
} = 7125

P.42 PRINT_OPTIMAL_PARENS

PRINT_OPTIMAL_PARENS(s, i, j)
1  if i = j
2      then print "A_i"
3      else print "("
4           PRINT_OPTIMAL_PARENS(s, i, s[i, j])
5           PRINT_OPTIMAL_PARENS(s, s[i, j] + 1, j)
6           print ")"

Example: for the six-matrix chain above, the optimal parenthesization is ((A_1 (A_2 A_3)) ((A_4 A_5) A_6)).
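And a matching Python sketch of PRINT_OPTIMAL_PARENS, returning the parenthesization as a string rather than printing piece by piece; it continues from the matrix_chain_order sketch above:

```python
def optimal_parens(s, i, j):
    """Rebuild the optimal parenthesization of A_i..A_j from the s table."""
    if i == j:
        return f"A{i}"
    k = s[i][j]
    return "(" + optimal_parens(s, i, k) + optimal_parens(s, k + 1, j) + ")"

# Using the s table computed for p = [30, 35, 15, 5, 10, 20, 25]:
print(optimal_parens(s, 1, 6))  # ((A1(A2A3))((A4A5)A6))
```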

43 Q&A