Chapter 15: Dynamic Programming III


Subsequence of a String Let S = s1s2…sm be a string of length m. Any string of the form si1si2…sik, with i1 < i2 < … < ik, is a subsequence of S. E.g., if S = farmers, then fame, arm, mrs, and farmers are some of the subsequences of S.

Longest Common Subsequence Let S and T be two strings. If a string is both a subsequence of S and a subsequence of T, it is a common subsequence of S and T. In addition, if it is the longest possible one, it is a longest common subsequence.

Longest Common Subsequence E.g., S = algorithms, T = logarithms. Then aim, lots, ohms, grit, are some of the common subsequences of S and T. Longest common subsequences: lorithms, lgrithms.

Longest Common Subsequence Let S = s1s2…sm be a string of length m. Let T = t1t2…tn be a string of length n. Can we quickly find a longest common subsequence (LCS) of S and T?

Optimal Substructure Let X = x1x2…xk be an LCS of S1,i = s1s2…si and T1,j = t1t2…tj. Lemma: If si = tj, then xk = si = tj, and x1x2…xk-1 must be an LCS of S1,i-1 and T1,j-1. If si ≠ tj, then X must be either (i) an LCS of S1,i and T1,j-1, or (ii) an LCS of S1,i-1 and T1,j.

Optimal Substructure Let leni,j = length of an LCS of S1,i and T1,j. Lemma: For any i, j ≥ 1, if si = tj, leni,j = leni-1,j-1 + 1; if si ≠ tj, leni,j = max { leni,j-1, leni-1,j }.

Length of LCS Define a function Compute_L(i,j) as follows:
Compute_L(i, j) /* Finding leni,j */
1. if (i == 0 or j == 0) return 0; /* base case */
2. if (si == tj) return Compute_L(i-1, j-1) + 1;
3. else return max { Compute_L(i-1, j), Compute_L(i, j-1) };
Compute_L(m, n) runs in O(2^(m+n)) time.

Overlapping Subproblems To speed up, observe that Compute_L(i, j-1) and Compute_L(i-1, j) share a common subproblem: Compute_L(i-1, j-1). In fact, our recursive algorithm performs many redundant computations! Question: Can we avoid this?

Bottom-Up Approach Let us create a 2D table L to store all the leni,j values once they are computed.
BottomUp_L( ) /* Finding the LCS length */
1. For all i and j, set L[i,0] = L[0,j] = 0;
2. for (i = 1, 2, …, m) Compute L[i,j] for all j; // Based on L[i-1,j-1], L[i-1,j], L[i,j-1]
3. return L[m,n];
Running Time = Θ(mn)
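As a concrete illustration (not part of the original slides), the bottom-up table-filling step can be sketched in Python; the function name lcs_length is our own choice:

```python
def lcs_length(S, T):
    m, n = len(S), len(T)
    # L[i][j] = length of an LCS of S[:i] and T[:j]; row 0 and column 0 stay 0
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if S[i-1] == T[j-1]:
                L[i][j] = L[i-1][j-1] + 1
            else:
                L[i][j] = max(L[i-1][j], L[i][j-1])
    return L[m][n]
```

On the slides' example, lcs_length("algorithms", "logarithms") returns 8, matching the length of lorithms.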

[Slides: an example run of BottomUp_L, filling the table L row by row (after Step 1, then Step 2 with i = 1, 2, 3, 4) against the column string D O R M I T Y; the final entry L[m,n] = 4. Subsequent slides augment each entry with extra information, and the last slide shows an LCS obtained by tracing this information back from L[m,n].]

Computing the length of an LCS
LCS_LENGTH(X, Y)
1  m ← X.length
2  n ← Y.length
3  for i ← 1 to m
4      c[i, 0] ← 0
5  for j ← 1 to n
6      c[0, j] ← 0
7  for i ← 1 to m
8      for j ← 1 to n

Computing the length of an LCS
9           if xi == yj
10              c[i, j] ← c[i-1, j-1] + 1
11              b[i, j] ← “↖”
12          elseif c[i-1, j] ≥ c[i, j-1]
13              c[i, j] ← c[i-1, j]
14              b[i, j] ← “↑”
15          else c[i, j] ← c[i, j-1]
16              b[i, j] ← “←”
17  return c and b

Remarks Again, a slight change in the algorithm allows us to obtain a particular LCS. Also, we can make minor changes to the recursive algorithm to obtain a memoized version (whose running time is O(mn)).
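One possible memoized version of the recursive algorithm (our own sketch, using Python's functools.lru_cache in place of a hand-built table):

```python
from functools import lru_cache

def lcs_length_memo(S, T):
    @lru_cache(maxsize=None)           # cache each (i, j) so it is solved once
    def L(i, j):
        if i == 0 or j == 0:           # base case, as in Compute_L
            return 0
        if S[i-1] == T[j-1]:
            return L(i-1, j-1) + 1
        return max(L(i-1, j), L(i, j-1))
    return L(len(S), len(T))
```

Each of the (m+1)(n+1) subproblems is computed at most once, giving the O(mn) bound.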

Writing a Translation Program In real life, different words may be searched with different frequencies. E.g., apple may be searched more often than pizza. Also, there may be different frequencies for the unsuccessful searches. E.g., we may unluckily search for a word in the range (hotdog, pizza) more often than in the range (apple, banana).

Suppose your friend in Google gives you the probabilities of what a search will be:
[Table: one probability for each successful search (= apple, = banana, = burger, = hotdog, = pizza, = spaghetti) and one for each range of unsuccessful searches (< apple, (apple, banana), (banana, burger), (burger, hotdog), (hotdog, pizza), (pizza, spaghetti), > spaghetti); the listed values include 0.21, 0.18, 0.12, 0.11, 0.10, 0.07, 0.05, 0.04, 0.02, 0.01.]

Given these probabilities, we may want words that are searched more frequently to be nearer the root of the search tree.
[Figure: a binary search tree on the six words apple, banana, burger, hotdog, pizza, spaghetti.] This tree has better expected performance.

Expected Search Time To handle unsuccessful searches, we can modify the search tree slightly (by adding dummy leaves), and define the expected search time as follows: Let k1 < k2 < … < kn denote the n keys, which correspond to the internal nodes. Let d0, d1, d2, …, dn be dummy keys for the ranges of the unsuccessful searches; the dummy keys correspond to leaves.

[Figure: the search tree of Page 28 after modification, with internal nodes k1, …, k6 and dummy leaves d0, …, d6.]

Search Time Lemma: Based on the modified search tree: when we search for a word ki, search time = node-depth(ki) when we search for a word in range dj, search time = node-depth(dj)

Expected Search Time Let pi = Pr( ki is searched ). Let qj = Pr( a word in dj is searched ). So, Σi pi + Σj qj = 1. Expected search time = Σi pi · node-depth(ki) + Σj qj · node-depth(dj)
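To make the definition concrete, here is a small sketch (our own representation, not from the slides) that computes the expected search time of a modified tree. Internal nodes are tuples ('k', i, left, right) and dummy leaves are ('d', j); p and q are dicts of probabilities.

```python
def expected_search_time(root, p, q):
    # Sum p[i] * depth(k_i) over keys plus q[j] * depth(d_j) over dummy leaves.
    total = 0.0
    stack = [(root, 0)]                 # (node, node-depth), root has depth 0
    while stack:
        node, depth = stack.pop()
        if node[0] == 'd':              # dummy leaf d_j
            total += q[node[1]] * depth
        else:                           # internal key k_i with two children
            _, i, left, right = node
            total += p[i] * depth
            stack.append((left, depth + 1))
            stack.append((right, depth + 1))
    return total
```

E.g., a single key k1 with dummy children d0 and d1, where p1 = 0.6 and q0 = q1 = 0.2, has expected search time 0.6·0 + 0.2·1 + 0.2·1 = 0.4.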

15.5 Optimal Binary Search Trees
[Figure: two binary search trees on the same keys; one has expected cost 2.80, and the other, with expected cost 2.75, is optimal.]

Optimal Binary Search Tree Question: Given the probabilities pi and qj, can we construct a binary search tree whose expected search time is minimized? Such a search tree is called an Optimal Binary Search Tree

Optimal Substructure Let T = optimal BST for the keys ( ki, ki+1, … , kj; di-1, di, … , dj ). Let L and R be its left and right subtrees. Lemma: Suppose kr is the root of T. Then, L must be an optimal BST for the keys (ki , ki+1, … , kr-1; di-1, di, … , dr-1) R must be an optimal BST for the keys (kr+1, kr+2, … , kj; dr, dr+1, … , dj)

Optimal Substructure Let Ei,j = expected time spent with the keys ( ki, ki+1, …, kj; di-1, di, …, dj ) in an optimal BST. Let wi,j = Σs=i to j ps + Σt=i-1 to j qt = sum of the probabilities of the keys ( ki, ki+1, …, kj; di-1, di, …, dj )

Optimal Substructure Lemma: For any j ≥ i, Ei,j = minr { Ei,r-1 + Er+1,j + wi,j }
[Figure: a tree rooted at kr; the root level contributes wi,j, the left subtree contributes Ei,r-1, and the right subtree contributes Er+1,j.]

Optimal Binary Search Tree Define a function Compute_E(i,j) as follows:
Compute_E(i, j) /* Finding Ei,j */
1. if (i == j+1) return qj; /* Exp time with key dj */
2. min = ∞;
3. for (r = i, i+1, …, j) {
       g = Compute_E(i, r-1) + Compute_E(r+1, j) + wi,j;
       if (g < min) min = g;
   }
4. return min;

Optimal Binary Search Tree Question: We want Compute_E(1,n). What is its running time? Similar to Matrix-Chain Multiplication, the recursive function runs in Ω(3^n) time. In fact, it examines each possible binary search tree at most once, so the running time is O(C(2n-2, n-1)/n)  [a Catalan number].

Overlapping Subproblems Here, we can see that Compute_E(i,j) and Compute_E(i,j+1) share many COMMON subproblems: Compute_E(i,i+1), …, Compute_E(i,j-1). So, in our recursive algorithm, there are many redundant computations! Question: Can we avoid this?

Bottom-Up Approach Let us create a 2D table E to store all Ei,j values once they are computed Let us also create a 2D table W to store all wi,j We first compute all entries in W. Next, we compute Ei,j for j-i = 0,1,2,…,n-1

Bottom-Up Approach
BottomUp_E( ) /* Finding E1,n */
1. Fill all entries of W;
2. for j = 0, 1, 2, …, n, set E[j+1, j] = qj;
3. for (length = 0, 1, 2, …, n-1)
       Compute E[i, i+length] for all i; // From W and entries E[x,y] on fewer keys
4. return E[1,n];
Running Time = Θ(n³)
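A Python sketch of BottomUp_E (our own code, not the slides'). Here p is 1-indexed by padding p[0], q[j] holds the dummy-key probabilities, and the recurrence Ei,j = minr { Ei,r-1 + Er+1,j + wi,j } is applied in order of increasing length:

```python
def optimal_bst(p, q):
    # p[1..n]: key probabilities (p[0] is a pad); q[0..n]: dummy-key probabilities
    n = len(p) - 1
    INF = float("inf")
    e = [[0.0] * (n + 1) for _ in range(n + 2)]   # e[i][j], rows 1..n+1
    w = [[0.0] * (n + 1) for _ in range(n + 2)]
    for i in range(1, n + 2):
        e[i][i-1] = q[i-1]                        # base case: dummy key alone
        w[i][i-1] = q[i-1]
    for length in range(1, n + 1):                # subproblems on 'length' keys
        for i in range(1, n - length + 2):
            j = i + length - 1
            w[i][j] = w[i][j-1] + p[j] + q[j]
            e[i][j] = INF
            for r in range(i, j + 1):             # try each key k_r as root
                t = e[i][r-1] + e[r+1][j] + w[i][j]
                if t < e[i][j]:
                    e[i][j] = t
    return e[1][n]
```

On the textbook's five-key example distribution (p = 0.15, 0.10, 0.05, 0.10, 0.20 and q = 0.05, 0.10, 0.05, 0.05, 0.05, 0.10), this returns 2.75, matching the "cost: 2.75, optimal" figure above.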

The tables e[i,j], w[i,j], and root[i,j] computed by OPTIMAL-BST on the key distribution.

Remarks Again, a slight change in the algorithm allows us to get the exact structure of the optimal binary search tree. Also, we can make minor changes to the recursive algorithm and obtain a memoized version (whose running time is O(n³)).

Homework Use the dynamic programming approach to write a program to find the maximum sum in any contiguous sub-array of a given array of n real values. Show the time complexity of your algorithm. (Due Nov. 23) Practice at home: exercises 15.4-4, 15.4-5, 15.5-2, 15.5-3

Brainstorm There is a staircase with n steps. Your friend, Jack, wants to count how many different ways he can walk up this staircase. Because Jack is quite tall, in one move, he can choose to walk up either 1 step, 2 steps, or 3 steps. Let Fk denote the number of ways Jack can walk up a staircase with k steps. So, F1 = 1, F2 = 2, F3 = 4, and F4 = 7. Derive a recurrence for Fk, and show that Fn can be computed in O(n) time.
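The O(n) computation can be sketched as follows (our code; the recurrence Fk = Fk-1 + Fk-2 + Fk-3 is one natural answer to the brainstorm):

```python
def count_ways(n):
    # F[k] = number of ways to climb k steps taking 1, 2, or 3 at a time
    if n == 0:
        return 1                      # one way: take no steps
    F = [0] * (n + 1)
    F[0] = 1
    for k in range(1, n + 1):
        F[k] = F[k-1]
        if k >= 2:
            F[k] += F[k-2]
        if k >= 3:
            F[k] += F[k-3]
    return F[n]
```

This reproduces F1 = 1, F2 = 2, F3 = 4, F4 = 7 from the slide.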

Brainstorm There is a staircase with n stages, and on each stage there is a coupon to your favorite restaurant. Each coupon has an associated value which may be different. You can climb up 1, 2, or 3 stages in a step. Since you are in a hurry, you need to reach the top of the stair in at most L steps, where L ≤ n. When you visit a particular stage, you can collect the coupon on that stage. Find a way to climb the stair within L steps so that you can collect coupons with maximum total value. Your algorithm should run in O(n2) time.
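One possible DP formulation (our sketch, with hypothetical names): let dp[k][i] be the maximum coupon value collectible when standing on stage i after exactly k steps; since k ≤ L ≤ n, filling the table takes O(nL) ⊆ O(n²) time.

```python
def max_coupons(coupons, L):
    # coupons[i-1] = coupon value on stage i (stages 1..n); at most L steps of 1-3 stages
    n = len(coupons)
    NEG = float("-inf")
    # dp[k][i] = max value collected standing on stage i after exactly k steps
    dp = [[NEG] * (n + 1) for _ in range(L + 1)]
    dp[0][0] = 0                                  # start on the ground, no coupons yet
    for k in range(1, L + 1):
        for i in range(1, n + 1):
            best_prev = max(dp[k-1][j] for j in range(max(0, i - 3), i))
            if best_prev > NEG:
                dp[k][i] = best_prev + coupons[i-1]
    # must finish on the top stage n within L steps
    return max(dp[k][n] for k in range(1, L + 1))
```

E.g., with coupon values [3, 1, 5, 2] and L = 2, the best plan climbs 3 stages then 1, collecting 5 + 2 = 7.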

Brainstorm In the Cheaters' Village there are 50 couples, and every husband is unfaithful. Every wife knows about all the unfaithful men except her own husband, and the wives never exchange information with one another. The village has a rule: if a wife can prove that her own husband is unfaithful, she must kill him at 12:00 that night. One day the Queen, who is universally acknowledged never to lie, visits the village and announces that at least one husband is unfaithful. What will happen?