Unit 4: Dynamic Programming


Course contents:
- Assembly-line scheduling
- Rod cutting
- Matrix-chain multiplication
- Longest common subsequence
- Optimal binary search trees
- Applications: optimal polygon triangulation, flip-chip routing, technology mapping for logic synthesis

Reading: Chapter 15

Unit 4, Spring 2013

Dynamic Programming (DP) vs. Divide-and-Conquer

Both solve problems by combining the solutions to subproblems.
- Divide-and-conquer algorithms partition a problem into independent subproblems, solve the subproblems recursively, and then combine their solutions to solve the original problem. They are inefficient if they solve the same subproblem more than once.
- Dynamic programming is applicable when the subproblems are not independent; DP solves each subproblem just once and stores the answer.

Assembly-line Scheduling

[Figure: a factory with two assembly lines; stations S1,1 … S1,n on line 1 and S2,1 … S2,n on line 2, with transfer arcs between the lines.]

An auto chassis enters an assembly line, has parts added at stations, and a finished auto exits at the end of the line.
- Si,j: the jth station on line i
- ai,j: the assembly time required at station Si,j
- ti,j: the transfer time from station Si,j to station j+1 of the other line
- ei (xi): the time to enter (exit) line i

Optimal Substructure

Objective: Determine which stations to choose to minimize the total manufacturing time for one auto.
- Brute force: Θ(2^n). Why? Each of the n stations can be taken from either line, giving 2^n possible routes.
- The problem is linearly ordered and cannot be rearranged => dynamic programming?
- Optimal substructure: If the fastest way through station S1,j goes through S1,j-1, then the chassis must have taken a fastest way from the starting point through S1,j-1.

Overlapping Subproblem: Recurrence

Overlapping subproblems: The fastest way through station S1,j is either through S1,j-1 and then S1,j, or through S2,j-1, a transfer to line 1, and then S1,j (and symmetrically for S2,j).
- fi[j]: the fastest time from the starting point through Si,j
  f1[j] = min(f1[j-1] + a1,j, f2[j-1] + t2,j-1 + a1,j) for j ≥ 2, with f1[1] = e1 + a1,1
  f2[j] = min(f2[j-1] + a2,j, f1[j-1] + t1,j-1 + a2,j) for j ≥ 2, with f2[1] = e2 + a2,1
- The fastest time all the way through the factory: f* = min(f1[n] + x1, f2[n] + x2)

Computing the Fastest Time

li[j]: the line number whose station j-1 is used in a fastest way through Si,j.

Fastest-Way(a, t, e, x, n)
 1. f1[1] = e1 + a1,1
 2. f2[1] = e2 + a2,1
 3. for j = 2 to n
 4.   if f1[j-1] + a1,j ≤ f2[j-1] + t2,j-1 + a1,j
 5.     f1[j] = f1[j-1] + a1,j
 6.     l1[j] = 1
 7.   else f1[j] = f2[j-1] + t2,j-1 + a1,j
 8.     l1[j] = 2
 9.   if f2[j-1] + a2,j ≤ f1[j-1] + t1,j-1 + a2,j
10.     f2[j] = f2[j-1] + a2,j
11.     l2[j] = 2
12.   else f2[j] = f1[j-1] + t1,j-1 + a2,j
13.     l2[j] = 1
14. if f1[n] + x1 ≤ f2[n] + x2
15.   f* = f1[n] + x1
16.   l* = 1
17. else f* = f2[n] + x2
18.   l* = 2

Running time? Linear time!!
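The pseudocode above can be transcribed into runnable form. A sketch (the function name and the list-based encoding of a, t, e, x are my own; indices are 0-based), using the two-line, six-station instance that appears on the example slide:

```python
# Fastest-Way in Python (0-based indices).
# a[i][j]: assembly time at station j on line i; t[i][j]: transfer time from
# station j on line i to station j+1 on the other line; e[i]/x[i]: entry/exit.
def fastest_way(a, t, e, x):
    n = len(a[0])
    f = [[0] * n for _ in range(2)]   # f[i][j]: fastest time through S(i,j)
    l = [[0] * n for _ in range(2)]   # l[i][j]: line used for station j-1
    f[0][0] = e[0] + a[0][0]
    f[1][0] = e[1] + a[1][0]
    for j in range(1, n):
        for i in (0, 1):
            other = 1 - i
            stay = f[i][j - 1] + a[i][j]
            switch = f[other][j - 1] + t[other][j - 1] + a[i][j]
            if stay <= switch:
                f[i][j], l[i][j] = stay, i
            else:
                f[i][j], l[i][j] = switch, other
    if f[0][n - 1] + x[0] <= f[1][n - 1] + x[1]:
        return f[0][n - 1] + x[0], 0, l
    return f[1][n - 1] + x[1], 1, l

# The 6-station instance from the example slide (f* = 38).
a = [[7, 9, 3, 4, 8, 4], [8, 5, 6, 4, 5, 7]]
t = [[2, 3, 1, 3, 4], [2, 1, 2, 2, 1]]
e, x = [2, 4], [3, 2]
best, line, l = fastest_way(a, t, e, x)
print(best)  # 38
```

Each station is visited once and does O(1) work, matching the slide's claim of linear time.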

Constructing the Fastest Way

Print-Station(l, n)
1. i = l*
2. print "line " i ", station " n
3. for j = n downto 2
4.   i = li[j]
5.   print "line " i ", station " j-1

Example output (6-station instance):
line 1, station 6
line 2, station 5
line 2, station 4
line 1, station 3
line 2, station 2
line 1, station 1

An Example

[Figure: the two-line, 6-station instance with its assembly, transfer, entry, and exit times.]

j      1   2   3   4   5   6
f1[j]  9  18  20  24  32  35
f2[j] 12  16  22  25  30  37

f* = 38, l* = 1

Dynamic Programming (DP)

Typically applied to optimization problems.

Generic approach:
- Calculate the solutions to all subproblems.
- Proceed from the small subproblems to the larger ones.
- Compute a subproblem based on previously computed results for smaller subproblems.
- Store the solution to each subproblem in a table and never recompute it.

Development of a DP algorithm:
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution bottom-up.
4. Construct an optimal solution from computed information (omitted if only the optimal value is required).

When to Use Dynamic Programming (DP)

DP computes a recurrence efficiently by storing partial results, so it is efficient only when the number of distinct partial results is small.
- Hopeless configurations: the n! permutations of an n-element set, the 2^n subsets of an n-element set, etc.
- Promising configurations: contiguous substrings of an n-character string, the n(n+1)/2 possible subtrees of a binary search tree, etc.

DP works best on objects that are linearly ordered and cannot be rearranged: linear assembly lines, matrices in a chain, characters in a string, points around the boundary of a polygon, points on a line/circle, the left-to-right order of leaves in a search tree, etc. Objects ordered left-to-right => smell DP?

Rod Cutting

Cut steel rods into pieces to maximize the revenue.
- Each cut is free; rod lengths are integral numbers of inches.
- Input: a length n and a table of prices pi, for i = 1, 2, …, n.
- Output: the maximum revenue obtainable for rods whose lengths sum to n, computed as the sum of the prices of the individual pieces.

length i   1  2  3  4  5  6  7  8  9  10
price pi   1  5  8  9 10 17 17 20 24 30

Example: for length i = 4, the max revenue is r4 = 5 + 5 = 10 (two pieces of length 2, beating the uncut price p4 = 9).

Objects are linearly ordered (and cannot be rearranged)??

Optimal Rod Cutting

length i        1  2  3  4  5  6  7  8  9  10
price pi        1  5  8  9 10 17 17 20 24 30
max revenue ri  1  5  8 10 13 17 18 22 25 30

If pn is large enough, an optimal solution might require no cuts at all, i.e., just leave the rod as one piece, n units long.

Solutions for the maximum revenue ri of length i:
- r1 = 1 from solution 1 = 1 (no cuts)
- r2 = 5 from solution 2 = 2 (no cuts)
- r3 = 8 from solution 3 = 3 (no cuts)
- r4 = 10 from solution 4 = 2 + 2
- r5 = 13 from solution 5 = 2 + 3
- r6 = 17 from solution 6 = 6 (no cuts)
- r7 = 18 from solution 7 = 1 + 6 or 7 = 2 + 2 + 3
- r8 = 22 from solution 8 = 2 + 6

Optimal Substructure

- To solve the original problem, solve subproblems of smaller sizes. The optimal solution to the original problem incorporates optimal solutions to those subproblems, and we may solve the subproblems independently.
- After making a cut, we have two subproblems. Example: max revenue r7 = 18 = r4 + r3 = (r2 + r2) + r3, or r1 + r6.
- Decomposition with only one subproblem: some cut gives a first piece of length i on the left and a remaining piece of length n - i on the right, so rn = max{pi + rn-i : 1 ≤ i ≤ n}.

Recursive Top-Down “Solution”

Cut-Rod(p, n)
1. if n == 0
2.   return 0
3. q = -∞
4. for i = 1 to n
5.   q = max(q, p[i] + Cut-Rod(p, n - i))
6. return q

Inefficient solution: Cut-Rod calls itself repeatedly, even on subproblems it has already solved (the recursion tree for Cut-Rod(p, 4) repeats calls such as Cut-Rod(p, 3), Cut-Rod(p, 1), and Cut-Rod(p, 0)), making T(n) = 2^n calls. Overlapping subproblems?
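A direct transcription of Cut-Rod into Python, as a sketch (the price list stores p[0] = 0 so that p[i] is the price of a piece of length i, matching the slides' 1-based table):

```python
# Naive Cut-Rod: correct, but exponential, because the same subproblems
# are re-solved over and over.
def cut_rod(p, n):
    if n == 0:
        return 0
    q = float("-inf")
    for i in range(1, n + 1):
        q = max(q, p[i] + cut_rod(p, n - i))
    return q

p = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]  # price table from the slides
print(cut_rod(p, 4))  # 10 (= 5 + 5, two pieces of length 2)
```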

Top-Down DP: Cut-Rod with Memoization

Complexity: O(n^2) time. Each subproblem (sizes 0, 1, …, n) is solved just once, and solving a subproblem of size n makes the for loop iterate n times.

Memoized-Cut-Rod(p, n)
1. let r[0..n] be a new array
2. for i = 0 to n
3.   r[i] = -∞
4. return Memoized-Cut-Rod-Aux(p, n, r)

Memoized-Cut-Rod-Aux(p, n, r)
1. if r[n] ≥ 0          // each overlapping subproblem is solved just once!!
2.   return r[n]
3. if n == 0
4.   q = 0
5. else q = -∞
6.   for i = 1 to n
7.     q = max(q, p[i] + Memoized-Cut-Rod-Aux(p, n - i, r))
8. r[n] = q
9. return q
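The memoized version as a Python sketch (the inner helper name and the use of None as the "unsolved" marker are my own choices; the pseudocode uses -∞):

```python
# Memoized-Cut-Rod: same recursion as the naive version, but each
# subproblem is solved once and cached in r.
def memoized_cut_rod(p, n):
    r = [None] * (n + 1)  # r[k] caches the best revenue for length k

    def aux(k):
        if r[k] is not None:          # already solved: just look it up
            return r[k]
        if k == 0:
            q = 0
        else:
            q = max(p[i] + aux(k - i) for i in range(1, k + 1))
        r[k] = q
        return q

    return aux(n)

p = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]
print(memoized_cut_rod(p, 10))  # 30
```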

Bottom-Up DP: Cut-Rod

Complexity: O(n^2) time. Sort the subproblems by size and solve the smaller ones first; when solving a subproblem, all the smaller subproblems it needs have already been solved.

Bottom-Up-Cut-Rod(p, n)
1. let r[0..n] be a new array
2. r[0] = 0
3. for j = 1 to n
4.   q = -∞
5.   for i = 1 to j
6.     q = max(q, p[i] + r[j - i])
7.   r[j] = q
8. return r[n]

Bottom-Up DP with Solution Construction

Extend the bottom-up approach to record not just optimal values but also optimal choices: save the size of the first piece cut off in an optimal solution for a problem of size j in s[j].

Extended-Bottom-Up-Cut-Rod(p, n)
 1. let r[0..n] and s[0..n] be new arrays
 2. r[0] = 0
 3. for j = 1 to n
 4.   q = -∞
 5.   for i = 1 to j
 6.     if q < p[i] + r[j - i]
 7.       q = p[i] + r[j - i]
 8.       s[j] = i
 9.   r[j] = q
10. return r and s

Print-Cut-Rod-Solution(p, n)
1. (r, s) = Extended-Bottom-Up-Cut-Rod(p, n)
2. while n > 0
3.   print s[n]
4.   n = n - s[n]

i     0  1  2  3  4  5  6  7  8  9  10
r[i]  0  1  5  8 10 13 17 18 22 25 30
s[i]  -  1  2  3  2  2  6  1  2  3  10
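Both procedures above, transcribed into Python as a sketch (function names are my own; the solution is returned as a list of piece sizes rather than printed):

```python
# Extended-Bottom-Up-Cut-Rod: r[j] holds the best revenue for length j,
# s[j] the size of the first piece to cut off in one optimal solution.
def extended_bottom_up_cut_rod(p, n):
    r = [0] * (n + 1)
    s = [0] * (n + 1)
    for j in range(1, n + 1):
        q = float("-inf")
        for i in range(1, j + 1):
            if q < p[i] + r[j - i]:
                q = p[i] + r[j - i]
                s[j] = i
        r[j] = q
    return r, s

# Reconstruct one optimal list of piece sizes from the s table.
def cut_rod_solution(p, n):
    r, s = extended_bottom_up_cut_rod(p, n)
    pieces = []
    while n > 0:
        pieces.append(s[n])
        n -= s[n]
    return r[-1], pieces

p = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]
print(cut_rod_solution(p, 7))  # (18, [1, 6])
```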

DP Example: Matrix-Chain Multiplication

If A is a p x q matrix and B is a q x r matrix, then C = AB is a p x r matrix, computed in O(pqr) time:

Matrix-Multiply(A, B)
1. if A.columns ≠ B.rows
2.   error "incompatible dimensions"
3. else let C be a new A.rows x B.columns matrix
4.   for i = 1 to A.rows
5.     for j = 1 to B.columns
6.       cij = 0
7.       for k = 1 to A.columns
8.         cij = cij + aik · bkj
9.   return C

DP Example: Matrix-Chain Multiplication

The matrix-chain multiplication problem:
- Input: a chain <A1, A2, …, An> of n matrices, where matrix Ai has dimension pi-1 x pi.
- Objective: parenthesize the product A1 A2 … An to minimize the number of scalar multiplications.

Example: A1: 4 x 2, A2: 2 x 5, A3: 5 x 1.
- (A1 A2) A3: total multiplications = 4 x 2 x 5 + 4 x 5 x 1 = 60
- A1 (A2 A3): total multiplications = 2 x 5 x 1 + 4 x 2 x 1 = 18

So the order of multiplications can make a big difference!

Matrix-Chain Multiplication: Brute Force

A = A1 A2 … An: how do we evaluate A using the minimum number of multiplications?

Brute force: check all possible orders? Let P(n) be the number of ways to fully parenthesize n matrices. Then P(1) = 1 and P(n) = Σ_{k=1}^{n-1} P(k) P(n-k) for n ≥ 2 — the Catalan numbers, which grow as Ω(4^n / n^{3/2}), exponential in n.

Any efficient solution? The matrix chain is linearly ordered and cannot be rearranged!! Smell dynamic programming?

Matrix-Chain Multiplication

m[i, j]: the minimum number of scalar multiplications needed to compute the matrix Ai..j = Ai Ai+1 … Aj, where 1 ≤ i ≤ j ≤ n; m[1, n] is then the cheapest cost to compute A1..n.

Applicability of dynamic programming:
- Optimal substructure: an optimal solution contains within it optimal solutions to subproblems.
- Overlapping subproblems: a recursive algorithm revisits the same subproblem over and over again; there are only Θ(n^2) distinct subproblems.

Recurrence (matrix Ai has dimension pi-1 x pi):
m[i, j] = 0 if i = j, and m[i, j] = min{m[i, k] + m[k+1, j] + pi-1 pk pj : i ≤ k < j} otherwise.

Bottom-Up DP: Matrix-Chain Order

Matrix-Chain-Order(p)
 1. n = p.length - 1
 2. let m[1..n, 1..n] and s[1..n-1, 2..n] be new tables
 3. for i = 1 to n
 4.   m[i, i] = 0
 5. for l = 2 to n          // l is the chain length
 6.   for i = 1 to n - l + 1
 7.     j = i + l - 1
 8.     m[i, j] = ∞
 9.     for k = i to j - 1
10.       q = m[i, k] + m[k+1, j] + pi-1 pk pj
11.       if q < m[i, j]
12.         m[i, j] = q
13.         s[i, j] = k
14. return m and s

(Ai has dimension pi-1 x pi.)
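As a runnable sketch (function names and the string-building helper are my own), using the A1: 4x2, A2: 2x5, A3: 5x1 example from the earlier slide:

```python
import sys

# Matrix-Chain-Order: matrix A_i has dimension p[i-1] x p[i] (1-based i).
def matrix_chain_order(p):
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]  # m[i][j]: min multiplications
    s = [[0] * (n + 1) for _ in range(n + 1)]  # s[i][j]: optimal split point
    for l in range(2, n + 1):                  # l: chain length
        for i in range(1, n - l + 2):
            j = i + l - 1
            m[i][j] = sys.maxsize
            for k in range(i, j):
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j] = q
                    s[i][j] = k
    return m, s

# Build the optimal parenthesization from the s table.
def parens(s, i, j):
    if i == j:
        return f"A{i}"
    return "(" + parens(s, i, s[i][j]) + parens(s, s[i][j] + 1, j) + ")"

m, s = matrix_chain_order([4, 2, 5, 1])  # A1: 4x2, A2: 2x5, A3: 5x1
print(m[1][3], parens(s, 1, 3))  # 18 (A1(A2A3))
```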

Constructing an Optimal Solution

s[i, j]: a value of k such that an optimal parenthesization of Ai Ai+1 … Aj splits the product between Ak and Ak+1. An optimal computation of A1..n is thus A1..s[1,n] · As[1,n]+1..n.

Print-Optimal-Parens(s, i, j)
1. if i == j
2.   print "A"i
3. else print "("
4.   Print-Optimal-Parens(s, i, s[i, j])
5.   Print-Optimal-Parens(s, s[i, j] + 1, j)
6.   print ")"

Example: the call Print-Optimal-Parens(s, 1, 6) prints ((A1 (A2 A3)) ((A4 A5) A6)).

Top-Down, Recursive Matrix-Chain Order

Time complexity: Ω(2^n), since T(n) ≥ 1 + Σ_{k=1}^{n-1} (T(k) + T(n-k) + 1).

Recursive-Matrix-Chain(p, i, j)
1. if i == j
2.   return 0
3. m[i, j] = ∞
4. for k = i to j - 1
5.   q = Recursive-Matrix-Chain(p, i, k) + Recursive-Matrix-Chain(p, k+1, j) + pi-1 pk pj
6.   if q < m[i, j]
7.     m[i, j] = q
8. return m[i, j]

Top-Down DP: Matrix-Chain Order (Memoization)

Complexity: O(n^2) space for the m[] table and O(n^3) time to fill in its O(n^2) entries (each takes O(n) time).

Memoized-Matrix-Chain(p)
1. n = p.length - 1
2. let m[1..n, 1..n] be a new table
3. for i = 1 to n
4.   for j = i to n
5.     m[i, j] = ∞
6. return Lookup-Chain(m, p, 1, n)

Lookup-Chain(m, p, i, j)
1. if m[i, j] < ∞
2.   return m[i, j]
3. if i == j
4.   m[i, j] = 0
5. else for k = i to j - 1
6.   q = Lookup-Chain(m, p, i, k) + Lookup-Chain(m, p, k+1, j) + pi-1 pk pj
7.   if q < m[i, j]
8.     m[i, j] = q
9. return m[i, j]

Two Approaches to DP

Bottom-up iterative approach:
- Start with a recursive divide-and-conquer algorithm.
- Find the dependencies between the subproblems (whose solutions are needed for computing a subproblem).
- Solve the subproblems in the correct order.

Top-down recursive approach (memoization):
- Keep the top-down structure of the original algorithm.
- Save solutions to subproblems in a table (possibly a lot of storage).
- Recurse on a subproblem only if its solution is not already available in the table.

If all subproblems must be solved at least once, bottom-up DP is better due to less overhead for recursion and for maintaining tables. If many subproblems need not be solved, top-down DP is better since it computes only those that are required.

Longest Common Subsequence

Problem: Given X = <x1, x2, …, xm> and Y = <y1, y2, …, yn>, find a longest common subsequence (LCS) of X and Y.

Example: X = <a, b, c, b, d, a, b> and Y = <b, d, c, a, b, a>: LCS = <b, c, b, a> (also, LCS = <b, d, a, b>).

Example (DNA sequencing): biologists measure how similar two DNA strands are. S1 = ACCGGTCGAGATGCAG; S2 = GTCGTTCGGAATGCAT; an LCS is S3 = CGTCGGATGCA.

Brute-force method: enumerate all subsequences of X and check whether each appears in Y. Each subsequence of X corresponds to a subset of the index set {1, 2, …, m}, so there are 2^m subsequences of X.

Optimal Substructure for LCS

Let X = <x1, x2, …, xm> and Y = <y1, y2, …, yn> be sequences, and let Z = <z1, z2, …, zk> be an LCS of X and Y.
- If xm = yn, then zk = xm = yn, and Zk-1 is an LCS of Xm-1 and Yn-1.
- If xm ≠ yn, then zk ≠ xm implies Z is an LCS of Xm-1 and Y.
- If xm ≠ yn, then zk ≠ yn implies Z is an LCS of X and Yn-1.

c[i, j]: the length of an LCS of the prefixes Xi and Yj; c[m, n] is the length of an LCS of X and Y. Basis: c[0, j] = 0 and c[i, 0] = 0.

Top-Down DP for LCS

c[i, j]: the length of an LCS of Xi = <x1, x2, …, xi> and Yj = <y1, y2, …, yj>; c[m, n] is the length of an LCS of X and Y. Basis: c[0, j] = 0 and c[i, 0] = 0.

Top-down dynamic programming: initialize c[i, 0] = c[0, j] = 0 and c[i, j] = NIL for i, j ≥ 1.

TD-LCS(i, j)
1. if c[i, j] == NIL
2.   if xi == yj
3.     c[i, j] = TD-LCS(i-1, j-1) + 1
4.   else c[i, j] = max(TD-LCS(i, j-1), TD-LCS(i-1, j))
5. return c[i, j]

Bottom-Up DP for LCS

Find the right order in which to solve the subproblems: to compute c[i, j], we need c[i-1, j-1], c[i-1, j], and c[i, j-1]. b[i, j] points to the table entry of the optimal subproblem solution chosen when computing c[i, j].

LCS-Length(X, Y)
 1. m = X.length
 2. n = Y.length
 3. let b[1..m, 1..n] and c[0..m, 0..n] be new tables
 4. for i = 1 to m
 5.   c[i, 0] = 0
 6. for j = 0 to n
 7.   c[0, j] = 0
 8. for i = 1 to m
 9.   for j = 1 to n
10.     if xi == yj
11.       c[i, j] = c[i-1, j-1] + 1
12.       b[i, j] = "↖"
13.     elseif c[i-1, j] ≥ c[i, j-1]
14.       c[i, j] = c[i-1, j]
15.       b[i, j] = "↑"
16.     else c[i, j] = c[i, j-1]
17.       b[i, j] = "←"
18. return c and b
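A Python sketch of LCS-Length plus the traceback (my own variation: instead of storing the b "arrow" table, the traceback re-derives each arrow from the c table with the same tie-breaking rule):

```python
# Bottom-up LCS: fill the c table, then trace back from c[m][n].
def lcs(X, Y):
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1       # "diagonal" case
            elif c[i - 1][j] >= c[i][j - 1]:
                c[i][j] = c[i - 1][j]               # "up" case
            else:
                c[i][j] = c[i][j - 1]               # "left" case
    out = []
    i, j = m, n
    while i > 0 and j > 0:                          # follow the arrows back
        if X[i - 1] == Y[j - 1]:
            out.append(X[i - 1])
            i, j = i - 1, j - 1
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

print(lcs("abcbdab", "bdcaba"))  # bcba
```

This matches the slides' example: X = <a, b, c, b, d, a, b>, Y = <b, d, c, a, b, a> gives <b, c, b, a>.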

Example of LCS

LCS time and space complexity: O(mn).

Example: X = <A, B, C, B, D, A, B> and Y = <B, D, C, A, B, A> gives LCS = <B, C, B, A>.

Constructing an LCS

Trace back from b[m, n] to b[1, 1], following the arrows; O(m + n) time.

Print-LCS(b, X, i, j)
1. if i == 0 or j == 0
2.   return
3. if b[i, j] == "↖"
4.   Print-LCS(b, X, i-1, j-1)
5.   print xi
6. elseif b[i, j] == "↑"
7.   Print-LCS(b, X, i-1, j)
8. else Print-LCS(b, X, i, j-1)

Optimal Binary Search Tree

Given a sequence K = <k1, k2, …, kn> of n distinct keys in sorted order (k1 < k2 < … < kn), a set of probabilities P = <p1, p2, …, pn> for searching the keys in K, and Q = <q0, q1, q2, …, qn> for unsuccessful searches (corresponding to D = <d0, d1, d2, …, dn> of n+1 distinct dummy keys, with di representing all values between ki and ki+1), construct a binary search tree whose expected search cost is smallest.

[Figure: two binary search trees over the same keys k1, …, k5 and dummy keys d0, …, d5, with different shapes and hence different expected costs.]

An Example

i    0     1     2     3     4     5
pi         0.15  0.10  0.05  0.10  0.20
qi   0.05  0.10  0.05  0.05  0.05  0.10

Expected search cost of a tree = Σi (depth(ki) + 1) · pi + Σi (depth(di) + 1) · qi.

[Figure: two binary search trees for this instance; the first has expected cost 2.80, the second has expected cost 2.75 — optimal!!]

Optimal Substructure

If an optimal binary search tree T has a subtree T' containing keys ki, …, kj, then T' must itself be optimal for the subproblem with keys ki, …, kj and dummy keys di-1, …, dj.
- Given keys ki, …, kj with kr (i ≤ r ≤ j) as the root, the left subtree contains the keys ki, …, kr-1 (and dummy keys di-1, …, dr-1), and the right subtree contains the keys kr+1, …, kj (and dummy keys dr, …, dj).
- If the root is ki itself, its left subtree contains the keys ki, …, ki-1 (no keys at all) with the single dummy key di-1.

Overlapping Subproblem: Recurrence

e[i, j]: the expected cost of searching an optimal binary search tree containing the keys ki, …, kj; we want e[1, n].
- e[i, i-1] = qi-1 (only the dummy key di-1).
- Let w(i, j) = (pi + … + pj) + (qi-1 + … + qj), the total probability mass of the subproblem. If kr (i ≤ r ≤ j) is the root of an optimal subtree containing keys ki, …, kj, then
  e[i, j] = pr + (e[i, r-1] + w(i, r-1)) + (e[r+1, j] + w(r+1, j)) = e[i, r-1] + e[r+1, j] + w(i, j).
- Recurrence: e[i, j] = min{e[i, r-1] + e[r+1, j] + w(i, j) : i ≤ r ≤ j} for j ≥ i.
- Node depths increase by 1 after merging two subtrees into a new root, and so do the search costs — hence the added w terms.

Computing the Optimal Cost

Need a table e[1..n+1, 0..n] for e[i, j] (the extra entries e[1, 0] and e[n+1, n] hold the empty subtrees at the boundaries). Apply w[i, j] = w[i, j-1] + pj + qj so the w sums need not be recomputed from scratch.

root[i, j]: the index r for which kr is the root of an optimal search tree containing keys ki, …, kj.

Optimal-BST(p, q, n)
 1. let e[1..n+1, 0..n], w[1..n+1, 0..n], and root[1..n, 1..n] be new tables
 2. for i = 1 to n + 1
 3.   e[i, i-1] = qi-1
 4.   w[i, i-1] = qi-1
 5. for l = 1 to n
 6.   for i = 1 to n - l + 1
 7.     j = i + l - 1
 8.     e[i, j] = ∞
 9.     w[i, j] = w[i, j-1] + pj + qj
10.     for r = i to j
11.       t = e[i, r-1] + e[r+1, j] + w[i, j]
12.       if t < e[i, j]
13.         e[i, j] = t
14.         root[i, j] = r
15. return e and root
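Optimal-BST as a Python sketch (0-padded lists let the code keep the slides' 1-based indexing; the probabilities are those of the 5-key example slide):

```python
import math

# Optimal-BST: p[1..n] are key probabilities, q[0..n] dummy probabilities.
# Tables have n+2 rows so that e[i][i-1] is addressable for i = 1..n+1.
def optimal_bst(p, q, n):
    e = [[0.0] * (n + 1) for _ in range(n + 2)]
    w = [[0.0] * (n + 1) for _ in range(n + 2)]
    root = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 2):
        e[i][i - 1] = q[i - 1]          # empty subtree: just the dummy key
        w[i][i - 1] = q[i - 1]
    for l in range(1, n + 1):           # l: number of keys in the subproblem
        for i in range(1, n - l + 2):
            j = i + l - 1
            e[i][j] = math.inf
            w[i][j] = w[i][j - 1] + p[j] + q[j]
            for r in range(i, j + 1):   # try every key as the root
                t = e[i][r - 1] + e[r + 1][j] + w[i][j]
                if t < e[i][j]:
                    e[i][j] = t
                    root[i][j] = r
    return e, root

p = [0, 0.15, 0.10, 0.05, 0.10, 0.20]   # p[0] unused
q = [0.05, 0.10, 0.05, 0.05, 0.05, 0.10]
e, root = optimal_bst(p, q, 5)
print(round(e[1][5], 2), root[1][5])  # 2.75 2
```

The optimal root k2 and cost 2.75 match the example slide.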

Example

e[1, 1] = e[1, 0] + e[2, 1] + w(1, 1) = 0.05 + 0.10 + 0.30 = 0.45
e[1, 5] = e[1, 1] + e[3, 5] + w(1, 5) = 0.45 + 1.30 + 1.00 = 2.75

[Tables: the filled-in e, w, and root tables for the 5-key instance; the optimal expected cost is e[1, 5] = 2.75.]

Appendix A: Optimal Polygon Triangulation

Terminology: polygon, interior, exterior, boundary, convex polygon, triangulation.

The optimal polygon triangulation problem: given a convex polygon P = <v0, v1, …, vn-1> and a weight function w defined on triangles, find a triangulation that minimizes the total weight Σ_Δ w(Δ) over the triangles Δ in the triangulation.

One possible weight function on a triangle: w(Δ vi vj vk) = |vi vj| + |vj vk| + |vk vi|, where |vi vj| is the Euclidean distance from vi to vj.

Optimal Polygon Triangulation (cont'd)

Correspondence between full parenthesizations, full binary trees (parse trees), and triangulations:
- a full parenthesization corresponds to a full binary tree
- a full binary tree with n-1 leaves corresponds to a triangulation of a polygon with n sides

t[i, j]: the weight of an optimal triangulation of the polygon <vi-1, vi, …, vj>.

Pseudocode: Optimal Polygon Triangulation

Matrix-Chain-Order is a special case of the optimal polygon triangulation problem: only the line computing q (Line 9 below) differs. Complexity: Θ(n^3) time and Θ(n^2) space.

Optimal-Polygon-Triangulation(P)
 1. n = P.length
 2. for i = 1 to n
 3.   t[i, i] = 0
 4. for l = 2 to n
 5.   for i = 1 to n - l + 1
 6.     j = i + l - 1
 7.     t[i, j] = ∞
 8.     for k = i to j - 1
 9.       q = t[i, k] + t[k+1, j] + w(Δ vi-1 vk vj)
10.       if q < t[i, j]
11.         t[i, j] = q
12.         s[i, j] = k
13. return t and s

Tree Decomposition

A tree decomposition of a graph G is a tree with a vertex set called a bag associated to every node, such that:
- For every edge {v, w} of G, there is a bag containing both v and w.
- For every vertex v of G, the nodes whose bags contain v form a connected subtree.

[Figure: an example graph on vertices a, …, g and a tree decomposition of it, with the bags drawn as tree nodes.]


Treewidth (Definition)

Width of a tree decomposition: the maximum bag size minus 1. Treewidth of a graph G: tw(G) = the minimum width over all tree decompositions of G.

[Figure: the example graph and a tree decomposition of it, illustrating the width.]

Dynamic Programming Algorithms

Many NP-hard (and some PSPACE-hard or #P-hard) graph problems become polynomial-time or even linear-time solvable when restricted to graphs of bounded treewidth (or pathwidth, or branchwidth):
- well-known problems like Independent Set, Hamiltonian Circuit, and Graph Coloring
- frequency assignment
- TSP

Dynamic Programming with Tree Decompositions

For each bag, a table is computed:
- It contains information on the subgraph formed by the vertices in the bag and in the bags below it.
- Bags are separators => the size of each table is often bounded by a function of the width.
- Computing a table needs only the tables of the children and local information.
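The slides give no code here, but the flavor can be sketched on the simplest case: maximum independent set on a tree, where the tree itself acts as a tree decomposition of width 1 and each node's table has just two entries (node excluded / node included). This is my own minimal illustration, not from the slides:

```python
# Maximum independent set on a tree: DP over a trivial tree decomposition.
# Each node's "table" has two entries: the best solution size for its subtree
# with the node excluded, and with the node included.
def mis_tree(children, root):
    def solve(v):
        excl, incl = 0, 1  # table for the subtree rooted at v
        for c in children.get(v, []):
            c_excl, c_incl = solve(c)
            excl += max(c_excl, c_incl)  # v excluded: each child is free
            incl += c_excl               # v included: children must be excluded
        return excl, incl

    return max(solve(root))

# A path a-b-c-d-e rooted at a: the maximum independent set {a, c, e} has size 3.
children = {"a": ["b"], "b": ["c"], "c": ["d"], "d": ["e"]}
print(mis_tree(children, "a"))  # 3
```

For a general tree decomposition the tables grow to one entry per subset of a bag, which is why the running time is bounded by a function of the width rather than of the whole graph.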