UMass Lowell Computer Science Analysis of Algorithms Prof. Karen Daniels Fall, 2006 Lecture 1 (Part 3) Design Patterns for Optimization Problems Dynamic Programming & Greedy Algorithms
Algorithmic Paradigm Context
Shared characteristics:
- View problem as a collection of subproblems
- "Recursive" nature
Divide & Conquer:
- Independent subproblems
- Number of subproblems depends on partitioning factors; typically small
- Characteristic running time: typically a log function of n
Dynamic Programming:
- Overlapping subproblems
- Preprocessing
- Characteristic running time: depends on number and difficulty of subproblems
- Primarily for optimization problems
- Optimal substructure: optimal solution to problem contains within it optimal solutions to subproblems
Dynamic Programming Approach to Optimization Problems 1. Characterize structure of an optimal solution. 2. Recursively define value of an optimal solution. 3. Compute value of an optimal solution in bottom-up fashion. 4. Construct an optimal solution from computed information. source: textbook Cormen, et al.
Dynamic Programming Matrix Parenthesization
Example: Matrix Parenthesization Definitions
- Given a "chain" of n matrices: <A_1, A_2, ..., A_n>
- Compute the product A_1 A_2 ... A_n efficiently
- Minimize "cost" = number of scalar multiplications
- Multiplication order matters!
source: textbook Cormen, et al.
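To see concretely why order matters, here is a small Python check on a hypothetical 3-matrix chain (the dimensions 10x100, 100x5, 5x50 are chosen for illustration):

```python
# Multiplying a (p x q) matrix by a (q x r) matrix costs p*q*r scalar multiplications.
def mult_cost(p, q, r):
    return p * q * r

# Illustrative chain: A1 is 10x100, A2 is 100x5, A3 is 5x50.
# (A1 A2) A3: first product is 10x5, then multiply by A3.
cost1 = mult_cost(10, 100, 5) + mult_cost(10, 5, 50)    # 5000 + 2500 = 7500
# A1 (A2 A3): first product is 100x50, then multiply A1 by it.
cost2 = mult_cost(100, 5, 50) + mult_cost(10, 100, 50)  # 25000 + 50000 = 75000
print(cost1, cost2)  # 7500 vs. 75000: a factor-of-10 difference
```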
Example: Matrix Parenthesization Step 1: Characterizing an Optimal Solution
Observation: Any parenthesization of A_i A_{i+1} ... A_j must split it between A_k and A_{k+1} for some k.
THM (Optimal Matrix Parenthesization): If an optimal parenthesization of A_i A_{i+1} ... A_j splits at k, then the parenthesization of the prefix A_i A_{i+1} ... A_k must be an optimal parenthesization of that prefix.
Why? If there existed a less costly way to parenthesize the prefix, then substituting that parenthesization would yield a less costly way to parenthesize A_i A_{i+1} ... A_j, contradicting the optimality of the original parenthesization.
This is a common DP proof technique: a "cut-and-paste" proof by contradiction.
source: textbook Cormen, et al.
Example: Matrix Parenthesization Step 2: A Recursive Solution
- Recursive definition of minimum parenthesization cost:

  m[i,j] = 0                                                        if i = j
  m[i,j] = min over i <= k < j of { m[i,k] + m[k+1,j] + p_{i-1} p_k p_j }   if i < j

  (each matrix A_i has dimensions p_{i-1} x p_i)
- How many distinct subproblems? (one per pair i <= j, i.e., Theta(n^2))
source: textbook Cormen, et al.
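A minimal bottom-up Python sketch of this recurrence (function and variable names are mine, not the textbook's; it fills the table in the same order as the textbook's MATRIX-CHAIN-ORDER):

```python
def matrix_chain_order(p):
    """Bottom-up DP for the recurrence above.

    p: dimension list; matrix A_i is p[i-1] x p[i], so there are n = len(p)-1 matrices.
    Returns (m, s): m[i][j] = min scalar mults for A_i..A_j,
                    s[i][j] = the split k achieving that minimum.
    """
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):          # solve chains in order of length
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = float("inf")
            for k in range(i, j):           # try every split point
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j], s[i][j] = q, k
    return m, s

m, s = matrix_chain_order([10, 100, 5, 50])
print(m[1][3])  # 7500, matching the worked example above
```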
Example: Matrix Parenthesization Step 3: Computing Optimal Costs
[Figure: m and s tables for the example chain, filled bottom-up; visible m entries include 0, 1,000, 2,500, and 2,625.]
s[i,j]: value of k that achieves optimal cost in computing m[i,j]
source: textbook Cormen, et al.
Example: Matrix Parenthesization Step 4: Constructing an Optimal Solution

PRINT-OPTIMAL-PARENS(s, i, j)
  if i = j
    then print "A"_i
    else print "("
         PRINT-OPTIMAL-PARENS(s, i, s[i,j])
         PRINT-OPTIMAL-PARENS(s, s[i,j]+1, j)
         print ")"
source: textbook Cormen, et al.
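A Python rendering of the same procedure, returning the parenthesization as a string; it assumes the (m, s) tables produced by the matrix_chain_order sketch above:

```python
def print_optimal_parens(s, i, j):
    """Rebuild the optimal parenthesization of A_i..A_j from split table s."""
    if i == j:
        return f"A{i}"
    k = s[i][j]  # optimal split: (A_i..A_k)(A_{k+1}..A_j)
    return "(" + print_optimal_parens(s, i, k) + print_optimal_parens(s, k + 1, j) + ")"

m, s = matrix_chain_order([10, 100, 5, 50])
print(print_optimal_parens(s, 1, 3))  # ((A1A2)A3)
```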
Example: Matrix Parenthesization Memoization
- Provides Dynamic Programming efficiency
- But with a top-down strategy
- Uses recursion
- Fills in the table "on demand"
- Example (memoized version of RECURSIVE-MATRIX-CHAIN):

MEMOIZED-MATRIX-CHAIN(p)
1  n ← length[p] − 1
2  for i ← 1 to n
3    do for j ← i to n
4      do m[i,j] ← ∞
5  return LOOKUP-CHAIN(p, 1, n)

LOOKUP-CHAIN(p, i, j)
1  if m[i,j] < ∞
2    then return m[i,j]
3  if i = j
4    then m[i,j] ← 0
5    else for k ← i to j−1
6      do q ← LOOKUP-CHAIN(p, i, k) + LOOKUP-CHAIN(p, k+1, j) + p_{i-1} p_k p_j
7         if q < m[i,j]
8           then m[i,j] ← q
9  return m[i,j]
source: textbook Cormen, et al.
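The same top-down strategy as a Python sketch, using a dictionary as the on-demand table (names are mine):

```python
def memoized_matrix_chain(p):
    """Top-down (memoized) version: an m[i,j] entry is filled only when first requested."""
    n = len(p) - 1
    m = {}  # the "table", filled on demand

    def lookup_chain(i, j):
        if (i, j) in m:          # already computed: just look it up
            return m[(i, j)]
        if i == j:
            m[(i, j)] = 0
        else:
            m[(i, j)] = min(
                lookup_chain(i, k) + lookup_chain(k + 1, j) + p[i - 1] * p[k] * p[j]
                for k in range(i, j)
            )
        return m[(i, j)]

    return lookup_chain(1, n)

print(memoized_matrix_chain([10, 100, 5, 50]))  # 7500 again
```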
Dynamic Programming Longest Common Subsequence
Example: Longest Common Subsequence (LCS): Motivation
- Strand of DNA: string over the finite set {A,C,G,T}
  - each element of the set is a base: adenine, cytosine, guanine, or thymine
- Compare DNA similarities
  - S_1 = ACCGGTCGAGTGCGCGGAAGCCGGCCGAA
  - S_2 = GTCGTTCGGAATGCCGTTGCTCTGTAAA
- One measure of similarity: find the longest string S_3 containing bases that also appear (not necessarily consecutively) in S_1 and S_2
  - S_3 = GTCGTCGGAAGCCGGCCGAA
source: textbook Cormen, et al.
Example: LCS Definitions
- A sequence Z = <z_1, ..., z_k> is a subsequence of X = <x_1, ..., x_m> if there exists a strictly increasing sequence <i_1, i_2, ..., i_k> of indices of X such that x_{i_j} = z_j for all j = 1, ..., k
  - example: <B,C,D,B> is a subsequence of <A,B,C,B,D,A,B> with index sequence <2,3,5,7>
- Z is a common subsequence of X and Y if Z is a subsequence of both X and Y
  - example: for X = <A,B,C,B,D,A,B> and Y = <B,D,C,A,B,A>
    - <B,C,A> is a common subsequence but not longest
    - <B,C,B,A> is a common subsequence. Longest?
Longest Common Subsequence Problem: Given 2 sequences X, Y, find a maximum-length common subsequence Z.
source: textbook Cormen, et al.
Example: LCS Step 1: Characterize an LCS
THM 15.1 (Optimal LCS Substructure): Given sequences X = <x_1, ..., x_m> and Y = <y_1, ..., y_n>, let Z = <z_1, ..., z_k> be any LCS of X and Y:
1. if x_m = y_n, then z_k = x_m = y_n and Z_{k-1} is an LCS of X_{m-1} and Y_{n-1}
2. if x_m != y_n and z_k != x_m, then Z is an LCS of X_{m-1} and Y
3. if x_m != y_n and z_k != y_n, then Z is an LCS of X and Y_{n-1}
PROOF: based on producing contradictions
1 a) Suppose z_k != x_m = y_n. Appending x_m = y_n to Z yields a common subsequence of length k+1, contradicting the longest nature of Z.
  b) To establish the longest nature of Z_{k-1}, suppose a common subsequence W of X_{m-1} and Y_{n-1} has length > k-1. Appending x_m = y_n to W yields a common subsequence of length > k: a contradiction.
2 A common subsequence W of X_{m-1} and Y of length > k would also be a common subsequence of X_m and Y, contradicting the longest nature of Z.
3 Similar to the proof of (2).
source: textbook Cormen, et al.
Example: LCS Step 2: A Recursive Solution
- Implications of Thm 15.1: is x_m = y_n?
  - yes: find LCS(X_{m-1}, Y_{n-1}); then LCS_1(X, Y) = LCS(X_{m-1}, Y_{n-1}) + x_m
  - no: find LCS(X_{m-1}, Y) and LCS(X, Y_{n-1}); then LCS_2(X, Y) = max(LCS(X_{m-1}, Y), LCS(X, Y_{n-1}))
Example: LCS Step 2: A Recursive Solution (continued)
- Overlapping subproblem structure: conditions of the problem can exclude some subproblems!
- Recurrence for the length of an optimal solution:

  c[i,j] = 0                        if i = 0 or j = 0
  c[i,j] = c[i-1,j-1] + 1           if i,j > 0 and x_i = y_j
  c[i,j] = max(c[i,j-1], c[i-1,j])  if i,j > 0 and x_i != y_j

  Theta(mn) distinct subproblems
source: textbook Cormen, et al.
Example: LCS Step 3: Compute Length of an LCS
[Figure: c table for the example sequences; its entries also represent the b table.]
What is the asymptotic worst-case time complexity?
source: textbook Cormen, et al.
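A minimal bottom-up Python sketch of the recurrence above (names are mine; strings stand in for the sequences X and Y). The two nested loops answer the complexity question: Theta(mn) entries, O(1) work each.

```python
def lcs_length(x, y):
    """Fill the c table bottom-up: c[i][j] = LCS length of x[:i] and y[:j]."""
    m, n = len(x), len(y)
    c = [[0] * (n + 1) for _ in range(m + 1)]   # row 0 and column 0 stay 0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:            # case x_i = y_j: extend the diagonal
                c[i][j] = c[i - 1][j - 1] + 1
            else:                               # case x_i != y_j: take the better neighbor
                c[i][j] = max(c[i][j - 1], c[i - 1][j])
    return c

c = lcs_length("ABCBDAB", "BDCABA")   # the textbook's example pair
print(c[7][6])                        # 4: one LCS is "BCBA"
```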
Example: LCS Step 4: Construct an LCS source: textbook Cormen, et al.
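A sketch of this construction step in Python; instead of walking the textbook's b table (PRINT-LCS), it re-derives each move from the c table alone, anticipating the improvement described on the next slide. It reuses lcs_length from the sketch above.

```python
def reconstruct_lcs(c, x, y):
    """Walk back from c[m][n], re-deciding each step from c in O(1) time."""
    i, j, out = len(x), len(y), []
    while i > 0 and j > 0:
        if x[i - 1] == y[j - 1]:            # match: this character is in the LCS
            out.append(x[i - 1])
            i, j = i - 1, j - 1
        elif c[i - 1][j] >= c[i][j - 1]:    # value came from the row above
            i -= 1
        else:                               # ... or from the column to the left
            j -= 1
    return "".join(reversed(out))           # O(m+n) steps total

c = lcs_length("ABCBDAB", "BDCABA")
print(reconstruct_lcs(c, "ABCBDAB", "BDCABA"))  # "BCBA"
```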
Example: LCS Improve the Code
- Can eliminate the b table: c[i,j] depends on only 3 other c table entries:
  - c[i-1,j-1], c[i-1,j], c[i,j-1]
  - given the value of c[i,j], can pick the right one in O(1) time
  - reconstruct an LCS in O(m+n) time, similar to PRINT-LCS
  - same Theta(mn) space, but Theta(mn) was needed anyway...
- Asymptotic space reduction
  - leverage: need only 2 rows of the c table at a time
    - row being computed
    - previous row
  - can also do it with ~ space for 1 row of the c table
  - but does not preserve LCS reconstruction data
source: textbook Cormen, et al.
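A sketch of the two-row idea: only the previous and current rows are kept, so extra space drops to Theta(n). As the slide notes, this returns the length only, since reconstruction data is discarded.

```python
def lcs_length_two_rows(x, y):
    """Same recurrence as lcs_length, keeping just two rows of the c table."""
    n = len(y)
    prev = [0] * (n + 1)               # row i-1 of the c table (row 0 is all zeros)
    for i in range(1, len(x) + 1):
        curr = [0] * (n + 1)           # row i, with curr[0] = 0
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                curr[j] = prev[j - 1] + 1
            else:
                curr[j] = max(curr[j - 1], prev[j])
        prev = curr                    # slide the two-row window down
    return prev[n]

print(lcs_length_two_rows("ABCBDAB", "BDCABA"))  # 4: length only, no reconstruction
```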
Dynamic Programming Activity Selection
Activity Selection Optimization Problem
- Problem Instance:
  - Set S = {1, 2, ..., n} of n activities
  - Each activity i has:
    - start time: s_i
    - finish time: f_i
  - Activities i, j are compatible iff non-overlapping: intervals [s_i, f_i) and [s_j, f_j) do not intersect (i.e., s_i >= f_j or s_j >= f_i)
- Objective:
  - select a maximum-sized set of mutually compatible activities
source: textbook Cormen, et al.
Algorithmic Progression
- "Brute-Force" (board work)
- Dynamic Programming #1: exponential number of subproblems (board work)
- Dynamic Programming #2: quadratic number of subproblems (board work)
- Greedy Algorithm (board work: next week; a preview sketch follows)
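Previewing the greedy algorithm ahead of next week's board work, a sketch of the classic earliest-finish-time strategy, which the textbook's GREEDY-ACTIVITY-SELECTOR follows; the activity times below are made up for illustration:

```python
def select_activities(activities):
    """Greedy: repeatedly take the compatible activity that finishes earliest.

    activities: list of (start, finish) pairs.
    """
    chosen, last_finish = [], float("-inf")
    for s, f in sorted(activities, key=lambda a: a[1]):  # sort by finish time
        if s >= last_finish:         # compatible with everything chosen so far
            chosen.append((s, f))
            last_finish = f
    return chosen

# Illustrative instance (made-up times):
print(select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10)]))
# -> [(1, 4), (5, 7)]
```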