UMass Lowell Computer Science 91.503 Analysis of Algorithms
Prof. Karen Daniels, Fall 2002
Lecture 1 (Part 3), Tuesday, 9/3/02
Design Patterns for Optimization Problems: Dynamic Programming & Greedy Algorithms
Algorithmic Paradigm Context
• Shared by both paradigms:
  • view the problem as a collection of subproblems
  • "recursive" nature
• Divide & Conquer:
  • independent subproblems
  • number of subproblems depends on partitioning factors, typically small
  • characteristic running time is typically a log function of n
• Dynamic Programming:
  • overlapping subproblems
  • preprocessing
  • characteristic running time depends on the number and difficulty of subproblems
  • primarily for optimization problems
  • optimal substructure: an optimal solution to the problem contains within it optimal solutions to subproblems
Dynamic Programming
Example: Matrix Parenthesization
Definitions
• Given a "chain" of n matrices A_1, A_2, ..., A_n, where matrix A_i has dimensions p_{i-1} × p_i
• Compute the product A_1 A_2 ... A_n efficiently
• Minimize "cost" = number of scalar multiplications
• Multiplication order matters!
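For instance (the textbook's running example), with A_1 of size 10 × 100, A_2 of size 100 × 5, and A_3 of size 5 × 50:
• ((A_1 A_2) A_3) costs 10·100·5 + 10·5·50 = 7,500 scalar multiplications
• (A_1 (A_2 A_3)) costs 100·5·50 + 10·100·50 = 75,000 scalar multiplications, ten times as many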
Example: Matrix Parenthesization
Step 1: Characterizing an Optimal Solution
Observation: Any parenthesization of A_i A_{i+1} ... A_j must split it between A_k and A_{k+1} for some k with i ≤ k < j.
THM (Optimal Matrix Parenthesization): If an optimal parenthesization of A_i A_{i+1} ... A_j splits at k, then the parenthesization of the prefix A_i A_{i+1} ... A_k must be an optimal parenthesization of that prefix.
Why? If there existed a less costly way to parenthesize the prefix, then substituting that parenthesization would yield a less costly way to parenthesize A_i A_{i+1} ... A_j, contradicting the optimality of the original parenthesization.
Example: Matrix Parenthesization
Step 2: A Recursive Solution
• Recursive definition of minimum parenthesization cost:

  m[i,j] = 0                                                           if i = j
  m[i,j] = min over i ≤ k < j of { m[i,k] + m[k+1,j] + p_{i-1} p_k p_j }   if i < j

How many distinct subproblems? One per pair (i, j) with i ≤ j, so Θ(n²).
Example: Matrix Parenthesization
Step 3: Computing Optimal Costs
[Figure: the m and s tables for the textbook example, computed bottom-up; only scattered entries (0; 1,000; 2,500; 2,625) survive extraction. s[i,j] records the value of k that achieves the optimal cost in computing m[i,j].]
source: 91.503 textbook Cormen, et al.
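A bottom-up sketch of this step in Python (function and variable names are mine, not the slides'); it fills m and s exactly as the recurrence prescribes:

def matrix_chain_order(p):
    # p[0..n] holds the dimensions: matrix A_i is p[i-1] x p[i].
    n = len(p) - 1
    # m[i][j] = minimum scalar multiplications for A_i..A_j (1-indexed);
    # s[i][j] = the split point k that achieves m[i][j].
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):              # chain length being solved
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = float("inf")
            for k in range(i, j):               # try every split point
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j], s[i][j] = q, k
    return m, s

For example, matrix_chain_order([10, 100, 5, 50]) yields m[1][3] == 7500, matching the hand computation above.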
Example: Matrix Parenthesization
Step 4: Constructing an Optimal Solution

PRINT-OPTIMAL-PARENS(s, i, j)
  if i = j
    then print "A"_i
    else print "("
         PRINT-OPTIMAL-PARENS(s, i, s[i,j])
         PRINT-OPTIMAL-PARENS(s, s[i,j]+1, j)
         print ")"

source: 91.503 textbook Cormen, et al.
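The same traversal in Python, reusing the s table from the sketch above (again, names are mine):

def print_optimal_parens(s, i, j):
    # Recursively print an optimal parenthesization of A_i..A_j.
    if i == j:
        print("A" + str(i), end="")
    else:
        print("(", end="")
        print_optimal_parens(s, i, s[i][j])
        print_optimal_parens(s, s[i][j] + 1, j)
        print(")", end="")

With m, s = matrix_chain_order([10, 100, 5, 50]), the call print_optimal_parens(s, 1, 3) prints ((A1A2)A3).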
Example: Matrix Parenthesization
Memoization
• Provides dynamic-programming efficiency
• But with a top-down strategy:
  • use recursion
  • fill in the table "on demand"
• Example (compare with RECURSIVE-MATRIX-CHAIN):

MEMOIZED-MATRIX-CHAIN(p)
  n ← length[p] - 1
  for i ← 1 to n
    do for j ← i to n
      do m[i,j] ← ∞
  return LOOKUP-CHAIN(p, 1, n)

LOOKUP-CHAIN(p, i, j)
  if m[i,j] < ∞
    then return m[i,j]
  if i = j
    then m[i,j] ← 0
    else for k ← i to j-1
      do q ← LOOKUP-CHAIN(p, i, k) + LOOKUP-CHAIN(p, k+1, j) + p_{i-1} p_k p_j
         if q < m[i,j]
           then m[i,j] ← q
  return m[i,j]

source: 91.503 textbook Cormen, et al.
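A top-down Python equivalent (a sketch; caching via functools is my choice, not the textbook's):

from functools import lru_cache

def memoized_matrix_chain(p):
    n = len(p) - 1

    @lru_cache(maxsize=None)                 # the cache plays the role of the m table
    def lookup_chain(i, j):
        # Minimum cost of multiplying A_i..A_j, computed on demand.
        if i == j:
            return 0
        return min(lookup_chain(i, k) + lookup_chain(k + 1, j)
                   + p[i - 1] * p[k] * p[j]
                   for k in range(i, j))

    return lookup_chain(1, n)

memoized_matrix_chain([10, 100, 5, 50]) again returns 7500.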
Example: Longest Common Subsequence (LCS)
Motivation
• Strand of DNA: string over the finite set {A,C,G,T}
  • each element of the set is a base: adenine, cytosine, guanine, or thymine
• Compare DNA similarities:
  • S_1 = ACCGGTCGAGTGCGCGGAAGCCGGCCGAA
  • S_2 = GTCGTTCGGAATGCCGTTGCTCTGTAAA
• One measure of similarity: find the longest string S_3 whose bases also appear (not necessarily consecutively) in both S_1 and S_2
  • S_3 = GTCGTCGGAAGCCGGCCGAA
source: 91.503 textbook Cormen, et al.
Example: LCS
Definitions
• Sequence Z = ⟨z_1, z_2, ..., z_k⟩ is a subsequence of X = ⟨x_1, x_2, ..., x_m⟩ if there exists a strictly increasing sequence ⟨i_1, i_2, ..., i_k⟩ of indices of X such that x_{i_j} = z_j for all j = 1, 2, ..., k
  • example: ⟨B,C,D,B⟩ is a subsequence of ⟨A,B,C,B,D,A,B⟩ with index sequence ⟨2,3,5,7⟩
• Z is a common subsequence of X and Y if Z is a subsequence of both X and Y
  • example, with X = ⟨A,B,C,B,D,A,B⟩ and Y = ⟨B,D,C,A,B,A⟩:
    • ⟨B,C,A⟩ is a common subsequence, but not longest
    • ⟨B,C,B,A⟩ is a common subsequence. Longest?
Longest Common Subsequence Problem: Given 2 sequences X, Y, find a maximum-length common subsequence Z.
source: 91.503 textbook Cormen, et al.
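The subsequence definition translates directly into a short check (an illustration of mine, not from the slides):

def is_subsequence(z, x):
    # Scan x once, advancing through z whenever the next needed symbol
    # appears; this implicitly builds the strictly increasing index sequence.
    it = iter(x)
    return all(symbol in it for symbol in z)

is_subsequence("BCDB", "ABCBDAB") returns True; is_subsequence("AXC", "ABCBDAB") returns False.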
Example: LCS
Step 1: Characterize an LCS
THM 15.1 (Optimal LCS Substructure): Given sequences X = ⟨x_1, ..., x_m⟩ and Y = ⟨y_1, ..., y_n⟩, let Z = ⟨z_1, ..., z_k⟩ be any LCS of X and Y (where X_i denotes the prefix ⟨x_1, ..., x_i⟩):
1. if x_m = y_n, then z_k = x_m = y_n and Z_{k-1} is an LCS of X_{m-1} and Y_{n-1}
2. if x_m ≠ y_n and z_k ≠ x_m, then Z is an LCS of X_{m-1} and Y
3. if x_m ≠ y_n and z_k ≠ y_n, then Z is an LCS of X and Y_{n-1}
PROOF: based on producing contradictions.
1. a) Suppose z_k ≠ x_m = y_n. Appending x_m = y_n to Z contradicts the longest nature of Z.
   b) To establish the longest nature of Z_{k-1}, suppose a common subsequence W of X_{m-1} and Y_{n-1} has length > k-1. Appending x_m = y_n to W yields a common subsequence of length > k, a contradiction.
2. A common subsequence W of X_{m-1} and Y of length > k would also be a common subsequence of X_m and Y, contradicting the longest nature of Z.
3. Similar to the proof of (2).
source: 91.503 textbook Cormen, et al.
Example: LCS
Step 2: A Recursive Solution
• Implications of Thm 15.1 for computing LCS(X, Y), branching on whether x_m = y_n:
  • yes: find LCS(X_{m-1}, Y_{n-1}); then LCS(X, Y) = LCS(X_{m-1}, Y_{n-1}) + x_m
  • no: find LCS(X_{m-1}, Y) and LCS(X, Y_{n-1}); then LCS(X, Y) = max(LCS(X_{m-1}, Y), LCS(X, Y_{n-1}))
Example: LCS
Step 2: A Recursive Solution (continued)
• Overlapping subproblem structure
  • conditions of the problem can exclude some subproblems!
• Recurrence for the length of an optimal solution:

  c[i,j] = 0                          if i = 0 or j = 0
  c[i,j] = c[i-1,j-1] + 1             if i, j > 0 and x_i = y_j
  c[i,j] = max(c[i,j-1], c[i-1,j])    if i, j > 0 and x_i ≠ y_j

• Θ(mn) distinct subproblems
source: 91.503 textbook Cormen, et al.
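A bottom-up sketch of the recurrence in Python (names mine); the returned table also holds enough information to walk an LCS back out in Step 4:

def lcs_length(x, y):
    # c[i][j] = length of an LCS of the prefixes x[:i] and y[:j].
    m, n = len(x), len(y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i][j - 1], c[i - 1][j])
    return c

For example, c = lcs_length("ABCBDAB", "BDCABA") gives c[7][6] == 4.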
Example: LCS
Step 3: Compute the Length of an LCS
[Figure: the c table for the textbook example, annotated with arrows that represent the b table]
source: 91.503 textbook Cormen, et al.
Example: LCS
Step 4: Construct an LCS
source: 91.503 textbook Cormen, et al.
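A sketch of the reconstruction in Python, working from the c table alone; this is the b-table-free variant the next slide describes, with names mine:

def backtrack_lcs(c, x, y):
    # Walk from c[m][n] back toward c[0][0], re-deriving each move from the
    # three neighboring entries, so no separate b table is needed.
    i, j = len(x), len(y)
    out = []
    while i > 0 and j > 0:
        if x[i - 1] == y[j - 1]:
            out.append(x[i - 1])         # this symbol is part of the LCS
            i, j = i - 1, j - 1
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

backtrack_lcs(lcs_length("ABCBDAB", "BDCABA"), "ABCBDAB", "BDCABA") returns "BCBA".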
Example: LCS
Improve the Code
• Can eliminate the b table:
  • c[i,j] depends on only 3 other c table entries: c[i-1,j-1], c[i-1,j], c[i,j-1]
  • given the value of c[i,j], can pick the entry used in O(1) time
  • reconstruct an LCS in O(m+n) time, similar to PRINT-LCS
  • same Θ(mn) space, but Θ(mn) was needed anyway...
• Asymptotic space reduction:
  • leverage: need only 2 rows of the c table at a time, the row being computed and the previous row (see the sketch below)
  • can also do it with space for only ~1 row of the c table
  • but this does not preserve the LCS reconstruction data
source: 91.503 textbook Cormen, et al.
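A sketch of the two-row version (my code; as the slide warns, it returns only the length, since the reconstruction data is discarded):

def lcs_length_two_rows(x, y):
    # Keep only the previous and current rows of the c table.
    n = len(y)
    prev = [0] * (n + 1)
    for i in range(1, len(x) + 1):
        curr = [0] * (n + 1)
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                curr[j] = prev[j - 1] + 1
            else:
                curr[j] = max(curr[j - 1], prev[j])
        prev = curr
    return prev[n]

lcs_length_two_rows("ABCBDAB", "BDCABA") returns 4, using space proportional to two rows instead of Θ(mn).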
Algorithmic Paradigm Context
Greedy Algorithms
What is a Greedy Algorithm?
• Solves an optimization problem
• Optimal Substructure:
  • an optimal solution contains within it optimal solutions to subproblems
• Greedy Strategy:
  • at each decision point, do what looks best "locally"
  • the choice does not depend on evaluating potential future choices or solving subproblems
  • top-down algorithmic structure: with each step, reduce the problem to a smaller problem
• Greedy Choice Property:
  • "locally best" = globally best
Examples
• From 91.404:
  • Minimum Spanning Tree
  • Dijkstra's single-source shortest path
  • Huffman Codes
  • Fractional Knapsack (a sketch follows this list)
• Activity Selection (developed on the next slides)
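As one concrete instance of the greedy pattern, a minimal fractional-knapsack sketch (my example, not from the slides): sort by value density, then take greedily.

def fractional_knapsack(items, capacity):
    # items: list of (value, weight) pairs; capacity: knapsack weight limit.
    # Greedy choice: always take from the remaining item with the highest
    # value per unit weight; for the *fractional* problem this is optimal.
    total = 0.0
    for value, weight in sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True):
        if capacity <= 0:
            break
        take = min(weight, capacity)     # all of it, or the fraction that fits
        total += value * (take / weight)
        capacity -= take
    return total

fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50) returns 240.0.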
Example Optimization Problem: Activity Selection
• Problem instance:
  • set S = {1, 2, ..., n} of n activities
  • each activity i has a start time s_i and a finish time f_i
  • activities i, j are compatible iff non-overlapping: s_i ≥ f_j or s_j ≥ f_i
• Objective: select a maximum-sized set of mutually compatible activities
source: 91.503 textbook Cormen, et al.
Examples: Activity Selection
• Algorithm:
  • S' = presort activities in S by nondecreasing finish time and renumber

GREEDY-ACTIVITY-SELECTOR(S')
  n ← length[S']
  A ← {1}
  j ← 1
  for i ← 2 to n
    do if s_i ≥ f_j
      then A ← A ∪ {i}
           j ← i
  return A

Running time? Θ(n) for the selection itself, plus O(n log n) for the presort.
source: 91.503 textbook Cormen, et al.
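The same algorithm in Python (names mine), taking (start, finish) pairs:

def greedy_activity_selector(activities):
    # Greedy choice: always pick the compatible activity that finishes first.
    if not activities:
        return []
    ordered = sorted(activities, key=lambda a: a[1])   # nondecreasing finish time
    selected = [ordered[0]]
    for start, finish in ordered[1:]:
        if start >= selected[-1][1]:                   # compatible with last pick
            selected.append((start, finish))
    return selected

For example, greedy_activity_selector([(1, 4), (3, 5), (0, 6), (5, 7), (3, 8), (5, 9), (6, 10), (8, 11), (8, 12), (2, 13), (12, 14)]) returns [(1, 4), (5, 7), (8, 11), (12, 14)].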
Another Use for a Greedy Algorithm
• If an optimization problem does not have the "greedy choice property", a greedy algorithm may still be useful in bounding the optimal solution
• Example: in a minimization problem, the value of a greedy (heuristic) solution is an upper bound on the optimal value
[Figure: number line of solution values showing Lower Bound ≤ Optimal (unknown value) ≤ Upper Bound (heuristic)]