
1 UMass Lowell Computer Science 91.503 Analysis of Algorithms Prof. Giampiero Pecelli Fall, 2010 Paradigms for Optimization Problems Dynamic Programming & Greedy Algorithms

2 Optimization This generally refers to classes of problems that possess multiple solutions at one level, and where we have a real-valued function defined on the solutions. Problem: find a solution that minimizes or maximizes the value of this function. Note: there is no guarantee that such a solution will be unique and, moreover, no guarantee that you will find it (local maxima, anyone?) unless the search space is small enough or the function is restricted enough.

3 Optimization Question: are there classes of problems for which you can guarantee an optimizing solution can be found? Answer: yes. BUT you also need to find such a solution in a "reasonable" amount of time. We are going to look at two classes of problems, and the techniques that will succeed in constructing their solutions in a "reasonable" (i.e., low degree polynomial in the size of the initial data) amount of time.

4 Optimization We begin with a rough comparison that contrasts a method you are familiar with (divide and conquer) and the method (still unspecified) of Dynamic Programming (developed by Richard Bellman in the late 1940's and early 1950's). For some history and other ideas, see: http://en.wikipedia.org/wiki/Dynamic_programming

5 Two Algorithmic Models: Divide & Conquer vs. Dynamic Programming. Both view the problem as a collection of subproblems and have a "recursive" nature. Divide & Conquer: independent subproblems; the number of subproblems (partitioning factor) is typically small; characteristic running time is typically a log function of n. Dynamic Programming: overlapping subproblems; the number of subproblems depends on the partitioning; may involve preprocessing; running time depends on the number and difficulty of the subproblems; primarily for optimization problems; relies on optimal substructure: an optimal solution to the problem contains within it optimal solutions to subproblems.

6 Dynamic Programming

7 Example: Rod Cutting (text)
- You are given a rod of length n ≥ 0 (n in inches)
- A rod of length i inches will be sold for p_i dollars
- Cutting is free (simplifying assumption)
- Problem: given a table of prices p_i, determine the maximum revenue r_n obtainable by cutting up the rod and selling the pieces.

Length i :  1  2  3  4   5   6   7   8   9  10
Price p_i:  1  5  8  9  10  17  17  20  24  30

8 Example: Rod Cutting We can see immediately (from the values in the table) that n ≤ r_n ≤ 3n. This is not very useful because:
- The range of potential revenue is very large
- Finding quick upper and lower bounds depends on quickly finding the minimum and maximum p_i/i ratios (one pass through the table), but then we are back to the point above…

9 Example: Rod Cutting Step 1: Characterizing an Optimal Solution Question: in how many different ways can we cut a rod of length n? For a rod of length 4: 2^(4-1) = 2^3 = 8. For a rod of length n: 2^(n-1). Exponential: we cannot try all possibilities for n "large". The obvious exhaustive approach won't work.

10 Example: Rod Cutting Step 1: Characterizing an Optimal Solution Question: in how many different ways can we cut a rod of length n? Proof details: a rod of length n has exactly n-1 possible cut positions – choose 0 ≤ k ≤ n-1 actual cuts. We can choose the k cuts (without repetition) anywhere we want, so for each such k the number of different choices is C(n-1, k), i.e., "n-1 choose k". Summing over all possibilities (k = 0 to k = n-1): Σ_{k=0}^{n-1} C(n-1, k) = 2^(n-1). So, for a rod of length n: 2^(n-1) ways.
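(A quick sanity check of this count, as a minimal Python sketch; the function name and the brute-force enumeration are illustrative, not from the slides.)

    from itertools import combinations

    def count_cuttings(n):
        # Choose any subset of the n-1 interior cut positions 1..n-1.
        total = 0
        for k in range(n):                            # k = number of cuts, 0..n-1
            total += sum(1 for _ in combinations(range(1, n), k))
        return total

    for n in range(1, 9):
        assert count_cuttings(n) == 2 ** (n - 1)      # matches the formula above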

11 Example: Rod Cutting Characterizing an Optimal Solution Let us find a way to solve the problem recursively (we might be able to modify the solution so that the maximum can actually be computed): assume we have cut a rod of length n into 1 ≤ k ≤ n pieces of lengths i_1, …, i_k, with n = i_1 + … + i_k and revenue r_n = p_{i_1} + … + p_{i_k}. Assume further that this solution is optimal. How can we construct it? Advice: when you don't know what to do next, start with a simple example and hope something will occur to you…

12 Example: Rod Cutting Characterizing an Optimal Solution We begin by constructing (by hand) the optimal solutions for i = 1, …, 10:
r_1 = 1 from sln. 1 = 1 (no cuts)
r_2 = 5 from sln. 2 = 2 (no cuts)
r_3 = 8 from sln. 3 = 3 (no cuts)
r_4 = 10 from sln. 4 = 2 + 2
r_5 = 13 from sln. 5 = 2 + 3
r_6 = 17 from sln. 6 = 6 (no cuts)
r_7 = 18 from sln. 7 = 1 + 6 or 7 = 2 + 2 + 3
r_8 = 22 from sln. 8 = 2 + 6
r_9 = 25 from sln. 9 = 3 + 6
r_10 = 30 from sln. 10 = 10 (no cuts)

Length i :  1  2  3  4   5   6   7   8   9  10
Price p_i:  1  5  8  9  10  17  17  20  24  30

13 Example: Rod Cutting Characterizing an Optimal Solution Notice that in some cases r_n = p_n, while in other cases the optimal revenue r_n is obtained by cutting the rod into smaller pieces. In ALL cases we have the recursion r_n = max(p_n, r_1 + r_{n-1}, r_2 + r_{n-2}, …, r_{n-1} + r_1), exhibiting optimal substructure (meaning?). A slightly different way of stating the same recursion, which avoids repeating some computations, is r_n = max_{1≤i≤n}(p_i + r_{n-i}), and this latter relation can be implemented as a simple top-down recursive procedure:
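(The slide's procedure did not survive extraction; what follows is a minimal Python sketch of the top-down recursion, assuming a price list p with a dummy entry p[0] = 0 so that p[i] is the price of a piece of length i.)

    def cut_rod(p, n):
        # r_n = max over 1 <= i <= n of (p[i] + r_{n-i}); r_0 = 0.
        if n == 0:
            return 0
        return max(p[i] + cut_rod(p, n - i) for i in range(1, n + 1))

    prices = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]  # table above
    print(cut_rod(prices, 7))                         # 18, e.g. 1 + 6 or 2 + 2 + 3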

14 Example: Rod Cutting Characterizing an Optimal Solution Time out: how do we justify the step from r_n = max(p_n, r_1 + r_{n-1}, r_2 + r_{n-2}, …, r_{n-1} + r_1) to r_n = max_{1≤i≤n}(p_i + r_{n-i})? Note: every optimal partitioning of a rod of length n has a first cut – a segment of, say, length i. The optimal revenue r_n must satisfy r_n = p_i + r_{n-i}, where r_{n-i} is the optimal revenue for a rod of length n - i. If the latter were not the case, there would be a better partitioning for a rod of length n - i, giving a revenue r'_{n-i} > r_{n-i} and a total revenue r'_n = p_i + r'_{n-i} > p_i + r_{n-i} = r_n, contradicting optimality. Since we do not know which of the leftmost cut positions provides the largest revenue, we just maximize over all possible first-cut positions.

15 Example: Rod Cutting Characterizing an Optimal Solution We can also notice that each of the items we maximize over is optimal in its own right: each substructure (max revenue for rods of lengths 1, …, n-1) is also optimal (again, the optimal substructure property). Nevertheless, we are still in trouble: computing the recursion leads to recomputing a number of values (= overlapping subproblems) – how many?

16 Example: Rod Cutting Characterizing an Optimal Solution Let's call Cut-Rod(p, 4) to see the effects on a simple case (the recursion tree has one node per call and one child per possible first piece). The number of nodes for the tree corresponding to a rod of size n is T(n) = 1 + Σ_{j=0}^{n-1} T(j) = 2^n.
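(To see the 2^n count empirically, here is a sketch that instruments the naive recursion with a call counter; the one-element list is just a simple way to share a mutable counter across calls.)

    def cut_rod_counting(p, n, counter):
        counter[0] += 1                               # one tree node per invocation
        if n == 0:
            return 0
        return max(p[i] + cut_rod_counting(p, n - i, counter)
                   for i in range(1, n + 1))

    counter = [0]
    cut_rod_counting([0, 1, 5, 8, 9], 4, counter)
    print(counter[0])                                 # 16 = 2^4 nodes for n = 4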

17 Example: Rod Cutting Beyond Naïve Time Complexity We have a problem: "reasonable size" problems are not solvable in "reasonable time" (but, in this case, they are solvable in "reasonable space"). Specifically: navigating the whole tree requires 2^n stack-frame activations, yet no more than n + 1 stack-frames are active at any one time, and no more than n + 1 different values need to be computed or used. Can we exploit these observations? A standard solution method involves saving the value associated with each subproblem (each r_j), so that we compute each value only once (called "memoizing" = writing yourself a memo).

18 Example: Rod Cutting Naïve Caching We introduce two procedures: a driver that allocates the cache, and a recursive auxiliary procedure that consults and fills it:
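(The two procedures are CLRS's MEMOIZED-CUT-ROD and MEMOIZED-CUT-ROD-AUX; a Python sketch of the same driver/auxiliary split:)

    def memoized_cut_rod(p, n):
        r = [None] * (n + 1)              # r[j] caches the best revenue for length j
        return memoized_cut_rod_aux(p, n, r)

    def memoized_cut_rod_aux(p, n, r):
        if r[n] is not None:              # already computed: just reuse the memo
            return r[n]
        if n == 0:
            q = 0
        else:
            q = max(p[i] + memoized_cut_rod_aux(p, n - i, r)
                    for i in range(1, n + 1))
        r[n] = q                          # write the memo before returning
        return q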

19 Example: Rod Cutting More Sophisticated Caching We now remove some unnecessary complications: the recursion (and its cache-checking) can be replaced by a simple loop that solves the subproblems in order of increasing size:
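(A sketch of the bottom-up version, CLRS's BOTTOM-UP-CUT-ROD: each r[j - i] is already available when r[j] is computed.)

    def bottom_up_cut_rod(p, n):
        r = [0] * (n + 1)                 # r[0] = 0: a rod of length 0 earns nothing
        for j in range(1, n + 1):         # subproblems in order of increasing size
            r[j] = max(p[i] + r[j - i] for i in range(1, j + 1))
        return r[n]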

20 Example: Rod Cutting Time Complexity Whether we solve the problem in a top-down or bottom-up manner, the asymptotic time is Θ(n²), the major difference being recursive calls as compared to loop iterations. Why?? (Hint: the subproblem of size j maximizes over j candidate first cuts, and 1 + 2 + … + n = n(n+1)/2.)

21 Example: Longest Common Subsequence (LCS): Motivation
- Strand of DNA: string over the finite set {A,C,G,T}
- each element of the set is a base: adenine, guanine, cytosine or thymine
- Compare DNA similarities
- S_1 = ACCGGTCGAGTGCGCGGAAGCCGGCCGAA
- S_2 = GTCGTTCGGAATGCCGTTGCTCTGTAAA
- One measure of similarity: find the longest string S_3 containing bases that also appear (not necessarily consecutively) in S_1 and S_2
- S_3 = GTCGTCGGAAGCCGGCCGAA
source: 91.503 textbook Cormen, et al.

22 Example: LCS Definitions
- The sequence Z = <z_1, …, z_k> is a subsequence of X = <x_1, …, x_m> if there exists a strictly increasing sequence <i_1, …, i_k> of indices of X such that x_{i_j} = z_j for all j = 1, …, k
- example: <B,C,D,B> is a subsequence of <A,B,C,B,D,A,B> with index sequence <2,3,5,7>
- Z is a common subsequence of X and Y if Z is a subsequence of both X and Y
- example: X = <A,B,C,B,D,A,B>, Y = <B,D,C,A,B,A>
- <B,C,A> is a common subsequence, but not longest
- <B,C,B,A> is a common subsequence. Longest?
Longest Common Subsequence Problem: Given 2 sequences X, Y, find a maximum-length common subsequence Z. source: 91.503 textbook Cormen, et al.

23 Example: LCS Step 1: Characterize an LCS THM 15.1: Optimal LCS Substructure Given sequences X = <x_1, …, x_m> and Y = <y_1, …, y_n>, for any LCS Z = <z_1, …, z_k> of X and Y:
1. if x_m = y_n, then z_k = x_m = y_n and Z_{k-1} is an LCS of X_{m-1} and Y_{n-1}
2. if x_m ≠ y_n, then z_k ≠ x_m implies Z is an LCS of X_{m-1} and Y
3. if x_m ≠ y_n, then z_k ≠ y_n implies Z is an LCS of X and Y_{n-1}
PROOF: based on producing contradictions
1. a) Suppose z_k ≠ x_m. Appending x_m = y_n to Z contradicts the longest nature of Z. b) To establish the longest nature of Z_{k-1}, suppose a common subsequence W of X_{m-1} and Y_{n-1} has length > k-1. Appending x_m to W yields a common subsequence of length > k, a contradiction.
2. A common subsequence W of X_{m-1} and Y of length > k would also be a common subsequence of X_m, Y, contradicting the longest nature of Z.
3. Similar to the proof of (2).
source: 91.503 textbook Cormen, et al.

24 Example: LCS Step 2: A Recursive Solution Implications of Thm 15.1: to find LCS(X, Y), check whether x_m = y_n.
- yes: find LCS(X_{m-1}, Y_{n-1}); then LCS(X, Y) = LCS(X_{m-1}, Y_{n-1}) + x_m
- no: find LCS(X_{m-1}, Y) and LCS(X, Y_{n-1}); then LCS(X, Y) = max(LCS(X_{m-1}, Y), LCS(X, Y_{n-1}))

25 Example: LCS Step 2: A Recursive Solution (continued)
- Overlapping subproblem structure: conditions of the problem can exclude some subproblems!
- Recurrence for the length of an optimal solution:
c[i,j] = 0                               if i = 0 or j = 0
c[i,j] = c[i-1,j-1] + 1                  if i,j > 0 and x_i = y_j
c[i,j] = max(c[i,j-1], c[i-1,j])         if i,j > 0 and x_i ≠ y_j
Θ(mn) distinct subproblems. source: 91.503 textbook Cormen, et al.
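(A direct, memoized transcription of this recurrence – a Python sketch, not CLRS's pseudocode; strings are 0-indexed, hence the x[i-1]/y[j-1] offsets.)

    from functools import lru_cache

    def lcs_length_recursive(x, y):
        @lru_cache(maxsize=None)          # memoize the Theta(mn) distinct subproblems
        def c(i, j):
            if i == 0 or j == 0:
                return 0
            if x[i - 1] == y[j - 1]:      # case x_i = y_j
                return c(i - 1, j - 1) + 1
            return max(c(i, j - 1), c(i - 1, j))
        return c(len(x), len(y))

    print(lcs_length_recursive("ABCBDAB", "BDCABA"))  # 4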

26 Example: LCS Step 3: Compute Length of an LCS The c table holds the lengths c[i,j]; a companion b table records which case of the recurrence produced each entry, so the LCS itself can be reconstructed afterwards. source: 91.503 textbook Cormen, et al.
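(A Python sketch of CLRS's LCS-LENGTH; the string labels "diag", "up", "left" stand in for the arrows of the b table.)

    def lcs_length(x, y):
        m, n = len(x), len(y)
        c = [[0] * (n + 1) for _ in range(m + 1)]      # lengths; row/column 0 stay 0
        b = [[None] * (n + 1) for _ in range(m + 1)]   # which case produced c[i][j]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                if x[i - 1] == y[j - 1]:
                    c[i][j] = c[i - 1][j - 1] + 1
                    b[i][j] = "diag"
                elif c[i - 1][j] >= c[i][j - 1]:
                    c[i][j] = c[i - 1][j]
                    b[i][j] = "up"
                else:
                    c[i][j] = c[i][j - 1]
                    b[i][j] = "left"
        return c, b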

27 Example: LCS Step 4: Construct an LCS Trace the b-table arrows backward from b[m,n]; every "diagonal" arrow contributes one symbol of the LCS. source: 91.503 textbook Cormen, et al.
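(A sketch of CLRS's PRINT-LCS on top of the lcs_length sketch above; the walk takes O(m + n) steps since each call decreases i, j, or both.)

    def print_lcs(b, x, i, j):
        if i == 0 or j == 0:
            return
        if b[i][j] == "diag":
            print_lcs(b, x, i - 1, j - 1)
            print(x[i - 1], end="")       # x[i-1] is part of the LCS
        elif b[i][j] == "up":
            print_lcs(b, x, i - 1, j)
        else:
            print_lcs(b, x, i, j - 1)

    c, b = lcs_length("ABCBDAB", "BDCABA")
    print_lcs(b, "ABCBDAB", 7, 6)         # prints BCBA
    print()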

28 Example: LCS Improve the Code
- Can eliminate the b table: c[i,j] depends only on 3 other c table entries, c[i-1,j-1], c[i-1,j], c[i,j-1]; given the value of c[i,j], we can pick the one used in O(1) time and reconstruct an LCS in O(m+n) time, similar to PRINT-LCS; same Θ(mn) space, but Θ(mn) was needed anyway…
- Asymptotic space reduction: we need only 2 rows of the c table at a time – the row being computed and the previous row. We can even do it with roughly the space for 1 row of the c table, but that does not preserve the LCS reconstruction data. A sketch of the two-row version follows.
source: 91.503 textbook Cormen, et al.
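(A sketch of the two-row space reduction for the length-only computation; as noted above, the b-table information – and with it easy reconstruction – is given up.)

    def lcs_length_two_rows(x, y):
        m, n = len(x), len(y)
        prev = [0] * (n + 1)              # row i-1 of the c table
        curr = [0] * (n + 1)              # row i, being computed
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                if x[i - 1] == y[j - 1]:
                    curr[j] = prev[j - 1] + 1
                else:
                    curr[j] = max(prev[j], curr[j - 1])
            prev, curr = curr, prev       # recycle the old row's storage
        return prev[n]                    # after the swap, prev holds the last row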

29 Algorithmic Paradigm Context

