CS 332: Algorithms
Dynamic Programming and Greedy Algorithms
David Luebke

Administrivia
- Hand back midterm
- Go over problem values

Review: Amortized Analysis
- To illustrate amortized analysis we examined dynamic tables:
  1. Initialize table size m = 1
  2. Insert elements until number of elements n > m
  3. Generate a new table of size 2m
  4. Reinsert the old elements into the new table
  5. (back to step 2)
- What is the worst-case cost of an insert?
- What is the amortized cost of an insert?
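
To make the doubling behavior concrete, here is a minimal Python sketch of such a dynamic table (the class name, field names, and demo loop are mine, not from the slides); insert() returns the actual cost of the operation, counting one unit per element written:

```python
class DynamicTable:
    """Dynamic table that doubles when full (a sketch of the scheme above)."""

    def __init__(self):
        self.size = 1            # m: current capacity
        self.count = 0           # n: elements stored
        self.slots = [None]

    def insert(self, x):
        cost = 1                 # cost of writing x itself
        if self.count == self.size:           # table full: double it
            new_slots = [None] * (2 * self.size)
            for i in range(self.count):       # reinsert old elements
                new_slots[i] = self.slots[i]
                cost += 1                     # one unit per reinsertion
            self.slots = new_slots
            self.size *= 2
        self.slots[self.count] = x
        self.count += 1
        return cost

t = DynamicTable()
for i in range(1, 10):
    # Reproduces the cost column in the table on the next slide.
    print(i, t.insert(i), t.size)
```
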
Review: Analysis of Dynamic Tables
- Let c_i = cost of the ith insert
- c_i = i if i - 1 is an exact power of 2, 1 otherwise
- Example:

  Operation   Table Size   Cost
  Insert(1)        1       1
  Insert(2)        2       1 + 1
  Insert(3)        4       1 + 2
  Insert(4)        4       1
  Insert(5)        8       1 + 4
  Insert(6)        8       1
  Insert(7)        8       1
  Insert(8)        8       1
  Insert(9)       16       1 + 8

Review: Aggregate Analysis
- n Insert() operations cost a total of

    Σ_{i=1..n} c_i  ≤  n + Σ_{j=0..⌊lg n⌋} 2^j  <  n + 2n  =  3n

- Average cost of an operation = (total cost) / (# operations) < 3
- Asymptotically, then, a dynamic table costs the same as a fixed-size table:
  - Both O(1) per Insert operation
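
A quick empirical check of this bound (my own illustration, not from the slides): simulate n inserts and confirm the total cost stays below 3n for every n tried.

```python
def total_cost(n):
    """Total cost of n inserts into a doubling table, counted as above."""
    cost, size, count = 0, 1, 0
    for _ in range(n):
        if count == size:      # doubling: reinsert all current elements
            cost += count
            size *= 2
        cost += 1              # the insert itself
        count += 1
    return cost

assert all(total_cost(n) < 3 * n for n in range(1, 10_000))
```
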
Review: Accounting Analysis
- Charge each operation a $3 amortized cost:
  - Use $1 to perform the immediate Insert()
  - Store $2 as credit
- When the table doubles:
  - $1 reinserts the old item, $1 reinserts another old item
  - We've paid these costs up front with the last n/2 Insert()s
- Upshot: O(1) amortized cost per operation
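
One way to see the accounting argument is to simulate the bank balance (a sketch with invented names; the point is that the banked credit always covers the reinsertion costs, so the balance never goes negative):

```python
def bank_never_negative(n):
    """Charge $3 per insert, pay actual costs from the bank; check bank >= 0."""
    bank, size, count = 0, 1, 0
    for _ in range(n):
        bank += 3              # amortized charge for this insert
        if count == size:      # doubling: pay $1 per reinserted element
            bank -= count
            size *= 2
        bank -= 1              # pay for the immediate insert
        count += 1
        if bank < 0:
            return False
    return True

assert all(bank_never_negative(n) for n in range(1, 2_000))
```
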
Review: Accounting Analysis
- Suppose we must support insert & delete, and the table should contract as well as expand:
  - Table overflows ⇒ double it (as before)
  - Table < 1/4 full ⇒ halve it
  - Charge $3 for Insert (as before)
  - Charge $2 for Delete
    - Store the extra $1 in the emptied slot
    - Use it later to pay to copy the remaining items to the new table when shrinking
- What if we halve the size when the table is < 1/8 full?
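
A possible simulation of this expand/contract policy (the function name and the 'i'/'d' op encoding are my own assumptions), handy for experimenting with the 1/4-vs-1/8 question:

```python
def run(ops):
    """ops: string of 'i' (insert) and 'd' (delete); never delete when empty.
    Returns the total actual cost under double-when-full, halve-below-1/4."""
    cost, size, count = 0, 1, 0
    for op in ops:
        if op == 'i':
            if count == size:                 # overflow: double, reinsert all
                cost += count
                size *= 2
            count += 1
        else:
            count -= 1
            if size > 1 and count < size // 4:  # sparse: halve, reinsert all
                cost += count
                size //= 2
        cost += 1                             # the operation itself
    return cost

# 17 operations; total cost stays well under 3 per operation.
print(run("i" * 9 + "d" * 8))
```
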
Review: Longest Common Subsequence
- Longest common subsequence (LCS) problem:
  - Given two sequences x[1..m] and y[1..n], find the longest subsequence which occurs in both
  - Ex: x = {A B C B D A B}, y = {B D C A B A}
  - {B C} and {A A} are both subsequences of both
    - What is the LCS?
  - Brute-force algorithm: for every subsequence of x, check if it's a subsequence of y
    - What will be the running time of the brute-force algorithm?

LCS Algorithm
- Brute-force algorithm: 2^m subsequences of x to check against n elements of y: O(n·2^m)
- But the LCS problem has optimal substructure:
  - Subproblems: pairs of prefixes of x and y
- Simplify: just worry about the LCS length for now
  - Define c[i,j] = length of LCS of x[1..i], y[1..j]
  - So c[m,n] = length of LCS of x and y

Finding LCS Length
- Define c[i,j] = length of LCS of x[1..i], y[1..j]
- Theorem:

             { 0                           if i = 0 or j = 0
    c[i,j] = { c[i-1, j-1] + 1             if x[i] = y[j]
             { max(c[i-1, j], c[i, j-1])   otherwise

- What is this really saying?
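
The recurrence transcribes directly into a naive recursive Python function (my own sketch; note that without memoization it takes exponential time, which is exactly what the next slides address):

```python
def lcs_length(x, y):
    """Length of the LCS of x and y, straight from the recurrence."""
    if not x or not y:                        # c[0,j] = c[i,0] = 0
        return 0
    if x[-1] == y[-1]:                        # x[i] == y[j]
        return lcs_length(x[:-1], y[:-1]) + 1
    return max(lcs_length(x[:-1], y),         # drop the last char of x
               lcs_length(x, y[:-1]))         # drop the last char of y

print(lcs_length("ABCBDAB", "BDCABA"))        # 4 (e.g. "BCBA")
```
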
Optimal Substructure of LCS
- Observation 1: optimal substructure
  - A simple recursive algorithm will suffice
  - Draw a sample recursion tree from c[3,4]
  - What will be the depth of the tree?
- Observation 2: overlapping subproblems
  - Find some places where we solve the same subproblem more than once

Structure of Subproblems
- For the LCS problem:
  - There are few subproblems in total
  - And many recurring instances of each (unlike divide & conquer, where the subproblems are unique)
- How many distinct subproblems exist for the LCS of x[1..m] and y[1..n]?
- A: mn

Memoization
- Memoization is one way to deal with overlapping subproblems:
  - After computing the solution to a subproblem, store it in a table
  - Subsequent calls just do a table lookup
- We can modify the recursive algorithm to use memoization:
  - There are mn subproblems
  - How many times is each subproblem wanted?
  - What will be the running time for this algorithm? The running space?
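
A memoized version of the recurrence might look like this (my sketch, using functools.lru_cache as the lookup table); each of the (m+1)(n+1) subproblems is computed once, giving O(mn) time and O(mn) space:

```python
from functools import lru_cache

def lcs_length(x, y):
    @lru_cache(maxsize=None)           # table: (i, j) -> c[i,j]
    def c(i, j):
        """Length of the LCS of x[1..i] and y[1..j]."""
        if i == 0 or j == 0:
            return 0
        if x[i - 1] == y[j - 1]:       # x[i] == y[j] (0-based strings)
            return c(i - 1, j - 1) + 1
        return max(c(i - 1, j), c(i, j - 1))
    return c(len(x), len(y))

print(lcs_length("ABCBDAB", "BDCABA"))  # 4
```
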
Dynamic Programming
- Dynamic programming: build the table bottom-up
  - Same table as memoization, but instead of starting at (m,n) and recursing down, start at (1,1)
  - Draw the LCS-length table for i = 0..7, j = 0..6:
    - X (vertical) = {A B C B D A B}, Y (horizontal) = {B D C A B A}
    - Initialize the top row and left column to 0, march across the rows
    - What values does a given cell depend on?
  - What is the final length of the LCS? The LCS itself?
  - What is the running time? The space?
- Can actually reduce space to O(min(m,n))
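
A bottom-up sketch in Python (my own code, including the standard walk-back through the table to recover the LCS itself; this is the O(mn)-space version, not the space-reduced variant mentioned above):

```python
def lcs(x, y):
    """Build the c[i][j] table bottom-up, then reconstruct one LCS."""
    m, n = len(x), len(y)
    c = [[0] * (n + 1) for _ in range(m + 1)]   # row 0 / column 0 are 0
    for i in range(1, m + 1):                   # march across the rows
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    # Walk back from c[m][n]: each cell depends on its three neighbors.
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if x[i - 1] == y[j - 1]:
            out.append(x[i - 1]); i -= 1; j -= 1
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

print(lcs("ABCBDAB", "BDCABA"))   # "BCBA" (one LCS of length 4)
```
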
Dynamic Programming
- Summary of the basic idea:
  - Optimal substructure: an optimal solution to the problem consists of optimal solutions to subproblems
  - Overlapping subproblems: few subproblems in total, many recurring instances of each
  - Solve bottom-up, building a table of solved subproblems that are used to solve larger ones
- Variations:
  - The "table" could be 3-dimensional, triangular, a tree, etc.

Greedy Algorithms
- A greedy algorithm always makes the choice that looks best at the moment
  - The hope: a locally optimal choice will lead to a globally optimal solution
  - For some problems, it works
  - My example: walking to the Corner
- Dynamic programming can be overkill; greedy algorithms tend to be easier to code

Activity-Selection Problem
- Problem: get your money's worth out of a carnival
  - Buy a wristband that lets you onto any ride
  - Lots of rides, each starting and ending at different times
  - Your goal: ride as many rides as possible
    - Another, alternative goal that we don't solve here: maximize time spent on rides
- Welcome to the activity-selection problem

Activity-Selection
- Formally:
  - Given a set S of n activities
      s_i = start time of activity i
      f_i = finish time of activity i
  - Find a max-size subset A of compatible activities
  - Assume (wlog) that f_1 ≤ f_2 ≤ … ≤ f_n

  [Figure: timeline of activities 1-6 with overlapping start and finish times]

Activity Selection: Optimal Substructure
- Let k be the minimum activity in A (i.e., the one with the earliest finish time). Then A - {k} is an optimal solution to S' = {i ∈ S : s_i ≥ f_k}
  - In words: once activity #1 is selected, the problem reduces to finding an optimal solution for activity selection over the activities in S compatible with #1
  - Proof (by contradiction): if we could find an optimal solution B' to S' with |B'| > |A - {k}|,
    - then B' ∪ {k} is compatible
    - and |B' ∪ {k}| > |A|, contradicting the optimality of A

Activity Selection: Repeated Subproblems
- Consider a recursive algorithm that tries all possible compatible subsets to find a maximal set, and notice the repeated subproblems:

  [Figure: recursion tree. Root asks "is 1 ∈ A?"; the yes branch recurses on S', the no branch on S - {1}. Each then asks "is 2 ∈ A?", yielding the subproblems S'', S' - {2}, S'', and S - {1,2} — note that S'' appears on more than one branch.]

Greedy Choice Property
- Dynamic programming? Memoize? Yes, but…
- The activity selection problem also exhibits the greedy choice property:
  - A locally optimal choice ⇒ a globally optimal solution
  - Theorem 17.1: if S is an activity-selection problem sorted by finish time, then there exists an optimal solution A ⊆ S such that {1} ⊆ A
    - Sketch of proof: if there exists an optimal solution B that does not contain activity 1, we can always replace the first activity in B with activity 1 (why?). Same number of activities, thus optimal.

Activity Selection: A Greedy Algorithm
- So the actual algorithm is simple:
  - Sort the activities by finish time
  - Schedule the first activity
  - Then schedule the next activity in the sorted list that starts after the previous activity finishes
  - Repeat until no activities remain
- The intuition is even simpler:
  - Always pick the ride that will end soonest among those you can still board
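
A compact Python sketch of this greedy schedule (the function name and sample rides are mine, not from the slides):

```python
def select_activities(activities):
    """activities: list of (start, finish) pairs; returns a max-size
    subset of mutually compatible activities."""
    chosen, last_finish = [], float("-inf")
    for s, f in sorted(activities, key=lambda a: a[1]):  # sort by finish time
        if s >= last_finish:        # compatible with everything chosen so far
            chosen.append((s, f))
            last_finish = f
    return chosen

rides = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 8), (5, 9), (6, 10), (8, 11)]
print(select_activities(rides))     # [(1, 4), (5, 7), (8, 11)]
```
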

The End