1
Greedy Algorithms
Many of the slides are from Prof. Plaisted’s resources at the University of North Carolina at Chapel Hill.
2
Overview
Like dynamic programming, greedy algorithms are used to solve optimization problems.
Dynamic programming can be overkill; greedy algorithms tend to be easier to code.
Problems exhibit optimal substructure (like DP).
Problems also exhibit the greedy-choice property.
»When we have a choice to make, make the one that looks best right now.
»Make a locally optimal choice in the hope of getting a globally optimal solution.
3
Activity-selection Problem
Input: a set S of n activities, a_1, a_2, …, a_n.
»s_i = start time of activity i.
»f_i = finish time of activity i.
Output: a subset A with the maximum number of mutually compatible activities.
»Two activities are compatible if their intervals don’t overlap.
Example (figure): activities 1–7 drawn on a timeline; the activities shown on each line are mutually compatible.
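A minimal sketch of how this input might be represented in Python (the data and the helper name are illustrative, not from the slides):

# Each activity is a (start, finish) pair; the data is illustrative only.
activities = [(1, 3), (2, 5), (4, 7), (6, 8), (5, 9), (8, 10)]

def compatible(a, b):
    # Two activities are compatible if their time intervals don't overlap
    # (sharing an endpoint is allowed here).
    return a[1] <= b[0] or b[1] <= a[0]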
4
Optimal Substructure
Assume activities are sorted by finishing times: f_1 ≤ f_2 ≤ … ≤ f_n.
Suppose an optimal solution includes activity a_k.
»This generates two subproblems.
»Selecting from a_1, …, a_{k-1}: activities compatible with one another that finish before a_k starts (and hence are compatible with a_k).
»Selecting from a_{k+1}, …, a_n: activities compatible with one another that start after a_k finishes.
»The solutions to the two subproblems must be optimal (prove using the cut-and-paste approach).
5
Recursive Solution
Let S_ij = the subset of activities in S that start after a_i finishes and finish before a_j starts.
Subproblems: selecting a maximum number of mutually compatible activities from S_ij.
Let c[i, j] = size of a maximum-size subset of mutually compatible activities in S_ij.
Recursive solution:
»c[i, j] = 0 if S_ij is empty
»c[i, j] = max over a_k in S_ij of { c[i, k] + c[k, j] + 1 } otherwise
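For reference, here is a sketch of evaluating this recurrence directly with memoized recursion in Python (the names and the sentinel activities at positions 0 and n+1 are my own; the slides give no code for this step, and the next slides show that the greedy choice makes this cubic-time computation unnecessary):

from functools import lru_cache

def max_compatible(starts, finishes):
    # Evaluate the recurrence c[i, j] directly. Activities are assumed to be
    # sorted by finish time. Sentinels a_0 (finish = -inf) and a_{n+1}
    # (start = +inf) are added so the whole problem is S_{0, n+1}.
    n = len(starts)
    s = [float('-inf')] + list(starts) + [float('inf')]
    f = [float('-inf')] + list(finishes) + [float('inf')]

    @lru_cache(maxsize=None)
    def c(i, j):
        best = 0
        for k in range(i + 1, j):
            if s[k] >= f[i] and f[k] <= s[j]:   # a_k lies in S_ij
                best = max(best, c(i, k) + c(k, j) + 1)
        return best

    return c(0, n + 1)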
6
Greedy-choice Property
The problem also exhibits the greedy-choice property.
»There is an optimal solution to the subproblem S_ij that includes the activity with the smallest finish time in S_ij.
Hence, there is an optimal solution to S that includes a_1.
Therefore, make this greedy choice without solving subproblems first and evaluating them.
Solve the subproblem that ensues as a result of making this greedy choice.
Combine the greedy choice and the solution to the subproblem.
7
Recursive Algorithm

Recursive-Activity-Selector(s, f, i, j)
    m ← i + 1
    while m < j and s_m < f_i
        do m ← m + 1
    if m < j
        then return {a_m} ∪ Recursive-Activity-Selector(s, f, m, j)
        else return ∅

Initial call: Recursive-Activity-Selector(s, f, 0, n+1)
Complexity: Θ(n)
Straightforward to convert the algorithm to an iterative one.
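A minimal runnable version of the same recursion in Python, assuming s and f are lists whose position 0 holds a sentinel activity with f[0] = 0 and whose activities 1..n are sorted by finish time (the function name and the data are my own, not from the slides):

def recursive_activity_selector(s, f, i, j):
    # Returns the indices of a maximum set of mutually compatible
    # activities chosen from S_ij. Call with i = 0 and j = n + 1.
    m = i + 1
    while m < j and s[m] < f[i]:   # skip activities that start before a_i finishes
        m += 1
    if m < j:
        return [m] + recursive_activity_selector(s, f, m, j)
    return []

# Illustrative data: sentinel at index 0, activities 1..6 sorted by finish time.
starts   = [0, 1, 2, 4, 6, 5, 8]
finishes = [0, 3, 5, 7, 8, 9, 10]
print(recursive_activity_selector(starts, finishes, 0, 7))   # -> [1, 3, 6]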
8
Typical Steps
Cast the optimization problem as one in which we make a choice and are left with one subproblem to solve.
Prove that there’s always an optimal solution that makes the greedy choice, so that the greedy choice is always safe.
Show that greedy choice + optimal solution to the subproblem ⇒ optimal solution to the problem.
Make the greedy choice and solve top-down.
May have to preprocess the input to put it into greedy order.
»Example: sorting activities by finish time.
9
Activity Selection: A Greedy Algorithm
So the actual algorithm is simple:
»Sort the activities by finish time.
»Schedule the first activity.
»Then schedule the next activity in the sorted list that starts after the previously scheduled activity finishes.
»Repeat until no more activities remain.
The intuition is even simpler:
»Always pick the ride that will be over soonest among those available at the time.
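An iterative sketch of this algorithm in Python (again, the names and data are my own, not from the slides):

def greedy_activity_selector(activities):
    # activities: list of (start, finish) pairs.
    # Returns a maximum-size subset of mutually compatible activities.
    selected = []
    last_finish = float('-inf')
    for start, finish in sorted(activities, key=lambda a: a[1]):   # sort by finish time
        if start >= last_finish:        # compatible with the last scheduled activity
            selected.append((start, finish))
            last_finish = finish
    return selected

print(greedy_activity_selector([(1, 3), (2, 5), (4, 7), (6, 8), (5, 9), (8, 10)]))
# -> [(1, 3), (4, 7), (8, 10)]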
10
[Figure-only slide: copyrighted illustration (McGraw-Hill), not reproduced.]
11
Analysis of Algorithm
The activity selected for consideration is always the one with the earliest finish time.
Why does this work?
»Intuitively, it always leaves the maximum time possible to schedule more activities.
»The greedy choice maximizes the amount of unscheduled time remaining.
What is the space and time complexity?
12
Elements of the Greedy Strategy
Sometimes a greedy strategy results in an optimal solution and sometimes it does not.
There is no general way to tell whether a greedy strategy will result in an optimal solution.
Two ingredients are usually necessary:
»the greedy-choice property
»optimal substructure
13
Greedy-Choice Property
A globally optimal solution can be arrived at by making a locally optimal (greedy) choice.
Unlike dynamic programming, we solve the problem in a top-down manner.
We must prove that the greedy choices result in a globally optimal solution.
14
Optimal Substructure
Like dynamic programming, the optimal solution must contain within it optimal solutions to subproblems.
Given a choice between using a greedy algorithm and a dynamic programming algorithm for the same problem, which would you choose in general?
15
Greedy versus Dynamic Programming
Both greedy algorithms and dynamic programming exploit the optimal substructure property.
Optimal substructure: a problem exhibits optimal substructure if an optimal solution to the problem contains within it optimal solutions to the subproblems.
16
Task Scheduling
Given: a set T of n tasks, each having:
»a start time, s_i
»a finish time, f_i (where s_i < f_i)
Goal: perform all the tasks using a minimum number of “machines.”
17
Task Scheduling Algorithm
Greedy choice: consider the tasks in order of start time, and use as few machines as possible with this order.
18
Task Scheduling Algorithm
Running time: given a set of n tasks specified by their start and finish times, algorithm TaskSchedule produces a schedule of the tasks with the minimum number of machines in O(n log n) time.
»Use a heap-based priority queue to store the tasks, with start time as the priority.
»Extracting the task with the earliest start time then takes O(log n) time.
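One way to realize this bound in Python is sketched below (the function name is mine, and this sketch sorts the tasks by start time and keeps a min-heap of machine finish times rather than a heap of tasks, which is a common equivalent way to reach O(n log n); the slides describe the data structure only at a high level):

import heapq

def task_schedule(tasks):
    # tasks: list of (start, finish) pairs.
    # Returns the minimum number of machines needed so that no two tasks
    # assigned to the same machine overlap in time.
    machines = []                         # min-heap of current machine finish times
    for start, finish in sorted(tasks):   # consider tasks in order of start time
        if machines and machines[0] <= start:
            heapq.heapreplace(machines, finish)   # reuse the machine that frees up earliest
        else:
            heapq.heappush(machines, finish)      # no machine is free: allocate a new one
    return len(machines)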
19
Example
Given: a set T of n tasks, each having:
»a start time, s_i
»a finish time, f_i (where s_i < f_i)
»tasks: [1,4], [1,3], [2,5], [3,7], [4,7], [6,9], [7,8] (ordered by start)
Goal: perform all tasks on the minimum number of machines.
[Figure: the seven tasks drawn on a timeline from 1 to 9 and packed onto Machine 1, Machine 2, and Machine 3.]
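Running the task_schedule sketch from above on this example (under the convention that a task may start exactly when another task finishes on the same machine) reproduces the three machines shown in the figure:

tasks = [(1, 4), (1, 3), (2, 5), (3, 7), (4, 7), (6, 9), (7, 8)]
print(task_schedule(tasks))   # -> 3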