Algorithms CSCI 235, Spring 2019 Lecture 29 Greedy Algorithms
Greedy Algorithms Dynamic programming is one technique for solving optimization problems, but it can be more machinery than is needed. Many optimization problems require a series of steps, with a choice made at each step. Greedy algorithms make the choice that looks best at the moment. This locally optimal choice may lead to a globally optimal solution (i.e. an optimal solution to the entire problem).
Finding the optimal solution
[Figure: two plots of f(x), each starting from a point A and descending.] If we start at A and move in the direction of descent, we will end up at the local optimum, B. On the left graph, B is also the global optimum. On the right graph, the global optimum is elsewhere, at C.
When can we use Greedy algorithms?
We can use a greedy algorithm when the following are true: The greedy-choice property: a globally optimal solution can be arrived at by making a locally optimal (greedy) choice. The optimal-substructure property: an optimal solution contains within it optimal solutions to subproblems. Showing that the greedy choice leaves behind a similar but smaller subproblem is how we demonstrate that the problem has optimal substructure.
Dynamic programming vs. Greedy algorithms
Dynamic programming makes each choice based on the solutions to the subproblems. Greedy algorithms make the choice that seems best at the time, then solve the resulting subproblem. Dynamic programming solves problems from the bottom up; greedy algorithms solve problems from the top down.
Example: Scheduling Suppose we want to schedule competing activities that each want to use the same resource (e.g. a school gym). Suppose we cannot schedule two activities that overlap in time. Problem: Select the maximum number of mutually compatible activities (i.e. activities that do not overlap). Given: a set of activities S = {1, 2, 3, ..., n}. Each activity has a start time si and a finish time fi, with si < fi. Activities i and j are compatible if their time intervals do not overlap. Assume the activities are sorted by finish time, so that f1 <= f2 <= f3 <= ... <= fn. (This can be done in O(n lg n) time.)
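The sorting step assumed above can be sketched in Python. The sample activity data here is illustrative, not from the slides:

```python
# Sort activities by finish time before running the greedy selector.
# Each activity is an illustrative (start, finish) pair.
activities = [(3, 5), (1, 4), (5, 7), (0, 6)]
activities.sort(key=lambda a: a[1])  # O(n lg n) sort on finish time
print(activities)  # [(1, 4), (3, 5), (0, 6), (5, 7)]
```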
Activity Selection Algorithm
Idea: At each step, select the activity with the smallest finish time that is compatible with the activities already chosen.

Greedy-Activity-Selector(s, f)
    n = s.length
    A = {a1}             // automatically select the first activity
    j = 1                // last activity selected so far
    for i = 2 to n
        if si >= fj      // activity i starts after activity j finishes
            A = A ∪ {ai}     // add activity i to the set
            j = i            // record the last activity added
    return A
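A minimal Python sketch of this pseudocode (the function name and 0-based indexing are my own; the slides number activities from 1):

```python
def greedy_activity_selector(s, f):
    """Select a maximum set of mutually compatible activities.

    Assumes the activities are already sorted so that
    f[0] <= f[1] <= ... <= f[n-1].
    Returns the 0-based indices of the selected activities.
    """
    n = len(s)
    A = [0]      # always select the first activity (earliest finish)
    j = 0        # index of the last activity selected so far
    for i in range(1, n):
        if s[i] >= f[j]:   # activity i starts after activity j finishes
            A.append(i)    # add activity i to the set
            j = i          # record the last activity added
    return A

print(greedy_activity_selector([1, 3, 0, 5], [4, 5, 6, 7]))  # [0, 3]
```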
Example Activities: s = [1, 3, 0, 5], f = [4, 5, 6, 7]
[Figure: the four activities drawn as intervals on a timeline.] We will show how the algorithm works in class.
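The selection on this data can be traced with a short script following the slide's greedy rule (using 1-based activity numbers, as on the slides):

```python
# Trace the greedy rule on the slide's example data.
s = [1, 3, 0, 5]
f = [4, 5, 6, 7]
selected, last_finish = [1], f[0]   # always take activity 1 first
for i in range(1, len(s)):
    if s[i] >= last_finish:         # compatible with the last selection
        selected.append(i + 1)      # record its 1-based number
        last_finish = f[i]
print(selected)  # [1, 4]: activities 1 and 4 are mutually compatible
```

Activity 2 (start 3) and activity 3 (start 0) both begin before activity 1 finishes at time 4, so only activity 4 can be added.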
Proving that the solution is optimal
Suppose you have an optimal solution, A, for the set S. If A starts with activity 1, it starts with the greedy choice. If A starts with some other activity, k, then, since the activities in A are mutually compatible, fk <= si for every other activity i in A. Because the finish times are sorted, f1 <= fk, so f1 <= si as well. So we can substitute activity 1 for activity k in A, and it will be compatible with all the other activities: A' = (A - {k}) ∪ {1}. A' is also optimal (it has the same number of activities as A).
Finishing the proof Once the greedy choice of activity 1 is made, we can find an optimal solution by finding an optimal solution among all activities in S that are compatible with activity 1. This subproblem has the same form as the original problem, so we can again make a greedy choice for its first activity. By induction, making the greedy choice at every step yields an optimal solution.
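As an informal check of this optimality argument (my own addition, not part of the slides), we can compare the greedy answer against an exhaustive search over all subsets on small random instances:

```python
# Sketch: verify on small random instances that the greedy selection
# size matches the true optimum found by brute force.
from itertools import combinations
import random

def greedy_size(s, f):
    """Size of the greedy selection; assumes activities sorted by finish."""
    count, last_finish = 1, f[0]
    for i in range(1, len(s)):
        if s[i] >= last_finish:
            count += 1
            last_finish = f[i]
    return count

def brute_force_size(s, f):
    """Largest mutually compatible subset, by checking every subset."""
    n, best = len(s), 0
    for r in range(1, n + 1):
        for subset in combinations(range(n), r):
            # Compatible iff no two chosen intervals overlap.
            ok = all(f[a] <= s[b] or f[b] <= s[a]
                     for a, b in combinations(subset, 2))
            if ok:
                best = max(best, r)
    return best

random.seed(0)
for _ in range(100):
    n = random.randint(1, 6)
    acts = sorted(((random.randint(0, 9), random.randint(1, 6))
                   for _ in range(n)),
                  key=lambda a: a[0] + a[1])   # sort by finish = start + length
    s = [start for start, length in acts]
    f = [start + length for start, length in acts]
    assert greedy_size(s, f) == brute_force_size(s, f)
print("greedy matches brute force on all random instances")
```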