CS 46101 Section 600 CS 56101 Section 002 Dr. Angela Guercio Spring 2010.

Greedy Algorithms

Similar to dynamic programming. Used for optimization problems. Idea: When we have a choice to make, make the one that looks best right now. Make a locally optimal choice in hope of getting a globally optimal solution. Greedy algorithms don't always yield an optimal solution. But sometimes they do. We'll see a problem for which they do. Then we'll look at some general characteristics of when greedy algorithms give optimal solutions.

n activities require exclusive use of a common resource. For example, scheduling the use of a classroom. Set of activities S = {a_1, ..., a_n}. Activity a_i needs the resource during the period [s_i, f_i), a half-open interval, where s_i = start time and f_i = finish time. Goal: Select the largest possible set of nonoverlapping (mutually compatible) activities.

Note: Could have many other objectives: schedule the room for the longest time, maximize income from rental fees, etc. Assume that activities are sorted by finish time: f_1 ≤ f_2 ≤ f_3 ≤ ... ≤ f_{n-1} ≤ f_n.

S sorted by finish time:
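For concreteness, an instance in the spirit of the textbook's running example (values are illustrative, not necessarily those on the slide):

    i   :  1  2  3  4  5  6  7  8  9 10 11
    s_i :  1  3  0  5  3  5  6  8  8  2 12
    f_i :  4  5  6  7  9  9 10 11 12 14 16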

S_ij = {a_k ∈ S : f_i ≤ s_k < f_k ≤ s_j} = activities that start after a_i finishes and finish before a_j starts. Activities in S_ij are compatible with all activities that finish by f_i, and all activities that start no earlier than s_j.
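As a quick illustration (a minimal sketch; the 0-based activity list below is assumed for illustration, not taken from the slides):

    # Activities as (start, finish) pairs, already sorted by finish time.
    acts = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9),
            (6, 10), (8, 11), (8, 12), (2, 14), (12, 16)]

    def S(i, j):
        # Indices k with f_i <= s_k and f_k <= s_j, i.e., a_k in S_ij.
        fi, sj = acts[i][1], acts[j][0]
        return [k for k, (s, f) in enumerate(acts) if fi <= s and f <= sj]

    print(S(0, 10))  # [3, 5, 6, 7, 8]: activities fitting between a_0 and a_10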

Let A_ij be a maximum-size set of mutually compatible activities in S_ij. Let a_k ∈ A_ij be some activity in A_ij. Then we have two subproblems:
1. Find mutually compatible activities in S_ik (activities that start after a_i finishes and that finish before a_k starts).
2. Find mutually compatible activities in S_kj (activities that start after a_k finishes and that finish before a_j starts).

Let A_ik = A_ij ∩ S_ik = activities in A_ij that finish before a_k starts, and A_kj = A_ij ∩ S_kj = activities in A_ij that start after a_k finishes. Then A_ij = A_ik ∪ {a_k} ∪ A_kj, and |A_ij| = |A_ik| + |A_kj| + 1. Claim: An optimal solution A_ij must include optimal solutions for the two subproblems for S_ik and S_kj.

Use the usual cut-and-paste argument. We show the claim for S_kj; the proof for S_ik is symmetric. Suppose we could find a set A'_kj of mutually compatible activities in S_kj, where |A'_kj| > |A_kj|. Then use A'_kj instead of A_kj when solving the subproblem for S_ij. The size of the resulting set of mutually compatible activities would be |A_ik| + |A'_kj| + 1 > |A_ik| + |A_kj| + 1 = |A_ij|. This contradicts the assumption that A_ij is optimal.

Since an optimal solution A_ij must include optimal solutions to the subproblems for S_ik and S_kj, we could solve by dynamic programming. Let c[i, j] = size of an optimal solution for S_ij. Then c[i, j] = c[i, k] + c[k, j] + 1. But we don't know which activity a_k to choose, so we have to try them all: c[i, j] = 0 if S_ij = ∅, and c[i, j] = max{c[i, k] + c[k, j] + 1 : a_k ∈ S_ij} otherwise. Could then develop a recursive algorithm and memoize it. Or could develop a bottom-up algorithm and fill in table entries. Instead, we will look at a greedy approach.
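A memoized sketch of this recurrence (a minimal illustration, assuming sentinel activities a_0 with f_0 = 0 and a_{n+1} with s_{n+1} = ∞; the arrays reuse the illustrative values above):

    from functools import lru_cache

    # a_1..a_11 from the example, plus sentinels a_0 and a_12.
    s = [0, 1, 3, 0, 5, 3, 5, 6, 8, 8, 2, 12, float('inf')]
    f = [0, 4, 5, 6, 7, 9, 9, 10, 11, 12, 14, 16, float('inf')]
    n = 11

    @lru_cache(maxsize=None)
    def c(i, j):
        # Size of a maximum set of mutually compatible activities in S_ij.
        best = 0
        for k in range(i + 1, j):
            if f[i] <= s[k] and f[k] <= s[j]:   # a_k lies in S_ij
                best = max(best, c(i, k) + c(k, j) + 1)
        return best

    print(c(0, n + 1))  # 4, e.g. {a_1, a_4, a_8, a_11}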

Choose an activity to add to the optimal solution before solving subproblems. For the activity-selection problem, we can get away with considering only the greedy choice: the activity that leaves the resource available for as many other activities as possible. Question: Which activity leaves the resource available for the most other activities? Answer: The first activity to finish. (If more than one activity has the earliest finish time, we can choose any such activity.) Since activities are sorted by finish time, just choose activity a_1.

That leaves only one subproblem to solve: finding a maximum-size set of mutually compatible activities that start after a_1 finishes. (We don't have to worry about activities that finish before a_1 starts: since s_1 < f_1 and no activity a_i has finish time f_i < f_1, no activity a_i has f_i ≤ s_1.) Since we have only one subproblem to solve, simplify notation: S_k = {a_i ∈ S : s_i ≥ f_k} = activities that start after a_k finishes.

By making the greedy choice of a_1, S_1 remains as the only subproblem to solve. By optimal substructure, if a_1 is in an optimal solution, then an optimal solution to the original problem consists of a_1 plus all activities in an optimal solution to S_1. But we need to prove that a_1 is always part of some optimal solution.

Theorem: If S_k is nonempty and a_m has the earliest finish time in S_k, then a_m is included in some optimal solution. Proof: Let A_k be an optimal solution to S_k, and let a_j have the earliest finish time of any activity in A_k. If a_j = a_m, we are done. Otherwise, let A'_k = (A_k − {a_j}) ∪ {a_m}, that is, A_k but with a_m substituted for a_j. Claim: The activities in A'_k are disjoint. Proof of claim: The activities in A_k were disjoint, and f_m ≤ f_j because a_m has the earliest finish time in S_k, so a_m overlaps nothing else in A'_k. (This proves the claim.) Since |A'_k| = |A_k|, we conclude that A'_k is also an optimal solution to S_k, and it includes a_m. (This proves the theorem.)

So we don't need the full power of dynamic programming. We don't need to work bottom-up. Instead, we can just repeatedly choose the activity that finishes first, keep only the activities that are compatible with that one, and repeat until no activities remain. We can work top-down: make a choice, then solve a subproblem. We don't have to solve subproblems before making a choice.

Start and finish times are represented by arrays s and f, where f is assumed to be already sorted in monotonically increasing order. To start, add a fictitious activity a_0 with f_0 = 0, so that S_0 = S, the entire set of activities. Procedure REC-ACTIVITY-SELECTOR takes as parameters the arrays s and f, the index k of the current subproblem, and the number n of activities in the original problem.
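The slide's pseudocode is not reproduced in the transcript; a minimal Python sketch of the same procedure (function name mirrors REC-ACTIVITY-SELECTOR; arrays are 1-based via the fictitious a_0) might look like this:

    def rec_activity_selector(s, f, k, n):
        # Greedily pick the first activity a_m compatible with a_k,
        # then recurse on the single remaining subproblem S_m.
        m = k + 1
        while m <= n and s[m] < f[k]:   # skip activities that start too early
            m += 1
        if m <= n:
            return [m] + rec_activity_selector(s, f, m, n)
        return []

    # Index 0 holds the fictitious a_0 with f_0 = 0, so this call solves S_0 = S.
    s = [0, 1, 3, 0, 5, 3, 5, 6, 8, 8, 2, 12]
    f = [0, 4, 5, 6, 7, 9, 9, 10, 11, 12, 14, 16]
    print(rec_activity_selector(s, f, 0, 11))  # [1, 4, 8, 11]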

Idea: The while loop checks a_{k+1}, a_{k+2}, ..., a_n until it finds an activity a_m that is compatible with a_k (this needs s_m ≥ f_k). If the loop terminates because such an a_m is found (m ≤ n), then recursively solve S_m, and return this solution along with a_m. If the loop never finds a compatible a_m (m > n), then just return the empty set. Time: Θ(n); each activity is examined exactly once, assuming the activities are already sorted by finish times.

Can convert the recursive algorithm to an iterative one. It's already almost tail recursive.
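A corresponding iterative sketch (same assumed 1-based arrays as above):

    def greedy_activity_selector(s, f):
        # Iterative version: a_1 finishes first, so it is always a safe choice.
        n = len(s) - 1              # index 0 is the fictitious a_0
        selected = [1]
        k = 1                       # last activity chosen
        for m in range(2, n + 1):
            if s[m] >= f[k]:        # a_m starts after a_k finishes
                selected.append(m)
                k = m
        return selected

    s = [0, 1, 3, 0, 5, 3, 5, 6, 8, 8, 2, 12]
    f = [0, 4, 5, 6, 7, 9, 9, 10, 11, 12, 14, 16]
    print(greedy_activity_selector(s, f))  # [1, 4, 8, 11]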

The choice that seems best at the moment is the one we go with. What did we do for activity selection?
1. Determine the optimal substructure.
2. Develop a recursive solution.
3. Show that if we make the greedy choice, only one subproblem remains.
4. Prove that it's always safe to make the greedy choice.
5. Develop a recursive greedy algorithm.
6. Convert it to an iterative algorithm.

At first, it looked like dynamic programming. In the activity-selection problem, we started out by defining subproblems S_ij, where both i and j varied. But then we found that making the greedy choice allowed us to restrict the subproblems to the form S_k. Could instead have gone straight for the greedy approach: in our first crack at defining subproblems, use the S_k form. Could then have proven that the greedy choice a_m (the first activity in S_k to finish), combined with an optimal solution to the remaining set S_m of compatible activities, gives an optimal solution to S_k.

Typically, we streamline these steps:
1. Cast the optimization problem as one in which we make a choice and are left with one subproblem to solve.
2. Prove that there's always an optimal solution that makes the greedy choice, so that the greedy choice is always safe.
3. Demonstrate optimal substructure by showing that, having made the greedy choice, combining an optimal solution to the remaining subproblem with the greedy choice gives an optimal solution to the original problem.

There is no general way to tell whether a greedy algorithm is optimal, but two key ingredients are (1) the greedy-choice property and (2) optimal substructure.

Greedy-choice property: We can assemble a globally optimal solution by making locally optimal (greedy) choices. Dynamic programming: Make a choice at each step; the choice depends on knowing optimal solutions to subproblems, so we solve subproblems first and work bottom-up. Greedy: Make a choice at each step, but make the choice before solving the subproblems; work top-down.

Typically we show the greedy-choice property the way we did for activity selection: Look at an optimal solution. If it includes the greedy choice, done. Otherwise, modify the optimal solution to include the greedy choice, yielding another solution that's just as good. Can get efficiency gains from the greedy-choice property: preprocess the input to put it into greedy order, or, if the data are dynamic, use a priority queue.

The knapsack problem is a good example of the difference. 0-1 knapsack problem: n items. Item i is worth $v_i and weighs w_i pounds. Find a most valuable subset of items with total weight ≤ W. Have to either take an item or not take it; can't take part of it.

Fractional knapsack problem: Like the 0-1 knapsack problem, but we can take a fraction of an item. Both have optimal substructure. But the fractional knapsack problem has the greedy-choice property, and the 0-1 knapsack problem does not. To solve the fractional problem, rank items by value/weight: v_i/w_i. Let v_i/w_i ≥ v_{i+1}/w_{i+1} for all i. Take items in decreasing order of value/weight. We will take all of the items with the greatest value/weight, and possibly a fraction of the next item.
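A sketch of the greedy for the fractional problem, together with the classic instance (item values assumed for illustration) showing why the same ratio-greedy fails for 0-1:

    def fractional_knapsack(items, W):
        # items: list of (value, weight); take in decreasing value/weight,
        # splitting the last item taken if it does not fully fit.
        total = 0.0
        for v, w in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
            if W <= 0:
                break
            take = min(w, W)            # whole item, or the fraction that fits
            total += v * take / w
            W -= take
        return total

    items = [(60, 10), (100, 20), (120, 30)]
    print(fractional_knapsack(items, 50))   # 240.0: items 1, 2, and 2/3 of item 3

    # Applied 0-1 style, the ratio-greedy would take items 1 and 2 (value 160),
    # but the optimal 0-1 solution is items 2 and 3 (value 220): the 0-1
    # problem lacks the greedy-choice property.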

Next time: Chapter 22, Elementary graph algorithms (breadth-first search, depth-first search); Chapter 23, Minimum spanning trees.