Greedy: A Comparison

What is a Greedy Algorithm?
Solves an optimization problem: the solution is "best" in some sense.
Greedy strategy:
– At each decision point, do what looks best "locally".
– The choice does not depend on evaluating potential future choices or solving subproblems.
– Top-down algorithmic structure: with each step, reduce the problem to a smaller problem.
Optimal substructure:
– An optimal solution contains within it optimal solutions to subproblems.
Greedy choice property:
– "Locally best" = globally best.

See also: a greedy algorithm for the N-queens problem.

Examples:
– Minimum Spanning Tree
– Minimum Spanning Forest
– Dijkstra Shortest Path
– Huffman Codes
– Fractional Knapsack
– Activity Selection

Minimum Spanning Tree
For an undirected, connected, weighted graph G = (V, E): produce a minimum-weight tree of edges that includes every vertex.
[Figure: example graph on vertices A–G; source: textbook, Cormen et al.]
– Kruskal's strategy. Invariant: a minimum-weight spanning forest, which becomes a single tree at the end. Time: O(|E| lg |E|) given fast FIND-SET and UNION.
– Prim's strategy. Invariant: a minimum-weight tree, which spans all vertices at the end. Time: O(|E| lg |V|) = O(|E| lg |E|); slightly faster with a fast priority queue.
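As an illustration of the forest-growing strategy (not from the slides), here is a minimal Python sketch of Kruskal's algorithm; the encoding of edges as (weight, u, v) tuples with 0-based vertex ids is an assumption for the example.

    def kruskal_mst(num_vertices, edges):
        # edges: iterable of (weight, u, v) tuples with 0-based vertex ids
        parent = list(range(num_vertices))

        def find(x):                          # FIND-SET with path compression
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        mst = []
        for w, u, v in sorted(edges):         # sorting dominates: O(|E| lg |E|)
            ru, rv = find(u), find(v)
            if ru != rv:                      # edge joins two trees of the forest
                parent[ru] = rv               # UNION
                mst.append((u, v, w))
        return mst

Each accepted edge merges two trees of the current forest, maintaining the invariant until a single spanning tree remains.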

Single-Source Shortest Paths: Dijkstra's Algorithm
For a weighted, directed graph G = (V, E) with nonnegative edge weights.
[Figure: example graph on vertices A–G; source: textbook, Cormen et al.]
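The greedy step is to repeatedly settle the unsettled vertex with the smallest tentative distance. A minimal Python sketch (not from the slides), assuming the graph is given as an adjacency dict mapping each vertex to (neighbor, weight) pairs:

    import heapq

    def dijkstra(graph, source):
        # graph: {u: [(v, w), ...]} adjacency lists, nonnegative weights w
        dist = {source: 0}
        heap = [(0, source)]                  # (tentative distance, vertex)
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue                      # stale heap entry: u already settled
            for v, w in graph.get(u, []):
                if d + w < dist.get(v, float("inf")):
                    dist[v] = d + w           # greedy relaxation
                    heapq.heappush(heap, (d + w, v))
        return dist

Nonnegative weights are what make the greedy choice safe: once a vertex is popped with the smallest distance, no later path can improve it.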

Huffman Codes
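The slide gives only the title; the greedy idea is to repeatedly merge the two least frequent subtrees. A minimal Python sketch, assuming symbol frequencies are given as a dict (this input format is an assumption):

    import heapq

    def huffman_codes(freq):
        # freq: {symbol: frequency}; returns {symbol: bit string}
        heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
        heapq.heapify(heap)
        counter = len(heap)                   # tiebreaker so dicts are never compared
        while len(heap) > 1:
            f1, _, c1 = heapq.heappop(heap)   # greedy step: merge the two
            f2, _, c2 = heapq.heappop(heap)   # least-frequent subtrees
            merged = {sym: "0" + code for sym, code in c1.items()}
            merged.update({sym: "1" + code for sym, code in c2.items()})
            heapq.heappush(heap, (f1 + f2, counter, merged))
            counter += 1
        return heap[0][2] if heap else {}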

Fractional Knapsack
[Figure: item 1, item 2, item 3 (values $60, $100, $120) and a "knapsack".]
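Here the greedy rule is to take items in decreasing order of value per unit weight, taking a fraction of the last item if it does not fit whole. A minimal Python sketch, assuming items are (value, weight) pairs; the weights and capacity in the comment are the textbook's example numbers, not stated on this slide:

    def fractional_knapsack(items, capacity):
        # items: list of (value, weight) pairs; fractions of items are allowed
        total = 0.0
        # Greedy choice: best value per unit of weight first.
        for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
            if capacity <= 0:
                break
            take = min(weight, capacity)      # whole item, or the fraction that fits
            total += value * take / weight
            capacity -= take
        return total

    # With the textbook's numbers (weights 10, 20, 30 lbs, capacity 50 lbs):
    # fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50) -> 240.0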

Example: Activity Selection
Problem instance:
– A set S = {1, 2, ..., n} of n activities.
– Each activity i has a start time s_i and a finish time f_i.
– Activities i and j are compatible iff they are non-overlapping: [s_i, f_i) ∩ [s_j, f_j) = ∅.
– Objective: select a maximum-sized set of mutually compatible activities.

The Greedy Algorithm
– S' = presort the activities in S by nondecreasing finish time and renumber.
– GREEDY-ACTIVITY-SELECTOR(S', f)
      n = S'.length
      A = {1}
      j = 1
      for i = 2 to n
          if s_i ≥ f_j
              A = A ∪ {i}
              j = i
      return A
source: textbook Cormen et al.
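The same pseudocode as runnable Python; the encoding of activities as a list of (start, finish) pairs is an assumption for the example:

    def greedy_activity_selector(activities):
        # activities: list of (start, finish) pairs
        if not activities:
            return []
        order = sorted(range(len(activities)), key=lambda i: activities[i][1])
        selected = [order[0]]                 # the first activity to finish
        last_finish = activities[order[0]][1]
        for i in order[1:]:
            start, finish = activities[i]
            if start >= last_finish:          # compatible with the last one chosen
                selected.append(i)
                last_finish = finish
        return selected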

Why does this all work? Why does it provide an optimal (maximum-sized) solution? It is clear that it provides a solution (a set of non-overlapping activities), but there is no obvious reason to believe that the set is of maximum size. We start by restating the problem in such a way that a dynamic programming solution can be constructed. That solution will be shown to be maximal; it will then be modified into a greedy solution in such a way that maximality is preserved.

Consider the set S of activities, sorted in monotonically increasing order of finish time, where activity a_i corresponds to the time interval [s_i, f_i):

    i   |  1   2   3   4   5   6   7   8   9  10  11
    s_i |  1   3   0   5   3   5   6   8   8   2  12
    f_i |  4   5   6   7   9   9  10  11  12  14  16

(the activity table of Cormen et al.). The subset {a_3, a_9, a_11} consists of mutually compatible activities but is not maximal. The subsets {a_1, a_4, a_8, a_11} and {a_2, a_4, a_9, a_11} are also compatible and are both maximal.

First Step: find an optimal substructure, an optimal solution to a subproblem that can be extended to a full solution. Define an appropriate space of subproblems:

    S_ij = {a_k ∈ S : f_i ≤ s_k < f_k ≤ s_j},

the set of all activities that start no earlier than a_i finishes and finish no later than a_j starts, and hence are compatible with both a_i and a_j. Add two fictitious activities, a_0 : [..., 0) and a_{n+1} : [∞, ...), which come before and after, respectively, all activities in S = S_{0,n+1}. Assume the activities are sorted in nondecreasing order of finish time: f_0 ≤ f_1 ≤ ... ≤ f_n < f_{n+1}. Then S_ij is empty whenever i ≥ j.

We define the subproblems: find a maximum-size subset of mutually compatible activities from S_ij, for 0 ≤ i < j ≤ n+1. What is the substructure? Suppose a solution to S_ij contains some activity a_k, with f_i ≤ s_k < f_k ≤ s_j. Then a_k generates two subproblems, S_ik and S_kj, and the solution to S_ij is the union of a solution to S_ik, the singleton {a_k}, and a solution to S_kj. Any solution to a larger problem can be obtained by patching together solutions to smaller problems, and cardinality (the number of activities) is additive across the pieces.

Optimal Substructure: assume A_ij is an optimal solution to S_ij containing an activity a_k. Then A_ij contains a solution to S_ik (the activities that end before the beginning of a_k) and a solution to S_kj (the activities that begin after the end of a_k). If these contained solutions were not themselves maximal, then one could take a maximal solution for, say, S_ik and splice it with {a_k} and the solution to S_kj to obtain a solution for S_ij of greater cardinality, contradicting the optimality assumed for A_ij.

Use Optimal Substructure to construct an optimal solution: any solution to a nonempty problem S_ij includes some activity a_k, and any optimal solution to S_ij must contain optimal solutions to S_ik and S_kj. For each a_k in S_{0,n+1}, find (recursively) optimal solutions of S_{0k} and S_{k,n+1}, say A_{0k} and A_{k,n+1}. Splice them together, along with a_k, to form candidate solutions A_k. Take a maximum-size A_k as an optimal solution A_{0,n+1}. We need to look at the recursion in more detail... what does it cost?

Second Step: the recursive algorithm. Let c[i, j] denote the maximum number of compatible activities in S_ij. It is easy to see that c[i, j] = 0 whenever i ≥ j, since then S_ij is empty. If, in a nonempty set of activities S_ij, the activity a_k occurs as part of a maximal compatible subset, this generates two subproblems, S_ik and S_kj, and the equation

    c[i, j] = c[i, k] + 1 + c[k, j],

which tells us how the cardinalities are related. The problem is that k is not known a priori. Solution:

    c[i, j] = max_{i < k < j} (c[i, k] + 1 + c[k, j]),   if S_ij ≠ ∅.

And one can write an easy "bottom-up" (dynamic programming) algorithm to compute a maximal solution.
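A direct bottom-up rendering of this recurrence in Python (a sketch, not from the slides), with the sentinel activities a_0 and a_{n+1} added as described above:

    def max_compatible(s, f):
        # s[i], f[i]: start/finish of activity i (0-based input, sorted by finish)
        n = len(s)
        s = [0] + list(s) + [float("inf")]    # sentinels: a_0 and a_{n+1}
        f = [0] + list(f) + [float("inf")]
        c = [[0] * (n + 2) for _ in range(n + 2)]
        for span in range(2, n + 2):          # solve subproblems by increasing j - i
            for i in range(0, n + 2 - span):
                j = i + span
                for k in range(i + 1, j):
                    if f[i] <= s[k] and f[k] <= s[j]:   # a_k lies in S_ij
                        c[i][j] = max(c[i][j], c[i][k] + 1 + c[k][j])
        return c[0][n + 1]

The triple loop makes the cost of this approach plain: cubic in n, which is what the "better mousetrap" below avoids.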

A better mousetrap. Recall S_ij = {a_k ∈ S : f_i ≤ s_k < f_k ≤ s_j}.
Theorem 16.1. Consider any nonempty subproblem S_ij, and let a_m be the activity in S_ij with the earliest finish time: f_m = min{f_k : a_k ∈ S_ij}. Then:
1. a_m is used in some maximum-size subset of mutually compatible activities of S_ij (e.g., A_ij = A_im ∪ {a_m} ∪ A_mj).
2. The subproblem S_im is empty, so that choosing a_m leaves the subproblem S_mj as the only one that may be nonempty.
Proof: (2) If S_im were nonempty, then S_ij would contain an activity finishing prior to a_m. Contradiction. (1) Suppose A_ij is a maximal subset. Either a_m is in A_ij, and we are done, or it is not. If not, let a_k be the earliest-finishing activity in A_ij. It can be replaced by a_m (since a_k finishes no earlier than a_m), giving a maximal subset that contains a_m.

Why is this a better mousetrap? The dynamic programming solution requires solving j − i − 1 subproblems to solve S_ij; what does that cost in total? Theorem 16.1 gives us conditions under which solving S_ij requires solving only ONE subproblem, since the other subproblems implied by the dynamic programming recurrence are empty or irrelevant (they might give us other maximal solutions, but we need just one). That is far less computation. Another benefit comes from the observation that the problem can now be solved in a "top-down" fashion: take the earliest-finishing activity a_m, and you are left with the problem S_{m,n+1}. It is easy to see that each activity needs to be looked at only once: linear time (after sorting).

The Algorithm:

R_A_S(s, f, k, n)
1  m = k + 1
2  while m ≤ n and s[m] < f[k]     // find the first activity in S_k to finish
3      m = m + 1
4  if m ≤ n
5      then return {a_m} ∪ R_A_S(s, f, m, n)
6      else return ∅

The initial call is R_A_S(s, f, 0, n), using the sentinel f[0] = 0. The time is, fairly obviously, Θ(n).
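The same recursion as runnable Python; the example call in the comments uses the activity table shown earlier:

    def recursive_activity_selector(s, f, k, n):
        # s, f indexed 1..n, sorted by finish time, with sentinel f[0] = 0
        m = k + 1
        while m <= n and s[m] < f[k]:     # find the first activity in S_k to finish
            m += 1
        if m <= n:
            return [m] + recursive_activity_selector(s, f, m, n)
        return []

    # On the example table (index 0 is the sentinel):
    # s = [0, 1, 3, 0, 5, 3, 5, 6, 8, 8, 2, 12]
    # f = [0, 4, 5, 6, 7, 9, 9, 10, 11, 12, 14, 16]
    # recursive_activity_selector(s, f, 0, 11) -> [1, 4, 8, 11]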

[Figure: trace of the Recursive Activity Selector algorithm; Cormen, Leiserson, Rivest, Stein, Fig. 16.1.]

A Slightly Different Perspective. Rather than starting from a dynamic programming approach and moving to a greedy strategy, start by identifying the characteristics of the greedy approach:
1. Make a choice and leave a subproblem to solve.
2. Prove that the greedy choice is always safe: there is an optimal solution starting from a greedy choice.
3. Prove that, having made a greedy choice, the solution of the remaining subproblem can be added to the greedy choice, providing a solution to the original problem.

Greedy Choice Property. A choice that looks best in the current problem will work: there is no need to look ahead to its implications for the solution. This is important because it reduces the amount of computation we have to do. We can't expect this property to hold all the time: consider maximizing a function with multiple relative maxima on an interval. The gradient method would make a "greedy choice" at each step, but may well lead to a relative maximum that is far from the actual maximum.

Optimal Substructure. While the greedy choice property tells us that we should be able to solve the problem with little computation, we also need to know that the solution can be properly reconstructed: an optimal solution contains optimal "sub-solutions" to its subproblems. An induction can then be used to turn the construction around (bottom-up). For a problem possessing only the optimal substructure property, there is little chance that we will find methods more efficient than dynamic programming (with memoization).

Note:
– If the optimization problem does not have the greedy choice property, a greedy algorithm may still be useful in bounding the optimal solution.
– Example: a minimization problem.
[Figure: a scale of solution values for a minimization problem, showing a lower bound, the optimal (unknown) value, and an upper bound.]