Chapter 8 The Greedy Approach.


Introduction
Greedy algorithms are usually designed to solve optimization problems. They typically consist of an iterative procedure in which each iteration makes a locally optimal choice. A greedy algorithm makes what looks like the best choice on the basis of little computation, without worrying about the future. It builds a solution step by step: each step increases the size of the partial solution and is based on local optimization, making the choice that produces the largest immediate gain while maintaining feasibility. Greedy algorithms are typically efficient; the hard part is proving that the algorithm does indeed solve the problem.

Fractional knapsack
Given n items of sizes s1,...,sn, values v1,...,vn, and the knapsack capacity C, the objective is to find nonnegative real numbers x1,...,xn that maximize Σ xi·vi subject to Σ xi·si ≤ C and 0 ≤ xi ≤ 1, where xi is the fraction taken of item i.
Greedy strategy: for each item compute yi = vi/si. Sort the items by decreasing ratio, and fill the knapsack with as much as possible of the first item, then the second, and so forth.
Characteristics: the algorithm is a simple iterative procedure that selects the item producing the largest immediate gain while maintaining feasibility.
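A minimal Python sketch of this greedy strategy (the function name and item data are illustrative; x[i] is the fraction of item i taken):

```python
def fractional_knapsack(sizes, values, capacity):
    # Greedy strategy from the slide: visit items in decreasing
    # value/size ratio and take as much of each as still fits.
    order = sorted(range(len(sizes)),
                   key=lambda i: values[i] / sizes[i], reverse=True)
    x = [0.0] * len(sizes)          # fraction of each item taken
    remaining = capacity
    for i in order:
        if remaining <= 0:
            break
        x[i] = min(1.0, remaining / sizes[i])   # as much of item i as fits
        remaining -= x[i] * sizes[i]
    total_value = sum(x[i] * values[i] for i in range(len(sizes)))
    return total_value, x
```

For sizes (10, 20, 30), values (60, 100, 120) and capacity 50, the ratios are 6, 5, 4, so the first two items are taken whole and two thirds of the third, for a total value of 240.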

The shortest path problem
Let G=(V,E) be a directed graph in which each edge has a nonnegative length, and a distinguished vertex s called the source. Then determine the distance from s to every other vertex in V, where the distance from vertex s to vertex x is defined as the length of a shortest path from s to x.
1. X ← {1}; Y ← V - {1}
2. For each vertex v ∈ Y, if there is an edge from 1 to v then let λ[v] (the label of v) be the length of that edge; otherwise let λ[v] ← ∞. Let λ[1] ← 0
3. while Y ≠ ∅
4. Let y ∈ Y be such that λ[y] is minimum
5. move y from Y to X
6. update the labels of those vertices in Y that are adjacent to y
7. end while

Algorithm 8.1 DIJKSTRA
Input: A weighted directed graph G=(V,E), where V={1,...,n}
Output: The distance from vertex 1 to every other vertex in G
1. X ← {1}; Y ← V - {1}; λ[1] ← 0
2. for y ← 2 to n
3. if y is adjacent to 1 then λ[y] ← length[1,y]
4. else λ[y] ← ∞
5. end if
6. end for
7. for j ← 1 to n-1
8. Let y ∈ Y be such that λ[y] is minimum
9. X ← X ∪ {y} {add vertex y to X}
10. Y ← Y - {y} {delete vertex y from Y}
11. for each edge (y,w)
12. if w ∈ Y and λ[y] + length[y,w] < λ[w] then
13. λ[w] ← λ[y] + length[y,w]
14. end for
15. end for
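A runnable sketch of this label-update scheme; representing the graph as a dictionary from edges to lengths is an assumption for illustration:

```python
import math

def dijkstra(n, length):
    # length: dict mapping edge (u, v) -> nonnegative length; vertices are
    # 1..n with vertex 1 as the source, mirroring Algorithm 8.1.
    lam = {v: math.inf for v in range(1, n + 1)}  # lam[v] plays the role of λ[v]
    lam[1] = 0
    Y = set(range(2, n + 1))
    for v in Y:                                   # steps 2-6: initial labels
        if (1, v) in length:
            lam[v] = length[(1, v)]
    for _ in range(n - 1):
        y = min(Y, key=lambda v: lam[v])          # step 8: minimum label in Y
        Y.discard(y)                              # steps 9-10: move y out of Y
        for (u, w), l in length.items():          # steps 11-13: relax edges (y, w)
            if u == y and w in Y and lam[y] + l < lam[w]:
                lam[w] = lam[y] + l
    return lam
```

Scanning all edges in every iteration keeps the sketch short; the Θ(n²) version in the slide instead scans the vertices still in Y.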

Representation: adjacency lists.
Lemma 8.1 (correctness): In Algorithm DIJKSTRA, when a vertex y is chosen in Step 8, if its label λ[y] is finite, then λ[y] = δ[y], the true distance from vertex 1 to y.
Theorem 8.1 (time complexity): Given a directed graph G with nonnegative weights on its edges and a source vertex s, Algorithm DIJKSTRA finds the distance from s to every other vertex in Θ(n²) time.

Basic steps of a greedy algorithm
1. solution ← ∅
2. while not finish(solution)
3. x ← select(A)
4. if feasible(solution, x)
5. solution ← solution + x
6. end if
7. end while
8. return solution

Elements of the greedy strategy
Q: How can one tell whether a greedy algorithm will solve a particular optimization problem?
A: There is no way in general. However, if we can demonstrate the following two properties, then a greedy algorithm is likely to apply:
1. Greedy-choice property
2. Optimal substructure (the same as in dynamic programming)

Greedy-choice property
A globally optimal solution can be arrived at by making a locally optimal (greedy) choice. When we are considering which choice to make, we make the choice that looks best in the current problem and then solve the subproblem that remains after the choice is made. The choice may depend on the choices made so far, but it cannot depend on any future choices or on the solutions to subproblems. A greedy algorithm usually progresses in a top-down fashion, making one greedy choice after another and reducing each problem instance to a smaller one.
In dynamic programming, we also make a choice at each step, but the choice usually depends on the solutions to subproblems. Consequently we solve dynamic-programming problems in a bottom-up manner, progressing from smaller subproblems to larger subproblems. For a greedy algorithm, we must prove that a greedy choice at each step yields a globally optimal solution.

Optimal substructure A problem exhibits optimal substructure if an optimal solution to the problem contains within it optimal solutions to subproblems. In the 0-1 knapsack problem, when we consider an item for inclusion in the knapsack, we must compare the solution to the subproblem in which the item is included with the solution to the subproblem in which the item is excluded before we can make the choice. The problem formulated in this way gives rise to many overlapping subproblems--a hallmark of dynamic programming.

File compression
Suppose we are given a file, which is a string of characters. We wish to compress the file as much as possible in such a way that the original file can be reconstructed. Let the set of characters in the file be C={c1,c2,...,cn}, and let f(ci), 1 ≤ i ≤ n, be the frequency of character ci in the file. Since the frequency of some characters may be much larger than that of others, it is reasonable to use variable-length encodings. When the encodings vary in length, we stipulate that the encoding of one character must not be a prefix of the encoding of another character; such codes are called prefix codes.

Huffman code
Algorithm 8.6 HUFFMAN
Input: A set C={c1,...,cn} of n characters and their frequencies {f(c1),...,f(cn)}
Output: A Huffman tree (V,T) for C
1. Insert all characters into a min-heap H according to their frequencies
2. V ← C; T ← ∅
3. for j ← 1 to n-1
4. c ← DELETEMIN(H)
5. c' ← DELETEMIN(H)
6. f(v) ← f(c) + f(c') {v is a new node}
7. INSERT(H, v)
8. V ← V ∪ {v} {add v to V}
9. T ← T ∪ {(v,c), (v,c')} {make c and c' children of v in T}
10. end for
Time complexity: O(n log n)
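A minimal Python sketch of this construction using the standard-library heap (the function name and the code-extraction walk are illustrative; ties are broken arbitrarily, so the tree shape may vary while the total cost stays optimal):

```python
import heapq
import itertools

def huffman_codes(freq):
    # freq: dict character -> frequency. Repeatedly merge the two
    # lightest nodes, as in Algorithm 8.6. The counter breaks ties so
    # heapq never tries to compare tree nodes directly.
    counter = itertools.count()
    heap = [(f, next(counter), c) for c, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)       # two DELETEMIN calls
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(counter), (left, right)))
    codes = {}
    def walk(node, code):
        if isinstance(node, tuple):             # internal node: branch on 0/1
            walk(node[0], code + "0")
            walk(node[1], code + "1")
        else:
            codes[node] = code or "0"           # single-character edge case
        return codes
    return walk(heap[0][2], "")
```

For the classic frequencies 45, 13, 12, 16, 9, 5 the resulting code lengths give a total encoded size of 224 bits per 100 characters.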

Correctness
Lemma L4 (greedy-choice property): Let x and y be two characters in C having the lowest frequencies. Then there exists an optimal prefix code for C in which the codewords for x and y have the same length and differ only in the last bit. This lemma implies that the process of building up an optimal tree by mergers can begin with the greedy choice of merging together those two characters of lowest frequency.
Lemma L5 (optimal-substructure property): Let x and y be two characters in C with minimum frequency. Let C' be the alphabet C with characters x, y removed and (new) character z added. Define f for C' as for C, except that f(z) = f(x) + f(y). Let T' be any tree representing an optimal prefix code for C'. Then the tree T, obtained from T' by replacing the leaf node for z with an internal node having x and y as children, represents an optimal prefix code for C.
Theorem (correctness): Algorithm 8.6 produces an optimal prefix code.

A beautiful theory about greedy algorithms
Although the theory of matroids does not cover all cases for which a greedy method applies, it does cover many cases of practical interest. The theory is very beautiful, and the word "matroid" is due to H. Whitney (1907-1989).

Matroids
A matroid is an ordered pair M=(S,L) satisfying the following conditions:
1. S is a finite nonempty set
2. L is hereditary: L is a nonempty family of subsets of S, called the independent subsets of S, such that if B ∈ L and A ⊆ B, then A ∈ L. Obviously ∅ ∈ L
3. M satisfies the exchange property: if A ∈ L, B ∈ L and |A| < |B|, then there exists x ∈ B - A such that A ∪ {x} ∈ L

Graphic matroid
Given an undirected graph G=(V,E), the graphic matroid MG=(SG,LG) is defined by:
1. SG = E, the set of edges of G
2. If A is a subset of E, then A ∈ LG iff A is acyclic. That is, a set of edges A is independent iff the subgraph GA=(V,A) forms a forest
Theorem M1: If G=(V,E) is an undirected graph, then MG=(SG,LG) is a matroid

Invariant of matroid
Definition: Given a matroid M=(S,L), we call an element x ∉ A an extension of A ∈ L if x can be added to A while preserving independence. That is, x is an extension if A ∪ {x} ∈ L
Definition: An independent subset A in a matroid M is called maximal if it has no extensions
Theorem M2: All maximal independent subsets in a matroid have the same size
Example: Consider a graphic matroid MG for a connected, undirected graph G. Every maximal independent subset of MG must be a tree with exactly |V|-1 edges that connects all the vertices of G; such a tree is called a spanning tree of G

Weighted matroid
Definition: A matroid M=(S,L) is weighted if there is an associated weight function w that assigns a strictly positive weight w(x) to each element x ∈ S. The weight function w extends to subsets of S by summation: w(A) = Σ_{x ∈ A} w(x) for any A ⊆ S
Example: Let w(e) denote the length of an edge e in a graphic matroid MG; then w(A) is the total length of the edges in A

Why weighted matroids
Many problems for which a greedy approach provides optimal solutions can be formulated as finding a maximum-weight independent subset in a weighted matroid. We are given a weighted matroid M=(S,L) and we wish to find an independent set A ∈ L such that w(A) is maximized. We call such a subset, independent and of maximum possible weight, an optimal subset of the matroid. Because all weights are strictly positive, an optimal subset is always a maximal independent subset: it always helps to make A as large as possible.

Minimum spanning trees
MST: given a connected undirected graph G=(V,E) and a length function w such that w(e) is the length of edge e, find a subset of edges that connects all the vertices together and has minimum total length.
Solution: consider the weighted matroid MG with weight function w'(e) = w0 - w(e), where w0 is larger than the maximum length of any edge. Each maximal independent subset A corresponds to a spanning tree, and w'(A) = (|V|-1)·w0 - w(A). Hence any algorithm that can find an optimal subset A in an arbitrary weighted matroid can solve MST.

Greedy algorithm for a weighted matroid
Basic idea: consider each x ∈ S in turn, in order of monotonically decreasing weight, and immediately add it to the set A being accumulated if A ∪ {x} is independent
GREEDY(M, w)
1. A ← ∅ {now A is independent}
2. sort S into monotonically decreasing order by weight w
3. for each x ∈ S, taken in monotonically decreasing order by weight w(x)
4. if A ∪ {x} ∈ L then A ← A ∪ {x} {now A is still independent}
5. end for
6. return A
The time cost is O(n log n + n·f(n)). Step 4 requires a check of whether the set A ∪ {x} is independent, which costs O(f(n)) time
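The GREEDY routine above can be sketched with the independence test supplied as an oracle; here it is illustrated on the graphic matroid, where independence means acyclicity (all names are illustrative):

```python
def matroid_greedy(S, weight, independent):
    # Generic GREEDY: scan elements in monotonically decreasing weight
    # and keep x whenever A ∪ {x} stays independent.
    A = []
    for x in sorted(S, key=weight, reverse=True):
        if independent(A + [x]):
            A.append(x)
    return A

def acyclic(edges):
    # Independence oracle of the graphic matroid: an edge set is
    # independent iff it forms a forest, checked with union-find.
    parent = {}
    def find(v):
        parent.setdefault(v, v)
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False                    # edge would close a cycle
        parent[ru] = rv
    return True
```

On a triangle with edge weights 3, 2, 1, the two heaviest edges are kept and the third is rejected because it would close a cycle, yielding a maximum-weight spanning forest.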

Greedy-choice property Lemma L1: Let x be the first element of S such that {x} is independent, if any such x exists. If x exists, then there exists an optimal subset A of S that contains x

Other properties of matroids
Lemma L2: For any matroid M=(S,L), if x ∈ S is an extension of some independent subset A of S, then x is also an extension of ∅
Corollary: Let M=(S,L) be any matroid. If x ∈ S is not an extension of ∅, then x is not an extension of any independent subset A
Note: This corollary says that any element that cannot be used immediately can never be used. So GREEDY never makes an error when it skips over any initial element of S that is not an extension of ∅

Optimal substructure (matroids exhibit the optimal-substructure property)
Lemma L3: Let x ∈ S be the first element chosen by GREEDY for the weighted matroid M=(S,L). The remaining problem of finding a maximum-weight independent subset containing x reduces to finding a maximum-weight independent subset of the weighted matroid M'=(S',L'), where
S' = {y ∈ S | {x, y} ∈ L}
L' = {B ⊆ S - {x} | B ∪ {x} ∈ L},
and the weight function for M' is the weight function for M, restricted to S'. (We call M' the contraction of M by x)

Correctness of greedy algorithm Theorem M3: If M=(S,L) is a weighted matroid with weight function w, then GREEDY(M,w) returns an optimal subset

Minimum cost spanning trees Definition 8.1: Let G=(V,E) be a connected undirected graph with weights on its edges. A spanning tree (V,T) of G is a subgraph of G that is a tree. If G is weighted and the sum of the weights of the edges in T is minimum, then (V,T) is called a minimum cost spanning tree or simply a minimum spanning tree.

Greedy methods for MST
Q: How do we grow an MST by adding one edge at a time?
A: The algorithm manages a set of edges A, maintaining the following loop invariant: prior to each iteration, A is a subset of some MST. That is, at each step we determine an edge (u,v) such that A ∪ {(u,v)} is still a subset of some MST; such an edge (u,v) is called a safe edge for A

Generic MST algorithm
Given a connected, undirected graph G=(V,E) with weights on its edges
GENERIC-MST(G, w)
1. A ← ∅
2. while A does not form a spanning tree
3. find an edge (u,v) that is safe for A
4. A ← A ∪ {(u,v)}
5. end while
6. return A
In what follows, we provide a rule for recognizing safe edges; the two algorithms below use this rule efficiently

Cut and light edge
Definition: A cut (S, V-S) of an undirected graph G=(V,E) is a partition of V. An edge (u,v) ∈ E crosses the cut iff one of its endpoints is in S and the other is in V-S. We say that a cut respects a set A of edges if no edge in A crosses the cut.
Definition: An edge is a light edge crossing a cut if its weight is the minimum of any edge crossing the cut

Example of cut and light edge

Which edge is safe
Theorem M4: Given a connected, undirected graph G=(V,E) with weights on its edges, let A be a subset of E that is included in some MST for G, let (S, V-S) be any cut of G that respects A, and let (u,v) be a light edge crossing (S, V-S). Then edge (u,v) is safe for A
Corollary: Given a connected, undirected graph G=(V,E) with weights on its edges, let A be a subset of E that is included in some MST for G, and let C=(VC, EC) be a connected component (tree) in the forest GA=(V,A). If (u,v) is a light edge connecting C to some other component in GA, then (u,v) is safe for A

Ideas of the MST algorithms
Kruskal's algorithm: the set A is a forest. The safe edge added to A is always a least-weight edge in the graph that connects two distinct components.
Prim's algorithm: the set A forms a single tree. The safe edge added to A is always a least-weight edge connecting the tree to a vertex not in the tree.
The reason greedy algorithms are effective at finding MSTs is that the forests of a graph are exactly the independent sets of its graphic matroid.

Kruskal’s algorithm
Algorithm 8.3 KRUSKAL
Input: A weighted connected undirected graph G=(V,E) with n vertices
Output: The set of edges T of a minimum cost spanning tree for G
1. Sort the edges in E by nondecreasing weight
2. for each vertex v ∈ V
3. MAKESET({v})
4. end for
5. T ← ∅
6. while |T| < n-1
7. Let (x,y) be the next edge in E
8. if FIND(x) ≠ FIND(y) then
9. Add (x,y) to T
10. UNION(x,y)
11. end if
12. end while
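A runnable sketch of KRUSKAL, with a small union-find standing in for MAKESET/FIND/UNION (representing the graph as a list of weighted edges is an assumption):

```python
def kruskal(n, edges):
    # edges: list of (weight, u, v) with vertices 1..n.
    # Returns the MST edge set T.
    parent = list(range(n + 1))      # MAKESET for every vertex
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v
    T = []
    for w, x, y in sorted(edges):    # nondecreasing weight
        rx, ry = find(x), find(y)
        if rx != ry:                 # FIND(x) ≠ FIND(y): edge is safe
            T.append((x, y))
            parent[rx] = ry          # UNION(x, y)
        if len(T) == n - 1:
            break
    return T
```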

Prim’s algorithm
Algorithm 8.4 PRIM
Input: A weighted connected undirected graph G=(V,E), where V={1,2,...,n}
Output: The set of edges T of a minimum cost spanning tree for G
1. T ← ∅; X ← {1}; Y ← V - {1}
2. for y ← 2 to n
3. if y is adjacent to 1 then N[y] ← 1, C[y] ← c[1,y]
4. else C[y] ← ∞
5. end for
6. for j ← 1 to n-1 {find n-1 edges}
7. Let y ∈ Y be such that C[y] is minimum
8. T ← T ∪ {(y, N[y])} {add edge (y, N[y]) to T}
9. X ← X ∪ {y}, Y ← Y - {y} {move vertex y from Y to X}
10. for each vertex w ∈ Y that is adjacent to y
11. if c[y,w] < C[w] then N[w] ← y, C[w] ← c[y,w]
12. end for
13. end for
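A runnable sketch mirroring the array-based labels C and N of the PRIM pseudocode above (the cost-dictionary graph representation is an assumption):

```python
import math

def prim(n, cost):
    # cost: dict (u, v) -> weight for an undirected graph on vertices 1..n.
    # C[y] is the cheapest known connection from y to the growing tree,
    # N[y] its tree-side endpoint.
    def c(u, v):
        return cost.get((u, v), cost.get((v, u), math.inf))
    C = {y: c(1, y) for y in range(2, n + 1)}
    N = {y: 1 for y in range(2, n + 1)}
    T, Y = [], set(range(2, n + 1))
    for _ in range(n - 1):
        y = min(Y, key=lambda v: C[v])   # cheapest edge crossing the cut
        T.append((N[y], y))
        Y.discard(y)                     # move y from Y into the tree
        for w in Y:                      # update labels of vertices left in Y
            if c(y, w) < C[w]:
                C[w], N[w] = c(y, w), y
    return T
```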

A task-scheduling problem
Inputs:
A set S = {a1, a2, ..., an} of n unit-time tasks
A set of n integer deadlines d1, d2, ..., dn, such that 1 ≤ di ≤ n and task ai is supposed to finish by time di
A set of n nonnegative penalties w1, w2, ..., wn, such that we incur a penalty of wi if task ai is not finished by time di
Goal: find a schedule for S that minimizes the total penalty incurred for missed deadlines
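One standard greedy sketch for this problem (an assumption, since the slide stops at the problem statement): consider tasks in decreasing penalty order and place each in the latest free unit-time slot no later than its deadline; tasks that do not fit pay their penalty. All names are illustrative.

```python
def schedule(tasks):
    # tasks: list of (deadline, penalty) pairs for unit-time tasks.
    # Returns the total penalty of the tasks that end up late.
    n = len(tasks)
    slot_free = [True] * (n + 1)     # unit slots 1..n
    total_penalty = 0
    for d, w in sorted(tasks, key=lambda t: -t[1]):   # largest penalty first
        t = min(d, n)
        while t >= 1 and not slot_free[t]:
            t -= 1                   # latest free slot before the deadline
        if t >= 1:
            slot_free[t] = False     # task meets its deadline
        else:
            total_penalty += w       # no slot left: task is late
    return total_penalty
```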