Algorithm Design Techniques Greedy Approach vs Dynamic Programming


Algorithm Design Techniques: Greedy Approach vs Dynamic Programming. Textbooks: 1. Cormen, Leiserson, Rivest, and Stein (CLRS); 2. Goodrich and Tamassia.

Greedy Algorithms A greedy algorithm always makes the choice that looks best at the moment: it makes a locally optimal choice in the hope that it leads to a globally optimal solution. Greedy algorithms do not always yield optimal solutions. Examples: minimum spanning trees (Kruskal's and Prim's algorithms) and single-source shortest paths (Dijkstra's algorithm).

Activity Selection Problem: Schedule several competing activities that require exclusive use of a common resource. Goal: Select a maximum-size set of mutually compatible activities.

Activity Selection We have a set S = {a1, a2, …, an} of n proposed activities that wish to use a resource. Each activity ai has a start time si and a finish time fi, where 0 <= si < fi < ∞. Activities ai and aj are compatible if si >= fj or sj >= fi, i.e., their intervals do not overlap.

Activity Selection Let us consider a set of activities given by their start and finish times (the slide's table is not reproduced in the transcript). Several mutually compatible subsets of these activities exist; the goal is to find a largest one.

Activity Selection – Solution We sort the activities in monotonically increasing order of finish time, then greedily pick each activity that starts no earlier than the finish time of the last activity selected.

Activity Selection – Iterative solution
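The slide's iterative pseudocode is not reproduced in the transcript. A minimal Python sketch of the greedy selector, assuming activities are given as (start, finish) pairs (the function name and representation are illustrative, not from the slide):

def greedy_activity_selector(activities):
    # Sort by finish time, then scan once, keeping every activity
    # that starts no earlier than the last selected finish time.
    activities = sorted(activities, key=lambda a: a[1])
    selected = []
    last_finish = 0
    for start, finish in activities:
        if start >= last_finish:
            selected.append((start, finish))
            last_finish = finish
    return selected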

Activity Selection – Recursive solution
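The recursive variant is likewise missing from the transcript; a sketch under the same assumptions, with the input pre-sorted by finish time:

def recursive_activity_selector(acts, k=-1):
    # acts must be sorted by finish time; k is the index of the
    # most recently selected activity (-1 before any selection).
    m = k + 1
    last_finish = acts[k][1] if k >= 0 else 0
    while m < len(acts) and acts[m][0] < last_finish:
        m += 1                      # skip incompatible activities
    if m < len(acts):
        return [acts[m]] + recursive_activity_selector(acts, m)
    return []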

Activity Selection – Example 2 Schedule the activities (0,35), (10,20), (25,35), (5,15), (15,25), (13,16), (24,27).
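Feeding this instance to the iterative sketch above traces the greedy choices (a hypothetical driver; the slide's own worked answer is not in the transcript):

acts = [(0, 35), (10, 20), (25, 35), (5, 15), (15, 25), (13, 16), (24, 27)]
print(greedy_activity_selector(acts))   # [(5, 15), (15, 25), (25, 35)]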

Steps in Greedy Algorithm design
1. Cast the optimization problem as one in which we make a choice and are left with one subproblem to solve.
2. Prove that there is always an optimal solution to the original problem that makes the greedy choice, so that the greedy choice is always safe.
3. Demonstrate optimal substructure: having made the greedy choice, what remains is a subproblem with the property that if we combine an optimal solution to the subproblem with the greedy choice we have made, we arrive at an optimal solution to the original problem.

Task Scheduling Given: a set T of n tasks, each having a start time si and a finish time fi (where si < fi). Goal: Perform all the tasks using a minimum number of "machines." [Figure: a sample schedule of the tasks on Machines 1-3 over time slots 1-9.]

Task Scheduling Algorithm
Greedy choice: consider tasks by their start time and use as few machines as possible with this order.
Run time: O(n log n). Why? Sorting the tasks by start time dominates the running time.
Correctness: Suppose there is a better schedule that uses k - 1 machines while the algorithm uses k. Let i be the first task scheduled on machine k. Task i must conflict with k - 1 other tasks, all running at i's start time, so no non-conflicting schedule can use fewer than k machines.
Algorithm taskSchedule(T)
  Input: set T of tasks with start time si and finish time fi
  Output: non-conflicting schedule with minimum number of machines
  m ← 0   {number of machines}
  while T is not empty
    remove task i with smallest si
    if there is a machine j with no task conflicting with i then
      schedule i on machine j
    else
      m ← m + 1
      schedule i on machine m
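The "find a machine j for i" step can be made concrete with a priority queue. A minimal Python sketch, assuming tasks are (start, finish) pairs and keeping a min-heap of machine finish times (names are illustrative):

import heapq

def task_schedule(tasks):
    # Process tasks in order of start time; reuse the machine that
    # frees up earliest, or open a new one if none is free in time.
    machines = []                        # heap of current finish times
    for start, finish in sorted(tasks):
        if machines and machines[0] <= start:
            heapq.heapreplace(machines, finish)   # reuse a machine
        else:
            heapq.heappush(machines, finish)      # open a new machine
    return len(machines)                 # minimum number of machines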

The Fractional Knapsack Problem Given: A set S of n items, with each item i having a positive benefit bi and a positive weight wi. Goal: Choose items with maximum total benefit but with weight at most W. If we are allowed to take fractional amounts, then this is the fractional knapsack problem. In this case, we let xi denote the amount we take of item i. Objective: maximize Σ(i in S) bi (xi / wi). Constraint: Σ(i in S) xi <= W.

Example Given: A set S of n items, with each item i having a positive benefit bi and a positive weight wi. Goal: Choose items with maximum total benefit but with weight at most W = 10 ml ("knapsack").
Items:             1      2      3      4      5
Weight:         4 ml   8 ml   2 ml   6 ml   1 ml
Benefit:         $12    $32    $40    $30    $50
Value ($/ml):      3      4     20      5     50
Solution: 1 ml of item 5, 2 ml of item 3, 6 ml of item 4, 1 ml of item 2.

The Fractional Knapsack Algorithm
Greedy choice: Keep taking the item with the highest value (benefit-to-weight ratio vi = bi / wi).
Run time: O(n log n). Why? A heap (or an initial sort) ordered by value dominates the cost.
Correctness: Suppose there is a better solution. Then there is an item i with higher value than a chosen item j (i.e., vi > vj) but xi < wi and xj > 0. If we substitute an amount min{wi - xi, xj} of j with i, we get a better solution, a contradiction. Thus, there is no better solution than the greedy one.
Algorithm fractionalKnapsack(S, W)
  Input: set S of items with benefit bi and weight wi; max. weight W
  Output: amount xi of each item i to maximize benefit with total weight at most W
  for each item i in S
    xi ← 0
    vi ← bi / wi   {value}
  w ← 0   {total weight}
  while w < W
    remove item i with highest vi
    xi ← min{wi, W - w}
    w ← w + min{wi, W - w}
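A runnable Python sketch of the same greedy rule, assuming items are (benefit, weight) pairs and using a sort in place of the heap (names are illustrative):

def fractional_knapsack(items, W):
    # Take items greedily by benefit/weight ratio until the
    # knapsack is full, taking a fraction of the last item.
    total = 0.0
    room = W
    for benefit, weight in sorted(items, key=lambda it: it[0] / it[1],
                                  reverse=True):
        if room <= 0:
            break
        take = min(weight, room)         # whole item, or a fraction
        total += benefit * (take / weight)
        room -= take
    return total

items = [(12, 4), (32, 8), (40, 2), (30, 6), (50, 1)]
print(fractional_knapsack(items, 10))    # 124.0, matching the example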

Huffman Coding

Tree for fixed length coding

Variable length coding Develop codes in such a way that no codeword is also a prefix of some other codeword. Such codes are called prefix codes.
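The coding trees on the neighboring slides are not reproduced in the transcript. For reference, the running example from CLRS, which these slides appear to follow, uses a 6-character alphabet:

Character:             a    b    c    d     e     f
Frequency (x1000):    45   13   12   16     9     5
Fixed-length code:   000  001  010  011   100   101
Variable-length:       0  101  100  111  1101  1100

Every fixed-length codeword costs 3 bits, while the variable-length prefix code gives frequent characters shorter codewords.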

Tree for variable length coding

Properties of optimal code tree An optimal code for a file is always represented by a full binary tree, in which every nonleaf node has two children. The tree for an optimal prefix code has exactly |C| leaves, one for each letter of the alphabet C, and exactly |C| - 1 internal nodes.

Properties of optimal code tree For each character c in the alphabet C, let the attribute c.freq denote the frequency of c in the file and let dT(c) denote the depth of c's leaf in the tree. Note that dT(c) is also the length of the codeword for character c. The number of bits required to encode a file is thus B(T) = Σ(c in C) c.freq · dT(c), which we define as the cost of the tree T.

Huffman coding
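The slide's pseudocode (CLRS's HUFFMAN procedure) is not in the transcript. A minimal Python sketch using a heap of (frequency, tree) pairs; the tie-breaking counter and the tuple representation of internal nodes are implementation assumptions:

import heapq
from itertools import count

def huffman(freqs):
    # freqs: dict mapping character -> frequency. Repeatedly merge
    # the two least-frequent trees, as in the figures that follow.
    tie = count()                        # keeps heap entries comparable
    heap = [(f, next(tie), c) for c, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(tie), (left, right)))
    return heap[0][2]                    # root of the code tree

def codes(tree, prefix=""):
    # Walk the tree: left edge = '0', right edge = '1'.
    if isinstance(tree, tuple):
        yield from codes(tree[0], prefix + "0")
        yield from codes(tree[1], prefix + "1")
    else:
        yield tree, prefix

tree = huffman({"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5})
print(dict(codes(tree)))   # an optimal prefix code (0/1 labels may vary)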

Creating Huffman tree [Figures: the Huffman tree is built step by step, repeatedly merging the two lowest-frequency nodes until one tree remains.]

Making Change Problem: A dollar amount to reach and a collection of coin denominations to use to get there. Configuration: A dollar amount yet to return to a customer, plus the coins already returned. Objective function: Minimize the number of coins returned. Greedy solution: Always return the largest coin you can. Example 1: Coins are valued $.32, $.08, $.01. This has the greedy-choice property, since no amount over $.32 can be made with a minimum number of coins by omitting a $.32 coin (similarly for amounts over $.08 but under $.32). Example 2: Coins are valued $.30, $.20, $.05, $.01. This does not have the greedy-choice property, since $.40 is best made with two $.20s, but the greedy solution picks three coins: one $.30 and two $.05s. See the sketch below.
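A small Python sketch of the greedy rule, illustrating the failure in Example 2 (denominations in cents to avoid floating point; names are illustrative):

def greedy_change(amount, coins):
    # Repeatedly take the largest coin that still fits.
    result = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            result.append(coin)
            amount -= coin
    return result

print(greedy_change(40, [30, 20, 5, 1]))   # [30, 5, 5] -- 3 coins
# The optimal answer is [20, 20] -- 2 coins, so greedy fails here.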

Dynamic programming When developing a dynamic-programming algorithm, we follow a sequence of four steps: 1. Characterize the structure of an optimal solution. 2. Recursively define the value of an optimal solution. 3. Compute the value of an optimal solution, typically in a bottom-up fashion. 4. Construct an optimal solution from computed information.

Matrix Multiplication

Matrix-chain multiplication Given a sequence (chain) <A1, A2, …, An> of n matrices, compute the product A1 · A2 · · · An while minimizing the number of scalar multiplications. Matrix multiplication is associative, so every parenthesization gives the same product, but not the same cost. Example: A1 is 10x100, A2 is 100x5, A3 is 5x50. A1 · (A2 · A3) needs 75000 multiplications; (A1 · A2) · A3 needs 7500 multiplications.

Matrix Chain Multiplication Assume that matrix Ai has dimensions pi-1 x pi for i = 1, 2, …, n. Let m[i, j] be the minimum number of scalar multiplications needed to compute the product Ai · Ai+1 · · · Aj. The recursive formulation of the optimum solution is
m[i, j] = 0   if i = j
m[i, j] = min over i <= k < j of { m[i, k] + m[k+1, j] + pi-1 · pk · pj }   if i < j

Matrix Chain Multiplication
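The bottom-up pseudocode from this slide (CLRS's MATRIX-CHAIN-ORDER) is missing from the transcript. A Python sketch, where p is the dimension sequence and Ai is p[i-1] x p[i] (names are illustrative):

def matrix_chain_order(p):
    # m[i][j] = min scalar multiplications for A_i ... A_j
    # (1-indexed to match the recurrence; row/column 0 unused).
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):           # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = min(m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                          for k in range(i, j))
    return m

m = matrix_chain_order([30, 35, 15, 5, 10, 20, 25])
print(m[1][6])                               # 15125, as computed below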

Worked example: p = <30, 35, 15, 5, 10, 20, 25>, i.e., A1 is 30x35, A2 is 35x15, A3 is 15x5, A4 is 5x10, A5 is 10x20, A6 is 20x25. Sample entries of the m table:
m[1, 2] = 0 + 0 + 30 · 35 · 15 = 15750
m[2, 4] = min { 0 + 750 + 35 · 15 · 10 = 6000, 2625 + 0 + 35 · 5 · 10 = 4375 } = 4375
Finally, m[1, 6] is the minimum over the split points k = 1, …, 5:
k = 1: 0 + 10500 + 30 · 35 · 25 = 36750
k = 2: 15750 + 5375 + 30 · 15 · 25 = 32375
k = 3: 7875 + 3500 + 30 · 5 · 25 = 15125
k = 4: 9375 + 5000 + 30 · 10 · 25 = 21875
k = 5: 11875 + 0 + 30 · 20 · 25 = 26875
so m[1, 6] = 15125, achieved at k = 3.

Memoized Matrix Chain
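The memoized (top-down) pseudocode is likewise missing from the transcript; a sketch reusing the dimension sequence p from above, with Python's cache standing in for the lookup table:

from functools import lru_cache

def memoized_matrix_chain(p):
    n = len(p) - 1

    @lru_cache(maxsize=None)
    def lookup(i, j):
        # Compute m[i, j] on demand and cache the result.
        if i == j:
            return 0
        return min(lookup(i, k) + lookup(k + 1, j) + p[i - 1] * p[k] * p[j]
                   for k in range(i, j))

    return lookup(1, n)

print(memoized_matrix_chain([30, 35, 15, 5, 10, 20, 25]))   # 15125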

Longest Common Subsequence S1 = ACCGGTCGAGTGCGCGGAAGCCGGCCCGAA S2 = GTCGTTCGGAATGCCGTTGCTCTGTAAA

Longest Common Subsequence S1 = ACCGGTCGAGTGCGCGGAAGCCGGCCCGAA S2 = GTCGTTCGGAATGCCGTTGCTCTGTAAA S3 = GTCGTCGGAAGCCGGCCGAA is a longest common subsequence of S1 and S2.

Longest Common Subsequence – definitions A subsequence of a given sequence is just the given sequence with zero or more elements left out. Given two sequences X and Y, a sequence Z is a common subsequence of X and Y if Z is a subsequence of both X and Y.

Theorem – Optimal substructure of an LCS Let X = <x1, x2, …, xm> and Y = <y1, y2, …, yn> be sequences, and let Z = <z1, z2, …, zk> be any LCS of X and Y. (Here Xi denotes the prefix <x1, …, xi>, and similarly for Y and Z.)
1. If xm = yn, then zk = xm = yn and Zk-1 is an LCS of Xm-1 and Yn-1.
2. If xm ≠ yn, then zk ≠ xm implies that Z is an LCS of Xm-1 and Y.
3. If xm ≠ yn, then zk ≠ yn implies that Z is an LCS of X and Yn-1.

LCS – A recursive solution Let c[i, j] be the length of an LCS of the prefixes Xi and Yj. Then
c[i, j] = 0   if i = 0 or j = 0
c[i, j] = c[i-1, j-1] + 1   if i, j > 0 and xi = yj
c[i, j] = max(c[i, j-1], c[i-1, j])   if i, j > 0 and xi ≠ yj

LCS Algorithm – Page 1

LCS Algorithm – Page 2
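The two pages of pseudocode (CLRS's LCS-LENGTH) are not in the transcript. A bottom-up Python sketch of the recurrence above (names are illustrative):

def lcs_length(X, Y):
    # c[i][j] = length of an LCS of X[:i] and Y[:j].
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i][j - 1], c[i - 1][j])
    return c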

Print LCS
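The slide's PRINT-LCS procedure is also missing; an LCS can be reconstructed by walking the c table backwards, as in this sketch:

def print_lcs(c, X, Y):
    # Trace back from c[m][n], emitting matched characters.
    out, i, j = [], len(X), len(Y)
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:
            out.append(X[i - 1])
            i, j = i - 1, j - 1
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))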

Example X = A B C B D A B Y = B D C A B A
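Running the two sketches above on this instance (a hypothetical driver; the slide's own filled-in table is not reproduced):

X, Y = "ABCBDAB", "BDCABA"
c = lcs_length(X, Y)
print(c[len(X)][len(Y)], print_lcs(c, X, Y))   # 4 BCBA (one LCS of length 4)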

Longest Common substring Let M[i, j] be the length of the longest common substring ending at ai in the first string and at bj in the second, i.e., the number of consecutive matching letters ending at (and including) ai and bj. Then M[i, j] = M[i-1, j-1] + 1 if ai = bj, and 0 otherwise; the answer is the maximum entry of the table.

Longest Common substring
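The table itself is not shown in the transcript; a Python sketch of the longest-common-substring DP (function name and the demo strings are illustrative):

def longest_common_substring(A, B):
    # M[i][j] = length of the longest common suffix of A[:i] and B[:j];
    # the longest common substring ends where M is maximal.
    m, n = len(A), len(B)
    M = [[0] * (n + 1) for _ in range(m + 1)]
    best, end = 0, 0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if A[i - 1] == B[j - 1]:
                M[i][j] = M[i - 1][j - 1] + 1
                if M[i][j] > best:
                    best, end = M[i][j], i
    return A[end - best:end]

print(longest_common_substring("ABCBDAB", "BDCABA"))   # "AB"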

0-1 Knapsack Example: Take W = 10 and
Item   Weight   Value
1        6      $30
2        3      $14
3        4      $16
4        2      $9

0-1 Knapsack There are two versions of this problem. If there are unlimited quantities of each item available, the optimal choice is to pick item 1 and two of item 4 (total: $48). On the other hand, if there is one of each item, then the optimal knapsack contains items 1 and 3 (total: $46).

Knapsack with repetition We can shrink the original problem in two ways: we can either look at smaller knapsack capacities w <= W, or we can look at fewer items (for instance, items 1, 2,… j, for j <= n). Define K(w) = maximum value achievable with a knapsack of capacity w.

Knapsack with repetition Can we express this in terms of smaller subproblems? If the optimal solution to K(w) includes item i, then removing this item from the knapsack leaves an optimal solution to K(w - wi). In other words, K(w) is simply K(w - wi) + vi, for some i. We don't know which i, so we need to try all possibilities: K(w) = max over i with wi <= w of { K(w - wi) + vi }, with K(0) = 0.

Knapsack with repetition
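The slide's pseudocode is missing from the transcript; a direct Python sketch of the recurrence, assuming parallel lists of weights w and values v (names are illustrative):

def knapsack_with_repetition(W, w, v):
    # K[u] = best value achievable with capacity u, any item reusable.
    K = [0] * (W + 1)
    for u in range(1, W + 1):
        K[u] = max([K[u - wi] + vi for wi, vi in zip(w, v) if wi <= u],
                   default=0)
    return K[W]

print(knapsack_with_repetition(10, [6, 3, 4, 2], [30, 14, 16, 9]))   # 48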

Knapsack without repetition K(w, j) = maximum value achievable using a knapsack of capacity w and items 1, …, j. Either item j is used in the optimal solution or it isn't: K(w, j) = max { K(w - wj, j - 1) + vj, K(w, j - 1) }, with K(w, 0) = 0.

Knapsack without repetition
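And a matching Python sketch for the no-repetition version, under the same assumptions as above:

def knapsack_without_repetition(W, w, v):
    # K[u][j] = best value with capacity u using items 1..j once each.
    n = len(w)
    K = [[0] * (n + 1) for _ in range(W + 1)]
    for j in range(1, n + 1):
        for u in range(W + 1):
            K[u][j] = K[u][j - 1]                      # skip item j
            if w[j - 1] <= u:                          # or take it once
                K[u][j] = max(K[u][j], K[u - w[j - 1]][j - 1] + v[j - 1])
    return K[W][n]

print(knapsack_without_repetition(10, [6, 3, 4, 2], [30, 14, 16, 9]))   # 46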