COSC 3101N, J. Elder

Announcements
Midterm Exam: Fri Feb 27, CSE C
– Two Blocks: 16:00-17:30 and 17:30-19:00
– The exam will be 1.5 hours in length.
– You can attend either block.
– The exam will be closed book.
– Please remember to bring ID.
Course Evaluation:
– Wed, Mar 24, 19:00
– Volunteer?

Making Change (Revisited)

Greed and Money
Suppose we are given a system of denominations. How do we decide whether the greedy algorithm always produces an optimal representation, for all values of N (the change to be made)? It turns out that this problem can be solved efficiently (Pearson 1994).
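The greedy change-maker itself is only a few lines. Here is a minimal MATLAB sketch (the function name greedychange and its interface are ours, not from the slides): it repeatedly takes as many of the largest remaining coin as fit.

  function counts = greedychange(denoms, N)
  % Greedy change-making.
  % denoms: coin denominations, sorted in decreasing order (our assumption).
  % counts(k): number of coins of value denoms(k) used to represent N.
  counts = zeros(size(denoms));
  for k = 1:length(denoms)
      counts(k) = floor(N / denoms(k));      % take as many as possible
      N = N - counts(k) * denoms(k);         % make change for the remainder
  end

For example, greedychange([25 10 5 1], 67) returns [2 1 1 2]. Whether such an output is optimal for every N is exactly the question Pearson's algorithm decides.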

Greed and Money (cntd…)

Greed and Money (cntd…)
References:
– J.O. Shallit. What this country needs is an 18-cent piece. Math. Intelligencer 25 (2) (2003).
– D. Pearson. A polynomial-time algorithm for the change-making problem. Technical Report TR, Department of Computer Science, Cornell University, June 1994.

Dynamic Programming

Example 1: Rock Climbing Problem
A rock climber wants to get from the bottom of a rock to the top by the safest possible path. At every step, he reaches for handholds above him; some holds are safer than others. From every place, he can only reach a few of the nearest handholds.

Rock Climbing (cont)
At every step our climber can reach exactly three handholds: directly above, above and to the left, and above and to the right. Suppose we have a wall instead of the rock, with a table of “danger ratings” provided. The “danger” of a path is the sum of the danger ratings of all handholds on the path.

Rock Climbing (cont)
We represent the wall as a table; every cell of the table contains the danger rating of the corresponding block. The obvious greedy algorithm does not give an optimal solution: on the slides' example it finds a path of rating 13, while the rating of an optimal path is 12. However, we can solve this problem by a dynamic programming strategy in polynomial time.

Idea: once we know the rating of a path to every handhold on a layer, we can easily compute the ratings of the paths to the holds on the next layer. For the top layer, that gives us an answer to the problem itself.

For every handhold, there is only one “path” rating. Once we have reached a hold, we don’t need to know how we got there to move to the next level. This is called an “optimal substructure” property. Once we know optimal solutions to subproblems, we can compute an optimal solution to the problem itself.

Dynamic programming
Step 1: Describe an array of values you want to compute.
Step 2: Give a recurrence for computing later values from earlier ones (bottom-up).
Step 3: Give a high-level program.
Step 4: Show how to use values in the array to compute an optimal solution.

Rock climbing: step 1.
Step 1: Describe an array of values you want to compute.
For 1 ≤ i ≤ n and 1 ≤ j ≤ m, define A(i,j) to be the cumulative rating of the least dangerous path from the bottom to the hold (i,j). The rating of the best path to the top will be the minimal value in the last row of the array.

Rock climbing: step 2.
Step 2: Give a recurrence for computing later values from earlier ones (bottom-up).
Let C(i,j) be the danger rating of the hold (i,j). There are three cases for A(i,j):
– Left edge (j = 1): A(i,j) = C(i,j) + min{A(i-1,j), A(i-1,j+1)}
– Right edge (j = m): A(i,j) = C(i,j) + min{A(i-1,j-1), A(i-1,j)}
– Middle: A(i,j) = C(i,j) + min{A(i-1,j-1), A(i-1,j), A(i-1,j+1)}
For the first row (i = 1), A(i,j) = C(i,j).

Rock climbing: simpler step 2
Add an initialization row: A(0,j) = 0 (no danger to stand on the ground). Add two initialization columns: A(i,0) = A(i,m+1) = ∞ (it is infinitely dangerous to try to hold on to the air where the wall ends). Now the recurrence becomes, for every i, j:
A(i,j) = C(i,j) + min{A(i-1,j-1), A(i-1,j), A(i-1,j+1)}
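This recurrence translates directly into a short bottom-up program. The slides give no code for this example, so the following MATLAB sketch is one way to realize it (the name climbcost and the padding scheme are ours):

  function best = climbcost(C)
  % Minimum cumulative danger of a bottom-to-top path on a climbing wall.
  % C(i,j): danger rating of the hold in row i (counted from the ground),
  % column j.  A step goes from (i,j) to (i+1,j-1), (i+1,j) or (i+1,j+1).
  [n, m] = size(C);
  A = zeros(n+1, m+2);                 % A(i+1,j+1) plays the role of A(i,j)
  A(:, 1) = inf;  A(:, m+2) = inf;     % "air" columns: A(i,0) = A(i,m+1) = inf
  A(1, 2:m+1) = 0;                     % ground row: A(0,j) = 0
  for i = 1:n
      for j = 1:m
          A(i+1, j+1) = C(i, j) + min([A(i, j), A(i, j+1), A(i, j+2)]);
      end
  end
  best = min(A(n+1, 2:m+1));           % least dangerous rating at the top row

The table has O(nm) entries and each takes O(1) time, so the whole computation is polynomial, as claimed.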

Rock climbing: example
[Tables of cell ratings C(i,j) and cumulative ratings A(i,j), filled in row by row.]
Initialization: A(i,0) = A(i,m+1) = ∞, A(0,j) = 0.
The values in the first row are the same as C(i,j).
A(2,1) = 5 + min{∞, 3, 2} = 7.
A(2,2) = 7 + min{3, 2, 5} = 9.
A(2,3) = 5 + min{2, 5, 4} = 7.
The best cumulative rating on the second row is 5.
The best cumulative rating on the third row is 7.
The best cumulative rating on the last row is 12, so the rating of the best path to the top is 12.
To find the actual path we need to retrace backwards the decisions made during the calculation of A(i,j):
– The last hold was (4,4).
– The hold before the last was (3,4), since min{13, 7, 8} was 7.
– The hold before that was (2,5), since min{7, 10, 5} was 5.
– Finally, the first hold was (1,4), since min{5, 4, 8} was 4.
We are done!

Example 2: The Activity Selection Problem
Ingredients:
– Instances: Events with starting and finishing times <s_1,f_1>, <s_2,f_2>, …, <s_n,f_n>.
– Solutions: A set of events that do not overlap.
– Value of Solution: The number of events scheduled.
– Goal: Given a set of events, schedule as many as possible.

From Lecture 6: the problem can be solved by a greedy algorithm.
Greedy criterion: Earliest Finishing Time. Schedule the event that will free up your room for someone else as soon as possible.
Motivation: it works!
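For reference, the earliest-finishing-time rule takes only a few lines. A minimal MATLAB sketch (the name eftselect is ours, not from Lecture 6):

  function sched = eftselect(s, f)
  % Greedy activity selection by earliest finishing time.
  % s(i), f(i): start/finish times, assumed sorted by finishing time.
  % sched: indices of the scheduled events.
  sched = [];
  lastf = -inf;
  for i = 1:length(s)
      if s(i) >= lastf          % event i does not overlap the chosen ones
          sched(end+1) = i;     % schedule it
          lastf = f(i);
      end
  end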

But what if activities have different values?
Activity Selection with Profits: each activity i now carries a profit g(i) (the notation used in the slides that follow), and the goal is to schedule a set of non-overlapping activities of maximum total profit.

Will a greedy algorithm based on finishing time still work? No! For example, a short activity with profit 1 that finishes first can exclude a single longer overlapping activity with profit 100.

Dynamic Programming Solution

Step 1. Define an array of values to compute
For 0 ≤ i ≤ n, let A(i) be the maximum profit obtainable by scheduling a subset of the first i activities (sorted by finishing time); A(n) is then the value of an optimal schedule.

Step 2. Provide a Recurrent Solution
A(i) = max{ A(i-1), g(i) + A(H(i)) }, where H(i) is the last activity to finish before activity i begins (0 if there is none):
– A(i-1): decide not to schedule activity i.
– g(i): profit from scheduling activity i.
– A(H(i)): profit from scheduling activities that end before activity i begins.

Step 3. Provide an Algorithm

  function A = actselwithp(g, H)
  % Maximum-profit activity selection; assumes inputs sorted by finishing time.
  % g(i): profit of activity i; H(i): last activity to finish before
  % activity i begins (0 if none).
  % MATLAB arrays are 1-based, so A(i+1) here holds the A(i) of Step 1.
  n = length(g);
  A = zeros(1, n+1);                       % A(1) = 0: no activities yet
  for i = 1:n
      A(i+1) = max(A(i), g(i) + A(H(i)+1));
  end

Running time? O(n).

Step 4. Compute Optimal Solution

  function actstring = printasp(A, H, i, actstring)
  % Recursively print the indices of an optimal schedule, using the table A
  % computed by actselwithp (1-based, so A(i+1) is the value for the first
  % i activities).  Initial call: printasp(A, H, n, '').
  if i == 0
      return
  end
  if A(i+1) > A(i)                         % activity i was scheduled
      actstring = printasp(A, H, H(i), actstring);
      actstring = [actstring, sprintf('%d ', i)];
  else                                     % activity i was skipped
      actstring = printasp(A, H, i-1, actstring);
  end

Running time? O(n).

Example
Activity i:  1  2  3  4
Start s_i:   0  2  3  2
Finish f_i:  [values shown on the slides]
Profit g_i:  [values shown on the slides]
The slides fill in H(i) one activity at a time, giving H(1) = 0, H(2) = 0, H(3) = 1, H(4) = 0.

Example 3: Scheduling Jobs with Deadlines, Profits and Durations

Dynamic Programming Solution

Step 1. Define an array of values to compute
For 0 ≤ i ≤ n and 0 ≤ t ≤ d (the latest deadline), let A(i,t) be the maximum profit obtainable by scheduling a subset of the first i jobs (sorted by deadline) within the time interval [0, t].

Step 2. Provide a Recurrent Solution
With l_i, d_i and g_i the duration, deadline and profit of job i:
A(i,t) = max{ A(i-1,t), g_i + A(i-1, min(t, d_i) - l_i) },
where the second option is available only if min(t, d_i) - l_i ≥ 0:
– A(i-1,t): decide not to schedule job i.
– g_i: profit from job i.
– A(i-1, min(t, d_i) - l_i): profit from scheduling jobs that end before job i begins.

Step 2 (cntd…): Proving the Recurrent Solution
We effectively schedule job i at the latest possible time. This leaves the largest and earliest contiguous block of time for scheduling jobs with earlier deadlines.

Step 3. Provide an Algorithm
Running time? O(nd).
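The slide's pseudocode is not preserved here, so the following is a minimal MATLAB sketch consistent with the recurrence above (the name jobschedule and the argument names l, dl, g, d are ours):

  function A = jobschedule(l, dl, g, d)
  % Profit-maximizing job scheduling on one machine.
  % l(i), dl(i), g(i): duration, deadline and profit of job i (jobs sorted
  % by deadline, dl(i) <= d).  A(i+1,t+1) holds the A(i,t) of Step 1.
  n = length(g);
  A = zeros(n+1, d+1);
  for i = 1:n
      for t = 0:d
          A(i+1, t+1) = A(i, t+1);         % option 1: skip job i
          s = min(t, dl(i)) - l(i);        % latest feasible start of job i
          if s >= 0                        % option 2: run job i last
              A(i+1, t+1) = max(A(i+1, t+1), g(i) + A(i, s+1));
          end
      end
  end

The two nested loops over n jobs and d+1 time values give the O(nd) bound; A(n+1, d+1) is the optimal profit.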

Step 4. Compute Optimal Solution
Running time? O(n).

Example 4: The (General) Knapsack Problem

The Knapsack Problem: A Special Case of Job Scheduling
The general knapsack problem can be treated and solved as a special case of the job scheduling problem: an object of weight w and value g becomes a job of duration w, profit g and deadline C (the knapsack capacity), so the weight axis [0, C] plays the role of the time axis [0, d].
[Figure: objects i, j, k stacked along the weight axis up to C, beside the corresponding jobs i, j, k on the time axis up to d.]
Running time? O(nC).
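As a quick illustration of the reduction, assuming the hypothetical jobschedule sketch above:

  % 0-1 knapsack with weights w, values v and capacity C, solved by viewing
  % every object as a job of duration w(i), deadline C and profit v(i).
  w = [2 3 4];  v = [3 4 5];  C = 6;
  A = jobschedule(w, repmat(C, size(w)), v, C);
  best = A(end, end);               % maximum total value that fits (here 8)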

Example 5: Longest Common Subsequence
Input: two sequences, X = x_1, …, x_m and Y = y_1, …, y_n.
Output: a subsequence common to both whose length is longest.
Note: a subsequence doesn't have to be consecutive, but it has to be in order.

Examples
For instance, for X = ABCBDAB and Y = BDCABA, one longest common subsequence is BCBA, of length 4.

Brute-force Algorithm
Check every subsequence of X against Y. There are 2^m subsequences of X, and testing each one against Y takes O(n) time, so the brute-force approach takes exponential time.

Optimal Substructure

Step 1. Define an array of values to compute
For 0 ≤ i ≤ m and 0 ≤ j ≤ n, let c(i,j) be the length of a longest common subsequence of the prefixes x_1 … x_i and y_1 … y_j.

Step 2. Provide a Recurrent Solution
c(i,j) = 0 if i = 0 or j = 0 (an input sequence is empty);
c(i,j) = c(i-1,j-1) + 1 if x_i = y_j (the last elements match: they must be part of an LCS);
c(i,j) = max{ c(i,j-1), c(i-1,j) } otherwise (the last elements don't match: at most one of them is part of an LCS).

Step 3. Provide an Algorithm
Running time? O(mn).
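The slide's pseudocode is not preserved here; a minimal MATLAB sketch of the table computation (the name lcslength is ours; indices are shifted by one because MATLAB arrays are 1-based):

  function c = lcslength(X, Y)
  % Bottom-up LCS table: c(i+1,j+1) is the length of an LCS of X(1:i), Y(1:j).
  m = length(X);  n = length(Y);
  c = zeros(m+1, n+1);                 % first row/column encode empty prefixes
  for i = 1:m
      for j = 1:n
          if X(i) == Y(j)              % last elements match
              c(i+1, j+1) = c(i, j) + 1;
          else                         % at most one last element is in the LCS
              c(i+1, j+1) = max(c(i+1, j), c(i, j+1));
          end
      end
  end

Filling the (m+1)-by-(n+1) table takes O(mn) time, matching the bound above.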

Step 4. Compute Optimal Solution
Running time? O(m+n).
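One way to retrace the table and recover an actual LCS, again as a MATLAB sketch (the name lcsstring is ours; c is assumed to come from lcslength above):

  function s = lcsstring(X, Y, c)
  % Recover one LCS by walking the table c backwards from (m, n).
  i = length(X);  j = length(Y);  s = '';
  while i > 0 && j > 0
      if X(i) == Y(j)
          s = [X(i), s];               % x_i = y_j is part of the LCS
          i = i - 1;  j = j - 1;
      elseif c(i, j+1) >= c(i+1, j)    % follow the larger subproblem
          i = i - 1;
      else
          j = j - 1;
      end
  end

Every step decreases i or j, so the retrace takes O(m+n) time. For example, lcsstring('ABCBDAB', 'BDCABA', lcslength('ABCBDAB', 'BDCABA')) returns 'BCBA'.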

Example

Example 6: Optimal Binary Search Trees

Expected Search Cost
Which BST is more efficient?

Observations

Optimal Substructure

Recursive Solution

Recursive Solution (cntd…)

Step 2. Provide a Recurrent Solution
For keys k_i, …, k_j with access probabilities p_i, …, p_j, let e(i,j) be the expected search cost of an optimal BST on those keys, and w(i,j) = p_i + … + p_j. With e(i,j) = 0 for an empty range,
e(i,j) = min over i ≤ r ≤ j of { e(i,r-1) + e(r+1,j) + w(i,j) }:
– e(i,r-1): expected cost of search for the left subtree.
– w(i,j): expected cost of forming the tree from the root and subtrees (every key's depth increases by one).
– e(r+1,j): expected cost of search for the right subtree.

Step 3. Provide an Algorithm
Running time? O(n^3): work on subtrees of increasing size l.
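The slide's pseudocode is not preserved here; the following MATLAB sketch (the name optbst is ours) computes the e-table under the simplifying assumption that every search is for one of the keys:

  function e = optbst(p)
  % Expected-cost table for an optimal BST over keys with access
  % probabilities p(1..n).  e(i,j+1) is the minimum expected search cost
  % of a BST on keys i..j; e(i,i) = 0 encodes the empty range.
  n = length(p);
  e = zeros(n+1, n+1);
  w = zeros(n+1, n+1);                 % w(i,j+1) = p(i) + ... + p(j)
  for i = 1:n
      w(i, i+1:n+1) = cumsum(p(i:n));
  end
  for len = 1:n                        % subtree sizes in increasing order
      for i = 1:n-len+1
          j = i + len - 1;
          e(i, j+1) = inf;
          for r = i:j                  % try every key r as the root
              cost = e(i, r) + e(r+1, j+1) + w(i, j+1);
              e(i, j+1) = min(e(i, j+1), cost);
          end
      end
  end

The three nested loops (O(n^2) key ranges, O(n) candidate roots each) give the O(n^3) bound; e(1, n+1) is the optimal expected cost.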

Example

Example (cntd…)

Step 4. Compute Optimal Solution
Running time? O(n).

Elements of Dynamic Programming
Optimal substructure: an optimal solution to the problem contains within it optimal solutions to subproblems.

Elements of Dynamic Programming
Cut and paste: prove optimal substructure by contradiction:
– Assume an optimal solution to a problem with a suboptimal solution to a subproblem.
– Cut out the suboptimal solution to the subproblem.
– Paste in the optimal solution to the subproblem.
– Show that this results in a better solution to the original problem.
– This contradicts our assertion that our original solution is optimal.

Elements of Dynamic Programming
Dynamic programming uses optimal substructure from the bottom up:
– First find optimal solutions to subproblems.
– Then choose which to use in an optimal solution to the problem.