CSCI 256 Data Structures and Algorithm Analysis, Lecture 13. Some slides by Kevin Wayne, copyright © 2005 Pearson Addison-Wesley, all rights reserved; others by Iker Gondra.

Algorithmic Paradigms

– Greedy: build up a solution incrementally, myopically optimizing some local criterion.
– Divide-and-conquer: break a problem into two or more subproblems, solve each subproblem independently, and combine the subproblem solutions into a solution to the original problem.
– Dynamic programming: break a problem into a series of overlapping subproblems, and build up solutions to larger and larger subproblems. Rather than solving overlapping subproblems again and again, solve each subproblem only once and record the result in a table. This simple idea can transform exponential-time algorithms into polynomial-time algorithms.

Algorithmic Paradigms

Divide-and-conquer strategies usually reduce a polynomial running time to something faster, but still polynomial. We need a different tool to reduce an exponential brute-force search down to polynomial time, and that is where dynamic programming comes in.

Dynamic Programming (DP)

– DP is like the divide-and-conquer method in that it solves problems by combining the solutions to subproblems.
– Divide-and-conquer algorithms partition the problem into independent subproblems, solve the subproblems recursively, and then combine their solutions to solve the original problem.
– In contrast, DP is applicable when the subproblems are not independent (i.e., subproblems share subsubproblems). In this context, divide-and-conquer does more work than necessary, repeatedly solving the common subsubproblems. DP solves each subproblem just once and saves its answer in a table.

A Little Bit of DP History

– Bellman pioneered the systematic study of DP in the 1950s, developing DP as a general method for optimizing multistage decision processes.
– Etymology: dynamic programming = planning over time. The word "programming" in the name of this technique stands for "planning", not computer programming.
– The Secretary of Defense at the time was hostile to mathematical research, so Bellman sought an impressive name to avoid confrontation: "it's impossible to use dynamic in a pejorative sense"; "something not even a Congressman could object to".
– Reference: Bellman, R. E., Eye of the Hurricane: An Autobiography.

DP: Many Applications

Areas:
– Bioinformatics
– Control theory
– Information theory
– Operations research
– Computer science: theory, graphics, AI, systems, …

Some famous DP algorithms (we'll see some of these):
– Viterbi for hidden Markov models
– Unix diff for comparing two files
– Smith-Waterman for sequence alignment
– Bellman-Ford for shortest-path routing in networks
– Cocke-Kasami-Younger for parsing context-free grammars

Developing DP Algorithms

The development of a DP algorithm can be broken into a sequence of four steps:
1) Characterize the structure of an optimal solution.
2) Recursively define the value of an optimal solution.
3) Compute the value of an optimal solution in a bottom-up fashion.
4) Construct an optimal solution from computed information.

Weighted Interval Scheduling

Weighted interval scheduling problem:
– Job j starts at s_j, finishes at f_j, and has weight (value) v_j.
– Two jobs are compatible if they don't overlap.
– Goal: find a maximum-weight subset of mutually compatible jobs.

[Figure: jobs a–h shown as intervals along a time axis]

Unweighted Interval Scheduling Review

Recall: the greedy algorithm works if all weights are 1.
– Consider jobs in ascending order of finish time.
– Add a job to the subset if it is compatible with the previously chosen jobs.

Observation: the greedy algorithm can fail spectacularly if arbitrary weights are allowed.

[Figure: two overlapping jobs on a time axis, b with weight 999 and a with weight 1]

Weighted Interval Scheduling

Notation: label jobs by finishing time: f_1 ≤ f_2 ≤ … ≤ f_n.
Def: p(j) = largest index i < j such that job i is compatible with job j.
Ex: p(8) = 5, p(7) = 3, p(2) = 0.

[Figure: jobs 1–8 on a time axis illustrating p(j)]
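Since p(j) depends only on start and finish times, all of p(1), …, p(n) can be computed after sorting the jobs by finish time. Below is a minimal Python sketch of one way to do this with a binary search over finish times; the function name compute_p and the sample jobs are illustrative, not part of the slides.

import bisect

def compute_p(jobs):
    """jobs: list of (start, finish, value) tuples sorted by finish time.
    Returns p, where p[j] (1-indexed) is the largest index i < j whose
    finish time is <= the start of job j, or 0 if no such job exists."""
    finishes = [f for (_, f, _) in jobs]     # already sorted, since jobs are sorted by finish
    p = [0] * (len(jobs) + 1)                # p[0] is unused; jobs are 1-indexed
    for j, (s, _, _) in enumerate(jobs, start=1):
        # number of jobs finishing no later than s, i.e. the largest compatible index
        p[j] = bisect.bisect_right(finishes, s)
    return p

# Example usage with illustrative data (4 jobs)
jobs = sorted([(0, 3, 2), (1, 5, 4), (4, 6, 4), (3, 9, 7)], key=lambda t: t[1])
print(compute_p(jobs))   # [0, 0, 0, 1, 1]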

DP: Binary Choice: do we take job j or not??

Notation: OPT(j) = value of an optimal solution to the problem consisting of job requests 1, 2, …, j.
– Case 1: OPT selects job j.
   It can't use the incompatible jobs {p(j)+1, p(j)+2, …, j-1}, so it must include an optimal solution to the problem consisting of the remaining compatible jobs 1, 2, …, p(j).
– Case 2: OPT does not select job j.
   It must include an optimal solution to the problem consisting of the remaining jobs 1, 2, …, j-1.
This is the optimal-substructure property.

Weighted Interval Scheduling: Brute Force

Brute force algorithm:

Input: n, s_1,…,s_n, f_1,…,f_n, v_1,…,v_n

Sort jobs by finish times so that f_1 ≤ f_2 ≤ … ≤ f_n.
Compute p(1), p(2), …, p(n).

Compute-Opt(j) {
   if (j = 0)
      return 0
   else
      return max(v_j + Compute-Opt(p(j)), Compute-Opt(j-1))
}
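A direct Python transcription of Compute-Opt, under the same conventions (jobs 1-indexed and sorted by finish time, p precomputed); the function name and the small instance are illustrative, not from the slides.

def compute_opt(j, values, p):
    """Brute-force recursion: values[j] is v_j (1-indexed), p[j] as defined above."""
    if j == 0:
        return 0
    # either take job j (and jump back to p(j)) or skip it
    return max(values[j] + compute_opt(p[j], values, p),
               compute_opt(j - 1, values, p))

# Illustrative instance: 4 jobs already sorted by finish time
values = [0, 2, 4, 4, 7]     # values[0] is a dummy so jobs are 1-indexed
p = [0, 0, 0, 1, 1]          # p(j) for the same instance used earlier
print(compute_opt(4, values, p))   # 9 (take jobs 1 and 4)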

Claim: Compute-Opt(j) correctly computes OPT(j) for each j = 0, 1, 2, …, n.
Proof: How??

Claim: Compute-Opt(j) correctly computes OPT(j) for each j = 0, 1, 2, …, n.

Proof: By induction (of course!!), in fact strong induction on j. By definition OPT(0) = 0; this is the base case. For j > 0, assume Compute-Opt(i) correctly computes OPT(i) for all i < j. Since p(j) < j and j-1 < j, the inductive hypothesis gives Compute-Opt(p(j)) = OPT(p(j)) and Compute-Opt(j-1) = OPT(j-1). Hence, from the recurrence defining OPT(j),
OPT(j) = max(v_j + OPT(p(j)), OPT(j-1)) = max(v_j + Compute-Opt(p(j)), Compute-Opt(j-1)) = Compute-Opt(j).

Weighted Interval Scheduling: Brute Force

Observation: the recursive algorithm fails spectacularly because of redundant subproblems, yielding an exponential-time algorithm. The number of recursive calls for the family of "layered" instances below grows like the Fibonacci sequence: p(1) = 0 and p(j) = j - 2 for j ≥ 2.

[Figure: layered instance and its tree of recursive calls]

Recursive algorithm takes exponential time

Indeed, if we implement Compute-Opt as written, it takes exponential time in the worst case. The example on the previous slide, showing the tree of calls, illustrates how quickly the tree widens due to the recursive branching: when p(j) = j - 2 for each j = 2, 3, …, Compute-Opt(j) generates separate recursive calls on problems of sizes j-1 and j-2, so the number of calls grows like the Fibonacci numbers, which increase exponentially!
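To see the blow-up concretely, here is a small Python sketch (illustrative, not from the slides) that counts the recursive calls Compute-Opt makes on the layered instance with p(1) = 0 and p(j) = j - 2.

def count_calls(j, p):
    """Count the recursive calls Compute-Opt(j) makes on a given instance."""
    if j == 0:
        return 1
    return 1 + count_calls(p[j], p) + count_calls(j - 1, p)

# Layered instance: p(1) = 0, p(j) = j - 2 for j >= 2
n = 20
p = [0, 0] + [j - 2 for j in range(2, n + 1)]
print(count_calls(n, p))   # 35421 calls for n = 20; grows like the Fibonacci numbers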

Recursive algorithm takes exponential time

How do we get around this? Note that the same computation occurs over and over again across many recursive calls.

Weighted Interval Scheduling: SOLUTION!!! Memoization

Memoization: store the result of each subproblem in a cache; look it up and reuse it as needed.

Input: n, s_1,…,s_n, f_1,…,f_n, v_1,…,v_n

Sort jobs by finish times so that f_1 ≤ f_2 ≤ … ≤ f_n.
Compute p(1), p(2), …, p(n).

for j = 1 to n
   Opt[j] = -1
Opt[0] = 0

M-Compute-Opt(j) {
   if (Opt[j] == -1)
      Opt[j] = max(v_j + M-Compute-Opt(p(j)), M-Compute-Opt(j-1))
   return Opt[j]
}
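A runnable Python sketch of the memoized recursion, again using illustrative names and the same small instance as before (not part of the original slides). The list opt plays the role of the cache array Opt[] above.

def m_compute_opt(j, values, p, opt):
    """Memoized recursion: opt[j] caches OPT(j); -1 marks an empty entry."""
    if opt[j] == -1:
        opt[j] = max(values[j] + m_compute_opt(p[j], values, p, opt),
                     m_compute_opt(j - 1, values, p, opt))
    return opt[j]

values = [0, 2, 4, 4, 7]      # dummy 0th entry; jobs 1..4 sorted by finish time
p = [0, 0, 0, 1, 1]
opt = [-1] * 5
opt[0] = 0                    # base case OPT(0) = 0
print(m_compute_opt(4, values, p, opt))   # 9, computed with O(n) calls after sorting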

Intelligent M-Compute-Opt

Use an array M[0,…,n]; M[j] starts with the value "empty" and holds the value of M-Compute-Opt(j) as soon as it is determined. To determine OPT(n), invoke M-Compute-Opt(n).

Fill in the array with the OPT values:

Opt[j] = max(v_j + Opt[p(j)], Opt[j-1])

Weighted Interval Scheduling: Running Time

Claim: the memoized version of the algorithm takes O(n log n) time.
– Sort by finish time: O(n log n).
– Computing p(·): O(n) after sorting by finish time.
– M-Compute-Opt(j): each invocation takes O(1) time and either
   (i) returns an existing value Opt[j], or
   (ii) fills in one new entry Opt[j] and makes two recursive calls.
How many recursive calls in total??? Use the progress measure Φ = # of nonempty entries of Opt[].
– Initially Φ = 0; throughout, Φ ≤ n.
– Case (ii) increases Φ by 1, so there are at most 2n recursive calls.
Overall running time of M-Compute-Opt(n) is O(n).

Weighted Interval Scheduling: Finding a Solution

Q: The DP algorithm computes the optimal value. What if we want the solution itself?
A: Do some post-processing.
– # of recursive calls ≤ n, so the post-processing takes O(n).

Run M-Compute-Opt(n)
Run Find-Solution(n)

Find-Solution(j) {
   if (j = 0)
      output nothing
   else if (v_j + Opt[p(j)] > Opt[j-1])
      print j
      Find-Solution(p(j))
   else
      Find-Solution(j-1)
}
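A Python sketch of this post-processing step, under the same assumptions and illustrative instance as the earlier sketches; it traces back through the filled-in opt array to recover which jobs the optimum takes.

def find_solution(j, values, p, opt):
    """Return the list of job indices in an optimal solution for jobs 1..j."""
    if j == 0:
        return []
    if values[j] + opt[p[j]] > opt[j - 1]:
        return find_solution(p[j], values, p, opt) + [j]   # job j is in the optimum
    return find_solution(j - 1, values, p, opt)            # job j is not

values = [0, 2, 4, 4, 7]
p = [0, 0, 0, 1, 1]
opt = [0, 2, 4, 6, 9]        # filled in by M-Compute-Opt or the iterative version
print(find_solution(4, values, p, opt))   # [1, 4]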

Memoization or Iteration over Subproblems

The KEY to this algorithm is the array Opt. It encodes the idea that we use the values of optimal solutions to the subproblems on intervals {1, 2, …, j}, and it defines Opt[j] in terms of values appearing earlier in the array. Once we have Opt[n], the problem is solved: it contains the value of the optimal solution, and Find-Solution retrieves the actual solution. NOTE: we can also compute the entries of Opt directly by an iterative algorithm rather than by memoized recursion.

Weighted Interval Scheduling: Iterative Version

Input: n, s_1,…,s_n, f_1,…,f_n, v_1,…,v_n

Sort jobs by finish times so that f_1 ≤ f_2 ≤ … ≤ f_n.
Compute p(1), p(2), …, p(n).

Iterative-Compute-Opt {
   Opt[0] = 0
   for j = 1 to n
      Opt[j] = max(v_j + Opt[p(j)], Opt[j-1])
}
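The same bottom-up computation as a short, runnable Python sketch (names and instance illustrative). After the loop, opt[n] holds the optimal value, and the array can be handed to the traceback step shown earlier.

def iterative_compute_opt(values, p):
    """Bottom-up DP: returns the array opt with opt[j] = OPT(j), jobs 1-indexed."""
    n = len(values) - 1          # values[0] is a dummy entry
    opt = [0] * (n + 1)
    for j in range(1, n + 1):
        opt[j] = max(values[j] + opt[p[j]], opt[j - 1])
    return opt

values = [0, 2, 4, 4, 7]
p = [0, 0, 0, 1, 1]
print(iterative_compute_opt(values, p))   # [0, 2, 4, 6, 9]; optimal value is 9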

Iteration over Subproblems

The previous algorithm captures the essence of the dynamic programming technique and serves as a template for the algorithms we develop in the next few lectures. We will develop algorithms that iteratively build up solutions to subproblems (though there is always an equivalent way to formulate each algorithm as a memoized recursion).

Basic Properties to be Satisfied for a DP Approach

– There are only a polynomial number of subproblems.
– The solution to the original problem can easily be computed from the solutions to subproblems (in the case of weighted interval scheduling, it is itself one of the subproblems).
– There is a natural ordering of subproblems from smallest to largest, with an easy-to-compute recurrence that allows one to determine the solution of a subproblem from the solutions of some number of smaller subproblems.
– Subtle issue: the relationship between figuring out what the subproblems are and then finding the recurrence relation that links them together.