1 Chapter 6-1 Dynamic Programming Slides by Kevin Wayne. Copyright © 2005 Pearson Addison-Wesley. All rights reserved.

2 Algorithmic Paradigms
Greed. Build up a solution incrementally, myopically optimizing some local criterion.
Divide-and-conquer. Break up a problem into two sub-problems, solve each sub-problem independently, and combine solutions to the sub-problems to form a solution to the original problem.
Dynamic programming. Break up a problem into a series of overlapping sub-problems, and build up solutions to larger and larger sub-problems.

3 Dynamic Programming History
Bellman. Pioneered the systematic study of dynamic programming in the 1950s.
Etymology.
- Dynamic programming = planning over time.
- Secretary of Defense was hostile to mathematical research.
- Bellman sought an impressive name to avoid confrontation.
  - "it's impossible to use dynamic in a pejorative sense"
  - "something not even a Congressman could object to"
Reference: Bellman, R. E. Eye of the Hurricane, An Autobiography.

4 Dynamic Programming Applications
Areas.
- Bioinformatics.
- Control theory.
- Information theory.
- Operations research.
- Computer science: theory, graphics, AI, systems, ….
Some famous dynamic programming algorithms.
- Viterbi for hidden Markov models.
- Unix diff for comparing two files.
- Smith-Waterman for sequence alignment.
- Bellman-Ford for shortest path routing in networks.
- Cocke-Kasami-Younger for parsing context-free grammars.

6.1 Weighted Interval Scheduling

6 Weighted Interval Scheduling
Weighted interval scheduling problem.
- Job j starts at s_j, finishes at f_j, and has weight or value v_j.
- Two jobs compatible if they don't overlap.
- Goal: find maximum weight subset of mutually compatible jobs.
[Figure: eight jobs a–h arranged on a timeline.]

7 Unweighted Interval Scheduling Review
Recall. Greedy algorithm works if all weights are 1.
- Consider jobs in ascending order of finish time.
- Add job to subset if it is compatible with previously chosen jobs.
Observation. Greedy algorithm can fail spectacularly if arbitrary weights are allowed.
[Figure: two overlapping jobs of weight 999 and weight 1; choosing by finish time can pick the weight-1 job and block the weight-999 job.]

8 Weighted Interval Scheduling
Notation. Label jobs by finishing time: f_1 ≤ f_2 ≤ ... ≤ f_n.
Def. p(j) = largest index i < j such that job i is compatible with j.
Ex: p(8) = 5, p(7) = 3, p(2) = 0.
[Figure: eight jobs on a timeline illustrating p(j).]
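To make p(⋅) concrete, here is a minimal Python sketch; the (start, finish, value) triple format and the name compute_p are our own conventions, not from the slides. The slides later note that p can be computed in O(n) after also sorting by start time; the binary-search version below is a simpler O(n log n) alternative.

from bisect import bisect_right

def compute_p(jobs):
    """jobs: list of (start, finish, value) triples, pre-sorted by finish time.
    Returns p where p[j] is the largest index i < j (1-indexed) such that
    job i is compatible with job j, or 0 if no such job exists."""
    finish = [f for _, f, _ in jobs]
    p = [0] * (len(jobs) + 1)            # p[0] is unused; jobs are 1-indexed
    for j, (s, _, _) in enumerate(jobs, start=1):
        # count of jobs that finish no later than job j starts
        p[j] = bisect_right(finish, s)
    return p

jobs = [(0, 3, 2), (1, 5, 4), (4, 6, 4), (5, 8, 7)]   # hypothetical (s_j, f_j, v_j)
print(compute_p(jobs))                                 # [0, 0, 0, 1, 2]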

9 Dynamic Programming: Binary Choice
Notation. OPT(j) = value of optimal solution to the problem consisting of job requests 1, 2, ..., j.
- Case 1: OPT selects job j.
  - can't use incompatible jobs { p(j) + 1, p(j) + 2, ..., j - 1 }
  - must include optimal solution to problem consisting of remaining compatible jobs 1, 2, ..., p(j)
- Case 2: OPT does not select job j.
  - must include optimal solution to problem consisting of remaining compatible jobs 1, 2, ..., j - 1
This optimal substructure yields the recurrence:
  OPT(j) = 0                                     if j = 0
  OPT(j) = max( v_j + OPT(p(j)), OPT(j-1) )      otherwise

10 Weighted Interval Scheduling: Brute Force
Brute force algorithm.

Input: n, s_1,…,s_n, f_1,…,f_n, v_1,…,v_n

Sort jobs by finish times so that f_1 ≤ f_2 ≤ ... ≤ f_n.
Compute p(1), p(2), …, p(n)

Compute-Opt(j) {
   if (j = 0)
      return 0
   else
      return max(v_j + Compute-Opt(p(j)), Compute-Opt(j-1))
}
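As a sanity check, Compute-Opt translates directly to Python (a sketch following the conventions of the compute_p example above):

def compute_opt(j, p, v):
    """Value of an optimal schedule over jobs 1..j (naive recursion).
    p is the array from compute_p; v[j] is job j's value, 1-indexed.
    Worst-case exponential: the two recursive calls overlap heavily."""
    if j == 0:
        return 0
    return max(v[j] + compute_opt(p[j], p, v),
               compute_opt(j - 1, p, v))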

11 Weighted Interval Scheduling: Brute Force
Observation. Recursive algorithm fails spectacularly because of redundant sub-problems ⇒ exponential algorithms.
Ex. Number of recursive calls for family of "layered" instances grows like Fibonacci sequence.
[Figure: recursion tree for a layered instance with p(1) = 0, p(j) = j - 2.]

12 Weighted Interval Scheduling: Memoization
Memoization. Store results of each sub-problem in a cache; look up as needed.

Input: n, s_1,…,s_n, f_1,…,f_n, v_1,…,v_n

Sort jobs by finish times so that f_1 ≤ f_2 ≤ ... ≤ f_n.
Compute p(1), p(2), …, p(n)

for j = 1 to n
   M[j] = empty      // M is a global array
M[0] = 0

M-Compute-Opt(j) {
   if (M[j] is empty)
      M[j] = max(v_j + M-Compute-Opt(p(j)), M-Compute-Opt(j-1))
   return M[j]
}
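The same recursion with a cache, as an illustrative Python sketch rather than the slides' exact code; where the slides use a global array M, this version threads a dictionary through the calls:

def m_compute_opt(j, p, v, M=None):
    """Memoized recursion: M caches OPT(j), so each subproblem is solved once."""
    if M is None:
        M = {0: 0}                     # base case OPT(0) = 0
    if j not in M:
        M[j] = max(v[j] + m_compute_opt(p[j], p, v, M),
                   m_compute_opt(j - 1, p, v, M))
    return M[j]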

13 Weighted Interval Scheduling: Running Time
Claim. Memoized version of algorithm takes O(n log n) time.
- Sort by finish time: O(n log n).
- Computing p(⋅): O(n) after sorting by start time.
- M-Compute-Opt(j): each invocation takes O(1) time and either
  - (i) returns an existing value M[j]
  - (ii) fills in one new entry M[j] and makes two recursive calls
- Progress measure Φ = # nonempty entries of M[].
  - initially Φ = 0, throughout Φ ≤ n.
  - (ii) increases Φ by 1 ⇒ at most 2n recursive calls.
- Overall running time of M-Compute-Opt(n) is O(n). ▪
Remark. O(n) if jobs are pre-sorted by start and finish times.

14 Automated Memoization
Automated memoization. Many functional programming languages (e.g., Lisp) have built-in support for memoization.
Q. Why not in imperative languages (e.g., Java)?

Java (exponential):

static int F(int n) {
   if (n <= 1) return n;
   else return F(n-1) + F(n-2);
}

Lisp (efficient):

(defun F (n)
   (if (<= n 1)
       n
       (+ (F (- n 1)) (F (- n 2)))))

[Figure: recursion tree of F(40), showing F(39), F(38), F(37), … recomputed many times.]
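For comparison (our addition, not from the slides): modern Python gets much the same convenience with a standard-library decorator, which caches results by argument value.

from functools import lru_cache

@lru_cache(maxsize=None)            # memoize: cache every result
def F(n):
    """Fibonacci with memoization: only O(n) distinct calls are evaluated."""
    return n if n <= 1 else F(n - 1) + F(n - 2)

print(F(40))                        # 102334155, computed almost instantly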

15 Weighted Interval Scheduling: Finding a Solution
Q. Dynamic programming algorithm computes optimal value. What if we want the solution itself?
A. Do some post-processing.
- # of recursive calls ≤ n ⇒ O(n).

Run M-Compute-Opt(n)
Run Find-Solution(n)

Find-Solution(j) {
   if (j = 0)
      output nothing
   else if (v_j + M[p(j)] > M[j-1])
      print j
      Find-Solution(p(j))
   else
      Find-Solution(j-1)
}

16 Weighted Interval Scheduling: Bottom-Up
Bottom-up dynamic programming. Unwind recursion.

Input: n, s_1,…,s_n, f_1,…,f_n, v_1,…,v_n

Sort jobs by finish times so that f_1 ≤ f_2 ≤ ... ≤ f_n.
Compute p(1), p(2), …, p(n)

Iterative-Compute-Opt {
   M[0] = 0
   for j = 1 to n
      M[j] = max(v_j + M[p(j)], M[j-1])
}
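Putting this slide and the previous one together, here is a hedged Python sketch of the bottom-up table plus the Find-Solution traceback; it reuses the compute_p helper and example jobs sketched earlier, and the function name is our own.

def iterative_compute_opt(jobs):
    """Bottom-up DP for weighted interval scheduling, plus traceback.
    jobs: list of (start, finish, value) triples, pre-sorted by finish time.
    Returns (optimal value, one optimal set of 1-indexed job indices)."""
    n = len(jobs)
    p = compute_p(jobs)                       # helper sketched earlier
    v = [0] + [val for _, _, val in jobs]     # 1-indexed values
    M = [0] * (n + 1)
    for j in range(1, n + 1):
        M[j] = max(v[j] + M[p[j]], M[j - 1])
    # Find-Solution: retrace which branch of the max achieved each M[j]
    chosen, j = [], n
    while j > 0:
        if v[j] + M[p[j]] > M[j - 1]:
            chosen.append(j)
            j = p[j]
        else:
            j -= 1
    return M[n], sorted(chosen)

print(iterative_compute_opt(jobs))            # (11, [2, 4]) for the jobs above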

6.3 Segmented Least Squares

18 Segmented Least Squares
Least squares.
- Foundational problem in statistics and numerical analysis.
- Given n points in the plane: (x_1, y_1), (x_2, y_2), ..., (x_n, y_n).
- Find a line y = ax + b that minimizes the sum of the squared error:
    SSE = Σ_{i=1..n} (y_i − a·x_i − b)²
Solution. Calculus ⇒ min error is achieved when
    a = ( n Σ_i x_i y_i − (Σ_i x_i)(Σ_i y_i) ) / ( n Σ_i x_i² − (Σ_i x_i)² ),
    b = ( Σ_i y_i − a Σ_i x_i ) / n
[Figure: points in the plane with a best-fit line.]
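For concreteness, the closed-form solution as a small Python sketch (function names are ours; assumes at least two points with distinct x-coordinates, as in the slides):

def fit_line(points):
    """Least-squares fit of y = a*x + b to a list of (x, y) points,
    using the closed-form expressions above."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def sq_error(points, a, b):
    """Sum of squared errors of the line y = a*x + b on the points."""
    return sum((y - a * x - b) ** 2 for x, y in points)

print(fit_line([(0, 1), (1, 3), (2, 5)]))   # (2.0, 1.0): exact fit y = 2x + 1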

19 Segmented Least Squares
Segmented least squares.
- Points lie roughly on a sequence of several line segments.
- Given n points in the plane (x_1, y_1), (x_2, y_2), ..., (x_n, y_n) with x_1 < x_2 < ... < x_n, find a sequence of lines that minimizes f(x).
Q. What's a reasonable choice for f(x) to balance accuracy (goodness of fit) and parsimony (number of lines)?
[Figure: points approximated by a few line segments.]

20 Segmented Least Squares
Segmented least squares.
- Points lie roughly on a sequence of several line segments.
- Given n points in the plane (x_1, y_1), (x_2, y_2), ..., (x_n, y_n) with x_1 < x_2 < ... < x_n, find a sequence of lines that minimizes:
  - the sum of the sums of the squared errors E in each segment
  - the number of lines L
- Tradeoff function: E + cL, for some constant c > 0.

21 Dynamic Programming: Multiway Choice
Notation.
- OPT(j) = minimum cost for points p_1, p_2, ..., p_j.
- e(i, j) = minimum sum of squares for points p_i, p_{i+1}, ..., p_j.
To compute OPT(j):
- Last segment uses points p_i, p_{i+1}, ..., p_j for some i.
- Cost = e(i, j) + c + OPT(i-1).
This gives the recurrence:
  OPT(j) = 0                                            if j = 0
  OPT(j) = min_{1 ≤ i ≤ j} { e(i, j) + c + OPT(i-1) }   otherwise

22 Segmented Least Squares: Algorithm
Running time. O(n³).
- Bottleneck = computing e(i, j) for O(n²) pairs, O(n) per pair using the previous formula (can be improved to O(n²) by pre-computing various statistics).

INPUT: n, p_1,…,p_n, c

Segmented-Least-Squares() {
   M[0] = 0
   for j = 1 to n
      for i = 1 to j
         compute the least square error e(i, j) for the segment p_i,…,p_j
   for j = 1 to n
      M[j] = min_{1 ≤ i ≤ j} ( e(i, j) + c + M[i-1] )
   return M[n]
}
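A Python sketch of this DP under the same assumptions (points sorted by distinct x-coordinates), reusing the fit_line and sq_error helpers above; the traceback array is our addition so the partition itself can be recovered:

def segmented_least_squares(points, c):
    """O(n^3) version of the slide's algorithm. points: (x, y) pairs sorted
    by x, with distinct x's; c: per-segment penalty. Returns (cost, segments),
    segments being 1-indexed (i, j) pairs of one optimal partition."""
    n = len(points)
    e = [[0.0] * (n + 1) for _ in range(n + 1)]   # e[i][j]: error of one line on points i..j
    for j in range(1, n + 1):
        for i in range(1, j):                      # a single point fits with zero error
            seg = points[i - 1:j]
            a, b = fit_line(seg)                   # helper sketched earlier
            e[i][j] = sq_error(seg, a, b)
    M = [0.0] * (n + 1)
    back = [0] * (n + 1)                           # back[j]: start of last segment in OPT(j)
    for j in range(1, n + 1):
        M[j], back[j] = min((e[i][j] + c + M[i - 1], i) for i in range(1, j + 1))
    segments, j = [], n
    while j > 0:                                   # trace segment boundaries backwards
        segments.append((back[j], j))
        j = back[j] - 1
    return M[n], segments[::-1]

Per the slide's remark, the e(i, j) step is the bottleneck; pre-computing prefix sums of x, y, x², and x·y lets each e(i, j) be evaluated in O(1), bringing the total down to O(n²).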

HOMEWORK (Chapter 6)