First Ingredient of Dynamic Programming


First Ingredient of Dynamic Programming

1. Optimal substructure: the optimal solution to the problem contains optimal solutions to the subproblems.

Example. An optimal parenthesization of the chain A_i A_{i+1} … A_j that splits it as (A_i … A_k)(A_{k+1} A_{k+2} … A_j) contains an optimal parenthesization of A_i … A_k and an optimal parenthesization of A_{k+1} … A_j.

Proof by contradiction (cut-and-paste): if either subchain had a cheaper parenthesization, pasting it in place of the one used would give a cheaper parenthesization of the whole chain, contradicting optimality.
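
This substructure is exactly what the matrix-chain recurrence expresses. A minimal sketch in Python (the function name and conventions are illustrative, not from the slides; A_k has dimensions p[k-1] × p[k], matching the pseudocode later in this section):

def chain_cost(p, i, j):
    # Minimum scalar multiplications to compute A_i ... A_j,
    # where A_k has dimensions p[k-1] x p[k]. Naive recursion, no memoization.
    if i == j:                 # a single matrix needs no multiplications
        return 0
    return min(
        chain_cost(p, i, k) + chain_cost(p, k + 1, j) + p[i - 1] * p[k] * p[j]
        for k in range(i, j)   # try every split point k
    )

print(chain_cost([10, 20, 5, 30], 1, 3))   # 2500: split as (A1 A2) A3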

Second Ingredient of DP

2. Overlapping subproblems: few distinct subproblems, but many recurring instances.

Example: the recursion tree of the matrix-chain recurrence for 1..4, in which subproblems such as 1..1, 2..2, 2..3, and 3..4 appear over and over. The tree has an exponential number of nodes, but only Θ(n²) distinct subproblems!
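
The overlap can be made concrete by counting calls; a small sketch (the helper names are mine, not from the slides):

from collections import Counter

def count_calls(n):
    # How often does the naive recursion visit each subproblem (i, j)?
    calls = Counter()

    def visit(i, j):
        calls[(i, j)] += 1
        for k in range(i, j):
            visit(i, k)
            visit(k + 1, j)

    visit(1, n)
    return calls

calls = count_calls(4)
print(sum(calls.values()))   # 27 recursive calls in total...
print(len(calls))            # ...but only 10 = 4*5/2 distinct subproblems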

Memoization

1. Still recursive.
2. After computing the solution to a subproblem, store it in a table.
3. Subsequent calls do a table lookup.
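
In Python this pattern is available off the shelf as functools.lru_cache; a toy illustration on the Fibonacci recurrence (not one of the slide examples, just the smallest recurrence that shows the effect):

from functools import lru_cache

@lru_cache(maxsize=None)   # the decorator supplies the table and the lookup
def fib(n):
    # Exponential as plain recursion; only O(n) computed calls when memoized.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(50))   # 12586269025, effectively instant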

Matrix-Chain Recursion Tree without Memoization

[Figure: the full recursion tree for 1..4. Many of its subtrees repeat subproblems that were already solved elsewhere in the tree; those repeated subtrees can be pruned.]

Matrix-Chain Recursion Tree with Memoization

[Figure: the pruned recursion tree for 1..4. A repeated subproblem is no longer expanded; its solution comes from a table lookup.]

Memoized-Matrix-Chain

Lookup-Chain(p, i, j)            // chain product A_i … A_j; dimensions in p[i-1..j]
    if m[i, j] < ∞               // if cost already computed
        then return m[i, j]      // simply return the cost
    // otherwise, it's the first call; compute the cost recursively
    if i = j
        then m[i, j] = 0
        else for k = i to j − 1
                 do q = Lookup-Chain(p, i, k) + Lookup-Chain(p, k+1, j) + p[i−1] · p[k] · p[j]
                    if q < m[i, j]
                        then m[i, j] = q
    return m[i, j]

Invoke Lookup-Chain(p, 1, n) to compute the chain product cost, after initializing every table entry m[i, j] to ∞.
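
A direct, runnable translation of this pseudocode (a sketch; a dictionary stands in for the ∞-initialized table m):

def lookup_chain(p, i, j, m=None):
    # Memoized cost of A_i ... A_j, where A_k has dimensions p[k-1] x p[k].
    if m is None:
        m = {}                   # memo table; a missing key plays the role of infinity
    if (i, j) in m:              # cost already computed: table lookup
        return m[(i, j)]
    if i == j:
        m[(i, j)] = 0
    else:
        m[(i, j)] = min(
            lookup_chain(p, i, k, m) + lookup_chain(p, k + 1, j, m)
            + p[i - 1] * p[k] * p[j]
            for k in range(i, j)
        )
    return m[(i, j)]

p = [10, 20, 5, 30]                     # A1: 10x20, A2: 20x5, A3: 5x30
print(lookup_chain(p, 1, len(p) - 1))   # 2500, matching the naive recursion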

Analysis of Memoization

Consider the call graph rooted at Lookup-Chain(p, 1, n). A call LC(p, i, j) is made by LC(p, 1, j), …, LC(p, i−1, j) (when A_i … A_j is the right factor of a split) and by LC(p, i, j+1), …, LC(p, i, n) (when it is the left factor). Hence each LC(p, i, j), for i = 1, …, n and j = i, …, n, is called by (i − 1) + (n − j) = O(n) parents.

Analysis (cont'd)

The first call to Lookup-Chain(p, i, j) requires computation: Θ(j − i) = O(n) time, excluding the time spent on recursively computing other entries. The remaining n + i − j − 2 calls result in table lookups, at O(1) per call of this kind.

Total running time: ∑_{i=1}^{n} ∑_{j=i}^{n} (O(n) + (n + i − j − 2) · O(1)) = O(n³).
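
Spelled out (a routine bound, written in LaTeX):

\sum_{i=1}^{n}\sum_{j=i}^{n}\Bigl(O(n) + (n+i-j-2)\cdot O(1)\Bigr)
  \le \sum_{i=1}^{n}\sum_{j=i}^{n} O(n)
  = \frac{n(n+1)}{2}\cdot O(n) = O(n^3),

i.e., each of the Θ(n²) table entries is computed once in O(n) time and thereafter looked up O(n) times at O(1) each.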

Memoization vs DP

- Top-down vs bottom-up.
- Asymptotically as fast as bottom-up DP.
- Preferred to DP if not all subproblems need to be solved.
- Otherwise slower than DP by a constant factor, because of the overhead of recursion and table maintenance.
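
For contrast, a bottom-up version of the same computation (a sketch; it fills the table in order of increasing chain length, solving every subproblem whether or not it is needed):

def matrix_chain_order(p):
    # Bottom-up DP: m[i][j] = min scalar multiplications for A_i ... A_j,
    # where A_k has dimensions p[k-1] x p[k] and n = len(p) - 1.
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]   # 1-indexed; m[i][i] = 0
    for length in range(2, n + 1):              # chain length, shortest first
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = min(
                m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                for k in range(i, j)
            )
    return m[1][n]

print(matrix_chain_order([10, 20, 5, 30]))      # 2500, same answer as Lookup-Chain

Because the loop order guarantees m[i][k] and m[k+1][j] are ready before m[i][j] is computed, no recursion or presence checks are needed, which is exactly the constant-factor saving noted above.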