Ch. 15: Dynamic Programming Ming-Te Chi



Dynamic programming is typically applied to optimization problems. In such problems there can be many solutions. Each solution has a value, and we wish to find a solution with the optimal value.

The development of a dynamic-programming algorithm has four steps:
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution in a bottom-up fashion.
4. Construct an optimal solution from computed information.

15.1 Rod cutting
Input: A length n and a table of prices pi, for i = 1, 2, …, n.
Output: The maximum revenue obtainable for rods whose lengths sum to n, computed as the sum of the prices for the individual rods.

Ex: a rod of length 4

Recursive top-down solution
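The slide's pseudocode did not survive the transcript. As a sketch, the naive recursive solution (CUT-ROD in the textbook) can be written in Python; the price table below is the book's sample table for lengths 1..10:

```python
def cut_rod(p, n):
    """Naive recursive top-down rod cutting (CUT-ROD).

    p[i] is the price of a rod of length i (p[0] is unused). Returns the
    maximum revenue for a rod of length n. Exponential time, because the
    same subproblems are solved over and over.
    """
    if n == 0:
        return 0
    q = float("-inf")
    for i in range(1, n + 1):          # i = length of the first piece cut off
        q = max(q, p[i] + cut_rod(p, n - i))
    return q

# Sample price table for rod lengths 1..10 (index 0 unused)
prices = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]
```

For the length-4 rod of the example, cut_rod(prices, 4) returns 10, obtained by cutting the rod into two pieces of length 2.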


Dynamic-programming solution: store, don’t recompute (a time-memory trade-off). This turns an exponential-time solution into a polynomial-time solution.

Top-down with memoization
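A sketch of the top-down memoized version (MEMOIZED-CUT-ROD), using the same price-table convention; the helper name `aux` is mine:

```python
def memoized_cut_rod(p, n):
    """Top-down rod cutting with memoization (MEMOIZED-CUT-ROD).

    Each subproblem is solved once and then looked up, giving O(n^2) time.
    """
    r = [float("-inf")] * (n + 1)      # r[j] caches the best revenue for length j

    def aux(n):
        if r[n] >= 0:                  # already solved: reuse, don't recompute
            return r[n]
        q = 0 if n == 0 else max(p[i] + aux(n - i) for i in range(1, n + 1))
        r[n] = q
        return q

    return aux(n)

# Sample price table for rod lengths 1..10 (index 0 unused)
prices = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]
```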

Bottom-up
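The bottom-up version (BOTTOM-UP-CUT-ROD) as a sketch, solving subproblems in increasing order of size:

```python
def bottom_up_cut_rod(p, n):
    """Bottom-up rod cutting: r[j] = max revenue for a rod of length j.

    Two nested loops give O(n^2) time; no recursion is needed because
    subproblems are solved in order of increasing size.
    """
    r = [0] * (n + 1)
    for j in range(1, n + 1):
        q = float("-inf")
        for i in range(1, j + 1):      # i = length of the first piece
            q = max(q, p[i] + r[j - i])
        r[j] = q
    return r[n]

# Sample price table for rod lengths 1..10 (index 0 unused)
prices = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]
```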


Subproblem graphs
For the rod-cutting problem with n = 4, the subproblem graph is a directed graph with one vertex for each distinct subproblem, and a directed edge (x, y) if computing an optimal solution to subproblem x directly requires knowing an optimal solution to subproblem y.

Reconstructing a solution
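A sketch of reconstruction (EXTENDED-BOTTOM-UP-CUT-ROD and PRINT-CUT-ROD-SOLUTION in the textbook): record the optimal size of the first piece for each length, then walk those choices back. Returning the list of pieces instead of printing them is my variation:

```python
def extended_bottom_up_cut_rod(p, n):
    """Bottom-up rod cutting that also records s[j], the optimal size of
    the first piece to cut off a rod of length j."""
    r = [0] * (n + 1)
    s = [0] * (n + 1)
    for j in range(1, n + 1):
        q = float("-inf")
        for i in range(1, j + 1):
            if p[i] + r[j - i] > q:
                q = p[i] + r[j - i]
                s[j] = i               # remember the best first cut
        r[j] = q
    return r, s

def print_cut_rod_solution(p, n):
    """Return the list of piece lengths in one optimal cut."""
    _, s = extended_bottom_up_cut_rod(p, n)
    pieces = []
    while n > 0:
        pieces.append(s[n])
        n -= s[n]
    return pieces

# Sample price table for rod lengths 1..10 (index 0 unused)
prices = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]
```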


15.2 Matrix-chain multiplication
A product of matrices is fully parenthesized if it is either a single matrix, or a product of two fully parenthesized matrix products, surrounded by parentheses.

How to compute the product A1A2⋯An, where Ai is a matrix for every i. Example: the chain A1A2A3A4 can be fully parenthesized in five distinct ways.

MATRIX-MULTIPLY
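A runnable sketch of the standard MATRIX-MULTIPLY procedure:

```python
def matrix_multiply(A, B):
    """Multiply a p x q matrix A by a q x r matrix B with the standard
    triple loop, using p*q*r scalar multiplications."""
    p, q = len(A), len(A[0])
    if len(B) != q:
        raise ValueError("incompatible dimensions")
    r = len(B[0])
    C = [[0] * r for _ in range(p)]
    for i in range(p):
        for j in range(r):
            for k in range(q):
                C[i][j] += A[i][k] * B[k][j]
    return C
```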

Complexity: multiplying a p × q matrix by a q × r matrix takes p·q·r scalar multiplications.

Example: for A1 (10 × 100), A2 (100 × 5), and A3 (5 × 50), the parenthesization ((A1A2)A3) takes 10·100·5 + 10·5·50 = 7500 scalar multiplications, while (A1(A2A3)) takes 100·5·50 + 10·100·50 = 75,000.

The matrix-chain multiplication problem: given a chain ⟨A1, A2, …, An⟩ of n matrices, where for i = 1, 2, …, n matrix Ai has dimension p_{i−1} × p_i, fully parenthesize the product A1A2⋯An in a way that minimizes the number of scalar multiplications.

Counting the number of parenthesizations: P(1) = 1 and P(n) = Σ_{k=1..n−1} P(k)·P(n−k) for n ≥ 2. The solution is the sequence of Catalan numbers, which grows as Ω(4^n / n^{3/2}).
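The recurrence P(1) = 1, P(n) = Σ P(k)·P(n−k) can be checked directly; a small sketch (the memoization decorator is a convenience of mine, not part of the slide):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def num_parens(n):
    """P(n): number of alternative parenthesizations of a chain of n
    matrices. The values 1, 1, 2, 5, 14, ... are the Catalan numbers
    shifted by one index."""
    if n == 1:
        return 1
    # split the outermost product between positions k and k + 1
    return sum(num_parens(k) * num_parens(n - k) for k in range(1, n))
```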

Step 1: The structure of an optimal parenthesization

Step 2: A recursive solution
Define m[i, j] = the minimum number of scalar multiplications needed to compute the matrix A_{i..j} = A_i A_{i+1} ⋯ A_j; the goal is m[1, n].
m[i, j] = 0 if i = j;
m[i, j] = min over i ≤ k < j of { m[i, k] + m[k+1, j] + p_{i−1}·p_k·p_j } if i < j.

Step 3: Computing the optimal costs
Complexity: O(n^3) time, Θ(n^2) space.
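A Python sketch of MATRIX-CHAIN-ORDER; p is the dimension sequence, so matrix A_i is p[i−1] × p[i]:

```python
def matrix_chain_order(p):
    """MATRIX-CHAIN-ORDER: returns (m, s), where m[i][j] is the minimum
    number of scalar multiplications to compute A_i..A_j and s[i][j] is
    the optimal split point k. O(n^3) time, Theta(n^2) space."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):             # length of the subchain
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = float("inf")
            for k in range(i, j):              # split into A_i..A_k, A_k+1..A_j
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j] = q
                    s[i][j] = k
    return m, s
```

With the dimension sequence ⟨30, 35, 15, 5, 10, 20, 25⟩ used in the n = 6 example below, m[1][6] = 15125.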

Example: the m and s tables computed by MATRIX-CHAIN-ORDER for n = 6, with dimension sequence p = ⟨30, 35, 15, 5, 10, 20, 25⟩.

m[2,5] = min {
m[2,2] + m[3,5] + p1·p2·p5 = 0 + 2500 + 35·15·20 = 13000,
m[2,3] + m[4,5] + p1·p3·p5 = 2625 + 1000 + 35·5·20 = 7125,
m[2,4] + m[5,5] + p1·p4·p5 = 4375 + 0 + 35·10·20 = 11375
} = 7125

Step 4: Constructing an optimal solution
Example: for the n = 6 chain, the optimal parenthesization is ((A1(A2A3))((A4A5)A6)).
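A sketch of PRINT-OPTIMAL-PARENS; returning the parenthesization as a string rather than printing it is my variation, and the split table s is recomputed here so the block stands alone:

```python
def matrix_chain_order(p):
    """Recompute the split table s (as in MATRIX-CHAIN-ORDER) so this
    sketch is self-contained."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = float("inf")
            for k in range(i, j):
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j], s[i][j] = q, k
    return m, s

def optimal_parens(s, i, j):
    """PRINT-OPTIMAL-PARENS, returned as a string: parenthesize A_i..A_j
    by recursing on the recorded split point s[i][j]."""
    if i == j:
        return f"A{i}"
    k = s[i][j]
    return "(" + optimal_parens(s, i, k) + optimal_parens(s, k + 1, j) + ")"
```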

15.3 Elements of dynamic programming
Optimal substructure
Overlapping subproblems
How memoization might help

Optimal substructure: We say that a problem exhibits optimal substructure if an optimal solution to the problem contains within it optimal solutions to subproblems. Example: the matrix-chain multiplication problem.

Common pattern:
1. You show that a solution to the problem consists of making a choice. Making this choice leaves one or more subproblems to be solved.
2. You suppose that for a given problem, you are given the choice that leads to an optimal solution.
3. Given this choice, you determine which subproblems ensue and how to best characterize the resulting space of subproblems.
4. You show that the solutions to the subproblems used within the optimal solution to the problem must themselves be optimal, by using a “cut-and-paste” technique.

Optimal substructure varies in two ways:
1. How many subproblems are used in an optimal solution to the original problem.
2. How many choices we have in determining which subproblem(s) to use in an optimal solution.

Subtleties
One should be careful not to assume that optimal substructure applies when it does not. Consider the following two problems, in which we are given a directed graph G = (V, E) and vertices u, v ∈ V.
Unweighted shortest path: find a path from u to v consisting of the fewest edges. Good for dynamic programming.
Unweighted longest simple path: find a simple path from u to v consisting of the most edges. Not good for dynamic programming.

Overlapping subproblems. Example: MATRIX-CHAIN-ORDER.

RECURSIVE-MATRIX-CHAIN
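A sketch of RECURSIVE-MATRIX-CHAIN, the natural but inefficient recursion:

```python
def recursive_matrix_chain(p, i, j):
    """RECURSIVE-MATRIX-CHAIN: the direct recursion for m[i, j].

    Exponential time, T(n) = Omega(2^n), because overlapping subproblems
    are re-solved from scratch at every call."""
    if i == j:
        return 0
    best = float("inf")
    for k in range(i, j):
        q = (recursive_matrix_chain(p, i, k)
             + recursive_matrix_chain(p, k + 1, j)
             + p[i - 1] * p[k] * p[j])
        best = min(best, q)
    return best
```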

The recursion tree for the computation of RECURSIVE-MATRIX-CHAIN(P, 1, 4).

We can prove that T(n) = Ω(2^n) using the substitution method.

Solution: 1. bottom-up; 2. memoization (memoize the natural, but inefficient, recursive algorithm).

MEMOIZED-MATRIX-CHAIN

LOOKUP-CHAIN
Time complexity: O(n^3).
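A sketch of MEMOIZED-MATRIX-CHAIN with LOOKUP-CHAIN: initialize every table entry to "unknown" (infinity) and fill entries only on demand:

```python
def memoized_matrix_chain(p):
    """MEMOIZED-MATRIX-CHAIN: set every m[i][j] to 'unknown' (infinity)
    and let LOOKUP-CHAIN fill entries on demand. O(n^3) time overall."""
    n = len(p) - 1
    m = [[float("inf")] * (n + 1) for _ in range(n + 1)]
    return lookup_chain(m, p, 1, n)

def lookup_chain(m, p, i, j):
    """LOOKUP-CHAIN: compute m[i][j] only once, then reuse it."""
    if m[i][j] < float("inf"):         # already known: just look it up
        return m[i][j]
    if i == j:
        m[i][j] = 0
    else:
        for k in range(i, j):
            q = (lookup_chain(m, p, i, k)
                 + lookup_chain(m, p, k + 1, j)
                 + p[i - 1] * p[k] * p[j])
            m[i][j] = min(m[i][j], q)
    return m[i][j]
```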

15.4 Longest common subsequence
X = ⟨A, B, C, B, D, A, B⟩
Y = ⟨B, D, C, A, B, A⟩
⟨B, C, A⟩ is a common subsequence of both X and Y.
⟨B, C, B, A⟩ and ⟨B, C, A, B⟩ are both longest common subsequences of X and Y.

Longest-common-subsequence problem: we are given two sequences X = ⟨x1, x2, ..., xm⟩ and Y = ⟨y1, y2, ..., yn⟩ and wish to find a maximum-length common subsequence of X and Y.
We define the ith prefix of X as Xi = ⟨x1, x2, ..., xi⟩.

Theorem 15.1 (Optimal substructure of an LCS)
Let X = ⟨x1, x2, ..., xm⟩ and Y = ⟨y1, y2, ..., yn⟩ be sequences, and let Z = ⟨z1, z2, ..., zk⟩ be any LCS of X and Y.
1. If xm = yn, then zk = xm = yn and Zk−1 is an LCS of Xm−1 and Yn−1.
2. If xm ≠ yn, then zk ≠ xm implies that Z is an LCS of Xm−1 and Y.
3. If xm ≠ yn, then zk ≠ yn implies that Z is an LCS of X and Yn−1.

A recursive solution to subproblems
Define c[i, j] = the length of an LCS of Xi and Yj:
c[i, j] = 0 if i = 0 or j = 0;
c[i, j] = c[i−1, j−1] + 1 if i, j > 0 and xi = yj;
c[i, j] = max(c[i, j−1], c[i−1, j]) if i, j > 0 and xi ≠ yj.

Computing the length of an LCS
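A sketch of LCS-LENGTH; c[i][j] is the length of an LCS of the prefixes Xi and Yj, and b records which of the three cases produced each entry (string tags stand in for the book's arrows):

```python
def lcs_length(X, Y):
    """LCS-LENGTH: fill the c table row by row in O(mn) time and space.

    c[i][j] = length of an LCS of X[:i] and Y[:j]; b[i][j] remembers
    which case applied, for reconstructing an LCS later."""
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    b = [[None] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
                b[i][j] = "diag"
            elif c[i - 1][j] >= c[i][j - 1]:
                c[i][j] = c[i - 1][j]
                b[i][j] = "up"
            else:
                c[i][j] = c[i][j - 1]
                b[i][j] = "left"
    return c, b
```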


Complexity: O(mn)

Constructing an LCS from the computed tables. Complexity: O(m + n)
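A sketch of the O(m + n) reconstruction: walk back from c[m][n], following matches diagonally. The c table is recomputed here so the block stands alone, and returning the LCS as a string is my variation on PRINT-LCS:

```python
def lcs_table(X, Y):
    """Recompute the c table (as in LCS-LENGTH) so this sketch is
    self-contained."""
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    return c

def print_lcs(X, Y):
    """Walk back from c[m][n] to recover one LCS in O(m + n) steps."""
    c = lcs_table(X, Y)
    i, j, out = len(X), len(Y), []
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:       # match: part of the LCS
            out.append(X[i - 1])
            i, j = i - 1, j - 1
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))
```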

15.5 Optimal binary search trees
Ex: two binary search trees for the same set of keys, one with expected search cost 2.75 (optimal!), the other with expected search cost 2.80.

Expected cost
The expected cost of a search in T is
E[search cost in T] = Σ_{i=1..n} (depth_T(k_i) + 1)·p_i + Σ_{i=0..n} (depth_T(d_i) + 1)·q_i
= 1 + Σ_{i=1..n} depth_T(k_i)·p_i + Σ_{i=0..n} depth_T(d_i)·q_i.

For a given set of probabilities, our goal is to construct a binary search tree whose expected search cost is smallest. We call such a tree an optimal binary search tree.

Step 1: The structure of an optimal binary search tree
Consider any subtree of a binary search tree. It must contain keys in a contiguous range k_i, ..., k_j, for some 1 ≤ i ≤ j ≤ n. In addition, a subtree that contains keys k_i, ..., k_j must also have as its leaves the dummy keys d_{i−1}, ..., d_j. If an optimal binary search tree T has a subtree T′ containing keys k_i, ..., k_j, then this subtree T′ must be optimal as well for the subproblem with keys k_i, ..., k_j and dummy keys d_{i−1}, ..., d_j.

Step 2: A recursive solution
Define e[i, j] = the expected cost of searching an optimal binary search tree containing the keys k_i, ..., k_j:
e[i, j] = q_{i−1} if j = i − 1;
e[i, j] = min over i ≤ r ≤ j of { e[i, r−1] + e[r+1, j] + w(i, j) } if i ≤ j,
where w(i, j) = Σ_{l=i..j} p_l + Σ_{l=i−1..j} q_l.

Step 3: Computing the expected search cost of an optimal binary search tree
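A sketch of OPTIMAL-BST; p[1..n] are the key probabilities and q[0..n] the dummy-key probabilities (index 0 of p is a placeholder):

```python
def optimal_bst(p, q, n):
    """OPTIMAL-BST: returns (e, root), where e[i][j] is the expected
    search cost of an optimal BST on keys k_i..k_j and root[i][j] is the
    index of its root. Theta(n^3) time."""
    e = [[0.0] * (n + 1) for _ in range(n + 2)]
    w = [[0.0] * (n + 1) for _ in range(n + 2)]
    root = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 2):
        e[i][i - 1] = q[i - 1]         # empty subtree: just the dummy key
        w[i][i - 1] = q[i - 1]
    for length in range(1, n + 1):
        for i in range(1, n - length + 2):
            j = i + length - 1
            e[i][j] = float("inf")
            w[i][j] = w[i][j - 1] + p[j] + q[j]
            for r in range(i, j + 1):  # try each key k_r as the root
                t = e[i][r - 1] + e[r + 1][j] + w[i][j]
                if t < e[i][j]:
                    e[i][j] = t
                    root[i][j] = r
    return e, root
```

With the chapter's example distribution (p = 0.15, 0.10, 0.05, 0.10, 0.20 and q = 0.05, 0.10, 0.05, 0.05, 0.05, 0.10), the optimal expected cost e[1, 5] is 2.75, with k2 at the root.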

The OPTIMAL-BST procedure takes Θ(n^3) time, just like MATRIX-CHAIN-ORDER.

The tables e[i, j], w[i, j], and root[i, j] computed by OPTIMAL-BST on the key distribution.

Knuth has shown that there are always roots of optimal subtrees such that root[i, j−1] ≤ root[i, j] ≤ root[i+1, j] for all 1 ≤ i < j ≤ n. We can use this fact to modify the OPTIMAL-BST procedure to run in Θ(n^2) time.