Dynamic Programming Reading Material: Chapter 7 Sections 1 - 4 and 6.


Dynamic Programming

Dynamic programming algorithms address problems whose solutions are recursive in nature but have the following property: a direct implementation of the recursive solution results in identical recursive calls that are executed more than once. Dynamic programming implements such algorithms by evaluating the recurrence in a bottom-up manner, saving intermediate results that are later used in computing the desired solution.

Fibonacci Numbers

What is the recursive algorithm that computes Fibonacci numbers? What is its time complexity?
– Note that it can be shown that the direct recursive algorithm makes Θ(φⁿ) calls, where φ = (1 + √5)/2 ≈ 1.618 is the golden ratio, so its running time is exponential in n.
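To make the contrast concrete, here is a minimal Python sketch (illustrative, not from the slides; the function names are my own) using the convention F(1) = F(2) = 1. The direct recursion repeats identical calls, while the bottom-up version computes each value once in O(n) time.

def fib_naive(n):
    # Direct recursion: fib_naive(n-2) is recomputed inside fib_naive(n-1),
    # so the number of calls grows like phi^n.
    if n <= 2:
        return 1
    return fib_naive(n - 1) + fib_naive(n - 2)

def fib_dp(n):
    # Bottom-up evaluation: keep only the last two values, O(n) time, O(1) space.
    if n <= 2:
        return 1
    prev, curr = 1, 1
    for _ in range(n - 2):
        prev, curr = curr, prev + curr
    return curr

For example, fib_dp(40) returns 102334155 immediately, while fib_naive(40) makes over 200 million recursive calls.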

Computing the Binomial Coefficient

Recursive definition:
  C(n, k) = 1                          if k = 0 or k = n
  C(n, k) = C(n−1, k−1) + C(n−1, k)    if 0 < k < n
Actual value:
  C(n, k) = n! / (k! (n − k)!)

Computing the Binomial Coefficient

What is the direct recursive algorithm for computing the binomial coefficient? How much does it cost?
– Note that the direct recursion evaluates the same coefficients repeatedly, so the number of calls is proportional to C(n, k) itself, which can be exponential in n: for instance, C(n, ⌊n/2⌋) = Θ(2ⁿ/√n).
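A bottom-up sketch in Python (an illustration under the definitions above, not code from the text): fill Pascal's triangle row by row so each coefficient is computed once, in Θ(nk) time, instead of the Θ(C(n, k)) calls of the direct recursion.

def binomial(n, k):
    # C[i][j] holds the coefficient C(i, j); only columns 0..k are needed.
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                C[i][j] = 1                        # base cases
            else:
                C[i][j] = C[i-1][j-1] + C[i-1][j]  # recursive definition
    return C[n][k]

For instance, binomial(6, 3) returns 20.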

Optimization Problems and Dynamic Programming

Optimization problems with certain properties form another class of problems that can be solved more efficiently using dynamic programming. Developing a dynamic programming solution to an optimization problem involves four steps:
– Characterize the structure of an optimal solution:
  Optimal substructure, where an optimal solution consists of sub-solutions that are themselves optimal.
  Overlapping subproblems, where the space of subproblems is small in the sense that the algorithm solves the same subproblems over and over rather than generating new subproblems.
– Recursively define the value of an optimal solution.
– Compute the value of an optimal solution in a bottom-up manner.
– Construct an optimal solution from the computed optimal value.

Longest Common Subsequence Problem

Problem definition: Given two strings A and B over an alphabet Σ, determine the length of the longest subsequence that is common to A and B. A subsequence of A = a_1 a_2 … a_n is a string of the form a_{i_1} a_{i_2} … a_{i_k}, where 1 ≤ i_1 < i_2 < … < i_k ≤ n.

Example: Let Σ = {x, y, z}, A = xyxyxxzy, B = yxyyzxy, and C = zzyyxyz.
– LCS(A, B) = yxyzy, hence the length = 5.
– LCS(B, C) = yyxy, hence the length = 4.
– LCS(A, C) = yyxy, hence the length = 4.

Straightforward Solution

Brute-force search:
– How many subsequences exist in a string of length n?
– How much time is needed to check whether one string is a subsequence of another string of length m?
– What is the time complexity of the brute-force algorithm for finding the length of the longest common subsequence of two strings of sizes n and m?
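For a sense of the cost, here is a hypothetical brute-force sketch in Python (names and structure are my own): it enumerates the up to 2ⁿ subsequences of the shorter string and tests each against the other string in O(m) time, matching a Θ(m · 2ⁿ) worst-case bound.

from itertools import combinations

def lcs_brute_force(a, b):
    if len(a) > len(b):
        a, b = b, a          # enumerate subsequences of the shorter string

    def is_subsequence(s, t):
        it = iter(t)
        return all(c in it for c in s)   # one left-to-right scan of t

    # Try lengths from longest to shortest; the first hit is the answer.
    for k in range(len(a), 0, -1):
        for idx in combinations(range(len(a)), k):
            if is_subsequence([a[i] for i in idx], b):
                return k
    return 0

lcs_brute_force("xyxyxxzy", "yxyyzxy") returns 5, but the search already becomes impractical for strings a few dozen characters long.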

Dynamic Programming Solution

Let L[i, j] denote the length of the longest common subsequence of a_1 a_2 … a_i and b_1 b_2 … b_j, the prefixes of A and B, whose lengths are n and m, respectively. Then:
  L[i, j] = 0                             when i = 0 or j = 0
  L[i, j] = L[i−1, j−1] + 1               when i > 0, j > 0, a_i = b_j
  L[i, j] = max(L[i−1, j], L[i, j−1])     when i > 0, j > 0, a_i ≠ b_j

LCS Algorithm

Algorithm LCS(A, B)
Input: A and B, strings of length n and m, respectively
Output: length of a longest common subsequence of A and B

  initialize L[i, 0] and L[0, j] to zero;
  for i ← 1 to n do
      for j ← 1 to m do
          if a_i = b_j then
              L[i, j] ← 1 + L[i−1, j−1]
          else
              L[i, j] ← max(L[i−1, j], L[i, j−1])
          end if
      end for
  end for
  return L[n, m];
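The same algorithm in runnable Python (a direct transcription of the pseudocode above, with 0-based indexing; the function name is my own):

def lcs_length(a, b):
    n, m = len(a), len(b)
    # L[i][j] = length of an LCS of the prefixes a[0:i] and b[0:j];
    # row 0 and column 0 are the zero-initialized base cases.
    L = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if a[i-1] == b[j-1]:
                L[i][j] = 1 + L[i-1][j-1]
            else:
                L[i][j] = max(L[i-1][j], L[i][j-1])
    return L[n][m]

For the earlier example, lcs_length("xyxyxxzy", "yxyyzxy") returns 5.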

Example (Q7.5 pp. 220) Find the length of the longest common subsequence of A=xzyzzyx and B=zxyyzxz

Example (Cont.)

The completed table for A = xzyzzyx (columns) and B = zxyyzxz (rows):

         x  z  y  z  z  y  x
      0  0  0  0  0  0  0  0
   z  0  0  1  1  1  1  1  1
   x  0  1  1  1  1  1  1  2
   y  0  1  1  2  2  2  2  2
   y  0  1  1  2  2  2  3  3
   z  0  1  2  2  3  3  3  3
   x  0  1  2  2  3  3  3  4
   z  0  1  2  2  3  4  4  4

The length of the longest common subsequence is L[7, 7] = 4 (for example, zyzx).

Complexity Analysis of LCS Algorithm

What is the time and space complexity of the algorithm? The table has (n + 1)(m + 1) entries, each computed in constant time, so both the time and the space complexity are Θ(nm).

Matrix Chain Multiplication

Assume matrices A, B, and C have dimensions 2 × 10, 10 × 2, and 2 × 10, respectively. The number of scalar multiplications using the standard matrix multiplication algorithm for
– (A B) C is 2·10·2 + 2·2·10 = 80,
– A (B C) is 10·2·10 + 2·10·10 = 400.

Problem statement: Find the order of multiplying n matrices in which the number of scalar multiplications is minimum.

Straightforward Solution

Again, let us consider the brute-force method. We need to compute the number of different ways that we can parenthesize the product of n matrices.
– E.g., how many different orderings do we have for the product of four matrices? (Five.)
– Let f(n) denote the number of ways to parenthesize the product M_1 M_2 … M_n. The last multiplication splits the chain as (M_1 M_2 … M_k)(M_{k+1} M_{k+2} … M_n) for some k, giving f(n) = Σ_{k=1}^{n−1} f(k) f(n−k).
– What are f(1), f(2), and f(3)? (f(1) = f(2) = 1 and f(3) = 2.)

Catalan Numbers

The numbers f(n) are related to the Catalan numbers by C_n = f(n + 1), where C_n = (1/(n + 1)) C(2n, n). Using Stirling's formula, it can be shown that f(n) is approximately 4^(n−1) / ((n − 1)^(3/2) √π), i.e., f(n) grows as Θ(4ⁿ / n^(3/2)).

Cost of the Brute-Force Method

How many possibilities do we have for parenthesizing n matrices? Θ(4ⁿ / n^(3/2)), as shown above.
How much does it cost to find the number of scalar multiplications for one parenthesized expression? Θ(n).
Therefore, the total cost is Θ(n) · Θ(4ⁿ / n^(3/2)) = Θ(4ⁿ / √n), which is exponential in n.

The Recursive Solution

– Since the number of columns of each matrix M_i is equal to the number of rows of M_{i+1}, we only need to specify the number of rows of all the matrices, plus the number of columns of the last matrix: r_1, r_2, …, r_{n+1}, respectively. Thus M_i has dimensions r_i × r_{i+1}.
– Let the cost of multiplying the chain M_i … M_j (denoted by M_{i,j}) be C[i, j].
– If k is an index between i + 1 and j, the cost of computing M_{i,j} by multiplying M_{i,k−1} with M_{k,j} is C[i, k−1] + C[k, j] + r_i r_k r_{j+1}.
– Therefore, C[1, n] is defined by the recurrence
  C[i, i] = 0
  C[i, j] = min over i+1 ≤ k ≤ j of { C[i, k−1] + C[k, j] + r_i r_k r_{j+1} }   for i < j.
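The recurrence translates directly into a top-down Python sketch (illustrative; the bottom-up algorithm follows on the next slides). Here r is a 0-based list holding r_1, …, r_{n+1}, so in list indexing matrix M_i has dimensions r[i−1] × r[i].

from functools import lru_cache

def matchain_cost_topdown(r):
    n = len(r) - 1   # number of matrices in the chain

    @lru_cache(maxsize=None)
    def c(i, j):
        # Minimum scalar multiplications to compute M_i ... M_j.
        if i == j:
            return 0
        # Split as (M_i .. M_{k-1}) (M_k .. M_j) for k = i+1 .. j.
        return min(c(i, k - 1) + c(k, j) + r[i - 1] * r[k - 1] * r[j]
                   for k in range(i + 1, j + 1))

    return c(1, n)

Memoization via lru_cache is what keeps this from re-solving the same subchains exponentially many times; matchain_cost_topdown([2, 10, 2, 10]) returns 80, matching the (A B) C example.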

The Dynamic Programming Algorithm

The table C[i, j] (shown here for n = 6) is filled diagonal by diagonal, starting with C[i, i] = 0 on the main diagonal:

  C[1,1]  C[1,2]  C[1,3]  C[1,4]  C[1,5]  C[1,6]
          C[2,2]  C[2,3]  C[2,4]  C[2,5]  C[2,6]
                  C[3,3]  C[3,4]  C[3,5]  C[3,6]
                          C[4,4]  C[4,5]  C[4,6]
                                  C[5,5]  C[5,6]
                                          C[6,6]

Example (Q7.11)

Given as input 2, 3, 6, 4, 2, 7, compute the minimum number of scalar multiplications. Filling the table diagonal by diagonal gives
  C[1,2] = 36, C[2,3] = 72, C[3,4] = 48, C[4,5] = 56,
  C[1,3] = 84, C[2,4] = 84, C[3,5] = 132,
  C[1,4] = 96, C[2,5] = 126,
  C[1,5] = 124,
so the minimum number of scalar multiplications is 124.

MatChain Algorithm

Algorithm MatChain
Input: r[1..n+1], an array of positive integers corresponding to the dimensions of a chain of n matrices
Output: least number of scalar multiplications required to multiply the n matrices

  for i := 1 to n do
      C[i, i] := 0;                 // diagonal d_0
  for d := 1 to n − 1 do            // diagonals d_1 to d_{n−1}
      for i := 1 to n − d do
          j := i + d;
          C[i, j] := ∞;
          for k := i + 1 to j do
              C[i, j] := min{C[i, j], C[i, k−1] + C[k, j] + r[i]·r[k]·r[j+1]};
          end for
      end for
  end for
  return C[1, n];
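A runnable Python version of MatChain (same diagonal-by-diagonal order; r is the same 0-based dimension list used in the top-down sketch):

def matchain(r):
    n = len(r) - 1                 # number of matrices
    INF = float('inf')
    C = [[0] * (n + 1) for _ in range(n + 1)]   # C[i][i] = 0 (diagonal d_0)
    for d in range(1, n):          # diagonals d_1 to d_{n-1}
        for i in range(1, n - d + 1):
            j = i + d
            C[i][j] = INF
            for k in range(i + 1, j + 1):
                C[i][j] = min(C[i][j],
                              C[i][k-1] + C[k][j] + r[i-1] * r[k-1] * r[j])
    return C[1][n]

matchain([2, 3, 6, 4, 2, 7]) returns 124, agreeing with the worked example above.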

Time and Space Complexity of MatChain Algorithm

Time complexity: the table has Θ(n²) entries and each entry C[i, j] is computed in O(n) time (the inner loop over k), so the running time is Θ(n³).
Space complexity: the table requires Θ(n²) space.

The Knapsack Problem

Let U = {u_1, u_2, …, u_n} be a set of n items to be packed in a knapsack of size C ∈ ℕ. Let s_j and v_j be the size and value of the j-th item, where s_j, v_j ∈ ℕ, 1 ≤ j ≤ n. The objective is to fill the knapsack with some items from U whose total size does not exceed C and whose total value is maximum.
– Assume that the size of each item does not exceed C.

The Knapsack Problem Formulation

Given n positive integers in U, we want to find a subset S ⊆ U such that Σ_{u_i ∈ S} v_i is maximized subject to the constraint Σ_{u_i ∈ S} s_i ≤ C.

Inductive Solution

Let V[i, j] denote the value obtained by filling a knapsack of size j with items taken from the first i items {u_1, u_2, …, u_i} in an optimal way:
– The range of i is 0 ≤ i ≤ n.
– The range of j is 0 ≤ j ≤ C.
– The objective is to find V[n, C].

  V[i, 0] = V[0, j] = 0
  V[i, j] = V[i−1, j]                               if s_i > j
  V[i, j] = max{V[i−1, j], V[i−1, j−s_i] + v_i}     if s_i ≤ j

Example (pp. 223 Question 7.22) There are five items of sizes 3, 5, 7, 8, and 9 with values 4, 6, 7, 9, and 10 respectively. The size of the knapsack is

Algorithm Knapsack

Input: a set of items U = {u_1, u_2, …, u_n} with sizes s_1, s_2, …, s_n and values v_1, v_2, …, v_n, respectively, and knapsack capacity C
Output: the maximum value of Σ_{u_i ∈ S} v_i subject to Σ_{u_i ∈ S} s_i ≤ C

  for i := 0 to n do V[i, 0] := 0;
  for j := 0 to C do V[0, j] := 0;
  for i := 1 to n do
      for j := 1 to C do
          V[i, j] := V[i−1, j];
          if s_i ≤ j then
              V[i, j] := max{V[i, j], V[i−1, j−s_i] + v_i}
          end if
      end for
  end for
  return V[n, C];
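A runnable Python version of the algorithm (0-based lists for sizes and values; the table keeps the 1-based layout of the pseudocode):

def knapsack(sizes, values, C):
    n = len(sizes)
    # V[i][j] = best value achievable with the first i items and capacity j;
    # row 0 and column 0 are the zero base cases.
    V = [[0] * (C + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, C + 1):
            V[i][j] = V[i-1][j]                    # leave item i out
            if sizes[i-1] <= j:                    # item i fits: try taking it
                V[i][j] = max(V[i][j],
                              V[i-1][j - sizes[i-1]] + values[i-1])
    return V[n][C]

The exercise above does not state the capacity here, but for an assumed capacity of, say, 12, knapsack([3, 5, 7, 8, 9], [4, 6, 7, 9, 10], 12) returns 14 (the items of sizes 3 and 9).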

Time and Space Complexity of the Knapsack Algorithm

The two nested loops perform constant work for each of the nC table entries, so the time complexity is Θ(nC); the table V requires Θ(nC) space as well. Note that this is only pseudo-polynomial: Θ(nC) is exponential in the number of bits needed to represent C.