Dynamic Programming.


Dynamic Programming

Expected Outcomes
Students should be able to:
- Write down the four steps of dynamic programming
- Compute a Fibonacci number and the binomial coefficients by dynamic programming
- Compute the longest common subsequence and the shortest common supersequence of two given sequences by dynamic programming
- Solve the investment problem by dynamic programming

Dynamic Programming
Dynamic Programming is a general algorithm design technique, invented by the American mathematician Richard Bellman in the 1950s to solve optimization problems.
Main idea:
- solve several smaller (overlapping) subproblems
- record solutions in a table so that each subproblem is only solved once
- the final state of the table will be (or contain) the solution
Dynamic programming vs. divide-and-conquer: both solve problems by dividing a problem into smaller subproblems. But divide-and-conquer partitions the problem into independent subproblems; in contrast, dynamic programming is applicable when the subproblems are not independent, i.e., when subproblems share subsubproblems. In this case a divide-and-conquer algorithm does more work than necessary, repeatedly solving the common subproblems, while a dynamic programming algorithm solves every subsubproblem just once and then saves its answer in a table, thereby avoiding the work of recomputing the answer every time the subproblem is encountered.

Frame of Dynamic Programming
Problems suited to dynamic programming:
- the solution can be expressed in a recursive way
- sub-problems occur repeatedly
- a subsequence of an optimal solution is an optimal solution to the corresponding sub-problem
Frame (the four steps):
1. Characterize the structure of an optimal solution
2. Recursively define the value of an optimal solution
3. Compute the value of an optimal solution in a bottom-up fashion
4. Construct an optimal solution from computed information

Three basic components
The development of a dynamic programming algorithm has three basic components:
- A recurrence relation (for defining the value/cost of an optimal solution);
- A tabular computation (for computing the value of an optimal solution);
- A backtracing procedure (for delivering an optimal solution).

Example: Fibonacci numbers
Recall the definition of the Fibonacci numbers:
f(0) = 0
f(1) = 1
f(n) = f(n-1) + f(n-2)
Computing the nth Fibonacci number recursively (top-down) repeatedly expands the same calls:
f(n) = f(n-1) + f(n-2)
f(n-1) = f(n-2) + f(n-3)
f(n-2) = f(n-3) + f(n-4)
...
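The table idea can also be applied to the top-down computation: memoize each value so it is computed only once. A minimal Python sketch (the function name is illustrative):

```python
def fib_memo(n, memo=None):
    """Top-down Fibonacci with a memo table, so each f(i) is computed once."""
    if memo is None:
        memo = {0: 0, 1: 1}
    if n not in memo:
        memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]
```

Without the memo table the recursion takes exponential time; with it, linear.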

Example: Fibonacci numbers
Computing the nth Fibonacci number using bottom-up iteration:
f(0) = 0
f(1) = 1
f(2) = 0+1 = 1
f(3) = 1+1 = 2
f(4) = 1+2 = 3
f(5) = 2+3 = 5
...
f(n) = f(n-1) + f(n-2)

ALGORITHM Fib(n)
F[0] ← 0, F[1] ← 1
for i ← 2 to n do
  F[i] ← F[i-1] + F[i-2]
return F[n]

This uses O(n) extra space for the table F.
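Since each step of the bottom-up iteration only reads the previous two entries, the table can be replaced by two variables. A Python sketch of this space-saving variant:

```python
def fib(n):
    """Bottom-up Fibonacci; only the last two values are kept, so extra space is O(1)."""
    if n < 2:
        return n
    prev, curr = 0, 1  # f(i-2), f(i-1)
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr
    return curr
```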

Examples of Dynamic Programming
- Computing binomial coefficients
- Computing the longest common subsequence
- Computing the shortest common supersequence
- Warshall's algorithm for transitive closure
- Floyd's algorithm for all-pairs shortest paths
- Some instances of difficult discrete optimization problems: knapsack

Computing Binomial Coefficients
A binomial coefficient, denoted C(n, k), is the number of combinations of k elements from an n-element set (0 ≤ k ≤ n).
Recurrence relation (one problem → 2 overlapping subproblems):
C(n, k) = C(n-1, k-1) + C(n-1, k) for n > k > 0, and
C(n, 0) = C(n, n) = 1
Dynamic programming solution: record the values of the binomial coefficients in a table of n+1 rows and k+1 columns, numbered from 0 to n and 0 to k respectively. The rows of the table form Pascal's triangle:
1
1 1
1 2 1
1 3 3 1
1 4 6 4 1
1 5 10 10 5 1
…

Dynamic Binomial Coefficient Algorithm
for i ← 0 to n do
  for j ← 0 to minimum(i, k) do
    if j = 0 or j = i then
      BiCoeff[i, j] ← 1
    else
      BiCoeff[i, j] ← BiCoeff[i-1, j-1] + BiCoeff[i-1, j]
    end if
  end for j
end for i
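A direct Python transcription of this pseudocode (names are illustrative):

```python
def binomial(n, k):
    """Tabular computation of C(n, k) via Pascal's recurrence.

    table[i][j] holds C(i, j); each entry is computed once from row i-1.
    """
    table = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                table[i][j] = 1
            else:
                table[i][j] = table[i - 1][j - 1] + table[i - 1][j]
    return table[n][k]
```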

Longest Common Subsequence (LCS) A subsequence of a sequence S is obtained by deleting zero or more symbols from S. For example, the following are all subsequences of “president”: pred, sdn, predent. The longest common subsequence problem is to find a maximum length common subsequence between two sequences.

LCS
For instance:
Sequence 1: president
Sequence 2: providence
Their LCS is priden.

LCS
Another example:
Sequence 1: algorithm
Sequence 2: alignment
One of its LCSs is algm.

How to compute LCS? Let A = a1a2…am and B = b1b2…bn. len(i, j): the length of an LCS between a1a2…ai and b1b2…bj. With proper initializations, len(i, j) can be computed as follows:
len(i, j) = 0 if i = 0 or j = 0,
len(i, j) = len(i-1, j-1) + 1 if i, j > 0 and ai = bj,
len(i, j) = max{len(i, j-1), len(i-1, j)} if i, j > 0 and ai ≠ bj.

Running time and memory: O(mn) and O(mn).

The backtracing algorithm
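The table computation and the backtrace together can be sketched in Python as follows (names are illustrative; when several LCSs exist, the tie-breaking in the backtrace determines which one is returned):

```python
def lcs(a, b):
    """Fill the LCS length table, then backtrace from (m, n) to recover one LCS."""
    m, n = len(a), len(b)
    # len_table[i][j] = length of an LCS of a[:i] and b[:j]
    len_table = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                len_table[i][j] = len_table[i - 1][j - 1] + 1
            else:
                len_table[i][j] = max(len_table[i][j - 1], len_table[i - 1][j])
    # Backtrace: at each cell, move in the direction its value came from.
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1])
            i, j = i - 1, j - 1
        elif len_table[i - 1][j] >= len_table[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))
```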

Shortest common super-sequence
Definition: Let X and Y be two sequences. A sequence Z is a super-sequence of X and Y if both X and Y are subsequences of Z.
Shortest common super-sequence (SCS) problem:
Input: two sequences X and Y.
Output: a shortest common super-sequence of X and Y.
Example: X = abc and Y = abb. Both abbc and abcb are shortest common super-sequences of X and Y.

How to compute SCS?
Recursive equation: let len[i, j] be the length of an SCS of X[1...i] and Y[1...j]. len[i, j] can be computed as follows:
len[i, j] = j if i = 0,
len[i, j] = i if j = 0,
len[i, j] = len[i-1, j-1] + 1 if i, j > 0 and xi = yj,
len[i, j] = min{len[i, j-1] + 1, len[i-1, j] + 1} if i, j > 0 and xi ≠ yj.
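A direct Python transcription of this recurrence for the SCS length (names are illustrative):

```python
def scs_length(x, y):
    """Tabular SCS length: length[i][j] = length of an SCS of x[:i] and y[:j]."""
    m, n = len(x), len(y)
    length = [[0] * (n + 1) for _ in range(m + 1)]
    for j in range(n + 1):
        length[0][j] = j          # i = 0: Z must contain all of Y[1..j]
    for i in range(m + 1):
        length[i][0] = i          # j = 0: Z must contain all of X[1..i]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                length[i][j] = length[i - 1][j - 1] + 1
            else:
                length[i][j] = min(length[i][j - 1], length[i - 1][j]) + 1
    return length[m][n]
```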

Solution: ABDCABDAB (an SCS of X = ABCBDAB and Y = BDCABA, of length 9).

Exercise
Taking the LCS algorithm as a model, write down the full SCS algorithm and analyze it.

An interesting example: Investment Problem
Suppose there are m dollars and n products. Let fi(x) be the profit of investing x dollars in product i. How should the investment be arranged so that the total profit f1(x1) + f2(x2) + … + fn(xn) is maximized?
Instance: 5 thousand dollars, 4 products.

x   f1(x)  f2(x)  f3(x)  f4(x)
1   11     0      2      20
2   12     5      10     21
3   13     10     30     22
4   14     15     32     23
5   15     20     40     24

Fk(x): the optimum profit for investing x thousand dollars in the first k products.
xk(x): the dollars invested in product k to achieve Fk(x).
Dynamic Programming Table:

x   F1(x) x1   F2(x) x2   F3(x) x3   F4(x) x4
1   11    1    11    0    11    0    20    1
2   12    2    12    0    13    1    31    1
3   13    3    16    2    30    3    33    1
4   14    4    21    3    41    3    50    1
5   15    5    26    4    43    4    61    1

Solution: x1 = 1, x2 = 0, x3 = 3, x4 = 1; F4(5) = 61.

Algorithm for Investment
for y ← 1 to m do
  F1(y) ← f1(y)
for k ← 2 to n do
  for y ← 1 to m do
    Fk(y) ← max{fk(xk) + Fk-1(y - xk) : 0 ≤ xk ≤ y}
return Fn(m)
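A runnable Python sketch of this algorithm, applied to the instance above (the profit table is the one from the slides; variable and function names are illustrative). It also records the choice xk(y) at each step so the allocation can be backtraced:

```python
def invest(profit, m):
    """profit[k][x] = profit of putting x dollars into product k+1 (x = 0..m).

    Returns the maximum total profit and one optimal allocation.
    """
    n = len(profit)
    # F[y] = best profit investing y dollars in the products considered so far
    F = [profit[0][y] for y in range(m + 1)]
    choice = [[y for y in range(m + 1)]]  # choice[k][y] = dollars given to product k+1
    for k in range(1, n):
        newF, pick = [0] * (m + 1), [0] * (m + 1)
        for y in range(m + 1):
            best, arg = -1, 0
            for xk in range(y + 1):
                v = profit[k][xk] + F[y - xk]
                if v > best:
                    best, arg = v, xk
            newF[y], pick[y] = best, arg
        F, choice = newF, choice + [pick]
    # Backtrace the allocation from the recorded choices
    alloc, y = [0] * n, m
    for k in range(n - 1, -1, -1):
        alloc[k] = choice[k][y]
        y -= alloc[k]
    return F[m], alloc

# Profit table from the slides: profit[k][x] for x = 0..5
table = [
    [0, 11, 12, 13, 14, 15],  # f1
    [0, 0, 5, 10, 15, 20],    # f2
    [0, 2, 10, 30, 32, 40],   # f3
    [0, 20, 21, 22, 23, 24],  # f4
]
```

Running invest(table, 5) reproduces the solution from the DP table: total profit 61 with allocation x1 = 1, x2 = 0, x3 = 3, x4 = 1.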

Time complexity
For each Fk(x) (2 ≤ k ≤ n, 1 ≤ x ≤ m), computing the maximum takes x+1 additions and x comparisons; summing over all k and x gives a total of O(nm²) operations.