Dynamic Programming ACM Workshop 24 August 2011

Dynamic Programming Dynamic Programming is a programming technique that can dramatically reduce the runtime of some algorithms, from exponential to polynomial. Not all problems have the characteristics that make DP applicable. Richard Bellman was one of the principal founders of this approach.

Recursion The Fibonacci numbers are defined by the following recurrence: F(0) = 1, F(1) = 1, and F(i) = F(i-1) + F(i-2) for i > 1.

Recursive code for Fibonacci

int fib(int n) {
    if (n == 0 || n == 1)
        return 1;
    else
        return fib(n - 1) + fib(n - 2);
}

DP solution The above algorithm is of exponential order, because it recomputes the same Fibonacci values over and over. You can get an O(n) solution using DP.

Tabular computation The tabular computation avoids recomputation: fill in a table F(0), F(1), F(2), ..., F(10) from left to right, computing each entry exactly once from the two entries before it.
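
A minimal C sketch of this tabular version (the function name fib_dp is illustrative, not from the slides):

/* Bottom-up Fibonacci: each table entry is computed exactly once. */
int fib_dp(int n)
{
    int F[64];    /* table F[0..n]; a signed int overflows near n = 45 anyway */
    F[0] = 1;     /* base cases match the recursive version above */
    F[1] = 1;
    for (int i = 2; i <= n; i++)
        F[i] = F[i - 1] + F[i - 2];    /* filled left to right */
    return F[n];
}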

Two key ingredients Two key ingredients for an optimization problem to be suitable for a dynamic-programming solution: 1. Optimal substructure: each substructure is optimal (the principle of optimality). 2. Overlapping subproblems: subproblems are dependent (otherwise, a divide-and-conquer approach is the choice).

Three basic components The development of a dynamic-programming algorithm has three basic components: the recurrence relation (for defining the value of an optimal solution); the tabular computation (for computing the value of an optimal solution); and the traceback (for delivering an optimal solution).

Maximum sum You have a sequence of integers and need the maximum sum over all contiguous subsequences. E.g., for 3, -4, 5, -7, 8, -6, 21, -14, -9, 19 the maximum sum comes from 8, -6, 21, which gives 23. Brute force is O(n^3): check the sums of all contiguous subsequences of every size from 1 to n, as in the sketch below.
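
A C sketch of that brute force, assuming a nonempty array (the three nested loops give the O(n^3)):

/* Checks every contiguous run Arr[i..j], summing it from scratch. */
int max_sum_brute(const int Arr[], int n)
{
    int best = Arr[0];
    for (int i = 0; i < n; i++)
        for (int j = i; j < n; j++) {
            int s = 0;
            for (int k = i; k <= j; k++)
                s += Arr[k];            /* O(n) work per (i, j) pair */
            if (s > best)
                best = s;
        }
    return best;
}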

Solving the problem is no problem. The actual problem is understanding the problem!

Formulation of linear recurrence Let S[i] be the maximum sum of a contiguous subsequence that starts at any index and ends at index i. Then S[i] = max(Arr[i], Arr[i] + S[i-1]).

DP Solution
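
The slide's code is not in the transcript; here is a minimal C sketch of the recurrence above, keeping only the current S value and the best seen so far (the function name is illustrative):

/* O(n) DP for the maximum contiguous sum, assuming n >= 1. */
int max_sum(const int Arr[], int n)
{
    int S = Arr[0];       /* max sum of a run ending at the current index */
    int best = Arr[0];    /* best S over all positions so far */
    for (int i = 1; i < n; i++) {
        S = (Arr[i] + S > Arr[i]) ? Arr[i] + S : Arr[i];   /* the recurrence */
        if (S > best)
            best = S;
    }
    return best;
}

On the example sequence 3, -4, 5, -7, 8, -6, 21, -14, -9, 19 this returns 23, from the run 8, -6, 21.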

Longest increasing subsequence (LIS) The LIS problem is to find a longest increasing subsequence of a given sequence of distinct integers a1 a2 ... an. For example, in 9, 2, 5, 3, 7, 11 both 2, 5, 7, 11 and 3, 7, 11 are increasing subsequences, while 9, 5, 3 is not. We want to find a longest one.

A naive approach for LIS Let L[i] be the length of a longest increasing subsequence ending at position i. Then L[i] = 1 + max{ L[j] : 0 <= j < i and a_j < a_i }, taking L[i] = 1 when no such j exists. Prev[i] records the index j that achieves the maximum, so the subsequence itself can be recovered by a traceback.

DP solution The slide shows the worked table for an example array Arr (beginning 9, 5, 2, ...) with its L and Prev rows filled in.
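
A C sketch of the O(n^2) computation, with Prev recorded for the traceback (the array names follow the slides; the function name is illustrative):

/* Returns the LIS length; fills L[] and Prev[] for a later traceback. */
int lis(const int a[], int n, int L[], int Prev[])
{
    int best = 0;
    for (int i = 0; i < n; i++) {
        L[i] = 1;         /* a[i] by itself */
        Prev[i] = -1;     /* no predecessor yet */
        for (int j = 0; j < i; j++)
            if (a[j] < a[i] && L[j] + 1 > L[i]) {
                L[i] = L[j] + 1;   /* extend the best subsequence ending at j */
                Prev[i] = j;       /* remember it for the traceback */
            }
        if (L[i] > best)
            best = L[i];
    }
    return best;
}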

Binomial coefficients
(x + y)^2 = x^2 + 2xy + y^2, coefficients are 1, 2, 1
(x + y)^3 = x^3 + 3x^2y + 3xy^2 + y^3, coefficients are 1, 3, 3, 1
(x + y)^4 = x^4 + 4x^3y + 6x^2y^2 + 4xy^3 + y^4, coefficients are 1, 4, 6, 4, 1
(x + y)^5 = x^5 + 5x^4y + 10x^3y^2 + 10x^2y^3 + 5xy^4 + y^5, coefficients are 1, 5, 10, 10, 5, 1
The n+1 coefficients of (x + y)^n can be computed by the formula c(n, i) = n! / (i! * (n - i)!) for each i = 0..n. The repeated computation of all those factorials gets expensive. We can use dynamic programming to build the coefficients without computing any factorials.

Solution by dynamic programming
n    c(n,0)  c(n,1)  c(n,2)  c(n,3)  c(n,4)  c(n,5)  c(n,6)
0    1
1    1       1
2    1       2       1
3    1       3       3       1
4    1       4       6       4       1
5    1       5       10      10      5       1
6    1       6       15      20      15      6       1
Each row depends only on the preceding row: c(n, i) = c(n-1, i-1) + c(n-1, i). Only linear space and quadratic time are needed. This table is known as Pascal's triangle.
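
A C sketch of the row-by-row computation, keeping a single row and updating it right to left so the previous row's values are still intact when read (the function name is illustrative):

/* After the call, c[i] = c(n, i) for i = 0..n.  No factorials involved.
   The caller supplies an array with at least n+1 entries. */
void binomial_row(int n, long long c[])
{
    for (int i = 0; i <= n; i++)
        c[i] = 0;
    c[0] = 1;                        /* row 0 of the triangle */
    for (int row = 1; row <= n; row++)
        for (int i = row; i >= 1; i--)
            c[i] += c[i - 1];        /* c(row, i) = c(row-1, i) + c(row-1, i-1) */
}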

Matrix-chain multiplication If the chain of matrices is A1, A2, A3, A4, the product A1 A2 A3 A4 can be fully parenthesized in five distinct ways: (A1 (A2 (A3 A4))), (A1 ((A2 A3) A4)), ((A1 A2) (A3 A4)), ((A1 (A2 A3)) A4), (((A1 A2) A3) A4).

Matrix-chain multiplication We can multiply two matrices A and B only if they are compatible: the number of columns of A must equal the number of rows of B. If A is a p × q matrix and B is a q × r matrix, the resulting matrix C is a p × r matrix.

Matrix-chain multiplication Three matrices: A1: 10 × 100, A2: 100 × 5, A3: 5 × 50. ((A1 A2) A3) -> we perform 10 · 100 · 5 = 5000 scalar multiplications to compute the 10 × 5 matrix product A1 A2, plus another 10 · 5 · 50 = 2500 scalar multiplications to multiply this matrix by A3, for a total of 7500 scalar multiplications. (A1 (A2 A3)) -> we perform 100 · 5 · 50 = 25,000 scalar multiplications to compute the 100 × 50 matrix product A2 A3, plus another 10 · 100 · 50 = 50,000 scalar multiplications to multiply A1 by this matrix, for a total of 75,000 scalar multiplications.

Matrix-chain multiplication problem Given a chain A1, A2, ..., An of n matrices, where for i = 1, 2, ..., n matrix Ai has dimension p(i-1) × p(i), fully parenthesize the product A1 A2 ... An in a way that minimizes the number of scalar multiplications.

Step 2: A recursive solution Now we use our optimal substructure to show that we can construct an optimal solution to the problem from optimal solutions to subproblems. Let m[i, j] be the minimum number of scalar multiplications needed to compute Ai ... Aj. Then m[i, j] = 0 when i = j, and otherwise m[i, j] = min over i <= k < j of ( m[i, k] + m[k+1, j] + p(i-1) · p(k) · p(j) ).

The three expressions here represent the possible splits of the subchain A2 ... A5, namely A2 (A3..A5), (A2..A3) (A4..A5), and (A2..A4) A5, respectively: one for each choice of k in the recurrence.
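
A C sketch of the bottom-up computation of this recurrence (MAXN and the function name are illustrative; matrices are 1-indexed and p holds the n+1 dimensions p[0..n]):

#define MAXN 64
long long m[MAXN][MAXN];   /* m[i][j]: min cost to compute Ai..Aj */

long long matrix_chain(const int p[], int n)
{
    for (int i = 1; i <= n; i++)
        m[i][i] = 0;                       /* a single matrix costs nothing */
    for (int len = 2; len <= n; len++)     /* subchain length */
        for (int i = 1; i + len - 1 <= n; i++) {
            int j = i + len - 1;
            m[i][j] = -1;
            for (int k = i; k < j; k++) {  /* try every split point */
                long long cost = m[i][k] + m[k + 1][j]
                               + (long long)p[i - 1] * p[k] * p[j];
                if (m[i][j] < 0 || cost < m[i][j])
                    m[i][j] = cost;
            }
        }
    return m[1][n];
}

For the three matrices above, p = {10, 100, 5, 50} and matrix_chain(p, 3) returns 7500.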

List of problems (LIS) 111 History Grading, 231 Testing the Catcher, 481 What Goes Up, 10131 Is Bigger Smarter? (UVa Online Judge)

0-1 Knapsack problem Problem statement: given a bag of capacity W and a set of objects, each with a weight and an associated profit, find the best combination of objects to maximize the profit.

Naïve approach Make all possible combinations of objects and select the one with maximum profit. For n objects there are 2^n possible combinations; for n = 100 that is 2^100, around 10^30, and checking them all would take on the order of 10^30 seconds.

Recursive relation Recursively, the 0-1 knapsack problem can be formulated as follows, where A(j, Y) is the best profit achievable using the first j objects with capacity Y:
A(0, Y) = 0
A(j, 0) = 0
A(j, Y) = A(j-1, Y) if w_j > Y
A(j, Y) = max{ A(j-1, Y), p_j + A(j-1, Y - w_j) } if w_j <= Y
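
A C sketch that fills the table row by row (the bounds MAXN and MAXW and the function name are illustrative; w[1..n] and p[1..n] are the weights and profits, 1-indexed):

#define MAXN 100
#define MAXW 1000
int A[MAXN + 1][MAXW + 1];    /* A[j][Y] as in the recurrence */

int knapsack(int n, int W, const int w[], const int p[])
{
    for (int Y = 0; Y <= W; Y++)
        A[0][Y] = 0;                       /* no objects: profit 0 */
    for (int j = 1; j <= n; j++) {
        A[j][0] = 0;                       /* no capacity: profit 0 */
        for (int Y = 1; Y <= W; Y++) {
            A[j][Y] = A[j - 1][Y];         /* skip object j */
            if (w[j] <= Y && p[j] + A[j - 1][Y - w[j]] > A[j][Y])
                A[j][Y] = p[j] + A[j - 1][Y - w[j]];   /* take object j */
        }
    }
    return A[n][W];
}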

Dynamic programming approach Suppose we have gold bars of weights 2, 3, and 4, and a bag of capacity 5 kg. Using the dynamic programming approach we come up with the following table.

Table Filling the table, arr[m][n] finally gives the value of the best possible combination. Taking each bar's profit equal to its weight, the filled table (rows = objects considered, columns = capacity Y = 0..5) is:
Y:          0  1  2  3  4  5
j=0:        0  0  0  0  0  0
j=1 (w=2):  0  0  2  2  2  2
j=2 (w=3):  0  0  2  3  3  5
j=3 (w=4):  0  0  2  3  4  5
so arr[3][5] = 5.

Printing the solution The slide's companion table marks, for every cell, where its value came from: u (up, from arr[j-1][Y], object j skipped) or c (corner, from arr[j-1][Y - w_j], object j taken). Moving back from arr[m][n] we get the path u -> c -> c -> 0, so the answer will be the bars of weight 2 and 3.
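
A C sketch of this traceback over the table A filled by the knapsack sketch above (items print in reverse order; for the gold-bar example it prints 3 2):

#include <stdio.h>

/* Walks back from A[n][W] and prints the weights of the chosen objects. */
void print_solution(int n, int W, const int w[])
{
    int Y = W;
    for (int j = n; j >= 1; j--)
        if (A[j][Y] != A[j - 1][Y]) {   /* 'c': object j was taken */
            printf("%d ", w[j]);
            Y -= w[j];                  /* jump to the corner cell */
        }                               /* else 'u': skipped, stay in column Y */
    printf("\n");
}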

List of problems Dividing Coins, CD, Diving for Gold, Super Sale.