Data Structures and Algorithms: Dynamic Programming


Data Structures and Algorithms: Dynamic Programming
Dr. Muhammad Safyan
Department of Computer Science, Government College University, Lahore

Today's Agenda
- Problem-solving approaches
- Dynamic programming

Approaches to Solve a Problem
Brute force: a brute-force algorithm blindly iterates over the entire domain of possible solutions in search of one or more solutions that satisfy a condition, like trying every combination on a combination lock.
Divide and conquer: divide a large problem into subproblems of the original one, and divide those subproblems further until the cases become simple enough to solve directly; these simplest cases are the base cases. Divide and conquer then combines the solutions to the subproblems, and it applies when the subproblems are disjoint, e.g. merge sort. Dynamic programming applies when the subproblems overlap.

Approaches to Solve a Problem
Dynamic programming: a special case of the divide-and-conquer approach that applies when the subproblems overlap, e.g. computing Fibonacci numbers.
Greedy approach: ?

Dynamic Programming
Dynamic programming is a technique used for optimization: the goal is to maximize or minimize some result. In dynamic programming the procedure is not fixed in advance; instead, the algorithm works out all candidate solutions to the subproblems and then picks the one that is optimal. Dynamic programming may therefore consume more memory than an ordinary solution, because it stores the subproblem results and chooses the best among multiple candidate solutions.

Dynamic Programming (DP)
DP uses a recursive formulation even when the implementation is not literally recursive; the recursive structure can be realized by recursion or by iteration. DP follows the principle of optimality: whatever the first decision in the sequence of steps, the remaining decisions must be optimal for the subproblem that results. The following conditions must hold for dynamic programming to apply:
- A recursive equation
- Optimal substructure
- Overlapping subproblems

Dynamic Programming
Recursive equation: the function calls itself.
Optimal substructure: an optimal solution to the problem contains optimal solutions to its subproblems.
Overlapping subproblems: the same subproblem repeats; a function called for one part of the problem is also called as part of other parts of the problem.
Advantage: dynamic programming reduces time complexity.

Dynamic Programming: Recursion Methodologies
There are two ways to solve a problem with a recursive structure:
- Top-down: use recursion, storing each result as it is computed (memoization).
- Bottom-up: use tabulation, filling a table with the recursive equation and a for loop.

Fibonacci Series
Fib(n) =
    0                       if n = 0
    1                       if n = 1
    Fib(n-2) + Fib(n-1)     otherwise

int fib(int n) {
    if (n <= 1)
        return n;
    else
        return fib(n - 2) + fib(n - 1);
}

Time complexity: T(n) = O(2^n). How can we reduce it?

Top-Down: Fibonacci Series
What's the problem? The recursion recomputes the same subproblems, such as Fib(n-2), over and over again.

Top-Down Memoized Method:
Fib(n)
{
    if (n == 0) return M[0];
    if (n == 1) return M[1];
    if (Fib(n-2) is not already calculated)
        call Fib(n-2);
    if (Fib(n-1) is not already calculated)
        call Fib(n-1);
    // Store the n-th Fibonacci number in memory, reusing the previous results.
    M[n] = M[n-1] + M[n-2];
    return M[n];
}
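A runnable C version of this memoized top-down scheme (a minimal sketch: the table M, its fixed size, and the -1 sentinel are illustrative choices, not from the slides):

#include <stdio.h>

#define MAXN 64
static long long M[MAXN];        /* memo table; M[i] == -1 means "not yet computed" */

/* Top-down Fibonacci: recurse, but store each result so it is computed only once. */
long long fib(int n) {
    if (n <= 1)
        return n;
    if (M[n] != -1)              /* already calculated: reuse the stored value */
        return M[n];
    M[n] = fib(n - 2) + fib(n - 1);
    return M[n];
}

int main(void) {
    for (int i = 0; i < MAXN; i++)
        M[i] = -1;
    printf("fib(40) = %lld\n", fib(40)); /* O(n) calls instead of O(2^n) */
    return 0;
}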

(Figure: recursion tree of the memoized computation; subtrees whose results are already calculated are replaced by lookups in M.)

Fibonacci Series
Storing the results in a global array makes a big difference: for this example there are only 6 calls in total, so T(n) = n + 1 = O(n). Caching the results of recursive calls in this way is called memoization. Filling the same table iteratively, from the smallest subproblems upward, is the bottom-up approach; this iterative method is called the tabulation method, sketched below.
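A minimal C sketch of the tabulation method (the table name T and its fixed size are illustrative):

#include <stdio.h>

/* Bottom-up (tabulation) Fibonacci: fill the table from the base cases up. */
long long fib_tab(int n) {
    long long T[64];                /* subproblem table; assumes 0 <= n < 64 */
    T[0] = 0;
    T[1] = 1;
    for (int i = 2; i <= n; i++)
        T[i] = T[i - 1] + T[i - 2]; /* the recursive equation driven by a for loop */
    return T[n];
}

int main(void) {
    printf("fib(10) = %lld\n", fib_tab(10)); /* prints 55 */
    return 0;
}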

Dynamic Programming
Dynamic programming is a method of solving optimization problems by combining the solutions of subproblems. Developing these algorithms follows four steps:
1. Characterize the structure of an optimal solution: formally state what properties an optimal solution exhibits and how optimal solutions are composed of optimal solutions to subproblems. Assume you have an optimal solution and show how it must decompose. Sometimes it is useful to write a brute-force solution, observe its redundancies, and characterize a more refined solution, e.g. our observation below that a cut produces one or two smaller rods that can themselves be solved optimally.
2. Recursively define the value of an optimal solution: analyze the problem in a top-down fashion and write a recursive cost function that reflects the structure above, e.g. the recurrence relation shown later.
3. Compute the value of an optimal solution: start with the base cases and solve the subproblems in a bottom-up manner (or memoize), avoiding redundant computation, e.g. Bottom-Up-Cut-Rod.
4. Construct an optimal solution from the computed information: (optionally) augment the code to record the structure of the solution that produces the optimal value, e.g. Extended-Bottom-Up-Cut-Rod and Print-Cut-Rod-Solution.
Thus the process breaks the original problem into subproblems that also exhibit optimal behavior. While the subproblems are not usually independent, we only need to solve each subproblem once and then store the values for future computations. To illustrate this procedure we will consider the problem of maximizing the profit from rod cutting.

Rod Cutting Problem
Assume a company buys long steel rods and cuts them into shorter rods for sale to its customers. If each cut is free and rods of different lengths sell for different amounts, we wish to determine how best to cut the original rods so as to maximize revenue.
Brute-force solution: let the length of the rod be n inches. There are 2^(n-1) different ways to cut the rod, since at each of the n-1 positions we make a binary decision of whether or not to cut. The number of cut patterns therefore equals the number of binary strings of n-1 bits, of which there are 2^(n-1).

Rod Cutting Problem
(Figure: the eight possible ways to cut a rod of length 4.)

Rod Cutting Problem
To find the optimal value we simply add up the prices of the pieces in each cut pattern and select the highest total.
Dynamic programming solution: formalize the problem by assuming that a piece of length i has price p_i. If an optimal solution cuts the rod into k pieces of lengths i_1, i_2, ..., i_k, such that n = i_1 + i_2 + ... + i_k, then the revenue for a rod of length n is
r_n = p_{i_1} + p_{i_2} + ... + p_{i_k}.

Rod Cutting Problem
Optimal substructure: the first cut leaves one piece to sell and one smaller rod that must itself be cut optimally, so with r_0 = 0 the optimal revenue satisfies
r_n = max(p_n, r_1 + r_{n-1}, r_2 + r_{n-2}, ..., r_{n-1} + r_1),
which simplifies to r_n = max over 1 <= i <= n of (p_i + r_{n-i}).

Rod Cutting Problem
Recursive equation: r_n = max over 1 <= i <= n of (p_i + r_{n-i}), with r_0 = 0.
Complexity: T(n) = 1 + sum_{j=0}^{n-1} T(j), where T(j) is the number of times the recursion occurs for each iteration of the for loop with j = n - i. The solution of this recurrence can be shown to be T(n) = 2^n, which is still exponential behavior. The problem with the naive top-down solution is that we recompute all possible cuts, producing the same run time as brute force (only in a recursive fashion).

Rod Cutting Problem: Bottom-Up
Rather than recomputing the solutions to the smaller problems, we can store them in a bottom-up manner; the run time then improves drastically (at the cost of additional memory usage). To implement this approach we simply solve the problems for smaller lengths first and store these optimal revenues in an array of size n + 1. When evaluating longer lengths we then look up these values to determine the optimal revenue for the larger piece. We can formulate this recursively as
r_j = max over 1 <= i <= j of (p_i + r_{j-i}), with r_0 = 0.

Rod Cutting Problem: Bottom-Up Total Length Profit ↓ Piece Length  1 2 3 4 5   → 2 2 4 6 8 10 Length of Pieces Profit per Piece 1 2 5 3 9 4 6 ↓ 5 2 5 7 10 12 ↓ ↓ 9 2 5 9 11 14 ↓ ↓ ↓ ↓ ↓ 6 2 5 9 11 14 Max(Profit by excluding new piece, Profit by including new piece)

Rod Cutting Problem: Bottom-Up
Note that to compute any r_j we only need the values r_0 to r_{j-1}, which we store in an array; hence each new element is computed using only previously computed values. The implementation of this approach is given below.

Rod Cutting Problem: Bottom-Up
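A minimal C sketch of this bottom-up procedure, in the spirit of CLRS's Bottom-Up-Cut-Rod (the price table p[] reuses the example values from the table above and is illustrative):

#include <stdio.h>

#define N 5

/* Bottom-Up-Cut-Rod: r[j] = max over 1 <= i <= j of (p[i] + r[j-i]). */
int bottom_up_cut_rod(const int p[], int n) {
    int r[N + 1];                  /* assumes n <= N */
    r[0] = 0;                      /* a rod of length 0 earns nothing */
    for (int j = 1; j <= n; j++) {
        int best = -1;
        for (int i = 1; i <= j; i++)
            if (p[i] + r[j - i] > best)
                best = p[i] + r[j - i];
        r[j] = best;               /* optimal revenue for length j */
    }
    return r[n];
}

int main(void) {
    /* p[i] = price of a piece of length i (index 0 unused, no length-5 piece) */
    int p[N + 1] = {0, 2, 5, 9, 6, 0};
    printf("max revenue for a length-%d rod: %d\n", N, bottom_up_cut_rod(p, N));
    return 0;                      /* prints 14, matching the table above */
}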

Rod Cutting Problem: Extended Bottom-Up
Thus we have reduced the run time from exponential to polynomial! If in addition to the maximal revenue we want to know where to make the actual cuts, we simply use an additional array s[] (also of size n + 1) that stores the size of the first optimal piece for each segment size. We then proceed through the cuts by printing s[i] and continuing with i = i - s[i], starting at i = n, until i = 0 (indicating that the last piece has been taken without further cuts). A modified implementation that performs the maximization explicitly so as to fill s[] and print the final optimal cut lengths (which still has the same O(n^2) run time) is given below.
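A C sketch in the spirit of Extended-Bottom-Up-Cut-Rod and Print-Cut-Rod-Solution (the array names r and s follow the text; the prices are again the illustrative example values):

#include <stdio.h>

#define N 5

/* Fill r[] with optimal revenues and s[] with the first-piece sizes. */
void extended_bottom_up_cut_rod(const int p[], int n, int r[], int s[]) {
    r[0] = 0;
    for (int j = 1; j <= n; j++) {
        int best = -1;
        for (int i = 1; i <= j; i++) {
            if (p[i] + r[j - i] > best) {
                best = p[i] + r[j - i];
                s[j] = i;          /* remember the size of the first piece */
            }
        }
        r[j] = best;
    }
}

/* Walk the s[] array to recover the actual cut lengths. */
void print_cut_rod_solution(const int p[], int n) {
    int r[N + 1], s[N + 1];
    extended_bottom_up_cut_rod(p, n, r, s);
    printf("revenue %d, pieces:", r[n]);
    while (n > 0) {
        printf(" %d", s[n]);
        n -= s[n];                 /* continue with the remaining rod */
    }
    printf("\n");
}

int main(void) {
    int p[N + 1] = {0, 2, 5, 9, 6, 0};
    print_cut_rod_solution(p, N);  /* prints: revenue 14, pieces: 2 3 */
    return 0;
}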

Matrix Multiplication: Dynamic Programming. Recalling Matrix Multiplication
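As a reminder, a small C routine for the standard algorithm; its triple loop makes the p * q * r multiplication cost visible (a sketch; the function name and the dimensions in main are illustrative):

#include <stdio.h>

/* Multiply a (p x q) matrix A by a (q x r) matrix B into C.
   Total scalar multiplications: p * q * r, one per innermost iteration. */
void matrix_multiply(int p, int q, int r,
                     double A[p][q], double B[q][r], double C[p][r]) {
    for (int i = 0; i < p; i++)
        for (int j = 0; j < r; j++) {
            C[i][j] = 0.0;
            for (int k = 0; k < q; k++)
                C[i][j] += A[i][k] * B[k][j];
        }
}

int main(void) {
    double A[2][3] = {{1, 2, 3}, {4, 5, 6}};
    double B[3][2] = {{1, 0}, {0, 1}, {1, 1}};
    double C[2][2];
    matrix_multiply(2, 3, 2, A, B, C); /* 2 * 3 * 2 = 12 scalar multiplications */
    printf("%.0f %.0f\n%.0f %.0f\n", C[0][0], C[0][1], C[1][0], C[1][1]);
    return 0;
}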

Matrix-Chain multiplication
Cost of a matrix multiplication: multiplying a p x q matrix by a q x r matrix takes p * q * r scalar multiplications, since each of the p * r entries of the result requires q multiplications.
An example (illustrative dimensions): for A1 (10 x 100), A2 (100 x 5), and A3 (5 x 50), computing ((A1 A2) A3) costs 10*100*5 + 10*5*50 = 7500 scalar multiplications, whereas (A1 (A2 A3)) costs 100*5*50 + 10*100*50 = 75000. The parenthesization matters.


Matrix-Chain multiplication (cont.)
The problem: given a chain A_1, A_2, ..., A_n of n matrices, where matrix A_i has dimension p_{i-1} x p_i, fully parenthesize the product A_1 A_2 ... A_n in a way that minimizes the number of scalar multiplications.

Elements of dynamic programming (cont.)
Overlapping subproblems (cont.):
(Figure: the recursion tree of RECURSIVE-MATRIX-CHAIN(p, 1, 4). Each node is labeled with the subchain i..j it solves; subproblems such as 1..1, 2..2, 3..4, and 2..3 recur in several branches. The computations performed in a shaded subtree are replaced by a single table lookup in MEMOIZED-MATRIX-CHAIN(p, 1, 4).)

Matrix-Chain multiplication (Contd.)
RECURSIVE-MATRIX-CHAIN(p, i, j)
1  if i = j
2      then return 0
3  m[i, j] ← ∞
4  for k ← i to j - 1
5      do q ← RECURSIVE-MATRIX-CHAIN(p, i, k) + RECURSIVE-MATRIX-CHAIN(p, k+1, j) + p_{i-1} p_k p_j
6         if q < m[i, j]
7             then m[i, j] ← q
8  return m[i, j]
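For contrast, a memoized C sketch in the spirit of the MEMOIZED-MATRIX-CHAIN procedure mentioned above (the -1 sentinel and fixed table size are illustrative choices; the dimension array in main matches the six-matrix example used later in these slides):

#include <stdio.h>
#include <limits.h>

#define MAXN 16
static long long m[MAXN][MAXN];   /* memo; m[i][j] == -1 means "not yet computed" */

/* Memoized recursion: each subchain (i, j) is solved at most once. */
long long lookup_chain(const int p[], int i, int j) {
    if (m[i][j] >= 0)             /* table lookup replaces the whole subtree */
        return m[i][j];
    if (i == j) {
        m[i][j] = 0;
    } else {
        m[i][j] = LLONG_MAX;
        for (int k = i; k < j; k++) {
            long long q = lookup_chain(p, i, k) + lookup_chain(p, k + 1, j)
                        + (long long)p[i - 1] * p[k] * p[j];
            if (q < m[i][j])
                m[i][j] = q;
        }
    }
    return m[i][j];
}

int main(void) {
    int p[] = {30, 35, 15, 5, 10, 20, 25}; /* dimensions of A1..A6 */
    for (int i = 0; i < MAXN; i++)
        for (int j = 0; j < MAXN; j++)
            m[i][j] = -1;
    printf("minimum scalar multiplications: %lld\n", lookup_chain(p, 1, 6));
    return 0;                     /* prints 15125 for this example */
}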

Elements of dynamic programming (cont.)
Overlapping subproblems (cont.): we guess that the running time of the plain recursion is T(n) = Omega(2^n), and verify this using the substitution method with the inductive hypothesis T(n) >= 2^{n-1}.

Matrix-Chain multiplication (cont.)
Counting the number of alternative parenthesizations: let P(n) be the number of parenthesizations of a chain of n matrices. Then P(1) = 1 and, for n >= 2, P(n) = sum_{k=1}^{n-1} P(k) P(n-k). This recurrence generates the Catalan numbers, which grow as Omega(4^n / n^{3/2}), so enumerating all parenthesizations is hopeless.

Matrix-Chain multiplication (cont.)
Step 1: The structure of an optimal parenthesization
Find the optimal substructure and then use it to construct an optimal solution to the problem from optimal solutions to subproblems. Let A_{i..j}, where i <= j, denote the matrix product A_i A_{i+1} ... A_j. Any parenthesization of A_i A_{i+1} ... A_j must split the product between A_k and A_{k+1} for some i <= k < j.

Matrix-Chain multiplication (cont.)
The optimal substructure of the problem: suppose that an optimal parenthesization of A_i A_{i+1} ... A_j splits the product between A_k and A_{k+1}. Then the parenthesization of the subchain A_i A_{i+1} ... A_k within this optimal parenthesization must itself be an optimal parenthesization of A_i A_{i+1} ... A_k; otherwise, substituting a better one would improve the whole, a contradiction.

Matrix-Chain multiplication (cont.)
Step 2: A recursive solution
Let m[i, j] be the minimum number of scalar multiplications needed to compute the matrix A_{i..j}, where 1 <= i <= j <= n. Thus the cost of a cheapest way to compute A_{1..n} is m[1, n]. Assume that the optimal parenthesization splits the product A_{i..j} between A_k and A_{k+1}, where i <= k < j. Then m[i, j] = the minimum cost of computing A_{i..k} and A_{k+1..j}, plus the cost p_{i-1} p_k p_j of multiplying these two matrices together.

Matrix-Chain multiplication (cont.)
Recursive definition of the minimum cost of parenthesization:
m[i, j] = 0                                                         if i = j
m[i, j] = min over i <= k < j of { m[i, k] + m[k+1, j] + p_{i-1} p_k p_j }   if i < j

Matrix-Chain multiplication (cont.)
To help us keep track of how to construct an optimal solution, we define s[i, j] to be a value of k at which we can split the product A_{i..j} to obtain an optimal parenthesization. That is, s[i, j] equals a value k such that m[i, j] = m[i, k] + m[k+1, j] + p_{i-1} p_k p_j.

Matrix-Chain multiplication (cont.)
Step 3: Computing the optimal costs
It is easy to write a recursive algorithm based on the recurrence for computing m[i, j], but its running time is exponential!

Matrix-Chain multiplication (cont.) Step 3: Computing the optimal costs We compute the optimal cost by using a tabular, bottom-up approach.

Matrix-Chain multiplication (Contd.)
MATRIX-CHAIN-ORDER(p)
    n ← length[p] - 1
    for i ← 1 to n
        do m[i, i] ← 0
    for l ← 2 to n                      (l is the chain length)
        do for i ← 1 to n - l + 1
               do j ← i + l - 1
                  m[i, j] ← ∞
                  for k ← i to j - 1
                      do q ← m[i, k] + m[k+1, j] + p_{i-1} p_k p_j
                         if q < m[i, j]
                             then m[i, j] ← q
                                  s[i, j] ← k
    return m and s
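A runnable C sketch of this bottom-up procedure (1-based indexing into fixed-size tables is an illustrative choice):

#include <stdio.h>
#include <limits.h>

#define MAXN 16

/* Bottom-up matrix-chain order: m[i][j] = min cost of computing A_i..A_j,
   s[i][j] = the split point k achieving it. Indices run 1..n as in the text. */
void matrix_chain_order(const int p[], int n,
                        long long m[MAXN][MAXN], int s[MAXN][MAXN]) {
    for (int i = 1; i <= n; i++)
        m[i][i] = 0;                       /* a single matrix costs nothing */
    for (int l = 2; l <= n; l++) {         /* l = current chain length */
        for (int i = 1; i <= n - l + 1; i++) {
            int j = i + l - 1;
            m[i][j] = LLONG_MAX;
            for (int k = i; k < j; k++) {
                long long q = m[i][k] + m[k + 1][j]
                            + (long long)p[i - 1] * p[k] * p[j];
                if (q < m[i][j]) {
                    m[i][j] = q;
                    s[i][j] = k;           /* remember the best split */
                }
            }
        }
    }
}

int main(void) {
    int p[] = {30, 35, 15, 5, 10, 20, 25}; /* the six-matrix example below */
    long long m[MAXN][MAXN];
    int s[MAXN][MAXN];
    matrix_chain_order(p, 6, m, s);
    printf("m[1][6] = %lld\n", m[1][6]);   /* prints 15125 */
    return 0;
}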

Matrix-Chain multiplication (cont.)
An example:

Matrix | Dimension
A1     | 30 x 35
A2     | 35 x 15
A3     | 15 x 5
A4     | 5 x 10
A5     | 10 x 20
A6     | 20 x 25

Matrix-Chain multiplication (cont.)
The m table (minimum scalar multiplications for each subchain A_i..A_j):

m[i][j]   j=1    j=2     j=3    j=4    j=5     j=6
i=1         0    15750   7875   9375   11875   15125
i=2                 0    2625   4375   7125    10500
i=3                         0    750   2500    5375
i=4                                 0   1000    3500
i=5                                        0    5000
i=6                                                0

The s table (optimal split points s[i][j] = k):

s[i][j]   j=2   j=3   j=4   j=5   j=6
i=1        1     1     3     3     3
i=2              2     3     3     3
i=3                    3     3     3
i=4                          4     5
i=5                                5

The optimal cost is m[1][6] = 15125.

Matrix-Chain multiplication (cont.)
Step 4: Constructing an optimal solution
An optimal solution can be constructed from the computed information stored in the table s[1..n, 1..n]. We know that the final matrix multiplication is A_{1..s[1,n]} * A_{s[1,n]+1..n}. The earlier matrix multiplications can be computed recursively in the same way.

Matrix-Chain multiplication (Contd.)
PRINT-OPTIMAL-PARENS(s, i, j)
1  if i = j
2      then print "A_i"
3      else print "("
4           PRINT-OPTIMAL-PARENS(s, i, s[i, j])
5           PRINT-OPTIMAL-PARENS(s, s[i, j] + 1, j)
6           print ")"
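A C sketch of the printer; the hard-coded split table in main takes its values from the s table of the example above, for which it prints ((A1(A2A3))((A4A5)A6)):

#include <stdio.h>

#define MAXN 16

/* Recursively print an optimal parenthesization from the split table s. */
void print_optimal_parens(int s[MAXN][MAXN], int i, int j) {
    if (i == j) {
        printf("A%d", i);
    } else {
        printf("(");
        print_optimal_parens(s, i, s[i][j]);
        print_optimal_parens(s, s[i][j] + 1, j);
        printf(")");
    }
}

int main(void) {
    static int s[MAXN][MAXN];      /* split table for the six-matrix example */
    s[1][2] = 1; s[1][3] = 1; s[1][4] = 3; s[1][5] = 3; s[1][6] = 3;
    s[2][3] = 2; s[2][4] = 3; s[2][5] = 3; s[2][6] = 3;
    s[3][4] = 3; s[3][5] = 3; s[3][6] = 3;
    s[4][5] = 4; s[4][6] = 5;
    s[5][6] = 5;
    print_optimal_parens(s, 1, 6); /* ((A1(A2A3))((A4A5)A6)) */
    printf("\n");
    return 0;
}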

Matrix-Chain multiplication (Contd.)
Running time: the plain recursive solution takes exponential time, whereas MATRIX-CHAIN-ORDER yields a running time of O(n^3).