Unit-5 Dynamic Programming


1 Unit-5 Dynamic Programming
Analysis and Design of Algorithms

2 Outline
Overview
Applications:
- Fibonacci series
- Shortest path in a graph
- Matrix chain multiplication
- Travelling salesman problem

3 Overview
Origin: Richard Bellman, 1957
"Programming" referred to a series of choices; "dynamic" means the choices are made on the fly, not all at the beginning.

4 Dynamic Programming
Dynamic programming is a technique for solving problems recursively; it applies when the computations of the subproblems overlap. It can solve many problems in O(n^2) or O(n^3) time for which a naive approach would take exponential time.

5 Principle of Optimality (Optimal Substructure Property)
A problem is said to have optimal substructure if an optimal solution can be constructed efficiently from optimal solutions of its subproblems. This property is used to determine whether dynamic programming or a greedy algorithm is useful for a problem.
Suppose that in solving a problem we must make a sequence of decisions D1, D2, …, Dn. If this sequence is optimal, then the last k decisions, 1 ≤ k ≤ n, must also be optimal.
Example: the shortest path problem. If i, i1, i2, …, j is a shortest path from i to j, then i1, i2, …, j must be a shortest path from i1 to j.

6 General Idea
The general idea is: if you have solved a problem for a given input, save the result for future reference, so as to avoid solving the same problem again. If the given problem can be broken into smaller subproblems, and these in turn into still smaller ones, and in this process you observe some overlapping subproblems, then dynamic programming can be applied. Also, the optimal solutions to the subproblems contribute to the optimal solution of the given problem, which is referred to as the Optimal Substructure Property.

7 Approach
Dynamic programming is typically implemented using tabulation (bottom-up), but can also be implemented using memoization (top-down). There are two ways of doing this:
1.) Top-Down: Start solving the given problem by breaking it down. If you see that a subproblem has already been solved, just return the saved answer; if it has not been solved, solve it and save the answer. This is usually easy to think of and very intuitive, and is referred to as Memoization. With memoization you maintain a map of already-solved subproblems, and you work "top down" in the sense that you solve the "top" problem first (which typically recurses down to solve the subproblems).

8 2.) Bottom-Up: Analyze the problem, determine the order in which the subproblems should be solved, and start solving from the trivial subproblem up towards the given problem. This guarantees that each subproblem is solved before any problem that depends on it. This is referred to as Tabulation. When you solve a dynamic programming problem using tabulation you work "bottom up", i.e., you solve all related subproblems first, typically by filling up an n-dimensional table. Based on the results in the table, the solution to the "top" / original problem is then computed.

9 Difference between Divide-and-Conquer and DP
In DP, the same subproblem can be used in the solutions of two different larger subproblems. In contrast, the divide-and-conquer approach creates subproblems that are completely separate and can be solved independently. The primary difference is that the subproblems of divide and conquer are independent, while in dynamic programming they interact (overlap). Dynamic programming typically solves problems in a "bottom-up" manner, as opposed to divide and conquer's "top-down" approach.

10 Application
The following computer problems can be solved using the dynamic programming approach:
- Fibonacci number series
- Tower of Hanoi
- All-pairs shortest paths (Floyd-Warshall)
- Project scheduling

11 Application-1 Fibonacci Series
Consider the Fibonacci series: 0, 1, 1, 2, 3, 5, 8, 13, 21, ...

12 Using Recursive Function
int fib(int n)
{
    if (n == 0) return 0;
    if (n == 1) return 1;
    return fib(n-1) + fib(n-2);
}
Time Complexity: T(n) = T(n-1) + T(n-2) + O(1), which is exponential.
Recursion tree for fib(5):
fib(5)
├── fib(4)
│   ├── fib(3)
│   │   ├── fib(2)
│   │   │   ├── fib(1)
│   │   │   └── fib(0)
│   │   └── fib(1)
│   └── fib(2)
│       ├── fib(1)
│       └── fib(0)
└── fib(3)
    ├── fib(2)
    │   ├── fib(1)
    │   └── fib(0)
    └── fib(1)
Note the overlapping subproblems: fib(3) is computed twice and fib(2) three times.

13 Using Dynamic Programming
int fib(int n)
{
    /* Declare an array to store Fibonacci numbers. */
    int f[n+1];
    int i;
    /* 0th and 1st numbers of the series are 0 and 1 */
    f[0] = 0;
    f[1] = 1;
    for (i = 2; i <= n; i++)
    {
        /* Add the previous 2 numbers in the series and store the sum */
        f[i] = f[i-1] + f[i-2];
    }
    return f[n];
}
Time Complexity: O(n)
Extra Space: O(n)

14 Application-2 Travelling Salesman Problem (TSP)
Given a set of cities and the distance between every pair of cities, the problem is to find the shortest possible route that visits every city exactly once and returns to the starting point. The TSP can be described as: find a tour of N cities in a country (assuming all cities to be visited are reachable); the tour should (a) visit every city just once, (b) return to the starting point, and (c) be of minimum distance.

15 Algorithm
Number the cities 1, 2, …, N, assume we start at city 1, and let d(i,j) be the distance between city i and city j. Consider subsets S ⊆ {2, …, N} of cities and, for c ∈ S, let D(S, c) be the minimum distance, starting at city 1, visiting all cities in S and finishing at city c.
First phase: if S = {c}, then D(S, c) = d(1,c). Otherwise:
D(S, c) = min over x ∈ S − {c} of [ D(S − {c}, x) + d(x,c) ]
Second phase: the minimum distance for a complete tour of all cities is
M = min over c ∈ {2, …, N} of [ D({2, …, N}, c) + d(c,1) ]
A tour n1, …, nN is of minimum distance exactly when it satisfies M = D({2, …, N}, nN) + d(nN,1).

16 Pseudocode
function TSP(G, n)
    for k := 2 to n do
        C({1, k}, k) := d(1,k)
    end for
    for s := 3 to n do
        for all S ⊆ {1, 2, …, n} with 1 ∈ S and |S| = s do
            for all k ∈ S, k ≠ 1 do
                C(S, k) := min over m ∈ S, m ≠ 1, m ≠ k of [ C(S − {k}, m) + d(m,k) ]
            end for
        end for
    end for
    opt := min over k ≠ 1 of [ C({1, 2, 3, …, n}, k) + d(k,1) ]
    return opt
end function

17 Example
Distance matrix (entry in row x, column y is cxy):
      y=1  y=2  y=3  y=4
x=1 [   0    2    9   10 ]
x=2 [   1    0    6    4 ]
x=3 [  15    7    0    8 ]
x=4 [   6    3   12    0 ]
Functions description:
g(x, S): minimum cost of a path that starts at vertex 1, passes through every vertex in set S exactly once, and ends at vertex x
cxy: cost of the edge that ends at x coming from y
p(x, S): the second-to-last vertex on the optimal path to x through set S; used for reconstructing the TSP tour at the end

18 Solution
k = 0, null set:
Set ∅:
g(2, ∅) = c21 = 1
g(3, ∅) = c31 = 15
g(4, ∅) = c41 = 6
k = 1, consider sets of 1 element:
Set {2}:
g(3,{2}) = c32 + g(2, ∅) = c32 + c21 = 7 + 1 = 8,   p(3,{2}) = 2
g(4,{2}) = c42 + g(2, ∅) = c42 + c21 = 3 + 1 = 4,   p(4,{2}) = 2
Set {3}:
g(2,{3}) = c23 + g(3, ∅) = c23 + c31 = 6 + 15 = 21,   p(2,{3}) = 3
g(4,{3}) = c43 + g(3, ∅) = c43 + c31 = 12 + 15 = 27,   p(4,{3}) = 3
Set {4}:
g(2,{4}) = c24 + g(4, ∅) = c24 + c41 = 4 + 6 = 10,   p(2,{4}) = 4
g(3,{4}) = c34 + g(4, ∅) = c34 + c41 = 8 + 6 = 14,   p(3,{4}) = 4

19 k = 2, consider sets of 2 elements:
Set {2,3}:
g(4,{2,3}) = min {c42 + g(2,{3}), c43 + g(3,{2})} = min {3 + 21, 12 + 8} = min {24, 20} = 20,   p(4,{2,3}) = 3
Set {2,4}:
g(3,{2,4}) = min {c32 + g(2,{4}), c34 + g(4,{2})} = min {7 + 10, 8 + 4} = min {17, 12} = 12,   p(3,{2,4}) = 4
Set {3,4}:
g(2,{3,4}) = min {c23 + g(3,{4}), c24 + g(4,{3})} = min {6 + 14, 4 + 27} = min {20, 31} = 20,   p(2,{3,4}) = 3

20 Length of an optimal tour:
f = g(1,{2,3,4}) = min { c12 + g(2,{3,4}), c13 + g(3,{2,4}), c14 + g(4,{2,3}) }
  = min {2 + 20, 9 + 12, 10 + 20} = min {22, 21, 30} = 21
Predecessor of node 1: p(1,{2,3,4}) = 3
Predecessor of node 3: p(3,{2,4}) = 4
Predecessor of node 4: p(4,{2}) = 2
Reading the predecessors backwards, the optimal TSP tour is: 1 → 2 → 4 → 3 → 1
The worst-case time complexity of this algorithm is O(2^n · n^2) and the space complexity is O(2^n · n).

21 Application-3 Matrix Chain Multiplication
Matrix multiplication is associative: no matter how we parenthesize the product, the result is the same. For example, with four matrices A, B, C and D:
(ABC)D = (AB)(CD) = A(BCD) = ...
However, the order in which we parenthesize the product affects the number of simple arithmetic operations needed to compute it, i.e., the efficiency. For example, suppose A is a 10 × 30 matrix, B is a 30 × 5 matrix, and C is a 5 × 60 matrix. Then:
(AB)C requires (10×30×5) + (10×5×60) = 1500 + 3000 = 4500 operations
A(BC) requires (30×5×60) + (10×30×60) = 9000 + 18000 = 27000 operations
Clearly the first parenthesization requires far fewer operations.

22 Problem: Given an array p[] which represents the chain of matrices such that the ith matrix Ai is of dimension p[i-1] x p[i]. We need to write a function MatrixChainOrder() that should return the minimum number of multiplications needed to multiply the chain.

23 EG
Input: p[] = {40, 20, 30, 10, 30}
Output: 26000
There are 4 matrices of dimensions 40×20, 20×30, 30×10 and 10×30. Let the four input matrices be A, B, C and D. The minimum number of multiplications is obtained by parenthesizing as (A(BC))D:
20*30*10 + 40*20*10 + 40*10*30 = 6000 + 8000 + 12000 = 26000

Input: p[] = {10, 20, 30, 40, 30}
Output: 30000
There are 4 matrices of dimensions 10×20, 20×30, 30×40 and 40×30. Let the four input matrices be A, B, C and D. The minimum number of multiplications is obtained by parenthesizing as ((AB)C)D:
10*20*30 + 10*30*40 + 10*40*30 = 6000 + 12000 + 12000 = 30000

Input: p[] = {10, 20, 30}
Output: 6000
There are only two matrices, of dimensions 10×20 and 20×30, so there is only one way to multiply them, with cost 10*20*30 = 6000.

24 Algorithm
Matrix-Chain(array p[1..n], int n) {
    array s[1..n−1, 2..n];
    array m[1..n, 1..n];
    for i = 1 to n do m[i, i] = 0;            // initialize
    for L = 2 to n do {                       // L = length of sub-chain
        for i = 1 to n − L + 1 do {
            j = i + L − 1;
            m[i, j] = infinity;
            for k = i to j − 1 do {           // check all splits
                q = m[i, k] + m[k+1, j] + p[i−1] · p[k] · p[j];
                if (q < m[i, j]) {
                    m[i, j] = q;
                    s[i, j] = k;
                }
            }
        }
    }
    return m[1, n] (final cost) and s (splitting markers);
}

25 EG
The m-table computed by the Matrix-Chain procedure for n = 6 matrices A1, A2, A3, A4, A5, A6 with dimension sequence p = (30, 35, 15, 5, 10, 20, 25).

26 Complexity
Space: the tables m and s each require O(n^2) space, so the space complexity of the procedure is O(n^2).
Time: a simple inspection of the for-loop structure gives the running time. The three for-loops are nested three deep, and each iterates at most n times (the indices L, i, and k each take on at most n − 1 values). Therefore the running time of the procedure is O(n^3).

27 Shortest Path
Given a graph and a source vertex src, find the shortest paths from src to all vertices in the graph. The graph may contain negative-weight edges, which Dijkstra's algorithm cannot handle but Bellman-Ford can. Bellman-Ford is also simpler than Dijkstra and suits distributed systems well, but its time complexity, O(VE), is higher than Dijkstra's.

28 Bellman–Ford Algorithm
Following are the detailed steps.
Input: graph and a source vertex src
Output: shortest distance to all vertices from src. If there is a negative-weight cycle, the shortest distances are not calculated and the negative-weight cycle is reported.
1) Initialize distances from the source to all vertices as infinite and the distance to the source itself as 0: create an array dist[] of size |V| with all values infinite except dist[src], where src is the source vertex.
2) Calculate the shortest distances. Do the following |V|−1 times, where |V| is the number of vertices in the graph:
   a) For each edge u-v:
      if dist[v] > dist[u] + weight(u,v), then update dist[v] = dist[u] + weight(u,v)
3) Report whether there is a negative-weight cycle. For each edge u-v:
   if dist[v] > dist[u] + weight(u,v), then report "Graph contains negative weight cycle"
The idea of step 3: step 2 guarantees shortest distances if the graph doesn't contain a negative-weight cycle. If iterating through all edges one more time yields a shorter path for any vertex, then there is a negative-weight cycle.

29 How it works?
Like other dynamic programming problems, the algorithm calculates shortest paths in a bottom-up manner. It first calculates the shortest distances for paths with at most one edge. Then it calculates shortest paths with at most 2 edges, and so on. After the i-th iteration of the outer loop, the shortest paths with at most i edges are calculated. Since any simple path has at most |V| − 1 edges, the outer loop runs |V| − 1 times.
