Dynamic Programming.


Dynamic Programming

Expected Outcomes
Students should be able to:
- Solve the all-pairs shortest paths problem by dynamic programming (Floyd's algorithm)
- Solve the transitive closure problem by dynamic programming (Warshall's algorithm)
- Solve the knapsack problem by dynamic programming
- Solve the matrix chain multiplication problem by dynamic programming

Transitive Closure
The transitive closure of a directed graph with n vertices is the n×n matrix T = {tij} in which the element in the ith row (1 ≤ i ≤ n) and the jth column (1 ≤ j ≤ n) is 1 if there exists a nontrivial directed path (i.e., a directed path of positive length) from the ith vertex to the jth vertex; otherwise, tij is 0.
Two approaches: a graph-traversal-based algorithm and Warshall's algorithm. The traversal-based approach traverses the same digraph n times (once from each starting vertex), so one should hope that a better algorithm can be found.
Example (digraph with edges 1→3, 2→1, 2→4, 4→2):

Adjacency matrix        Transitive closure
0 0 1 0                 0 0 1 0
1 0 0 1                 1 1 1 1
0 0 0 0                 0 0 0 0
0 1 0 0                 1 1 1 1

Warshall's Algorithm
Main idea: use a bottom-up method to construct the transitive closure of a given digraph with n vertices through a series of n×n boolean matrices T(0), …, T(k-1), T(k), …, T(n).
A path exists between two vertices i and j iff:
- there is an edge from i to j; or
- there is a path from i to j going through vertex 1; or
- there is a path from i to j going through vertex 1 and/or 2; or
- …
- there is a path from i to j going through any of the other vertices.
Accordingly, tij(k) = 1 in T(k) iff there is a directed path from i to j whose intermediate vertices (if any) are all among 1, 2, …, k. The question is: how to obtain T(k) from T(k-1)?
For the example digraph:

T(0) (no intermediate vertices allowed):
0 0 1 0
1 0 0 1
0 0 0 0
0 1 0 0

T(1) (vertex 1 allowed as an intermediate):
0 0 1 0
1 0 1 1
0 0 0 0
0 1 0 0

T(2) (vertices 1, 2 allowed):
0 0 1 0
1 0 1 1
0 0 0 0
1 1 1 1

T(3) (vertices 1, 2, 3 allowed; no change, since vertex 3 has no outgoing edges):
0 0 1 0
1 0 1 1
0 0 0 0
1 1 1 1

T(4) (all vertices allowed):
0 0 1 0
1 1 1 1
0 0 0 0
1 1 1 1

Warshall's Algorithm
In the kth stage, determining T(k) means determining, for each pair of vertices i and j, whether a path exists from i to j using only intermediate vertices among 1, …, k:

tij(k) = 1 iff   tij(k-1) = 1   (a path using only 1, …, k-1)
           or    tik(k-1) = 1 and tkj(k-1) = 1   (a path from i to k and a path from k to j, each using only 1, …, k-1)

Rule to determine whether tij(k) should be 1 in T(k):
- If tij is 1 in T(k-1), it remains 1 in T(k).
- If tij is 0 in T(k-1), it is changed to 1 in T(k) iff the element in row i, column k and the element in row k, column j are both 1 in T(k-1).

Compute Transitive Closure
Time complexity: Θ(n³) (three nested loops over n vertices). Space complexity: Θ(n²) if only the current matrix is kept. Less space? Yes: T(k) can overwrite T(k-1) in place, because row k and column k do not change during stage k.
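The slide's code is not reproduced in the transcript; a minimal Python sketch of Warshall's algorithm (function name `warshall` is my own), run on the four-vertex digraph from the earlier slides:

```python
def warshall(adj):
    """Transitive closure by Warshall's algorithm.

    adj: n x n 0/1 adjacency matrix. Computes T(0), T(1), ..., T(n)
    in a single matrix updated in place (row k and column k are
    unchanged during stage k, so overwriting is safe).
    """
    n = len(adj)
    t = [row[:] for row in adj]          # T(0) is the adjacency matrix
    for k in range(n):                   # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                # tij stays 1, or becomes 1 via a path i -> k -> j
                t[i][j] = t[i][j] or (t[i][k] and t[k][j])
    return t

# Digraph from the slides: edges 1->3, 2->1, 2->4, 4->2 (0-indexed here)
A = [[0, 0, 1, 0],
     [1, 0, 0, 1],
     [0, 0, 0, 0],
     [0, 1, 0, 0]]
T = warshall(A)
```

Note the three nested loops giving the Θ(n³) running time, with k (the highest intermediate vertex allowed) as the outermost loop.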

Floyd's Algorithm: All-Pairs Shortest Paths
All-pairs shortest paths problem: in a weighted graph, find the shortest paths between every pair of vertices.
Applicable to undirected and directed weighted graphs with no cycle of negative length.
Same idea as Warshall's algorithm: construct the solution through a series of matrices D(0), D(1), …, D(n), where dij(k) = the length of the shortest path from i to j with each intermediate vertex numbered no higher than k.
Missing edges are given weight ∞ so that nonexistent direct connections never win the minimization (e.g., for the pair (2, 3), the real path 2→1→3 of length 1 + 4 beats any route using a nonexistent edge of weight ∞).
Example (digraph with edges 1→3 of weight 4, 2→1 of weight 1, 2→3 of weight 6, 2→4 of weight 3, 4→2 of weight 5, 4→3 of weight 1):

Weight matrix           Distance matrix
0 ∞ 4 ∞                 0 ∞ 4 ∞
1 0 6 3                 1 0 4 3
∞ ∞ 0 ∞                 ∞ ∞ 0 ∞
∞ 5 1 0                 6 5 1 0

Floyd's Algorithm
D(k): allow 1, 2, …, k to be intermediate vertices. In the kth stage, determine whether introducing k as a newly eligible intermediate vertex brings about a shorter path from i to j:

dij(k) = min{dij(k-1), dik(k-1) + dkj(k-1)}  for k ≥ 1;   dij(0) = wij

Deriving D(k) from D(k-1) is thus a matter of checking, for each pair (i, j), whether going through vertex k improves the best path found so far.

Floyd's Algorithm
Time complexity: O(n³). Space?

Floyd's Algorithm (less space)
A single n×n matrix can be updated in place: at stage k, row k and column k do not change (dik(k) = dik(k-1) and dkj(k) = dkj(k-1)), so D(k) may safely overwrite D(k-1). This reduces the space to Θ(n²).
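The slide's code is an image not reproduced in the transcript; a minimal Python sketch of the in-place version (function name `floyd` is my own), run on the four-vertex weighted digraph reconstructed from the earlier slide:

```python
INF = float("inf")

def floyd(w):
    """All-pairs shortest paths by Floyd's algorithm: O(n^3) time,
    Theta(n^2) space (a single distance matrix updated in place)."""
    n = len(w)
    d = [row[:] for row in w]            # D(0) is the weight matrix
    for k in range(n):                   # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                # does going through k shorten the path i -> j?
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

# Weighted digraph from the slides (vertices 1..4, stored 0-indexed):
# edges 1->3(4), 2->1(1), 2->3(6), 2->4(3), 4->2(5), 4->3(1)
W = [[0,   INF, 4,   INF],
     [1,   0,   6,   3],
     [INF, INF, 0,   INF],
     [INF, 5,   1,   0]]
D = floyd(W)
```

The result matches the distance matrix on the earlier slide: for example, the shortest path 2→3 has length 4 (via vertex 4), not the direct edge of weight 6.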

Constructing a shortest path
Maintain a predecessor matrix Π, where πij is the predecessor of vertex j on a shortest path from i (the formulas below follow the CLRS construction, as the slide's equations are not reproduced in the transcript).
For k = 0:
  πij(0) = NIL if i = j or wij = ∞;   πij(0) = i if i ≠ j and wij < ∞
For k ≥ 1:
  πij(k) = πij(k-1) if dij(k-1) ≤ dik(k-1) + dkj(k-1);   otherwise πij(k) = πkj(k-1)

Print all-pairs shortest paths

Example: a five-vertex weighted digraph (vertices 1–5) with edge weights including 7, -4, 8, -5, and 6 [figure not reproduced in the transcript].

The matrices D(0), Π(0), D(1), Π(1), …, D(5), Π(5) for this example were shown as images and are not reproduced in the transcript. Assuming this is the standard CLRS Section 25.2 example (consistent with the edge weights shown), the first and last distance matrices are:

D(0) =  0   3   8   ∞  -4        D(5) =  0   1  -3   2  -4
        ∞   0   ∞   1   7                3   0  -4   1  -1
        ∞   4   0   ∞   ∞                7   4   0   5   3
        2   ∞  -5   0   ∞                2  -1  -5   0  -2
        ∞   ∞   ∞   ∞   0                8   5   1   6   4

Shortest path from 1 to 2 in the example: 1 → 5 → 4 → 3 → 2, of length -4 + 6 + (-5) + 4 = 1 (shorter than the direct edge of weight 3).

The Knapsack Problem
The problem: find the most valuable subset of the given n items that fits into a knapsack of capacity W.
Consider the following subproblem P(i, j): find the most valuable subset of the first i items that fits into a knapsack of capacity j, where 1 ≤ i ≤ n and 1 ≤ j ≤ W.
Let V[i, j] be the value of an optimal solution to subproblem P(i, j). Goal: V[n, W].
The question: what recurrence relation expresses a solution to this instance in terms of solutions to smaller subinstances?

The Knapsack Problem: The Recurrence
There are two possibilities for the most valuable subset for subproblem P(i, j):
- It does not include the ith item: V[i, j] = V[i-1, j]
- It includes the ith item: V[i, j] = vi + V[i-1, j - wi]

Hence:
V[i, j] = max{V[i-1, j], vi + V[i-1, j - wi]}   if j - wi ≥ 0
V[i, j] = V[i-1, j]                             if j - wi < 0
with initial conditions V[0, j] = 0 for j ≥ 0 and V[i, 0] = 0 for i ≥ 0.
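The recurrence above translates directly into a bottom-up table computation; a minimal Python sketch (the function name and the 4-item instance are my own, not from the slides):

```python
def knapsack(values, weights, W):
    """0/1 knapsack via the recurrence
    V[i][j] = max(V[i-1][j], v_i + V[i-1][j - w_i]) when w_i <= j,
    with V[0][j] = V[i][0] = 0. Returns the optimal value V[n][W]."""
    n = len(values)
    V = [[0] * (W + 1) for _ in range(n + 1)]   # initial conditions
    for i in range(1, n + 1):
        vi, wi = values[i - 1], weights[i - 1]  # ith item (1-indexed)
        for j in range(1, W + 1):
            if wi <= j:
                # either skip item i, or take it and fill capacity j - wi
                V[i][j] = max(V[i - 1][j], vi + V[i - 1][j - wi])
            else:
                V[i][j] = V[i - 1][j]           # item i does not fit
    return V[n][W]

# Hypothetical instance: 4 items, capacity 10; optimum takes items 3 and 4
best = knapsack(values=[42, 12, 40, 25], weights=[7, 3, 4, 5], W=10)
```

The table has (n+1)(W+1) cells, each filled in constant time, giving Θ(nW) time and space.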

Dynamic Matrix Multiplication
If we have a series of matrices of different sizes that we need to multiply, the order in which we do it can have a big impact on the number of operations. For example, if we need to multiply four matrices M1, M2, M3, and M4 of sizes 20×5, 5×35, 35×4, and 4×25, then depending on the order this can take between 3,100 and 24,500 multiplications.

Dynamic Matrix Multiplication Problem
Input: a series of matrices M1, M2, …, MN with sizes s1×s2, s2×s3, …, sN×sN+1.
Output: an order of multiplication of these matrices such that the total number of scalar multiplications is minimized.
Can you give a solution to this problem? One could try all possible orders, but this is too time-consuming: the number of possible orders is the Catalan number C(N-1), where C(n) = (1/(n+1))·(2n choose n), which grows exponentially in N.

Dynamic Matrix Multiplication: Idea
Look at all ways to pair up adjacent matrices, then combine these into groups of three, then four, and so on, keeping track of the cost of the best way to compute each partial product.

Characterize the optimal solution
Denote by M[i:j] (i < j) the product MiMi+1…Mj; our problem is to find the optimal multiplication order for M[1:N].
Consider M[i:j] (i < j) and i ≤ k < j: the optimal cost for M[i:j] is the minimum, over i ≤ k < j, of (the optimal cost for M[i:k]) + (the optimal cost for M[k+1:j]) + si·sk+1·sj+1.
A key property (optimal substructure): if M[i:j] is divided into M[i:k] and M[k+1:j], then the optimal order for M[i:j] contains the optimal orders for M[i:k] and M[k+1:j].

Construct the recurrence relation for the value of the optimal solution
Let cost[i:j] be the number of scalar multiplications in the optimal solution for M[i:j]:
cost[i:j] = min over i ≤ k < j of {cost[i:k] + cost[k+1:j] + si·sk+1·sj+1}
cost[i:i] = 0
cost[1:N] is the minimum number of multiplications for the whole problem.

Algorithm: bottom-up computation of cost[1:N]

for i = 1 to N do
    cost[i, i] = 0
end for
for i = 1 to N-1 do
    for j = 1 to N-i do
        loc = i + j
        tempCost = ∞
        for k = j to loc-1 do
            if tempCost > cost[j, k] + cost[k+1, loc] + s[j] * s[k+1] * s[loc+1] then
                tempCost = cost[j, k] + cost[k+1, loc] + s[j] * s[k+1] * s[loc+1]
                tempTrace = k
            end if
        end for
        cost[j, loc] = tempCost
        trace[j, loc] = tempTrace
    end for
end for

Algorithm: Construct an Optimal Solution

GetOrder(first, last, order)
    if first < last then
        middle = trace[first, last]
        GetOrder(first, middle, order)
        GetOrder(middle+1, last, order)
        order[position] = middle
        position = position + 1
    end if
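The two pseudocode fragments above can be combined into one runnable Python sketch (names `matrix_chain` and `order` are my own; here matrices are 0-indexed and the parenthesization is returned as a string rather than via the `order` array):

```python
import math

def matrix_chain(s):
    """Bottom-up computation of cost[i][j] and trace[i][j] for matrices
    M1..MN with sizes s[0] x s[1], s[1] x s[2], ..., s[N-1] x s[N];
    cost[i][j] covers the product Mi..Mj (0-indexed)."""
    n = len(s) - 1                       # number of matrices
    cost = [[0] * n for _ in range(n)]   # cost[i][i] = 0
    trace = [[0] * n for _ in range(n)]
    for length in range(1, n):           # chains of 2, 3, ..., n matrices
        for i in range(n - length):
            j = i + length
            cost[i][j] = math.inf
            for k in range(i, j):        # split as (Mi..Mk)(Mk+1..Mj)
                c = cost[i][k] + cost[k + 1][j] + s[i] * s[k + 1] * s[j + 1]
                if c < cost[i][j]:
                    cost[i][j] = c
                    trace[i][j] = k      # remember the best split point
            # cost[i][j] now holds the optimal cost for Mi..Mj
    return cost, trace

def order(trace, i, j):
    """Parenthesization recovered from trace (cf. GetOrder above)."""
    if i == j:
        return f"M{i + 1}"
    k = trace[i][j]
    return f"({order(trace, i, k)} {order(trace, k + 1, j)})"

# Example from the earlier slide: sizes 20x5, 5x35, 35x4, 4x25
cost, trace = matrix_chain([20, 5, 35, 4, 25])
best = cost[0][3]
```

On this instance the optimal cost is 3,100 multiplications, achieved by the order ((M1 (M2 M3)) M4), matching the range quoted earlier.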

Time complexity: O(N³) (three nested loops over the N matrices); space: O(N²) for the cost and trace tables.