Dynamic Programming 2 Mani Chandy

The Pattern
Given a problem P, obtain a sequence of problems Q0, Q1, ..., Qm, where:
– You have a solution to Q0.
– The solution to P can be obtained from the solution to Qm.
– The solution to a problem Qj, for j > 0, can be obtained from the solutions to problems Qk, k < j, that appear earlier in the sequence.

Dynamic Programming Pattern
Given problem P, propose a partial ordering of problems Q0, ..., Qm. You know how to compute the solution to Q0, and you can compute the solution to P from the solution to Qm.

Creative Step
The creative step: finding the problems Qi from the given problem P.
The more mechanical step: determining the function that computes the solution Sk for problem Qk from the solutions Sj of the problems Qj, j < k.

Example: Matrix Multiplication
We want to compute the product of a chain of matrices with sizes 1 X N, N X N, N X N, and N X 1. What is the cost of multiplying matrices of these sizes?

Cost of Multiplying Two Matrices
Multiplying a p x q matrix (p rows, q columns) by a q x r matrix costs 2pqr operations: the resulting p x r matrix has pr elements, and computing each element takes 2q operations (q multiplications and q additions).
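
To make the 2pqr count concrete, here is a minimal sketch (an illustration added here, not from the slides) of the straightforward triple loop with an operation counter:

def multiply_and_count(A, B):
    """Multiply a p x q matrix A by a q x r matrix B, counting operations."""
    p, q, r = len(A), len(A[0]), len(B[0])
    C = [[0] * r for _ in range(p)]
    ops = 0
    for i in range(p):              # the result has p*r elements...
        for k in range(r):
            total = 0
            for j in range(q):      # ...each needing q multiplies and q adds
                total += A[i][j] * B[j][k]
                ops += 2
            C[i][k] = total
    return C, ops                   # ops == 2*p*q*r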

Parenthesization
If we multiply the two N X N matrices in the middle first, the cost is 2N^3, and the resulting matrix is N X N.

Parenthesization
The remaining chain is (1 X N)(N X N)(N X 1). Multiplying the 1 X N matrix by the N X N matrix costs on the order of N^2, and multiplying the resulting 1 X N matrix by the N X 1 matrix costs on the order of N. Thus, the total cost is proportional to N^3 + N^2 + N if we parenthesize the expression in this way.

Different Ordering
If instead we multiply from the left, ((((1 X N)(N X N))(N X N))(N X 1)), every intermediate product is a 1 X N row vector, and the total cost is proportional to N^2.

The Ordering Matters!
One ordering costs O(N^3); the other ordering costs O(N^2).

Generalization: Parenthesization
A1 op A2 op ... op An, where op is an associative operation whose cost depends on the parameters of its operands. Parenthesize the expression to minimize the total cost.

Creative Step
Come up with a partial ordering of problems Qi, given problem P.

Creative Step: Solution
Qi,j is: optimally parenthesize the subexpression Ai op ... op Aj.
The relatively "mechanical" steps are then:
1. Find the partial ordering of the problems Qi,j.
2. Find the function f that computes the solution Si,j from the solutions of problems earlier in the ordering.

Partial Ordering Structure
Q1,1  Q2,2  Q3,3  Q4,4   (solutions known)
Q1,2  Q2,3  Q3,4
Q1,3  Q2,4
Q1,4   (the solution to the given problem is obtained from the solution to this problem, Q1,n)
Each problem depends on problems in the rows above it, which span shorter subexpressions.

The Recurrence Relation
Let C[j,k] be the minimum cost of executing Aj op ... op Ak.
Base case? C[j,j] = 0.
Induction step? For k > j:
C[j,k] = min over all v, j <= v < k, of (C[j,v] + C[v+1,k] + the cost of the operation combining [j...v] and [v+1...k]).
Proof: ???

For Matrix Multiplication
Let the j-th matrix have size p[j-1] X p[j].
Then the size of the matrix obtained by combining [j...v] is? p[j-1] X p[v].
The size of the matrix obtained by combining [v+1...k] is? p[v] X p[k].
The cost of multiplying [j...v] and [v+1...k] is proportional to p[j-1] p[v] p[k].
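
A minimal sketch of this recurrence in code (the names are hypothetical; p is the list of dimensions, so matrix Aj has size p[j-1] x p[j], and the constant factor 2 is dropped as on this slide):

from functools import lru_cache

def min_multiplication_cost(p):
    """Minimum cost of multiplying A1 ... An, where Aj has size p[j-1] x p[j]."""
    n = len(p) - 1  # number of matrices in the chain

    @lru_cache(maxsize=None)
    def C(j, k):
        # C[j,j] = 0; otherwise try every split point v and take the minimum.
        if j == k:
            return 0
        return min(C(j, v) + C(v + 1, k) + p[j - 1] * p[v] * p[k]
                   for v in range(j, k))

    return C(1, n)

# The chain (1 x N)(N x N)(N x N)(N x 1) from the earlier slides, with N = 10:
print(min_multiplication_cost([1, 10, 10, 10, 1]))  # 210 = N^2 + N^2 + N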

Computational Complexity
The recurrence relation for matrix multiplication: for k > j,
C[j,k] = min over all v of (C[j,v] + C[v+1,k] + p[j-1] p[v] p[k]).
The time to compute one entry C[j,k] is proportional to k - j. The total time to compute C, for all j, k, is:
0 x N {for the bottom level, i.e., for C[j,j]}
+ 1 x (N-1) {for the first level, i.e., for C[j,j+1]}
+ ...
+ k x (N-k) {for the k-th level, i.e., for C[j,j+k]}
+ ...
Show that the total time is proportional to O(N^3).
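
As a worked check of that claim (a derivation added here, not on the slide), in LaTeX notation:

\sum_{k=0}^{N-1} k (N-k)
  = N \sum_{k=0}^{N-1} k - \sum_{k=0}^{N-1} k^2
  = N \cdot \frac{N(N-1)}{2} - \frac{(N-1)N(2N-1)}{6}
  = \frac{N^3 - N}{6} = O(N^3)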

Proof Structure
What is the theorem that we are proving? We make an assertion about the meaning of a term, for instance: "C[j,k] is the minimum cost of executing Aj op ... op Ak." We are proving that this assertion is correct.

Proof Structure
Almost always, we use induction.
Base case: establish that the value of C[j,j] is correct.
Induction step: assume that the value of C[j,j+u] is correct for all u less than V, and prove that the value of C[j,j+V] is correct.
Remember what we are proving: C[j,k] is the minimum cost of executing Aj op ... op Ak.

The Central Idea
Bellman's optimality principle: when solving a smaller problem Qu,v, pick an optimal solution and discard the others. The discarded solutions for the smaller problem remain discarded when solving a larger problem Qa,z, because the optimal solution dominates them.

All-Pairs Shortest Paths
Given a weighted directed graph, where the edge weight W[j,k] represents the distance from vertex j to vertex k, and there are no cycles of negative weight. For all j, k, compute D[j,k], the length of the shortest path from vertex j to vertex k.

The Creative Step
Come up with a partial ordering of problems Qi, given problem P. There are different possible problem sets Qi, some better than others.

Creative Step
Let F[j,k,m] be the length of the shortest path from vertex j to vertex k that has at most m hops. What is the partial ordering of the problems Q[j,k,m]?

A Recurrence Relation
F[j,k,m] = min over all r of (F[j,r,m-1] + W[r,k]).
Base case? F[j,k,1] = W[j,k] (assume W[j,j] = 0 for all j).
Obtaining the solution to the given problem P: D[j,k] = F[j,k,n-1].
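
A minimal sketch of this recurrence in code (an illustration under stated assumptions: W is an n x n list of lists with W[j][j] == 0 and float('inf') for missing edges, and there are no negative cycles):

def all_pairs_shortest_paths(W):
    """Hop-bounded DP: after the round for m, F[j][k] == F[j,k,m]."""
    n = len(W)
    F = [row[:] for row in W]                    # F[j,k,1] = W[j,k]
    for m in range(2, n):                        # m = 2, ..., n-1
        F = [[min(F[j][r] + W[r][k] for r in range(n))
              for k in range(n)]
             for j in range(n)]
    return F                                     # F[j][k] == F[j,k,n-1] == D[j,k]

INF = float("inf")
W = [[0, 3, INF],
     [INF, 0, 4],
     [1, INF, 0]]
print(all_pairs_shortest_paths(W))  # [[0, 3, 7], [5, 0, 4], [1, 4, 0]]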

Proof of Correctness
What are we proving? We are proving that the meaning we gave to F[j,k,m] is correct.
Base case: we show that F[j,k,1] is indeed the length of the shortest path from vertex j to vertex k that traverses at most one edge.
Induction step: assume that F[j,k,m] is the length of the shortest path from j to k that traverses at most m edges, for all m less than p, and prove that F[j,k,p] is the minimum length of a path from j to k that traverses at most p edges.

Proof of Induction Step
Consider any path Z from a vertex j to a different vertex k, where the path traverses at most p edges. We must prove that length(Z) is at least F[j,k,p].
Let r be the prefinal vertex on path Z, i.e., the last vertex before k. Partition Z into the path Z' from vertex j to vertex r, followed by the edge (r,k). Then:
length(Z) = length(Z') + W[r,k]
By the induction assumption, length(Z') >= ?

Proof of Induction Step
length(Z') >= F[j,r,p-1], from the induction hypothesis: Z' is a path from j to r with at most p-1 hops, followed by the one hop (r,k) that completes Z.

Proof of Induction Step
length(Z) = length(Z') + W[r,k]
length(Z') >= F[j,r,p-1]
Hence, length(Z) >= F[j,r,p-1] + W[r,k].
Since F[j,k,p] = min over all r of (F[j,r,p-1] + W[r,k]),
length(Z) >= F[j,k,p].

Proof of Induction Step
We have shown that any path with at most p hops from vertex j to vertex k has length at least F[j,k,p]. Next, we show that there exists a path with at most p hops from vertex j to vertex k that has length exactly F[j,k,p]. From these two facts we conclude that the minimum-length path with at most p hops from vertex j to vertex k has length F[j,k,p]. The existence proof is left to you.

Complexity?
O(n^4): there are n^2 entries F[j,k,m] for each of the n-1 values of m, and each entry takes O(n) time to compute. Can you do better?
Come up with a different partial ordering of problems Qi: let F[j,k,m] be the length of the shortest path from vertex j to vertex k that has at most 2^m hops.

Recurrence Relation
Let F[j,k,m] be the length of the shortest path from vertex j to vertex k that has at most 2^m hops. Derive a recurrence relation from this meaning of F[j,k,m].
F[j,k,0] = ? The length of the shortest path from vertex j to vertex k that has at most 1 edge: W[j,k].
F[j,k,m] = ? The length of the shortest path from vertex j to vertex k that has at most 2^m edges.

Derive the Recurrence Relation
F[j,k,m] is the length of the shortest path from vertex j to vertex k that has at most 2^m edges. Then:
F[j,k,m+1] = min over all v of (F[j,v,m] + F[v,k,m]).
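
A sketch of this doubling version in code, under the same assumptions about W as before; each pass squares the allowed number of hops, so about log2(n) passes suffice:

def all_pairs_shortest_paths_doubling(W):
    """Doubling DP: after the pass for m, F[j][k] == F[j,k,m]."""
    n = len(W)
    F = [row[:] for row in W]          # F[j,k,0] = W[j,k]: at most 1 hop
    hops = 1
    while hops < n - 1:                # stop once 2^m >= n-1 hops are allowed
        # F[j,k,m+1] = min over all v of F[j,v,m] + F[v,k,m]
        F = [[min(F[j][v] + F[v][k] for v in range(n))
              for k in range(n)]
             for j in range(n)]
        hops *= 2
    return F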

Proof
Exactly the same proof structure as in the previous case. Consider any path Z from j to k with at most 2^(m+1) hops. Let t be the number of hops in the path. Partition Z into path Y followed by path Y', where the number of hops in Y is floor(t/2), and let Y end at vertex r.

Proof
length(Z) = length(Y) + length(Y').
Since Z has at most 2^(m+1) hops, Y and Y' each have at most 2^m hops.
From the induction hypothesis: length(Y) >= F[j,r,m] and length(Y') >= F[r,k,m].
Hence, length(Z) >= F[j,r,m] + F[r,k,m].
From the recurrence relation, F[j,r,m] + F[r,k,m] >= F[j,k,m+1].
Hence, length(Z) >= F[j,k,m+1].

Proof
We have shown that every path with at most 2^m hops from vertex j to vertex k has length at least F[j,k,m]. Now show that there exists a path with at most 2^m hops from vertex j to vertex k that has length exactly F[j,k,m]. The proof has the same structure as before and is left to you.

Complexity?
O(n^3 log n): each doubling pass computes n^2 entries at O(n) time per entry, and O(log n) passes suffice. Can you do better?
Come up with yet another partial ordering of problems Qi: let F[j,k,m] be the length of the shortest path from vertex j to vertex k that traverses intermediate vertices only in the set {1, ..., m-1}.

Derive and Prove Recurrence
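
For reference, this final formulation is the Floyd-Warshall algorithm. A minimal sketch under the same assumptions about W (vertices are 0-indexed here, so intermediate vertices are drawn from {0, ..., m-1} rather than {1, ..., m-1}); the recurrence it implements is the one the exercise above asks you to derive:

def floyd_warshall(W):
    """O(n^3) all-pairs shortest paths."""
    n = len(W)
    F = [row[:] for row in W]      # no intermediate vertices allowed yet
    for m in range(n):             # now allow vertex m as an intermediate
        for j in range(n):
            for k in range(n):
                # Either the shortest path avoids m, or it splits at m.
                F[j][k] = min(F[j][k], F[j][m] + F[m][k])
    return F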