Warshall’s and Floyd’s Algorithm


Warshall’s and Floyd’s Algorithm (source: http://www.aw-bc.com/info/levitin)

Warshall’s algorithm: transitive closure (TC)

Definition: the transitive closure of a relation (alternatively: of the reachability relation of a directed graph) records, for every ordered pair of vertices (i, j), whether there is a path from i to j.

Example (digraph on vertices 1, 2, 3, 4 with edges 1→3, 2→1, 2→4, 4→2):

Adjacency matrix A    Transitive closure T
0 0 1 0               0 0 1 0
1 0 0 1               1 1 1 1
0 0 0 0               0 0 0 0
0 1 0 0               1 1 1 1

Warshall’s algorithm

Main idea: a path exists between two vertices i and j iff
- there is an edge from i to j; or
- there is a path from i to j going through vertex 1; or
- there is a path from i to j going through vertex 1 and/or 2; or
- …
- there is a path from i to j going through vertices 1, 2, …, and/or k; or
- …
- there is a path from i to j going through any of the other vertices.

Warshall’s algorithm Idea: dynamic programming

Let V = {1, …, n} and, for k ≤ n, let Vk = {1, …, k}.

For any pair of vertices i, j ∈ V, consider all paths from i to j whose intermediate vertices are all drawn from Vk: Pij(k) = {p1, p2, …}. If Pij(k) ≠ ∅, then R(k)[i, j] = 1.

The answer for every pair of vertices i, j is R(n)[i, j], that is, the matrix R(n). Starting with R(0) = A, the adjacency matrix, we compute R(1), …, R(k-1), R(k), …, R(n).

(Figure: paths p1, p2 from i to j whose intermediate vertices all lie inside Vk.)

Warshall’s algorithm Idea: dynamic programming

Let p ∈ Pij(k) be a path from i to j with all intermediate vertices in Vk.

Case 1: if k is not on p, then p is also a path from i to j with all intermediate vertices in Vk-1, i.e., p ∈ Pij(k-1).

(Figure: p avoids k, staying entirely inside Vk-1.)

Warshall’s algorithm Idea: dynamic programming

Case 2: if k is on p, then we break p down into subpaths p1 and p2. What are p1 and p2?

(Figure: p passes through k; p1 leads from i to k and p2 from k to j, both inside Vk-1.)

Warshall’s algorithm Idea: dynamic programming

Case 2 (continued): if k is on p, then we break p down into p1 and p2, where
- p1 is a path from i to k with all intermediate vertices in Vk-1, and
- p2 is a path from k to j with all intermediate vertices in Vk-1.
(We may assume p visits k only once; otherwise, shortcut the repeated visits.)

(Figure: p = p1 followed by p2, both with all intermediate vertices inside Vk-1.)

Warshall’s algorithm

In the kth stage, determine whether a path exists between two vertices i and j using just vertices among 1, …, k:

R(k)[i,j] = R(k-1)[i,j]                          (a path using just 1, …, k-1)
            or (R(k-1)[i,k] and R(k-1)[k,j])     (a path from i to k and a path from k to j, each using just 1, …, k-1)
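This recurrence translates directly into code. Below is a minimal sketch in Python (the function name warshall and the matrix representation are our choices, not from the slides):

def warshall(adj):
    """Transitive closure of a digraph given as an n-by-n 0/1 adjacency matrix.

    Returns R(n), where R[i][j] == 1 iff there is a directed path from i to j.
    """
    n = len(adj)
    r = [row[:] for row in adj]    # R(0) = A, the adjacency matrix
    for k in range(n):             # stage k: allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                # R(k)[i,j] = R(k-1)[i,j] or (R(k-1)[i,k] and R(k-1)[k,j])
                r[i][j] = r[i][j] or (r[i][k] and r[k][j])
    return r

Updating r in place is safe here because row k and column k do not change during stage k, so the R(k-1) values the recurrence needs are still available.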

Warshall’s algorithm: trace

Digraph on vertices 1 = a, 2 = b, 3 = c, 4 = d with edges a→b, b→d, d→a, d→c.

WARSHALL(G)
  R ← A
  for k in [1..n]
    for i in [1..n]
      for j in [1..n]
        r(k)[i,j] ← r(k-1)[i,j] OR (r(k-1)[i,k] AND r(k-1)[k,j])

R0 = A:        R1:            R2:            R3:            R4:
0 1 0 0        0 1 0 0        0 1 0 1        0 1 0 1        1 1 1 1
0 0 0 1        0 0 0 1        0 0 0 1        0 0 0 1        1 1 1 1
0 0 0 0        0 0 0 0        0 0 0 0        0 0 0 0        0 0 0 0
1 0 1 0        1 1 1 0        1 1 1 1        1 1 1 1        1 1 1 1

k = 1, generating row 4 of R1:
r(1)[4,1] ← r(0)[4,1] or (r(0)[4,1] and r(0)[1,1])
r(1)[4,2] ← r(0)[4,2] or (r(0)[4,1] and r(0)[1,2])
r(1)[4,3] ← r(0)[4,3] or (r(0)[4,1] and r(0)[1,3])
r(1)[4,4] ← r(0)[4,4] or (r(0)[4,1] and r(0)[1,4])

k = 2, generating row 4 of R2:
r(2)[4,1] ← r(1)[4,1] or (r(1)[4,2] and r(1)[2,1])
r(2)[4,2] ← r(1)[4,2] or (r(1)[4,2] and r(1)[2,2])
r(2)[4,3] ← r(1)[4,3] or (r(1)[4,2] and r(1)[2,3])
r(2)[4,4] ← r(1)[4,4] or (r(1)[4,2] and r(1)[2,4])

Warshall’s algorithm: trace (in-place version)

Digraph on vertices 1, 2, 3, 4 with edges 1→3, 2→1, 2→4, 4→2.

WARSHALL(G)
  R ← A
  for k in [1..n]
    for i in [1..n]
      for j in [1..n]
        r[i,j] ← r[i,j] OR (r[i,k] AND r[k,j])

R0 = A:        R1:            R2:            R3:            R4:
0 0 1 0        0 0 1 0        0 0 1 0        0 0 1 0        0 0 1 0
1 0 0 1        1 0 1 1        1 0 1 1        1 0 1 1        1 1 1 1
0 0 0 0        0 0 0 0        0 0 0 0        0 0 0 0        0 0 0 0
0 1 0 0        0 1 0 0        1 1 1 1        1 1 1 1        1 1 1 1
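Running the warshall sketch from above on this adjacency matrix reproduces the final matrix R4:

A = [[0, 0, 1, 0],
     [1, 0, 0, 1],
     [0, 0, 0, 0],
     [0, 1, 0, 0]]

for row in warshall(A):
    print(row)
# Output, matching R4 in the trace:
# [0, 0, 1, 0]
# [1, 1, 1, 1]
# [0, 0, 0, 0]
# [1, 1, 1, 1]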

Exercises

a. Apply Warshall’s algorithm to the digraph with the adjacency matrix below.
b. What is the time efficiency of Warshall’s algorithm?
c. How can this “find all paths in a directed graph” problem be solved by a traversal-based algorithm (BFS-based or DFS-based)?
d. Explain why the time efficiency of Warshall’s algorithm is inferior to that of the traversal-based algorithm for sparse graphs represented by their adjacency lists.

0 1 0 0
0 0 1 0
0 0 0 1
0 0 0 0
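As a hint for parts (c) and (d), here is a minimal DFS-based sketch (our own illustration, not part of the slides): one depth-first traversal per source vertex marks everything reachable from it, which costs O(V(V + E)) on adjacency lists and beats Warshall’s O(V^3) when the graph is sparse.

def closure_by_dfs(adj_list):
    """Transitive closure via one iterative DFS per source vertex.

    adj_list: list of neighbor lists. Returns an n-by-n 0/1 matrix.
    """
    n = len(adj_list)
    reach = [[0] * n for _ in range(n)]
    for s in range(n):
        stack = list(adj_list[s])      # start from the neighbors of s
        while stack:
            v = stack.pop()
            if not reach[s][v]:
                reach[s][v] = 1        # v is reachable from s
                stack.extend(adj_list[v])
    return reach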

Floyd’s algorithm: all-pairs shortest paths

Problem: in a weighted graph, find the shortest paths between every pair of vertices.

Same idea: construct the solution through a series of matrices D(0), D(1), …, using an initial subset of the vertices as intermediaries. In D(k), the entry dij(k) is the weight of the shortest path from ui to uj with all intermediate vertices in the initial subset {u1, u2, …, uk}.

(Figure: example weighted digraph omitted.)

Floyd’s algorithm: all-pairs shortest paths Idea: dynamic programming

Let V = {u1, …, un} and, for k ≤ n, let Vk = {u1, …, uk}.

To construct D(k), we need dij(k): for any pair of vertices ui, uj ∈ V, consider all paths from ui to uj whose intermediate vertices are all drawn from Vk, and let p be the shortest path among them; the weight of p is dij(k).

(Figure: shortest path p from ui to uj with all intermediate vertices inside Vk.)

Floyd’s algorithm: all-pairs shortest paths Idea: dynamic programming

Case 1: if uk is not in p, then a shortest path from ui to uj with all intermediate vertices in Vk-1 is also a shortest path in Vk, i.e., dij(k) = dij(k-1).

Case 2: if uk is in p, then we break p down into p1 and p2, where
- p1 is the shortest path from ui to uk with all intermediate vertices in Vk-1, and
- p2 is the shortest path from uk to uj with all intermediate vertices in Vk-1,
i.e., dij(k) = dik(k-1) + dkj(k-1).

Dynamic programming

Construct matrices D(0), D(1), …, D(k-1), D(k), …, D(n), where dij(k) is the weight of the shortest path from ui to uj with all intermediate vertices in Vk:

dij(0) = wij
dij(k) = min(dij(k-1), dik(k-1) + dkj(k-1)) for k ≥ 1

Dynamic programming is a technique for solving problems with overlapping subproblems. It suggests solving each smaller subproblem once and recording the results in a table, from which a solution to the original problem can then be obtained. What are the overlapping subproblems in Floyd’s algorithm?

General principle that underlies dynamic programming algorithms for optimization problems:

Principle of optimality: an optimal solution to any instance of an optimization problem is composed of optimal solutions to its subinstances.

The principle of optimality holds in most cases. (A rare exception: it fails for finding longest simple paths.)
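The recurrence again translates directly into code. A minimal Python sketch (the name floyd and the use of float('inf') for absent edges are our choices):

def floyd(w):
    """All-pairs shortest path weights from an n-by-n weight matrix.

    w[i][j] is the weight of edge (i, j), float('inf') if the edge is absent,
    and 0 on the diagonal. Assumes there are no negative cycles.
    """
    n = len(w)
    d = [row[:] for row in w]    # D(0) = W, the weight matrix
    for k in range(n):           # stage k: allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                # d(k)[i,j] = min(d(k-1)[i,j], d(k-1)[i,k] + d(k-1)[k,j])
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d

As with Warshall’s algorithm, the in-place update is safe because row k and column k are unchanged during stage k.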

Floyd’s algorithm: trace

Weighted digraph on vertices 1 = a, 2 = b, 3 = c, 4 = d with edges a→c (weight 3), b→a (weight 2), c→b (weight 7), c→d (weight 1), d→a (weight 6).

FLOYD(G)
  for i, j in [1..n]
    d[i,j] ← w(ui, uj)    // D0 = W
  for k in [1..n]
    for i in [1..n]
      for j in [1..n]
        d[i,j] ← min(d[i,j], d[i,k] + d[k,j])    // similar to edge relaxation

D0:          D1:          D2:          D3:          D4:
0 ∞ 3 ∞      0 ∞ 3 ∞      0 ∞ 3 ∞      0 10 3 4     0 10 3 4
2 0 ∞ ∞      2 0 5 ∞      2 0 5 ∞      2 0 5 6      2 0 5 6
∞ 7 0 1      ∞ 7 0 1      9 7 0 1      9 7 0 1      7 7 0 1
6 ∞ ∞ 0      6 ∞ 9 0      6 ∞ 9 0      6 16 9 0     6 16 9 0

Running time: O(V^3), better than running Bellman-Ford from every vertex, which would cost O(V^4) on dense graphs.

All-pairs shortest paths can also be computed by repeated matrix “multiplication” over (min, +): D(m) = D(m-1) · W, where min plays the role of addition and + the role of multiplication. Strassen’s fast matrix multiplication cannot be applied here, because (min, +) is only a semiring: min has no inverse operation, and Strassen’s method needs subtraction. For comparison, ordinary matrix multiplication:

MatrixMultiplication(A, B)
  for i, j in [1..n]
    c[i,j] ← 0
  for p in [1..n]
    for q in [1..n]
      for r in [1..n]
        c[p,q] ← c[p,q] + a[p,r] · b[r,q]
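Running the floyd sketch from above on this slide’s weight matrix reproduces D4, and a small (min, +) product (our own sketch, not from the slides) illustrates the repeated-squaring alternative:

INF = float('inf')

W = [[0,   INF, 3,   INF],
     [2,   0,   INF, INF],
     [INF, 7,   0,   1  ],
     [6,   INF, INF, 0  ]]

D = floyd(W)
# D == [[0, 10, 3, 4], [2, 0, 5, 6], [7, 7, 0, 1], [6, 16, 9, 0]],
# matching D4 in the trace.

def min_plus_product(a, b):
    """One (min, +) 'multiplication': c[i][j] = min over r of a[i][r] + b[r][j]."""
    n = len(a)
    return [[min(a[i][r] + b[r][j] for r in range(n)) for j in range(n)]
            for i in range(n)]

# Repeated squaring: the m-th power covers paths of at most m edges, so for
# n = 4 two squarings (paths of up to 4 edges) already give all shortest paths.
D2 = min_plus_product(W, W)
D4 = min_plus_product(D2, D2)
assert D4 == D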