Chapter 25: All-Pairs Shortest Paths

Chapter 25: All-Pairs Shortest Paths

Application: computing the distance table for a road atlas.

              Atlanta   Chicago   Detroit
    Atlanta      -        650       520
    Chicago     650        -        210
    Detroit     520       210        -

One approach: run a single-source shortest-paths algorithm |V| times.

Nonnegative edges: use Dijkstra. Time complexity:
  O(V^3) with a linear array
  O(VE lg V) with a binary heap
  O(V^2 lg V + VE) with a Fibonacci heap

Negative edges (allowed, but no negative cycles): use Bellman-Ford.
Time complexity: O(V^2 E) = O(V^4) for dense graphs.

Three algorithms in this chapter:
  "Repeated squaring":  O(V^3 lg V)
  Floyd-Warshall:       O(V^3)
  Johnson's:            O(V^2 lg V + VE)

CSE246 Marmara Uni

"Repeated Squaring" Algorithm

A dynamic-programming algorithm. Assume the input graph is given by an adjacency matrix W = (w_ij).

Let d_ij^(m) = minimum weight of any path from vertex i to vertex j containing at most m edges.

  d_ij^(0) = 0    if i = j
             ∞    if i ≠ j

  d_ij^(m) = min( d_ij^(m-1), min_{1≤k≤n} { d_ik^(m-1) + w_kj } )
           = min_{1≤k≤n} { d_ik^(m-1) + w_kj },   since w_jj = 0.

Assuming no negative-weight cycles:

  δ(i,j) = d_ij^(n-1) = d_ij^(n) = d_ij^(n+1) = …

"Repeated Squaring" (Continued)

So, given W, we can simply compute a series of matrices D^(1), D^(2), …, D^(n-1), where D^(m) = (d_ij^(m)). [We'll improve on this shortly.]

  n := rows[W];
  D^(1) := W;
  for m := 2 to n - 1 do
    D^(m) := Extend-SP(D^(m-1), W)
  od;
  return D^(n-1)

  Extend-SP(D, W)
    n := rows[D];
    for i := 1 to n do
      for j := 1 to n do
        d′_ij := ∞;
        for k := 1 to n do
          d′_ij := min(d′_ij, d_ik + w_kj)
        od
      od
    od;
    return D′
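The two procedures above can be sketched in Python (a minimal sketch, not the book's code; matrices are lists of lists and ∞ is `math.inf`):

```python
import math

def extend_sp(D, W):
    """One pass of Extend-SP: relax every pair (i, j) through every k."""
    n = len(W)
    Dp = [[math.inf] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                Dp[i][j] = min(Dp[i][j], D[i][k] + W[k][j])
    return Dp

def slow_apsp(W):
    """Compute D^(n-1) by repeated extension; Theta(V^4) overall."""
    n = len(W)
    D = W
    for _ in range(2, n):        # m = 2 .. n-1
        D = extend_sp(D, W)
    return D
```

Since w_jj = 0, the k = j term of each inner minimum already accounts for keeping the old value d_ij.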

"Repeated Squaring" and Matrix Multiplication

Running time is O(V^4). Note the similarity to matrix multiplication:

  Matrix-Multiply(A, B)
    n := rows[A];
    for i := 1 to n do
      for j := 1 to n do
        c_ij := 0;
        for k := 1 to n do
          c_ij := c_ij + a_ik · b_kj
        od
      od
    od;
    return C

The SP algorithm computes the following matrix "products":

  D^(1) = D^(0) · W = W
  D^(2) = D^(1) · W = W^2
  D^(3) = D^(2) · W = W^3
  ⋮
  D^(n-1) = D^(n-2) · W = W^(n-1)

Improving the Running Time

Can improve the time to O(V^3 lg V) by computing the "products" as follows:

  D^(1) = W
  D^(2) = W^2 = W · W
  D^(4) = W^4 = W^2 · W^2
  D^(8) = W^8 = W^4 · W^4
  ⋮
  D^(2^⌈lg(n-1)⌉) = W^(2^⌈lg(n-1)⌉) = W^(2^(⌈lg(n-1)⌉-1)) · W^(2^(⌈lg(n-1)⌉-1))

  D^(n-1) = D^(2^⌈lg(n-1)⌉), since d_ij^(m) = d_ij^(n-1) for all m ≥ n - 1.

This is called repeated squaring.

  n := rows[W];
  D^(1) := W;
  m := 1;
  while n - 1 > m do
    D^(2m) := Extend-SP(D^(m), D^(m));
    m := 2m
  od;
  return D^(m)

Can modify the algorithm to use only two matrices. Can also modify it to compute the predecessor matrix π. Exercise: run on the example graph.
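The squaring loop in Python (a sketch; `extend_sp` is repeated from the earlier snippet so this one is self-contained):

```python
import math

def extend_sp(D, W):
    """Min-plus 'product' of D and W."""
    n = len(W)
    Dp = [[math.inf] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                Dp[i][j] = min(Dp[i][j], D[i][k] + W[k][j])
    return Dp

def faster_apsp(W):
    """Repeated squaring: only ceil(lg(n-1)) min-plus products."""
    n = len(W)
    D = W
    m = 1
    while m < n - 1:
        D = extend_sp(D, D)      # D^(2m) from D^(m): one "squaring"
        m *= 2
    return D                     # D^(m) with m >= n-1, so D^(m) = D^(n-1)
```

Overshooting past n - 1 edges is harmless because the matrices stabilize once m ≥ n - 1.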

Floyd-Warshall Algorithm

Also dynamic programming, but with a different recurrence. Runs in O(V^3) time.

Let d_ij^(k) = weight of a shortest path from vertex i to vertex j with all intermediate vertices in the set {1, 2, …, k}.

  d_ij^(k) = w_ij                                        if k = 0
             min( d_ij^(k-1), d_ik^(k-1) + d_kj^(k-1) )  if k ≥ 1

Two possibilities: either a shortest path i ⇝ j has all its intermediate vertices in {1, …, k-1}, or it decomposes as i ⇝ k ⇝ j, where both subpaths have all their intermediate vertices in {1, …, k-1}.

FW (Continued)

δ(i,j) = d_ij^(n). So we want to compute D^(n) = (d_ij^(n)):

  n := rows[W];
  D^(0) := W;
  for k := 1 to n do
    for i := 1 to n do
      for j := 1 to n do
        d_ij^(k) := min( d_ij^(k-1), d_ik^(k-1) + d_kj^(k-1) )
      od
    od
  od;
  return D^(n)

Exercise: run on the example graph. Can reduce space from O(V^3) to O(V^2) (see Exercise 25.2-4). Can also modify to compute the predecessor matrix.
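The triple loop translates directly; a minimal Python sketch that also applies the O(V^2)-space idea by updating one matrix in place:

```python
import math

def floyd_warshall(W):
    """O(V^3) time, O(V^2) space: D is updated in place as k grows."""
    n = len(W)
    D = [row[:] for row in W]            # D^(0) = W
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D
```

In-place updating is safe because d_ik^(k) = d_ik^(k-1) and d_kj^(k) = d_kj^(k-1): going through k twice never helps.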

Predecessor Matrix

Let π_ij^(k) = predecessor of vertex j on a shortest path from vertex i with all intermediate vertices in {1, 2, …, k}.

  π_ij^(0) = NIL   if i = j or w_ij = ∞
             i     otherwise

  π_ij^(k) = π_ij^(k-1)   if d_ij^(k-1) ≤ d_ik^(k-1) + d_kj^(k-1)
             π_kj^(k-1)   otherwise

Exercise: add the computation of the π matrix to the algorithm.
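One way to carry the π recurrence alongside the distances (a sketch; 0-indexed, with `None` standing in for NIL):

```python
import math

def floyd_warshall_pred(W):
    """Floyd-Warshall that also maintains the predecessor matrix."""
    n = len(W)
    D = [row[:] for row in W]
    # pi^(0): None if i == j or w_ij = infinity, otherwise i
    P = [[None if i == j or W[i][j] == math.inf else i
          for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
                    P[i][j] = P[k][j]    # pi_ij^(k) := pi_kj^(k-1)
    return D, P
```

A shortest path from i to j can then be read off by following P[i][j], P[i][P[i][j]], … back to i.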

Transitive Closure

Given G = (V, E), a transitive-closure algorithm produces G* = (V, E*), where

  E* = {(i,j) : there exists a path from i to j in G}.

Can compute the transitive closure using Floyd-Warshall in O(V^3) time by giving every edge weight 1: (i,j) ∈ E* iff d_ij < n.
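A sketch of this reduction in Python (the function name is an illustrative choice, not from the book): assign weight 1 to every edge, run Floyd-Warshall, and test d_ij < n.

```python
import math

def transitive_closure_via_fw(n, edges):
    """(i, j) is in E* iff the unit-weight distance d_ij is below n."""
    D = [[0 if i == j else math.inf for j in range(n)] for i in range(n)]
    for (i, j) in edges:
        D[i][j] = 1
    for k in range(n):
        for i in range(n):
            for j in range(n):
                D[i][j] = min(D[i][j], D[i][k] + D[k][j])
    return [[D[i][j] < n for j in range(n)] for i in range(n)]
```

Any finite distance is at most n - 1 edges long, so d_ij < n exactly characterizes reachability.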

Example

  D^(0) = W:                 D^(1) = D^(0)
        1  2  3  4
    1 [ 0  ∞  ∞  ∞ ]
    2 [ ∞  0  1  1 ]
    3 [ 1  ∞  0  1 ]
    4 [ ∞  1  ∞  0 ]

  D^(2):
        1  2  3  4
    1 [ 0  ∞  ∞  ∞ ]
    2 [ ∞  0  1  1 ]
    3 [ 1  ∞  0  1 ]
    4 [ ∞  1  2  0 ]

  D^(3):
        1  2  3  4
    1 [ 0  ∞  ∞  ∞ ]
    2 [ 2  0  1  1 ]
    3 [ 1  ∞  0  1 ]
    4 [ 3  1  2  0 ]

  D^(4):
        1  2  3  4
    1 [ 0  ∞  ∞  ∞ ]
    2 [ 2  0  1  1 ]
    3 [ 1  2  0  1 ]
    4 [ 3  1  2  0 ]

(The graph figure is not reproduced; its edges, read off from W, are 2→3, 2→4, 3→1, 3→4, and 4→2, each of weight 1.)

Another O(V^3) Transitive-Closure Algorithm

Uses only bits, so it is better in practice.

Let t_ij^(k) = 1 iff there exists a path in G from i to j with all intermediate vertices in {1, 2, …, k}.

  t_ij^(0) = 0   if i ≠ j and (i,j) ∉ E
             1   otherwise

  t_ij^(k) = t_ij^(k-1) ∨ ( t_ik^(k-1) ∧ t_kj^(k-1) )

Code

  TC(G)
    n := |V[G]|;
    for i := 1 to n do
      for j := 1 to n do
        if i = j or (i,j) ∈ E then t_ij^(0) := 1
        else t_ij^(0) := 0 fi
      od
    od;
    for k := 1 to n do
      for i := 1 to n do
        for j := 1 to n do
          t_ij^(k) := t_ij^(k-1) ∨ ( t_ik^(k-1) ∧ t_kj^(k-1) )
        od
      od
    od;
    return T^(n)

See the book for how this algorithm runs on the previous example.
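Because each row of T is a bit vector, an entire row can be updated with a single OR. A Python sketch using integers as bitsets:

```python
def transitive_closure_bits(n, edges):
    """t[i] is a bitmask of the vertices reachable from i."""
    t = [1 << i for i in range(n)]       # every vertex reaches itself
    for (i, j) in edges:
        t[i] |= 1 << j
    for k in range(n):
        for i in range(n):
            if t[i] >> k & 1:            # if k is reachable from i ...
                t[i] |= t[k]             # ... so is everything k reaches
    return t
```

This performs the ∨/∧ recurrence on n bits at a time, which is the "uses only bits" advantage the slide mentions.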

Johnson's Algorithm

An O(V^2 lg V + VE) algorithm, good for sparse graphs. Uses Dijkstra and Bellman-Ford as subroutines.

Basic idea: reweight the edges to be nonnegative, then run Dijkstra's algorithm once per vertex. Use Bellman-Ford to compute the new edge weights ŵ. We must have:

  For all u and v, a shortest path from u to v using ŵ is also a shortest path from u to v using w.
  For all u and v, ŵ(u,v) is nonnegative.

Counterexample

Subtracting the minimum weight from every weight doesn't work. Consider a graph with edge weights -2, -1, and 1 (the figure is not reproduced): adding 2 to every weight changes a two-edge path's total by 4 but a one-edge path's total by only 2. Paths with more edges are unfairly penalized.

Johnson's Insight

Add a new vertex s to the original graph G, with an edge of weight 0 from s to each vertex in G (figure not reproduced). Assign a new weight ŵ to each edge as follows:

  ŵ(u,v) = w(u,v) + δ(s,u) - δ(s,v)

Question 1

Are all the ŵ's nonnegative? Yes: by the triangle inequality, δ(s,v) ≤ δ(s,u) + w(u,v); otherwise the path s ⇝ u → v would be shorter than the shortest path from s to v. Hence

  ŵ(u,v) = w(u,v) + δ(s,u) - δ(s,v) ≥ 0.

Question 2

Does the reweighting preserve shortest paths? Yes: consider any path p = ⟨v_0, v_1, …, v_k⟩. The sum telescopes, leaving

  ŵ(p) = w(p) + δ(s,v_0) - δ(s,v_k),

a value that depends only on the endpoints, not on the path. In other words, we have adjusted the lengths of all paths between a given pair of endpoints by the same amount, so this does not affect their relative ordering: shortest paths are preserved.
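The telescoping step can be written out explicitly for p = ⟨v_0, v_1, …, v_k⟩:

```latex
\hat{w}(p) = \sum_{i=1}^{k} \hat{w}(v_{i-1}, v_i)
           = \sum_{i=1}^{k} \bigl( w(v_{i-1}, v_i) + h(v_{i-1}) - h(v_i) \bigr)
           = w(p) + h(v_0) - h(v_k),
```

where, in Johnson's algorithm, h(v) = δ(s,v).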

Johnson's: Running Time

  Computing G′:                 Θ(V)
  Bellman-Ford:                 Θ(VE)
  Reweighting:                  Θ(E)
  Running (modified) Dijkstra:  O(V^2 lg V + VE lg V)
  Adjusting distances:          Θ(V^2)

The total is dominated by Dijkstra: O(V^2 lg V + VE lg V) with a binary heap; Fibonacci heaps give the O(V^2 lg V + VE) bound quoted earlier.

A General Result about Reweighting

Define ŵ(u,v) = w(u,v) + h(u) - h(v), where h: V → ℝ.

Lemma 25.1: Let p = ⟨v_0, v_1, …, v_k⟩. Then
  (i)  w(p) = δ(v_0, v_k) iff ŵ(p) = δ̂(v_0, v_k), where δ̂ is the shortest-path weight under ŵ.
  (ii) G has a negative-weight cycle using w iff G has a negative-weight cycle using ŵ.

Proof of (i): the h terms telescope, so ŵ(p) = w(p) + h(v_0) - h(v_k). Thus every path from v_0 to v_k changes weight by the same amount h(v_0) - h(v_k), and p has minimum ŵ-weight iff it has minimum w-weight.

Reweighting in Johnson's Algorithm

We want to define h such that ŵ(u,v) ≥ 0 for every edge. Do it like this:

  Define h(v) = δ(s,v) for all v ∈ V.

By Lemma 24.10 (the triangle inequality), for all (u,v) ∈ E: h(v) ≤ h(u) + w(u,v). Thus

  ŵ(u,v) = w(u,v) + h(u) - h(v) ≥ 0.

(The slide's example graph, showing the original weights and their reweighted nonnegative values, is not reproduced.)

LP, Shortest Paths, and Bellman-Ford

It is possible to find a feasible solution to a system of difference constraints by computing shortest-path weights (from an auxiliary source v0) in the corresponding constraint graph. (The slide's constraint graph on v0, v1, …, v5, with edge weights including -1, 1, -3, -3, 5, 4, and -1, is not reproduced.)

Linear programming: find a vector x = ⟨x_1, x_2, …, x_n⟩ that maximizes an objective function c_1 x_1 + c_2 x_2 + … + c_n x_n subject to m constraints Ax ≤ b.

For some special cases, a linear program can be solved more efficiently than with a general-purpose LP algorithm. Example: the maximum-flow problem has an LP representation.

Finding shortest paths in the constraint graph is equivalent to solving the system of difference constraints: find a vector x = ⟨x_1, x_2, x_3, x_4, x_5⟩ satisfying them (the constraint list is in the figure, not reproduced).

One solution is x = (-5, -3, 0, -1, -4). Another solution is x = (0, 2, 5, 4, 1). In fact, for any d, (d-5, d-3, d, d-1, d-4) is a solution: adding the same constant to every variable leaves every difference x_j - x_i unchanged.

Design and Analysis of Algorithms © Sigal Ar, Linear Programming, Slide 23
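To make the Bellman-Ford connection concrete, here is a sketch that recovers the first solution. The constraint list is an assumption reconstructed to be consistent with the two solutions quoted on the slide; each constraint x_j - x_i ≤ w becomes an edge (v_i, v_j) of weight w.

```python
import math

# Assumed constraints (i, j, w), meaning x_j - x_i <= w, vertices 1..5.
constraints = [
    (2, 1, 0), (5, 1, -1), (5, 2, 1), (1, 3, 5),
    (1, 4, 4), (3, 4, -1), (3, 5, -3), (4, 5, -3),
]

def solve_difference_constraints(n, constraints):
    """Bellman-Ford from auxiliary source v0 (0-weight edge to each v_i)."""
    d = [0] + [math.inf] * n                 # d[0] is v0
    edges = [(0, j, 0) for j in range(1, n + 1)] + list(constraints)
    for _ in range(n):                       # |V| - 1 = n passes
        for (u, v, w) in edges:
            if d[u] + w < d[v]:
                d[v] = d[u] + w
    for (u, v, w) in edges:
        if d[u] + w < d[v]:
            return None                      # negative cycle: infeasible
    return d[1:]                             # x_i = delta(v0, v_i)

x = solve_difference_constraints(5, constraints)
```

Setting x_i = δ(v0, v_i) satisfies every constraint because each edge is relaxed: δ(v0, v_j) ≤ δ(v0, v_i) + w.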

Code for Johnson's Algorithm

  compute G′, where V[G′] = V[G] ∪ {s} and
                    E[G′] = E[G] ∪ {(s,v) : v ∈ V[G]};
  if Bellman-Ford(G′, w, s) = false then
    report "negative-weight cycle"
  else
    for each v ∈ V[G′] do
      set h(v) to δ(s,v) as computed by Bellman-Ford
    od;
    for each (u,v) ∈ E[G′] do
      ŵ(u,v) := w(u,v) + h(u) - h(v)
    od;
    for each u ∈ V[G] do
      run Dijkstra(G, ŵ, u) to compute δ̂(u,v) for all v ∈ V[G];
      for each v ∈ V[G] do
        d_uv := δ̂(u,v) + h(v) - h(u)
      od
    od
  fi

Running time is O(V^2 lg V + VE). See the book.
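A runnable Python sketch of the whole pipeline (a sketch, not the book's code: the graph is a list of (u, v, w) triples, and the "modified Dijkstra" is a standard binary-heap version):

```python
import heapq
import math

def bellman_ford(n, edges, s):
    """Distances from s, or None if a negative-weight cycle exists."""
    d = [math.inf] * n
    d[s] = 0
    for _ in range(n - 1):
        for (u, v, w) in edges:
            if d[u] + w < d[v]:
                d[v] = d[u] + w
    for (u, v, w) in edges:
        if d[u] + w < d[v]:
            return None
    return d

def dijkstra(n, adj, s):
    """Binary-heap Dijkstra; adj[u] is a list of (v, w) with w >= 0."""
    d = [math.inf] * n
    d[s] = 0
    pq = [(0, s)]
    while pq:
        du, u = heapq.heappop(pq)
        if du > d[u]:
            continue                    # stale entry
        for (v, w) in adj[u]:
            if du + w < d[v]:
                d[v] = du + w
                heapq.heappush(pq, (d[v], v))
    return d

def johnson(n, edges):
    # G': new source vertex n with 0-weight edges to every vertex.
    h = bellman_ford(n + 1, edges + [(n, v, 0) for v in range(n)], n)
    if h is None:
        return None                     # negative-weight cycle
    adj = [[] for _ in range(n)]
    for (u, v, w) in edges:
        adj[u].append((v, w + h[u] - h[v]))   # w-hat >= 0
    D = []
    for u in range(n):
        dh = dijkstra(n, adj, u)
        D.append([dh[v] + h[v] - h[u] for v in range(n)])
    return D
```

Note that unreachable pairs stay at ∞ automatically, since ∞ plus a finite correction is still ∞.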

Example

A run of Johnson's algorithm on a 5-vertex graph: the reweighted graph, then one Dijkstra run per source, with each vertex annotated by a pair of δ̂/δ values (graph figures not reproduced).