The Successive Shortest Path Algorithm

Presentation transcript:

The Successive Shortest Path Algorithm and the Capacity Scaling Algorithm for the Minimum Cost Flow Problem. OPIM 915, taught by MGS. Slides written by J. Orlin.

Overview of lecture: a quick review of min cost flow and optimality conditions; pseudoflows; the successive shortest path algorithm; a polynomial time scaling algorithm.

The Minimum Cost Flow Problem. uij = capacity of arc (i,j). cij = unit cost of shipping flow from node i to node j on (i,j). xij = amount shipped on arc (i,j). Minimize ∑(i,j)∈A cij xij subject to ∑j xij − ∑j xji = bi for all i ∈ N, and 0 ≤ xij ≤ uij for all (i,j) ∈ A.
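As a concrete reference point for this notation, here is a small sketch (mine, not the lecture's) that evaluates the objective and checks the constraints for a candidate flow; the (i, j, uij, cij) arc tuples and the dictionary b of supplies/demands are assumptions of the sketch, written in Python.

def flow_cost(arcs, x):
    """Objective value: the sum over all arcs of c_ij * x_ij.
    arcs: list of (i, j, u_ij, c_ij); x: list of flow values, one per arc."""
    return sum(c * xij for (i, j, u, c), xij in zip(arcs, x))

def is_feasible_flow(b, arcs, x, tol=1e-9):
    """Check the constraints: 0 <= x_ij <= u_ij on every arc, and at every node i
    the outflow sum_j x_ij minus the inflow sum_j x_ji equals b_i."""
    if any(xij < -tol or xij > u + tol for (i, j, u, c), xij in zip(arcs, x)):
        return False
    net = {i: 0.0 for i in b}              # will hold outflow minus inflow per node
    for (i, j, u, c), xij in zip(arcs, x):
        net[i] += xij                      # x_ij leaves node i
        net[j] -= xij                      # x_ij enters node j
    return all(abs(net[i] - b[i]) <= tol for i in b)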

The Residual Network G(x). [Figure: arc (1, 2) with capacity u12 = 8 and cost $10 carries flow x12 = 3; in G(x) it is replaced by a forward arc (1, 2) with residual capacity u12 − x12 = 5 and cost $10, and a backward arc (2, 1) with residual capacity x12 = 3 and cost −$10.] Reducing the flow in (i, j) by 1 is equivalent to sending one unit of flow from j to i. It reduces the cost by cij.
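A small sketch of the residual-arc rule just described (again mine; the flat (i, j, uij, cij) arc tuples are an assumed layout):

def residual_arcs(arcs, x):
    """Arcs of G(x) for a flow (or pseudo-flow) x, as (tail, head, residual capacity, cost)."""
    res = []
    for (i, j, u, c), xij in zip(arcs, x):
        if u - xij > 0:
            res.append((i, j, u - xij, c))     # forward arc: capacity u_ij - x_ij, cost c_ij
        if xij > 0:
            res.append((j, i, xij, -c))        # backward arc: capacity x_ij, cost -c_ij
    return res

# The slide's example arc: u_12 = 8, c_12 = 10, x_12 = 3.
assert residual_arcs([(1, 2, 8, 10)], [3]) == [(1, 2, 5, 10), (2, 1, 3, -10)]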

Pseudo-flows. A pseudo-flow is a "flow" vector x such that 0 ≤ x ≤ u; the flow balance constraints need not be satisfied. Let e(i) denote the excess (deficit) at node i: e(i) = b(i) plus the flow into i minus the flow out of i. [Figures: a network with supplies/demands and capacities, and the same network with a pseudo-flow and the resulting excesses.]
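The excesses can be computed directly from this definition; the sketch below (with made-up numbers in the example) uses the same arc layout as the earlier snippets:

def excesses(b, arcs, x):
    """e(i) = b(i) + flow into i - flow out of i for a pseudo-flow x.
    b: dict node -> supply (positive) or demand (negative)."""
    e = dict(b)
    for (i, j, u, c), xij in zip(arcs, x):
        e[i] -= xij                            # x_ij leaves node i
        e[j] += xij                            # x_ij enters node j
    return e

# A node shipping 3 of its 5 units of supply is left with an excess of 2.
assert excesses({1: 5, 2: -5}, [(1, 2, 10, 1)], [3]) == {1: 2, 2: -2}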

Pseudo-flows and the residual network. [Figures: a pseudo-flow with its excesses, and the corresponding residual network.] The infeasibility of the pseudo-flow is ∑{i : e(i) > 0} e(i), e.g., 5 in this example.

Reduced Costs in G(x). Let πi denote the node potential for node i. For a unit of flow out of node i, subtract πi from the cost; for a unit of flow into node j, add πj to the cost. Thus the reduced cost of arc (i, j) is cπij = cij − πi + πj. [Figure: a cycle on nodes 1, 2, 3 with costs $3, $1, −$5 and reduced costs $3 − π1 + π2, $1 − π2 + π3, −$5 − π3 + π1.]
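A one-line helper for this definition, plus a check (with arbitrary potentials of my choosing) that the reduced costs around the slide's 3-node cycle sum to the original cycle cost, since the potentials cancel around any directed cycle:

def reduced_cost(c_ij, pi, i, j):
    """c^pi_ij = c_ij - pi_i + pi_j."""
    return c_ij - pi[i] + pi[j]

pi = {1: 7.0, 2: -2.0, 3: 4.0}                       # arbitrary potentials
cycle = [(1, 2, 3.0), (2, 3, 1.0), (3, 1, -5.0)]     # the $3, $1, -$5 cycle of the figure
assert abs(sum(reduced_cost(c, pi, i, j) for (i, j, c) in cycle) - (-1.0)) < 1e-9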

Optimality Conditions. Theorem. Suppose that x* is a flow and π* is a vector of node potentials. Then x* and π* are optimal if the flow x* is feasible and cπ*ij ≥ 0 for all (i, j) ∈ G(x*). The second property is often called dual feasibility.
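Condition (2) can be checked arc by arc; a sketch, using the same assumed arc layout as before (primal feasibility of x* would be checked as in the earlier snippet):

def dual_feasible(arcs, x, pi, tol=1e-9):
    """True if every arc of the residual network G(x) has a nonnegative reduced cost."""
    for (i, j, u, c), xij in zip(arcs, x):
        if xij < u - tol and c - pi[i] + pi[j] < -tol:       # forward residual arc (i, j)
            return False
        if xij > tol and (-c) - pi[j] + pi[i] < -tol:        # backward residual arc (j, i)
            return False
    return True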

Optimality Conditions. The cycle canceling algorithm starts with a feasible flow and maintains a feasible flow at each iteration; it terminates with dual feasibility. The successive shortest path algorithm starts with (and maintains) an infeasible flow x and node potentials π that are dual feasible. It stops when it obtains a flow that is feasible. Its main subroutine involves sending flow along a path from a node with excess to a node with deficit.

Augmenting paths. [Figures: capacities and excesses in G(x), and excesses and reduced costs in G(x).] A path P from i to j in G(x) is augmenting if e(i) > 0, e(j) < 0, and cπvw = 0 for all (v, w) ∈ P.

Augmenting along a path. [Figure: the augmenting path with its residual capacities.] The capacity of an augmenting path P from node i to node j is min { e(i), −e(j), min(v,w)∈P rvw }. In this case, the capacity is min {5, 2, 3} = 2. To augment along a path P is to send δ units of flow from i to j along P, where δ = capacity(P).
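A sketch of the capacity computation (the residual capacities 8, 3, 7 below are made up so that the minimum along the path is 3, matching the slide's min {5, 2, 3} = 2):

def path_capacity(e, path, i, j):
    """Capacity of an augmenting path P from node i to node j:
    min { e(i), -e(j), smallest residual capacity along P }.
    path: list of (v, w, r_vw) in path order."""
    return min(e[i], -e[j], min(r for (_, _, r) in path))

e = {1: 5, 2: 0, 3: 0, 4: -2}
P = [(1, 2, 8), (2, 3, 3), (3, 4, 7)]
delta = path_capacity(e, P, 1, 4)
assert delta == 2
# Augmenting sends delta units along P: e(i) drops by delta and e(j) rises by delta.
e[1] -= delta
e[4] += delta
assert e[1] == 3 and e[4] == 0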

Before and after the augmentation. [Figures: capacities and excesses in G(x) before augmenting on P, and after augmenting on P.]

The effect of augmentations. [Figures: excesses and reduced costs before and after the augmentation.] Augmentations reduce the infeasibility by at least 1. Augmentations maintain dual feasibility: any arc added to G(x) is the reversal of an arc with a reduced cost of 0, and so it also has a reduced cost of 0.

The Successive Shortest Path Algorithm (the SSP algorithm).
Initialize with a dual feasible pseudo-flow (x, π).
while x is infeasible do
    select a node i with e(i) > 0;
    adjust the node potentials so that (x, π) remains dual feasible and there is an augmenting path P from node i to a node j with e(j) < 0 (if no such adjustment is possible, then there is no feasible flow for the problem);
    augment along path P;
    update the residual network and the excesses;
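To make this loop concrete, here is a minimal runnable sketch of the successive shortest path idea, under simplifying assumptions that are mine rather than the lecture's: all arc costs are nonnegative integers (so π = 0 is an initial dual feasible choice, sidestepping the initialization described on the slides that follow), and the supply and demand nodes are folded into a single artificial source and sink so that each shortest path computation starts from the same node. The potential adjustment π(j) := π(j) − d(j) and the augmentation step are the ones detailed on the next few slides.

import heapq

def successive_shortest_paths(n, arcs, b):
    """Minimal sketch of the successive shortest path idea (my simplifications, not
    the lecture's): nodes are 0..n-1, arcs are (i, j, capacity, cost) with nonnegative
    integer costs, b[i] > 0 is a supply and b[i] < 0 a demand, and sum(b) == 0.
    Returns the flow on each input arc, or None if no feasible flow exists."""
    # Fold all supplies/demands into one artificial source s and sink t, so every
    # augmentation starts at the same node (an implementation convenience only).
    s, t = n, n + 1
    all_arcs = list(arcs)
    all_arcs += [(s, i, b[i], 0) for i in range(n) if b[i] > 0]
    all_arcs += [(i, t, -b[i], 0) for i in range(n) if b[i] < 0]
    need = sum(v for v in b if v > 0)          # total supply that must reach t

    N = n + 2
    graph = [[] for _ in range(N)]             # residual network, adjacency lists
    forward = []                               # (tail, index) of each original arc
    for k, (i, j, cap, cost) in enumerate(all_arcs):
        if k < len(arcs):
            forward.append((i, len(graph[i])))
        # entry: [head, residual capacity, cost, index of the reverse entry]
        graph[i].append([j, cap, cost, len(graph[j])])
        graph[j].append([i, 0, -cost, len(graph[i]) - 1])

    pi = [0] * N          # node potentials; dual feasible at start since costs >= 0
    INF = float('inf')
    sent = 0
    while sent < need:
        # Dijkstra from s with reduced costs c_ij - pi(i) + pi(j) as arc lengths.
        dist = [INF] * N
        parent = [None] * N                    # (previous node, arc index) on the path
        dist[s] = 0
        heap = [(0, s)]
        while heap:
            d, v = heapq.heappop(heap)
            if d > dist[v]:
                continue
            for idx, (to, rcap, cost, _) in enumerate(graph[v]):
                nd = d + cost - pi[v] + pi[to]
                if rcap > 0 and nd < dist[to]:
                    dist[to] = nd
                    parent[to] = (v, idx)
                    heapq.heappush(heap, (nd, to))
        if dist[t] == INF:
            return None                        # no augmenting path left: infeasible
        # The lecture's potential adjustment: pi(j) := pi(j) - d(j) for reachable j,
        # which keeps all reduced costs nonnegative and creates a 0-reduced-cost
        # path from s to t in the residual network.
        for v in range(N):
            if dist[v] < INF:
                pi[v] -= dist[v]
        # Capacity of the augmenting path: remaining excess vs. residual capacities.
        delta, v = need - sent, t
        while v != s:
            u, idx = parent[v]
            delta = min(delta, graph[u][idx][1])
            v = u
        # Augment: push delta units along the path, updating the residual network.
        v = t
        while v != s:
            u, idx = parent[v]
            graph[u][idx][1] -= delta
            graph[v][graph[u][idx][3]][1] += delta
            v = u
        sent += delta

    # Flow on an original arc = its capacity minus its remaining forward residual capacity.
    return [arcs[k][2] - graph[i][idx][1] for k, (i, idx) in enumerate(forward)]

# Example: ship 2 units from node 0 to node 2; the cheap route goes through node 1.
flow = successive_shortest_paths(3, [(0, 1, 4, 1), (1, 2, 4, 1), (0, 2, 1, 5)], [2, 0, -2])
assert flow == [2, 2, 0]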

The next steps: show how to find an initial dual feasible pseudoflow; show how to adjust the node potentials; show that the problem is infeasible if no adjustment of the node potentials is possible; prove finiteness; prove correctness of the algorithm.

Finding an initial dual feasible pseudoflow. Let A’ = {(i, j) : uij = ∞}. We first choose π so that cπij ≥ 0 for every (i, j) ∈ A’. Step 1. Let C = 1 + max { |cij| : (i, j) ∈ A’ }. If there is no directed path from node 1 to node j in A’, then add arc (1, j) to A’ with cost nC. Step 2. Use a shortest path algorithm to find the shortest path length d(j) from node 1 to each node j in A’. If there is a negative cost cycle, quit: one could send an infinite amount of flow around a negative cost cycle, so the objective is unbounded. Step 3. Let π(j) = −d(j) for all j. Step 4. For each arc (i, j) ∈ A\A’ with cπij < 0, let xij = uij; all other arcs carry zero flow.
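A sketch of these four steps under assumptions of mine: arcs are (i, j, capacity, cost) tuples with math.inf marking the uncapacitated arcs of A’, node 1 is present, and Bellman-Ford serves as the shortest path routine since the costs may be negative. The slide adds an artificial arc (1, j) only when j is not reachable from node 1; the sketch adds one for every node, which is a harmless shortcut because a cost of nC exceeds the cost of any simple path in A’.

import math

def initial_dual_feasible_pseudoflow(nodes, arcs):
    """Steps 1-4 of the slide.  Returns (x, pi), with x indexed like arcs,
    or None if a negative cost cycle makes the objective unbounded."""
    nodes = list(nodes)
    n = len(nodes)
    a_prime = [(i, j, c) for (i, j, u, c) in arcs if u == math.inf]
    # Step 1: artificial high-cost arcs out of node 1.
    C = 1 + max((abs(c) for (_, _, c) in a_prime), default=0)
    edges = a_prime + [(1, j, n * C) for j in nodes if j != 1]
    # Step 2: Bellman-Ford distances d(j) from node 1, with negative cycle detection.
    d = {j: math.inf for j in nodes}
    d[1] = 0
    for _ in range(n - 1):
        for (i, j, c) in edges:
            if d[i] + c < d[j]:
                d[j] = d[i] + c
    if any(d[i] + c < d[j] for (i, j, c) in edges):
        return None        # negative cost cycle: infinite flow could be sent around it
    # Step 3: node potentials.
    pi = {j: -d[j] for j in nodes}
    # Step 4: saturate the finite-capacity arcs with negative reduced cost.
    x = [u if (u != math.inf and c - pi[i] + pi[j] < 0) else 0 for (i, j, u, c) in arcs]
    return x, pi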

Finding an initial dual feasible pseudoflow (example). [Figures: A’ is the set of solid arcs, which have infinite capacity, with the costs shown on the arcs; the dotted red arc has a capacity of 10. In the second figure the numbers next to the nodes are the shortest path distances d(j) from node 1, and the reduced costs are cπij = cij + d(i) − d(j).]

Adjusting node potentials. Select a node i with e(i) > 0. Let d(j) = the shortest path distance from node i to node j in G(x), with the reduced costs as arc lengths. Replace πj by πj − d(j) for all j ∈ N. [Figures: G(x) with reduced costs on the arcs and the shortest path distances; the node potentials πj − d(j) and the new reduced costs.]

On Node Potentials. After adjusting the node potentials, there is a path of 0-reduced-cost arcs in G(x) from node i to each other node (assuming that there is a path from node 1). [Figures: node potentials πj − d(j) and reduced costs; excesses and capacities.]

Augment along a path from node i. Select a node j with e(j) < 0 that is reachable from node i. Let P be the path from node i to node j. Send min { e(i), −e(j), min {rvw : (v, w) ∈ P} } units of flow along P. [Figure: excesses and capacities; send min {10, 3, 5} = 3 units of flow from node 1 to node 4, then update the residual capacities and excesses.]

Proof of finiteness and correctness. Finiteness comes from the fact that the infeasibility is always a nonnegative integer and decreases by at least one at each iteration. The algorithm usually terminates with a feasible flow (and also with dual feasibility), which is thus optimal. It can also terminate if there is no path from a node i with e(i) > 0 to any node with negative excess; in that case, the problem is infeasible.

Analysis of the Successive Shortest Path Algorithm. Each augmentation reduces the infeasibility by at least 1. The initial infeasibility is at most mUmax + nbmax. The time per augmentation is the time to run Dijkstra’s algorithm (or any shortest path algorithm for non-negative arc lengths). Running time: O((mUmax + nbmax)(m + n log nC)).

Mental Break. Thomas Edison invented the light bulb; the invention helped him deal with a personal (non-professional) issue. What was it? He was afraid of the dark. Apollo 11 did not have much fuel left when it landed on the moon. How much fuel remained? About 20 seconds' worth. How long does it take all of the stars of the Milky Way galaxy to revolve around the center of the galaxy? Around 200 million years.

Mental Break. A neutron star is so dense that it can quickly spin an entire revolution on its axis without tearing itself apart. How fast? It can complete a revolution in 1/30th of a second. A full moon is brighter than a half moon. How much brighter? 9 times as bright. In 1961-62, MIT students developed the first interactive computer game. What was it called? Spacewar.

The Capacity Scaling Algorithm A polynomial time variant of the successive shortest path algorithm. Relies on the generic scaling approach. Send flow from a node i with e(i) ≥ Δ/2. Send flow along paths with capacity ≥ Δ/2.

Δ-optimality. A pseudoflow x and node potentials π are called Δ-optimal if (1) e(j) < Δ for all j, and (2) for every arc (i, j), if cπij < 0 then rij < Δ. Quiz: suppose Δ = 64. Which of the following are permitted if the pseudoflow is Δ-optimal? An arc (i, j) with rij = 63 and cπij = −6300: permitted. A node i with e(i) = 64: not permitted. A node j with e(j) = −128: permitted. An arc (i, j) with rij = 64 and cπij = −1: not permitted.
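A small checker for these two conditions, with the quiz cases as test data (the zero potentials below are chosen so that the arc costs equal the stated reduced costs):

def is_delta_optimal(excess, residual_arcs, pi, delta):
    """excess: dict node -> e(node); residual_arcs: list of (i, j, r_ij, c_ij)
    over the arcs of G(x); pi: dict of node potentials."""
    if any(e >= delta for e in excess.values()):
        return False                                   # (1) e(j) < Δ for all j
    for (i, j, r, c) in residual_arcs:
        if c - pi[i] + pi[j] < 0 and r >= delta:
            return False                               # (2) c^pi_ij < 0 implies r_ij < Δ
    return True

zero = {1: 0, 2: 0}
assert is_delta_optimal({1: 0, 2: 0}, [(1, 2, 63, -6300)], zero, 64)        # permitted
assert not is_delta_optimal({1: 64, 2: 0}, [], zero, 64)                    # e(i) = 64
assert is_delta_optimal({1: 0, 2: -128}, [], zero, 64)                      # e(j) = -128
assert not is_delta_optimal({1: 0, 2: 0}, [(1, 2, 64, -1)], zero, 64)       # r_ij = 64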

The Capacity Scaling Algorithm.
let (x, π) be an initial dual feasible pseudo-flow and node potentials;
let e* = 1 + max { e(i) : i ∈ N };
Δ := 2^K, where K = ⌈log e*⌉;
while Δ > 1 do
    y := ImproveApprox(x, Δ);
    x := y;
    Δ := Δ/2;
ImproveApprox(x, Δ) converts a Δ-optimal pseudo-flow into a Δ/2-optimal pseudo-flow.
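The driver can be sketched in a few lines; improve_approx here is a caller-supplied routine (a hypothetical signature of my choosing, returning updated x, π, and excesses) standing in for the ImproveApprox subroutine described next.

import math

def capacity_scaling(x, pi, excess, improve_approx):
    """Scaling driver: halve Δ until Δ = 1, keeping the pseudo-flow Δ-optimal."""
    e_star = 1 + max(excess.values())
    delta = 2 ** math.ceil(math.log2(e_star))          # Δ = 2^K with K = ⌈log e*⌉
    while delta > 1:
        x, pi, excess = improve_approx(x, pi, excess, delta)
        delta //= 2
    return x, pi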

Initializing ImproveApprox. For any arc (i, j) with cπij < 0 and rij ≥ Δ/2, send rij units of flow on (i, j). [Figure: Δ goes from 64 to 32; an arc (i, j) with rij = 63 and cπij = −6300 is saturated, after which rji = 63, cπji = 6300, e(i) = α − 63, and e(j) = β + 63.] After this operation, if cπvw < 0, then rvw < Δ/2 for all (v, w) ∈ G(x).

ImproveApprox(x, Δ):
for any arc (i, j) with cπij < 0 and rij ≥ Δ/2, send rij units of flow on (i, j); update r and e( );
while there is a node i with e(i) ≥ Δ/2 do
    let G(x, Δ/2) = { (i, j) : rij ≥ Δ/2 };
    select a node i with e(i) ≥ Δ/2;
    let d(j) = the minimum cost (with respect to the reduced costs) of a path from i to j in G(x, Δ/2);
    replace π by π − d( );
    augment along an augmenting path from i to some node j with e(j) < 0 (assume that the path exists);
    update the residual network and the excesses;
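The bookkeeping in this loop can be sketched with a few small helpers (the flat (i, j, rij, cij) layout for residual arcs is an assumption of this sketch); the shortest path and augmentation steps themselves are the same as in the successive shortest path sketch above.

def initial_pushes(residual_arcs, pi, delta):
    """Arcs saturated at the start of ImproveApprox: negative reduced cost and
    residual capacity at least Δ/2.  Returns (arc, amount) pairs; applying a push
    moves r_ij onto the reverse arc and shifts r_ij units of excess from i to j."""
    return [((i, j), r) for (i, j, r, c) in residual_arcs
            if r >= delta / 2 and c - pi[i] + pi[j] < 0]

def active_node(excess, delta):
    """A node with excess at least Δ/2, or None once the phase is finished."""
    for i, e in excess.items():
        if e >= delta / 2:
            return i
    return None

def restricted_arcs(residual_arcs, delta):
    """G(x, Δ/2): only arcs with r_ij >= Δ/2 are kept, so any augmenting path
    found in this subgraph can carry at least Δ/2 units."""
    return [(i, j, r, c) for (i, j, r, c) in residual_arcs if r >= delta / 2]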

An augmentation when Δ = 64. [Figure: a path 1 → 2 → 3 → 4 with r12, r23, r34 ≥ 32, reduced costs cπ12 = cπ23 = cπ34 = 0, e(1) ≥ 32, and e(4) < 0.] After an augmentation from i to j, either the infeasibility decreases by at least Δ/2, or e(j) becomes ≥ 0.

Number of Augmentations. ∑{i : e(i) > 0} e(i) < nΔ at the beginning of the Δ-scaling phase. Recall that we send rij units of flow on arcs with negative reduced costs; each such saturating push increases e(j) by at most Δ − 1. So ∑{i : e(i) > 0} e(i) < nΔ + mΔ after the saturating pushes at the beginning of the Δ-scaling phase. Each augmentation (i) eliminates a node with a deficit or (ii) reduces ∑{i : e(i) > 0} e(i) by at least Δ/2. (i) occurs fewer than n times; (ii) occurs fewer than 2m + 2n times.

Running time analysis. The number of augmentations per scaling phase is O(m). The number of scaling phases is O(log M), where M < mUmax + nbmax. The running time per augmentation is the time to solve a single shortest path problem using Dijkstra’s algorithm. The total running time is O((m log M)(m + n log nC)).

Summary. The successive shortest path algorithm maintains dual feasibility and moves towards a feasible flow. The capacity scaling algorithm is a scaling variant of the successive shortest path algorithm; it was the first polynomial time algorithm for the min cost flow problem (Edmonds and Karp, 1972).

Some Final Remarks. The number of augmentations can be reduced from O(m log M) to O(m log n) (Orlin, 1988); the algorithm is slightly different, and the analysis is more complex. Next lectures: network flow applications and LP, and the network simplex algorithm.

MIT OpenCourseWare, http://ocw.mit.edu. 15.082J / 6.855J / ESD.78J Network Optimization, Fall 2010. For information about citing these materials or our Terms of Use, visit http://ocw.mit.edu/terms.