15.082J and 6.855J and ESD.78J October 21, 2010 Max Flows 4

Overview of today's lecture
- Scaling algorithms
- Potential function analysis
- The Excess Scaling Algorithm: O(n^2 log U) non-saturating pushes, where U = 1 + max{u_ij : (i, j) ∈ A}, and O(nm + n^2 log U) running time.
- A proof that Highest Level Preflow Push uses O(n^2 m^(1/2)) non-saturating pushes.

Scaling Algorithms

The generic scaling approach:
1. Define a concept called Δ-optimal, where Δ is some positive integer and a 1-optimal solution is optimal for the original problem.
2. Develop a subroutine that efficiently determines a Δ0-optimal solution, where Δ0 is some (possibly large) power of 2.
3. Develop a subroutine ImproveApprox that transforms a Δ-optimal solution into a Δ/2-optimal solution.

Generic Scaling Algorithm
    Δ := 2^K for some selected value K
    determine a Δ-optimal solution x
    while Δ > 1 do
        y := ImproveApprox(x, Δ)
        x := y
        Δ := Δ/2

The Capacity Scaling Algorithm

- A flow x is called Δ-maximum if there is no augmenting path in G(x) of capacity Δ or more.
- Note. If Δ ≥ U, then x = 0 is Δ-maximum, where U = 1 + max{u_ij : (i, j) ∈ A}.
- We refer to a path in G(x) as Δ-augmenting if it is an s-t path whose residual capacity is at least Δ.
- Subroutine ImproveApprox(x, Δ): takes a flow that is Δ-maximum and outputs a flow that is Δ/2-maximum.

ImproveApprox(x, Δ)
    while there is a Δ/2-augmenting path in G(x) do
        find a Δ/2-augmenting path P in G(x);
        augment flow along P;
        update residual capacities and data structures;
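To make the phase structure concrete, here is a minimal Python sketch of capacity scaling on an adjacency-matrix network (a sketch under the assumption of integer capacities; the names delta_augmenting_path and capacity_scaling_max_flow are illustrative, not from the lecture):

```python
from collections import deque

def delta_augmenting_path(res, s, t, delta):
    """Return an s-t path using only arcs with residual capacity >= delta, or None."""
    n = len(res)
    parent = [-1] * n
    parent[s] = s
    queue = deque([s])
    while queue:
        i = queue.popleft()
        for j in range(n):
            if parent[j] == -1 and res[i][j] >= delta:
                parent[j] = i
                if j == t:
                    path = [t]
                    while path[-1] != s:
                        path.append(parent[path[-1]])
                    return path[::-1]
                queue.append(j)
    return None

def capacity_scaling_max_flow(cap, s, t):
    res = [row[:] for row in cap]              # residual capacities, modified in place
    U = 1 + max(v for row in cap for v in row)
    delta = 1
    while delta < U:                           # Δ := 2^K with Δ >= U, so x = 0 is Δ-maximum
        delta *= 2
    flow = 0
    while delta > 1:                           # the Δ-scaling phase calls ImproveApprox(x, Δ)
        half = delta // 2
        while True:                            # augment along Δ/2-augmenting paths
            path = delta_augmenting_path(res, s, t, half)
            if path is None:
                break
            amt = min(res[i][j] for i, j in zip(path, path[1:]))
            for i, j in zip(path, path[1:]):
                res[i][j] -= amt               # update residual capacities
                res[j][i] += amt
            flow += amt
        delta = half                           # the flow is now Δ/2-maximum
    return flow                                # a 1-maximum flow is a maximum flow
```

For instance, capacity_scaling_max_flow([[0, 3, 2], [0, 0, 2], [0, 0, 0]], 0, 2) returns 4. The next two slides bound the number of augmentations per phase and the number of phases.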

Analysis of Capacity Scaling

Lemma. At the beginning of the Δ-scaling phase, there is an s-t cut (S, T) such that the residual capacity of each arc (i, j) from S to T is less than Δ, where
    S = {j : there is a path of capacity ≥ Δ from s to j in G(x)}.
Consequently, the residual capacity of the cut (S, T) is less than mΔ.
(The figure illustrates such a cut on an example network with Δ = 10.)

Analysis of Capacity Scaling

Corollary. The number of augmentations per scaling phase is at most 2m.
Proof. At the start of the phase the residual capacity of the cut (S, T) is less than mΔ, and each augmentation reduces it by at least Δ/2.

Lemma. The number of times that ImproveApprox is called is at most K = ⌈log U⌉.
Proof. Initially Δ = 2^K with K = ⌈log U⌉. At each subsequent iteration Δ is halved, and the algorithm stops when Δ = 1.

The running time per scaling phase is O(m^2): at most 2m augmentations, each found and carried out in O(m) time. The total running time is therefore O(m^2 log U), and it can be improved to O(nm log U).

A preview

Next: an algorithm that scales excesses rather than capacities. It is based on preflow-push. During the Δ-scaling phase, all excesses are kept below Δ.

A Review of Preflows

At intermediate stages we permit more flow to arrive at a node than leaves it (except at s). A preflow is a function x: A → R with 0 ≤ x ≤ u such that
    e(i) = Σ_{j ∈ N} x_ji − Σ_{j ∈ N} x_ij ≥ 0   for all i ∈ N \ {s, t};
that is, e(i) is the excess at node i, the net flow into i, and it is required to be nonnegative.

A Feasible Preflow

The excess e(j) at each node j ≠ s, t is the flow in minus the flow out. (The figure shows a feasible preflow on an example network.)
Note: the total excess equals the flow out of s minus the flow into t.

Distance Labels

Distance labels d( ) are valid for G(x) if
  i.  d(t) = 0, and
  ii. d(i) ≤ d(j) + 1 for each arc (i, j) of G(x).

Defn. An arc (i, j) is admissible if r_ij > 0 and d(i) = d(j) + 1.

Lemma. Let d( ) be a valid distance label. Then d(i) is a lower bound on the distance from i to t in the residual network.
(Proof idea, from the figure: along a shortest path P from i to t in G(x), the label can drop by at most 1 per arc and ends at d(t) = 0, so d(i) ≤ |P|.)
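As a small illustration of the two validity conditions, here is a hypothetical checker (an assumption for exposition, not part of the lecture) for a residual network given as a capacity matrix:

```python
def labels_are_valid(res, d, t):
    """Check d(t) = 0 and d(i) <= d(j) + 1 for every residual arc (i, j)."""
    n = len(res)
    if d[t] != 0:
        return False
    return all(d[i] <= d[j] + 1
               for i in range(n) for j in range(n)
               if res[i][j] > 0)
```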

Distance labels and gaps

We say that there is a gap at distance level k (0 < k < n) if there is no node with distance label k.

Lemma. Suppose there is a gap at distance level k. Then for any node j with d(j) > k, there is no path from j to t in the residual network.
Proof. Along any path from j to t in G(x) the labels drop by at most 1 per arc and end at d(t) = 0, so the path would have to pass through a node whose distance label is exactly k; no such node exists.

Active nodes in the residual network

A node j ≠ s, t is active if e(j) > 0 and there is no gap at a distance level less than d(j) (that is, j is not above a gap).

The preflow push algorithm will push flow from active nodes "towards the sink", relying on the distance labels d( ) to guide the pushes.

Goldberg-Tarjan Preflow Push Algorithm

Procedure Preprocess
    x := 0;
    compute the exact distance labels d(i) for each node;
    x_sj := u_sj for each arc (s, j) ∈ A(s);
    d(s) := n;

Algorithm PREFLOW-PUSH
    preprocess;
    while there is an active node i do
        select an active node i;
        push/relabel(i);
    convert the max preflow into a max flow

Note: the while loop ends when there are no active nodes; i.e., if e(j) > 0, then node j is above a gap.
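Below is a minimal Python sketch of this generic algorithm on an adjacency-matrix network. It follows the preprocess and push/relabel structure above but omits the gap test and the final conversion step, simply returning the flow value that accumulates at t; the names (preflow_push_max_flow, res, amt) are illustrative assumptions, not from the lecture.

```python
from collections import deque

def preflow_push_max_flow(cap, s, t):
    n = len(cap)
    res = [row[:] for row in cap]            # residual capacities
    e = [0] * n                              # excesses
    d = [n] * n                              # exact distance labels via backward BFS from t
    d[t] = 0
    queue = deque([t])
    while queue:
        j = queue.popleft()
        for i in range(n):
            if cap[i][j] > 0 and d[i] == n:  # arc (i, j) can carry flow toward t
                d[i] = d[j] + 1
                queue.append(i)
    d[s] = n
    for j in range(n):                       # preprocess: saturate every arc out of s
        if res[s][j] > 0:
            e[j] += res[s][j]
            res[j][s] += res[s][j]
            res[s][j] = 0
    active = deque(j for j in range(n) if j not in (s, t) and e[j] > 0)
    while active:                            # push/relabel until no active node remains
        i = active.popleft()
        while e[i] > 0:
            pushed = False
            for j in range(n):
                if res[i][j] > 0 and d[i] == d[j] + 1:     # admissible arc: push
                    amt = min(e[i], res[i][j])
                    res[i][j] -= amt
                    res[j][i] += amt
                    e[i] -= amt
                    e[j] += amt
                    if j not in (s, t) and j not in active:
                        active.append(j)
                    pushed = True
                    if e[i] == 0:
                        break
            if not pushed:                                  # no admissible arc: relabel
                d[i] = 1 + min(d[j] for j in range(n) if res[i][j] > 0)
    return e[t]                              # flow into t equals the maximum flow value
```

On the example used earlier, preflow_push_max_flow([[0, 3, 2], [0, 0, 2], [0, 0, 0]], 0, 2) returns 4.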

The Excess Scaling Algorithm

- A preflow x is called Δ-maximum if e(j) < Δ for all j ≠ s, t.
- Note. If Δ ≥ U, then the preflow after the preprocess step is Δ-maximum.
- If a preflow is 1-maximum and d(s) = n, then the preflow is a maximum flow.
- Subroutine ImproveApprox(x, Δ): takes a preflow x that is Δ-maximum and outputs a preflow that is Δ/2-maximum.

Excess Scaling Algorithm
    Δ := 2^K where K = ⌈log U⌉
    Preprocess
    while Δ > 1 do
        y := ImproveApprox(x, Δ)
        x := y
        Δ := Δ/2
    convert the maximum preflow x into a maximum flow

Improve Approx

We say that a node i is Δ-active if
1. e(i) ≥ Δ, and
2. node i is not above a gap.
(If a node i is above a gap, then there is no path from i to t in G(x), so its excess cannot reach the sink.)

Subroutine ImproveApprox(x, Δ)
    while G(x) has a Δ/2-active node do
        among the Δ/2-active nodes, choose a node i with minimum distance label;
        perform push/relabel(i), where the amount pushed in (i, j) is min(Δ/2, r_ij)

(Example from the figure, with Δ = 64: send 2 units of flow in (7, 4); then send 32 units of flow in (7, 5).)
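Putting the pieces together, here is a self-contained Python sketch of the excess scaling algorithm. The Δ/2-active test, the choice of a node with minimum distance label, and the push amount min(Δ/2, r_ij) follow the slides; the gap test is done by a naive scan and the conversion step is omitted, so the sketch simply returns the flow value at t. All helper names are illustrative assumptions.

```python
from collections import deque

def above_gap(d, i, n):
    """True if some level k with 0 < k < min(d(i), n) has no node, so i cannot reach t."""
    levels = set(d)
    return any(k not in levels for k in range(1, min(d[i], n)))

def excess_scaling_max_flow(cap, s, t):
    n = len(cap)
    res = [row[:] for row in cap]
    U = 1 + max(v for row in cap for v in row)
    e = [0] * n
    d = [n] * n                              # exact distance labels via backward BFS from t
    d[t] = 0
    queue = deque([t])
    while queue:
        j = queue.popleft()
        for i in range(n):
            if cap[i][j] > 0 and d[i] == n:
                d[i] = d[j] + 1
                queue.append(i)
    d[s] = n
    for j in range(n):                       # preprocess: saturate every arc out of s
        if res[s][j] > 0:
            e[j] += res[s][j]
            res[j][s] += res[s][j]
            res[s][j] = 0
    delta = 1
    while delta < U:
        delta *= 2                           # after preprocess, the preflow is Δ-maximum
    while delta > 1:                         # ImproveApprox(x, Δ)
        half = delta // 2
        while True:
            act = [i for i in range(n)
                   if i not in (s, t) and e[i] >= half and not above_gap(d, i, n)]
            if not act:
                break                        # the preflow is now Δ/2-maximum
            i = min(act, key=lambda v: d[v]) # Δ/2-active node with minimum distance label
            for j in range(n):
                if res[i][j] > 0 and d[i] == d[j] + 1:      # admissible arc: push
                    amt = min(half, res[i][j])
                    res[i][j] -= amt
                    res[j][i] += amt
                    e[i] -= amt
                    e[j] += amt
                    break
            else:                                           # no admissible arc: relabel
                d[i] = 1 + min(d[j] for j in range(n) if res[i][j] > 0)
        delta = half
    return e[t]     # trapped excess would be returned to s by the (omitted) conversion step
```

This sketch recomputes the set of Δ/2-active nodes on every iteration; the LIST buckets on the following slides are what make that selection O(1) amortized.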

Lemmas about pushing

Lemma 1. Throughout ImproveApprox, e(j) < Δ for all active nodes j.
Proof. If we push in (i, j), then e(j) < Δ/2 before the push (j has a smaller distance label than i, so it is not Δ/2-active), and the amount pushed is at most Δ/2.

Lemma 2. Each non-saturating push sends exactly Δ/2 units of flow.
Proof. The amount pushed in (i, j) is min(Δ/2, r_ij); a non-saturating push does not use up r_ij, so it sends Δ/2.

Analysis of the Excess Scaling Algorithm

Theorem. The Excess Scaling Algorithm finds a maximum flow in O(nm + n^2 log U) steps.

Proof. We have already shown the following in the analysis of preflow-push algorithms:
1. If the algorithm terminates, it terminates with a max flow.
2. The time spent in all steps other than non-saturating pushes is O(nm).

What remains to be proved:
3. A Δ/2-active node with lowest distance label can be selected in O(1) steps (amortized).
4. The number of non-saturating pushes is O(n^2 log U). For this we will rely on a new potential function.

Selecting Δ-active nodes efficiently

It's more challenging than one might guess. LIST is an array of size n: LIST(k) points to a linked list of the nodes i with d(i) = k and e(i) ≥ Δ/2. (The figure shows LIST for an example network.)

Selecting Δ-active nodes efficiently

Maintain a pointer into LIST. Let Ψ_k be the index of LIST pointed to at iteration k.

- If Ψ_{k+1} < Ψ_k, then the push at the k-th iteration created a Δ/2-active node at level Ψ_{k+1} = Ψ_k − 1, and it takes O(1) steps to identify Ψ_{k+1}.
- If Ψ_{k+1} > Ψ_k, then it takes O(Ψ_{k+1} − Ψ_k) steps to identify Ψ_{k+1}.

View Ψ_k as a potential function, and let δ_k = Ψ_{k+1} − Ψ_k. Then δ_k ≥ −1, and
    Losses in potential = Initial potential + Gains in potential − Final potential.

Selecting Δ-active nodes efficiently

Each loss in Ψ is exactly 1, and it occurs following a push. So the losses in Ψ are at most the number of pushes.

Gains in Ψ = Final Ψ + Losses in Ψ − Initial Ψ, so Gains in Ψ ≤ n + number of pushes.

The time spent scanning LIST is O(Gains in Ψ + Losses in Ψ) = O(n + number of pushes).
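A hypothetical Python sketch of the LIST buckets and the pointer Ψ (names and details are assumptions for illustration; the lecture specifies only the array LIST and the scanning argument above):

```python
class ActiveBuckets:
    """LIST(k) holds the Δ/2-active nodes with distance label k; psi is the scan pointer."""

    def __init__(self, n):
        self.LIST = [[] for _ in range(2 * n)]   # one bucket per possible distance label
        self.psi = 0

    def insert(self, node, level):
        """Record that `node` became Δ/2-active at distance label `level`."""
        self.LIST[level].append(node)
        if level < self.psi:
            self.psi = level                     # a push can lower the pointer by only 1

    def pop_lowest(self):
        """Return a Δ/2-active node of minimum distance label, or None if none exists."""
        while self.psi < len(self.LIST) and not self.LIST[self.psi]:
            self.psi += 1                        # upward scans are charged to the potential Ψ
        if self.psi == len(self.LIST):
            return None
        return self.LIST[self.psi].pop()
```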

A potential function for bounding NSAT

Φ = Σ_{j ∈ N} e(j) d(j) / Δ

The potential is the "gravitational potential" of the excesses, measured in units of Δ. (In the figure's example, with Δ = 64, Φ = 384 / 64 = 6.)

Losses in Φ = Φ_0 + Gains in Φ − Φ_f

What if the k-th iteration is a push?

Recall Φ_k = Σ_{j ∈ N} e_k(j) d_k(j) / Δ. Suppose that α units of flow are sent in the admissible arc (i, j) at the k-th iteration, so d_k(i) = d_k(j) + 1, distance labels do not change (d_{k+1} = d_k), and
    e_{k+1}(i) = e_k(i) − α,   e_{k+1}(j) = e_k(j) + α.

Then
    δ_k = Φ_{k+1} − Φ_k
        = [ e_{k+1}(i) d_{k+1}(i) + e_{k+1}(j) d_{k+1}(j) − e_k(i) d_k(i) − e_k(j) d_k(j) ] / Δ
        = [ −α d_k(i) + α d_k(j) ] / Δ
        = −α / Δ.

So every push decreases Φ. A non-saturating push at step k sends α = Δ/2 units of flow; in this case δ_k = −1/2.

What if the k-th iteration is a relabel?

Suppose that node j is relabeled at step k, with d_{k+1}(j) = d_k(j) + w and e_{k+1}(j) = e_k(j). Then
    δ_k = Φ_{k+1} − Φ_k
        = [ e_{k+1}(j) d_{k+1}(j) − e_k(j) d_k(j) ] / Δ
        = [ e_k(j) (d_k(j) + w) − e_k(j) d_k(j) ] / Δ
        = e_k(j) w / Δ
        ≤ w      (since e_k(j) < Δ).

So every relabel increases Φ, and increasing d(j) by w increases Φ by at most w.

Bounding NSAT

Let NSAT(Δ) = the number of nonsaturating pushes in the Δ-scaling phase, and recall Φ_k = Σ_{j ∈ N} e_k(j) d_k(j) / Δ.

Each nonsaturating push decreases Φ by 1/2, so
    NSAT(Δ)/2 ≤ Losses in Φ = Φ_0 + Gains in Φ − Φ_f = O(n^2),
since Φ_0 = O(n^2) at the start of the phase (each e(j) < Δ and each d(j) = O(n)) and the gains from relabels within the phase total O(n^2).

Theorem. The total number of nonsaturating pushes over all scaling phases is O(n^2 log U).
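For reference, the accounting over all scaling phases can be written out as follows (a sketch; the hidden constants depend on whether distance labels are bounded by n or 2n):

\[
\frac{\mathrm{NSAT}(\Delta)}{2} \;\le\; \Phi_0 + \text{(gains in the $\Delta$-phase)} - \Phi_f,
\qquad
\sum_{\text{phases}} \frac{\mathrm{NSAT}(\Delta)}{2}
\;\le\; O(\log U)\cdot O(n^2) + O(n^2)
\;=\; O(n^2 \log U),
\]

where the first term collects the starting potentials of the O(log U) phases and the second collects the total gain from relabels over the whole algorithm.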

Mental Break

You are entitled to receive something if you bring a raccoon's head to the town hall in Henniker, NH. What is it? $10.
In 1980, a Las Vegas hospital suspended workers because of what they were betting on. What was it? They were betting on when patients would die.
In what year did Christians begin celebrating December 25 as Jesus Christ's birthday? 440 C.E.

Mental Break

The NERF ball is a popular children's toy. What does NERF stand for? Nothing.
In advertising displays that include a clock, what time is most frequently given? 10:10.
Babe Ruth kept something under his hat to keep cool. What was it? A cabbage leaf. He changed it every 2 innings.

Highest Level Pushing

The highest level pushing algorithm is the special case of the Goldberg-Tarjan preflow push algorithm in which pushes are always made from an active node with maximum distance label.

Theorem. The running time of the highest level pushing algorithm is O(n^2 m^(1/2)).

Note: selecting an active node with highest distance label is carried out similarly to selecting a Δ-active node at the lowest level. It remains to prove that the number of nonsaturating pushes is O(n^2 m^(1/2)). The analysis is involved, but it is much easier than the analysis in the text.

Phases

A phase consists of a consecutive sequence of pushes from nodes at the same level.

Theorem. The number of phases is O(n^2).

Proof. Let Γ be the highest level of an active node; view Γ as a potential function. A phase ends in one of two ways:
1. Γ increases (because of a relabel), or
2. Γ decreases (there are no more active nodes at the current level).
The total increase in Γ is O(n^2) (by the bound on relabels), so
    Decreases in Γ = Γ_0 + Increases in Γ − Γ_f ≤ n + O(n^2) − 0 = O(n^2).
Conclusion: there are O(n^2) phases, since each phase ends with an increase or a decrease of Γ.

A potential function for highest level pushing

Φ(j) is the number of active nodes i with d(i) ≥ d(j), and Φ = Σ_{j ∈ N} Φ(j).

(In the figure's example, nodes 7, 8, 9, and 11 are active; e.g., Φ(6) = 0, Φ(10) = 1, Φ(3) = 3, Φ(2) = 4, and the total is Φ = 20.)
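A tiny illustrative helper (an assumption for exposition, not from the lecture) that computes this potential from the distance labels and the set of active nodes:

```python
def highest_level_potential(d, active):
    """Phi(j) counts active nodes i with d(i) >= d(j); Phi sums Phi(j) over all nodes j."""
    return sum(sum(1 for i in active if d[i] >= d[j]) for j in range(len(d)))
```

For example, with d = [0, 1, 1, 2] and active = [3], every Φ(j) equals 1, so highest_level_potential([0, 1, 1, 2], [3]) returns 4.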

The effect of a node becoming active

Suppose that node i is made active. Then Φ(j) increases by 1 for each node j with d(j) ≤ d(i), and is unchanged otherwise.

Conclusion: if a node is made active and there are no other changes in potential, then Φ increases by less than n.

Bounds on δ_k

Recall that Φ_k(j) is the number of active nodes i with d_k(i) ≥ d_k(j), and Φ_k = Σ_{j ∈ N} Φ_k(j).

Consider the case in which the k-th step is a saturating push in arc (u, v). Distance labels do not change (d_{k+1} = d_k). Before the step, u is active (e_k(u) > 0) and, in the worst case, v is inactive (e_k(v) = 0); after the step, both u and v may be active (e_{k+1}(u) > 0, e_{k+1}(v) > 0). So at most one node becomes active, and
    δ_k = Φ_{k+1} − Φ_k < n.

Bounds on δ_k

Consider the case in which the k-th step is a non-saturating push in arc (u, v), where w = d(u). After the step, e_{k+1}(u) = 0, so u is no longer active. Let S be the set of nodes at level w = d(u). Then
    for j ∈ S,   Φ_{k+1}(j) = Φ_k(j) − 1;
    for j ∉ S,   Φ_{k+1}(j) ≤ Φ_k(j).
Hence δ_k = Φ_{k+1} − Φ_k ≤ −|S|.

Bounds on δ_k

Consider the case in which the k-th step is a relabel of node u, say from level w to level w + 1 (or higher). Only u's label changes, so for every j ∈ N, Φ_{k+1}(j) ≤ Φ_k(j) + 1. Hence
    δ_k = Φ_{k+1} − Φ_k < n.

Bounding NSAT

Losses in Φ = Φ_0 + Gains in Φ − Φ_f = O(n^2 m), since Φ_0 ≤ n^2, each saturating push or relabel increases Φ by less than n, and there are O(nm) saturating pushes and O(n^2) relabels.

We say that a phase is large if it has at least K nonsaturating pushes; otherwise it is small. The number of nonsaturating pushes is NSAT-large + NSAT-small.

NSAT-small ≤ K × number of phases = O(Kn^2).
NSAT-large × K ≤ Losses in Φ = O(n^2 m), so NSAT-large ≤ Losses/K = O(n^2 m/K). (Each nonsaturating push in a large phase comes from a distinct node at the phase's level, so |S| ≥ K and each such push decreases Φ by at least K.)

Choose K = m^(1/2). Then NSAT = O(Kn^2 + n^2 m/K) = O(n^2 m^(1/2)).

Review of Lecture

Scaling techniques are useful when it is quicker to solve an optimization problem starting from the optimal solution of a closely related problem:
- capacity scaling
- excess scaling

Potential functions are useful when the total running time is less than the bound obtained by adding up running-time bounds for each step; they permit a kind of global (amortized) analysis:
- time to select active nodes in excess scaling
- NSAT pushes for excess scaling
- number of phases for the highest level pushing algorithm
- NSAT pushes for the highest level pushing algorithm

MIT OpenCourseWare
15.082J / 6.855J / ESD.78J Network Optimization, Fall 2010
For information about citing these materials or our Terms of Use, visit: