
Graph Partitioning using Single Commodity Flows


1 Graph Partitioning using Single Commodity Flows
Rohit Khandekar, UC Berkeley. Joint work with Satish Rao and Umesh Vazirani.

2 Graph Partitioning

3 Graph Partitioning

4 Outline
The sparsest cut problem; Previous work; Our algorithm and outline of analysis; Open questions.

5 The Sparsest Cut Problem
Given a graph G=(V,E), find a cut (S, T = V \ S) that minimizes the ratio of the number of edges across the cut to the size of the smaller side: minimize the sparsity ф(S) = |E(S,T)| / min{|S|, |T|}.
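Not part of the slides: a minimal Python sketch of this definition, assuming a networkx graph and a nonempty proper vertex subset (the function name and library choice are illustrative).

```python
# Minimal sketch of the sparsity definition; assumes G is an undirected
# networkx graph and S is a nonempty proper subset of its vertices.
import networkx as nx

def sparsity(G: nx.Graph, S: set) -> float:
    """phi(S) = |E(S,T)| / min(|S|, |T|), where T = V \\ S."""
    T = set(G.nodes()) - S
    crossing = sum(1 for u, v in G.edges() if (u in S) != (v in S))
    return crossing / min(len(S), len(T))
```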

6 The Sparsest Cut Problem
A fundamental NP-hard combinatorial problem. A central object of study in the theory of Markov chains and of geometric embeddings. An algorithmic primitive in clustering, divide-and-conquer, packet routing in distributed networks, etc.

7 The Sparsest Cut Problem
Related to the conductance: conductance(S) = |E(S,T)| / min{ ∑_{v∈S} d(v), ∑_{v∈T} d(v) }. We have 0 ≤ conductance ≤ 1, and if the degree is bounded, sparsity ≈ conductance.
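A companion sketch for conductance under the same illustrative assumptions as above; for bounded-degree graphs the denominator differs from min{|S|, |T|} by at most the degree bound, which is the "sparsity ≈ conductance" remark.

```python
# Conductance of a cut; same assumptions as the sparsity sketch above.
import networkx as nx

def conductance(G: nx.Graph, S: set) -> float:
    """|E(S,T)| divided by the smaller of the two degree sums."""
    T = set(G.nodes()) - S
    crossing = sum(1 for u, v in G.edges() if (u in S) != (v in S))
    vol = min(sum(d for _, d in G.degree(S)), sum(d for _, d in G.degree(T)))
    return crossing / vol
```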

8 Previous work
Three approaches in theory: Spectral method [Alon-Milman'85], sparsity = √ф; Multicommodity flow [Leighton-Rao'88], sparsity = O(ф · log n); Semidefinite programming [Arora-Rao-Vazirani'04], sparsity = O(ф · √log n).

9 Previous work
In practice, the most successful graph-partitioning heuristics use eigenvector-based approaches, multi-level clustering, and max-flows: Chaco [Hendrickson-Leland'94], METIS [Karypis-Kumar'98], and eigenvector + single-commodity max-flow [Lang'04].

10 Multi-commodity Flows
Send flow between multiple source-sink pairs simultaneously. Examples: Leighton-Rao send flow between every pair of vertices (embedding the complete graph); Arora-Rao-Vazirani generalize this approach to embedding an expander graph. Computing multicommodity flows (currently) takes Ω(n²) time (n = number of vertices), even approximately.

11 Question Can we get good approximations using a few
single-commodity flow computations? Answer: YES. There exists an algorithm that finds an O(log² n) approximation using O(log² n) single-commodity max-flow computations. It runs in time O*(n^{3/2}).

12 Approximation vs. Running time
[Chart: approximation ratio vs. running time. Axis values: √log n, log n, log² n (approximation ratio) and n, n^{3/2}, "quadratic" (running time); plotted points labeled AM, LS, ST, AHK, LR, ARV, and this work.]

13 Expanders
We call a weighted graph H=(V,F,w) an "α-expander" if the sparsity of every cut is at least α: ф(S) = w(S,T) / min{|S|, |T|} ≥ α for every cut (S, T = V \ S).
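For very small graphs this definition can be checked by brute force over all cuts; a Python sketch under the same illustrative conventions (networkx, a 'weight' edge attribute defaulting to 1), not part of the talk.

```python
# Exhaustive check of the alpha-expander condition; exponential in |V|,
# so only usable for toy examples.
from itertools import combinations
import networkx as nx

def is_alpha_expander(H: nx.Graph, alpha: float) -> bool:
    V = list(H.nodes())
    n = len(V)
    for k in range(1, n // 2 + 1):
        for S in combinations(V, k):
            S = set(S)
            w = sum(d.get("weight", 1.0)
                    for u, v, d in H.edges(data=True)
                    if (u in S) != (v in S))          # weight crossing the cut
            if w / min(len(S), n - len(S)) < alpha:   # sparsity of this cut
                return False
    return True
```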

14 Embedding a Graph into another
[Figure: a graph G=(V,E) and a weighted graph H=(V,F,w) on the same vertex set.]

15 Embedding a Graph into another
[Figure: an edge e of H with weight w_e is routed in G along flow paths f1, f2, f3 with f1 + f2 + f3 = w_e.] Route such a flow for each edge e ∈ H without violating the (unit) edge capacities of G. If an α-expander can be embedded in G, then G is also an α-expander.
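The last claim follows from a one-line argument; the standard reasoning, sketched in LaTeX (not verbatim from the talk):

```latex
\[
  |E_G(S,T)| \;\ge\; w_H(S,T) \;\ge\; \alpha \cdot \min\{|S|,\,|T|\}
  \qquad \text{for every cut } (S,\, T = V \setminus S).
\]
% The first inequality holds because the flow routed in G for the H-edges
% crossing the cut must itself cross the cut while respecting G's unit edge
% capacities; the second is H's alpha-expansion. Hence G is an alpha-expander.
```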

16 Main Theorem
Given a graph G=(V,E) on n vertices and α ≤ 1, there exists an algorithm that either outputs a cut of sparsity at most α, or proves that every cut has sparsity at least α/log² n by embedding an (α/log² n)-expander in G. The algorithm does O(log² n) single-commodity max-flow computations and runs in time O*(n^{3/2}).

17 Algorithm

18 Assume α = 1 Output a cut (S,T=V \ S) such that
|E(S,T)| ≤ min {|S|, |T|}, OR embed a (1/log² n)-expander in G. The algorithm tries to do one of these two things.

19-24 We vs. Adversary
[Figures spanning six slides: the game played on G=(V,E) while the weighted graph H=(V,F,w) is built up. In each round the adversary proposes a bisection of the vertices into two halves of size n/2; we route a single-commodity max-flow between the two halves in G; if the flow is large we extract a matching across the bisection and add it to H, and otherwise we output the cut exposed by the flow.]
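A hedged Python sketch of one round of this game from the flow player's side, assuming α = 1, unit capacities, integral flows, networkx, and vertex names that do not clash with the auxiliary source/sink nodes; the talk's exact flow formulation and matching extraction may differ in detail.

```python
# One round of the flow ("matching") player, given a bisection (A, B).
import networkx as nx

def flow_round(G: nx.Graph, A: set, B: set):
    """Return ("cut", S) if a sparse cut is exposed, else ("matching", M)."""
    D = nx.DiGraph()
    for u, v in G.edges():
        D.add_edge(u, v, capacity=1)          # each edge of G usable with
        D.add_edge(v, u, capacity=1)          # capacity 1 in both directions
    for a in A:
        D.add_edge("s", a, capacity=1)        # one unit out of each A-vertex
    for b in B:
        D.add_edge(b, "t", capacity=1)        # one unit into each B-vertex
    value, flow = nx.maximum_flow(D, "s", "t")
    if value < len(A):                        # flow < n/2: a cut is exposed
        _, (S, _) = nx.minimum_cut(D, "s", "t")
        return "cut", S - {"s"}
    # Flow value is n/2: strip it into unit s-t paths; the A- and B-endpoints
    # of these paths form a perfect matching across the bisection.
    matching = []
    for a in A:
        u, prev = a, a
        while u != "t":
            v = next(x for x, f in flow[u].items() if f > 0)
            flow[u][v] -= 1                   # consume one unit of flow
            prev, u = u, v
        matching.append((a, prev))            # 'prev' is the matched B-vertex
    return "matching", matching
```

The bisection (A, B) is supplied by the adversary (the cut player of slides 31-34); a sketch of that step appears after slide 34.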

25 If we output a cut …
[Figure: the cut (S, T = V \ S) exposed by the max-flow computation; k and l are quantities read off the figure.] Cut size = n/2 – k + l + |E(S,T)| < n/2. Therefore |E(S,T)| < k – l ≤ |S| = min {|S|, |T|} (assuming |S| ≤ |T|).

26 On the other hand …
Lemma: After O(log² n) iterations, H becomes an Ω(1)-expander. Proof: later.

27 Lemma implies Main Theorem
H is a “sum” of O(log² n) matchings. Each matching is routable in G. Therefore H/O(log² n) is routable in G. Since H is an Ω(1)-expander, H/O(log² n) is an Ω(1/log² n)-expander.

28 How to prove that H becomes an expander?
A graph is an expander if and only if the random walk from every vertex mixes rapidly. [Figure: an example of a graph that is NOT an expander.]

29 “Simulating” Random Walks
[Figure: simulating a random walk: the unit of probability starting at one vertex spreads out, taking values such as 1, 1/2, and 1/4.]

30 “Simulating” Random Walks
[Figure: after further steps the probability spreads further, taking values such as 1/4 and 1/8.] H is an expander if all such distributions become uniform …

31 How does the adversary find a cut in H?
The adversary would like to find a balanced cut across which a very small amount of probability has crossed over.

32 How does the adversary find a cut in H?

33 How does the adversary find a cut in H?
[Figure legend: one marker = +1 charge, the other = –1 charge.] Randomly assign a charge of +1 or –1 to each vertex, then mix the charge along the matchings …

34 How does the adversary find a cut in H?
Order the vertices according to the final charge present and cut in half (n/2 vertices on each side).
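A sketch of this cut-player step (names and data layout are illustrative, not from the talk); it produces the bisection consumed by the flow-round sketch after slides 19-24.

```python
# Cut player: random +/-1 charges, averaged along each matching found so far,
# then a bisection of the vertices by the final charge.
import random

def cut_player(vertices, matchings):
    charge = {v: random.choice((-1.0, 1.0)) for v in vertices}
    for M in matchings:                # each M is a list of matched pairs
        for u, v in M:
            avg = (charge[u] + charge[v]) / 2.0
            charge[u] = charge[v] = avg
    order = sorted(vertices, key=lambda v: charge[v])
    half = len(vertices) // 2
    return set(order[:half]), set(order[half:])   # the proposed bisection
```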

35 Analysis

36 Outline of the Analysis
For a vertex v, let P_v be the vector of probabilities present at v from all the n walks. Initially, P_v = (0,…,0,1,0,…,0), where the 1 is at coordinate v. If we add an edge (u,v), we update these vectors as P_u := P_v := (P_u + P_v)/2.
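A small numpy sketch of this bookkeeping (an analysis device only; the algorithm never needs the matrix explicitly, and the names here are illustrative):

```python
# Row v of P is the vector P_v; a matching averages the rows of matched pairs.
import numpy as np

def average_along_matching(P: np.ndarray, matching) -> np.ndarray:
    """Apply P_u := P_v := (P_u + P_v) / 2 for every matched pair (u, v)."""
    P = P.copy()
    for u, v in matching:
        avg = (P[u] + P[v]) / 2.0
        P[u] = avg
        P[v] = avg
    return P

n = 8
P = np.eye(n)                          # initially P_v = e_v for every v
P = average_along_matching(P, [(0, 1), (2, 3), (4, 5), (6, 7)])
```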

37 Outline of the Analysis
We prove that after O(log² n) iterations, P_v ≈ π = (1/n, 1/n, …, 1/n) for all v. This, in turn, implies that after O(log² n) iterations the graph H becomes an Ω(1)-expander.

38 Potential function
Ψ = ∑_v |P_v – π|². Initial potential = ∑_v |(0,…,1,…,0) – π|² = n – 1. If (u,v) is matched, the reduction in the potential of u and v is |P_u – π|² + |P_v – π|² – 2 |(P_u + P_v)/2 – π|².

39 Potential function
If (u,v) is matched, the reduction in the potential of u and v is |P_u – π|² + |P_v – π|² – 2 |(P_u + P_v)/2 – π|² = ½ |P_u – P_v|². Therefore, to reduce the potential fast, we should match u and v when |P_u – P_v| is large. [Figure: the vectors P_u – π and P_v – π.]
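The identity behind the last equality, written out with a := P_u – π and b := P_v – π (note that (P_u + P_v)/2 – π = (a + b)/2):

```latex
\[
  |a|^2 + |b|^2 - 2\Bigl|\tfrac{a+b}{2}\Bigr|^2
  \;=\; |a|^2 + |b|^2 - \tfrac12\,|a+b|^2
  \;=\; \tfrac12\,|a-b|^2
  \;=\; \tfrac12\,|P_u - P_v|^2 .
\]
```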

40 Random Projections

41 Random Projections
[Figure: the vectors P_v – π projected onto a random direction; the vertices are split into two halves of size n/2 according to the projected values.]

42 Random Projections ≈ Mixing Random Charges
Taking projections onto a random vector r = (r1, r2, …, rn) is equivalent to mixing the initial charges r1, r2, …, rn.

43 Random Projections ≈ Mixing Random Charges
The “probability spread matrix” P has the vectors P_1, …, P_n as its rows. Projections on r are given by P · r. Note that P = M_t · M_{t–1} · … · M_1 · I. Thus P · r = M_t ( M_{t–1} ( … (M_1 · r)) … ).
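A quick numerical check of this identity, reusing the numpy conventions of the earlier sketch; taking each M_i to be the averaging matrix of the i-th matching is an assumed (but natural) reading of the slide.

```python
# Verify that mixing a random charge vector r along the matchings equals P.r.
import numpy as np

def matching_matrix(n, matching):
    M = np.eye(n)
    for u, v in matching:
        M[u, :] = 0.0
        M[v, :] = 0.0
        M[u, u] = M[u, v] = 0.5            # rows u and v both become the
        M[v, u] = M[v, v] = 0.5            # average of coordinates u and v
    return M

n = 8
matchings = [[(0, 1), (2, 3), (4, 5), (6, 7)],
             [(0, 2), (1, 3), (4, 6), (5, 7)]]
P = np.eye(n)
r = np.random.randn(n)
mixed = r.copy()
for m in matchings:
    M = matching_matrix(n, m)
    P = M @ P                              # P = M_t ... M_1 . I
    mixed = M @ mixed                      # mix charges one matching at a time
assert np.allclose(P @ r, mixed)           # projections = mixed charges
```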

44 Potential function
The projected lengths are “faithful” to the actual lengths within a factor of log n. Therefore we can argue that the potential decreases by a factor of (1 – 1/log n) in each iteration. Thus after O(log² n) iterations, the potential becomes negligible and the random walks mix.
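The arithmetic behind the last sentence (constants omitted), using the initial potential n – 1 from slide 38:

```latex
\[
  \Psi_t \;\le\; \Bigl(1 - \tfrac{1}{\log n}\Bigr)^{t}\,\Psi_0
         \;\le\; e^{-t/\log n}\,(n-1)
         \;\le\; \frac{1}{\mathrm{poly}(n)}
  \quad \text{once } t = \Theta(\log^2 n).
\]
```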

45 Running time
Number of iterations = O(log² n). Each iteration = 1 max-flow computation = O*(m^{3/2}). Sparsification [Benczur-Karger'96]: in O*(m) time, we can transform any graph G on n vertices into a graph G' on the same vertices such that G' has O(n log(n)/ε²) edges and every cut in G' has size within (1 ± ε) of the corresponding cut in G. Overall running time = O*(m + n^{3/2}).
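A rough accounting of the total, sparsifying once and running every max-flow on the sparsifier G':

```latex
\[
  \underbrace{O^{*}(m)}_{\text{sparsification}}
  \;+\;
  \underbrace{O(\log^{2} n)}_{\text{iterations}}
  \cdot
  \underbrace{O^{*}\!\bigl((n \log n / \varepsilon^{2})^{3/2}\bigr)}_{\text{max-flow on } G'}
  \;=\; O^{*}\bigl(m + n^{3/2}\bigr).
\]
```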

46 Remarks
Finally, when all the random walks mix, P becomes the n × n matrix with every entry equal to 1/n. In fact, P can be routed in G; thus we in fact embed a complete graph.

47 Extensions to Balanced Separator
Partition V into S and T = V \ S such that |S|, |T| ≥ n/3 and |E(S,T)| is minimized. The techniques can be extended to yield an O(log² n) approximation for this problem with a similar running time.

48 Open Questions
[Chart: approximation ratio vs. running time. Axis values: √log n, log n, polylog n (approximation ratio) and n, n^{3/2}, n² ("quadratic"), n³ (running time); plotted points labeled AM, LS, ST, AHK, LR, ARV, and this work, with question marks for unexplored trade-offs.] Improve the approximation ratio and/or the running time.

49 Open Questions
Our algorithm can be thought of as a “primal”-“dual” algorithm. Is there a more general framework? Can we extend this technique to other problems?

50 Thank You

51 Graph Partitioning

