Graph Sparsifiers by Edge-Connectivity and Random Spanning Trees. Nick Harvey, U. Waterloo C&O. Joint work with Isaac Fung.


1 Graph Sparsifiers by Edge-Connectivity and Random Spanning Trees. Nick Harvey, U. Waterloo C&O. Joint work with Isaac Fung.

2 What are sparsifiers? Weighted subgraphs that approximately preserve some properties of a graph. (n = # vertices, m = # edges.)
- Approximating all cuts: sparsifiers with O(n log n / ε²) edges, every cut approximated within 1±ε [BK'96]; Õ(m)-time algorithm to construct them.
- Spectral approximation: spectral sparsifiers with O(n log n / ε²) edges, the "entire spectrum" of the Laplacian matrix approximated within 1±ε [SS'08]; poly(n)-time construction [BSS'09].

3 Why are sparsifiers useful? Approximating all cuts: sparsifiers give fast algorithms for cut/flow problems. (n = # vertices, m = # edges, v = flow value.)

Problem                                 | Approximation | Runtime        | Reference
Min st Cut                              | 1+ε           | Õ(n²)          | BK'96
Sparsest Cut                            | O(log n)      | Õ(n²)          | BK'96
Max st Flow                             | 1             | Õ(m+nv)        | KL'02
Sparsest Cut                            |               | Õ(n²)          | AHK'05
Sparsest Cut                            | O(log² n)     | Õ(m+n^{3/2})   | KRV'06
Sparsest Cut                            |               | Õ(m+n^{3/2+ε}) | S'09
Perfect Matching in Regular Bip. Graphs | n/a           | Õ(n^{1.5})     | GKK'09
Sparsest Cut                            |               | Õ(m+n^{1+ε})   | M'10

4 Our Motivation
- The BSS algorithm is very mysterious, and "too good to be true".
- Are there other methods to get sparsifiers with only O(n/ε²) edges?
- Wild speculation: the union of O(1/ε²) random spanning trees gives a sparsifier (if weighted appropriately). True for the complete graph [GRV '08].
- We prove: the speculation is false, but the union of O(log² n / ε²) random spanning trees gives a sparsifier.

5 Formal problem statement (n = # vertices, m = # edges)
Design an algorithm such that
- Input: an undirected graph G=(V,E).
- Output: a weighted subgraph H=(V,F,w), where F ⊆ E and w : F → ℝ.
- Goals:
  - | |δ_G(U)| − w(δ_H(U)) | ≤ ε |δ_G(U)| for all U ⊆ V, where |δ_G(U)| is the number of edges between U and V\U in G, and w(δ_H(U)) is the weight of the edges between U and V\U in H.
  - |F| = O(n log n / ε²).
  - Running time Õ(m / ε²).
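The goal above can be checked by brute force on small graphs. A minimal sketch; the function name and the edge-list/weight-dictionary representation are my own illustration, not from the talk:

```python
def is_cut_sparsifier(n, edges_G, weights_H, eps):
    """Check | |δ_G(U)| - w(δ_H(U)) | <= eps * |δ_G(U)| for every nonempty
    proper U ⊆ V, by enumerating all 2^n - 2 subsets (small n only)."""
    for mask in range(1, 2 ** n - 1):
        U = {v for v in range(n) if mask >> v & 1}
        cut_G = sum(1 for u, v in edges_G if (u in U) != (v in U))
        cut_H = sum(w for (u, v), w in weights_H.items() if (u in U) != (v in U))
        if abs(cut_G - cut_H) > eps * cut_G:
            return False
    return True
```

For example, a triangle with all weights 1 is (trivially) a sparsifier of itself, while doubling one edge weight violates the guarantee for small ε.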

6 Sparsifying the complete graph
- Sampling: construct H by sampling every edge of G with probability p = 100 log n / n, and give each sampled edge weight 1/p.
- Properties of H: # sampled edges = O(n log n), and w(δ_H(U)) ≈ |δ_G(U)| for all U ⊆ V. So H is a sparsifier of G.
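The uniform scheme above is short to sketch. Here the constant c is a parameter (the slide uses c = 100) and the function name is my own:

```python
import math
import random

def sparsify_uniform(n, c, seed=0):
    """Sample each edge of K_n independently with p = c*log(n)/n (capped at 1);
    each kept edge gets weight 1/p, so every cut is preserved in expectation."""
    rng = random.Random(seed)
    p = min(1.0, c * math.log(n) / n)
    H = {}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                H[(u, v)] = 1.0 / p
    return H, p
```

With the 1/p reweighting, the expected weight crossing any cut equals the number of edges of K_n crossing it.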

7 Generalize to arbitrary G?
- We can't sample all edges with the same probability!
- Idea [BK'96]: sample low-connectivity edges with high probability (keep these), and high-connectivity edges with low probability (eliminate most of these).

8 Non-uniform sampling algorithm [BK'96]
Input: graph G=(V,E), parameters p_e ∈ [0,1].
Output: a weighted subgraph H=(V,F,w), where F ⊆ E and w : F → ℝ.
For i = 1 to ρ:
  For each edge e ∈ E:
    With probability p_e: add e to F and increase w_e by 1/(ρ p_e).
Main question: can we choose ρ and the p_e's to achieve the sparsification goals?
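The loop above translates directly; representing the probabilities and weights as dictionaries is an assumption of this sketch:

```python
import random

def nonuniform_sample(edges, p, rho, seed=0):
    """rho rounds; in each round keep edge e with probability p[e] and add
    1/(rho * p[e]) to its weight, so that E[w_e] = 1 for every edge."""
    rng = random.Random(seed)
    w = {}
    for _ in range(rho):
        for e in edges:
            if rng.random() < p[e]:
                w[e] = w.get(e, 0.0) + 1.0 / (rho * p[e])
    return w
```

Setting p_e = 1 for every edge recovers each weight as exactly ρ · 1/(ρ · 1) = 1, a quick sanity check of the reweighting.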

9 Non-uniform sampling algorithm [BK'96], continued
Claim: H perfectly approximates G in expectation! For any e ∈ E, E[w_e] = 1, so for every U ⊆ V, E[w(δ_H(U))] = |δ_G(U)|.
Goal: show that every w(δ_H(U)) is tightly concentrated.
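The claim is a one-line calculation: over the ρ rounds, edge e is kept in each round with probability p_e, and each kept copy contributes 1/(ρ p_e) to w_e:

```latex
\mathbb{E}[w_e] \;=\; \rho \cdot p_e \cdot \frac{1}{\rho\, p_e} \;=\; 1,
\qquad\text{so by linearity}\qquad
\mathbb{E}\big[w(\delta_H(U))\big] \;=\; \sum_{e \in \delta_G(U)} \mathbb{E}[w_e] \;=\; |\delta_G(U)|.
```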

10 Prior Work (assume ε is constant)
Benczur-Karger '96:
- Set ρ = O(log n), p_e = 1/"strength" of edge e: the maximum k such that e is contained in a k-edge-connected vertex-induced subgraph of G (similar to edge connectivity).
- All cuts are preserved.
- Σ_e p_e ≤ n, so |F| = O(n log n) edges in the sparsifier.
- Running time O(m log³ n).
Spielman-Srivastava '08:
- Set ρ = O(log n), p_e = "effective resistance" of edge e (view G as an electrical network where each edge is a 1-ohm resistor).
- H is a spectral sparsifier of G, so all cuts are preserved.
- Σ_e p_e = n−1, so |F| = O(n log n) edges in the sparsifier.
- Running time O(m log^50 n), improved to O(m log³ n) [Koutis-Miller-Peng '10]. Uses a "matrix Chernoff bound".

11 Our Work (assume ε is constant)
Fung-Harvey '10 (independently Hariharan-Panigrahi '10):
- Set ρ = O(log² n), p_e = 1/(edge connectivity of e), i.e., the minimum size of a cut that contains e.
- All cuts are preserved.
- Σ_e p_e ≤ n, so |F| = O(n log² n).
- Running time O(m log² n).
- Advantages: edge connectivities are natural and easy to compute; faster than previous algorithms; implies that sampling by edge strength, effective resistances, or random spanning trees works. (Why spanning trees? Pr[e ∈ T] = effective resistance of e, and edges are negatively correlated.)
- Disadvantages: extra log factor; no spectral sparsification.

12 Our Work, continued
Extra trick: we can shrink |F| to O(n log n) by using Benczur-Karger to sparsify our sparsifier! Running time becomes O(m log² n) + Õ(n).

13 Our Work, continued
Panigrahi '10: a sparsifier with O(n log n / ε²) edges, with running time O(m) in unweighted graphs and O(m) + Õ(n/ε²) in weighted graphs.

14 Notation: k_uv = minimum size of a cut separating u and v; for an edge e = uv, write k_e = k_uv.
Main ideas:
- Partition the edges into connectivity classes E = E_1 ∪ E_2 ∪ ... ∪ E_{log n}, where E_i = { e : 2^{i−1} ≤ k_e < 2^i }.

15 Notation: k_uv = minimum size of a cut separating u and v.
Main ideas:
- Partition the edges into connectivity classes E = E_1 ∪ E_2 ∪ ... ∪ E_{log n}, where E_i = { e : 2^{i−1} ≤ k_e < 2^i }.
- Prove that the weight of sampled edges that each cut takes from each connectivity class is about right.
- This yields a sparsifier.
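The class index of an edge is just the bit length of k_e (2^{i−1} ≤ k < 2^i means i = ⌊log₂ k⌋ + 1), and k_uv itself is a unit-capacity max-flow between u and v. A small BFS-based (Edmonds-Karp-style) sketch, with names of my choosing; this is illustrative, not the talk's fast algorithm:

```python
from collections import defaultdict, deque

def edge_connectivity(adj, s, t):
    """k_st = max number of edge-disjoint s-t paths, via unit-capacity max-flow.
    adj maps each vertex to the set of its neighbours (undirected graph)."""
    cap = defaultdict(int)
    for u in adj:
        for v in adj[u]:
            cap[(u, v)] = 1                        # one unit per direction
    flow = 0
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:           # BFS for an augmenting path
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        v = t
        while parent[v] is not None:               # augment by one unit
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1

def connectivity_class(k):
    """Index i with 2^(i-1) <= k < 2^i, i.e., the class E_i containing the edge."""
    return k.bit_length()
```

On K_4, for instance, every pair of vertices is joined by three edge-disjoint paths, so every edge lands in class E_2 (2 ≤ 3 < 4).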

16 Goal: prove that the weight of sampled edges that each cut takes from each connectivity class is about right.
Notation: C = δ(U) is a cut; C_i = δ(U) ∩ E_i is a cut-induced set.
Need to prove: w(C_i) is close to |C_i| for every cut-induced set C_i. (The figure shows a cut C partitioned into cut-induced sets C_1, C_2, C_3, C_4.)

17 Notation: C_i = δ(U) ∩ E_i is a cut-induced set. (The figure shows C_1, C_2, C_3, C_4.)
Prove concentration for every cut-induced set C_i. Key ingredients:
- Chernoff bound: prove the failure probability for each C_i is small.
- Bound on # small cuts: prove that #{ cut-induced sets C_i induced by a small cut |C| } is small.
- Union bound: the sum of the failure probabilities is small, so probably there are no failures.

18 Counting Small Cut-Induced Sets
Theorem: Let G=(V,E) be a graph and fix any B ⊆ E. Suppose k_e ≥ K for all e ∈ B (k_uv = minimum size of a cut separating u and v). Then, for every α ≥ 1, |{ δ(U) ∩ B : |δ(U)| ≤ αK }| < n^{2α}.
Corollary (Counting Small Cuts [K'93]): Let G=(V,E) be a graph and let K be the edge connectivity of G (i.e., the global min-cut value). Then, for every α ≥ 1, |{ δ(U) : |δ(U)| ≤ αK }| < n^{2α}.
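The corollary can be verified exhaustively on a small cycle C_n, which has edge connectivity K = 2: every pair of edges is the boundary of some arc, so with α = 1 there are exactly C(n,2) distinct cut-sets of size ≤ 2, comfortably below n^{2α} = n². The enumeration below is my own illustration:

```python
def small_cut_sets(n, max_size):
    """Distinct cut-sets δ(U) of the cycle C_n with |δ(U)| <= max_size, over
    all nonempty proper U ⊆ V (exponential enumeration: small n only)."""
    edges = [(i, (i + 1) % n) for i in range(n)]
    cuts = set()
    for mask in range(1, 2 ** n - 1):
        U = {v for v in range(n) if mask >> v & 1}
        boundary = frozenset(e for e in edges if (e[0] in U) != (e[1] in U))
        if len(boundary) <= max_size:
            cuts.add(boundary)
    return cuts
```

For n = 6 this yields 15 = C(6,2) cut-sets, and 15 < 6² = 36 as the corollary promises.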

19 Comparison
Theorem: Let G=(V,E) be a graph and fix any B ⊆ E. Suppose k_e ≥ K for all e ∈ B (k_uv = minimum size of a cut separating u and v). Then |{ δ(U) ∩ B : |δ(U)| ≤ c }| < n^{2c/K} for all c ≥ 1.
Corollary [K'93]: Let G=(V,E) be a graph and let K be the edge connectivity of G (i.e., the global min-cut value). Then |{ δ(U) : |δ(U)| ≤ c }| < n^{2c/K} for all c ≥ 1.
How many cuts of size 1? The theorem says < n², taking K = c = 1. The corollary says < 1, because K = 0. (Slightly unfair.)

20 Conclusions
- Graph sparsifiers are important for fast algorithms and some combinatorial theorems.
- Sampling by edge connectivities gives a sparsifier with O(n log² n) edges in O(m log² n) time. Improvement: O(n log n) edges in O(m) + Õ(n) time [Panigrahi '10].
- Sampling by effective resistances also works, so sampling O(log² n) random spanning trees gives a sparsifier.
Questions:
- Improve log² n to log n?
- Does sampling o(log n) random trees give a sparsifier with o(log n) approximation?

