Graph Sparsifiers Nick Harvey University of British Columbia Based on joint work with Isaac Fung, and independent work of Ramesh Hariharan & Debmalya Panigrahi.


1 Graph Sparsifiers. Nick Harvey, University of British Columbia. Based on joint work with Isaac Fung, and independent work of Ramesh Hariharan & Debmalya Panigrahi.

2 Approximating Dense Objects by Sparse Objects. Floor joists: wood joists vs. engineered joists.

3 Approximating Dense Objects by Sparse Objects. Bridges: masonry arch vs. truss arch.

4 Approximating Dense Objects by Sparse Objects. Bones: human femur vs. robin bone.

5 Approximating Dense Objects by Sparse Objects. Graphs: dense graph vs. sparse graph.

6 Cut Sparsifiers (Karger ’94): weighted subgraphs that approximately preserve graph structure. Input: undirected graph G=(V,E) with weights u : E → ℝ₊. Output: a subgraph H=(V,F) of G with weights w : F → ℝ₊ such that |F| is small and u(δ_G(U)) = (1 ± ε)·w(δ_H(U)) for all U ⊆ V, where δ_G(U) is the set of edges between U and V∖U in G, and δ_H(U) the same in H.

7 Spectral Sparsifiers (Spielman–Teng ’04): weighted subgraphs that approximately preserve graph structure. Input: undirected graph G=(V,E) with weights u : E → ℝ₊. Output: a subgraph H=(V,F) of G with weights w : F → ℝ₊ such that |F| is small and xᵀL_G x = (1 ± ε)·xᵀL_H x for all x ∈ ℝ^V, where L_G and L_H are the Laplacian matrices of G and H.

8 Motivation: Faster Algorithms. Suppose algorithm A solves some problem P (min s-t cut, sparsest cut, max cut, …) on a dense input graph G, producing an exact or approximate output. Instead, first run a fast sparsification algorithm to get a sparse graph H that approximately preserves the solution of P; A runs faster on the sparse input and returns an approximate output.

9 State of the art (n = # vertices, m = # edges; c denotes a large constant):

Sparsifier | Authors | # edges | Running time
Cut | Benczúr–Karger ’96 | O(n log n / ε²) | O(m log³ n)
Spectral | Spielman–Teng ’04 | O(n log^c n / ε²) | O(m log^c n / ε²)
Spectral | Spielman–Srivastava ’08 | O(n log n / ε²) | O(m log^c n / ε²)
Spectral | Batson et al. ’09 | O(n / ε²) | O(m n³)
Cut | This paper | O(n log n / ε²) | O(m + n log⁵ n / ε²) *
Spectral | Levin–Koutis–Peng ’12 | O(n log n / ε²) | O(m log² n / ε²)

*: The best algorithm in our paper is due to Panigrahi.

10 Random Sampling. We can’t sample all edges with the same probability! Idea [Benczúr–Karger ’96]: sample low-connectivity edges with high probability (keep these) and high-connectivity edges with low probability (eliminate most of these).

11 Generic algorithm. Input: graph G=(V,E), weights u : E → ℝ₊. Output: a subgraph H=(V,F) with weights w : F → ℝ₊.
– Choose ρ (= number of sampling iterations) and probabilities { p_e : e ∈ E }.
– For i = 1 to ρ: for each edge e ∈ E, with probability p_e add e to F and increase w_e by u_e/(ρ·p_e).
Then E[|F|] ≤ ρ·Σ_e p_e and E[w_e] = u_e for all e ∈ E, so for every U ⊆ V, E[w(δ_H(U))] = u(δ_G(U)) [Benczúr–Karger ’96]. How should we choose these parameters? Goal 1: E[|F|] = O(n log n / ε²). Goal 2: w(δ_H(U)) is highly concentrated.
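
The sampling loop above is easy to state in code. A minimal Python sketch (function and variable names are my own, not from the talk):

```python
import random
from collections import defaultdict

def sample_sparsifier(edges, u, p, rho, seed=0):
    """Generic sampling scheme: run rho independent rounds; in each round,
    keep edge e with probability p[e], adding u[e]/(rho*p[e]) to its weight
    in H.  Then E[w[e]] = u[e] for every edge, so every cut's expected
    weight in H equals its weight in G."""
    rng = random.Random(seed)
    w = defaultdict(float)
    for _ in range(rho):
        for e in edges:
            if rng.random() < p[e]:
                w[e] += u[e] / (rho * p[e])
    return dict(w)
```

With p_e = 1 for every edge the sampler keeps everything and returns the original weights exactly, which is a handy sanity check.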

12 Benczúr–Karger Algorithm. Input: graph G=(V,E), weights u : E → ℝ₊. Output: a subgraph H=(V,F) with weights w : F → ℝ₊.
– Choose ρ = O(log n / ε²), and let p_e = 1/“strength” of edge e.
– For i = 1 to ρ: for each edge e ∈ E, with probability p_e add e to F and increase w_e by u_e/(ρ·p_e).
Fact 1: E[|F|] = O(n log n / ε²). Fact 2: w(δ_H(U)) is very highly concentrated, so for every U ⊆ V, w(δ_H(U)) = (1 ± ε)·u(δ_G(U)). “Strength” is a slightly unusual quantity, but (Fact 3) all edge strengths can be estimated in O(m log³ n) time. Question [BK ’02]: can we use connectivity instead of strength?

13 Our Algorithm. Input: graph G=(V,E), weights u : E → ℝ₊. Output: a subgraph H=(V,F) with weights w : F → ℝ₊.
– Choose ρ = O(log² n / ε²), and let p_e = 1/“connectivity” of e.
– For i = 1 to ρ: for each edge e ∈ E, with probability p_e add e to F and increase w_e by u_e/(ρ·p_e).
Fact 1: E[|F|] = O(n log² n / ε²). Fact 2: w(δ_H(U)) is very highly concentrated, so for every U ⊆ V, w(δ_H(U)) = (1 ± ε)·u(δ_G(U)). Extra trick: |F| can be shrunk to O(n log n / ε²) by using Benczúr–Karger to sparsify our sparsifier!
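
To make “connectivity of e” concrete: for e = {u,v}, k_e is the minimum number of edges crossing any cut that separates u from v. A brute-force sketch (exponential time, tiny graphs only; the function names are illustrative, not from the paper):

```python
from itertools import combinations

def edge_connectivity(vertices, edges, s, t):
    """k_st: minimum number of edges crossing any vertex set that
    contains s but not t.  Exhaustive search over all such sets."""
    others = [v for v in vertices if v not in (s, t)]
    best = len(edges)
    for r in range(len(others) + 1):
        for extra in combinations(others, r):
            side = {s, *extra}
            crossing = sum(1 for (a, b) in edges
                           if (a in side) != (b in side))
            best = min(best, crossing)
    return best

def sampling_probabilities(vertices, edges):
    """p_e = 1 / k_e, capped at 1, as in the algorithm above."""
    return {e: min(1.0, 1.0 / edge_connectivity(vertices, edges, *e))
            for e in edges}
```

On a 4-cycle every edge has connectivity 2, so every edge gets p_e = 1/2; a pendant edge has connectivity 1 and is kept with probability 1.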

14 Motivation for our algorithm. Connectivities are simpler and more natural than strengths, so they are faster to compute: all edge connectivities can be estimated in O(m + n log n) time [Ibaraki–Nagamochi ’92]. They are also useful in other scenarios: our sampling method has been used to compute sparsifiers in the streaming model [Ahn–Guha–McGregor ’12].

15 Overview of Analysis: most cuts are big and easy! Most cuts hit a huge number of edges, so they are extremely concentrated; whp, most cuts are close to their mean.

16 Overview of Analysis. High connectivity means low sampling probability; low connectivity means high sampling probability. A cut that hits many low-connectivity (red) edges is reasonably concentrated, but a cut that hits only one such edge is poorly concentrated. The same cut also hits many high-connectivity (green) edges, so overall it is highly concentrated; this masks the poor concentration above. There are few small cuts [Karger ’94], so probably all of them are concentrated. Key Question: are there few such cuts? Key Lemma: yes!

17 Notation: k_uv = min size of a cut separating u and v; for an edge e = {u,v}, write k_e = k_uv. Main ideas:
– Partition edges into connectivity classes E = E₁ ∪ E₂ ∪ … ∪ E_{log n}, where E_i = { e : 2^(i-1) ≤ k_e < 2^i }.
– Prove that the weight of sampled edges that each cut takes from each connectivity class is about right.
– This yields a sparsifier.
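
The dyadic partition in the first bullet is direct to compute: for an integer k_e ≥ 1, the class index i with 2^(i-1) ≤ k_e < 2^i is exactly the bit length of k_e. A small sketch (names are illustrative):

```python
def connectivity_classes(k):
    """Group edges by class index i, where 2**(i-1) <= k_e < 2**i.
    `k` maps each edge to its (integer) connectivity k_e >= 1."""
    classes = {}
    for e, ke in k.items():
        i = ke.bit_length()  # for ke >= 1 this is exactly the i above
        classes.setdefault(i, set()).add(e)
    return classes
```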

18 Prove that the weight of sampled edges that each cut takes from each connectivity class is about right. Notation: C = δ(U) is a cut, and C_i := δ(U) ∩ E_i is a cut-induced set. Chernoff bounds can analyze each cut-induced set, but… Key Question: are there few small cut-induced sets?

19 Counting Small Cuts. Lemma [Karger ’93]: let G=(V,E) be a graph, and let K be the edge connectivity of G (i.e., the global min cut value). Then, for every α ≥ 1, |{ δ(U) : |δ(U)| ≤ αK }| < n^(2α). Example: let G be the n-cycle. Its edge connectivity is K = 2, and the number of cuts of size c is (n choose c), so |{ δ(U) : |δ(U)| ≤ αK }| = O(n^(2α)).
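
The cycle example can be checked by exhaustive enumeration: in the n-cycle every cut δ(U) is an even-sized set of edges, and the number of distinct cut sets of size ≤ αK (with K = 2) stays below n^(2α). A brute-force check (exponential in n, illustration only):

```python
from itertools import combinations

def count_small_cuts_in_cycle(n, alpha, K=2):
    """Enumerate every cut delta(U) of the n-cycle and count the
    distinct cut sets of size at most alpha*K."""
    edges = [(i, (i + 1) % n) for i in range(n)]
    cuts = set()
    for r in range(1, n):  # nonempty proper subsets U (complements repeat)
        for U in combinations(range(n), r):
            side = set(U)
            cuts.add(frozenset(e for e in edges
                               if (e[0] in side) != (e[1] in side)))
    return sum(1 for c in cuts if len(c) <= alpha * K)
```

For n = 6 and α = 1 this counts the C(6,2) = 15 cuts of size 2, comfortably below the lemma’s bound of n^(2α) = 36.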

20 Counting Small Cut-Induced Sets. Our Lemma: let G=(V,E) be a graph, and fix any B ⊆ E. Suppose k_e ≥ K for all e ∈ B (where k_uv = min size of a cut separating u and v). Then, for every α ≥ 1, |{ δ(U) ∩ B : |δ(U)| ≤ αK }| < n^(2α). Karger’s Lemma is the special case B = E and K = min cut.

21 When is Karger’s Lemma Weak? Lemma [Karger ’93], restated: let G=(V,E) be a graph, and let K be its edge connectivity (the global min cut value). Then, for every c ≥ K, |{ δ(U) : |δ(U)| ≤ c }| < n^(2c/K). Example: let G be the n-cycle together with an extra edge of weight ε, so the edge connectivity is K = ε. There are still about n^c cuts of size c, but the lemma only bounds their number by n^(2c/ε), which is far weaker.

22 Our Lemma Still Works. Our Lemma (restated): let G=(V,E) be a graph, fix any B ⊆ E, and suppose k_e ≥ K for all e ∈ B. Then, for every α ≥ 1, |{ δ(U) ∩ B : |δ(U)| ≤ αK }| < n^(2α). Example: in the previous graph, let B be the cycle edges; every cycle edge has connectivity 2, so we can take K = 2. Then |{ cut-induced subsets of B induced by cuts of size ≤ c }| ≤ n^c.

23 Algorithm for Finding a Min Cut [Karger ’93]. Input: a graph. Output: a minimum cut (maybe).
– While the graph has more than 2 vertices: pick an edge at random and contract it.
– Output the remaining edges.
Claim: for any fixed min cut, this algorithm outputs it with probability ≥ 1/n². Corollary: there are ≤ n² min cuts.
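
A minimal sketch of one run of the contraction algorithm, using union-find to track merged super-vertices (a single run only finds a given min cut with the stated probability, so in practice it is repeated):

```python
import random

def karger_contract(edges, seed=None):
    """One run of Karger's contraction algorithm: repeatedly contract a
    uniformly random non-self-loop edge until two super-vertices remain,
    then return the surviving edges; these form a cut of the graph.
    Assumes the input has no self-loops."""
    rng = random.Random(seed)
    parent = {}  # child -> parent links created by contractions

    def find(v):
        root = v
        while root in parent:
            root = parent[root]
        while v in parent:            # path compression
            parent[v], v = root, parent[v]
        return root

    alive = len({v for e in edges for v in e})
    pool = list(edges)
    while alive > 2:
        a, b = pool[rng.randrange(len(pool))]
        parent[find(a)] = find(b)     # contract {a, b}
        alive -= 1
        pool = [e for e in pool if find(e[0]) != find(e[1])]  # drop loops
    return pool
```

On a path a–b–c every run returns the single surviving edge on one side, a cut of size 1; repeating with many seeds recovers a min cut with high probability.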

24 Finding a Small Cut-Induced Set. Input: a graph G=(V,E) and B ⊆ E. Output: a cut-induced subset of B.
– While the graph has more than 2 vertices:
– If some vertex v has no incident edges in B, split off all edges at v and delete v.
– Pick an edge at random and contract it.
– Output the remaining edges in B.
Claim: for any min cut-induced subset of B, this algorithm outputs it with probability > 1/n². Corollary: there are < n² min cut-induced subsets of B. Splitting off (due to Wolfgang Mader): replace edges {u,v} and {u’,v} with {u,u’} while preserving the edge connectivity between all vertices other than v.

25 Conclusions: sampling according to connectivities gives a sparsifier, and we generalize Karger’s cut-counting lemma. Open questions: improve the O(log² n) in the sampling analysis to O(log n); find further applications of our cut-counting lemma.

