1 the k-cut problem better approximate and exact algorithms
Anupam Gupta, Carnegie Mellon University. Based on joint work with Euiwoong Lee (NYU) and Jason Li (CMU). SODA 2018 (and work in progress).

2 fine-grained approximation algorithms
Anupam Gupta, Carnegie Mellon University. Based on joint work with Euiwoong Lee (NYU) and Jason Li (CMU). SODA 2018 (and work in progress).

3 the k-cut problem
Given a graph G, delete a minimum-weight set of edges to cut G into at least k components.
NP-hard [Goldschmidt Hochbaum 94]
randomized min-cut algorithm in O(n^{2k−2}) time [Karger Stein 96]
deterministic algorithm in O(n^{2k}) time [Thorup 00]
W[1]-hard with parameter k [Downey et al. 03]
2-approximation [Saran Vazirani 95]
a (2−ε)-approximation would disprove the Small Set Expansion hypothesis [Manurangsi 17]
Q: Can we do better?
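To make the problem definition concrete, here is a minimal brute-force sketch (the function name and representation are mine, not from the talk): enumerate all assignments of the n vertices to k labels and take the cheapest one that uses every label. Deleting all cross-label edges leaves at least k components, so the minimum over such labelings is exactly the minimum k-cut.

```python
from itertools import product

def k_cut_brute_force(n, edges, k):
    """Minimum-weight k-cut of a graph on vertices 0..n-1, given as a
    list of (u, v, weight) edges. Enumerates all k^n vertex labelings;
    exponential time, for illustration only."""
    best = float("inf")
    for labels in product(range(k), repeat=n):
        if len(set(labels)) < k:  # need at least k non-empty parts
            continue
        cost = sum(w for u, v, w in edges if labels[u] != labels[v])
        best = min(best, cost)
    return best
```

On the weighted path 0–1–2–3 with edge weights 1, 2, 3, the minimum 3-cut deletes the two cheapest edges for a total of 3.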

4 do better? version 1 (approx. algos)
W[1]-hard with parameter k ⇒ can't do better than n^{O(k)} exactly.
(2−ε)-approx disproves SSE ⇒ can't beat factor 2 in poly time.
Q: Can we get a (1+ε)-approximation in FPT time f(k)·poly(n)?
A: We don't know, but…
Theorem: 1.8-approx in FPT time.
Today: show the ideas behind the approximation in FPT time.

5 do better? version 2 (exact algos)
at least as hard as clique ⇒ may not beat n^{ωk/3}.
best exact algorithms take O(n^{2k−2}) time.
Q: Can we get tight results?
A: We don't know, but…
Theorem: exact algorithms in n^{(2ω/3)k} time.
Today (probably not): the ideas behind an algorithm that runs in time n^{k(1+ω/3)}.

6 result #1: FPT approx
Theorem: 1.999-approx for k-cut in time f(k)·poly(n).
The main ideas:
the greedy algorithm is a 2-approx; if greedy does better, we're done
so look at instances where greedy does poorly
these bad examples have special structure
exploit that structure via another algorithm

7 the greedy algorithm
[Saran Vazirani 95] the greedy algorithm is 2-approximate: for k−1 iterations, greedily take the min global cut.
Proof: Order the optimal parts so that |∂S*_1| ≤ |∂S*_2| ≤ … ≤ |∂S*_k|, and note 2·OPT = Σ_i |∂S*_i| (each optimal edge is counted from both of its sides).
∂S*_1 is a possible cut ⇒ C_1 ≤ |∂S*_1|.
either ∂S*_1 or ∂S*_2 is a possible cut ⇒ C_2 ≤ |∂S*_2|.
for all i ∈ [k−1], our cut C_i ≤ |∂S*_i|.
Hence our cost Σ_{i≤k−1} C_i ≤ Σ_{i≤k−1} |∂S*_i| ≤ 2(1−1/k)·OPT.
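The greedy algorithm above can be sketched directly; this is a toy version (with a brute-force min-cut subroutine standing in for a real min-cut algorithm such as Stoer–Wagner, and function names of my choosing):

```python
from itertools import combinations

def min_global_cut(nodes, edges):
    """Brute-force min cut of a vertex set: try every proper nonempty
    subset as one side (exponential time; for illustration only)."""
    nodes = list(nodes)
    best_w, best_side = float("inf"), None
    for r in range(1, len(nodes)):
        for side in combinations(nodes, r):
            s = set(side)
            wt = sum(w for u, v, w in edges if (u in s) != (v in s))
            if wt < best_w:
                best_w, best_side = wt, s
    return best_w, best_side

def greedy_k_cut(nodes, edges, k):
    """Saran-Vazirani greedy: for k-1 rounds, take the cheapest min cut
    over all current components and split that component in two."""
    comps, total = [set(nodes)], 0
    for _ in range(k - 1):
        best = None  # (cut weight, component index, one side of the cut)
        for i, c in enumerate(comps):
            if len(c) < 2:
                continue
            inside = [(u, v, w) for u, v, w in edges if u in c and v in c]
            wt, side = min_global_cut(c, inside)
            if best is None or wt < best[0]:
                best = (wt, i, side)
        wt, i, side = best
        total += wt
        c = comps.pop(i)
        comps += [side, c - side]
    return total, comps
```

Taking the cheapest min cut over all current components is exactly "the min global cut of the current (partially cut) graph" that the slide refers to.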

8 tight examples
For k−1 iterations, greedily take the min global cut. (figure: examples on which greedy's factor 2(1−1/k) is tight)

9 idea #1: branching
recall: our cut C_i ≤ |∂S*_i|. But what about C_i vs |∂S*_1|?
suppose C_i > |∂S*_1|: then S*_1 must already be completely cut by our components, else ∂S*_1 would still be a valid cut!
⇒ the union of some of the algorithm's components is exactly S*_1 (!!)
Idea #1: guess all subsets of components.
Branching factor: 2^k, branching depth: k ⇒ 2^{k²} time.
Henceforth, assume C_i ≤ |∂S*_1| for all components C_i that we find in greedy.

10 all cuts smaller than ∂S*_1
For k−1 iterations, greedily take the min global cut.
Assume: every cut greedy takes has C_i ≤ |∂S*_1|.

11 idea #2: if there are gaps…
Idea #2: If there's a gap in the cut values, we win.
If a ≥ 0.1k, then ALG ≤ 2·OPT/(1+0.1ε). Similarly, if b ≤ 0.9k.
(figure: greedy's cut values against the thresholds (1−ε)|∂S*_1|, |∂S*_1|, (1+ε)|∂S*_1|, with counts a and b)

12 hard case for greedy
No gaps: even the first cut C_1 has |∂C_1| ≈ |∂S*_1| ≈ |∂S*_i|.
In particular, |∂S*_i| ≈ mincut(G) for all i = 1…(k−1).

13 crossing cuts
Hard case: |∂S*_i| ≈ mincut(G) for all i = 1…(k−1).
Consider all (1+ε)-min-cuts in G.
Idea #3: If two of them cross, then min 4-cut ≤ 2(1+ε)·mincut(G).
Greedily take min 4-cuts: pay 2(1+ε)·mincut(G), get 3 new pieces,
so we pay ≈ (2/3)·mincut(G) per new component.
If we can do this Ω(k) times, we save Ω(OPT).

14 crossing cuts
Hard case: |∂S*_i| ≈ mincut(G) for all i = 1…(k−1), and the near-min-cuts don't cross (laminar family).
Recall: there are only n^{2(1+ε)} many near-min-cuts, and they can be represented by a tree T.
Optimal cut = deleting k−1 incomparable edges in this tree.
Idea #4: a special algorithm for this "laminar cut" problem:
"Given G and T, delete k−1 incomparable edges of T, minimizing the cut value."

15 special case: “partial vertex cover”
T = star: cut k−1 edges of T to minimize the cut value.
Reduces to: given a graph H, pick k−1 nodes to minimize the weight of edges hitting them.
Algo: pick the min-degree nodes. But this overcounts edges with both endpoints picked!
Case I: OPT ≥ k²/ε. The overcount is ≤ k² edges, i.e. error ≤ ε·OPT.
Case II: OPT ≤ k²/ε. Randomly color the edges red/blue and throw away the red edges; with probability 2^{−k²/ε}, all intra-OPT edges are blue and all cut edges red.
⇒ (1+ε)-approximation in FPT time.
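The "partial vertex cover" subproblem is easy to state in code. Below is a sketch (names are mine): an exact brute force over node subsets, plus the slide's min-degree heuristic, whose degree-sum bound overcounts edges with both endpoints chosen.

```python
from itertools import combinations

def min_partial_vertex_cover(nodes, edges, t):
    """Pick t nodes minimizing the total weight of edges hitting them.
    Exact, by brute force over node subsets (exponential; sketch only)."""
    best = float("inf")
    for chosen in combinations(nodes, t):
        s = set(chosen)
        best = min(best, sum(w for u, v, w in edges if u in s or v in s))
    return best

def greedy_min_degree(nodes, edges, t):
    """The slide's heuristic: take the t minimum-(weighted-)degree nodes.
    The degree sum overcounts edges with both endpoints chosen."""
    deg = {v: 0 for v in nodes}
    for u, v, w in edges:
        deg[u] += w
        deg[v] += w
    chosen = sorted(nodes, key=deg.get)[:t]
    s = set(chosen)
    exact = sum(w for u, v, w in edges if u in s or v in s)
    upper = sum(deg[v] for v in chosen)  # degree sum: the overcounting bound
    return exact, upper
```

The gap between `exact` and `upper` is exactly the weight of edges inside the chosen set, which is what the two cases on the slide control.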

16 to recap
Theorem: 1.999-approx for k-cut in time f(k)·poly(n).
The main ideas:
ensure we haven't found the min-cut, else we win
repeatedly find either a min-cut or a min 4-wise cut
if many of these are "small", we win
else the instance has most near-min-cuts non-crossing
then use the special algorithm for this "laminar-cut" instance,
which builds on the partial vertex cover ideas: a dynamic program that repeatedly calls partial vertex cover.

17 result #2: faster exact algos
Theorem: exact algorithms in n^{(2ω/3)k} time.
Today: an algorithm in n^{k(1+ω/3)} time.
Main ideas:
Thorup's tree-packing theorem
incomparable case: reduction to max-weight triangle
general case: a careful dynamic program
reducing the crossings

18 Thorup's tree theorem
We can efficiently find a tree T that crosses the OPT cut at most 2k−2 times.
Enumerating over its edges gives Thorup's n^{2k−2}-time algorithm.
Can we use the structure of k-cut to do better?

19 special case: k−1 incomparable crossings
Suppose the tree crosses OPT only k−1 times; say k−1 = 3.
Create a tripartite graph: the "nodes" in each part are the edges of the tree; link two "nodes" if the corresponding tree edges are incomparable.
The weight of a link captures the edges cut: w(e, f) = weight of the graph edges leaving e's subtree, except those going into f's subtree (shown on the slide with colored blobs).
Find a max-weight triangle in O(M·n^ω) time.
To remove the incomparability assumption: a careful DP.
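The target of the reduction is easy to state. Here is a cubic brute-force sketch of max-weight tripartite triangle (names mine); the talk's O(M·n^ω) bound comes from replacing this triple loop with fast matrix multiplication, which this sketch does not attempt.

```python
def max_weight_triangle(w12, w23, w31):
    """Max-weight triangle in a complete tripartite graph, with link
    weights given as matrices: w12[i][j] between parts 1 and 2,
    w23[j][k] between parts 2 and 3, w31[k][i] between parts 3 and 1.
    Cubic brute force, for illustration only."""
    best = float("-inf")
    for i in range(len(w12)):
        for j in range(len(w23)):
            for k in range(len(w31)):
                best = max(best, w12[i][j] + w23[j][k] + w31[k][i])
    return best
```

In the reduction, each part's "nodes" are tree edges and each link weight is the cut contribution defined above, so the max-weight triangle identifies the best triple of incomparable tree edges.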

20 reducing crossings
In general, the tree crosses OPT 2k−2 times. Show that:
delete a random edge from the tree, and add a random edge of G crossing the resulting cut;
then Pr[ this reduces the number of tree edges crossing OPT ] = Ω(1/(n·k²)).

21 recap of result #2: faster exact algos
Today: an algorithm in n^{k(1+ω/3)} time.
Main ideas:
Thorup's tree-packing theorem
incomparable case: reduction to max-weight triangle
general case: a delicate DP
reducing the crossings

22 in summary
New algorithms for k-cut, in both the exact and approximate settings.
Fixed-parameter runtime to get better approximations brings a new set of ideas into play.
Tight results? Extension to the hypergraph k-cut problem?
Better understanding of the interplay between FPT and approximation?
Better algorithms for other problems in FPT time? E.g., densest-k-subgraph in f(k)·poly(n) time?
Thanks!

