the k-cut problem: better approximate and exact algorithms


the k-cut problem: better approximate and exact algorithms
Anupam Gupta (Carnegie Mellon University)
Based on joint work with Euiwoong Lee (NYU) and Jason Li (CMU)
SODA 2018 (and work in progress)

fine-grained approximation algorithms
Anupam Gupta (Carnegie Mellon University)
Based on joint work with Euiwoong Lee (NYU) and Jason Li (CMU)
SODA 2018 (and work in progress)

the k-cut problem
Given a graph G, delete a minimum-weight set of edges to cut G into at least k components.
- NP-hard [Goldschmidt Hochbaum 94]
- randomized min-cut algorithm in O(n^{2k-2}) time [Karger Stein 96]
- deterministic algorithm in O(n^{2k}) time [Thorup 00]
- W[1]-hard with parameter k [Downey et al. 03]
- 2-approximation [Saran Vazirani 95]
- a (2-ε)-approximation would disprove the Small Set Expansion hypothesis [Manurangsi 17]
Q: Can we do better?
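To make the objective concrete, here is a tiny brute-force reference implementation (my own sketch, not from the talk): it tries every assignment of the vertices to k non-empty parts and sums the crossing weights, so it is only usable on toy graphs.

```python
from itertools import product

def min_k_cut(n, edges, k):
    """Brute-force minimum k-cut: try every assignment of the n vertices
    (numbered 0..n-1) to k parts, require all k parts non-empty, and
    return the minimum total weight of edges crossing between parts."""
    best = float("inf")
    for labels in product(range(k), repeat=n):
        if len(set(labels)) < k:          # need all k parts non-empty
            continue
        cost = sum(w for u, v, w in edges if labels[u] != labels[v])
        best = min(best, cost)
    return best

# unit-weight 4-cycle 0-1-2-3-0: the cheapest 2-cut removes 2 edges
print(min_k_cut(4, [(0, 1, 1), (1, 2, 1), (2, 3, 1), (3, 0, 1)], 2))  # → 2
```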

do better? version 1 (approx. algos)
- W[1]-hard with parameter k ⇒ can't do better than n^{O(k)} exactly
- a (2-ε)-approximation disproves SSE ⇒ can't do better than 2x in poly time
Q: Can we get a (1+ε)-approximation in FPT time f(k)·poly(n)?
A: We don't know, but…
Theorem: 1.8-approximation in FPT time.
Today: the ideas behind a 1.999-approximation in FPT time.

do better? version 2 (exact algos)
- at least as hard as clique ⇒ may not beat n^{ωk/3} ("n^{2k/3}" if ω = 2)
- best exact algorithms take O(n^{2k-2}) time
Q: Can we get tight results?
A: We don't know, but…
Theorem: exact algorithm in n^{2(ωk/3)} time ("n^{4k/3}").
Today (probably not): the ideas behind an algorithm running in n^{k(1+ω/3)} time ("n^{5k/3}").

result #1: FPT approx
Theorem: 1.999-approximation for k-cut in time f(k)·poly(n).
The main ideas:
- the greedy algorithm is a 2-approximation; if greedy does better, we're done
- so look at instances where greedy does poorly
- such bad examples have special structure
- exploit that structure via another algorithm

the greedy algorithm
[Saran Vazirani 95] The greedy algorithm is 2-approximate:
For k-1 iterations, greedily take the minimum global cut.
Proof sketch. Order the optimal parts so that |∂S_1*| ≤ |∂S_2*| ≤ … ≤ |∂S_k*|, and note 2·OPT = Σ_i |∂S_i*|.
- ∂S_1* is a possible cut ⇒ C_1 ≤ |∂S_1*|.
- Either ∂S_1* or ∂S_2* is a possible cut ⇒ C_2 ≤ |∂S_2*|.
- For all i ∈ [k-1], our cut C_i ≤ |∂S_i*|.
Hence our cost is at most 2(1-1/k) times OPT.
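A minimal sketch of this greedy algorithm (the function names and the exponential brute-force min-cut subroutine are mine; real implementations would use a polynomial global min-cut algorithm such as Stoer–Wagner):

```python
from itertools import combinations

def crossing(edges, side, comp):
    """Weight of edges inside comp crossing the bipartition (side, comp - side)."""
    return sum(w for u, v, w in edges
               if u in comp and v in comp and ((u in side) != (v in side)))

def min_cut_of(comp, edges):
    """Brute-force cheapest bipartition of one component (toy inputs only)."""
    vs = sorted(comp)
    anchor, rest = vs[0], vs[1:]
    best = (float("inf"), None)
    for r in range(len(rest)):            # side never equals the whole component
        for extra in combinations(rest, r):
            side = {anchor, *extra}
            cost = crossing(edges, side, comp)
            if cost < best[0]:
                best = (cost, side)
    return best

def greedy_k_cut(n, edges, k):
    """Saran-Vazirani greedy: for k-1 iterations, take the cheapest global cut."""
    comps, total = [set(range(n))], 0
    for _ in range(k - 1):
        comp, cost, side = min(
            ((c,) + min_cut_of(c, edges) for c in comps if len(c) > 1),
            key=lambda t: t[1])
        total += cost
        comps.remove(comp)
        comps += [side, comp - side]
    return total, len(comps)

# unit-weight 4-cycle: greedy pays 2 to make 2 components
print(greedy_k_cut(4, [(0, 1, 1), (1, 2, 1), (2, 3, 1), (3, 0, 1)], 2))  # → (2, 2)
```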

tight examples
For k-1 iterations, greedily take the minimum global cut.
(This slide repeats the greedy algorithm and its proof from the previous slide, alongside example instances on which the 2(1-1/k) factor is tight.)

idea #1: branching
Recall: our cut C_i ≤ |∂S_i*|. But what about C_i vs. |∂S_1*|?
Suppose C_i > |∂S_1*|. Then ∂S_1* must already be completely cut, else it would be a valid cut!
⇒ the union of some of the algorithm's components is exactly S_1* (!!)
Idea #1: guess all subsets of components.
Branching factor 2^k, branching depth k ⇒ 2^{k²} time.
Henceforth, assume C_i ≤ |∂S_1*| for every component C_i that greedy finds.

all cuts smaller than |∂S_1*|
[Saran Vazirani 95] For k-1 iterations, greedily take the minimum global cut.
Assume from now on: every cut taken by greedy has C_i ≤ |∂S_1*|.

idea #2: if there are gaps…
(Figure: the optimal cut values sorted against the thresholds (1-ε)|∂S_1*|, |∂S_1*|, (1+ε)|∂S_1*|, with counts a and b on either side of |∂S_1*|.)
Idea #2: if there's a gap, we win: if a ≥ 0.1k then ALG·(1+0.1ε) ≤ 2·OPT. Similarly if b ≤ 0.9k.

hard case for greedy
No gaps: even the first cut C_1 has |∂C_1| ≈ |∂S_1*| ≈ |∂S_i*|.
In particular, |∂S_i*| ≈ mincut(G) for all i = 1, …, k-1.

crossing cuts
Hard case: |∂S_i*| ≈ mincut(G) for all i = 1, …, k-1.
Consider all (1+ε)-min-cuts in G.
Idea #3: if two of them cross, then the min 4-cut ≤ 2(1+ε)·mincut(G).
Greedily take min 4-cuts: pay 2(1+ε)·mincut(G) and get 3 new pieces,
so we pay ≈ (2/3)·mincut(G) per new component.
If we can do this Ω(k) times, we save Ω(OPT).
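The accounting behind "we save Ω(OPT)" can be written out (my sketch; λ denotes mincut(G), and in the hard case 2·OPT = Σ_i |∂S_i*| ≈ (k-1)·λ):

```latex
% cost per new piece, via min 4-cuts vs. via greedy min-cuts
\frac{2(1+\varepsilon)\lambda}{3}\ \text{per piece (4-cuts)}
\quad\text{vs.}\quad
\lambda\ \text{per piece (greedy)},
\qquad\text{a saving of}\ \approx \frac{\lambda}{3}\ \text{per piece.}
```

Over Ω(k) such pieces the saving is Ω(kλ) = Ω(OPT), matching the slide's claim.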

crossing cuts
Hard case: |∂S_i*| ≈ mincut(G) for all i = 1, …, k-1, and the near-min-cuts don't cross (they form a laminar family).
Recall: there are only n^{2(1+ε)} many near-min-cuts, and they can be represented by a tree T.
Optimal cut = deleting k-1 incomparable edges in this tree.
Idea #4: a special algorithm for this "laminar cut" problem:
"Given G and T, delete k-1 incomparable edges to minimize the cut value."

special case: "partial vertex cover"
T = a star: cut k-1 edges to minimize the cut value.
Reduces to: given a graph H, pick k-1 nodes to minimize the weight of the edges hitting them.
Algorithm: pick the k-1 min-degree nodes. But this overcounts edges with both endpoints picked!
Case I: OPT ≥ k²/ε. The overcount is ≤ k² edges, an error of ≤ ε·OPT.
Case II: OPT ≤ k²/ε. Randomly color the edges red/blue and throw away the red edges;
with probability 2^{-k²/ε}, all intra-OPT edges are blue and all cut edges are red.
⇒ a (1+ε)-approximation in FPT time.
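A sketch of the min-degree heuristic and its overcount (naming is mine): the degree sum of the chosen nodes upper-bounds the true cost, and exceeds it by exactly the weight of edges internal to the chosen set, of which there are at most C(k-1, 2) ≤ k².

```python
def partial_vertex_cover(n, edges, k):
    """Pick the k-1 nodes of minimum weighted degree.
    Returns (true cost of edges hitting them, degree-sum upper bound);
    the bound overcounts each edge internal to the chosen set once."""
    deg = [0] * n
    for u, v, w in edges:
        deg[u] += w
        deg[v] += w
    chosen = set(sorted(range(n), key=deg.__getitem__)[:k - 1])
    cost = sum(w for u, v, w in edges if u in chosen or v in chosen)
    bound = sum(deg[v] for v in chosen)   # internal edges counted twice
    return cost, bound

# unit-weight triangle, k-1 = 2: cost 3, degree sum 4 (one internal edge)
print(partial_vertex_cover(3, [(0, 1, 1), (1, 2, 1), (0, 2, 1)], 3))  # → (3, 4)
```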

to recap
Theorem: 1.999-approximation for k-cut in time f(k)·poly(n).
The main ideas:
- ensure we haven't found the min-cut, else we win
- repeatedly find either a min-cut or a min 4-wise cut
- if many of these are "small", we win
- else the instance has most near-min-cuts non-crossing
- then use the special algorithm for this "laminar-cut" instance,
  which builds on the partial vertex cover ideas: a dynamic program that repeatedly calls partial vertex cover.

result #2: faster exact algos
Theorem: exact algorithm in n^{2(ωk/3)} time.
Today: an algorithm in n^{k(1+ω/3)} time.
Main ideas:
- Thorup's tree-packing theorem
- incomparable case: reduction to max-weight triangle
- general case: a careful dynamic program reducing the crossings

Thorup's tree theorem
We can efficiently find a tree T that crosses the OPT cut at most 2k-2 times.
Enumerating over subsets of its edges gives his n^{2k-2}-time algorithm.
Can we use the structure of k-cut to do better?

special case: k-1 incomparable crossings
Suppose the tree crosses OPT only k-1 times; say k-1 = 3.
Create a tripartite graph (parts V_1, V_2, V_3) whose "nodes" in each part are the edges of the tree;
link two "nodes" if the corresponding tree edges are incomparable.
The weight of a link captures the edges cut:
w(red, green) = weight of the graph edges leaving the red blob, except those going to the green blob.
Now find a max-weight triangle, in O(M·n^ω) time.
To remove the incomparability assumption: a careful DP.
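The final step of this reduction is a maximum-weight triangle search. The slide invokes an O(M·n^ω) matrix-multiplication method; the cubic brute-force stand-in below (my own naming and pair representation) just shows the object being computed:

```python
def max_weight_triangle(V1, V2, V3, w):
    """Heaviest triangle with one 'node' per part of a tripartite graph.
    w maps unordered pairs (as frozensets) to link weights; pairs absent
    from w are not linked. Returns (weight, (a, b, c)) or None."""
    best = None
    for a in V1:
        for b in V2:
            wab = w.get(frozenset((a, b)))
            if wab is None:
                continue
            for c in V3:
                wbc = w.get(frozenset((b, c)))
                wac = w.get(frozenset((a, c)))
                if wbc is None or wac is None:
                    continue
                total = wab + wbc + wac
                if best is None or total > best[0]:
                    best = (total, (a, b, c))
    return best
```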

reducing crossings
In general, the tree crosses OPT 2k-2 times.
Show that deleting a random edge from the tree and adding a random edge of G crossing the resulting cut satisfies
Pr[ the number of tree edges crossing OPT decreases ] = Ω(1/(n·k²)).

recap of result #2: faster exact algos
Today: an algorithm in n^{k(1+ω/3)} time.
Main ideas:
- Thorup's tree-packing theorem
- incomparable case: reduction to max-weight triangle
- general case: a delicate DP reducing the crossings

in summary
New algorithms for k-cut, in both the exact and approximate settings.
- fixed-parameter runtimes to get better approximations bring a new set of ideas into play
- tight results? extension to the hypergraph k-cut problem?
- better understanding of the interplay between FPT and approximation?
- better algorithms for other problems in FPT time? e.g., densest-k-subgraph in f(k)·poly(n) time?
Thanks!