Generalization and Specialization of Kernelization Daniel Lokshtanov

We Kernels ∃ ¬ Why?

What’s Wrong with Kernels (from a practitioner’s point of view)
1. They only handle NP-hard problems.
2. They don’t combine well with heuristics.
3. They only capture size reduction.
4. They don’t analyze lossy compression.
Doing something about (1) is a different field altogether. This talk: attacking (2). Some preliminary work on (4) → high-fidelity reductions.

“Kernels don’t combine with heuristics”?? Kernel mantra: “Never hurts to kernelize first, you don’t lose anything.” We don’t lose anything if, after kernelization, we solve the compressed instance exactly. But kernels do not necessarily preserve approximate solutions.

A kernel maps (I, k) to (I’, k’). In this talk, parameter = solution size / quality. A solution of size ≤ k’ for I’ maps back to a solution of size ≤ k for I. But does a solution of size 1.2k’ for I’ map back to a solution of size 1.2k for I?

Known/unknown k. We don’t know OPT in advance. Solutions:
- The parameter k is given and we only care whether OPT ≤ k or not.
- Try all values for k.
- Compute k ≈ OPT by an approximation algorithm → overhead.
If k > OPT, does kernelizing with k preserve OPT?

Buss kernel for Vertex Cover. Vertex Cover: find S ⊆ V(G) of size ≤ k such that every edge has an endpoint in S.
- Remove isolated vertices.
- Pick the neighbours of degree-1 vertices into the solution (and remove them).
- Pick vertices of degree > k into the solution and remove them.
The reduction rules are independent of k. The proof of correctness transforms any solution, not only an optimal one.
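The rules above are simple enough that a compact sketch may help. Below is a minimal, illustrative implementation; the adjacency-dict representation, the function names, and the assumption of a simple graph (no self-loops or parallel edges) are ours, not from the slides.

```python
# A minimal sketch of Buss' reduction rules for Vertex Cover, assuming the
# graph is a simple graph given as an adjacency dict {vertex: set of neighbours}.

def buss_kernel(adj, k):
    """Apply Buss' rules; return (reduced adjacency, picked vertices, remaining budget).

    A negative returned budget means the instance is a no-instance for the given k.
    """
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    forced = set()                               # vertices the rules put into the solution

    def remove(v):
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]

    changed = True
    while changed and k >= 0:
        changed = False
        for v in list(adj):
            if k < 0:
                break
            if v not in adj:
                continue
            deg = len(adj[v])
            if deg == 0:                 # rule: delete isolated vertices
                remove(v)
                changed = True
            elif deg == 1:               # rule: take the neighbour of a degree-1 vertex
                (u,) = adj[v]
                forced.add(u)
                remove(u)
                k -= 1
                changed = True
            elif deg > k:                # rule: take vertices of degree > k
                forced.add(v)
                remove(v)
                k -= 1
                changed = True
    return adj, forced, k
```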

Degree > k rule. Any solution of size ≤ k must contain all vertices of degree > k. We preserve all solutions of size ≤ k, but lose information about solutions of size > k.

Buss’ kernel for Vertex Cover:
- Find a 2-approximate solution S.
- Run the Buss kernelization with k = |S|.
This maps (I, k) to (I’, k’); a solution of size 1.2k’ for the reduced instance lifts to a solution of size 1.2k’ + (k − k’) ≤ 1.2k for the original.
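A hedged sketch of this two-step procedure: the slides do not fix which 2-approximation is used, so we take the folklore one via a greedy maximal matching, and hand its size to the `buss_kernel` function from the previous sketch.

```python
# Hypothetical wrapper around buss_kernel from the sketch above.

def matching_2_approx(adj):
    """Both endpoints of a greedy maximal matching form a 2-approximate cover."""
    cover, matched = set(), set()
    for v in adj:
        if v in matched:
            continue
        for u in adj[v]:
            if u not in matched:
                cover.update((v, u))
                matched.update((v, u))
                break
    return cover

def buss_prime_kernel(adj):
    """Buss' kernel: kernelize with k set to the size of a 2-approximate cover."""
    s = matching_2_approx(adj)
    return buss_kernel(adj, len(s))
```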

Buss’-kernel:
- Same size as the Buss kernel, O(k²), up to constants.
- Preserves approximate solutions, with no loss compared to the optimum in the compression and decompression steps.

NT-kernel. In fact, the Nemhauser–Trotter 2k-size kernel for Vertex Cover already has this property: the crown reduction rule is k-independent! Proof: exercise.

Other problems. For many problems, applying the rules with a value of k preserves all “nice” solutions of size ≤ k → approximation-preserving kernels. Example 2: Feedback Vertex Set, where we adapt an O(k²) kernel of [T09].

Feedback Vertex Set. FVS: Is there a subset S ⊆ V(G) of size ≤ k such that G \ S is acyclic?
- R1: Delete vertices of degree 0 and 1.
- R2: Replace degree-2 vertices by edges.
- R3: If v appears in > k cycles that intersect only in v, select v into S.
R1 and R2 preserve all reasonable solutions; R3 preserves all solutions of size ≤ k.
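A sketch of R1 and R2 follows, assuming networkx and a MultiGraph model, since R2 can create parallel edges (which encode 2-cycles). R3, R4, and the self-loop/2-cycle corner cases are deliberately left out; a full kernel must handle them.

```python
import networkx as nx

def fvs_reduce(G: nx.MultiGraph) -> nx.MultiGraph:
    """Exhaustively apply R1 (delete degree <= 1) and R2 (bypass degree-2 vertices)."""
    G = G.copy()
    changed = True
    while changed:
        changed = False
        for v in list(G.nodes):
            d = G.degree(v)  # parallel edges count separately; a self-loop counts twice
            if d <= 1:
                G.remove_node(v)                 # R1
                changed = True
            elif d == 2:
                nbrs = [u for _, u in G.edges(v)]
                if v in nbrs or nbrs[0] == nbrs[1]:
                    continue                     # self-loop / 2-cycle: not handled here
                G.remove_node(v)                 # R2: bypass v with a direct edge
                G.add_edge(nbrs[0], nbrs[1])
                changed = True
    return G
```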

Feedback Vertex Set R4 (handwave): If R1-R3 can’t be applied and there is a vertex x of degree > 8k, we can identify a set X such that in any feedback vertex set S of size ≤ k, either x ∈ S or X ⊆ S. R4 preserves all solutions of size ≤ k

Feedback Vertex Set kernel. Apply a 2-approximation algorithm for Feedback Vertex Set to find a set S, then apply the kernel with k = |S|. The kernel size is O(OPT²). This preserves approximate solutions, with no loss compared to the optimum in the compression step.

Remarks: If we don’t know OPT, we need an approximation algorithm. Most problems that have polynomial kernels also have constant-factor or at least poly(OPT) approximations. Using f(OPT)-approximations to set k results in larger kernel sizes for the approximation-preserving kernels.

Right definition? Approximation-preserving kernels for optimization problems, definition 1: in polynomial time, map an instance I to an instance I’ with |I’| ≤ poly(OPT); then, in polynomial time, lift any c·OPT’ solution of I’ to a c·OPT solution of I.

Right definition? Approximation-preserving kernels for optimization problems, definition 2: in polynomial time, map an instance I to an instance I’ with |I’| ≤ poly(OPT); then, in polynomial time, lift any solution of size OPT’ + t of I’ to a solution of size OPT + t of I.

What is the right definition? Definition 1 captures more, but Definition 2 seems to capture most (all?) positive answers. Other reasonable variants exist that are not necessarily equivalent.

What do approximation-preserving kernels give you? When do they help in terms of provable running times? If Π has a PTAS or an EPTAS, and an approximation-preserving kernel, we get (E)PTASes with running time f(ε)·poly(OPT) + poly(n) or OPT^{f(ε)} + poly(n).
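Spelling out the arithmetic behind the first running time (our reading of the slide, not an additional claim): kernelize first in polynomial time, then run the EPTAS on the kernel, whose size is bounded by poly(OPT).

```latex
\[
  \underbrace{\mathrm{poly}(n)}_{\text{apx-preserving kernel}}
  \;+\;
  \underbrace{f(\varepsilon)\cdot\mathrm{poly}(\mathrm{OPT})}_{\text{EPTAS on an instance of size } \mathrm{poly}(\mathrm{OPT})}
\]
```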

Problems on planar (minor-free) graphs. Many problems on planar graphs and H-minor-free graphs admit EPTASes and have linear kernels. Can we make these kernels approximation preserving? These kernels have only one reduction rule: the protrusion rule (to rule them all).

Protrusions. A set S ⊆ V(G) is an r-protrusion if:
- at most r vertices in S have neighbours outside S, and
- the treewidth of G[S] is at most r.
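The definition is directly checkable, up to the hardness of computing treewidth exactly. A hedged sketch using networkx's treewidth heuristic: the heuristic only upper-bounds the true treewidth, so this check can certify that S is an r-protrusion but cannot always refute it.

```python
import networkx as nx
from networkx.algorithms.approximation import treewidth_min_degree

def is_r_protrusion(G: nx.Graph, S: set, r: int) -> bool:
    """One-sided certificate that S is an r-protrusion of G."""
    # Boundary: vertices of S with at least one neighbour outside S.
    boundary = {v for v in S if any(u not in S for u in G.neighbors(v))}
    if len(boundary) > r:
        return False
    width, _decomposition = treewidth_min_degree(G.subgraph(S))
    return width <= r  # heuristic width >= true treewidth, hence one-sided
```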

Protrusion rule. A protrusion rule takes a graph G with an r-protrusion S of size > c and outputs an equivalent instance G’ with |V(G’)| < |V(G)|. Usually, the entire part G[S] is replaced by a different, smaller protrusion that “emulates” the behaviour of S. The constant c depends on the problem and on r.

Kernels on Planar Graphs [BFLPST09]: For many problems, a protrusion rule is sufficient to give a linear kernel on planar graphs. To make these kernels apx-preserving, we need an apx-preserving protrusion rule.

Apx-preserving protrusion rule: in polynomial time, map I to I’ with |I’| < |I| and OPT’ ≤ OPT; then, in polynomial time, lift any solution of size OPT’ + t of I’ to a solution of size OPT + t of I.

Kernels on Planar Graphs. [BFLPST09]:
– If a problem has finite integer index ⇒ it has a protrusion rule.
– There is a simple-to-check sufficient condition for a problem to have finite integer index.
Finite integer index is not enough for an apx-preserving protrusion rule. But the sufficient condition is!

t-boundaried graphs. A t-boundaried graph is a graph G with t distinguished vertices labelled from 1 to t. These vertices are called the boundary of G. G can be colored, i.e. supplied with some vertex/edge sets C1, C2, …

Gluing. Gluing two colored t-boundaried graphs, (G1, C1, C2) ⊕ (G2, D1, D2) = (G3, C1 ∪ D1, C2 ∪ D2), means identifying the boundary vertices with the same label; vertices keep their colors.
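Gluing is easy to express in code once the boundary vertices of both graphs carry the node labels 1..t. A sketch assuming that representation, with colour classes keyed by index so that class i of G1 merges with class i of G2 (C_i ∪ D_i), and with non-boundary vertex names assumed disjoint:

```python
import networkx as nx

def glue(G1: nx.Graph, colours1: dict, G2: nx.Graph, colours2: dict):
    """(G1, C1, C2, ...) ⊕ (G2, D1, D2, ...): identify equally labelled boundary vertices."""
    # nx.compose merges nodes with equal names, so the boundary labels 1..t
    # are identified automatically; all other vertex names must be disjoint.
    G3 = nx.compose(G1, G2)
    # Vertices keep their colours: class i of the glued graph is C_i ∪ D_i.
    colours3 = {
        i: colours1.get(i, set()) | colours2.get(i, set())
        for i in set(colours1) | set(colours2)
    }
    return G3, colours3
```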

Canonical equivalence. For a property Φ of 1-colored graphs we define the equivalence relation ≣Φ on the set of t-boundaried c-colored graphs: (G1, X1) ≣Φ (G2, X2) ⇔ for every (G’, X’): Φ(G1 ⊕ G’, X1 ∪ X’) ⇔ Φ(G2 ⊕ G’, X2 ∪ X’). It can be defined in the same way for problems with more colors.

Canonical equivalence. (G1, X) ≣Φ (G2, Y) means “gluing (G1, X) onto something has the same effect as gluing (G2, Y) onto it”.

Finite state. Φ is finite state if, for every integer t, ≣Φ has a finite number of equivalence classes on t-boundaried graphs. Note: the number of equivalence classes is a function f(Φ, t) of Φ and t.

Variant of Courcelle’s Theorem. Finite State Theorem (FST): if Φ is CMSOL-definable, then Φ is finite state. CMSOL provides:
- Quantifiers ∃ and ∀ over vertex sets, edge sets, vertices and edges.
- Operators = and ∊.
- Operators inc(v, e) and adj(u, v).
- Logical operators ∧, ∨ and ¬.
- A size-modulo-fixed-integers operator eqmod_{p,q}(S).
Example: p(G, S) = “S is an independent set of G”: p(G, S) = ∀u, v ∊ S: ¬adj(u, v).

CMSOL optimization problems for colored graphs. Φ-Optimization. Input: G, C1, …, Cx. Max/min |S| so that Φ(G, C1, …, Cx, S) holds, where Φ is a CMSOL-definable proposition.

Sufficient condition. [BFLPST09]: if a CMSO-optimization problem Π is strongly monotone ⇒ Π has finite integer index ⇒ it has a protrusion rule. Here: if a CMSO-optimization problem Π is strongly monotone ⇒ Π has an apx-preserving protrusion rule.

Signatures (for minimization problems). [Figure: a graph G glued to graphs H1, H2, H3 with partial solutions S_H1, S_H2, S_H3 and the sizes of their smallest completions in G.] Choose the smallest S ⊆ V(G) to make Φ hold. Intuition: f(H, S) returns the best way to complete, in G, a fixed partial solution in H.

Signatures (for minimization problems). The signature of a t-boundaried graph G is a function f_G with: Input: a t-boundaried graph H and S_H ⊆ V(H). Output: the size of the smallest S_G ⊆ V(G) such that Φ(G ⊕ H, S_G ∪ S_H) holds, or ∞ if no such S_G exists.

Strong monotonicity (for minimization problems). A problem Π is strongly monotone if, for some function g, for any t-boundaried G there is a vertex set Z ⊆ V(G) such that |Z| ≤ f_G(H, S) + g(t) for every (H, S). Here f_G(H, S) is the signature of G evaluated at (H, S): the size of the smallest S’ ⊆ V(G) such that S’ ∪ S is a feasible solution of G ⊕ H.

Strong monotonicity, intuition: a problem is strongly monotone if for any t-boundaried G there ∃ a partial solution S that can be glued onto “anything”, and S is only g(t) larger than the smallest partial solution in G.

Super Strong Monotonicity Theorem. Theorem: if a CMSO-optimization problem Π is strongly monotone, then it has an apx-preserving protrusion rule. Corollary: all bidimensional’, strongly monotone CMSO-optimization problems Π have linear-size apx-preserving kernels on planar graphs.

Proof of SSMT. Lemma 1: let G1 and G2 be t-boundaried graphs of constant treewidth, let f1 and f2 be the signatures of G1 and G2, and let c be an integer such that for any H and S_H ⊆ V(H): f1(H, S_H) + c = f2(H, S_H). Then, in polynomial time, a feasible solution Z1 of G1 ⊕ H can be turned into a feasible solution Z2 of G2 ⊕ H (increasing the size by c), and a feasible solution Z2 of G2 ⊕ H into a feasible solution Z1 of G1 ⊕ H (decreasing the size by c).

Proof of Lemma 1. [Figure: the G2-part of G2 ⊕ H is swapped for G1, decreasing the solution size by c.] Poly time? Constant treewidth!

Proof of SSMT. Lemma 2: if a CMSO-min problem Π is strongly monotone, then for every t there exists a finite collection F of t-boundaried graphs such that for every G1 there are a G2 ∈ F and a c ≥ 0 such that for any H and S_H ⊆ V(H): f1(H, S_H) + c = f2(H, S_H).

SSMT = Lemma 1 + Lemma 2. Keep a list F of t-boundaried graphs as guaranteed by Lemma 2. Replace large protrusions by the corresponding graph in F. Lemma 1 gives correctness.

Proof of Lemma 2. [Figure: the signature values of G1 and G2 plotted over the pairs (H1, S1), (H2, S2), …, (H8, S8), …; the two curves stay within ≤ g(t) of each other.]

Proof of Lemma 2. Up to translation, there is only a constant number of finite integer curves that satisfy max − min ≤ t, but an infinite number of infinite such curves. Since Π is a min-CMSO problem, we only need to consider the signature of G on a finite number of pairs (H_i, S_i).

Super Strong Monotonicity Theorem (recap). Theorem: if a CMSO-optimization problem Π is strongly monotone, then it has an apx-preserving protrusion rule. Corollary: all bidimensional’, strongly monotone CMSO-optimization problems Π have linear-size apx-preserving kernels on planar graphs.

Recap. Approximation-preserving kernels are much closer to the kernelization “no loss” mantra. It looks like most kernels can be made approximation preserving at a small cost. Is it possible to prove that some problems have smaller kernels than apx-preserving kernels?

What I was planning to talk about, but didn’t: “kernels” that do not reduce size, but rather reduce one parameter to a function of another in polynomial time.
– This IS pre-processing.
– Many, many examples exist already.
– It fits well into Mike’s “multivariate” universe.