1
An Algorithmic Proof of the Lovász Local Lemma via Resampling Oracles
Nick Harvey (University of British Columbia) and Jan Vondrák (IBM Almaden)
2
Lovász Local Lemma [Erdős-Lovász '75]: Let E_1, E_2, …, E_n be events in a probability space. Suppose each E_i is jointly independent of all but d of the other events, and P[E_i] ≤ 1/(ed) for all i. Then P[∩_i Ē_i] > 0 (though typically exponentially small).
Simultaneously avoiding many events. A mysterious statement about nature: rare objects can randomly appear.
3
Lovász Local Lemma [Erdős-Lovász '75]: Let E_1, E_2, …, E_n be events in a probability space. Suppose each E_i is jointly independent of all but d of the other events, and P[E_i] ≤ 1/(ed) for all i. Then P[∩_i Ē_i] > 0 (though typically exponentially small).
Some applications:
- Erdős' conjecture on covering systems [Hough]
- Near-Ramanujan expanders via 2-lifts [Bilu-Linial]
- O(1)-approximation for the Santa Claus problem [Feige], [Haeupler-Saha-Srinivasan]
- Randomly partitioning an expander into two expanders [Frieze-Molloy]
4
Example: 2-coloring hypergraphs
Given: a system of sets of size k, each intersecting at most 2^{k-1}/e other sets.
Goal: color the vertices red/blue so that every set contains both colors.
Random approach: color each vertex independently red or blue. Let E_i be the event that the i-th set is all-red or all-blue; then P[E_i] = 2^{1-k}. Each E_i is independent of all but d := 2^{k-1}/e of the other events. Since P[E_i] ≤ 1/(ed), the LLL implies P[∩_i Ē_i] > 0. So there is a coloring in which no set is all-red or all-blue.
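The arithmetic on this slide is easy to sanity-check in code. The sketch below is not from the talk (the helper names are made up for illustration); it computes P[E_i] exactly by enumeration and verifies the condition P[E_i] ≤ 1/(ed) with d = 2^{k-1}/e:

```python
import math
from itertools import product

def prob_monochromatic(k):
    """Exact probability that one k-element set is all-red or all-blue
    under an independent uniform red/blue coloring."""
    colorings = list(product("RB", repeat=k))
    mono = sum(1 for c in colorings if len(set(c)) == 1)
    return mono / len(colorings)

def lll_condition_holds(k):
    """Check P[E_i] <= 1/(e*d) with d = 2^(k-1)/e, as on the slide."""
    p = prob_monochromatic(k)
    d = 2 ** (k - 1) / math.e
    return p <= 1 / (math.e * d) + 1e-12

for k in range(2, 10):
    assert prob_monochromatic(k) == 2 ** (1 - k)   # P[E_i] = 2^(1-k)
    assert lll_condition_holds(k)                  # holds with equality here
```

With this choice of d the LLL condition holds with equality, which is why the slide's bound on the overlap is tight.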
5
Algorithmic LLL
Under the original distribution P, it is possible, but unlikely, to avoid all of the events E_1, E_2, …, E_n. Can we find another distribution (i.e., a randomized algorithm) under which avoiding E_1, E_2, …, E_n is likely?
Humans cheating nature: amplifying the probability of a rare event.
6
Algorithmic LLL
Under the original distribution P, it is possible, but unlikely, to avoid all of the events E_1, E_2, …, E_n. Can we find another distribution (i.e., a randomized algorithm) under which avoiding E_1, E_2, …, E_n is likely?
Yes, for the hypergraph case with weaker parameters: [Beck '91], [Alon '91], [Molloy-Reed '98], [Czumaj-Scheideler '00], [Srinivasan '08].
Breakthrough [Moser-Tardos '08]: a linear-time algorithm for the LLL in "the variable model".
7
The Variable Model (dependency based on shared variables)
- Independent random variables Y_1, …, Y_m.
- "Bad" events E_1, E_2, …, E_n, where E_i depends on the variables var(E_i).
- Dependency graph G: i ~ j iff var(E_i) ∩ var(E_j) ≠ ∅ (events sharing variables are neighbors).
Moser-Tardos Algorithm [Moser-Tardos '08]:
  While some event E_i occurs:
    Resample all variables in var(E_i)
Theorem: Suppose G has max degree < d and P[E_i] ≤ 1/(ed). The algorithm finds a point in ∩_i Ē_i after O(n) resampling operations, in expectation.
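The two-line algorithm above can be sketched in a few lines of Python, instantiated for hypergraph 2-coloring. This is an illustration, not the authors' code; the toy instance and function names are made up:

```python
import random

def moser_tardos(num_vars, events, sample):
    """Moser-Tardos algorithm in the variable model (a sketch).

    events: list of (vars, pred) pairs, where vars is var(E_i) and
    pred(y) is True iff the bad event E_i occurs under assignment y.
    sample(j) draws a fresh value for variable Y_j.
    """
    y = [sample(j) for j in range(num_vars)]
    while True:
        bad = next((vs for vs, pred in events if pred(y)), None)
        if bad is None:
            return y                      # no bad event occurs: done
        for j in bad:                     # resample only var(E_i)
            y[j] = sample(j)

# Toy instance: hypergraph 2-coloring, avoid monochromatic sets.
random.seed(1)
sets = [(0, 1, 2), (2, 3, 4), (4, 5, 0)]
events = [(s, (lambda s: lambda y: len({y[j] for j in s}) == 1)(s))
          for s in sets]
coloring = moser_tardos(6, events, lambda j: random.choice("RB"))
assert all(len({coloring[j] for j in s}) > 1 for s in sets)
```

The key point the theorem makes precise is that resampling only the variables of the violated event, rather than everything, suffices for fast convergence.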
8
Algorithmic Improvements
Many extensions of Moser-Tardos:
- deterministic LLL algorithm [Chandrasekaran-Goyal-Haeupler '10]
- exponentially many events [Haeupler-Saha-Srinivasan '10]
- better conditions on the probabilities [Kolipaka-Szegedy '11], [Harris '14]
All require the variable model (dependencies based on Y_1, …, Y_m), e.g. hypergraph coloring and k-SAT: (x_1 ∨ x_2 ∨ x_3) ∧ (x_4 ∨ x_5 ∨ x_6) ∧ …
9
Algorithmic Improvements
The original LLL works for arbitrary probability spaces:
- permutations [Erdős-Spencer '91]
- Hamilton cycles [Albert-Frieze-Reed '95]
- spanning trees [Lu-Mohr-Székely '13]
Algorithms beyond the variable model:
- permutations [Harris-Srinivasan '14]
- abstract "flaw-correction" framework [Achlioptas-Iliopoulos '14]
10
Algorithmic Local Lemma for general probability spaces?
For the LLL in any probability space, can we design a randomized algorithm to quickly find ω ∈ ∩_i Ē_i?
Theorem [Harvey-Vondrák '15]: There is an LLL scenario where Ω = {0,1}^n and P is uniform, but finding ω is discrete-log hard. So some assumptions are necessary to get an efficient algorithm.
11
Algorithmic Local Lemma for general probability spaces?
For the LLL in any probability space, can we design a randomized algorithm to quickly find ω ∈ ∩_i Ē_i?
How can an algorithm "move about" in a general probability space?
- Flaw-correcting actions [Achlioptas-Iliopoulos '14]
- Resampling oracles [this paper]
12
Resampling Oracles
Consider a probability space Ω with measure P, events E_1, E_2, …, E_n, and a dependency relation denoted ~. A resampling oracle for E_i is a random function r_i : Ω → Ω such that:
- it removes the conditioning on E_i: if X has distribution P conditioned on E_i, then r_i(X) has distribution P;
- it does not cause non-neighbor events: if E_k ≁ E_i and X ∉ E_k, then r_i(X) ∉ E_k.
13
Resampling Oracles
Consider a probability space Ω with measure P, events E_1, E_2, …, E_n, and a dependency relation denoted ~. A resampling oracle for E_i is a random function r_i : Ω → Ω such that:
- it removes the conditioning on E_i: if X has distribution P conditioned on E_i, then r_i(X) has distribution P;
- it does not cause non-neighbor events: if E_k ≁ E_i and X ∉ E_k, then r_i(X) ∉ E_k.
Example: a resampling oracle for hypergraph coloring is exactly the Moser-Tardos step: resample the colors of all vertices of the monochromatic set.
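The first oracle property can be checked exactly on a tiny example. The sketch below is illustrative only (the instance is made up): it enumerates a 3-vertex coloring space and confirms that starting from P conditioned on E_S and applying the Moser-Tardos resampling step yields the uniform distribution again:

```python
import itertools
import random
from collections import Counter

def resample_oracle(coloring, S, rng=random):
    """Moser-Tardos resampling step for E_S = {set S is monochromatic}:
    redraw the colors of the vertices in S independently and uniformly."""
    new = list(coloring)
    for v in S:
        new[v] = rng.choice("RB")
    return tuple(new)

# Exact check of the oracle property on a tiny space: 3 vertices, S = {0, 1}.
# If X has distribution P conditioned on E_S, then r_S(X) has distribution P.
n, S = 3, (0, 1)
conditioned = [c for c in itertools.product("RB", repeat=n)
               if len({c[v] for v in S}) == 1]            # X ~ P | E_S
out = Counter()
for c in conditioned:
    for fresh in itertools.product("RB", repeat=len(S)):  # oracle coin flips
        new = list(c)
        for v, col in zip(S, fresh):
            new[v] = col
        out[tuple(new)] += 1
# Output is uniform over all 2^n colorings.
assert len(out) == 2 ** n
assert len(set(out.values())) == 1
```

The second property (not causing non-neighbor events) holds here too, since the step never touches vertices outside S.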
14
Our Main Result: An Algorithmic LLL in a General Setting
Setting: an arbitrary probability space Ω; events E_1, E_2, …, E_n; an arbitrary graph G; and a resampling oracle for each E_i with respect to G.
Theorem [Harvey-Vondrák '15]: Suppose G has max degree < d and P[E_i] ≤ 1/(ed). Our algorithm finds a point in ∩_i Ē_i after O(n²) resampling operations, with high probability.
This holds much more generally: under Lovász's conditions, Shearer's conditions, …
15
Our Main Result: An Algorithmic LLL in a General Setting
Theorem [Harvey-Vondrák '15]: Suppose G has max degree < d and P[E_i] ≤ 1/(ed). Our algorithm finds a point in ∩_i Ē_i after O(n²) resampling operations, with high probability. This holds much more generally: under Lovász's conditions, Shearer's conditions, …
We design efficient resampling oracles for essentially every known application of the LLL (and its generalizations).
16
Resampling Spanning Trees in K_n
Let T be a uniformly random spanning tree of K_n. For an edge set A, let E_A = {A ⊆ T}.
Dependency graph: make E_A a neighbor of E_B unless A and B are vertex-disjoint.
Resampling oracle r_A: if T is uniform conditioned on E_A, we want r_A(T) to be uniform, but r_A should not disturb the edges of T that are vertex-disjoint from A.
17
E_A = {A ⊆ T}. Resampling oracle r_A(T):
- contract the edges of T that are vertex-disjoint from A;
- delete the edges adjacent to A;
- let r_A(T) be a uniformly random spanning tree of the resulting multigraph.
Lemma: r_A(T) is uniformly random.
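The subroutine this oracle needs, sampling a uniformly random spanning tree of a (multi)graph, can be sketched with the Aldous-Broder random-walk algorithm, one standard way to sample uniform spanning trees. This toy code is not from the talk and omits the contraction and deletion bookkeeping:

```python
import random

def aldous_broder(adj, rng=random):
    """Uniform spanning tree of a connected multigraph via Aldous-Broder:
    do a random walk; the first-entry edge into each new vertex joins the
    tree.  adj maps vertex -> list of (neighbor, edge_id) pairs."""
    verts = list(adj)
    cur = rng.choice(verts)
    seen, tree = {cur}, []
    while len(seen) < len(verts):
        nxt, eid = rng.choice(adj[cur])
        if nxt not in seen:
            seen.add(nxt)
            tree.append(eid)
        cur = nxt
    return tree

# Sanity check on a triangle: all three spanning trees appear.
random.seed(0)
adj = {0: [(1, "a"), (2, "c")],
       1: [(0, "a"), (2, "b")],
       2: [(1, "b"), (0, "c")]}
samples = {tuple(sorted(aldous_broder(adj))) for _ in range(300)}
assert samples == {("a", "b"), ("a", "c"), ("b", "c")}
```

Because parallel edges created by contraction are tracked by edge id, the same walk works unchanged on the contracted multigraph that r_A builds.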
18
LLL via Resampling Oracles
Given resampling oracles, how do we get an algorithm for the LLL?
19
Algorithmic LLL via Resampling Oracles
Similar to Moser & Tardos' parallel algorithm; each round resembles finding a maximal independent set.
MIS-Resample:
  Draw ω from P
  Repeat:
    J ← ∅
    While there is j ∉ Γ⁺(J) such that E_j occurs at ω:
      Pick the smallest such j
      ω ← r_j(ω)   (resample E_j)
      J ← J ∪ {j}
  Until J = ∅
  Output ω
(Γ⁺(J) denotes the inclusive neighborhood of J.)
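The loop above can be sketched in Python with abstract events and oracles. The instance, names, and dependency graph below are made up for illustration; any probability space with resampling oracles plugs into the same interface:

```python
import random

def maximal_set_resample(omega, events, oracles, neighbors):
    """Sketch of the MIS-Resample loop with abstract resampling oracles.

    events[i](omega)  -> True iff E_i occurs at omega
    oracles[i](omega) -> a resampled state r_i(omega)
    neighbors[i]      -> inclusive neighborhood Γ⁺({i}) as a set of indices
    """
    n = len(events)
    while True:
        J, blocked = set(), set()
        while True:
            # smallest j not in Γ⁺(J) whose event occurs
            j = next((j for j in range(n)
                      if j not in blocked and events[j](omega)), None)
            if j is None:
                break
            omega = oracles[j](omega)    # ω ← r_j(ω)
            J.add(j)
            blocked |= neighbors[j]
        if not J:                        # no event occurred at all: done
            return omega

# Instantiation for hypergraph 2-coloring (toy instance).
random.seed(3)
sets = [(0, 1, 2), (2, 3, 4)]
events = [(lambda s: lambda w: len({w[v] for v in s}) == 1)(s) for s in sets]
def make_oracle(s):
    def r(w):
        w = list(w)
        for v in s:
            w[v] = random.choice("RB")
        return w
    return r
oracles = [make_oracle(s) for s in sets]
nbrs = [{j for j, t in enumerate(sets) if set(s) & set(t)} for s in sets]
w = maximal_set_resample([random.choice("RB") for _ in range(5)],
                         events, oracles, nbrs)
assert not any(e(w) for e in events)
```

Each pass of the inner loop resamples a maximal independent set of occurring events, which is the structure the analysis on the following slides exploits.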
20
Algorithmic LLL via Resampling Oracles
A useful property of the algorithm: let J_t be the set J built in iteration t of the outer loop. In iteration t+1, every event that occurs lies in Γ⁺(J_t), so J_{t+1} ⊆ Γ⁺(J_t) for all t.
21
Algorithmic LLL via Resampling Oracles
A thought experiment: the resampling oracle r_i for E_i guarantees that if ω has distribution P conditioned on E_i, then r_i(ω) has distribution P. Does the algorithm's ω always have distribution P? No! ω is also conditioned on other events not occurring.
22
Analysis
Definition: Seq = { (I_1, …, I_ℓ) : I_t ≠ ∅ and I_{t+1} ⊆ Γ⁺(I_t) for all t }.
Two ingredients:
1. Coupling Lemma: for any sequence (I_1, …, I_ℓ) in Seq, P[the algorithm resamples I_1, …, I_ℓ (i.e., J_t = I_t for all t)] ≤ ∏_{t=1}^{ℓ} ∏_{i∈I_t} P[E_i].
2. Bound for all sequences: ∑_{(I_1,…,I_ℓ)∈Seq} ∏_{t=1}^{ℓ} ∏_{i∈I_t} P[E_i] is small.
23
1. Coupling Lemma
Couple Alg with the following Alg(I_1, …, I_ℓ):
  Draw ω from P
  For t = 1, …, ℓ:
    For each j ∈ I_t (in a fixed order):
      If E_j occurs at ω: ω ← r_j(ω)
      Else: FAIL
Then {Alg resamples I_1, …, I_ℓ} ⊆ {Alg(I_1, …, I_ℓ) succeeds}, so
P[Alg resamples I_1, …, I_ℓ] ≤ P[Alg(I_1, …, I_ℓ) succeeds].
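The point of the coupling is that each oracle call restores the distribution P, so P[Alg(I_1, …, I_ℓ) succeeds] is exactly the product of the event probabilities. The toy computation below is not from the talk; it verifies this exactly, with rational arithmetic, on a 3-vertex coloring space with a single monochromatic-set event resampled repeatedly:

```python
import itertools
from fractions import Fraction

# Toy check: Ω = {R,B}^3 with uniform P, one event E_S = {S monochromatic}
# for S = {0, 1}, resampled `steps` times.  Because the oracle restores P
# after each resampling, P[Alg succeeds] should be P[E_S]^steps = (1/2)^steps.
S = (0, 1)

def succ_prob(steps):
    """P[Alg succeeds] by exact enumeration over all randomness."""
    states = {c: Fraction(1, 8) for c in itertools.product("RB", repeat=3)}
    for _ in range(steps):
        nxt = {}
        for c, pr in states.items():
            if len({c[v] for v in S}) != 1:
                continue                      # E_S does not occur: FAIL
            for fresh in itertools.product("RB", repeat=len(S)):
                new = list(c)
                for v, col in zip(S, fresh):
                    new[v] = col
                key = tuple(new)
                nxt[key] = nxt.get(key, Fraction(0)) + pr * Fraction(1, 4)
        states = nxt
    return sum(states.values())

assert succ_prob(1) == Fraction(1, 2)
assert succ_prob(2) == Fraction(1, 4)
```

The surviving probability mass stays uniform after every step, which is the oracle property at work.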
25
2. Bound for all sequences
Suppose G has max degree < d and P[E_i] ≤ p := 1/(ed).
Claim: For fixed I_1, the sum over all sequences (I_1, …, I_ℓ) ∈ Seq of ∏_{t=1}^{ℓ} ∏_{i∈I_t} p is at most (ep)^{|I_1|}.
Proof sketch, by induction on ℓ. For ℓ > 1, peel off I_1:
  sum = p^{|I_1|} · ∑_{∅ ≠ I_2 ⊆ Γ⁺(I_1)} (sum over sequences starting at I_2)
      ≤ p^{|I_1|} · ∑_{I_2 ⊆ Γ⁺(I_1)} (ep)^{|I_2|}    (induction)
      = p^{|I_1|} · (1 + ep)^{|Γ⁺(I_1)|}              (binomial formula)
      ≤ p^{|I_1|} · e^{ep·d·|I_1|}                    (degree bound; 1 + x ≤ e^x)
      = (ep)^{|I_1|}                                  (by the LLL condition, epd = 1).
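The claim can be checked numerically on a toy dependency graph. In the sketch below the graph and parameters are made up for illustration: events form a path, so the max degree is 2 < d := 3, and f(I, ℓ) computes the sum over sequences by the same peel-off recursion used in the proof:

```python
import math
from itertools import combinations

# Toy dependency graph: a path on 4 events, so max degree 2 < d := 3.
nbrs = {0: {0, 1}, 1: {0, 1, 2}, 2: {1, 2, 3}, 3: {2, 3}}  # inclusive Γ⁺({i})
d = 3
p = 1 / (math.e * d)

def gamma_plus(I):
    return frozenset().union(*(nbrs[i] for i in I))

def nonempty_subsets(S):
    S = sorted(S)
    return [frozenset(c) for r in range(1, len(S) + 1)
            for c in combinations(S, r)]

def f(I, ell, memo={}):
    """Sum of prod_t prod_{i in I_t} p over all sequences (I_1 = I, ..., I_ell)
    in Seq, computed by the same peel-off recursion as the proof."""
    key = (I, ell)
    if key not in memo:
        total = p ** len(I)
        if ell > 1:
            total *= sum(f(J, ell - 1)
                         for J in nonempty_subsets(gamma_plus(I)))
        memo[key] = total
    return memo[key]

# The claim: for every I_1 and every length, the sum is at most (e*p)^|I_1|.
for ell in range(1, 7):
    for I in nonempty_subsets({0, 1, 2, 3}):
        assert f(I, ell) <= (math.e * p) ** len(I) + 1e-12
```

Memoizing on (I, ℓ) keeps the recursion fast even though the number of raw sequences grows exponentially in ℓ.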
26
Now assume slack c ∈ (0, 1): suppose G has max degree < d and P[E_i] ≤ p := c/(ed). The same proof goes through (peel off I_1, induction, binomial formula, degree bound), with the final factor e improved to e^c; each level of the sequence then contributes a factor of c, so the bound over length-ℓ sequences decays geometrically in ℓ.
27
Analysis (conclusion)
Recall Seq = { (I_1, …, I_ℓ) : I_t ≠ ∅, I_{t+1} ⊆ Γ⁺(I_t) } and the two ingredients:
1. Coupling Lemma: for any (I_1, …, I_ℓ) in Seq, P[the algorithm resamples I_1, …, I_ℓ] ≤ ∏_t ∏_{i∈I_t} P[E_i].
2. Bound for all sequences: for fixed I_1, the sum of these products over Seq is small.
Conclusion: P[the algorithm needs ≥ ℓ resamplings] = ∑_{(I_1,…,I_ℓ)∈Seq} P[the algorithm resamples I_1, …, I_ℓ] ≤ 2^n · c^ℓ.
28
Summary
- The LLL can be computationally hard; some assumptions are necessary for algorithms.
- Our algorithmic proof of the LLL works for any probability space and any events, under the usual LLL conditions, as long as you can design resampling oracles.
- Efficiency is similar to Moser-Tardos, but quadratically worse; the analysis has a similar structure.
- The result generalizes to work under Shearer's condition.