Discrepancy and SDPs
Nikhil Bansal (TU Eindhoven, Netherlands)
August 24, ISMP 2012, Berlin
Outline
Discrepancy theory: what is it, applications, basic results (non-constructive).
The SDP connection: algorithms, lower bounds.
Discrepancy Theory: What is it? The study of the discrepancy between self-perception and reality.
Discrepancy: What is it? The study of irregularities in approximating the continuous by the discrete. Historical motivation: numerical integration and sampling. How well can you approximate a region by discrete points?
Discrepancy: What is it? Problem: how uniformly can you distribute n points in an n^{1/2} × n^{1/2} grid? "Uniform": for every axis-parallel rectangle R, |(# points in R) - (area of R)| should be low. Discrepancy = max over rectangles R of |(# points in R) - (area of R)|.
Distributing points in a grid. Problem: how uniformly can you distribute points in a grid? "Uniform": for every axis-parallel rectangle R, |(# points in R) - (area of R)| should be low. Example with n = 64 points: a uniform random set has discrepancy about n^{1/2} (more precisely, about n^{1/2}(log log n)^{1/2}), while the Van der Corput set achieves O(log n) discrepancy!
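The comparison is easy to reproduce. Below is a small illustrative Python sketch, not from the talk: it builds the 2-D Van der Corput (Hammersley) set and estimates discrepancy over corner rectangles [0,x] × [0,y] in the unit square as a simple proxy; the function names and the corner-rectangle proxy are my own choices.

```python
# Illustrative sketch: compare the corner-rectangle discrepancy of a uniform random
# point set with the 2-D Van der Corput (Hammersley) set in the unit square.
import random

def van_der_corput(i, base=2):
    """Radical inverse of the integer i in the given base, a value in [0, 1)."""
    q, denom = 0.0, 1.0
    while i > 0:
        denom *= base
        q += (i % base) / denom
        i //= base
    return q

def hammersley(n):
    """2-D low-discrepancy set: point i is (i/n, radical inverse of i)."""
    return [(i / n, van_der_corput(i)) for i in range(n)]

def corner_discrepancy(points):
    """max over corner rectangles [0,x] x [0,y] of |n*area - #points|, checked at point coordinates."""
    n = len(points)
    xs = sorted(p[0] for p in points) + [1.0]
    ys = sorted(p[1] for p in points) + [1.0]
    worst = 0.0
    for x in xs:
        for y in ys:
            count = sum(1 for (px, py) in points if px <= x and py <= y)
            worst = max(worst, abs(n * x * y - count))
    return worst

n = 64
random_pts = [(random.random(), random.random()) for _ in range(n)]
print("random set:        ", corner_discrepancy(random_pts))
print("Van der Corput set:", corner_discrepancy(hammersley(n)))
```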
Quasi-Monte Carlo Methods. (*Different constant of proportionality.)
Discrepancy: Example 2. Input: n points placed arbitrarily in a grid. Color them red/blue such that each axis-parallel rectangle is colored as evenly as possible. Discrepancy = max over rectangles R of |# red in R - # blue in R|. Continuous: color each point 1/2 red and 1/2 blue (0 discrepancy). Discrete: a random coloring gives about O(n^{1/2} log^{1/2} n); one can achieve O(log^{2.5} n). Why do we care?
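A brute-force check of this quantity is straightforward for small inputs; the sketch below (my own, not from the talk) computes the rectangle discrepancy of a given red/blue coloring by enumerating rectangles whose sides pass through point coordinates.

```python
# Illustrative sketch: rectangle discrepancy of a given red/blue coloring of planar
# points, i.e. max over axis-parallel rectangles R of |#red in R - #blue in R|.
# Checking rectangles with sides through point coordinates suffices, since any
# rectangle can be shrunk to the bounding box of the points it contains.
from itertools import combinations_with_replacement

def rect_discrepancy(points, color):           # color[i] = +1 (red) or -1 (blue)
    xs = sorted({p[0] for p in points})
    ys = sorted({p[1] for p in points})
    worst = 0
    for x1, x2 in combinations_with_replacement(xs, 2):
        for y1, y2 in combinations_with_replacement(ys, 2):
            s = sum(c for (px, py), c in zip(points, color)
                    if x1 <= px <= x2 and y1 <= py <= y2)
            worst = max(worst, abs(s))
    return worst

pts = [(0, 0), (1, 2), (2, 1), (3, 3)]
print(rect_discrepancy(pts, [+1, -1, +1, -1]))   # 2 for this coloring
```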
Combinatorial Discrepancy. A universe of elements and a collection of sets S1, S2, S3, S4, …: color the elements red/blue so that every set is split as evenly as possible.
Combinatorial Discrepancy. Set system with m sets over n elements, encoded as a {0,1} incidence matrix A (rows = sets, columns = elements). A coloring is a vector x ∈ {-1,+1}^n, and disc(A) = min over x ∈ {-1,+1}^n of ||Ax||_∞.
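For tiny instances the definition can be evaluated directly; a minimal Python sketch (my own), enumerating all 2^n colorings:

```python
# Brute-force disc(A) = min over colorings x in {-1,+1}^n of max_j |(Ax)_j|.
from itertools import product

def discrepancy(A):
    n = len(A[0])
    best = float("inf")
    for x in product((-1, 1), repeat=n):
        worst = max(abs(sum(a * xi for a, xi in zip(row, x))) for row in A)
        best = min(best, worst)
    return best

# Example: 4 sets over 4 elements (rows of the 0/1 incidence matrix).
A = [[1, 1, 0, 0],
     [0, 1, 1, 0],
     [0, 0, 1, 1],
     [1, 0, 1, 1]]
print(discrepancy(A))   # 1 here: the last set has odd size, so 0 is impossible
```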
Applications. CS: computational geometry, combinatorial optimization, Monte-Carlo simulation, machine learning, complexity, pseudo-randomness, … Math: dynamical systems, combinatorics, mathematical finance, number theory, Ramsey theory, algebra, measure theory, …
Hereditary Discrepancy. Discrepancy by itself is not so robust: a system (sets A1, A2, … on elements 1, 2, …, n) can be padded with mirror copies (sets A'1, A'2, … on duplicate elements 1', 2', …, n') so that the combined system has discrepancy 0, even though the original sub-system may have large discrepancy. Hereditary discrepancy repairs this: the maximum discrepancy over all restrictions of the system to subsets of the elements.
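For reference, the standard definitions, matching how discrepancy and hereditary discrepancy are used in the rest of the talk:

```latex
% disc: best worst-case imbalance over all +-1 colorings.
% herdisc: worst disc over all restrictions to a subset W of the columns (elements).
\[
  \mathrm{disc}(A) \;=\; \min_{x \in \{-1,+1\}^n} \|Ax\|_\infty,
  \qquad
  \mathrm{herdisc}(A) \;=\; \max_{W \subseteq [n]} \mathrm{disc}\big(A|_W\big),
\]
where $A|_W$ is the submatrix of $A$ formed by the columns indexed by $W$.
```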
Two Applications
Rounding Ax = b. Given a fractional solution x of Ax = b, round it to an integral vector while keeping every coordinate of Ax close to b; each coordinate of x is nudged in the -1 or +1 direction. Key point: a low-discrepancy coloring guides our updates!
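One standard way this guiding works is the Lovász-Spencer-Vesztergombi argument: round the fractional entries bit by bit, each time using a low-discrepancy coloring of the coordinates whose current bit is 1. The sketch below is my own illustration of that idea; the brute-force coloring routine is only a stand-in for a real low-discrepancy oracle, and all helper names are hypothetical.

```python
# Illustrative sketch: round a fractional x with k binary digits to an integral
# vector, one bit at a time, guided by a low-discrepancy coloring of the coordinates
# whose current bit is 1. The per-row error telescopes to at most herdisc(A).
from itertools import product

def low_disc_coloring(A, cols):
    """Stand-in oracle: brute-force a +-1 coloring of `cols` minimizing max row imbalance."""
    best_x, best_val = None, float("inf")
    for signs in product((-1, 1), repeat=len(cols)):
        worst = max(abs(sum(row[c] * s for c, s in zip(cols, signs))) for row in A)
        if worst < best_val:
            best_x, best_val = signs, worst
    return dict(zip(cols, best_x))

def round_fractional(A, x, k):
    """x has entries that are multiples of 2**-k in [0,1]; returns an integral rounding."""
    x = list(x)
    for bit in range(k, 0, -1):                       # least significant bit first
        scale = 2.0 ** (-bit)
        cols = [i for i, xi in enumerate(x)
                if int(round(xi / scale)) % 2 == 1]   # coordinates with this bit set
        if not cols:
            continue
        chi = low_disc_coloring(A, cols)
        for i in cols:                                # clear the bit by moving +-2^{-bit}
            x[i] += chi[i] * scale
    return [int(round(xi)) for xi in x]

A = [[1, 1, 0], [0, 1, 1]]
x = [0.25, 0.75, 0.5]
print(round_fractional(A, x, k=2))
```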
Rounding
Discrepancy and optimization
Dynamic Data Structures. N weighted points in a 2-d region; the weights are updated over time. Query: given an axis-parallel rectangle R, determine the total weight of the points in R. Goal: preprocess (into a data structure) for (1) low query time and (2) low update time (upon a weight change).
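For concreteness, here is a sketch of one classical upper bound for exactly this query type (my own addition, not from the talk): a 2-D Fenwick (binary indexed) tree over an n × n grid of possible point positions, supporting point-weight updates and rectangle-sum queries in O(log² n) time each. Class and method names are mine.

```python
# 2-D Fenwick tree: point updates and axis-parallel rectangle sums, O(log^2 n) each.
class Fenwick2D:
    def __init__(self, n):
        self.n = n
        self.t = [[0.0] * (n + 1) for _ in range(n + 1)]

    def add(self, x, y, delta):
        """Add `delta` to the weight of the point at (x, y); coordinates are 1-based."""
        i = x
        while i <= self.n:
            j = y
            while j <= self.n:
                self.t[i][j] += delta
                j += j & (-j)
            i += i & (-i)

    def _prefix(self, x, y):
        s, i = 0.0, x
        while i > 0:
            j = y
            while j > 0:
                s += self.t[i][j]
                j -= j & (-j)
            i -= i & (-i)
        return s

    def rect_sum(self, x1, y1, x2, y2):
        """Total weight in the axis-parallel rectangle [x1,x2] x [y1,y2]."""
        return (self._prefix(x2, y2) - self._prefix(x1 - 1, y2)
                - self._prefix(x2, y1 - 1) + self._prefix(x1 - 1, y1 - 1))

f = Fenwick2D(8)
f.add(3, 4, 2.5)
f.add(5, 5, 1.0)
print(f.rect_sum(1, 1, 5, 5))   # 3.5
```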
Example
What about other queries?
Bounding Discrepancy
General set system
(Previous) Best Algorithm
Better Colorings Exist! [Spencer '85] (six standard deviations suffice): any system of n sets on n elements has discrepancy ≤ 6 n^{1/2}. (In general, for arbitrary m sets, discrepancy = O(n^{1/2} log^{1/2}(m/n)).) Tight: for m = n one cannot beat 0.5 n^{1/2} (Hadamard matrix). The proof, a counting argument via the powerful entropy method, is inherently non-constructive. Question: can we find such a coloring algorithmically? Certain natural algorithms do not work [Spencer]. Conjecture [Alon-Spencer]: it may not be possible.
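In symbols (the general bound is usually written with 2m/n inside the logarithm so that it remains meaningful at m = n):

```latex
% Spencer's theorem and the matching Hadamard lower bound.
\[
  m = n:\ \ \mathrm{disc} \le 6\sqrt{n},
  \qquad
  m \ge n:\ \ \mathrm{disc} = O\!\big(\sqrt{n \log(2m/n)}\big),
  \qquad
  \text{Hadamard, } m = n:\ \ \mathrm{disc} \ge 0.5\sqrt{n}.
\]
```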
Results. A general technique, applying to: the k-permutation problem [Spencer, Srinivasan, Tetali], geometric problems, the Beck-Fiala setting (Srinivasan's bound), …
SDPs
Relaxations: LPs and SDPs. The obvious continuous relaxation is useless here (as noted earlier, coloring every element half red and half blue has discrepancy 0). Yet, SDPs will be a major tool.
Punch line
Algorithm (at a high level). Work in the cube {-1,+1}^n: each dimension is an element, each vertex is a coloring. Algorithm: a "sticky" random walk from the center of the cube toward a vertex. Each step is generated by rounding a suitable SDP; the moves in the various dimensions are correlated, e.g. γ^t_1 + γ^t_2 ≈ 0. Analysis: few steps are needed to reach a vertex (the walk has high variance per coordinate), while disc(S_i) does a random walk with low variance.
An SDP. Hereditary discrepancy ≤ λ ⇒ the following SDP is feasible; solving it, we obtain vectors v_i ∈ R^n.
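The SDP in question, written out (reconstructed from the constraints recalled on the "Properties of the Rounding" slide; λ denotes the hereditary discrepancy bound). Feasibility is easy to see: a coloring χ with |χ(S_j)| ≤ λ gives the one-dimensional solution v_i = χ(i)·e for a fixed unit vector e.

```latex
% Vector program: one unit vector per element, with a short sum over every set.
\[
  \Big|\sum_{i \in S_j} v_i\Big|^2 \;\le\; \lambda^2 \ \ \text{for every set } S_j,
  \qquad
  |v_i|^2 \;=\; 1 \ \ \text{for every element } i,
  \qquad
  v_i \in \mathbb{R}^n .
\]
```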
Idea. Which vector g should we project on? Lemma: if g ∈ R^n is a random Gaussian vector, then for any v ∈ R^n, g · v is distributed as N(0, |v|²). Proof: N(0, a²) + N(0, b²) = N(0, a² + b²), so g · v = Σ_i v(i) g_i ~ N(0, Σ_i v(i)²).
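A quick numerical sanity check of the lemma (my own illustration; it assumes NumPy is available):

```python
# Empirically check that g . v is distributed as N(0, |v|^2) for standard Gaussian g.
import numpy as np

rng = np.random.default_rng(0)
v = np.array([3.0, -1.0, 2.0])                   # arbitrary fixed vector, |v|^2 = 14
samples = rng.standard_normal((100_000, 3)) @ v  # 100k samples of g . v
print(samples.mean(), samples.var(), v @ v)      # mean near 0, variance near 14
```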
Properties of the Rounding. Lemma: if g ∈ R^n is a random Gaussian vector, then for any v ∈ R^n, g · v is distributed as N(0, |v|²). Set γ_i = g · v_i; the γ_i's will guide our updates to x. Recall the SDP constraints: |v_i|² = 1 and |Σ_{i∈S} v_i|² ≤ λ². Hence: (1) each γ_i ~ N(0, 1); (2) for each set S, Σ_{i∈S} γ_i = g · (Σ_{i∈S} v_i) is Gaussian with variance ≤ λ² (standard deviation ≤ λ).
Algorithm Overview. Construct the coloring iteratively. Initially: start with the coloring x_0 = (0, 0, 0, …, 0) at t = 0. At time t: update the coloring as x_t = x_{t-1} + ε(γ^t_1, …, γ^t_n), with ε tiny (ε = 1/n suffices); thus x_t(i) = ε(γ^1_i + γ^2_i + … + γ^t_i). Color of element i: does a random walk over time with step size ≈ ε, and is fixed once it reaches -1 or +1. Set S: x_t(S) = Σ_{i∈S} x_t(i) does a random walk with steps of variance ≤ ε²λ².
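A compact sketch of this walk (my own illustration, not the paper's implementation). The function placeholder_sdp_vectors stands in for solving the SDP from the "An SDP" slide on the still-floating elements; the stand-in below simply returns orthonormal unit vectors, so it illustrates only the mechanics of the walk, while a real implementation would return vectors that also satisfy the set constraints. All names are hypothetical.

```python
# Illustrative sketch of the SDP-guided "sticky" random walk over the cube [-1,1]^n.
import numpy as np

def placeholder_sdp_vectors(sets, alive, n):
    V = np.zeros((n, n))
    for i in alive:
        V[i, i] = 1.0          # a real solver would also enforce the set constraints
    return V

def sticky_walk(sets, n, eps=None, rng=None):
    rng = rng or np.random.default_rng(0)
    eps = eps or 1.0 / n                              # tiny step size
    x = np.zeros(n)                                   # start at the center of the cube
    alive = set(range(n))                             # elements whose color still floats
    while alive:
        V = placeholder_sdp_vectors(sets, alive, n)   # re-solve the SDP each step
        g = rng.standard_normal(n)                    # random Gaussian direction
        gamma = V @ g                                 # gamma_i = g . v_i
        for i in list(alive):
            x[i] += eps * gamma[i]                    # per-element random walk
            if abs(x[i]) >= 1.0:                      # "sticky": freeze at -1 or +1
                x[i] = 1.0 if x[i] > 0 else -1.0
                alive.remove(i)
    return x                                          # a +-1 coloring

sets = [[0, 1, 2], [2, 3]]
coloring = sticky_walk(sets, n=4)
print(coloring, [abs(sum(coloring[i] for i in S)) for S in sets])
```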
Analysis. Consider time T = O(1/ε²). Claim 1: with probability ½, an element reaches -1 or +1. Proof: each element is doing a random walk (a martingale) with step size ≈ ε; recall that a random walk with step size 1 is ≈ t^{1/2} away after t steps. Claim 2: each set has O(λ) discrepancy in expectation. Proof: for each S, x_t(S) is doing a random walk with step size ≈ ελ.
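Both claims come from the same variance bookkeeping; written out (a sketch in the notation above):

```latex
% Variances of independent steps add up over T rounds of the walk.
\[
  \mathrm{Var}\big(x_T(i)\big) \;=\; T\,\varepsilon^2 \;=\; \Theta(1)
  \quad\text{for } T = \Theta(1/\varepsilon^2),
  \qquad
  \mathrm{Var}\big(x_T(S)\big) \;\le\; T\,\varepsilon^2\lambda^2 \;=\; O(\lambda^2),
\]
so each coordinate is at constant distance from the origin (and keeps getting absorbed at $\pm 1$),
while every set's discrepancy stays $O(\lambda)$ in expectation.
```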
Recap. At each step of the walk, formulate an SDP on the floating (not yet fixed) variables; the SDP solution guides the walk. Properties of the walk: high variance per coordinate → quick convergence; low variance for the discrepancy of each set → low discrepancy.
Refinements: Spencer's six standard deviations result. Recall: we want O(n^{1/2}) discrepancy, but a random coloring gives n^{1/2}(log n)^{1/2}. The previous approach seems useless: the expected discrepancy of each set is O(n^{1/2}), but some of the random walks will deviate by up to a (log n)^{1/2} factor. Fix: tune down the variance of the dangerous sets (there are not too many of them); by the entropy method, the SDP is still feasible. [Figure: discrepancy scale 0, 20 n^{1/2}, 30 n^{1/2}, 35 n^{1/2}, … with zones Danger 1, Danger 2, Danger 3, …]
Further Developments. The algorithm can be derandomized [Bansal-Spencer '11]. Our algorithm still uses the entropy method, so it gives no new proof of Spencer's result. Is there a purely constructive proof? Lovett-Meka '12: yes, via Gaussian random walks + linear algebra.
Matousek Lower Bound
detLB (the determinant lower bound)
In Conclusion
Thank you for your attention