1/30 Discrepancy and SDPs Nikhil Bansal (TU Eindhoven) August 24, ISMP 2012, Berlin


2/30 Outline
- Discrepancy: definitions and applications
- Basic results: upper/lower bounds
- Partial coloring method (non-constructive)
- SDPs: basic method
- Algorithmic version of Spencer's result
- Lovett-Meka result
- Lower bounds via SDP duality (Matousek)

3/30 Material
Classic: Geometric Discrepancy by J. Matousek
Papers:
- Bansal. Constructive algorithms for discrepancy minimization, FOCS 2010
- Matousek. The determinant lower bound is almost tight
- Lovett, Meka. Discrepancy minimization by walking on the edges
Survey with fewer technical details: Bansal. …

4/30 Discrepancy: What is it? The study of gaps in approximating the continuous by the discrete. Original motivation: numerical integration / sampling. Problem: how well can you approximate a region by discrete points? Discrepancy: max over intervals I of |(# points in I) - (length of I)|.

5/30 Discrepancy: What is it? The study of gaps in approximating the continuous by the discrete. Problem: how uniformly can you distribute points in a grid? "Uniform": for every axis-parallel rectangle R, |(# points in R) - (area of R)| should be low. Discrepancy: max over rectangles R of |(# points in R) - (area of R)|.

6/30 Distributing points in a grid. Problem: how uniformly can you distribute points in a grid? "Uniform": for every axis-parallel rectangle R, |(# points in R) - (area of R)| should be low. (Figure: n = 64 points, three sets.) Uniform grid: n^{1/2} discrepancy. Random: n^{1/2} (log log n)^{1/2}. Van der Corput set: O(log n) discrepancy!
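The contrast above can be checked empirically. Below is a small sketch (my illustration, not from the talk): a 2-d Van der Corput/Hammersley set, points (i/n, vdc(i)), versus uniformly random points, with discrepancy estimated over anchored boxes [0,x) × [0,y). Restricting the grid search to point coordinates is a standard way to approximate the supremum.

```python
import random

def van_der_corput(i, base=2):
    # Radical inverse: reflect the base-`base` digits of i about the radix point.
    x, denom = 0.0, 1.0
    while i > 0:
        denom *= base
        i, d = divmod(i, base)
        x += d / denom
    return x

def star_discrepancy(points):
    # max over anchored boxes [0,x) x [0,y) of |fraction of points inside - area|.
    # The sup is (nearly) attained with x, y at point coordinates or 1, so a grid
    # search over those values estimates it; open and closed counts bracket the sup.
    n = len(points)
    xs = sorted({p[0] for p in points} | {1.0})
    ys = sorted({p[1] for p in points} | {1.0})
    worst = 0.0
    for x in xs:
        for y in ys:
            open_cnt = sum(px < x and py < y for px, py in points)
            closed_cnt = sum(px <= x and py <= y for px, py in points)
            worst = max(worst, abs(open_cnt / n - x * y), abs(closed_cnt / n - x * y))
    return worst

n = 64
hammersley = [(i / n, van_der_corput(i)) for i in range(n)]
random.seed(0)
uniform = [(random.random(), random.random()) for _ in range(n)]
print(star_discrepancy(hammersley))  # low-discrepancy set
print(star_discrepancy(uniform))     # typically several times larger
```

For n = 64 the Hammersley set typically lands well below a random set, matching the slide's O(log n) vs n^{1/2} (log log n)^{1/2} comparison.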

7/30 Quasi-Monte Carlo Methods. With N random samples: error ∝ 1/√N. Quasi-Monte Carlo methods: error ∝ Disc/N. Can the discrepancy be O(1) for the 2-d grid? No: Ω(log n) [Schmidt …]. In d dimensions: O(log^{d-1} n) [Halton-Hammersley]; Ω(log^{(d-1)/2} n) [Roth]; Ω(log^{(d-1)/2 + η} n) [Bilyk, Lacey, Vagharshakyan '08].

8/30 Discrepancy: Example 2. Input: n points placed arbitrarily in a grid. Color them red/blue so that each rectangle is colored as evenly as possible. Discrepancy: max over rectangles R of |(# red in R) - (# blue in R)|. Continuous: color each element 1/2 red and 1/2 blue (0 discrepancy). Discrete: a random coloring gives about O(n^{1/2} log^{1/2} n); one can achieve O(log^{2.5} n).

9/30 Combinatorial Discrepancy. Universe: U = [1,…,n]; subsets S_1, S_2,…, S_m. Color the elements red/blue so each set is colored as evenly as possible. Find χ: [n] → {-1,+1} to minimize disc(χ) = max_S |∑_{i∈S} χ(i)|. If A is the m×n incidence matrix: disc(A) = min_{x ∈ {-1,1}^n} ‖Ax‖_∞.
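For tiny instances the definition can be evaluated directly. A brute-force sketch (illustrative only; the search is over all 2^n colorings, so n must be small):

```python
from itertools import product

def disc(sets, n):
    # disc = min over colorings x in {-1,+1}^n of max_S |sum_{i in S} x_i|
    return min(
        max(abs(sum(x[i] for i in S)) for S in sets)
        for x in product((-1, 1), repeat=n)
    )

# Three sets on 4 elements; the odd-size set {0,2,3} forces discrepancy >= 1.
print(disc([{0, 1}, {1, 2}, {0, 2, 3}], 4))  # 1
```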

10/30 Applications CS: Computational Geometry, Comb. Optimization, Monte-Carlo simulation, Machine learning, Complexity, Pseudo-Randomness, … Math: Dynamical Systems, Combinatorics, Mathematical Finance, Number Theory, Ramsey Theory, Algebra, Measure Theory, …

Hereditary Discrepancy 11/30

12/30 Rounding. [Lovász-Spencer-Vesztergombi '86]: given any matrix A and x ∈ R^n, one can round x to x̃ ∈ Z^n such that ‖Ax - Ax̃‖_∞ < herdisc(A). Proof: round the bits one by one.
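A toy sketch of the bit-by-bit rounding (my illustration, not the paper's nonconstructive argument): at each stage, the elements whose current value has its least significant bit set receive a ±1 coloring that cancels as much as possible per row; here that coloring is found by brute force, which is exactly the step that herdisc(A) controls.

```python
from itertools import product
from fractions import Fraction

def best_coloring(A, idx):
    # Brute-force coloring chi: idx -> {-1,+1} minimizing
    # max_j |sum_{i in idx} A[j][i] * chi_i|  (feasible only for tiny idx).
    best, best_chi = None, None
    for signs in product((-1, 1), repeat=len(idx)):
        worst = max(abs(sum(A[j][i] * s for i, s in zip(idx, signs)))
                    for j in range(len(A)))
        if best is None or worst < best:
            best, best_chi = worst, signs
    return dict(zip(idx, best_chi))

def lsv_round(A, x, bits):
    # x: rationals whose denominators divide 2^bits; clear the least
    # significant bit at each of `bits` stages until all entries are integers.
    x = [Fraction(v) for v in x]
    for k in range(bits, 0, -1):
        step = Fraction(1, 2 ** k)
        idx = [i for i, v in enumerate(x) if (v / step) % 2 == 1]  # lsb set
        if not idx:
            continue
        chi = best_coloring(A, idx)
        x = [v + chi[i] * step if i in idx else v for i, v in enumerate(x)]
    return x

A = [[1, 1, 0], [0, 1, 1]]
print(lsv_round(A, [Fraction(1, 2), Fraction(1, 4), Fraction(3, 4)], 2))
```

Each stage moves Ax by at most (discrepancy of the restricted system) × (current bit value), so the total error telescopes to less than herdisc(A).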

13/30 Can we find it efficiently? Nothing was known until recently. Thm [B '10]: can efficiently round so that error ≤ O(√(log m · log n)) · herdisc(A).

14/30 More rounding approaches: bin packing. Refined further by Rothvoss (the entropy rounding method).

15/30 Dynamic Data Structures. n points in a 2-d region; weights update over time. Query: given an axis-parallel rectangle R, determine the total weight of the points in R. Preprocess for: (1) low query time; (2) low update time (upon weight change).

16/30 Example (points on a line). Store raw weights: query = O(n), update = 1. Precompute all interval sums: query = 1, update = O(n^2). Prefix sums: query = 2, update = O(n). Balanced tree of partial sums: query = O(log n), update = O(log n). Recursively one can get the same for 2-d.
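The O(log n)/O(log n) point on this tradeoff curve is realized, for example, by a Fenwick (binary indexed) tree; a minimal sketch:

```python
class Fenwick:
    """Binary indexed tree: O(log n) point update, O(log n) range-sum query."""
    def __init__(self, n):
        self.n = n
        self.t = [0] * (n + 1)

    def update(self, i, delta):      # add delta to position i (0-based)
        i += 1
        while i <= self.n:
            self.t[i] += delta
            i += i & (-i)

    def prefix(self, i):             # sum of positions [0, i)
        s = 0
        while i > 0:
            s += self.t[i]
            i -= i & (-i)
        return s

    def range_sum(self, lo, hi):     # sum of positions [lo, hi)
        return self.prefix(hi) - self.prefix(lo)

f = Fenwick(8)
for i, w in enumerate([3, 1, 4, 1, 5, 9, 2, 6]):
    f.update(i, w)
print(f.range_sum(2, 6))  # 4 + 1 + 5 + 9 = 19
f.update(3, 10)           # the weight at position 3 changes by +10
print(f.range_sum(2, 6))  # 29
```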

17/30 What about other objects? Queries: circles, arbitrary rectangles, axis-aligned triangles, … Is it true that t_q · t_u ≥ n^{1/2}/log^2 n? [Kasper Green Larsen]: t_q · t_u ≥ disc(S)^2/log^2 n.

18/30 Sketch of idea. A good data structure implies a factorization D = A·P, with A row-sparse (low query time) and P column-sparse (low update time).

Outline again 19/30

Basic Results 20/30

21/30 Best Known Algorithm. Random: color each element i independently as x(i) = +1 or -1 with probability 1/2 each. Thm: discrepancy = O((n log n)^{1/2}). Pf: for each set, expect O(n^{1/2}) discrepancy. Standard tail bounds: Pr[|∑_{i∈S} x(i)| ≥ c n^{1/2}] ≈ e^{-c^2}. Union bound + choose c ≈ (log n)^{1/2}. The analysis is tight: random actually incurs Ω((n log n)^{1/2}).
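A quick simulation of this bound (my sketch; the random set system and the seed are arbitrary choices):

```python
import math, random

random.seed(1)
n = 256
# m = n sets: each element joins each set independently with prob 1/2
sets = [[i for i in range(n) if random.random() < 0.5] for _ in range(n)]

# random coloring x : [n] -> {-1, +1}
x = [random.choice((-1, 1)) for _ in range(n)]

disc = max(abs(sum(x[i] for i in S)) for S in sets)
bound = math.sqrt(n * math.log(n))
print(disc, round(bound, 1))  # disc typically lands within a small constant of the bound
```

The observed maximum is within a small constant factor of √(n log n), as the union-bound calculation predicts.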

22/30 Better Colorings Exist! [Spencer '85] (six standard deviations suffice): there always exists a coloring with discrepancy ≤ 6 n^{1/2}. (In general, for arbitrary m, discrepancy = O(n^{1/2} log^{1/2}(m/n)).) Tight: for m = n, cannot beat 0.5 n^{1/2} (Hadamard matrix, "orthogonal" sets). Inherently non-constructive proof (pigeonhole principle on an exponentially large universe). Challenge: can we find such a coloring algorithmically? Certain algorithms do not work [Spencer]. Conjecture [Alon-Spencer]: may not be possible.

23/30 Beck-Fiala Thm. U = [1,…,n], sets S_1, S_2,…, S_m; suppose each element lies in at most t sets (t << n). [Beck-Fiala '81]: discrepancy ≤ 2t - 1 (elegant linear-algebraic argument; algorithmic result). Beck-Fiala conjecture: O(t^{1/2}) discrepancy is possible. Other results: O(t^{1/2} log t log n) [Beck]; O(t^{1/2} log n) [Srinivasan]; O(t^{1/2} log^{1/2} n) [Banaszczyk] (non-constructive).

24/30 Approximating Discrepancy. Question: if a set system has low discrepancy (say << n^{1/2}), can we find a good discrepancy coloring? [Charikar, Newman, Nikolov '11]: even distinguishing discrepancy 0 vs. O(n^{1/2}) is NP-hard. (Matousek): what if the system has low hereditary discrepancy? herdisc(U,S) = max_{U' ⊆ U} disc(U', S|_{U'}). A robust measure of discrepancy (often the same as discrepancy). Widely used: TU set systems, geometry, …

25/30 Our Results. Thm 1: Spencer's bound can be obtained constructively, i.e. O(n^{1/2}) discrepancy for m = n sets. Thm 2: if each element lies in at most t sets, a constructive bound of O(t^{1/2} log n) (Srinivasan's bound). Thm 3: for any set system, can find a coloring with discrepancy ≤ O(log(mn)) · (hereditary discrepancy). Other problems with constructive bounds (matching the current best): the k-permutation problem [Spencer, Srinivasan, Tetali]; geometric problems, …

26/30 Relaxations: LPs and SDPs. Not clear how to use relaxations: the linear program is useless, since one can color each element 1/2 red and 1/2 blue, making the discrepancy of every set 0! SDPs (an LP on the inner products v_i · v_j; cannot control the dimension of the v's): |∑_{i∈S} v_i|^2 ≤ n for all S; |v_i|^2 = 1. Intended solution: v_i = (+1,0,…,0) or (-1,0,…,0). Trivially feasible: v_i = e_i (all v_i orthogonal). Yet SDPs will be a major tool.

27/30 Punch line. SDPs are very helpful if "tighter" bounds are needed for some sets: |∑_{i∈S} v_i|^2 ≤ 2n for ordinary sets S, but |∑_{i∈S'} v_i|^2 ≤ n/log n (a tighter bound) for special sets S', with |v_i|^2 ≤ 1. It is not a priori clear why one can do this: the entropy method. The algorithm will construct the coloring over time and use several SDPs in the process.

28/30 Talk Outline
- Introduction
- The method: low hereditary discrepancy ⇒ good coloring
- Additional ideas: Spencer's O(n^{1/2}) bound

Partial Coloring Method 29/30

30/30 A Question. Given a_1,…,a_n ∈ [-1,1], how small can |∑_i a_i s_i| be made over colorings s ∈ {-1,+1}^n? (A priori the sum lies in [-n, n].)

31/30 Slight improvement. Can be improved to O(√n)/2^n. If you pick a random {-1,1} coloring s, then with probability ≥ 1/2, |a·s| ≤ c√n. So there are 2^{n-1} colorings s with |a·s| ≤ c√n, and pigeonhole over small buckets yields two of them whose difference achieves the O(√n)/2^n bound.

32/30 Algorithmically. Easy: 1/poly(n). (How? Answer: pick any poly(n) colorings and apply pigeonhole.) [Karmarkar-Karp '81]: ≈ 1/n^{log n}. Huge gap to 2^{-n}: major open question. Remark: {-1,+1} alone is not enough; we really need the color 0 also. E.g. a_1 = 1, a_2 = … = a_n = 1/(2n): the remaining elements sum to less than 1/2, so every ±1 coloring leaves |∑_i a_i s_i| > 1/2.
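The Karmarkar-Karp bound is for a more refined randomized algorithm, but the largest-differencing idea at its core is easy to sketch (my simplified illustration): repeatedly commit the two largest numbers to opposite sides, replacing them by their difference.

```python
import heapq

def karmarkar_karp(nums):
    """Largest differencing method: repeatedly replace the two largest numbers
    by their difference; the final value is the |sum_i a_i s_i| achieved by
    the induced +-1 coloring."""
    h = [-x for x in nums]           # max-heap via negation
    heapq.heapify(h)
    while len(h) > 1:
        a = -heapq.heappop(h)
        b = -heapq.heappop(h)
        heapq.heappush(h, -(a - b))  # commit a and b to opposite sides
    return -h[0]

print(karmarkar_karp([1, 2, 3, 4]))      # 0: e.g. {1,4} vs {2,3}
print(karmarkar_karp([4, 5, 6, 7, 8]))   # 2
```

Differencing is a heuristic, not optimal: on [4, 5, 6, 7, 8] it returns 2 even though {7,8} vs {4,5,6} achieves 0.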

33/30 Yet another enhancement. There is a {-1,0,1} coloring with at least n/2 entries in {-1,+1} such that |∑_i a_i s_i| ≤ n/2^{n/5}. Pf: make buckets of size 2n/2^{n/5}; there are 2^{n/5} of them, so at least 2^{4n/5} of the 2^n sums fall in the same bucket. Claim: some two colorings s' and s'' in the same bucket differ in at least n/2 coordinates. Again consider s = (s' - s'')/2.

34/30 Proof of Claim. Claim: any set of 2^{4n/5} vertices of the boolean cube contains two points at Hamming distance ≥ n/2. [Kleitman '66], isoperimetry for the cube: among all sets with a given number of vertices, the Hamming ball B(v,r) has the smallest diameter. Since |B(v, n/4)| < 2^{4n/5}, any set of 2^{4n/5} vertices must have diameter larger than that of B(v, n/4), i.e. ≥ n/2.

Spencer’s proof 35/30

36/30 Our Approach

37/30 Algorithm (at high level). Setup: each dimension is an element; each vertex of the cube {-1,+1}^n is a coloring. Algorithm: a "sticky" random walk from the start to a finish vertex; each step is generated by rounding a suitable SDP, and the moves in the various dimensions are correlated, e.g. η_1^t + η_2^t ≈ 0. Analysis: few steps suffice to reach a vertex (the walk has high variance), while each disc(S_i) does a random walk with low variance.

38/30 An SDP. Hereditary discrepancy ≤ λ ⇒ the following SDP is feasible. SDP (low discrepancy): |∑_{i∈S_j} v_i|^2 ≤ λ^2 for every set S_j; |v_i|^2 = 1. Solve to obtain v_i ∈ R^n. Rounding: pick a random Gaussian g = (g_1, g_2,…, g_n), each coordinate g_i i.i.d. N(0,1); for each i, consider η_i = g · v_i.

39/30 Properties of Rounding. Lemma: if g ∈ R^n is a random Gaussian, then for any v ∈ R^n, g·v is distributed as N(0, |v|^2). Pf: N(0,a^2) + N(0,b^2) = N(0, a^2+b^2), so g·v = ∑_i v(i) g_i ~ N(0, ∑_i v(i)^2). Recall η_i = g·v_i, where the SDP gives |v_i|^2 = 1 and |∑_{i∈S} v_i|^2 ≤ λ^2 (λ = the hereditary discrepancy bound). Hence: 1. each η_i ~ N(0,1); 2. for each set S, ∑_{i∈S} η_i = g·(∑_{i∈S} v_i) ~ N(0, ≤ λ^2) (std deviation ≤ λ). The η's mimic a low-discrepancy coloring (but are not {-1,+1}).
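The lemma is easy to verify numerically (a quick sketch with an arbitrary vector v):

```python
import math, random

# Check: for a fixed v, g.v with g ~ N(0, I) is distributed as N(0, |v|^2).
random.seed(7)
d = 16
v = [random.uniform(-1, 1) for _ in range(d)]
norm2 = sum(c * c for c in v)

samples = []
for _ in range(20000):
    g = [random.gauss(0, 1) for _ in range(d)]
    samples.append(sum(gi * vi for gi, vi in zip(g, v)))

mean = sum(samples) / len(samples)
var = sum(s * s for s in samples) / len(samples) - mean ** 2
print(round(mean, 3), round(var, 3), round(norm2, 3))  # mean ~ 0, var ~ |v|^2
```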

40/30 Algorithm Overview. Construct the coloring iteratively. Initially: start with the coloring x_0 = (0,0,0,…,0) at t = 0. At time t: update the coloring as x_t = x_{t-1} + γ(η_1^t,…,η_n^t) (γ tiny: 1/n suffices). Color of element i: x_t(i) = γ(η_i^1 + η_i^2 + … + η_i^t) does a random walk over time with step size ≈ γ, and is fixed once it reaches -1 or +1. Set S: x_t(S) = ∑_{i∈S} x_t(i) does a random walk with steps γ · N(0, σ^2), σ ≤ λ, the SDP's discrepancy bound.
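A stripped-down simulation of the walk's dynamics (my sketch: the steps here are i.i.d. N(0,1) per coordinate, whereas in the real algorithm each step vector comes from rounding an SDP and is correlated across coordinates, which is what keeps the set discrepancies small):

```python
import math, random

random.seed(0)
n = 200
gamma = 0.05           # step scale; the talk notes gamma = 1/n suffices
x = [0.0] * n
alive = set(range(n))  # coordinates not yet fixed at -1 or +1
rounds = 0
while alive:
    rounds += 1
    for i in list(alive):
        x[i] += gamma * random.gauss(0, 1)
        if abs(x[i]) >= 1:
            x[i] = math.copysign(1.0, x[i])  # freeze: the "sticky" boundary
            alive.remove(i)
print(rounds)  # on the order of (1/gamma^2) * log n rounds
```

Every coordinate ends frozen at -1 or +1, i.e. the walk terminates at a vertex of the cube.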

41/30 Analysis. Consider time T = O(1/γ^2). Claim 1: with probability ≥ 1/2, at least n/2 elements reach -1 or +1. Pf: each element does a random walk with step size ≈ γ; recall that a random walk with step 1 is ≈ O(t^{1/2}) away after t steps. A trouble: the various element updates are correlated. Fix: consider the basic walk x(t+1) = x(t) ± 1 with probability 1/2 each, and define the energy Φ(t) = x(t)^2. Then E[Φ(t+1)] = (1/2)(x(t)+1)^2 + (1/2)(x(t)-1)^2 = x(t)^2 + 1 = Φ(t) + 1, so the expected energy is n at t = n; this energy argument is unaffected by correlations. Claim 2: each set has O(λ) discrepancy in expectation (λ = the hereditary discrepancy bound). Pf: for each S, x_t(S) does a random walk with step size ≈ γλ.

42/30 Analysis. Consider time T = O(1/γ^2). Claim 1: with probability ≥ 1/2, at least n/2 variables reach -1 or +1 ⇒ everything is colored in O(log n) rounds. Claim 2: each set has O(λ) discrepancy in expectation per round ⇒ the expected discrepancy of a set at the end is O(λ log n). Thm: we obtain a coloring with discrepancy O(λ log(mn)). Pf: by Chernoff, the probability that disc(S) ≥ 2 · (expectation) + O(λ log m) = O(λ log(mn)) is tiny (poly(1/m)).

43/30 Recap. At each step of the walk, formulate an SDP on the unfixed variables. Use some (existential) property to argue the SDP is feasible. Rounding the SDP solution gives one step of the walk. Properties of the walk: high variance ⇒ quick convergence; low variance for the discrepancy of each set ⇒ low discrepancy.

44/30 Refinements. Spencer's six standard deviations result. Goal: obtain O(n^{1/2}) discrepancy for any set system with m = O(n) sets. A random coloring has n^{1/2}(log n)^{1/2} discrepancy. The previous approach seems useless: the expected discrepancy of a set is O(n^{1/2}), but some random walks will deviate by up to a (log n)^{1/2} factor. We need an additional idea to prevent this.

45/30 Spencer's O(n^{1/2}) result. Partial Coloring Lemma: for any system with m sets, there exists a coloring on ≥ n/2 elements with discrepancy O(n^{1/2} log^{1/2}(2m/n)). [For m = n, disc = O(n^{1/2}).] Algorithm for a total coloring: repeatedly apply the partial coloring lemma. Total discrepancy: O(n^{1/2} log^{1/2} 2) [Phase 1] + O((n/2)^{1/2} log^{1/2} 4) [Phase 2] + O((n/4)^{1/2} log^{1/2} 8) [Phase 3] + … = O(n^{1/2}).
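The phase sum converges because the √n factor halves geometrically while the log factor grows only linearly under the square root; a one-line numerical check:

```python
import math

# Phase k colors half the remaining elements: n_k = n / 2^k elements, with
# discrepancy O(sqrt(n_k) * log^{1/2}(2m / n_k)) and m = n, i.e. the log term
# is (k + 1). So the total is sqrt(n) * sum_k sqrt((k + 1) / 2^k), a
# convergent series.
n = 1 << 20
total = sum(math.sqrt(n / 2 ** k) * math.sqrt(k + 1) for k in range(60))
ratio = total / math.sqrt(n)
print(round(ratio, 3))  # a small constant, independent of n
```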

46/30 Proving the Partial Coloring Lemma. Beautiful counting argument (entropy method + pigeonhole). Idea: there are too many colorings (2^n), but few "discrepancy profiles". Key Lemma: there exist k = 2^{4n/5} colorings X_1,…,X_k such that every two X_i, X_j are "similar" on every set S_1,…,S_n. Some X_1, X_2 differ on ≥ n/2 positions; consider X = (X_1 - X_2)/2. Pf: X(S) = (X_1(S) - X_2(S))/2 ∈ [-10 n^{1/2}, 10 n^{1/2}]. E.g. X_1 = (1,-1,1,…,1,-1,-1), X_2 = (-1,-1,-1,…,1,1,1), X = (1,0,1,…,0,-1,-1).

47/30 A useful generalization. There exists a partial coloring with a non-uniform discrepancy bound Δ_S for each set S, even if the Δ_S are only Ω(n^{1/2}) in some average sense.

48/30 An SDP. Suppose there exists a partial coloring X: (1) on ≥ n/2 elements; (2) each set S has |X(S)| ≤ Δ_S. SDP: low discrepancy: |∑_{i∈S_j} v_i|^2 ≤ Δ_{S_j}^2; many colors: ∑_i |v_i|^2 ≥ n/2; |v_i|^2 ≤ 1. Solve to obtain v_i ∈ R^n. Rounding: pick a random Gaussian g = (g_1, g_2,…, g_n), each coordinate g_i i.i.d. N(0,1); for each i, consider η_i = g · v_i.

49/30 Algorithm. Initially write the SDP with Δ_S = c n^{1/2}. Each set S does a random walk and expects to reach discrepancy O(Δ_S) = O(n^{1/2}). Some sets become problematic: reduce their Δ_S on the fly. There are not many problematic sets, so the entropy penalty stays low. (Danger thresholds at 20 n^{1/2}, 30 n^{1/2}, 35 n^{1/2}, …: danger 1, danger 2, danger 3, ….)

50/30 Concluding Remarks. Construct the coloring over time by solving a sequence of SDPs (guided by existence results). Works quite generally. Can be derandomized [Bansal-Spencer] (use the entropy method itself for the derandomization, plus the usual techniques). E.g. deterministic six standard deviations can be viewed as a way to derandomize something stronger than Chernoff bounds.

51/30 Thank You!


54/30 Rest of the talk. 1. How to generate the η_i with the required properties. 2. How to update the Δ_S over time. Show an n^{1/2} (log log log n)^{1/2} bound.

55/30 Why so few algorithms? Often algorithms rely on continuous relaxations, but the linear program is useless: one can color each element 1/2 red and 1/2 blue. The improved results of Spencer, Beck, Srinivasan, … are based on clever counting (the entropy method): a pigeonhole principle on exponentially large systems, which seems inherently non-constructive.

56/30 Partial Coloring Lemma. Suppose we have a discrepancy bound Δ_S for each set S. Consider the 2^n possible colorings. Signature of a coloring X: (b(S_1), b(S_2),…, b(S_m)), where b(S) is the bucket in which X(S) lies. Want: a partial coloring with signature (0,0,0,…,0).

57/30 Progress Condition. The energy increases at each step: E(t) = ∑_i x_i(t)^2. Initially the energy is 0, and it can be at most n. Expected value: E[E(t)] = E(t-1) + ∑_i γ_i(t)^2. Conclude with Markov's inequality.

58/30 Missing Steps. 1. How to generate the η_i. 2. How to update the Δ_S over time.

59/30 Partial Coloring. Suppose there exist two colorings X_1, X_2 with: 1. the same signature (b_1, b_2,…, b_m); 2. differing in at least n/2 positions. Consider X = (X_1 - X_2)/2: 1. X is -1 or 1 on at least n/2 positions, i.e. a partial coloring; 2. X has signature (0,0,0,…,0), since X(S) = (X_1(S) - X_2(S))/2, so |X(S)| ≤ Δ_S for all S. One can show there are 2^{4n/5} colorings with the same signature, so some two of them differ on > n/2 positions (pigeonhole). E.g. X_1 = (1,-1,1,…,1,-1,-1), X_2 = (-1,-1,-1,…,1,1,1).


61/30 Spencer's O(n^{1/2}) result (recalled). Partial Coloring Lemma: for any system with m sets, there exists a coloring on ≥ n/2 elements with discrepancy O(n^{1/2} log^{1/2}(2m/n)) [for m = n, disc = O(n^{1/2})]; repeated application gives a total coloring with discrepancy O(n^{1/2}). Let us prove the lemma for m = n.

62/30 Proving the Partial Coloring Lemma. Pf: associate with a coloring X the signature (b_1, b_2,…, b_n), where b_i is the bucket in which X(S_i) lies; the buckets have width 20 n^{1/2}, with the central bucket [-10 n^{1/2}, 10 n^{1/2}]. Wish to show: there exist 2^{4n/5} colorings with the same signature. Choose X randomly: this induces a distribution ν on signatures. Entropy(ν) ≤ n/5 implies some signature has probability ≥ 2^{-n/5}. Entropy(ν) ≤ ∑_i Entropy(b_i) [subadditivity of entropy]. For a random X: b_i = 0 w.p. ≈ 1 - 2e^{-50}, b_i = ±1 w.p. ≈ e^{-50}, b_i = ±2 w.p. ≈ e^{-450}, …. Hence Ent(b_i) ≤ 1/5.
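The per-set entropy can be computed directly from the Gaussian tail (a sketch: X(S_i) is approximated as N(0, n), so after scaling by n^{1/2} the bucket index depends only on a standard normal):

```python
import math

# Bucket widths are 20*sqrt(n) for X(S) ~ N(0, n); after scaling by sqrt(n),
# b = k iff Z in [20k - 10, 20k + 10) for a standard normal Z.
def bucket_prob(k):
    phi = lambda t: 0.5 * math.erfc(-t / math.sqrt(2))  # standard normal CDF
    return phi(20 * k + 10) - phi(20 * k - 10)

ent = 0.0  # entropy in bits
for k in range(-5, 6):
    p = bucket_prob(k)
    if p > 0:
        ent -= p * math.log2(p)
print(ent)  # astronomically small (tail probs ~ e^{-50}), far below 1/5
```

Some of the tail terms underflow to zero at double precision; that only makes the computed value an underestimate, which is harmless for checking the bound Ent(b_i) ≤ 1/5.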

63/30 A useful generalization. Partial coloring with a non-uniform discrepancy bound Δ_S for each set S. For each set S, consider the corresponding "bucketing" (buckets of width 2Δ_S). It suffices to have ∑_S Ent(b_S) ≤ n/5. Equivalently, if Δ_S = λ_S n^{1/2}, it suffices that ∑_S g(λ_S) ≤ n/5, where g(λ) ≈ e^{-λ^2/2} for λ > 1 and g(λ) ≈ ln(1/λ) for λ < 1. E.g. a bucket of width n^{1/2}/100 has penalty ≈ ln(100).

64/30 Recap. Partial coloring: Δ_S ≈ 10 n^{1/2} gives low entropy ⇒ 2^{4n/5} colorings exist with the same signature ⇒ there are some X_1, X_2 at large Hamming distance, and (X_1 - X_2)/2 gives the desired partial coloring. Trouble: 2^{4n/5}/2^n is an exponentially small fraction. If only we could find the partial coloring efficiently…