Ryan O’Donnell Carnegie Mellon University

Part 1: A. Fourier expansion basics B. Concepts: Bias, Influences, Noise Sensitivity C. Kalai’s proof of Arrow’s Theorem

10 Minute Break

Part 2: A. The Hypercontractive Inequality B. Algorithmic Gaps

Sadly no time for: Learning theory, Pseudorandomness, Arithmetic combinatorics, Random graphs / percolation, Communication complexity, Metric / Banach spaces, Coding theory, etc.

1A. Fourier expansion basics

f : {0,1}ⁿ → {0,1}

f : {−1,+1}ⁿ → {−1,+1}

[Figures: the cube {−1,+1}³ in ℝ³ with the values f(x) = ±1 marked at its vertices, built up vertex by vertex; f is then written out as a multilinear polynomial in x₁, x₂, x₃.]

Proposition: Every f : {−1,+1}ⁿ → {−1,+1} (indeed, every f : {−1,+1}ⁿ → ℝ) can be expressed (uniquely) as a multilinear polynomial. That's it. That's the "Fourier expansion" of f.
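
As a concrete illustration (a minimal Python sketch, not from the slides), the coefficients can be computed by direct averaging, f̂(S) = avg_x f(x)·∏_{i∈S} xᵢ; for Maj₃ this recovers the expansion ½x₁ + ½x₂ + ½x₃ − ½x₁x₂x₃:

import itertools

def fourier_coefficients(f, n):
    # f_hat(S) = average over x in {-1,+1}^n of f(x) * prod_{i in S} x_i
    points = list(itertools.product([-1, +1], repeat=n))
    coeffs = {}
    for k in range(n + 1):
        for S in itertools.combinations(range(n), k):
            total = 0
            for x in points:
                chi = 1
                for i in S:
                    chi *= x[i]
                total += f(x) * chi
            coeffs[S] = total / len(points)
    return coeffs

maj3 = lambda x: 1 if sum(x) > 0 else -1
print(fourier_coefficients(maj3, 3))
# {(): 0.0, (0,): 0.5, (1,): 0.5, (2,): 0.5, (0, 1): 0.0, (0, 2): 0.0, (1, 2): 0.0, (0, 1, 2): -0.5}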

[Example: the Fourier expansion of a particular f is written out term by term; the rest of the coefficients are 0.]

Why? Coefficients encode useful information. When? 1. Uniform probability involved 2. Hamming distances relevant

Parseval's Theorem: Let f : {−1,+1}ⁿ → {−1,+1}. Then avg_x { f(x)² } = Σ_{S ⊆ [n]} f̂(S)² (= 1, since f is ±1-valued).

"Weight" of f on S ⊆ [n] = f̂(S)².

[Figures: the Fourier weight of f distributed over the subsets ∅, {1}, {2}, {3}, {1,2}, {1,3}, {2,3}, {1,2,3}.]

1B. Concepts: Bias, Influences, Noise Sensitivity

Social Choice: Candidates ±1; n voters; votes are uniformly random. f : {−1,+1}ⁿ → {−1,+1} is the "voting rule".

Bias of f: avg f(x) = Pr[+1 wins] − Pr[−1 wins]. Fact: the weight on ∅ equals Bias(f)²; it measures "imbalance".

Influence of i on f: Infᵢ(f) = Pr[ f(x) ≠ f(x^(⊕i)) ] = Pr[voter i is a swing voter]. Fact: Infᵢ(f) = Σ_{S ∋ i} f̂(S)².
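
A minimal Python sketch (not from the slides) computing influences straight from this definition:

import itertools

def influence(f, n, i):
    # Inf_i(f) = Pr over uniform x of f(x) != f(x with coordinate i flipped)
    points = list(itertools.product([-1, +1], repeat=n))
    flips = 0
    for x in points:
        y = list(x)
        y[i] = -y[i]
        flips += (f(x) != f(tuple(y)))
    return flips / len(points)

maj3 = lambda x: 1 if sum(x) > 0 else -1
print([influence(maj3, 3, i) for i in range(3)])  # [0.5, 0.5, 0.5]
# Matches the Fourier fact: Maj_3's weight on sets containing i is 1/4 + 1/4 = 1/2.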

[Figure: the Fourier weights of Maj(x₁, x₂, x₃): weight 1/4 on each of {1}, {2}, {3}, and {1,2,3}, and weight 0 elsewhere.]

[Figure: the cube with f-values ±1 at the vertices; the edges along direction i whose endpoints disagree are highlighted.] Infᵢ(f) = Pr[ f(x) ≠ f(x^(⊕i)) ]

avgᵢ Infᵢ(f) = fraction of cube edges which are cut edges.
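
A small check of this identity (a Python sketch, not from the slides), which also previews the Parity and Majority examples on the next slides:

import itertools

def cut_edge_fraction(f, n):
    # Fraction of hypercube edges whose endpoints get different f-values;
    # by the identity above this equals avg_i Inf_i(f).
    points = list(itertools.product([-1, +1], repeat=n))
    cut = total = 0
    for x in points:
        for i in range(n):
            if x[i] == +1:          # count each edge once, from its +1 endpoint
                y = list(x)
                y[i] = -1
                total += 1
                cut += (f(x) != f(tuple(y)))
    return cut / total

maj3 = lambda x: 1 if sum(x) > 0 else -1
parity3 = lambda x: x[0] * x[1] * x[2]
print(cut_edge_fraction(maj3, 3), cut_edge_fraction(parity3, 3))  # 0.5 1.0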

LMN Theorem: If f is in AC⁰, then avgᵢ Infᵢ(f) ≤ polylog(n)/n.

avgᵢ Infᵢ(Parityₙ) = 1 ⇒ Parity ∉ AC⁰; avgᵢ Infᵢ(Majₙ) = Θ(1/√n) ⇒ Majority ∉ AC⁰.

KKL Theorem: If Bias(f) = 0, then maxᵢ Infᵢ(f) ≥ Ω(log n / n). Corollary: Assuming f is monotone, −1 or +1 can bribe o(n) voters and win w.p. 1 − o(1).

Noise Sensitivity of f at δ: NS_δ(f) = Pr[wrong winner wins], when each vote is independently misrecorded w/ prob. δ. (E.g., the votes x = +−++−−+−− get recorded as y = −−+++++−−; NS_δ(f) = Pr[ f(y) ≠ f(x) ].)
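
In Fourier terms, noise sensitivity is a level-weighted sum of the Fourier weights (a standard identity, stated here for reference):

\[ \mathrm{NS}_\delta(f) \;=\; \sum_{S \subseteq [n]} \tfrac{1}{2}\bigl(1 - (1-2\delta)^{|S|}\bigr)\, \hat{f}(S)^2 . \]

So functions whose Fourier weight sits at low levels have low noise sensitivity.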

Learning Theory principle: [LMN’93, …, KKMS’05] If all f ∈ C have small NS(f) then C is efficiently learnable.

[Figure: the Fourier weights of f on the subsets of [3], grouped by level |S|.]

Proposition: for small δ, NS_δ(Majₙ) ≈ (2/π)·√δ. [Figure: comparison with the "Electoral College" scheme.]
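
A rough Monte Carlo sketch (not from the slides, with illustrative parameter choices n = 1001, δ = 0.05) comparing NS_δ(Majₙ) with (2/π)√δ:

import math
import random

def noise_sensitivity_maj(n, delta, trials, rng):
    # Estimate Pr[ Maj(x) != Maj(y) ] where y flips each coordinate of x w.p. delta.
    flips = 0
    for _ in range(trials):
        x = [rng.choice((-1, +1)) for _ in range(n)]
        y = [-xi if rng.random() < delta else xi for xi in x]
        flips += ((sum(x) > 0) != (sum(y) > 0))
    return flips / trials

n, delta = 1001, 0.05          # n odd (no ties); delta small but well above 1/n
est = noise_sensitivity_maj(n, delta, trials=20_000, rng=random.Random(0))
print(est, "vs", 2 / math.pi * math.sqrt(delta))   # both come out near 0.14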

1C. Kalai’s proof of Arrow’s Theorem

Ranking 3 candidates (Condorcet [1785]). Election: each voter i votes on the three comparisons A > B? (xᵢ), B > C? (yᵢ), C > A? (zᵢ); a genuine ranking means (xᵢ, yᵢ, zᵢ) is Not All Equal (no individual cycle). Society's outcome is ( f(x), f(y), f(z) ).

[Table: the voters' ±1 votes on A > B?, B > C?, C > A?, for rankings such as "A > B > C", "B > C > A", "C > A > B".]

Condorcet: Try f = Maj. On some vote profiles the outcome is rational, e.g. f(x) = +, f(y) = +, f(z) = −, i.e. Society says "A > B > C". But changing a single voter's ranking can give f(x) = +, f(y) = +, f(z) = +, i.e. Society says "A > B > C > A" ? The outcome can be "irrational" [easy example]. Maybe some other f avoids this?
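
A brute-force check (a Python sketch, not from the slides) over all 6³ = 216 three-voter Condorcet profiles with f = Maj:

from itertools import product

# Each voter's (A>B?, B>C?, C>A?) votes must be Not-All-Equal (a genuine ranking).
rankings = [t for t in product([-1, +1], repeat=3) if len(set(t)) > 1]  # 6 rankings
maj = lambda bits: 1 if sum(bits) > 0 else -1

rational = 0
profiles = list(product(rankings, repeat=3))        # all 6^3 = 216 three-voter profiles
for profile in profiles:
    x, y, z = zip(*profile)                         # votes on A>B, B>C, C>A
    outcome = (maj(x), maj(y), maj(z))
    rational += (len(set(outcome)) > 1)             # rational iff the outcome is also NAE
print(rational, "/", len(profiles))                 # 204 / 216 = 17/18

The count 204/216 = 17/18 matches the classical probability 1/18 of the Condorcet paradox with three voters.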

Arrow's Impossibility Theorem [1950]: If f : {−1,+1}ⁿ → {−1,+1} never gives an irrational outcome in Condorcet elections, then f is a Dictator or a negated-Dictator.

Gil Kalai’s Proof [2002]:

[Recall the table: the voters' ±1 votes on A > B?, B > C?, C > A?, aggregated coordinate-wise by f to give the outcome ( f(x), f(y), f(z) ).]

Gil Kalai’s Proof:
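
For reference, the identity at the heart of the proof (a short Fourier computation, as in Kalai's paper) and the resulting bound are:

\[ \Pr[\text{rational outcome}] \;=\; \tfrac{3}{4} - \tfrac{3}{4} \sum_{S \subseteq [n]} \bigl(-\tfrac{1}{3}\bigr)^{|S|}\, \hat{f}(S)^2 \;\le\; \tfrac{3}{4} + \tfrac{1}{4} \sum_{S \subseteq [n]} \hat{f}(S)^2 \;=\; 1, \]

since (−1/3)^{|S|} ≥ −1/3, with equality exactly when |S| = 1.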

Gil Kalai’s Proof, concluded: f never gives irrational outcomes ⇒ equality ⇒ all Fourier weight “at level 1” ⇒ f(x) = ±x j for some j (exercise).

⇒ Guilbaud's Theorem [1952]: Pr[rational outcome with Majₙ] → Guilbaud's Number ≈ .912.
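
Concretely, using the standard fact that E[Majₙ(x)·Majₙ(y)] → (2/π)·arcsin ρ for ρ-correlated votes x, y:

\[ \lim_{n \to \infty} \Pr[\text{rational outcome with } \mathrm{Maj}_n] \;=\; \tfrac{3}{4} + \tfrac{3}{2\pi}\arcsin\tfrac{1}{3} \;\approx\; 0.912 . \]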

Corollary of "Majority Is Stablest" [MOO05]: If Infᵢ(f) ≤ o(1) for all i, then Pr[rational outcome with f] ≤ .912 + o(1).

Part 2: A. The Hypercontractive Inequality B. Algorithmic Gaps

2A. The Hypercontractive Inequality AKA Bonami-Beckner Inequality

The KKL Theorem, Friedgut's Theorem, Talagrand's Theorem, "every monotone graph property has a sharp threshold", the FKN Theorem, Bourgain's Junta Theorem, and the Majority Is Stablest Theorem all use the "Hypercontractive Inequality".

Hoeffding Inequality: Let F = c₀ + c₁x₁ + c₂x₂ + ··· + cₙxₙ, where the xᵢ's are indep., unif. random ±1. Mean: μ = c₀. Variance: σ² = c₁² + ··· + cₙ². Then Pr[ |F − μ| ≥ t·σ ] ≤ 2·exp(−t²/2).

Hypercontractive Inequality: Let F = Σ_{|S| ≤ d} c_S ∏_{i∈S} xᵢ be a degree-d multilinear polynomial in indep., unif. random ±1 xᵢ's. Mean: μ = c_∅. Variance: σ² = Σ_{S ≠ ∅} c_S². Then for all q ≥ 2, ‖F‖_q ≤ (q−1)^{d/2} · ‖F‖₂; i.e., F is a "reasonable" random variable, with its higher moments controlled by its variance.

"q = 4" Hypercontractive Inequality: With F as above, E[F⁴] ≤ 9^d · E[F²]².

In fact, the KKL Theorem, Friedgut's Theorem, Talagrand's Theorem, "every monotone graph property has a sharp threshold", the FKN Theorem, Bourgain's Junta Theorem, and the Majority Is Stablest Theorem all just use the "q = 4" Hypercontractive Inequality.

"q = 4" Hypercontractive Inequality: Let F be degree d over n i.i.d. ±1 r.v.'s. Then E[F⁴] ≤ 9^d · E[F²]². Proof [MOO'05]: Induction on n. Obvious step. Use the induction hypothesis. Use Cauchy-Schwarz on the obvious thing. Use the induction hypothesis. Obvious step.
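
A numerical sanity check (a Python sketch, not from the slides) of the "q = 4" inequality for a random degree-2 multilinear polynomial:

import itertools
import random

# Random degree-d multilinear polynomial F(x) = sum_S c_S * prod_{i in S} x_i, |S| <= d.
n, d = 8, 2
rng = random.Random(0)
coeffs = {S: rng.gauss(0, 1)
          for k in range(d + 1)
          for S in itertools.combinations(range(n), k)}

def F(x):
    total = 0.0
    for S, c in coeffs.items():
        prod = 1
        for i in S:
            prod *= x[i]
        total += c * prod
    return total

vals = [F(x) for x in itertools.product([-1, +1], repeat=n)]   # exact: all 2^n inputs
E2 = sum(v ** 2 for v in vals) / len(vals)
E4 = sum(v ** 4 for v in vals) / len(vals)
print(E4, "<=", 9 ** d * E2 ** 2)    # the inequality always holds
assert E4 <= 9 ** d * E2 ** 2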

2B. Algorithmic Gaps

[Diagram: Opt vs. the best poly-time guarantee, ratio ln(N).] "Set-Cover is NP-hard to approximate to factor ln(N)."

[Diagram: Opt vs. the LP-Rand-Rounding guarantee, ratio ln(N).] "Factor ln(N) Algorithmic Gap for LP-Rand-Rounding."

[Diagram: Opt(S) vs. LP-Rand-Rounding(S), ratio ln(N).] "Algorithmic Gap Instance S for LP-Rand-Rounding."

Algorithmic Gap instances are often "based on" {−1,+1}ⁿ.

Sparsest-Cut: Algorithm: the Arora-Rao-Vazirani SDP. Guarantee: factor O(√(log N)).

[Gap instance: the discrete cube {−1,+1}ⁿ, N = 2ⁿ vertices.] Opt = 1/n, achieved by Dictator cuts.

The rounding may output a halfspace cut f(x) = sgn(r₁x₁ + ··· + rₙxₙ), a random weighted majority, whose sparsity is much larger (≈ 1/√n).

Opt = 1/n, ARV gets ≈ 1/√n; gap: ≈ √n = √(log N).

Algorithmic Gaps → Hardness-of-Approx: take an LP / SDP-rounding Alg. Gap instance with n optimal "Dictator" solutions, on which a "generic mixture of Dictators" does much worse; + PCP technology = same-gap hardness-of-approximation.


KKL / Talagrand Theorem: If f is balanced and Infᵢ(f) ≤ 1/n^{.01} for all i, then avgᵢ Infᵢ(f) ≥ Ω(log n / n). Gap: Θ(log n) = Θ(log log N).

[CKKRS05]: KKL + Unique Games Conjecture ⇒ Ω(log log log N) hardness-of-approx.

2-Colorable 3-Uniform hypergraphs: Input: a 2-colorable, 3-unif. hypergraph. Output: a 2-coloring. Obj: max. fraction of legally colored hyperedges.

2-Colorable 3-Uniform hypergraphs: Algorithm: SDP [KLP96]. Guarantee: [Zwick99]

Algorithmic Gap Instance. Vertices: {−1,+1}ⁿ. 6ⁿ hyperedges: { (x,y,z) : possible preferences in a Condorcet election } (i.e., triples s.t. (xᵢ, yᵢ, zᵢ) is NAE for all i).

Elts: {−1,+1}ⁿ. Edges: Condorcet votes (x,y,z). A 2-coloring = f : {−1,+1}ⁿ → {−1,+1}; frac. of legally colored hyperedges = Pr["rational" outcome with f]. Instance 2-colorable? ✔ (2n optimal solutions: the ±Dictators)

Elts: {−1,+1}ⁿ. Edges: Condorcet votes (x,y,z). The SDP rounding alg. may output f(x) = sgn(r₁x₁ + ··· + rₙxₙ), a random weighted majority, which is also rational-with-prob. ≈ .912! [same CLT arg.]
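
A Monte Carlo sketch (not from the slides, with illustrative parameter choices) of that claim:

import random

def rational_fraction(n=201, trials=20_000, rng=random.Random(0)):
    # Fix one random weighted majority f(x) = sgn(r_1 x_1 + ... + r_n x_n),
    # then sample Condorcet vote profiles and count rational outcomes.
    r = [rng.gauss(0, 1) for _ in range(n)]
    sgn = lambda v: 1 if v > 0 else -1
    f = lambda votes: sgn(sum(ri * vi for ri, vi in zip(r, votes)))
    rankings = [(a, b, c) for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)
                if not (a == b == c)]
    rational = 0
    for _ in range(trials):
        prefs = [rng.choice(rankings) for _ in range(n)]   # each voter: an NAE triple
        x, y, z = zip(*prefs)
        outcome = (f(x), f(y), f(z))
        rational += (not (outcome[0] == outcome[1] == outcome[2]))
    return rational / trials

print(rational_fraction())   # comes out near .91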

Algorithmic Gaps → Hardness-of-Approx: take an LP / SDP-rounding Alg. Gap instance with n optimal "Dictator" solutions, on which a "generic mixture of Dictators" does much worse; + PCP technology = same-gap hardness-of-approximation.

Corollary of Majority Is Stablest: If Infᵢ(f) ≤ o(1) for all i, then Pr[rational outcome with f] ≤ .912 + o(1). Cor: this + Unique Games Conjecture ⇒ .912 hardness-of-approx*

2C. Future Directions

Develop the “structure vs. pseudorandomness” theory for Boolean functions.