Ryan O’Donnell, Carnegie Mellon University
Part 1:
A. Fourier expansion basics
B. Concepts: Bias, Influences, Noise Sensitivity
C. Kalai’s proof of Arrow’s Theorem
10 Minute Break
Part 2:
A. The Hypercontractive Inequality
B. Algorithmic Gaps
Sadly no time for: learning theory, pseudorandomness, arithmetic combinatorics, random graphs / percolation, communication complexity, metric / Banach spaces, coding theory, etc.
1A. Fourier expansion basics
f : {0,1}^n → {0,1}
f : {−1,+1}^n → {−1,+1}
[Figures: f : {−1,+1}³ → {−1,+1} drawn as a ±1 labeling of the vertices of the Hamming cube {−1,+1}³ ⊂ ℝ³, from (−1,−1,−1) to (+1,+1,+1), with the labels at vertices such as (+1,+1,−1) and (+1,−1,−1) highlighted one at a time.]
f(x) = Σ_{a∈{−1,+1}^n} f(a) · 1_{x=a}
     = Σ_{a∈{−1,+1}^n} f(a) · ∏_{i=1}^n (1 + a_i x_i)/2.
Expanding the products and collecting terms leaves a multilinear polynomial in x₁, …, x_n.
Proposition: Every f : {−1,+1}^n → {−1,+1} (indeed, every f : {−1,+1}^n → ℝ) can be (uniquely) expressed as a multilinear polynomial,
f(x) = Σ_{S⊆[n]} f̂(S) · ∏_{i∈S} x_i.
That’s it. That’s the “Fourier expansion” of f.
Example: Maj(x₁, x₂, x₃) = ½x₁ + ½x₂ + ½x₃ − ½x₁x₂x₃. Rest of the coefficients: 0.
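To see the Proposition in action, here is a small Python sketch (the code and the names in it are mine, not the talk’s) that recovers the expansion above by computing each coefficient as the correlation f̂(S) = avg_x f(x)·∏_{i∈S} x_i:

from itertools import product, combinations

def maj3(x):
    # Majority of three +/-1 votes.
    return 1 if sum(x) > 0 else -1

n = 3
cube = list(product([-1, 1], repeat=n))

def fourier_coeff(f, S):
    # f_hat(S) = average over x in {-1,+1}^n of f(x) * prod_{i in S} x_i.
    total = 0
    for x in cube:
        chi = 1
        for i in S:
            chi *= x[i]
        total += f(x) * chi
    return total / len(cube)

for r in range(n + 1):
    for S in combinations(range(n), r):
        c = fourier_coeff(maj3, S)
        if c != 0:
            print(S, c)
# Prints (0,) 0.5, (1,) 0.5, (2,) 0.5, (0, 1, 2) -0.5:
# exactly Maj(x1,x2,x3) = (1/2)x1 + (1/2)x2 + (1/2)x3 - (1/2)x1x2x3.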
Why? Coefficients encode useful information.
When? 1. Uniform probability involved. 2. Hamming distances relevant.
Parseval’s Theorem: Let f : {−1,+1}^n → {−1,+1}. Then Σ_{S⊆[n]} f̂(S)² = avg_x { f(x)² } = 1.
“Weight” of f on S ⊆ [n] = f̂(S)².
[Figures: example Fourier weight distributions displayed on the lattice of subsets ∅, {1}, {2}, {3}, {1,2}, {1,3}, {2,3}, {1,2,3}.]
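A quick numerical sanity check of Parseval (again my own illustrative code, not from the talk): for any Boolean f, the squared coefficients sum to exactly 1. Here f is a uniformly random lookup table:

import math, random
from itertools import product, combinations

n = 3
cube = list(product([-1, 1], repeat=n))

# A uniformly random Boolean function on {-1,+1}^3, as a lookup table.
table = {x: random.choice([-1, 1]) for x in cube}

def fourier_coeff(S):
    return sum(table[x] * math.prod(x[i] for i in S) for x in cube) / len(cube)

total_weight = sum(
    fourier_coeff(S) ** 2
    for r in range(n + 1) for S in combinations(range(n), r))
print(total_weight)   # always 1.0 (up to float error), whatever f is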
1B. Concepts: Bias, Influences, Noise Sensitivity
Social Choice: candidates ±1; n voters; votes are uniformly random. f : {−1,+1}^n → {−1,+1} is the “voting rule”.
Bias of f: avg_x f(x) = Pr[+1 wins] − Pr[−1 wins]. Fact: Bias(f) = f̂(∅), so the weight on ∅ = Bias(f)² measures “imbalance”.
Influence of i on f: Inf_i(f) = Pr[ f(x) ≠ f(x^(⊕i)) ] = Pr[voter i is a swing voter]. Fact: Inf_i(f) = Σ_{S∋i} f̂(S)².
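The two descriptions of influence, combinatorial and Fourier-analytic, can be checked against each other; a minimal sketch (my code, assuming the Fourier formula Inf_i(f) = Σ_{S∋i} f̂(S)² stated above):

import math
from itertools import product, combinations

n = 3
cube = list(product([-1, 1], repeat=n))

def maj3(x):
    return 1 if sum(x) > 0 else -1

def flip(x, i):
    y = list(x)
    y[i] = -y[i]
    return tuple(y)

def inf_combinatorial(f, i):
    # Pr over uniform x that flipping coordinate i changes f's value.
    return sum(f(x) != f(flip(x, i)) for x in cube) / len(cube)

def fourier_coeff(f, S):
    return sum(f(x) * math.prod(x[j] for j in S) for x in cube) / len(cube)

def inf_fourier(f, i):
    # Inf_i(f) = sum of f_hat(S)^2 over all S containing i.
    return sum(fourier_coeff(f, S) ** 2
               for r in range(n + 1)
               for S in combinations(range(n), r) if i in S)

for i in range(n):
    print(i, inf_combinatorial(maj3, i), inf_fourier(maj3, i))
# Each voter in Maj3 is a swing voter with probability 1/2:
# both computations print 0.5 for every i.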
[Figure: the Fourier weights of Maj(x₁, x₂, x₃) on the subset lattice: weight ¼ on each of {1}, {2}, {3}, and ¼ on {1,2,3}.]
[Figures: Inf_i(f) = Pr[ f(x) ≠ f(x^(⊕i)) ] pictured on the cube: the ±1 labels at the endpoints of edges in direction i.]
avg_i Inf_i(f) = frac. of edges which are cut edges.
LMN Theorem: If f is in AC⁰, then avg_i Inf_i(f) ≤ polylog(n)/n.
avg_i Inf_i(Parity_n) = 1 ⇒ Parity ∉ AC⁰.
avg_i Inf_i(Maj_n) = Θ(1/√n) ⇒ Majority ∉ AC⁰.
KKL Theorem: If Bias(f) = 0, then max_i Inf_i(f) ≥ Ω(log n / n). Corollary: Assuming f monotone, −1 or +1 can bribe o(n) voters and win w.p. 1−o(1).
Noise Sensitivity of f at δ: NS_δ(f) = Pr[ f(x) ≠ f(y) ], where y is x with each coordinate flipped independently with probability δ. Election interpretation: Pr[wrong winner wins] when each vote is misrecorded w/prob δ. E.g., x = +−++−−+−−, y = −−+++++−−.
Learning Theory principle [LMN’93, …, KKMS’05]: If all f ∈ C have small NS_δ(f), then C is efficiently learnable.
[Figure: the subset lattice ∅, {1}, {2}, {3}, {1,2}, {1,3}, {2,3}, [3] again, illustrating how noise attenuates the high levels.] In formulas: NS_δ(f) = ½ − ½ Σ_{S⊆[n]} (1−2δ)^{|S|} f̂(S)², so small noise sensitivity ⇔ Fourier weight concentrated at low levels |S|.
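A sketch (my own code) verifying this noise-sensitivity formula exactly for Maj₃, by enumerating every input x and every flip pattern T with its probability:

import math
from itertools import product, combinations

n = 3
cube = list(product([-1, 1], repeat=n))
maj3 = lambda x: 1 if sum(x) > 0 else -1
delta = 0.1

def fourier_coeff(f, S):
    return sum(f(x) * math.prod(x[i] for i in S) for x in cube) / len(cube)

# Exact NS_delta: average over inputs x and flip patterns T.
ns = 0.0
for x in cube:
    for T in product([0, 1], repeat=n):
        p = math.prod(delta if t else 1 - delta for t in T)
        y = tuple(-v if t else v for v, t in zip(x, T))
        ns += p * (maj3(x) != maj3(y))
ns /= len(cube)

# Fourier formula: NS_delta(f) = 1/2 - 1/2 * sum_S (1-2*delta)^|S| * f_hat(S)^2.
formula = 0.5 - 0.5 * sum(
    (1 - 2 * delta) ** len(S) * fourier_coeff(maj3, S) ** 2
    for r in range(n + 1) for S in combinations(range(n), r))

print(ns, formula)   # identical up to float error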
Proposition: NS_δ(Maj_n) ≈ (2/π)·√δ for small δ, as n → ∞. (Electoral College: a two-level majority is even more noise-sensitive.)
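A Monte Carlo sketch (my code; the choices of n, δ, and trial count are arbitrary) of the Proposition’s asymptotic:

import math, random

def maj(x):
    return 1 if sum(x) > 0 else -1

def ns_estimate(n, delta, trials=20_000):
    bad = 0
    for _ in range(trials):
        x = [random.choice([-1, 1]) for _ in range(n)]
        y = [-v if random.random() < delta else v for v in x]
        bad += maj(x) != maj(y)
    return bad / trials

for delta in (0.01, 0.04, 0.16):
    print(delta, ns_estimate(201, delta), (2 / math.pi) * math.sqrt(delta))
# For large n and small delta, the estimate tracks (2/pi)*sqrt(delta).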
1C. Kalai’s proof of Arrow’s Theorem
Ranking 3 candidates [Condorcet, 1785]. Election: voter i reports x_i = (A > B?), y_i = (B > C?), z_i = (C > A?), each in {−1,+1}. Voter rationality ⇒ (x_i, y_i, z_i) is Not All Equal (no +1+1+1, no −1−1−1). Decide each contest with the rule f: A > B? f(x). B > C? f(y). C > A? f(z).
Often the outcome is rational; e.g., f(x) = +, f(y) = +, f(z) = − means Society: “A > B > C”. But Condorcet observed that with f = Maj the outcome can be “irrational”: A > B > C > A. Easy e.g., with voters “C > A > B”, “A > B > C”, “B > C > A”:
“C > A > B”: x₁ = +, y₁ = −, z₁ = +
“A > B > C”: x₂ = +, y₂ = +, z₂ = −
“B > C > A”: x₃ = −, y₃ = +, z₃ = +
Maj(x) = +, Maj(y) = +, Maj(z) = +. Society: “A > B > C > A” ?
Maybe some other f avoids this?
Arrow’s Impossibility Theorem [1950]: If f : {−1,+1}^n → {−1,+1} never gives an irrational outcome in Condorcet elections, then f is a Dictator or a negated-Dictator.
Gil Kalai’s Proof [2002]: Put the uniform distribution on rational votes: each voter’s NAE triple (x_i, y_i, z_i) is independent and uniform among the 6 NAE triples. Then x, y, z are each uniform on {−1,+1}^n, and each pair is (−1/3)-correlated: E[x_i y_i] = E[y_i z_i] = E[z_i x_i] = −1/3.
For a, b, c ∈ {−1,+1}: NAE(a, b, c) = ¾ − ¼(ab + bc + ca). Hence
Pr[rational outcome] = E[ NAE(f(x), f(y), f(z)) ] = ¾ − ¾·E[f(x) f(y)] = ¾ − ¾·Σ_{S⊆[n]} (−1/3)^{|S|} f̂(S)².
Gil Kalai’s Proof, concluded: Since (−1/3)^{|S|} ≥ −1/3 with equality iff |S| = 1, and Σ_S f̂(S)² = 1 (Parseval), Pr[rational outcome] ≤ ¾ + ¼ = 1. So f never gives irrational outcomes ⇒ equality ⇒ all Fourier weight “at level 1” ⇒ f(x) = ±x_j for some j (exercise).
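A brute-force sketch (my own code, not the talk’s) of the quantities in Kalai’s proof for f = Maj₃: enumerate all 6³ = 216 preference profiles, run the three pairwise contests, and compare with the Fourier formula. Both give 17/18 ≈ 0.944.

from itertools import product

# The 6 rational rankings of {A,B,C}, encoded as (x,y,z) =
# (A>B?, B>C?, C>A?) in +/-1. These are exactly the NAE triples.
RANKINGS = [t for t in product([-1, 1], repeat=3) if len(set(t)) > 1]
assert len(RANKINGS) == 6

def maj(votes):
    return 1 if sum(votes) > 0 else -1

n = 3
profiles = list(product(RANKINGS, repeat=n))
rational = 0
for profile in profiles:
    x = [p[0] for p in profile]   # each voter's A-vs-B vote
    y = [p[1] for p in profile]   # B vs C
    z = [p[2] for p in profile]   # C vs A
    outcome = (maj(x), maj(y), maj(z))
    rational += len(set(outcome)) > 1   # NAE outcome = rational
print(rational / len(profiles))          # 17/18 = 0.9444...

# Fourier check: Maj3 has weight 3/4 at level 1 and 1/4 at level 3, so
# Pr[rational] = 3/4 - 3/4*(3/4*(-1/3) + 1/4*(-1/3)**3) = 17/18.
print(3/4 - 3/4 * (3/4 * (-1/3) + 1/4 * (-1/3) ** 3))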
⇓
Guilbaud’s Theorem [1952]: Pr[rational outcome with Maj_n] → (3/(2π))·arccos(−1/3) as n → ∞.
Guilbaud’s Number: (3/(2π))·arccos(−1/3) ≈ .912.
Corollary of “Majority Is Stablest” [MOO05]: If Inf_i(f) ≤ o(1) for all i, then Pr[rational outcome with f] ≤ .912… + o(1).
Part 2:
A. The Hypercontractive Inequality
B. Algorithmic Gaps
2A. The Hypercontractive Inequality (AKA the Bonami-Beckner Inequality)
KKL Theorem, Friedgut’s Theorem, Talagrand’s Theorem, “Every monotone graph property has a sharp threshold”, FKN Theorem, Bourgain’s Junta Theorem, Majority Is Stablest Theorem: all use the “Hypercontractive Inequality”.
Hoeffding Inequality: Let F = c₀ + c₁x₁ + c₂x₂ + ··· + c_n x_n, where the x_i’s are indep., unif. random ±1.
Mean: μ = c₀. Variance: σ² = Σ_{i≥1} c_i². Then Pr[ |F − μ| ≥ t·σ ] ≤ 2·e^{−t²/2}.
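A simulation sketch of the bound (my own code; the coefficients are arbitrary, with c₀ = 0 so μ = 0):

import math, random

n = 100
c = [random.gauss(0, 1) for _ in range(n)]     # arbitrary fixed coefficients
sigma = math.sqrt(sum(ci * ci for ci in c))

def F():
    return sum(ci * random.choice([-1, 1]) for ci in c)

trials = 100_000
samples = [F() for _ in range(trials)]
for t in (1, 2, 3):
    emp = sum(abs(s) >= t * sigma for s in samples) / trials
    print(t, emp, 2 * math.exp(-t * t / 2))
# The empirical tail Pr[|F| >= t*sigma] stays below 2*exp(-t^2/2).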
Hypercontractive Inequality: Let F = Σ_{|S|≤d} c_S ∏_{i∈S} x_i, a degree-d multilinear polynomial in indep., unif. random ±1’s. Mean: μ = c_∅. Variance: σ² = Σ_{S≠∅} c_S².
Then for all q ≥ 2, ‖F‖_q ≤ (q−1)^{d/2} · ‖F‖₂.
Then F is a “reasonable_d” random variable: its tails behave like a degree-d analogue of a Gaussian’s.
“q = 4”: E[F⁴] ≤ 9^d · E[F²]².
KKL Theorem, Friedgut’s Theorem, Talagrand’s Theorem, “Every monotone graph property has a sharp threshold”, FKN Theorem, Bourgain’s Junta Theorem, Majority Is Stablest Theorem: all of these in fact just use the “q = 4” Hypercontractive Inequality.
“q = 4” Hypercontractive Inequality: Let F be degree ≤ d over n i.i.d. ±1 r.v.’s. Then E[F⁴] ≤ 9^d · E[F²]². Proof [MOO’05]: Induction on n. Obvious step. Use induction hypothesis. Use Cauchy-Schwarz on the obvious thing. Use induction hypothesis. Obvious step.
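A sketch (my code) checking the “q = 4” statement by exact enumeration for a random low-degree polynomial; n and d are arbitrary small choices:

import random
from itertools import product, combinations

n, d = 6, 2
# A random multilinear polynomial of degree <= d in n variables.
monomials = [S for r in range(d + 1) for S in combinations(range(n), r)]
coeffs = {S: random.gauss(0, 1) for S in monomials}

def F(x):
    total = 0.0
    for S, c in coeffs.items():
        term = c
        for i in S:
            term *= x[i]
        total += term
    return total

cube = list(product([-1, 1], repeat=n))
m2 = sum(F(x) ** 2 for x in cube) / len(cube)   # E[F^2]
m4 = sum(F(x) ** 4 for x in cube) / len(cube)   # E[F^4]
print(m4 <= 9 ** d * m2 ** 2)   # True: E[F^4] <= 9^d * E[F^2]^2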
2B. Algorithmic Gaps
Opt vs. best poly-time guarantee: ratio ln(N). “Set-Cover is NP-hard to approximate to factor ln(N).”
Opt vs. LP-Rand-Rounding guarantee: ratio ln(N). “Factor ln(N) Algorithmic Gap for LP-Rand-Rounding.”
Opt(S) vs. LP-Rand-Rounding(S): ratio ln(N). “Algorithmic Gap Instance S for LP-Rand-Rounding.”
Algorithmic Gap instances are often “based on” {−1,+1}^n.
Sparsest-Cut: Algorithm: Arora-Rao-Vazirani SDP. Guarantee: factor O(√(log N)).
Algorithmic Gap Instance: the hypercube graph {−1,+1}^n. Opt = 1/n (a Dictator cut x ↦ x_i cuts a 1/n fraction of edges). The ARV rounding may output a random halfspace cut, f(x) = sgn(r₁x₁ + ··· + r_n x_n), which cuts a Θ(1/√n) fraction of edges. So ARV gets Θ(1/√n); gap: Θ(√n) = Θ(√(log N)).
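A sketch (my code) of the two cut values on the cube; the fraction of edges cut equals the average influence of the cut function:

import math, random
from itertools import product

n = 12
cube = list(product([-1, 1], repeat=n))

def frac_edges_cut(f):
    # Counts ordered pairs (x, i) with f(x) != f(x with bit i flipped);
    # dividing by n * 2^n gives the fraction of the n * 2^(n-1) edges cut.
    cut = 0
    for x in cube:
        for i in range(n):
            y = list(x)
            y[i] = -y[i]
            cut += f(x) != f(tuple(y))
    return cut / (n * len(cube))

dictator = lambda x: x[0]
r = [random.gauss(0, 1) for _ in range(n)]
halfspace = lambda x: 1 if sum(ri * xi for ri, xi in zip(r, x)) > 0 else -1

print(frac_edges_cut(dictator), 1 / n)              # exactly 1/n
print(frac_edges_cut(halfspace), 1 / math.sqrt(n))  # on the order of 1/sqrt(n)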
Algorithmic Gaps → Hardness-of-Approx.
LP / SDP-rounding Alg. Gap instance: n optimal “Dictator” solutions; a “generic mixture of Dictators” is much worse.
+ PCP technology = same-gap hardness-of-approximation.
KKL / Talagrand Theorem: If f is balanced and Inf_i(f) ≤ 1/n^{.01} for all i, then avg_i Inf_i(f) ≥ Ω(log n)/n. Gap: Θ(log n) = Θ(log log N).
[CKKRS05]: KKL + Unique Games Conjecture ⇒ Ω(log log log N) hardness-of-approx.
2-Colorable 3-Uniform Hypergraphs:
Input: a 2-colorable, 3-unif. hypergraph.
Output: a 2-coloring.
Obj: max. fraction of legally colored hyperedges.
2-Colorable 3-Uniform Hypergraphs: Algorithm: SDP [KLP96]. Guarantee: legally colors a ≈ .912 fraction of the hyperedges [Zwick99].
Algorithmic Gap Instance: Vertices: {−1,+1}^n. 6^n hyperedges: { (x,y,z) : possible preference profiles in a Condorcet election } (i.e., triples s.t. (x_i, y_i, z_i) is NAE for all i).
Elts: {−1,+1}^n. Edges: Condorcet votes (x,y,z). A 2-coloring is an f : {−1,+1}^n → {−1,+1}; frac. of legally colored hyperedges = Pr[“rational” outcome with f]. Instance 2-colorable? ✔ (2n optimal solutions: the ±Dictators.)
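A sketch (my own code) checking both claims on a small instance: every ±Dictator legally colors all hyperedges, while a random weighted majority legally colors roughly a .912 fraction (the approximation is rough at n = 5; the CLT kicks in for large n):

import random
from itertools import product

n = 5
NAE = [t for t in product([-1, 1], repeat=3) if len(set(t)) > 1]
hyperedges = list(product(NAE, repeat=n))   # all 6^n Condorcet vote profiles

def legal_fraction(f):
    # A hyperedge (x,y,z) is legally colored iff f(x), f(y), f(z) are
    # not all equal, i.e. the election outcome is rational.
    good = 0
    for profile in hyperedges:
        x = tuple(p[0] for p in profile)
        y = tuple(p[1] for p in profile)
        z = tuple(p[2] for p in profile)
        good += len({f(x), f(y), f(z)}) > 1
    return good / len(hyperedges)

print(legal_fraction(lambda v: v[0]))        # Dictator: 1.0
r = [random.gauss(0, 1) for _ in range(n)]
wmaj = lambda v: 1 if sum(ri * vi for ri, vi in zip(r, v)) > 0 else -1
print(legal_fraction(wmaj))                  # approaches ~.912 for large n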
Elts: {−1,+1}^n. Edges: Condorcet votes (x,y,z). The SDP rounding alg. may output f(x) = sgn(r₁x₁ + ··· + r_n x_n): a random weighted majority, which is also rational-with-prob.-≈.912! [same CLT argument]
Algorithmic Gaps → Hardness-of-Approx.
LP / SDP-rounding Alg. Gap instance: n optimal “Dictator” solutions; a “generic mixture of Dictators” is much worse.
+ PCP technology = same-gap hardness-of-approximation.
Corollary of Majority Is Stablest: If Inf_i(f) ≤ o(1) for all i, then Pr[rational outcome with f] ≤ .912… + o(1).
Cor: this + Unique Games Conjecture ⇒ .912 hardness-of-approx.*
2C. Future Directions
Develop the “structure vs. pseudorandomness” theory for Boolean functions.