Sensitivity of voting schemes and coin tossing protocols, Nov 1


Sensitivity of voting schemes and coin tossing protocols, Nov 1
Lecturer: Elchanan Mossel. Scribe: Radu Mihaescu. Slides partially stolen from Ryan O’Donnell. 9/18/2018

Tossing coins from a cosmic source
Source: x = 01010001011011011111 (n bits)
Each party (Alice, Bob, Cindy, …, Kate) receives the same string:
y1 = y2 = y3 = … = yk = 01010001011011011111
and outputs, say, its first bit.

Broadcast with ε errors
Source: x = 01010001011011011111 (n bits)
Each party receives an independently corrupted copy and outputs its first bit:
y1 = 01011000011011011111 (Alice: 0)
y2 = 01010001011110011011 (Bob: 0)
y3 = 11010001011010011111 (Cindy: 1)
…
yk = 01010011011001010111 (Kate: 0)
With the first-bit protocol the parties can now disagree.

Broadcast with ε errors, majority protocol
Same corrupted copies:
y1 = 01011000011011011111
y2 = 01010001011110011011
y3 = 11010001011010011111
…
yk = 01010011011001010111
but each party now outputs the majority of its bits (here all parties output 1); since each yi is a lightly corrupted copy of x, the parties are likely to agree.

The parameters
- n: length of the uniformly random “source” string x
- k: parties who cannot communicate, but wish to agree on a uniformly random bit
- ε: each party gets an independently corrupted copy yi, each bit flipped independently with probability ε
- f (or f1, …, fk): balanced “protocol” functions
Our goal: for each n, k, ε, find the best protocol function f (or functions f1, …, fk) which maximizes the probability that all parties agree on the same bit.
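As a concrete rendering of this setup (a minimal sketch of my own, not from the lecture; exact enumeration is only feasible for small n), the agreement probability of a protocol can be computed exactly by conditioning on the source string x:

```python
from itertools import product

def agree_prob(f, n, k, eps):
    """Exact P[all k parties output the same bit], where each party
    applies f to an independently eps-corrupted copy of a uniform x."""
    total = 0.0
    for x in product([0, 1], repeat=n):
        # q = P[f(y) = 1 | x], summing over all corrupted copies y
        q = 0.0
        for y in product([0, 1], repeat=n):
            flips = sum(a != b for a, b in zip(x, y))
            q += f(y) * eps ** flips * (1 - eps) ** (n - flips)
        # either all k parties say 1, or all k say 0
        total += q ** k + (1 - q) ** k
    return total / 2 ** n

dictator = lambda y: y[0]
majority = lambda y: int(sum(y) > len(y) / 2)

p_dict = agree_prob(dictator, 3, 3, 0.1)  # = (1 - eps)^3 + eps^3
p_maj = agree_prob(majority, 3, 3, 0.1)
```

For these parameters the dictator beats majority, consistent with the k = 3 result on the next slides.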

Coins and voting schemes
Our goal: for each n, k, ε, find the best protocol function f (or functions f1, …, fk) which maximizes the probability that all parties agree on the same bit.
For k = 2 we want to maximize P[f1(y1) = f2(y2)], where y1 and y2 are related by applying ε-noise twice. Optimal protocol: f1 = f2 = dictatorship (a single fixed bit of the source). The same is true for k = 3 (M-O’Donnell).
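For tiny n the k = 2 claim can also be checked by brute force (my own illustration, not part of the lecture): enumerate every pair of balanced functions on n = 3 bits and confirm that the maximum agreement probability equals the dictator value (1 − ε)² + ε²:

```python
from itertools import product, combinations

n, eps = 3, 0.1
xs = list(product([0, 1], repeat=n))

def one_prob(f, x):
    # P[f(y) = 1 | x] for an eps-noisy copy y of x
    q = 0.0
    for y in xs:
        flips = sum(a != b for a, b in zip(x, y))
        q += f[y] * eps ** flips * (1 - eps) ** (n - flips)
    return q

# all balanced functions: value 1 on exactly half of the 2^n inputs
balanced = []
for ones in combinations(range(len(xs)), len(xs) // 2):
    balanced.append({x: int(i in ones) for i, x in enumerate(xs)})

# precompute the vector (Tf)(x) for each f, then scan all pairs
qs = [[one_prob(f, x) for x in xs] for f in balanced]
best = max(
    sum(a * b + (1 - a) * (1 - b) for a, b in zip(q1, q2)) / len(xs)
    for q1 in qs for q2 in qs
)
```

The maximum is attained at f1 = f2 = dictator (and at its negation), matching the theorem.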

Proof that optimality is achieved at f1 = f2 = x1
With ±1-valued functions, maximizing P[f1(y1) = f2(y2)] amounts to maximizing E[f1(y1) f2(y2)] = E[(Tη f1)(Tη f2)] = Σ_S η^{2|S|} f̂1(S) f̂2(S) for η = 1 − 2ε. But by Cauchy-Schwarz,
Σ_S η^{2|S|} f̂1(S) f̂2(S) ≤ (Σ_S η^{2|S|} f̂1(S)²)^{1/2} (Σ_S η^{2|S|} f̂2(S)²)^{1/2} ≤ η²,
since balancedness gives f̂i(∅) = 0, so every surviving term has |S| ≥ 1 and Σ_S f̂i(S)² = 1. Equality is trivially achieved for f1 = f2 = x1.

Proof that optimality is achieved for f1 = f2 = f3 = x1
For 3 functions, disagreement means that two agree and the third disagrees, so exactly two of the three pairs disagree. Therefore:
P[not all agree] = ½ ( P[f1(y1) ≠ f2(y2)] + P[f1(y1) ≠ f3(y3)] + P[f2(y2) ≠ f3(y3)] ).
Now each term in the sum above can be minimized independently, and by the k = 2 case each is minimized by the same dictator.

Notation
We write S(f1, …, fk; ε) = P[f1(y1) = ··· = fk(yk)], and Sk(f; ε) in the case f = f1 = ··· = fk.
Further motivation:
- Noise in “ever-lasting security” crypto protocols (Ding and Rabin).
- Variant of a decoding problem.
- Study of noise sensitivity: ‖Tρ f‖_k^k, where Tρ is the Bonami-Beckner operator.

Protocols
Recall that we want the parties’ bits, when agreed upon, to be uniformly random. To get this, we restricted to balanced functions. However, this is neither necessary nor sufficient! In particular, for n = 5 and k = 3, there is a balanced function f such that, if all players use f, they are more likely to agree on 1 than on 0. To get agreed-upon bits to be uniform, it suffices for the functions to be antisymmetric: f(x̄) = 1 − f(x).
Thm [M-O’Donnell]: In the optimal protocol f1 = … = fk = f, and f is monotone (proof uses convexity and symmetrization). We are thus in the same setting as in the voting case.
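To illustrate why antisymmetry suffices (a sketch of my own, not from the lecture): for an antisymmetric f the map x ↦ x̄ swaps the events “all agree on 1” and “all agree on 0”, so the two have equal probability. The check below verifies this for majority on n = 3, which is antisymmetric:

```python
from itertools import product

def agree_on(f, n, k, eps, bit):
    """Exact P[all k parties output `bit`]."""
    total = 0.0
    for x in product([0, 1], repeat=n):
        q = 0.0  # P[f(y) = 1 | x]
        for y in product([0, 1], repeat=n):
            flips = sum(a != b for a, b in zip(x, y))
            q += f(y) * eps ** flips * (1 - eps) ** (n - flips)
        total += q ** k if bit == 1 else (1 - q) ** k
    return total / 2 ** n

majority = lambda y: int(sum(y) > len(y) / 2)  # antisymmetric for odd n

p1 = agree_on(majority, 3, 3, 0.1, 1)  # all agree on 1
p0 = agree_on(majority, 3, 3, 0.1, 0)  # all agree on 0
```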

Proof of M-O’Donnell Theorem
Claim 1: in the optimal protocol, f1 = f2 = … = fk = f.
Proof: Let f1, f2, …, fM be all the possible functions, where M = 2^{2^n}, and let t1, t2, …, tM be the numbers of players using each function, Σ ti = k. Writing (Tfi)(x) = P[fi(y) = 1 | x], the agreement probability is
E_x[ Π_i (Tfi(x))^{ti} + Π_i (1 − Tfi(x))^{ti} ].
But for each value of x, 0 < Tfi(x) < 1, and therefore for each value of x both terms above are convex in (t1, …, tM). Therefore the expectation of the sum is also convex in (t1, …, tM), which implies that the optimum is achieved at an extreme point (k, 0, 0, …, 0).
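The convexity argument can be sanity-checked numerically (an illustration under my own choice of parameters, not from the lecture): for n = 3, k = 4, ε = 0.1, a mixed protocol in which two parties use majority and two use the dictator does no better than the better of the two pure protocols:

```python
from itertools import product

n, k, eps = 3, 4, 0.1
xs = list(product([0, 1], repeat=n))

def one_prob(f, x):
    # (Tf)(x) = P[f(y) = 1 | x] for an eps-noisy copy y of x
    q = 0.0
    for y in xs:
        flips = sum(a != b for a, b in zip(x, y))
        q += f(y) * eps ** flips * (1 - eps) ** (n - flips)
    return q

def agree(fs):
    """Exact agreement probability when party i uses fs[i]."""
    total = 0.0
    for x in xs:
        p1 = p0 = 1.0
        for f in fs:
            q = one_prob(f, x)
            p1 *= q        # all parties output 1
            p0 *= 1 - q    # all parties output 0
        total += p1 + p0
    return total / len(xs)

dictator = lambda y: y[0]
majority = lambda y: int(sum(y) > n / 2)

mixed = agree([dictator, dictator, majority, majority])
pure = max(agree([dictator] * k), agree([majority] * k))
```

Since the objective is convex in the multiplicities (t1, …, tM), the mixed point on the segment between the two pure protocols cannot beat the better endpoint.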

Proof of M-O’Donnell Theorem (continued)
Claim 2: The optimum is achieved when f is monotone.
Proof: We use the technique of shifting (as in the proof of the isoperimetric inequality). Define g from f by shifting the first coordinate:
- If f(0,x2,…,xn) = f(1,x2,…,xn), then set g(0,x2,…,xn) = g(1,x2,…,xn) = f(0,x2,…,xn) = f(1,x2,…,xn).
- If f(0,x2,…,xn) ≠ f(1,x2,…,xn), then set g(0,x2,…,xn) = 0 and g(1,x2,…,xn) = 1.
Subclaim: g is “better” than f, even conditioned on the values of (yji) for j ≥ 2 and 1 ≤ i ≤ k.
Proof of subclaim: Having fixed the (yji)’s, each party’s function is a function of the first bit alone. Suppose a of the functions are identically 0, b are identically 1, and c are non-trivial. If both a, b > 0, agreement has probability 0 for both f and g.

Proof of M-O’Donnell Theorem (continued)
Suppose a = b = 0. Write c = c_up + c_down, where c_up is the number of the non-trivial functions that are increasing in the first bit and c_down the number that are decreasing. Then the probability of agreement for f is
P_agree^f = (1−ε)^{c_up} ε^{c_down} + ε^{c_up} (1−ε)^{c_down}.
On the other hand, after shifting all c non-trivial functions are increasing, so the probability of agreement for g is
P_agree^g = (1−ε)^c + ε^c,
and P_agree^g ≥ P_agree^f since t ↦ (1−ε)^t ε^{c−t} + ε^t (1−ε)^{c−t} is convex in t and both endpoints t = 0, c give P_agree^g. For a > 0 and b = 0, or vice-versa, the analysis is identical save for a factor of ½. ■ ■ Thm.

More results [M-O’Donnell]
- When k = 2 or 3, the first-bit function is best.
- For fixed n, when k → ∞, majority is best.
- For fixed n and k, when ε → 0 and when ε → ½, the first-bit function is best. The proof for ε → 0 uses the isoperimetric inequality for the edge boundary; the proof for ε → ½ uses Fourier analysis.
- For unbounded n, things get harder… in general we don’t know the best function, but we can give bounds on Sk(f; ε).
Main open problem for finite n (odd): Is the optimal protocol always a majority of a subset of the bits? Conjecture M: No. Conjecture O: Yes.

For fixed n and ε, when k → ∞ majority is best
Proof: We have seen that in the optimal case all the f’s are equal and monotone. Then, writing (Tf)(x) = P[f(y) = 1 | x],
P[all agree] = 2^{−n} Σ_x [ (Tf(x))^k + (1 − Tf(x))^k ].
But when k → ∞, we only care about the dominant terms, i.e. (Tf(1⃗))^k + (1 − Tf(0⃗))^k. (Tf is monotone when f is monotone, so Tf is largest at the all-ones string and smallest at the all-zeros string.) We are therefore trying to maximize Tf(1⃗) = P[f(y) = 1 | x = 1⃗] over balanced f. But ε ≤ ½, therefore the maximum is achieved by setting f = 1 on the 2^{n−1} most probable strings given x = 1⃗, i.e. the top half of the distribution: the strings with the most 1s, which is majority. ■
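Numerically (my own illustration, not from the lecture), the crossover is visible already at n = 3, ε = 0.1: for small k the dictator wins, but as k grows majority overtakes it, since the dominant base (Tmaj)(1⃗) = 0.972 exceeds 1 − ε = 0.9:

```python
from itertools import product

n, eps = 3, 0.1
xs = list(product([0, 1], repeat=n))

def agree(f, k):
    """Exact agreement probability for k parties all using f."""
    total = 0.0
    for x in xs:
        q = 0.0  # (Tf)(x) = P[f(y) = 1 | x]
        for y in xs:
            flips = sum(a != b for a, b in zip(x, y))
            q += f(y) * eps ** flips * (1 - eps) ** (n - flips)
        total += q ** k + (1 - q) ** k
    return total / len(xs)

dictator = lambda y: y[0]
majority = lambda y: int(sum(y) > n / 2)

few = (agree(dictator, 3), agree(majority, 3))     # dictator ahead
many = (agree(dictator, 30), agree(majority, 30))  # majority ahead
```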

Unbounded n
Fixing ε and letting n = ∞, how does h(k,ε) := P[f1 = … = fk] decay as a function of k?
First guess: h(k,ε) decays exponentially with k. But!
Prop [M-O’Donnell]: h(k,ε) ≥ k^{−c(ε)}, where c(ε) > 0.
Conj [M-O’Donnell]: h(k,ε) → 0 as k → ∞.
Thm [M-O’Donnell-Regev-Steif-Sudakov]: h(k,ε) ≤ k^{−c’(ε)}.

Harmonic analysis of Boolean functions
To prove “hard” results one needs to do harmonic analysis of Boolean functions. It consists of many combinatorial and probabilistic tricks + “hypercontractivity”. If p − 1 = ρ²(q − 1), then:
- ‖Tρ f‖_q ≤ ‖f‖_p if p > 1 (Bonami-Beckner);
- ‖Tρ f‖_q ≥ ‖f‖_p if p < 1 and f > 0 (Borell).
Our application uses the second; in particular it implies that for all sets A and B:
P[x ∈ A, N(x) ∈ B] ≥ P(A)^{1/p} P(B)^{q},
where N(x) denotes a noisy copy of x. Similar inequalities hold for the Ornstein-Uhlenbeck process and “whenever” there is a log-Sobolev inequality.
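The Bonami-Beckner inequality can be checked numerically on small n (a sketch of my own; the classical case p = 2, q = 4 with ρ = 1/√3 satisfies p − 1 = ρ²(q − 1)):

```python
import math
import random
from itertools import product

n = 3
p, q = 2.0, 4.0
rho = 1 / math.sqrt(3)  # p - 1 = rho^2 (q - 1)
xs = list(product([-1, 1], repeat=n))

def noise_op(f, rho):
    """(T_rho f)(x) = E[f(y)], each y_i = x_i with prob (1 + rho)/2."""
    g = {}
    for x in xs:
        val = 0.0
        for y in xs:
            w = 1.0
            for a, b in zip(x, y):
                w *= (1 + rho) / 2 if a == b else (1 - rho) / 2
            val += w * f[y]
        g[x] = val
    return g

def norm(g, r):
    # L_r norm under the uniform measure on {-1, 1}^n
    return (sum(abs(g[x]) ** r for x in xs) / len(xs)) ** (1 / r)

# ||T_rho f||_q <= ||f||_p should hold for every f
random.seed(0)
violations = 0
for _ in range(100):
    f = {x: random.uniform(-1, 1) for x in xs}
    if norm(noise_op(f, rho), q) > norm(f, p) + 1e-9:
        violations += 1
```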

Coins on other trees
We can define the coin problem on trees; so far we have only discussed the star. [Figure: the star with center x and leaves y1, …, y5, versus a line x = y1, y2, …, y5 and a deeper tree.]
Some highlights from MORSS:
- On the line, dictator is always optimal (a new result on Markov chains).
- For some trees, different fi’s are needed.

Wrap-up
We have seen a variety of “stability” problems for voting schemes and coin tossing. Sometimes it is “easy” to show that dictator is optimal. Sometimes majority is (almost) optimal, but this is typically hard to prove (why?). Recursive majority is really (the most) unstable.

Open problems
- Does f monotone and anti-symmetric, μ FKG with μ[Xi] = p > ½, and all effects ei < δ, imply μ[f] ≥ 1 − ε?
- For μ the i.i.d. measure, is the (almost) most stable f with effects ei = o(1) the majority function (for k = 2? all k?)?
- Is the most stable f for the Gaussian coin problem f(x) = sign(x), and is the result robust?
- For the coin problem, is the optimal f always a majority of a subset of the bits?