Umans Complexity Theory Lectures Lecture 11: Randomness Extractors

Umans Complexity Theory Lectures Lecture 11: Randomness Extractors

2 Extractors
PRGs: can remove randomness from algorithms
– based on an unproven assumption
– polynomial slow-down
– not applicable in other settings
Question: can we use "real" randomness?
– physical source
– imperfect: biased, correlated

3 Extractors
"Hardware" side
– what physical source?
– ask the physicists…
"Software" side
– what is the minimum we need from the physical source?

4 Extractors
imperfect sources:
– "stuck bits"
– "correlation"
– "more insidious correlation": e.g., dependence among the bits at perfect-square positions
there are specific ways to get independent unbiased random bits from specific imperfect physical sources

5 Extractors
want to assume we don't know the details of the physical source
general model capturing all of these?
– yes: "min-entropy"
universal procedure for all imperfect sources?
– yes: "extractors"

6 Min-entropy
General model of a physical source with k < n bits of hidden randomness
Definition: a random variable X on {0,1}^n has min-entropy min_x –log2(Pr[X = x])
– min-entropy k implies no string has weight more than 2^-k
[figure: a set of 2^k strings inside {0,1}^n; the source samples a string uniformly from this set]
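As a concrete illustration (a sketch, not from the slides): for an explicitly given finite distribution, min-entropy is determined by the single heaviest outcome.

```python
import math

def min_entropy(dist):
    """H_min(X) = min_x -log2 Pr[X = x] = -log2 of the largest probability."""
    return -math.log2(max(dist.values()))

# A source uniform over 2^k of the strings in {0,1}^n has min-entropy exactly k:
flat = {format(i, "04b"): 0.25 for i in range(4)}   # 2^2 strings of n = 4 bits
assert min_entropy(flat) == 2.0

# A biased single bit carries much less than 1 bit of min-entropy:
assert min_entropy({"0": 0.9, "1": 0.1}) < 0.2
```

Note that a single stuck or heavily biased bit already caps min-entropy, matching the slide's "no string has weight more than 2^-k".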

7 Extractor
Extractor: universal procedure for "purifying" an imperfect source:
– E is efficiently computable
– truly random seed acts as a "catalyst"
[figure: E takes an n-bit source string (sampled from a set of 2^k strings in {0,1}^n) and a t-bit seed, and outputs m near-uniform bits]

8 Extractor
"(k, ε)-extractor": for all X with min-entropy k:
– output fools all circuits C:
|Pr_z[C(z) = 1] – Pr_{y, x←X}[C(E(x, y)) = 1]| ≤ ε
– distributions E(X, U_t), U_m are "ε-close" (L1 distance ≤ 2ε)
Notice similarity to PRGs
– output of PRG fools all efficient tests
– output of extractor fools all tests
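The "ε-close" condition can be made concrete in a few lines (an illustrative sketch; distributions are represented as dicts mapping outcomes to probabilities):

```python
def l1_distance(p, q):
    """L1 distance sum_s |p(s) - q(s)| between two finite distributions."""
    support = set(p) | set(q)
    return sum(abs(p.get(s, 0.0) - q.get(s, 0.0)) for s in support)

def eps_close(p, q, eps):
    """The statistical distance max_T |Pr_p[T] - Pr_q[T]| equals L1/2,
    so the slide's "eps-close" condition is: L1 distance <= 2*eps."""
    return l1_distance(p, q) <= 2 * eps

uniform = {s: 0.25 for s in ("00", "01", "10", "11")}
skewed  = {"00": 0.5, "01": 0.25, "10": 0.125, "11": 0.125}
assert l1_distance(uniform, skewed) == 0.5
assert eps_close(uniform, skewed, 0.25)
```

The factor of 2 is exactly why the slide states "L1 dist ≤ 2ε" rather than ≤ ε.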

9 Extractors
Goals:          good           best
short seed      O(log n)       log n + O(1)
long output     m = k^Ω(1)     m = k + t – O(1)
many k's        k = n^Ω(1)     any k = k(n)

10 Extractors
a random function for E achieves the best parameters!
– but we need explicit constructions
– many known; often complex and technical
– optimal extractors still open
Trevisan Extractor:
– insight: use the NW generator with the source string in place of the hard function
– this works (!!)
– proof slightly different from NW, easier

11 Trevisan Extractor
Ingredients: (δ > 0, m are parameters)
– error-correcting code C: {0,1}^n → {0,1}^n' with distance (½ – ¼m^-4)·n' and blocklength n' = poly(n)
– (log n', a = δ·log n/3)-design: S_1, S_2, …, S_m ⊆ {1…t = O(log n')}
E(x, y) = C(x)[y|S_1] ◦ C(x)[y|S_2] ◦ … ◦ C(x)[y|S_m]
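To make the design ingredient concrete, here is a toy sketch with small illustrative parameters (`greedy_design`, `restrict`, and `trevisan_output` are hypothetical helper names; the codeword C(x) is supplied externally rather than computed from a real error-correcting code):

```python
import itertools

def greedy_design(m, t, a, max_overlap):
    """Greedily collect m size-a subsets of {0,...,t-1} with pairwise
    intersections of size at most max_overlap (a weak combinatorial design)."""
    sets = []
    for cand in itertools.combinations(range(t), a):
        if all(len(set(cand) & set(S)) <= max_overlap for S in sets):
            sets.append(list(cand))
            if len(sets) == m:
                return sets
    raise ValueError("parameters admit no such design via this greedy pass")

def restrict(y_bits, S):
    """y|S: read the seed bits indexed by S as an integer position in C(x)."""
    return int("".join(str(y_bits[i]) for i in S), 2)

def trevisan_output(codeword, y_bits, design):
    """E(x, y) = C(x)[y|S_1] o C(x)[y|S_2] o ... o C(x)[y|S_m],
    given the codeword C(x) as a list of bits."""
    return [codeword[restrict(y_bits, S)] for S in design]
```

For instance, with t = 8, a = 3, and pairwise overlap at most 1, the greedy pass finds 4 sets, and each output bit is one codeword position addressed by 3 seed bits.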

12 Trevisan Extractor
E(x, y) = C(x)[y|S_1] ◦ C(x)[y|S_2] ◦ … ◦ C(x)[y|S_m]
Theorem (T): E is an extractor for min-entropy k = n^δ, with
– output length m = k^(1/3)
– seed length t = O(log n)
– error ε ≤ 1/m
[figure: the seed y selects, via the sets S_i, positions of the codeword C(x) to output]

13 Trevisan Extractor
Proof:
– given X ⊆ {0,1}^n of size 2^k
– assume E fails to ε-pass statistical test C:
|Pr_z[C(z) = 1] – Pr_{x←X, y}[C(E(x, y)) = 1]| > ε
– distinguisher C ⇒ predictor P:
Pr_{x←X, y}[P(E(x, y)_1…i-1) = E(x, y)_i] > ½ + ε/m

14 Trevisan Extractor
Proof (continued):
– for at least ε/2 of x ∈ X we have:
Pr_y[P(E(x, y)_1…i-1) = E(x, y)_i] > ½ + ε/(2m)
– fix the seed bits α, β outside of S_i so as to preserve the advantage:
Pr_y'[P(E(x; α y' β)_1…i-1) = C(x)[y']] > ½ + ε/(2m)
– as y' varies, for j ≠ i, the j-th bit of E(x; α y' β) varies over only 2^a values
– (m–1) tables of 2^a values supply E(x; α y' β)_1…i-1

15 Trevisan Extractor
[figure: predictor P, given the (m–1) tables, outputs C(x)[y'] with probability ½ + ε/(2m) as y' ranges over {0,1}^(log n')]

16 Trevisan Extractor
Proof (continued):
– the (m–1) tables of size 2^a constitute a description of a string that has ½ + ε/(2m) agreement with C(x)
– # of strings x with such a description?
exp((m–1)·2^a) = exp(n^(2δ/3)) = exp(k^(2/3)) strings
Johnson Bound: each string accounts for at most O(m^4) x's
total #: O(m^4)·exp(k^(2/3)) << 2^k·(ε/2)
contradiction
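The counting can be sanity-checked numerically; a small sketch under assumed toy parameters (n = 2^20 and δ = 1/2 are chosen for illustration, and constants hidden in the O(·)'s are dropped):

```python
import math

# Assumed toy parameters (not fixed by the slides): n = 2^20, delta = 1/2.
n, delta = 2**20, 0.5
k = n**delta                      # min-entropy k = n^delta = 1024
m = k**(1 / 3)                    # output length m = k^(1/3)
a = delta * math.log2(n) / 3      # design parameter a = (delta * log n)/3

# log2 of the number of strings described by (m-1) tables of 2^a entries:
described_log2 = (m - 1) * 2**a   # roughly k^(2/3)
# Johnson bound: each described string accounts for at most O(m^4) x's.
covered_log2 = described_log2 + 4 * math.log2(m)

# Far fewer than the 2^k strings in X, so some good x escapes every predictor.
assert covered_log2 < k / 2
```

With these numbers, roughly 2^105 source strings are covered, versus the 2^1024 strings in X, so the claimed contradiction has plenty of room.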

17 Extractors
(k, ε)-extractor:
– E is efficiently computable
– ∀ X with min-entropy k, E fools all circuits C:
|Pr_z[C(z) = 1] – Pr_{y, x←X}[C(E(x, y)) = 1]| ≤ ε
[figure: E maps an n-bit source string (from a set of 2^k strings in {0,1}^n) plus a t-bit seed to m near-uniform bits]
Trevisan: k = n^δ, t = O(log n), m = k^(1/3), ε = 1/m

18 Strong error reduction
L ∈ BPP if there is a p.p.t. TM M:
x ∈ L ⇒ Pr_y[M(x,y) accepts] ≥ 2/3
x ∉ L ⇒ Pr_y[M(x,y) rejects] ≥ 2/3
Want:
x ∈ L ⇒ Pr_y[M(x,y) accepts] ≥ 1 – 2^-k
x ∉ L ⇒ Pr_y[M(x,y) rejects] ≥ 1 – 2^-k
We saw: repeat O(k) times
– n = O(k)·|y| random bits; 2^(n-k) bad strings
Want: spend n = poly(|y|) random bits; achieve << 2^(n/3) bad strings

19 Strong error reduction
Better:
– E an extractor for min-entropy k = |y|^3 = n^δ, ε < 1/6
– pick random w ∈ {0,1}^n, run M(x, E(w, z)) for all z ∈ {0,1}^t, take the majority answer
– call w "bad" if maj_z M(x, E(w, z)) is incorrect; then
|Pr_z[M(x, E(w, z)) = b] – Pr_y[M(x, y) = b]| ≥ 1/6
– extractor property: at most 2^k bad w
– n random bits; 2^(n^δ) bad strings
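The scheme on this slide can be sketched as follows (a toy illustration only: `toy_E` is a hash-based stand-in, not an actual extractor, and `M` is a hypothetical machine with error probability 1/4):

```python
import hashlib
from collections import Counter

def toy_E(w, z, m):
    """Stand-in for the extractor E(w, z): hashes (w, z) down to m bits.
    NOT a real extractor; it only illustrates the plumbing of the scheme."""
    h = hashlib.sha256(f"{w}:{z}".encode()).digest()
    return int.from_bytes(h, "big") % (2**m)

def amplified(M, x, w, t, m):
    """Run M(x, E(w, z)) for every seed z in {0,1}^t and output the majority."""
    votes = [M(x, toy_E(w, z, m)) for z in range(2**t)]
    return Counter(votes).most_common(1)[0][0]

# Hypothetical BPP-style machine: answers 1 unless its random string r
# falls in a "bad" quarter of the space (error probability 1/4).
M = lambda x, r: 1 if r % 4 != 0 else 0
print(amplified(M, x=0, w=42, t=8, m=16))   # majority vote over 2^8 seeds
```

Only the n bits of w are truly random here; the 2^t seeds are enumerated exhaustively, which is the whole point of the derandomized amplification.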