Randomness Extractors: Motivation, Applications and Constructions Ronen Shaltiel University of Haifa.


Outline of talk 1. Extractors as graphs with expansion properties 2. Extractors as functions which extract randomness 3. Applications 4. Explicit Constructions

Extractor graphs: Definition [NZ] An extractor is an (unbalanced) bipartite graph with left side of size N and right side of size M << N (e.g. M = N^δ, or M = exp((log N)^δ)). Every vertex x on the left has D neighbors; the extractor is better when D is small (e.g. D = polylog N). Convention: N = 2^n, M = 2^m, D = 2^d, and we identify {1, …, N} with {0,1}^n. [Figure: bipartite graph with left side N ≈ {0,1}^n and right side M ≈ {0,1}^m; each left vertex x has D edges to E(x,1), …, E(x,D).]

Extractor graphs: expansion properties (K,ε)-Extractor: for every set X of size K, the distribution E(X,U) is ε-close* to uniform (identify X with the uniform distribution on X). This implies an "expansion" property: for every set X of size K, |Γ(X)| ≥ (1-ε)M. Distribution versus set size. [Figure: a set X of size K on the left expands to Γ(X) of size at least (1-ε)M on the right.] *A distribution P is ε-close to uniform if ||P-U||_1 ≤ 2ε; this implies the support of P covers at least a (1-ε) fraction of the elements.
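The definition above can be checked by brute force on tiny graphs. The sketch below (with made-up names such as `is_extractor` — nothing here is from the talk) enumerates every flat source of size K and measures the statistical distance of E(X,U) from uniform; it is exponential in the left side and only meant to make the definition concrete.

```python
from itertools import combinations
from collections import Counter

def statistical_distance(counts, m_size):
    """Half the L1 distance between the empirical output distribution
    (given as a Counter) and the uniform distribution on m_size points."""
    total = sum(counts.values())
    return 0.5 * sum(abs(counts.get(z, 0) / total - 1 / m_size)
                     for z in range(m_size))

def is_extractor(E, K, eps, m_size):
    """Check that for EVERY flat source X of size K, E(X, U) is eps-close
    to uniform. E is a neighbor table: E[x][y] is the y-th neighbor of x."""
    D = len(E[0])
    for X in combinations(range(len(E)), K):
        out = Counter(E[x][y] for x in X for y in range(D))
        if statistical_distance(out, m_size) > eps:
            return False
    return True

# Toy graph: 8 left vertices, 2 right vertices, degree 2. Every left vertex
# sees both right vertices, so every pair of sources extracts perfectly.
E = [[0, 1], [1, 0]] * 4
print(is_extractor(E, K=2, eps=0.1, m_size=2))          # True
print(is_extractor([[0, 0]] * 8, K=2, eps=0.1, m_size=2))  # False: all edges to 0
```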

Extractors and Expander graphs [Figure, side by side: an extractor is an unbalanced bipartite graph (N ≈ {0,1}^n on the left, M ≈ {0,1}^m on the right, D = 2^d edges per vertex) where a set X of size K expands to Γ(X) of size ≥ (1-ε)M; a (1+δ)-expander is a balanced graph on N ≈ {0,1}^n where a set X of size K expands to Γ(X) of size ≥ (1+δ)K.]

Extractors and Expander graphs (comparison). Extractor: unbalanced graph; requires degree ≥ log N; relative expansion K → (1-ε)M, i.e. density K/N → (1-ε); expands sets larger than the threshold K. (1+δ)-Expander: balanced graph; allows constant degree; absolute expansion K → (1+δ)K; expands sets smaller than the threshold K.

Outline of talk 1. Extractors as graphs with expansion properties 2. Extractors as functions which extract randomness 3. Applications 4. Explicit Constructions

The initial motivation: running probabilistic algorithms with "real-life" sources. A successful paradigm in CS: probabilistic algorithms and protocols use an additional input stream of independent coin tosses, which is helpful in solving computational problems. But where can we get random bits? We have access to distributions in nature: electric noise, key strokes of a user, timing of past events. These distributions are "somewhat random" but not "truly random". Paradigm [SV,V,VV,CG,V,CW,Z]: pass the somewhat-random source through a randomness extractor to obtain the random coins of the probabilistic algorithm. Assumption for this talk: somewhat random = uniform over a subset of size K.

Extractors as functions that use few bits to extract randomness. We allow an extractor to also receive an additional input (a seed) of very few random bits. Extractors use few random bits to extract many random bits from arbitrary distributions which "contain" sufficient randomness. Definition: a (K,ε)-extractor is a function E(x,y) such that for every set X of size K, E(X,U) is ε-close* to uniform. Parameters (function view): source length n (= log N); seed length d ~ O(log n); entropy threshold k ~ n/100; output length m ~ k; required error ε ~ 1/100. Lower bounds [NZ,RT]: seed length (in bits) ≥ log n. Probabilistic method [S,RT]: there exists an optimal extractor which matches the lower bound and extracts all the k = log K random bits in the source distribution. Explicit constructions: E(x,y) can be computed in poly-time.
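One concrete (if non-explicit in the talk's sense) way to realize this definition is hashing in the style of the Leftover Hash Lemma — an assumption of this sketch, not a construction from the talk. The seed (a, b) picks h(x) = ((a·x + b) mod P) mod M from a near-2-universal family, and we measure the average, over seeds, of the distance of h(X) from uniform, which is exactly the error of a "strong" extractor.

```python
import random
from collections import Counter

P = 67                    # prime larger than the source universe
N, M, K = 64, 4, 16       # source universe, output size, source support size

random.seed(0)
X = random.sample(range(N), K)   # a flat source: min-entropy log2(16) = 4 bits

def h(x, a, b):
    """Hash function selected by the seed (a, b)."""
    return ((a * x + b) % P) % M

errs = []
for a in range(1, P):
    for b in range(P):
        counts = Counter(h(x, a, b) for x in X)
        errs.append(0.5 * sum(abs(counts[z] / K - 1 / M) for z in range(M)))
avg_err = sum(errs) / len(errs)
# The Leftover Hash Lemma predicts roughly 0.5 * sqrt(M / K) = 0.25 or less.
print(f"average extraction error: {avg_err:.3f}")
```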

Simulating probabilistic algorithms using weak random sources. Goal: run a probabilistic algorithm using a somewhat-random distribution. Where can we get a seed? Idea: go over all seeds. Given a source element x: for every y compute z_y = E(x,y), compute Alg(input, z_y), and answer by majority vote. A seed of length O(log n) means only poly(n) seeds, so with an explicit construction the whole simulation runs in poly-time.
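The "go over all seeds" loop can be sketched as follows. The extractor `E` below is a stand-in (a hash of x and y), purely for illustration — not one of the explicit constructions; the driver logic is the point. The probabilistic algorithm is a Fermat compositeness test whose random base comes from the extracted coins.

```python
import hashlib
from collections import Counter

def E(x: int, y: int, m: int = 16) -> int:
    """Hypothetical extractor: m pseudo-random bits derived from (x, y)."""
    digest = hashlib.sha256(f"{x},{y}".encode()).digest()
    return int.from_bytes(digest, "big") % (1 << m)

def fermat_says_composite(n: int, coins: int) -> bool:
    """One probabilistic Fermat test, its random base taken from the coins."""
    a = 2 + coins % (n - 3)
    return pow(a, n - 1, n) != 1

def simulate(n: int, x: int, d: int = 8) -> bool:
    """Run the algorithm once per seed y in {0,1}^d; answer by majority vote."""
    votes = Counter(fermat_says_composite(n, E(x, y)) for y in range(1 << d))
    return votes.most_common(1)[0][0]

x = 123456789                  # a single sample from the weak source
print(simulate(221, x))        # 221 = 13*17: the majority should say composite
print(simulate(97, x))         # 97 is prime: every Fermat test passes -> False
```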

Outline of talk 1. Extractors as graphs with expansion properties 2. Extractors as functions which extract randomness 3. Applications 4. Explicit Constructions

Applications Simulating probabilistic algorithms using weak sources of randomness [vN,SV,V,VV,CG,V,CW,Z]. Constructing graphs (expanders, super-concentrators) [WZ]. Oblivious sampling [S,Z]. Constructions of various pseudorandom generators [NZ,RR,STV,GW,MV]. Distributed algorithms [WZ,Z,RZ]. Cryptography [CDHK,L,V,DS,MST]. Hardness of approximation [Z,U,MU]. Error-correcting codes [TZ].

Expanders that beat the eigenvalue bound [WZ] Goal: construct low-degree expanders with huge expansion. Idea: line up two low-degree extractors back to back. For every set X of size K, |Γ(X)| ≥ (1-ε)M > M/2, so any two sets X, X' of size K have a common neighbour. Contract the middle layer: this gives a low-degree (ND²/K) bipartite graph in which every set of size K sees N-K vertices. Better constructions for large K: [CRVW].

Randomness-efficient (oblivious) sampling using expanders. Take a random walk v_1, …, v_D on a constant-degree expander with vertex set M ≈ {0,1}^m. The random-walk variables v_1, …, v_D behave like i.i.d. samples: for every A of size ½M, Hitting property: Pr[∀i: v_i ∈ A] ≤ δ = 2^(-Ω(D)). Chernoff-style property: Pr[#{i: v_i ∈ A} is far from its expectation] ≤ 2^(-Ω(D)). Number of random bits used for the walk: m + O(D) = m + O(log(1/δ)), versus m·D = m·O(log(1/δ)) for i.i.d. samples.

Randomness-efficient (oblivious) sampling using extractors [S] Given parameters m, δ: use an extractor E with K = M = 2^m, N = M/δ and small D. Choose a random x (m + log(1/δ) random bits) and set v_i = E(x,i). The extractor property ⇒ the hitting property: for every A of size ½M, call x bad if all of E(x,1), …, E(x,D) land inside A. The extractor property bounds the number of bad x's by K, so Pr[x is bad] < K/N = δ.
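The slide's accounting can be demonstrated on a toy stand-in: v_i = E(x,i) for a single random x, with E here a random bipartite graph rather than an explicit extractor (an assumption of this sketch). An x is "bad" for a set A if all of its D samples land in A; the extractor property would cap the number of bad x's at K for every A, and for a random graph the count concentrates around N·2^(-D).

```python
import random

random.seed(1)
N, M, D = 1 << 12, 1 << 4, 8        # N = 4096 sources, M = 16 outputs, degree 8
# Neighbor table: E[x] lists the D sample points produced from source x.
E = [[random.randrange(M) for _ in range(D)] for _ in range(N)]

A = set(range(M // 2))               # an adversarial set of density 1/2
bad = [x for x in range(N) if all(v in A for v in E[x])]
print(len(bad), "bad x's; expected about", N >> D)   # ~ N * 2^-D of them
```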

Every (oblivious) sampling scheme yields an extractor. An (oblivious) sampling scheme uses a random n-bit string x to generate D random variables v_1, …, v_D with a Chernoff-style property. Thm [Z]: the derived bipartite graph (connect each x to v_1, …, v_D) is an extractor. So extractors ⇔ oblivious sampling.

Outline of talk 1. Extractors as graphs with expansion properties 2. Extractors as functions which extract randomness 3. Applications 4. Explicit Constructions

Constructions

Extractors from error-correcting codes. One can construct extractors from error-correcting codes [ILL,SZ,T]: short seed, extracting one additional bit. Extractors that extract one additional bit ⇔ list-decodable error-correcting codes. Extractors that extract many bits ⇔ codes with strong list-recovering properties [TZ].

List-decodable error-correcting codes [S]. Standard setting: x is encoded as EC(x), sent over a noisy channel (say 20% errors), and the corrupted word EC(x)' is decoded back to x. EC is 20%-decodable if for every word w there is a unique x such that EC(x) differs from w in at most 20% of positions. Over an extremely noisy channel (49% errors) unique decoding is impossible, so we settle for list decoding: EC is (49%, t)-list-decodable if for every w there are at most t x's such that EC(x) differs from w in at most 49% of positions, and the decoder outputs the list x_1, x_2, x_3, …. There are explicit constructions of such codes.

Extractors from list-decodable error-correcting codes [ILL,T] Thm: if EC is (½-ε, εK)-list-decodable then E(x,y) = (y, EC(x)_y) is a (K,2ε)-extractor. Note: E outputs its seed y; such an extractor is called "strong". E outputs only one additional output bit, EC(x)_y. There are constructions of list-decodable error-correcting codes with |y| = O(log n). Strong extractors with one additional bit ⇔ list-decodable error-correcting codes; strong extractors with many additional bits translate into very strong error-correcting codes [TZ].
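The theorem's extractor can be instantiated (as an illustration, not the talk's general construction) with the Hadamard code EC(x)_y = ⟨x,y⟩ mod 2, which is highly list-decodable: then E(x,y) = (y, ⟨x,y⟩) outputs the seed plus one bit. The sketch below estimates the distance of E(X,Y) from uniform for a random flat source X.

```python
import random
from collections import Counter
from itertools import product

n = 8
def bit(x: int, y: int) -> int:
    """Hadamard code coordinate: inner product <x, y> mod 2 on n-bit ints."""
    return bin(x & y).count("1") % 2

random.seed(2)
K = 64
X = random.sample(range(1, 1 << n), K)        # flat source of size K
# Empirical distribution of E(X, Y) = (Y, EC(X)_Y) over all x in X, y in Y.
out = Counter((y, bit(x, y)) for x in X for y in range(1 << n))
total = K * (1 << n)
universe = list(product(range(1 << n), (0, 1)))
dist = 0.5 * sum(abs(out.get(z, 0) / total - 1 / len(universe))
                 for z in universe)
print(f"distance of (Y, EC(X)_Y) from uniform: {dist:.3f}")
```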

Extractors from list-decodable error-correcting codes: proof. Thm: if EC is (½-ε, εK)-list-decodable then E(x,y) = (y, EC(x)_y) is a (K,2ε)-extractor. Proof by contradiction: let X be a distribution/set of size K such that E(X,Y) = (Y, EC(X)_Y) is far from uniform. Observation: Y and EC(X)_Y are each uniform on their own, so they must be correlated: there exists a predictor P such that P(Y) = EC(X)_Y with probability > ½ + 2ε.

Extractors from list-decodable error-correcting codes: proof II. There exists P such that Pr_{X,Y}[P(Y) = EC(X)_Y] > ½ + 2ε. By a Markov argument, for more than εK of the x's in X, Pr_Y[P(Y) = EC(x)_Y] > ½ + ε. Think of P as a string with P_y = P(y): then P and EC(x) differ in fewer than ½ - ε of the coordinates. Story so far: if E is bad then there is a string P such that for more than εK x's, P and EC(x) differ in few coordinates.

Extractors from list-decodable error-correcting codes: proof III. Story so far: if E is bad then there is a string P such that for more than εK x's, P and EC(x) differ in fewer than ½ - ε of the coordinates. View P as a received word with at most 49% errors; by the list-decoding property of the code, the number of such x's is at most εK. Contradiction!

Roadmap. We can construct extractors from error-correcting codes with a short seed and output length = seed length + 1. Next: how to extract more bits. General paradigm: once you construct one extractor you can try to boost its quality.

Extracting more bits [WZ]. Starting point: an extractor E that extracts only few bits. Idea: the conditional distribution (X | E(X,Y)) still contains randomness, so we can apply E again to extract it; we just need a "fresh" seed. The new extractor E'(x; (y,y')) = (E(x,y), E(x,y')) extracts more randomness, at the cost of a larger seed.
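A minimal sketch of this seed-doubling composition, with the inner E a toy 4-bit mixing function (purely illustrative — any extractor could be plugged in):

```python
def extend(E):
    """Wrap a one-block extractor into E'(x; (y, y')) = E(x,y) || E(x,y')."""
    def E2(x, seed_pair):
        y1, y2 = seed_pair
        return E(x, y1) + E(x, y2)    # concatenate the two output blocks
    return E2

def E(x: int, y: int) -> str:
    """Toy stand-in extractor: 4 output bits derived from (x, y)."""
    return format((x * 2654435761 ^ y * 40503) % 16, "04b")

E2 = extend(E)
out = E2(123456, (7, 11))
print(out, len(out))                  # twice the output, at twice the seed
```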

Trevisan's extractor: reducing the seed length. Idea: use few random bits to generate (correlated) seeds Y_1, Y_2, Y_3, … How? A walk on an expander? An extractor? These work but give only small savings. Trevisan: use the Nisan-Wigderson pseudorandom generator (based on combinatorial designs). [TZS,SU]: use Y, Y+1, Y+2, … (based on the [STV] algorithm for list-decoding the Reed-Muller code).

The extractor designer's tool kit. There are many ways to "compose" extractors with themselves and with related objects. The arguments use "entropy manipulations" and depend on the "function view" of extractors. Impact on other graph-construction problems: expander graphs (the zig-zag product) [RVW,CRVW]; Ramsey graphs that beat the Frankl-Wilson construction [BKSSW,BRSW].

Entropy manipulations: composing two extractors [Z,NZ]. Observation: one can compose a small extractor (applied to a source X_2) and a large extractor (applied to a source X_1, using the small extractor's output Z as its seed) and obtain an extractor which inherits the small seed and the large output. Paradigm: if given only one source, try to convert it into two sources that are "sufficiently independent".

Summary: extractors are both graphs and functions. [Figure: as a graph, a set X of size K = 2^k on the left expands to Γ(X) of size ≥ (1-ε)M on the right; as a function, E maps a source distribution X and a seed Y to a random output.]

Conclusion. Unifying role of extractors: expanders, oblivious samplers, error-correcting codes, pseudorandom generators, hash functions… Open problems: more applications/connections; the quest for explicitly constructing the optimal extractor (current record [LRVW]); direct and simple constructions. Things I didn't talk about: seedless extractors for special families of sources.

That's it…

Extractor graphs. [Figure: a left vertex x in N ≈ {0,1}^n with D = 2^d edges to E(x)_1, …, E(x)_D in M ≈ {0,1}^m.]

Extractor graphs: expansion. [Figure: a set X of size K = 2^k in N ≈ {0,1}^n expands to Γ(X) of size ≥ (1-ε)M in M ≈ {0,1}^m.]

Issues in a formal definition, 2: one extractor for all sources. Goal: design one extractor function E(x) that works on all sufficiently high-entropy distributions. Problem: it is impossible to extract even 1 bit from all distributions with n-1 bits of entropy. (Take the larger of the two sets {x: E(x)=0} and {x: E(x)=1}: it yields a distribution X with entropy n-1 on which E(X) is fixed.) We have to settle for less!

Definition of extractors [NZ]. We allow an extractor to also receive an additional seed of (very few) random bits. Extractors use few random bits to extract many random bits from arbitrary distributions with sufficiently high entropy. Definition: a (k,ε)-extractor is a function E(x,y) such that for every distribution X with min-entropy k, E(X,Y) is ε-close* to uniform. Parameters: source length n; seed length d ~ O(log n); entropy threshold k ~ n/100; output length m ~ k; required error ε ~ 1/100. Lower bounds [NZ,RT]: seed length ≥ log n + 2log(1/ε). Probabilistic method [S,RT]: there exists an optimal extractor which matches the lower bound and extracts k + d - 2log(1/ε) bits. *A distribution P is ε-close to uniform if ||P-U||_1 ≤ 2ε; this implies the support of P covers at least a (1-ε) fraction of the elements.

Extractor graphs: Definition [NZ] An extractor is an (unbalanced) bipartite graph with M << N (e.g. M = N^δ, or M = exp((log N)^δ)). Every vertex x on the left has D neighbors, E(x) = (E(x)_1, …, E(x)_D); the extractor is better when D is small (e.g. D = polylog N). Convention: E(x,y) = E(x)_y. [Figure: x in N ≈ {0,1}^n with D edges to E(x)_1, …, E(x)_D in M ≈ {0,1}^m.]

Issues in a formal definition, 1: the notion of entropy. The source distribution X must "contain randomness". A necessary condition for extracting k bits: for every x, Pr[X=x] ≤ 2^(-k). Dfn: X has min-entropy k if for every x, Pr[X=x] ≤ 2^(-k). Example: flat distributions, where X is uniformly distributed on a subset S ⊆ {0,1}^n of size 2^k. Every X with min-entropy k is a convex combination of flat distributions.
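The definition above in code: H_min(X) = -log2 max_x Pr[X=x]. A flat distribution on 2^k points has min-entropy exactly k, while a single heavy point drags min-entropy down no matter how long the tail is — which is why min-entropy, not Shannon entropy, is the right measure here.

```python
from math import log2

def min_entropy(probs) -> float:
    """Min-entropy of a distribution given as a list of probabilities."""
    assert abs(sum(probs) - 1) < 1e-9
    return -log2(max(probs))

flat = [1 / 16] * 16                    # flat on 2^4 points
print(min_entropy(flat))                # 4.0

# Shannon entropy of this one is 6 bits, but min-entropy is only 1:
# no extractor can get more than ~1 nearly-uniform bit per sample from it.
spiky = [0.5] + [0.5 / 1024] * 1024
print(min_entropy(spiky))               # 1.0
```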

Noisy channels and error correction. Goal: transmit messages over a noisy channel that corrupts x into x', with the guarantee that x' differs from x in at most (say) 20% of positions. Coding theory: encode x prior to transmission.