Randomness Extractors: Motivation, Applications and Constructions
Ronen Shaltiel, University of Haifa
Outline of talk
1. Extractors as graphs with expansion properties
2. Extractors as functions which extract randomness
3. Applications
4. Explicit Constructions
Extractor graphs: Definition [NZ]
An extractor is an (unbalanced) bipartite graph with N left vertices and M right vertices, M << N (e.g., M = N^δ or M = exp((log N)^δ)). Every vertex x on the left has D neighbors E(x,1), …, E(x,D); the extractor is better when D is small (e.g., D = polylog N). Convention: N = 2^n, M = 2^m, D = 2^d, and we identify {1, …, N} with {0,1}^n.
[Figure: vertex x on the left side N ≈ {0,1}^n with D edges to E(x,1), …, E(x,D) on the right side M ≈ {0,1}^m.]
Extractor graphs: expansion properties
(K,ε)-Extractor: for every set X of size K, the distribution E(X,U) is ε-close* to uniform (identify X with the uniform distribution on X). This gives an "expansion" property: for every set X of size K, |Γ(X)| ≥ (1−ε)M.
*A distribution P is ε-close to uniform if ||P − U||₁ ≤ 2ε; such a P is supported on at least a (1−ε) fraction of the elements.
[Figure: "distribution versus set size": a set X of size K on the left side N ≈ {0,1}^n maps to Γ(X) of size ≥ (1−ε)M on the right side M ≈ {0,1}^m.]
Extractors and Expander graphs
[Figure, side by side: Extractor: an unbalanced bipartite graph from N ≈ {0,1}^n to M ≈ {0,1}^m with D = 2^d edges per vertex; a set X of size K maps to Γ(X) of size ≥ (1−ε)M. (1+δ)-Expander: a balanced graph on N ≈ {0,1}^n; a set X of size K maps to Γ(X) of size ≥ (1+δ)K.]
Extractors and Expander graphs
Extractor: unbalanced bipartite graph; relative expansion K → (1−ε)M, i.e. K/N → (1−ε); expands sets larger than the threshold K; requires degree log N.
(1+δ)-Expander: balanced graph; absolute expansion K → (1+δ)K; expands sets smaller than the threshold K; allows constant degree.
[Figure: the extractor and expander graphs drawn side by side, as on the previous slide.]
Outline of talk
1. Extractors as graphs with expansion properties
2. Extractors as functions which extract randomness
3. Applications
4. Explicit Constructions
The initial motivation: running probabilistic algorithms with "real-life" sources
A successful paradigm in CS: probabilistic algorithms and protocols use an additional input stream of independent coin tosses, which is helpful in solving computational problems. Where can we get random bits?
We have access to distributions in nature: electric noise, key strokes of a user, timing of past events. These distributions are "somewhat random" but not "truly random".
Paradigm [SV,V,VV,CG,V,CW,Z]: randomness extractors.
Assumption for this talk: somewhat random = uniform over a subset of size K.
[Figure: a somewhat-random source feeds a randomness extractor; the extractor's output serves as the random coins of a probabilistic algorithm mapping an input to an output.]
Extractors as functions that use few bits to extract randomness
We allow an extractor to also receive an additional input of (very few) random bits: a seed. Extractors use few random bits to extract many random bits from arbitrary distributions which "contain" sufficient randomness.
Definition: A (K,ε)-extractor is a function E(x,y) s.t. for every set X of size K, E(X,U) is ε-close* to uniform.
Parameters (function view):
- Source length: n (= log N)
- Seed length: d ~ O(log n)
- Entropy threshold: k ~ n/100
- Output length: m ~ k
- Required error: ε ~ 1/100
Lower bounds [NZ,RT]: seed length (in bits) ≥ log n. Probabilistic method [S,RT]: there exists an optimal extractor which matches the lower bound and extracts all k = log K random bits in the source distribution. Explicit constructions: E(x,y) can be computed in poly-time.
[Figure: the source distribution X and the seed Y enter the extractor; the output is random.]
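In symbols, restating the definition above with the ε-closeness convention used in this talk:

$$E : \{0,1\}^n \times \{0,1\}^d \to \{0,1\}^m \ \text{ is a } (K,\varepsilon)\text{-extractor if}$$
$$\forall X \subseteq \{0,1\}^n \text{ with } |X| = K: \quad \big\| E(U_X, U_d) - U_m \big\|_1 \le 2\varepsilon,$$
$$\text{where } U_X,\, U_d,\, U_m \text{ are the uniform distributions on } X,\ \{0,1\}^d,\ \{0,1\}^m.$$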
Simulating probabilistic algorithms using weak random sources
Goal: run a probabilistic algorithm using a somewhat-random distribution. Where can we get a seed? Idea: go over all seeds. Given a source element x: for every y compute z_y = E(x,y), compute Alg(input, z_y), and answer with the majority vote. Seed length O(log n) ⇒ poly-time. Explicit constructions exist. (A sketch of this simulation appears below.)
[Figure: the somewhat-random source and a seed enter the randomness extractor, which supplies the random coins of the probabilistic algorithm.]
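A minimal Python sketch of this simulation, assuming a black-box randomized algorithm `alg(inp, coins)` and a black-box extractor `ext(x, y)` (both hypothetical names standing in for the objects on the slide):

```python
from collections import Counter

def run_with_weak_source(alg, ext, x, inp, d):
    """Run a randomized algorithm using a single sample x from a weak
    source: enumerate all 2^d seeds, extract coins for each seed, and
    answer with the majority vote. With d = O(log n) this is only a
    polynomial slowdown; correctness holds because E(X, U) is close to
    uniform, so most seeds yield near-uniform coins."""
    votes = Counter()
    for y in range(2 ** d):              # go over all D = 2^d seeds
        coins = ext(x, y)                # z_y = E(x, y)
        votes[alg(inp, coins)] += 1      # run Alg with these coins
    return votes.most_common(1)[0][0]    # majority vote over all seeds
```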
Outline of talk
1. Extractors as graphs with expansion properties
2. Extractors as functions which extract randomness
3. Applications
4. Explicit Constructions
Applications
- Simulating probabilistic algorithms using weak sources of randomness [vN,SV,V,VV,CG,V,CW,Z].
- Constructing graphs (expanders, super-concentrators) [WZ].
- Oblivious sampling [S,Z].
- Constructions of various pseudorandom generators [NZ,RR,STV,GW,MV].
- Distributed algorithms [WZ,Z,RZ].
- Cryptography [CDHK,L,V,DS,MST].
- Hardness of approximation [Z,U,MU].
- Error correcting codes [TZ].
Expanders that beat the eigenvalue bound [WZ]
Goal: construct low-degree expanders with huge expansion. Idea: line up two low-degree extractors. For every set X of size K, |Γ(X)| ≥ (1−ε)M > M/2, so every two sets X, X' of size K have a common neighbour. Contract the middle layer: the result is a low-degree (ND²/K) bipartite graph in which every set of size K sees N − K vertices. Better constructions for large K [CRVW].
[Figure: two sets X and X' on the left side N ≈ {0,1}^n whose neighbourhoods in the middle layer intersect.]
Randomness-efficient (oblivious) sampling using expanders
Take a random walk v_1, …, v_D on a constant-degree expander over M ≈ {0,1}^m. The random-walk variables v_1, …, v_D behave like i.i.d. samples: for every A of size ½M,
- Hitting property: Pr[∀i: v_i ∈ A] ≤ δ = 2^(−Ω(D)).
- Chernoff-style property: Pr[#{i: v_i ∈ A} is far from its expectation] ≤ 2^(−Ω(D)).
Number of random bits used for the walk: m + O(D) = m + O(log(1/δ)). Number of random bits for i.i.d. samples: m·D = m·O(log(1/δ)). (A sketch follows below.)
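A Python sketch of the walk-based sampler. The `toy_neighbors` function below is only a placeholder with out-degree 4, not a genuine expander; a real instantiation would use an explicit constant-degree expander family.

```python
import random

def expander_walk_samples(neighbors, m, D):
    """Sample D correlated points of {0,1}^m via a random walk:
    m random bits for the start vertex plus O(1) bits per step,
    i.e. m + O(D) bits in total, versus m*D bits for i.i.d. samples."""
    v = random.randrange(2 ** m)         # m random bits for the start
    samples = [v]
    for _ in range(D - 1):
        v = random.choice(neighbors(v))  # O(1) random bits per step
        samples.append(v)
    return samples

# Toy stand-in with out-degree 4 (NOT an actual expander; illustration only).
def toy_neighbors(v, m=10):
    M = 2 ** m
    return [(v + 1) % M, (v - 1) % M, (3 * v) % M, v ^ (M // 2)]

# Example: eight correlated samples from {0,1}^10.
print(expander_walk_samples(toy_neighbors, 10, 8))
```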
Randomness-efficient (oblivious) sampling using extractors [S]
Given parameters m, δ: use E with K = M = 2^m, N = M/δ and small D. Choose a random x (m + log(1/δ) random bits) and set v_i = E(x,i). The extractor property ⇒ the hitting property: for every A of size ½M, call x bad if all of E(x)'s neighbors lie inside A. The number of bad x's is less than K (otherwise the bad x's would contain a set of size K whose neighbourhood misses half of M, contradicting |Γ(X)| ≥ (1−ε)M), so Pr[x is bad] < K/N = δ. (A sketch follows below.)
[Figure: a bad x on the left side N ≈ {0,1}^n whose D edges E(x,1), …, E(x,D) all land inside A ⊆ M ≈ {0,1}^m; there are fewer than K bad x's.]
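The extractor-based sampler is even simpler. A sketch, again with a black-box extractor `ext` (hypothetical name):

```python
import random

def extractor_samples(ext, n, D):
    """Oblivious sampling from an extractor: spend n = m + log(1/delta)
    random bits on a single left vertex x and output its D neighbors.
    Hitting property: for any A covering half the right side, fewer than
    K of the 2^n strings x have all their neighbors inside A, so
    Pr[all samples land in A] < K/N = delta."""
    x = random.getrandbits(n)              # the only randomness used
    return [ext(x, i) for i in range(D)]   # v_i = E(x, i)
```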
Every (oblivious) sampling scheme yields an extractor
An (oblivious) sampling scheme uses a random n-bit string x to generate D random variables v_1, …, v_D with a Chernoff-style property. Thm [Z]: the derived graph (connect each x to v_1, …, v_D) is an extractor. Hence: extractors ⇔ oblivious sampling.
[Figure: left vertex x in N ≈ {0,1}^n with D = 2^d edges to v_1, …, v_D in M ≈ {0,1}^m.]
Outline of talk
1. Extractors as graphs with expansion properties
2. Extractors as functions which extract randomness
3. Applications
4. Explicit Constructions
Constructions
Extractors from error correcting codes
We can construct extractors from error-correcting codes [ILL,SZ,T]: short seed, extracting one additional bit.
Extractors that extract one additional bit ⇔ list-decodable error-correcting codes.
Extractors that extract many bits ⇔ codes with strong list-recovering properties [TZ].
List-decodable error-correcting codes [S]
EC is 20%-decodable if for every w there is a unique x s.t. EC(x) differs from w in at most 20% of the positions. EC is (49%, t)-list-decodable if for every w there are at most t x's s.t. EC(x) differs from w in at most 49% of the positions. There are explicit constructions of such codes.
[Figure: unique decoding: x → encoding → EC(x) → noisy channel (20% errors) → EC(x)' → decoding → x. List decoding: x → encoding → EC(x) → extremely noisy channel (49% errors) → EC(x)' → list decoding → {x_1, x_2, x_3}.]
Extractors from list-decodable error-correcting codes [ILL,T]
Thm: If EC is (½−ε, εK)-list-decodable then E(x,y) = (y, EC(x)_y) is a (K, 2ε)-extractor.
Note: E outputs its seed y; such an extractor is called "strong". E outputs only one additional bit, EC(x)_y. There are constructions of list-decodable error-correcting codes with |y| = O(log n). Strong extractors with one additional bit ⇔ list-decodable error-correcting codes. Strong extractors with many additional bits translate into very strong error-correcting codes [TZ]. (A concrete toy instance is sketched below.)
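A concrete instance, for illustration only: the Hadamard code EC(x)_y = ⟨x,y⟩ mod 2 is list-decodable (this is the Goldreich-Levin setting), and the resulting E is the classical inner-product strong extractor. Its seed length is n rather than the O(log n) mentioned above; it is used here only because it fits in a few lines of Python.

```python
def hadamard_bit(x: int, y: int) -> int:
    """EC(x)_y for the Hadamard code: the inner product <x, y> mod 2,
    i.e. the parity of the bits of x selected by the mask y."""
    return bin(x & y).count("1") % 2

def one_bit_extractor(x: int, y: int) -> tuple:
    """E(x, y) = (y, EC(x)_y): a strong extractor that outputs its
    seed y plus one additional near-uniform bit."""
    return (y, hadamard_bit(x, y))
```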
Extractors from list-decodable error-correcting codes: proof
Thm: If EC is (½−ε, εK)-list-decodable then E(x,y) = (y, EC(x)_y) is a (K, 2ε)-extractor.
Proof, by contradiction. Let X be a distribution/set of size K s.t. E(X,Y) = (Y, EC(X)_Y) is far from uniform. Observation: Y and EC(X)_Y are each uniform on their own, so the distance from uniform must come from correlation between them: there exists a predictor P s.t. P(Y) = EC(X)_Y with probability > ½ + 2ε.
Extractors from list-decodable error-correcting codes: proof II
There exists P s.t. Pr_{X,Y}[P(Y) = EC(X)_Y] > ½ + 2ε. By a Markov argument, for more than εK of the x's in X: Pr_Y[P(Y) = EC(x)_Y] > ½ + ε. Think of P as a string with P_y = P(y); then P and EC(x) differ in at most a (½ − ε) fraction of the coordinates.
Story so far: if E is bad then there is a string P s.t. for more than εK x's, P and EC(x) differ in few coordinates. (The Markov step is spelled out below.)
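Spelled out, the Markov step is a standard averaging argument. Write p(x) = Pr_Y[P(Y) = EC(x)_Y]:

$$\mathbb{E}_{x \sim X}[p(x)] > \tfrac{1}{2} + 2\varepsilon, \qquad p(x) \le 1.$$
$$\text{Let } q = \Pr_{x \sim X}\!\left[ p(x) > \tfrac{1}{2} + \varepsilon \right]\!. \text{ Then } \tfrac{1}{2} + 2\varepsilon < \mathbb{E}[p(x)] \le q \cdot 1 + (1-q)\left(\tfrac{1}{2} + \varepsilon\right),$$
$$\text{so } q > \frac{\varepsilon}{\tfrac{1}{2} - \varepsilon} > 2\varepsilon > \varepsilon, \quad \text{i.e., more than } \varepsilon K \text{ of the } x\text{'s in } X \text{ satisfy } p(x) > \tfrac{1}{2} + \varepsilon.$$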
Extractors from list-decodable error-correcting codes: proof III
Story so far: if E is bad then there is a string P s.t. for more than εK x's, P and EC(x) differ in at most a (½ − ε) fraction of the coordinates. But by the list-decoding property of the code, the number of such x's is at most εK. Contradiction!
[Figure: list decoding the received word P = EC(x)' from 49% errors yields the short list {x_1, x_2, x_3}.]
Roadmap
So far: we can construct extractors from error-correcting codes with a short seed, but with output = seed + 1 bit. Next: how to extract more bits. General paradigm: once you construct one extractor, you can try to boost its quality.
Extracting more bits [WZ]
Starting point: an extractor E that extracts only a few bits. Idea: (X | E(X,Y)) still contains randomness, so we can apply E again to extract randomness from (X | E(X,Y)); this needs a "fresh" seed. The new extractor E'(x; (y,y')) = E(x,y), E(x,y') extracts more randomness but uses a larger seed. (A sketch follows below.)
[Figure: the new extractor feeds X into E twice, with seeds Y and Y', and outputs both results Z and Z'.]
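A sketch, assuming a black-box extractor `ext` (hypothetical name) that returns a bit string:

```python
def double_output_extractor(ext, x, y1, y2):
    """E'(x; (y1, y2)) = E(x, y1) || E(x, y2): apply E twice with two
    fresh seeds and concatenate. This roughly doubles the output at the
    price of doubling the seed; correctness rests on X retaining
    randomness conditioned on the first output E(X, y1)."""
    return ext(x, y1) + ext(x, y2)   # concatenation of the two outputs
```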
Trevisan's extractor: reducing the seed length
Idea: use few random bits to generate (correlated) seeds Y_1, Y_2, Y_3, … How? A walk on an expander? An extractor? These work but give only small savings. Trevisan: use the Nisan-Wigderson pseudorandom generator (based on combinatorial designs). [TZS,SU]: use Y, Y+1, Y+2, … (based on the [STV] algorithm for list-decoding the Reed-Muller code).
[Figure: a single short seed Y is stretched into seeds Y_1, Y_2, … which are fed, together with X, into the extractor.]
The extractor designer's tool kit
There are many ways to "compose" extractors with themselves and with related objects. The arguments use "entropy manipulations" and depend on the "function view" of extractors. Impact on other graph-construction problems: expander graphs (the zig-zag product) [RVW,CRVW]; Ramsey graphs that beat the Frankl-Wilson construction [BKSSW,BRSW].
Entropy manipulations: composing two extractors [Z,NZ]
Observation: we can compose a small extractor and a large extractor and obtain an extractor which inherits the small seed and the large output: apply the small extractor, with the short truly-random seed, to one source, and use its output as the seed of the large extractor applied to the other source. Paradigm: if given only one source, try to convert it into two sources that are "sufficiently independent". (A sketch follows below.)
[Figure: two independent sources X_1, X_2; the small extractor maps X_2 and a short seed to an intermediate seed Z, which the large extractor uses on X_1.]
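A sketch of the composition, with black-box `small_ext` and `large_ext` (hypothetical names):

```python
def composed_extractor(small_ext, large_ext, x1, x2, z):
    """Compose a small extractor (short seed, short output) with a
    large one (long seed, long output): the short truly-random seed z
    extracts an intermediate seed from source block x2, which then
    seeds the large extractor on block x1. The composition inherits the
    small seed length and the large output length, assuming x1 and x2
    are sufficiently independent."""
    y = small_ext(x2, z)      # short seed -> intermediate long seed
    return large_ext(x1, y)   # long near-uniform output
```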
Summary: extractors are both graphs and functions.
[Figure: as a graph, a set X of size K = 2^k expands to Γ(X) of size ≥ (1−ε)M in M ≈ {0,1}^m; as a function, a source distribution X and a seed Y are mapped to a random output.]
Conclusion
Unifying role of extractors: expanders, oblivious samplers, error-correcting codes, pseudorandom generators, hash functions, …
Open problems: more applications/connections; the quest for explicitly constructing the optimal extractor (current record: [LRVW]); direct and simple constructions.
Things I didn't talk about: seedless extractors for special families of sources.
That's it…
Extractor graphs
[Figure: vertex x on the left side N ≈ {0,1}^n with D = 2^d edges to E(x)_1, …, E(x)_D on the right side M ≈ {0,1}^m.]
Extractor graphs: expansion
[Figure: a set X of size K = 2^k on the left side N ≈ {0,1}^n expands to Γ(X) of size ≥ (1−ε)M on the right side M ≈ {0,1}^m.]
Issues in a formal definition: 2. One extractor for all sources
Goal: design one extractor function E(x) that works on all sufficiently high-entropy distributions. Problem: it is impossible to extract even 1 bit from distributions with n−1 bits of entropy: for any E, one of the sets {x: E(x)=0}, {x: E(x)=1} has size at least 2^(n−1), and the uniform distribution on that set has entropy n−1 while E is fixed on it. We have to settle for less!
[Figure: {0,1}^n partitioned into {x: E(x)=0} and {x: E(x)=1}; a distribution X with entropy n−1 on which E(X) is fixed.]
Definition of extractors [NZ]
We allow an extractor to also receive an additional seed of (very few) random bits. Extractors use few random bits to extract many random bits from arbitrary distributions with sufficiently high entropy.
Definition: A (k,ε)-extractor is a function E(x,y) s.t. for every distribution X with min-entropy k, E(X,Y) is ε-close* to uniform.
Parameters:
- Source length: n
- Seed length: d ~ O(log n)
- Entropy threshold: k ~ n/100
- Output length: m ~ k
- Required error: ε ~ 1/100
Lower bounds [NZ,RT]: seed length ≥ log n + 2log(1/ε). Probabilistic method [S,RT]: there exists an optimal extractor which matches the lower bound and extracts k + d − 2log(1/ε) bits.
*A distribution P is ε-close to uniform if ||P − U||₁ ≤ 2ε; such a P is supported on at least a (1−ε) fraction of the elements.
[Figure: the source distribution X and the seed Y enter the extractor; the output is random.]
Extractor graphs: Definition [NZ]
An extractor is an (unbalanced) bipartite graph with M << N (e.g., M = N^δ or M = exp((log N)^δ)). Every vertex x on the left has D neighbors E(x) = (E(x)_1, …, E(x)_D); the extractor is better when D is small (e.g., D = polylog N). Convention: E(x,y) = E(x)_y.
[Figure: vertex x in N ≈ {0,1}^n with D edges to E(x)_1, …, E(x)_D in M ≈ {0,1}^m.]
Issues in a formal definition: 1. Notion of entropy
The source distribution X must "contain randomness". A necessary condition for extracting k bits: ∀x, Pr[X = x] ≤ 2^(−k). Dfn: X has min-entropy k if ∀x, Pr[X = x] ≤ 2^(−k). Example: flat distributions, where X is uniformly distributed on a subset S of size 2^k. Every X with min-entropy k is a convex combination of flat distributions. (A small numeric illustration follows below.)
[Figure: the source distribution X is uniform over a subset S ⊆ {0,1}^n with |S| = 2^k.]
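A small numeric illustration in Python; the formula is just H∞(X) = −log₂ max_x Pr[X = x]:

```python
import math

def min_entropy(dist):
    """H_inf(X) = -log2(max_x Pr[X = x]) for a distribution given as a
    dict {outcome: probability}. A flat distribution on a set of size
    2^k has min-entropy exactly k."""
    return -math.log2(max(dist.values()))

print(min_entropy({"a": 0.25, "b": 0.25, "c": 0.25, "d": 0.25}))  # 2.0 (flat on 4 points)
print(min_entropy({"a": 0.5, "b": 0.25, "c": 0.25}))              # 1.0 (dominated by "a")
```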
Noisy channels and error correction
Goal: transmit messages over a noisy channel. Guarantee: the received x' differs from the transmitted x in at most (say) 20% of the positions. Coding theory: encode x prior to transmission.
[Figure: x passes through a noisy channel and arrives as x' containing errors.]