1 Explicit Two-Source Extractors and Resilient Functions Eshan Chattopadhyay David Zuckerman UT Austin
2 Randomness in Computation
Randomness is widely used:
- Algorithms: randomized algorithms can dramatically outperform known deterministic algorithms.
- Distributed computing, cryptography, data structures, etc.
Applications require uniform, uncorrelated bits:
- Cryptographic tasks (bit commitment, ZK, NIZK, etc.) cannot work with even 'almost' random bits [DOPS04].
- Most randomized algorithms: the analysis assumes uniform, uncorrelated bits.
3 Weak Random Sources
Natural sources may be defective: clock drift, thermal noise, Zener diodes.
Weak sources arise in cryptography: condition on the adversary's information.
Weak sources arise in pseudorandom generators: condition on the state of the computation.
Goal: purify a weak source.
4 Some Simple Models
- von Neumann '51: a sequence of independent, biased coin flips.
- Blum '84: a sequence of bits produced by a Markov chain.
- Santha-Vazirani sources '84: each new bit is almost uniform conditioned on the previous bits.
5 Modelling a Weak Random Source
Problem: how should we model a weak source?
Shannon entropy is the wrong measure. Example: D outputs 0^n with probability 0.99 and is uniform on n bits with probability 0.01. Then D has large Shannon entropy, yet no function can extract even one almost-uniform bit from it.
The right measure is min-entropy: H_∞(X) = min_x log(1/Pr[X = x]).
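For concreteness, here is the comparison for this example D (a standard calculation, not spelled out on the scraped slide):

```latex
H(D) = \sum_x \Pr[D=x]\,\log_2\frac{1}{\Pr[D=x]} \approx 0.01\,n + 0.08 = \Omega(n),
\qquad
H_\infty(D) = \min_x \log_2\frac{1}{\Pr[D=x]} = \log_2\frac{1}{\Pr[D=0^n]} \approx \log_2\frac{1}{0.99} < 0.015 .
```

So D has Shannon entropy Ω(n) but min-entropy below one bit, matching the slide's point: any function of D equals its value on 0^n with probability at least 0.99.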
6 Min-Entropy
X: (n,k)-source (Chor-Goldreich '88): a source on n bits with min-entropy at least k, i.e., every string has probability ≤ 2^{-k}.
Special case: X uniform on a set of size 2^k (a 'flat' source).
General case: it is enough to deal with the special case (Chor-Goldreich '88).
7 Randomness Extractors
Extractor: a deterministic procedure to extract uniform bits from ANY (n,k)-source.
8 One-Source Extractor?
Lemma. No such function exists: no deterministic Ext: {0,1}^n → {0,1} extracts even one bit from every (n, n-1)-source.
Proof idea: max{|Ext^{-1}(0)|, |Ext^{-1}(1)|} ≥ 2^{n-1}, and Ext is constant on the larger preimage.
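Spelled out (the standard argument):

```latex
\exists\, b \in \{0,1\}:\ |\mathrm{Ext}^{-1}(b)| \ge 2^{n-1}.
\ \text{Let } X \text{ be uniform on } \mathrm{Ext}^{-1}(b).
\ \text{Then } H_\infty(X) \ge n-1, \text{ yet } \mathrm{Ext}(X) \equiv b,\ \text{far from uniform.}
```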
9 Getting Past This Difficulty
Make assumptions on the weak source: bit-fixing sources, affine sources, samplable sources, etc.
Without making assumptions:
- Seeded extractors: extract using a short uniform seed.
- Extract using ≥ 2 independent weak sources. (Focus of this talk.)
10 Outline
- Introduction and results
  - Extractors: seeded, two-source
  - Ramsey graphs
- Techniques
  - Reduction to 'generalized' bit-fixing sources
  - Extracting using a resilient function
- Conclusion and open questions
11 2-Source Extractor
X, Y are independent (n,k)-sources. More formally:
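The standard requirement (the usual definition, as in Chor-Goldreich '88; the formula itself is missing from the scraped slide):

```latex
\mathrm{Ext}:\{0,1\}^n \times \{0,1\}^n \to \{0,1\}^m \ \text{ such that for all independent } (n,k)\text{-sources } X, Y:\quad
\bigl|\,\mathrm{Ext}(X,Y) - U_m\,\bigr| \le \varepsilon,
```

where |·| denotes statistical (total variation) distance.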
12 Existence of 2-Source Extractors
Thm. (Probabilistic method) There exists a 2-source extractor for min-entropy k = log n + O(1). In fact, a random function is a good 2-source extractor with high probability.
Naive derandomization: exponential time.
13 Explicit Constructions of Pseudorandom Objects
Central theme in complexity theory: constructing deterministic objects with strong combinatorial properties.
Generic goal: black-box ways of reducing the randomness requirements of algorithms using such objects.
Some other examples: hard functions, expanders, pseudorandom generators, error-correcting codes.
Constructing explicit extractors is part of this bigger project. Final goal: BPP = P?
14 Explicit Constructions?
Thm. (Probabilistic method) There exists a 2-source extractor for min-entropy k = log n + O(1).
Santha-Vazirani '86, Chor-Goldreich '88: explicit 2-source extractor?
Explicit constructions, for X an (n, k1)-source and Y an (n, k2)-source:
- Chor-Goldreich 88: k1, k2 > 0.5n
- Bourgain 05: k1, k2 ≥ 0.49n
- Raz 05: k1 > 0.5n, k2 = O(log n)
15 Relaxation to More Sources
- Barak-Impagliazzo-Wigderson 04: explicit extractors for a constant number of (n,k)-sources with min-entropy δn.
- Rao 06: explicit extractors for a constant number of (n,k)-sources with min-entropy n^γ.
- Li 11: explicit extractor for 3 sources at min-entropy n^{0.51}.
- Li 15: explicit extractor for 3 sources at min-entropy log^C(n).
16 Matrix Formulation
'Efficiently' construct a low-discrepancy Boolean N×N matrix, i.e., every K×K submatrix contains an 'almost equal' number of 0's and 1's (N = 2^n, K = 2^k).
[Figure: the truth table of a 2-source extractor, viewed as such a matrix.]
Our 2-source extractor thus implies such low-discrepancy matrices.
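As a toy illustration of this viewpoint (not the construction in this talk), the sketch below uses the classical inner-product/Hadamard 2-source extractor, which works for min-entropy rate above 1/2, and samples a few K×K submatrices to check that their 0/1 counts are nearly balanced. The function names and parameter values are ours, chosen for the example; random submatrices are of course the easy case, while the extractor property is about every K×K submatrix.

```python
import random

def inner_product_extractor(a: int, b: int) -> int:
    """Toy 2-source extractor (Chor-Goldreich / Hadamard): inner product of the
    bit representations of a and b, mod 2."""
    return bin(a & b).count("1") % 2

def submatrix_bias(rows, cols) -> float:
    """|fraction of 1s - 1/2| over the submatrix M[a][b] = Ext(a, b), a in rows, b in cols."""
    ones = sum(inner_product_extractor(a, b) for a in rows for b in cols)
    return abs(ones / (len(rows) * len(cols)) - 0.5)

if __name__ == "__main__":
    n, k = 10, 7                      # N = 2^n rows/columns, K = 2^k submatrix side
    N, K = 2 ** n, 2 ** k
    random.seed(0)
    worst = 0.0
    for _ in range(20):               # sample a few random K x K submatrices
        rows = random.sample(range(N), K)
        cols = random.sample(range(N), K)
        worst = max(worst, submatrix_bias(rows, cols))
    print(f"worst sampled bias over 20 random {K}x{K} submatrices: {worst:.4f}")
```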
17 Our Main Result
Thm. (Main theorem) Explicit 2-source extractor for k = log^C n, for some constant C.
18 Explicit 2-Source Extractors
Reference: k1, k2; output length
- Chor-Goldreich 88: k1, k2 > 0.5n; 1 bit
- Zuckerman 91: k1, k2 > 0.5n; Ω(n) bits
- Bourgain 05: k1, k2 ≥ 0.499n; Ω(n) bits
- Raz 05: k1 > 0.5n, k2 ≥ O(log n); Ω(n) bits
- Chattopadhyay-Zuckerman 15: k1, k2 ≥ log^C n; 1 bit
- Li 15: k1, k2 ≥ log^C n; 0.9·k1 bits
19 Ramsey Theory
As a corollary of explicit 2-source extractors, we obtain new results in Ramsey theory.
Ramsey theory: the branch of combinatorics that studies conditions under which some local structure is unavoidable.
20 Ramsey Graphs
- K-Ramsey graph: no independent set or clique of size K.
- Bipartite K-Ramsey graph: a bipartite graph with no complete or empty K×K sub-graph.
- Erdős (1947): K-Ramsey graphs on N vertices exist for K > (2+o(1)) log N.
Explicit constructions?
21 Ramsey Graphs via 2-Source Extractors
Let Ext be a 2-source extractor for min-entropy k, and let N = 2^n, K = 2^k. Build an N×N bipartite graph: connect a on the left to b on the right iff Ext(a, b) = 1.
Any K×K sub-rectangle corresponds to a pair of flat (n,k)-sources, so it contains both an edge (Ext(a,b) = 1) and a non-edge (Ext(c,d) = 0): the graph is bipartite K-Ramsey.
22 Explicit Ramsey Graphs
Reference: K (smaller is better); bipartite?  (N = 2^n, K = 2^k)
- Erdős 47 (existential): ≥ 2 log N; yes
- Hadamard matrix: √N; yes
- Frankl-Wilson 81, Naor 92, Alon 98, Grolmusz 00, Barak, Gopalan: 2^{Ω(√(log N · log log N))}; no
- Pudlák-Rödl 04: √N / 2^{√(log N)}; yes
- Barak-Kindler-Shaltiel-Sudakov-Wigderson 10: N^{o(1)}; yes
- Barak-Rao-Shaltiel-Wigderson 12: 2^{(log N)^{o(1)}}; yes
- Cohen 15: 2^{(log log N)^C}; yes
- [CZ15]: 2^{(log log N)^C}; yes
23 Matching Erdős' Challenge
The exponent C in our work is 75. A subsequent refinement by Meka brings it down to C = 10.
It remains open to achieve C = 1 and earn the $100 prize!
24 Techniques
25 Strong Seeded Extractors [Nisan-Zuckerman 93]
Ext: {0,1}^n × {0,1}^d → {0,1}^m such that for every (n,k)-source X: (Ext(X, U_d), U_d) ≈_ε (U_m, U_d).
Explicit constructions achieve d = O(log(n/ε)) and m = 0.99k [... Guruswami-Umans-Vadhan 07 ...].
Thus, for most seeds s: Ext(X, s) ≈ U_m.
26 Resilient Functions
f: {0,1}^n → {0,1} is (q,ε)-resilient: for any subset of q coordinates, the probability (over a uniform sampling of the remaining coordinates) that f is already fixed is ≥ 1-ε.
Example: MAJORITY is (n^{0.49}, ε)-resilient.
PARITY is NOT (q,ε)-resilient for any q > 0, ε < 1.
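A quick empirical sanity check of these two examples (our own toy simulation; the values of n, q, and the trial count are arbitrary choices): it estimates, for a coalition of q coordinates, how often MAJORITY is already determined by the remaining uniform coordinates. For PARITY the answer is always 'never', since flipping any single coalition bit flips the output.

```python
import random

def majority_fixed_fraction(n: int, q: int, trials: int, rng: random.Random) -> float:
    """Estimate Pr[ MAJORITY of n bits is already determined by the n-q honest bits ],
    i.e. the 0/1 margin among the honest bits exceeds the q coalition bits."""
    fixed = 0
    for _ in range(trials):
        ones = sum(rng.getrandbits(1) for _ in range(n - q))
        margin = abs(2 * ones - (n - q))   # |#1s - #0s| among honest bits
        if margin > q:                     # no setting of the q coalition bits can flip the outcome
            fixed += 1
    return fixed / trials

if __name__ == "__main__":
    rng = random.Random(0)
    n, trials = 2001, 2000
    for q in (1, 10, 40):                  # compare to sqrt(n) ~ 45
        p = majority_fixed_fraction(n, q, trials, rng)
        print(f"q = {q:3d}: Pr[MAJORITY already fixed] ≈ {p:.2f}")
    # As q approaches sqrt(n) the guarantee degrades; asymptotically MAJORITY is
    # (n^0.49, o(1))-resilient.  PARITY, in contrast, is never fixed once q >= 1.
```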
27 A Preliminary Attempt
Let Ext: {0,1}^n × {0,1}^d → {0,1} be a strong seeded extractor for min-entropy k with error ε, and let D = 2^d = (n/ε)^{O(1)}.
Given an (n,k)-source X, define Z' ∈ {0,1}^D by Z'_i = Ext(X, s_i), one bit per seed. A (1-ε) fraction of the bits of Z' are uniform; output b = Majority(Z').
Does not work: the uniform bits are arbitrarily correlated, and each bad bit can depend on the good bits.
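To see why correlation is fatal, here is a hypothetical adversarial correlation pattern we constructed purely for illustration (it is not an example from the talk): every 'good' bit is individually uniform, but the good bits are paired up as (b, 1-b), so they always split exactly evenly and the few bad bits decide the majority.

```python
import random

def adversarial_Z(D: int, eps: float, rng: random.Random) -> list:
    """Each 'good' bit is marginally uniform, but good bits come in anti-correlated pairs,
    so the eps*D adversarial bits always control Majority."""
    num_bad = int(eps * D)
    good = []
    for _ in range((D - num_bad) // 2):
        b = rng.randint(0, 1)
        good += [b, 1 - b]          # marginally uniform, perfectly anti-correlated
    bad = [1] * num_bad             # adversary fixes the bad bits
    return good + bad

if __name__ == "__main__":
    rng = random.Random(0)
    D, eps, trials = 1000, 0.01, 2000
    ones = sum(int(sum(adversarial_Z(D, eps, rng)) > D / 2) for _ in range(trials))
    print(f"Pr[Majority = 1] ≈ {ones / trials:.3f}  (would be ~0.5 if the output were uniform)")
```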
28 t-Non-Malleable Extractors
A t-non-malleable extractor nmExt guarantees that most t-tuples of seeds (s_1, s_2, ..., s_t) satisfy a t-independence property:
(nmExt(X, s_1), nmExt(X, s_2), ..., nmExt(X, s_t)) ≈ U_{tm}.
29 A Second Attempt
Use a t-non-malleable extractor nmExt from Chattopadhyay-Goyal-Li 15: for an (n,k)-source X, set Z'_i = nmExt(X, s_i), with D = (n/ε)^{poly(t, log(n/ε))} bits in total, and output b = Majority(Z').
Idea: make the uniform bits almost t-wise independent.
Does not work: there can be > D^{0.5} bad bits (not surprising, since we have only one source).
30 Our Approach: A Very High-Level Idea
Step 1: Use X and Y to construct Z on D = n^{O(1)} bits such that ≥ D - D^{0.99} of the bits are uniform and polylog-wise independent.
Step 2: Apply an explicit function that extracts from Z.
31 Executing Step 1
From the (n,k)-source X, form Z' with Z'_i = nmExt(X, s_i), D = (n/ε)^{poly(t, log(n/ε))} bits in total; the number of bad indices is ≤ εD, and the good bits of Z' are almost t-wise independent.
Idea: sample a pseudorandom subset T of [D] using Y.
32 Executing Step 1
Z'_i = nmExt(X, s_i), with D = 2^{poly(t, log(n/ε))}.
Pseudorandom subset: T = {Ext(Y, r_1), ..., Ext(Y, r_M)}, i.e., M = 2^{O(log(n/ε'))} indices in [D], obtained by applying a strong seeded extractor to Y.
Number of bad indices in T: (ε + ε')M < M^{0.99}. Set Z = Z'|_T.
(An alternate way of achieving this is by modifying a construction of Li.)
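To make the data flow of Step 1 and Step 2 concrete, here is a purely schematic sketch of the pipeline. Everything named toy_* is a hash-based placeholder we invented for illustration; none of it is a non-malleable extractor, a seeded extractor, or a resilient function, and the parameters D and M are arbitrary. Only the shape of the construction follows the slides.

```python
import hashlib
import random

def toy_bit_ext(x: bytes, seed: int) -> int:
    """Placeholder for the t-non-malleable extractor nmExt(X, s): one output bit."""
    return hashlib.sha256(x + seed.to_bytes(4, "big")).digest()[0] & 1

def toy_index_ext(y: bytes, r: int, D: int) -> int:
    """Placeholder for the strong seeded extractor Ext(Y, r) used to sample an index in [D]."""
    h = hashlib.sha256(b"idx" + y + r.to_bytes(4, "big")).digest()
    return int.from_bytes(h[:4], "big") % D

def toy_resilient_f(bits) -> int:
    """Placeholder for Step 2's resilient function (the real one is a derandomized
    Ajtai-Linial function, not majority)."""
    return int(sum(bits) > len(bits) / 2)

def two_source_extract(x: bytes, y: bytes, D: int = 4096, M: int = 512) -> int:
    # Step 1a: expand X into a long string Z' whose good bits should be ~t-wise independent.
    z_prime = [toy_bit_ext(x, s) for s in range(D)]
    # Step 1b: use Y to sample a pseudorandom (multi)set T of indices, and restrict: Z = Z'|_T.
    T = [toy_index_ext(y, r, D) for r in range(M)]
    z = [z_prime[i] for i in T]
    # Step 2: apply a resilient, nearly unbiased function to Z.
    return toy_resilient_f(z)

if __name__ == "__main__":
    rng = random.Random(1)
    print("output bit:", two_source_extract(rng.randbytes(16), rng.randbytes(16)))
```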
33 Executing Step 2
Z has M bits; the number of bad indices is < M^{0.99}, and the good bits of Z are (t = polylog(M))-wise independent.
Step 2a: the good bits can be assumed to be uniform and independent.
[Figure: a monotone constant-depth (AND/OR) circuit applied to Z.]
34 Easy to Check Whether a Monotone Function Is Fixed
35 Limited Independence Fools Small Circuits!
36 Executing Step 2
Thus, we can assume the good bits are independent and uniform.
37 Executing Step 2
Remaining task: an explicit construction of a monotone circuit C in AC^0 on M bits such that:
(1) C is (M^{1-δ}, ε)-resilient, and
(2) C is almost unbiased.
38 Executing Step 2
Remaining task: we construct such a C by derandomizing Ajtai-Linial.
The Ajtai-Linial function (probabilistic) is resilient to coalitions of size O(n/log^2 n).
39 The Ajtai-Linial Function
Tribes function: an OR of ANDs over the blocks of a partition of the input bits.
AL function: T_1 ∧ T_2 ∧ ... ∧ T_n, where each T_i is a randomly-negated Tribes function on a randomly chosen partition. It is resilient to coalitions of size O(n/log^2 n).
Problems: (1) Probabilistic: naive derandomization takes time n^{O(n^2)}. (2) Not monotone.
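A small toy sketch of these definitions (our own code; the block sizes, the number of tribes, and the use of Python's random module are illustrative choices, and 'randomly-negated' is implemented here as negating each tribe's output, following the slide's wording). The bias printout shows why the real construction must tune its parameters carefully.

```python
import random

def tribes(bits, blocks):
    """Tribes function: OR over blocks of the AND of the bits in each block."""
    return int(any(all(bits[i] for i in block) for block in blocks))

def tribes_bias(num_blocks: int, block_size: int) -> float:
    """Exact Pr[Tribes = 1] on a uniform input: 1 - (1 - 2^{-b})^{num_blocks}."""
    return 1.0 - (1.0 - 2.0 ** (-block_size)) ** num_blocks

def al_style_function(n, num_tribes, block_size, rng):
    """Shape of the AL function as described on the slide: an AND of randomly-negated
    Tribes functions, each on its own random partition.  (The real construction tunes
    block_size / num_tribes so the AND stays nearly unbiased; no tuning is attempted here.)"""
    parts = []
    for _ in range(num_tribes):
        idx = list(range(n))
        rng.shuffle(idx)                                    # random partition into blocks
        blocks = [idx[j:j + block_size] for j in range(0, n, block_size)]
        parts.append((blocks, rng.getrandbits(1)))          # random negation of this tribe
    def f(bits):
        return int(all(tribes(bits, blocks) ^ neg for blocks, neg in parts))
    return f

if __name__ == "__main__":
    # Why tuning matters: the bias of a single Tribes function is very sensitive to block size.
    n = 1408
    for b in (6, 8, 10):
        print(f"block size {b:2d}: Pr[Tribes = 1] = {tribes_bias(n // b, b):.3f}")
    f = al_style_function(n, num_tribes=8, block_size=8, rng=random.Random(0))
    print("AL-style function on one uniform input:", f([random.getrandbits(1) for _ in range(n)]))
```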
40 Derandomizing Ajtai-Linial
Key ingredient: an explicit construction of a collection of partitions of [n] such that:
(1) Any subset of [n] of size up to n^{1-δ} has small intersection with the blocks of most partitions. (Used to bound influence.)
(2) The partitions are pairwise pseudorandom: the intersection of any two blocks is bounded. (Used to bound bias.)
41 A Pseudorandom Collection of Partitions
Idea: use an explicit seeded extractor to construct the partitions.
View the extractor as a bipartite graph: left vertex set R (one vertex per source string x), right vertex set [M]; N(x) is the list of neighbors of x, one per seed.
Sampling property: for any T ⊆ [M] with density µ(T) = |T|/M, let BAD = {x : |N(x) ∩ T| > |N(x)|·(µ(T) + ε)}.
Thm. (Zuckerman 97) |BAD| ≤ 2^k.
42 A Pseudorandom Collection of Partitions
Identify [n] with [B] × [M] (so n = MB), and let S_1, ..., S_R ⊂ [n] with |S_i| = B, where S_i = {(1, i_1), ..., (B, i_B)}: one element in each of the B rows.
Each S_i defines a partition of [n]: {S_i, S_i + 1, ..., S_i + (M-1)}, where S_i + j shifts the second coordinate of every element by j (mod M).
[Figure: the B × M grid with S_i and its shift S_i + 1.]
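A minimal sketch of this shift construction, assuming the reading above (shifts act on the second coordinate mod M); it builds the M blocks from one example set S_i and checks that they really partition the B × M grid. The selector values are arbitrary.

```python
from itertools import product

def shift_partition(selector, B, M):
    """Given S = {(row, selector[row]) : row in range(B)}, return the partition
    {S, S+1, ..., S+(M-1)}, where S+j shifts every second coordinate by j mod M."""
    return [frozenset((row, (selector[row] + j) % M) for row in range(B)) for j in range(M)]

if __name__ == "__main__":
    B, M = 4, 6
    selector = [0, 3, 1, 5]                      # i_1, ..., i_B: an arbitrary example set S_i
    blocks = shift_partition(selector, B, M)
    grid = set(product(range(B), range(M)))      # the ground set [B] x [M], i.e. [n] with n = MB
    assert all(len(b) == B for b in blocks)      # every block has size B
    assert set().union(*blocks) == grid          # the blocks cover the grid ...
    assert sum(len(b) for b in blocks) == len(grid)   # ... and are pairwise disjoint
    print(f"{M} blocks of size {B} partition the {B}x{M} grid")
```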
43 A Pseudorandom Collection of Partitions
The sets come from the extractor graph: for each left vertex x, take S_x = N(x), the B-element neighborhood of x (one neighbor per seed).
44 A Pseudorandom Collection of Partitions
Properties required:
(1) Any T ⊂ [n] with |T| < n^{1-δ} has small intersection with the blocks of most partitions.
(2) For all x ≠ x' and all shifts i, j: |(N(x) + i) ∩ (N(x') + j)| < 0.9B.
We show that the Trevisan extractor satisfies these. (A slightly simpler version of Property 2 was proved by Li '12.)
45 Some Ingredients in the Analysis
A new way of analyzing the bias of the Ajtai-Linial function, viewed as an AND of Tribes functions; this is crucial for achieving monotonicity and for the derandomization.
A useful tool is Janson's inequality: let T ⊆ [R] contain each element r independently with probability p, and let S_1, ..., S_t ⊂ [R]. If the S_i's are small and have low pairwise intersection, then the probability that no S_i falls entirely inside T is tightly controlled (see the statement below).
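For reference, the standard form of Janson's inequality (the formula itself is missing from the scraped slide): with A_i the event S_i ⊆ T, µ = Σ_i Pr[A_i], and Δ = Σ Pr[A_i ∧ A_j] taken over ordered pairs i ≠ j with S_i ∩ S_j ≠ ∅,

```latex
\prod_{i=1}^{t}\bigl(1-\Pr[A_i]\bigr)\;\le\;\Pr\Bigl[\bigwedge_{i=1}^{t}\overline{A_i}\Bigr]\;\le\; e^{-\mu+\Delta/2}.
```

When Δ is small compared to µ, both sides are roughly e^{-µ}, which is what low pairwise intersection buys.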
46 Subsequent Applications and Extensions of Our Work
- Li: explicit affine extractor for min-entropy log^C n.
- Li: explicit 2-source extractor with output length 0.9k.
- Meka: explicit resilient function matching the probabilistic Ajtai-Linial construction.
47 Open Questions
- Negligible error? We achieve error 1/n^{Ω(1)}; not enough for cryptographic applications.
- More applications? Of the results or the techniques.
48 Thank You!