CS151 Complexity Theory Lecture 9 April 27, 2015.

NW PRG

NW: for a fixed constant δ, G = {G_n} with
–seed length t = O(log n)
–running time n^c
–output length m = n^δ
–error ε < 1/m
–fooling size s = m

Using this PRG we obtain BPP = P
–to fool size n^k use G_{n^{k/δ}}
–running time O(n^k + n^{ck/δ}); 2^t = poly(n) seeds, so enumerating them is feasible

NW PRG

First attempt: build a PRG assuming E contains unapproximable functions.

Definition: the function family f = {f_n}, f_n : {0,1}^n → {0,1}, is s(n)-unapproximable if for every family of size-s(n) circuits {C_n}:
Pr_x[C_n(x) = f_n(x)] ≤ ½ + 1/s(n).

One bit

Suppose f = {f_n} is s(n)-unapproximable for s(n) = 2^Ω(n), and f ∈ E.

A "1-bit" generator family G = {G_n}:
G_n(y) = y ∘ f_{log n}(y)

Idea: if G is not a PRG, then there exists a predictor that computes f_{log n} with agreement better than ½ + 1/s(log n); contradiction.

One bit

Suppose f = {f_n} is s(n)-unapproximable for s(n) = 2^δn, and f ∈ E.

A "1-bit" generator family G = {G_n}: G_n(y) = y ∘ f_{log n}(y)
–seed length t = log n
–output length m = log n + 1 (want n^δ)
–fooling size s ≈ s(log n) = n^δ
–running time n^c
–error ε ≈ 1/s(log n) = 1/n^δ

Many bits

Try outputting many evaluations of f:
G(y) = f(b_1(y)) ∘ f(b_2(y)) ∘ … ∘ f(b_m(y))

It seems a predictor must evaluate f(b_i(y)) to predict the i-th bit. Does this work?

Many bits

Try outputting many evaluations of f:
G(y) = f(b_1(y)) ∘ f(b_2(y)) ∘ … ∘ f(b_m(y))
–a predictor might notice correlations without having to compute f
–but a more subtle argument works for a specific choice of b_1 … b_m

Nearly-Disjoint Subsets

Definition: S_1, S_2, …, S_m ⊆ {1…t} is an (h, a)-design if
–for all i, |S_i| = h
–for all i ≠ j, |S_i ∩ S_j| ≤ a

(figure: overlapping subsets S_1, S_2, S_3 inside the universe {1…t})

Nearly-Disjoint Subsets

Lemma: for every ε > 0 and m < n, we can construct in poly(n) time an (h = log n, a = ε log n)-design S_1, S_2, …, S_m ⊆ {1…t} with t = O(log n).

Nearly-Disjoint Subsets

Proof sketch:
–pick a random (log n)-subset of {1…t}
–set t = O(log n) so that the expected overlap with a fixed S_i is (ε log n)/2
–the probability that the overlap with S_i exceeds ε log n is at most 1/n
–union bound: some subset has the required small overlap with all S_i picked so far…
–find it by exhaustive search; repeat n times.
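The greedy procedure from the proof sketch is easy to simulate at toy scale. Below is a sketch in Python; the parameters m, h, a, t are illustrative small values, not the asymptotic ones from the lemma, and the candidate is found by random sampling rather than exhaustive search.

```python
import random

def nw_design(m, h, a, t, tries=10000, seed=0):
    """Greedy (h, a)-design construction: m subsets of {0,...,t-1},
    each of size h, with pairwise intersections of size <= a.
    Mirrors the proof sketch: sample random h-subsets and keep one
    whose overlap with everything chosen so far is small."""
    rng = random.Random(seed)
    universe = list(range(t))
    sets = []
    for _ in range(m):
        for _ in range(tries):
            S = frozenset(rng.sample(universe, h))
            if all(len(S & T) <= a for T in sets):
                sets.append(S)
                break
        else:
            raise RuntimeError("no suitable subset found; increase t or tries")
    return sets

# toy parameters: h = log n with n = 256, overlap bound a = h/2, universe t = 4h
design = nw_design(m=10, h=8, a=4, t=32)
assert len(design) == 10
assert all(len(S) == 8 for S in design)
assert all(len(design[i] & design[j]) <= 4
           for i in range(10) for j in range(i + 1, 10))
```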

The NW generator

–f ∈ E, s(n)-unapproximable for s(n) = 2^δn
–S_1, …, S_m ⊆ {1…t}: a (log n, a = δ log n / 3)-design with t = O(log n)

G_n(y) = f_{log n}(y|_{S_1}) ∘ f_{log n}(y|_{S_2}) ∘ … ∘ f_{log n}(y|_{S_m})

(figure: seed y; each output bit applies f_{log n} to the seed bits indexed by one set of the design)

The NW generator

Theorem (Nisan-Wigderson): G = {G_n} is a pseudo-random generator with:
–seed length t = O(log n)
–output length m = n^{δ/3}
–running time n^c
–fooling size s = m
–error ε = 1/m
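The generator itself is simple to write down. Here is a minimal sketch of evaluating G_n on one seed, assuming the design is given as index lists; parity stands in for the hard function purely for illustration (parity is of course easy to compute, so this instance is not a real PRG).

```python
def nw_generator(f, y_bits, design):
    """Evaluate the NW generator on seed y_bits: output bit i is f applied
    to the seed bits indexed by the design set S_i."""
    return [f(tuple(y_bits[j] for j in S)) for S in design]

# toy example: seed length t = 6, three design sets of size h = 3,
# pairwise overlaps of size 1; f = parity (illustrative stand-in only)
design = [[0, 1, 2], [2, 3, 4], [4, 5, 0]]
f = lambda bits: sum(bits) % 2
out = nw_generator(f, [1, 0, 1, 1, 0, 0], design)
assert out == [0, 0, 1]
```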

The NW generator

Proof:
–assume G does not ε-pass some statistical test C = {C_m} of size s:
|Pr_x[C(x) = 1] − Pr_y[C(G_n(y)) = 1]| > ε
–we can transform this distinguisher into a predictor P of size s' = s + O(m):
Pr_y[P(G_n(y)_{1…i−1}) = G_n(y)_i] > ½ + ε/m

The NW generator

Proof (continued):
Pr_y[P(G_n(y)_{1…i−1}) = G_n(y)_i] > ½ + ε/m
–fix the seed bits outside of S_i so as to preserve the advantage; now only y' = y|_{S_i} varies:
Pr_{y'}[P(G_n(y')_{1…i−1}) = G_n(y')_i] > ½ + ε/m

The NW generator

Proof (continued):
–G_n(y')_i is exactly f_{log n}(y')
–for j ≠ i, as y' varies, G_n(y')_j ranges over only 2^a values!
–so we can hardwire up to (m−1) tables of 2^a values each to provide G_n(y')_{1…i−1}

The NW generator

The resulting circuit (P plus the hardwired tables) outputs f_{log n}(y') and has
–size m + O(m) + (m−1)·2^a < s(log n) = n^δ
–advantage ε/m = 1/m² > 1/s(log n) = n^{−δ}
Contradiction.

Worst-case vs. Average-case

Theorem (NW): if E contains 2^Ω(n)-unapproximable functions, then BPP = P.

How reasonable is the unapproximability assumption?

Hope: obtain BPP = P from a worst-case complexity assumption
–try to fit into the existing framework without the new notion of "unapproximability"

Worst-case vs. Average-case

Theorem (Impagliazzo-Wigderson, Sudan-Trevisan-Vadhan): if E contains functions that require size-2^Ω(n) circuits, then E contains 2^Ω(n)-unapproximable functions.

Proof:
–main tool: error-correcting codes

Error-correcting codes

Error Correcting Code (ECC): C : Σ^k → Σ^n
–message m ∈ Σ^k
–received word R: C(m) with some positions corrupted
–if there are not too many errors, we can decode: D(R) = m

Parameters of interest:
–rate: k/n
–distance: d = min_{m ≠ m'} Δ(C(m), C(m'))

Distance and error correction

If C is an ECC with distance d, we can uniquely decode from fewer than d/2 errors, i.e., up to ⌊(d−1)/2⌋.

(figure: codewords in Σ^n surrounded by disjoint balls of radius < d/2)

Distance and error correction

We can find a short list of messages (one of them correct) even after a number of errors much closer to d!

Theorem (Johnson): a binary code with distance (½ − δ²)n has at most O(1/δ²) codewords in any ball of radius (½ − δ)n.

Example: Reed-Solomon

–alphabet Σ = F_q: the field with q elements
–message m ∈ Σ^k
–view m as a polynomial of degree at most k−1:
p_m(x) = Σ_{i=0…k−1} m_i x^i
–codeword C(m) = (p_m(x))_{x ∈ F_q}
–rate = k/q

Example: Reed-Solomon

Claim: distance d = q − k + 1
–suppose Δ(C(m), C(m')) < q − k + 1 for m ≠ m'
–then the polynomials p_m(x) and p_{m'}(x) agree on more than k−1 points of F_q
–so the polynomial p(x) = p_m(x) − p_{m'}(x) has more than k−1 zeros
–but p has degree at most k−1, so it must be identically zero…
–contradiction.
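As a sanity check on the claim, the exact minimum distance of a tiny Reed-Solomon code can be computed by brute force; the parameters q = 7, k = 3 below are illustrative.

```python
import itertools

def rs_encode(m, q):
    """Reed-Solomon over the prime field F_q: message m = (m_0,...,m_{k-1})
    is the coefficient vector of p_m(x) = sum_i m_i x^i; the codeword lists
    the evaluations of p_m at every point of F_q."""
    return [sum(c * pow(x, i, q) for i, c in enumerate(m)) % q
            for x in range(q)]

q, k = 7, 3   # tiny parameters; the claimed distance is q - k + 1 = 5
codewords = [rs_encode(m, q) for m in itertools.product(range(q), repeat=k)]
min_dist = min(sum(a != b for a, b in zip(c1, c2))
               for c1, c2 in itertools.combinations(codewords, 2))
assert min_dist == q - k + 1
```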

Example: Reed-Muller

Parameters: t (dimension), h (degree)
–alphabet Σ = F_q: the field with q elements
–message m ∈ Σ^k
–view m as a multivariate polynomial of total degree at most h:
p_m(x) = Σ_{i=0…k−1} m_i M_i
where {M_i} are all monomials of degree ≤ h

Example: Reed-Muller

–M_i is a monomial of total degree at most h, e.g. x_1^2 x_2 x_4^3
–need #monomials = (h+t choose t) ≥ k
–codeword C(m) = (p_m(x))_{x ∈ (F_q)^t}
–rate = k/q^t

Claim: distance d = (1 − h/q) q^t
–proof: Schwartz-Zippel: a nonzero polynomial of degree h vanishes on at most an h/q fraction of the points.
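The Schwartz-Zippel bound behind the distance claim can be checked directly on a small example; the polynomial and the parameters q = 11, t = 2, h = 3 below are illustrative.

```python
import itertools

q, t, h = 11, 2, 3
# a fixed nonzero polynomial of total degree 3: p(x1, x2) = x1^2 * x2 + x2
p = lambda x1, x2: (x1 * x1 * x2 + x2) % q

zeros = sum(1 for x in itertools.product(range(q), repeat=t) if p(*x) == 0)
# Schwartz-Zippel: at most an h/q fraction of the q^t points are zeros
assert zeros <= h * q ** (t - 1)
# here p = x2 * (x1^2 + 1) vanishes exactly on the line x2 = 0,
# since x1^2 = -1 has no solution mod 11
assert zeros == q
```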

Codes and hardness

Reed-Solomon (RS) and Reed-Muller (RM) codes are efficiently encodable.

Efficient unique decoding?
–yes (classic results)

Efficient list-decoding?
–yes (RS list-decoding on the problem set)

Codes and Hardness

Use encoding for worst-case to average-case hardness:
–m: truth table of f : {0,1}^{log k} → {0,1} (worst-case hard)
–Enc(m): truth table of f' : {0,1}^{log n} → {0,1} (average-case hard)

Codes and Hardness

If n = poly(k), then f ∈ E implies f' ∈ E.

Want to be able to prove: if f' is s'-approximable, then f is computable by a circuit of size s = poly(s').

Codes and Hardness

Key: a circuit C that approximates f' implicitly gives a received word R; the decoding procedure D then "computes" f exactly.

(figure: R as a corrupted version of Enc(m), with D querying C)

This requires a special notion of efficient decoding.

Codes and Hardness

(figure summarizing the reduction)
–m: truth table of f : {0,1}^{log k} → {0,1}
–Enc(m): truth table of f' : {0,1}^{log n} → {0,1}
–C: small circuit approximating f', i.e., a received word R
–D: decoding procedure; on input i ∈ {0,1}^{log k}, it queries C and outputs f(i)
–together, D and C form a small circuit that computes f exactly

Encoding

Use a (variant of) the Reed-Muller code concatenated with the Hadamard code
–parameters: q (field size), t (dimension), h (degree)

Encoding procedure:
–message m ∈ {0,1}^k
–fix a subset S ⊆ F_q of size h
–fix an efficient 1-1 function Emb : [k] → S^t; so we need h^t ≥ k
–find the coefficients of a degree-h polynomial p_m : F_q^t → F_q for which p_m(Emb(i)) = m_i for all i (linear algebra)
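The "linear algebra" step can be sketched concretely: one linear equation per message position, solved over F_q. The generic solver and the toy parameters below (q = 5, t = h = 2, k = 4) are illustrative, not the ones used in the actual reduction.

```python
import itertools

def solve_mod_p(A, b, p):
    """Find one solution of A x = b over F_p by Gaussian elimination,
    setting free variables to 0.  Assumes the system is consistent."""
    rows, cols = len(A), len(A[0])
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    pivots, r = [], 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c] % p), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], p - 2, p)          # inverse via Fermat (p prime)
        M[r] = [(v * inv) % p for v in M[r]]
        for i in range(rows):
            if i != r and M[i][c] % p:
                f = M[i][c]
                M[i] = [(vi - f * vr) % p for vi, vr in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    x = [0] * cols
    for ri, c in enumerate(pivots):
        x[c] = M[ri][cols]
    return x

q, t, h = 5, 2, 2
m = [1, 4, 2, 3]                                    # k = 4 <= h^t
S = [1, 2]                                          # subset of F_q, |S| = h
emb = list(itertools.product(S, repeat=t))[:len(m)] # 1-1 map [k] -> S^t
monomials = [e for e in itertools.product(range(h + 1), repeat=t)
             if sum(e) <= h]                        # total degree <= h
A = [[pow(x1, e1, q) * pow(x2, e2, q) % q for (e1, e2) in monomials]
     for (x1, x2) in emb]
coeffs = solve_mod_p(A, m, q)
pm = lambda x1, x2: sum(c * pow(x1, e1, q) * pow(x2, e2, q)
                        for c, (e1, e2) in zip(coeffs, monomials)) % q
assert all(pm(*emb[i]) == m[i] for i in range(len(m)))
```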

Encoding

Encoding procedure (continued):
–Hadamard code Had : {0,1}^{log q} → {0,1}^q
= Reed-Muller with field size 2, dimension log q, degree 1
–distance ½ by Schwartz-Zippel
–final codeword: (Had(p_m(x)))_{x ∈ F_q^t}
i.e., evaluate p_m at all points, and encode each evaluation with the Hadamard code
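A minimal sketch of the Hadamard code, checking the distance-½ property exhaustively for a small block length (b = 3 is illustrative):

```python
from itertools import product

def hadamard(z):
    """Had(z) lists the inner product <z, x> mod 2 for every x in
    {0,1}^len(z): Reed-Muller with field size 2 and degree 1."""
    return [sum(zi * xi for zi, xi in zip(z, x)) % 2
            for x in product((0, 1), repeat=len(z))]

# any two distinct codewords differ in exactly half of the 2^b positions,
# since Had(z) xor Had(z') = Had(z xor z') is a nonzero linear function
b = 3
words = [hadamard(list(z)) for z in product((0, 1), repeat=b)]
for i in range(len(words)):
    for j in range(i + 1, len(words)):
        dist = sum(a != c for a, c in zip(words[i], words[j]))
        assert dist == 2 ** (b - 1)
```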

Encoding

(figure: m mapped via Emb : [k] → S^t ⊆ F_q^t; p_m is the degree-h polynomial with p_m(Emb(i)) = m_i; evaluate p_m at all x ∈ F_q^t and encode each symbol with Had : {0,1}^{log q} → {0,1}^q)

Decoding

Given: a small circuit C computing R, with agreement ½ + δ.

Decoding step 1:
–produce a circuit C' from C that, given x ∈ F_q^t, outputs a "guess" for p_m(x)
–C' computes the set {z : Had(z) has agreement ≥ ½ + δ/2 with the x-th block of R}, and outputs a random z from this set

Decoding

Decoding step 1 (continued):
–for at least a δ/2 fraction of the blocks, the agreement within the block is at least ½ + δ/2
–Johnson bound: when this happens, the list size is S = O(1/δ²), so the probability C' is correct is at least 1/S
–altogether: Pr_x[C'(x) = p_m(x)] ≥ Ω(δ³)
–C' makes q queries to C and runs in time poly(q)

Decoding

Given: a small circuit C' computing R', with agreement δ' = Ω(δ³).

Decoding step 2:
–produce a circuit C'' from C' that, given x ∈ Emb([k]), outputs p_m(x)
–idea: restrict p_m to a random curve; apply efficient Reed-Solomon list-decoding; fix "good" random choices

Restricting to a curve

–points x = α_1, α_2, α_3, …, α_r ∈ F_q^t specify a degree-r curve L : F_q → F_q^t
–w_1, w_2, …, w_r are distinct elements of F_q
–for each i, L_i : F_q → F_q is the degree-r polynomial for which L_i(w_j) = (α_j)_i for all j
–write p_m(L(z)) to mean p_m(L_1(z), L_2(z), …, L_t(z))
–p_m(L(w_1)) = p_m(x)
–p_m(L(z)) is a univariate polynomial of degree at most r·h·t

Restricting to a curve

Example:
–p_m(x_1, x_2) = x_1^2 x_2^2 + x_2
–α_1 = (2,1), α_2 = (1,0)
–w_1 = 1, w_2 = 0
–L_1(z) = 2z + 1·(1−z) = z + 1
–L_2(z) = 1·z + 0·(1−z) = z
–p_m(L(z)) = (z+1)² z² + z = z⁴ + 2z³ + z² + z
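The example can be verified mechanically. The code below checks, over a small illustrative field F_5, that the curve passes through α_1 and α_2 at w_1 and w_2, and that p_m(L(z)) matches the claimed univariate polynomial (the identity holds over the integers, hence over any F_q):

```python
q = 5   # small field chosen just for checking
L1 = lambda z: (z + 1) % q                          # interpolates alpha coords
L2 = lambda z: z % q
pm = lambda x1, x2: (x1 * x1 * x2 * x2 + x2) % q    # p_m(x1,x2) = x1^2 x2^2 + x2
composed = lambda z: (z**4 + 2 * z**3 + z**2 + z) % q

# the curve hits alpha_1 = (2,1) at w_1 = 1 and alpha_2 = (1,0) at w_2 = 0
assert (L1(1), L2(1)) == (2, 1)
assert (L1(0), L2(0)) == (1, 0)
# p_m(L(z)) equals the claimed univariate polynomial at every point of F_q
assert all(pm(L1(z), L2(z)) == composed(z) for z in range(q))
```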

Decoding

Given: a small circuit C' computing R', with agreement δ' = Ω(δ³).

Decoding step 2 (continued):
–pick random w_1, w_2, …, w_r and α_2, α_3, …, α_r to determine the curve L (α_1 = x is fixed)
–the points on L are (r−1)-wise independent
–random variable: Agr = |{z : C'(L(z)) = p_m(L(z))}|
–E[Agr] = δ'q and Pr[Agr < δ'q/2] < O(1/(δ'q))^{(r−1)/2}

Decoding

Decoding step 2 (continued):
–agr = |{z : C'(L(z)) = p_m(L(z))}| is ≥ δ'q/2 with very high probability
–compute, using Reed-Solomon list-decoding, the list
{q(z) : deg(q) ≤ r·h·t, |{z : C'(L(z)) = q(z)}| ≥ δ'q/2}
–if agr ≥ δ'q/2, then p_m(L(·)) is in this list!

Decoding

Decoding step 2 (continued):
–assuming δ'q/2 > (2·r·h·t·q)^{1/2}
–Reed-Solomon list-decoding step:
running time = poly(q)
list size S ≤ 4/δ'
–the probability the list fails to contain p_m(L(·)) is O(1/(δ'q))^{(r−1)/2}