CS151 Complexity Theory Lecture 10 April 29, 2004

Outline
– Decoding Reed-Muller codes
– Transforming worst-case hardness into average-case hardness
– Extractors

Decoding RM
Main idea: reduce to decoding RS.
[figure: two points a, b in (F_q)^t]
An RM codeword is a polynomial p(x_1, x_2, …, x_t) of total degree at most h. Its "restriction to the line L(z) passing through a, b" is:
L_1(z) = a_1·z + b_1·(1-z)
L_2(z) = a_2·z + b_2·(1-z)
…
L_t(z) = a_t·z + b_t·(1-z)
p|_L(z) = p(L_1(z), L_2(z), …, L_t(z))

Decoding RM
Two key observations:
1. If p has total degree at most h, then p|_L has degree at most h.
2. p|_L is a univariate polynomial.

Decoding RM
Example:
– p(x_1, x_2) = x_1^2·x_2 + x_2^2
– a = (2,1), b = (1,0)
– L_1(z) = 2z + 1·(1-z) = z + 1
– L_2(z) = 1·z + 0·(1-z) = z
– p|_L(z) = (z+1)^2·z + z^2 = z^3 + 3z^2 + z
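A quick sanity check of this example, as a small Python sketch I added (not from the lecture; the field size q = 5 is an arbitrary illustrative choice):

q = 5  # illustrative prime field size

def p(x1, x2):
    # the example polynomial p(x1, x2) = x1^2·x2 + x2^2 over F_q
    return (x1 * x1 * x2 + x2 * x2) % q

def L(z, a, b):
    # coordinate-wise line L(z) = a·z + b·(1-z); note L(0) = b and L(1) = a
    return tuple((ai * z + bi * (1 - z)) % q for ai, bi in zip(a, b))

a, b = (2, 1), (1, 0)
for z in range(q):
    assert p(*L(z, a, b)) == (z**3 + 3 * z**2 + z) % q
print("p|_L(z) = z^3 + 3z^2 + z on all of F_5")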

Decoding RM
Key property: if a and b are picked randomly in (F_q)^t, then the points in the vector (a·z + b·(1-z))_{z ∈ F_q} are pairwise independent.

Decoding RM
Meaning of pairwise independence in this context: let L(z) = a·z + b·(1-z). For all w ≠ z ∈ F_q and all α, β ∈ (F_q)^t:
Pr_{a,b}[L(w) = α | L(z) = β] = 1/q^t
Every pair of points on L behaves just as if it were picked independently.
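This can be verified by brute force on a toy field (a sketch I added; q = 3, t = 2 and the choice w = 1, z = 2 are illustrative). As a, b range over all of (F_q)^t, the pair (L(w), L(z)) hits every value exactly once, which gives the conditional probability above:

from itertools import product
from collections import Counter

q, t = 3, 2  # toy parameters
points = list(product(range(q), repeat=t))

def L(z, a, b):
    return tuple((ai * z + bi * (1 - z)) % q for ai, bi in zip(a, b))

w, z = 1, 2  # any fixed w != z in F_q works
counts = Counter((L(w, a, b), L(z, a, b)) for a in points for b in points)
# every pair (alpha, beta) occurs exactly once over the q^t · q^t choices of (a, b)
assert len(counts) == (q ** t) ** 2 and set(counts.values()) == {1}
print("Pr[L(w) = alpha | L(z) = beta] = 1/q^t")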

Decoding RM (small error)
The setup:
– the codeword is a polynomial p: (F_q)^t → F_q with total degree h; k = (h + t choose t), n = q^t
– given received word R: (F_q)^t → F_q
– suppose Pr_a[p(a) = R(a)] > 1 - δ
– try to recover p by accessing R

Decoding RM (small error)
To decode one position a ∈ (F_q)^t:
– pick b randomly in (F_q)^t
– L is the line passing through a, b
– read the q pairs (z, R|_L(z)) for z ∈ F_q
– each point L(z) is random in (F_q)^t
– E[# errors hit] < δq
– Pr_b[# errors hit > 4δq] < ¼ (Markov)
– try to find a degree-h univariate polynomial r for which Pr_z[r(z) ≠ R|_L(z)] ≤ 4δ (RS decoding!)

Decoding RM (small error)
– with probability 3/4 the data is close to the univariate polynomial p|_L, and then r(1) = p|_L(1) = p(a)
– output r(1)
– repeat O(log n) times and choose the top vote-getter; this reduces the error probability to 1/(3n)
– union bound: all n positions are decoded correctly with probability at least 2/3
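The whole local decoder for one position, as a Python sketch I added. Here rs_decode is a hypothetical black-box Reed-Solomon unique decoder: given the q pairs and the degree bound h, it returns the unique degree-≤h polynomial (as a callable) within the error bound, or None.

import random
from collections import Counter

def decode_position(R, a, q, t, h, trials):
    # R is an oracle for the received word; a is the position to decode
    votes = Counter()
    for _ in range(trials):  # O(log n) trials
        b = tuple(random.randrange(q) for _ in range(t))
        points = [(z, R(tuple((ai * z + bi * (1 - z)) % q
                              for ai, bi in zip(a, b))))
                  for z in range(q)]
        r = rs_decode(points, h)   # hypothetical RS unique decoder
        if r is not None:
            votes[r(1)] += 1       # L(1) = a, so r(1) should equal p(a)
    return votes.most_common(1)[0][0] if votes else None  # top vote-getter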

Decoding RM (small error)
Assume relative distance 1 - h/q > ½.
The previous algorithm works when 4δ < ¼, so it requires a very small relative error: δ < 1/16.

List-decoding RM (large error)
The setup:
– given received word R: (F_q)^t → F_q
– each nearby codeword is a polynomial p: (F_q)^t → F_q with total degree h; k = (h + t choose t), n = q^t
– by accessing R, try to recover all p such that Pr_a[p(a) = R(a)] > ε

List-decoding RM (large error)
Procedure (sketch):
– pick a and b randomly in (F_q)^t
– L is the line passing through a, b
– read the q pairs (z, R|_L(z)) for z ∈ F_q
– each point L(z) is random in (F_q)^t
– E[# non-errors hit] > εq
– Pr_{a,b}[# non-errors hit < εq/2] < 4/(εq) (Chebyshev)

List-decoding RM (large error)
– using RS list-decoding, find the list of all degree-h univariate polynomials r_1, r_2, …, r_m for which Pr_z[r_i(z) = R|_L(z)] ≥ ε/2 (agreement on at least εq/2 of the q points)
– one r_i is p|_L (i.e., r_i(z) = p|_L(z) for all z)
– how can we find that one?
– given p(b), we can find it with high probability; since L(0) = b,
Pr_{a,b}[there exist i ≠ j for which r_j(0) = r_i(0)] < 8m^2·h/q
– find the unique r_i for which r_i(0) = p(b); output r_i(1), which should be p|_L(1) = p(a)
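One trial of this step, as a sketch I added. rs_list_decode is a hypothetical black-box Reed-Solomon list decoder (returning all degree-≤h polynomials, as callables, with the stated agreement), and v is a guessed value for p(b):

import random

def list_decode_trial(R, a, q, t, h, eps, v):
    b = tuple(random.randrange(q) for _ in range(t))
    points = [(z, R(tuple((ai * z + bi * (1 - z)) % q
                          for ai, bi in zip(a, b))))
              for z in range(q)]
    # all r_i agreeing with R|_L on >= eps*q/2 of the q points
    candidates = rs_list_decode(points, h, eps * q / 2)
    matches = [r for r in candidates if r(0) == v]  # L(0) = b, so r(0) = p(b)
    return matches[0](1) if len(matches) == 1 else None  # L(1) = a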

List-decoding RM (large error)
– Pr_{a,b}[# non-errors hit < εq/2] < 4/(εq)
– given p(b), Pr_{a,b}[fail to output p(a)] < 8m^2·h/q
– so Pr_{a,b}[output p(a)] > 1 - 4/(εq) - 8m^2·h/q
– Key: try O(log n) random b values; for each b, try all q values for p(b) (one is right!), and apply the RM decoder for small error
– each good trial yields exactly the small error required by the previous decoding algorithm

Decoding RM (large error)
Requirements on parameters:
– εq/2 > (2hq)^{1/2}, guaranteed by q > 8h/ε^2 (for RS list decoding)
– 4/(εq) < 1/64, guaranteed by q > 256/ε
– we know m < 2/ε from the q-ary Johnson bound
– 8m^2·h/q < 1/64, guaranteed by q > 2^11·h/ε^2
Conclude: Pr_{a,b}[output p(a)] > 1 - 1/32
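These inequalities are easy to sanity-check numerically. In the sketch below (which I added; the values of ε and h are illustrative, not from the lecture), the two failure terms each stay below 1/64, so the total failure probability is below 1/32:

eps, h = 0.1, 10                      # illustrative values only
q = 2 * max(int(8 * h / eps**2),      # RS list-decoding condition
            int(256 / eps),
            int(2**11 * h / eps**2))  # comfortably above every bound
m = 2 / eps                           # q-ary Johnson bound on the list size
assert eps * q / 2 > (2 * h * q) ** 0.5
assert 4 / (eps * q) < 1 / 64         # Chebyshev term
assert 8 * m**2 * h / q < 1 / 64      # collision term
print("Pr[fail] <", 4 / (eps * q) + 8 * m**2 * h / q, "< 1/32")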

Decoding RM (large error)
– Pr_{a,b}[fail to output p(a)] < 1/32
– so Pr_b[Pr_a[fail to output p(a)] > 1/16] < ½ (Markov)
– on the trial with the correct value for p(b): with probability at least ½ over b, the error rate is at most 1/16, so the RM decoder for small error is correct on all a
– repetition decreases the error exponentially
– union bound: all p with agreement ε are included in the list

Local decodability
An amazing property of the decoding method:
Local decodability: each symbol of C(m) can be decoded by looking at only a small number of symbols of R.
[figure: a few queried symbols of the received word R determine one symbol of the codeword C(m)]

Local decodability
Local decodability: each symbol of C(m) is decoded by looking at a small number of symbols of R.
– a small decoding circuit D
– a small circuit computing R
– together these imply a small circuit computing C(m): on input i, D queries R and outputs C(m)_i

Concatenation
Problem: the symbols are from F_q rather than bits.
Solution: encode each symbol with a binary code.
– our choice: RM with degree h ≤ 1, field size 2, # variables t = log q
– also called the "Hadamard code"
– Schwartz-Zippel implies relative distance = ½
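A small sketch of this inner code, which I added (the messages, i.e. t' = 3 and q = 8, are illustrative): a symbol of F_q, viewed as t' = log q bits m, is encoded as the truth table of the F_2-linear map x → <m, x>, and any two distinct codewords differ in exactly half of the positions.

from itertools import product

def hadamard_encode(m):
    # truth table of x -> <m, x> mod 2 over all x in {0,1}^t'
    return [sum(mi * xi for mi, xi in zip(m, x)) % 2
            for x in product((0, 1), repeat=len(m))]

c1 = hadamard_encode((1, 0, 1))
c2 = hadamard_encode((0, 1, 1))
assert sum(u != v for u, v in zip(c1, c2)) == len(c1) // 2  # relative distance 1/2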

Concatenation
Decoding: whenever the outer decoder would have accessed symbol i of the received word, first decode the binary code at that position, then proceed.
[figure: outer codeword C(m), concatenated codeword C'(m), received word R']

The concatenated code
Outer code: RM with parameters
– field size q, degree h, dimension t
– # coefficients (h + t choose t) > k
– q^t = poly(k)
Inner code: RM with parameters
– field size q' = 2
– degree h' = 1
– dimension t' = log q
Block length: n = q^t · q = poly(k)

Codes and Hardness
Recall our strategy:
– the truth table of f: {0,1}^{log k} → {0,1} (worst-case hard) is the message m
– the truth table of f': {0,1}^{log n} → {0,1} (average-case hard) is the codeword C(m)

Hardness amplification
Claim: f ∈ E ⇒ f' ∈ E
– f ∈ E ⇒ f is computable by a TM running in time 2^{c(log k)} = k^c
– to write out the truth table of f: time k·k^c
– to compute the truth table of f': time poly(n, k)
– recall n = poly(k)
– so f' is computable by a TM running in time n^{c'} = 2^{c'(log n)} ⇒ f' ∈ E

Hardness amplification
Need to prove: if f' is s'-approximable by a circuit C, then f is computable by a circuit of size s = poly(s').
[figure: f is the "message", f' is the "codeword", C is the "received word"]

Hardness amplification
Suppose f' is s'-approximable by C:
– Pr_x[C(x) = f'(x)] ≥ ½ + 1/s'
– by averaging, at least a 1/(2s') fraction of the "inner" blocks have C, f' agreement ≥ ½ + 1/(2s')
– Johnson bound: at most O(s'^2) inner codewords have this much agreement
– find them by brute-force search: time O(q)
– pick a random message from the list for each symbol
– this gives an "outer" received word with agreement 1/s'^3
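The brute-force inner step as a sketch I added (function and parameter names are illustrative, not the lecture's): enumerate all q inner messages, keep the ones whose Hadamard encodings have the stated agreement with C's outputs on the block, and return a random survivor.

import random
from itertools import product

def decode_inner_block(block, s_prime):
    # block: the 2^t' bits that circuit C outputs on one inner block
    t_prime = len(block).bit_length() - 1
    threshold = (0.5 + 1 / (2 * s_prime)) * len(block)
    survivors = []
    for m in product((0, 1), repeat=t_prime):  # brute force: time O(q)
        cw = [sum(mi * xi for mi, xi in zip(m, x)) % 2
              for x in product((0, 1), repeat=t_prime)]
        if sum(u == v for u, v in zip(cw, block)) >= threshold:
            survivors.append(m)  # Johnson bound: only O(s'^2) survive
    return random.choice(survivors) if survivors else None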

Hardness amplification
– we have an "outer" received word with agreement ε = 1/s'^3
– we can only uniquely decode from agreement > ½ (since the relative distance is < 1)
– our agreement is << ½
– so we need to use list-decoding of RM for large error

Setting parameters
We have to handle agreement ε = 1/s'^3. Pick q, h, t:
– need q > Θ(h/ε^2) for the decoder to work
– need (h + t choose t) > k (note s' < k)
– need q^t = poly(k)
Choice: h = s', t = Θ((log k)/(log s')), q = Θ(h/ε^2) = Θ(s'^7)
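A numeric sketch of these choices, which I added (the values of k and s' are hypothetical): with h = s' and ε = 1/s'^3, the decoder's requirement q = Θ(h/ε^2) works out to Θ(s'^7), while t = Θ(log k / log s') keeps q^t = poly(k) and leaves enough coefficients for the message.

from math import comb, log2

k, s = 2**18, 2**10                 # hypothetical: message length k, s' < k
eps = 1 / s**3
h = s
t = max(2, round(log2(k) / log2(s)))
q = round(h / eps**2)               # Theta(h/eps^2) = Theta(s'^7); here exactly 2^70
assert comb(h + t, t) > k           # enough RM coefficients for the message
print(f"q = 2^{log2(q):.0f} = s'^7 and q^t = 2^{t * log2(q):.0f} = poly(k)")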

List-decoding RM
Final result: a short list of circuits, each of size poly(q)·poly(s') = poly(s'); one of them computes f exactly!
⇒ a circuit for f of size poly(s')
[figure: candidate circuits D_1, …, D_j, each answering position i using oracle access to R]

Putting it all together
Theorem 1 (IW, STV): If E contains functions that require size-2^{Ω(n)} circuits, then E contains 2^{Ω(n)}-unapproximable functions.
Theorem (NW): If E contains 2^{Ω(n)}-unapproximable functions, then BPP = P.
Theorem 2 (IW): E requires exponential-size circuits ⇒ BPP = P.

Putting it all together
Proof of Theorem 1:
– let f = {f_n} be a function that requires circuits of size s = 2^{δn}
– define f' = {f_n'} to be the just-described encoding of (the truth table of) f
– we just showed: if f' is s' = 2^{δ'n}-approximable, then f is computable by a circuit of size poly(s'), which is below 2^{δn} for a suitably small constant δ'
– contradiction.

Extractors
PRGs can remove randomness from algorithms, but:
– they are based on an unproven assumption
– they incur a polynomial slow-down
– they are not applicable in other settings
Question: can we use "real" randomness?
– a physical source
– imperfect: biased, correlated

Extractors
The "hardware" side:
– what physical source?
– ask the physicists…
The "software" side:
– what is the minimum we need from the physical source?

Extractors
Imperfect sources:
– "stuck bits" (some positions always output the same value)
– "correlation" (bits depend on earlier bits)
– "more insidious correlation": e.g., a source tied to the positions of the perfect squares
There are specific ways to get independent unbiased random bits from specific imperfect physical sources.

Extractors
We want to assume we don't know the details of the physical source.
– a general model capturing all of these? yes: "min-entropy"
– a universal procedure for all imperfect sources? yes: "extractors"