Soft decoding, dual BCH codes, & better -biased list decodable codes

Presentation transcript:

Soft decoding, dual BCH codes, & better -biased list decodable codes Venkat Guruswami (U. of Washington) Atri Rudra (U. at Buffalo) Start with a slide, with work in blah blah. Worked in coding theory. Specific contribs blah and blah. State in course of work solved and closed a major theory problem– state this upfront

Error-correcting codes
Mapping C : {0,1}^k → {0,1}^n
Dimension k, block length n (n ≥ k)
Rate R = k/n ≤ 1
"Efficient" means polynomial in n (encoding and decoding complexity)
ρ: fraction of errors
(Figure: message x is encoded as C(x); the decoder sees a corrupted received word y and outputs x, or gives up.)

Codes for Complexity
Binary codes
Correct ρ = ½ − ε fraction of worst-case errors, ε → 0 (ε can depend on n)
Unique decoding cannot correct more than a ¼ fraction of errors

The list decoding problem
Given a code and an error parameter ρ
For any received word w, output all codewords c such that c and w disagree in at most a ρ fraction of places
(ρ, L)-list decodable code
(½ − ε, poly(n))-list decodable codes exist!
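A brute-force rendering of this definition, purely for intuition (exponential-time; the toy 3-bit code below is an illustrative stand-in, not a code from the talk):

```python
def frac_disagree(a, b):
    """Fraction of positions where two equal-length words disagree."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def list_decode(codewords, w, rho):
    """Brute-force (rho, L)-list decoding: return every codeword within
    relative Hamming distance rho of the received word w."""
    return [c for c in codewords if frac_disagree(c, w) <= rho]

# Toy 3-bit code (illustrative only).
code = ["000", "111", "011"]
print(list_decode(code, "001", 1/3))   # all codewords disagreeing in <= 1 place
```

With ρ = 1/3 the decoder returns both "000" and "011", since each differs from the received word in exactly one position.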

Applications of list decoding
Hardcore predicates from one-way functions [Goldreich-Levin 89; Impagliazzo 97; Ta-Shma-Zuckerman 01]
Worst-case vs. average-case hardness [Cai-Pavan-Sivakumar 99; Goldreich-Ron-Sudan 99; Sudan-Trevisan-Vadhan 99; Impagliazzo-Jaiswal-Kabanets 06]
Pseudorandomness [Trevisan 99; Shaltiel-Umans 01; Ta-Shma-Zuckerman 01; Guruswami-Umans-Vadhan 07]
Membership-comparable sets [Sivakumar 99]
Approximating NP-witnesses [Gal-Halevi-Lipton-Petrank 99; Kumar-Sivakumar 99]

Approximating NP-witnesses
L is NP-complete, with a polytime computable relation R: x ∈ L ⟺ ∃y such that R(x,y) holds
It is NP-hard to compute y given x
Approximating the certificate? Compute w such that y and w differ in ≤ (½ − ε)|y| positions
[Kumar-Sivakumar 99]: NP-hard to approximate

Kumar-Sivakumar reduction
Main idea: replace y with C(y), where C is list decodable from a ½ − ε fraction of errors
Define R′(x,z) to hold if ∃y s.t. R(x,y) holds and z = C(y); then x ∈ L ⟺ ∃z such that R′(x,z) holds
R′ is also polytime computable: decode C from zero errors in poly time
Keep track of the required properties of C
Claim: it is NP-hard to do approximation for the relation R′

Kumar-Sivakumar reduction
Recall R′(x,z) holds if ∃y s.t. R(x,y) holds and z = C(y)
Assume C is (½ − ε, poly(|z|))-list decodable, C can be list decoded in polytime, and we can compute w s.t. w and z differ in ≤ (½ − ε)|z| positions in poly time
Run the list decoder for C on w; it returns a list y1, …, ym
Check whether R(x, yi) holds for any i (if x ∈ L, the true certificate appears in the list)
Then we can check if x ∈ L in poly time!

Some possible concerns…
R′ is not the same as R!
[Gal-Halevi-Lipton-Petrank 99] considers hardness of approximation for the same R
Crucially uses properties of R and L

Required properties of C
C : {0,1}^k → {0,1}^n with n = poly(k, 1/ε)
C can be list decoded from (½ − ε)n errors in poly(n) time
C can be constructed in poly(n) time (not necessary in the NP-witness application)
Given these constraints, how small can n be?

Binary list decodable codes
Say n is O(k^b/ε^a), where a and b are constants
For the NP-witness application, the smallest achievable ε is ≈ 1/n^{1/a}
(½ − ε, poly(n))-list decodable codes with n in O(k/ε²) exist, and a cannot be < 2
Existence is proved via a random coding argument
What is the smallest a, given that b is some constant?

Efficient list decodable binary codes
a = 4, b = 2; i.e. n in O(k²/ε⁴) [Guruswami, Sudan 00]
  The code is explicit: Hadamard "concatenated" with Reed-Solomon
  ε-biased code: all non-zero codewords have a [½ − ε, ½ + ε] fraction of 1s; an important pseudorandom object
a = 3, b = 1; i.e. n in O(k/ε³) [Guruswami, R. 06+07]
  Construction and decoding time > n^{1/ε}, so only useful for constant ε

Our main result
a = 3 + γ, b = 3; i.e. n in O(k³/ε^{3+γ}) for any γ > 0
The code is explicit: dual BCH "concatenated" with "folded" Reed-Solomon
ε-biased code
All algorithms run in time poly(k, 1/ε)
More or less recovers the a = 3 result of [Guruswami, R. 06+07]

(Folded) Reed-Solomon codes
View the message as f(X) = m0 + m1·X + … + m_{k−1}·X^{k−1}
F* = {1, g, g², …, g^{N−1}} for a generator g (q = N + 1 = |F|)
Reed-Solomon (RS) codeword: (f(1), f(g), f(g²), …, f(g^{N−1}))
s-order folded RS codeword (N = ms): group each run of s consecutive evaluations f(g^{js}), f(g^{js+1}), …, f(g^{js+s−1}) into a single symbol, giving m symbols over the alphabet F^s
Optimal list decodable properties for large alphabets
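The folding operation itself is simple to sketch. The parameters below (field GF(7), generator g = 3, folding order s = 2, a degree-1 message) are toy choices for illustration, not values from the talk:

```python
# Toy s-order folding of a Reed-Solomon codeword over GF(7).
q = 7
g = 3                      # 3 generates GF(7)*: 1, 3, 2, 6, 4, 5

def rs_encode(msg):
    """Evaluate f(X) = msg[0] + msg[1] X + ... at 1, g, g^2, ..., g^(N-1)."""
    N = q - 1
    return [sum(m * pow(pow(g, i, q), j, q) for j, m in enumerate(msg)) % q
            for i in range(N)]

def fold(codeword, s):
    """Group each run of s consecutive evaluations into one symbol (N = m*s)."""
    assert len(codeword) % s == 0
    return [tuple(codeword[i:i + s]) for i in range(0, len(codeword), s)]

c = rs_encode([1, 2])      # f(X) = 1 + 2X over GF(7), so N = 6
print(fold(c, 2))          # three folded symbols, each a pair in GF(7)^2
```

Folding does not change the underlying evaluations; it only bundles them into larger alphabet symbols, which is what enables the improved list decoding radius.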

How do we get binary codes?
Concatenation of codes [Forney 66]
C1 : (GF(2^k))^K → (GF(2^k))^N ("outer" code)
C2 : GF(2)^k → (GF(2))^n ("inner" code)
C1 ∘ C2 : (GF(2))^{kK} → (GF(2))^{nN}
(Figure: message m is outer-encoded to C1(m) = w1 w2 … wN; each outer symbol wi is then inner-encoded to C2(wi).)
Typically k = O(log N)
Brute-force decoding for the inner code
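A minimal sketch of the concatenation C1 ∘ C2. The two toy codes here (4-ary repetition outer, 2-bit-plus-parity inner) are illustrative stand-ins, not the RS/dual-BCH codes of the talk; only the wiring matches the slide:

```python
def inner_encode(sym):
    """Toy inner code C2 : {0,...,3} -> {0,1}^3: two bits plus a parity bit."""
    b = format(sym, "02b")
    return b + str(b.count("1") % 2)

def outer_encode(msg):
    """Toy outer code C1: repetition over the 4-ary alphabet."""
    return msg + msg

def concat_encode(msg):
    """C1 o C2: outer-encode over the large alphabet, then replace each
    outer symbol by its inner encoding."""
    return "".join(inner_encode(s) for s in outer_encode(msg))

print(concat_encode([1, 3]))
```

The final length is n·N (here 3 · 4 = 12 bits from a 2-symbol message), matching the length accounting used later in the talk.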

List decoding concatenated codes
One natural decoding algorithm:
Divide the received word into blocks of length n
Find the closest C2 codeword for each block
Run the list decoding algorithm for C1
Loses information!

How do we "list decode" from lists?
Instead of keeping only the closest codeword, list decode C2: map each received block yi ∈ GF(2)^n to a list Ti ⊆ GF(2)^k of nearby inner codewords

The list recovery problem
Given a code and an error parameter ρ
For any sequence of lists T1, …, TN such that |Ti| ≤ t, for every i
Output all codewords c such that ci ∈ Ti for at least a 1 − ρ fraction of the i's
List decoding is the special case with t = 1

List decoding C1 ∘ C2
List decode C2 on each block y1, …, yN to get lists T1, …, TN, then run the list recovering algorithm for C1

[Guruswami, R. 06] result
Pick C1 to be folded RS of rate ε (it has optimal list recoverability)
Pick C2 to be a "suitably" chosen binary code of rate ε²
C1 ∘ C2 is list decodable from a ½ − ε fraction of errors

A closer look…
List decoding C2 from a ½ − ε fraction of errors puts into T1 every a with Dist(C2(a), y1) ≤ ½ − ε, say both a1 and a2
What if Dist(C2(a1), y1) << Dist(C2(a2), y1)? The lists treat both candidates equally

Weighted list recovery (soft decoding)
For each position j and each alphabet symbol ai ∈ {a1, …, aq}, the inner decoder produces a weight wi,j
The list recovering algorithm for C1 outputs all codewords with small weighted distance

List recovery as a special case
Threshold the weights: set wi,j = 0 if Dist(C2(ai), yj) ≤ ½ − ε and wi,j = 1 otherwise; this is exactly list decoding C2 followed by list recovery

A natural weight function
Give more weight to "closer" symbols: let wi,j grow with Dist(C2(ai), yj) (from 0 at distance 0 up to 1 at distance ½), so closer symbols incur a smaller weighted distance
List decode from a ½ − ε fraction of errors
C1 is RS and C2 is Hadamard; Hadamard codewords are evaluations of linear functions

RS concatenated with Hadamard [GS00]
Need to show, for every j: Σi wi,j² ≤ O(1); this follows from Parseval's identity (uses properties of the Hadamard code)
RS code of rate ε²: N ≤ O(K/ε²)
Hadamard code: n = q
Final length = nN = N² ≤ O(K²/ε⁴)
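The length accounting on this slide, written out (assuming, as the slide's parameters suggest, that the outer alphabet size q is about N, so the inner Hadamard code has length n = q ≈ N):

```latex
\begin{aligned}
\text{outer RS of rate } \varepsilon^2 &:\quad N = O(K/\varepsilon^2), \quad q \approx N \\
\text{inner Hadamard on } \log q \text{ message bits} &:\quad n = q \approx N \\
\text{final length} &:\quad nN \approx N^2 = O\!\left(K^2/\varepsilon^4\right)
\end{aligned}
```

The same accounting with n ≈ N² (dual BCH) gives the N³ bound appearing later in the talk.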

Using order-s folded RS as the outer code
Need to show, for every j: Σi wi,j^{s+1} ≤ O(1) … (*), s ≥ 1
Folded RS of rate ε^{1+1/s} with Hadamard as the inner code: (*) is true
But n = q^s = N^s, as the folded RS alphabet size is q^s: too large!

Using dual BCH as the inner code
Need to show, for every j: Σi wi,j^{s+1} ≤ O(1) … (*), s ≥ 1
Folded RS of rate ε^{1+1/s} with dual BCH as the inner code: (*) follows from [Kaufman-Litsyn 05]
n ≈ N² suffices
Final length = N³ ≤ O(k³/ε^{3+3/s})

Some comments
The final codes are ε-biased: dual BCH codes have all non-zero weights close to ½ (by the Weil-Carlitz-Uchiyama bound)
Dual BCH is a generalization of Hadamard
Our result needed to generalize both: the outer code (RS → folded RS) and the inner code (Hadamard → dual BCH)
Can replace folded RS with Parvaresh-Vardy codes
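Since dual BCH generalizes Hadamard, the ε-biased property is easiest to see for Hadamard, whose non-zero codewords all have relative weight exactly ½ (bias 0). A small brute-force check with toy parameters (k = 3; not from the talk):

```python
from itertools import product

def hadamard(k):
    """All 2^k Hadamard codewords: truth tables of the linear maps x -> <a, x>."""
    xs = list(product([0, 1], repeat=k))
    return ["".join(str(sum(ai * xi for ai, xi in zip(a, x)) % 2) for x in xs)
            for a in product([0, 1], repeat=k)]

def bias(codewords):
    """Max deviation of a non-zero codeword's relative weight from 1/2."""
    nonzero = [c for c in codewords if "1" in c]
    return max(abs(c.count("1") / len(c) - 0.5) for c in nonzero)

print(bias(hadamard(3)))   # 0.0: every non-zero Hadamard codeword has weight n/2
```

An ε-biased code relaxes this to weights in [½ − ε, ½ + ε]; for dual BCH that window is what the Weil-Carlitz-Uchiyama bound controls.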

Open questions
Improve on the cubic dependence on ε: needs new algorithmic ideas, e.g. using information about the outer code while decoding the inner codes
Rate ε² using concatenated codes is possible [Guruswami, R. 08]
Any application that also uses the ε-biasedness?