Optimal Lower Bounds for 2-Query Locally Decodable Linear Codes Kenji Obata

Codes
An error-correcting code C : {0,1}^n → {0,1}^m with a decoding procedure A s.t. for every y ∈ {0,1}^m with d(y, C(x)) ≤ δm, A(y) = x.

Locally Decodable Codes
– Weaken the power of A: it may look at only a constant number q of the input bits.
– Weaken the requirement: A need only recover a single given bit of x, and may fail with some probability bounded away from ½.
– Study initiated by Katz and Trevisan [KT00].

Locally Decodable Codes
Define a (q, δ, ε)-locally decodable code:
– A can make q queries (w.l.o.g. exactly q queries).
– For all x ∈ {0,1}^n, all y ∈ {0,1}^m with d(y, C(x)) ≤ δm, and all input bits i ∈ {1,…,n}: A(y, i) = x_i with probability ≥ ½ + ε.
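To make the definition concrete, here is a small brute-force checker (a sketch of my own, not from the talk): for tiny parameters it enumerates every message x, every corruption y of C(x) of weight at most δm, and every bit index i. The interfaces `encode` and `decode_prob` are hypothetical, caller-supplied names; `decode_prob(y, i)` should return the exact probability that the randomized decoder outputs 1.

```python
# Minimal sketch: exhaustively verify the (2, delta, eps)-LDC condition for a
# toy code.  `encode` and `decode_prob` are assumed, caller-supplied interfaces.
from itertools import combinations, product

def is_locally_decodable(encode, decode_prob, n, m, delta, eps):
    for x in product((0, 1), repeat=n):
        cx = encode(x)
        for k in range(int(delta * m) + 1):        # corruption weight <= delta*m
            for flips in combinations(range(m), k):
                y = list(cx)
                for j in flips:
                    y[j] ^= 1                      # flip the chosen positions
                for i in range(n):
                    p1 = decode_prob(y, i)         # Pr[decoder outputs 1]
                    p_correct = p1 if x[i] == 1 else 1.0 - p1
                    if p_correct < 0.5 + eps:      # every bit must be recoverable
                        return False
    return True
```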

LDC Applications
– Direct: scalable fault-tolerant information storage.
– Indirect: lower bounds for certain classes of private information retrieval schemes (more on this later).

Lower Bounds for LDCs
– [KT00] proved a general lower bound m ≥ n^{q/(q-1)} (at best n², but the known codes are exponential).
– For 2-query linear LDCs, Goldreich, Karloff, Schulman, and Trevisan [GKST02] proved an exponential bound m ≥ 2^{Ω(εδn)}.

Lower Bounds for LDCs
The restriction to linear codes is interesting, since the known LDC constructions are linear. But 2^{Ω(εδn)} is not quite right:
– The lower bound should increase arbitrarily as the decoding probability → 1 (ε → ½).
– There is no matching construction.

Lower Bounds for LDCs
In this work, we prove that for 2-query linear LDCs, m ≥ 2^{Ω(δn/(1-2ε))}.
Optimal: there is an LDC construction matching this to within a constant factor in the exponent.
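A quick numeric illustration of why the new exponent has the right shape (my own snippet; the constants hidden in the Ω's are suppressed): as ε → ½ the old exponent εδn stays bounded, while δn/(1-2ε) grows without bound.

```python
# Compare the shapes of the old exponent eps*delta*n and the new exponent
# delta*n/(1 - 2*eps) as the decoding advantage eps approaches 1/2.
delta, n = 0.05, 1000
for eps in (0.25, 0.4, 0.45, 0.49, 0.499):
    old = eps * delta * n
    new = delta * n / (1 - 2 * eps)
    print(f"eps={eps}: old exponent ~ {old:.1f}, new exponent ~ {new:.1f}")
```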

Techniques from [KT00]
Fact: an LDC is also a smooth code (A queries each position with roughly the same probability), so we can study smooth codes instead.
This connects LDCs to information-theoretic PIR schemes: q queries ↔ q servers, smoothness ↔ statistical indistinguishability.

Techniques from [KT00]
For i ∈ {1,…,n}, define the recovery graph G_i associated with C:
– Vertex set {1,…,m} (the bits of the codeword).
– Edges are pairs (q_1, q_2) such that, conditioned on A querying q_1, q_2, A(C(x), i) outputs x_i with probability > ½.
– Call these edges good edges (their endpoints contain information about x_i).

Techniques from [KT00]/[GKST02]
Theorem: If C is (2, c, ε)-smooth, then G_i contains a matching of size ≥ εm/c.
It is better to work with non-degenerate codes, in which each bit of the encoding depends on more than one bit of the message; for linear codes, the good edges are then non-trivial linear combinations.
Fact: any smooth code can be made non-degenerate (with constant loss in parameters).

Core Lemma [GKST02]
Let q_1,…,q_m be linear functions on {0,1}^n such that for every i ∈ {1,…,n} there is a set M_i of at least γm disjoint pairs of indices (j_1, j_2) with x_i = q_{j_1}(x) + q_{j_2}(x). Then m ≥ 2^{γn}.
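A concrete illustration of the lemma's hypothesis (my own sketch, not from the talk): for a linear code given explicitly by its m query functions, the pairs in M_i are pairs of positions whose functions sum to the standard basis vector e_i, and a greedy pass collects a disjoint family of them. For the Hadamard code this yields a perfect matching, i.e. γ = ½.

```python
# Greedily collect disjoint pairs (j1, j2) with Q[j1] + Q[j2] = e_i over GF(2),
# where Q lists the m query functions of a linear code as 0/1 tuples.
from itertools import product

def disjoint_pairs(Q, i):
    n = len(Q[0])
    e_i = tuple(1 if k == i else 0 for k in range(n))
    where = {}                                   # value of q -> positions holding it
    for j, q in enumerate(Q):
        where.setdefault(q, []).append(j)
    used, pairs = set(), []
    for j1, q in enumerate(Q):
        if j1 in used:
            continue
        partner = tuple(a ^ b for a, b in zip(q, e_i))   # q + partner = e_i
        for j2 in where.get(partner, []):
            if j2 != j1 and j2 not in used:
                pairs.append((j1, j2))
                used.update((j1, j2))
                break
    return pairs

# Hadamard code on {0,1}^3: the m = 8 linear functions a -> a.x
Q = list(product((0, 1), repeat=3))
print(len(disjoint_pairs(Q, 0)))   # 4 = m/2 disjoint pairs, so gamma = 1/2
```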

Putting it all together…
– If C is a (2, c, ε)-smooth linear code, then (by the reduction to a non-degenerate code + the existence of large matchings + the core lemma), m ≥ 2^{εn/4c}.
– If C is a (2, δ, ε)-locally decodable linear code, then (by the LDC → smooth reduction), m ≥ 2^{εδn/8}.
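The arithmetic behind these two bounds, ignoring the constant-factor losses from the non-degeneracy step (my own reconstruction: the constants 4 and 8 in the slide absorb those losses, and the [KT00] reduction is assumed to give smoothness c = 2/δ for 2-query LDCs):

```latex
\text{matching of size } \varepsilon m/c
  \;\Rightarrow\; \gamma = \varepsilon/c
  \;\Rightarrow\; m \ge 2^{\gamma n} = 2^{\varepsilon n/c};
\qquad
\text{KT reduction: } c = 2/\delta
  \;\Rightarrow\; m \ge 2^{\varepsilon\delta n/2}.
```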

Putting it all together…
Summary: locally decodable → smooth → big matchings → exponential size.
This work: locally decodable → big matchings (skip the smoothness reduction and argue directly about LDCs).

The Blocking Game
Let G(V, E) be a graph on n vertices, w a probability distribution on E, X_w an edge sampled according to w, and S a subset of V.
Define the blocking probability β_δ(G) as min_w ( max_{|S| ≤ δn} Pr(X_w intersects S) ).

The Blocking Game
We want to characterize β_δ(G) in terms of the size of a maximum matching M(G), or equivalently the defect d(G) = n - 2M(G).
Theorem: Let G be a graph with defect αn. Then β_δ(G) ≥ min(δ/(1-α), 1).
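To make the quantity concrete, here is a brute-force computation of the inner maximum for one fixed edge distribution w (my own sketch, not from the talk). The value obtained for any particular w is an upper bound on β_δ(G), while the theorem above gives the lower bound min(δ/(1-α), 1).

```python
# Inner max of the blocking game for a fixed edge distribution w:
# the largest probability that a w-sampled edge touches a set of <= delta*n vertices.
from itertools import combinations

def blocking_prob(n, w, delta):
    budget = int(delta * n)
    best = 0.0
    for S in combinations(range(n), budget):   # sets of size exactly budget suffice:
        S = set(S)                             # adding vertices never hurts
        best = max(best, sum(p for (u, v), p in w.items() if u in S or v in S))
    return best

# Uniform distribution on a perfect matching of 8 vertices (defect 0, so alpha = 0):
n, delta = 8, 0.25
w = {(0, 1): 0.25, (2, 3): 0.25, (4, 5): 0.25, (6, 7): 0.25}
print(blocking_prob(n, w, delta))   # 0.5, consistent with the bound delta/(1-0) = 0.25
```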

The Blocking Game
Define K(n,α) to be the edge-maximal graph on n vertices with defect αn.
[Figure: K(n,α) drawn as two vertex sets K_1 and K_2, of sizes αn and (1-α)n, with the clique part indicated.]

The Blocking Game
Optimization on K(n,α) is a relaxation of optimization on any graph with defect αn: if d(G) ≥ αn then β_δ(G) ≥ β_δ(K(n,α)).
So it is enough to think about K(n,α).

The Blocking Game
Intuitively, the best strategy for player 1 is to spread the distribution as uniformly as possible.
A (λ_1,λ_2)-symmetric distribution: all edges in (K_1,K_2) have weight λ_1, and all edges in (K_2,K_2) have weight λ_2.
Lemma: There is a (λ_1,λ_2)-symmetric distribution w such that β_δ(K(n,α)) = max_{|S| ≤ δn} Pr(X_w intersects S).

The Blocking Game
Claim: Let w_1,…,w_k be distributions such that max_{|S| ≤ δn} Pr(X_{w_i} intersects S) = β_δ(G). Then for any convex combination w = Σ γ_i w_i, max_{|S| ≤ δn} Pr(X_w intersects S) = β_δ(G).
Proof: For S ⊆ V, |S| ≤ δn, the intersection probability is Σ γ_i Pr(X_{w_i} intersects S) ≤ Σ γ_i β_δ(G) = β_δ(G). So max_{|S| ≤ δn} Pr(X_w intersects S) ≤ β_δ(G). But by the definition of β_δ(G), this maximum is also ≥ β_δ(G), so it equals β_δ(G).

The Blocking Game
Proof (of the Lemma): Let w be any distribution optimizing β_δ(G). If w does, then so does π(w) for every π ∈ Aut(G) = Γ. By the prior claim, so does w' = (1/|Γ|) Σ_{π ∈ Γ} π(w).
For e ∈ E and σ ∈ Γ, w'(e) = (1/|Γ|) Σ_{π ∈ Γ} w(π(e)) = (1/|Γ|) Σ_{π ∈ Γ} w(πσ(e)) = w'(σ(e)).
So if e, e' are in the same Γ-orbit, they have the same weight in w', i.e., w' is (λ_1,λ_2)-symmetric.

The Blocking Game
Claim: If w is (λ_1,λ_2)-symmetric then there is an S ⊆ V, |S| ≤ δn, such that Pr(X_w intersects S) ≥ min(δ/(1-α), 1).
Proof: If δ ≥ 1 - α then we can cover every edge. Otherwise, set S = any δn vertices of K_2. Then Pr = δ(1/(1-α) + ½ n²(1-α-δ)λ_2), which, for δ < 1 - α, is at least δ/(1-α) (optimized when λ_2 = 0).

The Blocking Game
Theorem: Let G be a graph with defect αn. Then β_δ(G) ≥ min(δ/(1-α), 1).
Proof: β_δ(G) ≥ β_δ(K(n,α)). The blocking probability on K(n,α) is optimized by some (λ_1,λ_2)-symmetric distribution, and for any such distribution w there are δn vertices blocking w with probability ≥ min(δ/(1-α), 1).

Lower Bound for LDLCs
We still need a degenerate → non-degenerate reduction (this time for LDCs instead of smooth codes).
Theorem: Let C be a (2, δ, ε)-locally decodable linear code. Then, for large enough n, there exists a non-degenerate (2, δ/2.01, ε)-locally decodable linear code C' : {0,1}^n → {0,1}^{2m}.

Lower Bound for LDLCs
Theorem: Let C be a (2, δ, ε)-LDLC. Then, for large enough n, m ≥ 2^{(1/4.03)·δn/(1-2ε)}.
Proof: Make C non-degenerate. Then:
local decodability → low blocking probability (at most ¼ - ½ε)
→ low defect (α ≤ 1 - (δ/2.01)/(1-2ε))
→ big matching (of size ≥ ½·(δ/2.01)/(1-2ε)·2m)
→ exponentially long encoding (m ≥ 2^{(1/4.02)·δn/(1-2ε) - 1}).

Matching Upper Bound
Hadamard code on {0,1}^n: y_i = a_i · x, where a_i runs through {0,1}^n.
– 2-query locally decodable.
– The recovery graphs are perfect matchings on the n-dimensional hypercube.
– Success parameter ε = ½ - 2δ.
Can use concatenated Hadamard codes (Trevisan):
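A runnable sketch of this code and its standard 2-query decoder (my own code and parameter choices, not from the slides): to decode x_i, query the codeword at a uniformly random a and at a ⊕ e_i, and XOR the two answers; this is correct whenever neither queried position is corrupted, i.e. with probability at least 1 - 2δ.

```python
# Hadamard code and its standard 2-query local decoder.
from itertools import product
import random

def hadamard_encode(x):
    """Codeword indexed by all a in {0,1}^n; position a holds <a, x> mod 2."""
    return [sum(ai * xi for ai, xi in zip(a, x)) % 2
            for a in product((0, 1), repeat=len(x))]

def pos(a):
    """Index of vector a in the product((0, 1), repeat=n) ordering."""
    p = 0
    for bit in a:
        p = 2 * p + bit
    return p

def decode_bit(y, i, n):
    """Query y at a random a and at a xor e_i; the XOR of the two answers
    equals x_i whenever both queried positions are uncorrupted."""
    a = tuple(random.randint(0, 1) for _ in range(n))
    b = tuple(bit ^ (1 if k == i else 0) for k, bit in enumerate(a))
    return y[pos(a)] ^ y[pos(b)]

# Demo: corrupt a delta fraction of the codeword, then decode bit 0 repeatedly.
random.seed(0)
n, delta = 6, 0.05
x = (1, 0, 1, 1, 0, 1)
y = hadamard_encode(x)
for j in random.sample(range(len(y)), int(delta * len(y))):   # 3 of 64 bits
    y[j] ^= 1
trials = 20000
hits = sum(decode_bit(y, 0, n) == x[0] for _ in range(trials))
print(hits / trials)   # empirical rate; the true success probability is >= 1 - 2*delta = 0.9
```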

Matching Upper Bound
Set c = (1-2ε)/4δ (it can be shown that for feasible values of δ, ε, c ≥ 1).
Divide the input into c blocks of n/c bits, and encode each block with the Hadamard code on {0,1}^{n/c}.
Each block has at most a cδ fraction of corrupt entries, so the code has recovery parameter ½ - 2·((1-2ε)/4δ)·δ = ε.
The code has length ((1-2ε)/4δ)·2^{4δn/(1-2ε)}.
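A sketch of the concatenated construction (mine; for simplicity it assumes c is an integer dividing n, and it redefines the Hadamard encoder from the previous sketch so it runs on its own):

```python
# Concatenated-Hadamard upper bound: split x into c blocks and Hadamard-encode
# each block separately; the total length is c * 2^(n/c).
from itertools import product

def hadamard_encode(x):
    return [sum(ai * xi for ai, xi in zip(a, x)) % 2
            for a in product((0, 1), repeat=len(x))]

def concatenated_hadamard(x, c):
    n = len(x)
    assert n % c == 0                      # assume c divides n for simplicity
    b = n // c
    out = []
    for k in range(c):
        out.extend(hadamard_encode(x[k * b:(k + 1) * b]))
    return out

# Example: eps = 1/4 and delta = 1/32 give c = (1 - 2*eps)/(4*delta) = 4,
# so a 12-bit message is encoded into 4 * 2^3 = 32 bits.
x = (1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0)
print(len(concatenated_hadamard(x, 4)))   # 32
```

To decode bit i, one runs the 2-query Hadamard decoder inside the block containing i, so the concatenated code is still 2-query.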

Conclusions
– There is a matching upper bound (the concatenated Hadamard code).
– There are new results for 2-query non-linear codes (but using apparently completely different techniques).
– q > 2?
– No analog of the core lemma for more queries.
– But the blocking game analysis might generalize to useful properties other than matching size.