Approximate List-Decoding and Uniform Hardness Amplification
Russell Impagliazzo (UCSD), Ragesh Jaiswal (UCSD), Valentine Kabanets (SFU)

Hardness Amplification
Given a hard function f, we can construct an even harder function F.

Hardness
A function f: {0,1}^n → {0,1} is called δ-hard for circuits of size s (respectively, for algorithms with running time t) if every circuit of size s (every algorithm with running time t) errs in predicting f on at least a δ fraction of the inputs, i.e., on at least δ·2^n inputs.
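Restated in symbols (circuit version only; a direct LaTeX transcription of the definition above):

```latex
% δ-hardness of f for circuits of size s (as defined on the slide):
\[
  \forall\, C \text{ of size} \le s:\qquad
  \Pr_{x \in \{0,1\}^n}\bigl[\, C(x) \neq f(x) \,\bigr] \;\ge\; \delta,
\]
% i.e., every such circuit errs on at least δ · 2^n of the 2^n inputs.
```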

XOR Lemma
Define f^⊕k: {0,1}^{nk} → {0,1} by f^⊕k(x_1, …, x_k) = f(x_1) ⊕ … ⊕ f(x_k).
XOR Lemma: If f is δ-hard for circuits of size s, then f^⊕k is (1/2 − ε)-hard for circuits of size s′, where ε = e^{−Ω(δk)} and s′ = s·poly(δ, ε).

XOR Lemma Proof: Ideal Case
Given a circuit C′ which computes f^⊕k on at least a (½ + ε) fraction of inputs, a reduction A outputs, with high probability, a circuit C which computes f on at least a (1 − δ) fraction of inputs.

XOR Lemma Proof CC A C (which computes f for at least (1 - δ) fraction of inputs) Advice (|Advice|=poly(1/ε)) C1C1 ClCl One of them computes f for at least (1 - δ) fraction of inputs l = 2 |Advice| = 2 poly(1/ε) (which computes f  k for at least (½ + ε) fraction of inputs) whp A “lesser” nonuniform reduction

Optimal List Size
Question: What list size should such a reduction target? Error-correcting codes give a good combinatorial answer.

XOR-based Code [T03]
Think of a binary message msg of length M = 2^n as the truth table of a Boolean function f: {0,1}^n → {0,1}. The code of msg has length M^k, where code(x_1, …, x_k) = f(x_1) ⊕ … ⊕ f(x_k).
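To make the encoding concrete, here is a minimal Python sketch of the XOR-based code. The exhaustive enumeration is for illustration only (for tiny n); in the actual construction the codeword is only ever accessed locally, one position at a time.

```python
# Minimal sketch of the XOR-based code [T03]: the message is the truth
# table of f: {0,1}^n -> {0,1}; the codeword position indexed by a k-tuple
# (x_1, ..., x_k) holds f(x_1) XOR ... XOR f(x_k).

from itertools import product

def xor_encode(msg, k):
    """msg: list of 0/1 bits of length M = 2^n (truth table of f).
    Returns the full codeword of length M^k as a dict keyed by k-tuples."""
    M = len(msg)
    codeword = {}
    for tup in product(range(M), repeat=k):   # all k-tuples of inputs
        bit = 0
        for x in tup:
            bit ^= msg[x]                     # XOR of f over the k coordinates
        codeword[tup] = bit
    return codeword

# Example: f is the parity function on n = 2 bits.
msg = [0, 1, 1, 0]                 # truth table of f(x) = x_0 XOR x_1
cw = xor_encode(msg, k=3)
assert cw[(1, 2, 3)] == msg[1] ^ msg[2] ^ msg[3]
```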

List Decoder
[Diagram: message m → XOR encoding → codeword cw → noisy channel (received word agrees with cw on ≈ ½ + ε of positions) → local approximate list decoder → list m_1, …, m_l, each ≈ (1 − δ)-close to m.]
Information-theoretically, l should be O(1/ε²).

The List Size
The proof of Yao's XOR Lemma yields an approximate local list-decoding algorithm for the XOR-code defined above, but the list size is 2^poly(1/ε) rather than the optimal poly(1/ε).
Goal: Match the information-theoretic bound on list decoding, i.e., get advice of length log(1/ε).

The Main Result

Given a circuit C′ that (½ + ε)-computes f^⊕k, together with advice of length log(1/ε), algorithm A outputs, with high probability, a circuit C that (1 − δ)-computes f, where ε = poly(1/k) and δ = O(k^{−0.1}). The running time of A and the size of C are at most poly(|C′|, 1/ε).

The Main Result  C  ((½ + ε)-computes f  k ) A C ((1 - δ)-computes f) ε = poly(1/k), δ = O(k -0.1 ) Running time of A and size of C is at most poly(|C  |, 1/ε) w.p. poly(ε)

The Main Result
Running A repeatedly (call this A′) produces circuits C_1, …, C_l with l = poly(1/ε) such that, with high probability, at least one of them (1 − ρ)-computes f. So we get a list size of poly(1/ε), which is optimal, but ε is large: ε = poly(1/k). This is an advice-efficient XOR Lemma.

Uniform Hardness Amplification

What we want: f hard w.r.t. BPP ⇒ g harder w.r.t. BPP.
What we get (via the advice-efficient XOR Lemma): f hard w.r.t. BPP/log ⇒ g harder w.r.t. BPP.

Uniform Hardness Amplification
What we can do: starting from f ∈ NP hard w.r.t. BPP, [BDCGL92] gives f′ ∈ NP hard w.r.t. BPP/log, and the advice-efficient XOR Lemma then gives a harder g w.r.t. BPP. The function g is not necessarily in NP, but g ∈ P^NP‖, where P^NP‖ denotes poly-time TMs that may make polynomially many parallel oracle queries to an NP oracle.
Overall: from h ∈ P^NP‖ that is 1/n^c-hard w.r.t. BPP, a simple average-case reduction yields g ∈ P^NP‖ that is (½ − 1/n^d)-hard w.r.t. BPP. Trevisan gives a weaker reduction (from 1/n^c-hardness to (½ − 1/(log n)^α)-hardness) but stays within NP.

Techniques

Advice-efficient Direct Product Theorem
A Sampling Lemma
Learning without advice: self-generated advice; fault-tolerant learning using faulty advice

Direct Product Theorem
Define f^k: {0,1}^{nk} → {0,1}^k by f^k(x_1, …, x_k) = f(x_1) | … | f(x_k) (concatenation).
Direct Product Theorem: If f is δ-hard for circuits of size s, then f^k is (1 − ε)-hard for circuits of size s′, where ε = e^{−Ω(δk)} and s′ = s·poly(δ, ε).
Goldreich-Levin Theorem: the XOR Lemma and the Direct Product Theorem are essentially saying the same thing.
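The following small Python sketch (helper names are illustrative, not from the paper) contrasts the two encodings and hints at the Goldreich-Levin connection: parities of subsets of the direct-product output are exactly XOR-type values.

```python
# Sketch contrasting the direct-product encoding f^k (concatenation)
# with the XOR encoding f^{⊕k} (parity of all k values).

def direct_product(f, xs):
    """f^k(x_1, ..., x_k): the tuple of all k values of f."""
    return tuple(f(x) for x in xs)

def xor_power(f, xs):
    """f^{⊕k}(x_1, ..., x_k): the parity of the k values of f."""
    bit = 0
    for x in xs:
        bit ^= f(x)
    return bit

def masked_parity(f, xs, mask):
    """Parity of f over the coordinates selected by a 0/1 mask; this is
    the inner product (mod 2) of f^k(xs) with mask, as in Goldreich-Levin."""
    return xor_power(f, [x for x, m in zip(xs, mask) if m])
```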

XOR Lemma from the Direct Product Theorem
Given C′ that (½ + ε)-computes f^⊕k, algorithm A_1, using the Goldreich-Levin Theorem, produces with high probability a circuit C_DP that poly(ε)-computes f^k. Algorithm A_2 then produces, with probability poly(ε), a circuit C that (1 − δ)-computes f. Here ε = poly(1/k) and δ = O(k^{−0.1}).

LEARN from [IW97]
Given a circuit C_DP that ε-computes f^k, together with advice consisting of n/ε² pairs (x, f(x)) for independent uniform x's, LEARN [IW97] outputs, with high probability, a circuit C that (1 − δ)-computes f, where ε = e^{−Ω(δk)}.

Goal
Replace LEARN [IW97] (which needs advice: n/ε² pairs (x, f(x)) for independent uniform x's, and achieves ε = e^{−Ω(δk)}) by LEARN′, which uses no advice and outputs, with probability poly(ε), a circuit C that (1 − δ)-computes f, now with ε = poly(1/k) and δ = O(k^{−0.1}). That is, we want to eliminate the advice (the (x, f(x)) pairs); in exchange, we are ready to compromise on the success probability of the randomized algorithm.

Self-generated advice

Imperfect Samples
We want to use the circuit C_DP to generate n/ε² pairs (x, f(x)) for independent uniform x's. We will settle for n/ε² pairs (x, b_x), where the distribution on the x's is statistically close to uniform and b_x = f(x) for most x's. We then run a fault-tolerant version of LEARN on C_DP and the generated pairs (x, b_x).

How to generate imperfect samples

A Sampling Lemma
[Diagram: the space {0,1}^{nk} of all k-tuples (x_1, x_2, x_3, …, x_k); the distribution D of a random coordinate of a uniformly random tuple is the uniform distribution.]

A Sampling Lemma
[Diagram: a subset G of {0,1}^{nk} with |G| ≥ ε·2^{nk}; a random tuple (x_1, …, x_k) is drawn from G.]
Let D be the distribution of a random coordinate of a random tuple drawn from G. Then Stat-Dist(D, U) ≤ ((log 1/ε)/k)^{1/2}.
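Stated as a formula (a direct LaTeX restatement of the slide, with D the distribution of a random coordinate of a random tuple from G):

```latex
% Sampling Lemma, as stated on the slide:
\[
  |G| \;\ge\; \varepsilon \cdot 2^{nk}
  \;\Longrightarrow\;
  \mathrm{StatDist}(D, U) \;\le\; \sqrt{\frac{\log(1/\varepsilon)}{k}} .
\]
```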

Getting Imperfect Samples
Let G be the subset of inputs on which C_DP(x) = f^k(x); |G| ≥ ε·2^{nk}. Pick a random k-tuple x, then pick a random subtuple x′ of size k^{1/2}. With probability ε, x lands in the "good" set G; conditioned on this, the Sampling Lemma says that x′ is close to uniformly distributed. If k^{1/2} exceeds the number of samples required by LEARN, then we are done (see the sketch below)! Else…
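Here is a minimal Python sketch of this sampling step, under the assumption that C_DP is given as a callable returning k answer bits; all names are illustrative.

```python
# Illustrative sketch of self-generated advice: draw a random k-tuple,
# hope it lands in the good set G (probability >= eps), and output a
# random sqrt(k)-subtuple, whose coordinates are close to uniform by
# the Sampling Lemma.

import math
import random

def generate_samples(C_DP, k, n):
    """Returns sqrt(k) pairs (x, b_x); if the tuple landed in G, the b_x
    are the correct bits of f and the x's are near-uniform."""
    xs = [random.getrandbits(n) for _ in range(k)]   # random k-tuple
    answers = C_DP(xs)                               # candidate bits of f^k(xs)
    m = int(math.isqrt(k))                           # subtuple size k^{1/2}
    idx = random.sample(range(k), m)                 # random sub-tuple
    return [(xs[i], answers[i]) for i in idx]
```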

Direct Product Amplification
Can we go from C_DP to a circuit C_DP′ that poly(ε)-computes f^{k′}, where (k′)^{1/2} > n/ε²? Not quite, but we can go from C_DP to a C_DP′ such that, for at least a poly(ε) fraction of k′-tuples x, C_DP′(x) and f^{k′}(x) agree on most bits.

Putting Everything Together

Start with C_DP for f^k. DP Amplification yields C_DP′ for f^{k′}; Sampling yields pairs (x, b_x); fault-tolerant LEARN then outputs a circuit C that (1 − δ)-computes f with probability > poly(ε). Repeat poly(1/ε) times to get a list containing a good circuit for f, w.h.p.
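A schematic Python sketch of the whole pipeline; dp_amplify and fault_tolerant_learn are hypothetical placeholders for the components described above, and generate_samples is the sketch from the sampling slide.

```python
# End-to-end sketch (all helper names hypothetical): amplify the direct
# product, self-generate advice samples, run fault-tolerant LEARN, and
# repeat to build the candidate list.

def approximate_list_decode(C_DP, k, n, trials):
    candidates = []
    for _ in range(trials):                      # trials ~ poly(1/eps)
        C_DP2 = dp_amplify(C_DP, k)              # C_DP' for f^{k'}
        samples = generate_samples(C_DP2, k, n)  # pairs (x, b_x), mostly correct
        C = fault_tolerant_learn(C_DP2, samples) # candidate circuit for f
        candidates.append(C)
    # w.h.p. some candidate (1 - delta)-computes f
    return candidates
```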

Open Questions

Advice-efficient XOR Lemma for smaller ε: for ε > exp(−k^α) we get a quasi-polynomial list size.
Can we get an advice-efficient hardness amplification result using a monotone combining function m (instead of ⊕)? Some results: [Buresh-Oppenheim, Kabanets, Santhanam] use monotone list-decodable codes to re-prove Trevisan's results for amplification within NP.

Thank You