Slide 1: Approximate List-Decoding and Uniform Hardness Amplification
Russell Impagliazzo (UCSD), Ragesh Jaiswal (UCSD), Valentine Kabanets (SFU)
Slide 2: Hardness Amplification
Given a hard function f, we can construct an even harder function F.
Slide 3: Hardness
A function f : {0,1}^n → {0,1} is called δ-hard for circuits of size s (algorithms with running time t) if every circuit of size s (every algorithm with running time t) errs in predicting f on at least a δ fraction of the inputs, i.e., on at least δ·2^n inputs.
Slide 4: XOR Lemma
Define f^k : {0,1}^{nk} → {0,1} by f^k(x_1, ..., x_k) = f(x_1) ⊕ ... ⊕ f(x_k).
XOR Lemma: If f is δ-hard for circuits of size s, then f^k is (1/2 - ε)-hard for circuits of size s', where ε = e^{-Ω(δk)} and s' = s·poly(δ, ε).
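The XOR combination itself is simple enough to state in code. A minimal Python sketch, with a toy parity function standing in for the hard function f (both `xor_k` and this choice of `f` are illustrative, not from the paper):

```python
from functools import reduce

def xor_k(f, xs):
    """f^k(x_1, ..., x_k) = f(x_1) XOR ... XOR f(x_k)."""
    return reduce(lambda acc, x: acc ^ f(x), xs, 0)

# Toy stand-in for a hard Boolean function: parity of the bits of x.
def f(x):
    return bin(x).count("1") % 2

bit = xor_k(f, (1, 2, 3))  # XOR of f(1), f(2), f(3)
```

The lemma says that even though each individual f(x_i) may be predictable with advantage 1 - δ, the XOR of k independent evaluations is nearly unpredictable.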
Slide 5: XOR Lemma Proof: Ideal Case
Given a circuit C' that computes f^k on at least a (1/2 + ε) fraction of inputs, the reduction A outputs, with high probability, a circuit C that computes f on at least a (1 - δ) fraction of inputs.
Slide 6: XOR Lemma Proof
In reality, A also needs advice of length poly(1/ε): given C' (which computes f^k on at least a (1/2 + ε) fraction of inputs), A outputs, with high probability, circuits C_1, ..., C_l with l = 2^{|Advice|} = 2^{poly(1/ε)}, at least one of which computes f on at least a (1 - δ) fraction of inputs. This is a "lesser" nonuniform reduction.
Slide 7: Optimal List Size
Question: How small a list size should we target? Error-correcting codes provide a good combinatorial answer.
Slide 8: XOR-based Code [T03]
Think of a binary message msg of M = 2^n bits as the truth table of a Boolean function f. The code of msg has length M^k, where code(x_1, ..., x_k) = f(x_1) ⊕ ... ⊕ f(x_k).
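Concretely, the encoding map can be sketched as follows (a toy sketch for tiny parameters; `xor_encode` is an illustrative name, and real parameters make the codeword exponentially long, so only local access is ever used):

```python
from itertools import product

def xor_encode(msg, k):
    """XOR-based code: msg (length M = 2^n) is the truth table of f;
    the codeword has one bit per k-tuple of inputs,
    code(x_1, ..., x_k) = f(x_1) XOR ... XOR f(x_k)."""
    code = {}
    for tup in product(range(len(msg)), repeat=k):
        bit = 0
        for x in tup:
            bit ^= msg[x]   # msg[x] = f(x)
        code[tup] = bit
    return code

cw = xor_encode([0, 1, 1, 0], 2)   # M = 4, k = 2: codeword has 4^2 = 16 bits
```

Note the blow-up from M to M^k bits; this is why the decoder on the next slide must be local, querying only a few codeword positions.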
Slide 9: List Decoder
A message m is XOR-encoded into a codeword cw; the channel corrupts it down to ≈(1/2 + ε) agreement; a local approximate list-decoder then outputs messages m_1, ..., m_l, each required only to ≈(1 - δ)-agree with a valid decoding. Information-theoretically, l should be O(1/ε²).
Slide 10: The List Size
The proof of Yao's XOR Lemma yields an approximate local list-decoding algorithm for the XOR-code defined above, but the list size is 2^{poly(1/ε)} rather than the optimal poly(1/ε). Goal: match the information-theoretic bound on list-decoding, i.e., get advice of length log(1/ε).
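To see how fast the gap between the two list sizes grows, a back-of-the-envelope comparison (the exact polynomials in the exponent are not specified here, so 2^{1/ε} vs. 1/ε² is only a representative instance):

```python
# Classical-proof list size (~2^{1/eps}) vs. the information-theoretic
# bound (~1/eps^2), for a few values of eps.
rows = []
for eps in (1/4, 1/8, 1/16):
    classical = 2 ** int(1 / eps)   # exponential in 1/eps
    optimal = int(1 / eps ** 2)     # polynomial in 1/eps
    rows.append((eps, classical, optimal))
```

Already at ε = 1/16 the classical list has 65536 candidates against 256 for the optimal bound, and the gap widens super-polynomially as ε shrinks.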
Slide 11: The Main Result
Slide 12: The Main Result (with advice)
Given a circuit C' that (1/2 + ε)-computes f^k, together with advice of length log(1/ε), the algorithm A outputs, with high probability, a circuit C that (1 - δ)-computes f, where ε = poly(1/k) and δ = O(k^{-0.1}). The running time of A and the size of C are at most poly(|C'|, 1/ε).
Slide 13: The Main Result (without advice)
Given a circuit C' that (1/2 + ε)-computes f^k, the algorithm A outputs, with probability poly(ε), a circuit C that (1 - δ)-computes f, where ε = poly(1/k) and δ = O(k^{-0.1}). The running time of A and the size of C are at most poly(|C'|, 1/ε).
Slide 14: The Main Result
We get a list size of poly(1/ε), which is optimal, but ε is large: ε = poly(1/k). Given C' that (1/2 + ε)-computes f^k, A produces (w.p. poly(ε)) a circuit C that (1 - δ)-computes f; repeating via A' yields circuits C_1, ..., C_l with l = poly(1/ε) such that, with high probability, at least one of them (1 - ρ)-computes f. This is an advice-efficient XOR Lemma.
Slide 15: Uniform Hardness Amplification
Slide 16:
What we want: f hard w.r.t. BPP ⇒ g harder w.r.t. BPP.
What we get (via the advice-efficient XOR Lemma): f hard w.r.t. BPP/log ⇒ g harder w.r.t. BPP.
Slide 17: Uniform Hardness Amplification
What we can do: starting from f ∈ NP hard w.r.t. BPP, [BDCGL92] gives f' ∈ NP hard w.r.t. BPP/log, and the advice-efficient XOR Lemma then gives a harder g w.r.t. BPP. However, g is not necessarily in NP; rather g ∈ P^{NP||}, where P^{NP||} is the class of poly-time TMs that may make polynomially many parallel oracle queries to an NP oracle. Thus from h ∈ P^{NP||} that is 1/n^c-hard w.r.t. BPP, a simple average-case reduction gives g ∈ P^{NP||} that is (1/2 - 1/n^d)-hard w.r.t. BPP. Trevisan gives a weaker reduction (from 1/n^c-hardness to (1/2 - 1/(log n)^α)-hardness) but stays within NP.
Slide 18: Techniques
Slide 19:
- Advice-efficient Direct Product Theorem
- A Sampling Lemma
- Learning without advice:
  - self-generated advice
  - fault-tolerant learning using faulty advice
Slide 20: Direct Product Theorem
Define f^k : {0,1}^{nk} → {0,1}^k by f^k(x_1, ..., x_k) = f(x_1) | ... | f(x_k) (concatenation).
Direct Product Theorem: If f is δ-hard for circuits of size s, then f^k is (1 - ε)-hard for circuits of size s', where ε = e^{-Ω(δk)} and s' = s·poly(δ, ε).
Goldreich-Levin Theorem: the XOR Lemma and the Direct Product Theorem are saying the same thing.
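The concatenation version, and the Goldreich-Levin flavor of the connection (an XOR over a subset of the direct-product output bits is an XOR-code bit for the corresponding subtuple), can be sketched as (all names and the toy `f` are illustrative):

```python
def direct_product(f, xs):
    """f^k(x_1, ..., x_k) = f(x_1) | ... | f(x_k): a k-bit string."""
    return tuple(f(x) for x in xs)

def f(x):
    return bin(x).count("1") % 2   # toy stand-in for a hard function

out = direct_product(f, (1, 2, 3))   # the 3-bit string (f(1), f(2), f(3))

# Goldreich-Levin connection: XORing a chosen subset of the direct-product
# output bits reproduces an XOR-lemma-style bit for that subtuple.
subset_xor = out[0] ^ out[2]         # = f(1) XOR f(3)
```

Intuitively, predicting the full k-bit string is at least as hard as predicting any subset-XOR of it, which is how the two theorems translate into each other.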
Slide 21: XOR Lemma from Direct Product Theorem
Given a circuit C' that (1/2 + ε)-computes f^k, algorithm A_1 uses the Goldreich-Levin Theorem to obtain, with high probability, a circuit C_DP that poly(ε)-computes the direct product f^k; algorithm A_2 then obtains, with probability poly(ε), a circuit C that (1 - δ)-computes f. Here ε = poly(1/k) and δ = O(k^{-0.1}).
Slide 22: LEARN from [IW97]
LEARN takes a circuit C_DP that ε-computes f^k, together with advice consisting of n/ε² pairs (x, f(x)) for independent uniform x's, and outputs, with high probability, a circuit C that (1 - δ)-computes f; here ε = e^{-Ω(δk)}.
Slide 23: Goal
[IW97] LEARN turns C_DP (ε-computing f^k, with ε = e^{-Ω(δk)}) plus advice of n/ε² pairs (x, f(x)) into a circuit C that (1 - δ)-computes f, with high probability. We want LEARN': the same guarantee with no advice, success probability poly(ε), ε = poly(1/k), and δ = O(k^{-0.1}). That is, we want to eliminate the advice (the (x, f(x)) pairs); in exchange we are willing to compromise on the success probability of the randomized algorithm.
Slide 24: Self-generated Advice
Slide 25: Imperfect Samples
We want to use the circuit C_DP to generate n/ε² pairs (x, f(x)) for independent uniform x's. We will settle for n/ε² pairs (x, b_x), where the distribution on the x's is statistically close to uniform and b_x = f(x) for most x's. Then we run a fault-tolerant version of LEARN on C_DP and the generated pairs (x, b_x).
Slide 26: How to Generate Imperfect Samples
Slide 27: A Sampling Lemma
(Diagram: a k-tuple (x_1, x_2, x_3, ..., x_k) drawn from {0,1}^{nk}; D is the uniform distribution.)
Slide 28: A Sampling Lemma
Now condition on the tuple landing in a subset G ⊆ {0,1}^{nk} with |G| ≥ ε·2^{nk}. Then the induced distribution D on a random subtuple satisfies Stat-Dist(D, U) ≤ ((log 1/ε)/k)^{1/2}.
Slide 29: Getting Imperfect Samples
Let G be the subset of inputs on which C_DP(x) = f^k(x); |G| ≥ ε·2^{nk}. Pick a random k-tuple x, then pick a random subtuple x' of size k^{1/2}. With probability ε, x lands in the "good" set G; conditioned on this, the Sampling Lemma says that x' is close to uniformly distributed. If k^{1/2} exceeds the number of samples required by LEARN, we are done. Else...
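The tuple-then-subtuple step can be sketched as follows (the helper name `imperfect_samples` is hypothetical; `C_DP` is any candidate direct-product circuit, here modeled as a function on tuples):

```python
import random

def imperfect_samples(C_DP, k, n, m):
    """Draw a random k-tuple of n-bit inputs, evaluate C_DP on it, and
    return (x, b_x) pairs read off a random subtuple of size m (~ sqrt(k)).
    If the k-tuple lands in the good set G, the Sampling Lemma says the
    subtuple is close to uniform and most b_x equal f(x)."""
    tup = [random.randrange(2 ** n) for _ in range(k)]
    answers = C_DP(tup)                  # claimed values of f on the tuple
    idx = random.sample(range(k), m)     # random subtuple positions
    return [(tup[i], answers[i]) for i in idx]

# Toy check with a C_DP that happens to be correct everywhere.
parity = lambda x: bin(x).count("1") % 2
pairs = imperfect_samples(lambda t: [parity(x) for x in t], k=16, n=8, m=4)
```

In the real reduction C_DP is only correct on an ε fraction of tuples, so the pairs are noisy, which is exactly why LEARN must be made fault-tolerant.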
Slide 30: Direct Product Amplification
Can we go from C_DP for f^k to a circuit C_DP' that poly(ε)-computes f^{k'}, with (k')^{1/2} > n/ε²? Not exactly, but approximately: from C_DP we can obtain C_DP' such that, for at least a poly(ε) fraction of k'-tuples x, C_DP'(x) and f^{k'}(x) agree on most bits.
Slide 31: Putting Everything Together
Slide 32:
Start with C_DP for f^k. DP Amplification gives C_DP' for f^{k'}; Sampling then yields pairs (x, b_x); fault-tolerant LEARN outputs a circuit C that (1 - δ)-computes f with probability > poly(ε). Repeat poly(1/ε) times to get a list containing a good circuit for f, w.h.p.
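The pipeline on this slide can be sketched at the driver level (every name here is a stand-in: `amplify`, `sample_pairs`, and `ft_learn` abstract the DP-amplification, sampling, and fault-tolerant LEARN steps, which are not implemented):

```python
def list_decode(C_DP, amplify, sample_pairs, ft_learn, trials):
    """One trial = amplify C_DP, self-generate (x, b_x) pairs, and run
    fault-tolerant LEARN on them. Each trial yields a (1 - delta)-good
    circuit for f with probability poly(eps), so trials = poly(1/eps)
    repetitions give a list containing a good circuit w.h.p."""
    candidates = []
    for _ in range(trials):
        C_amp = amplify(C_DP)            # DP Amplification: f^k -> f^{k'}
        pairs = sample_pairs(C_amp)      # self-generated (x, b_x) advice
        candidates.append(ft_learn(C_amp, pairs))
    return candidates

# Toy run with trivial stand-ins, just to exercise the control flow.
candidates = list_decode(C_DP=None,
                         amplify=lambda c: c,
                         sample_pairs=lambda c: [],
                         ft_learn=lambda c, pairs: (lambda x: 0),
                         trials=8)
```

The list structure is the point: no single trial is reliable, but the union of poly(1/ε) trials is, matching the optimal list size.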
Slide 33: Open Questions
Slide 34: Open Questions
Advice-efficient XOR Lemma for smaller ε: for ε > exp(-k^α) we currently get a quasi-polynomial list size. Can we get an advice-efficient hardness amplification result using a monotone combining function m (instead of ⊕)? Partial results: [Buresh-Oppenheim, Kabanets, Santhanam] use monotone list-decodable codes to re-prove Trevisan's results for amplification within NP.
Slide 35: Thank You