Hardness Amplification within NP against Deterministic Algorithms. Parikshit Gopalan (U. Washington & MSR-SVC), Venkatesan Guruswami (U. Washington & IAS).
Why Hardness Amplification? Goal: Show there are hard problems in NP. Lower bounds are out of reach. Cryptography and derandomization require average-case hardness. Revised Goal: Relate various kinds of hardness assumptions. Hardness Amplification: Start with mild hardness, amplify it.
Hardness Amplification. Generic Amplification Theorem: If there are problems in a class A that are mildly hard for algorithms in Z, then there are problems in A that are very hard for Z. (Here A is NP, EXP, or PSPACE; Z is P/poly, BPP, or P.)
PSPACE versus P/poly, BPP. A long line of work. Theorem: If there are problems in PSPACE that are worst-case hard for P/poly (BPP), then there are problems that are ½ + ε hard for P/poly (BPP). Yao, Nisan-Wigderson, Babai-Fortnow-Nisan-Wigderson, Impagliazzo, Impagliazzo-Wigderson1, Impagliazzo-Wigderson2, Sudan-Trevisan-Vadhan, Trevisan-Vadhan, Impagliazzo-Jaiswal-Kabanets, Impagliazzo-Jaiswal-Kabanets-Wigderson.
NP versus P/poly. O'Donnell. Theorem: If there are problems in NP that are 1 - δ hard for P/poly, then there are problems that are ½ + ε hard. Starts from an average-case assumption. Healy-Vadhan-Viola.
NP versus BPP. Trevisan '03. Theorem: If there are problems in NP that are 1 - δ hard for BPP, then there are problems that are ¾ + ε hard.
NP versus BPP. Trevisan '05. Theorem: If there are problems in NP that are 1 - δ hard for BPP, then there are problems that are ½ + ε hard. Buresh-Oppenheim-Kabanets-Santhanam: an alternate proof via monotone codes. Optimal up to ε.
Our results: Amplification against P. Theorem 1: If there is a problem in NP that is 1 - δ hard for P, then there is a problem which is ¾ + ε hard. Theorem 2: If there is a problem in PSPACE that is 1 - δ hard for P, then there is a problem which is ¾ + ε hard. Trevisan: 1 - δ hardness to 7/8 + ε for PSPACE. Goldreich-Wigderson: Unconditional hardness for EXP against P. Here δ = 1/n^100 and ε = 1/(log n)^100.
Outline of This Talk: 1. Amplification via Decoding. 2. Deterministic Local Decoding. 3. Amplification within NP.
Amplification via Decoding [Trevisan, Sudan-Trevisan-Vadhan]. Encode: f (mildly hard) → g (wildly hard). Decode: an approximation to g → f.
Amplification via Decoding. Case Study: PSPACE versus BPP. Encode: f (mildly hard) → g (wildly hard). f's truth table has size 2^n; g's truth table has size 2^{n^2}. The encoding is computable in space n^100, so g is in PSPACE.
Amplification via Decoding. Case Study: PSPACE versus BPP. Decode: an approximation to g → f, via a randomized local decoder (BPP). List-decoding beyond ¼ error.
Amplification via Decoding. Case Study: NP versus BPP. Encode: f (mildly hard) → g (wildly hard), where g is a monotone function M of f. M is computable in NTIME(n^100), so g is in NP. M needs to be noise-sensitive.
Amplification via Decoding. Case Study: NP versus BPP. Decode: an approximation to g → an approximation to f, via a randomized local decoder (BPP). Monotone codes are bad codes: we can only approximate f.
Outline of This Talk: 1. Amplification via Decoding. 2. Deterministic Local Decoding. 3. Amplification within NP.
Deterministic Amplification. Decode in P: is deterministic local decoding possible?
Deterministic Amplification. Deterministic local decoding in P? Difficulties: one can force an error on any fixed bit; we need a near-linear-length encoding (roughly 2^n → 2^n · n^100); and we need monotone codes for NP.
Deterministic Local Decoding… …up to the unique decoding radius. Deterministic local decoding to a 1 - δ approximation from ¾ + ε agreement. A monotone code construction with similar parameters. Main tool: ABNNR codes + GMD decoding [Guruswami-Indyk, Akavia-Venkatesan]. Open Problem: Go beyond unique decoding.
The ABNNR Construction. Expander graph with 2^n vertices and degree n^100.
The ABNNR Construction. Start with a binary code of small distance; the construction yields a code of large distance over a large alphabet. Expander graph with 2^n vertices and degree n^100.
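As a concrete reading of this symbol map, here is a minimal Python sketch; the accessor `neighbors(v)` (returning the left-neighbors of right vertex v) and the sizes are illustrative assumptions, not the construction's exact parameters.

```python
# Minimal sketch of the ABNNR symbol map (illustrative; `neighbors` is an
# assumed accessor for the bipartite expander, which in the talk has 2^n
# vertices on each side and degree n^100).

def abnnr_encode(codeword_bits, neighbors, num_right):
    """Each right vertex collects the bits sitting on its left-neighbors;
    that d-tuple is its symbol in the new code over the alphabet {0,1}^d."""
    return [tuple(codeword_bits[u] for u in neighbors(v))
            for v in range(num_right)]
```

Distance amplification comes from the expansion: codewords that differ on a small fraction of left bits already differ on most right symbols.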
Concatenated ABNNR Codes. Concatenate with an inner code of distance ½, giving a binary code of distance ½. [Guruswami-Indyk]: corrects ¼ error, but not locally. [Trevisan]: corrects 1/8 error, locally.
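Continuing the sketch above, the concatenation step simply re-encodes each d-bit right symbol with an inner binary code of distance ½; `inner_encode` is an assumed black box.

```python
def concatenated_abnnr_encode(codeword_bits, neighbors, num_right, inner_encode):
    """ABNNR symbol map followed by an inner binary code on each symbol,
    yielding a binary code of distance 1/2."""
    return [inner_encode(tuple(codeword_bits[u] for u in neighbors(v)))
            for v in range(num_right)]
```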
Decoding ABNNR Codes
Decoding ABNNR Codes Decode inner codes. Works if error < ¼. Fails if error > ¼.
Decoding ABNNR Codes Majority vote on the LHS. [Trevisan]: Corrects 1/8 fraction of errors.
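A sketch of this two-step local decoder, under assumed graph accessors: `right_neighbors(u)` lists the right-neighbors of left vertex u, `position_of(u, v)` gives u's index among v's neighbors, and `inner_decode` returns a d-bit symbol for a received inner block.

```python
def decode_majority(received_blocks, inner_decode, right_neighbors, position_of):
    """Local decoder: recover the bit at left vertex u by a plain majority
    vote over the decoded symbols of u's right-neighbors."""
    def left_bit(u):
        votes = [inner_decode(received_blocks[v])[position_of(u, v)]
                 for v in right_neighbors(u)]
        return int(2 * sum(votes) > len(votes))
    return left_bit
```

Roughly, with overall error rate below 1/8, fewer than half of the inner blocks can carry more than ¼ error, so by expansion most left vertices see a correct majority.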
GMD decoding [Forney '67]. Assign each inner decoding a confidence c ∈ [0,1]. If decoding succeeds, the error fraction δ ∈ [0, ¼]. With 0 error the confidence is 1; with ¼ error it is 0: c = 1 − 4δ. The inner decoder could return a wrong answer with high confidence… but this requires error close to ½.
GMD Decoding for ABNNR Codes. Attach a confidence c to each inner decoding. Classical GMD decoding: pick a threshold, erase low-confidence symbols, decode; this is non-local. Our approach: weighted majority. Theorem: Corrects a ¼ fraction of errors locally.
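A sketch of the confidence-weighted variant, with the same assumed accessors as before; `inner_decode_with_confidence` is assumed to return the decoded symbol together with its confidence c = 1 − 4δ.

```python
def decode_weighted_majority(received_blocks, inner_decode_with_confidence,
                             right_neighbors, position_of):
    """Local decoder: each right-neighbor of left vertex u votes for its
    decoded bit with weight equal to the confidence of its inner decoding."""
    def left_bit(u):
        weight_one = total = 0.0
        for v in right_neighbors(u):
            symbol, c = inner_decode_with_confidence(received_blocks[v])
            total += c
            weight_one += c * symbol[position_of(u, v)]
        return int(2 * weight_one > total)
    return left_bit
```

Weighting votes by confidence is what lifts the tolerance from 1/8 to a ¼ fraction of errors while keeping the decoder local.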
GMD Decoding for ABNNR Codes. Theorem: GMD decoding corrects a ¼ fraction of errors. Proof Sketch: 1. Globally, good nodes have more total confidence than bad nodes. 2. Locally, this holds for most neighborhoods of vertices on the LHS. The proof is similar to the Expander Mixing Lemma.
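For reference, the Expander Mixing Lemma that the argument parallels: for a d-regular graph on N vertices whose second-largest adjacency eigenvalue in absolute value is λ, and any vertex subsets S and T,

\[
\Bigl|\, e(S,T) - \frac{d\,|S|\,|T|}{N} \,\Bigr| \;\le\; \lambda \sqrt{|S|\,|T|}.
\]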
Outline of This Talk: 1. Amplification via Decoding. 2. Deterministic Local Decoding. 3. Amplification within NP: finding an inner monotone code [BOKS]; implementing GMD decoding.
The BOKS construction. Encodes x ∈ {0,1}^k into T(x) of length k^r. T(x): sample an r-tuple from x, apply the Tribes function. If x, y are balanced and dist(x, y) > δ, then dist(T(x), T(y)) ≈ ½. If x, y are very close, so are T(x), T(y). Decoding: brute force.
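A minimal sketch of this style of monotone inner code; the Tribes block width `w` and the enumeration of all r-tuples are illustrative assumptions rather than the exact [BOKS] parameters.

```python
from itertools import product

def tribes(bits, w):
    """Tribes: an OR of ANDs over consecutive blocks of width w (monotone)."""
    blocks = [bits[i:i + w] for i in range(0, len(bits), w)]
    return int(any(all(block) for block in blocks))

def monotone_encode(x, r, w):
    """One codeword bit per r-tuple of positions of the message x, so a
    message of length k maps to a codeword of length k^r."""
    return [tribes([x[i] for i in tpl], w)
            for tpl in product(range(len(x)), repeat=r)]
```

Monotonicity is immediate: flipping a message bit from 0 to 1 can only turn codeword bits from 0 to 1.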
GMD Decoding for Monotone Codes. Start with a balanced f and apply the concatenated ABNNR construction. The inner decoder returns the closest balanced message. Apply GMD decoding. Theorem: The decoder corrects a ¼ fraction of errors, approximately. The analysis becomes harder.
GMD Decoding for Monotone Codes. The inner decoder finds the closest balanced message. Even assuming 0 error, the decoder need not return the original message. Good nodes have few errors; bad nodes have many. Theorem: The decoder corrects a ¼ fraction of errors, approximately.
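A sketch of the brute-force inner decoder, assuming plain Hamming distance and any inner encoder such as `monotone_encode` above; the search is feasible because the inner message length k is small.

```python
from itertools import combinations

def hamming(a, b):
    """Hamming distance between two equal-length bit sequences."""
    return sum(x != y for x, y in zip(a, b))

def closest_balanced_message(received, k, encode):
    """Search all balanced messages (exactly k/2 ones) and return the one
    whose encoding is nearest to the received inner block."""
    best, best_dist = None, float('inf')
    for ones in combinations(range(k), k // 2):
        ones = set(ones)
        x = [int(i in ones) for i in range(k)]
        d = hamming(encode(x), received)
        if d < best_dist:
            best, best_dist = x, d
    return best
```

Here `encode` could be, for instance, `lambda x: monotone_encode(x, r, w)` from the sketch above.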
Beyond Unique Decoding… A deterministic local list-decoder: a set L of machines such that, for any received word, every nearby codeword is computed by some M ∈ L. Is this possible?