Average-case Complexity Luca Trevisan UC Berkeley.

Distributional Problem ⟨P, D⟩ P computational problem – e.g. SAT D distribution over inputs – e.g. formulas with n variables and 10n clauses

Positive Results: Algorithm that solves P efficiently on most inputs – Interesting when P is a useful problem and D a distribution arising “in practice” Negative Results: If [some hardness assumption], then no such algorithm – P useful, D natural: guides algorithm design – Manufactured P, D: still interesting for crypto, derandomization


Holy Grail If there is an algorithm A that solves P efficiently on most inputs from D Then there is an efficient worst-case algorithm for the complexity class that P belongs to

Part (1) In which the Holy Grail proves elusive

The Permanent Perm(M) := Σ_σ Π_i M(i, σ(i)), summing over all permutations σ Perm() is #P-complete Lipton (1990): If there is an algorithm that solves Perm() efficiently on most random matrices, Then there is an algorithm that solves it efficiently on all matrices (and #P ⊆ BPP)
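The definition above can be evaluated directly, in exponential time, by summing over all permutations. A minimal sketch (the function name `perm` is illustrative, not from the slides):

```python
from itertools import permutations

def perm(M):
    """Permanent of an n x n matrix: sum over all permutations sigma
    of the product M[0][sigma(0)] * ... * M[n-1][sigma(n-1)]."""
    n = len(M)
    total = 0
    for sigma in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= M[i][sigma[i]]
        total += prod
    return total
```

Unlike the determinant, there are no sign factors, which is what makes the permanent #P-complete rather than poly-time computable.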

Lipton’s Reduction Suppose operations are over a finite field of size > n A is a good-on-average algorithm (wrong on < a 1/(10(n+1)) fraction of matrices) Given M, pick a random matrix X and compute A(M+X), A(M+2X), …, A(M+(n+1)X) With high probability these equal Perm(M+X), Perm(M+2X), …, Perm(M+(n+1)X)

Lipton’s Reduction Given Perm(M+X), Perm(M+2X), …, Perm(M+(n+1)X) Find the univariate degree-n polynomial p such that p(t) = Perm(M+tX) for t = 1, …, n+1 (Perm(M+tX) is a polynomial of degree n in t) Output p(0) = Perm(M)
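The two slides above can be combined into a runnable sketch. Assumptions not in the slides: the field is Z_P for a prime P > n+1, and the good-on-average algorithm A is played, for testing purposes, by an exact brute-force permanent; the names `perm_mod`, `lagrange_at_zero`, and `lipton` are illustrative.

```python
import random
from itertools import permutations

P = 101  # prime field size; must exceed n + 1

def perm_mod(M):
    # Brute-force permanent mod P; stands in for the oracle A.
    n = len(M)
    total = 0
    for sigma in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod = prod * M[i][sigma[i]] % P
        total = (total + prod) % P
    return total

def lagrange_at_zero(ts, vs):
    # Value at t = 0 of the unique degree-(len(ts)-1) polynomial
    # through the points (ts[i], vs[i]), working mod P.
    total = 0
    for i in range(len(ts)):
        num, den = 1, 1
        for j in range(len(ts)):
            if j != i:
                num = num * (-ts[j]) % P
                den = den * (ts[i] - ts[j]) % P
        total = (total + vs[i] * num % P * pow(den, P - 2, P)) % P
    return total

def lipton(M, A):
    # Random self-reduction: Perm(M + tX) is a degree-n polynomial in t,
    # so n+1 values determine it; its value at t = 0 is Perm(M).
    n = len(M)
    X = [[random.randrange(P) for _ in range(n)] for _ in range(n)]
    ts = list(range(1, n + 2))
    vs = [A([[(M[i][j] + t * X[i][j]) % P for j in range(n)]
             for i in range(n)]) for t in ts]
    return lagrange_at_zero(ts, vs)
```

With an exact oracle the output always equals Perm(M) mod P; with a good-on-average A, each query M + tX is uniformly distributed, so a union bound over the n+1 queries makes all answers correct with high probability.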

Improvements / Generalizations Can handle a constant fraction of errors [Gemmell-Sudan] Works for PSPACE-complete, EXP-complete, … problems [Feigenbaum-Fortnow, Babai-Fortnow-Nisan-Wigderson]: encode the problem as a polynomial

Strong Average-Case Hardness [Impagliazzo, Impagliazzo-Wigderson] Manufacture problems in E, EXP, such that – a size-t circuit correct on a ½ + 1/t fraction of inputs implies – a size-poly(t) circuit correct on all inputs Motivation: [Nisan-Wigderson] P=BPP if there is a problem in E of exponential average-case complexity

Strong Average-Case Hardness [Impagliazzo, Impagliazzo-Wigderson] Manufacture problems in E, EXP, such that – a size-t circuit correct on a ½ + 1/t fraction of inputs implies – a size-poly(t) circuit correct on all inputs Motivation: [Impagliazzo-Wigderson] P=BPP if there is a problem in E of exponential worst-case complexity

Open Question 1 Suppose there are worst-case intractable problems in NP Are there average-case intractable problems?

Strong Average-Case Hardness [Impagliazzo, Impagliazzo-Wigderson] Manufacture problems in E, EXP, such that – a size-t circuit correct on a ½ + 1/t fraction of inputs implies – a size-poly(t) circuit correct on all inputs [Sudan-T-Vadhan] – IW result can be seen as coding-theoretic – Simpler proof using explicitly coding-theoretic ideas

Encoding Approach Viola proves that such an error-correcting code cannot be computed in AC0 Encoding an exponential-size truth table with an error-correcting code is not possible within PH

Problem-specific Approaches? [Ajtai] Proves that there is a lattice problem such that: – If there is efficient average-case algorithm – There is efficient worst-case approximation algorithm

Ajtai’s Reduction Lattice Problem – If there is an efficient average-case algorithm – There is an efficient worst-case approximation algorithm The approximation problem is in NP ∩ coNP Not NP-hard (unless NP = coNP)

Holy Grail Distributional Problem ⟨L, D⟩: – If there is an efficient average-case algorithm – P=NP (or NP in BPP, or NP has poly-size circuits, …) Already seen: no “encoding” approach works Can extensions of Ajtai’s approach work?

A Class of Approaches L problem in NP, D distribution of inputs R reduction of SAT to ⟨L, D⟩: Given instance f of SAT, – R produces instances x_1, …, x_k of L, each distributed according to D – Given L(x_1), …, L(x_k), R is able to decide f If there is a good-on-average algorithm for ⟨L, D⟩, we solve SAT in polynomial time [cf. Lipton’s work on the Permanent]

A Class of Approaches L, W problems in NP, D (samplable) distribution of inputs R reduction of W to ⟨L, D⟩: Given instance w of W, – R produces instances x_1, …, x_k of L, each distributed according to D – Given L(x_1), …, L(x_k), R is able to decide w If there is a good-on-average algorithm for ⟨L, D⟩, we solve W in polynomial time; Can W be NP-complete?

A Class of Approaches Given instance w of W, – R produces instances x_1, …, x_k of L, each distributed according to D – Given L(x_1), …, L(x_k), R is able to decide w Given a good-on-average algorithm for ⟨L, D⟩, we solve W in polynomial time; If we have such a reduction, and W is NP-complete, we have the Holy Grail! Feigenbaum-Fortnow: W is in “coNP” (nonuniform coNP)

Feigenbaum-Fortnow Given instance w of W, – R produces instances x_1, …, x_k of L, each distributed according to D – Given L(x_1), …, L(x_k), R is able to decide w Using R, Feigenbaum-Fortnow design a 2-round interactive proof with advice for coW Given w, the Prover convinces the Verifier that R rejects w after seeing L(x_1), …, L(x_k)

Feigenbaum-Fortnow (special case k = 1) Given instance w of W, – R produces an instance x of L distributed as D – w ∈ W iff x ∈ L Suppose we know Pr_D[x ∈ L] = ½ Verifier: runs R(w) independently m times, getting x_1, x_2, …, x_m, and sends them to the Prover Prover: for each x_i replies (Yes, certificate w_i) or No Verifier accepts iff all m simulations of R reject and m/2 ± sqrt(m) of the answers are certified Yes

Feigenbaum-Fortnow (general case) Given instance w of W, p := Pr_D[x_i ∈ L] – R produces instances x_1, …, x_k of L, each distributed according to D – Given L(x_1), …, L(x_k), R is able to decide w Verifier: runs R(w) independently m times, getting x_1^1, …, x_k^1, …, x_1^m, …, x_k^m, and sends all km instances to the Prover Prover: for each instance replies (Yes, certificate) or No Verifier accepts iff – pkm ± sqrt(pkm) answers are certified Yes – R rejects in each of the m simulations

Generalizations Bogdanov-Trevisan: arbitrary non-adaptive reductions Main Open Question: What happens with adaptive reductions?

Open Question 1 Prove the following: Suppose: W, L are in NP, D is a samplable distribution, R is a poly-time (possibly adaptive) reduction such that – If A solves ⟨L, D⟩ on a 1 - 1/poly(n) fraction of inputs – Then R with oracle A solves W on all inputs Then W is in “coNP”

By the Way Probably impossible by current techniques: If NP is not contained in BPP There is a samplable distribution D and an NP problem L Such that ⟨L, D⟩ is hard on average

By the Way Probably impossible by current techniques: If NP is not contained in BPP There is a samplable distribution D and an NP problem L Such that, for every efficient A, A makes many mistakes solving L on D

By the Way Probably impossible by current techniques: If NP is not contained in BPP There is a samplable distribution D and an NP problem L Such that, for every efficient A, A makes many mistakes solving L on D [Gutfreund-Shaltiel-Ta-Shma] prove: If NP is not contained in BPP Then for every efficient A There is a samplable distribution D Such that A makes many mistakes solving SAT on D

Part (2) In which we amplify average-case complexity and we discuss a short paper

Revised Goal Proving “If NP contains worst-case intractable problems, then NP contains average-case intractable problems” might be impossible Average-case intractability comes in different quantitative degrees Are the different degrees equivalent?

Average-Case Hardness What does it mean for ⟨L, D⟩ to be hard on average? Suppose A is an efficient algorithm Sample x ~ D Then A(x) is noticeably likely to be wrong How noticeably?

Average-Case Hardness Amplification Ideally: If there is ⟨L, D⟩, L in NP, such that every poly-time algorithm (poly-size circuit) makes mistakes on > a 1/poly(n) fraction of inputs Then there is ⟨L’, D’⟩, L’ in NP, such that every poly-time algorithm (poly-size circuit) makes mistakes on > a ½ - 1/poly(n) fraction of inputs

Amplification “Classical” approach: Yao’s XOR Lemma Suppose: for every efficient A Pr_D[ A(x) = L(x) ] < 1 - δ Then: for every efficient A’ Pr_D[ A’(x_1, …, x_k) = L(x_1) xor … xor L(x_k) ] < ½ + (1 - 2δ)^k + negligible

Yao’s XOR Lemma Suppose: for every efficient A Pr_D[ A(x) = L(x) ] < 1 - δ Then: for every efficient A’ Pr_D[ A’(x_1, …, x_k) = L(x_1) xor … xor L(x_k) ] < ½ + (1 - 2δ)^k + negligible Note: computing L(x_1) xor … xor L(x_k) need not be in NP, even if L is in NP
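The (1 - 2δ)^k term is exactly what happens information-theoretically: if each of k guesses is independently wrong with probability δ, the XOR of the guesses is correct with probability ½ + ½(1 - 2δ)^k (the piling-up lemma). A small sketch, not from the slides, verifying this via the obvious recurrence:

```python
def xor_success(delta, k):
    """Probability that the XOR of k independent guesses, each correct
    with probability 1 - delta, is correct.  After one more guess the
    running XOR is correct iff both it and the new guess are right,
    or both are wrong."""
    p = 1.0  # the empty XOR (constant 0) is correct with probability 1
    for _ in range(k):
        p = p * (1 - delta) + (1 - p) * delta
    return p
```

The recurrence solves to ½ + ½(1 - 2δ)^k. The content of the XOR Lemma is that computationally bounded predictors cannot beat this decay by more than a negligible amount.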

O’Donnell Approach Suppose: for every efficient A Pr_D[ A(x) = L(x) ] < 1 - δ Then: for every efficient A’ Pr_D[ A’(x_1, …, x_k) = g(L(x_1), …, L(x_k)) ] < ½ + small(k, δ) For a carefully chosen monotone function g Now computing g(L(x_1), …, L(x_k)) is in NP, if L is in NP

Amplification (Circuits) Ideally: If there is ⟨L, D⟩, L in NP, such that every poly-time algorithm (poly-size circuit) makes mistakes on > a 1/poly(n) fraction of inputs Then there is ⟨L’, D’⟩, L’ in NP, such that every poly-time algorithm (poly-size circuit) makes mistakes on > a ½ - 1/poly(n) fraction of inputs Achieved by [O’Donnell, Healy-Vadhan-Viola] for poly-size circuits

Amplification (Algorithms) If there is ⟨L, D⟩, L in NP, such that every poly-time algorithm makes mistakes on > a 1/poly(n) fraction of inputs Then there is ⟨L’, D’⟩, L’ in NP, such that every poly-time algorithm makes mistakes on > a ½ - 1/polylog(n) fraction of inputs [T] [Impagliazzo-Jaiswal-Kabanets-Wigderson]: ½ - 1/poly(n), but for P^NP_|| (polynomial time with non-adaptive access to an NP oracle)

Open Question 2 Prove: If there is ⟨L, D⟩, L in NP, such that every poly-time algorithm makes mistakes on > a 1/poly(n) fraction of inputs Then there is ⟨L’, D’⟩, L’ in NP, such that every poly-time algorithm makes mistakes on > a ½ - 1/poly(n) fraction of inputs

Completeness Suppose we believe there is L in NP, D distribution, such that ⟨L, D⟩ is hard Can we point to a specific problem C, with a specific distribution D_C, such that ⟨C, D_C⟩ is also hard?

Completeness Suppose we believe there is L in NP, D distribution, such that ⟨L, D⟩ is hard Can we point to a specific problem C, with a specific distribution D_C, such that ⟨C, D_C⟩ is also hard? Must put a restriction on D, otherwise the assumption is the same as P != NP

Side Note Let K be the distribution in which x has probability proportional to 2^(-K(x)), where K(x) is the Kolmogorov complexity of x Suppose A solves ⟨L, K⟩ on a 1 - 1/poly(n) fraction of inputs of length n Then A solves L on all but finitely many inputs Exercise: prove it

Completeness Suppose we believe there is L in NP, D samplable distribution, such that ⟨L, D⟩ is hard Can we point to a specific problem C such that ⟨C, D_C⟩ is also hard?

Completeness Suppose we believe there is L in NP, D samplable distribution, such that ⟨L, D⟩ is hard Can we point to a specific problem C such that ⟨C, D_C⟩ is also hard? Yes we can! [Levin, Impagliazzo-Levin]

Levin’s Completeness Result There is an NP problem C such that: If there is L in NP, D computable distribution, such that ⟨L, D⟩ is hard Then ⟨C, Uniform⟩ is also hard

Reduction Need to define a reduction that preserves efficiency on average (Note: we haven’t yet defined efficiency on average) R is a (Karp) average-case reduction from ⟨A, D_A⟩ to ⟨B, D_B⟩ if 1. x ∈ A iff R(x) ∈ B 2. R(D_A) is “dominated” by D_B: Pr[ R(D_A) = y ] < poly(n) * Pr[ D_B = y ]

Reduction R is an average-case reduction from ⟨A, D_A⟩ to ⟨B, D_B⟩ if x ∈ A iff R(x) ∈ B R(D_A) is “dominated” by D_B: Pr[ R(D_A) = y ] < poly(n) * Pr[ D_B = y ] Suppose we have a good algorithm for ⟨B, D_B⟩ Then the algorithm, composed with R, is also good for ⟨A, D_A⟩ Solving ⟨A, D_A⟩ reduces to solving ⟨B, D_B⟩

Reduction If Pr[ Y = y ] < poly(n) * Pr[ D_B = y ] and we have a good algorithm for ⟨B, D_B⟩ Then the algorithm is also good for ⟨B, Y⟩ The reduction works for any notion of average-case tractability for which the above is true.
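Domination is what makes errors transfer from ⟨B, D_B⟩ back to ⟨A, D_A⟩: if the B-solver errs on a set of D_B-measure ε, the reduced solver errs with probability at most poly(n)·ε. A toy sketch with explicit finite pmfs (all function names hypothetical, not from the slides):

```python
def pushforward(dist_A, R):
    """pmf of R(x) when x ~ dist_A (explicit finite pmf as a dict)."""
    out = {}
    for x, p in dist_A.items():
        y = R(x)
        out[y] = out.get(y, 0.0) + p
    return out

def is_dominated(dist_R, dist_B, c):
    """Domination condition: Pr[R(D_A) = y] <= c * Pr[D_B = y] for all y."""
    return all(p <= c * dist_B.get(y, 0.0) + 1e-12 for y, p in dist_R.items())

def error_transfer(dist_R, dist_B, bad, c):
    """If the B-solver errs only on 'bad' (weight eps under D_B), the
    reduced A-solver hits 'bad', and hence errs, with prob <= c * eps."""
    eps = sum(dist_B.get(y, 0.0) for y in bad)
    hit = sum(p for y, p in dist_R.items() if y in bad)
    return hit, c * eps
```

For example, with D_A uniform on 2-bit strings, R(x) = x's first bit, and D_B uniform on 1-bit strings, R(D_A) is dominated by D_B with c = 1, and the error of any B-solver transfers with no loss.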

Levin’s Completeness Result Follow the presentation of [Goldreich] If ⟨BH, Uniform⟩ is easy on average Then for every L in NP and every computable distribution D, ⟨L, D⟩ is easy on average BH is non-deterministic Bounded Halting: given ⟨M, x, 1^t⟩, does M(x) accept within t steps?

Levin’s Completeness Result BH, non-deterministic Bounded Halting: given ⟨M, x, 1^t⟩, does M(x) accept within t steps? Suppose we have a good-on-average algorithm A for ⟨BH, Uniform⟩ Want to solve ⟨L, D⟩, where L is solvable by a NDTM M First try: x → ⟨M, x, 1^poly(|x|)⟩

Levin’s Completeness Result First try: x → ⟨M, x, 1^poly(|x|)⟩ Doesn’t work: x may have an arbitrary distribution, and we need the target string to be nearly uniform (high entropy) Second try: x → ⟨M’, C(x), 1^poly(|x|)⟩ Where C() is a near-optimal compression algorithm and M’ recovers x from C(x), then runs M

Levin’s Completeness Result Second try: x → ⟨M’, C(x), 1^poly(|x|)⟩ Where C() is a near-optimal compression algorithm and M’ recovers x from C(x), then runs M Works! Provided C(x) has length at most O(log n) + log 1/Pr_D[x] Possible if the cumulative distribution function of D is computable.
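A toy instance of the compression step, assuming an explicit pmf rather than just a computable CDF: Shannon-Fano-Elias coding gives each x a prefix-free codeword of length ⌈log2 1/Pr_D[x]⌉ + 1, matching the required O(log n) + log 1/Pr_D[x] bound up to constants (the function name `sfe_code` is illustrative):

```python
import math

def sfe_code(pmf):
    """Shannon-Fano-Elias code from an explicit pmf, given as a list of
    (value, prob) pairs in a fixed order.  Each x is encoded by the first
    ceil(log2(1/p)) + 1 bits of the binary expansion of the midpoint of
    x's interval under the CDF; the resulting code is prefix-free."""
    codes, F = {}, 0.0
    for x, p in pmf:
        mid = F + p / 2                      # midpoint of [F, F + p)
        length = math.ceil(math.log2(1 / p)) + 1
        bits, frac = '', mid
        for _ in range(length):              # binary expansion of mid
            frac *= 2
            b = int(frac)
            bits += str(b)
            frac -= b
        codes[x] = bits
        F += p
    return codes
```

The interval endpoints are exactly the values of the CDF, which is why computability of the CDF of D suffices; high-probability strings get short codewords, as the reduction needs.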

Impagliazzo-Levin Do the same, but for all samplable distributions A samplable distribution is not necessarily efficiently compressible in the coding-theory sense (e.g., the output of a PRG) Hashing provides “non-constructive” compression

Complete Problems BH with Uniform distribution Tiling problem with Uniform distribution [Levin] Generalized edge-coloring [Venkatesan-Levin] Matrix representability [Venkatesan-Rajagopalan] Matrix transformation [Gurevich]...

Open Question 3 L in NP, M NDTM for L specified by k bits Levin’s reduction incurs a 2^k factor in the fraction of “problematic” inputs (comparable to having a 2^k slowdown) So it is limited to problems having a non-deterministic algorithm of, say, 5 bytes Is this inherent?

More Reductions? Still relatively few complete problems Similar to the state of inapproximability before Papadimitriou-Yannakakis and the PCP theorem Would be good, as in Papadimitriou-Yannakakis, to find reductions between problems that are not known to be complete but are plausibly hard

Open Question 4 (Heard from Russell Impagliazzo) Prove that if 3SAT is hard on instances with n variables and 10n clauses, then it is also hard on instances with 12n clauses

See [slides, references, addendum to Bogdanov-T, coming soon] [average-case complexity forum] Impagliazzo, A personal view of average-case complexity, Structures ’95 Goldreich, Notes on Levin’s theory of average-case complexity, ECCC TR Bogdanov-T., Average-Case Complexity, Foundations and Trends in Theoretical Computer Science 2(1) (2006)