Ryan O'Donnell (CMU, IAS) Yi Wu (CMU, IBM) Yuan Zhou (CMU)

Solving linear equations
Given a set of linear equations over the reals, is there a solution satisfying all the equations?
–Easy: Gaussian elimination.
Noisy version: given a set of linear equations for which there is a solution satisfying 99% of the equations,
–can we find a solution that satisfies at least 1% of the equations?
I.e., is there a 99% vs 1% approximation algorithm for linear equations over the reals?
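A small, hedged illustration (my own sketch, not from the talk; all names and parameters are illustrative): the Python snippet below contrasts the two regimes on this slide, exact solvability handled by Gaussian elimination, and the noisy question of what fraction of equations a given assignment satisfies.

```python
import numpy as np

def frac_satisfied(A, b, x, tol=1e-9):
    """Fraction of equations A[i] . x = b[i] satisfied (up to tolerance)."""
    return float(np.mean(np.abs(A @ x - b) < tol))

# Exact version: a consistent square system is solved by Gaussian elimination.
A = np.array([[2.0, 1.0], [1.0, -1.0]])
b = np.array([3.0, 0.0])
x = np.linalg.solve(A, b)            # Gaussian elimination under the hood
print(frac_satisfied(A, b, x))       # 1.0

# Noisy version: plant a solution, then corrupt 1% of the right-hand sides.
rng = np.random.default_rng(0)
n, m = 50, 2000
A = rng.standard_normal((m, n))
x_star = rng.standard_normal(n)
b = A @ x_star
bad = rng.choice(m, size=m // 100, replace=False)
b[bad] += 1.0 + rng.standard_normal(len(bad))   # now only ~99% of equations are consistent
print(frac_satisfied(A, b, x_star))             # ~0.99
# The question on the slide: can an efficient algorithm find *some* assignment
# satisfying even a 1% fraction of such a system?
```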

Hardness of Max-3Lin(q)
Theorem. [Håstad '01] Given a set of linear equations modulo q, it is NP-hard to distinguish between
–there is a solution satisfying a (1 - ε)-fraction of the equations
–no solution satisfies more than a (1/q + ε)-fraction of the equations
The equations are sparse, of the form x_i + x_j - x_k = c (mod q).
So (1 - ε) vs (1/q + ε) approximation for Max-3Lin(q) is NP-hard.
Equivalently, a 3-query PCP with completeness (1 - ε) and soundness (1/q + ε).

Sparser equations: Max-2Lin(q)
Theorem. [KKMO '07] Assuming the Unique Games Conjecture, for any ε, δ > 0 there exists q such that (1 - ε) vs δ approximation for Max-2Lin(q) is NP-hard.

Max-3Lin / Max-2Lin:
–over [q]: Max-3Lin: (1 - ε) vs (1/q + ε) NP-hardness [Håstad '01]; Max-2Lin: (1 - ε) vs δ UG-hardness [KKMO '07]
–over integers/reals: Max-3Lin: ?; Max-2Lin: ?

Equations over integers: Max-3Lin(Z)
How hard is it to approximate Max-3Lin/Max-2Lin over large domains?
Intuitively, it should be harder: as the domain size increases,
–the soundness becomes smaller in both [Håstad '01] and [KKMO '07]
Obstacle to proving hardness:
–the "Long Code" becomes too long (even infinitely long)

Hardness of Max-3Lin(Z)
Theorem. [Guruswami-Raghavendra '07] For all ε, δ > 0, it is NP-hard to (1 - ε) vs δ approximate Max-3Lin(Z).
–a 3-query PCP over the integers
–implies the same hardness for Max-3Lin(R)
The proof follows [Håstad '01], but is much more involved:
–derandomized Long Code testing
–Fourier analysis with respect to an exponential distribution on Z^+

Max-3Lin / Max-2Lin:
–over [q]: Max-3Lin: (1 - ε) vs (1/q + ε) NP-hardness [Håstad '01]; Max-2Lin: (1 - ε) vs δ UG-hardness [KKMO '07]
–over integers/reals: Max-3Lin: (1 - ε) vs δ NP-hardness [GR '07]; Max-2Lin: ?

Unique Games over Integers?
Can we use the techniques of [Guruswami-Raghavendra '07] to prove a (1 - ε) vs δ UG-hardness for Max-2Lin(Z)?
–Seems difficult
–Posed as an open question in Raghavendra's thesis [Raghavendra '09]

Our results
Relatively easy to modify the KKMO proof to get:
–Theorem. For all ε, δ > 0, it is UG-hard to (1 - ε) vs δ approximate Max-2Lin(Z).
This also applies to Max-2Lin over the reals and over large domains.
–A simpler proof (and better parameters) for Max-3Lin(Z) hardness.

Dictatorship Test
Theorem. For all ε, δ > 0, it is UG-hard to (1 - ε) vs δ approximate Max-2Lin(Z).
By [KKMO '07], we only need to design a (1 - ε) vs δ 2-query dictatorship test over the integers.

Dictatorship Test (cont'd)
f: [q]^d -> Z is called a dictator if f(x_1, x_2, ..., x_d) = x_i (for some i).
Dictatorship test over [q]: a distribution over equations f(x) - f(y) = c (mod q)
–Completeness: for dictators, Pr[equation holds] ≥ 1 - ε
–Soundness: for functions far from dictators, Pr[equation holds] < δ
Such a test gives (1 - ε) vs δ hardness of Max-2Lin(q).

Dictatorship Test over Integers
A distribution over equations f(x) - f(y) = c
–Completeness: for dictators, Pr[f(x) - f(y) = c] ≥ 1 - ε
–Soundness: for functions far from dictators, Pr[f(x) - f(y) = c (mod q)] < δ
Such a test implies it is UG-hard to distinguish between
–a Max-2Lin(Z) instance that is (1 - ε)-satisfiable
–an instance that is not δ-satisfiable even when the equations are taken modulo q

Recap of KKMO Dictatorship Test

Back to KKMO Dictatorship Test
Dictatorship test over [q]: a distribution over equations f(x) - f(y) = c (mod q)
–Completeness: for dictators, Pr[equation holds] ≥ 1 - ε
–Soundness: for functions far from dictators, Pr[equation holds] < δ
KKMO Test:
–Pick x ∈ [q]^d uniformly at random
–Get y by rerandomizing each coordinate of x w.p. ε
–Test f(x) - f(y) = 0 (mod q)
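A minimal Python sketch (mine, not from the slides; parameters arbitrary) simulating the KKMO test on a dictator function. For a dictator, the test fails only when the dictating coordinate is rerandomized to a different value, so the empirical pass rate should be about 1 - ε(1 - 1/q) ≥ 1 - ε.

```python
import random

q, d, eps, trials = 7, 10, 0.1, 20000

def dictator(i):
    return lambda x: x[i]            # f(x_1, ..., x_d) = x_i

def kkmo_test(f):
    x = [random.randrange(q) for _ in range(d)]
    # rerandomize each coordinate independently with probability eps
    y = [random.randrange(q) if random.random() < eps else xi for xi in x]
    return (f(x) - f(y)) % q == 0    # the tested equation f(x) - f(y) = 0 (mod q)

f = dictator(3)
print(sum(kkmo_test(f) for _ in range(trials)) / trials)   # ~ 1 - eps*(1 - 1/q)
```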

Back to KKMO Dictatorship Test (cont'd)
KKMO Test:
–Pick x ∈ [q]^d uniformly at random
–Get y by rerandomizing each coordinate of x w.p. ε
–Test f(x) - f(y) = 0 (mod q)
Soundness analysis: the "Majority Is Stablest" Theorem [MOO '05]
–If f is far from dictators and β-balanced, then Pr[f passes the test] < β^(ε/2)
–f is β-balanced: Pr[f(x) = a (mod q)] < β for all 0 ≤ a < q

Back to KKMO Dictatorship Test (cont'd)
KKMO Test:
–Pick x ∈ [q]^d uniformly at random
–Get y by rerandomizing each coordinate of x w.p. ε
–Test f(x) - f(y) = 0 (mod q)
Soundness analysis
–"Folding" trick: to make sure f is β-balanced
–Idea: when querying f(x) = f(x_1, x_2, ..., x_d), instead return g(x) = f(0, (x_2 - x_1) mod q, ..., (x_d - x_1) mod q) + x_1
–Dictators are not affected in the completeness analysis
–g(x) is 1/q-balanced
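To make the folding concrete, here is a brute-force Python check (an illustrative sketch with my own toy parameters, not part of the construction): it answers queries through the folded function g, verifies that g is exactly 1/q-balanced for an arbitrary f, and confirms that dictators are unchanged modulo q.

```python
import itertools, random
from collections import Counter

q, d = 3, 4

def fold(f, x):
    # folding: answer the query x with g(x) = f(0, (x_2 - x_1) mod q, ..., (x_d - x_1) mod q) + x_1
    shifted = (0,) + tuple((xi - x[0]) % q for xi in x[1:])
    return f(shifted) + x[0]

# g is 1/q-balanced even for an arbitrary (here random) f: [q]^d -> Z
table = {x: random.randrange(100) for x in itertools.product(range(q), repeat=d)}
f = lambda x: table[x]
counts = Counter(fold(f, x) % q for x in itertools.product(range(q), repeat=d))
print(counts)                        # each residue mod q occurs exactly q^(d-1) times

# dictators are unaffected modulo q: folding the i-th dictator still returns x_i (mod q)
i = 2
dictator = lambda x: x[i]
assert all(fold(dictator, x) % q == x[i] for x in itertools.product(range(q), repeat=d))
```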

Dictatorship Test for Max-2Lin(Z)
A distribution over equations f(x) - f(y) = c
–Completeness: for dictators, Pr[f(x) - f(y) = c] ≥ 1 - ε
–Soundness: for functions far from dictators, Pr[f(x) - f(y) = c (mod q)] < δ
If we use the KKMO test directly...
–Soundness: the same as before
–Completeness no longer holds, because
   when querying f(x) we get g(x) = (x_i - x_1) mod q + x_1
   when querying f(y) we get g(y) = (y_i - y_1) mod q + y_1
–For Max-2Lin(q): Pr[g(x) - g(y) = 0 (mod q)] ≥ 1 - ε
–For Max-2Lin(Z): Pr[g(x) - g(y) ≠ 0] ≥ Pr["wrap-around", i.e. exactly one of g(x), g(y) is ≥ q] ≈ 1/2

Our method, Step I: introducing the new "active folding"

The new "active folding" Completeness: Soundness: –Claim. g(x) = f(x 1 - c,..., x n - c) + c is 1/q-balanced –Proof. Pr x,c [f(x 1 - c,..., x n - c) + c = a mod q] = E c [Pr x [f(x 1 - c,..., x n - c) = a - c mod q] ] = E c [Pr x [f(x) = a - c mod q] ] = E x [Pr c [f(x) = a - c mod q] ] ≤ 1/q KKMO Test with active folding Pick x ∈ [q] d by random Get y by rerandomizing each coordinate of x w.p. ε Pick c, c' ∈ [q] by random, test f(x 1 - c,..., x n - c ) + c = f(y 1 - c',..., y n - c') + c' (mod q) mod q

Our method, Step II: "partial active folding"

"Partial active folding" Completeness: –f(x 1 - c,..., x n - c) + c = (x i - c) mod q + c = (x i - c) + c = x i w.p /q 0.5 –f(y 1 - c',..., y n - c') + c' = y i w.p /q 0.5 Pr[f(x 1 -c,..., x n -c)+c = f(y 1 -c',..., y n -c')+c'] ≥ 1 - ε - 2/q 0.5 KKMO Test with partial active folding for Max-2Lin(Z) Pick x ∈ [q] d by random Get y by rerandomizing each coordinate of x w.p. ε Pick c, c' ∈ [q 0.5 ] by random, test f(x 1 - c,..., x n - c ) + c = f(y 1 - c',..., y n - c') + c'

"Partial active folding" (cont'd) Completeness: Soundness: –Claim. g(x) = f(x 1 - c,..., x n - c) + c is 1/q 0.5 -balanced –Proof. Pr x,c [f(x 1 - c,..., x n - c) + c = a mod q] = E c [Pr x [f(x 1 - c,..., x n - c) = a - c mod q] ] = E c [Pr x [f(x) = a - c mod q] ] = E x [Pr c [f(x) = a - c mod q] ] ≤ 1/q 0.5 KKMO Test with partial active folding for Max-2Lin(Z) Pick x ∈ [q] d by random Get y by rerandomizing each coordinate of x w.p. ε Pick c, c' ∈ [q 0.5 ] by random, test f(x 1 - c,..., x n - c ) + c = f(y 1 - c',..., y n - c') + c'

"Partial active folding" (cont'd) Completeness: Soundness: –Claim. g(x) = f(x 1 - c,..., x n - c) + c is 1/q 0.5 -balanced –By Majority Is Stablest Theorem, when f is far from dictators Pr[f(x 1 -c,...,x n -c)+c = f(y 1 -c',...,y n -c')+c' mod q] < 1/q ε/4 KKMO Test with partial active folding for Max-2Lin(Z) Pick x ∈ [q] d by random Get y by rerandomizing each coordinate of x w.p. ε Pick c, c' ∈ [q 0.5 ] by random, test f(x 1 - c,..., x n - c ) + c = f(y 1 - c',..., y n - c') + c'

Application to Max-3Lin(Z)
Key idea in Max-2Lin(Z): "partial folding" to deal with the "wrap-around" event

Håstad's reduction for Max-3Lin(q)
Håstad's Matching Dictatorship Test, for f: [q]^L -> Z, g: [q]^R -> Z, π: [R] -> [L]:
–Pick x ∈ [q]^L, y ∈ [q]^R uniformly at random
–Let z ∈ [q]^R be such that z_i = (y_i + x_π(i)) mod q
–Rerandomize each coordinate of x, y, z w.p. ε
–Test f(0, x_2 - x_1, ..., x_L - x_1) + x_1 + g(y) = g(z) (mod q)
Completeness: if g is the i-th dictator and f is the π(i)-th dictator, then Pr[f, g pass the test] ≥ 1 - 3ε
Soundness: if f and g are far from being "matching dictators", then Pr[f, g pass the test] < 1/q + δ
This gives (1 - 3ε) vs (1/q + δ) NP-hardness of Max-3Lin(q).
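As a completeness sanity check (an illustrative Python sketch, not the actual PCP reduction; π and all parameters here are arbitrary), the simulation below runs Håstad's test on a matching pair of dictators; the empirical pass rate should be at least about 1 - 3ε.

```python
import random

q, L, R, eps, trials = 7, 5, 8, 0.05, 20000
pi = [random.randrange(L) for _ in range(R)]     # projection pi: [R] -> [L]
i = 2
g = lambda y: y[i]                               # g is the i-th dictator
f = lambda x: x[pi[i]]                           # f is the pi(i)-th dictator

def noisy(v):
    return [random.randrange(q) if random.random() < eps else vi for vi in v]

def hastad_test(f, g):
    x = [random.randrange(q) for _ in range(L)]
    y = [random.randrange(q) for _ in range(R)]
    z = [(y[j] + x[pi[j]]) % q for j in range(R)]
    x, y, z = noisy(x), noisy(y), noisy(z)       # rerandomize each coordinate w.p. eps
    folded = [0] + [(x[j] - x[0]) % q for j in range(1, L)]
    return (f(folded) + x[0] + g(y) - g(z)) % q == 0

print(sum(hastad_test(f, g) for _ in range(trials)) / trials)   # ~ 1 - 3*eps
```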

Our reduction for Max-3Lin(Z)
Matching Dictatorship Test with partial active folding, for f: [q^2]^L -> Z, g: [q^3]^R -> Z, π: [R] -> [L]:
–Pick x ∈ [q^2]^L, y ∈ [q^3]^R uniformly at random
–Let z ∈ [q^3]^R be such that z_i = (y_i + x_π(i)) mod q^3
–Rerandomize each coordinate of x, y, z w.p. ε
–Pick c ∈ [q] uniformly at random
–Test f(x_1 - c, ..., x_L - c) + c + g(y) = g(z) (over Z)
Completeness: if g is the i-th dictator and f is the π(i)-th dictator, then Pr[f(x_1 - c, ..., x_L - c) + c + g(y) = g(z)] ≥ 1 - 3ε - 2/q
Soundness: if f and g are far from being "matching dictators", then Pr[f(x_1 - c, ..., x_L - c) + c + g(y) = g(z) (mod q)] < 1/q + δ
This gives (1 - 3ε - 2/q) vs (1/q + δ) NP-hardness of Max-3Lin(Z).
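And the analogous check for the test above (again only an illustrative sketch; it assumes, as written above, that z_i is reduced mod q^3 so that z stays in [q^3]^R): matching dictators pass the over-Z test with probability roughly 1 - 3ε - 2/q, the two 1/q terms coming from the rare wrap-arounds in the folding of f and in z.

```python
import random

q, L, R, eps, trials = 50, 5, 8, 0.05, 20000
pi = [random.randrange(L) for _ in range(R)]     # projection pi: [R] -> [L]
i = 2
g = lambda y: y[i]                               # dictator g: [q^3]^R -> Z
f = lambda x: x[pi[i]]                           # matching dictator f: [q^2]^L -> Z

def noisy(v, dom):
    return [random.randrange(dom) if random.random() < eps else vi for vi in v]

def three_lin_test(f, g):
    x = [random.randrange(q**2) for _ in range(L)]
    y = [random.randrange(q**3) for _ in range(R)]
    z = [(y[j] + x[pi[j]]) % q**3 for j in range(R)]
    x, y, z = noisy(x, q**2), noisy(y, q**3), noisy(z, q**3)
    c = random.randrange(q)
    lhs = f([(xj - c) % q**2 for xj in x]) + c + g(y)
    return lhs == g(z)                           # equality over Z, no modular reduction

print(sum(three_lin_test(f, g) for _ in range(trials)) / trials)   # ~ 1 - 3*eps - 2/q
```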

The End. Any questions?