Complexity of Constraint Satisfaction Problems: Exact and Approximate
Prasad Raghavendra, Georgia Institute of Technology, Atlanta, GA


Complexity of Constraint Satisfaction Problems: Exact and Approximate (aka: Where does the number come from?)

Constraint Satisfaction Problem
A constraint satisfaction problem Λ:
Λ = (a finite domain [q] = {1, 2, ..., q}, a set of predicates/relations {P_1, P_2, ..., P_r})
Examples:
MaxCut = ({0,1}, {P(a,b) = (a ≠ b)})
3-SAT = ({0,1}, {P_1(a,b,c) = a ∨ b ∨ c, P_2(a,b,c) = ¬a ∨ b ∨ c, ..., P_8(a,b,c) = ¬a ∨ ¬b ∨ ¬c, P_9(a,b) = a ∨ b, P_10(a,b) = ¬a ∨ b, ..., P_12(a,b) = ¬a ∨ ¬b})
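As a concrete sketch of this definition (illustrative, not from the talk: the names `value`, `neq`, and `triangle` are made up), a Λ-CSP instance can be represented as a list of predicates applied to variable tuples:

```python
# Illustrative sketch (names not from the talk): a Lambda-CSP instance
# as a list of (predicate, variable-tuple) constraints.
def value(constraints, assignment):
    """Fraction of constraints satisfied by the assignment."""
    sat = sum(1 for pred, idx in constraints
              if pred(*(assignment[i] for i in idx)))
    return sat / len(constraints)

neq = lambda a, b: a != b          # the MaxCut predicate
triangle = [(neq, (0, 1)), (neq, (1, 2)), (neq, (0, 2))]
print(value(triangle, [0, 1, 0]))  # a triangle: at most 2/3 of edges can be cut
```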

Instance of Λ-CSP
An instance I of a Λ-CSP:
Set of variables: {x_1, x_2, ..., x_n}
Set of constraints: predicates from Λ applied to subsets of the variables.
[figure: constraint graph with predicates P_1, P_13, P_31 applied to variables x_1, ..., x_n]
Max-Λ-CSP: "Given an instance of Λ-CSP, find an assignment to the variables that satisfies the maximum number of constraints."
Remarks:
1) Use the fraction of constraints instead of the number of constraints (the objective value is then always between 0 and 1).
2) The constraints can be weighted; one then maximizes the total weight. This is easily seen to be equivalent.
3) Predicates can be replaced by bounded real-valued payoffs (generalized CSPs).
Exact-Λ-CSP: "Given an instance of Λ-CSP, is it possible to satisfy all the constraints?"

Exact-Λ-CSP

Complexity of Exact-Λ-CSP
Theorem [Schaefer]: Among Boolean CSPs Λ, Exact-Λ-CSP is in P exactly in the following cases:
2-SAT (clauses such as ¬a ∨ b)
Horn-SAT (clauses with at most one positive literal)
Dual Horn-SAT (clauses with at most one negative literal)
Linear equations mod 2
CSPs where all 0s or all 1s is a solution
The problem is NP-hard for all other CSPs. Any pattern?

The Pattern
Given an instance I of Linear Equations Mod 2: for every 3 solutions {X_1, X_2, X_3} to instance I, X_1 ⊕ X_2 ⊕ X_3 is also a solution to instance I. (Here XOR is applied to each variable separately.)
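The XOR pattern above can be checked exhaustively on a toy system of linear equations mod 2 (an illustrative sketch; the system `eqs` is made up for the example):

```python
from itertools import product

# Toy system of linear equations mod 2 (illustrative):
#   x0 + x1 = 1 (mod 2),   x1 + x2 = 0 (mod 2)
eqs = [((0, 1), 1), ((1, 2), 0)]  # (variable indices, right-hand side)

def satisfies(x):
    return all(sum(x[i] for i in idx) % 2 == rhs for idx, rhs in eqs)

solutions = [x for x in product([0, 1], repeat=3) if satisfies(x)]
# The coordinatewise XOR of any three solutions is again a solution.
for a, b, c in product(solutions, repeat=3):
    xor = tuple(ai ^ bi ^ ci for ai, bi, ci in zip(a, b, c))
    assert satisfies(xor)
print(solutions)  # → [(0, 1, 1), (1, 0, 0)]
```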

Polymorphisms
A function F : [q]^R → [q], for some constant R, is a "polymorphism" for a CSP Λ if: for every instance I of CSP Λ and every set of R solutions {X_1, X_2, ..., X_R} to instance I, F(X_1, X_2, ..., X_R) is also a solution to instance I. (Here F is applied to each variable separately.)

Polymorphisms and Complexity of Exact CSP
Examples: the dictator functions F(x) = x_i are polymorphisms for every CSP Λ.
(Algebraic Dichotomy Conjecture) Exact-Λ-CSP is in P if and only if there are "non-trivial" polymorphisms, i.e., polymorphisms that are very different from dictators (made precise in [Bulatov-Jeavons-Krokhin]).
Tractable Boolean CSPs and their polymorphisms:
Linear equations mod 2 – XOR
2-SAT – Majority
Horn-SAT – AND
Dual Horn-SAT – OR
The conjecture is proven for domain sizes 2 [Schaefer] and 3 [Bulatov].
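The 2-SAT/Majority row of the table can be verified by brute force on a small instance (a hypothetical example, not from the slides):

```python
from itertools import product

# Illustrative 2-SAT instance: (x0 or x1) and (not x0 or x2).
# Literals are (variable index, negated?) pairs.
clauses = [((0, False), (1, False)), ((0, True), (2, False))]

def satisfies(x):
    return all(any(x[i] != neg for i, neg in cl) for cl in clauses)

def maj(a, b, c):                       # Majority on three bits
    return 1 if a + b + c >= 2 else 0

solutions = [x for x in product([0, 1], repeat=3) if satisfies(x)]
for s1, s2, s3 in product(solutions, repeat=3):
    combined = tuple(maj(a, b, c) for a, b, c in zip(s1, s2, s3))
    assert satisfies(combined)          # Majority preserves 2-SAT solutions
print(len(solutions))  # → 4
```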

Max-Λ-CSP

Approximability
Approximability: An algorithm A is an α-approximation for Max-Λ-CSP if for every instance I, A(I) ≥ α · OPT(I).
Approximability threshold: "α_Λ is the largest constant for which there is an α_Λ-approximation for Max-Λ-CSP."
Approximability curve α_Λ(c): "Given an instance I of CSP Λ with value at least c, what is the largest constant α_Λ(c) for which there is an α_Λ(c)-approximation for Max-Λ-CSP?"

Approximability of Max-CSP
[chart: best known approximation ratios and NP-hardness thresholds on the interval [0,1] for MAX CUT, MAX 2-SAT, MAX 3-SAT, MAX 4-SAT, MAX DI-CUT, MAX k-CUT, Unique Games, MAX k-CSP, MAX Horn SAT, MAX 3 DI-CUT, MAX E2-LIN3, MAX 3-MAJ, MAX 3-CSP, MAX 3-AND]
Algorithms: [Goemans-Williamson] [Charikar-Wirth] [Lewin-Livnat-Zwick] [Charikar-Makarychev-Makarychev 06] [Charikar-Makarychev-Makarychev 07] [Hast] [Frieze-Jerrum] [Karloff-Zwick] [Zwick SODA 98] [Zwick STOC 98] [Zwick 99] [Halperin-Zwick 01] [Goemans-Williamson 01] [Goemans 01] [Feige-Goemans] [Matuura-Matsui] [Trevisan-Sudan-Sorkin-Williamson]
NP-hardness results: [Håstad] [Samorodnitsky-Trevisan]

Unique Games: A Special Case
E2LIN mod p: Given a set of linear equations of the form x_i − x_j = c_ij (mod p), find a solution that satisfies the maximum number of equations.
Example:
x − y = 11 (mod 17)
x − z = 13 (mod 17)
...
z − w = 15 (mod 17)
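A brute-force sketch of the E2LIN mod p objective, on a made-up three-equation system (`p` and `eqs` are illustrative):

```python
from itertools import product

# Illustrative E2LIN mod p system: x_i - x_j = c (mod p).
p = 5
eqs = [(0, 1, 2), (1, 2, 1), (0, 2, 4)]  # (i, j, c)

def satisfied(x):
    return sum(1 for i, j, c in eqs if (x[i] - x[j]) % p == c)

# Adding the first two equations forces x0 - x2 = 3, contradicting the
# third, so at most 2 of the 3 equations can hold simultaneously.
best = max(satisfied(x) for x in product(range(p), repeat=3))
print(best)  # → 2
```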

Unique Games Conjecture [Khot 02]
An equivalent version [Khot-Kindler-Mossel-O'Donnell]: For every ε > 0, the following problem is NP-hard for large enough prime p. Given an E2LIN mod p system, distinguish between:
There is an assignment satisfying a 1 − ε fraction of the equations.
No assignment satisfies more than an ε fraction of the equations.

Assuming UGC
[chart: UGC-hardness thresholds alongside the NP-hardness thresholds on [0,1] for MAX CUT, MAX 2-SAT, MAX 3-SAT, MAX 4-SAT, MAX DI-CUT, MAX k-CUT, Unique Games, MAX k-CSP, MAX Horn SAT, MAX 3 DI-CUT, MAX E2-LIN3, MAX 3-MAJ, MAX 3-CSP, MAX 3-AND]
UGC hardness results: [Khot-Kindler-Mossel-O'Donnell] [Austrin 06] [Austrin 07] [Khot-O'Donnell] [O'Donnell-Wu] [Samorodnitsky-Trevisan]
Any pattern?

Approximate Polymorphisms
A function F : [q]^R → [q], for some constant R, is an "α-approximate polymorphism" for a CSP Λ if: for every instance I of CSP Λ and every c > 0, and every set of R assignments {X_1, X_2, ..., X_R} to instance I each satisfying a c fraction of the constraints, F(X_1, X_2, ..., X_R) satisfies at least an αc fraction of the constraints. (Here F is applied to each variable separately.)
"(α,c)-approximate polymorphisms": fix the value of c in the CSP instance.

Distributional Functions
Definition: A distributional function is a map F : [q]^R → {probability distributions over [q]}. Equivalently, F : [q]^R → ℝ^q such that F_1(x) + F_2(x) + ... + F_q(x) = 1 and F_i(x) ≥ 0.
Definition: A DDF Ψ is a probability distribution over distributional functions F ∈ Ψ on [q]^R.

Approximate Polymorphisms
A DDF Ψ, for some constant R, is an "α-approximate polymorphism" for a CSP Λ if: for every instance I of CSP Λ and every c > 0, and every set of R assignments {X_1, X_2, ..., X_R} to instance I satisfying a c fraction of the constraints: sample a distributional function F ∈ Ψ and apply F to each coordinate separately; the expected value of the solution returned is at least αc.

Influences
Dictator functions are trivially 1-approximate polymorphisms for every CSP. Non-trivial ⇒ not like a dictator.
Definition: The influence of the i-th coordinate on a function F : [q]^R → ℝ under a product distribution μ^R is
Inf_i^μ(F) = E[Var[F]],
where the expectation is over a random fixing of all other coordinates from μ^{R−1}, and the variance is over the i-th coordinate varying as per μ.
Definition: A function is τ-quasirandom if for all product distributions μ^R and all i, Inf_i^μ(F) ≤ τ.
(For the i-th dictator function, Inf_i^μ(F) is as large as the variance of F.)
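For small R this definition can be evaluated exactly under the uniform distribution on {0,1}; the sketch below (with an illustrative helper `influence`) shows a dictator's relevant coordinate attaining influence equal to the variance of F, while each coordinate of Majority on 3 bits has smaller influence:

```python
from itertools import product

# Influence of coordinate i under the uniform distribution on {0,1}^R:
# average, over random fixings of the other coordinates, of the variance
# of F as coordinate i varies. (Helper names are illustrative.)
def influence(F, R, i):
    total = 0.0
    for rest in product([0, 1], repeat=R - 1):
        outs = [F(rest[:i] + (b,) + rest[i:]) for b in (0, 1)]
        m = sum(outs) / 2
        total += sum((o - m) ** 2 for o in outs) / 2
    return total / 2 ** (R - 1)

dictator = lambda x: x[0]
majority = lambda x: 1 if sum(x) >= 2 else 0
print(influence(dictator, 3, 0), influence(majority, 3, 0))  # → 0.25 0.125
```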

Example 1: MaxCut
– The Majority function is a polymorphism for Exact 2-SAT.
– The Majority function is a (??????????)-approximate polymorphism for Max-Cut.

Example 2: Submodular CSPs, like MinCut
P is submodular if P(X) + P(Y) ≥ P(X ∧ Y) + P(X ∨ Y).
So if X, Y ∈ {0,1}^n both have value c, then: with probability ½ use X ∧ Y, with probability ½ use X ∨ Y. The expected cost of the combined solution is ≤ c.
⇒ "½ AND + ½ OR" is a 1-approximate polymorphism.
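This averaging argument can be sanity-checked on the cut function of a small graph (the edge set below is an arbitrary choice), since graph cut functions are submodular:

```python
from itertools import product

# The cut function of a graph is submodular, so for any two solutions
# X, Y:  cut(X AND Y) + cut(X OR Y) <= cut(X) + cut(Y).
# Edges below form an arbitrary 4-vertex example.
edges = [(0, 1), (1, 2), (2, 3), (0, 3), (0, 2)]

def cut(S):  # S: 0/1 indicator vector over the 4 vertices
    return sum(1 for u, v in edges if S[u] != S[v])

for X, Y in product(product([0, 1], repeat=4), repeat=2):
    A = tuple(x & y for x, y in zip(X, Y))
    O = tuple(x | y for x, y in zip(X, Y))
    assert cut(A) + cut(O) <= cut(X) + cut(Y)
print("submodularity verified")
```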

Complexity of Approximability
For a CSP Λ, define α_Λ = the largest constant such that there are α_Λ-approximate non-trivial polymorphisms, i.e., α_Λ-approximate τ-quasirandom polymorphisms for every τ > 0. Define α_Λ(c) analogously, restricted to instances with value ≥ c.
(Analogue of the Algebraic Dichotomy Conjecture): "For every Max-CSP Λ, α_Λ is the threshold of approximability of Max-CSP Λ."
Algorithm: can approximate to factor α_Λ. Hardness: cannot approximate to a factor better than α_Λ.
True for all known approximation thresholds.

Hardness and Algorithm
Theorem [R 08]: For every ε > 0 and every Max-CSP Λ, the problem can be approximated within a ratio α_Λ − ε in time exp(exp(poly(1/ε, |Λ|))) · poly(n).
Theorem (assuming the Unique Games Conjecture [Khot 02]): For every Max-CSP Λ and c, ε > 0, it is NP-hard to approximate the problem on instances with value c − ε to a factor better than α_Λ(c); in particular, it is NP-hard to approximate the problem better than α_Λ.
The hardness result is a generalization of the reduction of [Khot-Kindler-Mossel-O'Donnell]; a slightly more general version of the hardness is equivalent to the Unique Games Conjecture. The algorithm is based on semidefinite programming (the LC relaxation). Together, these settle the approximability of every CSP under the Unique Games Conjecture.

More Generally
Theorem [R 08]: Under the UGC, this semidefinite program (LC) yields the "optimal" approximation for every generalized constraint satisfaction problem: bounded real-valued functions instead of predicates (minimization problems also, e.g. Metric Labelling). On instances with value c: the LC relaxation approximates to a factor α_Λ(c − ε) − ε, while under the UGC it is hard to approximate to a factor better than α_Λ(c + ε) + ε.

Remarks
The approximation threshold α_Λ is not very explicit. Theorem [R 08]: For every Max-CSP Λ, the value of α_Λ can be computed within an error ε in time exp(exp(poly(1/ε, |Λ|))).
Adding all local constraints on up to 2^{O((log log n)^{0.5})} variables to the LC semidefinite program does not improve the approximation ratio for any CSP, Ordering CSP, or Labelling problem [R-Steurer 09] [Khot-Saket].
The LC semidefinite program can be solved in time near-linear in the number of constraints [Steurer 09].

Rest of the Talk
Algorithm:
– Intuitive idea
– Invariance principle
– Description of the LC semidefinite program
– Random projections
Hardness:
– Dictatorship tests
– Polymorphisms vs. dictatorship tests
Future work

Algorithm
Theorem [R 08]: For every ε > 0, every Max-CSP Λ can be approximated within a ratio α_Λ − ε in time exp(exp(poly(1/ε, |Λ|))) · poly(n).
Goal: design an α_Λ-approximation algorithm using the existence of an α_Λ-approximate polymorphism.

Intuitive Idea
Input: an instance I of Max-CSP Λ.
We know: for every τ > 0, there are α_Λ-approximate τ-quasirandom polymorphisms F : [q]^R → [q].
Foolish algorithm: take R optimal solutions X_1, X_2, ..., X_R and apply F(X_1, ..., X_R) to get a solution that is α_Λ-approximately optimal.
A plausible algorithm: relax the constraint that solutions are integral ({0,1}) and allow real values. Using semidefinite programming, one can generate "real-valued optimal solutions"; feed these into F. A τ-quasirandom polymorphism cannot distinguish between integral solutions and these real-valued solutions.

Multilinear Expansion
Every function F : {−1,1}^R → ℝ can be written as a multilinear polynomial using its Fourier expansion.
Example (with −1 encoding "true" and outputs in {0,1}):
AND(x^(1), x^(2)) = (1 − x^(1))(1 − x^(2))/4
OR(x^(1), x^(2)) = 1 − (1 + x^(1))(1 + x^(2))/4
More generally, any function F : [q]^R → ℝ can be written as a multilinear polynomial P_F after suitably arithmetizing the inputs.
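Under the convention assumed here (inputs in {−1,1} with −1 encoding "true", outputs in {0,1}), the two expansions can be verified exhaustively:

```python
from itertools import product

# Convention assumed here: inputs in {-1, 1} with -1 encoding "true",
# outputs in {0, 1}.
AND = lambda a, b: (1 - a) * (1 - b) / 4
OR = lambda a, b: 1 - (1 + a) * (1 + b) / 4

for a, b in product([-1, 1], repeat=2):
    assert AND(a, b) == ((a == -1) and (b == -1))
    assert OR(a, b) == ((a == -1) or (b == -1))
print("expansions verified")
```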

Invariance Principle for Low-Degree, Low-Influence Polynomials [Rotar] [Mossel-O'Donnell-Oleszkiewicz] [Mossel 2008]
If P(X_1, ..., X_R) is a constant-degree polynomial with Inf_i(P) ≤ τ for all i (all influences are small), and Z_1, Z_2, ..., Z_R are i.i.d. random variables, then the distribution of P(Z_1, Z_2, ..., Z_R) depends only on the first and second moments of the random variables Z_i. (P cannot distinguish between two sets of random variables with the same first two moments.)
Central limit theorem: "The sum of a large number of {−1,1} random variables has a similar distribution to the sum of a large number of Gaussian random variables." For the polynomial P(X) = (X_1 + X_2 + ... + X_R)/√R, the distribution of P(X) is similar whether X is an i.i.d. {−1,1} vector or an i.i.d. Gaussian vector.
More generally, if Z_1, Z_2, Z_3 and Y_1, Y_2, Y_3 are two sets of random vectors with matching first two moments, then P(Z_1), P(Z_2), P(Z_3) and P(Y_1), P(Y_2), P(Y_3) are similarly distributed.
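A quick numerical illustration of the CLT special case: the normalized sum evaluated on i.i.d. ±1 inputs and on i.i.d. Gaussian inputs has nearly the same distribution (the sample sizes and the comparison point 0.5 below are arbitrary choices):

```python
import random

# The CLT special case: P(X) = (X_1 + ... + X_R)/sqrt(R) has nearly the
# same distribution for i.i.d. +/-1 inputs and i.i.d. Gaussian inputs.
# (Sample sizes and the comparison point 0.5 are arbitrary choices.)
random.seed(0)
R, trials = 200, 4000

def sample(draw):
    return sum(draw() for _ in range(R)) / R ** 0.5

rademacher = [sample(lambda: random.choice((-1, 1))) for _ in range(trials)]
gaussian = [sample(lambda: random.gauss(0, 1)) for _ in range(trials)]
frac_r = sum(1 for z in rademacher if z <= 0.5) / trials
frac_g = sum(1 for z in gaussian if z <= 0.5) / trials
print(abs(frac_r - frac_g) < 0.05)  # the two empirical CDFs agree at 0.5
```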

Noise
Let F be an α_Λ-approximate polymorphism.
Lemma: Let H(X) = E_Y[F(Y)], where Y = X with ε noise (Y_i = X_i with probability 1 − ε, a random bit with probability ε). Then H is an (α_Λ − O(ε))-approximate polymorphism.
Proof: Let X_1, X_2, ..., X_R be solutions to instance I with value c. Perturb each coordinate of each solution X_i with probability ε: Y_i = X_i with ε noise. The expected value of each perturbed solution is > c − O(ε), so the expected value of the solution F(Y_1, ..., Y_R) is at least α_Λ(c − O(ε)).
Advantage: H is an (α_Λ − O(ε))-approximate polymorphism that is essentially a low-degree function, due to the averaging.

Semidefinite Program for CSPs
Variables: for each variable X_a, vectors {V_(a,0), V_(a,1)}; for each clause P = (x_a ∨ x_b ∨ x_c), scalar variables μ_(P,000), μ_(P,001), μ_(P,010), μ_(P,100), μ_(P,011), μ_(P,110), μ_(P,101), μ_(P,111).
Intended integral solution: X_a = 0 corresponds to V_(a,0) = 1, V_(a,1) = 0; X_a = 1 to V_(a,0) = 0, V_(a,1) = 1. If X_a = 0, X_b = 1, X_c = 1, then μ_(P,011) = 1 and all other μ_(P,·) = 0.
Constraints: for each clause P, 0 ≤ μ_(P,α) ≤ 1; for each clause P = (x_a ∨ x_b ∨ x_c) and each pair X_a, X_b in P, consistency between the vector and LP variables:
V_(a,0) ∙ V_(b,0) = μ_(P,000) + μ_(P,001)
V_(a,0) ∙ V_(b,1) = μ_(P,010) + μ_(P,011)
V_(a,1) ∙ V_(b,0) = μ_(P,100) + μ_(P,101)
V_(a,1) ∙ V_(b,1) = μ_(P,110) + μ_(P,111)
Objective function: maximize the total fraction of clauses satisfied according to the μ_(P,α) variables.

Semidefinite Relaxation for CSP
SDP solution for instance I: for every variable x_i in I, vectors v_(i,1), ..., v_(i,q); for every constraint ɸ in I, a local distribution µ_ɸ over assignments to the variables of ɸ.
Example of a local distribution: for ɸ = 3XOR(x_3, x_4, x_7), µ_ɸ is a distribution over assignments to (x_3, x_4, x_7).
Explanation of the constraints: the first and second moments of the local distributions are consistent with the inner products of the SDP vectors (also for first moments) and form a PSD matrix.
SDP objective: maximize the expected number of satisfied constraints under the local distributions.
This is the simplest SDP that yields local distributions over integral assignments with first two moments matching the inner products of the SDP vectors.

Gaussian Projections
Sample g, a random Gaussian vector. Generate a real-valued solution Z = (Z_1, Z_2, ..., Z_n) by random projection along g, where Z_i = v_i ∙ g.
Lemma: For every constraint ɸ in the instance I, the first two moments of the random variables {Z_i | i ∈ ɸ} agree with the local distribution µ_ɸ. Formally, for all i, j ∈ ɸ:
E_{x_i, x_j ∼ µ_ɸ}[x_i x_j] = E[Z_i Z_j] and E_{x_i ∼ µ_ɸ}[x_i] = E[Z_i].
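The second-moment claim of the lemma can be checked empirically: for unit vectors v_i, the projections Z_i = v_i ∙ g satisfy E[Z_i Z_j] = v_i ∙ v_j (the two vectors and the trial count below are illustrative):

```python
import random

# For unit vectors v_i, the projections Z_i = v_i . g onto a random
# Gaussian vector g satisfy E[Z_i Z_j] = v_i . v_j.
# (The two vectors and the trial count are illustrative.)
random.seed(1)
v = [(1.0, 0.0), (0.6, 0.8)]  # unit vectors with inner product 0.6
trials = 20000
acc = 0.0
for _ in range(trials):
    g = (random.gauss(0, 1), random.gauss(0, 1))
    z = [sum(vi * gi for vi, gi in zip(vec, g)) for vec in v]
    acc += z[0] * z[1]
print(abs(acc / trials - 0.6) < 0.05)  # empirical E[Z_0 Z_1] is close to 0.6
```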

Algorithm
Setup: {v_1, v_2, v_3, ..., v_n} are the SDP vectors; H is a low-degree (α_Λ − O(ε))-approximate polymorphism.
Algorithm: Sample R independent Gaussian vectors {g_1, g_2, ..., g_R}. Generate the corresponding Gaussian projections. Apply the polynomial P_H corresponding to H.
The outputs are not distributions, but are close to distributions over {0,1}: round them to the nearest distribution and sample from it.

Analysis of Algorithm
Consider a predicate φ of the instance I, and let µ_φ be the local distribution for φ. The X_i drawn from µ_φ and the Z_i obtained from the Gaussian projections have matching first two moments, so by invariance the low-influence polynomial P_H satisfies φ with nearly the same probability in both worlds. Summing over all constraints:
Output value ≥ α_Λ · SDP value ≥ α_Λ · OPT.

Hardness

Dictatorship Test
Given a function F : {−1,1}^R → {−1,1}: toss random coins, make a few queries to F, and output either ACCEPT or REJECT.
Completeness: Pr[ACCEPT] when F is a dictator function, F(x_1, ..., x_R) = x_i.
Soundness: Pr[ACCEPT] when F is far from every dictator function (no influential coordinate).
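A minimal sketch of such a test, in the spirit of a MaxCut noise test (the parameter δ and the functions compared are illustrative choices): query F at x and at a noisy negation y of x, and accept when F(x) ≠ F(y). A dictator passes with probability exactly 1 − δ, while the low-influence Majority passes less often:

```python
from itertools import product

# MaxCut-style noise test: pick x uniformly, let y_i = -x_i with
# probability 1 - delta (and y_i = x_i with probability delta), and
# accept if F(x) != F(y). delta and the functions are illustrative.
delta = 0.1

def accept_prob(F, R):
    total = 0.0
    for x in product([-1, 1], repeat=R):
        for keep in product([0, 1], repeat=R):  # 1: noise keeps x_i
            p = 1.0
            y = []
            for xi, k in zip(x, keep):
                y.append(xi if k else -xi)
                p *= delta if k else 1 - delta
            total += p * (F(x) != F(tuple(y)))
    return total / 2 ** R

dictator = lambda x: x[0]
majority = lambda x: 1 if sum(x) > 0 else -1
print(round(accept_prob(dictator, 3), 3), round(accept_prob(majority, 3), 3))  # → 0.9 0.864
```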

UG Hardness
Rule of Thumb [Khot-Kindler-Mossel-O'Donnell]: Given a dictatorship test where completeness = c, soundness = αc, and the verifier's tests are predicates from a CSP Λ, it is UG-hard to approximate CSP Λ to a factor better than α.

Polymorphisms and Dictatorship Tests
By definition of α_Λ: for every α > α_Λ there is no α-approximate polymorphism ⇒ there is some instance I and R solutions of value c such that every low-influence function F yields a solution of value < αc.
Dictatorship test for a function F : {0,1}^R → {0,1}:
Pick a random constraint P from the instance I. Let X_1, X_2, ..., X_k denote the R-dimensional vectors in {0,1}^R corresponding to the variables in P. Query the function values F(X_1), F(X_2), ..., F(X_k). Accept if P(F(X_1), F(X_2), ..., F(X_k)) is TRUE.
Completeness = c; Soundness = αc.

Future Work

Disproving the Unique Games Conjecture
On UG instances with value 1 − ε (alphabet size p, instance size n, spectral gap λ):
[table: algorithmic guarantees on (1 − ε)-satisfiable instances, e.g. satisfying a 1 − O(ε log n) fraction of constraints]
[Khot 02] [Trevisan] [Gupta-Talwar] [Charikar-Makarychev-Makarychev] [Chlamtac-Makarychev-Makarychev] [Arora-Khot-Kolla-Steurer-Tulsiani-Vishnoi]

Disproving the Unique Games Conjecture
Theorem [Arora-Khot-Kolla-Steurer-Tulsiani-Vishnoi]: Unique Games is easy if the constraint graph is a sufficiently good expander: spectral gap (λ) > 10 · completeness gap (10ε).
Theorem [Arora-Barak-Steurer 09]: Unique Games with completeness 1 − ε can be solved in time 2^{n^{poly(ε)}}.
The above algorithms use the basic SDP, so they cannot solve the known integrality-gap instances.
Theorem [Arora-Russell-Steurer 09] [R-Steurer 09]: Unique Games is easy if there is sufficiently good local expansion.

The Unique Games Conjecture implies hardness for: Constraint Satisfaction Problems, Metric Labelling Problems, Ordering Constraint Satisfaction Problems, Kernel Clustering Problems, the Grothendieck Problem, Vertex Cover, Sparsest Cut.
MISSING: Steiner Tree, Asymmetric Travelling Salesman, Metric Travelling Salesman.
More UG hardness results?

Reverse Reductions
Do CSPs, Vertex Cover, Sparsest Cut, Metric Labelling, or kernel clustering reduce back to Unique Games?
Failed approach: use parallel repetition on MaxCut to reduce it to Unique Games [Raz 08] [Barak-Hardt-Haviv-Rao-Regev-Steurer] [Kindler-O'Donnell-Rao].
Theorem [R-Steurer 09]: A variant of the Sparsest Cut problem reduces to Unique Games.

Power of Linear and Semidefinite Programs
Theorem [Charikar-Makarychev-Makarychev]: Even n^α rounds of certain linear programming hierarchies, like Sherali-Adams and Lovász-Schrijver, do not disprove the UGC.
Theorem [R-Steurer 09] [Khot-Saket 09]: Adding all local constraints on up to 2^{O((log log n)^{1/4})} variables does not disprove the UGC.
Possibility: Adding a simple constraint on every 5 variables to the LC SDP relaxation yields a better approximation for MaxCut and disproves the Unique Games Conjecture!

Integrality Gaps via UG Hardness Results
Rule of Thumb [Khot-Vishnoi 05]: Integrality gaps compose with hardness reductions. A Unique Games instance that is an integrality gap, run through a hardness reduction, yields an instance of MaxCut that is an integrality gap. UGC-based hardness results thus yield integrality gaps: limitations of semidefinite or linear programs.

Back to Exact CSPs
We introduced ε noise to make the polymorphism low-degree, so we will not get fully satisfiable assignments.
Definition: A function F is noise stable if an ε-perturbation of its inputs changes F with probability g(ε), where g(ε) → 0 as ε → 0.
Theorem: If there exist noise-stable, low-influence polymorphisms for a CSP Λ, then it is tractable (using semidefinite programming).
The above condition holds for all Boolean CSPs except linear equations; semidefinite programs cannot solve linear equations over {0,1}. SDPs can solve all bounded-width CSPs trivially.
OPEN PROBLEM: Can one show that if a CSP Λ does not contain the affine type, then it has noise-stable, low-influence polymorphisms?

Thank You