SDP Based Approach for Graph Partitioning and Embedding Negative Type Metrics into L1. Subhash Khot (Georgia Tech), Nisheeth K. Vishnoi (IBM Research and Georgia Tech).

SDP Based Approach for Graph Partitioning and Embedding Negative Type Metrics into L1
Subhash Khot (Georgia Tech), Nisheeth K. Vishnoi (IBM Research and Georgia Tech)
CS Perspective, Math Perspective; Parts I & II

CS Story: Sparsity
Sparsity of a cut (S, S^c): |E(S, S^c)| / (|S|·|S^c|)
Sparsest Cut (SC): the cut of minimum sparsity
b-Balanced Separator (BS): the cut with |S|, |S^c| ≥ bn that minimizes |E(S, S^c)|; b ~ ½

Applications and Related Measures
Related measures: sparsity is often referred to as graph conductance; also related are edge expansion and the isoperimetric constant.
Applications: VLSI layout, clustering, Markov chains, geometric (metric) embeddings.

Estimating Sparsity
Hard to compute exactly, so we compute "approximations".
Spectral bound: λ2(G)/n ≤ sparsity(G) ≤ (3/n)·√(λ2(G)·Δ), where λ2 is the spectral gap (second eigenvalue of the Laplacian) and Δ the maximum degree.
Not satisfactory, e.g. on the n-cycle.
Approximation algorithm: for any graph G on n vertices, compute a(G) within a multiplicative factor f(n) ≥ 1 of the sparsity of G: a(G) ≤ sparsity(G) ≤ a(G)·f(n).
f(n) = 1 is hard. What about f(n) = log n, or even 10?
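The weakness of the spectral bound on the n-cycle can be checked directly. The sketch below (a numerical illustration, not part of the talk) computes λ2 of the cycle's Laplacian and compares the bounds against the sparsity of the natural balanced cut, a contiguous half with 2 crossing edges; the lower bound λ2/n is off by roughly a factor of n.

```python
import numpy as np

# Spectral bounds on sparsity, checked on the n-cycle (Delta = 2).
# sparsity(S) = |E(S, S^c)| / (|S| |S^c|), as defined on the earlier slide.
n = 40
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
L = np.diag(A.sum(axis=1)) - A                 # graph Laplacian
lam2 = np.sort(np.linalg.eigvalsh(L))[1]       # spectral gap

# The contiguous-half cut of the cycle: 2 edges across, |S| = |S^c| = n/2.
best = 2 / ((n // 2) * (n - n // 2))
assert lam2 / n <= best <= (3 / n) * np.sqrt(lam2 * 2)
assert best / (lam2 / n) > 5                   # lower bound is loose here
```

For the cycle λ2 = 2 - 2cos(2π/n) ≈ 4π²/n², so λ2/n ≈ 4π²/n³ while the true sparsity is Θ(1/n²): a gap of order n, which is why eigenvalues alone are not satisfactory.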

History
Algorithms:
- Spectral graph partitioning: Alon-Milman '85, Spielman-Teng '96 (eigenvector based)
- O(log n): Leighton-Rao '88 (Linear Programming (LP) based)
- O(log n): Linial-London-Rabinovich '94, Aumann-Rabani '94 (connection to metric embeddings)
- O(√log n): Arora-Rao-Vazirani '04 (Semi-Definite Programming (SDP) based)
Hardness:
- NP-hard
- Hard to approximate within any constant factor (assuming UGC): Chawla-Krauthgamer-Kumar-Rabani-Sivakumar '05, Khot-Vishnoi '05

Outline of this Talk
Part I
1. Graph Partitioning: Motivation & History; SDP Approach
2. Embedding Negative Type Metrics into L1: Metric Spaces and Embeddability; Cut Cone ≈ L1; Negative Type Metrics as SDP Solutions; LLR/AR Connection
Part II
1. Integrality Gap instance: Hypercube and Cuts; Kahn-Kalai-Linial (isoperimetry of hypercube); The Graph; The SDP Solution
2. Conclusion

Quadratic Program for BS
Balanced Separator: Input: G(V,E). Output: (S, S^c) with |S| = |S^c| = n/2 minimizing |E(S, S^c)|.
Quadratic program:
∀ i, v_i ∈ {-1, 1}
Minimize ¼ Σ_{ij ∈ E} |v_i - v_j|²
subject to Σ_{i<j} |v_i - v_j|² = n²

SDP for Balanced Separator
Quadratic program:
∀ i, v_i ∈ {-1, 1}
Minimize ¼ Σ_{ij ∈ E} |v_i - v_j|²
subject to Σ_{i<j} |v_i - v_j|² = n²
SDP relaxation:
∀ i, v_i ∈ R^n, ||v_i|| = 1
Minimize ¼ Σ_{ij ∈ E} ||v_i - v_j||²
Well-separatedness: Σ_{i<j} ||v_i - v_j||² = n²

Why is this a Relaxation?
Let u be a unit vector and (S, S^c) a cut with |S| = |S^c| = n/2. Set v_i = u for i ∈ S and v_i = -u for i ∈ S^c. Then ||v_i - v_j||² = 4·δ_S(i, j), so the cost of this solution is |E(S, S^c)|, and hence sdp ≤ opt.
The SDP can be computed in polynomial time!
Boils down to the spectral approach. Nothing gained(?)
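The ±u argument is easy to check by hand on a small graph. The sketch below (the 6-cycle and the particular cut are illustrative choices, not from the talk) builds the integral solution from a balanced cut and verifies that it satisfies the well-separatedness constraint and that its SDP objective equals the cut size.

```python
import itertools

# Integral SDP solutions from a balanced cut: v_i = +u on S, -u on S^c.
# A one-dimensional unit vector u = 1 suffices for this check.
n = 6
edges = [(i, (i + 1) % n) for i in range(n)]   # the 6-cycle
S = {0, 1, 2}                                  # a balanced cut, |S| = n/2
v = [1.0 if i in S else -1.0 for i in range(n)]

sq = lambda i, j: (v[i] - v[j]) ** 2           # ||v_i - v_j||^2
obj = 0.25 * sum(sq(i, j) for i, j in edges)   # SDP objective
sep = sum(sq(i, j) for i, j in itertools.combinations(range(n), 2))

cut_size = sum(1 for i, j in edges if (i in S) != (j in S))
assert obj == cut_size                         # objective = |E(S, S^c)|
assert sep == n * n                            # well-separatedness holds
```

Every balanced cut thus yields a feasible SDP solution of the same cost, which is exactly why the SDP optimum is at most the true optimum.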

Quadratic Program for BS ...
Balanced Separator: Input: G(V,E). Output: (S, S^c) with |S| = |S^c| = n/2 minimizing |E(S, S^c)|.
Quadratic program:
∀ i, v_i ∈ {-1, 1}
Minimize ¼ Σ_{ij ∈ E} |v_i - v_j|²
subject to Σ_{i<j} |v_i - v_j|² = n²
Triangle inequality (redundant here): ∀ i, j, k, |v_i - v_j|² + |v_j - v_k|² ≥ |v_i - v_k|²

SDP for Balanced Separator ...
Quadratic program: as above, with the (redundant) triangle inequality ∀ i, j, k, |v_i - v_j|² + |v_j - v_k|² ≥ |v_i - v_k|².
SDP relaxation:
∀ i, v_i ∈ R^n, ||v_i|| = 1
Minimize ¼ Σ_{ij ∈ E} ||v_i - v_j||²
Well-separatedness: Σ_{i<j} ||v_i - v_j||² = n²
Triangle inequality: ∀ i, j, k, ||v_i - v_j||² + ||v_j - v_k||² ≥ ||v_i - v_k||²
Still a relaxation ...

Geometry of the Triangle Inequality
The constraint says that for every triple, the angle at v_j between v_i and v_k is ≤ 90°.
Each step has length 1; after t such steps the endpoints are at distance at most √t.
Rules out the embedding obtained by the spectral method!
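The "≤ 90°" reading follows from a one-line algebraic identity: expanding the squared norms gives ||v_i - v_j||² + ||v_j - v_k||² - ||v_i - v_k||² = 2·(v_j - v_i)·(v_j - v_k), so the ℓ2² triangle inequality holds exactly when the angle at v_j is at most 90°. A quick numerical check of the identity (an illustration, not from the talk):

```python
import numpy as np

# Verify: (d_ij + d_jk - d_ik) = 2 (vj - vi).(vj - vk) for squared
# Euclidean distances d, on random unit vectors.
rng = np.random.default_rng(0)
for _ in range(100):
    vi, vj, vk = (x / np.linalg.norm(x) for x in rng.normal(size=(3, 5)))
    d = lambda a, b: np.sum((a - b) ** 2)
    lhs = d(vi, vj) + d(vj, vk) - d(vi, vk)
    rhs = 2 * np.dot(vj - vi, vj - vk)
    assert abs(lhs - rhs) < 1e-9
```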

Integrality Gap: Upper Bound
Arora-Rao-Vazirani '04: O(√log n) for Sparsest Cut and Balanced Separator, i.e. the sdp is within a factor of O(√log n) of the opt.
Integrality gap: the max, over all graphs on n vertices, of the ratio opt/sdp (as a function of n).
ARV conjectured that the integrality gap is upper bounded by some constant (independent of n). Lack of any counterexample!

Math Story: Metric Embeddings
A metric is a distance function d on [n] × [n] s.t. d(i, j) + d(j, k) ≥ d(i, k) (triangle inequality).
A metric d embeds into a metric ρ with distortion Γ ≥ 1 if there is a map φ s.t. ∀ i, j: d(i, j) ≤ ρ(φ(i), φ(j)) ≤ Γ·d(i, j) (distances are preserved up to a factor of Γ).

Negative Type Metrics (squared-L2)
d on {1, 2, ..., n} is of negative type if there are vectors v_1, v_2, ..., v_n such that d(i, j) = ||v_i - v_j||² satisfies the triangle inequality.
Same as: ∀ i, j, k, ||v_i - v_j||² + ||v_j - v_k||² ≥ ||v_i - v_k||².
NEG = the class of such metrics; these arise as SDP solutions.
L1 ⊆ NEG.
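The inclusion L1 ⊆ NEG can be seen concretely in the simplest case: a line metric d(x, y) = |x - y| is of negative type. The sketch below (a standard construction, not taken from the talk) gives point x_i the "prefix" vector whose first i coordinates are square roots of the consecutive gaps; the squared Euclidean distance between two prefix vectors then telescopes to |x_i - x_j|.

```python
import numpy as np

# A line metric realized as a squared-L2 (negative type) metric.
pts = np.array([0.0, 0.7, 1.5, 4.0])             # sorted points on the line
root_gaps = np.sqrt(np.diff(pts))                # sqrt of consecutive gaps
k = len(pts)
vecs = [np.concatenate([root_gaps[:i], np.zeros(k - 1 - i)])
        for i in range(k)]

for i in range(k):
    for j in range(k):
        sq_dist = np.sum((vecs[i] - vecs[j]) ** 2)   # ||v_i - v_j||^2
        assert abs(sq_dist - abs(pts[i] - pts[j])) < 1e-9
```

Since every L1 metric is a non-negative combination of such one-dimensional pieces (see the cut-cone slide below), the same idea extends to all of L1.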

Embedding NEG into L1
Conjecture (made explicit by Goemans and Linial, around '95): every NEG metric embeds into L1 with O(1) (constant) distortion.
What's the connection to sparsity?

Cuts and L1 Metrics
Cut metrics on {1, 2, ..., n}: δ_S(i, j) = 1 if i, j are separated by (S, S^c), and 0 otherwise.
Take a non-negative real p_S for every subset S of [n] and let d(i, j) := Σ_S p_S·δ_S(i, j).
Fact: d is isometrically embeddable in L1.
Further: every L1-embeddable metric on n points can be written as a non-negative linear sum of cut metrics on {1, ..., n}.
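The isometric embedding in the Fact is explicit: give each cut S its own L1 coordinate, where point i gets value p_S if i ∈ S and 0 otherwise. The sketch below checks this on an arbitrary small example (the cuts and weights are illustrative, not from the talk).

```python
import itertools

# A non-negative combination of cut metrics, embedded isometrically in L1.
n = 4
cuts = [({0}, 1.0), ({0, 1}, 0.5), ({1, 3}, 2.0)]   # (S, p_S) pairs

def d(i, j):                                         # d = sum_S p_S delta_S
    return sum(p for S, p in cuts if (i in S) != (j in S))

def phi(i):                                          # one coordinate per cut
    return [p if i in S else 0.0 for S, p in cuts]

for i, j in itertools.combinations(range(n), 2):
    l1 = sum(abs(a - b) for a, b in zip(phi(i), phi(j)))
    assert abs(l1 - d(i, j)) < 1e-12
```

Each coordinate contributes |p_S·1 - p_S·0| = p_S exactly when the cut separates i and j, which is why the embedding is isometric.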

Sparsest Cut ≈ Optimizing over L1 [Aumann-Rabani '98, Linial-London-Rabinovich '94]
Minimize over S ⊆ V: Σ_{i~j} δ_S(i, j) / Σ_{i<j} δ_S(i, j)
= Minimize over L1 metrics d: Σ_{i~j} d(i, j) / Σ_{i<j} d(i, j)
LP relaxation: optimize over all METRICS.
[Bourgain '85]: every n-point metric embeds into L1 with O(log n) distortion.
⇒ O(log n) factor approximation for sparsity!

Metric Embeddings & Sparsity
Optimizing over cuts ≈ optimizing over L1 metrics.
SDP solution ≈ optimizing over NEG.
Goemans-Linial/ARV Conjecture: NEG embeds into L1 with O(1) distortion / the integrality gap is O(1).
Implies an O(1) approximation algorithm for estimating sparsity!

Integrality Gap: Lower Bound
Sparsest Cut, Balanced Separator: Ω(log log n) integrality gap instance. Khot-Vishnoi '05, Krauthgamer-Rabani '06, Devanur-Khot-Saket-Vishnoi '06.
Disproves the GL/ARV Conjecture. Previous best lower bound: 1.16 [Zatloukal '04].

Recall: Integrality Gap Lower Bound
Sparsest Cut, Balanced Separator: Ω(log log n) integrality gap instance. Khot-Vishnoi '05, Krauthgamer-Rabani '06, Devanur-Khot-Saket-Vishnoi '06.
Disproves the GL/ARV Conjecture. Previous best lower bound: 1.16 [Zatloukal '04].

Integrality Gap: Lower Bound
Thm: Construct a graph G({1, ..., n}, E) and a unit-vector assignment i → v_i ∈ R^n s.t.
1. G is an "expander": every ¼-balanced cut has Ω(|E|·(log log n)/log n) edges (Kahn-Kalai-Linial)
2. "Low" SDP solution: O(|E|/log n)
3. Well-separatedness: Σ_{i<j} ||v_i - v_j||² = n²
4. Triangle inequality: d(i, j) := ||v_i - v_j||² is a metric
Integrality gap: Ω(log log n)

Starting Point: Hypercube(!)
H = {-1, 1}^k, n = 2^k.
(For k = 3 the vertices are (1,1,1), (-1,1,1), (1,1,-1), (1,-1,1), (1,-1,-1), (-1,-1,1), (-1,1,-1), (-1,-1,-1).)

Hypercube ...
H = {-1, 1}^k. Advantages? Cuts in H are well understood: tools from Fourier analysis. Each vertex is a vector in R^k: a starting point for the SDP solution.
But ... the hypercube has "small" balanced cuts: coordinate cuts have a 1/k fraction of the edges.

Cuts in the Hypercube: Coordinate Cuts
The i-th coordinate cut consists of the edges across pairs of vertices differing in the i-th bit; it has |E(H)|/k edges.
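The |E(H)|/k count is easy to verify by enumeration on a small cube: fixing the other k-1 bits gives 2^(k-1) edges in each direction, out of |E(H)| = k·2^(k-1) edges in total. A brute-force check (an illustration; k = 4 is an arbitrary small choice):

```python
import itertools

# Count the edges of H = {-1,1}^k in each coordinate direction.
k = 4
verts = list(itertools.product([-1, 1], repeat=k))

def direction(u, v):
    """Index of the single differing bit, or None if not a hypercube edge."""
    diff = [i for i in range(k) if u[i] != v[i]]
    return diff[0] if len(diff) == 1 else None

edges = [(u, v) for u, v in itertools.combinations(verts, 2)
         if direction(u, v) is not None]
assert len(edges) == k * 2 ** (k - 1)          # |E(H)| = k 2^(k-1)
for i in range(k):
    cut_i = [e for e in edges if direction(*e) == i]
    assert len(cut_i) == len(edges) // k       # each coordinate cut: |E(H)|/k
```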

Cuts in the Hypercube ...
Decompose any cut's edges by coordinate direction: for a balanced cut, some coordinate cut contributes ≥ |E(H)|/k² edges.

Kahn-Kalai-Linial
Any balanced cut has a coordinate cut contributing Ω(|E(H)|·(log k)/k²) edges.

Increasing the Size of Balanced Cuts
Consider balanced cuts in which the coordinates are indistinguishable (w.r.t. their contribution to the cut); this can be achieved by symmetrizing the hypercube!
Then each coordinate contributes equally: Ω(|E(H)|·(log k)/k) edges in total.

e.g. the 4-dim hypercube, vertices grouped into classes:
(1,1,1,1)
(-1,-1,-1,-1)
(1,1,1,-1), (-1,1,1,1), (1,-1,1,1), (1,1,-1,1)
(-1,-1,-1,1), (1,-1,-1,-1), (-1,1,-1,-1), (-1,-1,1,-1)
(1,1,-1,-1), (-1,1,1,-1), (-1,-1,1,1), (1,-1,-1,1)
(1,-1,1,-1), (-1,1,-1,1)

More Formally ...
H = {-1, 1}^k with a rotation group acting on its coordinates; this partitions H into equivalence classes V_1, ..., V_n.
Each V_i is a vertex; edges are hypercube edges. G(V, E), |E(G)| = |E(H)|, k ~ log n.
Balanced cuts in G correspond to balanced cuts in H.
KKL: any balanced cut (in H) has "a" coordinate cut which contributes |E(H)|·(log k)/k² edges to the cut. The group is transitive, so "every" coordinate cut has the same contribution.
Hence any balanced cut in G has ≥ |E(G)|·(log k)/k = |E(G)|·(log log n)/(log n) edges.
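The quotient construction can be sketched concretely with cyclic rotations of the coordinates (used here as a simple stand-in for the rotation group of the talk). For k = 4 the 16 hypercube vertices fall into 6 equivalence classes, matching the grouping shown two slides back:

```python
import itertools

# Orbits of {-1,1}^4 under cyclic rotation of the coordinates.
k = 4

def orbit(x):
    return frozenset(tuple(x[i:] + x[:i]) for i in range(k))

classes = {orbit(x) for x in itertools.product([-1, 1], repeat=k)}
assert len(classes) == 6                           # 6 equivalence classes
assert sorted(len(c) for c in classes) == [1, 1, 2, 4, 4, 4]
```

Each class becomes one vertex of G, and since |E(G)| = |E(H)| with k ~ log n, the KKL bound per coordinate turns into the (log log n)/(log n) expansion bound stated above.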

Integrality Gap: Lower Bound
Thm: Construct a graph G({1, ..., n}, E) and a unit-vector assignment i → v_i ∈ R^n s.t.
1. G is an "expander": every ¼-balanced cut has Ω(|E|·(log log n)/log n) edges (Kahn-Kalai-Linial)
2. "Low" SDP solution: O(|E|/log n)
3. Well-separatedness: Σ_{i<j} ||v_i - v_j||² = n²
4. Triangle inequality: d(i, j) := ||v_i - v_j||² is a metric
Integrality gap: Ω(log log n)

SDP Solution
(Illustrated on the 4-dim hypercube: the vertices, grouped into the equivalence classes shown earlier, give the SDP vectors.)

Formally: SDP Solution
Vertex: an equivalence class {x_1, x_2, ..., x_k} (the rotations). Vector: (1/√k)·Σ_j x_j.
Observations: an edge runs across two nodes differing in one bit; its contribution to the sdp is ~ 1/k, so sdp ≤ |E(G)|/k = |E(G)|/(log n).
Triangle inequality: a little bit of work and case analysis! For most classes, {x_1, ..., x_k} is "nearly orthogonal".
Hidden: Gram-Schmidt orthogonalization, tensoring, well-separatedness.
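The "nearly orthogonal" claim can be checked empirically for cyclic rotations: for a random x ∈ {-1,1}^k, the inner product of two distinct rotations is a cyclic autocorrelation, typically of order √k rather than k. The sketch below illustrates only this claim; as the slide notes, the actual construction needs more work (Gram-Schmidt, tensoring).

```python
import numpy as np

# Near-orthogonality of the cyclic rotations of a random sign vector:
# normalized inner products of distinct rotations are small on average.
rng = np.random.default_rng(0)
k, trials = 256, 50
corrs = []
for _ in range(trials):
    x = rng.choice([-1.0, 1.0], size=k)
    rots = np.stack([np.roll(x, s) for s in range(k)])
    gram = rots @ rots.T / k                  # normalized inner products
    off = gram[~np.eye(k, dtype=bool)]        # distinct rotations only
    corrs.append(np.mean(np.abs(off)))
assert np.mean(corrs) < 0.3                   # far below 1 (= parallel)
```

Heuristically E|correlation| ≈ √(2/(πk)), so for k = 256 the average is around 0.05: the rotations of a typical class behave almost like an orthonormal set after scaling.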

Conclusion
(Simple) log log n integrality gap for SC/BS.
Close the gap between log log n and √log n?
[Lee-Naor '06, Cheeger-Kleiner '06]: another counterexample; may give (log n)^c.
Thank you!