Introduction to Semidefinite Programming via MAX-CUT and SDP Application: Algorithms for Sparsest Cut
Bo-Young Kim, Applied Algorithm Lab, KAIST, 2011.6.13 (Mon)

Contents
Introduction to SDP via MAX-CUT
- Review: MAX-CUT Problem & Approximation Algorithm
- [1st Algorithm] A Naïve Algorithm: Randomized 0.5-Approximation
- From LP to SDP (informal)
- Cholesky Factorization
- SDP (formal definition)
- [2nd Algorithm] Randomized 0.878-Approximation

Review: MAX-CUT Problem
A graph G = (V,E) is given. A cut of G: a pair (S, V \ S) for a nonempty proper subset S ⊆ V.
Size of the cut: |E(S, V \ S)|, the number of edges with one endpoint in S and the other in V \ S.
Def (Recall) MAX-CUT problem:
- Decision version: Given G and k ∈ N, is there a cut of size ≥ k? NP-complete (Garey, Johnson, and Stockmeyer, '74 STOC).
- Optimization version: Find a cut of maximum size. NP-hard.

Review: Approximation Algorithm
Def (Recall) Approximation algorithm.
P: a maximization (resp. minimization) problem; I: its set of instances; F(I): the set of feasible solutions of an instance I ∈ I.
A: an algorithm that returns for every instance I ∈ I a feasible solution A(I) ∈ F(I).
ρ: I → (0,1] is a function. A is a ρ-approximation algorithm for P if the following two properties hold:
- A runs in polynomial time;
- the value of A(I) is at least ρ(I) · OPT(I) (resp. at most ρ(I) · OPT(I), with ρ: I → [1,∞)).
* From now on, only consider maximization.
If ρ is a constant c → c-approximation algorithm; c ≤ 1 (resp. c ≥ 1).
A randomized ρ-approximation algorithm: expected polynomial runtime + the expected value of A(I) is at least ρ(I) · OPT(I).

A Naïve Algorithm : Randomized 0.5-Approximation
Algorithm: put each vertex into S independently with probability 1/2 and output the cut (S, V \ S).
This is a randomized 0.5-approximation algorithm for the MAX-CUT problem:
- It runs in polynomial time.
- Each edge crosses the cut with probability 1/2, so the expected size of the cut is |E|/2 ≥ opt(G)/2.
Deterministic 0.5-approximation and 0.5(1+1/|E|)-approximation algorithms are also possible.
Until 1994, no c-approximation algorithm could be found for any c > 0.5.
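A minimal Python sketch of this naive algorithm (illustrative only; the function name random_cut is not from the slides):

```python
# Put each vertex on one side of the cut independently with probability 1/2.
import random

def random_cut(vertices, edges):
    """Return (S, cut_size) for a uniformly random cut of the graph."""
    S = {v for v in vertices if random.random() < 0.5}
    cut_size = sum(1 for (u, v) in edges if (u in S) != (v in S))
    return S, cut_size

# Example: a 4-cycle; the expected cut size is |E|/2 = 2.
vertices = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(random_cut(vertices, edges))
```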

From LP to SDP
Def (Recall) LP in equational form:
  maximize cTx subject to Ax = b, x ≥ 0,
where x, b, c are column vectors (x, c ∈ Rn, b ∈ Rm) and A ∈ Rm×n.
From LP to SDP (brief introduction):
- Replace the vector space Rn by another real vector space: the space Sn of symmetric n×n matrices.
- Replace the inner product <x,y> = xTy over Rn by the Frobenius inner product <X,Y> = Σi,j XijYij = trace(XTY) over Sn.
- Replace the constraint x ≥ 0 by the constraint X ⪰ 0 (X positive semidefinite).
Def (Recall) A symmetric matrix M is positive semidefinite: all its eigenvalues are nonnegative.
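A small numpy-based sketch (illustrative, not from the slides) of the two replaced ingredients, the Frobenius inner product and the PSD test via eigenvalues:

```python
import numpy as np

def frobenius_inner(X, Y):
    # <X, Y> = sum_ij X_ij * Y_ij = trace(X^T Y)
    return np.trace(X.T @ Y)

def is_psd(M, tol=1e-9):
    # A symmetric M is positive semidefinite iff all eigenvalues are >= 0.
    return bool(np.all(np.linalg.eigvalsh(M) >= -tol))

M = np.array([[2.0, -1.0], [-1.0, 2.0]])
print(frobenius_inner(M, np.eye(2)))  # = trace(M) = 4.0
print(is_psd(M))                      # True (eigenvalues 1 and 3)
```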

Cholesky Factorization
Fact: Let M ∈ Sn. TFAE:
(i) M is positive semidefinite;
(ii) xTMx ≥ 0 for all x ∈ Rn;
(iii) M = UTU for some matrix U ∈ Rn×n.
Cholesky factorization of a positive semidefinite matrix M: the computation of a U that satisfies (iii). (Often needed in SDP.)
Outer-product Cholesky factorization: recursive, O(n^3) operations for M ∈ Rn×n.
Base case: M = (m) ∈ R1×1 → U = (√m); m is a nonnegative eigenvalue of M, so √m is well defined.
Otherwise, write M in block form M = ( q  wT ; w  N ) with q ∈ R, w ∈ Rn-1, N ∈ Sn-1.
[Note] q ≥ 0: if not, e1TMe1 = q < 0 → contradiction to (ii).

Cholesky Factorization (cont.)
Case i) q > 0: the first row of U is (√q, wT/√q), and N - (1/q)wwT is positive semidefinite again → recursively compute its decomposition N - (1/q)wwT = VTV.
∴ M = UTU with U = ( √q  wT/√q ; 0  V ).
Case ii) q = 0: then (ii) forces w = 0, ∴ M = ( 0  0 ; 0  N ) and it suffices to recurse on N = VTV, giving U = ( 0  0 ; 0  V ).
Runtime: O(n^3) when counting elementary operations (including square roots); in a bit model the square roots are in general irrational and can only be computed approximately.
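A Python sketch of the recursive outer-product factorization described above (illustrative only; in practice one would call a library routine, and note that numpy.linalg.cholesky requires strict positive definiteness):

```python
import numpy as np

def psd_cholesky(M, tol=1e-12):
    """Return an upper-triangular U with M = U.T @ U, for symmetric PSD M."""
    M = np.asarray(M, dtype=float)
    n = M.shape[0]
    if n == 1:                                    # base case: M = (m), U = (sqrt(m))
        return np.array([[np.sqrt(max(M[0, 0], 0.0))]])
    q, w, N = M[0, 0], M[0, 1:], M[1:, 1:]
    U = np.zeros((n, n))
    if q > tol:                                   # case i): q > 0
        U[0, 0] = np.sqrt(q)
        U[0, 1:] = w / np.sqrt(q)
        V = psd_cholesky(N - np.outer(w, w) / q)  # N - (1/q) w w^T is PSD again
    else:                                         # case ii): q = 0 forces w = 0 for PSD M
        V = psd_cholesky(N)
    U[1:, 1:] = V
    return U

M = np.array([[4.0, 2.0], [2.0, 2.0]])
U = psd_cholesky(M)
print(np.allclose(U.T @ U, M))  # True
```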

Semidefinite Programs
Def Semidefinite program (SDP) in equational form:
  maximize <C, X> subject to A(X) = b, X ⪰ 0,
where X ∈ Sn is the variable, C ∈ Sn, b ∈ Rm, and A: Sn → Rm is a linear operator, A(X) = (<A1,X>, …, <Am,X>) for given matrices A1, …, Am ∈ Sn.
As in the LP case, we say the SDP is feasible if there is some feasible solution, i.e. some X ⪰ 0 with A(X) = b.
The value of a feasible SDP: sup{<C,X> : A(X) = b, X ⪰ 0} (possibly +∞).
An optimal solution: a feasible solution X* s.t. <C,X*> equals this value.

A Randomized 0.878-Approximation Algorithm
Goemans and Williamson ('94, STOC).
Fact! An SDP can be solved up to any desired accuracy ε, where the runtime depends polynomially on the sizes of the input numbers and on log(R/ε), and R is the maximum norm ||X|| of a feasible solution.
Formulating the MAX-CUT problem as a constrained optimization problem:
V = {1, 2, …, n}. Variables x1, x2, …, xn ∈ {-1, 1}.
Any assignment of these variables encodes a cut (S, V \ S): S = {i ∈ V : xi = 1}, V \ S = {i ∈ V : xi = -1}.
Define (1 - xixj)/2: the contribution of the pair {i,j} to the size of the cut (1 if xi ≠ xj, 0 otherwise).
Reformulated MAX-CUT problem:
  maximize Σ_{{i,j}∈E} (1 - xixj)/2 subject to xi ∈ {-1, 1} for all i ∈ V.   … (1)
Value of this program: opt(G), the size of a maximum cut.

A Randomized 0.878-Approximation Algorithm
SDP Relaxation. opt(G): value of (1), i.e. the size of a maximum cut. Goal: an SDP whose value is an upper bound for opt(G).
Let xi ∈ Sn-1 = {x ∈ Rn : ||x|| = 1}, the (n-1)-dimensional unit sphere. Consider the problem:
  maximize Σ_{{i,j}∈E} (1 - xiTxj)/2 subject to xi ∈ Sn-1 for all i ∈ V.   … (2)
Remark: S0 = {-1, 1} can be embedded into Sn-1 via the mapping L: x → (0, …, 0, x).
If (x1, x2, …, xn) is a feasible solution of (1) with some objective function value, then (L(x1), L(x2), …, L(xn)) is a feasible solution of (2) with the same objective function value, since L(xi)TL(xj) = xixj.
∴ (2) is a relaxation of (1): a program with more feasible solutions, so its value is at least opt(G). This value is still finite, since xiTxj is lower bounded by -1 for all i, j.

A Randomized 0.878-Approximation Algorithm
Set xij := xiTxj. Then (2) → SDP:
  maximize Σ_{{i,j}∈E} (1 - xij)/2 subject to xii = 1 for all i ∈ V, X = (xij) ⪰ 0.   … (3)
xij := xiTxj → X = UTU, where U = (x1 x2 … xn) has the vectors xi as its columns, so X ⪰ 0 by condition (iii); xi ∈ Sn-1 → xii = 1.
Conversely, if X is a feasible matrix for (3), then the columns of any matrix U with X = UTU form a feasible vector solution of (2).
∴ (2) ⇔ (3). Up to the constant term |E|/2, (3) assumes the form of an SDP.
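As an illustration, (3) can be written down directly with an off-the-shelf modeling tool; the sketch below uses cvxpy (assumed to be installed, with an SDP-capable solver such as SCS), and the function name maxcut_sdp is ours:

```python
import cvxpy as cp
import numpy as np

def maxcut_sdp(n, edges):
    X = cp.Variable((n, n), PSD=True)            # X = (x_ij), X PSD
    obj = cp.Maximize(sum((1 - X[i, j]) / 2 for (i, j) in edges))
    prob = cp.Problem(obj, [cp.diag(X) == 1])    # x_ii = 1 for all i
    prob.solve()
    # Recover unit vectors x_1*, ..., x_n* as the columns of U with X ~ U^T U;
    # an eigendecomposition is used instead of Cholesky to tolerate numerical error.
    w, Q = np.linalg.eigh(X.value)
    U = np.diag(np.sqrt(np.clip(w, 0, None))) @ Q.T
    return prob.value, U

value, U = maxcut_sdp(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
print(value)  # approximately 4: the 4-cycle has a maximum cut of size 4
```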

A Randomized 0.878-Approximation Algorithm (3) is feasible with the same finite value γ ≥ opt(G) as (2). From the “Fact! ”  We can find in poly time a matrix X* with objective function value at least γ – ε, for any ε>0. Recall We can compute U s.t. X*=UTU in poly time. ∴ The columns x1*, x2*, … , xn* of U : form an almost optimal solution of (2) :

A Randomized 0.878-Approximation Algorithm
Rounding the vector solution: mapping Sn-1 → S0.
Choose p ∈ Sn-1 uniformly at random and define xi := 1 if pTxi* ≥ 0, xi := -1 otherwise.
→ p partitions Sn-1 into a closed hemisphere H = {x ∈ Sn-1 : pTx ≥ 0} and its complement.
→ Vectors in H → 1, vectors in the complement → -1.

A Randomized 0.878-Approximation Algorithm
Lemma. Let xi*, xj* ∈ Sn-1. Then Prob[xi* and xj* are rounded to different values] = arccos(xi*Txj*)/π.
Getting the bound. Expected number of cut edges:
  E[size of the cut] = Σ_{{i,j}∈E} arccos(xi*Txj*)/π.
Know: Σ_{{i,j}∈E} (1 - xi*Txj*)/2 ≥ opt(G) - ε.
Lemma. For z ∈ [-1,1], arccos(z)/π ≥ 0.8785 · (1-z)/2.
∴ E[size of the cut] ≥ 0.8785 · Σ_{{i,j}∈E} (1 - xi*Txj*)/2 ≥ 0.8785 · (opt(G) - ε).
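A Python sketch of this hyperplane rounding (illustrative; it takes a matrix U whose columns are the unit vectors x1*, …, xn*, e.g. as produced by the maxcut_sdp sketch above):

```python
import numpy as np

def round_hyperplane(U, edges, seed=None):
    rng = np.random.default_rng(seed)
    n = U.shape[1]
    # A standard Gaussian direction, once normalized, is uniform on the sphere;
    # the signs p^T x_i* do not change under normalization, so we skip it.
    p = rng.normal(size=U.shape[0])
    x = np.where(p @ U >= 0, 1, -1)          # x_i = 1 on the closed hemisphere, else -1
    S = {i for i in range(n) if x[i] == 1}
    cut_size = sum(1 for (i, j) in edges if x[i] != x[j])
    return S, cut_size
```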

A Randomized 0.878-Approximation Algorithm
By choosing ε ≤ 5·10^-4, and since opt(G) ≥ 1 for any graph with at least one edge, E[size of the cut] ≥ 0.878 · opt(G).
∴ This is a randomized 0.878-approximation algorithm for MAX-CUT.

SDP Application: Algorithms for Sparsest Cut

Sparsest Cut Problem
A weighted graph G = (V,E) with positive edge weights (= cost, capacity) is given: edge weight ce for every edge e ∈ E, and |V| = n.
A set of pairs of vertices {(s1,t1), (s2,t2), …, (sk,tk)} with associated demands Di between si and ti.
Def Sparsest Cut problem: minimize over cuts S ⊆ V the sparsity
  sparsity(S) = c(S, V \ S) / D(S, V \ S),
where c(S, V \ S) = Σ_{e ∈ E(S, V \ S)} ce is the total capacity of the edges crossing the cut and D(S, V \ S) = Σ_{i : |{si,ti} ∩ S| = 1} Di is the total demand separated by the cut.
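For intuition, a brute-force Python sketch of the sparsity objective and of the sparsest cut (illustrative only; it enumerates all cuts, so it is usable only on tiny instances):

```python
from itertools import combinations

def sparsity(S, cap_edges, demands):
    """cap_edges: [(u, v, c_e)]; demands: [(s_i, t_i, D_i)]."""
    S = set(S)
    cap = sum(c for (u, v, c) in cap_edges if (u in S) != (v in S))
    dem = sum(D for (s, t, D) in demands if (s in S) != (t in S))
    return cap / dem if dem > 0 else float("inf")

def sparsest_cut_bruteforce(vertices, cap_edges, demands):
    best_val, best_S = float("inf"), None
    for r in range(1, len(vertices)):
        for S in combinations(vertices, r):
            val = sparsity(S, cap_edges, demands)
            if val < best_val:
                best_val, best_S = val, set(S)
    return best_val, best_S
```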

Sparsest Cut Problem - Example
(Figure omitted in the transcript.) Solid edges: edges of the graph, each with weight 1. Dashed edges: the demand edges, each of demand value 1. The sparsest cut value = 1.

Sparsest Cut Problem - Example (cont.)
(Figure omitted in the transcript.) Solid edges: graph edges of weight 1; dashed edges: demand edges of demand value 1. The cut shown crosses capacity 3 and separates demand 3, so its sparsity is 3/3 = 1, matching the sparsest cut value of 1.

Relation with ncut and Expansion
Unit demands case: the demands consist of all pairs, i.e. Dij = 1 for all pairs {i,j} ⊆ V.
→ The Sparsest Cut problem in the unit demands case = minimize the node-normalized cut (= ncut):
  ncut(S) = c(S, V \ S) / (|S| · |V \ S|).
If |S| ≤ n/2, then n/2 ≤ |V \ S| ≤ n, so |S| · |V \ S| is within a factor 2 of n · |S|,
→ the above problem = (up to a factor of 2) minimize the expansion:
  expansion(S) = c(S, V \ S) / |S|, over S with |S| ≤ n/2.
(Speaker's note: in the unit demands case sparsity = ncut; why does the second equivalence hold? Because of the factor-2 bound above.)

LP Relaxation
Cut metric: for S ⊆ V, define δS(i,j) = 1 if |{i,j} ∩ S| = 1 and δS(i,j) = 0 otherwise.
Any n-point metric d can be associated with the vector (d(i,j))_{i<j} in R^{n(n-1)/2}.
Reformulated Sparsest Cut problem:
  minimize over S ⊆ V:  Σ_{i<j} cij δS(i,j) / Σ_{i<j} Dij δS(i,j),   … (1) (equiv.)
where δS ∈ R^{n(n-1)/2}, cij is the weight of the edge between i and j (0 if there is no edge), and Dij is the demand between i and j.

LP Relaxation
CUTn := the positive cone generated by all cut metrics δS, S ⊆ V.
From convexity, the optimum of the ratio over CUTn is achieved at an extreme ray, i.e. at a cut metric, so (1) is equivalent to
  minimize Σ_{i<j} cij d(i,j) / Σ_{i<j} Dij d(i,j) over d ∈ CUTn \ {0}.   … (2) (equiv.)
Prop. d is l1-embeddable iff d is in CUTn. →
  minimize the same ratio over all l1-embeddable metrics d.   … (3) (equiv.)
(Relax l1 to all metrics) →
  minimize the same ratio over all metrics d on V.   … (4) (relaxed)
Check: the positive cone is convex, and the Prop is a known fact (explained verbally in the talk).

LP Relaxation
After normalizing the denominator (the ratio is scale-invariant), (4) becomes an LP:
  minimize Σ_{i<j} cij d(i,j) subject to Σ_{i<j} Dij d(i,j) = 1, d(i,j) ≤ d(i,k) + d(k,j) for all i,j,k, d(i,j) ≥ 0.   … (5) (relaxed)
Thm. Suppose that for each metric (V,d) there exists a metric μ = μ(d) ∈ l1 such that d(x,y) ≤ μ(x,y) ≤ α d(x,y) for all x,y ∈ V. Then (5) has an integrality gap ≤ α.
Thm. For all metrics d, there exists μ = μ(d) ∈ l1 such that α = O(log n). Moreover, the number of dimensions needed is ≤ O(log^2 n). (Approximate max-flow min-cut theorem for multicommodity flows.)
Cor. The LP relaxation of Sparsest Cut has an integrality gap of O(log n).
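A cvxpy sketch of the LP (5), with the triangle inequalities written out explicitly (illustrative; cvxpy is assumed available, and the roughly n^3 constraints limit this to small n):

```python
import itertools
import cvxpy as cp

def sparsest_cut_lp(n, cap, dem):
    """cap, dem: dicts {(i, j): value} with i < j; missing pairs are treated as 0."""
    pairs = list(itertools.combinations(range(n), 2))
    d = {p: cp.Variable(nonneg=True) for p in pairs}           # d(i,j) >= 0
    dist = lambda i, j: d[(min(i, j), max(i, j))]
    cons = [sum(dem.get(p, 0) * d[p] for p in pairs) == 1]     # normalize the demand term
    for i, j, k in itertools.permutations(range(n), 3):        # triangle inequalities
        if i < j:
            cons.append(dist(i, j) <= dist(i, k) + dist(k, j))
    prob = cp.Problem(cp.Minimize(sum(cap.get(p, 0) * d[p] for p in pairs)), cons)
    prob.solve()
    return prob.value
```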

LP Relaxation
Any metric μ in l1 can be written as a positive linear combination of cut metrics: μ = Σ_S αS δS with αS ≥ 0.
→ Σ_{i<j} cij μ(i,j) / Σ_{i<j} Dij μ(i,j) = Σ_S αS c(S, V \ S) / Σ_S αS D(S, V \ S) ≥ min_S c(S, V \ S) / D(S, V \ S).
→ Simply pick the best cut S amongst the ones with non-zero αS in the cut-decomposition of μ; its sparsity is at most the ratio achieved by μ.

SDP Relaxation
Tighter relaxation:
- LP: minimizing over all cut metrics = minimizing over all l1-metrics → relaxed to minimizing over all metrics.
- SDP: minimizing over all cut metrics = minimizing over all l1-metrics → relaxed to minimizing over all l2^2-metrics.
If d ∈ l1 then d ∈ l2^2, so this is still a relaxation. (Relax l1 to l2^2-metrics) →
  minimize Σ_{i<j} cij d(i,j) / Σ_{i<j} Dij d(i,j) over l2^2-metrics d, i.e. d(i,j) = ||vi - vj||^2 for vectors vi ∈ Rn, with the triangle inequality on these squared distances.   … (4)' (relaxed)
SDP (after normalizing the denominator, in the Gram matrix X = (viTvj)):
  minimize Σ_{i<j} cij ||vi - vj||^2 subject to Σ_{i<j} Dij ||vi - vj||^2 = 1, ||vi - vj||^2 ≤ ||vi - vk||^2 + ||vk - vj||^2 for all i,j,k.   … (5)' (relaxed)
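A cvxpy sketch of the SDP (5)', modeling the l2^2 distances through the Gram matrix X = (viTvj) (illustrative; same assumptions as the LP sketch above):

```python
import itertools
import cvxpy as cp

def sparsest_cut_sdp(n, cap, dem):
    """cap, dem: dicts {(i, j): value} with i < j; missing pairs are treated as 0."""
    X = cp.Variable((n, n), PSD=True)                     # X = (v_i^T v_j)
    d = lambda i, j: X[i, i] + X[j, j] - 2 * X[i, j]      # d(i,j) = ||v_i - v_j||^2
    pairs = list(itertools.combinations(range(n), 2))
    cons = [sum(dem.get(p, 0) * d(*p) for p in pairs) == 1]
    for i, j, k in itertools.permutations(range(n), 3):   # l2^2 triangle inequalities
        cons.append(d(i, j) <= d(i, k) + d(k, j))
    prob = cp.Problem(cp.Minimize(sum(cap.get(p, 0) * d(*p) for p in pairs)), cons)
    prob.solve()
    return prob.value
```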

SDP Relaxation
Lemma (Structure Lemma, Arora-Rao-Vazirani). In the uniform (unit) demands case, if the SDP solution is a well-spread set of vectors on the unit ball, then there exist sets S, T ⊆ V of size Ω(n) that are Δ-separated in the l2^2 metric, with Δ = Ω(1/√(log n)).
O(√(log n)) approximation in the uniform case: the SDP embedding lies on the unit ball → use the Structure Lemma. Pick S and T satisfying the Structure Lemma and cut at a random distance t ∈ (0, Δ) from S.
→ Each edge {i,j} is cut with probability at most d(i,j)/Δ, so the expected total capacity crossing the cut is at most Σ_{i<j} cij d(i,j) / Δ = O(√(log n)) times the SDP objective, which yields the O(√(log n)) approximation.

References
Introduction to SDP via MAX-CUT:
- Jiří Matoušek and Bernd Gärtner, Approximation Algorithms and Semidefinite Programming (http://www.ti.inf.ethz.ch/ew/courses/ApproxSDP09/).
SDP Application: Algorithms for Sparsest Cut:
- Andreas Noack, Energy Models for Graph Clustering, Journal of Graph Algorithms and Applications, 2007.
- Sanjeev Arora, Satish Rao, and Umesh Vazirani, Expander Flows, Geometric Embeddings and Graph Partitioning, 2007 (longer version of the ACM STOC 2004 paper).
- Scribe notes of CMU 18-854B Spring 2008 "Advanced Approximation Algorithms", Lectures 19 and 27 (http://www.cs.cmu.edu/~anupamg/adv-approx/), Lecturer: Anupam Gupta.