
1 Introduction to Semidefinite Programming via MAX-CUT and SDP Application: Algorithms for Sparsest Cut
Bo-Young Kim, Applied Algorithm Lab, KAIST, 2011.6.13 (Mon)

2 Contents
Introduction to SDP via MAX-CUT
- Review: MAX-CUT Problem & Approximation Algorithms
- [1st Algorithm] A Naïve Algorithm: Randomized 0.5-Approximation
- From LP to SDP (informal)
- Cholesky Factorization
- SDP (formal definition)
- [2nd Algorithm] Randomized 0.878-Approximation

3 Review: MAX-CUT Problem
A graph G = (V, E) is given. A cut of G: a pair (S, V \ S) for a nonempty proper subset S ⊆ V. Size of the cut: the number of edges with exactly one endpoint in S, i.e. |{ {u,v} ∈ E : u ∈ S, v ∈ V \ S }|. Def (Recall) MAX-CUT problem. Decision version: given G and n ∈ N, is there a cut of size ≥ n? NP-complete (Garey, Johnson, and Stockmeyer, '74 STOC). Optimization version: find a cut of maximum size. NP-hard.

4 Review: Approximation Algorithm
Def (Recall) Approximation Algorithm. P: a maximization (resp. minimization) problem, with a set of instances. A: an algorithm that returns, for every instance I, a feasible solution A(I) ∈ F(I). ρ: a function of the input size. A is a ρ-approximation algorithm for P if the following two properties hold: (i) A runs in polynomial time; (ii) the value of A(I) is at least ρ · opt(I) (resp. at most ρ · opt(I)) for every instance I. * From now on, only consider maximization. If ρ is a constant c, A is a c-approximation algorithm; c ≤ 1 (resp. c ≥ 1). A randomized ρ-approximation algorithm: expected polynomial runtime, and the expected value of A(I) is at least ρ · opt(I).

5 A Naïve Algorithm : Randomized 0.5-Approximation
The algorithm: put each vertex into S independently with probability 1/2. This is a randomized 0.5-approximation algorithm for the MAX-CUT problem. It runs in polynomial time, and E[size of the cut] = |E|/2 ≥ 0.5 · opt(G). Deterministic 0.5-approximation and even 0.5(1+1/|E|)-approximation algorithms are possible. Until 1994, no c-approximation algorithm could be found for c > 0.5.
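The naive algorithm is simple enough to sketch in code (a minimal illustration; the function name and the 4-cycle example are assumptions, not from the slides). Each vertex joins S independently with probability 1/2, so each edge is cut with probability 1/2 and the expected cut size is |E|/2:

```python
import random

def random_cut(n, edges):
    """Naive algorithm: put each vertex into S independently with probability 1/2."""
    S = {v for v in range(n) if random.random() < 0.5}
    size = sum(1 for u, v in edges if (u in S) != (v in S))
    return S, size

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]        # illustrative 4-cycle
sizes = [random_cut(4, edges)[1] for _ in range(10000)]
print(sum(sizes) / len(sizes))                  # close to |E|/2 = 2
```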

6 From LP to SDP
Def (Recall) LP in equational form: maximize c^T x subject to Ax = b, x ≥ 0, where x, b, c are column vectors (x, c ∈ R^n, b ∈ R^m) and A ∈ R^(m×n). From LP to SDP (brief introduction): Replace the vector space R^n by another real vector space, the space S^n of symmetric n×n matrices. Replace the inner product <x, y> = x^T y over R^n by <X, Y> = Σ_ij X_ij Y_ij = trace(X^T Y) over S^n. Replace the constraint x ≥ 0 by the constraint X ⪰ 0 (X positive semidefinite). Def (Recall) A symmetric matrix M is positive semidefinite: all its eigenvalues are nonnegative.

7 Cholesky Factorization
Fact: Let M ∈ S^n. TFAE: (i) M is positive semidefinite; (ii) x^T M x ≥ 0 for all x ∈ R^n; (iii) M = U^T U for some matrix U ∈ R^(n×n). Cholesky factorization of a positive semidefinite matrix M: the computation of a U satisfying (iii). (Often needed in SDP.) Outer-product Cholesky factorization: recursive, O(n^3) operations for M ∈ R^(n×n). Base case: M = (t) ∈ R^(1×1), so U = (√t), since t is a nonnegative eigenvalue. Otherwise, write M in block form M = [ t  q^T ; q  N ] with t ∈ R, q ∈ R^(n-1), N ∈ S^(n-1). [Note] t ≥ 0; if not, e_1^T M e_1 = t < 0, a contradiction to (ii).

8 Cholesky Factorization
Case t > 0: the Schur complement N - (1/t) q q^T is positive semidefinite again, so recursively compute its decomposition N - (1/t) q q^T = U'^T U'. ∴ M = U^T U where U = [ √t  q^T/√t ; 0  U' ]. Case t = 0: (ii) forces q = 0. ∴ M = U^T U where U = [ 0  0 ; 0  U' ] and N = U'^T U'. Polynomial-time algorithm in a bit model (counting elementary operations).
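A minimal sketch of this outer-product factorization (an iterative version of the recursion above; the tolerance used for the t = 0 test is an implementation detail, not from the slide):

```python
import numpy as np

def psd_cholesky(M, tol=1e-12):
    """Outer-product Cholesky: return upper-triangular U with M = U.T @ U.

    Works for positive *semi*definite M (zero pivots allowed), unlike
    np.linalg.cholesky, which requires positive definiteness.
    """
    M = np.array(M, dtype=float)         # work on a copy
    n = M.shape[0]
    U = np.zeros((n, n))
    for k in range(n):
        pivot = M[k, k]
        if pivot > tol:
            U[k, k] = np.sqrt(pivot)
            U[k, k + 1:] = M[k, k + 1:] / U[k, k]
            # Schur complement: continue on the trailing block.
            M[k + 1:, k + 1:] -= np.outer(U[k, k + 1:], U[k, k + 1:])
        # else: pivot ~ 0 forces the whole row to be ~ 0 for a PSD matrix.
    return U

# Quick check on a rank-deficient PSD matrix (illustrative).
A = np.random.randn(4, 3)
M = A @ A.T
U = psd_cholesky(M)
print(np.allclose(M, U.T @ U))           # True (up to numerical error)
```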

9 Semidefinite Programs
Def. Semidefinite program (SDP) in equational form: maximize <C, X> subject to <A_i, X> = b_i (i = 1, ..., m) and X ⪰ 0, where C, A_1, ..., A_m ∈ S^n, b ∈ R^m, and X ∈ S^n is the variable; the constraints can be written as A(X) = b, where A: S^n → R^m is a linear operator. As in the LP case, we say the SDP is feasible if there is some feasible solution, i.e. an X ⪰ 0 satisfying all equality constraints. The value of a feasible SDP: the supremum of <C, X> over all feasible X. An optimal solution: a feasible solution X* attaining this value.
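A small sketch of an SDP in this form using the cvxpy modeling library (the library and the toy data are assumptions, not part of the slides): maximize <C, X> subject to <A_1, X> = b_1 and X ⪰ 0.

```python
import cvxpy as cp
import numpy as np

# Illustrative data: one equality constraint <A1, X> = trace(X) = 1.
C = np.array([[1.0, 0.5],
              [0.5, 1.0]])
A1 = np.eye(2)
b1 = 1.0

X = cp.Variable((2, 2), PSD=True)                  # PSD=True encodes X ⪰ 0
inner = lambda M, Y: cp.sum(cp.multiply(M, Y))     # Frobenius inner product <M, Y>
prob = cp.Problem(cp.Maximize(inner(C, X)), [inner(A1, X) == b1])
print(prob.solve(), X.value)
```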

10 A Randomized 0.878-Approximation Algorithm
Goemans and Williamson ('94, STOC). Fact! An SDP can be solved up to any desired accuracy ε, where the runtime depends polynomially on the sizes of the input numbers and on log(R/ε), where R is the maximum norm ||X|| of a feasible solution. Formulating the MAX-CUT problem as a constrained optimization problem: V = {1, 2, ..., n}; variables x_1, x_2, ..., x_n ∈ {-1, 1}. Any assignment of these variables encodes a cut (S, V \ S): S = {i ∈ V : x_i = 1}, V \ S = {i ∈ V : x_i = -1}. Define (1 - x_i x_j)/2: the contribution of the pair {i, j} to the size of the cut (1 if x_i ≠ x_j, 0 otherwise). Reformulated MAX-CUT problem: maximize Σ_{{i,j}∈E} (1 - x_i x_j)/2 subject to x_i ∈ {-1, 1} for all i ... (1). Value of this program: opt(G), the size of a maximum cut.
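A quick sanity check of the ±1 encoding (illustrative code, not from the slides): the pair {i, j} contributes (1 - x_i x_j)/2, which is 1 exactly when x_i ≠ x_j.

```python
def cut_size_from_signs(x, edges):
    """x[i] in {-1, +1} encodes S = {i : x[i] == +1}; each edge {i,j} contributes (1 - x_i x_j)/2."""
    return sum((1 - x[i] * x[j]) / 2 for i, j in edges)

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]              # illustrative 4-cycle
print(cut_size_from_signs([1, -1, 1, -1], edges))     # S = {0, 2} cuts all 4 edges -> 4.0
```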

11 A Randomized 0.878-Approximation Algorithm
SDP relaxation. opt(G): value of (1), i.e. the size of a maximum cut. Goal: an SDP whose value is an upper bound on opt(G). Let x_i ∈ S^{n-1} = {x ∈ R^n : ||x|| = 1}, the (n-1)-dimensional unit sphere, and consider the problem: maximize Σ_{{i,j}∈E} (1 - x_i^T x_j)/2 subject to x_i ∈ S^{n-1} for all i ... (2). Remark: S^0 = {-1, 1} can be embedded into S^{n-1} via the mapping L: x ↦ (0, ..., 0, x). If (x_1, x_2, ..., x_n) is a feasible solution of (1) with some objective function value, then (L(x_1), L(x_2), ..., L(x_n)) is a feasible solution of (2) with the same objective function value. ∴ (2) is a relaxation of (1): a program with more feasible solutions, so its value is at least opt(G). This value is still finite since x_i^T x_j is lower bounded by -1 for all i, j.

12 A Randomized 0.878-Approximation Algorithm
Substituting x_ij := x_i^T x_j, (2) becomes the SDP: maximize Σ_{{i,j}∈E} (1 - x_ij)/2 subject to x_ii = 1 for all i, X = (x_ij) ⪰ 0 ... (3). Indeed, x_ij := x_i^T x_j means X = U^T U where U = (x_1 x_2 ... x_n), so X is positive semidefinite by condition (iii), and x_i ∈ S^{n-1} gives x_ii = 1. Conversely, if X is a feasible matrix for (3), the columns of any matrix U with X = U^T U form a feasible vector solution of (2). ∴ (2) ⇔ (3). Up to the constant term |E|/2, (3) assumes the form of an SDP.
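Relaxation (3) can be written almost verbatim with cvxpy (a hedged sketch; the library and the small example graph are assumptions, not from the slides):

```python
import cvxpy as cp

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]     # illustrative graph
n = 4

X = cp.Variable((n, n), PSD=True)                     # X = (x_ij), positive semidefinite
objective = cp.Maximize(sum((1 - X[i, j]) / 2 for i, j in edges))
constraints = [X[i, i] == 1 for i in range(n)]        # x_ii = 1, i.e. unit vectors
sdp_value = cp.Problem(objective, constraints).solve()
print(sdp_value)                                      # upper bound gamma >= opt(G)
```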

13 A Randomized 0.878-Approximation Algorithm
(3) is feasible with the same finite value γ ≥ opt(G) as (2). From the "Fact!" above, we can find in polynomial time a feasible matrix X* with objective function value at least γ - ε, for any ε > 0. Recall: we can compute U such that X* = U^T U in polynomial time (Cholesky factorization). ∴ The columns x_1*, x_2*, ..., x_n* of U form an almost optimal solution of (2): Σ_{{i,j}∈E} (1 - x_i*^T x_j*)/2 ≥ γ - ε ≥ opt(G) - ε.

14 A Randomized 0.878-Approximation Algorithm
Rounding the vector solution: a mapping S^{n-1} → S^0. Choose p ∈ S^{n-1} uniformly at random. Define x_i = 1 if p^T x_i* ≥ 0 and x_i = -1 otherwise. That is, p partitions S^{n-1} into a closed hemisphere H = {x ∈ S^{n-1} : p^T x ≥ 0} and its complement: vectors in H map to 1, vectors in the complement map to -1.
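A sketch of this rounding step (assuming X_star is a near-optimal solution of (3), e.g. from the cvxpy sketch after slide 12; an eigendecomposition stands in for the Cholesky step because a numerically computed X_star may have tiny negative eigenvalues):

```python
import numpy as np

def gw_round(X_star, rng=None):
    """Goemans-Williamson rounding of an SDP solution X_star = U^T U."""
    rng = rng or np.random.default_rng()
    w, V = np.linalg.eigh(X_star)
    U = (V * np.sqrt(np.clip(w, 0.0, None))).T    # columns of U are the vectors x_i*
    p = rng.standard_normal(U.shape[0])           # Gaussian vector: direction uniform on the sphere
    return np.where(p @ U >= 0, 1, -1)            # x_i = +1 iff p^T x_i* >= 0; S = {i : x_i == +1}
```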

15 A Randomized 0.878-Approximation Algorithm
Lemma. Let x_i*, x_j* ∈ S^{n-1}. Then Pr[x_i ≠ x_j] = arccos(x_i*^T x_j*)/π. Getting the bound. Expected number of cut edges: E[size of the cut] = Σ_{{i,j}∈E} arccos(x_i*^T x_j*)/π. Know: Σ_{{i,j}∈E} (1 - x_i*^T x_j*)/2 ≥ opt(G) - ε. Lemma. For z ∈ [-1, 1], arccos(z)/π ≥ 0.87856 · (1 - z)/2. ∴ E[size of the cut] ≥ 0.87856 · (opt(G) - ε).
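The constant in the second lemma can be checked numerically (a quick illustration, not from the slides): up to rounding, 0.87856 is the minimum of (arccos(z)/π) / ((1 - z)/2) over z ∈ [-1, 1).

```python
import numpy as np

z = np.linspace(-1.0, 0.999999, 1_000_000)
ratio = (np.arccos(z) / np.pi) / ((1.0 - z) / 2.0)
print(ratio.min())        # ~ 0.87856, attained around z ~ -0.69
```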

16 A Randomized 0.878-Approximation Algorithm
By choosing ε ≤ 5·10^-4, E[size of the cut] ≥ 0.878 · opt(G) (using opt(G) ≥ 1 for any graph with at least one edge). Hence this is a randomized 0.878-approximation algorithm for MAX-CUT.

17 SDP Application : Algorithms for Sparsest Cut

18 Sparsest Cut Problem
A weighted graph G = (V, E) with positive edge weights (= cost, capacity) is given: edge weight c_e for every edge e ∈ E, |V| = n. Also given: a set of pairs of vertices {(s_1, t_1), (s_2, t_2), ..., (s_k, t_k)} with associated demands D_i between s_i and t_i. Def. Sparsest Cut problem: minimize the sparsity of a cut S ⊆ V, sparsity(S) = C(S) / D(S), where C(S) = Σ_{e ∈ δ(S)} c_e is the total weight of the edges crossing the cut and D(S) = Σ_{i : S separates s_i and t_i} D_i is the total demand separated by the cut.
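A small helper (hypothetical names and data layout, not from the slides) that evaluates the sparsity of a given cut S exactly as defined above:

```python
def sparsity(S, edge_weights, demands):
    """Sparsity of the cut (S, V\\S).

    edge_weights: dict {(u, v): c_e}; demands: dict {(s_i, t_i): D_i}.
    """
    S = set(S)
    crosses = lambda u, v: (u in S) != (v in S)
    cut_capacity = sum(c for (u, v), c in edge_weights.items() if crosses(u, v))
    separated_demand = sum(D for (s, t), D in demands.items() if crosses(s, t))
    return cut_capacity / separated_demand if separated_demand > 0 else float("inf")
```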

19 Sparsest Cut Problem - Example
The sparsest cut value = 1. Solid edges : edges of the graph with weight 1. Dashed edges : the demand edges of demand value 1.

20 Sparsest Cut Problem - Example
The sparsest cut value = 1: the cut shown separates edges of total weight 3 and demand pairs of total demand 3, so its sparsity is 3/3 = 1. Solid edges : edges of the graph with weight 1. Dashed edges : the demand edges of demand value 1.

21 Relation with ncut and Expansion
Unit demands case: the demands consist of all pairs, with D_ij = 1 for every pair {i, j} ⊆ V. Then the separated demand of a cut S is |S|·|V \ S|, so the Sparsest Cut problem in the unit demands case = minimize the node-normalized cut (= ncut): c(δ(S)) / (|S|·|V \ S|). If |S| ≤ n/2, then n/2 ≤ |V \ S| ≤ n, so up to a factor of 2 the above problem = minimize the expansion: c(δ(S)) / |S| over S with |S| ≤ n/2. (Presenter's note: in the unit demands case, sparsity = ncut. Why does the second equivalence hold?)

22 LP Relaxation
Cut metric: for S ⊆ V, δ_S(i, j) = 1 if exactly one of i, j lies in S, and δ_S(i, j) = 0 otherwise. Any n-point metric can be associated with a vector in R^(n(n-1)/2), one coordinate per pair {i, j}. Reformulated Sparsest Cut problem: minimize (Σ_{i<j} c_ij δ_S(i, j)) / (Σ_{i<j} D_ij δ_S(i, j)) over cuts S ⊆ V ... (1) (equiv.), where c_ij is the weight of the edge between i and j (0 if there is no such edge) and D_ij is the demand between i and j (0 if the pair carries no demand).

23 LP Relaxation
CUT_n: the positive cone generated by all cut metrics δ_S, S ⊆ V. The objective of (1) is a ratio invariant under scaling, and from convexity the optimum over CUT_n is achieved at an extreme ray, i.e. at a cut metric; hence (1) is equivalent to: minimize (Σ_{i<j} c_ij d(i, j)) / (Σ_{i<j} D_ij d(i, j)) over d ∈ CUT_n ... (2) (equiv.). Prop. d is l1-embeddable iff d ∈ CUT_n. ⇒ minimize the same ratio over l1-embeddable metrics d ... (3) (equiv.). (Relax l1 to all metrics) ⇒ minimize the same ratio over all metrics d on V ... (4) (relaxed). Check: a positive cone is convex, & the FACT (explained verbally).

24 LP Relaxation
LP: minimize Σ_{i<j} c_ij d(i, j) subject to Σ_{i<j} D_ij d(i, j) = 1 and d a metric (triangle inequalities, d ≥ 0) ... (5) (relaxed). Thm. Suppose for each metric (V, d) there exists a metric μ = μ(d) ∈ l1 such that d(x, y) ≤ μ(x, y) ≤ α·d(x, y) for all x, y ∈ V. Then (5) has integrality gap ≤ α. Thm. For all metrics d, there exists μ = μ(d) ∈ l1 with α = O(log n); moreover, the number of dimensions needed is O(log²n). (Approximate max-flow min-cut theorem for multi-commodity flows.) Cor. The LP relaxation of Sparsest Cut has integrality gap O(log n).

25 LP Relaxation
Any metric in l1 can be written as a positive linear combination of cut metrics, μ = Σ_S α_S δ_S with α_S ≥ 0. ⇒ To extract a cut from μ, simply pick the best cut S amongst the ones with non-zero α_S in the cut decomposition of μ.
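For an l1 metric given by a one-dimensional embedding, the cuts with non-zero α_S are exactly the threshold cuts, so "picking the best cut in the decomposition" amounts to the following sketch (names and the dict representation of c and D are assumptions, not from the slides):

```python
def best_threshold_cut(f, c, D):
    """f: dict vertex -> real (a line embedding, i.e. a 1-dimensional l1 metric).
    c, D: dicts {(u, v): weight} for edge capacities and demands.
    Return the threshold cut S_t = {v : f[v] <= t} of minimum sparsity."""
    def sparsity(S):
        crosses = lambda u, v: (u in S) != (v in S)
        num = sum(w for (u, v), w in c.items() if crosses(u, v))
        den = sum(w for (u, v), w in D.items() if crosses(u, v))
        return num / den if den > 0 else float("inf")

    best = None
    for t in sorted(set(f.values()))[:-1]:        # cutting above the largest value is trivial
        S = {v for v, x in f.items() if x <= t}
        if best is None or sparsity(S) < sparsity(best):
            best = S
    return best
```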

26 SDP Relaxation
Tighter relaxation. LP: minimizing over all cut metrics = minimizing over all l1-metrics, relaxed to minimizing over all metrics. SDP: minimizing over all cut metrics = minimizing over all l1-metrics, relaxed to minimizing over all l2² metrics (negative-type metrics). If d ∈ l1 then d ∈ l2². (Relax l1 to l2²-metrics) ⇒ minimize (Σ_{i<j} c_ij d(i, j)) / (Σ_{i<j} D_ij d(i, j)) over l2² metrics d ... (4)' (relaxed). SDP: minimize Σ_{i<j} c_ij ||v_i - v_j||² subject to Σ_{i<j} D_ij ||v_i - v_j||² = 1 and ||v_i - v_j||² + ||v_j - v_k||² ≥ ||v_i - v_k||² for all i, j, k ... (5)' (relaxed).
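A hedged cvxpy sketch of SDP (5)' (the library, the tiny instance, and the variable names are assumptions): the Gram matrix X encodes the vectors v_i, d(i, j) = X_ii + X_jj - 2X_ij, and the l2² triangle inequalities are imposed on all triples.

```python
import cvxpy as cp
import numpy as np
from itertools import permutations

# Illustrative instance: unit demands on 4 vertices, edges of a 4-cycle.
n = 4
c = np.zeros((n, n))
for u, v in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    c[u, v] = c[v, u] = 1.0
D = np.ones((n, n)) - np.eye(n)

X = cp.Variable((n, n), PSD=True)                 # Gram matrix of the vectors v_i
d = {(i, j): X[i, i] + X[j, j] - 2 * X[i, j] for i in range(n) for j in range(n)}

constraints = [sum(D[i, j] * d[i, j] for i in range(n) for j in range(i + 1, n)) == 1]
# l2^2 triangle inequalities on all ordered triples.
constraints += [d[i, k] <= d[i, j] + d[j, k] for i, j, k in permutations(range(n), 3)]

objective = cp.Minimize(sum(c[i, j] * d[i, j] for i in range(n) for j in range(i + 1, n)))
print(cp.Problem(objective, constraints).solve())  # lower bound on the sparsest cut value
```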

27 SDP Relaxation
Lemma (Structure Lemma). In the uniform-demand case, if the vectors v_1, ..., v_n of the SDP solution lie on the unit ball and are well spread, then there exist sets S, T ⊆ V with |S|, |T| = Ω(n) such that ||v_i - v_j||² ≥ Δ = Ω(1/√(log n)) for all i ∈ S, j ∈ T.
O((log n)^(1/2)) approximation in the uniform case: the SDP embedding lies on the unit ball, so the Structure Lemma applies. Pick S and T satisfying the Structure Lemma and cut at a random distance from S. ⇒ The expected total capacity crossing the cut is at most (1/Δ)·Σ_{{i,j}∈E} c_ij ||v_i - v_j||² = O(√(log n)) · (SDP value).

28 References
Introduction to SDP via MAX-CUT:
- Jiří Matoušek and Bernd Gärtner, Approximation Algorithms and Semidefinite Programming.
SDP Application: Algorithms for Sparsest Cut:
- Andreas Noack, Energy Models for Graph Clustering, Journal of Graph Algorithms and Applications, 2007.
- Sanjeev Arora, Satish Rao, and Umesh Vazirani, Expander Flows, Geometric Embeddings and Graph Partitioning, 2007 (longer version of an ACM STOC 2004 paper).
- Scribe notes of CMU B, Spring 2008, "Advanced Approximation Algorithms", Lectures 19 and 27 (Lecturer: Anupam Gupta).

