C&O 355 Lecture 23 N. Harvey

Topics
– Weight-Splitting Method
– Shortest Paths
– Primal-Dual Interpretation
– Local-Ratio Method
– Max Cut

Weight-Splitting Method
Let C ⊆ ℝ^n be the set of feasible solutions to some optimization problem. Let w ∈ ℝ^n be a "weight vector". x is "optimal under w" if x optimizes min { w^T y : y ∈ C }.
Lemma (Hassin '82, Frank '81): Suppose w = w_1 + w_2. Suppose that x is optimal under w_1, and x is optimal under w_2. Then x is optimal under w.
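The slide states the lemma without proof; a one-line proof sketch in the same notation (my own wording):

```latex
% Since x minimizes w_1^T y and w_2^T y over C separately, for every y in C:
\[
  w^{\mathsf T} x \;=\; w_1^{\mathsf T} x + w_2^{\mathsf T} x
  \;\le\; w_1^{\mathsf T} y + w_2^{\mathsf T} y \;=\; w^{\mathsf T} y,
\]
% so x also minimizes w^T y over C, i.e., x is optimal under w.
```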

Weight-Splitting Method
Appears in this paper by András Frank (screenshot shown on the slide).

Weight-Splitting Method
Scroll down a bit in András Frank's paper… the Weight-Splitting Method was discovered in the U. Waterloo C&O Department!

ShortestPath( G, S, t, w )
Input: Digraph G = (V,A), source vertices S ⊆ V, destination vertex t ∈ V, and integer lengths w(a), such that w(a) > 0 unless both endpoints of a are in S.
Output: A shortest path from some s ∈ S to t.
If t ∈ S, return the empty path p = ()
Set w_1(a) = 1 for all a ∈ δ⁺(S), and w_1(a) = 0 otherwise
Set w_2 = w - w_1
Set S' = S ∪ { u : ∃ s ∈ S with w_2((s,u)) = 0 }
Set p' = (v_1, v_2, …, t) = ShortestPath( G, S', t, w_2 )
If v_1 ∈ S, then set p = p'
Else, set p = (s, v_1, v_2, …, t) where s ∈ S and w_2((s,v_1)) = 0
Return path p
To find a shortest s-t path, run ShortestPath(G, {s}, t, w).
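A minimal Python sketch of this recursion (my own, not from the slides): the graph is given as a set of arcs with a dict of integer arc lengths, and the sketch assumes t is reachable from S.

```python
def shortest_path(arcs, S, t, w):
    """Weight-splitting shortest path, following the slide's recursion.

    arcs : set of directed arcs (u, v)
    S    : set of source vertices
    t    : destination vertex
    w    : dict mapping each arc to an integer length; w[a] > 0 unless
           both endpoints of a lie in S
    Returns a list of vertices forming a shortest path from some s in S to t.
    """
    if t in S:
        return [t]                      # the "empty path": it starts and ends at t

    # delta^+(S): arcs leaving S
    delta_plus = {(u, v) for (u, v) in arcs if u in S and v not in S}

    # Split w = w1 + w2, where w1 puts weight 1 on every arc leaving S.
    w1 = {a: (1 if a in delta_plus else 0) for a in arcs}
    w2 = {a: w[a] - w1[a] for a in arcs}

    # Grow S by every vertex reachable from S along a w2-zero arc.
    S2 = set(S) | {v for (u, v) in arcs if u in S and w2[(u, v)] == 0}

    p2 = shortest_path(arcs, S2, t, w2)     # recurse with the reduced weights

    if p2[0] in S:
        return p2
    # Otherwise prepend an arc (s, v1) of w2-length 0 that reaches the path's start.
    v1 = p2[0]
    s = next(u for u in S if (u, v1) in arcs and w2[(u, v1)] == 0)
    return [s] + p2
```

For instance, shortest_path({('s','a'), ('a','t'), ('s','t')}, {'s'}, 't', {('s','a'): 1, ('a','t'): 1, ('s','t'): 3}) returns ['s', 'a', 't'].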

Correctness of Algorithm
Claim: The algorithm returns a shortest path from S to t.
Proof: By induction on the number of recursive calls.
If t ∈ S, then the empty path is trivially shortest.
Otherwise, p' is a shortest path from S' to t under w_2. So p is a shortest path from S to t under w_2. (Note: length_w2(p) = length_w2(p'), because if we added an arc, it has w_2-length 0.)
Note: p cannot re-enter S, otherwise a subpath of p would be a shorter S-t path. So p uses exactly one arc of δ⁺(S). So length_w1(p) = 1. But any S-t path has length at least 1 under w_1. So p is a shortest path from S to t under w_1.
⇒ p is a shortest S-t path under arc-lengths w, by the Weight-Splitting Lemma. ∎

Another IP & LP for Shortest Paths
Make a variable x_a for each arc a ∈ A. The IP and the corresponding LP with its dual are shown below; the dual has a variable y_C for each S-t cut C.
Theorem: The Weight-Splitting Algorithm finds optimal primal and dual solutions to these LPs.
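The formulas appear only as images in the original slides; the following reconstruction is mine, chosen to match the constraints used in the dual-feasibility proof below. The IP is (P) with the additional requirement that x be integral (0/1).

```latex
\[
\text{(P)}\quad
\begin{aligned}
\min\;\; & w^{\mathsf T} x \\
\text{s.t.}\;\; & \sum_{a \in C} x_a \ge 1 \quad \text{for every $S$-$t$ cut } C,\\
& x_a \ge 0 \quad \text{for every arc } a \in A.
\end{aligned}
\]
\[
\text{(D)}\quad
\begin{aligned}
\max\;\; & \sum_{C} y_C \\
\text{s.t.}\;\; & \sum_{C:\, a \in C} y_C \le w(a) \quad \text{for every arc } a \in A,\\
& y_C \ge 0 \quad \text{for every $S$-$t$ cut } C.
\end{aligned}
\]
```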

ShortestPath( G, S, t, w )
Output: A shortest path p from S to t, and an optimal solution y for the dual LP with weights w
If t ∈ S
– Return (p = (), y = 0)
Set w_1(a) = 1 for all a ∈ δ⁺(S), and w_1(a) = 0 otherwise
Set w_2 = w - w_1
Set S' = S ∪ { u : ∃ s ∈ S with w_2((s,u)) = 0 }
Set (p', y') = ShortestPath(G, S', t, w_2), where p' = (v_1, v_2, …, t)
If v_1 ∈ S
– Set p = p'
Else
– Set p = (s, v_1, v_2, …, t) where s ∈ S and w_2((s,v_1)) = 0
Set y_C = 1 if C = δ⁺(S), otherwise y_C = y'_C
Return (p, y)

Proof of Theorem
Claim: y is feasible for the dual LP with weights w.
Proof: By induction, y' is feasible for the dual LP with weights w_2.
So Σ_{C : a∈C} y'_C ≤ w_2(a) for all a ∈ A.
The only difference between y and y' is y_{δ⁺(S)} = 1. So:
Σ_{C : a∈C} y_C = Σ_{C : a∈C} y'_C + [1 if a ∈ δ⁺(S)] ≤ w_2(a) + [1 if a ∈ δ⁺(S)] = w(a).
Clearly y is non-negative. So y is feasible for the dual LP with weights w. ∎

Let x be the characteristic vector of path p, i.e., x_a = 1 if a ∈ p, otherwise x_a = 0.
Note: x is feasible for the primal, since p is an S-t path, and its objective value is w^T x = length_w(p).
Claim: x is optimal for the primal and y is optimal for the dual.
Proof: Both x and y are feasible. We already argued that length_w2(p) = length_w2(p') and length_w1(p) = 1.
⇒ length_w(p) = length_w2(p') + 1 = Σ_C y'_C + 1 = Σ_C y_C.
So the primal objective at x equals the dual objective at y. ∎

How to solve combinatorial IPs? Two common approaches:
1. Design a combinatorial algorithm that directly solves the IP. Often such algorithms have a nice LP interpretation. E.g.: the weight-splitting algorithm for shortest paths.
2. Relax the IP to an LP; prove that they give the same solution; solve the LP by the ellipsoid method. Need to show special structure of the LP's extreme points. Sometimes we can analyze the extreme points combinatorially, e.g. perfect matching (in bipartite graphs), Min s-t Cut, Max Flow. Sometimes we can use algebraic structure of the constraints, e.g. maximum matching and vertex cover in bipartite graphs (using totally unimodular matrices).

Many optimization problems are hard to solve exactly
(The slide shows a P vs. NP diagram.)
Easy (in P): LP: max { c^T x : x ∈ P }; Maximum Bipartite Matching, Maximum Flow, Min s-t Cut, Shortest Path, …
Hard: IP: max { c^T x : x ∈ P, x ∈ {0,1}^n }; Maximum cut in a graph? Largest clique in a graph? Smallest vertex cover in a graph?
Since these are hard to solve exactly, let's instead aim for an approximate solution.

Approximation Algorithms
Algorithms for optimization problems that give provably near-optimal solutions.
Catch-22: How can you know a solution is near-optimal if you don't know the optimum?
Mathematical Programming to the rescue! Our techniques for analyzing exact solutions can often be modified to analyze approximate solutions.
– E.g.: Approximate Weight-Splitting
– E.g.: Relax the IP to a (non-integral!) LP

Local-Ratio Method (Approximate Weight-Splitting) [Bar-Yehuda, Even]
Let C ⊆ ℝ^n be the set of feasible solutions to an optimization problem. Let w ∈ ℝ^n be a "weight vector". x is "r-approximate under w" if w^T x ≥ r · max { w^T y : y ∈ C }.
Lemma: Suppose w = w_1 + w_2. Suppose that x is r-approximate under both w_1 and w_2. Then x is r-approximate under w.
Proof: Let z be optimal under w. Let z_i be optimal under w_i, i ∈ {1,2}. Then:
w^T x = w_1^T x + w_2^T x ≥ r·w_1^T z_1 + r·w_2^T z_2 ≥ r·(w_1^T z + w_2^T z) = r·w^T z.
So x is also r-approximate under w. ∎

Our Puzzle
Original statement: There are n students in a classroom. Every two students are either enemies or friends. The teacher wants to divide the students into two groups to work on a project while he leaves the classroom. Unfortunately, putting two enemies in the same group is likely to lead to bloodshed. So the teacher would like to partition the students into two groups in a way that maximizes the number of enemies that belong to different groups.
Restated in graph terminology: Let G = (V,E) be a graph with n vertices. There is an edge {u,v} if students u and v are enemies. For U ⊆ V, let δ(U) = { {u,v} ∈ E : u ∈ U, v ∉ U }. Find a set U ⊆ V such that |δ(U)| is maximized.
This is the Max Cut Problem: max { |δ(U)| : U ⊆ V }.
This is a computationally hard problem: there is no polynomial-time algorithm that solves it exactly, unless P = NP.
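A tiny brute-force check of the Max Cut objective (my own illustration, not from the slides) makes the problem concrete; it is only usable for very small graphs, which is exactly why approximation algorithms are needed.

```python
from itertools import combinations

def cut_size(E, U):
    """Number of edges {u, v} with exactly one endpoint in U."""
    U = set(U)
    return sum(1 for (u, v) in E if (u in U) != (v in U))

def brute_force_max_cut(V, E):
    """Exhaustive search over all subsets U of V; exponential in |V|."""
    best_U, best = set(), 0
    for k in range(len(V) + 1):
        for U in combinations(V, k):
            val = cut_size(E, U)
            if val > best:
                best_U, best = set(U), val
    return best_U, best

# Toy "enemy graph": a 5-cycle. Any cut of an odd cycle misses at least one edge,
# so the optimum here is 4.
V = [0, 1, 2, 3, 4]
E = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(brute_force_max_cut(V, E))   # ({0, 2}, 4)
```

The 5-cycle also illustrates the odd-cycle bound used on the next slide: any cut of an odd cycle of length k cuts at most k-1 of its edges.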

Puzzle Solution
Input: |V| = 750, |E| = 3604 (# enemies)
Solutions:
Who | # Cut Edges | Upper Bound on Optimum | Ratio
Student X | … | … | …%
Me | … | … | …%
Student Y | … | … | …%
Slide callouts: the upper-bound column means every solution cuts at most that many edges. The methods used were Simulated Annealing, a Greedy Algorithm, and an algorithm based on Semidefinite Programming (Lecture 24). A cut can contain at most all edges; a better bound follows because, for any odd cycle of length k, any cut cuts at most k-1 of its edges, and the bound is obtained by greedily packing odd-cycles.

History of Max Cut Approximation Algorithms
We will see two algorithms:
– Local-Ratio Method: also achieves ratio 50%
– Goemans-Williamson Algorithm (next lecture)
Who | Ratio | Technique
Sahni-Gonzales | 50% | Greedy algorithm
Folklore | 50% | Random Cut
Folklore | 50% | Linear Programming
Goemans-Williamson | ≈87.8% | Semidefinite Programming
Trevisan | ≈53.1% | Spectral Graph Theory

Weighted Max Cut
We can handle the weighted version of the problem.
Let G = (V,E) be the complete graph with n vertices. For each e ∈ E, there is an integer weight w(e) ≥ 0.
Notation: For U ⊆ V, let δ(U) = { {u,v} : u ∈ U, v ∉ U }. Let δ(U)^T w denote Σ_{e ∈ δ(U)} w(e).
Objective: Find a set U ⊆ V such that δ(U)^T w is maximized.

MaxCut( G, w )
Input: Complete graph G = (V,E), edge weights w
Output: X ⊆ V s.t. δ(X)^T w ≥ (1/2) · optimum
If |V| = 1
– Return X = ∅
Else
– Pick any v ∈ V
– Let X = MaxCut( G\v, w )
– Return either X or X ∪ {v}, whichever is better
Sketch of Algorithm Analysis
Idea: Either X or X ∪ {v} cuts half the weight of the edges incident on v. Since this holds for all v, we cut at least half the total edge weight.

Local-Ratio Algorithm
MaxCut( G, w )
Input: Complete graph G = (V,E), edge weights w
Output: X ⊆ V s.t. δ(X)^T w ≥ (1/2) · optimum
If |V| = 1
– Return X = ∅
Else
– Pick any v ∈ V
– Set w_1(e) = w(e) if e is incident on v, otherwise w_1(e) = 0
– Set w_2 = w - w_1
– Let G' = G\v
– Let X' = MaxCut( G', w_2 )
– Return either X' or X' ∪ {v}, whichever is better
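A compact Python rendering of the local-ratio recursion (my own sketch, not from the slides), with edge weights stored as a dict on unordered vertex pairs:

```python
def cut_weight(V, w, X):
    """Total weight of edges with exactly one endpoint in X (G is the complete graph on V)."""
    X = set(X)
    return sum(w.get(frozenset((u, v)), 0)
               for i, u in enumerate(V) for v in V[i + 1:]
               if (u in X) != (v in X))

def max_cut_local_ratio(V, w):
    """Local-ratio 1/2-approximation for weighted Max Cut on the complete graph.

    V : list of vertices
    w : dict mapping frozenset({u, v}) to a nonnegative weight (0 if absent)
    """
    if len(V) == 1:
        return set()                                   # the only cut of a single vertex is empty
    v = V[-1]                                          # pick any vertex
    # w1 is w restricted to edges incident on v; w2 = w - w1 lives on G \ v.
    w2 = {e: wt for e, wt in w.items() if v not in e}
    X_prime = max_cut_local_ratio(V[:-1], w2)          # recurse on G \ v
    # Return whichever of X', X' + {v} cuts more weight under the original w.
    candidates = [X_prime, X_prime | {v}]
    return max(candidates, key=lambda X: cut_weight(V, w, X))
```

On the 5-cycle instance from the earlier snippet (weight 1 on its five edges, weight 0 implicitly elsewhere), this happens to return the optimal cut {1, 3} of weight 4; in general it only guarantees at least half the optimum.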

Correctness of Algorithm
Claim: The algorithm returns X ⊆ V s.t. δ(X)^T w ≥ ½ · optimum.
Proof: By induction on |V|.
If |V| = 1, then any cut is optimal.
By induction, X' is ½-optimal for graph G' with weights w_2.
The edges incident on v have w_2-weight zero. So both X' and X' ∪ {v} are ½-optimal for G with weights w_2.
Any cut U has w_1-weight at most Σ_{e∈E} w_1(e). Every edge incident on v is cut by either X' or X' ∪ {v}.
So δ(X')^T w_1 + δ(X' ∪ {v})^T w_1 = Σ_{e∈E} w_1(e) ≥ optimum under w_1.
So either X' or X' ∪ {v} is ½-optimal under w_1. So the better of X' or X' ∪ {v} is ½-optimal under w_1 and w_2
⇒ also ½-optimal under w. ∎

What's Next? Future C&O classes you could take
If you're unhappy that the ellipsoid method is too slow, you can learn about practical methods in:
– C&O 466 "Continuous Optimization"
If you liked… | You might like…
Max Flows, Min Cuts, Shortest Paths | C&O 351 "Network Flows", C&O 450 "Combinatorial Optimization", C&O 453 "Network Design"
Integer Programs | C&O 452 "Integer Programming"
König's Theorem, Hall's Theorem | C&O 342 "Intro to Graph Theory", C&O 442 "Graph Theory", C&O 444 "Algebraic Graph Theory"
Convex Functions, Subgradient Inequality, KKT Theorem | C&O 367 "Nonlinear Optimization", C&O 463 "Convex Optimization", C&O 466 "Continuous Optimization"
Semidefinite Programs | C&O 471 "Semidefinite Optimization"