Approximation Algorithms for Prize-Collecting Forest Problems with Submodular Penalty Functions
Chaitanya Swamy, University of Waterloo
Joint work with Yogeshwer Sharma and David Williamson, Cornell University

Prize-collecting Steiner tree (PCST)
Given: graph G=(V,E), edge costs c_e ≥ 0, root r ∈ V, penalties p_v ≥ 0 on vertices.
Goal: choose a set of edges F ⊆ E so as to minimize ∑_{e∈F} c_e + ∑_{v not connected to r} p_v, i.e., the cost of the edges picked + the penalty of the nodes disconnected from r.
Bienstock et al. gave a 3-approximation LP-rounding algorithm.
Goemans-Williamson (GW) gave a primal-dual 2-approximation algorithm.
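To make the objective concrete, here is a minimal Python sketch (the function name and the toy instance are illustrative, not from the talk) that evaluates the PCST objective for a chosen edge set F: the cost of the picked edges plus the penalties of the vertices F leaves disconnected from the root.

from collections import defaultdict

def pcst_objective(vertices, edge_costs, root, penalties, F):
    """Cost of edges in F plus penalties of vertices not connected to the root via F."""
    # Build adjacency lists for the chosen edges only.
    adj = defaultdict(list)
    for (u, v) in F:
        adj[u].append(v)
        adj[v].append(u)
    # Depth-first search from the root to find the vertices F connects to it.
    reached, stack = {root}, [root]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in reached:
                reached.add(w)
                stack.append(w)
    edge_cost = sum(edge_costs[e] for e in F)
    penalty = sum(penalties[v] for v in vertices if v not in reached)
    return edge_cost + penalty

# Toy instance (illustrative): path r - a - b, with b expensive to reach.
vertices = ['r', 'a', 'b']
edge_costs = {('r', 'a'): 1.0, ('a', 'b'): 10.0}
penalties = {'r': 0.0, 'a': 5.0, 'b': 2.0}
print(pcst_objective(vertices, edge_costs, 'r', penalties, F=[('r', 'a')]))  # 1 + 2 = 3.0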

PCST with submodular penalty function
Given: graph G=(V,E), edge costs c_e ≥ 0, root r ∈ V; the penalty is given by a set function p: 2^V → ℝ≥0, where p(A) is the penalty if the set A ⊆ V is disconnected from r.
p is submodular: p(A) + p(B) ≥ p(A ∪ B) + p(A ∩ B), e.g., p(A) = min(|A|, M).
Goal: choose a set of edges F ⊆ E so as to minimize ∑_{e∈F} c_e + p({v not connected to r}).
Generalizes the penalty function of PCST.
Introduced by Hayrapetyan-S-Tardos, who gave a 2-approximation algorithm by extending the GW primal-dual algorithm.
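As a quick sanity check on the definition, a brute-force Python sketch (illustrative only) can verify that the example penalty p(A) = min(|A|, M) is indeed submodular on a small ground set:

from itertools import combinations

def powerset(ground):
    return [frozenset(c) for r in range(len(ground) + 1)
            for c in combinations(ground, r)]

def is_submodular(p, ground):
    """Brute-force check of p(A) + p(B) >= p(A | B) + p(A & B) for all A, B."""
    subsets = powerset(ground)
    return all(p(A) + p(B) >= p(A | B) + p(A & B)
               for A in subsets for B in subsets)

M = 2
p = lambda A: min(len(A), M)           # the example penalty from the slide
print(is_submodular(p, {1, 2, 3, 4}))  # True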

Prize-collecting Steiner forest (PCSF)
Given: graph G=(V,E), edge costs c_e ≥ 0, source-sink pairs s_i-t_i, penalties p_i ≥ 0 on each s_i-t_i pair.
Goal: choose a set of edges F ⊆ E so as to minimize ∑_{e∈F} c_e + ∑_{i: s_i not connected to t_i in F} p_i.
Generalizes the connectivity function of PCST.
Introduced by Jain-Hajiaghayi, who gave a 3-approximation primal-dual algorithm.

General framework for prize-collecting forest problems
Prize-Collecting Forest (PCF):
– connectivity function: arbitrary 0-1 function
– penalty function: submodular function on collections of sets of vertices
PCF generalizes prize-collecting Steiner tree, PCST with a submodular penalty function, and prize-collecting Steiner forest.

Prize-Collecting Forest (PCF)
Given: graph G=(V,E) (|V| = n), edge costs c_e ≥ 0,
connectivity function f: 2^V → {0, 1}: f(S) = 1 ⇒ need an edge from the border of S, δ(S) := {(u,v) ∈ E: exactly one of u, v is in S},
penalty function p: 2^(2^V) → ℝ≥0: p(𝒮) is the penalty if the collection 𝒮 of subsets is violated (a set S is violated if f(S) = 1 and F ∩ δ(S) = ∅).
Goal: choose a set of edges F ⊆ E so as to minimize ∑_{e∈F} c_e + p({S ⊆ V: f(S) = 1, F ∩ δ(S) = ∅}).
Example: prize-collecting Steiner forest:
f(S) = 1 iff there exists some i s.t. exactly one of s_i, t_i is in S;
p(𝒮) = ∑_{i: ∃ S ∈ 𝒮 that separates s_i-t_i} p_i.
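To illustrate how PCSF fits this framework, here is a small Python sketch (the instance and the representation are made up: vertex sets as frozensets, collections as iterables of frozensets) of the PCSF connectivity and penalty functions from the example above:

# Illustrative PCSF instance: two pairs with penalties (names are made up).
pairs = [(('s1', 't1'), 4.0), (('s2', 't2'), 7.0)]

def f(S):
    """PCSF connectivity function: 1 iff some pair is separated by the cut S."""
    return int(any((s in S) != (t in S) for (s, t), _ in pairs))

def p(collection):
    """PCSF penalty on a collection of vertex sets: sum of p_i over pairs
    separated by at least one set in the collection."""
    return sum(pi for (s, t), pi in pairs
               if any((s in S) != (t in S) for S in collection))

S1 = frozenset({'s1'})            # separates s1 from t1
S2 = frozenset({'s1', 's2'})      # separates both pairs
print(f(S1), p([S1]), p([S1, S2]))   # 1 4.0 11.0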

PCF: properties of p(·)
p(∅) = 0.
Monotonicity: if 𝒮 ⊆ 𝒯 then p(𝒮) ≤ p(𝒯).
Submodularity: p(𝒮) + p(𝒯) ≥ p(𝒮 ∪ 𝒯) + p(𝒮 ∩ 𝒯).
Complement property: for A ⊆ V, p({A, A^c}) = p({A}).
Union property: for A, B ⊆ V, p({A, B, A ∪ B}) = p({A, B}).
Inactivity property: if f(A) = 0, then p({A}) = 0.
For any 0-1 connectivity function f, we can define the penalty function p_f(𝒮) = M (a very large number) if ∃ S ∈ 𝒮 with f(S) = 1, and 0 otherwise. Solving PCF with (f, p_f) ⇔ solving the network design problem with connectivity function f ⇒ we need certain restrictions on p(·).
If f(∅) = 0, then f is a 0-1 proper function iff p_f satisfies the above properties.
p(·) will be given as an oracle (the ground set has 2^|V| elements).

Our Results
We give a primal-dual 3-approximation algorithm
– requires novel ideas in the implementation and analysis, to overcome difficulties caused by the exponential size of the ground set of p(·).
We give an LP-rounding 2.54-approximation algorithm
– solving the LP relaxation poses a significant challenge
– the LP has 2^n constraints and 2^(2^n) variables: it is not clear if even a basic solution has a polynomial-size description
– we reformulate the LP as a convex program and solve it via the ellipsoid method; evaluating the objective function and computing a subgradient both require solving an LP of size 2^n × 2^(2^n)
– we overcome this difficulty by proving certain structural properties; these are also required for the rounding procedure.

An Integer Program
x_e: indicates if edge e is picked
z_𝒮: indicates if the penalty is incurred for collection 𝒮 ⊆ 2^V

Minimize   ∑_e c_e x_e + ∑_𝒮 p(𝒮) z_𝒮
subject to ∑_{e∈δ(S)} x_e + ∑_{𝒮: S∈𝒮} z_𝒮 ≥ f(S)   for each S ⊆ V
           x_e, z_𝒮 ∈ {0, 1}                        for each e, 𝒮

A Linear Program
x_e: indicates if edge e is picked
z_𝒮: indicates if the penalty is incurred for collection 𝒮 ⊆ 2^V

Minimize   ∑_e c_e x_e + ∑_𝒮 p(𝒮) z_𝒮                              (PCF-LP)
subject to ∑_{e∈δ(S)} x_e + ∑_{𝒮: S∈𝒮} z_𝒮 ≥ f(S)   for each S ⊆ V
           x_e, z_𝒮 ≥ 0                             for each e, 𝒮
(the integrality constraints x_e, z_𝒮 ∈ {0, 1} are relaxed to nonnegativity)

The LP has 2^(2^n) variables and 2^n constraints.
It is not clear if even a basic solution has a polynomial-size description – what does "solving the LP" mean?

A Compact Formulation
x_e: indicates if edge e is picked
z_𝒮: indicates if the penalty is incurred for collection 𝒮 ⊆ 2^V

Minimize  h(x) := ∑_e c_e x_e + g(x)   s.t.  0 ≤ x_e ≤ 1 for each e        (PCF-CP)
where  g(x) := min ∑_𝒮 p(𝒮) z_𝒮                                            (Pen-P)
       s.t.  ∑_{𝒮: S∈𝒮} z_𝒮 ≥ f(S) – ∑_{e∈δ(S)} x_e   for each S ⊆ V
             z_𝒮 ≥ 0                                   for each 𝒮

g(x) is convex, so (PCF-CP) is a convex program. It is equivalent to the earlier LP.
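For intuition only, g(x) can be evaluated by brute force on a toy instance by writing (Pen-P) explicitly and handing it to an LP solver; the sketch below assumes scipy is available and enumerates every collection, so it is exponential and purely illustrative:

from itertools import combinations
from scipy.optimize import linprog

def all_collections(sets):
    return [frozenset(c) for r in range(len(sets) + 1)
            for c in combinations(sets, r)]

def g_of_x(sets, f, p, x_delta):
    """g(x) = min sum_C p(C) z_C  s.t.  sum_{C: S in C} z_C >= f(S) - x(delta(S)),  z >= 0."""
    colls = all_collections(sets)
    cost = [p(C) for C in colls]
    A_ub, b_ub = [], []
    for S in sets:
        # Rewrite each >= constraint as <= for linprog.
        A_ub.append([-1.0 if S in C else 0.0 for C in colls])
        b_ub.append(-(f(S) - x_delta[S]))
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * len(colls))
    return res.fun

# Toy single-pair instance (made up): both {s} and {t} separate the pair, penalty 5.
sets = [frozenset({'s'}), frozenset({'t'})]
f = lambda S: 1
p = lambda C: 5.0 if C else 0.0
x_delta = {frozenset({'s'}): 0.3, frozenset({'t'}): 0.6}
print(g_of_x(sets, f, p, x_delta))   # 3.5 = 5 * max(0.7, 0.4) here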

The Overall Strategy
1. Get an optimal (or (1+ε)-optimal) solution x to the convex program using the ellipsoid method.
2. Round the fractional solution x to an integer solution
– need that f is a 0-1 proper function, or is weakly supermodular
– use a 2-approximation algorithm for the network-design problem without penalties (Goemans-Williamson or Jain).
Obtain a 2.54-approximation algorithm for the prize-collecting forest problem.

The Ellipsoid Method
Min h(x) subject to x ∈ P.
Start with a ball containing the polytope P; y_i = center of the current ellipsoid.
If y_i is infeasible, use a violated inequality to chop off the infeasible half-ellipsoid.
If y_i ∈ P – how to make progress? Adding the inequality h(x) ≤ h(y_i) makes separation difficult.
d ∈ ℝ^m is a subgradient of h(·) at u if, for every v, h(v) – h(u) ≥ d·(v – u).
Instead, let d be a subgradient at y_i and use the subgradient cut d·(x – y_i) ≤ 0.
In either case, the new ellipsoid is the minimum-volume ellipsoid containing the "unchopped" half-ellipsoid.
Let x_1, x_2, …, x_k be the feasible centers encountered (points in P). One can show that min_{i=1…k} h(x_i) ≤ OPT + ε.
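A minimal numerical sketch of this loop (illustrative, assuming minimization over the box [0,1]^m as in (PCF-CP), with the standard central-cut ellipsoid update; the starting ball, iteration count, and tolerances are ad hoc):

import numpy as np

def ellipsoid_minimize(h, subgrad, m, iters=500):
    """Minimize convex h over the box [0,1]^m with the ellipsoid method,
    using box constraints as feasibility cuts and subgradients as objective cuts."""
    c = np.full(m, 0.5)                  # center of the starting ball
    A = np.eye(m) * m                    # ball of radius sqrt(m) contains [0,1]^m
    best_x, best_val = None, np.inf
    for _ in range(iters):
        if np.any(c < 0) or np.any(c > 1):
            # Feasibility cut: a violated box inequality.
            j = int(np.argmax(np.maximum(-c, c - 1)))
            d = np.eye(m)[j] * (1.0 if c[j] > 1 else -1.0)
        else:
            val = h(c)
            if val < best_val:
                best_x, best_val = c.copy(), val
            d = subgrad(c)               # objective (subgradient) cut
        # Central-cut ellipsoid update for the half-space d.(x - c) <= 0.
        Ad = A @ d
        denom = np.sqrt(d @ Ad)
        if denom < 1e-12:
            break
        c = c - (1.0 / (m + 1)) * Ad / denom
        A = (m * m / (m * m - 1.0)) * (A - (2.0 / (m + 1)) * np.outer(Ad, Ad) / (d @ Ad))
    return best_x, best_val

# Toy usage: minimize h(x) = |x1 - 0.3| + |x2 - 0.9| over [0,1]^2.
h = lambda x: abs(x[0] - 0.3) + abs(x[1] - 0.9)
subgrad = lambda x: np.array([np.sign(x[0] - 0.3), np.sign(x[1] - 0.9)])
print(ellipsoid_minimize(h, subgrad, m=2))   # ≈ ([0.3, 0.9], 0.0)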

Computing a subgradient
h(x) := ∑_e c_e x_e + g(x), where
g(x) := min {∑_𝒮 p(𝒮) z_𝒮 : ∑_{𝒮: S∈𝒮} z_𝒮 ≥ f(S) – ∑_{e∈δ(S)} x_e for all S ⊆ V, z ≥ 0}.
By LP duality, g(x) = max {∑_S (f(S) – ∑_{e∈δ(S)} x_e) y_S : ∑_{S∈𝒮} y_S ≤ p(𝒮) for all 𝒮, y ≥ 0}.
Consider a point u ∈ ℝ^m. Let y be an optimal dual solution to g(u). So
h(u) = ∑_e c_e u_e + ∑_S (f(S) – ∑_{e∈δ(S)} u_e) y_S = ∑_e d_e u_e + ∑_S f(S) y_S, where d_e = c_e – ∑_{S: e∈δ(S)} y_S.
At any point v ∈ ℝ^m, y is a feasible solution to the dual of g(v), so
h(v) ≥ ∑_e c_e v_e + ∑_S (f(S) – ∑_{e∈δ(S)} v_e) y_S = ∑_e d_e v_e + ∑_S f(S) y_S.
Lemma: For any point v ∈ ℝ^m, we have h(v) – h(u) ≥ d·(v – u), i.e., d is a subgradient of h(·) at the point u.
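Given an optimal dual solution y to g(u), the edge part of the subgradient is just d_e = c_e – ∑_{S: e∈δ(S)} y_S; a small sketch, assuming y is stored as a dictionary keyed by frozenset vertex sets:

def subgradient(edges, edge_costs, y):
    """d_e = c_e - sum of y_S over sets S whose boundary delta(S) contains e.
    y maps frozenset vertex sets S to their dual value y_S."""
    def in_boundary(e, S):
        u, v = e
        return (u in S) != (v in S)
    return {e: edge_costs[e] - sum(yS for S, yS in y.items() if in_boundary(e, S))
            for e in edges}

# Illustrative usage with a made-up dual solution.
edges = [('r', 'a'), ('a', 'b')]
edge_costs = {('r', 'a'): 1.0, ('a', 'b'): 10.0}
y = {frozenset({'b'}): 2.0, frozenset({'a', 'b'}): 0.5}
print(subgradient(edges, edge_costs, y))   # {('r','a'): 0.5, ('a','b'): 8.0}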

Solving the dual
g(x) = max ∑_S [f(S) – x(δ(S))] y_S                   (Pen-D)
       s.t. ∑_{S∈𝒮} y_S ≤ p(𝒮)   for all 𝒮 ⊆ 2^V
            y_S ≥ 0              for all S
Notation: x(δ(S)) = ∑_{e∈δ(S)} x_e;  p_𝒮(A) = p(𝒮 ∪ {A}) – p(𝒮).
Bad: the dual has 2^n variables and 2^(2^n) constraints.
Good: the feasible region is a polymatroid, since p(·) is a monotone submodular function, so Edmonds' greedy algorithm yields an optimal solution:
– sort the sets S in decreasing order of [f(S) – x(δ(S))]
– for the i-th set S_i, if [f(S_i) – x(δ(S_i))] > 0, set y_{S_i} = p_{{S_1,…,S_{i-1}}}(S_i)
Bad: this reduces the complexity to 2^n, but is still not polytime.
Good: show that there is an optimal solution where the sets S with y_S > 0 form a laminar family – the key structural lemma.
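A toy-sized sketch of Edmonds' greedy step as described above (it enumerates all 2^n vertex sets, so it only illustrates the rule and is not efficient; the instance is made up):

from itertools import combinations

def greedy_dual(vertices, f, p, x_delta):
    """Edmonds' greedy for (Pen-D), enumerated over all vertex sets.
    p takes a frozenset-of-frozensets collection; x_delta maps sets to x(delta(S))."""
    sets = [frozenset(c) for r in range(1, len(vertices) + 1)
            for c in combinations(vertices, r)]
    # Sort sets in decreasing order of the dual objective coefficient.
    sets.sort(key=lambda S: f(S) - x_delta[S], reverse=True)
    y, chosen = {}, frozenset()
    for S in sets:
        if f(S) - x_delta[S] > 0:
            # Marginal penalty of adding S to the collection chosen so far.
            y[S] = p(chosen | {S}) - p(chosen)
            chosen = chosen | {S}
    return y

# Toy usage (single s-t pair, made-up x values):
V = ['s', 't']
f = lambda S: int(('s' in S) != ('t' in S))
p = lambda C: 3.0 if any(('s' in S) != ('t' in S) for S in C) else 0.0
x_delta = {frozenset({'s'}): 0.2, frozenset({'t'}): 0.5, frozenset({'s', 't'}): 0.0}
print(greedy_dual(V, f, p, x_delta))   # {frozenset({'s'}): 3.0, frozenset({'t'}): 0.0}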

Useful properties of p(·)
If A, B ∈ 𝒮, then p_𝒮(T) = p_𝒮(T^c) = 0 for every set T in {A ∪ B, A ∩ B, A\B, B\A, A^c, B^c} – by the complement and union properties.
If p({A}) = 0, then for any B ⊆ V, p_{𝒮∪{A}}(B) = p_𝒮(B) – by submodularity ⇒ the ordering of sets A with f(A) = 0 is irrelevant.
If p_{𝒮∪{A}}(B) = p_{𝒮∪{B}}(A) = 0, then for any set T ⊆ V, p_{𝒮∪{A}}(T) = p_{𝒮∪{B}}(T) – by submodularity.

Solving the dual (contd.)
The structural lemma yields the following algorithm:
Initialize y_S = 0 for all sets S, laminar family L ← ∅.
While ∃ a set S that does not cross any set of L:
– find T = argmin {x(δ(S)): S does not cross L}
– if x(δ(T)) ≥ 1, return; else set y_T = p_L(T), L ← L ∪ {T}.
Theorem: y is an optimal solution to (Pen-D).
Let L' = {T ∈ L: y_T > 0} = {T_1,…,T_k}, and let 𝒯_i = the maximal collection 𝒯 ⊇ {T_1,…,T_i} such that p(𝒯) = p({T_1,…,T_i}).
Theorem: Setting z_{𝒯_i} = x(δ(T_{i+1})) – x(δ(T_i)) (with x(δ(T_{k+1})) := 1) for i = 1,…,k, and z_𝒮 = 0 for all other 𝒮, yields an optimal solution to (Pen-P).
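A brute-force rendering of this construction of y (toy instances only; "crosses" means the two sets are neither disjoint nor nested; p is assumed to be a callable on collections of frozensets and x_delta a dictionary of cut values, as in the earlier sketches):

from itertools import combinations

def solve_dual_laminar(vertices, p, x_delta):
    """Sketch of the laminar dual construction over all 2^n vertex sets."""
    sets = [frozenset(c) for r in range(1, len(vertices) + 1)
            for c in combinations(vertices, r)]
    def crosses(S, T):
        return bool(S & T) and not (S <= T or T <= S)
    y, L = {}, []
    while True:
        candidates = [S for S in sets if S not in L
                      and all(not crosses(S, T) for T in L)]
        if not candidates:
            break
        T = min(candidates, key=lambda S: x_delta[S])
        if x_delta[T] >= 1:
            break
        y[T] = p(frozenset(L) | {T}) - p(frozenset(L))   # y_T = p_L(T)
        L.append(T)
    return y, L

The sets T in the returned L with y_T > 0 are the T_1,…,T_k used above to recover the primal solution z.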

Rounding procedure
Given: fractional solution x and the sets T_1,…,T_k – these give a succinct description of the collections 𝒯_1,…,𝒯_k, and hence of the optimal solution z to (Pen-P).
Let β ∈ [0, 1] be a parameter.
– Define the 0-1 connectivity function f_β(S) = 1 if f(S) = 1 and ∑_{𝒮: S∈𝒮} z_𝒮 < β; 0 otherwise.
– Solve the network design problem with connectivity function f_β.
If f is proper or weakly supermodular, then so is f_β, therefore the cost of the edges picked is bounded.
The penalty is at most p({S ⊆ V: ∑_{𝒮: S∈𝒮} z_𝒮 ≥ β}) ≤ [∑_𝒮 p(𝒮) z_𝒮] / β.
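A small sketch of the thresholding step and the penalty bound (the representation of z as a dictionary keyed by collections, and the toy instance, are assumptions carried over from the earlier sketches):

def thresholded_connectivity(f, z, beta):
    """Build the rounded connectivity function f_beta from f and the fractional z
    (z maps collections, i.e. frozensets of frozensets, to their values)."""
    def coverage(S):
        return sum(zC for C, zC in z.items() if S in C)
    return lambda S: int(f(S) == 1 and coverage(S) < beta)

def penalty_bound(p, z, beta):
    """The Markov-style bound from the slide: rounded penalty <= (sum_C p(C) z_C) / beta."""
    return sum(p(C) * zC for C, zC in z.items()) / beta

# Toy usage with the single-pair PCSF penalty from the earlier sketches.
f = lambda S: int(('s' in S) != ('t' in S))
p = lambda C: 3.0 if any(('s' in S) != ('t' in S) for S in C) else 0.0
z = {frozenset({frozenset({'s'})}): 0.7}
f_beta = thresholded_connectivity(f, z, beta=0.5)
print(f_beta(frozenset({'s'})), f_beta(frozenset({'t'})), penalty_bound(p, z, 0.5))
# 0 1 ≈4.2 : {s} is "paid for" by z, while {t} must still be connected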

Open Questions
Is there a compact description of the LP? Or a more efficient procedure to solve it?
Obtaining a 2-approximation algorithm: iterative rounding may be the way to go.
Applications to 2-stage stochastic network design: can the second-stage cost be captured by a "nice" penalty function?
Extensions to higher connectivity requirements.

Thank You.