Lecture 6. Table of Contents: LP-rounding, Dual Fitting, LP-Duality.



Linear Programming Problem

A linear programming (LP) problem is an optimization problem in which we minimize or maximize a linear objective function subject to a given set of linear constraints.

Example:
Minimize 3x_1 − 5x_2 + 3x_3 + 2x_4
subject to:
3x_1 + 4x_2 = 6
2x_1 − x_2 − x_3 ≥ 22
x_5 ≤ 3.5
x_3 + 0.5x_4 = 0.8
x_i ≥ 0 for all i

Solutions

Feasible Solution: a solution that satisfies all constraints of the linear program.
Optimal Solution: a feasible solution with the largest (smallest) objective function value for a maximization (minimization) problem.
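These definitions can be checked mechanically. A minimal Python sketch (using the numerical LP example that appears later in these notes; the function names are our own) that tests a candidate point for feasibility and evaluates its objective value:

```python
# Example LP:  minimize 8x1 + 5x2 + 5x3 + 2x4
#   s.t.  3x1 + 4x2           >= 6
#               3x2 + x3 + x4 >= 5
#         xi >= 0 for all i

def objective(x):
    """Evaluate the linear objective c . x."""
    c = [8, 5, 5, 2]
    return sum(ci * xi for ci, xi in zip(c, x))

def is_feasible(x):
    """A feasible solution satisfies every constraint."""
    g1 = 3 * x[0] + 4 * x[1] >= 6
    g2 = 3 * x[1] + x[2] + x[3] >= 5
    nonneg = all(xi >= 0 for xi in x)
    return g1 and g2 and nonneg

x = (2, 1, 0, 3)
print(is_feasible(x), objective(x))   # True 27
```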

Many optimization problems involve selecting a subset of a given set of elements. Examples: a vertex cover is a subset of vertices; a spanning tree is really a subset of edges; a knapsack solution is a subset of items. Such problems can be formulated as LPs with integrality constraints.

Integer Program

An Integer Program (IP) is an LP with integrality constraints.
Integrality constraints: some or all of the variables are constrained to be integers.

Solving Linear/Integer Programming Problems

LPs can be solved efficiently (in polynomial time, though slowly in practice). IPs generally cannot be solved efficiently (integer programming is NP-hard). Some specific IPs can be solved efficiently: for these, the optimum of the LP relaxation is guaranteed to be integral.

Using Indicator Variables

Many selection problems can be formulated as IPs using indicator variables (0-1 variables). An indicator variable is defined for each element: a value of 1 indicates that the element is selected, and a value of 0 indicates that it is not.

A few examples:
Vertex Cover
Set Cover
Knapsack

Example: Unweighted Vertex Cover

Variables: {x_v | v ∈ V}.

The IP:
Minimize ∑_v x_v
s.t. x_u + x_v ≥ 1 ∀ (u, v) ∈ E,
x_v ∈ {0, 1} ∀ v ∈ V.

Example: Knapsack

Let the items be {1, ..., n}.
Variables: {x_i | 1 ≤ i ≤ n}.

The IP:
Maximize ∑_i c_i x_i
s.t. ∑_i s_i x_i ≤ K,
x_i ∈ {0, 1} ∀ 1 ≤ i ≤ n.
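For tiny instances, the knapsack IP can be solved exactly by enumerating all 0-1 assignments of the indicator variables. A minimal sketch (the item values, sizes, and capacity are made up):

```python
from itertools import product

def knapsack_ip(values, sizes, capacity):
    """Brute-force the 0-1 IP: max sum(c_i x_i) s.t. sum(s_i x_i) <= K."""
    n = len(values)
    best_value, best_x = 0, (0,) * n
    for x in product((0, 1), repeat=n):          # all 2^n indicator vectors
        if sum(s * xi for s, xi in zip(sizes, x)) <= capacity:
            value = sum(c * xi for c, xi in zip(values, x))
            if value > best_value:
                best_value, best_x = value, x
    return best_value, best_x

print(knapsack_ip([6, 10, 12], [1, 2, 3], 5))    # -> (22, (0, 1, 1))
```

Exhaustive enumeration is exponential in n, which is exactly why we turn to LP relaxations and rounding for large instances.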


LP Relaxation (drop the integrality constraint)

Example: Unweighted Vertex Cover

The IP:
Minimize ∑_v x_v
s.t. x_u + x_v ≥ 1 ∀ (u, v) ∈ E,
x_v ∈ {0, 1} ∀ v ∈ V.

The LP relaxation:
Minimize ∑_v x_v
s.t. x_u + x_v ≥ 1 ∀ (u, v) ∈ E,
x_v ≥ 0 ∀ v ∈ V.

Example: Weighted Vertex Cover

Variables: {x_v | v ∈ V}, where x_v is the indicator variable for vertex v and C_v is the cost associated with v.

The IP:
Minimize ∑_v C_v x_v
s.t. x_u + x_v ≥ 1 ∀ (u, v) ∈ E,
x_v ∈ {0, 1} ∀ v ∈ V.

LP Relaxation (drop the integrality constraint)

Example: Weighted Vertex Cover

The IP:
Minimize ∑_v C_v x_v
s.t. x_u + x_v ≥ 1 ∀ (u, v) ∈ E,
x_v ∈ {0, 1} ∀ v ∈ V.

The LP relaxation:
Minimize ∑_v C_v x_v
s.t. x_u + x_v ≥ 1 ∀ (u, v) ∈ E,
x_v ≥ 0 ∀ v ∈ V.

LP Rounding

Let x_v be the value assigned to vertex v in an optimal solution of the LP relaxation.
If x_v ≥ 1/2, round it up to 1; else round it down to 0.

E.g., if the LP solution has cost (1/4)C_1 + (1/2)C_2 + (3/4)C_3 + (4/5)C_4, the rounded IP solution has cost C_2 + C_3 + C_4.
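The ½-rounding rule can be sketched as follows (the vertex names and fractional values are hypothetical, chosen to match the example above):

```python
def round_half(x_lp):
    """LP rounding for vertex cover: x_v >= 1/2 -> 1, else -> 0."""
    return {v: (1 if xv >= 0.5 else 0) for v, xv in x_lp.items()}

# Hypothetical fractional LP solution: 1/4, 1/2, 3/4, 4/5
x_lp = {"v1": 0.25, "v2": 0.5, "v3": 0.75, "v4": 0.8}
print(round_half(x_lp))   # {'v1': 0, 'v2': 1, 'v3': 1, 'v4': 1}
```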

Claim 1: The rounded solution is feasible.

Let (u, v) ∈ E. Since the LP solution is feasible, the values x_v, v ∈ V, satisfy
x_u + x_v ≥ 1    (1)
⇒ at least one of x_u, x_v is ≥ 1/2.
Let x'_u and x'_v be the values obtained after rounding. Then at least one of them must be 1, i.e. x'_u + x'_v ≥ 1.
So the solution obtained after rounding is feasible.

Claim 2: C(S) ≤ 2·LP-OPT.

Under this rounding, each variable is increased by a factor of at most 2, and the rest are reduced to 0; i.e. x'_v ≤ 2x_v for every v.

So, with C(S) denoting the cost of the rounded solution:
C(S) = ∑_v C_v x'_v ≤ 2 ∑_v C_v x_v    (since x'_v ≤ 2x_v)
= 2·LP-OPT.
Hence Claim 2 follows.

Set Cover Problem

Given a finite set (universe) U of n elements, U = {e_1, e_2, ..., e_n}, and a collection of subsets S_1, S_2, ..., S_k of U, each with an associated cost, select a minimum-cost subcollection of these sets that covers all elements of U.

IP: For each set S, an indicator variable x_S ∈ {0, 1}:
x_S = 1 if set S is picked, x_S = 0 otherwise.

Minimize ∑_S C_S x_S
s.t. ∑_{S : e ∈ S} x_S ≥ 1 ∀ e ∈ U,
x_S ∈ {0, 1}.

LP Relaxation:
Minimize ∑_S C_S x_S
s.t. ∑_{S : e ∈ S} x_S ≥ 1 ∀ e ∈ U,
x_S ≥ 0.

LP Rounding for Set Cover

Let f denote the maximum frequency of any element of U, i.e. the largest number of sets S_i that any single element appears in.
Find an optimal solution to the LP relaxation.
If x_S ≥ 1/f, round it up to 1 (pick the set); if x_S < 1/f, round it down to 0 (discard the set).
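A sketch of the 1/f rounding rule (the instance and the fractional solution below are made up for illustration):

```python
def round_set_cover(x_lp, sets, universe):
    """LP rounding for set cover: pick every set with x_S >= 1/f,
    where f is the maximum frequency of any element of U."""
    f = max(sum(1 for s in sets if e in sets[s]) for e in universe)
    picked = [s for s in sets if x_lp[s] >= 1 / f]
    return picked, f

sets = {"S1": {"x", "y"}, "S2": {"y", "z"}, "S3": {"x", "w", "y"}}
universe = {"x", "y", "z", "w"}
x_lp = {"S1": 0.0, "S2": 1.0, "S3": 1.0}   # a feasible fractional solution
print(round_set_cover(x_lp, sets, universe))   # -> (['S2', 'S3'], 3)
```

Here f = 3 (element y appears in all three sets), and the picked sets S2 and S3 together cover the whole universe.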

Claims

Claim 1: The solution is feasible.
Claim 2: It gives a factor-f approximation.

Claim 1: The solution is feasible.

Let e ∈ U belong to l of the sets, say S_1, ..., S_l, where l ≤ f.
Since the LP solution is feasible, the values x_S satisfy
x_{S_1} + x_{S_2} + ... + x_{S_l} ≥ 1    (1)
⇒ at least one of x_{S_1}, ..., x_{S_l} is ≥ 1/l ≥ 1/f
⇒ x'_{S_1} + x'_{S_2} + ... + x'_{S_l} ≥ 1,
where x'_{S_i} is the value obtained after rounding. Thus e is covered, and the solution is feasible.

Claim 2: Factor-f approximation.

For each picked set S, x_S has been increased by a factor of at most f, i.e. x'_S ≤ f·x_S.
Let C(S) be the cost of our solution. Then
C(S) = ∑_S C_S x'_S ≤ f ∑_S C_S x_S    (since x'_S ≤ f·x_S)
= f·LP-OPT.
Hence it is a factor-f approximation.

Note: f can be large. Later we will see a rounding technique that gives an O(log n) factor.

Linear Programming - Example

Minimize 8x_1 + 5x_2 + 5x_3 + 2x_4
subject to:
3x_1 + 4x_2 ≥ 6
3x_2 + x_3 + x_4 ≥ 5
x_i ≥ 0 for all i

x = (2, 1, 0, 3) is a feasible solution, so
8·2 + 5·1 + 5·0 + 2·3 = 27 is an upper bound on the optimum.

What is the Lower Bound?

Minimize 8x_1 + 5x_2 + 5x_3 + 2x_4
subject to:
3x_1 + 4x_2 ≥ 6
3x_2 + x_3 + x_4 ≥ 5
x_i ≥ 0 for all i

LB: Every objective coefficient dominates the corresponding coefficient of the first constraint, so
8x_1 + 5x_2 + 5x_3 + 2x_4 ≥ 3x_1 + 4x_2 ≥ 6.

Can we do better by combining constraints? Simply adding the two constraints gives 3x_1 + 7x_2 + x_3 + x_4 ≥ 11, but this does not lower-bound the objective: the combined x_2 coefficient 7 exceeds 5. With multipliers 1 and 1/3, however,
8x_1 + 5x_2 + 5x_3 + 2x_4 ≥ (3x_1 + 4x_2) + (1/3)(3x_2 + x_3 + x_4) ≥ 6 + 5/3,
a better lower bound of 23/3.

How to compute a good LB

Minimize 8x_1 + 5x_2 + 5x_3 + 2x_4
subject to:
3x_1 + 4x_2 ≥ 6 .................. y_1
3x_2 + x_3 + x_4 ≥ 5 ............ y_2
x_i ≥ 0 for all i

Assign a non-negative multiplier y_i to every inequality such that, coefficient by coefficient,
8x_1 + 5x_2 + 5x_3 + 2x_4 ≥ y_1(3x_1 + 4x_2) + y_2(3x_2 + x_3 + x_4).
Then the objective is at least 6y_1 + 5y_2. We are interested in finding the y_i that make this bound as large as possible. This leads to our dual problem.

The corresponding dual for the given example is:

Maximize 6y_1 + 5y_2
such that:
3y_1 ≤ 8
4y_1 + 3y_2 ≤ 5
y_1 ≤ 5
y_2 ≤ 2
y_i ≥ 0 for all i

Thanks to Divya Narang (8), Gautam Pahuja (10), Harshi Verma (11), Monika Bisla (14)

Weak Duality Theorem

Theorem: If x is primal feasible and y is dual feasible, then c·x ≥ y·b.
Proof: c·x ≥ (Aᵀy)·x = y·(Ax) ≥ y·b.
(The first inequality uses dual feasibility Aᵀy ≤ c together with x ≥ 0; the last uses primal feasibility Ax ≥ b together with y ≥ 0.)
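Weak duality can be checked numerically on the example LP above; the dual solution below is one feasible choice (not necessarily optimal):

```python
# Primal: min 8x1 + 5x2 + 5x3 + 2x4
#         s.t. 3x1 + 4x2 >= 6,  3x2 + x3 + x4 >= 5,  x >= 0
# Dual:   max 6y1 + 5y2
#         s.t. 3y1 <= 8, 4y1 + 3y2 <= 5, y1 <= 5, y2 <= 2, y >= 0

x = (2, 1, 0, 3)     # primal feasible (checked earlier)
y = (1, 1 / 3)       # dual feasible: 3*1 <= 8, 4*1 + 3*(1/3) = 5 <= 5, ...

primal_cost = 8 * x[0] + 5 * x[1] + 5 * x[2] + 2 * x[3]
dual_bound = 6 * y[0] + 5 * y[1]
assert dual_bound <= primal_cost     # weak duality: 23/3 <= 27
print(dual_bound, primal_cost)
```

Any dual feasible y gives a lower bound on every primal feasible cost, which is exactly how LP duality certifies approximation guarantees.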

Set Cover

x_S = 1 iff set S is included in the cover.

The Primal:
Minimize ∑_S C_S x_S
s.t. ∑_{S : e ∈ S} x_S ≥ 1 ∀ e ∈ U,
x_S ∈ {0, 1}.

LP relaxation: replace x_S ∈ {0, 1} by x_S ≥ 0.

Introduce a variable y_e for each constraint of the primal, i.e. one per element e ∈ U.

The Dual:
Maximize ∑_{e ∈ U} y_e
s.t. ∑_{e ∈ S_i} y_e ≤ C_{S_i} for i = 1 to k,
y_e ≥ 0.

Example

U = {x, y, z, w}
S_1 = {x, y}, S_2 = {y, z}, S_3 = {x, w, y}.
Let x_{S_1}, x_{S_2}, x_{S_3} be the indicator variables for S_1, S_2, S_3, and let C_{S_1}, C_{S_2}, C_{S_3} be their costs.

Primal

Minimize C_{S_1} x_{S_1} + C_{S_2} x_{S_2} + C_{S_3} x_{S_3}
subject to:
x_{S_1} + x_{S_3} ≥ 1    (y_x)
x_{S_1} + x_{S_2} + x_{S_3} ≥ 1    (y_y)
x_{S_2} ≥ 1    (y_z)
x_{S_3} ≥ 1    (y_w)
x_{S_1}, x_{S_2}, x_{S_3} ≥ 0

Dual

Maximize y_x + y_y + y_z + y_w
subject to:
y_x + y_y ≤ C_{S_1}
y_y + y_z ≤ C_{S_2}
y_x + y_y + y_w ≤ C_{S_3}
y_x, y_y, y_z, y_w ≥ 0

From Set Cover via LP

Complementary Slackness Conditions

Let x and y be primal and dual feasible solutions. They are both optimal iff:
Primal conditions: for each j, x_j > 0 ⇒ ∑_i a_{ij} y_i = c_j (the corresponding dual constraint is tight).
Dual conditions: for each i, y_i > 0 ⇒ ∑_j a_{ij} x_j = b_i (the corresponding primal constraint is tight).

Relaxed Complementary Slackness Conditions

Relaxed primal conditions (factor α ≥ 1): x_j > 0 ⇒ c_j / α ≤ ∑_i a_{ij} y_i ≤ c_j.
Relaxed dual conditions (factor β ≥ 1): y_i > 0 ⇒ b_i ≤ ∑_j a_{ij} x_j ≤ β·b_i.
If feasible x and y satisfy these conditions, then c·x ≤ α·β·(y·b) ≤ α·β·OPT.

Example: Weighted Vertex Cover

Primal (LP relaxation):
Minimize ∑_v C_v x_v
s.t. x_u + x_v ≥ 1 ∀ (u, v) ∈ E,
x_v ≥ 0 ∀ v ∈ V.

Dual:
Maximize ∑_e y_e
s.t. ∑_{e : e incident on v} y_e ≤ C_v ∀ v ∈ V,
y_e ≥ 0 ∀ e ∈ E.

Primal-Dual Schema 1

U = ∅, y = 0
For each edge e = (u, v):
    y_e = min { c(u) − ∑_{e' : u ∈ e'} y_{e'},  c(v) − ∑_{e' : v ∈ e'} y_{e'} }
    U = U ∪ argmin { c(u) − ∑_{e' : u ∈ e'} y_{e'},  c(v) − ∑_{e' : v ∈ e'} y_{e'} }
Output U.
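A minimal Python sketch of Schema 1 (the graph, edge order, and vertex costs below are made up):

```python
def primal_dual_vertex_cover(edges, cost):
    """Primal-dual 2-approximation for weighted vertex cover
    (Schema 1): for each edge, raise y_e until one endpoint's
    dual constraint goes tight; tight vertices enter the cover U."""
    paid = {v: 0 for v in cost}   # sum of y_e' over edges incident on v
    cover = set()                 # U: vertices whose constraint is tight
    y = {}
    for (u, v) in edges:
        ye = min(cost[u] - paid[u], cost[v] - paid[v])
        y[(u, v)] = ye
        paid[u] += ye
        paid[v] += ye
        if paid[u] == cost[u]:
            cover.add(u)
        if paid[v] == cost[v]:
            cover.add(v)
    return cover, y

# Hypothetical instance: a path a-b-c-d with vertex costs 3, 4, 1, 3.
edges = [("a", "b"), ("b", "c"), ("c", "d")]
cost = {"a": 3, "b": 4, "c": 1, "d": 3}
cover, y = primal_dual_vertex_cover(edges, cost)
print(sorted(cover))   # ['a', 'b', 'c']
```

The returned cover costs at most 2·∑_e y_e, and by weak duality ∑_e y_e ≤ OPT, which is the factor-2 argument of the next slides.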

Thanks to Neha & Neha Katyal

[Worked example (figures omitted): the algorithm processes the edges of a small weighted graph one by one. For the first edge, min{4, 3} = 3, so y_e = 3 and the endpoint of cost 3 goes tight and is added to U (shown in red). For the next edge, min{1, 5} = 1, so y_e = 1. Edges whose endpoints already have zero slack get y_e = 0. At the end, the red (tight) vertices form a vertex cover.]

Solution is feasible

Let e = (u, v) be an edge. Suppose, for contradiction, that neither x_u nor x_v has been set to 1, i.e. the constraints corresponding to u and v have not gone tight. Then y_e can still be raised, which means the algorithm has not yet processed edge e. Since the algorithm runs over every edge, every edge ends up covered.

Solution is factor 2

For every x_v > 0, the corresponding dual constraint is tight (by construction), so the relaxed primal conditions hold with α = 1.
For every edge e = (u, v), 1 ≤ x_u + x_v ≤ 2, so the relaxed dual conditions hold with β = 2.
Hence, by relaxed complementary slackness, the cost of the solution is at most twice OPT.

Primal-Dual Schema 2 (Ignore)

Raise the dual variables uniformly until one or more constraints become tight. Freeze the dual variables contributing to these constraints and set the corresponding primal variables to 1. If more than one constraint becomes tight, process them one by one in an arbitrary order.
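A sketch of this uniform-raising schema for weighted vertex cover, written as an event-driven simulation (the instance below is made up):

```python
def primal_dual_uniform(edges, cost):
    """Schema 2 sketch: raise the y_e of all uncovered edges at the
    same rate; when a vertex's dual constraint goes tight, add it to
    the cover and freeze (stop raising) its incident edges."""
    paid = {v: 0.0 for v in cost}
    y = {e: 0.0 for e in edges}
    cover = set()
    active = set(edges)                      # edges still being raised
    while active:
        # rate at which each non-tight vertex's constraint fills up
        deg = {v: sum(1 for e in active if v in e)
               for v in cost if v not in cover}
        # time until the next constraint goes tight
        t = min((cost[v] - paid[v]) / d for v, d in deg.items() if d > 0)
        for e in active:
            y[e] += t
        for v, d in deg.items():
            paid[v] += t * d
        cover |= {v for v, d in deg.items()
                  if d > 0 and abs(paid[v] - cost[v]) < 1e-9}
        active = {e for e in active if e[0] not in cover and e[1] not in cover}
    return cover, y

# Hypothetical instance: a path a-b-c with vertex costs 3, 4, 1.
edges = [("a", "b"), ("b", "c")]
cost = {"a": 3, "b": 4, "c": 1}
cover, y = primal_dual_uniform(edges, cost)
print(sorted(cover))   # ['a', 'b', 'c']
```

As with Schema 1, the cover's cost is at most 2·∑_e y_e ≤ 2·OPT.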

[Worked example (figures omitted): the y_e of all uncovered edges are raised at the same rate. The first vertex constraint goes tight at y_e = 3/2; that vertex enters the cover and its edges are frozen, and the remaining y_e values continue rising (reaching values such as 3/4 and 7/4) until every edge is covered.]

Thanks to Neha & Neha Katyal

Solution is feasible

Let e = (u, v) be an edge. Suppose, for contradiction, that neither x_u nor x_v has been set to 1, i.e. the constraints corresponding to u and v have not gone tight. Then y_e can still be raised, which means the algorithm has not yet terminated.

Solution is factor 2

For every x_v > 0, the corresponding dual constraint is tight (by construction), so the relaxed primal conditions hold with α = 1.
For every edge e = (u, v), 1 ≤ x_u + x_v ≤ 2, so the relaxed dual conditions hold with β = 2.
Hence, by relaxed complementary slackness, the cost of the solution is at most twice OPT.