Strong LP Formulations & Primal-Dual Approximation Algorithms
David Shmoys (joint work with Tim Carnes & Maurice Cheung)
June 23, 2011

Introduction
The standard approach to solving combinatorial integer programs in practice: start with a "simple" formulation and add valid inequalities.
Our agenda: show that the same approach can be theoretically justified via approximation algorithms.
An α-approximation algorithm produces, in polynomial time, a solution of cost within a factor of α of the optimal solution.

Introduction
The primal-dual method is a leading approach in the design of approximation algorithms for NP-hard problems.
We consider several capacitated covering problems:
- the covering knapsack problem
- single-machine scheduling problems
We give constant-factor approximation algorithms based on strong linear programming (LP) formulations.

Approximation Algorithms and LP
Use LP to design approximation algorithms: the optimal LP value gives a bound on the optimal integer programming (IP) value.
We want to find a feasible IP solution of value within a factor of α of the optimal LP solution.
The key is to start with the "right" LP relaxation; an LP-based approximation algorithm produces an additional performance guarantee on each problem instance.
The empirical success of IP cutting-plane methods suggests stronger formulations - this needs theory!

Primal-Dual Approximation Algorithms
We do not even need to solve the LP!
Each min LP has a dual max LP of equal optimal value.
Goal: construct a feasible integer solution S along with a feasible solution D to the dual of the LP relaxation such that
cost(S) ≤ α·cost(D) ≤ α·LP-OPT ≤ α·OPT ⇒ α-approximation algorithm.

Adding Valid Inequalities to the LP
An LP formulation can be too "weak" if there is a "big" integrality gap - the ratio OPT/LP-OPT is often unbounded.
This is fixed by adding additional inequalities to the formulation:
- restricts the set of feasible LP solutions
- satisfied by all integer solutions, hence "valid"
- a key technique in practical codes for solving integer programs

Knapsack-Cover Inequalities
Carr, Fleischer, Leung and Phillips (2000) developed valid knapsack-cover inequalities and LP-rounding algorithms for several capacitated covering problems.
- Requires solving the LP with the ellipsoid method
- Further complicated since the inequalities are not known to be separable in polynomial time
GOAL: develop a primal-dual analog!

Highlights of Our Results
For each of the following problems, we have a primal-dual algorithm that achieves a performance guarantee of 2:
- Min-Cost (Covering) Knapsack
- Single-Demand Capacitated Facility Location
- Single-Item Capacitated Lot-Sizing
(these use the valid knapsack-cover inequalities developed by Carr, Fleischer, Leung and Phillips as the LP formulation)
- Single-Machine Minimum-Weight Late Jobs, 1 || Σ w_j U_j
- Single-Machine General Minimum-Sum Scheduling, 1 || Σ f_j
(for these we extend the knapsack-cover inequalities to handle the more general setting)


Min-Sum 1-Machine Scheduling, 1 || Σ f_j
Each job j has a cost function f_j(C_j) that is a nonnegative, nondecreasing function of its completion time C_j.
Goal: minimize Σ_j f_j(C_j).
What is known? Bansal & Pruhs (FOCS '10) gave the first constant-factor algorithm.
The main result of Bansal-Pruhs adds release dates and permits preemption - the result is an O(log log(nP))-approximation algorithm.
OPEN QUESTIONS - Is a constant factor doable? Can 1+ε be achieved without release dates?

Open Problems
Better constant factors? Any constant factor?
Primal-dual when a rounding algorithm is known?
But - nothing of the following type is known: a good constant factor is known, but is a factor of 1+ε possible for any ε > 0?


Primal-Dual for Covering Problems
Early primal-dual algorithms:
- Bar-Yehuda and Even (1981) - weighted vertex cover
- Chvátal (1979) - weighted set cover
Generalized Steiner (cut-covering) problems:
- Agrawal, Klein and Ravi (1995)
- Goemans and Williamson (1995)
- Bertsimas & Teo (1998)
Uncapacitated facility location:
- Jain & Vazirani (1999)
Inventory problems:
- Levi, Roundy and Shmoys (2006)

Minimum (Covering) Knapsack Problem
Given a set of items F, each with a cost c_i and a value u_i, find a minimum-cost subset of items whose total value is at least D:
minimize Σ_{i ∈ F} c_i x_i
subject to Σ_{i ∈ F} u_i x_i ≥ D
x_i ∈ {0,1}, for each i ∈ F

Bad Integrality Gap
Consider the min knapsack problem with the following two items:
item 1: c_1 = 1, u_1 = D; item 2: c_2 = 0, u_2 = D−1.
An integer solution must take item 1 and incurs a cost of 1.
The LP solution can take all of item 2 and just a 1/D fraction of item 1, incurring a cost of 1/D.

Knapsack-Cover Inequalities
Proposed by Carr, Fleischer, Leung and Phillips (2000).
Consider a subset A of items in F. If we were to take all items in A, then we would still need to take enough items to meet the leftover demand D − u(A), where u(A) = Σ_{i ∈ A} u_i:
Σ_{i ∈ F \ A} u_i x_i ≥ D − u(A)
(Figure: items A = {1,2,3} cover part of the demand D, leaving D − u(A).)

Knapsack-Cover Inequalities
This inequality adds nothing new, but we can now restrict the values of the items, since these inequalities only need to be valid for integer solutions:
Σ_{i ∈ F \ A} u_i(A) x_i ≥ D − u(A), where u_i(A) = min{ u_i, D − u(A) }.

Knapsack-Cover Inequalities on the Bad Example
Before (items: c_1 = 1, u_1 = D; c_2 = 0, u_2 = D−1): the integer solution picks item 1 for cost 1, while the LP solution picks item 2 and a 1/D fraction of item 1 for cost 1/D.
Now: consider the knapsack-cover inequality with A = {2}. Then D − u(A) = 1 and u_1(A) = 1, so the inequality reads x_1 ≥ 1.
Thus the LP must take all of item 1, for cost 1.
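This check is easy to script. A minimal sketch (the function name and interface are my own illustration, not code from the talk) that tests whether the knapsack-cover inequality for a given set A cuts off a fractional point x:

```python
def kc_violated(x, u, D, A):
    """True if the knapsack-cover inequality for set A cuts off point x:
    sum over i not in A of min(u_i, D - u(A)) * x_i  <  D - u(A)."""
    residual = D - sum(u[i] for i in A)      # leftover demand D - u(A)
    if residual <= 0:
        return False                         # inequality is vacuous
    lhs = sum(min(u[i], residual) * x[i]
              for i in range(len(u)) if i not in A)
    return lhs < residual

# The bad-gap instance with D = 4: items (c, u) = (1, 4) and (0, 3).
# The weak-LP point x = (1/D, 1) satisfies the original constraint (A = {})
# but violates the knapsack-cover cut with A = {item 2}.
x = [0.25, 1.0]
print(kc_violated(x, [4, 3], 4, []))    # False: original covering constraint holds
print(kc_violated(x, [4, 3], 4, [1]))   # True: KC inequality with A = {2} is violated
```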

Strengthened Min Knapsack LP
When A = ∅, the knapsack-cover inequality Σ_{i ∈ F \ A} u_i(A) x_i ≥ D − u(A) becomes Σ_{i ∈ F} u_i x_i ≥ D, which is the original min knapsack inequality.
The new strengthened LP is:
minimize Σ_{i ∈ F} c_i x_i
subject to Σ_{i ∈ F \ A} u_i(A) x_i ≥ D − u(A), for each subset A ⊆ F
x_i ≥ 0, for each i ∈ F

Dual Linear Program
The dual of the LP formed by the knapsack-cover inequalities:
maximize Σ_{A ⊆ F} (D − u(A)) v(A)
subject to Σ_{A ⊆ F: i ∉ A} u_i(A) v(A) ≤ c_i, for each i ∈ F
v(A) ≥ 0, for each A ⊆ F

Primal-Dual Example
Items (cost c_i, value u_i): item 1: (2.5, 2); item 2: (2, 1); item 3: (0.5, 2); item 4: (10, 5); item 5: (1.5, 2); demand D = 5. All dual variables start at zero.
- A = ∅: D − u(A) = 5; increase v(∅) until the constraint of item 3 becomes tight: v(∅) = 0.25.
- A = {3}: D − u(A) = 3; increase v({3}) until item 5 becomes tight: v({3}) = 0.5.
- A = {3, 5}: D − u(A) = 1; increase v({3,5}) until item 1 becomes tight: v({3,5}) = 1.
- A = {3, 5, 1}: D − u(A) = −1 ≤ 0, so stop!
Primal-dual cost = 0.5 + 1.5 + 2.5 = 4.5; optimal integer cost = 4 (items {2, 3, 5}).

Primal-Dual Summary
- Start with all dual variables set to zero and solution A as the empty set.
- Increase the variable v(A) until a dual constraint becomes tight for some item i.
- Add item i to solution A and repeat.
- Stop once solution A has a large enough value to meet the demand D.
- Call the final solution S and set x_i = 1 for all i ∈ S.
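A compact sketch of this loop (the function and variable names are mine, and the instance is assumed feasible with positive values). It replays the five-item example above and, as the slides show, returns cost 4.5:

```python
def primal_dual_knapsack(costs, values, D):
    """Primal-dual 2-approximation for min-cost covering knapsack.

    Each iteration grows one dual variable v(A) until some item's dual
    constraint  sum_{A: i not in A} u_i(A) * v(A) <= c_i  becomes tight,
    then adds that item to A. Assumes sum(values) >= D and values > 0."""
    n = len(costs)
    A = []                       # current solution, in insertion order
    paid = [0.0] * n             # dual load accumulated on each item's constraint
    while sum(values[i] for i in A) < D:
        residual = D - sum(values[i] for i in A)       # D - u(A)
        best, best_inc = None, float('inf')
        for i in range(n):
            if i in A:
                continue
            ui = min(values[i], residual)              # truncated value u_i(A)
            inc = (costs[i] - paid[i]) / ui            # raise of v(A) making i tight
            if inc < best_inc:
                best, best_inc = i, inc
        for i in range(n):                             # every item outside A
            if i not in A:                             # absorbs u_i(A) * best_inc
                paid[i] += min(values[i], residual) * best_inc
        A.append(best)
    return A, sum(costs[i] for i in A)

# Slide example: items 1..5 are indices 0..4 here; the algorithm picks
# items 3, 5, 1 of the slides (0-based: 2, 4, 0).
S, cost = primal_dual_knapsack([2.5, 2, 0.5, 10, 1.5], [2, 1, 2, 5, 2], 5)
print(sorted(S), cost)    # [0, 2, 4] 4.5
```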

Analysis
Let l be the last item added to solution S.
If we increased the dual variable v(A), then l was not in A; thus if v(A) > 0 then A ⊆ S \ {l}.
Since u(S \ {l}) < D, we get u((S \ {l}) \ A) < D − u(A).

Analysis (continued)
We have u((S \ {l}) \ A) < D − u(A) whenever v(A) > 0, and u_l(A) ≤ D − u(A) by definition, so Σ_{i ∈ S \ A} u_i(A) < 2(D − u(A)) whenever v(A) > 0.
The cost of the solution is
Σ_{i ∈ S} c_i = Σ_{i ∈ S} Σ_{A: i ∉ A} u_i(A) v(A) (each item's dual constraint is tight when it is added)
= Σ_A v(A) Σ_{i ∈ S \ A} u_i(A)
≤ Σ_A 2(D − u(A)) v(A) ≤ 2·OPT (by weak duality for the dual LP).

Primal-Dual Theorem
For the min-cost covering knapsack problem, the LP relaxation with knapsack-cover inequalities can be used to derive a (simple) primal-dual 2-approximation algorithm.

Knapsack-Cover Inequalities Everywhere
- Bansal, Buchbinder, Naor (2008): randomized competitive algorithms for generalized caching (and weighted paging)
- Bansal, Gupta, & Krishnaswamy (2010): 485-approximation algorithm for min-sum set cover
- Bansal & Pruhs (2010): O(log log nP)-approximation algorithm for general 1-machine preemptive scheduling, and O(1) with identical deadlines

Minimum-Weight Late Jobs on 1 Machine
Each job j has processing time p_j, deadline d_j, and weight w_j.
Choose a minimum-weight subset L of jobs to be late - that is, not scheduled to complete by their deadline.
This problem is (weakly) NP-hard; it can be solved in O(n Σ_j p_j) time [Lawler & Moore], and a (1+ε)-approximation runs in O(n³/ε) time [Sahni].
If there also are release dates that constrain when a job may start, no approximation result is possible:
- focus instead on scheduling a max-weight set of jobs on time [Bar-Noy, Bar-Yehuda, Freund, Naor, & Schieber]
- or allow preemption [Bansal & Pruhs]
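The O(n Σ_j p_j) bound cited for [Lawler & Moore] comes from a short dynamic program over the total on-time processing; the sketch below is my own illustration (names and interface are assumptions, not code from the talk):

```python
def lawler_moore(p, d, w):
    """O(n * sum(p)) DP for min-weight late jobs, 1 || sum w_j U_j.

    Process jobs in Earliest-Due-Date order; W[t] is the minimum weight of
    late jobs so that the on-time jobs seen so far finish exactly at time t."""
    jobs = sorted(zip(p, d, w), key=lambda job: job[1])   # EDD order
    INF = float('inf')
    P = sum(p)
    W = [INF] * (P + 1)
    W[0] = 0.0
    for pj, dj, wj in jobs:
        new = [INF] * (P + 1)
        for t in range(P + 1):
            if W[t] == INF:
                continue
            # option 1: job is late, pay w_j, on-time load unchanged
            new[t] = min(new[t], W[t] + wj)
            # option 2: job is on time, so it must finish by its deadline
            if t + pj <= dj:
                new[t + pj] = min(new[t + pj], W[t])
        W = new
    return min(W)

# two unit-weight-ish jobs competing for one deadline: one must be late
print(lawler_moore([3, 3], [3, 3], [1, 2]))   # 1.0
```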

What if all deadlines are the same?
The total processing time is Σ_j p_j = P; WLOG assume the schedule runs through [0, P].
Deadline D ⇒ at least P − D units of processing are done after D.
So just select a set of jobs of total processing at least P − D with minimum total weight: a minimum-cost covering knapsack problem!

Same Idea for General Deadlines
The total processing time is Σ_j p_j = P; WLOG assume the schedule runs through [0, P], and that d_1 ≤ d_2 ≤ … ≤ d_n.
Deadline d_i ⇒ among all jobs with deadlines ≤ d_i, at least P(i) − d_i units of processing are done after d_i, where S(i) = { j : d_j ≤ d_i } and P(i) = Σ_{j ∈ S(i)} p_j.
Minimize Σ_j w_j y_j
subject to Σ_{j ∈ S(i)} p_j y_j ≥ P(i) − d_i, i = 1, …, n
y_j ≥ 0, j = 1, …, n

Strengthened LP - Knapsack Covers
Minimize Σ_j w_j y_j
subject to Σ_{j ∈ S(L,i)} p_j(L,i) y_j ≥ D(L,i), for each L, i
where S(L,i) = { j : d_j ≤ d_i, j ∉ L }, D(L,i) = max{ Σ_{j ∈ S(L,i)} p_j − d_i, 0 }, and p_j(L,i) = min{ p_j, D(L,i) }.
Dual:
Maximize Σ_{L,i} D(L,i) v(L,i)
subject to Σ_{(L,i): j ∈ S(L,i)} p_j(L,i) v(L,i) ≤ w_j, for each j
v(L,i) ≥ 0, for each L, i

Primal-Dual Summary
- Start with all dual variables set to 0 and solution A as the empty set.
- Increase the variable v(A, i) with largest D(A, i) until a dual constraint becomes tight for some job.
- Add that job to solution A and repeat.
- Stop once solution A is sufficient, i.e., the remaining jobs N − A can be scheduled on time.
- Examine each job j in A in reverse order and delete j if the reduced late set is still feasible.
- Call the final solution L* and set y_j = 1 for all j ∈ L*.
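A sketch of this primal-dual loop with reverse delete, in the same illustrative style (all names are my own; ties and floating-point tightness are handled naively, so this is a reading aid rather than a robust implementation):

```python
def min_weight_late_jobs(p, d, w):
    """Primal-dual sketch for 1 || sum w_j U_j (no release dates).

    Jobs are selected to be LATE; once every knapsack-cover demand
    D(L, i) = max(sum_{j: d_j <= d_i, j not in L} p_j - d_i, 0) is zero,
    the remaining jobs all meet their deadlines in EDD order."""
    n = len(p)

    def deficit(late, i):
        load = sum(p[j] for j in range(n) if d[j] <= d[i] and j not in late)
        return max(load - d[i], 0)

    def feasible(late):
        return all(deficit(late, i) == 0 for i in range(n))

    paid = [0.0] * n                 # dual load on each job's constraint
    L = []                           # late set, in insertion order
    while not feasible(L):
        i = max(range(n), key=lambda k: deficit(L, k))    # largest D(L, i)
        D = deficit(L, i)
        cand = [j for j in range(n) if d[j] <= d[i] and j not in L]
        # raising v(L, i) loads each candidate j at rate p_j(L, i) = min(p_j, D)
        inc = min((w[j] - paid[j]) / min(p[j], D) for j in cand)
        for j in cand:
            paid[j] += min(p[j], D) * inc
        L.append(min(cand, key=lambda j: w[j] - paid[j]))  # the newly tight job
    # reverse delete: drop any job whose removal keeps the late set feasible
    for j in reversed(L[:]):
        if feasible([k for k in L if k != j]):
            L.remove(j)
    return sorted(L)

print(min_weight_late_jobs([3, 3], [3, 3], [1, 2]))   # [0]: cheaper job is late
print(min_weight_late_jobs([2, 2, 2], [2, 4, 6], [5, 1, 1]))   # []: all on time
```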

Highlights of the Analysis
Lemma. Suppose the current iteration increases v(L,i), and let L(i) be the jobs put into the final late set L* afterwards. Then there exists a job k ∈ L(i) such that L* − {k} is not feasible.
Note: in the previous case, since all deadlines were equal, the last job l added satisfies this property. Here, the reverse-delete process is designed exactly to ensure that the Lemma holds.
Lemma. Σ_{j ∈ L(i) − {k}: d_j ≤ d_i} p_j(L,i) < D(L,i) whenever v(L,i) > 0.
Fact. p_k(L,i) ≤ D(L,i) (by the definition of p_k(L,i)).
Corollary. Σ_{j ∈ L(i): d_j ≤ d_i} p_j(L,i) < 2·D(L,i) whenever v(L,i) > 0.
(Compare the earlier knapsack analysis: u((S \ {l}) \ A) < D − u(A) whenever v(A) > 0, which gave Σ_{i ∈ S \ A} u_i(A) < 2(D − u(A)).)

Highlights of the Analysis
Corollary. Σ_{j ∈ L(i): d_j ≤ d_i} p_j(L,i) < 2·D(L,i) whenever v(L,i) > 0.
The same trick as before:
Σ_{j ∈ L*} w_j = Σ_{j ∈ L*} Σ_{(L,i): j ∈ S(L,i)} p_j(L,i) v(L,i)
= Σ_{(L,i)} v(L,i) Σ_{j ∈ L(i): d_j ≤ d_i} p_j(L,i)
≤ Σ_{(L,i)} 2·D(L,i) v(L,i) ≤ 2·OPT

Primal-Dual Theorem
For the 1-machine min-weight late jobs scheduling problem with a common deadline, the LP relaxation with knapsack-cover inequalities can be used to derive a (simple) primal-dual 2-approximation algorithm.

General 1-Machine Min-Cost Scheduling
Each job j has its own nondecreasing cost function f_j(C_j), where C_j denotes the completion time of job j.
Assume that all processing times are integers.
Goal: construct a schedule minimizing the total cost incurred.
LP variables: x_jt = 1 means job j has C_j = t.
Knapsack-cover constraint: for each t and L, require that the total processing time of jobs finishing at time t or later is sufficiently large.

Primal-Dual Theorem(s)
For 1-machine min-cost scheduling, the LP relaxation with knapsack-cover inequalities can be used to derive a (simple) primal-dual pseudo-polynomial 2-approximation algorithm.
For 1-machine min-cost scheduling, the LP relaxation with knapsack-cover inequalities can be used to derive a (simple) primal-dual (2+ε)-approximation algorithm.

"Weak" LP Relaxation
The total processing time is Σ_j p_j = P; WLOG assume the schedule runs through [0, P]. x_jt = 1 means job j completes at time t.
Minimize Σ_{j,t} f_j(t) x_jt
subject to Σ_{t ∈ {1,…,P}} x_jt = 1, j = 1, …, n
Σ_{j ∈ {1,…,n}} Σ_{s ∈ {t,…,P}} p_j x_js ≥ D(t), t = 1, …, P
x_jt ≥ 0, j = 1, …, n; t = 1, …, P
where D(t) = P − t + 1.

Strong LP Relaxation
The total processing time is Σ_j p_j = P; WLOG assume the schedule runs through [0, P]. x_jt = 1 means job j completes at time t.
Minimize Σ_{j,t} f_j(t) x_jt
subject to Σ_{t ∈ {1,…,P}} x_jt = 1, for all j
Σ_{j ∉ L} Σ_{s ∈ {t,…,P}} p_j(L,t) x_js ≥ D(L,t), for all L, t
x_jt ≥ 0, for all j, t
where D(t) = P − t + 1, D(L,t) = max{0, D(t) − Σ_{j ∈ L} p_j}, and p_j(L,t) = min{p_j, D(L,t)}.

Primal and Dual LP
Primal:
Minimize Σ_{j,t} f_j(t) x_jt
subject to Σ_{t ∈ {1,…,P}} x_jt = 1, for all j
Σ_{j ∉ L} Σ_{s = t,…,P} p_j(L,t) x_js ≥ D(L,t), for all L, t
x_jt ≥ 0, for all j, t
where D(t) = P − t + 1, D(L,t) = max{0, Σ_{j ∉ L} p_j − t + 1}, and p_j(L,t) = min{p_j, D(L,t)}.
Dual:
Maximize Σ_L Σ_t D(L,t) v(L,t)
subject to Σ_{L: j ∉ L} Σ_{t = 1,…,s} p_j(L,t) v(L,t) ≤ f_j(s), for all j, s
v(L,t) ≥ 0, for all L, t

Primal-Dual Summary
- Start with all dual variables set to 0 and each A_t = ∅.
- Increase the variable v(A_t, t) with largest D(A_t, t) until a dual constraint becomes tight for some job i (break ties by selecting the latest time).
- Add job i to solution A_s for all s ≤ t and repeat.
- Stop once the solution is sufficient, i.e., the remaining jobs satisfy all demand constraints.
- Focus on the pairs (j, t) where t is the latest time with j ∈ A_t, and perform a reverse delete.
- Set d_j = t for job j according to the remaining pairs (j, t).
- Schedule in Earliest Due Date order.
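The final step relies on the classical fact that a set of deadlines is achievable exactly when scheduling in Earliest Due Date order meets every deadline; a small helper illustrating that check (naming is my own):

```python
def edd_feasible(p, d):
    """Schedule jobs in Earliest-Due-Date order and check that every job
    meets its deadline; EDD is optimal for this feasibility question."""
    t = 0
    for pj, dj in sorted(zip(p, d), key=lambda job: job[1]):
        t += pj              # completion time of this job under EDD
        if t > dj:
            return False
    return True

print(edd_feasible([3, 3], [3, 6]))   # True
print(edd_feasible([3, 3], [3, 3]))   # False: second job finishes at 6 > 3
```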

Primal-Dual Theorem
For 1-machine min-cost scheduling, the LP relaxation with knapsack-cover inequalities can be used to derive a (simple) primal-dual pseudo-polynomial 2-approximation algorithm.

Removing the "Pseudo" with a (1+ε) Loss
This requires only standard techniques:
- For each job j, partition the potential job completion times {1, …, P} into blocks so that within a block the cost for j increases by a factor of at most 1+ε.
- Consider the finest partition based on all n jobs.
- Now consider variables x_jt that assign job j to finish in block t of this partition.
- All other details remain basically the same.
Fringe benefit: more general models, such as possible periods of machine non-availability.
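The per-job block partition can be computed greedily; a minimal sketch assuming integer completion times 1, …, P (the function name is my own):

```python
def cost_blocks(f, P, eps):
    """Left endpoints of blocks of {1, ..., P} within which the cost
    function f grows by at most a (1+eps) factor."""
    cuts = [1]
    for t in range(2, P + 1):
        if f(t) > (1 + eps) * f(cuts[-1]):
            cuts.append(t)      # cost jumped too much: start a new block
    return cuts

# linear cost, eps = 1: block endpoints roughly double
print(cost_blocks(lambda t: t, 20, 1.0))   # [1, 3, 7, 15]
```

Taking the union of every job's cut points then yields the finest common partition mentioned above.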

Primal-Dual Theorem
For 1-machine min-cost scheduling, the LP relaxation with knapsack-cover inequalities can be used to derive a (simple) primal-dual (2+ε)-approximation algorithm.

Some Open Problems
- Give a constant-factor approximation algorithm for 1-machine min-sum scheduling with release dates, allowing preemption.
- Give a (1+ε)-approximation algorithm for 1-machine min-sum scheduling, for arbitrarily small ε > 0.
- Give an LP-based constant-factor approximation algorithm for capacitated facility location.
- Use the "configuration LP" to find an approximation algorithm for the bin-packing problem that uses at most ONE bin more than optimal.

Thank you! Any questions?