Linear program, Separation Oracle, Rounding. We consider a single-machine scheduling problem and see another way of rounding fractional solutions to integer solutions.


Linear program Separation Oracle

Rounding. We consider a single-machine scheduling problem, and see another way of rounding fractional solutions to integer solutions. We will see that by solving a relaxation, we are able to get information on how the jobs might be ordered. We construct a solution in which we schedule jobs in the same order as given by the relaxation, and we are able to show that this leads to a good solution.

1|r_j|ΣC_j. Single machine. J = {1,…, n} – jobs. p_j ≥ 0 – processing time of job j. r_j ≥ 0 – release time of job j. C_j(σ) – completion time of job j in schedule σ. No preemption. The machine cannot process two jobs at the same time.

Example. Three jobs: p_1 = 1, r_1 = 2; p_2 = 3, r_2 = 1; p_3 = 10, r_3 = 0. Schedule σ_1 = (J_3, J_1, J_2): C_1(σ_1) + C_2(σ_1) + C_3(σ_1) = 11 + 14 + 10 = 35. Schedule σ_2 = (J_2, J_1, J_3): C_1(σ_2) + C_2(σ_2) + C_3(σ_2) = 5 + 4 + 15 = 24.
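
The two sums can be verified with a short script. This is a minimal sketch of our own (not from the slides; the helper name total_completion_time is made up) that computes completion times for a fixed nonpreemptive order:

import itertools  # not needed here, kept out; plain loop suffices

def total_completion_time(order, p, r):
    """Sum of completion times when jobs run nonpreemptively in `order`."""
    t = 0
    total = 0
    for j in order:
        t = max(t, r[j]) + p[j]   # wait for release, then process job j
        total += t
    return total

p = {1: 1, 2: 3, 3: 10}
r = {1: 2, 2: 1, 3: 0}
print(total_completion_time([3, 1, 2], p, r))   # 35  (schedule sigma_1)
print(total_completion_time([2, 1, 3], p, r))   # 24  (schedule sigma_2)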

1|pmtn, r_j|ΣC_j. We will show that we can convert any preemptive schedule into a nonpreemptive schedule in such a way that the completion time of each job at most doubles. In a preemptive schedule, we can still process only one job at a time on the machine, but we do not need to complete each job's required processing consecutively; we can interrupt the processing of a job with the processing of another job.

SRPT rule. Each time that a job is completed, or at the next release date, the job to be processed next is the one with the smallest remaining processing time among the available jobs. Denote by σ the schedule obtained by the SRPT rule; we show that σ is optimal. Assume that an optimal schedule σ* coincides with the schedule σ up to time t.
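
As an illustration, here is a minimal Python sketch of the SRPT rule (our own code, not from the slides). It uses the event-driven view in which decisions are made only at release dates and completions:

import heapq

def srpt(p, r):
    """Return completion times of the preemptive SRPT schedule."""
    jobs = sorted(p, key=lambda j: r[j])     # job ids in release order
    completion = {}
    available = []                           # min-heap of (remaining time, job)
    t, i = 0, 0
    while len(completion) < len(p):
        if not available:                    # machine idle: jump to next release
            t = max(t, r[jobs[i]])
        while i < len(jobs) and r[jobs[i]] <= t:
            heapq.heappush(available, (p[jobs[i]], jobs[i]))
            i += 1
        rem, j = heapq.heappop(available)    # smallest remaining processing time
        # run j until it finishes or the next release, whichever comes first
        run = rem if i == len(jobs) else min(rem, r[jobs[i]] - t)
        t += run
        if run < rem:
            heapq.heappush(available, (rem - run, j))   # j was preempted
        else:
            completion[j] = t
    return completion

p = {1: 1, 2: 3, 3: 10}
r = {1: 2, 2: 1, 3: 0}
print(srpt(p, r))   # {1: 3, 2: 5, 3: 14}; sum of completion times = 22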

Exchange argument. [Figure: schedules σ* and σ around time t — exchanging the portions of jobs i and j processed after time t in σ* does not increase C_i + C_j.]

Preemptive solution. σ_pr obtained by SRPT: J_3 runs in [0, 1), J_2 in [1, 2), J_1 in [2, 3), J_2 again in [3, 5), J_3 again in [5, 14). C_1(σ_pr) + C_2(σ_pr) + C_3(σ_pr) = 3 + 5 + 14 = 22.

Lower bound. Let C_j(σ_pr) be the completion time of job j in an optimal preemptive schedule. Let OPT be the sum of completion times in an optimal nonpreemptive schedule. Since any nonpreemptive schedule is also a feasible preemptive schedule, we have Σ_{j=1}^{n} C_j(σ_pr) ≤ OPT.

Algorithm «Rounding preemptive schedule».
1. Find an optimal preemptive schedule σ_pr using SRPT.
2. Schedule the jobs nonpreemptively in the same order in which they complete in σ_pr.
To be more precise, suppose that the jobs are indexed such that C_1(σ_pr) ≤ C_2(σ_pr) ≤ … ≤ C_n(σ_pr). Then we schedule job 1 from its release date r_1 to time r_1 + p_1. We schedule job 2 to start as soon as possible after job 1; that is, we schedule it from max(r_1 + p_1, r_2) to max(r_1 + p_1, r_2) + p_2. The remaining jobs are scheduled analogously. We will show that scheduling nonpreemptively in this way does not delay the jobs by too much.
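
A sketch of the rounding step in Python (our code; round_preemptive is a hypothetical name, and it reuses the srpt sketch above):

def round_preemptive(p, r, preemptive_completion):
    """Schedule jobs nonpreemptively in the order they complete preemptively."""
    order = sorted(p, key=lambda j: preemptive_completion[j])
    t, completion = 0, {}
    for j in order:
        t = max(t, r[j]) + p[j]   # start as soon as possible after predecessor
        completion[j] = t
    return completion

C_pr = srpt(p, r)                    # from the SRPT sketch above
print(round_preemptive(p, r, C_pr))  # {1: 3, 2: 6, 3: 16}; sum = 25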

Example. σ_pr: C_1(σ_pr) + C_2(σ_pr) + C_3(σ_pr) = 3 + 5 + 14 = 22. Scheduling nonpreemptively in the order J_1, J_2, J_3 (the order of completion in σ_pr) gives σ with C_1(σ) + C_2(σ) + C_3(σ) = 3 + 6 + 16 = 25.

Lemma 11.1. For each job j = 1,…, n, C_j(σ) ≤ 2 C_j(σ_pr).

Proof. Let us first derive some easy lower bounds on C_j(σ_pr). Since we know that jobs 1,…, j−1 complete in σ_pr before j, the machine has performed all of their processing, and that of j, by time C_j(σ_pr), so that C_j(σ_pr) ≥ Σ_{k=1}^{j} p_k. By construction it is also the case that C_j(σ_pr) ≥ C_k(σ_pr) ≥ r_k for each k = 1,…, j, so that C_j(σ_pr) ≥ max_{k=1,…,j} r_k.

Proof (continued). Consider the nonpreemptive schedule σ constructed by the algorithm, and focus on any period of time that the machine is idle; idle time occurs only when the next job to be processed has not yet been released. Consequently, in the time interval [max_{k=1,…,j} r_k, C_j(σ)] there cannot be any point in time at which the machine is idle, and only jobs 1,…, j are processed in it. Therefore, this interval can be of length at most Σ_{k=1}^{j} p_k, which gives C_j(σ) ≤ max_{k=1,…,j} r_k + Σ_{k=1}^{j} p_k ≤ 2 C_j(σ_pr).

2-approximation Theorem 11.2 Scheduling in order of the completion times of an optimal preemptive schedule is a 2-approximation algorithm for scheduling jobs on a single machine with release dates to minimize the sum of completion times.

1|r_j|Σw_jC_j. Single machine. J = {1,…, n} – jobs. p_j ≥ 0 – processing time of job j. r_j ≥ 0 – release time of job j. w_j ≥ 0 – weight of job j. C_j(σ) – completion time of job j in schedule σ. No preemption. The machine cannot process two jobs at the same time.

1|pmtn, r_j|Σw_jC_j. The algorithm «Rounding preemptive schedule» and its analysis give us a way to round any preemptive schedule to a nonpreemptive one whose sum of weighted completion times is at most twice as large. Unfortunately, we cannot use the same technique of finding a lower bound on the cost of the optimal nonpreemptive schedule by finding an optimal preemptive schedule: unlike the unweighted case, it is NP-hard to find an optimal schedule for the preemptive version of the weighted case.

What did we use to obtain the 2-approximation? In the proof of Lemma 11.1 we used two lower bounds on the completion times: C_j(σ_pr) ≥ Σ_{k=1}^{j} p_k and C_j(σ_pr) ≥ max_{k=1,…,j} r_k. We can give a linear programming relaxation of the problem with variables C_j such that these inequalities hold within a constant factor, which in turn will lead to a constant-factor approximation for the 1|r_j|Σw_jC_j problem.

Variables and constraints. Denote by C_j the completion time of job j. We want to minimize Σ_{j=1}^{n} w_j C_j. The first set of constraints is easy: for each job j = 1,…, n, job j cannot complete before it is released and processed, so that C_j ≥ r_j + p_j.   (1)

Second set of constraints. Consider some set S ⊆ J of jobs and the sum Σ_{j∈S} p_j C_j. This sum is minimized when all the jobs in S have a release date of 0 and all the jobs in S finish first in the schedule. Assuming these two conditions hold, any completion time C_j(σ) for j ∈ S is equal to p_j plus the sum of the processing times of the jobs in S that precede j in the schedule. Then in the product p_j C_j, p_j multiplies itself and the processing times of all jobs in S that precede j in the schedule, so the sum Σ_{j∈S} p_j C_j contains p_j² for every j ∈ S and p_j p_k for every pair j, k ∈ S.

Queyranne’s inequality: for every S ⊆ J, Σ_{j∈S} p_j C_j ≥ ½ ( (Σ_{j∈S} p_j)² + Σ_{j∈S} p_j² ).   (2)
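
A quick numeric sanity check (ours, not from the slides): the completion times of any feasible schedule, here σ_2 from the earlier example, satisfy the inequality for every subset S.

import itertools

def queyranne_holds(S, p, C, eps=1e-9):
    """Check inequality (2) for the set S of job ids."""
    lhs = sum(p[j] * C[j] for j in S)
    ps = sum(p[j] for j in S)
    return lhs >= 0.5 * (ps * ps + sum(p[j] ** 2 for j in S)) - eps

p = {1: 1, 2: 3, 3: 10}
C = {1: 5, 2: 4, 3: 15}                  # completion times of sigma_2
assert all(queyranne_holds(S, p, C)
           for k in range(1, 4)
           for S in itertools.combinations(p, k))
print("all subsets satisfy inequality (2)")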

LP(1|r_j|Σw_jC_j): minimize Σ_{j=1}^{n} w_j C_j subject to C_j ≥ r_j + p_j for all j ∈ J (1), and Σ_{j∈S} p_j C_j ≥ ½ ( (Σ_{j∈S} p_j)² + Σ_{j∈S} p_j² ) for all S ⊆ J (2). Note that there is a constraint of type (2) for each of the 2^n − 1 nonempty subsets S.

Algorithm for 1|r_j|Σw_jC_j.
1. Find an optimal solution σ* = (C_1(σ*), C_2(σ*), …, C_n(σ*)) of LP(1|r_j|Σw_jC_j).
2. Schedule the jobs nonpreemptively in the same order in which they complete in σ*; call the resulting schedule σ.
3. Output σ.
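
A sketch of steps 2–3 (our code; it assumes C_star, the optimal LP completion times from step 1, is already available from some LP solver):

def schedule_by_lp_order(p, r, w, C_star):
    """Schedule in order of LP completion times; return order and objective."""
    order = sorted(p, key=lambda j: C_star[j])
    t, obj = 0, 0
    for j in order:
        t = max(t, r[j]) + p[j]      # start each job as early as possible
        obj += w[j] * t              # weighted completion time
    return order, obj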

3-approximation. Theorem 11.3. Scheduling in order of the completion times of σ* is a 3-approximation algorithm for scheduling jobs on a single machine with release dates to minimize the sum of weighted completion times.

Proof. Assume that the jobs are reindexed so that C_1(σ*) ≤ C_2(σ*) ≤ … ≤ C_n(σ*). As in the proof of Lemma 11.1, there cannot be any idle time in the time interval [max_{k=1,…,j} r_k, C_j(σ)]. Therefore it must be the case that C_j(σ) ≤ max_{k=1,…,j} r_k + Σ_{k=1}^{j} p_k.

Let l ∈ {1,…, j} be the index of the job that maximizes max_{k=1,…,j} r_k, so that r_l = max_{k=1,…,j} r_k. We have C_j(σ*) ≥ C_l(σ*) and C_l(σ*) ≥ r_l by the LP constraints (1); thus C_j(σ*) ≥ max_{k=1,…,j} r_k. Consider the set S = {1,…, j}. From the fact that σ* is a feasible LP solution, we know that Σ_{k=1}^{j} p_k C_k(σ*) ≥ ½ (Σ_{k=1}^{j} p_k)². Since C_1(σ*) ≤ C_2(σ*) ≤ … ≤ C_j(σ*), we have Σ_{k=1}^{j} p_k C_k(σ*) ≤ C_j(σ*) Σ_{k=1}^{j} p_k. By combining these two inequalities we see that C_j(σ*) ≥ ½ Σ_{k=1}^{j} p_k.

Proof (conclusion). Combining the bounds, C_j(σ) ≤ max_{k=1,…,j} r_k + Σ_{k=1}^{j} p_k ≤ C_j(σ*) + 2 C_j(σ*) = 3 C_j(σ*). Multiplying by w_j and summing over j gives Σ_j w_j C_j(σ) ≤ 3 Σ_j w_j C_j(σ*) ≤ 3 · OPT, since the LP is a relaxation of the scheduling problem.

How to solve the LP? It has an exponential number of constraints of type (2), one for each subset S ⊆ J, so we cannot simply write it down and hand it to a solver. Instead we use the ellipsoid method with a separation oracle.

Ellipsoid method (draft). The input for the algorithm is a system of inequalities P = {Cx ≤ d} with n variables and integral coefficients. We would like to determine whether P is empty or not, and if it is nonempty, we would like to find a point in P.
1. Let N = 2n((2n+1)⟨C⟩ + n⟨d⟩ − n³), where ⟨·⟩ denotes the encoding length, and let k = 0.
2. Find a "big" ellipsoid E_0(A_0, a_0) that contains the polytope P.
3. If k = N, then STOP! (Declare P empty.)
4. If a_k ∈ P, then STOP! (A feasible solution is found.)
5. If a_k ∉ P, then choose an inequality that is violated by a_k.
6. Create a new ellipsoid E_{k+1}(A_{k+1}, a_{k+1}), set k = k + 1, and go to 3.

Löwner–John ellipsoid. E = E(A, a) = { x ∈ R^n : (x − a)^T A^{−1} (x − a) ≤ 1 }. E′(A, a, c) = E(A, a) ∩ { x ∈ R^n : c^T x ≤ c^T a } is a half-ellipsoid; the Löwner–John ellipsoid of E′ is the smallest ellipsoid containing it, and it serves as E_{k+1} in the method above.
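
For concreteness, here is a sketch of the standard central-cut update formulas that compute the Löwner–John ellipsoid of the half-ellipsoid E′(A, a, c); these formulas are standard but are not given on the slide, so treat this as an assumption-laden illustration.

import numpy as np

def ellipsoid_update(A, a, c):
    """New ellipsoid E(A', a') containing E(A, a) ∩ {x : c^T x <= c^T a}."""
    n = len(a)                                # dimension, assumed n >= 2
    b = A @ c / np.sqrt(c @ A @ c)            # step direction toward the cut
    a_new = a - b / (n + 1)                   # shifted center
    A_new = (n**2 / (n**2 - 1)) * (A - (2 / (n + 1)) * np.outer(b, b))
    return A_new, a_new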

Ellipsoid method (draft), revisited. To implement steps 4 and 5 we need a polynomial-time procedure (a separation oracle): given a_k, it either certifies that a_k ∈ P or returns an inequality of the system that is violated by a_k.

How to find a violated constraint? Given a solution σ, reindex the variables so that C_1(σ) ≤ C_2(σ) ≤ … ≤ C_n(σ). Let S_1 = {1}, S_2 = {1, 2},…, S_n = {1,…, n}. We claim that it is sufficient to check whether the constraints (2) are violated for the n sets S_1, S_2,…, S_n. If any of these n constraints is violated, then we return the corresponding set as a violated constraint. If not, we show below that all constraints (2) are satisfied.
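
A sketch of this oracle in Python (our code; constraints (1) are checked job by job, then the n prefix sets are checked as just described):

def find_violated_set(p, r, C, eps=1e-9):
    """Return a violated constraint's job set, or None if C is feasible."""
    # constraints (1): C_j >= r_j + p_j, checked directly
    for j in p:
        if C[j] < r[j] + p[j] - eps:
            return [j]
    # constraints (2): by Lemma 11.4 it suffices to check the prefix sets
    order = sorted(p, key=lambda j: C[j])    # C_1 <= C_2 <= ... <= C_n
    lhs = ps = sq = 0.0
    prefix = []
    for j in order:
        prefix.append(j)
        lhs += p[j] * C[j]                   # sum of p_j C_j over the prefix
        ps += p[j]                           # sum of p_j
        sq += p[j] ** 2                      # sum of p_j^2
        if lhs < 0.5 * (ps * ps + sq) - eps:
            return list(prefix)              # violated set S_i
    return None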

Separation oracle. Lemma 11.4. Given variables C_j, if constraints (2) are satisfied for the n sets S_1, S_2,…, S_n, then they are satisfied for all S ⊆ J.

Proof (1). Let S ⊆ J be a set whose constraint is not satisfied; that is, Σ_{j∈S} p_j C_j < ½ ( (Σ_{j∈S} p_j)² + Σ_{j∈S} p_j² ). We will show that then there must be some set S_i whose constraint is also not satisfied. We do this by considering changes to S that decrease the difference x = Σ_{j∈S} p_j C_j − ½ ( (Σ_{j∈S} p_j)² + Σ_{j∈S} p_j² ). Any such change will result in another set that also does not satisfy the constraint.

Removing a job k from S changes x by p_k (Σ_{j∈S} p_j − C_k), so it decreases x if C_k > Σ_{j∈S} p_j. Adding a job k ∉ S to S changes x by p_k (C_k − Σ_{j∈S} p_j − p_k), so it decreases x if C_k < Σ_{j∈S} p_j + p_k.

Removing jobs. Let l be the highest-indexed job in S. We remove l from S if C_l > Σ_{j∈S} p_j. In this case the resulting set S \ {l} also does not satisfy the constraint (2). We continue to remove the highest-indexed job in the resulting set until finally we have a set S such that its highest-indexed job l has C_l ≤ Σ_{j∈S} p_j.

Adding jobs. Now suppose S ≠ S_l = {1,…, l}. Let k < l with k ∉ S. Since the completion times are ordered, we have C_k ≤ C_l ≤ Σ_{j∈S} p_j < Σ_{j∈S} p_j + p_k. It follows that adding k to S can only decrease the difference x. Thus we can add all jobs k < l that are missing from S, and the resulting set S_l will also not satisfy the constraint (2).

Ellipsoid Method (1). Suppose we are trying to solve LP(1|r_j|Σw_jC_j). Initially, the algorithm finds an ellipsoid in R^n containing all basic solutions of the linear program. Let Č be the center of the ellipsoid. The algorithm calls the separation oracle with Č. If Č is feasible, it creates the constraint Σ_j w_j C_j ≤ Σ_j w_j Č_j, since a basic optimal solution must have objective function value no greater than that of the feasible solution Č. This constraint is sometimes called an objective function cut.

Ellipsoid Method (2). If Č is not feasible, the separation oracle returns a constraint Σ_j a_{ij} C_j ≥ b_i that is violated by Č. In either case, we have a hyperplane through Č such that a basic optimal solution to the linear program must lie on one side of it. In the case of a feasible Č the hyperplane is Σ_j w_j C_j ≤ Σ_j w_j Č_j; in the case of an infeasible Č it is Σ_j a_{ij} C_j ≥ Σ_j a_{ij} Č_j.

Ellipsoid Method (3). The hyperplane containing Č splits the ellipsoid in two. The algorithm then finds a new ellipsoid containing the appropriate half of the original ellipsoid, and considers the center of the new ellipsoid. This process repeats until the ellipsoid is sufficiently small that it can contain at most one basic feasible solution. This solution must be a basic optimal solution.