Alexander Kononov, Sobolev Institute of Mathematics, Siberian Branch of the Russian Academy of Sciences, Novosibirsk, Russia

[Figure: map of Russia marking Novosibirsk]

How to design a PTAS
adapted from the novel by P. Schuurman and G. Woeginger
directed by Alexander Kononov

The Harry Potter problem

Client: Could you find a schedule for my new project with the minimal cost?
Sorcerers: We can do that! Real sorcerers can do everything! And we guess the cost of the project will be …$.
Client: Sounds great! Wonderful! Go ahead and determine this schedule! Tomorrow we start my new project!
Sorcerers: We cannot do that by tomorrow.
Client: Real sorcerers can do everything!
Sorcerers: But finding the schedule is going to take us 23.5 years!

Sorcerers: Tomorrow… …$.
Client: But… I want …$!
Sorcerers: Then… after 23.5 years.
Client: The day after tomorrow?
Sorcerers: …$.
Client: Three days from now?
Sorcerers: …$.
Client: What if I call you up exactly X days from now?
Sorcerers: (1 + 1/X)·…$.

NP-hard problems
Almost all interesting combinatorial problems are NP-hard.
Nobody knows a polynomial-time exact algorithm for any NP-hard problem.
If there exists a polynomial-time exact algorithm for some NP-hard problem, then every problem in NP has a polynomial-time exact algorithm.
Most researchers believe that no polynomial-time exact algorithm for an NP-hard problem exists.
We have to solve NP-hard problems approximately.

Approximation algorithm
An algorithm A is called a ρ-approximation algorithm for problem Π if, for all instances I of Π, it delivers a feasible solution with objective value A(I) such that
A(I) ≤ ρ·OPT(I).

Polynomial time approximation scheme (PTAS)
An approximation scheme for problem Π is a family of (1+ε)-approximation algorithms A_ε for problem Π over all 0 < ε < 1.
A polynomial time approximation scheme for problem Π is an approximation scheme whose time complexity is polynomial in the input size.

A fully polynomial time approximation scheme (FPTAS)
A fully polynomial time approximation scheme for problem Π is an approximation scheme whose time complexity is polynomial in the input size and also polynomial in 1/ε.

Remarks
Typical running times:
PTAS: |I|^(2/ε), |I|^(2/ε^10), (|I|^(2/ε))^(1/ε).
FPTAS: |I|^2 / ε, |I| / ε^2, |I|^7 / ε^3.
With respect to worst-case approximation, an FPTAS is the strongest possible result that we can derive for an NP-hard problem.

P2||C_max
J = {1,…, n} – jobs.
{M_1, M_2} – identical machines.
Job j has processing time p_j > 0 (j = 1,…, n).
Each job has to be executed by one of the two machines.
All jobs are available at time 0 and preemption is not allowed.
Each machine executes at most one job at a time.
The goal is to minimize the maximum job completion time (the makespan).

How to get a PTAS
Simplification of instance I.
Partition of the output space.
Adding structure to the execution of an algorithm A.
[Diagram: Instance I → Algorithm A → Output A(I)]

Simplification of instance I
The first idea is to turn a difficult instance into a more primitive instance that is easier to tackle. Then we use the optimal solution of the primitive instance to get a near-optimal solution of the original instance.
[Diagram: I → (simplify) → I#; solve I# to obtain OPT#; translate back to obtain an approximate solution App for I.]

Approaches to simplification
Rounding.
Merging.
Cutting.
Aligning.

Rounding

Rounding

Merging

Merging

Cutting

Cutting

Aligning

Aligning

P2||C_max
J = {1,…, n} – jobs.
{M_1, M_2} – identical machines.
Job j has processing time p_j > 0 (j = 1,…, n).
Each job has to be executed by one of the two machines.
All jobs are available at time 0 and preemption is not allowed.
Each machine executes at most one job at a time.
The goal is to minimize the maximum job completion time (the makespan).

Lower bound
L := max{ max_j p_j , (1/2)·Σ_j p_j }.
Then L ≤ OPT(I) ≤ p_sum ≤ 2L.

How to simplify an instance (I → I#)
Big = { j ∈ J | p_j ≥ εL }.
The new instance I# contains all the big jobs from I.
Small = { j ∈ J | p_j < εL }.
Let X = Σ_{j ∈ Small} p_j.
The new instance I# contains ⌈X/(εL)⌉ jobs of length εL.
The small jobs in I are first glued together to give one long job of length X, and then this long job is cut into chunks of length εL.
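To make the construction concrete, here is a minimal Python sketch of this simplification step (the function name simplify_instance and the encoding of an instance as a plain list of processing times are my own choices, not fixed by the slides):

import math

def simplify_instance(jobs, eps):
    # Lower bound from the previous slide: L <= OPT(I) <= 2L.
    L = max(max(jobs), sum(jobs) / 2)
    big = [p for p in jobs if p >= eps * L]        # big jobs are kept as they are
    small = [p for p in jobs if p < eps * L]
    X = sum(small)                                 # glue the small jobs together...
    chunks = [eps * L] * math.ceil(X / (eps * L))  # ...and cut into chunks of length eps*L
    return big, small, chunks, L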

I and I#
The optimal makespan of I# is fairly close to the optimal makespan of I:
OPT(I#) ≤ (1 + ε)·OPT(I).

Proof
X_i – the total size of all small jobs on machine M_i in an optimal schedule for I.
On M_i, leave every big job where it is in the optimal schedule.
Replace the small jobs on M_i by ⌈X_i/(εL)⌉ chunks of length εL.
⌈X_1/(εL)⌉ + ⌈X_2/(εL)⌉ ≥ (X_1 + X_2)/(εL) = X/(εL), so all chunks of I# are placed.
⌈X_i/(εL)⌉·εL − X_i ≤ (X_i/(εL) + 1)·εL − X_i = εL, so each machine load grows by at most εL.
Hence OPT(I#) ≤ OPT(I) + εL ≤ (1 + ε)·OPT(I), since L ≤ OPT(I).

How to solve the simplified instance
How many jobs are in instance I#?
p_j ≥ εL for all jobs in I#.
The total length of all jobs in I#: p_sum ≤ 2L.
Hence the number of jobs in I# is at most 2L/(εL) = 2/ε.
The number of jobs in I# is independent of n.
We may simply try all possible schedules.
The number of all possible schedules is at most 2^(2/ε)!
The running time is O(2^(2/ε)·n)!
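Since I# contains at most 2/ε jobs, trying all machine assignments is affordable. A sketch (hypothetical function name, same list encoding as above):

from itertools import product

def brute_force(jobs_sharp):
    # Enumerate all 2^k assignments of the constantly many jobs of I#.
    best_value, best_assignment = float("inf"), None
    for assignment in product((0, 1), repeat=len(jobs_sharp)):
        loads = [0.0, 0.0]
        for p, machine in zip(jobs_sharp, assignment):
            loads[machine] += p
        if max(loads) < best_value:
            best_value, best_assignment = max(loads), assignment
    return best_value, best_assignment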

How to translate the solution back
Let σ# be an optimal schedule for instance I#.
Let L_i# be the load of machine M_i in σ#.
Let B_i# be the total length of the big jobs on M_i in σ#.
Let X_i# be the total size of the small jobs (chunks) on M_i in σ#.
L_i# = B_i# + X_i#.

σ#(I#) → σ(I)
Every big job is put onto the same machine as in schedule σ#.
Reserve an interval of length X_1# + 2εL on machine M_1 and an interval of length X_2# on machine M_2.
Pack small jobs into the reserved interval on machine M_1 until we meet some small job that does not fit anymore.
Pack the remaining unpacked small jobs into the reserved interval on machine M_2.
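A sketch of this translation step (hypothetical names; big, big_assignment and the chunk load x1_sharp would come from simplify_instance and brute_force above):

def translate_back(big, big_assignment, small, x1_sharp, eps, L):
    # Big jobs keep the machine they have in sigma#.
    m1 = [p for p, m in zip(big, big_assignment) if m == 0]
    m2 = [p for p, m in zip(big, big_assignment) if m == 1]
    capacity = x1_sharp + 2 * eps * L      # reserved interval on M1
    used = 0.0
    for k, p in enumerate(small):
        if used + p > capacity:            # first small job that does not fit:
            m2.extend(small[k:])           # it and all remaining small jobs go to M2
            break
        m1.append(p)                       # otherwise pack it into the reserve on M1
        used += p
    return m1, m2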

PTAS
The makespan of the translated schedule σ(I) is at most OPT(I#) + 2εL ≤ (1 + 3ε)·OPT(I).
Rescaling ε yields a PTAS for P2||C_max.

Structuring the output
The main idea is to cut the output space (i.e., the set of feasible solutions) into lots of smaller regions over which the optimization problem is easy to approximate. Solving the problem separately for each smaller region and taking the best approximate solution over all regions will then yield a globally good approximate solution.
1. Partition.
2. Find representatives.
3. Take the best.

Partition
[Figure: the set of feasible solutions cut into districts; * marks the global optimal solution.]

Find representatives
[Figure: in every district, an optimal solution of the district and a chosen representative are marked, together with the global optimal solution.]

Take the best
[Figure: among all district representatives, the best one is returned.]

P2||C_max
J = {1,…, n} – jobs.
{M_1, M_2} – identical machines.
Job j has processing time p_j > 0 (j = 1,…, n).
Each job has to be executed by one of the two machines.
All jobs are available at time 0 and preemption is not allowed.
Each machine executes at most one job at a time.
The goal is to minimize the maximum job completion time (the makespan).

How to define the districts
Big = { j ∈ J | p_j ≥ εL }.
Small = { j ∈ J | p_j < εL }.
Let Φ be the set of feasible solutions for I.
Every feasible solution σ ∈ Φ specifies an assignment of the n jobs to the two machines.
Define the districts Φ(1), Φ(2), … according to the assignment of big jobs to the two machines: two feasible solutions σ_1 and σ_2 lie in the same district if and only if σ_1 assigns every big job to the same machine as σ_2 does.

Number of districts
The number of big jobs is at most 2L/(εL) = 2/ε.
The number of different ways of assigning these jobs to the two machines is at most 2^(2/ε).
So the number of districts is at most 2^(2/ε)!
The number of districts depends only on ε and is independent of the input size!

How to find good representatives
The assignment of big jobs to their machines is fixed within Φ(l).
Let OPT(l) be the makespan of the best schedule in Φ(l).
Let B_i(l) be the total length of the big jobs assigned to machine M_i.
T := max{ B_1(l), B_2(l) } ≤ OPT(l).
The initial workload of machine M_i is B_i(l).
We assign the small jobs one by one to the machines; every time, a job is assigned to the machine with the currently smaller workload.
The resulting schedule σ(l) with makespan A(l) is our representative for the district Φ(l).
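The whole scheme fits in a few lines of Python; this sketch (the function name ptas_districts is mine) enumerates the districts and builds one greedy representative per district:

from itertools import product

def ptas_districts(jobs, eps):
    L = max(max(jobs), sum(jobs) / 2)
    big = [p for p in jobs if p >= eps * L]
    small = [p for p in jobs if p < eps * L]
    best = float("inf")
    # One district per assignment of the big jobs: at most 2^(2/eps) districts.
    for assignment in product((0, 1), repeat=len(big)):
        loads = [0.0, 0.0]                        # initial workloads B_1(l), B_2(l)
        for p, machine in zip(big, assignment):
            loads[machine] += p
        for p in small:                           # greedy: currently smaller workload
            loads[loads.index(min(loads))] += p
        best = min(best, max(loads))              # keep the best representative
    return best

For example, ptas_districts([3, 3, 2, 2, 2], 0.5) returns 6.0, which here equals the optimal makespan.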

How close is A(l) to OPT(l)?
1. If A(l) = T, then A(l) = OPT(l).
2. Let A(l) > T. Consider the machine with the higher workload in the schedule σ(l).
Then the last job that was assigned to this machine is a small job, and it has length at most εL.
At the moment when this small job was assigned, the workload of this machine was at most p_sum/2.
A(l) ≤ p_sum/2 + εL ≤ (1 + ε)·OPT ≤ (1 + ε)·OPT(l).

Structuring the execution of an algorithm
The main idea is to take an exact but slow algorithm A and to interact with it while it is working.
If the algorithm accumulates a lot of auxiliary data during its execution, then we may remove part of this data and clean up the algorithm's memory.
As a result, the algorithm becomes faster.

P2||C_max
J = {1,…, n} – jobs.
{M_1, M_2} – identical machines.
Job j has processing time p_j > 0 (j = 1,…, n).
Each job has to be executed by one of the two machines.
All jobs are available at time 0 and preemption is not allowed.
Each machine executes at most one job at a time.
The goal is to minimize the maximum job completion time (the makespan).

Encoding of feasible solutions
Let σ_k be a feasible schedule of the first k jobs {1,…, k}.
We encode a feasible schedule σ_k with machine loads L_1 and L_2 by the two-dimensional vector [L_1, L_2].
Let V_k be the set of vectors corresponding to feasible schedules of the first k jobs {1,…, k}.

Dynamic programming
Input: (J = {1,…, n}, p: J → Z_+)
1) Set V_0 = {[0,0]}, i = 0.
2) While i < n do:
   for every vector [x,y] ∈ V_i, put [x + p_{i+1}, y] and [x, y + p_{i+1}] into V_{i+1};
   i := i + 1.
3) Find a vector [x*, y*] ∈ V_n that minimizes the value max{x, y}.
Output: ([x*, y*])
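This dynamic program translates directly into Python (load vectors kept as a set of pairs):

def exact_dp(jobs):
    V = {(0, 0)}                                   # V_0
    for p in jobs:
        # Every old vector spawns two new ones: job p goes to M1 or to M2.
        V = {(x + p, y) for (x, y) in V} | {(x, y + p) for (x, y) in V}
    return min(max(x, y) for (x, y) in V)          # optimal makespan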

Running time
The coordinates of all vectors are integers in the range from 0 to p_sum.
The cardinality of every vector set V_i is bounded from above by (p_sum)^2.
The total number of vectors determined by the algorithm is at most n·(p_sum)^2.
The running time of the algorithm is O(n·(p_sum)^2).
The size |I| of the input I satisfies |I| ≥ log(p_sum) = const · ln(p_sum).
So the running time of the algorithm is not polynomial in the size of the input!

How to simplify the vector sets
[Figure: the square [1, p_sum] × [1, p_sum] is cut by the geometric grid with breakpoints 1, Δ, Δ^2, Δ^3, …, Δ^K in each coordinate.]
Δ = 1 + ε/(2n).
K = log_Δ(p_sum) = ln(p_sum)/ln Δ ≤ ((1+2n)/ε)·ln(p_sum).

Trimmed vector set
[Figure: the same grid; from every box that contains vectors of V_i, only a single representative vector is kept.]

Algorithm FPTAS
Input: (J = {1,…, n}, p: J → Z_+)
1. Set V_0# = {[0,0]}, i = 0.
2. While i < n do:
   for every vector [x,y] ∈ V_i#, put [x + p_{i+1}, y] and [x, y + p_{i+1}] into V_{i+1};
   transform V_{i+1} into the trimmed set V_{i+1}# (keep at most one vector per grid box);
   i := i + 1.
3. Find a vector [x*, y*] ∈ V_n# that minimizes the value max{x, y}.
Output: ([x*, y*])
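A minimal Python sketch of the trimmed dynamic program; indexing the grid boxes through logarithms is one possible implementation choice, not prescribed by the slides:

import math

def fptas(jobs, eps):
    n = len(jobs)
    delta = 1 + eps / (2 * n)

    def box(v):
        # Index of the grid interval [delta^k, delta^(k+1)) containing v.
        return -1 if v == 0 else math.floor(math.log(v) / math.log(delta))

    V = {(0, 0)}
    for p in jobs:
        V = {(x + p, y) for (x, y) in V} | {(x, y + p) for (x, y) in V}
        trimmed = {}
        for x, y in V:
            trimmed.setdefault((box(x), box(y)), (x, y))   # one vector per box
        V = set(trimmed.values())
    return min(max(x, y) for (x, y) in V)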

Running time of FPTAS
The trimmed vector set V_i# contains at most one vector in each box.
There are at most K^2 boxes.
Running time of FPTAS: O(n·K^2).
n·K^2 ≤ n·(((1+2n)/ε)·ln(p_sum))^2.
Algorithm FPTAS has a time complexity that is polynomial in the input size and in 1/ε.

V_i and V_i#
For every vector [x,y] ∈ V_i there exists a vector [x#, y#] ∈ V_i# such that x# ≤ Δ^i·x and y# ≤ Δ^i·y (by induction on i: each trimming step loses a factor of at most Δ per coordinate).

The worst-case behavior of FPTAS
By the previous slide, the output vector is within a factor Δ^n of an optimal load vector.
Δ^n = (1 + ε/(2n))^n ≤ e^(ε/2) ≤ 1 + ε for 0 < ε < 1.
Hence the schedule returned by the FPTAS has makespan at most (1 + ε)·OPT(I).

Final remarks
Did we consider all approaches? No, we did not, of course!
Approximation Algorithms for NP-hard Problems, edited by D. Hochbaum, PWS Publishing Company, 1997.
V. Vazirani, Approximation Algorithms, Springer-Verlag, Berlin, 2001.
P. Schuurman, G. Woeginger, Approximation Schemes – A Tutorial; chapter of the book Lectures on Scheduling, to appear in 2008.