A Unified Approach to Approximating Resource Allocation and Scheduling


A Unified Approach to Approximating Resource Allocation and Scheduling
Amotz Bar-Noy ……....... AT&T and Tel Aviv University
Reuven Bar-Yehuda …. Technion IIT
Ari Freund …………… Technion IIT
Seffi Naor ……………. Bell Labs and Technion IIT
Baruch Schieber …...… IBM T.J. Watson
Slides and paper at: http://www.cs.technion.ac.il/~reuven

Summary of Results: Discrete

Single Machine Scheduling
Bar-Noy, Guha, Naor and Schieber, STOC 99: 1/2 (non-combinatorial)
Berman, DasGupta, STOC 00: 1/2
This talk, STOC 00 (independent): 1/2

Bandwidth Allocation
Albers, Arora, Khanna, SODA 99: O(1) for |Activity_i| = 1
Uma, Phillips, Wein, SODA 00: 1/4 (non-combinatorial)
This talk, STOC 00 (independent): 1/3 for w ≤ 1/2
This talk, STOC 00 (independent): 1/5 for w ≤ 1

Parallel Unrelated Machines
Bar-Noy, Guha, Naor and Schieber, STOC 99: 1/3
Berman, DasGupta, STOC 00: 1/2
This talk, STOC 00 (independent): 1/2

Summary of Results: Continuous

Single Machine Scheduling
Bar-Noy, Guha, Naor and Schieber, STOC 99: 1/3 (non-combinatorial)
Berman, DasGupta, STOC 00: 1/2·(1−ε)
This talk, STOC 00 (independent): 1/2·(1−ε)

Bandwidth Allocation
Uma, Phillips, Wein, SODA 00: 1/6 (non-combinatorial)
This talk, STOC 00 (independent): 1/3·(1−ε) for w ≤ 1/2 ; 1/5·(1−ε) for w ≤ 1

Parallel Unrelated Machines
Bar-Noy, Guha, Naor and Schieber, STOC 99: 1/4

Summary of Results: and more…

General Off-line Caching Problem
Albers, Arora, Khanna, SODA 99: O(1) with Cache_Size increased by O(Largest_Page), or O(log(Cache_Size + Max_Page_Penalty))
This talk, STOC 00: 4

Ring topology: a transformation of the approximation ratio from the line to the ring topology, 1/α → 1/(α+1+ε)

Dynamic storage allocation (contiguous allocation):
Previous results: none for throughput maximization
Previous results (Kierstead 91) for resource minimization: 6
This paper: 1/35 for throughput maximization, using the result for resource minimization

The Local-Ratio Technique: Basic definitions

Given a profit [penalty] vector p:
Maximize [Minimize] p·x subject to feasibility constraints F(x).
x is an r-approximation if F(x) holds and p·x ≥ [≤] r·(p·x*), where x* is an optimal solution.
An algorithm is an r-approximation if for any p, F it returns an r-approximation.

The Local-Ratio Theorem:
If x is an r-approximation with respect to p1, and x is an r-approximation with respect to p − p1, then x is an r-approximation with respect to p.

Proof (for maximization, writing p2 = p − p1):
p1·x ≥ r × p1*
p2·x ≥ r × p2*
p·x ≥ r × (p1* + p2*) ≥ r × (p1 + p2)*

Special case: optimization is a 1-approximation.
If x is optimal with respect to p1, and x is optimal with respect to p − p1, then x is optimal with respect to p.

A Local-Ratio Schema for Maximization [Minimization] problems:

Algorithm r-ApproxMax[Min]( S, p )
  If S = Φ then return Φ ;
  If ∃I ∈ S with p(I) ≤ 0 then return r-ApproxMax( S − {I}, p ) ;
  [If ∃I ∈ S with p(I) = 0 then return {I} ∪ r-ApproxMin( S − {I}, p ) ;]
  Define a "good" p1 ;
  REC = r-ApproxMax[Min]( S, p − p1 ) ;
  If REC is not an r-approximation w.r.t. p1 then "fix it" ;
  return REC ;

The Local-Ratio Theorem: Applications

Applications to some optimization algorithms (r = 1):
(MST) Minimum Spanning Tree (Kruskal)
(SHORTEST-PATH) s-t Shortest Path (Dijkstra)
(LONGEST-PATH) s-t DAG Longest Path (can be done with dynamic programming)
(INTERVAL-IS) Independent Set in Interval Graphs (usually done with dynamic programming)
(LONG-SEQ) Longest (weighted) monotone subsequence (can be done with dynamic programming)
(MIN-CUT) Minimum Capacity s-t Cut (e.g. Ford, Dinitz)

Applications to some 2-approximation algorithms (r = 2):
(VC) Minimum Vertex Cover (Bar-Yehuda and Even)
(FVS) Vertex Feedback Set (Becker and Geiger)
(GSF) Generalized Steiner Forest (Williamson, Goemans, Mihail, and Vazirani)
(Min 2SAT) Minimum Two-Satisfiability (Gusfield and Pitt)
(2VIP) Two Variable Integer Programming (Bar-Yehuda and Rawitz)
(PVC) Partial Vertex Cover (Bar-Yehuda)
(GVC) Generalized Vertex Cover (Bar-Yehuda and Rawitz)

Applications to some other approximations:
(SC) Minimum Set Cover (Bar-Yehuda and Even)
(PSC) Partial Set Cover (Bar-Yehuda)
(MSP) Maximum Set Packing (Arkin and Hassin)

Applications to resource allocation and scheduling: ….

Maximum Independent Set in Interval Graphs

[Figure: Activities 1–9 drawn as intervals on a time line.]

Maximize Σ_I p(I)·x(I)
s.t. For each instance I: x(I) ∈ {0,1}
     For each time t: Σ_{I: t ∈ I} x(I) ≤ 1

Maximum Independent Set in Interval Graphs: How to select p1 to get optimization?

Let Î be an interval that ends first. For all intervals I define:
  p1(I) = 1 if I is in conflict with Î
          0 otherwise

For every feasible x: p1·x ≤ 1 (all intervals conflicting with Î pairwise intersect at the endpoint of Î)
For every Î-maximal x: p1·x ≥ 1
Hence every Î-maximal solution is optimal with respect to p1.

[Figure: the intervals in conflict with Î get p1 = 1; all others get p1 = 0.]

Maximum Independent Set in Interval Graphs: An Optimization Algorithm

Algorithm MaxIS( S, p )
  If S = Φ then return Φ ;
  If ∃I ∈ S with p(I) ≤ 0 then return MaxIS( S − {I}, p ) ;
  Let Î ∈ S be an interval that ends first ;
  For all I ∈ S define: p1(I) = p(Î) · [I is in conflict with Î] ;
  IS = MaxIS( S, p − p1 ) ;
  If IS is Î-maximal then return IS else return IS ∪ {Î} ;

[Figure: intervals in conflict with Î get p1 = p(Î); all others get p1 = 0.]
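The recursion above can be sketched directly in code. This is a minimal Python rendering of my own (function and tuple names are mine, not from the paper); intervals are half-open (start, end] tuples with a profit, and spans are assumed distinct.

```python
def max_is(intervals):
    """Local-ratio MaxIS sketch: intervals are (start, end, profit) with
    half-open spans (start, end]; returns a max-profit conflict-free subset.
    Assumes distinct (start, end) spans."""
    # Delete intervals with non-positive profit.
    live = [iv for iv in intervals if iv[2] > 0]
    if not live:
        return []

    def conflicts(a, b):  # half-open overlap test
        return a[0] < b[1] and b[0] < a[1]

    i_hat = min(live, key=lambda iv: iv[1])  # an interval that ends first
    # p1 charges p(i_hat) to every interval in conflict with i_hat
    # (i_hat conflicts with itself); recurse on p - p1.
    reduced = [(s, e, p - (i_hat[2] if conflicts((s, e, p), i_hat) else 0))
               for (s, e, p) in live]
    sol_spans = {(s, e) for (s, e, _) in max_is(reduced)}
    sol = [iv for iv in live if (iv[0], iv[1]) in sol_spans]
    # Fix-up: make the returned solution i_hat-maximal.
    if all(not conflicts(iv, i_hat) for iv in sol):
        sol.append(i_hat)
    return sol
```

On e.g. [(0, 2, 3), (1, 3, 5), (2, 4, 3)] this picks the two non-overlapping intervals of total profit 6 rather than the single interval of profit 5.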

Maximum Independent Set in Interval Graphs: Running Example

[Figure: the profits shrink as the p1 layers are subtracted in the recursive calls, e.g. p(I1) = 5 − 5, p(I2) = 3 − 5, p(I3) = 5 − 5, p(I4) = 9 − 5 − 4, p(I5) = 3 − 4, p(I6) = 6 − 4 − 2.]

Single Machine Scheduling:
Bar-Noy, Guha, Naor and Schieber, STOC 99: 1/2 (LP)
Berman, DasGupta, STOC 00: 1/2
This talk, STOC 00 (independent): 1/2

[Figure: each of Activities 1–9 has several alternative instances on the time line.]

Maximize Σ_I p(I)·x(I)
s.t. For each instance I: x(I) ∈ {0,1}
     For each time t: Σ_{I: t ∈ I} x(I) ≤ 1
     For each activity A: Σ_{I ∈ A} x(I) ≤ 1

Single Machine Scheduling: How to select p1 to get a 1/2-approximation?

Let Î be an instance that ends first. For all instances I define:
  p1(I) = 1 if I is in conflict with Î (time conflict or same activity)
          0 otherwise

For every feasible x: p1·x ≤ 2 (at most one chosen instance belongs to Î's activity, and at most one overlaps the endpoint of Î)
For every Î-maximal x: p1·x ≥ 1
Hence every Î-maximal solution is a 1/2-approximation with respect to p1.

Single Machine Scheduling: The 1/2-approximation Algorithm

Algorithm MaxIS( S, p )
  If S = Φ then return Φ ;
  If ∃I ∈ S with p(I) ≤ 0 then return MaxIS( S − {I}, p ) ;
  Let Î ∈ S be an instance that ends first ;
  For all I ∈ S define: p1(I) = p(Î) · [I is in conflict with Î] ;
  IS = MaxIS( S, p − p1 ) ;
  If IS is Î-maximal then return IS else return IS ∪ {Î} ;
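The same recursion as for interval graphs works here once "conflict" also covers instances of the same activity. A small Python sketch with my own naming (instances are (activity, start, end, profit) tuples, spans half-open):

```python
def schedule(instances):
    """1/2-approximation sketch for single machine scheduling with
    alternative instances; instances are (activity, start, end, profit)."""
    live = [iv for iv in instances if iv[3] > 0]
    if not live:
        return []

    def conflict(a, b):  # same activity, or overlapping in time
        return a[0] == b[0] or (a[1] < b[2] and b[1] < a[2])

    i_hat = min(live, key=lambda iv: iv[2])  # an instance that ends first
    # Subtract the p1 layer: p(i_hat) from everything in conflict with i_hat.
    reduced = [(act, s, e, p - (i_hat[3] if conflict((act, s, e, p), i_hat) else 0))
               for (act, s, e, p) in live]
    keys = {(a, s, e) for (a, s, e, _) in schedule(reduced)}
    sol = [iv for iv in live if (iv[0], iv[1], iv[2]) in keys]
    # Fix-up: make the solution i_hat-maximal.
    if all(not conflict(iv, i_hat) for iv in sol):
        sol.append(i_hat)
    return sol
```

For instance, with two instances of activity "A" and one heavy instance of activity "B" overlapping the first, the returned schedule carries at least half the optimal profit.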

Bandwidth Allocation
Albers, Arora, Khanna, SODA 99: O(1) for |A_i| = 1
Uma, Phillips, Wein, SODA 00: 1/4 (LP)
This talk: 1/3 for w ≤ 1/2 and 1/5 for w ≤ 1

[Figure: each instance I has a start time s(I), an end time e(I), and a width (bandwidth requirement) w(I).]

Maximize Σ_I p(I)·x(I)
s.t. For each instance I: x(I) ∈ {0,1}
     For each time t: Σ_{I: t ∈ I} w(I)·x(I) ≤ 1
     For each activity A: Σ_{I ∈ A} x(I) ≤ 1

Bandwidth Allocation

[Figure: the instances of Activities 8 and 9 drawn as rectangles in the time × bandwidth plane.]

Bandwidth Allocation for w ≤ 1/2: How to select p1 to get a 1/3-approximation?

Let Î be an instance that ends first. For all instances I define:
  p1(I) = 1       if I is in the same activity as Î
          2·w(I)  if I is in time conflict with Î
          0       otherwise

For every feasible x: p1·x ≤ 3
For every Î-maximal x: p1·x ≥ 1
Hence every Î-maximal solution is a 1/3-approximation with respect to p1.

Bandwidth Allocation: The 1/5-approximation for any w ≤ 1

[Figure: the instances are partitioned into gray (w > 1/2) and colored (w ≤ 1/2) instances.]

Algorithm:
  GRAY = a 1/2-approximation for the gray (w > 1/2) instances ;
  COLORED = a 1/3-approximation for the colored instances ;
  Return the one with the larger profit.

Analysis: If GRAY* ≥ 40% OPT then GRAY ≥ 1/2 · (40% OPT) = 20% OPT ;
otherwise COLORED* ≥ 60% OPT, thus COLORED ≥ 1/3 · (60% OPT) = 20% OPT.
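The 20% balance point in the analysis above comes from equating the two guarantees. A tiny numeric check (the helper name is mine):

```python
def combined_guarantee(r_wide, r_narrow):
    """Worst-case guarantee of 'run both algorithms, keep the larger profit'.
    If the wide instances hold a fraction f of OPT, we collect at least
    max(r_wide * f, r_narrow * (1 - f)); the adversary picks f to balance
    the two terms, i.e. r_wide * f = r_narrow * (1 - f)."""
    f = r_narrow / (r_wide + r_narrow)  # balance point
    return r_wide * f

# A 1/2-approx on wide (w > 1/2) plus a 1/3-approx on narrow gives 1/5:
assert abs(combined_guarantee(1 / 2, 1 / 3) - 1 / 5) < 1e-12
```

Here f = 2/5, matching the 40% / 60% split of OPT used on the slide.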

Single Machine Scheduling with Release Times and Deadlines

[Figure: Activities 1–9 drawn with their time windows.]

Each job has a time window within which it can be processed.

Continuous Scheduling

[Figure: an instance I has a window (s(I), e(I)], a processing time d(I), and a width w(I).]

Single Machine Scheduling (w = 1)
Bar-Noy, Guha, Naor and Schieber, STOC 99: 1/3 (non-combinatorial)
Berman, DasGupta, STOC 00: 1/2·(1−ε)
This talk, STOC 00 (independent): 1/2·(1−ε)

Bandwidth Allocation
Uma, Phillips, Wein, SODA 00: 1/6 (non-combinatorial)
This talk, STOC 00 (independent): 1/3·(1−ε) for w ≤ 1/2 ; 1/5·(1−ε) for w ≤ 1

Continuous Scheduling: Split and Round Profit (losing an additional (1−ε) factor)

If the current p(I1) ≤ ε · (the original p(I1)) then delete I1 ;
else split I2 = (s2, e2] into I21 = (s2, s1+d1] and I22 = (s1+d1, e2].

[Figure: I1, with processing time d(I1), is split into I11 and I12; the overlapping window I2, with processing time d(I2), is split into I21 and I22 at the point s1+d1.]

Minimization problem: General Off-line Caching Problem

The Demand Scheduling Problem

[Figure: a resource axis from 0.0 to 0.9; each instance I has a width w(I), and each time t has a demand Demand(t).]

Minimize Σ_I p(I)·x(I)
s.t. For each instance I: x(I) ∈ {0,1}
     For each time t: Σ_{I: t ∈ I} w(I)·x(I) ≥ Demand(t)

Special Case: "Min Knapsack" (Demand = 1)

For all intervals I define: p1(I) = min{w(I), 1}

For every feasible x: p1·x ≥ 1
For every minimal x: p1·x ≤ 2
Hence every minimal solution is a 2-approximation with respect to p1.
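The p1·x ≤ 2 bound for minimal covers can be checked by brute force: removing any element of a minimal cover drops the total weight below 1, so the truncated total stays below 1 + min{w, 1} ≤ 2. An illustrative script (all names are mine):

```python
import itertools
import random

def p1(w):
    return min(w, 1.0)  # the truncated weight from the slide

def minimal_covers(weights, demand=1.0):
    """Yield all inclusion-minimal index sets whose total weight
    meets the demand (exhaustive; for illustration only)."""
    for r in range(1, len(weights) + 1):
        for idx in itertools.combinations(range(len(weights)), r):
            total = sum(weights[i] for i in idx)
            if total >= demand and all(total - weights[i] < demand for i in idx):
                yield idx

random.seed(0)
weights = [round(random.uniform(0.05, 2.0), 2) for _ in range(9)]
# Every minimal cover has truncated weight at most 2.
assert all(sum(p1(weights[i]) for i in idx) <= 2
           for idx in minimal_covers(weights))
```

The same check on hand-picked weights: for [0.6, 0.5, 1.2] the minimal covers are the heavy item alone and the pair of light items, and both satisfy the bound.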

From Knapsack to Demand Scheduling

Let t' be a time of maximum demand (normalized to 1). For all intervals I intersecting time t' define p1(I) = min{w(I), 1} ; p1(all others) = 0.

p1(all "right-minimal" intervals) is at most 2
p1(all "left-minimal" intervals) is at most 2
For every minimal x: p1·x ≤ 2 + 2
For every feasible x: p1·x ≥ 1
Hence every minimal solution is a 4-approximation with respect to p1.

General Off-line Caching Problem
Albers, Arora, Khanna, SODA 99: O(1) with Cache_Size increased by O(Largest_Page), or O(log(Cache_Size + Max_Page_Penalty))
This talk: 4

[Figure: pages requested in the order page1, page2, page3, page1, page3, page2, page3 inside a cache of fixed size; w(page2) = 0.7, w(page1) = 0.5, w(page3) = 0.3.]

w(page_i) = the size of page_i
p(page_i) = the reload cost of page_i

4-Approximation for Demand Scheduling

Algorithm MinDemandCover( S, p )
  If S = Φ then return Φ ;
  If there exists an interval I ∈ S s.t. p(I) = 0 then return {I} ∪ MinDemandCover( S − {I}, p ) ;
  Let t' be the time with maximum demand k ;
  Let S' be the set of instances intersecting time t' ;
  Let δ = min { p(I) / w(I) : I ∈ S' } ;
  For all intervals I ∈ S define: p1(I) = δ · min{w(I), k} if I ∈ S' ; 0 otherwise ;
  C = MinDemandCover( S, p − p1 ) ;
  Remove elements from C until it is minimal, and return C ;
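For the single-time-point special case (min knapsack, demand normalized to 1), the peeling in MinDemandCover can be written compactly. A sketch under my own naming: it subtracts δ·p1 layers until enough zero-priced items have been collected to cover the demand, then removes items until the cover is minimal. It assumes positive weights and a feasible instance.

```python
def min_knapsack_cover(items, demand=1.0):
    """2-approximation sketch for min-price covering of a single demand.
    items: list of (weight, price) with positive weights whose total
    meets the demand; returns a minimal cover as a list of items."""
    prices = [p for (_, p) in items]
    live = set(range(len(items)))
    chosen = []
    while live and sum(items[i][0] for i in chosen) < demand:
        # Peel off the layer p1(I) = delta * min(w(I), demand) over live items,
        # with delta chosen so that some live price drops to zero.
        delta = min(prices[i] / min(items[i][0], demand) for i in live)
        for i in live:
            prices[i] -= delta * min(items[i][0], demand)
        zero = next(i for i in live if prices[i] <= 1e-12)
        live.remove(zero)
        chosen.append(zero)
    # Remove elements until the cover is minimal (drop later picks first).
    for i in list(reversed(chosen)):
        if sum(items[j][0] for j in chosen if j != i) >= demand:
            chosen.remove(i)
    return [items[i] for i in chosen]
```

On [(0.6, 1.0), (0.5, 1.0), (1.0, 3.0)] it returns the two cheap items (total price 2.0), which is optimal here and in general within a factor 2.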

Application: 4-Approximation for the Loss Minimization Problem

The cost of a schedule is the sum of the profits of the instances not in the schedule. For the special case where each A_i is a singleton {I_i}, the problem is equivalent to Min Demand Scheduling.

[Figure: the resource profile of the equivalent demand-covering instance.]

END?

Parallel Unrelated Machines:
                                           Discrete   Continuous
Bar-Noy, Guha, Naor and Schieber, STOC 99:   1/3        1/4
Berman, DasGupta, STOC 00:                   1/2        1/2·(1−ε)
This talk, STOC 00 (independent):            1/2        1/2·(1−ε)

[Figure: jobs of processing time d scheduled over time on several machines.]

Parallel unrelated machines:

[Figure: the instances of an activity A_i, each with processing time d, drawn on a machine × time grid.]

Parallel unrelated machines: a 1/5-approximation (not in the paper)

Each machine has resource 1.
p1(the red instance Î) = p1(the orange instances) = 1 ;
p1(the yellow instances) = 2·width ;
p1(all others) = 0 ;

[Figure: the instances of activity A_i colored red, orange, and yellow on a machine × time grid.]

END!

Preliminaries

[Figure: Activities 1–9 on a time line; each instance I has a start time s(I), an end time e(I), and a width w(I).]