A Unified Approach to Approximating Resource Allocation and Scheduling


1 A Unified Approach to Approximating Resource Allocation and Scheduling
Amotz Bar-Noy (AT&T and Tel Aviv University)
Reuven Bar-Yehuda (Technion IIT)
Ari Freund (Technion IIT)
Seffi Naor (Bell Labs and Technion IIT)
Baruch Schieber (IBM T.J. Watson)
Slides and paper at:

2 Summary of Results: Discrete
Single Machine Scheduling:
Bar-Noy, Guha, Naor and Schieber STOC 99: 1/2 (non-combinatorial*)
Berman, DasGupta STOC 00: 1/2
This talk, STOC 00 (independent): 1/2
Bandwidth Allocation:
Albers, Arora, Khanna SODA 99: O(1) for |Activity_i| = 1*
Uma, Phillips, Wein SODA 00: 1/4 (non-combinatorial)
This talk, STOC 00 (independent): 1/3 for w ≤ 1/2
This talk, STOC 00 (independent): 1/5 for w ≤ 1
Parallel Unrelated Machines:
Bar-Noy, Guha, Naor and Schieber STOC 99: 1/3
Berman, DasGupta STOC 00: 1/2
This talk, STOC 00 (independent): 1/2

3 Summary of Results: Continuous
Single Machine Scheduling:
Bar-Noy, Guha, Naor and Schieber STOC 99: 1/3 (non-combinatorial)
Berman, DasGupta STOC 00: 1/2·(1-ε)
This talk, STOC 00 (independent): 1/2·(1-ε)
Bandwidth Allocation:
Uma, Phillips, Wein SODA 00: 1/6 (non-combinatorial)
This talk, STOC 00 (independent): 1/3·(1-ε) for w ≤ 1/2, and 1/5·(1-ε) for w ≤ 1
Parallel Unrelated Machines:
Bar-Noy, Guha, Naor and Schieber STOC 99: 1/4

4 Summary of Results: and more…
General Off-line Caching Problem:
Albers, Arora, Khanna SODA 99: O(1) with Cache_Size += O(Largest_Page), or O(log(Cache_Size + Max_Page_Penalty))
This talk, STOC 00: 4
Ring topology: transformation of the approximation ratio from the line to the ring topology, 1/r → 1/(r+1+ε)
Dynamic storage allocation (contiguous allocation):
Previous results: none for throughput maximization
Previous results (Kierstead 91) for resource minimization: 6
This paper: 1/35 for throughput maximization, using the result for resource minimization

5 The Local-Ratio Technique: Basic definitions
Given a profit [penalty] vector p.
Maximize [Minimize] p·x
Subject to: feasibility constraints F(x)
x is an r-approximation if F(x) holds and p·x ≥ [≤] r · p·x*
An algorithm is an r-approximation if for any p, F it returns an r-approximation

6 The Local-Ratio Theorem:
x is an r-approximation with respect to p1
x is an r-approximation with respect to p − p1
⇒ x is an r-approximation with respect to p
Proof (for maximization):
p1·x ≥ r × p1*
p2·x ≥ r × p2*
⇒ p·x ≥ r × (p1* + p2*) ≥ r × (p1 + p2)*
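Written out as one chain of inequalities, the same argument reads as follows (a sketch; q* denotes the optimum value of a profit vector q, and p2 = p − p1):

```latex
% Local-Ratio Theorem, maximization case:
% if x is an r-approximation w.r.t. p1 and w.r.t. p2 = p - p1,
% then x is an r-approximation w.r.t. p = p1 + p2.
\[
  p \cdot x \;=\; p_1 \cdot x + p_2 \cdot x
            \;\ge\; r\,p_1^{*} + r\,p_2^{*}
            \;=\; r\,(p_1^{*} + p_2^{*})
            \;\ge\; r\,(p_1 + p_2)^{*}
            \;=\; r\,p^{*}.
\]
```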

7 Special case: Optimization is 1-approximation
x is an optimum with respect to p1
x is an optimum with respect to p − p1
⇒ x is an optimum with respect to p

8 A Local-Ratio Schema for Maximization[Minimization] problems:
Algorithm r-ApproxMax[Min]( Set, p )
If Set = Φ then return Φ ;
If ∃ I ∈ Set with p(I) ≤ 0 then return r-ApproxMax( Set − {I}, p ) ;
[If ∃ I ∈ Set with p(I) = 0 then return {I} ∪ r-ApproxMin( Set − {I}, p ) ;]
Define "good" p1 ;
REC = r-ApproxMax[Min]( Set, p − p1 ) ;
If REC is not an r-approximation w.r.t. p1 then "fix it" ;
return REC ;
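A minimal Python sketch of the maximization schema above. The helper names (`choose_p1`, `is_r_approx`, `fix`) stand for the problem-specific pieces (the "good" p1, the r-approximation test, and the "fix it" step) and are illustrative, not from the paper:

```python
def local_ratio_max(instances, profit, choose_p1, is_r_approx, fix):
    """Generic local-ratio schema for maximization problems (sketch).

    instances   -- items still under consideration
    profit      -- dict: item -> current profit
    choose_p1   -- returns the problem-specific decomposition p1 (a dict)
    is_r_approx -- tests whether a solution is an r-approximation w.r.t. p1
    fix         -- repairs a solution that fails the test (e.g. adds an item)
    """
    if not instances:
        return set()

    # Discard an item whose current profit is non-positive.
    drop = next((i for i in instances if profit[i] <= 0), None)
    if drop is not None:
        rest = [i for i in instances if i is not drop]
        return local_ratio_max(rest, profit, choose_p1, is_r_approx, fix)

    # Decompose p = p1 + (p - p1) and recurse on the residual profits.
    p1 = choose_p1(instances, profit)
    residual = {i: profit[i] - p1.get(i, 0) for i in instances}
    rec = local_ratio_max(instances, residual, choose_p1, is_r_approx, fix)

    # By the Local-Ratio Theorem it suffices to make rec an
    # r-approximation w.r.t. p1 as well.
    if not is_r_approx(rec, p1):
        rec = fix(rec, p1)
    return rec
```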

9 The Local-Ratio Theorem: Applications
Applications to some optimization algorithms (r = 1):
(MST) Minimum Spanning Tree (Kruskal)
(SHORTEST-PATH) s-t Shortest Path (Dijkstra)
(LONGEST-PATH) s-t DAG Longest Path (can be done with dynamic programming)
(INTERVAL-IS) Independent Set in Interval Graphs (usually done with dynamic programming)
(LONG-SEQ) Longest (weighted) monotone subsequence (can be done with dynamic programming)
(MIN_CUT) Minimum Capacity s-t Cut (e.g. Ford, Dinitz)
Applications to some 2-approximation algorithms (r = 2):
(VC) Minimum Vertex Cover (Bar-Yehuda and Even)
(FVS) Vertex Feedback Set (Becker and Geiger)
(GSF) Generalized Steiner Forest (Williamson, Goemans, Mihail, and Vazirani)
(Min 2SAT) Minimum Two-Satisfiability (Gusfield and Pitt)
(2VIP) Two Variable Integer Programming (Bar-Yehuda and Rawitz)
(PVC) Partial Vertex Cover (Bar-Yehuda)
(GVC) Generalized Vertex Cover (Bar-Yehuda and Rawitz)
Applications to some other approximations:
(SC) Minimum Set Cover (Bar-Yehuda and Even)
(PSC) Partial Set Cover (Bar-Yehuda)
(MSP) Maximum Set Packing (Arkin and Hasin)
Applications to Resource Allocation and Scheduling: ….

10 Maximum Independent Set in Interval Graphs
(Figure: instances of activities 1-9 drawn as intervals on a time axis.)
Maximize Σ_I p(I)·x(I)
s.t.
For each instance I: x(I) ∈ {0, 1}
For each time t: Σ_{I : t ∈ I} x(I) ≤ 1

11 Maximum Independent Set in Interval Graphs: How to select p1 to obtain an optimal solution?
(Figure: instances of activities 1-9 on a time axis, with p1 = 1 on the instances in conflict with Î and p1 = 0 on the rest.)
Let Î be an interval that ends first.
For all intervals I define: p1(I) = 1 if I is in conflict with Î, and 0 otherwise.
For every feasible x: p1·x ≤ 1.
For every Î-maximal x: p1·x ≥ 1.
Hence every Î-maximal solution is optimal (with respect to p1).

12 Maximum Independent Set in Interval Graphs: An Optimization Algorithm
(Figure: the same instances, with p1 = p(Î) on the instances in conflict with Î and p1 = 0 on the rest.)
Algorithm MaxIS( S, p )
If S = Φ then return Φ ;
If ∃ I ∈ S with p(I) ≤ 0 then return MaxIS( S − {I}, p ) ;
Let Î ∈ S be an interval that ends first ;
For all I ∈ S define: p1(I) = p(Î) · (I in conflict with Î) ;
IS = MaxIS( S, p − p1 ) ;
If IS is Î-maximal then return IS else return IS ∪ {Î} ;
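A self-contained Python sketch of MaxIS, with intervals represented as (start, end) pairs over half-open windows (start, end]; the representation and helper names are illustrative:

```python
def max_is(intervals, profit):
    """Weighted independent set of intervals via local ratio (sketch).

    intervals -- list of (start, end) pairs, interpreted as (start, end]
    profit    -- dict: interval -> current profit
    """
    if not intervals:
        return set()

    # Discard an interval whose current profit is non-positive.
    drop = next((i for i in intervals if profit[i] <= 0), None)
    if drop is not None:
        return max_is([i for i in intervals if i != drop], profit)

    # i_hat = an interval that ends first.
    i_hat = min(intervals, key=lambda i: i[1])

    def conflicts(a, b):
        # Time overlap of half-open intervals; i_hat conflicts with itself.
        return a[0] < b[1] and b[0] < a[1]

    # p1(I) = p(i_hat) on every interval in conflict with i_hat, 0 elsewhere.
    p1 = {i: (profit[i_hat] if conflicts(i, i_hat) else 0) for i in intervals}
    residual = {i: profit[i] - p1[i] for i in intervals}

    sol = max_is(intervals, residual)
    # i_hat-maximality: add i_hat unless the solution already conflicts with it.
    if not any(conflicts(i, i_hat) for i in sol):
        sol = sol | {i_hat}
    return sol
```

For example, with intervals (0,2], (1,3], (2,4] and profits 3, 5, 3, the recursion returns {(2,4]} and then adds (0,2] on the way back, giving the optimal total profit 6 rather than the single interval of profit 5.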

13 Maximum Independent Set in Interval Graphs: Running Example
(Running example figure: the profits p(I1),…,p(I6) are reduced by p1 at each recursive call until they become non-positive.)

14 Single Machine Scheduling:
Bar-Noy, Guha, Naor and Schieber STOC 99: 1/2 (LP)
Berman, DasGupta STOC 00: 1/2
This talk, STOC 00 (independent): 1/2
(Figure: each activity has several alternative instances on the time axis; at most one may be scheduled.)
Maximize Σ_I p(I)·x(I)
s.t.
For each instance I: x(I) ∈ {0, 1}
For each time t: Σ_{I : t ∈ I} x(I) ≤ 1
For each activity A: Σ_{I ∈ A} x(I) ≤ 1

15 Single Machine Scheduling: How to select p1 to get a ½-approximation?
(Figure: instances of activities 1-9, with p1 = 1 on the instances in conflict with Î and p1 = 0 on the rest.)
Let Î be an instance that ends first.
For all instances I define: p1(I) = 1 if I is in conflict with Î (time overlap or same activity), and 0 otherwise.
For every feasible x: p1·x ≤ 2.
For every Î-maximal x: p1·x ≥ 1.
Hence every Î-maximal solution is a 1/2-approximation (with respect to p1).

16 Single Machine Scheduling: The ½-approximation Algorithm
(Figure: the same instances as on the previous slide.)
Algorithm MaxIS( S, p )
If S = Φ then return Φ ;
If ∃ I ∈ S with p(I) ≤ 0 then return MaxIS( S − {I}, p ) ;
Let Î ∈ S be an instance that ends first ;
For all I ∈ S define: p1(I) = p(Î) · (I in conflict with Î) ;
IS = MaxIS( S, p − p1 ) ;
If IS is Î-maximal then return IS else return IS ∪ {Î} ;
(The algorithm is identical to MaxIS for interval graphs; only the notion of conflict changes: two instances conflict if they overlap in time or belong to the same activity.)
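In code, the only change relative to the interval sketch above is the conflict predicate; a sketch, assuming instances carry an activity label (field names are illustrative):

```python
from collections import namedtuple

# Hypothetical record for one instance (one possible execution of an activity).
Instance = namedtuple("Instance", ["activity", "start", "end"])

def conflicts(a, b):
    """Two instances conflict if their time windows overlap, or if they
    belong to the same activity (at most one instance per activity runs)."""
    overlap = a.start < b.end and b.start < a.end
    return overlap or a.activity == b.activity
```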

17 Bandwidth Allocation
Albers, Arora, Khanna SODA 99: O(1) for |A_i| = 1*
Uma, Phillips, Wein SODA 00: 1/4 (LP)
This talk: 1/3 for w ≤ 1/2 and 1/5 for w ≤ 1
(Figure: each instance I is drawn with start s(I), end e(I), and width w(I).)
Maximize Σ_I p(I)·x(I)
s.t.
For each instance I: x(I) ∈ {0, 1}
For each time t: Σ_{I : t ∈ I} w(I)·x(I) ≤ 1
For each activity A: Σ_{I ∈ A} x(I) ≤ 1

18 Bandwidth Allocation
(Figure: the same instances shown both as intervals on a time axis and as rectangles in a bandwidth × time diagram.)

19 Bandwidth Allocation for w ≤ 1/2: How to select p1 to get a 1/3-approximation?
(Figure: instances with widths w(I), starts s(I), and ends e(I); Î is the instance that ends first.)
For all instances I define:
p1(I) = 1 if I is in the same activity as Î
p1(I) = 2·w(I) if I is in time conflict with Î
p1(I) = 0 otherwise
For every feasible x: p1·x ≤ 3.
For every Î-maximal x: p1·x ≥ 1.
Hence every Î-maximal solution is a 1/3-approximation (with respect to p1).
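The decomposition can be written compactly as follows (a sketch with p(Î) normalized to 1; the factor 2 on the width term is what makes the two bounds above hold when w ≤ 1/2):

```latex
\[
  p_1(I) =
  \begin{cases}
    1        & \text{if } I \text{ belongs to the same activity as } \hat{I},\\
    2\,w(I)  & \text{if } I \text{ is in time conflict with } \hat{I},\\
    0        & \text{otherwise.}
  \end{cases}
\]
```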

20 Bandwidth Allocation: the 1/5-approximation for any w ≤ 1
(Figure: instances partitioned into "gray" instances of width w > 1/2 and "colored" instances of width w ≤ 1/2.)
Algorithm:
GRAY = find a 1/2-approximation for the gray (w > 1/2) instances;
COLORED = find a 1/3-approximation for the colored instances;
return the one with the larger profit.
Analysis: if GRAY* ≥ 40% of OPT, then GRAY ≥ 1/2 · (40% OPT) = 20% OPT;
otherwise COLORED* ≥ 60% of OPT, and thus COLORED ≥ 1/3 · (60% OPT) = 20% OPT.
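A sketch of the combining step in Python; `approx_half`, `approx_third`, and `profit_of` stand for the two sub-routines and the profit function and are illustrative names, not from the paper:

```python
def approx_fifth(instances, profit_of, approx_half, approx_third):
    """1/5-approximation for bandwidth allocation with arbitrary widths w <= 1.

    Split the instances by width, run the specialized algorithm on each part,
    and keep the more profitable of the two schedules.
    """
    wide = [i for i in instances if i.width > 0.5]      # "gray" instances
    narrow = [i for i in instances if i.width <= 0.5]   # "colored" instances

    # Widths > 1/2 never share a time slot, so the wide part behaves like
    # single machine scheduling and admits a 1/2-approximation.
    gray = approx_half(wide)
    colored = approx_third(narrow)   # 1/3-approximation for w <= 1/2

    # If the gray part of OPT is >= 40% of OPT, gray already gives >= 20% OPT;
    # otherwise the colored part of OPT is >= 60% and colored gives >= 20% OPT.
    return gray if profit_of(gray) >= profit_of(colored) else colored
```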

21 Single Machine Scheduling with Release and Deadlines
Each job has a time window within which it can be processed.
(Figure: activities 1-9 with their time windows on a time axis.)

22 Single Machine Scheduling with Release and Deadlines

23 Continuous Scheduling
Continuous model: each instance I has a time window (s(I), e(I)], a processing duration d(I), and a width w(I).
Single Machine Scheduling (w = 1):
Bar-Noy, Guha, Naor and Schieber STOC 99: 1/3 (non-combinatorial)
Berman, DasGupta STOC 00: 1/2·(1-ε)
This talk, STOC 00 (independent): 1/2·(1-ε)
Bandwidth Allocation:
Uma, Phillips, Wein SODA 00: 1/6 (non-combinatorial)
This talk, STOC 00 (independent): 1/3·(1-ε) for w ≤ 1/2, and 1/5·(1-ε) for w ≤ 1

24 Continuous Scheduling: Split and Round Profit (lose an additional (1-ε) factor)
If the current p(I1) ≤ ε · (the original p(I1)) then delete I1;
else split I2 = (s2, e2] into I21 = (s2, s1+d1] and I22 = (s1+d1, e2].
(Figure: the windows of I1 and I2, with durations d(I1) and d(I2), before and after the split.)

25 Minimization problem: General Off-line Caching Problem

26 The Demand Scheduling Problem
(Figure: instances of widths w(I) stacked on a resource axis, with the demand curve Demand(t) over time.)
Minimize Σ_I p(I)·x(I)
s.t.
For each instance I: x(I) ∈ {0, 1}
For each time t: Σ_{I : t ∈ I} w(I)·x(I) ≥ Demand(t)

27 Special Case: “Min Knapsack”
Demand = 1.
For all intervals I define: p1(I) = min{w(I), 1}.
For every feasible x: p1·x ≥ 1.
For every minimal x: p1·x ≤ 2.
Hence every minimal solution is a 2-approximation (with respect to p1).
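The two bounds combine into the local approximation guarantee as follows (a sketch; x* denotes an optimal solution):

```latex
% Min-knapsack decomposition with the demand normalized to 1:
%   p_1(I) = \min\{w(I), 1\}.
% Every feasible x must cover demand 1, so  p_1 \cdot x^* \ge 1.
% In a minimal x, removing any single item leaves total truncated width < 1,
% so  p_1 \cdot x < 1 + \min\{w(I), 1\} \le 2.
\[
  p_1 \cdot x \;\le\; 2 \;\le\; 2\, p_1 \cdot x^{*}.
\]
```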

28 From Knapsack to Demand Scheduling
Suppose the maximum demand is 1 and is attained at time t'.
For all intervals I intersecting time t' define: p1(I) = min{w(I), 1}; p1(others) = 0.
p1(all "right-minimal" instances) is at most 2.
p1(all "left-minimal" instances) is at most 2.
For every minimal x: p1·x ≤ 4.
For every feasible x: p1·x ≥ 1.
Hence every minimal solution is a 4-approximation (with respect to p1).
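As in the knapsack case, the two bounds give the local factor of 4 (a sketch; x* denotes an optimal solution):

```latex
% A minimal x splits, around time t', into a "right-minimal" and a
% "left-minimal" part, each contributing at most 2 to p_1 by the
% min-knapsack bound, while any feasible x contributes at least 1 at t'.
\[
  p_1 \cdot x \;\le\; 2 + 2 \;=\; 4 \;\le\; 4\, p_1 \cdot x^{*}.
\]
```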

29 General Off-line Caching Problem
Albers, Arora, Khanna SODA 99: O(1) with Cache_Size += O(Largest_Page), or O(log(Cache_Size + Max_Page_Penalty))
This talk: 4
(Figure: pages of various sizes occupying a cache of fixed size over time.)
w(page_i) = the size of page_i
p(page_i) = the reload cost of page_i

30 4-Approximation for Demand Scheduling
Algorithm MinDemandCover( S, p )
If S = Φ then return Φ ;
If there exists an instance I ∈ S s.t. p(I) = 0 then return {I} ∪ MinDemandCover( S − {I}, p ) ;
Let t' be a time with maximum demand k ;
Let S' be the set of instances intersecting time t' ;
Let δ = min { p(I)/w(I) : I ∈ S' } ;
For all instances I ∈ S define: p1(I) = δ · min{w(I), k} if I ∈ S', and 0 otherwise ;
C = MinDemandCover( S, p − p1 ) ;
Remove elements from C until it is minimal, and return C ;
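A Python sketch of MinDemandCover. Instances are (start, end, width) tuples over half-open windows, `price` and `demand` are dicts, and the whole instance set is assumed to form a feasible cover; these representations are illustrative. One deliberate deviation, flagged in a comment, is that δ divides by the truncated width min{w(I), k} rather than by w(I), so that at least one price reaches zero in every round:

```python
def min_demand_cover(instances, price, demand):
    """Local-ratio sketch for the demand scheduling (covering) problem.

    instances -- list of (start, end, width) tuples, windows read as (start, end]
    price     -- dict: instance -> current price (penalty)
    demand    -- dict: time point -> required coverage at that time
    Assumes the full instance set covers the demand.
    """
    EPS = 1e-9                      # float tolerance for "price is zero"
    if not instances:
        return set()

    # Zero-price instances are taken for free.
    free = next((i for i in instances if price[i] <= EPS), None)
    if free is not None:
        rest = [i for i in instances if i is not free]
        return {free} | min_demand_cover(rest, price, demand)

    # t' = a time point of maximum demand k; live = instances intersecting t'.
    t_prime = max(demand, key=demand.get)
    k = demand[t_prime]
    live = [i for i in instances if i[0] < t_prime <= i[1]]

    # delta: the slide uses min p(I)/w(I); this sketch divides by the
    # truncated width min{w(I), k} so that some price is guaranteed to hit zero.
    trunc = {i: min(i[2], k) for i in live}
    delta = min(price[i] / trunc[i] for i in live)

    # p1(I) = delta * min{w(I), k} on the live instances, 0 elsewhere.
    residual = {i: price[i] - delta * trunc.get(i, 0) for i in instances}
    cover = min_demand_cover(instances, residual, demand)

    # Make the cover minimal: drop any instance whose removal keeps it feasible.
    def feasible(c):
        return all(sum(i[2] for i in c if i[0] < t <= i[1]) >= d
                   for t, d in demand.items())
    for i in sorted(cover, key=lambda i: price[i], reverse=True):
        if i in cover and feasible(cover - {i}):
            cover = cover - {i}
    return cover
```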

31 Application: 4-Approximation for the Loss Minimization Problem
The cost of a schedule is the sum of the profits of the instances not in the schedule.
For the special case where each A_i is a singleton {I_i}, the problem is equivalent to Min Demand Scheduling with Demand(t) = Σ_{I : t ∈ I} w(I) − 1, the excess width that must be rejected at time t.

32 END?

33 Parallel Unrelated Machines: Continuous
(Figure: jobs scheduled over time on several machines.)
Bar-Noy, Guha, Naor and Schieber STOC 99: 1/3 (discrete), 1/4 (continuous)
Berman, DasGupta STOC 00: 1/2 (discrete), 1/2·(1-ε) (continuous)
This talk, STOC 00 (independent): 1/2 (discrete), 1/2·(1-ε) (continuous)

34 Parallel unrelated machines:
(Figure: instances of an activity A_i assigned to machines over time.)

35 Parallel Unrelated Machines: 1/5-approximation (not in the paper)
(Figure: instances of activity A_i on several machines over time.)
Each machine has resource 1.
p1(red instances) = p1(orange instances) = 1;
p1(yellow instances) = 2·width;
p1(all others) = 0.
(The colors refer to instance classes in the original figure.)

36 END!

37 Preliminaries
(Figure: each instance I of an activity is an interval on the time axis with start s(I), end e(I), and width w(I).)

