Presentation on theme: "Recap Priorities task-level static job-level static dynamic Migration task-level fixed job-level fixed migratory Baker/ Oh (RTS98) Pfair scheduling This."— Presentation transcript:

1 Recap
Taxonomy: priorities may be task-level static, job-level static, or dynamic; migration may be task-level fixed, job-level fixed, or migratory.
Points in this taxonomy: Baker/Oh (RTS98), Pfair scheduling, this paper, bin-packing + LL (no advantage), bin-packing + EDF.
Jim wants to know...
• John's generalization of partitioning/non-partitioning
• Anomalies everywhere...

2 This paper - I
• Obs 1 & 2: increasing a period may reduce feasibility
– (reason: the parallelism of the processor capacity left over by higher-priority tasks increases)
• Obs 3: the critical instant is not easily identified
• Obs 4: the response time of a task depends upon the relative priorities of the higher-priority tasks
– ==> the Audsley technique of priority assignment cannot be used

3 Finally, a non-anomalous result (Liu & Ha, 1994)
Aperiodic jobs: J_i = (a_i, e_i) (not periodic tasks)
– a_i: arrival time; e_i: execution requirement
A system:
– {J_1, J_2, ..., J_n}
– m processors
– specified priorities
– let F_i denote the completion time of J_i
Any system:
– {J_1', J_2', ..., J_n'} with e_i' ≤ e_i
– m processors
– the same priorities
– let F_i' denote the completion time of J_i'
Then F_i' ≤ F_i. Can be used for the middle column of our table, too!
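The Liu & Ha property can be observed on small instances with a simulation. A minimal sketch (my own code, not from the paper): global preemptive fixed-priority scheduling of aperiodic jobs in unit time steps; in this instance, shrinking one execution requirement does not increase any completion time.

```python
# Sketch of global preemptive priority scheduling of aperiodic jobs
# J_i = (a_i, e_i); schedule() is an illustrative name of my own choosing.
# Priority = list index (lower index = higher priority).

def schedule(jobs, m, dt=1):
    """Run jobs on m identical processors; return completion times F_i."""
    remaining = [e for (_, e) in jobs]
    finish = [None] * len(jobs)
    t = 0
    while any(f is None for f in finish):
        # The m highest-priority ready jobs get the processors this step.
        ready = [i for i, (a, _) in enumerate(jobs)
                 if a <= t and remaining[i] > 0]
        for i in ready[:m]:
            remaining[i] -= dt
            if remaining[i] <= 0:
                finish[i] = t + dt
        t += dt
    return finish

jobs = [(0, 4), (0, 2), (1, 3), (2, 2)]       # (a_i, e_i)
F = schedule(jobs, m=2)

jobs2 = [(0, 4), (0, 1), (1, 3), (2, 2)]      # e_2' = 1 <= e_2 = 2
F2 = schedule(jobs2, m=2)
assert all(f2 <= f for f2, f in zip(F2, F))   # no completion time grows
```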

4 This paper - II
• Obs 1 & 2: increasing a period may reduce feasibility
– (reason: the parallelism of the processor capacity left over by higher-priority tasks increases)
• Obs 3: the critical instant is not easily identified
• Obs 4: the response time of a task depends upon the relative priorities of the higher-priority tasks
– ==> the Audsley technique of priority assignment cannot be used
• Theorem 1: a sufficient condition for feasibility
– idea of the proof
– possible problems (as pointed out by Phil)?

5 Theorem 1, corrected
If for each τ_i = (T_i, C_i) there exists an L_i ≤ T_i such that
C_i + (1/m) · Σ_{j ∈ hp(i)} (⌈L_i / T_j⌉ + 1) · C_j ≤ L_i
then the task set is non-partition schedulable.
Proof
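The condition turns directly into a feasibility check. A sketch under my own naming (`theorem1_feasible`; tasks listed in decreasing priority order, so hp(i) is simply the prefix; an integer sweep over candidate L_i values suffices, since the theorem only asks that some L_i ≤ T_i exists):

```python
from math import ceil

def theorem1_feasible(tasks, m):
    """tasks: list of (T_i, C_i) in decreasing priority order.
    True if every task admits an L_i <= T_i with
    C_i + (1/m) * sum_{j in hp(i)} (ceil(L_i/T_j) + 1) * C_j <= L_i."""
    for i, (T_i, C_i) in enumerate(tasks):
        if not any(
            C_i + sum((ceil(L / T_j) + 1) * C_j
                      for T_j, C_j in tasks[:i]) / m <= L
            for L in range(int(C_i), int(T_i) + 1)
        ):
            return False
    return True

print(theorem1_feasible([(4, 1), (5, 1), (10, 2)], m=2))  # → True
```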

6 A priority assignment scheme
RM-US(1/4):
– all tasks τ_i with utilization C_i / T_i > 1/4 have the highest priorities
– the remaining tasks get rate-monotonic priorities
Lemma: any task system satisfying Σ_{τ_i ∈ τ} C_i/T_i ≤ m/4 and C_i/T_i ≤ 1/4 for all τ_i ∈ τ is successfully scheduled using RM-US(1/4).
Theorem: any task system satisfying Σ_{τ_i ∈ τ} C_i/T_i ≤ m/4 is successfully scheduled using RM-US(1/4).
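The assignment rule itself is easy to sketch (function name and example task set are my own):

```python
# RM-US(threshold) priority assignment sketch: tasks with utilization
# C_i/T_i above the threshold get the highest priorities (in any order
# among themselves); the rest are ordered rate-monotonically.

def rm_us_priorities(tasks, threshold=0.25):
    """tasks: list of (T_i, C_i). Returns tasks in decreasing priority order."""
    heavy = [t for t in tasks if t[1] / t[0] > threshold]
    light = [t for t in tasks if t[1] / t[0] <= threshold]
    # Rate-monotonic = shorter period means higher priority.
    return heavy + sorted(light, key=lambda t: t[0])

tasks = [(10, 1), (4, 2), (5, 1), (20, 2)]   # (T_i, C_i)
print(rm_us_priorities(tasks))
# (4, 2) has utilization 1/2 > 1/4, so it comes first;
# the remaining tasks follow in rate-monotonic order.
```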

7 What this result means...
• The first non-zero utilization bound for non-partitioned static-priority scheduling (placing this paper in the same priorities/migration taxonomy as Baker/Oh (RTS98), Pfair scheduling, bin-packing + LL, and bin-packing + EDF)
• Compare to partitioning (Baker & Oh): 41%
• Room for improvement (simple algebra, perhaps)
• Exploit the non-anomaly of Liu & Ha to design job-level static-priority algorithms

8 Resource augmentation and on-line scheduling on multiprocessors
Phillips, Stein, Torng, and Wein. Optimal time-critical scheduling via resource augmentation. STOC (1997). Algorithmica (to appear).

9 Model and definitions
Instance I = {J_1, J_2, ..., J_n} of jobs J_j = (r_j, p_j, w_j, d_j); Δ(I) = max{p_j} / min{p_j}
Known to be (off-line) feasible on m identical multiprocessors, but the jobs are revealed on-line...
An s-speed algorithm: meets all deadlines on m processors, each s times as fast.
A w-machine algorithm: meets all deadlines on w·m processors (each of the same speed as the original processors).
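A minimal encoding of these definitions (class and field names are my own):

```python
from dataclasses import dataclass

# A job J_j = (r_j, p_j, w_j, d_j) and Delta(I) = max p_j / min p_j.

@dataclass
class Job:
    r: float  # release time
    p: float  # processing requirement
    w: float  # weight
    d: float  # deadline

def delta(instance):
    """Ratio of the largest to the smallest processing requirement."""
    ps = [j.p for j in instance]
    return max(ps) / min(ps)

I = [Job(0, 1, 1, 2), Job(0, 4, 1, 8), Job(1, 2, 1, 6)]
print(delta(I))   # → 4.0
```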

10 Summary of results: speed
An s-speed algorithm: meets all deadlines on m processors, each s times as fast.
• EDF & LL are both (2 - 1/m)-speed algorithms
– the bound is tight for EDF
• Implies: twice as many processors ==> optimal performance
• No (6/5)-speed algorithm can exist (on ≥ 2 processors)

11 Summary of results: machines
A w-machine algorithm: meets all deadlines on w·m processors (each of the same speed as the original processors).
• LL is an O(log Δ)-machine algorithm
– but not a c-machine algorithm for any constant c
• EDF is not an o(Δ)-machine algorithm
– "explains" the difference between multiprocessor EDF & LL
• No (5/4)-machine algorithm can exist (on ≥ 2 processors)

12 The big insight: speed
Definitions:
– A(j,t) denotes the amount of execution of job j by algorithm A until time t
– A(I,t) = Σ_{j ∈ I} A(j,t)
Let A be any "busy" (work-conserving) scheduling algorithm executing on processors of speed σ ≥ 1. What is the smallest α such that at all times t, A(I, α·t) ≥ A'(I,t) for any other algorithm A' executing on speed-1 processors?
Lemma 2.6: α turns out to be (2 - 1/m)/σ
– thus, choosing σ equal to (2 - 1/m) gives α = 1 (choosing σ greater than this gives α < 1, which makes no physical sense)
Use Lemma 2.6, and each individual algorithm's scheduling rules, to draw conclusions regarding these algorithms.
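Numerically, the trade-off in Lemma 2.6 reads as follows (a sketch with my own names):

```python
# alpha(m, sigma) = (2 - 1/m) / sigma, per Lemma 2.6: a busy algorithm
# on m processors of speed sigma needs its time horizon stretched by
# alpha to dominate any competitor on speed-1 processors.

def alpha(m, sigma):
    return (2 - 1 / m) / sigma

m = 4
print(alpha(m, sigma=1))          # → 1.75: speed-1 busy algorithms lag
print(alpha(m, sigma=2 - 1 / m))  # → 1.0: speed (2 - 1/m) closes the gap
```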

