Resource augmentation and on-line scheduling on multiprocessors
Phillips, Stein, Torng, and Wein. Optimal time-critical scheduling via resource augmentation. STOC (1997); Algorithmica (to appear).

Background: on-line algorithms
- Optimization problems: given a problem instance I, algorithm A obtains a value val_A(I) -- the goal is to maximize this value
- On-line algorithms are compared against an optimal off-line/clairvoyant algorithm (OPT)
- Competitive ratio of on-line algorithm A: min over all I of ( val_A(I) / val_OPT(I) )
- Goal: design an on-line algorithm with the largest competitive ratio
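To make the definition concrete, here is a minimal sketch (mine, not from the paper) of the competitive ratio in code; `online_value` and `opt_value` are hypothetical functions returning val_A(I) and val_OPT(I) for a maximization problem.

```python
def competitive_ratio(instances, online_value, opt_value):
    """Competitive ratio of an on-line algorithm A for a maximization problem:
    min over instances I of val_A(I) / val_OPT(I); closer to 1 is better."""
    return min(online_value(I) / opt_value(I) for I in instances)
```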

Background: hard-real-time scheduling
- The on-line problem:
  – Instance I = {J_1, J_2, ..., J_n} of jobs
  – Each job J_j = (r_j, p_j, d_j) arrives at instant r_j, needs to execute for p_j units by a deadline at instant d_j
  – Job J_j is revealed only at instant r_j; all deadlines must be met!
- Difficult to formulate as an optimization problem -- all deadlines must be met!
- In uniprocessor systems, we dodged this issue
  – EDF/LL (least laxity) are optimal algorithms (always meet all deadlines)
  – EDF/LL are on-line algorithms...
  – ... with competitive ratio one
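As an illustration of this job model, here is a rough sketch (mine, not from the paper or the slides) of preemptive EDF on a single unit-speed processor, simulated in unit time steps with integer job parameters; the names `Job` and `edf_uniprocessor` are introduced here for illustration only.

```python
from collections import namedtuple

# Job model from the slides: J_j = (r_j, p_j, d_j).
Job = namedtuple("Job", "release work deadline")

def edf_uniprocessor(jobs):
    """Preemptive EDF on one unit-speed processor, simulated in unit time steps.
    Assumes integer release/work/deadline. Returns True iff no deadline is missed."""
    remaining = {j: job.work for j, job in enumerate(jobs)}
    finish = {}
    horizon = max(job.deadline for job in jobs)
    for t in range(horizon):
        ready = [j for j, job in enumerate(jobs)
                 if job.release <= t and remaining[j] > 0]
        if not ready:
            continue
        j = min(ready, key=lambda j: jobs[j].deadline)  # earliest deadline first
        remaining[j] -= 1
        if remaining[j] == 0:
            finish[j] = t + 1
    return all(remaining[j] == 0 and finish[j] <= jobs[j].deadline
               for j in range(len(jobs)))

# Example: both jobs meet their deadlines under EDF on one processor.
print(edf_uniprocessor([Job(0, 3, 5), Job(1, 2, 4)]))   # True
```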

Hard-real-time scheduling: multiprocessors
- No optimal (in the EDF/LL sense) on-line algorithm exists
- Must still meet all deadlines... so, give the on-line algorithm extra resources (more/faster processors)
- This paper asks: how much extra resource do EDF/LL need in order to meet all deadlines for sets of jobs known to be feasible on m processors?
- The answers:
  – EDF/LL meet all deadlines if the processors are (2 - 1/m) times as fast
  – No on-line algorithm can meet all deadlines if the processors are less than 1.2 times as fast
  – EDF cannot always meet all deadlines if the processors are only (2 - 1/m - ε) times as fast, for any ε > 0

Why we care
- Our (RTS) task systems:
  – usually pre-specified (e.g., periodic tasks/sporadic tasks)
  – “on-line”-ness is usually not an issue; exception: overload scheduling (later)
- We’ll do feasibility analysis (does a schedule exist?)
- If feasible, we’ll use the results in this paper:
  – choose an algorithm (usually EDF)
  – over-allocate resources as mandated by these results
  – sleep well, knowing that the system performs as expected
- Why choose feasibility analysis (versus schedulability analysis with a chosen algorithm)?
  – provably competitive performance translates into approximation guarantees

Model and definitions
Instance I = {J_1, J_2, ..., J_n} of jobs
Each job J_j = (r_j, p_j, d_j) arrives at instant r_j, needs to execute for p_j units by a deadline at instant d_j
If I is feasible on m processors, an s-speed on-line algorithm will meet all deadlines of I on m processors that are each s times as fast
(Thus, EDF is a (2 - 1/m)-speed algorithm)
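A rough numeric sketch (mine, not the paper's) of what an s-speed algorithm means in this model: global preemptive EDF on m processors, each of the given speed, simulated in small time slices. It reuses the `Job` type from the uniprocessor sketch above; the discretization makes the result approximate, so it is only illustrative.

```python
def edf_multiprocessor(jobs, m, speed=1.0, dt=0.1):
    """Global preemptive EDF on m processors of the given speed, simulated in
    slices of length dt (an approximate sketch). True iff all deadlines are met."""
    remaining = [job.work for job in jobs]
    finish = [None] * len(jobs)
    horizon = max(job.deadline for job in jobs)
    t = 0.0
    while t < horizon:
        ready = [j for j, job in enumerate(jobs)
                 if job.release <= t and remaining[j] > 1e-9]
        # Run the (at most m) ready jobs with the earliest deadlines for one slice.
        for j in sorted(ready, key=lambda j: jobs[j].deadline)[:m]:
            remaining[j] -= speed * dt
            if remaining[j] <= 1e-9 and finish[j] is None:
                finish[j] = t + dt
        t += dt
    return all(f is not None and f <= jobs[j].deadline + 1e-9
               for j, f in enumerate(finish))
```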

Digression: An example of how we’d use these results

Scheduling periodic tasks - taxonomy
Periodic task system τ = {τ_1, τ_2, ..., τ_n}; τ_i = (T_i, C_i)
Two classification axes:
- Priorities: task-level static / job-level static / dynamic
- Migration: task-level fixed / job-level fixed / migratory
Entries in the resulting grid include: Baker/Oh (RTS98), Pfair scheduling, Andersson/Jonsson, bin-packing + LL (no advantage), bin-packing + EDF, RM, EDF, LL/Pfair

Remember this? (last class)
RM-US(1/4):
  – all tasks τ_i with C_i/T_i > 1/4 have highest priorities
  – for the remaining tasks, rate-monotonic priorities
Lemma: Any task system τ satisfying [ (SUM τ_j : τ_j ∈ τ : C_j/T_j) ≤ m/4 ] and [ (ALL τ_j : τ_j ∈ τ : C_j/T_j ≤ 1/4) ] is successfully scheduled using RM-US(1/4)
Theorem: Any task system τ satisfying [ (SUM τ_j : τ_j ∈ τ : C_j/T_j) ≤ m/4 ] is successfully scheduled using RM-US(1/4)
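A minimal sketch (mine, not from the referenced work) of the RM-US(1/4) priority assignment for tasks τ_i = (T_i, C_i): heavy tasks, with utilization above the threshold, come first; the rest follow in rate-monotonic order.

```python
def rm_us_priorities(tasks, theta=0.25):
    """RM-US(theta) priority order, highest priority first, for tasks (T_i, C_i).
    Tasks with utilization C_i/T_i > theta get the highest priorities; the
    remaining tasks are ordered rate-monotonically (shorter period first)."""
    heavy = [i for i, (T, C) in enumerate(tasks) if C / T > theta]
    light = sorted((i for i, (T, C) in enumerate(tasks) if C / T <= theta),
                   key=lambda i: tasks[i][0])
    return heavy + light
```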

A new (job-level static priority) scheduling algorithm
EDF-US(1/2):
  – If C_i/T_i ≤ 0.5, then jobs of τ_i get EDF priority
  – If C_i/T_i > 0.5, then jobs of τ_i get highest priority (EDF implementation: set the deadline to -∞)
Lemma: Any task system τ satisfying [ (SUM τ_j : τ_j ∈ τ : C_j/T_j) ≤ m/2 ] and [ (ALL τ_j : τ_j ∈ τ : C_j/T_j ≤ 1/2) ] is successfully scheduled using EDF-US(1/2)
Theorem: Any task system τ satisfying [ (SUM τ_j : τ_j ∈ τ : C_j/T_j) ≤ m/2 ] is successfully scheduled using EDF-US(1/2)
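A small sketch (mine) of the EDF-US(1/2) rule, using the deadline-to-minus-infinity trick mentioned on the slide so that an unmodified EDF comparator automatically gives heavy tasks the highest priority.

```python
import math

def edf_us_effective_deadline(task, absolute_deadline, theta=0.5):
    """Deadline used by EDF-US(theta) for a job of `task` = (T_i, C_i).
    Heavy tasks (C_i/T_i > theta) get -inf, i.e. highest priority under plain
    EDF; all other jobs keep their real absolute deadline."""
    T, C = task
    return -math.inf if C / T > theta else absolute_deadline
```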

Scheduling periodic tasks w/ migration
Same taxonomy grid as before (priorities: task-level static / job-level static / dynamic; migration: task-level fixed / job-level fixed / migratory), with entries including Baker/Oh (RTS98), Pfair scheduling, Andersson/Jonsson, bin-packing + LL (no advantage), bin-packing + EDF
In the migratory row: RM-US(1/4), EDF-US(1/2), and Pfair, with guaranteed utilization bounds of 25%, 50%, and 100% of platform capacity, respectively

Back to the results in this paper... (faster processors)

The big insight
Definitions:
  – A(j,t) denotes the amount of execution of job j by Algorithm A until time t
  – A(I,t) = [SUM: j ∈ I : A(j,t)]
The crucial question: Let A be any “busy” (work-conserving) scheduling algorithm executing on m processors of speed s ≥ 1. What is the smallest s such that, at all times t, A(I,t) ≥ A’(I,t) for any other algorithm A’ executing on m speed-1 processors?
Lemma 2.6: s turns out to be (2 - 1/m)
Use Lemma 2.6, and an individual algorithm’s scheduling rules, to draw conclusions regarding these algorithms
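One way to read the definition of A(I,t): given a schedule recorded as (job, start, end, speed) execution intervals, the cumulative work is the speed-weighted length of the intervals clipped at t. A small illustrative sketch (the trace format is my own, not the paper's):

```python
def work_done(trace, t):
    """A(I,t): total execution completed by time t, for a schedule given as a
    list of (job_id, start, end, speed) execution intervals (illustrative)."""
    return sum(speed * max(0.0, min(end, t) - start)
               for _job, start, end, speed in trace)
```

Lemma 2.6 then says that, for any busy algorithm on (2 - 1/m)-speed processors, this quantity at every t is at least the corresponding quantity for any algorithm on unit-speed processors.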

The oh-so-important Lemma 2.6
Lemma: Let I be an input instance and t ≥ 0 any time-instant. For any busy algorithm A using (2 - 1/m)-speed machines, A(I,t) ≥ A’(I,t) for any algorithm A’ using 1-speed machines.
Proof: by contradiction.
Suppose there are time instants at which this is not true
Let Γ = { i | there exists a t with A(I,t) < A’(I,t) and A(i,t) < A’(i,t) }
Let j be the job with the earliest release time r_j in Γ
Let t_o be the earliest time instant at which
  A(I,t_o) < A’(I,t_o)   -- Eq (1)
  A(j,t_o) < A’(j,t_o)   -- Eq (2)

EDF is a (2 - 1/m)-speed algorithm
Instance I = {J_1, J_2, ..., J_n}, with each job J_j = (r_j, p_j, d_j), is feasible on m procs
Wlog, assume that d_i ≤ d_{i+1} for all i
Let I_k = {J_1, J_2, ..., J_k}
Proof: Induction on k
Base: EDF on m (2 - 1/m)-speed procs meets all deadlines for I_1, ..., I_m
IH: EDF on m (2 - 1/m)-speed procs meets all deadlines for I_1, ..., I_k
We’re considering I_{k+1}:
  – Let Q_{k+1} ⊆ I_{k+1} denote the jobs in I_{k+1} with deadlines at d_{k+1}
  – (I_{k+1} \ Q_{k+1}) is I_q for some q ≤ k
  – By the IH, EDF on m (2 - 1/m)-speed procs meets all deadlines for I_q
  – By definition of EDF, EDF(I_{k+1}) is identical to EDF(I_q) on the jobs of I_q -- thus, all deadlines in I_q are met in EDF(I_{k+1})
  – By Lemma 2.6, EDF(I_{k+1}, d_{k+1}) ≥ OPT(I_{k+1}, d_{k+1})
  – Since every deadline in I_{k+1} is at most d_{k+1}, OPT completes all of I_{k+1}'s work by d_{k+1}; hence so does EDF on m (2 - 1/m)-speed procs, and in particular the jobs of Q_{k+1} meet their deadlines
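To see the theorem "in action", here is an illustrative check using the `edf_multiprocessor` sketch from the "Model and definitions" slide above, on a hand-picked instance (mine, not from the paper) that is feasible on m = 2 unit-speed processors but on which global EDF at speed 1 misses a deadline, while EDF at speed 2 - 1/m = 1.5 meets them all. The simulation is discretized, so it is indicative rather than a proof.

```python
# Feasible on 2 unit-speed processors: J_1 and J_2 back to back on one processor,
# J_3 alone on the other. Global EDF at speed 1 runs J_1 and J_2 first and leaves
# J_3 too little time; at speed 2 - 1/m it recovers.
jobs = [Job(0, 1, 1), Job(0, 1, 2), Job(0, 2, 2.2)]
print(edf_multiprocessor(jobs, m=2, speed=1.0))      # False: a deadline is missed
print(edf_multiprocessor(jobs, m=2, speed=2 - 1/2))  # True: all deadlines met
```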