Introductory Seminar on Research CIS5935 Fall 2008 Ted Baker.

Slides:



Advertisements
Similar presentations
Priority INHERITANCE PROTOCOLS
Advertisements

Explicit Preemption Placement for Real- Time Conditional Code via Graph Grammars and Dynamic Programming Bo Peng, Nathan Fisher, and Marko Bertogna Department.
Harini Ramaprasad, Frank Mueller North Carolina State University Center for Embedded Systems Research Tightening the Bounds on Feasible Preemption Points.
THE UNIVERSITY of TEHRAN Mitra Nasri Sanjoy Baruah Gerhard Fohler Mehdi Kargahi October 2014.
Mehdi Kargahi School of ECE University of Tehran
Response-Time Analysis for globally scheduled Symmetric Multiprocessor Platforms Real-Time Systems Laboratory RETIS Lab Marko Bertogna, Michele Cirinei.
RUN: Optimal Multiprocessor Real-Time Scheduling via Reduction to Uniprocessor Paul Regnier † George Lima † Ernesto Massa † Greg Levin ‡ Scott Brandt ‡
Real-Time Scheduling CIS700 Insup Lee October 3, 2005 CIS 700.
Module 2 Priority Driven Scheduling of Periodic Task
Soft Real-Time Semi-Partitioned Scheduling with Restricted Migrations on Uniform Heterogeneous Multiprocessors Kecheng Yang James H. Anderson Dept. of.
Sporadic Server Scheduling in Linux Theory vs. Practice Mark Stanovich Theodore Baker Andy Wang.
Towards Feasibility Region Calculus: An End-to-end Schedulability Analysis of Real- Time Multistage Execution William Hawkins and Tarek Abdelzaher Presented.
By Group: Ghassan Abdo Rayyashi Anas to’meh Supervised by Dr. Lo’ai Tawalbeh.
CprE 458/558: Real-Time Systems
Spring 2002Real-Time Systems (Shin) Rate Monotonic Analysis Assumptions – A1. No nonpreemptible parts in a task, and negligible preemption cost –
1Chapter 05, Fall 2008 CPU Scheduling The CPU scheduler (sometimes called the dispatcher or short-term scheduler): Selects a process from the ready queue.
New Schedulability Tests for Real- Time task sets scheduled by Deadline Monotonic on Multiprocessors Marko Bertogna, Michele Cirinei, Giuseppe Lipari Scuola.
Distributed Real-Time systems 1 By: Mahdi Sadeghizadeh Website: Sadeghizadeh.ir Advanced Computer Networks.
The Design of an EDF- Scheduled Resource-Sharing Open Environment Nathan Fisher Wayne State University Marko Bertogna Scuola Superiore Santa’Anna of Pisa.
Introductory Seminar on Research CIS5935 Fall 2009 Ted Baker.
Real Time Operating Systems Scheduling & Schedulers Course originally developed by Maj Ron Smith 8-Oct-15 Dr. Alain Beaulieu Scheduling & Schedulers- 7.
Silberschatz, Galvin and Gagne  Operating System Concepts Chapter 6: CPU Scheduling Basic Concepts Scheduling Criteria Scheduling Algorithms.
Quantifying the sub-optimality of uniprocessor fixed priority non-pre-emptive scheduling Robert Davis 1, Laurent George 2, Pierre Courbin 3 1 Real-Time.
Probabilistic Preemption Control using Frequency Scaling for Sporadic Real-time Tasks Abhilash Thekkilakattil, Radu Dobrin and Sasikumar Punnekkat.
Quantifying the Sub-optimality of Non-preemptive Real-time Scheduling Abhilash Thekkilakattil, Radu Dobrin and Sasikumar Punnekkat.
Real-Time Scheduling CS4730 Fall 2010 Dr. José M. Garrido Department of Computer Science and Information Systems Kennesaw State University.
1 Reducing Queue Lock Pessimism in Multiprocessor Schedulability Analysis Yang Chang, Robert Davis and Andy Wellings Real-time Systems Research Group University.
Scheduling policies for real- time embedded systems.
Real-Time Systems Mark Stanovich. Introduction System with timing constraints (e.g., deadlines) What makes a real-time system different? – Meeting timing.
CS Spring 2011 CS 414 – Multimedia Systems Design Lecture 31 – Multimedia OS (Part 1) Klara Nahrstedt Spring 2011.
The Global Limited Preemptive Earliest Deadline First Feasibility of Sporadic Real-time Tasks Abhilash Thekkilakattil, Sanjoy Baruah, Radu Dobrin and Sasikumar.
6. Application mapping 6.1 Problem definition
Cpr E 308 Spring 2005 Process Scheduling Basic Question: Which process goes next? Personal Computers –Few processes, interactive, low response time Batch.
Advanced Operating Systems - Spring 2009 Lecture 14 – February 25, 2009 Dan C. Marinescu Office: HEC 439 B. Office.
CSE 522 Real-Time Scheduling (2)
Real Time Operating Systems Schedulability - Part 2 Course originally developed by Maj Ron Smith 12/20/2015Dr Alain Beaulieu1.
Module 2 Overview of Real Time System Scheduling
Real-Time Scheduling CS 3204 – Operating Systems Lecture 13 10/3/2006 Shahrooz Feizabadi.
1 Real-Time Scheduling. 2Today Operating System task scheduling –Traditional (non-real-time) scheduling –Real-time scheduling.
CSCI1600: Embedded and Real Time Software Lecture 24: Real Time Scheduling II Steven Reiss, Fall 2015.
OPERATING SYSTEMS CS 3530 Summer 2014 Systems and Models Chapter 03.
6.1 CPU Scheduling Basic Concepts Scheduling Criteria Scheduling Algorithms Multiple-Processor Scheduling Real-Time Scheduling Algorithm Evaluation.
Multiprocessor Fixed Priority Scheduling with Limited Preemptions Abhilash Thekkilakattil, Rob Davis, Radu Dobrin, Sasikumar Punnekkat and Marko Bertogna.
CSCI1600: Embedded and Real Time Software Lecture 23: Real Time Scheduling I Steven Reiss, Fall 2015.
Dynamic Priority Driven Scheduling of Periodic Task
CS Spring 2009 CS 414 – Multimedia Systems Design Lecture 31 – Process Management (Part 1) Klara Nahrstedt Spring 2009.
Mok & friends. Resource partition for real- time systems (RTAS 2001)
Real-Time Scheduling --- An Overview Real-Time Scheduling --- An Overview Xiaoping He.
Undergraduate course on Real-time Systems Linköping University TDDD07 Real-time Systems Lecture 2: Scheduling II Simin Nadjm-Tehrani Real-time Systems.
1 CPU Scheduling Basic Concepts Scheduling Criteria Scheduling Algorithms Multiple-Processor Scheduling Real-Time Scheduling.
CS Spring 2010 CS 414 – Multimedia Systems Design Lecture 32 – Multimedia OS Klara Nahrstedt Spring 2010.
THE DEADLINE-BASED SCHEDULING OF DIVISIBLE REAL-TIME WORKLOADS ON MULTIPROCESSOR PLATFORMS Suriayati Chuprat Supervisors: Professor Dr Shaharuddin Salleh.
Lecture 4 CPU scheduling. Basic Concepts Single Process  one process at a time Maximum CPU utilization obtained with multiprogramming CPU idle :waiting.
CPU scheduling.  Single Process  one process at a time  Maximum CPU utilization obtained with multiprogramming  CPU idle :waiting time is wasted 2.
Basic Concepts Maximum CPU utilization obtained with multiprogramming
Distributed Process Scheduling- Real Time Scheduling Csc8320(Fall 2013)
Tardiness Bounds for Global EDF Scheduling on a Uniform Multiprocessor Kecheng Yang James H. Anderson Dept. of Computer Science UNC-Chapel Hill.
OPERATING SYSTEMS CS 3502 Fall 2017
Multiprocessor Real-Time Scheduling
CPU Scheduling CSSE 332 Operating Systems
Unit OS9: Real-Time and Embedded Systems
CprE 458/558: Real-Time Systems
Lecture 21: Introduction to Process Scheduling
Sanjoy Baruah The University of North Carolina at Chapel Hill
Limited-Preemption Scheduling of Sporadic Tasks Systems
Chapter 6: CPU Scheduling
Linköping University, IDA, ESLAB
Lecture 21: Introduction to Process Scheduling
Uniprocessor scheduling
Ch 4. Periodic Task Scheduling
Presentation transcript:

Introductory Seminar on Research CIS5935 Fall 2008 Ted Baker

Outline Introduction to myself – My past research – My current research areas Technical talk: on RT MP EDF Scheduling – The problem – The new results – The basis for the analysis – Why a better result might be possible

Past Research Relative computability – Relativizations of the P=NP? question ( ) Algorithms – N-dim pattern matching (1978) – extended LR parsing (1981) Compilers & PL implementation – Ada compiler and runtime systems ( ) Real-time runtime systems, multi-threading – FSU Pthreads & other RT OS projects ( ) Real-time scheduling & synch. – Stack Resource Protocol (1991) – Deadline Sporadic Server (1995) RT Software standards – POSIX, Ada ( )

Recent/Current Research Multiprocessor real-time scheduling (1998-…) – how to guarantee deadlines for task systems scheduled on multiprocessors? with M. Cirinei & M. Bertogna (Pisa), N. Fisher & S. Baruah (UNC) Real-time device drivers (2006-…) – how to support schedulability analysis with an operating system? – how to get predictable I/O response times? with A. Wang & Mark Stanovich (FSU)

A Real-Time Scheduling Problem Will a set of independent sporadic tasks miss any deadlines if scheduled using a global preemptive Earliest-Deadline-First (EDF) policy on a set of identical multiprocessors?

Background & Terminology job = schedulable unit of computation, with – arrival time – worst-case execution time (WCET) – deadline task = sequence of jobs task system = set of tasks independent tasks: can be scheduled without consideration of interactions, precedence, coordination, etc.

Sporadic Task  i T i = minimum inter-arrival time C i = worst-case execution time D i = relative deadline job released job completesdeadline next release scheduling window

Multiprocessor Scheduling m identical processors (vs. uniform/hetero.) shared memory (vs. distributed) preemptive (vs. non-preemptive) on-line (vs. off-line) EDF – earlier deadline  higher priority global (vs. partitioned) – single queue – tasks can migrate between processors

Questions Is a given system schedulable by global-EDF? How good is global-EDF at finding a schedule? – How does it compare to optimal?

Schedulability Testing Global-EDF schedulability for sporadic task systems can be decided by brute-force state- space enumeration (in exponential time) [Baker, OPODIS 2007] but we don’t have any practical algorithm. We do have several practical sufficient conditions.

Sufficient Conditions for Global EDF Varying degrees of complexity and accuracy Examples: Goossens, Funk, Baruah: density test (2003) Baker: analysis of  -busy interval (2003) Bertogna, Cirinei: iterative slack time estimation (2007) Difficult to compare quality, except by experimentation All tests are very conservative

Density Test for Global EDF where Sporadic task system  is schedulable on m unit-capacity processors if

A more precise load metric maximum demand of jobs of  i that arrive in and have deadlines within any interval of length t maximum fraction of processor demanded by jobs of  i that arrive in and have deadlines within any time interval

Rationale for DBF single processor analysis uses maximal busy interval, which has no “carried in” jobs.

Load-based test: Theorem 3 Sporadic task system  is global-EDF schedulable on m unit-capacity processors if where

Optimality There is no optimal on-line global scheduling algorithm for sporadic tasks [Fisher, 2007] → global EDF is not optimal – so we can’t compare to an optimal on-line algorithm + but we can compare it to an optimal clairvoyant scheduler

Speed-up Factors, used in Competitive Analysis A scheduling algorithm has a processor speedup factor f ≥ 1 if for any task system  that is feasible on a given multiprocessor platform the algorithm schedules  to meet all deadlines on a platform in which each processor is faster by a factor f.

EDF Job Scheduling Speedup Any set of independent jobs that can be scheduled to meet all deadlines on m unit- speed processors will meet all deadlines if scheduled using Global EDF on m processors of speed 2 - 1/m. [Phillips et al., 1997] But how do we tell whether a sporadic task system is feasible?

Sporadic EDF Speed-up If  is feasible on m processors of speed x then it will be correctly identified as global-EDF schedulable on m unit-capacity processors by Theorem 3 if

Corollary 2 The processor speedup bound for the global- EDF schedulability test of Theorem 3 is bounded above by

Interpretation The processor speed-up of compensates for both 1.non-optimality of global EDF 2.pessimism of our schedulability test There is no penalty for allowing post-period deadlines in the analysis (Makes sense, but not borne out by prior analyses, e.g., of partitioned EDF)

Steps of Analysis lower bound  on load to miss deadline lower bound on length of  -busy window downward closure of  -busy window upper bound on carried-in work per task upper bound on per-task contribution to load, in terms of DBF upper bound on DBF, in terms of density upper bound on number of tasks with carry-in sufficient condition for schedulability derivation of speed-up result

problem job arrivesfirst missed deadline problem job executesother jobs execute Consider the first “problem job”, that misses its deadline. What must be true for this to happen?

Details of the First Step What is a lower bound on the load needed to miss a deadline?

problem job arrivesfirst missed deadline problem job ready previous job of problem task The problem job is not ready to execute until the preceding job of the same task completes.

problem window first missed deadline problem job ready previous job of problem task Restrict consideration to the “problem window” during which the problem job is eligible to execute.

problem task executesother tasks execute problem window The ability of the problem job to complete within the problem window depends on its own execution time and interference from jobs of other tasks.

carried-in jobs problem window deadline > t d The interfering jobs are of two kinds: (1)local jobs: arrive in the window and have deadlines in the window (2)carried-in jobs: arrive before the window and have deadlines in the window

problem task executesother tasks interfere Interference only occurs when all processors are busy executing jobs of other tasks.

Therefore, we can get a lower bound on the necessary interfering demand by considering only “blocks” of interference. problem task executesother tasks interfere

The total amount of block interference is not affected by where it occurs within the window. problem task executesother tasks interfere

problem task executesother tasks interfere The total demand with deadline  t d includes the problem problem job and the interference. processors busy executing jobs with deadline  problem job

processors busy executing other jobs with deadline  problem job average competing workload in [ t a,t d ) approximation of interference (blocks) by demand (formless) From this, we can find the average workload with deadline  t d that is needed to cause a missed deadline.

problem job arrives previous job of problem task previous deadline of problem task The minimum inter-arrival time and the deadline give us a lower bound on the length of the problem window.

The WCET of the problem job and the number of processors allow us to find a lower bound on the average competing workload.

What we have shown There can be no missed deadline unless there is a “  -busy ” problem window.

The Rest of the Analysis [lower bound  on load to miss deadline] lower bound on length of  -busy window downward closure of  -busy window upper bound on carried-in work per task upper bound on per-task contribution to load, in terms of DBF upper bound on DBF, in terms of density upper bound on number of tasks with carry-in sufficient condition for schedulability derivation of speed-up result

Key Elements of the Rest of the Analysis # tasks with carried-in jobs  m-1 shows carried-in load   max Observe length of  -busy interval ≥ min(D k,T k ) covers case D k >T k Derive speed-up bounds

problem job arrives previous job of problem task previous deadline of problem task Observe length of  -busy interval ≥ min(D k,T k ) This covers both case D k ≤T k and D k >T k

maximal  -busy interval To minimize the contributions of carried-in jobs, we can extend the problem window downward until the competing load falls below .

at most carried-in jobs maximal  -busy interval Observe # tasks with carried-in jobs  m-1 Use this to show carried-in load   max

Summary New speed-up bound for global EDF on sporadic tasks with arbitrary deadlines Based on bounding number of tasks with carried-in jobs Tighter analysis may be possible in future work

Where analysis might be tighter approximation of interference (blocks) by demand (formless) bounding  i by  max (only considering one value of  ) bounding DBF(  i,  i +  ) by (  i +  )  max (t) double-counting work of carry-in tasks

contribution of  i bounding DBF(  i,  i +  ) by (  i +  )  max (t)

carry-in cases non-carry-in cases double-counting internal load from tasks with carried-in jobs

Some Other Fundamental Questions Is the underlying MP model realistic? Can reasonably accurate WCET’s be found for MP systems? (How do we deal with memory and L2 cache interference effects?) What is the preemption cost? What is the task migration cost? What is the best way to implement it?

The End questions?

at most carried-in jobs maximal  -busy interval