
Real-Time Systems

Real-time research repository
- For information on real-time research groups, conferences, journals, books, products, etc., have a look at:

Introduction
- A real-time system is a system whose specification includes both a logical and a temporal correctness requirement.
  - Logical correctness: produces correct output. Can be checked by various means, including Hoare axiomatics and other formal methods.
  - Temporal correctness: produces output at the right time.
- A soft real-time system is one that can tolerate some delay in delivering the result.
- A hard real-time system is one that cannot afford to miss a deadline.

Characteristics of real-time systems
- Event-driven, reactive
- High cost of failure
- Concurrency/multiprogramming
- Stand-alone/continuous operation
- Reliability/fault-tolerance requirements
- PREDICTABLE BEHAVIOR

Misconceptions about real-time systems  There is no science in real-time-system design.  We shall see…  Advances in supercomputing hardware will take care of real-time requirements.  The old “buy a faster processor” argument…  Real-time computing is equivalent to fast computing.  Only to ad agencies. To us, it means PREDICTABLE computing.  There is no science in real-time-system design.  We shall see…  Advances in supercomputing hardware will take care of real-time requirements.  The old “buy a faster processor” argument…  Real-time computing is equivalent to fast computing.  Only to ad agencies. To us, it means PREDICTABLE computing.

Misconceptions about real-time systems  Real-time programming is assembly coding, …  We would like to automate (as much as possible) real- time system design, instead of relying on clever hand- crafted code.  “Real time” is performance engineering.  In real-time computing, timeliness is almost always more important than raw performance …  “Real-time problems” have all been solved in other areas of CS or operations research.  OR people typically use stochastic queuing models or one-shot scheduling models to reason about systems.  CS people are usually interested in optimizing average-case performance.  Real-time programming is assembly coding, …  We would like to automate (as much as possible) real- time system design, instead of relying on clever hand- crafted code.  “Real time” is performance engineering.  In real-time computing, timeliness is almost always more important than raw performance …  “Real-time problems” have all been solved in other areas of CS or operations research.  OR people typically use stochastic queuing models or one-shot scheduling models to reason about systems.  CS people are usually interested in optimizing average-case performance.

Misconceptions about real-time systems  It is not meaningful to talk about guaranteeing real-time performance when things can fail.  Though things may fail, we certainly don’t want the operating system to be the weakest link!  Real-time systems function in a static environment.  Not true. We consider systems in which the operating mode may change dynamically.  It is not meaningful to talk about guaranteeing real-time performance when things can fail.  Though things may fail, we certainly don’t want the operating system to be the weakest link!  Real-time systems function in a static environment.  Not true. We consider systems in which the operating mode may change dynamically.

Are all systems real-time systems?
- Question: Is a payroll processing system a real-time system?
  - It has a time constraint: print the paychecks every month.
  - Perhaps it is a real-time system in a definitional sense, but it doesn't pay us to view it as such.
- We are interested in systems for which it is not a priori obvious how to meet timing constraints.

Resources  Resources may be categorized as:  Abundant: Virtually any system design methodology can be used to realize the timing requirements of the application.  Insufficient: The application is ahead of the technology curve; no design methodology can be used to realize the timing requirements of the application.  Sufficient but scarce: It is possible to realize the timing requirements of the application, but careful resource allocation is required.  Resources may be categorized as:  Abundant: Virtually any system design methodology can be used to realize the timing requirements of the application.  Insufficient: The application is ahead of the technology curve; no design methodology can be used to realize the timing requirements of the application.  Sufficient but scarce: It is possible to realize the timing requirements of the application, but careful resource allocation is required.

Example: Interactive/multimedia application
- [Figure: application requirements (interactive video, high-quality audio, network file access, remote login) plotted against hardware resources in year X, with regions of insufficient, sufficient-but-scarce, and abundant resources.]

Example: Real-time application
- Many real-time systems are control systems.
- Example: A simple one-sensor, one-actuator control system.
- [Figure: sensor → A/D → computation → D/A → actuator, acting on the plant (the system being controlled).]

Simple control system
- Pseudo-code for this system:

      set timer to interrupt periodically with period T;
      at each timer interrupt do
          do analog-to-digital conversion to get y;
          compute control output u;
          output u and do digital-to-analog conversion;
      od

- T is called the sampling period. T is a key design choice. Typical range for T: seconds to milliseconds.
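The pseudo-code above can be sketched in Python. This is a minimal illustration only: the sensor, actuator, and control-law functions below are hypothetical stand-ins for the real A/D, D/A, and plant-specific computation.

```python
import time

def read_sensor():
    # Hypothetical A/D conversion: sample the plant output y.
    return 0.0

def write_actuator(u):
    # Hypothetical D/A conversion: drive the actuator with u.
    pass

def control_law(y, setpoint=1.0, gain=0.5):
    # A trivial proportional controller for illustration;
    # the real control law depends on the plant being controlled.
    return gain * (setpoint - y)

def control_loop(T, iterations):
    """Run the sampled control loop with sampling period T (seconds)."""
    next_release = time.monotonic()
    for _ in range(iterations):
        y = read_sensor()       # A/D: get y
        u = control_law(y)      # compute control output u
        write_actuator(u)       # output u via D/A
        next_release += T       # advance by T, not by "now + T", to avoid drift
        time.sleep(max(0.0, next_release - time.monotonic()))

control_loop(T=0.01, iterations=3)   # T = 10 ms sampling period
```

Advancing `next_release` by exactly T each iteration (rather than sleeping T after the work) keeps the sampling instants from drifting as the computation time varies.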

Multi-rate control systems
- Example: Helicopter flight controller.
- Do the following in each 1/180-sec. cycle:
  - validate sensor data and select data source; if failure, reconfigure the system
- Every sixth cycle do:
  - keyboard input and mode selection;
  - data normalization and coordinate transformation;
  - tracking reference update;
  - control laws of the outer pitch-control loop;
  - control laws of the outer roll-control loop;
  - control laws of the outer yaw- and collective-control loop
- Every other cycle do:
  - control laws of the inner pitch-control loop;
  - control laws of the inner roll- and collective-control loop;
  - compute the control laws of the inner yaw-control loop;
  - output commands;
  - carry out built-in test;
  - wait until beginning of the next cycle
- Note: Having only harmonic rates simplifies the system.
- More complicated control systems have multiple sensors and actuators and must support control loops of different rates.

Hierarchical control systems

Signal processing systems
- Signal-processing systems transform data from one form to another.
- Examples:
  - Digital filtering.
  - Video and voice compression/decompression.
  - Radar signal processing.
- Response times range from a few milliseconds to a few seconds.

Example: radar system

Other real-time applications
- Real-time databases.
  - Transactions must complete by deadlines.
  - Main dilemma: Transaction scheduling algorithms and real-time scheduling algorithms often have conflicting goals.
  - Data may be subject to absolute and relative temporal consistency requirements.
- Multimedia.
  - Want to process audio and video frames at steady rates.
  - TV video rate is 30 frames/sec.; HDTV is 60 frames/sec.
  - Telephone audio is 16 Kbits/sec.; CD audio is 128 Kbits/sec.
  - Other requirements: lip synchronization, low jitter, low end-to-end response times (if interactive).

Hard vs. soft real-time
- Task: A sequential piece of code.
- Job: An instance of a task.
- Jobs require resources to execute.
  - Example resources: CPU, network, disk, critical section.
  - We will simply call all hardware resources "processors".
- Release time of a job: The time instant the job becomes ready to execute.
- Deadline of a job: The time instant by which the job must complete execution.
- Relative deadline of a job: "Deadline - Release time".
- Response time of a job: "Completion time - Release time".

Example  Job is released at time 3.  It’s (absolute) deadline is at time 10.  It’s relative deadline is 7.  It’s response time is 6.  Job is released at time 3.  It’s (absolute) deadline is at time 10.  It’s relative deadline is 7.  It’s response time is 6.

Hard real-time systems
- A hard deadline must be met.
  - If any hard deadline is ever missed, then the system is incorrect.
  - Requires a means for validating that deadlines are met.
- Hard real-time system: A real-time system in which all deadlines are hard.
- Examples: Nuclear power plant control, flight control.

Soft real-time systems
- A soft deadline may occasionally be missed.
  - Question: How to define "occasionally"?
- Soft real-time system: A real-time system in which some deadlines are soft.
- Examples: Telephone switches, multimedia applications.

Defining "occasionally"
- One approach: Use probabilistic requirements.
  - For example, 99% of deadlines will be met.
- Another approach: Define a "usefulness" function for each job.
- Note: Validation is trickier here.

Firm deadlines
- Firm deadline: A soft deadline such that the corresponding job's usefulness function drops to 0 as soon as the deadline is reached (late jobs are of no use).

Reference model
- Each job J_i is characterized by its release time r_i, absolute deadline d_i, relative deadline D_i, and execution time e_i.
- Sometimes a range of release times is specified: [r_i^-, r_i^+]. This range is called release-time jitter.
- Likewise, sometimes instead of e_i, the execution time is specified to range over [e_i^-, e_i^+].
- Note: It can be difficult to get a precise estimate of e_i.

Periodic, sporadic, and aperiodic tasks
- Periodic task:
  - We associate a period p_i with each task T_i.
  - p_i is the time between job releases.
- Sporadic and aperiodic tasks: Released at arbitrary times.
  - Sporadic: Has a hard deadline.
  - Aperiodic: Has no deadline or a soft deadline.

Examples  A periodic task T i with r i = 2, p i = 5, e i = 2, D i = 5 executes like this:

Some definitions for periodic task systems
- The jobs of task T_i are denoted J_{i,1}, J_{i,2}, ….
- r_{i,1} (the release time of J_{i,1}) is called the phase of T_i.
- Synchronous system: Each task has a phase of 0.
- Asynchronous system: Phases are arbitrary.
- Hyperperiod: Least common multiple of {p_i}.
- Task utilization: u_i = e_i/p_i.
- System utilization: U = Σ_{i=1..n} u_i.
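The hyperperiod and utilization definitions translate directly into code; a minimal sketch (the three-task set below is a made-up example):

```python
from fractions import Fraction
from math import lcm

def hyperperiod(periods):
    # Least common multiple of all task periods.
    return lcm(*periods)

def utilization(tasks):
    # tasks: list of (period p_i, execution time e_i) pairs.
    # Fractions keep u_i = e_i / p_i exact.
    return sum(Fraction(e, p) for p, e in tasks)

tasks = [(4, 1), (5, 1), (20, 2)]           # hypothetical task set
print(hyperperiod([p for p, _ in tasks]))   # 20
print(float(utilization(tasks)))            # 0.55  (= 1/4 + 1/5 + 2/20)
```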

Task dependencies
- Two main kinds of dependencies:
  - Critical sections.
  - Precedence constraints.
    - For example, job J_i may be constrained to be released only after job J_k completes.
- Tasks with no dependencies are called independent.

Scheduling algorithms
- We are generally interested in two kinds of algorithms:
  1. A scheduler or scheduling algorithm, which generates a schedule at runtime.
  2. A feasibility analysis algorithm, which checks if timing constraints are met.
- Usually (but not always) Algorithm 1 is pretty straightforward, while Algorithm 2 is more complex.

Classification of scheduling algorithms

Optimality and feasibility
- A schedule is feasible if all timing constraints are met.
  - The term "correct" is probably better; see the next slide.
- A task set T is schedulable using scheduling algorithm A if A always produces a feasible schedule for T.
- A scheduling algorithm is optimal if it always produces a feasible schedule when one exists (under any scheduling algorithm).
- Can similarly define optimality for a class of schedulers, e.g., "an optimal static-priority scheduling algorithm."

Feasibility vs. schedulability
- To most people in the real-time community, the term "feasibility" refers to an exact schedulability test, while the term "schedulability" refers to a sufficient schedulability test.
- You may find that these terms are used somewhat inconsistently in the literature.

Clock-driven (or static) scheduling
- Model assumes:
  - n periodic tasks T_1, …, T_n.
  - The "rest of the world" periodic model is assumed.
  - T_i is specified by (φ_i, p_i, e_i, D_i), where
    - φ_i is its phase,
    - p_i is its period,
    - e_i is its execution cost per job, and
    - D_i is its relative deadline.
  - Will abbreviate as (p_i, e_i, D_i) if φ_i = 0, and (p_i, e_i) if φ_i = 0 ∧ p_i = D_i.
- We also have aperiodic jobs that are released at arbitrary times.

Schedule table
- Our scheduler will schedule periodic jobs using a static schedule that is computed offline and stored in a table T.
  - T(t_k) = T_i if T_i is to be scheduled at time t_k; T(t_k) = I if no periodic task is scheduled at time t_k.
- For most of this chapter, we assume the table is given.
- Later, we consider one algorithm for producing the table.
  - Note: This algorithm need not be highly efficient.
- We will schedule aperiodic jobs (if any are ready) in intervals not used by periodic jobs.

Static, timer-driven scheduling

Example  Consider a system of four tasks, T1 = (4,1), T2 = (5,1.8), T3 = (20,1), T4 = (20,2).  Consider the following static schedule:  The first few table entries would be: (0,T1), (1,T3), (2,T2), (3.8,I), (4, T1), …  Consider a system of four tasks, T1 = (4,1), T2 = (5,1.8), T3 = (20,1), T4 = (20,2).  Consider the following static schedule:  The first few table entries would be: (0,T1), (1,T3), (2,T2), (3.8,I), (4, T1), …

Frames  Let us refine this notion of scheduling…  To keep the table small, we divide the time line into frames and make scheduling decisions only at frame boundaries.  Each job is executed as a procedure call that must fit within a frame.  Multiple jobs may be executed in a frame, but the table is only examined at frame boundaries (the number of “columns” in the table = the number of frames per hyperperiod).  In addition to making scheduling decisions, the scheduler also checks for various error conditions, like task overruns, at the beginning of each frame.  We let f denote the frame size.  Let us refine this notion of scheduling…  To keep the table small, we divide the time line into frames and make scheduling decisions only at frame boundaries.  Each job is executed as a procedure call that must fit within a frame.  Multiple jobs may be executed in a frame, but the table is only examined at frame boundaries (the number of “columns” in the table = the number of frames per hyperperiod).  In addition to making scheduling decisions, the scheduler also checks for various error conditions, like task overruns, at the beginning of each frame.  We let f denote the frame size.

Frame size constraints  We want frames to be sufficiently long so that every job can execute within a frame nonpreemptively. So,  f  max 1  i  n (e i ).  To keep table small, f should divide H. Thus, for at least one task T i,  p i /f  - p i /f = 0.  Let F = H/f. (Note: F is an integer.) Each interval of length H is called a major cycle. Each interval of length f is called a minor cycle.  There are F minor cycles per major cycle.  We want frames to be sufficiently long so that every job can execute within a frame nonpreemptively. So,  f  max 1  i  n (e i ).  To keep table small, f should divide H. Thus, for at least one task T i,  p i /f  - p i /f = 0.  Let F = H/f. (Note: F is an integer.) Each interval of length H is called a major cycle. Each interval of length f is called a minor cycle.  There are F minor cycles per major cycle.

Frame size constraints  We want the frame size to be sufficiently small so that between the release time and deadline of every job, there is at least one frame.  A job released “inside” a frame is not noticed by the scheduler until the next frame boundary.  Moreover, if a job has a deadline “inside” frame k + 1, it essentially must complete execution by the end of frame k.  Thus, 2f - gcd(p i, f)  D i.  We want the frame size to be sufficiently small so that between the release time and deadline of every job, there is at least one frame.  A job released “inside” a frame is not noticed by the scheduler until the next frame boundary.  Moreover, if a job has a deadline “inside” frame k + 1, it essentially must complete execution by the end of frame k.  Thus, 2f - gcd(p i, f)  D i.

Example  Consider a system of four tasks, T1 = (4,1), T2 = (5,1.8), T3 = (20,1),T4 = (20,2).  By first constraint, f  2.  Hyperperiod is 20, so by second constraint, possible choices for f are 2, 4, 5, 10, and 20.  Only f = 2 satisfies the third constraint. The following is a possible cyclic schedule.  Consider a system of four tasks, T1 = (4,1), T2 = (5,1.8), T3 = (20,1),T4 = (20,2).  By first constraint, f  2.  Hyperperiod is 20, so by second constraint, possible choices for f are 2, 4, 5, 10, and 20.  Only f = 2 satisfies the third constraint. The following is a possible cyclic schedule.

Job slices
- What do we do if the frame size constraints cannot be met?
- Example: Consider T = {(4, 1), (5, 2, 7), (20, 5)}. By the first constraint, f ≥ 5, but by the third constraint, f ≤ 4!
- Solution: "Slice" the task (20, 5) into subtasks (20, 1), (20, 3), and (20, 1). Then f = 4 works. Here's a schedule:

Summary of design decisions
- Three design decisions:
  - choosing a frame size,
  - partitioning jobs into slices, and
  - placing slices in frames.
- In general, these decisions cannot be made independently.

Pseudo code for cyclic executive
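The slide's pseudo-code is not reproduced in this transcript. The following is a hypothetical minimal sketch of the main loop, with the timer wait and overrun check reduced to comments; `table[k]` holds the job slices for frame k, and aperiodic jobs are served in whatever time is left in the frame.

```python
from collections import deque

def cyclic_executive(table, F, hyperperiods, run):
    """Minimal cyclic-executive sketch: iterate over the F frames of each
    major cycle, running the periodic slices stored in the table, then any
    queued aperiodic jobs in the frame's remaining time."""
    aperiodic = deque()                  # FIFO queue of aperiodic jobs
    for _ in range(hyperperiods):
        for k in range(F):
            # In a real system: block until the frame-k timer interrupt,
            # then check whether frame k-1 overran.
            for slice_ in table[k]:      # periodic slices of frame k
                run(slice_)
            while aperiodic:             # use leftover frame time, if any
                run(aperiodic.popleft())

# Hypothetical 4-frame table; `run` just records execution order here.
executed = []
table = [["T1", "T3"], ["T2"], ["T1"], ["T2", "T4"]]
cyclic_executive(table, F=4, hyperperiods=1, run=executed.append)
print(executed)   # ['T1', 'T3', 'T2', 'T1', 'T2', 'T4']
```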

Improving response times for aperiodic jobs
- Intuitively, it makes sense to give hard real-time jobs higher priority than aperiodic jobs.
- However, this may lengthen the response time of an aperiodic job.
- Note that there is no point in completing a hard real-time job early, as long as it finishes by its deadline.

Slack stealing
- Let the total amount of time allocated to all the slices scheduled in frame k be x_k.
- Definition: The slack available at the beginning of frame k is f - x_k.
- Change to scheduler:
  - If the aperiodic job queue is non-empty, let aperiodic jobs execute in each frame whenever there is nonzero slack.
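The per-frame slack f - x_k is straightforward to precompute from the schedule table; a sketch with hypothetical allocations x_k:

```python
def initial_slack(frame_allocations, f):
    """Slack available at the start of each frame: frame size f minus the
    total time x_k allocated to periodic slices in frame k."""
    return [f - x for x in frame_allocations]

# Hypothetical allocations x_k for a frame size of f = 4:
print(initial_slack([3, 2.5, 4, 1], f=4))   # [1, 1.5, 0, 3]
```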

Example

Implementing slack stealing
- Use a pre-computed "initial slack" table.
  - Initial slack depends only on static quantities.
- Use an interval timer to keep track of available slack.
  - Set the timer when an aperiodic job begins to run. If it goes off, we must start executing periodic jobs.
- Problem: Most OSs do not provide sub-millisecond-granularity interval timers.
  - So, to use slack stealing, temporal parameters must be on the order of 100s of msecs. or secs.

Scheduling sporadic jobs
- Sporadic jobs arrive at arbitrary times and have hard deadlines.
- This implies we cannot hope to schedule every sporadic job.
- When a sporadic job arrives, the scheduler performs an acceptance test to see if the job can be completed by its deadline.
- We must ensure that a new sporadic job does not cause a previously-accepted sporadic job to miss its deadline.
- We assume sporadic jobs are prioritized on an earliest-deadline-first (EDF) basis.

Acceptance test
- Let s(i, k) be the initial total slack in frames i through k, where 1 ≤ i ≤ k ≤ F. (This quantity depends only on periodic jobs.)
- Suppose we are doing an acceptance test at frame t for a newly-arrived sporadic job S with deadline d and execution cost e.
- Suppose d occurs within frame l + 1, i.e., S must complete by the end of frame l.
- Compute the current total slack in frames t through l using
  σ(t, l) = s(t, l) - Σ_{d_k ≤ d} (e_k - ξ_k)
- The sum is over previously-accepted sporadic jobs S_k with equal or earlier deadlines; ξ_k is the amount of time already spent executing S_k before frame t.
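The slack computation above translates directly into code. A minimal sketch, assuming the initial-slack table s is stored as a dictionary keyed by frame pairs and accepted jobs are kept as (d_k, e_k, ξ_k) tuples; all names here are illustrative.

```python
# Sketch of the current-slack computation used by the acceptance test:
#   sigma(t, l) = s(t, l) - sum over accepted S_k with d_k <= d of (e_k - xi_k)

def current_slack(s, t, l, d, accepted):
    """Current total slack in frames t..l for a new job with deadline d.

    s: dict mapping (i, k) -> initial slack in frames i through k.
    accepted: list of (d_k, e_k, xi_k) for previously accepted sporadic
    jobs, where xi_k is the time already spent on S_k before frame t."""
    sigma = s[(t, l)]
    for d_k, e_k, xi_k in accepted:
        if d_k <= d:
            sigma -= (e_k - xi_k)   # work still owed to equal-or-earlier deadlines
    return sigma
```

A new sporadic job with execution cost e can only be accepted if e does not exceed this slack (the full test, sketched on the next slide, must also re-check later-deadline jobs).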

Acceptance test
- We'll specify the rest of the test "algorithmically"…

Acceptance test
- To summarize, the scheduler must maintain the following data:
  - the pre-computed initial slack table s(i, k);
  - the x_k values to use at the beginning of the current frame;
  - the current slack s_k of every accepted sporadic job S_k.

Executing sporadic tasks

Executing sporadic tasks
- Accepted sporadic jobs are executed like aperiodic jobs in the original algorithm (without slack stealing).
- Remember, when meeting a deadline is the main concern, there is no need to complete a job early.
- One difference: the aperiodic job queue is in FIFO order, while the sporadic job queue is in EDF order.
- Aperiodic jobs execute only when the sporadic job queue is empty.
- As before, slack stealing could be used when executing aperiodic jobs (in which case, some aperiodic jobs could execute when the sporadic job queue is not empty).

Practical considerations
- Handling frame overruns. Main issue: should the offending job be completed or aborted?
- Mode changes. During a mode change, the running set of tasks is replaced by a new set of tasks (i.e., the table is changed). A mode change can be implemented by having an aperiodic or sporadic mode-change job. (If sporadic, what if it fails the acceptance test?)
- Multiprocessors. Like uniprocessors, but the table probably takes longer to precompute.

Network Flow Algorithm for Computing Static Schedules
- Initialization: compute all frame sizes in accordance with the second two frame-size constraints:
  - ⌊p_i/f⌋ - p_i/f = 0 (i.e., p_i/f is an integer)
  - 2f - gcd(p_i, f) ≤ D_i
- At this point, we ignore the first constraint, f ≥ max_{1 ≤ i ≤ n}(e_i). Recall this is the constraint that can force us to "slice" a task into subtasks.
- Iterative algorithm: for each possible frame size f, we construct a network flow graph and run a max-flow algorithm. If the flow thus found has a certain value, then we have a schedule.
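The initialization step can be sketched as a brute-force search over candidate frame sizes. This is an illustrative helper (the function name and the convention that f must divide at least one period are assumptions of this sketch), checking only the second and third constraints, as described above.

```python
from math import gcd

# Sketch: enumerate candidate frame sizes f satisfying
#   (a) f divides some period p_i, and
#   (b) 2f - gcd(p_i, f) <= D_i for every task i.
# The constraint f >= max e_i is deliberately ignored here, matching the
# initialization step that may later force tasks to be sliced.

def candidate_frame_sizes(periods, deadlines):
    candidates = []
    for f in range(1, max(periods) + 1):
        divides_some_period = any(p % f == 0 for p in periods)
        fits_deadlines = all(2 * f - gcd(p, f) <= d
                             for p, d in zip(periods, deadlines))
        if divides_some_period and fits_deadlines:
            candidates.append(f)
    return candidates
```

For instance, three tasks with periods (4, 5, 20) and deadlines equal to their periods admit only small frame sizes, because constraint (b) rules out any f whose gcd with some period is small.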

Flow graph
- Denote all jobs in the major cycle of F frames as J_1, J_2, …, J_N.
- Vertices:
  - N job vertices, denoted J_1, J_2, …, J_N;
  - F frame vertices, denoted 1, 2, …, F;
  - a source and a sink.
- Edges:
  - (J_i, j) with capacity f iff J_i can be scheduled in frame j;
  - (source, J_i) with capacity e_i;
  - (j, sink) with capacity f for each frame vertex j.

Example

Finding a schedule
- The maximum attainable flow value is clearly Σ_{i=1,…,N} e_i. This corresponds to the exact amount of computation to be scheduled in the major cycle.
- If a max flow is found with value Σ_{i=1,…,N} e_i, then we have a schedule.
- If a job is scheduled across multiple frames, then we must slice it into corresponding subjobs.
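The whole construction-plus-check can be sketched with a small Edmonds-Karp max-flow routine. This is a self-contained illustration under stated assumptions: jobs are given as (e_i, eligible_frames) pairs, the graph follows the vertex/edge layout above, and the function name is hypothetical. A production scheduler would also read the schedule back off the saturated (J_i, j) edges, which this sketch omits.

```python
from collections import deque

# Illustrative feasibility check for the network-flow scheduling method.
# jobs: list of (e_i, eligible_frames); f: frame size; F: number of frames.
def has_cyclic_schedule(jobs, f, F):
    src, snk = "s", "t"
    cap = {}
    def add(u, v, c):
        cap[(u, v)] = cap.get((u, v), 0) + c
        cap.setdefault((v, u), 0)          # reverse edge for residual flow
    for i, (e, frames) in enumerate(jobs):
        add(src, ("J", i), e)              # capacity e_i out of the source
        for j in frames:
            add(("J", i), ("F", j), f)     # J_i may be scheduled in frame j
    for j in range(F):
        add(("F", j), snk, f)              # each frame offers f time units

    def bfs():                             # find one augmenting path (BFS)
        parent = {src: None}
        q = deque([src])
        while q:
            u = q.popleft()
            if u == snk:
                break
            for (a, b), c in cap.items():
                if a == u and c > 0 and b not in parent:
                    parent[b] = u
                    q.append(b)
        return parent if snk in parent else None

    flow = 0
    while (parent := bfs()) is not None:
        path, v = [], snk                  # walk back from sink to source
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(cap[e] for e in path)   # bottleneck capacity on the path
        for (u, v) in path:
            cap[(u, v)] -= push
            cap[(v, u)] += push
        flow += push
    # A schedule exists iff the max flow saturates all source edges.
    return flow == sum(e for e, _ in jobs)
```

If a job's flow is split across several frame edges, that job must be sliced into subjobs, one per frame, exactly as the slide notes.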

Example

Non-independent tasks
- Tasks with precedence constraints are no problem.
  - We can enforce a precedence constraint like "J_i precedes J_k" by simply making sure J_i's release is at or before J_k's release, and J_i's deadline is at or before J_k's deadline.
  - If slices of J_i and J_k are scheduled in the wrong order, we can just swap them.
- Critical sections pose a greater challenge.
  - We can try to "massage" the flow-network schedule into one where nonpreemption constraints are respected.
  - Unfortunately, there is no known efficient, optimal algorithm for doing this (the problem is actually NP-hard).
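The release/deadline tightening for precedence constraints can be sketched as a small transformation. This is a simplified illustration (names hypothetical): it applies the slide's rule directly, ignoring execution times, and assumes the precedence pairs are processed in an order where earlier adjustments do not need to be revisited.

```python
# Sketch: enforce "J_i precedes J_k" by tightening releases and deadlines.
def enforce_precedence(jobs, precedes):
    """jobs: {name: (release, deadline)}; precedes: list of (i, k) pairs.

    Push J_k's release to at least J_i's, and pull J_i's deadline to at
    most J_k's, so that a deadline-consistent table respects the order."""
    adjusted = dict(jobs)
    for i, k in precedes:
        r_i, d_i = adjusted[i]
        r_k, d_k = adjusted[k]
        adjusted[k] = (max(r_k, r_i), d_k)   # J_k released no earlier than J_i
        adjusted[i] = (r_i, min(d_i, d_k))   # J_i due no later than J_k
    return adjusted
```

After this adjustment, any slices of J_i and J_k that still land in the wrong order within the overlap can simply be swapped, as noted above.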

Pros and Cons of Cyclic Executives
- Main advantage: CEs are very simple; you just need a table.
- For example, additional mechanisms for concurrency control and synchronization are not needed. In fact, there's really no notion of a "process" here, just procedure calls.
- We can validate, test, and certify with very high confidence.
- Certain anomalies will not occur.
- For these reasons, cyclic executives are the predominant approach in many safety-critical applications (such as airplanes).

Aside: Scheduling Anomalies
- Here's an example: on a multiprocessor, decreasing a job's execution cost can increase some job's response time.
- Example setup: suppose we have one job queue, preemption, but no migration.

Disadvantages
- Disadvantages of cyclic executives:
  - Very brittle: any change, no matter how trivial, requires that a new table be computed!
  - Release times of all jobs must be fixed, i.e., "real-world" sporadic tasks are difficult to support.
  - Temporal parameters essentially must be multiples of f.
  - F could be huge!
  - All combinations of periodic tasks that may execute together must be analyzed a priori.
  - From a software-engineering standpoint, "slicing" one procedure into several could be error-prone.

Dynamic-priority scheduling
- Let us consider a priority-based dynamic scheduling approach.
- Each job is assigned a priority, and the highest-priority job executes at any time.
- Under dynamic-priority scheduling, different jobs of a task may be assigned different priorities.
- For example, job J_{i,k} of task T_i may have higher priority than job J_{j,m} of task T_j, while job J_{i,l} of T_i has lower priority than job J_{j,n} of T_j.

Outline
- We consider both earliest-deadline-first (EDF) and least-laxity-first (LLF) scheduling.
- Outline:
  - Optimality of EDF and LLF
  - Utilization-based schedulability test for EDF

Optimality of EDF
- Theorem [Liu and Layland]: When preemption is allowed and jobs do not contend for resources, the EDF algorithm can produce a feasible schedule of a set J of independent jobs with arbitrary release times and deadlines on a processor if and only if J has feasible schedules.
- Notes:
  - The theorem applies even if tasks are not periodic.
  - If tasks are periodic, a task's relative deadline can be less than, equal to, or greater than its period.
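By the theorem, a job set is feasible on one processor exactly when a preemptive EDF simulation of it meets every deadline. The sketch below is an illustrative unit-step simulator (names and the unit-time model are assumptions of this sketch, not part of the theorem), useful for checking small examples.

```python
import heapq

# Minimal preemptive EDF simulator over unit time steps.
# jobs: list of (release, deadline, cost) with integer parameters.
# Returns True iff EDF meets every deadline, i.e. the set is feasible.
def edf_feasible(jobs):
    order = sorted(range(len(jobs)), key=lambda i: jobs[i][0])
    remaining = [c for (_, _, c) in jobs]
    ready, t, idx = [], 0, 0
    horizon = max(d for (_, d, _) in jobs)
    while t < horizon:
        while idx < len(order) and jobs[order[idx]][0] <= t:
            i = order[idx]
            heapq.heappush(ready, (jobs[i][1], i))  # priority = absolute deadline
            idx += 1
        if ready:
            d, i = ready[0]
            if t >= d:                   # earliest deadline can no longer be met
                return False
            remaining[i] -= 1            # run the earliest-deadline job one unit
            if remaining[i] == 0:
                heapq.heappop(ready)
        t += 1
    return not ready                     # leftover work at the horizon = a miss
```

For example, a single job released at 0 with deadline 2 and cost 2 is feasible, while adding a second job with deadline 2 and cost 1 overloads the interval [0, 2) and must fail.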

Proof
- We show that any feasible schedule of J can be systematically transformed into an EDF schedule.
- Suppose parts of two jobs J_i and J_k are executed out of EDF order.
- This situation can be corrected by performing a "swap."

Proof
- If we inductively repeat this procedure, we can eliminate all out-of-order violations.
- The resulting schedule may still fail to be an EDF schedule because it has idle intervals where some job is ready.
- Such idle intervals can be eliminated by moving some jobs forward.

LLF Scheduling
- Definition: At any time t, the slack (or laxity) of a job with deadline d is equal to d - t minus the time required to complete the remaining portion of the job.
- LLF scheduling: the job with the smallest laxity has highest priority at all times.
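The definition and the priority rule translate almost verbatim into code. A minimal sketch with hypothetical names, representing each job by its absolute deadline and remaining execution time:

```python
# Laxity at time t of a job with deadline d and remaining work `rem`:
# a direct transcription of the definition above.
def laxity(t, d, rem):
    return (d - t) - rem

# LLF rule: at every instant, run the job with the smallest laxity.
def llf_pick(t, jobs):
    """jobs: {name: (deadline, remaining)}; returns the name of the job to run."""
    return min(jobs, key=lambda name: laxity(t, jobs[name][0], jobs[name][1]))
```

Note that, unlike EDF, a job's laxity changes over time even while it is not executing, so in principle priorities must be re-evaluated continuously.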

Optimality of LLF
- Theorem: When preemption is allowed and jobs do not contend for resources, the LLF algorithm can produce a feasible schedule of a set J of independent jobs with arbitrary release times and deadlines on a processor if and only if J has feasible schedules.
- The proof is similar to that for EDF.