Real-Time Systems
Real-time research repository
For information on real-time research groups, conferences, journals, books, products, etc., have a look at: http://cs-www.bu.edu/pub/ieee-rts/Home.html
Introduction
A real-time system is a system whose specification includes both a logical and a temporal correctness requirement.
- Logical correctness: produces correct output. Can be checked by various means, including Hoare axiomatics and other formal methods.
- Temporal correctness: produces output at the right time.
A soft real-time system is one that can tolerate some delay in delivering the result. A hard real-time system is one that cannot afford to miss a deadline.
Characteristics of real-time systems
- Event-driven, reactive.
- High cost of failure.
- Concurrency/multiprogramming.
- Stand-alone/continuous operation.
- Reliability/fault-tolerance requirements.
- PREDICTABLE BEHAVIOR.
Misconceptions about real-time systems
- There is no science in real-time-system design. (We shall see…)
- Advances in supercomputing hardware will take care of real-time requirements. (The old “buy a faster processor” argument…)
- Real-time computing is equivalent to fast computing. (Only to ad agencies. To us, it means PREDICTABLE computing.)
Misconceptions about real-time systems
- Real-time programming is assembly coding. (We would like to automate real-time system design as much as possible, instead of relying on clever hand-crafted code.)
- “Real time” is performance engineering. (In real-time computing, timeliness is almost always more important than raw performance.)
- “Real-time problems” have all been solved in other areas of CS or operations research. (OR people typically use stochastic queuing models or one-shot scheduling models to reason about systems. CS people are usually interested in optimizing average-case performance.)
Misconceptions about real-time systems
- It is not meaningful to talk about guaranteeing real-time performance when things can fail. (Though things may fail, we certainly don’t want the operating system to be the weakest link!)
- Real-time systems function in a static environment. (Not true. We consider systems in which the operating mode may change dynamically.)
Are all systems real-time systems?
Question: Is a payroll processing system a real-time system? It has a time constraint: print the paychecks every month. Perhaps it is a real-time system in a definitional sense, but it doesn’t pay us to view it as such. We are interested in systems for which it is not a priori obvious how to meet timing constraints.
Resources
Resources may be categorized as:
- Abundant: Virtually any system design methodology can be used to realize the timing requirements of the application.
- Insufficient: The application is ahead of the technology curve; no design methodology can be used to realize the timing requirements of the application.
- Sufficient but scarce: It is possible to realize the timing requirements of the application, but careful resource allocation is required.
Example: Interactive/multimedia application
[Figure: hardware resources available in 1980, 1990, and 2000 plotted against required performance for remote login, network file access, high-quality audio, and interactive video; each application falls into the abundant, sufficient-but-scarce, or insufficient region depending on the year.]
Example: Real-time application
Many real-time systems are control systems. Example: A simple one-sensor, one-actuator control system.
[Figure: sensor → A/D → computation → D/A → actuator, operating on the plant (the system being controlled).]
Simple control system
Pseudo-code for this system:

set timer to interrupt periodically with period T;
at each timer interrupt do
    do analog-to-digital conversion to get y;
    compute control output u;
    output u and do digital-to-analog conversion;
od

T is called the sampling period. T is a key design choice. Typical values for T range from milliseconds to seconds.
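The pseudo-code above can be sketched in Python. The A/D and D/A steps are replaced by hypothetical stubs (read_sensor, write_actuator), the timer interrupt is simulated by a loop over a virtual clock, and the proportional control law and period value are illustrative assumptions, not part of the original design.

```python
T = 0.01  # sampling period in seconds (an assumed value)

def read_sensor(t):
    # stub for analog-to-digital conversion
    return 1.0  # pretend the plant output is constant

def control_law(y, setpoint=2.0, gain=0.5):
    # a simple proportional controller (illustrative only)
    return gain * (setpoint - y)

outputs = []
def write_actuator(u):
    # stub for digital-to-analog conversion
    outputs.append(u)

t = 0.0
for _ in range(5):          # five simulated timer "interrupts"
    y = read_sensor(t)      # A/D conversion
    u = control_law(y)      # compute control output
    write_actuator(u)       # D/A conversion
    t += T                  # wait for the next period
```

In a real system the loop body would be invoked by a periodic timer interrupt rather than a for loop; the structure of one iteration is the same.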
Multi-rate control systems
Example: Helicopter flight controller.
Do the following in each 1/180-sec cycle:
- validate sensor data and select data source; if failure, reconfigure the system.
Every sixth cycle do:
- keyboard input and mode selection;
- data normalization and coordinate transformation;
- tracking reference update;
- control laws of the outer pitch-control loop;
- control laws of the outer roll-control loop;
- control laws of the outer yaw- and collective-control loop.
Every other cycle do:
- control laws of the inner pitch-control loop;
- control laws of the inner roll- and collective-control loop;
- control laws of the inner yaw-control loop;
- output commands;
- carry out built-in test;
- wait until the beginning of the next cycle.
Note: Having only harmonic rates simplifies the system. More complicated control systems have multiple sensors and actuators and must support control loops of different rates.
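The harmonic-rate structure above can be sketched as a small cyclic loop. The task names and counters below are illustrative placeholders for the actual flight-control computations.

```python
# One base-rate activity every cycle, one at half rate, one at
# one-sixth rate, as in the helicopter example.
counts = {"base": 0, "half": 0, "sixth": 0}

def base_rate():   counts["base"] += 1    # e.g., validate sensor data
def half_rate():   counts["half"] += 1    # e.g., inner control loops
def sixth_rate():  counts["sixth"] += 1   # e.g., outer loops, keyboard input

for cycle in range(12):        # twelve 1/180-sec cycles
    base_rate()
    if cycle % 2 == 0:         # every other cycle
        half_rate()
    if cycle % 6 == 0:         # every sixth cycle
        sixth_rate()
```

Because the rates are harmonic (1, 1/2, 1/6), the whole pattern repeats every six cycles, which is what makes the table-driven implementation simple.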
Hierarchical control systems
Signal processing systems
Signal-processing systems transform data from one form to another. Examples:
- Digital filtering.
- Video and voice compression/decompression.
- Radar signal processing.
Response times range from a few milliseconds to a few seconds.
Example: radar system
Other real-time applications
Real-time databases.
- Transactions must complete by deadlines.
- Main dilemma: Transaction scheduling algorithms and real-time scheduling algorithms often have conflicting goals.
- Data may be subject to absolute and relative temporal consistency requirements.
Multimedia.
- Want to process audio and video frames at steady rates. TV video rate is 30 frames/sec. HDTV is 60 frames/sec. Telephone audio is 16 Kbits/sec. CD audio is 128 Kbits/sec.
- Other requirements: Lip synchronization, low jitter, low end-to-end response times (if interactive).
Hard vs. soft real-time
- Task: A sequential piece of code.
- Job: Instance of a task.
- Jobs require resources to execute. Example resources: CPU, network, disk, critical section. We will simply call all hardware resources “processors”.
- Release time of a job: The time instant the job becomes ready to execute.
- Deadline of a job: The time instant by which the job must complete execution.
- Relative deadline of a job: “Deadline − Release time”.
- Response time of a job: “Completion time − Release time”.
Example
A job is released at time 3. Its (absolute) deadline is at time 10. Its relative deadline is 7. Its response time is 6.
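The definitions can be checked with a few lines of arithmetic. The completion time of 9 used below is implied by the stated response time of 6; it is not given directly in the example.

```python
# Timing quantities for the example job.
release, deadline, completion = 3, 10, 9

relative_deadline = deadline - release   # "Deadline - Release time"
response_time = completion - release     # "Completion time - Release time"
```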
Hard real-time systems
A hard deadline must be met. If any hard deadline is ever missed, then the system is incorrect. This requires a means for validating that deadlines are met.
Hard real-time system: A real-time system in which all deadlines are hard. Examples: Nuclear power plant control, flight control.
Soft real-time systems
A soft deadline may occasionally be missed. Question: How do we define “occasionally”?
Soft real-time system: A real-time system in which some deadlines are soft. Examples: Telephone switches, multimedia applications.
Defining “occasionally”
One approach: Use probabilistic requirements. For example, 99% of deadlines will be met.
Another approach: Define a “usefulness” function for each job.
Note: Validation is trickier here.
Firm deadlines
Firm deadline: A soft deadline such that the corresponding job’s usefulness function goes to 0 as soon as the deadline is reached (late jobs are of no use).
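A firm-deadline usefulness function can be sketched as a step function. Treating a job that finishes exactly at its deadline as on time is an assumption consistent with the usual convention.

```python
def usefulness(completion_time, deadline):
    # firm deadline: full value up to the deadline, no value afterward
    return 1.0 if completion_time <= deadline else 0.0
```

For soft (non-firm) deadlines, the function would instead decay gradually after the deadline rather than dropping to 0 at once.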
Reference model
Each job J_i is characterized by its release time r_i, absolute deadline d_i, relative deadline D_i, and execution time e_i.
Sometimes a range of release times is specified: [r_i⁻, r_i⁺]. This range is called release-time jitter.
Likewise, sometimes instead of e_i, the execution time is specified to range over [e_i⁻, e_i⁺].
Note: It can be difficult to get a precise estimate of e_i.
Periodic, sporadic, and aperiodic tasks
Periodic task: We associate a period p_i with each task T_i. p_i is the time between job releases.
Sporadic and aperiodic tasks: Released at arbitrary times.
- Sporadic: Has a hard deadline.
- Aperiodic: Has no deadline or a soft deadline.
Examples
A periodic task T_i with r_i = 2, p_i = 5, e_i = 2, D_i = 5 executes like this:
[Figure: timeline of job releases, executions, and deadlines.]
Some definitions for periodic task systems
- The jobs of task T_i are denoted J_{i,1}, J_{i,2}, ….
- r_{i,1} (the release time of J_{i,1}) is called the phase of T_i.
- Synchronous system: Each task has a phase of 0.
- Asynchronous system: Phases are arbitrary.
- Hyperperiod: Least common multiple of {p_i}.
- Task utilization: u_i = e_i/p_i.
- System utilization: U = Σ_{i=1..n} u_i.
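The hyperperiod and utilization definitions translate directly into code. The task set used below is the four-task example that appears later in these notes.

```python
from math import gcd
from functools import reduce

def hyperperiod(periods):
    # least common multiple of all task periods
    return reduce(lambda a, b: a * b // gcd(a, b), periods)

def utilization(tasks):
    # tasks is a list of (period, execution_time) pairs; U = sum of e_i/p_i
    return sum(e / p for p, e in tasks)

tasks = [(4, 1), (5, 1.8), (20, 1), (20, 2)]
H = hyperperiod([p for p, _ in tasks])
U = utilization(tasks)
```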
Task dependencies
Two main kinds of dependencies:
- Critical sections.
- Precedence constraints. For example, job J_i may be constrained to be released only after job J_k completes.
Tasks with no dependencies are called independent.
Scheduling algorithms
We are generally interested in two kinds of algorithms:
1. A scheduler or scheduling algorithm, which generates a schedule at runtime.
2. A feasibility analysis algorithm, which checks if timing constraints are met.
Usually (but not always) Algorithm 1 is pretty straightforward, while Algorithm 2 is more complex.
Classification of scheduling algorithms
Optimality and feasibility
A schedule is feasible if all timing constraints are met. (The term “correct” is probably better; see the next slide.)
A task set T is schedulable using scheduling algorithm A if A always produces a feasible schedule for T.
A scheduling algorithm is optimal if it always produces a feasible schedule when one exists (under any scheduling algorithm). We can similarly define optimality for a class of schedulers, e.g., “an optimal static-priority scheduling algorithm.”
Feasibility vs. schedulability
To most people in the real-time community, the term “feasibility” refers to an exact schedulability test, while the term “schedulability” refers to a sufficient schedulability test. You may find that these terms are used somewhat inconsistently in the literature.
Clock-driven (or static) scheduling
The model assumes n periodic tasks T_1, …, T_n. The “rest of the world” periodic model is assumed.
T_i is specified by (φ_i, p_i, e_i, D_i), where φ_i is its phase, p_i is its period, e_i is its execution cost per job, and D_i is its relative deadline. We abbreviate this as (p_i, e_i, D_i) if φ_i = 0, and as (p_i, e_i) if φ_i = 0 ∧ p_i = D_i.
We also have aperiodic jobs that are released at arbitrary times.
Schedule table
Our scheduler will schedule periodic jobs using a static schedule that is computed offline and stored in a table T:
- T(t_k) = T_i if T_i is to be scheduled at time t_k;
- T(t_k) = I if no periodic task is scheduled at time t_k.
For most of this chapter, we assume the table is given. Later, we consider one algorithm for producing the table. Note: This algorithm need not be highly efficient.
We will schedule aperiodic jobs (if any are ready) in intervals not used by periodic jobs.
Static, timer-driven scheduling
Example
Consider a system of four tasks, T1 = (4,1), T2 = (5,1.8), T3 = (20,1), T4 = (20,2), and the following static schedule. The first few table entries would be: (0,T1), (1,T3), (2,T2), (3.8,I), (4,T1), …
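One way to picture the table-driven scheduler is as a lookup of the latest entry whose start time has passed. The list-of-pairs representation below is a sketch for illustration, not the book’s data structure.

```python
import bisect

# The first few table entries from the example; "I" marks idle time.
table = [(0, "T1"), (1, "T3"), (2, "T2"), (3.8, "I"), (4, "T1")]
starts = [start for start, _ in table]

def scheduled_at(t):
    # return the task named by the latest entry whose start time is <= t
    return table[bisect.bisect_right(starts, t) - 1][1]
```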
Frames
Let us refine this notion of scheduling. To keep the table small, we divide the time line into frames and make scheduling decisions only at frame boundaries.
- Each job is executed as a procedure call that must fit within a frame.
- Multiple jobs may be executed in a frame, but the table is only examined at frame boundaries (the number of “columns” in the table = the number of frames per hyperperiod).
- In addition to making scheduling decisions, the scheduler also checks for various error conditions, like task overruns, at the beginning of each frame.
We let f denote the frame size.
Frame size constraints
We want frames to be long enough that every job can execute within a frame nonpreemptively. So, f ≥ max_{1 ≤ i ≤ n}(e_i).
To keep the table small, f should divide H. Thus, for at least one task T_i, ⌊p_i/f⌋ − p_i/f = 0.
Let F = H/f. (Note: F is an integer.) Each interval of length H is called a major cycle. Each interval of length f is called a minor cycle. There are F minor cycles per major cycle.
Frame size constraints
We want the frame size to be sufficiently small so that between the release time and deadline of every job, there is at least one full frame. A job released “inside” a frame is not noticed by the scheduler until the next frame boundary. Moreover, if a job has a deadline “inside” frame k + 1, it essentially must complete execution by the end of frame k. Thus, 2f − gcd(p_i, f) ≤ D_i.
Example
Consider a system of four tasks, T1 = (4,1), T2 = (5,1.8), T3 = (20,1), T4 = (20,2). By the first constraint, f ≥ 2. The hyperperiod is 20, so by the second constraint, the possible choices for f are 2, 4, 5, 10, and 20. Only f = 2 satisfies the third constraint. The following is a possible cyclic schedule.
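The three frame-size constraints can be checked mechanically. The sketch below enumerates integer candidate frame sizes for a task set given as (p_i, e_i, D_i) triples; restricting the search to integer frame sizes is an assumption that happens to match this example.

```python
from math import gcd

def valid_frame_sizes(tasks, H):
    # tasks: list of (p, e, D); returns all integer frame sizes f in
    # [1, H] satisfying the three frame-size constraints
    result = []
    for f in range(1, H + 1):
        c1 = f >= max(e for _, e, _ in tasks)        # every job fits in a frame
        c2 = any(p % f == 0 for p, _, _ in tasks)    # f divides some period
        c3 = all(2 * f - gcd(p, f) <= D for p, _, D in tasks)
        if c1 and c2 and c3:
            result.append(f)
    return result

# The four-task example: only f = 2 passes all three constraints.
sizes = valid_frame_sizes([(4, 1, 4), (5, 1.8, 5), (20, 1, 20), (20, 2, 20)], 20)
```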
Job slices
What do we do if the frame size constraints cannot be met? Example: Consider T = {(4, 1), (5, 2, 7), (20, 5)}. By the first constraint, f ≥ 5, but by the third constraint, f ≤ 4!
Solution: “Slice” the task (20, 5) into subtasks, (20, 1), (20, 3), and (20, 1). Then f = 4 works. Here’s a schedule:
Summary of design decisions
Three design decisions: choosing a frame size, partitioning jobs into slices, and placing slices in frames. In general, these decisions cannot be made independently.
Pseudo code for cyclic executive
Improving response times for aperiodic jobs
Intuitively, it makes sense to give hard real-time jobs higher priority than aperiodic jobs. However, this may lengthen the response time of an aperiodic job. Note that there is no point in completing a hard real-time job early, as long as it finishes by its deadline.
Slack stealing
Let the total amount of time allocated to all the slices scheduled in frame k be x_k.
Definition: The slack available at the beginning of frame k is f − x_k.
Change to the scheduler: If the aperiodic job queue is non-empty, let aperiodic jobs execute in each frame whenever there is nonzero slack.
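The slack computation itself is one subtraction per frame. The frame size and x_k values below are made-up numbers purely for illustration.

```python
f = 4                      # frame size (illustrative)
x = [3, 2.5, 4, 1]         # time allocated to periodic slices in each frame

def initial_slack(k):
    # slack available at the beginning of frame k
    return f - x[k]

def can_run_aperiodic(k):
    # aperiodic jobs may execute in frame k whenever slack is nonzero
    return initial_slack(k) > 0
```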
Example
Implementing slack stealing
Use a pre-computed “initial slack” table; initial slack depends only on static quantities. Use an interval timer to keep track of available slack: set the timer when an aperiodic job begins to run. If it goes off, we must start executing periodic jobs.
Problem: Most OSs do not provide sub-millisecond-granularity interval timers. So, to use slack stealing, temporal parameters must be on the order of 100s of milliseconds or seconds.
Scheduling sporadic jobs
Sporadic jobs arrive at arbitrary times and have hard deadlines. This implies we cannot hope to schedule every sporadic job.
When a sporadic job arrives, the scheduler performs an acceptance test to see if the job can be completed by its deadline. We must ensure that a new sporadic job does not cause a previously-accepted sporadic job to miss its deadline. We assume sporadic jobs are prioritized on an earliest-deadline-first (EDF) basis.
Acceptance test
Let s(i, k) be the initial total slack in frames i through k, where 1 ≤ i ≤ k ≤ F. (This quantity depends only on periodic jobs.)
Suppose we are doing an acceptance test at frame t for a newly-arrived sporadic job S with deadline d and execution cost e. Suppose d occurs within frame l + 1, i.e., S must complete by the end of frame l.
Compute the current total slack in frames t through l using
σ(t, l) = s(t, l) − Σ_{d_k ≤ d} (e_k − ξ_k).
The sum is over previously-accepted sporadic jobs S_k with equal or earlier deadlines; ξ_k is the amount of time already spent executing S_k before frame t.
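A simplified version of the slack check can be sketched as follows. Assumptions: slack[i] holds the initial slack of frame i, so s(t, l) = sum(slack[t:l+1]), and each previously-accepted job is represented as a dict with its deadline frame l, cost e, and time xi already executed before frame t. This sketch covers only the slack test for the new job; the full test must also verify that previously-accepted jobs with later deadlines still meet theirs.

```python
def accept(slack, t, l, e, accepted):
    # current slack in frames t..l, starting from the initial slack
    sigma = sum(slack[t:l + 1])
    # subtract remaining work of accepted sporadic jobs with equal or
    # earlier deadlines
    for job in accepted:
        if job["l"] <= l:
            sigma -= job["e"] - job["xi"]
    return sigma >= e
```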
Acceptance test
We’ll specify the rest of the test “algorithmically”…
Acceptance test
To summarize, the scheduler must maintain the following data:
- the pre-computed initial slack table s(i, k);
- the x_k values to use at the beginning of the current frame;
- the current slack s_k of every accepted sporadic job S_k.
Executing sporadic tasks
Executing sporadic tasks
Accepted sporadic jobs are executed like aperiodic jobs in the original algorithm (without slack stealing). Remember, when meeting a deadline is the main concern, there is no need to complete a job early.
One difference: The aperiodic job queue is in FIFO order, while the sporadic job queue is in EDF order. Aperiodic jobs only execute when the sporadic job queue is empty.
As before, slack stealing could be used when executing aperiodic jobs (in which case, some aperiodic jobs could execute when the sporadic job queue is not empty).
Practical considerations
Handling frame overruns. Main issue: Should the offending job be completed or aborted?
Mode changes. During a mode change, the running set of tasks is replaced by a new set of tasks (i.e., the table is changed). We can implement a mode change by having an aperiodic or sporadic mode-change job. (If sporadic, what if it fails the acceptance test?)
Multiprocessors. Like uniprocessors, but the table probably takes longer to precompute.
Network Flow Algorithm for Computing Static Schedules
Initialization: Compute all frame sizes in accordance with the second two frame-size constraints:
- ⌊p_i/f⌋ − p_i/f = 0 for at least one task T_i;
- 2f − gcd(p_i, f) ≤ D_i.
At this point, we ignore the first constraint, f ≥ max_{1 ≤ i ≤ n}(e_i). Recall that this is the constraint that can force us to “slice” a task into subtasks.
Iterative algorithm: For each possible frame size f, we construct a network flow graph and run a max-flow algorithm. If the flow thus found has a certain value, then we have a schedule.
56
Flow graph Denote all jobs in the major cycle of F frames as J_1, J_2, …, J_N. Vertices: N job vertices, denoted J_1, J_2, …, J_N. F frame vertices, denoted 1, 2, …, F. A source and a sink. Edges: (J_i, j) with capacity f iff J_i can be scheduled in frame j. (source, J_i) with capacity e_i. (j, sink) with capacity f.
57
Example
58
Finding a schedule The maximum attainable flow value is clearly Σ_{i=1,…,N} e_i. This corresponds to the exact amount of computation to be scheduled in the major cycle. If a max flow is found with value Σ_{i=1,…,N} e_i, then we have a schedule. If a job is scheduled across multiple frames, then we must slice it into corresponding subjobs.
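The whole test — build the flow graph of the previous slide, run max flow, compare against Σ e_i — fits in a short sketch. The job representation and function names are assumptions; Edmonds-Karp stands in for whichever max-flow algorithm is used:

```python
from collections import defaultdict, deque

def max_flow(cap, s, t):
    """Edmonds-Karp max flow on a dict-of-dicts capacity map."""
    total = 0
    while True:
        # BFS for a shortest augmenting path
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in list(cap[u].items()):
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return total
        # find the bottleneck along the path, then augment
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= aug
            cap[v][u] += aug
        total += aug

def schedulable(jobs, F, f):
    """Build the flow graph from the slides and test schedulability.
    jobs: list of (e_i, frames_where_J_i_may_run); F: number of frames;
    f: frame size. Schedulable iff max flow equals the sum of the e_i."""
    cap = defaultdict(lambda: defaultdict(int))
    for i, (e, eligible) in enumerate(jobs):
        cap["src"][("J", i)] = e              # (source, J_i), capacity e_i
        for j in eligible:
            cap[("J", i)][("F", j)] = f       # (J_i, j), capacity f
    for j in range(F):
        cap[("F", j)]["snk"] = f              # (j, sink), capacity f
    return max_flow(cap, "src", "snk") == sum(e for e, _ in jobs)
```

Reading a concrete schedule (and the slicing of jobs split across frames) back out would require inspecting the residual capacities on the (J_i, j) edges, omitted here.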
59
Example
60
Non-independent tasks Tasks with precedence constraints are no problem. We can enforce a precedence constraint like “J_i precedes J_k” by simply making sure J_i’s release is at or before J_k’s release, and J_i’s deadline is at or before J_k’s deadline. If slices of J_i and J_k are scheduled in the wrong order, we can just swap them. Critical sections pose a greater challenge. We can try to “massage” the flow-network schedule into one where nonpreemption constraints are respected. Unfortunately, there is no known efficient, optimal algorithm for doing this (the problem is actually NP-hard).
61
Pros and Cons of Cyclic Executives Main Advantage: CEs are very simple - you just need a table. For example, additional mechanisms for concurrency control and synchronization are not needed. In fact, there’s really no notion of a “process” here – just procedure calls. Can validate, test, and certify with very high confidence. Certain anomalies will not occur. For these reasons, cyclic executives are the predominant approach in many safety-critical applications (like airplanes).
62
Aside: Scheduling Anomalies Here’s an example: On a multiprocessor, decreasing a job’s execution cost can increase some job’s response time. Example: Suppose we have one job queue, preemption, but no migration.
63
Disadvantages Disadvantages of cyclic executives: Very brittle: Any change, no matter how trivial, requires that a new table be computed! Release times of all jobs must be fixed, i.e., “real-world” sporadic tasks are difficult to support. Temporal parameters essentially must be multiples of f. F could be huge! All combinations of periodic tasks that may execute together must a priori be analyzed. From a software engineering standpoint, “slicing” one procedure into several could be error-prone.
64
Dynamic-priority scheduling Let us consider a priority-based dynamic scheduling approach. Each job is assigned a priority, and the highest-priority job executes at any time. Under dynamic-priority scheduling, different jobs of a task may be assigned different priorities. Can have the following: job J_{i,k} of task T_i has higher priority than job J_{j,m} of task T_j, but job J_{i,l} of T_i has lower priority than job J_{j,n} of T_j.
65
Outline We consider both earliest-deadline-first (EDF) and least-laxity-first (LLF) scheduling. Outline: Optimality of EDF and LLF Utilization-based schedulability test for EDF
66
Optimality of EDF Theorem: [Liu and Layland] When preemption is allowed and jobs do not contend for resources, the EDF algorithm can produce a feasible schedule of a set J of independent jobs with arbitrary release times and deadlines on a processor if and only if J has feasible schedules. Notes: Applies even if tasks are not periodic. If periodic, a task’s relative deadline can be less than its period, equal to its period, or greater than its period.
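Because of this theorem, simulating EDF doubles as a feasibility test: the job set is feasible exactly when the EDF simulation meets every deadline. A sketch, assuming integer parameters (so unit-step simulation is exact) and a hypothetical job representation:

```python
def edf_feasible(jobs):
    """Preemptive EDF on one processor, simulated in unit time steps.
    jobs: list of (release, execution, absolute_deadline) integer triples.
    Returns True iff every job meets its deadline -- by the optimality
    theorem, iff some feasible schedule exists at all."""
    remaining = [e for _, e, _ in jobs]
    t = 0
    while any(r > 0 for r in remaining):
        # a deadline has passed with work left over: infeasible
        if any(r > 0 and jobs[i][2] <= t for i, r in enumerate(remaining)):
            return False
        ready = [i for i, (rel, _, _) in enumerate(jobs)
                 if rel <= t and remaining[i] > 0]
        if ready:
            # run the ready job with the earliest deadline for one step
            i = min(ready, key=lambda i: jobs[i][2])
            remaining[i] -= 1
        t += 1
    return True
```

This tests a finite job set; for a periodic task set one would simulate the jobs released over a hyperperiod (or use the utilization test in the outline).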
67
Proof We show that any feasible schedule of J can be systematically transformed into an EDF schedule. Suppose parts of two jobs J_i and J_k are executed out of EDF order: This situation can be corrected by performing a “swap”:
68
Proof If we inductively repeat this procedure, we can eliminate all out-of-order violations. The resulting schedule may still fail to be an EDF schedule because it has idle intervals where some job is ready: Such idle intervals can be eliminated by moving some jobs forward:
69
LLF Scheduling Definition: At any time t, the slack (or laxity) of a job with deadline d is equal to d - t minus the time required to complete the remaining portion of the job. LLF Scheduling: The job with the smallest laxity has highest priority at all times.
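The definition translates directly into code. A sketch; function names and the (deadline, remaining) pair representation are assumed:

```python
def laxity(t, deadline, remaining):
    """Slack of a job at time t: (d - t) minus its remaining execution."""
    return (deadline - t) - remaining

def llf_pick(t, jobs):
    """LLF rule: return the index of the minimum-laxity job at time t.
    jobs: list of (deadline, remaining) pairs."""
    return min(range(len(jobs)), key=lambda i: laxity(t, *jobs[i]))
```

Note that, unlike EDF, laxity changes as time passes while a job waits, so an LLF scheduler must re-evaluate priorities continually, which can cause frequent preemptions.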
70
Optimality of LLF Theorem: When preemption is allowed and jobs do not contend for resources, the LLF algorithm can produce a feasible schedule of a set J of independent jobs with arbitrary release times and deadlines on a processor if and only if J has feasible schedules. The proof is similar to that for EDF.