Real-time Scheduling Review Venkita Subramonian venkita@cs.wustl.edu Research Seminar on Software Systems February 2, 2004
Main Topics for Discussion
- Single-processor scheduling
- End-to-end scheduling
- Holistic scheduling
What is a Real-time System?
Real-time systems have been defined as "those systems in which the correctness of the system depends not only on the logical result of the computation, but also on the time at which the results are produced" (J. Stankovic, "Misconceptions About Real-Time Computing," IEEE Computer, 21(10), October 1988).
Real-time does not necessarily mean "real fast"; predictability is key in real-time systems.
"There was a man who drowned crossing a stream with an average depth of six inches" – J. Stankovic
Real-time Scheduling
- Job (J_ij): a unit of work, scheduled and executed by the system. Jobs repeated at regular or semi-regular intervals are modeled as periodic.
- Task (T_i): a set of related jobs. Jobs are scheduled and allocated resources based on a set of scheduling algorithms and access-control protocols.
- Scheduler: the module implementing the scheduling algorithms.
- Schedule: an assignment of all jobs to the available processors, produced by the scheduler. A schedule is valid if all jobs meet their deadlines.
- Clock-driven scheduling vs. event-driven (priority-driven) scheduling
- Fixed-priority vs. dynamic-priority assignment
Scheduling Periodic Tasks
In hard real-time systems, the set of tasks is known a priori. Task T_i is a series of periodic jobs J_ij. Each task has the following parameters:
- p_i – period, the minimum inter-release interval between jobs in task T_i
- e_i – maximum execution time of jobs in task T_i
- r_ij – release time of the j-th job in task T_i (J_ij in T_i)
- φ_i – phase of task T_i, equal to r_i1
- u_i – utilization of task T_i, u_i = e_i / p_i
In addition, the following parameters apply to a set of tasks:
- H – hyperperiod, the least common multiple of the periods: H = lcm(p_i) over all i
- U – total utilization, the sum of u_i over all i
- U_s – schedulable utilization of an algorithm: if U ≤ U_s, the task set is guaranteed to be schedulable
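The hyperperiod and total utilization defined above can be computed directly; a minimal sketch, using a hypothetical task set:

```python
from functools import reduce
from math import gcd

def hyperperiod(periods):
    # H = lcm(p_i) over all tasks
    return reduce(lambda a, b: a * b // gcd(a, b), periods)

def total_utilization(tasks):
    # tasks: list of (p_i, e_i) pairs; U = sum of u_i = e_i / p_i
    return sum(e / p for p, e in tasks)

tasks = [(4, 1), (6, 2), (12, 3)]       # hypothetical (period, execution) pairs
H = hyperperiod([p for p, _ in tasks])  # lcm(4, 6, 12) = 12
U = total_utilization(tasks)            # 1/4 + 1/3 + 1/4 ≈ 0.833
```

Any schedule over one hyperperiod repeats identically in every subsequent hyperperiod, which is why H bounds the interval a clock-driven scheduler must plan for.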
Fixed-Priority Algorithms
Rate-monotonic (RM) scheduling
- Priority assignment based on the rates of tasks: the higher the rate, the higher the priority
- Schedulable utilization = 0.693 (Liu and Layland): if U ≤ 0.693, schedulability is guaranteed; tasks may still be schedulable even if U > 0.693
Deadline-monotonic (DM) scheduling
- Priority assignment based on the relative deadlines of tasks: the shorter the relative deadline, the higher the priority
- Useful when relative deadline ≠ period
Both are usually done off-line, since a fixed priority is assigned at the task level. An online dispatcher then enforces the schedule by dispatching higher-priority jobs before lower-priority jobs.
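The 0.693 figure is the limit of the Liu and Layland bound U_s(n) = n(2^{1/n} − 1), which starts at 1.0 for a single task and decreases toward ln 2 ≈ 0.693 as n grows. A sketch of the resulting sufficient (not necessary) test:

```python
def rm_bound(n):
    # Liu & Layland schedulable utilization for n tasks: n * (2^(1/n) - 1)
    return n * (2 ** (1.0 / n) - 1)

def rm_schedulable(tasks):
    # Sufficient test only: a task set that fails it may still be schedulable.
    # tasks: list of (period, execution time) pairs
    U = sum(e / p for p, e in tasks)
    return U <= rm_bound(len(tasks))

rm_bound(1)                       # 1.0: one task may use the full processor
rm_bound(2)                       # ~0.828
rm_schedulable([(4, 1), (6, 2)])  # U ~ 0.583 <= 0.828, so guaranteed
```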
Dynamic-Priority Algorithms
An online scheduler assigns priorities to jobs as they are released; the dispatcher dispatches the highest-priority job. Suitable for scheduling aperiodic as well as periodic tasks.
Earliest Deadline First (EDF)
- Priority assignment based on the absolute deadlines of jobs: the job with the closest deadline is assigned the highest priority
- Schedulable utilization = 1
Least Laxity First (LLF)
- Laxity = time remaining until the absolute deadline minus the remaining worst-case computation time
- Priority assignment based on the laxity of jobs: the job with the minimum laxity is assigned the highest priority
- Schedulable utilization = 1
Dynamic-priority algorithms provide better processor utilization than fixed-priority algorithms.
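The two rules can pick different jobs from the same ready queue; a sketch using hypothetical job records ('deadline' is absolute, 'remaining' is remaining worst-case computation time):

```python
def edf_pick(jobs):
    # EDF: highest priority = earliest absolute deadline
    return min(jobs, key=lambda j: j['deadline'])

def llf_pick(jobs, now):
    # LLF: highest priority = least laxity, where
    # laxity = (absolute deadline - now) - remaining computation time
    return min(jobs, key=lambda j: (j['deadline'] - now) - j['remaining'])

ready = [{'id': 'A', 'deadline': 5, 'remaining': 1},
         {'id': 'B', 'deadline': 6, 'remaining': 5}]
edf_pick(ready)['id']     # 'A': deadline 5 beats 6
llf_pick(ready, 0)['id']  # 'B': laxity 1 beats laxity 4
```

Note that laxity changes as time advances, so an LLF scheduler must re-evaluate priorities at runtime, whereas a job's EDF priority is fixed once it is released.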
Hybrid Algorithms
Not all algorithms are robust in overload situations. To improve predictability for critical tasks, use a combination of fixed- and dynamic-priority algorithms:
- Tasks are divided by criticality into critical and non-critical
- Critical tasks are scheduled using fixed-priority assignment
- Non-critical tasks are scheduled using dynamic-priority assignment
Examples: Maximum Urgency First (MUF), RM + MLF
Blocking Factors
Sometimes a higher-priority job cannot run because:
- The currently running lower-priority job is non-preemptive (priority inversion), e.g., a non-preemptable system call
- Self-suspension, e.g., I/O operations, remote calls
These blocking delays need to be taken into account in schedulability analysis. The blocking delay should include:
- The maximum blocking time due to non-preemptability of lower-priority tasks
- The job's own maximum self-suspension time and the maximum self-suspension time of all higher-priority tasks
- Context switches
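Blocking can be folded into standard fixed-priority response-time analysis as a per-task term B_i. A sketch, assuming deadline = period and tasks sorted by decreasing priority, with B_i lumping together the non-preemption, self-suspension, and context-switch contributions listed above:

```python
from math import ceil

def response_time(i, tasks, B):
    # tasks: (period, execution) pairs, sorted highest priority first
    # Iterate the fixed point of R = e_i + B_i + sum_{j<i} ceil(R / p_j) * e_j
    p_i, e_i = tasks[i]
    R = e_i + B[i]
    while True:
        nxt = e_i + B[i] + sum(ceil(R / p) * e for p, e in tasks[:i])
        if nxt == R:
            return R     # converged: worst-case response time
        if nxt > p_i:
            return None  # exceeds deadline (= period): unschedulable
        R = nxt

tasks = [(4, 1), (6, 2)]         # hypothetical task set
response_time(1, tasks, [0, 0])  # 3: one preemption by the higher-priority task
response_time(1, tasks, [0, 1])  # 4: one unit of blocking added
```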
ORB Endsystem Example (1/2)
Wait-on-Connection
- The ReplyHandler in the ORB waits on the socket connection for the reply (a blocking call to recv())
- One less thread listening on the Reactor for new requests
- No interference from other requests that arrive while a reply is pending
- However, it can cause deadlocks on nested upcalls
Wait-on-Reactor
- The ReplyHandler waits in the Reactor for the reply
- The Reactor gets a chance to process other requests while replies are pending
- Interleaving of request/reply processing, hence interference from other requests while a reply is pending
- Ideal for single-threaded processing
ORB Endsystem Example (2/2)
The Wait-on-Reactor strategy can cause interleaved request/reply processing. The resulting blocking factor may be bounded or unbounded, depending on the upcall durations and the number of intervening upcalls. Blocking factors may affect the real-time properties of other endsystems, and call chains can have a cascading blocking effect.
[Timeline figure: while f_2 awaits its reply, f_5's reply is queued, f_3 runs and returns, and f_5's reply is processed before f_2 returns; the intervening upcalls form the blocking factor for f_2.]
Algorithm Selection (Single Processor)
- Periodic: RMS / MUF / DM
- Periodic, blocking: RMS / MUF / DM with Priority Ceiling
- Periodic, predictable overload behavior: MUF, RM + MLF
- Maximum utilization: EDF, MLF
End-to-End Scheduling
End-to-End Task Model
A task is composed of multiple subtasks running on multiple processors, e.g., remote method invocations, non-local events, messages. Subtasks are subject to precedence constraints: a task is a chain of subtasks. A task is subject to an end-to-end deadline; we do not care about the response time of any particular subtask.
End-to-end scheduling should address:
- Task allocation: binding tasks to processors
- Synchronization protocols: enforcing precedence constraints
- Subdeadline assignment
- Schedulability analysis
Thanks to Dr. Lu for permitting the use of material from CS520 slides.
Task Allocation Strategies
- Offline, static allocation
- Allocate a task when it arrives
- Re-allocate (migrate) a task after it starts
The problem is NP-hard, so heuristics are needed. Bin-packing formulation:
- Pack subtasks into bins (processors) with limited capacity
- "Size" of a subtask T_i,j: u_i,j = e_i,j / p_i
- "Capacity" of each bin is its utilization bound, e.g., 0.69 (RMS) or 1 (EDF) under ideal assumptions
- Goal: minimize the number of bins subject to the capacity constraints
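The bin-packing formulation suggests simple greedy heuristics such as first-fit; a sketch over subtask utilizations, with hypothetical values and the RMS-style 0.69 capacity as the default bound:

```python
def first_fit(utils, bound=0.69):
    # Assign each subtask utilization to the first processor it fits on,
    # opening a new processor when none has enough remaining capacity.
    loads = []       # current utilization of each open processor
    assignment = []  # processor index chosen for each subtask
    for u in utils:
        for k, load in enumerate(loads):
            if load + u <= bound:
                loads[k] += u
                assignment.append(k)
                break
        else:
            loads.append(u)
            assignment.append(len(loads) - 1)
    return assignment, len(loads)

first_fit([0.4, 0.4, 0.3, 0.2])  # ([0, 1, 2, 0], 3)
```

First-fit is not optimal (it is the NP-hardness of the exact problem that forces heuristics), but it runs in polynomial time and never uses more than a small constant factor more bins than the optimum.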
The Synchronization Problem
Given that priorities are assigned to the subtasks in a task chain using some fixed-priority assignment algorithm, how do we coordinate the release of subtasks in the chain so that:
- precedence constraints among subtasks are satisfied,
- subtask deadlines are met, and
- end-to-end deadlines are met?
Synchronization Protocols
- Direct Synchronization (DS) protocol: simple and straightforward
- Phase Modification (PM) protocol: proposed by Bettati; an extension is the Modified Phase Modification (MPM) protocol
- Release Guard (RG) protocol: proposed by Sun
Synchronization Protocol – Example
Two processors, P1 and P2. Parameters are (period, execution time), with period = relative deadline of the parent task; T_i,j denotes the j-th subtask of task T_i.
- T_1 = (4, 2) on P1
- T_2,1 = (6, 2) on P1; T_2,2 = (6, 2) on P2
- T_3 = (6, 3) on P2
Task T_3 has a phase of 4 time units.
Direct Synchronization Protocol
A greedy strategy: on completion of a subtask, a synchronization signal is sent to the next processor, and the successor subtask competes with the other tasks/subtasks on that processor.
Direct Synchronization Illustrated
[Timeline over 12 time units for the example task set, with T_1 and T_2,1 on P1 and T_2,2 and T_3 on P2, T_3 released at its phase of 4: under DS, T_3 misses its deadline.]
Phase Modification Protocol
Proposed by Bettati. Subtasks are released periodically, according to the periods of their parent tasks. Each subtask is given its own phase, determined by the subtask precedence constraints.
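Under PM, each subtask's phase is its predecessor's phase plus the predecessor's estimated worst-case response time; a sketch with hypothetical response times:

```python
def pm_phases(first_phase, worst_case_response_times):
    # Phase of subtask k+1 = phase of subtask k
    #                        + estimated worst-case response time of subtask k
    phases = [first_phase]
    for R in worst_case_response_times[:-1]:
        phases.append(phases[-1] + R)
    return phases

pm_phases(0, [2, 3, 1])  # [0, 2, 5]
```

Because each subtask is released at a precomputed time rather than on a signal from its predecessor, PM needs no runtime synchronization messages, but it does need the worst-case response-time estimates to hold.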
Phase Modification Protocol Illustrated (1/2)
[Figure: subtasks T_1,1, T_1,2, T_1,3 each released with period p_1. The phase of T_1,2 equals the estimated worst-case response time of T_1,1, and the phase of T_1,3 adds that of T_1,2; actual response times may be shorter than the estimates.]
Phase Modification Protocol Illustrated (2/2)
[Timeline over 12 time units for the example task set on P1 and P2: T_3 is released at its phase of 4, and T_2,2 is released at its own computed phase rather than on a synchronization signal from T_2,1.]
Phase Modification Protocol – Analysis
- Requires a periodic timer interrupt to release subtasks
- Requires a centralized clock or strict clock synchronization
- Task overruns can cause precedence-constraint violations
Modified PM Protocol Illustrated (1/2)
[Figure: when T_1,1 overruns its estimated worst-case response time by Δ, the release of T_1,2 is delayed accordingly, so the spacing between their releases becomes p_1 + Δ and the precedence constraint is preserved.]
Modified PM Protocol Illustrated (2/2)
[Timeline for the example task set on P1 and P2: the synchronization signal to T_2,2 is delayed until its predecessor completes, preserving the precedence constraint.]
Modified PM Protocol – Analysis
MPM behaves the same as PM under ideal conditions (synchronized clocks, no overruns), but MPM does not need clock synchronization, and precedence constraints are preserved even in the case of overruns.
Upper bound on the end-to-end response time of task T_i:
  R_i ≤ Σ_{k=1..n_i} R_i,k
where R_i,k is the worst-case response time of the k-th subtask of T_i and n_i is the number of subtasks of T_i.
Lower bound on the end-to-end response time of task T_i:
  R_i ≥ Σ_{k=1..n_i−1} R_i,k + (actual response time of the n_i-th subtask)
The lower bound is high, hence a high average end-to-end response time, but low output jitter.
Release Guard Protocol
Proposed by Sun. A guard variable – the release guard – is associated with each subtask and controls its release: it contains the next release time of the subtask. Synchronization signals are sent just as in MPM. The release guard is updated:
- on receiving a synchronization signal
- during idle time
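The two update rules can be sketched as a small state machine; this is a sketch of the idea rather than Sun's exact formulation, and the period and times are hypothetical:

```python
class ReleaseGuard:
    """One guard per subtask; holds the earliest permitted next release time."""

    def __init__(self, period):
        self.period = period
        self.guard = 0   # next release time; 0 = release at will

    def on_sync_signal(self, now):
        # Release when the guard allows; then push the guard one period out,
        # enforcing the parent task's period between consecutive releases.
        release_at = max(now, self.guard)
        self.guard = release_at + self.period
        return release_at

    def on_idle(self, now):
        # Idle time detected: it is safe to allow an earlier next release.
        self.guard = min(self.guard, now)

rg = ReleaseGuard(period=6)
rg.on_sync_signal(4)  # released at 4; guard becomes 4 + 6 = 10
rg.on_idle(9)         # idle detected: guard lowered to 9
rg.on_sync_signal(9)  # released at 9 instead of waiting until 10
```

The idle-time rule is what gives RG a lower average end-to-end response time than MPM while keeping the same worst-case bound.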
Release Guard Protocol Illustrated
[Timeline for the example task set on P1 and P2: after a release at time 4, the guard is set to g_1,2 = 4 + 6 = 10; when idle time is detected at time 9, the guard is lowered to g_1,2 = 9, allowing an earlier release.]
Release Guard Protocol – Analysis
Shares the same advantages as MPM. The upper bound on the end-to-end response time is the same as for MPM, since an upper bound on each release time is enforced by the release guard (R_i,k is the response time of the k-th subtask of T_i; n_i is the number of subtasks of T_i). The lower bound on the end-to-end response time is less than that of MPM if there are idle times, which results in a lower average end-to-end response time.
Subdeadline Assignment
Subdeadlines determine priorities under EDF and DM. Optimal subdeadline assignment is NP-hard.
- Offline: heuristic search algorithms
- Online: simpler heuristics
Effective deadline (ED): work backwards from the end-to-end deadline.
Slack assignment:
- Assign all slack to the first subtask
- Assign slack proportionally to execution time
- Assign more slack to subtasks on busier processors
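The effective-deadline heuristic can be sketched by subtracting downstream execution time from the end-to-end deadline, working backwards from the last subtask (the numbers are hypothetical):

```python
def effective_deadlines(exec_times, end_to_end_deadline):
    # Work backwards: the last subtask keeps the end-to-end deadline;
    # each earlier subtask must finish in time for its successors to run.
    deadlines = []
    remaining = end_to_end_deadline
    for e in reversed(exec_times):
        deadlines.append(remaining)
        remaining -= e
    return list(reversed(deadlines))

effective_deadlines([2, 3, 1], 20)  # [16, 19, 20]
```

This assigns no slack to intermediate subtasks beyond their successors' execution times; the slack-assignment variants above redistribute the remaining slack differently.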
Holistic Scheduling
Combines processor scheduling with communication-bus scheduling to provide an integrated schedulability analysis. Calculates bounds on end-to-end delays in distributed systems, including communication delays. Typically used in hard real-time systems to calculate the worst-case end-to-end response time of tasks.
Algorithm Selection
Use MPM or RG if information about all tasks is available a priori and the system has global clock synchronization; otherwise only RG can be used. Use MPM for low jitter and RG for a lower average end-to-end response time.
References
- Jun Sun and Jane Liu, "Synchronization Protocols in Distributed Real-Time Systems," ICDCS 1996.
- Jane Liu, Real-Time Systems.
- Ken Tindell and John Clark, "Holistic Schedulability Analysis for Distributed Hard Real-Time Systems," Microprocessing and Microprogramming – Euromicro Journal, 1994 (Special Issue on Parallel Embedded Real-Time Systems).
- Stankovic, Lu, et al., "VEST: An Aspect-Based Composition Tool for Real-Time Systems," RTAS 2003.
DM with phase offset