Published by William Hunter. Modified over 9 years ago.
Slide 1: Is It Time Yet?
Wing On Chan
Slide 2: Distributed Systems – Chapter 18: Scheduling (Hermann Kopetz)
Slide 3: Scheduling Algorithm Classifications
Real-Time Scheduling
– Soft
– Hard
  – Dynamic
    » Preemptive
    » Non-preemptive
  – Static
    » Preemptive
    » Non-preemptive
Slide 4: Scheduling Problem
Distributed hard real-time systems
– Execute a set of concurrent RT transactions such that all time-critical transactions meet their deadlines
– Transactions need resources (computational, communication, and data)
– Decomposition
  » Within a node
  » Communication resources
Slide 5: Hard RT vs Soft RT Scheduling
Hard RT systems
– Deadlines must be guaranteed before execution starts
– A mere probability that the transaction finishes before its deadline is not enough
– Off-line schedulability tests
– Feasible static schedules
Slide 6: Hard RT vs Soft RT Scheduling
Soft RT systems
– Violation of timing constraints is not critical
– Cheaper, resource-inadequate solutions can be used
– Under adverse conditions, it is tolerable that transactions miss their timing constraints
Slide 7: Dynamic vs Static Scheduling
Dynamic (on-line) scheduling
– Considers only
  » Actual requests
  » Execution time parameters
– Costly to find a schedule
Static (off-line) scheduling
– Complete knowledge of
  » Maximum execution times
  » Precedence constraints
  » Mutual exclusion constraints
  » Deadlines
Slide 8: Preemptive vs Non-Preemptive Scheduling
Preemptive
– A task can be interrupted by more urgent tasks
  » Safety assertions
Non-preemptive
– No interruptions
– Shortest guaranteed response time = longest task execution time + shortest task execution time
– Reasonable for scenarios with many short tasks
Slide 9: Central vs Distributed Scheduling
Dynamic distributed RT systems
– Central scheduling
– Distributed algorithms
  » Require up-to-date information in all nodes
  » Significant communication costs
Slide 10: Schedulability Test
Determines if a schedule exists
– Exact
– Necessary
– Sufficient
Optimal scheduler
– A scheduler is optimal if it can find a schedule whenever the exact schedulability test says one exists
Exact schedulability test
– Belongs to the class of NP-complete problems
Slide 11: Schedulability Test
Sufficient schedulability test
– A sufficient but not necessary condition
Necessary schedulability test
– A necessary but not sufficient condition
– Example: the difference between deadline d_i and computation time c_i (the laxity) must be non-negative
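The laxity condition above can be sketched as a small necessary test (a minimal illustration in Python; representing each task as a (d, c) pair is an assumption, not from the slides):

```python
def laxity(d, c):
    """Laxity of a task: time to deadline d minus computation time c."""
    return d - c

def passes_necessary_test(tasks):
    """A schedule can only exist if every task's laxity is non-negative.

    tasks: list of hypothetical (d, c) pairs.
    Passing this test does NOT guarantee a schedule exists (it is
    necessary, not sufficient).
    """
    return all(laxity(d, c) >= 0 for d, c in tasks)
```

Note that a task with zero laxity (d = c) still passes: it is schedulable in isolation, but leaves no slack for any interference.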
Slide 12: Periodic Tasks
After the initial task request, all future requests are known
– Obtained by adding multiples of the known period to the initial request time
Slide 13: Periodic Tasks
Task set {T_i} of periodic tasks
– Periods p_i
– Deadlines d_i
– Processing requirements c_i
It is sufficient to examine schedules with length equal to the least common multiple of the periods in {T_i}
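The schedule length mentioned above (often called the hyperperiod) is easy to compute; a sketch in Python (requires Python 3.9+ for `math.lcm`):

```python
import math

def hyperperiod(periods):
    """Least common multiple of all task periods.

    A feasible periodic schedule repeats with this length, so a
    schedulability check only needs to examine one hyperperiod.
    """
    return math.lcm(*periods)
```

For example, periods of 4, 5, and 10 time units give a hyperperiod of 20, after which the release pattern repeats exactly.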
Slide 14: Periodic Tasks
Necessary schedulability test
– The sum of the utilization factors μ_i must be less than or equal to n, where n is the number of processors
– μ = Σ (c_i / p_i) <= n
– μ_i = c_i / p_i = the percentage of time task T_i requires the service of a CPU
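The utilization test above translates directly into code (a sketch; the (c, p) pair encoding and function names are assumptions):

```python
def total_utilization(tasks):
    """Sum of utilization factors mu_i = c_i / p_i.

    tasks: list of (c, p) pairs -- computation time and period.
    """
    return sum(c / p for c, p in tasks)

def necessary_test(tasks, n_processors=1):
    """Necessary (not sufficient) condition: total utilization must
    not exceed the number of processors."""
    return total_utilization(tasks) <= n_processors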
Slide 15: Sporadic Tasks
Request times are not known beforehand
– There must be a minimum interval p_i between any two request times of a sporadic task
– If no such p_i exists, the necessary schedulability test will fail
Aperiodic tasks: no constraints on request times
Slide 16: Optimal Dynamic Scheduling
Consider a dynamic scheduler with full knowledge of the past only
– An exact schedulability test is impossible
– New definition of an optimal dynamic scheduler:
  » Optimal if it can find a schedule whenever a clairvoyant scheduler can find one
Slide 17: Adversary Argument
If there are mutual exclusion constraints between periodic and sporadic tasks, then, in general, it is impossible to find an optimal totally on-line dynamic scheduler.
Slide 18: Adversary Argument
Slide 19: Adversary Argument
Necessary schedulability test
– μ = Σ (c_i / p_i) <= n
– μ = (2/4) + (1/4) = 3/4 <= 1
Suppose that when T_1 starts, T_2 requests service
– The tasks are mutually exclusive
– T_2 has a laxity of d_2 – c_2 = 1 – 1 = 0
– T_2 will miss its deadline
Slide 20: Adversary Argument
Clairvoyant scheduler
– Schedules the periodic task between the sporadic tasks
– The laxity of the periodic task is greater than the execution time of the sporadic task, so the scheduler will always find a schedule
Slide 21: Adversary Argument
If the on-line scheduler has no future knowledge about sporadic tasks, the scheduling problem becomes unsolvable
Predictable hard RT systems are only feasible if there are regularity assumptions
Slide 22: Dynamic Scheduling
Dynamic scheduling algorithm
– Determines the next task to run after the occurrence of a significant event
– Based on the current task requests
Slide 23: Rate Monotonic Algorithm
– The classic algorithm for hard RT systems with a single CPU
– Dynamic preemptive algorithm
– Static task priorities
Slide 24: Rate Monotonic Algorithm
Assumptions:
1. All requests in the set {T_i} are periodic
2. All tasks are independent; no precedence or mutual exclusion constraints
3. d_i = p_i
4. The maximum c_i is known and constant
5. Context switching time is ignored
6. μ = Σ (c_i / p_i) <= n (2^(1/n) – 1), where n is the number of tasks [the bound approaches ln 2 ≈ 0.69 as n grows]
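Assumption 6 is the Liu–Layland utilization bound, and it doubles as a sufficient schedulability test; a sketch in Python (the (c, p) encoding is an assumption):

```python
def rm_sufficient_test(tasks):
    """Sufficient (not necessary) rate-monotonic test: the task set is
    schedulable if total utilization stays within n * (2**(1/n) - 1).

    tasks: list of (c, p) pairs. A set that fails this test may still
    be schedulable -- an exact analysis would be needed to tell.
    """
    n = len(tasks)
    mu = sum(c / p for c, p in tasks)
    return mu <= n * (2 ** (1 / n) - 1)
```

For two tasks the bound is 2·(√2 − 1) ≈ 0.828, so a two-task set with total utilization 0.5 passes while one with utilization 1.0 does not, even though the latter might still meet all deadlines under EDF.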
Slide 25: Rate Monotonic Algorithm
The algorithm defines the task priorities
– Tasks with short p_i get higher priority
– Tasks with longer p_i get lower priority
– During run-time, always run the highest-priority task
If all assumptions are met, all T_i meet their deadlines
Optimal for single-processor systems
Slide 26: Earliest-Deadline-First Algorithm
– An optimal dynamic preemptive algorithm
– Uses dynamic priorities
– Assumptions 1-5 of the rate monotonic algorithm must also hold
– μ can go up to 1, even with tasks whose p_i are not multiples of the shortest period
– After a significant event, the task with the shortest d_i gets the highest dynamic priority
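Under assumptions 1-5, EDF behaviour can be illustrated with a small discrete-time simulation over one hyperperiod (a sketch, not Kopetz's formulation; the (c, p) tuple encoding with d_i = p_i is an assumption). Requires Python 3.9+ for `math.lcm`:

```python
import math

def edf_schedulable(tasks):
    """Simulate preemptive EDF on one CPU in unit time steps.

    tasks: list of (c, p) pairs with deadline d == p (assumption 3).
    Returns True if no job misses its deadline within one hyperperiod.
    """
    horizon = math.lcm(*(p for _, p in tasks))
    jobs = []  # each job: [remaining work, absolute deadline]
    for t in range(horizon):
        # release a new job of each task at multiples of its period
        for c, p in tasks:
            if t % p == 0:
                jobs.append([c, t + p])
        # a job whose deadline has arrived with work left is a miss
        if any(rem > 0 and t >= dl for rem, dl in jobs):
            return False
        ready = [j for j in jobs if j[0] > 0]
        if ready:
            # the earliest absolute deadline gets the CPU this step
            min(ready, key=lambda j: j[1])[0] -= 1
    return all(j[0] == 0 for j in jobs)
```

A set with μ = 1 (e.g. c/p of 1/2 and 2/4) is accepted, matching the claim that EDF utilization can go up to 1; an overloaded set (1/2 and 3/4, μ = 1.25) is rejected.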
Slide 27: Least-Laxity Algorithm
– Optimal in single-processor systems
– Same assumptions as the earliest-deadline-first algorithm
– At a scheduling decision point, the task with the shortest laxity (d_i – c_i) is given the highest dynamic priority
In multiprocessor systems
– Earliest-deadline-first and least-laxity algorithms are not optimal
– The least-laxity algorithm can handle task scenarios that the earliest-deadline-first algorithm cannot
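The least-laxity decision rule differs from EDF only in the key used to pick the next task; a minimal sketch (the (name, remaining, deadline) job encoding is an assumption):

```python
def pick_least_laxity(ready, now):
    """Pick the job to run at a scheduling decision point.

    ready: list of (name, remaining work, absolute deadline) jobs.
    Laxity = time until deadline minus remaining work; the job with
    the smallest laxity is the most urgent.
    """
    return min(ready, key=lambda j: (j[2] - now) - j[1])
```

Note how laxity accounts for remaining work: a job with a later deadline but much work left can be more urgent than one with an earlier deadline that is nearly done, which is where LLF and EDF diverge.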
Slide 28: Scheduling Dependent Tasks
Analysis of tasks with precedence and mutual exclusion constraints is more useful in practice
– Scheduling tasks that compete for resources
Possible solutions
– Provide extra resources; simpler sufficient schedulability tests and algorithms
– Divide the problem into two parts
  » One solved at compile time
  » One solved during run-time (the simpler of the two)
– Add restricting regularity assumptions
Slide 29: Kernelized Monitor
For a set of short critical sections, the longest critical section is shorter than a given duration q
Allocates processor time in uninterruptible quanta of length q
– Assumes every critical section can be started and completed within a single uninterruptible quantum
– A process may only be interrupted at times x·q, where x is an integer
Slide 30: Kernelized Monitor
Example: assume there are two periodic tasks
– T_1: c_1 = 2, d_1 = 2, p_1 = 5
– T_2: c_21 = 2, c_22 = 2, d_2 = 10, p_2 = 10
T_2 has two scheduling blocks
– Block c_22 of T_2 is mutually exclusive with T_1
q = 2
Slide 31: Kernelized Monitor
At t = 5, the earliest-deadline-first algorithm needs to schedule T_1 again, but it cannot, since block c_22 of T_2 is executing inside the critical section shared between T_1 and T_2
Slide 32: Kernelized Monitor
The region before the second activation of T_1 is blocked
– A forbidden region
The dispatcher must know about all forbidden regions at compile time
Slide 33: Priority Inversion
Consider three tasks T_1, T_2, and T_3, with T_1 having the highest priority
– Scheduled with the rate-monotonic algorithm
– T_1 and T_3 require exclusive access to a resource protected by a semaphore S
Slide 34: Priority Inversion
– T_3 starts and gains exclusive access to the resource
– T_1 requests service but is blocked by T_3
– T_2 requests service and is granted service
– T_2 finishes
– T_3 finishes and releases S
– T_1 starts and finishes
– The actual execution order is T_2, T_3, then T_1
– Solution: the priority ceiling protocol
Slide 35: Priority Ceiling Protocol
The priority ceiling (PC) of a semaphore S = the priority of the highest-priority task that can lock S
A task T only enters a new critical section if its priority is higher than the PCs of all semaphores locked by tasks other than T
A task runs at its assigned priority unless it is in a critical region and blocks higher-priority tasks
– It then inherits the highest priority of the blocked tasks while in the critical region
– It returns to its assigned priority when exiting
Slide 36: Priority Ceiling Protocol
1. T_3 starts
2. T_3 locks S_3
3. T_2 starts and preempts T_3
4. T_2 is blocked when locking S_3; T_3 resumes, inheriting T_2's priority
Slide 37: Priority Ceiling Protocol
5. T_3 enters a nested critical region and locks S_1
6. T_1 starts and preempts T_3
7. T_1 is blocked when locking S_1; T_3 resumes
8. T_3 unlocks S_1; T_1 awakens, preempts T_3, and locks S_1
Slide 38: Priority Ceiling Protocol
9. T_1 unlocks S_1
10. T_1 locks S_2
11. T_1 unlocks S_2
12. T_1 completes; T_3 resumes at the priority of T_2
Slide 39: Priority Ceiling Protocol
13. T_3 unlocks S_3; T_2 preempts T_3 and locks S_3
14. T_2 unlocks S_3
15. T_2 completes; T_3 resumes
16. T_3 completes
Slide 40: Priority Ceiling Protocol
One sufficient schedulability test
– A set of n periodic tasks {T_i} with periods p_i and computation times c_i
– B_i = the worst-case blocking time of T_i by lower-priority tasks
– For all i, 1 <= i <= n: (c_1/p_1) + (c_2/p_2) + … + (c_i/p_i) + (B_i/p_i) <= i (2^(1/i) – 1)
This is not the only test; there are more complex ones
The priority ceiling protocol is a predictable, non-deterministic scheduling protocol
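The per-task test above can be sketched in code (an illustration; the (c, p, B) tuple encoding, with tasks ordered from highest to lowest priority, is an assumption):

```python
def pcp_sufficient_test(tasks):
    """Sufficient schedulability test with blocking, as on the slide.

    tasks: list of (c, p, B) tuples sorted highest priority first,
    where B is the worst-case blocking time by lower-priority tasks.
    Checks, for each i: sum_{k<=i} c_k/p_k + B_i/p_i <= i*(2**(1/i)-1).
    """
    for i in range(1, len(tasks) + 1):
        util = sum(c / p for c, p, _ in tasks[:i])
        _, p_i, b_i = tasks[i - 1]
        if util + b_i / p_i > i * (2 ** (1 / i) - 1):
            return False
    return True
```

Because the test is only sufficient, a task set that fails it may still be schedulable; the blocking term B_i/p_i is what separates it from the plain rate-monotonic bound.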
Slide 41: Dynamic Scheduling in Distributed Systems
It is hard to guarantee deadlines in single-processor systems
– Even harder in distributed or multiprocessor systems, due to communication
Applications are required to tolerate transient faults, such as message losses, as well as detect permanent faults
Slide 42: Dynamic Scheduling in Distributed Systems
Positive Acknowledgement or Retransmission (PAR)
– Large temporal uncertainty between the shortest and longest execution times
– Worst case: assume the longest time, leading to poor responsiveness of the system
Masking protocols
– Send k + 1 copies of a message when tolerance of k losses is required; no temporal problem, but permanent faults cannot be detected, due to unidirectional communication
Slide 43: Dynamic Scheduling in Distributed Systems
Solutions?
– No idea
– Providing good temporal performance is a "fashionable research topic"
Slide 44: Static Scheduling
A static schedule that guarantees all deadlines, based on known resource, precedence, and synchronization requirements, is calculated off-line
– Strong regularity assumptions
– The times when external events will be serviced are known
Slide 45: Static Scheduling
System design
– Maximum delay until a request is recognized + maximum transaction response time < service deadline
Time
– Generally a periodic, time-triggered schedule
– The time line is divided into a sequence of granules (the cycle time)
– Only one interrupt: a periodic clock interrupt marking the start of a new granule
– In distributed systems, clocks are synchronized to a precision of less than a granule
Slide 46: Static Scheduling
– Tasks are periodic, with each p_i a multiple of the basic granule
– Schedule period = least common multiple of all p_i
– All scheduling decisions are made at compile time and executed at run-time
– Finding an optimal schedule in a distributed system is NP-complete
Slide 47: Search Tree
Precedence graph
– Tasks = nodes, edges = dependencies
Search tree
– Level = unit of time, depth = period
– A path to a leaf node = a complete schedule
– Goal: find a complete schedule that observes all precedence and mutual exclusion constraints before the deadline
Slide 48: Heuristic Function
Two terms: the actual cost of the path so far, and the estimated cost to the goal
Example
– Estimate the time needed to complete the precedence graph: the time until response (TUR)
– A necessary estimate of the TUR = maximum execution time + communication time
– If the necessary estimate exceeds the deadline, prune the branches of the node and backtrack to the parent
Slide 49: Increasing Adaptability
Weakness: the assumption of strictly periodic tasks
Proposed solutions for flexibility
– Transformation of sporadic requests into periodic requests
– A sporadic server task
– Mode changes
Slide 50: Transformation of Sporadic Requests to Periodic Requests
It is possible to find a schedule if the sporadic task has a laxity
One solution: replace the sporadic task T with a pseudo-periodic task T′
– c′ = c
– d′ = c
– p′ = min(d – c + 1, p)
A sporadic task with a short laxity will demand a lot of resources, but will request them infrequently
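A sketch of this transformation (following Mok's pseudo-periodic construction as usually presented; the tuple representation and function name are assumptions):

```python
def to_pseudo_periodic(c, d, p):
    """Transform a sporadic task (c, d, p) into a pseudo-periodic one.

    c: computation time, d: deadline, p: minimum inter-arrival time.
    The pseudo-periodic task keeps c' = c, tightens the deadline to
    d' = c, and shrinks the period to p' = min(d - c + 1, p), so that
    scheduling T' periodically guarantees the sporadic deadline d.
    """
    return c, c, min(d - c + 1, p)
```

The shrunken period is what makes short-laxity sporadic tasks expensive: a task with c = 1, d = 3 must be reserved every 3 time units even if real requests arrive only every 10.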
Slide 51: Sporadic Server Task
A periodic server task of high priority is created
– It maintains an execution-time budget for the duration of the server's period
– When a sporadic task arrives, it is serviced at the server's priority (depleting the execution time)
– The execution time is replenished while the server is active
– The sporadic server task is dynamically scheduled in response to a sporadic request
Slide 52: Mode Change
– During system design, identify all modes
– For each mode, generate a static schedule off-line
– Analyze the mode changes and develop a mode-change schedule
– During run-time, when a mode change is requested, switch to the corresponding static schedule
Slide 53: Comparisons
Predictability
– Static scheduling
  » Accurate planning of the schedule, so precise predictability
– Dynamic scheduling
  » No schedulability tests exist for distributed systems with mutual exclusion and precedence relations
  » Its dynamic nature cannot guarantee timeliness
Slide 54: Comparisons
Testability
– Static scheduling
  » Performance tests of every task can be compared with the established plans
  » Systematic and constructive, since all input cases can be observed
– Dynamic scheduling
  » Confidence in timeliness is based on simulations
  » Real loads are not enough, since rare events don't occur often
  » Are the simulated loads representative of real loads?
Slide 55: Comparisons
Resource utilization
– Static scheduling
  » Planned for peak load, with the time allotted to each task at least its maximum execution time
  » With many operating modes, this can lead to a "combinatorial explosion" of static schedules
– Dynamic scheduling
  » The processor becomes available more quickly
  » Resources are needed to do the dynamic scheduling itself
Slide 56: Comparisons
Resource utilization
– Dynamic scheduling (cont'd)
  » If loads are low, better utilization than a static schedule
  » If loads are high, more resources go to dynamic scheduling and fewer to the execution of tasks
Slide 57: Comparisons
Extensibility
– Static scheduling
  » If a new task is added or a maximum execution time is modified, the schedule must be recalculated
  » If a new node sends information into the system, the communication schedule must be recalculated
  » It is impossible to calculate a static schedule if the number of tasks changes dynamically during run-time
Slide 58: Comparisons
Extensibility
– Dynamic scheduling
  » Easy to add or modify tasks
  » A change can ripple through the system
  » The probability of a change and the system test-time are proportional to the number of tasks; assessing the consequences grows more than linearly with the number of tasks
  » Scales poorly for large applications