RTOS Scheduling – I : Rate-Monotonic Theory


1 RTOS Scheduling – I : Rate-Monotonic Theory
EE202A (Fall 2001): Lecture #4

2 Reading List for This Lecture
Required
Balarin, F.; Lavagno, L.; Murthy, P.; Sangiovanni-Vincentelli, A. Scheduling for embedded real-time systems. IEEE Design & Test of Computers, vol. 15, no. 1, Jan.-March 1998.
Sha, L.; Rajkumar, R.; Sathaye, S.S. Generalized rate-monotonic scheduling theory: a framework for developing real-time systems. Proceedings of the IEEE, vol. 82, no. 1, Jan. 1994.
Recommended
Sha, L.; Rajkumar, R.; Lehoczky, J.P. Priority inheritance protocols: an approach to real-time synchronization. IEEE Transactions on Computers, vol. 39, no. 9, Sept. 1990.
Others
Liu, C.L.; Layland, J.W. Scheduling algorithms for multiprogramming in a hard-real-time environment. Journal of the ACM, vol. 20, no. 1, Jan. 1973.
Lehoczky, J.; Sha, L.; Ding, Y. The rate monotonic scheduling algorithm: exact characterization and average case behavior. Proceedings of the Real Time Systems Symposium, Santa Monica, CA, Dec. 1989.

3 Computation & Timing Model of the System
Requests for tasks for which hard deadlines exist are periodic, with constant inter-request intervals
Deadlines consist of runnability constraints only
- each task must finish before the next request for it
- eliminates the need for buffering to queue tasks
Tasks are independent
- requests for a certain task do not depend on the initiation or completion of requests for other tasks
- however, their periods may be related
Run-time for each task is constant for that task, and does not vary with time
- can be interpreted as the maximum running time

4 Characterizing the Task Set
Set of n independent tasks τ1, τ2, …, τn
Request periods are T1, T2, ..., Tn
- request rate of τi is 1/Ti
Run-times are C1, C2, ..., Cn

5 Scheduling Algorithm
Set of rules to determine the task to be executed at a particular moment
One possibility: preemptive & priority driven
- tasks are assigned priorities, statically or dynamically
- at any instant, the highest priority task is run
- whenever there is a request for a task of higher priority than the one currently being executed, the running task is interrupted and the newly requested task is started
Therefore, scheduling algorithm == method to assign priorities

6 Assigning Priorities to Tasks
Static or fixed approach: priorities are assigned to tasks once and for all
Dynamic approach: priorities of tasks may change from request to request
Mixed approach: some tasks have fixed priorities, others don't

7 Deriving Optimum Priority Assignment Rule

8 Critical Instant for a Task
Deadline for a task = time of the next request for it
Overflow is said to occur at time t if t is the deadline of an unfulfilled request
A scheduling algorithm is feasible if tasks can be scheduled so that no overflow ever occurs
Response time of a request of a certain task is the time span between the request and the end of the response to that task

9 Critical Instant for a Task (contd.)
Critical instant for a task = instant at which a request for that task will have the maximum response time Critical time zone of a task = time interval between a critical instant & the end of the response to the corresponding request of the task

10 When does Critical Instant occur for a task?
Theorem 1: A critical instant for any task occurs whenever the task is requested simultaneously with requests of all higher priority tasks Can use this to determine whether a given priority assignment will yield a feasible scheduling algorithm if requests for all tasks at their critical instants are fulfilled before their respective deadlines, then the scheduling algorithm is feasible

11 Example Consider τ1 & τ2 with T1=2, T2=5, & C1=1, C2=1
τ1 has higher priority than τ2
- priority assignment is feasible
- can increase C2 to 2 and still be able to schedule
(timeline figure showing the critical time zones of τ1 and τ2 omitted)

12 Example (contd.) τ2 has higher priority than τ1
- priority assignment is still feasible
- but can't increase run-times beyond C1=1, C2=1
(timeline figure showing the critical time zone omitted)

13 Observation Consider τ1 & τ2 with T1 < T2
Let τ1 be the higher priority task. From Theorem 1, the following must hold: ⌊T2/T1⌋ C1 + C2 ≤ T2 (necessary condition, but not sufficient)
Let τ2 be the higher priority task. The following must hold: C1 + C2 ≤ T1

14 Observation (contd.) Note:
C1 + C2 ≤ T1
⟹ ⌊T2/T1⌋ C1 + ⌊T2/T1⌋ C2 ≤ ⌊T2/T1⌋ T1
⟹ ⌊T2/T1⌋ C1 + ⌊T2/T1⌋ C2 ≤ T2 (since ⌊T2/T1⌋ T1 ≤ T2)
⟹ ⌊T2/T1⌋ C1 + C2 ≤ T2 (since ⌊T2/T1⌋ ≥ 1)
Therefore, whenever T1 < T2 and C1, C2 are such that the task schedule is feasible with τ2 at higher priority than τ1, it is also feasible with τ1 at higher priority than τ2
- but the opposite is not true
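The observation can also be checked numerically. The sketch below (Python, illustrative; `rta_two_tasks` is a helper name of mine, and it uses the exact response-time iteration that appears later in the lecture, assuming deadline = period) samples random two-task sets and confirms the implication:

```python
import math
import random

def rta_two_tasks(C_hi, T_hi, C_lo, T_lo):
    """Exact feasibility test for two periodic tasks (deadline = period):
    iterate R = C_lo + ceil(R/T_hi)*C_hi to its smallest fixed point and
    check it against the low-priority task's deadline."""
    if C_hi > T_hi:
        return False          # high-priority task alone overloads the CPU
    R = C_lo
    while True:
        R_next = C_lo + math.ceil(R / T_hi) * C_hi
        if R_next == R:
            return R <= T_lo
        if R_next > T_lo:
            return False
        R = R_next

random.seed(0)
for _ in range(10000):
    T1 = random.randint(2, 50)
    T2 = random.randint(T1 + 1, 100)
    C1 = random.randint(1, T1)
    C2 = random.randint(1, T2)
    # If feasible with tau2 (longer period) at higher priority...
    if rta_two_tasks(C2, T2, C1, T1):
        # ...it must also be feasible with tau1 at higher priority.
        assert rta_two_tasks(C1, T1, C2, T2)
print("observation holds on all sampled task pairs")
```

With the slide-11 numbers, `rta_two_tasks(1, 2, 2, 5)` is True (τ1 high, C2 raised to 2), while `rta_two_tasks(2, 5, 1, 2)` is False (τ2 high), matching slide 12.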

15 A Possible Rule for Priority Assignment
Assign priorities according to request rates, independent of run times
- higher priorities for tasks with higher request rates
Called Rate-Monotonic (RM) Priority Assignment
It is optimum
- Theorem 2: no other fixed priority assignment can schedule a task set if RM priority assignment can't schedule it, i.e., if a feasible fixed priority assignment exists, then the RM priority assignment is also feasible
- proof: an RM priority assignment can be obtained from any feasible assignment by a sequence of pairwise reorderings of task priorities

16 Processor Utilization
Processor Utilization Factor: fraction of processor time spent in executing the task set
- i.e., 1 − fraction of time the processor is idle
For n tasks τ1, τ2, …, τn the utilization factor U is
U = C1/T1 + C2/T2 + … + Cn/Tn
U can be improved by increasing the Ci's or decreasing the Ti's, as long as tasks continue to satisfy their deadlines at their critical instants

17 How Large can U be for a Fixed Priority Scheduling Algorithm?
Corresponding to a priority assignment, a set of tasks fully utilizes a processor if:
- the priority assignment is feasible for the set, and
- an increase in the run time of any task in the set makes the priority assignment infeasible
The least upper bound of U is the minimum of the U's over all task sets that fully utilize the processor
- for all task sets whose U is below this bound, there exists a fixed priority assignment which is feasible
- U above this bound can be achieved only if the task periods Ti are suitably related

18 Utilization Factor for Rate-Monotonic Priority Assignment
RM priority assignment is optimal
⟹ for a given task set, the U achieved by RM priority assignment is ≥ the U for any other priority assignment
⟹ the least upper bound of U = the infimum of U's for RM priority assignment over all possible T's and all C's for the tasks

19 Two Tasks Case Theorem 3: For a set of two tasks with fixed priority assignment, the least upper bound to the processor utilization factor is U = 2(2^(1/2) − 1)
Proof: Let τ1 and τ2 be two tasks with periods T1 and T2, and run-times C1 and C2
Assume T2 > T1
According to RM, τ1 has higher priority than τ2
In a critical time zone of τ2, there are ⌈T2/T1⌉ requests for τ1

20 Two Tasks Case (contd.) Proof contd.:
Adjust C2 to fully utilize the available processor time within the critical time zone
Case 1: C1 is short enough that all requests for τ1 within the critical time zone of τ2 are completed before the next request of τ2, i.e. C1 ≤ T2 − T1⌊T2/T1⌋
- then, largest possible C2 = T2 − C1⌈T2/T1⌉
- so that U = 1 − C1((1/T2)⌈T2/T1⌉ − 1/T1)
- U monotonically decreases with C1

21 Two Tasks Case (contd.) Proof contd.:
Case 2: The execution of the ⌈T2/T1⌉-th request for τ1 overlaps the next request for τ2, i.e. C1 ≥ T2 − T1⌊T2/T1⌋
- then, largest possible C2 = (T1 − C1)⌊T2/T1⌋
- so that U = (T1/T2)⌊T2/T1⌋ + C1((1/T1) − (1/T2)⌊T2/T1⌋)
- U monotonically increases with C1
The minimum U occurs at the boundary of cases 1 & 2, i.e. for C1 = T2 − T1⌊T2/T1⌋
U = 1 − (T1/T2)(⌈T2/T1⌉ − T2/T1)(T2/T1 − ⌊T2/T1⌋)

22 Two Tasks Case (contd.) Proof contd.:
U = 1 − (T1/T2)(⌈T2/T1⌉ − T2/T1)(T2/T1 − ⌊T2/T1⌋)
  = 1 − f(1 − f)/(I + f)
where I = ⌊T2/T1⌋ and f = T2/T1 − ⌊T2/T1⌋ = fractional part of T2/T1
U is monotonically increasing with I
- min U occurs at the min value of I, i.e. I = 1
Minimizing U over f, one gets f = 2^(1/2) − 1
So, min U = 2(2^(1/2) − 1) ≈ 0.83
Note: U = 1 when f = 0, i.e. when the period of the lower priority task is a multiple of the period of the higher priority task

23 General Case Theorem: For a set of n tasks with fixed priority assignment, the least upper bound to the processor utilization factor is U = n(2^(1/n) − 1)
Or, equivalently, a set of n periodic tasks scheduled by the RM algorithm will always meet their deadlines, for all task start times, if
C1/T1 + C2/T2 + … + Cn/Tn ≤ n(2^(1/n) − 1)

24 General Case (contd.) As n → ∞, the bound rapidly converges to ln 2 ≈ 0.69
This is a rather tight bound
But note that this is just the least upper bound
- a task set with larger U may still be schedulable
- e.g., if {Tn/Ti} = 0 (zero fractional part) for i = 1, 2, …, n−1, then U can be 1
How to check if a specific task set with n tasks is schedulable?
- If U ≤ n(2^(1/n) − 1) then it is schedulable
- otherwise, use Theorem 1!
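The bound test described above can be sketched in a few lines (Python, illustrative; function names are mine):

```python
import math

def utilization(tasks):
    """Total utilization U = sum(Ci/Ti) for a list of (C, T) pairs."""
    return sum(C / T for C, T in tasks)

def rm_bound(n):
    """Liu & Layland least upper bound n(2^(1/n) - 1) for n tasks."""
    return n * (2 ** (1 / n) - 1)

# Example #1 from the slides: guaranteed schedulable by the bound
tasks1 = [(20, 100), (30, 145)]
print(utilization(tasks1) <= rm_bound(2))   # True: 0.41 <= 0.828

# Example #2: the bound test is inconclusive (U = 0.86 > 0.780),
# so the exact critical-instant analysis (Theorem 1) is needed
tasks2 = [(20, 100), (30, 145), (68, 150)]
print(utilization(tasks2) <= rm_bound(3))   # False: can't say from U alone
```

Note that failing the bound test does not mean the set is unschedulable; it only means the sufficient condition gives no answer.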

25 Theorem 1 Recalled Theorem 1: A critical instant for any task occurs whenever the task is requested simultaneously with requests of all higher priority tasks Can use this to determine whether a given priority assignment will yield a feasible scheduling algorithm if requests for all tasks at their critical instants are fulfilled before their respective deadlines, then the scheduling algorithm is feasible Applicable to any static priority scheme… not just RM

26 Example #1
Task τ1: C1 = 20; T1 = 100; D1 = 100
Task τ2: C2 = 30; T2 = 145; D2 = 145
Is this task set schedulable?
U = 20/100 + 30/145 ≈ 0.41 ≤ 2(2^(1/2) − 1) ≈ 0.828 ⟹ Yes!

27 Example #2
Task τ1: C1 = 20; T1 = 100; D1 = 100
Task τ2: C2 = 30; T2 = 145; D2 = 145
Task τ3: C3 = 68; T3 = 150; D3 = 150
Is this task set schedulable?
U = 20/100 + 30/145 + 68/150 ≈ 0.86 > 3(2^(1/3) − 1) ≈ 0.780 ⟹ Can't say! Need to apply Theorem 1.

28 Example #2 (contd.) Consider the critical instant of τ3, the lowest priority task
- τ1 and τ2 must execute at least once before τ3 can begin executing
- therefore, completion time of τ3 is ≥ C1 + C2 + C3 = 20 + 30 + 68 = 118
- however, τ1 is initiated one additional time in (0, 118)
- taking this into consideration, completion time of τ3 = 2C1 + C2 + C3 = 2×20 + 30 + 68 = 138
Since 138 < D3 = 150, the task set is schedulable

29 Response Time Analysis for RM
For the highest priority task, the worst case response time R is its own computation time C
- R = C
Lower priority tasks suffer interference from higher priority processes
- Ri = Ci + Ii
- Ii is the interference in the interval [t, t + Ri]

30 Response Time Analysis (contd.)
Consider task i, and a higher priority task j
Interference from task j during Ri:
- # of releases of task j = ⌈Ri/Tj⌉
- each release consumes Cj units of processor time
- total interference from task j = ⌈Ri/Tj⌉ Cj
Let hp(i) be the set of tasks with priorities higher than that of task i
Total interference to task i from all tasks in hp(i) during Ri: Σ(j ∈ hp(i)) ⌈Ri/Tj⌉ Cj

31 Response Time Analysis (contd.)
This leads to: Ri = Ci + Σ(j ∈ hp(i)) ⌈Ri/Tj⌉ Cj
The smallest such Ri is the worst case response time
Fixed point equation: can be solved iteratively

32 Algorithm
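The transcript drops the algorithm slide's body; a minimal sketch of the iterative fixed-point computation described on the preceding slides might look like this (Python, illustrative; the function name and the optional deadline cutoff are my additions):

```python
import math

def response_time(C, hp, deadline=float("inf")):
    """Worst-case response time of a task with WCET C, given the
    (Cj, Tj) pairs of all higher-priority tasks in hp.
    Iterates R = C + sum(ceil(R/Tj) * Cj) from R = C upward until it
    reaches its smallest fixed point; returns None on a deadline miss."""
    R = C
    while True:
        R_next = C + sum(math.ceil(R / Tj) * Cj for Cj, Tj in hp)
        if R_next == R:
            return R
        if R_next > deadline:
            return None   # iteration exceeded the deadline: not schedulable
        R = R_next

# Example #2 from the slides (RM priorities: shorter period = higher)
print(response_time(20, []))                        # tau1: 20
print(response_time(30, [(20, 100)]))               # tau2: 50
print(response_time(68, [(20, 100), (30, 145)]))    # tau3: 138 <= D3 = 150
```

The τ3 result reproduces the 138 obtained by hand on slide 28.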

33 RM Schedulability Consider tasks τ1, τ2, …, τn in decreasing order of priority
For task τi to be schedulable, a necessary and sufficient condition is that we can find some t ∈ [0, Ti] satisfying
t = ⌈t/T1⌉C1 + ⌈t/T2⌉C2 + … + ⌈t/Ti−1⌉Ci−1 + Ci
But do we need to check exhaustively for all values of t in [0, Ti]?

34 RM Schedulability (contd.)
Observation: the RHS of the equation jumps only at multiples of T1, T2, …, Ti−1
It is therefore sufficient to check whether the inequality
t ≥ ⌈t/T1⌉C1 + ⌈t/T2⌉C2 + … + ⌈t/Ti−1⌉Ci−1 + Ci
is satisfied for some t ∈ [0, Ti] that is a multiple of one or more of T1, T2, …, Ti−1

35 RM Schedulability (contd.)
Notation
- Wi(t) = Σ(j=1..i) Cj ⌈t/Tj⌉
- Li(t) = Wi(t)/t
- Li = min(0 < t ≤ Ti) Li(t)
- L = max{Li}
General necessary & sufficient condition: task τi can be scheduled iff Li ≤ 1
Practically, we only need to compute Wi(t) at the set of times
Si = {kTj | j = 1, …, i; k = 1, …, ⌊Ti/Tj⌋}
- these are the times at which tasks are released; Wi(t) is constant at other times
Practical RM schedulability conditions:
- if min(t ∈ Si) Wi(t)/t ≤ 1, task τi is schedulable
- if max(i ∈ {1,…,n}) {min(t ∈ Si) Wi(t)/t} ≤ 1, then the entire set is schedulable

36 Example
Task set:
τ1: T1=100, C1=20
τ2: T2=150, C2=30
τ3: T3=210, C3=80
τ4: T4=400, C4=100
Then:
S1 = {100}
S2 = {100, 150}
S3 = {100, 150, 200, 210}
S4 = {100, 150, 200, 210, 300, 400}
Plots of Wi(t): task τi is RM-schedulable iff any part of the plot of Wi(t) falls on or below the Wi(t) = t line.
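The scheduling-point test on the last two slides can be sketched as follows (Python, illustrative; `scheduling_points` and `rm_schedulable` are helper names of mine, and deadline = period is assumed):

```python
import math

def scheduling_points(i, periods):
    """S_i = {k*Tj | j <= i, k = 1..floor(Ti/Tj)}: the times in (0, Ti]
    at which W_i(t) jumps; checking W_i(t) <= t only here suffices."""
    Ti = periods[i]
    pts = set()
    for Tj in periods[:i + 1]:
        k = 1
        while k * Tj <= Ti:
            pts.add(k * Tj)
            k += 1
    return sorted(pts)

def rm_schedulable(tasks):
    """Exact (necessary & sufficient) RM test: every task i must satisfy
    W_i(t) <= t at some scheduling point t in (0, Ti]."""
    tasks = sorted(tasks, key=lambda ct: ct[1])   # RM order: by period
    periods = [T for _, T in tasks]
    for i in range(len(tasks)):
        ok = any(
            sum(C * math.ceil(t / T) for C, T in tasks[:i + 1]) <= t
            for t in scheduling_points(i, periods)
        )
        if not ok:
            return False
    return True

print(scheduling_points(3, [100, 150, 210, 400]))
# [100, 150, 200, 210, 300, 400]  (matches S4 on the slide)
print(rm_schedulable([(20, 100), (30, 150), (80, 210), (100, 400)]))
# False: tau4 misses (note U = 1.03 > 1 for this set)
```

The earlier Example #2 set (20/100, 30/145, 68/150) passes this exact test even though it fails the utilization bound.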

37 Deadline Monotonic Priority Assignment (DMP)
Fixed priority of a process is inversely related to its deadline (which may be < its period)
Di < Dj ⟹ Pi > Pj
Optimal: can schedule any task set that any other static priority assignment can
Example: RM fails but DMP succeeds for the following task set (table omitted in the transcript)

38 RM in Distributed/Networked Embedded Systems?
Task is essentially scheduled on multiple resources in series Need to schedule communication messages over the interconnect propagation delay & jitter queuing delay & jitter Divide end-to-end deadline into subsystem deadlines Need buffering to mitigate jitter problem as task may arrive too early

39 Can one do better? Yes… by using dynamic priority assignment
In fact, there is a scheme for dynamic priority assignment for which the least upper bound on the processor utilization is 1 More later...

40 Transient Overload
Task | Ci (peak) | Ci (ave) | Ti | Criticality
τ1 | 20 | 10 | 100 | critical
τ2 | 30 | 25 | 150 | critical
τ3 | 80 | 40 | 210 | non-critical
τ4 | … | … | 400 | critical
Not RM schedulable if all tasks take their worst case execution times, but schedulable in the average case
Can we arrange so that the three critical tasks always meet their deadlines, while the non-critical task meets many of its deadlines?

41 Dealing with Transient Overload
Transient system overload may cause some deadlines to be missed
Lower priority tasks are likely to miss their deadlines first in an overload situation
But the more important task may have been assigned a lower priority: priority != importance
One could assign priorities according to importance
- e.g., by artificially reducing a task's deadline
- but this reduces schedulability

42 Example Consider two tasks:
Task τ1: C1 = 3.5; T1 = 10; D1 = 10; less important
Task τ2: C2 = 7; T2 = 14; D2 = 13; critical
Under RM, τ2 will have the lower priority
- the completion time test shows that τ2 is not schedulable
- but τ2 is important and must be guaranteed!
Making the priority of τ2 higher than that of τ1 will make τ1 unschedulable

43 A Better Approach: Period Transformation
One could transform the period of τ2 to 7, yielding a modified task set
Task τ1: C1 = 3.5; T1 = 10; D1 = 10
Task τ2a: C2a = 3.5; T2a = 7; D2a = 6
Note: in period transformation, the real deadline is at the last transformed period
- the deadline at the second transformed period of τ2a is at most 6 (7 + 6 = 13)
Now τ2a has higher priority, and the task set is schedulable!

44 Using Period Transformation to Improve Schedulability
Consider two tasks:
Task τ1: C1 = 5; T1 = 10
Task τ2: C2 = 5; T2 = 15
These two tasks are just schedulable, with utilization 83.3%
If we transform τ1:
Task τ1a: C1a = 2.5; T1a = 5
Task τ2: C2 = 5; T2 = 15
the utilization bound becomes 100% (the periods are now harmonic)

45 Sporadic Tasks Tasks that are released irregularly, often in response to some event in the environment
- no periods associated
- but must have some maximum release rate (minimum inter-arrival time), otherwise there is no limit on workload!
How to deal with them?
- consider them as periodic with a period equal to the minimum inter-arrival time
- other approaches…

46 Handling Sporadic Tasks: Approach 1
Define fictitious periodic task of highest priority and of some chosen execution period During the time this task is scheduled to run, the processor can run any sporadic task that is awaiting service if no sporadic task awaiting service, processor is idle Outside this time the processor attends to periodic tasks Problem: wasteful!

47 Handling Sporadic Tasks: Approach 2 (Deferred Server)
Less wasteful… Whenever the processor is scheduled to run sporadic tasks, and finds no such tasks awaiting service, it starts executing other (periodic) tasks in order of priority However, if a sporadic task arrives, it preempts the periodic task and can occupy a total time up to the time allotted for sporadic task Schedulability?

48 Example Approach 1 Approach 2 (Deferred Server)

49 Task Synchronization So far, we considered independent tasks
However, tasks do interact: semaphores, locks, monitors, rendezvous, etc.
- shared data, use of non-preemptable resources
This jeopardizes the system's ability to meet timing constraints
- e.g., may lead to an indefinite period of priority inversion, where a high priority task is prevented from executing by a low priority task

50 Priority Inversion Example
Let 1 & 3 share a resource & let 1 have a higher priority. Let 2 be an intermediate priority task that does not share any resource with either. Consider: 3 obtains a lock on the semaphore S and enters its critical section to use a shared resource 1 becomes ready to run and preempts . Next 1 tries to enter its critical section by trying to lock S. But S is locked and therefore 1 is blocked. 2 becomes ready to run. Since only 2 and 3 are ready to run, 2 preempts 3 while 3 is in critical section.

51 Priority Inversion Example (contd.)
What would we prefer?
- τ1, being the highest priority task, should be blocked no longer than the time τ3 takes to complete its critical section
But in reality the duration of blocking is unpredictable
- τ3 can be preempted until τ2 and any other pending intermediate priority tasks are completed
- the duration of priority inversion is thus a function of the task execution times, and is not bounded by the duration of critical sections

52 Process Interactions & Blocking
Priority inversions Blocking Priority inheritance

53 Example: Priority Inversion

54 Example: Priority Inheritance

55 Response Time Calculations & Blocking
R = C + B + I
- B is the worst-case blocking time; solve by forming a recurrence relation, as before
With priority inheritance, the blocking term is: Bi = Σ(k=1..K) usage(k, i) CS(k)

56 Response Time Calculations & Blocking (contd.)
where usage is a 0/1 function:
- usage(k, i) = 1 if resource k is used by at least one process with a priority less than that of τi, and at least one process with a priority greater than or equal to that of τi; otherwise it is 0
CS(k) is the computational cost of executing the k-th critical section.
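The usage/CS blocking term can be transcribed directly (Python, illustrative; the resource representation, a list of (CS, users) pairs with users given as numeric priorities where larger means higher, is my assumption):

```python
def blocking_time(i, resources):
    """Worst-case blocking B_i under priority inheritance:
    B_i = sum over resources k of usage(k, i) * CS(k), where
    usage(k, i) = 1 iff resource k is used by at least one process of
    priority < i AND at least one process of priority >= i."""
    B = 0
    for CS, users in resources:
        if any(p < i for p in users) and any(p >= i for p in users):
            B += CS
    return B

# Hypothetical example: resource A (CS = 2) shared by priorities 1 and 3,
# resource B (CS = 4) shared by priorities 1 and 2
resources = [(2, {1, 3}), (4, {1, 2})]
print(blocking_time(3, resources))   # 2: only resource A spans priority 3
print(blocking_time(2, resources))   # 6: both resources span priority 2
```

This B feeds into the recurrence Ri = Ci + Bi + interference from the previous slide.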

57 Priority Ceiling Protocols
Basic idea: the priority ceiling of a binary semaphore S is the highest priority of all tasks that may lock S
When a task τ attempts to enter one of its critical sections, it is suspended unless its priority is higher than the priority ceilings of all semaphores currently locked by tasks other than τ
If task τ is unable to enter its critical section for this reason, the task that holds the lock on the semaphore with the highest priority ceiling is said to be blocking τ
- hence, that task inherits the priority of τ

58 Priority Ceiling Protocols (contd.)
Two forms Original ceiling priority protocol (OCPP) Immediate ceiling priority protocol (ICPP) On a single processor system A high priority process can be blocked at most once during its execution by lower priority processes Deadlocks are prevented Transitive blocking is prevented Mutual exclusive access to resources is ensured (by the protocol itself)

59 Example of Priority Ceiling Protocol in Operation
Two tasks 1 and 2 with two shared data structures protected by binary semaphores S1 and S2. 1: {… Lock(S1)… Lock(S2) … Unlock (S2) … Unlock (S1) … } 2: {… Lock(S2)… Lock(S1) … Unlock (S1) … Unlock (S2) … } Assume 1 has higher priority than 2 Note: priority ceilings of both S1 & S2 = priority of 1

60 Example of Priority Ceiling Protocol in Operation (contd.)
(timeline figure omitted: τ2 locks S2; τ1 preempts and attempts to lock S1 but is blocked by the ceiling protocol, since otherwise τ1 and τ2 would be deadlocked; τ2 inherits the priority of τ1 until it unlocks S2, after which τ1 locks and unlocks S1 and S2 and completes)

61 OCPP
Each process has a static default priority assigned (perhaps by the deadline monotonic scheme)
Each resource has a static ceiling value defined: the maximum priority of the processes that use it
A process has a dynamic priority that is the maximum of its own static priority and any priority it inherits due to blocking higher priority processes
A process can only lock a resource if its dynamic priority is higher than the ceiling of any currently locked resource (excluding any that it has already locked itself).

62 Example of OCPP

63 ICPP
Each process has a static default priority assigned (perhaps by the deadline monotonic scheme)
Each resource has a static ceiling value defined: the maximum priority of the processes that use it
A process has a dynamic priority that is the maximum of its own static priority and the ceiling values of any resources it has locked.

64 Example of ICPP

65 OCPP vs. ICPP
Worst case behavior is identical from a scheduling point of view
ICPP is easier to implement than the original (OCPP), as blocking relationships need not be monitored
ICPP leads to fewer context switches, as blocking occurs prior to first execution
ICPP requires more priority movements, as these happen with all resource usages; OCPP changes priority only if an actual block has occurred.

66 Schedulability Impact of Task Synchronization
Let Bi be the duration for which τi is blocked by lower priority tasks
The effect of this blocking can be modeled as if τi's utilization were increased by an amount Bi/Ti
The effect of having a deadline Di before the end of the period Ti can also be modeled as if the task were blocked for Ei = (Ti − Di) by lower priority tasks
- as if utilization increased by Ei/Ti

67 Schedulability Impact of Task Synchronization (contd.)
Theorem: A set of n periodic tasks scheduled by the RM algorithm will always meet its deadlines, for all task phasings, if for all i, 1 ≤ i ≤ n:
C1/T1 + C2/T2 + … + Ci/Ti + Bi/Ti ≤ i(2^(1/i) − 1)
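The per-task bound with blocking can be checked as follows (Python, illustrative; the function name and the example blocking values are mine):

```python
def rm_schedulable_with_blocking(tasks, B):
    """Sufficient test: for every i (tasks in RM order, i.e. sorted by
    period), sum_{j<=i} Cj/Tj + Bi/Ti must not exceed i(2^(1/i) - 1).
    tasks: list of (C, T); B: per-task worst-case blocking times."""
    for i in range(len(tasks)):
        U = sum(C / T for C, T in tasks[:i + 1]) + B[i] / tasks[i][1]
        if U > (i + 1) * (2 ** (1 / (i + 1)) - 1):
            return False
    return True

# Hypothetical example: the Example #1 tasks with 10 units of blocking
# on the higher-priority task
print(rm_schedulable_with_blocking([(20, 100), (30, 145)], [10, 0]))
# True: 0.2 + 0.1 <= 1.0 and 0.407 <= 0.828
```

As with the plain utilization bound, failure of this test is inconclusive; the exact response-time analysis with the Bi term is then needed.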

68 Cooperative Scheduling
True preemptive behavior is not always possible
- e.g., context switches & other kernel routines are non-preemptible
Cooperative or deferred preemption splits processes into non-preemptive blocks, separated by 'de-schedule' requests
- block execution times < Bmax, the max blocking time in the system
Exploits the non-cumulative property of ICPP (a task may not be blocked both by an application task and by a kernel routine)
- though it increases the situations in which blocking may occur
Mutual exclusion is via non-preemption
Advantages: increases schedulability and lowers C
The last block, with execution time Fi, runs with no interference; when the recurrence converges (wi(n+1) = wi(n)), the response time is: Ri = wi(n) + Fi

69 Release Jitter A key issue in distributed systems
A sporadic task S will be released at times 0, 5, 25, 45, and so on…
- i.e., at times 0, T − J, 2T − J, 3T − J, and so on (here with period T = 20 and release jitter J = 15)

70 Release Jitter (contd.)
Examination of the derivation of the schedulability equation implies that process i will suffer one interference from S if Ri ∈ [0, T − J), two if Ri ∈ [T − J, 2T − J), three if Ri ∈ [2T − J, 3T − J), and so on…

71 Release Jitter (contd.)
In general, periodic tasks do not suffer jitter
But an implementation may restrict the granularity of the system timer which releases periodic tasks
- a periodic task may therefore suffer release jitter
If response time is to be measured relative to the real release time, then the jitter value must be added to that previously calculated:
Ri(periodic) = Ri + Ji
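Putting the jitter pieces together, a hedged sketch of jitter-aware response-time analysis (Python, illustrative; it follows the standard fixed-priority analysis in which a higher-priority task's jitter inflates its interference term to ⌈(w + Jj)/Tj⌉ releases, with blocking omitted for brevity, and the example numbers are hypothetical):

```python
import math

def response_time_with_jitter(C, J, hp):
    """Worst-case response time with release jitter.
    hp: list of (Cj, Tj, Jj) for higher-priority tasks; the result is
    measured from the real release, so the task's own jitter J is added
    to the fixed point w."""
    w = C
    while True:
        w_next = C + sum(math.ceil((w + Jj) / Tj) * Cj for Cj, Tj, Jj in hp)
        if w_next == w:
            return w + J      # Ri (relative to real release) = wi + Ji
        w = w_next

# Hypothetical task (C = 10) interfered with by the sporadic task above
# (Cj = 3, T = 20, J = 15): the jitter forces two releases inside w
print(response_time_with_jitter(10, 0, [(3, 20, 15)]))   # 16, not 13
```

With Jj = 0 for all interferers, this reduces to the earlier jitter-free recurrence.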

72 Arbitrary Deadlines Case when deadline Di < Ti is easy…
Case when deadline Di > Ti is much harder
- multiple invocations of the same task may be alive simultaneously
- may have to check multiple task initiations to obtain the worst case response time
Example: consider two tasks
Task τ1: C1 = 28, T1 = 80
Task τ2: C2 = 71, T2 = 110
Assume all deadlines to be infinity

73 Arbitrary Deadlines (contd.)
Response time for task τ2 (table of initiation time, completion time, and response time omitted in the transcript)
- the response time is worst not for the first initiation
- so it is not sufficient to consider just the first initiation

74 Schedulability Condition for Arbitrary Deadlines
Analysis for situations where Di (and hence potentially Ri) can be greater than Ti
The number of releases that need to be considered is bounded by the lowest value of q = 0, 1, 2, … for which the following relation is true:
wi(q) ≤ (q + 1)Ti
where wi(q), the completion time of the (q+1)-th release, is the smallest solution of
wi(q) = (q + 1)Ci + Σ(j ∈ hp(i)) ⌈wi(q)/Tj⌉ Cj

75 Arbitrary Deadlines (contd.)
The worst-case response time is then the maximum value found over all such q:
Ri = max(q = 0, 1, 2, …) { wi(q) − qTi }
Note: for D ≤ T, the relation holds for q = 0 if the task can be scheduled, in which case the analysis simplifies to the original one
- if any R > D, the task is not schedulable
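The q-iteration described above can be sketched as (Python, illustrative; the function name is mine, and the set is assumed to have U < 1 so the busy period, and hence the outer loop, terminates):

```python
import math

def response_time_arbitrary(C, T, hp):
    """Worst-case response time when D (and hence R) may exceed T.
    For q = 0, 1, 2, ... solve the fixed point
        w(q) = (q+1)C + sum(ceil(w(q)/Tj) * Cj over hp)
    record R(q) = w(q) - q*T, and stop once w(q) <= (q+1)T, i.e. the
    level-i busy period has ended.  hp: (Cj, Tj) pairs."""
    R_max = 0
    q = 0
    while True:
        w = (q + 1) * C
        while True:   # inner fixed-point iteration for this q
            w_next = (q + 1) * C + sum(math.ceil(w / Tj) * Cj
                                       for Cj, Tj in hp)
            if w_next == w:
                break
            w = w_next
        R_max = max(R_max, w - q * T)
        if w <= (q + 1) * T:
            return R_max
        q += 1

# Example from the slides: C1 = 28, T1 = 80; C2 = 71, T2 = 110
print(response_time_arbitrary(71, 110, [(28, 80)]))
# 133: the worst case occurs at the third initiation (q = 2), not the first
```

For a task with R ≤ T, only q = 0 is examined and the function reduces to the ordinary response-time iteration.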

76 Arbitrary Deadlines with Release Jitter
(slide equation omitted in the transcript; combining the two preceding analyses, it is presumably Ri = Ji + wi(q) − qTi, with the jitter-inflated interference term inside wi(q))

