RTOS Scheduling – I : Rate-Monotonic Theory


RTOS Scheduling – I : Rate-Monotonic Theory EE202A (Fall 2001): Lecture #4

Reading List for This Lecture

Required:
- Balarin, F.; Lavagno, L.; Murthy, P.; Sangiovanni-Vincentelli, A. Scheduling for embedded real-time systems. IEEE Design & Test of Computers, vol. 15, no. 1, Jan.-March 1998, pp. 71-82. http://ielimg.ihs.com/iel3/54/14269/00655185.pdf
- Sha, L.; Rajkumar, R.; Sathaye, S.S. Generalized rate-monotonic scheduling theory: a framework for developing real-time systems. Proceedings of the IEEE, vol. 82, no. 1, Jan. 1994, pp. 68-82. http://ielimg.ihs.com/iel1/5/6554/00259427.pdf

Recommended:
- Sha, L.; Rajkumar, R.; Lehoczky, J.P. Priority inheritance protocols: an approach to real-time synchronization. IEEE Transactions on Computers, vol. 39, no. 9, Sept. 1990, pp. 1175-1185. http://ielimg.ihs.com/iel1/12/2066/00057058.pdf

Others:
- Liu, C.L.; Layland, J.W. Scheduling algorithms for multiprogramming in a hard-real-time environment. Journal of the ACM, vol. 20, no. 1, Jan. 1973, pp. 46-61. http://nesl.ee.ucla.edu/pw/ee202a/Liu73.pdf
- Lehoczky, J.; Sha, L.; Ding, Y. The rate monotonic scheduling algorithm: exact characterization and average case behavior. Proceedings of the Real Time Systems Symposium, Santa Monica, CA, Dec. 1989, pp. 166-171. http://ielimg.ihs.com/iel2/268/2318/00063567.pdf

Computation & Timing Model of the System
- Requests for tasks with hard deadlines are periodic, with constant inter-request intervals
- Deadlines consist of runnability constraints only: each task must finish before the next request for it; this eliminates the need for buffering to queue tasks
- Tasks are independent: requests for a certain task do not depend on the initiation or completion of requests for other tasks; however, their periods may be related
- Run-time for each task is constant for that task and does not vary with time; it can be interpreted as the maximum running time

Characterizing the Task Set
- Set of n independent tasks τ1, τ2, … τn
- Request periods are T1, T2, … Tn; the request rate of τi is 1/Ti
- Run-times are C1, C2, … Cn

Scheduling Algorithm Set of rules to determine the task to be executed at a particular moment One possibility: preemptive & priority driven tasks are assigned priorities statically or dynamically at any instant, the highest priority task is run whenever there is a request for a task that is of higher priority than the one currently being executed, the running task is interrupted, and the newly requested task is started Therefore, scheduling algorithm == method to assign priorities

Assigning Priorities to Tasks Static or fixed approach: priorities are assigned to tasks once and for all. Dynamic approach: priorities of tasks may change from request to request. Mixed approach: some tasks have fixed priorities, others don’t.

Deriving Optimum Priority Assignment Rule

Critical Instant for a Task Deadline for a task = time of the next request for it. Overflow is said to occur at time t if t is the deadline of an unfulfilled request. A scheduling algorithm is feasible if tasks can be scheduled so that no overflow ever occurs. Response time of a request of a certain task is the time span between the request and the end of the response to that request.

Critical Instant for a Task (contd.) Critical instant for a task = instant at which a request for that task will have the maximum response time Critical time zone of a task = time interval between a critical instant & the end of the response to the corresponding request of the task

When does Critical Instant occur for a task? Theorem 1: A critical instant for any task occurs whenever the task is requested simultaneously with requests of all higher priority tasks Can use this to determine whether a given priority assignment will yield a feasible scheduling algorithm if requests for all tasks at their critical instants are fulfilled before their respective deadlines, then the scheduling algorithm is feasible

Example Consider 1 & 2 with T1=2, T2=5, & C1=1, C2=1 1 has higher priority than 2 priority assignment is feasible can increase C2 to 2 and still be able to schedule 1 2 3 4 5 t T1 T1 t 1 2 3 4 5 T2 T2 t 1 2 3 4 5 CRITICAL TIME ZONE CRITICAL TIME ZONE

Example (contd.) 2 has higher priority than 1 priority assignment is still feasible but, can ‘t increase beyond C1=1, C2=1 T2 t 1 2 3 4 5 T1 t 1 2 3 4 5 CRITICAL TIME ZONE
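The feasibility claims in the two examples above can be checked by simulation. This is an illustrative sketch (unit time steps, deadlines equal to periods), not code from the lecture:

```python
# Minimal discrete-time simulation of preemptive fixed-priority scheduling,
# used to check the two-task examples (T1=2, T2=5, C1=1, C2=1 or 2).

def feasible(tasks, horizon):
    """tasks: list of (T, C) in priority order (index 0 = highest priority).
    Simulates unit time steps over [0, horizon); returns False if a job is
    still unfinished when the next request for the same task arrives."""
    remaining = [0] * len(tasks)            # unfinished work per task
    for t in range(horizon):
        for i, (T, C) in enumerate(tasks):
            if t % T == 0:
                if remaining[i] > 0:        # previous job missed its deadline
                    return False
                remaining[i] = C
        # run the highest-priority task with pending work for one time unit
        for i in range(len(tasks)):
            if remaining[i] > 0:
                remaining[i] -= 1
                break
    return all(r == 0 for r in remaining)

# tau1 (T=2) at higher priority: feasible even after raising C2 to 2
print(feasible([(2, 1), (5, 1)], 10))   # True
print(feasible([(2, 1), (5, 2)], 10))   # True
# tau2 (T=5) at higher priority: feasible only with C1 = C2 = 1
print(feasible([(5, 1), (2, 1)], 10))   # True
print(feasible([(5, 2), (2, 1)], 10))   # False
```

The horizon of 10 is the hyperperiod of the two tasks, so one pass covers all request phasings of the synchronous case.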

Observation Consider 1 & 2 with T1 < T2 Let, 1 be the higher priority task. From Theorem 1, the following must hold:  T2/ T1  C1 + C2  T2 (necessary condition, but not sufficient) Let, 2 be the higher priority task. The following must hold: C1 + C2  T1

Observation (contd.) Note:
C1 + C2 ≤ T1
⟹ ⌊T2/T1⌋ C1 + ⌊T2/T1⌋ C2 ≤ ⌊T2/T1⌋ T1 ≤ T2
⟹ ⌊T2/T1⌋ C1 + C2 ≤ T2, since ⌊T2/T1⌋ ≥ 1
Therefore, whenever T1 < T2 and C1, C2 are such that the task schedule is feasible with τ2 at higher priority than τ1, it is also feasible with τ1 at higher priority than τ2; but the opposite is not true.

A Possible Rule for Priority Assignment Assign priorities according to request rates, independent of run times: higher priorities for tasks with higher request rates. This is called Rate-Monotonic (RM) Priority Assignment, and it is optimum. Theorem 2: no other fixed priority assignment can schedule a task set if the RM priority assignment can’t schedule it; i.e., if a feasible priority assignment exists, then the RM priority assignment is also feasible. Proof idea: any feasible priority assignment can be transformed into the RM assignment by a sequence of pairwise reorderings of task priorities.

Processor Utilization Processor Utilization Factor: fraction of processor time spent in executing the task set i.e. 1 - fraction of time processor is idle For n tasks, 1, 2, … n the utilization factor U is U = C1/T1 + C2/T2 + … + Cn/Tn U can be improved by increasing Ci’s or decreasing Ti’s as long as tasks continue to satisfy their deadlines at their critical instants

How Large can U be for a Fixed Priority Scheduling Algorithm? Corresponding to a priority assignment, a set of tasks fully utilizes a processor if: the priority assignment is feasible for the set, and an increase in the run time of any task in the set makes the priority assignment infeasible. The least upper bound of U is the minimum of the U’s over all task sets that fully utilize the processor. For all task sets whose U is below this bound, there exists a fixed priority assignment which is feasible. U above this bound can be achieved only if the task periods Ti are suitably related.

Utilization Factor for Rate-Monotonic Priority Assignment Since the RM priority assignment is optimal, for a given task set the U achieved by the RM priority assignment is ≥ the U for any other priority assignment. Hence the least upper bound of U = the infimum of the U’s for RM priority assignment over all possible Ti’s and Ci’s of the tasks.

Two Tasks Case Theorem 3: For a set of two tasks with fixed priority assignment, the least upper bound to the processor utilization factor is U = 2(2^(1/2) − 1).
Proof: Let τ1 and τ2 be two tasks with periods T1 and T2, and run-times C1 and C2. Assume T2 > T1. According to RM, τ1 has higher priority than τ2. In a critical time zone of τ2, there are ⌈T2/T1⌉ requests for τ1.

Two Tasks Case (contd.) Proof contd.: Adjust C2 to fully utilize the available processor time within the critical time zone.
Case 1: C1 is short enough that all requests for τ1 within the critical time zone of τ2 complete before the next request of τ2, i.e. C1 ≤ T2 − T1⌊T2/T1⌋.
Then the largest possible C2 = T2 − C1⌈T2/T1⌉, so that U = 1 − C1((1/T2)⌈T2/T1⌉ − 1/T1).
U monotonically decreases with C1.

Two Tasks Case (contd.) Proof contd.:
Case 2: The execution of the ⌈T2/T1⌉-th request for τ1 overlaps the next request for τ2, i.e. C1 ≥ T2 − T1⌊T2/T1⌋.
Then the largest possible C2 = (T1 − C1)⌊T2/T1⌋, so that U = (T1/T2)⌊T2/T1⌋ + C1((1/T1) − (1/T2)⌊T2/T1⌋).
U monotonically increases with C1.
The minimum U occurs at the boundary of cases 1 & 2, i.e. for C1 = T2 − T1⌊T2/T1⌋:
U = 1 − (T1/T2)(⌈T2/T1⌉ − (T2/T1))((T2/T1) − ⌊T2/T1⌋)

Two Tasks Case (contd.) Proof contd.:
U = 1 − (T1/T2)(⌈T2/T1⌉ − (T2/T1))((T2/T1) − ⌊T2/T1⌋) = 1 − f(1−f)/(I+f)
where I = ⌊T2/T1⌋ and f = (T2/T1) − ⌊T2/T1⌋ = fractional part of T2/T1.
U is monotonically increasing with I, so the minimum U occurs at the minimum value of I, i.e. I = 1.
Minimizing U over f, one gets f = 2^(1/2) − 1, so min U = 2(2^(1/2) − 1) ≈ 0.83.
Note: U = 1 when f = 0, i.e. when the period of the lower priority task is a multiple of the period of the higher priority task.
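The final minimization over f can be confirmed numerically. The grid search below is an illustrative check (not part of the lecture) of U(f) = 1 − f(1−f)/(1+f) for I = 1:

```python
# Numerically locate the minimum of the two-task utilization expression
# U(f) = 1 - f(1-f)/(I+f) with I = 1, and compare against the closed-form
# answer f = sqrt(2) - 1, U = 2(sqrt(2) - 1).
import math

def U(f, I=1):
    return 1 - f * (1 - f) / (I + f)

fs = [k / 100000 for k in range(1, 100000)]   # grid over 0 < f < 1
f_min = min(fs, key=U)
print(round(f_min, 3))      # ~0.414, i.e. sqrt(2) - 1
print(round(U(f_min), 3))   # ~0.828, i.e. 2*(sqrt(2) - 1)
```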

General Case Theorem: For a set of n tasks with fixed priority assignment, the least upper bound to the processor utilization factor is U = n(2^(1/n) − 1).
Or, equivalently, a set of n periodic tasks scheduled by the RM algorithm will always meet their deadlines, for all task start times, if
C1/T1 + C2/T2 + … + Cn/Tn ≤ n(2^(1/n) − 1)

General Case (contd.) As n → ∞, the bound rapidly converges to ln 2 ≈ 0.69. This is a rather tight bound; but note that it is just the least upper bound: a task set with larger U may still be schedulable. E.g., if the fractional part {Tn/Ti} = 0 for i = 1, 2, …, n−1, then U = 1. How to check if a specific task set with n tasks is schedulable? If U ≤ n(2^(1/n) − 1) then it is schedulable; otherwise, use Theorem 1!

Theorem 1 Recalled Theorem 1: A critical instant for any task occurs whenever the task is requested simultaneously with requests of all higher priority tasks Can use this to determine whether a given priority assignment will yield a feasible scheduling algorithm if requests for all tasks at their critical instants are fulfilled before their respective deadlines, then the scheduling algorithm is feasible Applicable to any static priority scheme… not just RM

Example #1 Task 1: C1 = 20; T1 = 100; D1 = 100. Task 2: C2 = 30; T2 = 145; D2 = 145. Is this task set schedulable? U = 20/100 + 30/145 = 0.41 ≤ 2(2^(1/2) − 1) = 0.828. Yes!

Example #2 Task 1: C1 = 20; T1 = 100; D1 = 100. Task 2: C2 = 30; T2 = 145; D2 = 145. Task 3: C3 = 68; T3 = 150; D3 = 150. Is this task set schedulable? U = 20/100 + 30/145 + 68/150 = 0.86 > 3(2^(1/3) − 1) = 0.779. Can’t say! Need to apply Theorem 1.
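The utilization test used in both examples can be sketched as a small helper. This is an illustrative implementation of the sufficient-only bound, not code from the lecture:

```python
# Sufficient (but not necessary) RM schedulability test: compare the task
# set's utilization U against the Liu-Layland bound n(2^(1/n) - 1).
# Passing guarantees schedulability; failing is inconclusive.

def ll_bound(n):
    return n * (2 ** (1.0 / n) - 1)

def rm_sufficient(tasks):
    """tasks: list of (C, T). True iff U <= n(2^(1/n) - 1)."""
    u = sum(c / t for c, t in tasks)
    return u <= ll_bound(len(tasks))

print(round(ll_bound(2), 3))    # 0.828
print(round(ll_bound(10), 3))   # 0.718, approaching ln 2 ~= 0.693
print(rm_sufficient([(20, 100), (30, 145)]))             # True  (Example #1)
print(rm_sufficient([(20, 100), (30, 145), (68, 150)]))  # False (Example #2: inconclusive)
```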

Example #2 (contd.) Consider the critical instant of τ3, the lowest priority task. τ1 and τ2 must each execute at least once before τ3 can finish; therefore, the completion time of τ3 is ≥ C1 + C2 + C3 = 20 + 30 + 68 = 118. However, τ1 is initiated one additional time in (0, 118); taking this into consideration, the completion time of τ3 = 2C1 + C2 + C3 = 2×20 + 30 + 68 = 138. Since 138 < D3 = 150, the task set is schedulable.

Response Time Analysis for RM For the highest priority task, the worst case response time R is its own computation time C: R = C. Lower priority tasks suffer interference from higher priority tasks: Ri = Ci + Ii, where Ii is the interference in the interval [t, t+Ri].

Response Time Analysis (contd.) Consider task i, and a higher priority task j. Interference from task j during Ri: the number of releases of task j is ⌈Ri/Tj⌉; each release consumes Cj units of processor time; so the total interference from task j is ⌈Ri/Tj⌉ Cj. Let hp(i) be the set of tasks with priorities higher than that of task i. Total interference to task i from all higher priority tasks during Ri:
Ii = Σ_{j ∈ hp(i)} ⌈Ri/Tj⌉ Cj

Response Time Analysis (contd.) This leads to:
Ri = Ci + Σ_{j ∈ hp(i)} ⌈Ri/Tj⌉ Cj
The smallest Ri satisfying this is the worst case response time. It is a fixed point equation and can be solved iteratively:
Ri^(n+1) = Ci + Σ_{j ∈ hp(i)} ⌈Ri^(n)/Tj⌉ Cj, starting from Ri^(0) = Ci

Algorithm
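The iterative algorithm can be sketched as follows. The code is an illustrative implementation of the standard fixed-point iteration (function names are my own); the task set is the one from Example #2:

```python
# Iterative response-time analysis: R = C_i + sum over hp(i) of ceil(R/T_j)*C_j,
# starting from R = C_i and iterating until a fixed point (or deadline miss).
import math

def response_time(tasks, i):
    """Worst-case response time of tasks[i] under fixed priorities.
    tasks: list of (C, T), index 0 = highest priority, deadline D = T."""
    C_i, T_i = tasks[i]
    r = C_i
    while True:
        interference = sum(math.ceil(r / t) * c for c, t in tasks[:i])
        r_next = C_i + interference
        if r_next == r:         # fixed point reached: worst-case response time
            return r
        if r_next > T_i:        # exceeds the deadline: not schedulable
            return None
        r = r_next

# Example #2: tau1=(20,100), tau2=(30,145), tau3=(68,150)
tasks = [(20, 100), (30, 145), (68, 150)]
print([response_time(tasks, i) for i in range(3)])   # [20, 50, 138]
```

For τ3 the iteration visits 68 → 118 → 138 → 138, reproducing the completion time 138 computed by hand in Example #2.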

RM Schedulability Consider tasks τ1, τ2, … τn in decreasing order of priority. For task τi to be schedulable, a necessary and sufficient condition is that we can find some t ∈ [0, Ti] satisfying
t = ⌈t/T1⌉C1 + ⌈t/T2⌉C2 + … + ⌈t/Ti−1⌉Ci−1 + Ci
But do we need to check exhaustively for all values of t in [0, Ti]?

RM Schedulability (contd.) Observation: the RHS of the equation jumps only at multiples of T1, T2, … Ti−1. It is therefore sufficient to check whether the inequality
t ≥ ⌈t/T1⌉C1 + ⌈t/T2⌉C2 + … + ⌈t/Ti−1⌉Ci−1 + Ci
is satisfied for some t ∈ [0, Ti] that is a multiple of one or more of T1, T2, … Ti−1.

RM Schedulability (contd.) Notation:
Wi(t) = Σ_{j=1..i} Cj ⌈t/Tj⌉
Li(t) = Wi(t)/t
Li = min_{0 ≤ t ≤ Ti} Li(t)
L = max{Li}
General necessary & sufficient condition: task τi can be scheduled iff Li ≤ 1.
Practically, we only need to compute Wi(t) at the scheduling points Si = {kTj | j = 1, …, i; k = 1, …, ⌊Ti/Tj⌋}; these are the times at which tasks are released, and Wi(t) is constant between them.
Practical RM schedulability conditions: if min_{t ∈ Si} Wi(t)/t ≤ 1, task τi is schedulable; if max_{i ∈ {1,…,n}} { min_{t ∈ Si} Wi(t)/t } ≤ 1, then the entire set is schedulable.

Example Task set:
τ1: T1=100, C1=20
τ2: T2=150, C2=30
τ3: T3=210, C3=80
τ4: T4=400, C4=100
Then the scheduling points are:
S1 = {100}
S2 = {100, 150}
S3 = {100, 150, 200, 210}
S4 = {100, 150, 200, 210, 300, 400}
Plots of Wi(t): task τi is RM-schedulable iff any part of the plot of Wi(t) falls on or below the Wi(t) = t line.
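The scheduling-point test for this task set can be sketched in code. This is an illustrative implementation of the exact condition above (function names are my own):

```python
# Exact RM schedulability test (Lehoczky-Sha-Ding style): evaluate the
# cumulative demand W_i(t) only at the scheduling points S_i and check
# whether W_i(t) <= t at any of them.
import math

def scheduling_points(tasks, i):
    """S_i = {k*T_j | j <= i, k = 1..floor(T_i/T_j)} for (C, T) tasks
    listed in decreasing priority order."""
    T_i = tasks[i][1]
    pts = set()
    for _, T_j in tasks[:i + 1]:
        pts.update(k * T_j for k in range(1, T_i // T_j + 1))
    return sorted(pts)

def W(tasks, i, t):
    return sum(math.ceil(t / T_j) * C_j for C_j, T_j in tasks[:i + 1])

def rm_schedulable(tasks, i):
    return any(W(tasks, i, t) <= t for t in scheduling_points(tasks, i))

tasks = [(20, 100), (30, 150), (80, 210), (100, 400)]
print(scheduling_points(tasks, 3))   # [100, 150, 200, 210, 300, 400]
for i in range(4):
    print(i + 1, rm_schedulable(tasks, i))
# tau1..tau3 pass; tau4 fails (total U ~= 1.03 > 1, so it cannot pass)
```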

Deadline Monotonic Priority Assignment (DMP) The fixed priority of a process is inversely related to its deadline (with Di ≤ Ti): Di < Dj ⟹ Pi > Pj. Optimal: it can schedule any task set that any other static priority assignment can. In particular, there are task sets with Di < Ti for which RM fails but DMP succeeds.

RM in Distributed/Networked Embedded Systems? Task is essentially scheduled on multiple resources in series Need to schedule communication messages over the interconnect propagation delay & jitter queuing delay & jitter Divide end-to-end deadline into subsystem deadlines Need buffering to mitigate jitter problem as task may arrive too early

Can one do better? Yes… by using dynamic priority assignment In fact, there is a scheme for dynamic priority assignment for which the least upper bound on the processor utilization is 1 More later...

Transient Overload

Task | Ci (peak) | Ci (ave) | Ti  | Criticality
τ1   | 20        | 10       | 100 | critical
τ2   | 30        | 25       | 150 | critical
τ3   | 80        | 40       | 210 | critical
τ4   | 100       | –        | 400 | non-critical

This set is not RM-schedulable if all tasks take their worst case execution times, but it is schedulable in the average case. Can we arrange so that the three critical tasks always meet their deadlines, while the non-critical task meets many of its deadlines?

Dealing with Transient Overload Transient system overload may cause some deadlines to be missed. Lower priority tasks are likely to miss their deadlines first in an overload situation. But the more important task may have been assigned a lower priority: priority != importance. One could assign priorities according to importance, e.g. by artificially giving an important task a smaller deadline; but this reduces schedulability.

Example Consider two tasks: Task 1: C1 =3.5; T1 =10; D1 =10; less important Task 2: C2 =7; T2 =14; D2 =13; critical task 2 will have lower priority completion time test shows that 2 is not schedulable but is important and must be guaranteed! Making priority 2 of 1 will make unschedulable

A Better Approach: Period Transformation One could transform the period of τ2 to 7, yielding a modified task set:
Task τ1: C1 = 3.5; T1 = 10; D1 = 10
Task τ2a: C2a = 3.5; T2a = 7; D2a = 6
Note: in period transformation, the real deadline is at the last transformed period; the deadline at the second transformed period of τ2a is at most 6 (7 + 6 = 13). Now τ2a has the higher priority, and the task set is schedulable!

Using Period Transformation to Improve Schedulability Consider two tasks:
Task τ1: C1 = 5; T1 = 10
Task τ2: C2 = 5; T2 = 15
These two tasks are just schedulable, with utilization 83.3%. If we transform τ1:
Task τ1a: C1a = 2.5; T1a = 5
Task τ2: C2 = 5; T2 = 15
the periods become harmonically related and the utilization bound becomes 100%.

Sporadic Tasks Tasks that are released irregularly, often in response to some event in the environment no periods associated but must have some maximum release rate (minimum inter-arrival time) Otherwise no limit on workload! How to deal with them? consider them as periodic with a period equal to the minimum inter-arrival time other approaches…

Handling Sporadic Tasks: Approach 1 Define fictitious periodic task of highest priority and of some chosen execution period During the time this task is scheduled to run, the processor can run any sporadic task that is awaiting service if no sporadic task awaiting service, processor is idle Outside this time the processor attends to periodic tasks Problem: wasteful!

Handling Sporadic Tasks: Approach 2 (Deferred Server) Less wasteful… Whenever the processor is scheduled to run sporadic tasks, and finds no such tasks awaiting service, it starts executing other (periodic) tasks in order of priority However, if a sporadic task arrives, it preempts the periodic task and can occupy a total time up to the time allotted for sporadic task Schedulability?

Example (Timing diagrams comparing Approach 1 with Approach 2, the Deferred Server.)

Task Synchronization So far, we considered independent tasks. However, tasks do interact: semaphores, locks, monitors, rendezvous etc.; shared data; use of non-preemptable resources. This jeopardizes the system’s ability to meet timing constraints, e.g. it may lead to an indefinite period of priority inversion, where a high priority task is prevented from executing by a low priority task.

Priority Inversion Example Let τ1 & τ3 share a resource, and let τ1 have the higher priority. Let τ2 be an intermediate priority task that does not share any resource with either. Consider:
τ3 obtains a lock on the semaphore S and enters its critical section to use a shared resource.
τ1 becomes ready to run and preempts τ3. Next, τ1 tries to enter its critical section by trying to lock S. But S is locked, and therefore τ1 is blocked.
τ2 becomes ready to run. Since only τ2 and τ3 are ready to run, τ2 preempts τ3 while τ3 is in its critical section.

Priority Inversion Example (contd.) What would we prefer? τ1, being the highest priority task, should be blocked no longer than the time τ3 takes to complete its critical section. But in reality the duration of blocking is unpredictable: τ3 can be preempted until τ2 and any other pending intermediate priority tasks are completed. The duration of priority inversion is thus a function of the task execution times, and is not bounded by the duration of critical sections.

Process Interactions & Blocking Priority inversions Blocking Priority inheritance

Example: Priority Inversion

Example: Priority Inheritance

Response Time Calculations & Blocking With blocking, R = C + B + I; solve by forming a recurrence relation as before. With priority inheritance, the blocking term for task i is:
Bi = Σ_{k=1..K} usage(k, i) CS(k)

Response Time Calculations & Blocking (contd.) Where usage is a 0/1 function: usage(k, i) = 1 if resource k is used by at least one process with a priority less than i, and by at least one process with a priority greater than or equal to i; otherwise it is 0. CS(k) is the computational cost of executing the k-th critical section.
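The blocking-time formula can be illustrated with a small sketch. The resource layout below is hypothetical, chosen only to exercise the usage() definition:

```python
# Compute B_i = sum over resources k of usage(k, i) * CS(k), the priority-
# inheritance blocking bound from the slide. Priorities are numeric with
# 0 = highest, so "priority less than i" means a larger number than i.

def blocking(i, resources):
    """resources: list of (users, CS) where users is the set of priority
    levels of the processes using the resource, CS its critical-section cost."""
    b = 0
    for users, cs in resources:
        # usage(k, i) = 1 iff some user is lower priority than i (number > i)
        # and some user has priority >= i (number <= i)
        if any(p > i for p in users) and any(p <= i for p in users):
            b += cs
    return b

# Hypothetical layout: resource A shared by priorities {0, 2}, CS = 3;
#                      resource B shared by priorities {1, 2}, CS = 4
resources = [({0, 2}, 3), ({1, 2}, 4)]
print([blocking(i, resources) for i in range(3)])   # [3, 7, 0]
```

The lowest-priority process (here level 2) is never blocked by this term, which matches the intuition that blocking is inflicted by lower-priority lock holders.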

Priority Ceiling Protocols Basic idea: the priority ceiling of a binary semaphore S is the highest priority of all tasks that may lock S. When a task τ attempts to enter one of its critical sections, it is suspended unless its priority is higher than the priority ceilings of all semaphores currently locked by tasks other than τ. If task τ is unable to enter its critical section for this reason, the task that holds the lock on the semaphore with the highest priority ceiling is said to be blocking τ, and hence inherits the priority of τ.

Priority Ceiling Protocols (contd.) Two forms Original ceiling priority protocol (OCPP) Immediate ceiling priority protocol (ICPP) On a single processor system A high priority process can be blocked at most once during its execution by lower priority processes Deadlocks are prevented Transitive blocking is prevented Mutual exclusive access to resources is ensured (by the protocol itself)

Example of Priority Ceiling Protocol in Operation Two tasks τ1 and τ2 with two shared data structures protected by binary semaphores S1 and S2:
τ1: {… Lock(S1) … Lock(S2) … Unlock(S2) … Unlock(S1) …}
τ2: {… Lock(S2) … Lock(S1) … Unlock(S1) … Unlock(S2) …}
Assume τ1 has higher priority than τ2. Note: the priority ceilings of both S1 & S2 = priority of τ1.

Example of Priority Ceiling Protocol in Operation (contd.) (Timeline figure: τ2 locks S2; τ1 preempts and attempts to lock S1 but is blocked, since its priority is not higher than the ceiling of the locked S2; otherwise τ1 and τ2 would be deadlocked. τ2 inherits the priority of τ1 until it unlocks S2, after which τ1 locks S1 and S2 and runs to completion.)

OCPP Each process has a static default priority assigned (perhaps by the deadline monotonic scheme) Each resource has a static ceiling value defined, this is the maximum priority of the processes that use it A process has a dynamic priority that is the maximum of its own static priority and any it inherits due to it blocking higher priority processes A process can only lock a resource if its dynamic priority is higher than the ceiling of any currently locked resource (excluding any that it has already locked itself).

Example of OCPP

ICPP Each process has a static default priority assigned (perhaps by the deadline monotonic scheme) Each resource has a static ceiling value defined, this is the maximum priority of the processes that use it A process has a dynamic priority that is the maximum of its own static priority and the ceiling values of any resources it has locked.

Example of ICPP

OCPP vs. ICPP Worst case behavior is identical from a scheduling point of view. ICPP is easier to implement than the original (OCPP), as blocking relationships need not be monitored. ICPP leads to fewer context switches, as blocking occurs prior to first execution. ICPP requires more priority movements, as these happen with all resource usages; OCPP changes a priority only if an actual block has occurred.

Schedulability Impact of Task Synchronization Let Bi be the duration for which τi is blocked by lower priority tasks. The effect of this blocking can be modeled as if τi’s utilization were increased by an amount Bi/Ti. The effect of having a deadline Di before the end of the period Ti can also be modeled as if the task were blocked for Ei = (Ti − Di) by lower priority tasks, i.e. as if the utilization increased by Ei/Ti.

Schedulability Impact of Task Synchronization (contd.) Theorem: A set of n periodic tasks scheduled by the RM algorithm will always meet its deadlines, for all task phasings, if for each i = 1, …, n:
C1/T1 + … + Ci/Ti + (Bi + Ei)/Ti ≤ i(2^(1/i) − 1)

Cooperative Scheduling True preemptive behavior is not always possible: e.g., context switches & other kernel routines are non-preemptible, which increases the situations in which blocking may occur. Cooperative or deferred preemption splits processes into non-preemptive blocks, separated by ‘de-schedule’ requests, with block execution times ≤ Bmax, the maximum blocking time in the system. Mutual exclusion is via non-preemption. It exploits the non-cumulative property of ICPP (a task may not be blocked both by an application task and by a kernel routine). Advantages: increases schedulability and lowers C. The last block, of execution time Fi, runs with no interference, so the recurrence becomes
wi^(n+1) = Bmax + Ci − Fi + Σ_{j ∈ hp(i)} ⌈wi^(n)/Tj⌉ Cj
When this converges (wi^(n+1) = wi^(n)), the response time is Ri = wi^(n) + Fi.

Release Jitter A key issue in distributed systems. A sporadic task may be released at times 0, 5, 25, 45, and so on; i.e., at times 0, T−J, 2T−J, 3T−J, and so on (here with period T = 20 and release jitter J = 15).

Release Jitter (contd.) Examination of the derivation of the schedulability equation shows that process i will suffer one interference from the sporadic task S if Ri ∈ [0, T−J), two if Ri ∈ [T−J, 2T−J), three if Ri ∈ [2T−J, 3T−J), and so on. In general, the interference term from a jittery task j becomes ⌈(Ri + Jj)/Tj⌉ Cj.

Release Jitter (contd.) In general, periodic tasks do not suffer jitter. But an implementation may restrict the granularity of the system timer which releases periodic tasks; a periodic task may therefore suffer jitter. If the response time is to be measured relative to the real release time, then the jitter value must be added to that previously calculated: Ri(periodic) = Ri + Ji.

Arbitrary Deadlines The case when deadline Di < Ti is easy… The case when Di > Ti is much harder: multiple initiations of the same task may be alive simultaneously, so one may have to check multiple task initiations to obtain the worst case response time. Example: consider two tasks
Task 1: C1 = 28, T1 = 80
Task 2: C2 = 71, T2 = 110
Assume all deadlines to be infinite.

Arbitrary Deadlines (contd.) Response times for task 2:

initiation | completion time | response time
0          | 127             | 127
110        | 226             | 116
220        | 353             | 133
330        | 452             | 122
440        | 551             | 111
550        | 678             | 128
660        | 777             | 117
770        | 876             | 106

The response time is worst not for the first initiation (it is 133, for the initiation at 220). It is therefore not sufficient to consider just the first initiation.
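The table can be reproduced by iterating over the successive initiations in the busy period. This is an illustrative sketch of the analysis, assuming unbounded deadlines as stated:

```python
# Busy-period analysis for task 2 (C2=71, T2=110) with task 1 (C1=28, T1=80)
# at higher priority: for each initiation q, solve the fixed point
# w(q) = (q+1)*C2 + ceil(w(q)/T1)*C1, response = w(q) - q*T2,
# until a job finishes before the next release (busy period ends).
import math

C1, T1 = 28, 80
C2, T2 = 71, 110

rows = []
q = 0
while True:
    w = (q + 1) * C2
    while True:
        w_next = (q + 1) * C2 + math.ceil(w / T1) * C1
        if w_next == w:
            break
        w = w_next
    rows.append((q * T2, w, w - q * T2))   # (initiation, completion, response)
    if w <= (q + 1) * T2:                  # no carry-over into the next period
        break
    q += 1

for init, done, resp in rows:
    print(init, done, resp)
print("worst case:", max(r for _, _, r in rows))   # 133, at initiation 220
```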

Schedulability Condition for Arbitrary Deadlines Analysis for situations where Di (and hence potentially Ri) can be greater than Ti. The number of releases that need to be considered is bounded by the lowest value of q = 0, 1, 2, … for which
wi(q) ≤ (q + 1)Ti
where wi(q) is the solution of the fixed point equation
wi(q) = (q + 1)Ci + Σ_{j ∈ hp(i)} ⌈wi(q)/Tj⌉ Cj

Arbitrary Deadlines (contd.) The response time of the q-th initiation is Ri(q) = wi(q) − qTi, and the worst-case response time is the maximum value found over the q’s examined:
Ri = max_q Ri(q)
Note: for Di ≤ Ti, the relation is true for q = 0 if the task can be scheduled, in which case the analysis simplifies to the original one. If any Ri > Di, the task is not schedulable.

Arbitrary Deadlines with Release Jitter Combining the two extensions, the interference term uses ⌈(wi(q) + Jj)/Tj⌉ Cj, and the response time becomes Ri(q) = wi(q) − qTi + Ji.