Rate Monotonic Analysis Xinrong Zhou


1 Rate Monotonic Analysis
Xinrong Zhou (xzhou@abo.fi)
G572 Real-time Systems

2 RMA
RMA is a quantitative method that makes it possible to analyze whether a system can be scheduled.
With the help of RMA it is possible to:
select the best task priority allocation
select the best scheduling protocol
select the best implementation scheme for aperiodic activities
If the RMA rules are followed mechanically, the optimal implementation (where all hard deadlines are met) is reached with the highest probability.

3 RMA System model
The scheduler selects tasks by priority; if a task with higher priority becomes available, the currently running task is interrupted and the higher-priority task is run (preemption).
Rate Monotonic: priorities are assigned monotonically according to the frequency (rate) of each periodic process.

4 Contents
Under what conditions can a system be scheduled when priorities are allocated by RMA?
Periodic tasks
Blocking
Arbitrary deadlines
Aperiodic activities

5 Why are deadlines missed?
Two factors:
1. the amount of computation capacity that is globally available
2. how this capacity is used
Factor 1 can be easily quantified: MIPS, maximum bandwidth of the LAN.
Factor 2 contains the processing capacity used by the operating system and the distribution of capacity between tasks.
RMA goal: optimize the distribution so that deadlines are met, or "provide for graceful degradation".

6 Utilization
The behaviour of task Ti depends on:
preemption by tasks with higher priority (utilization describes the relative load of the higher-priority tasks)
its own running time ci
blocking by tasks with lower priority (lower-priority tasks may hold critical resources)

7 Example
T1: runtime c = 3, period T = 9, utilization 33.33 %, priority 3
T2: runtime c = 5, period T = 15, utilization 33.33 %, priority 2
T3: runtime c = 5, period T = 23, utilization 21.73 %, priority 1
Total utilization U = 88.39 %
[Timeline diagram: T3 misses its deadline at t = 23.]

8 Harmonic periods help
An application is harmonic if every task's period exactly divides every longer period.
T1: runtime c = 3, period T = 9, utilization 33.33 %, priority 3
T2: runtime c = 6, period T = 18, utilization 33.33 %, priority 2
T3: runtime c = 8, period T = 36, utilization 21.66 %, priority 1
Total utilization U = 88.32 %
[Timeline diagram: all deadlines are met over the interval 0-36.]

9 Liu & Layland
Assumptions:
Every task can be interrupted (preemption).
Tasks can be ordered by inverse period: pr(Ti) < pr(Tj) iff Pj < Pi.
Every task is independent and periodic.
Tasks' relative deadlines are equal to their periods.
Only one iteration of each task is active at a time!

10 Liu & Layland
When can a group of tasks be scheduled by RMA?
RMA 1: if the total utilization of the tasks satisfies U = c1/T1 + ... + cn/Tn <= U(n) = n(2^(1/n) - 1), the tasks can be scheduled so that every deadline is met.
As the number of tasks grows, U(n) approaches ln 2, i.e. the processor may be idle about 31 % of the time!

11 Variation of U(n) by n
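The graph itself is not reproduced in the transcript; as a substitute, the Liu & Layland bound and a few of its values can be written out (these numbers follow directly from the formula, not from the original figure):

```latex
U(n) = n\left(2^{1/n}-1\right):\quad
U(1)=1.000,\;\; U(2)\approx 0.828,\;\; U(3)\approx 0.780,\;\;
U(4)\approx 0.757,\;\; \lim_{n\to\infty} U(n)=\ln 2 \approx 0.693
```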

12 Example
In the example below, the total utilization is 6/15 + 4/20 + 5/30 = 0.767.
Because U(3) = 0.780, the application can be scheduled.
T1: running time c = 6, period T = 15, utilization 0.4, priority 3
T2: running time c = 4, period T = 20, utilization 0.2, priority 2
T3: running time c = 5, period T = 30, utilization 0.167, priority 1
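As a quick sanity check of this example, here is a small C sketch (not from the original slides) that computes the total utilization and compares it against the Liu & Layland bound U(n):

```c
/* Minimal sketch: total utilization vs. the Liu & Layland bound
 * U(n) = n(2^(1/n) - 1), for the task set of this example.       */
#include <math.h>
#include <stdio.h>

int main(void) {
    double c[] = {6.0, 4.0, 5.0};    /* running times */
    double T[] = {15.0, 20.0, 30.0}; /* periods       */
    int n = 3;

    double U = 0.0;
    for (int i = 0; i < n; i++)
        U += c[i] / T[i];

    double bound = n * (pow(2.0, 1.0 / n) - 1.0);
    printf("U = %.3f, U(%d) = %.3f -> %s\n", U, n, bound,
           U <= bound ? "schedulable by RMA 1" : "bound test inconclusive");
    return 0;  /* prints U = 0.767, U(3) = 0.780 -> schedulable by RMA 1 */
}
```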

13 Why are shorter periods prioritized?
T1: runtime c = 3, period T = 5, non-RMA priority 1, RMA priority 2
T2: runtime c = 4, period T = 10, non-RMA priority 2, RMA priority 1
In a realistic application c is small compared with T. Shortest period first ensures that the negative preemption effects are minimized.
[Timeline diagram: with the non-RMA priorities T1 misses its deadline at t = 5; with the RMA priorities both tasks meet their deadlines.]

14 Why are shorter periods prioritized?
Slack: S = T - c.
In the example, c2 > T1 - c1.
In practice c << T, i.e. the slack is roughly proportional to the period.
By giving the shortest period the highest priority, the preemption effect is minimized.
NOTE: We are primarily interested in using priorities to keep response times within the deadlines!

15 How can 100 % utilization be achieved?
Critical zone theorem: if a set of independent periodic tasks starts synchronously and every task meets its first deadline, then every future deadline will also be met.
Worst case: a task starts at the same time as all higher-priority tasks.
Scheduling points for task Ti: Ti's first deadline, and the ends of the periods (within Ti's first deadline) of every task with higher priority than Ti.
If we can show that there is at least one scheduling point by which the task has had time to run to completion once, then the task can be scheduled.
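A hedged C sketch of this idea (not from the slides): instead of enumerating the scheduling points explicitly, it uses the equivalent iterative completion-time (response-time) test, assuming tasks are indexed in decreasing priority order and deadlines equal periods:

```c
/* Sketch of the completion-time test. Iterating
 * R = c[i] + sum_j ceil(R/T[j]) * c[j] over the higher-priority tasks j
 * gives the same answer as checking the scheduling points on the slide. */
#include <math.h>
#include <stdio.h>

static int meets_first_deadline(const double *c, const double *T, int i) {
    double R = c[i], prev = 0.0;
    while (R != prev && R <= T[i]) {        /* iterate to a fixed point   */
        prev = R;
        R = c[i];
        for (int j = 0; j < i; j++)         /* preemption by higher prio  */
            R += ceil(prev / T[j]) * c[j];
    }
    return R <= T[i];
}

int main(void) {
    /* Task set of the earlier example: c = {3, 5, 5}, T = {9, 15, 23}. */
    double c[] = {3.0, 5.0, 5.0}, T[] = {9.0, 15.0, 23.0};
    for (int i = 0; i < 3; i++)
        printf("T%d %s its first deadline\n", i + 1,
               meets_first_deadline(c, T, i) ? "meets" : "misses");
    return 0;  /* T1 and T2 meet, T3 misses - as in the earlier example */
}
```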

16 Overhead
Periodic overhead:
overhead to make tasks periodic
overhead to switch between tasks
System overhead:
overhead of the operating system
UNIX, Windows NT: difficult to know what happens in the background

17 Periodic Overhead
To execute a task periodically, the clock must be read and the next execution time must be calculated:

Next_Time := Clock;
loop
   Next_Time := Next_Time + Period;
   -- ... periodic task code here ...
   delay until Next_Time;
end loop;

Task switch: saving and restoring the task's "context"; account for two task switches per period in the running time.
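For comparison, a minimal POSIX sketch of the same pattern is shown below (an illustration assuming a POSIX system with clock_nanosleep, not part of the original slide; the 10 ms period is an arbitrary example):

```c
/* Drift-free periodic loop using an absolute wake-up time (POSIX). */
#include <time.h>

#define PERIOD_NS 10000000L   /* 10 ms, an assumed example period */

void periodic_task(void) {
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);   /* Next_Time := Clock        */
    for (;;) {
        next.tv_nsec += PERIOD_NS;           /* Next_Time := Next_Time + Period */
        if (next.tv_nsec >= 1000000000L) {   /* normalize the timespec    */
            next.tv_nsec -= 1000000000L;
            next.tv_sec += 1;
        }
        /* ... periodic task code here ... */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
}
```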

18 Periodic Overhead
Data can be gathered by benchmarking, e.g. for the eCOS operating system with:
Board: ARM AEB-1 Evaluation Board
CPU: Sharp LH77790A, 24 MHz
Create thread: ave 125.56, min 102.67, max 148.00, var 11.89, confidence 50 % / 25 %
Thread switch: ave 75.16, min 74.00, max 211.33, var 2.13, confidence 99 %
Resume [suspended low prio] thread: ave 45.69, min 44.00, max 46.00, var 0.38, confidence 95 % / 4 %
Resume [high priority] thread: ave 136.53, min 135.33, max 160.00, var 1.96
Suspend [runnable] thread: ave 42.58, min 36.67, max 43.33, var 0.49, confidence 70 %

19 Weak points of "classic" RMA
It requires a preemptive scheduler.
Blocking can stop the system.
Aperiodic activities must be "normalized".
Tasks which have jitter must be handled specially.
RMA can't analyze multiprocessor systems.

20 Blocking
If two tasks share the same resource, those tasks are dependent.
Resources are used serially and mutually exclusively.
Shared resources are often implemented using semaphores (mutexes) with two operations: get and release.
Blocking causes two problems: priority inversion and deadlocks.

21 Priority inversion
Priority inversion happens when a task with a lower priority blocks a task with a higher priority.
E.g. the Mars Pathfinder lander suffered repeated system resets because of priority inversion.
[Timeline diagram: T3 allocates R; T1 preempts and tries to allocate R, but blocks while R is reserved; T2 preempts T3 and runs until it terminates; only then can T3 continue and release R, after which T1 allocates and releases R. The blocking time of T1 is unbounded.]

22 Deadlock
Deadlock means that there is a circular resource allocation:
T1 allocates R1
T2 interrupts T1
T2 allocates R2
T2 tries to allocate R1 but blocks, so T1 runs again
T1 tries to allocate R2 but blocks
Both tasks are now blocked: deadlock
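The same circular allocation can be written down as a small POSIX sketch (illustrative only; the thread and mutex names are made up for the example):

```c
/* Two tasks taking the same two locks in opposite order can deadlock. */
#include <pthread.h>

pthread_mutex_t R1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t R2 = PTHREAD_MUTEX_INITIALIZER;

void *task1(void *arg) {
    pthread_mutex_lock(&R1);   /* T1 allocates R1                       */
    pthread_mutex_lock(&R2);   /* ...and may block here waiting for T2  */
    pthread_mutex_unlock(&R2);
    pthread_mutex_unlock(&R1);
    return arg;
}

void *task2(void *arg) {
    pthread_mutex_lock(&R2);   /* T2 allocates R2                       */
    pthread_mutex_lock(&R1);   /* ...and may block here waiting for T1  */
    pthread_mutex_unlock(&R1);
    pthread_mutex_unlock(&R2);
    return arg;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, task1, NULL);
    pthread_create(&b, NULL, task2, NULL);
    pthread_join(a, NULL);     /* may never return if the deadlock occurs */
    pthread_join(b, NULL);
    return 0;
}
/* The fix is a design rule, not a scheduler feature: acquire R1 before R2
 * in every task, so the circular wait cannot arise. */
```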

23 Deadlock
Deadlocks are always design faults.
Deadlocks can be avoided by:
special resource allocation algorithms
deadlock-detection algorithms
static analysis of the system (resource allocation graphs, matrix method)
Deadlocks are discussed more thoroughly in connection with parallel programming.

24 Priority inversion: controlling the blocking time
It is difficult to avoid priority inversion when blocking happens.
Scheduling with fixed priorities alone allows unbounded blocking times.
Three algorithms for minimizing the blocking time:
PIP: priority inheritance
PCP: priority ceiling
HL: highest locker

25 Priority Inheritance
Three rules:
Inheritance rule: a task that blocks a higher-priority task temporarily inherits the priority of the highest-priority task it blocks.
Allocation rule: a task can lock a resource only if no other task has locked that resource; otherwise it blocks until the resource is released, and the releasing task then continues at its original priority.
Inheritance is transitive: if T1 blocks T2 and T2 blocks T3, then T1 inherits T3's priority.
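On POSIX systems priority inheritance is typically selected per mutex. A minimal sketch, assuming the platform supports the optional _POSIX_THREAD_PRIO_INHERIT feature; error handling is omitted:

```c
/* Creating a mutex that uses priority inheritance (POSIX). */
#include <pthread.h>

pthread_mutex_t resource;

void init_resource(void) {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    /* A low-priority owner of `resource` temporarily inherits the priority
       of the highest-priority task blocked on it. */
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    pthread_mutex_init(&resource, &attr);
    pthread_mutexattr_destroy(&attr);
}
```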

26 Priority inheritance
In the example, A has the lowest priority and C the highest priority.

27 Priority inheritance
The blocking time is now shorter.
Maximum blocking time for a task: the length of min(m, n) critical sections, where
m = number of critical sections in the application
n = number of tasks with higher priority
("chain blocking")
PIP can't prevent deadlock in all cases; the priority ceiling algorithm can prevent deadlock and further reduce the blocking time.

28 Priority Inheritance

29 Priority Ceiling
Every resource has a ceiling: the highest priority of any task that can lock (or will require) the resource.
The highest ceiling value among the currently locked resources is stored in a system variable called the system ceiling.
A task can lock or release a resource if:
it has already locked the resource whose ceiling equals the system ceiling, or
the task's priority is higher than the system ceiling.
Otherwise the task blocks until the resource becomes available and the system ceiling decreases.
The blocking task inherits the priority of the highest-priority task it blocks.
Inheritance is transitive.
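The locking rule above can be sketched as a small predicate (illustrative data types only, not a real RTOS API):

```c
/* Sketch of the PCP locking rule. */
typedef struct {
    int system_ceiling;        /* highest ceiling among currently locked resources */
    int system_ceiling_owner;  /* id of the task that locked that resource         */
} pcp_state_t;

/* May `task` (with priority `prio`) lock a new resource right now? */
int pcp_may_lock(const pcp_state_t *s, int task, int prio) {
    return prio > s->system_ceiling          /* priority strictly above the system ceiling */
        || task == s->system_ceiling_owner;  /* or it holds the ceiling-defining resource  */
}
```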

30 Priority Ceiling R1 ceiling = Prio B R2 ceiling = Prio C
system ceiling = R1 ceiling

31 Priority Ceiling
The blocking time for C is now 0.
No "chain blocking" is possible.
Deadlock is not possible in any case.
Complicated to implement.

32 Highest locker
Every resource has a ceiling: the highest priority of any task that can lock the resource.
A task that locks a resource inherits the resource's ceiling + 1 as its new priority level.
Simpler than PCP to implement.
In practice it has the same properties as PCP; often the "best" alternative.
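POSIX's "priority protect" mutex protocol implements essentially this immediate-ceiling idea: the owner runs at the resource's ceiling while it holds the lock. A minimal sketch, assuming the platform supports the optional _POSIX_THREAD_PRIO_PROTECT feature; the ceiling value 30 is an arbitrary example:

```c
/* Creating a mutex with a priority ceiling (POSIX "priority protect"). */
#include <pthread.h>

pthread_mutex_t resource;

void init_resource(void) {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_PROTECT);
    /* Ceiling = highest priority of any task that may lock the resource;
       30 is just an example value. While a task holds the mutex it runs
       at (at least) this priority. */
    pthread_mutexattr_setprioceiling(&attr, 30);
    pthread_mutex_init(&resource, &attr);
    pthread_mutexattr_destroy(&attr);
}
```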

33 Highest Locker

34 Cost of implementation
PIP protocol:
A task generates a context switch when it blocks.
A task generates a context switch when it becomes executable again.
The scheduler must keep, for each semaphore, a queue of blocked tasks ordered by priority.
Every time a new task blocks on a semaphore, the scheduler must adjust the resource owner's priority (the owner may have to inherit the priority).
Every time a resource is released, a disinheritance procedure is executed.

35 Costs of implementation
PIWG (Performance Issues Working Group) has developed benchmarks for comparing scheduling algorithms.
T0001 (a call without parameters): fixed-priority tasking 28.5 µs, PIP tasking 30.9 µs, PIP overhead +8.42 %
T0004 (T0001 with selective wait for two calls): fixed-priority tasking 40.5 µs, PIP tasking 45.2 µs, PIP overhead +11.60 %
T0006 (T0004 with 10 tasks): fixed-priority tasking 46.7 µs, PIP tasking 54.5 µs, PIP overhead +16.49 %

36 Cost of implementation
Note: if the protocol is not provided by the operating system, do not implement it yourself at the application level (huge overhead).
"No silver bullet": a scheduling protocol can only limit the effects of blocking; blocking itself must be prevented by improved design.

37 Comparison of the scheduling protocols
Fixed priority: does not decrease blocking time, blocking time not bounded, does not prevent deadlocks; POSIX: SCHED_FIFO scheduling policy.
PIP: decreases blocking time, blocking time bounded, does not prevent deadlocks; POSIX: PRIO_INHERIT synchronization protocol for mutexes.
Priority ceiling: decreases blocking time, blocking time bounded, prevents deadlocks; POSIX: PRIO_PROTECT synchronization protocol for mutexes.

38 Algorithms in commercial RTOSes
Priority inheritance: WindRiver Tornado/VxWorks, LynxOS, OSE, eCOS
Priority ceiling: OS-9, pSOS+

39 Deadline Monotonic Priority Assignment
RM: the shortest period gets the highest priority.
DM: the shortest deadline gets the highest priority.
DM allows higher utilization when deadlines differ from periods.
Do the analysis first as for RM, then allocate the priorities using DM.

40 Dynamic scheduling
Priorities are calculated at run time.
Earliest Deadline First (EDF): the task with the shortest time to its deadline is executed first.
Any timing plan that meets the deadlines can be transformed into a timing plan under EDF, i.e. EDF is an optimal scheduling algorithm.
It is not simple to show that a system is schedulable under EDF; RMA is enough in practice.
Tasks need not be periodic!

41 EDF: Example
T1: start time 0, c1 = 10, absolute deadline 30
T2: start time 4, c2 = 3, absolute deadline d2 < 25
T3: start time 5, c3 = 10, absolute deadline 25
When t = 0 only T1 can run.
When t = 4, T2 has higher priority because d2 < d1.
When t = 5, T2 still has higher priority than T3.
When t = 7, T2 finishes and T3 has the highest priority.
When t = 17, T3 finishes and T1 executes.

42 EDF: test of schedulability
In the general case we must simulate the system to show that it is schedulable; a finiteness criterion limits the length of the simulation.
Let hT(t) be the sum of the running times of those task instances in task set T which have an absolute deadline at or before t; the set is schedulable iff hT(t) <= t for all t.
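This condition is usually written as a processor-demand test; a hedged LaTeX rendering of its standard form (with periods T_i, execution times c_i and relative deadlines D_i) is:

```latex
h_T(t) \;=\; \sum_{i:\,D_i \le t} \left( \left\lfloor \frac{t - D_i}{T_i} \right\rfloor + 1 \right) c_i,
\qquad
T \text{ is schedulable under EDF} \iff h_T(t) \le t \;\; \text{for all } t > 0 .
```

In practice t only has to range over the absolute deadlines inside a bounded interval, which is the finiteness criterion mentioned above.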

43 Aperiodic activities
In practice not every event is periodic.
Aperiodic activities are often I/O activities connected to the interrupt lines of the CPU.
Processor interrupts always have higher priority than the scheduler of the operating system.
How can the aperiodic events be connected so that the system remains schedulable?

44 Aperiodic activities
We look at five different implementation models:
1. Interrupt handled at hardware priority level
2. Cyclic polling (handled at the OS scheduler's priority level)
3. Combination of 1 and 2 (deferred aperiodic handling)
4. Interrupt handled partially at hardware level and partially at OS level with a deferred server
5. Interrupt handled partially at hardware level and partially at OS level with a sporadic server

45 Hardware handling
The interrupt is handled wholly by the interrupt service routine (ISR).
Benefits: simple implementation, easy to analyse with RMA, minimal overhead.
Problems: there may be starvation at the OS level.
RMA: calculate a "pseudo-period" for the ISR based on the events' interarrival times.
Used in order to handle events more rapidly; it must always be shown with RMA that the application stays schedulable even during a storm of events.

46 Cyclic polling
One task is allocated for handling the events; the hardware must buffer them.
Requirements:
Only one event should happen between two polling cycles.
The event must be handled before its deadline.
If the polling task is not executed with the highest priority, its period should be at most half of the event's deadline.
[Timeline diagram: event arrival, processing within deadline D, polling periods n and n+1.]

47 Cyclic polling
Benefits: simple implementation, full control of the overhead, no hardware-level processing.
Problems: overhead (unnecessary polling), possibly makes the RMA more complicated.
RMA: calculate a period for the polling task; its execution time is the time needed to handle one event; assume that an event is handled in every cycle.
Used even if events happen seldom; it should always be shown with RMA that the application's schedulability is not disturbed by the polling overhead.

48 Deferred handling
Partition the handling into two parts:
ISR: handles what must be handled immediately (reading of hardware registers, ...).
A "deferred handler" (DH) is then activated to handle the rest of the work.
Benefits: small amount of hardware-level processing; one DH can handle many different events.
Problems: the deferred handler is an aperiodic task.
RMA: calculate a pseudo-period for the ISR; calculate a pseudo-period for the DH based on the ISR's pseudo-period.
The immediate/deferred scheme is a better approach than full immediate handling (risk of monopolizing the processor) or cyclic polling (high overhead).

49 Deferred Server
Deferred handling is still aperiodic. Idea: make the DH periodic, i.e. a server process with a period.
To prevent the server from taking all processing capacity, the server is given a limited execution capacity per period.
After the capacity has been consumed, it is restored later (replenishment period).
A deferred server restores its processing capacity at the beginning of each period.
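A minimal C sketch of the capacity bookkeeping described above (field and function names are invented for the illustration; a real server would also interact with the scheduler):

```c
/* Deferred server: full capacity is restored at every period boundary;
 * unused capacity is kept (but not accumulated) until the period ends. */
typedef struct {
    double period;         /* replenishment period              */
    double capacity;       /* execution budget per period       */
    double remaining;      /* budget left in the current period */
    double next_replenish; /* time of the next period boundary  */
} deferred_server_t;

void ds_tick(deferred_server_t *s, double now) {
    if (now >= s->next_replenish) {   /* period boundary reached */
        s->remaining = s->capacity;   /* restore full capacity   */
        s->next_replenish += s->period;
    }
}

int ds_can_serve(const deferred_server_t *s, double cost) {
    return s->remaining >= cost;      /* enough budget left?       */
}

void ds_consume(deferred_server_t *s, double cost) {
    s->remaining -= cost;             /* spend budget on an event  */
}
```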

50 Deferred server
Execution capacity 1 (one event per period), period 4.
[Timeline diagram: events A, B, C, D arriving over 0-15; replenishments at t = 4, 8, 12.]
The server is replenished at t = 0 but first runs at t = 3, i.e. phase = 3.
RMA analysis requires that every task be ready to execute at the beginning of its period.

51 Sporadic server
A sporadic server restores its capacity only one period after the capacity has been used.
Execution capacity 1 (one event per period), period 4.
Because event A (at t = 3) uses the whole processing capacity, event B can be processed earliest at t = 3 + 4 = 7.
[Timeline diagram: events A, B, C, D over 0-15; replenishments at t = 7 and t = 11.]

52 Sporadic server
A sporadic server can be handled as a periodic task with a variable phase.
[Timeline diagram: events A, B, C, D over 0-15, with server activations at t = 3, 7, 11; intervals with no event to handle are labelled phase = 1 and phase = 2.]

53 Sporadic Server
Implementation:
In the operating system: POSIX 1003.1d defines an interface for sporadic server scheduling (SCHED_SPORADIC).
By own coding: complicated.

54 Sporadic server
Calculation of the period:
Hard deadline: minimum interarrival time / 2, or event deadline / 2.
Bursty hard deadline: server period = 1 / event density.
Soft deadline: M/D/1 queueing analysis, with W = average response time, I = average interarrival time, e = processing time for an event; if e is small compared with I, the server can in general run at the highest priority.
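The M/D/1 step is presumably the standard Pollaczek-Khinchine result for deterministic service times; written out (this is textbook queueing theory, not necessarily the exact formula on the original slide):

```latex
\rho = \frac{e}{I}, \qquad
W \;=\; e + \frac{\rho\, e}{2\,(1-\rho)} \qquad \text{(average response time in an M/D/1 queue)}
```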

55 Sporadic server
Checking schedulability:
Soft deadline (schedulability of soft deadlines can't be guaranteed): the M/D/1 technique gives approximate response times.
Hard deadline, deadline = period, no blocking: RMA 1 and RMA 2.
Hard deadline, deadline = period, blocking: RMA 3 and RMA 4.
Hard deadline, deadline before or after the period, blocking: analysis for arbitrary deadlines.

56 Example
e1: periodic; processing 5 ms, period 30 ms, deadline 25 ms
e2: hard deadline; sporadic arrival times, maximum rate 23 events in 150 ms; direct processing 0.1 ms, deferrable processing 0.9 ms; a hardware queue stores the last 23 events; processing 1 ms, period -
e3: processing 58 ms, period 200 ms
e4: processing 127 ms, period 600 ms
Is this system schedulable? It depends on the implementation.

57 Summary
RMA 1, 2
Dealing with blocking: RMA 3, 4
Arbitrary deadlines
Aperiodic tasks

