1
Task Allocation and Scheduling n Problem: How to assign tasks to processors and to schedule them in such a way that deadlines are met n Our initial focus: uniprocessor task scheduling n Extensions: to multiprocessors
2
Uniprocessor Task Scheduling n Initial Assumptions: » Each task is periodic » Periods of different tasks may be different » Worst-case task execution times are known » Relative deadline of a task is equal to its period » No dependencies between tasks: they are independent » Only resource constraint considered is execution time » No critical sections » Preemption costs are negligible » Tasks must be completed for output to have any value
3
Standard Scheduling Algorithms n Rate-Monotonic (RM) Algorithm: » Static priority » Higher-frequency tasks have higher priority n Earliest-Deadline First (EDF) Algorithm: » Dynamic priority » Task with the earliest absolute deadline has highest priority
4
RMA n Task priority is inversely proportional to the task period (directly proportional to task frequency) n At any moment, the processor is either » idle if there are no tasks to run, or » running the highest-priority task available n A lower-priority task can suffer many preemptions n To a task, lower-priority tasks are effectively invisible
5
RMA n Example n Schedulability criteria: » Sufficiency condition (Liu & Layland, 1973) » Necessary & sufficient conditions (Joseph & Pandya, 1986; Lehoczky, Sha, Ding 1989)
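The Liu & Layland sufficiency condition can be sketched as follows (a minimal illustration, not from the slides): a set of n periodic tasks is RM-schedulable if its total utilization U = Σ C_i/T_i does not exceed n(2^(1/n) − 1).

```python
def rm_sufficient(tasks):
    """Liu & Layland (1973) sufficient RM-schedulability test.
    tasks: list of (C, T) pairs, C = worst-case execution time, T = period.
    Returns True if total utilization is within the n(2^(1/n) - 1) bound."""
    n = len(tasks)
    u = sum(c / t for c, t in tasks)
    return u <= n * (2 ** (1 / n) - 1)

# Two tasks: U = 1/4 + 2/6 ~= 0.583; the bound for n = 2 is ~= 0.828
print(rm_sufficient([(1, 4), (2, 6)]))  # True
```

Note that this test is sufficient only: a task set that fails it may still be RM-schedulable, which is what the necessary-and-sufficient conditions address.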
6
RMA n Critical Instant of a Task: An instant at which a request for that task will have the largest response time n Critical Time-zone of a Task: Interval between a critical instant of that task and the completion time of that task n Critical Instant Theorem: Critical instant of a task T_i occurs whenever T_i arrives simultaneously with all higher-priority tasks
7
RMA: Schedulability Check n The Critical Instant Theorem leads to a schedulability check: » If a task is released at the same time as all of the tasks of higher priority and it meets its deadline, then it will meet its deadline under all circumstances
8
RMA: Schedulability Test n If a task is released simultaneously with all higher-priority tasks, determine when it will be done n If this completion time is no later than this task’s deadline, we have succeeded with this task n Find a systematic procedure to turn this process into a necessary-and-sufficient schedulability check
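One such systematic procedure is response-time analysis (Joseph & Pandya): starting from a critical-instant release, iterate R = C_i + Σ_{j higher priority} ⌈R/T_j⌉·C_j until it converges or exceeds the deadline. A sketch, assuming deadline = period and tasks sorted by increasing period (decreasing RM priority):

```python
from math import ceil

def rm_exact(tasks):
    """Necessary-and-sufficient RM test via response-time analysis.
    tasks: list of (C, T) pairs sorted by increasing period T,
    i.e., decreasing RM priority; relative deadline = period."""
    for i, (c, t) in enumerate(tasks):
        r = c
        while True:
            # completion time = own work + preemptions by higher-priority tasks
            nxt = c + sum(ceil(r / tj) * cj for cj, tj in tasks[:i])
            if nxt > t:
                return False          # misses its deadline
            if nxt == r:
                break                 # converged: worst-case response time
            r = nxt
    return True

# U ~= 0.833 exceeds the Liu & Layland bound for n = 3 (~= 0.780),
# yet the exact test shows this set is schedulable.
print(rm_exact([(1, 4), (2, 6), (3, 12)]))  # True
```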
9
RMA: Schedulability n Start with a single-task set and obtain its schedulability conditions n Extend this to a two-task set n Exploit any intuition gained to generalize this
10
RM Schedulability
11
Earliest Deadline First (EDF) n Same assumptions as before n This is a dynamic priority algorithm: the relative priorities of tasks can change with time n The task with the earliest absolute deadline has the processor n Schedulability Test: Total utilization of task set must not exceed 1.
12
EDF n Lemma 3.8: If a deadline is missed for the first time at some time t_miss, the processor must have been continuously busy over [0,t_miss]. n Theorem 3.11: A task set is schedulable iff its total utilization is no greater than 1.
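Theorem 3.11 can be illustrated with a small discrete-time EDF simulation over one hyperperiod (a sketch, assuming unit-length time slices and relative deadline = period; all names are illustrative):

```python
from math import gcd
from functools import reduce

def edf_simulate(tasks):
    """Simulate EDF over one hyperperiod; tasks = [(C, T)], deadline = period.
    Returns True iff no deadline is missed."""
    hyper = reduce(lambda a, b: a * b // gcd(a, b), (t for _, t in tasks))
    remaining = [0] * len(tasks)   # work left on the current job of each task
    deadline = [0] * len(tasks)
    for t in range(hyper):
        for i, (c, p) in enumerate(tasks):
            if t % p == 0:           # new job released
                if remaining[i] > 0: # previous job unfinished at its deadline
                    return False
                remaining[i] = c
                deadline[i] = t + p
        # run the ready job with the earliest absolute deadline
        ready = [i for i in range(len(tasks)) if remaining[i] > 0]
        if ready:
            j = min(ready, key=lambda i: deadline[i])
            remaining[j] -= 1
    return all(r == 0 for r in remaining)

# Total utilization exactly 1: schedulable under EDF.
print(edf_simulate([(2, 4), (2, 8), (2, 8)]))  # True
```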
13
EDF: When Deadline != Period
14
[Formula slide: only the condition fragment "for t >= d_max" survives extraction.]
16
Critical Sections n Remove the assumption that tasks can be preempted at any time n If a task is within a critical section of code » It may be preempted » However, until that task finishes executing that critical section, no other task can enter it (irrespective of its priority) » Obvious effect: Some higher-priority tasks which also need to enter a critical section will have to wait » Less obvious effect: Priority inversion can occur
17
Example From J. W. Liu: Real-Time Systems, Prentice-Hall, 2000
18
Critical Sections (contd.) From J. W. Liu, op cit.
19
Priority Inheritance Protocol n Key feature is the priority inheritance rule: » When a higher-priority task A gets blocked due to resource R by a lower-priority task B, B inherits the priority of A. » When B releases R, the priority of B reverts to the value it held before it inherited the priority of A. » Priority inheritance is transitive.
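The inheritance and revert rules can be sketched as below (illustrative classes, not from the slides; a larger number means a higher priority, and the sketch assumes a task holds one resource at a time, omitting the transitive chaining for brevity):

```python
class Task:
    """A task with a base (assigned) priority and a current effective one."""
    def __init__(self, name, prio):
        self.name, self.base, self.current = name, prio, prio

class Mutex:
    def __init__(self):
        self.holder = None
    def lock(self, task):
        if self.holder is None:
            self.holder = task
            return True
        # inheritance rule: the lower-priority holder inherits the
        # priority of the higher-priority task it blocks
        if self.holder.current < task.current:
            self.holder.current = task.current
        return False
    def unlock(self):
        # revert rule: the holder drops back to its pre-inheritance priority
        self.holder.current = self.holder.base
        self.holder = None

low, high = Task("B", 1), Task("A", 5)
m = Mutex()
m.lock(low)          # B enters its critical section
m.lock(high)         # A blocks; B now runs at A's priority
print(low.current)   # 5
m.unlock()
print(low.current)   # 1
```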
20
Priority Ceiling Protocol n The priority ceiling of any resource is the highest priority of all the tasks requiring that resource. n The current priority ceiling of the system is the highest priority ceiling of the resources currently locked. n A task that requires no critical section resources proceeds according to the traditional approach
21
n When task A requests resource R, » If R is held by another task, it is blocked. » If R is free, n If A’s priority is greater than the current system priority ceiling, A is granted access to R n If A’s priority is not greater than the current system priority ceiling, then it is blocked unless A holds resources whose priority ceiling equals the system priority ceiling. » Blocking tasks inherit the priority of the tasks they block (as in the priority inheritance protocol)
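The grant rule above can be sketched as a single check (illustrative names; a larger number means a higher priority). Excluding the requester's own locked resources from the system ceiling realizes the "unless A holds resources whose priority ceiling equals the system priority ceiling" clause:

```python
def pcp_can_lock(task, resource, ceilings, locked):
    """Priority-ceiling grant rule (a sketch).
    task: (name, priority); ceilings: resource -> priority ceiling
    (highest priority among tasks that use it); locked: resource -> name
    of the task currently holding it. Returns True iff access is granted."""
    name, prio = task
    if resource in locked:
        return False                  # R is held by another task: block
    # system ceiling over resources locked by OTHER tasks
    others = [ceilings[r] for r, holder in locked.items() if holder != name]
    return not others or prio > max(others)

ceilings = {"R1": 5, "R2": 3}
print(pcp_can_lock(("A", 4), "R2", ceilings, {"R1": "B"}))  # False: 4 <= 5
print(pcp_can_lock(("A", 6), "R2", ceilings, {"R1": "B"}))  # True:  6 > 5
```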
22
n The priority ceiling protocol: » Prevents deadlocks from ever occurring » Ensures that no task can be blocked for more than the duration of one critical section
23
Example From J. Liu, op cit.
24
Properties of the Ceiling Protocol n Deadlock is not possible n Transitive blocking does not occur, i.e., a task which blocks another task cannot itself be blocked. » Each task can be blocked for the duration of at most one critical section. » The longest critical section provides a bound on the blocking time.
25
IRIS Tasks n IRIS = Increased Reward for Increased Service n Also called “imprecise” tasks n Consist of: » Mandatory portion, which has to be executed » Optional portion » Reward function linking the execution time to resulting quality of output n Examples: Search and numerical algorithms
26
Identical Linear Reward Fn n If the mandatory portion of all tasks is zero, EDF is optimal, i.e., it results in a maximal reward. n If the mandatory portion of at least one task is non-zero, the problem becomes more complex » See Algorithm IRIS1 on page 99
27
IRIS 1 Example
28
Non-identical Linear Rewards n Basic Idea: » Check if the mandatory portions can be scheduled. If not, then give up » Otherwise, keep augmenting the task set with optional portions of tasks in descending order of weights, and running IRIS1 on them
29
IRIS2 Algorithm
30
Identical Concave Rewards n Captures the property of diminishing returns seen in many iterative algorithms n Consider here tasks with zero mandatory portions n Tactic: Ensure that the optional time given to each task is as equal as possible n Example: Aperiodic tasks. » Start from the end of the schedule & work backwards
31
Sporadic Tasks n In EDF, simply use the deadline of the sporadic task to determine its priority n In RM, create a “sporadic server” periodic task that is a placeholder for the sporadic tasks. » Several obvious ways in which to manage the sporadic server
32
Task Assignment n Scheduling tasks on a multiprocessor is generally an NP-complete problem n Traditional heuristics do it in two steps: » Assign or allocate tasks to processors » Use a uniprocessor scheduling algorithm to schedule tasks assigned to each processor » Do this iteratively, if necessary
33
Assignment Algorithms n Bin packing: » First fit » Best fit
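The two heuristics can be sketched as follows, treating each task's utilization as an item size and each processor as a bin of capacity 1 (illustrative names; ties and per-processor schedulability tests are ignored):

```python
def first_fit(utils, cap=1.0):
    """Place each task in the first processor (bin) where it fits."""
    bins = []
    for u in utils:
        for b in bins:
            if sum(b) + u <= cap:
                b.append(u)
                break
        else:
            bins.append([u])        # no existing bin fits: open a new one
    return bins

def best_fit(utils, cap=1.0):
    """Place each task in the feasible processor with the least spare room."""
    bins = []
    for u in utils:
        feasible = [b for b in bins if sum(b) + u <= cap]
        if feasible:
            min(feasible, key=lambda b: cap - sum(b)).append(u)
        else:
            bins.append([u])
    return bins

print(first_fit([0.6, 0.5, 0.4, 0.3]))  # [[0.6, 0.4], [0.5, 0.3]]
```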
34
Fault-Tolerant Scheduling n Fault Tolerance: The ability of a system to suffer component failures and still function adequately n Fault-Tolerant Scheduling: Save enough time in a schedule that the system can still function despite a certain number of processor failures
35
FT-Scheduling: Model n System Model » Multiprocessor system » Each processor has its own memory » Tasks are preloaded into assigned processors n Task Model » Tasks are independent of one another » Schedules are created ahead of time
36
Basic Idea n Preassign backup copies, called ghosts. n Assign ghosts to the processors along with the primary copies » A ghost and a primary copy of the same task can’t be assigned to the same processor » For each processor, all the primaries and a particular subset of the ghost copies assigned to it should be feasibly schedulable on that processor
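The two placement rules above can be checked with a small sketch. It is utilization-based and assumes at most one processor failure, so at most one ghost is activated per surviving processor; the slides' more general "particular subset of ghosts" condition, and the actual uniprocessor schedulability test, are simplified away (all names illustrative):

```python
def ft_feasible(processors, cap=1.0):
    """processors: list of {'primaries': {task: util}, 'ghosts': {task: util}}.
    Checks that no ghost shares a processor with its own primary copy, and
    that each processor can fit its primaries plus its costliest single
    ghost (single-failure assumption)."""
    for p in processors:
        if set(p['primaries']) & set(p['ghosts']):
            return False      # ghost co-located with its primary copy
        load = sum(p['primaries'].values())
        worst = max(p['ghosts'].values(), default=0.0)
        if load + worst > cap:
            return False      # activating the ghost would overload it
    return True

system = [
    {'primaries': {'a': 0.4, 'b': 0.3}, 'ghosts': {'c': 0.2}},
    {'primaries': {'c': 0.5},           'ghosts': {'a': 0.4}},
]
print(ft_feasible(system))  # True
```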
37
Requirements n Two main variations: » Current and future iterations of the task have to be saved if a processor fails » Only future iterations need to be saved; the current iteration can be discarded