Relaxing the Synchronous Approach for Mixed-Criticality Systems


1 Relaxing the Synchronous Approach for Mixed-Criticality Systems
Eugene Yip, Matthew M Y Kuo, Partha S Roop, and David Broman RTAS’14

2 Mixed-Criticality Motivations
Different requirements: timing, security, safety. Criticality: the level of required assurance against failure. Hardware: multi-processor, multi-core, multi-threaded, ... Software: Task 1, Task 2, ..., Task n (hard/soft/non-real-time). DO-178B software levels by failure condition: A = Catastrophic, B = Hazardous, C = Major, D = Minor, E = No effect. It is now common for embedded systems to contain many tasks with different requirements running on the same hardware platform. The requirements can be based on, for example, timing, security, and safety properties. The criticality of a task is the level of assurance required against its failure: a highly critical task requires a high level of assurance that it will not fail. In our work and in the work of many others, we focus on criticality in terms of a task's timing requirements, for example, hard, soft, and non-real-time requirements. Before a system can be accepted for commercial use, it must be certified against a safety standard by a certification body. For example, the DO-178B safety standard specifies five levels of criticality, from level A, with catastrophic consequences when a task fails, down to level E, with no effect when a task fails. [Vestal 2007] Preemptive Scheduling of Multi-criticality Systems with Varying Degrees of Execution Time Assurance. [RTCA 1992] Software Considerations in Airborne Systems and Equipment Certification.

3 UAV Example As an example, an unmanned aerial vehicle can be designed as a mixed-criticality system. The Nav and Stability tasks are responsible for planning a stable flight path. Since the quality of the flight is sensitive to jitter, these tasks are hard real-time and are therefore assigned as life critical tasks. The Avoid task checks for obstacles in the flight path; the more frequently obstacles are checked for, the faster the UAV can safely fly. The Video task streams a video of the UAV's flight from an on-board camera back to its operators; the higher the frame rate, the better the viewing experience. Since a minimum quality of service is desired, these tasks are assigned as mission critical tasks. The Logging task logs interesting flight events and the Sharing task shares information with nearby UAVs. These tasks are not necessary for the correct operation of the UAV, so they are assigned as non-critical tasks.

4 Related Work Vestal: Task WCETs more pessimistic at higher criticalities. Over-provisioning of resources. Early-Release EDF: Low criticality tasks have a maximum period and shorter desired periods. Zero-Slack QoS-based Resource Allocation Model: Tasks with lower utility degraded first (by selecting longer periods). Many works have been devoted to the problem of scheduling mixed-criticality tasks on the same hardware platform. Vestal was one of the first to formalize this scheduling problem. He observed that the WCET of a task becomes more pessimistic as the task's criticality increases. If all the tasks were certified at the highest criticality, then more resources would need to be provisioned than actually required. Typically, scheduling starts by assuming all tasks will only execute for up to their low criticality WCET, their least pessimistic WCET. If a task exceeds its low criticality WCET, then the lower criticality tasks are discarded to "release" enough processor time to complete the computation of the higher criticality tasks. Hence, the execution behaviour of lower criticality tasks can become sporadic. In ER-EDF, the aim is to guarantee minimum service levels for low criticality tasks. Low criticality tasks are guaranteed enough processor time to execute within their maximum period, thus avoiding the need to discard them. If slack is available, then the low criticality tasks can be released earlier than usual. In ZS-QRAM, tasks are either criticality-based or utility-based. Critical tasks must meet their timing deadlines. Utility-based tasks incorporate the notion of utility, a measure of the benefit that the system receives by allowing the task to execute. When processor resources become constrained, low utility tasks are degraded first (by increasing their period). (These scheduling methods do not deal with task inputs; polling or interrupt.) [Vestal 2007] Preemptive Scheduling of Multi-criticality Systems with Varying Degrees of Execution Time Assurance. [Su et al. 2013] Scheduling Algorithms for Elastic Mixed-Criticality Tasks in Multicore Systems. [de Niz et al. 2012] On Resource Overbooking in an Unmanned Aerial Vehicle.

5 The Synchronous Approach
Environment → Task 1: j = f(i) → Task 2: k = g(j), with int i, int j, int k. Formal semantics. Formal verification. SCADE used in Airbus. Safety critical systems are usually periodic hard real-time systems and can be modelled directly using synchronous languages. Tasks in a synchronous program are activated by a logical clock. At each logical tick, the tasks sample the environment, perform their computations, and then emit their outputs. The synchrony hypothesis assumes that computations complete instantaneously, in zero time. (This is the assumption used when designing hardware circuits.) When implementing the program, computations take physical time and the logical clock takes physical time to tick. Thus, it is necessary to validate that the WCET of the program never exceeds the duration of a tick. Synchrony hypothesis: executions complete instantaneously. Validate: WCET is always less than the duration of any tick. [Figure: Tasks 1 and 2 executing within logical ticks 1, 2, 3; the implementation takes physical time to execute and to tick.] [Benveniste et al. 2003] The Synchronous Languages: 12 Years Later.
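The tick-by-tick execution model above can be sketched as a minimal reactive loop. This is an illustrative sketch, not the paper's implementation; the function names and time units are assumptions:

```python
def run_synchronous(tasks, tick_ms, wcet_ms, n_ticks):
    """Logical-clock loop: at each tick every task samples its inputs,
    computes, and emits, treated as logically instantaneous.  The run is
    valid only if the program's WCET fits within one tick."""
    if wcet_ms >= tick_ms:
        raise ValueError("synchrony hypothesis violated: WCET >= tick")
    trace = []
    for tick in range(n_ticks):
        # One logical tick: all tasks react to the same instant.
        trace.append([task(tick) for task in tasks])
    return trace
```

For example, two tasks that simply echo and double the tick index produce one output pair per logical tick.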

6 Related Work Baruah's static scheduling approach:
High and low criticality tasks. Low criticality tasks may be discarded. Multi-rate synchronous tasks on uni-processors. Single-rate synchronous tasks on multi-processors. Missing: Multi-rate tasks on multi-processors. Modelling of mission tasks that can tolerate bounded deadline misses (soft real-time). The work by Baruah is the first attempt at generating efficient static schedules for synchronous tasks with two levels of criticality. High criticality tasks are the only tasks subject to certification and have hard real-time deadlines. The low criticality tasks are discarded whenever processor time is required to meet the demands of the high criticality tasks. Baruah has worked on the static scheduling of multi-rate tasks on uniprocessors and single-rate tasks on multiprocessors. Multi-rate tasks are triggered every multiple of the base tick and are suited to modelling physical processes that proceed at different rates. What is missing is the scheduling of multi-rate, mixed-criticality, synchronous tasks on multiprocessors. Also, all tasks that are subject to certification must be hard real-time. Thus, tasks that can tolerate bounded deadline misses (such as the obstacle avoidance and video stream tasks of the UAV example) cannot be modelled directly in the synchronous framework. We would like to relax the synchrony hypothesis to be able to model such tasks. [Baruah 2012] Semantics-Preserving Implementation of Multirate Mixed-Criticality Synchronous Programs. [Baruah 2013] Implementing Mixed-Criticality Synchronous Reactive Systems Upon Multiprocessor Platforms.

7 UAV Example For example, under the synchronous framework, we would like to define life critical tasks with constant execution frequencies, and mission critical tasks with bounded execution frequencies, where the minimum frequency gives the minimum quality of service and the maximum frequency gives the desired quality of service. Finally, we would like to define goal frequencies for non-critical tasks that they try to achieve, but it is acceptable if their actual execution frequency is much lower.

8 Problem Statement Synchrony hypothesis requires:
All tasks to be hard real-time: No advantage in prioritizing tasks based on criticality. WCETs of all tasks for validation: Cannot include (non-critical) tasks with unknown WCETs. Enough resources to be provisioned for the worst case: Under-utilization of resources at runtime. Under the traditional synchronous framework, all tasks must be hard real-time to adhere to the synchrony hypothesis. Thus, there is no advantage in prioritizing the execution of high criticality tasks over low criticality tasks. The WCETs of all tasks are required to validate the synchrony hypothesis. Thus, non-critical tasks that are too complex to analyse cannot be included in the program. To satisfy the synchrony hypothesis, the implementation must provide enough resources to ensure that the program's estimated WCET is less than the system's period. If the WCET assumptions are too pessimistic, then the provisioned resources will be significantly under-utilized at runtime.

9 Contributions Relax the synchrony hypothesis to model mission critical tasks with frequency bounds. Address the communication between mission critical tasks. Propose an efficient scheduling of multi-rate, mixed-criticality, synchronous tasks on multi-processors. Benchmark showing better processor utilization than ER-EDF.

10 Talk Outline MC Task and Communication Model
Multiprocessor Scheduling Approach Performance Evaluation and Discussions Conclusions and Future Work

11 MC Task Model Program is a set of tasks: τ ∈ Γ
Task's level of criticality: ζ(τ) ∈ {life, mission, non-critical}. Task's release frequency: Life: f = f_min = f_max (constant). Mission: f_min ≤ f ≤ f_max (bounded). Non-critical: f = f_max (goal). Task's computation time (WCET analysis): c_τ. [Wilhelm et al. 2008] The Worst-Case Execution-Time Problem - Overview of Methods and Survey of Tools.

12 MC Task Communication Model
Instead of instantaneous communication, use delayed communication. Instantaneous communication: data dependencies limit schedulability and distribution, and delays are difficult to analyze for distributed tasks. Communication in synchronous programs is usually instantaneous. Thus, tasks may have to wait for each other to resolve data dependencies, thereby limiting schedulability and distribution. A common approach is to use delayed communication to avoid data dependencies: tasks use values produced in the previous period in their computations, so delays due to data dependencies are avoided.
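Delayed communication can be implemented as a double buffer that swaps at each tick boundary. A minimal sketch (class and method names are illustrative, not from the paper):

```python
class DelayedVar:
    """One-period-delayed communication: readers always see the value
    written in the *previous* period, so no same-period data dependency
    exists between writer and reader."""
    def __init__(self, init):
        self.current = init   # value visible to readers this period
        self.pending = init   # value being produced this period
    def write(self, value):
        self.pending = value
    def read(self):
        return self.current
    def tick(self):
        # Called by the runtime at each period boundary.
        self.current = self.pending
```

A reader in the same period as a write still observes the old value; only after `tick()` does the new value become visible.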

13 MC Task Communication Model
Oversampling: Undersampling: For tasks with different execution frequencies, oversampling is used when a faster task reads from a slower task. The faster task reads the same value multiple times between the updates of the slower task. Undersampling is used when a slower task reads from a faster task. The slower task only reads the last value produced by the faster task. If commands are being sent from the faster task, then the slower task would lose some of those commands.
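The two rate conversions above can be sketched directly, assuming for simplicity that the fast and slow frequencies differ by an integer ratio (the function names are illustrative):

```python
def undersample(values_fast, ratio):
    """Slow reader of a fast producer: keep only the last value produced
    before each slow release; intermediate commands are lost."""
    return values_fast[ratio - 1::ratio]

def oversample(values_slow, ratio):
    """Fast reader of a slow producer: each slow value is read `ratio`
    times before the slow task updates it again."""
    return [v for v in values_slow for _ in range(ratio)]
```

For a 3:1 rate ratio, undersampling six fast commands keeps only two of them, which is exactly the loss that motivates the lossless buffering on the next slide.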

14 MC Task Communication Model
Lossless buffering: Data is received in the same sequence as it is sent. The timing of when data is received varies at runtime. Maximum buffer size = ⌈f_τ^max / f_τ′^min⌉, where τ = sending task and τ′ = receiving task. A solution is to buffer the values produced by the faster task in a FIFO and to have the slower task read the entire buffer whenever it is released. The buffer is then cleared. Note that the release of a task does not depend on how many values have been buffered. (In fact, the number of values that need to be buffered depends on the execution frequencies of the tasks.) Data is always received in the same sequence as it was sent, but the timing of the reception is not guaranteed. We observe that the bounded frequencies of mission critical tasks mean that the buffer size is bounded by how quickly a sending task can produce values compared to how slowly a receiving task can read and clear the buffer.
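A sketch of the bounded lossless channel, assuming the buffer bound is the ceiling of the frequency ratio (the ceiling and the class name are assumptions for illustration, not taken from the paper):

```python
from math import ceil

def max_buffer_size(f_sender_max, f_receiver_min):
    """Bound on the FIFO between two mission critical tasks: the sender
    produces at most f_max values per second, while the receiver is
    guaranteed to drain the whole buffer at least every 1/f_min seconds."""
    return ceil(f_sender_max / f_receiver_min)

class LosslessChannel:
    def __init__(self, f_sender_max, f_receiver_min):
        self.bound = max_buffer_size(f_sender_max, f_receiver_min)
        self.fifo = []
    def send(self, value):
        # The frequency bounds guarantee this never overflows.
        assert len(self.fifo) < self.bound, "frequency bounds violated"
        self.fifo.append(value)
    def receive_all(self):
        # The receiver reads the entire buffer and clears it.
        data, self.fifo = self.fifo, []
        return data
```

With a 5 Hz sender and a receiver whose minimum frequency is 2 Hz, at most three values can accumulate between receiver releases.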

15 Related Work Lossless buffering:
Synchronous Data Flow and Rate-Based Execution. Release of a task depends on receiving a minimum amount of buffered data. Buffer sizes depend on task scheduling order. Synchronous Data Flow and Rate-Based Execution have similar lossless buffering techniques, but with a few differences. In our proposed approach, the execution frequency of the tasks determines the number of buffered values, and the buffer sizes are independent of the task scheduling order. [Lee & Messerschmitt 1987] Synchronous Data Flow. [Goddard & Jeffay 2001] Managing Latency and Buffer Requirements in Processing Graph Chains.

16 Multiprocessor Task Schedulability
Notations for task utilization: u_τ^min = c_τ · f_τ^min and u_τ^max = c_τ · f_τ^max. U_Γ^life = Σ_{τ∈Γ, ζ(τ)=life} u_τ^min. U_Γ^min,mission = Σ_{τ∈Γ, ζ(τ)=mission} u_τ^min. Before we talk about task schedulability, let us define some useful notations for task utilization. A task's utilization is the proportion of time a processor spends executing it.

17 Multiprocessor Task Schedulability
Schedulability: Given a set of homogeneous processors n ∈ N, a task set τ ∈ Γ is schedulable over N processors if: ∀n ∈ N: U_Γn^life + U_Γn^min,mission ≤ 1. We define task schedulability as follows. Given a set of available homogeneous processors, a task set is schedulable over those processors if each processor can execute its allocated life critical tasks at their required frequencies and its allocated mission critical tasks at their minimum frequencies. Note that we do not guarantee the execution of non-critical tasks, so they do not appear in the equation.
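The utilization definitions and the per-processor test can be combined in a short sketch (the dictionary keys are illustrative, not from the paper):

```python
def u_min(c, f_min):
    """Minimum utilization: WCET times minimum release frequency."""
    return c * f_min

def schedulable(allocation):
    """allocation: one task list per processor; each task is a dict with
    WCET 'c' (seconds), 'f_min' (Hz), and 'criticality'.  Each processor
    must fit its life tasks plus its mission tasks at minimum frequency;
    non-critical tasks are not guaranteed and are ignored."""
    for tasks in allocation:
        load = sum(u_min(t['c'], t['f_min']) for t in tasks
                   if t['criticality'] in ('life', 'mission'))
        if load > 1.0:
            return False
    return True
```

A processor holding a life task of utilization 0.5 and a mission task of minimum utilization 0.4 is schedulable; adding 0.2 more of mission load pushes it over 1.0.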

18 Multiprocessor Scheduling Approach
Static scheduling: Allocate minimum processor time to life and mission critical tasks to satisfy schedulability. Distribute slack fairly among mission critical tasks to help improve their frequency. Dynamic scheduling: Give non-critical tasks the chance to execute and reach their goal frequency. For a schedulable task set, we need to generate a suitable task schedule. In our approach, we create a static schedule for the life and mission critical tasks which satisfies the schedulability condition. If there is slack left in the static schedule, then that slack is allocated to the mission critical tasks to help improve their frequencies beyond their minimum. At runtime, tasks are executed according to the static schedule, but tasks may complete earlier than expected and create slack. We use this slack to dynamically schedule the non-critical tasks, giving them the chance to execute and reach their goal frequencies.

19 Static Scheduling Base period approach: GCD of task periods.
Portion of c_τ allocated in the base period. Slack accumulates at the end of each base period. Example: Task C = (ζ = life, f = 2 Hz, c = 250 ms); Task D = (ζ = life, f = 5 Hz, c = 100 ms); base period = GCD(500 ms, 200 ms) = 100 ms. Synchronous programs can be scheduled using the base period approach. In this approach, a static schedule is created such that it can be repeated indefinitely to satisfy the timing requirements of the scheduled tasks. The length of the static schedule, called the base period, is computed as the greatest common divisor of all the task periods. Thus, the periods of all the tasks can be derived as multiples of the base period. For example, the base period for tasks C and D is 100 ms. Task C is released every 5 base periods and needs 50 ms in each base period to complete. Task D is released every 2 base periods and requires 50 ms in each base period to complete. Tasks are preempted whenever they have exhausted their allocated time without completing their computation; they resume in the next base period. When a task is preempted or has completed, the next statically scheduled task is immediately executed. Thus, slack only appears at the end of the base period after all the tasks have executed. [Caspi & Maler 2005] From Control Loops to Real-Time Programs.
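The base period and per-period allocations from the example can be computed directly (function names are illustrative):

```python
from math import gcd

def base_period_ms(periods_ms):
    """Base period = GCD of all task periods, in milliseconds."""
    out = periods_ms[0]
    for p in periods_ms[1:]:
        out = gcd(out, p)
    return out

def per_period_slice_ms(c_ms, period_ms, p_b):
    """Portion of a task's WCET allocated in each base period: the WCET
    spread evenly over the base periods between releases."""
    releases_every = period_ms // p_b   # base periods between releases
    return c_ms / releases_every
```

For tasks C (500 ms period, 250 ms WCET) and D (200 ms period, 100 ms WCET) this reproduces the slide's numbers: a 100 ms base period with 50 ms allocated to each task per base period.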

20 Static Scheduling (ILP)
p_b: base period (GCD). n ∈ N: processors. t_τ^min, t_τ^max ∈ T: minimum and maximum processor time each life and mission critical task needs in p_b. Maximize: Σ_{n∈N} u_n, subject to u_n ≤ p_b, where u_n = β + Σ_{τ∈Γ} (t_τ^min + α) · a_τ^n, with a_τ^n = 1 if τ is allocated to processor n and 0 otherwise, and ∀τ ∈ Γ: Σ_{n∈N} a_τ^n = 1. We use ILP to find a feasible static schedule. The inputs are: the base period, the set of available homogeneous processors, and the minimum and maximum amount of processor time that each life and mission critical task will need in each base period to meet its minimum or maximum frequency, respectively. (The decision variables are u_n, a_τ^n; everything else is a constant.) The objective is to maximise the utilisation of each processor, which must not exceed the base period. A processor's utilisation is computed as β, the cost of delayed communication, plus the sum over its allocated tasks of the minimum time plus α, the cost of preempting a task. The variable a_τ^n is a boolean for assigning tasks to a processor, and we ensure that each task is allocated to exactly one processor. A solution exists if the task set is schedulable.

21 Static scheduling (ILP)
Minimum allocated times: p_b = GCD(1/4, 1/10, 1/20, 1/25) = 10 ms. For example, the base period of the UAV tasks is 10 ms. Let us summarise the diagram into a table with task WCETs and calculations for task utilization and the times that tasks need in each base period. Here is a possible static schedule for the life and mission critical tasks using the minimum times. We can see that there is a lot of slack in the static schedule that can be allocated to the mission critical tasks. The maximum amount of slack that can be allocated to a task is x_τ^max, the difference between the maximum and minimum time that the task needs in each base period. An additional constraint decides how much slack a task is given on its allocated processor. Maximum allocated times: x_τ^max = t_τ^max − t_τ^min, with 0 ≤ x_τ^n ≤ a_τ^n · x_τ^max. Note that x_τ^max = 0 for life critical tasks.

22 Static scheduling (ILP)
Allocate slack among mission critical tasks: additional constraints guide the slack allocation, e.g., proportionate fairness or marginal utility. Example: for any two tasks, the task with larger x_τ^max is given proportionally more slack. ∀n, n′ ∈ N, ∀τ, τ′ ∈ Γ with x_τ^max ≥ x_τ′^max, a_τ^n = 1, a_τ′^n′ = 1: x_τ′^n′ · x_τ^max ≤ x_τ^n · x_τ′^max. We can guide how the slack is allocated to the mission critical tasks by introducing fairness or utility constraints. Fairness: no task is worse off than others. Utility: tasks get more slack if their execution benefits the system more. A simple example that approximates proportionate fairness is given here. The goal of the constraint is for tasks with larger x_τ^max values to get proportionally more slack than tasks with smaller x_τ^max. An inequality is used to enable tasks to "soak up" any remaining slack in the static schedule. (E.g., suppose two tasks have x_τ^max values of 4 and 2; the ratio is 4/2 = 2. If the processor has only 4 units of slack, then strict proportionality would allocate 2 and 1 units of slack, leaving 1 unit left over. The inequality allows the task with the larger x_τ^max to take the remaining unit.) Rearranged, the constraint reads x_τ^max / x_τ′^max ≤ x_τ^n / x_τ′^n′. [Lan et al. 2010] An Axiomatic Theory of Fairness in Network Resource Allocation. [Baruah et al. 1996] Proportionate Progress: A Notion of Fairness in Resource Allocation. [de Niz et al. 2012] On Resource Overbooking in an Unmanned Aerial Vehicle.
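A greedy sketch of this proportionate allocation, assuming integer time units: shares are proportional to x_τ^max (capped at x_τ^max), and any remainder goes to the task with the largest x_τ^max, mirroring the "soak up" inequality. This is an illustrative approximation, not the paper's ILP:

```python
def allocate_slack(x_max, slack):
    """x_max: dict task -> maximum extra time it can use per base period.
    Returns integer slack shares roughly proportional to x_max."""
    total = sum(x_max.values())
    # Floor-proportional shares, never exceeding a task's x_max.
    shares = {t: min(xm, slack * xm // total) for t, xm in x_max.items()}
    leftover = slack - sum(shares.values())
    # Remainder goes to tasks with the largest x_max first.
    for t in sorted(x_max, key=x_max.get, reverse=True):
        take = min(leftover, x_max[t] - shares[t])
        shares[t] += take
        leftover -= take
    return shares
```

With x_max values of 4 and 2 and only 4 units of slack, the shares are 3 and 1, matching the worked example on the fairness slide.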

23 Static scheduling (ILP)
Maximize: Σ_{n∈N} u_n, subject to: u_n ≤ p_b; u_n = β + Σ_{τ∈Γ} ((t_τ^min + α) · a_τ^n + x_τ^n); a_τ^n = 1 if τ is allocated to processor n, 0 otherwise; ∀τ ∈ Γ: Σ_{n∈N} a_τ^n = 1; 0 ≤ x_τ^n ≤ a_τ^n · x_τ^max; ∀n, n′ ∈ N, ∀τ, τ′ ∈ Γ with x_τ^max ≥ x_τ′^max, a_τ^n = 1, a_τ′^n′ = 1: x_τ′^n′ · x_τ^max ≤ x_τ^n · x_τ′^max. Here is a summary of the ILP formulation, including the constraints for slack allocation and fairness.

24 Multiprocessor Scheduling Approach
Static scheduling: Allocate minimum processor time to life and mission critical tasks to satisfy schedulability. Distribute slack fairly among mission critical tasks to help improve their release frequency. Dynamic scheduling: Give non-critical and mission tasks the chance to reach their 𝑓 𝑚𝑎𝑥 . We now discuss the dynamic scheduling of tasks in the slack created at runtime.

25 Dynamic Scheduling Statically scheduled life and mission critical tasks leave slack at runtime, which is used for dynamic scheduling. Dynamic scheduling: allow task migration. Tasks execute until they complete or the base period expires. Pick the non-critical tasks that have received the least amount of slack. Pick the mission critical tasks with the least improvement in frequency, f_τ^improve = (f_τ^avg − f_τ^min) / (f_τ^max − f_τ^min). First, the life and mission critical tasks are statically scheduled in each base period. When tasks finish earlier than expected, they create slack that can be used to dynamically execute tasks. We allow task migration for dynamic scheduling, and selected tasks execute until they complete or until the base period ends. The non-critical tasks are scheduled first to give them a chance to execute; a heuristic based on fairness or utility can be used to decide the scheduling order, and in this paper the non-critical tasks that have received the least amount of slack are scheduled first. If slack still exists, then mission critical tasks that have used their statically allocated time but have not completed their computation are dynamically scheduled. This helps them complete earlier and improve their execution frequency; tasks with the least improvement in frequency are scheduled first, where the improvement is measured as the proportion of extra frequency the task has achieved. If slack still remains, it would be forfeited. However, we observe that the slack can be "shifted" to a later base period if it is used to execute life critical tasks that have not yet completed their computation: the life critical task simply finishes earlier, giving the slack back later, where it can hopefully be utilised. This completes the scheduling of multi-rate, mixed-criticality, synchronous programs.
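The mission-task selection heuristic can be sketched as follows (the dictionary keys are illustrative, not from the paper):

```python
def f_improve(f_avg, f_min, f_max):
    """Proportion of the extra frequency range a task has achieved."""
    return (f_avg - f_min) / (f_max - f_min)

def pick_mission_task(tasks):
    """Dynamic-scheduling heuristic: among mission critical tasks that
    have not completed, pick the one with the least frequency
    improvement so far; return None if every task is done."""
    candidates = [t for t in tasks if not t['done']]
    if not candidates:
        return None
    return min(candidates,
               key=lambda t: f_improve(t['f_avg'], t['f_min'], t['f_max']))
```

A task running at 11 Hz with bounds [10, 14] has improved less (0.25) than one at 15 Hz with bounds [10, 20] (0.5), so it is picked first.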

26 Performance Evaluation
Compare against ER-EDF (the closest work): High criticality task ⇒ life critical task. Low criticality task ⇒ mission critical task. Early release points spaced evenly by p_b. Tasks picked randomly for early release. For our performance evaluation, we compare our scheduling approach against Early-Release EDF as it is the closest work. Tasks in ER-EDF are either high or low criticality and are statically allocated to a processor. A high criticality task has a constant period. A low criticality task has a maximum period that it must complete by; if it finishes early, it can also be released at an earlier point in time when a processor has enough slack to execute it to completion. For early releases only, tasks can migrate to other processors. In our evaluation, the early release points are specified as being one base period apart, starting from the minimum period. The schedulability test in ER-EDF is equivalent to ours. [Figure: an ER-EDF low criticality task released at r with early release points k1..k4 before its deadline at r+p, compared with a proposed mission critical task released at r with a deadline between r+1/f_max and r+1/f_min.] [Su et al. 2013] Scheduling Algorithms for Elastic Mixed-Criticality Tasks in Multicore Systems.

27 Performance Evaluation
Follow the simulation approach of ER-EDF. Generate random task sets: 5% ≤ u_τ^max ≤ 50%; 100 Hz ≤ f_b ≤ 1,000 Hz; divisors of f_b are randomly selected for f_τ^min and f_τ^max. For our evaluation, we generate random task sets with the following parameters. Each task has a maximum utilization between 5% and 50%. Instead of selecting random minimum and maximum task frequencies and calculating the GCD of all task periods, which may involve prime numbers, we select a random base frequency between 100 Hz and 1,000 Hz from which the task frequencies are derived. Divisors of the base frequency are randomly selected for each task's minimum and maximum frequencies. [Su et al. 2013] Scheduling Algorithms for Elastic Mixed-Criticality Tasks in Multicore Systems.
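The task-set generation described above can be sketched as follows; the field names and the exact sampling details are illustrative assumptions, but the constraints (frequencies are divisors of f_b, f_min ≤ f_max, u_max in [5%, 50%]) follow the slide:

```python
import random

def divisors(n):
    """All positive divisors of n."""
    return [d for d in range(1, n + 1) if n % d == 0]

def random_task(f_b, rng):
    """One random task: f_min and f_max are divisors of the base
    frequency f_b with f_min <= f_max, and the maximum utilization
    u_max = c * f_max is drawn uniformly from [0.05, 0.50]."""
    f_min, f_max = sorted(rng.choice(divisors(f_b)) for _ in range(2))
    u_max = rng.uniform(0.05, 0.50)
    return {'f_min': f_min, 'f_max': f_max, 'c': u_max / f_max}
```

Because both frequencies divide f_b, the base period of the whole task set divides 1/f_b, avoiding awkward GCDs of prime periods.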

28 Performance Evaluation
Control the proportion of life and mission critical tasks generated. Control the "normalized system utilization", the estimated utilization expected at runtime: 0% ≤ U ≤ 100%, U = Max(U_Γ^life, U_Γ^min,life + U_Γ^min,mission) / N, where U_Γ^min,life = Σ_{τ∈Γ, ζ(τ)=life} u_τ^min with u_τ^max / 8 ≤ u_τ^min ≤ u_τ^max. When generating the random task sets, we control the proportion of life and mission critical tasks generated. We also control the normalised system utilization that we want at runtime; it is calculated as the average utilisation of the task set normalised to the number of processors. The average utilisation is computed as the maximum of either the life critical tasks, or the life and mission critical tasks at their minimum utilisation. For simulation purposes only, the minimum utilisation of a life critical task is selected randomly between its maximum and one eighth of its maximum utilisation. Since the normalised system utilisation is not the worst case utilisation, some unschedulable task sets may be generated. [Su et al. 2013] Scheduling Algorithms for Elastic Mixed-Criticality Tasks in Multicore Systems.
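A sketch of this metric under one reading of the slide (an assumption: the first term sums life tasks at u_max, the second sums life and mission tasks at their minimum utilizations; field names are illustrative):

```python
def normalized_utilization(tasks, n_procs):
    """U = max(U_life, U_life^min + U_mission^min) / N: the larger of
    the life tasks at full utilization, or all critical tasks at their
    minimum utilizations, normalised to the number of processors."""
    u_life = sum(t['u_max'] for t in tasks if t['crit'] == 'life')
    u_life_min = sum(t['u_min'] for t in tasks if t['crit'] == 'life')
    u_mission_min = sum(t['u_min'] for t in tasks if t['crit'] == 'mission')
    return max(u_life, u_life_min + u_mission_min) / n_procs
```

For one life task (u_max 0.4, u_min 0.3) and one mission task (u_min 0.2) on two processors, U = max(0.4, 0.5) / 2 = 25%.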

29 Performance Evaluation
Schedulability of the generated task sets: Each data point is the average of 10,000 random task sets on a 4 processor system. An average of ILP constraints for each task set. The ILP solver (Gurobi, version 5.6) was allowed one minute to solve and generate a static schedule. Fewer schedulable task sets are generated when life and mission critical tasks are in equal proportions. Here is a graph showing the proportion of generated task sets that are schedulable (called the acceptance ratio) for various normalised system utilisations and proportions of life critical tasks. We see that, when the proportion of life critical tasks is 0.5, many unschedulable task sets are generated when the normalised system utilisation is above 50%.

30 Performance Evaluation
Proportion of life critical tasks varied: U = 50%, N = 4, 1000 base periods. Task's actual execution time between 0.8·c_τ and c_τ. System runtime utilization: consistently higher utilization; utilization drops off because fewer mission critical tasks are available to use the slack. Here we compare the average processor utilisation of our proposed approach with ER-EDF and normal EDF. The normalized system utilisation is set to 50% for the remaining experiments. We can see that our proposed approach can utilise the processors better. This is because ER-EDF will only release a task early if a processor has enough slack to execute the task completely; otherwise, the slack is forfeited. In our approach, slack can always be used to partially execute a task. For the experiments, we simulate each task's execution time to be between 80% and 100% of its WCET. Each task set was simulated for 1000 base periods. For both ER-EDF and our approach, tasks use the same task-to-processor allocation decided by our proposed ILP. We have similar results for 8 processors.

31 Performance Evaluation
Proportion of life critical tasks varied: U = 50%, N = 4, 1000 base periods. Task's actual execution time between 0.8·c_τ and c_τ. Overall frequency improvement of mission critical tasks: f_mission^improve = (f_τ^avg − f_τ^min) / (f_τ^max − f_τ^min). We also measured the overall frequency improvement of the mission critical tasks. The improvement was measured as the average proportion of extra frequency that the tasks achieved. Tasks scheduled under our approach achieved higher frequency improvements, hence the higher system utilisation. As the proportion of life critical tasks increases, the slack only needs to be shared among fewer mission critical tasks, resulting in higher frequency improvements. Higher system utilization leads to higher frequency improvement. There is no improvement when there are no mission critical tasks.

32 Performance Evaluation
Proportion of life critical tasks varied: U = 50%, N = 4, 1000 base periods. Task's actual execution time between 0.8·c_τ and c_τ. Fairness among mission critical tasks: fairness = Σ |f_τ^avg_improve − f_τ^improve| / (number of mission tasks). Lastly, we measured how fairly all tasks were able to improve their frequency. Fairness was measured as how close the tasks were to the average frequency improvement of all tasks: 50% means completely unfair and 0% means a completely fair improvement. We can see that our approach gave fairer results than ER-EDF, which randomly selects tasks for early release. The fairness constraint performs better when there are many mission critical tasks. When only one mission critical task is in the task set, it does not need to share the slack, and so the improvement is completely fair.
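The fairness metric reads as a mean absolute deviation from the average improvement (the summation and absolute value are an assumption recovered from the garbled formula):

```python
def fairness(improvements):
    """Mean absolute deviation of per-task frequency improvements from
    their average: 0.0 means all mission tasks improved equally (fully
    fair); larger values mean less fair."""
    avg = sum(improvements) / len(improvements)
    return sum(abs(avg - f) for f in improvements) / len(improvements)
```

Two tasks with equal improvements score 0.0 (completely fair), while improvements of 1.0 and 0.0 score 0.5 (completely unfair), matching the scale described on the slide.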

33 Performance Evaluation
Proportion of non-critical tasks varied: Remaining tasks: Equal proportions of life and mission critical tasks. System Runtime Utilization Overall Frequency Improvement Fairness We also performed additional experiments with non-critical tasks to see their effects on the life and mission critical tasks. We varied the proportion of non-critical tasks and the remaining tasks were equal proportions of life and mission critical tasks. System utilization is maximised with the non-critical tasks using most of the slack. Although mission critical tasks are not dynamically scheduled often, they can still improve their frequency by using the slack that was statically allocated to them. Thus, frequency improvement is quite fair. Non-critical tasks use most of the slack. Mission critical tasks already given slack in the static schedule and rarely picked during dynamic scheduling.

34 Discussions Proposed scheduling achieved:
Higher system utilization, higher frequency improvement, and better fairness. The proposed scheduling approach supports an extra level of task criticality. Base period scheduling incurs nearly twice the number of preemptions as ER-EDF. Solving the ILP can be expensive; the solver can be used to find locally optimal solutions, like a heuristic. More results can be found in our paper: extension to more levels of criticality, ILP scalability, preemptions, and results for 8 processors.

35 Conclusions and Future Work
Mission critical tasks (bounded deadline misses) for the synchronous task model. Lossless communication between multi-rate tasks. Scheduling on multi-processors to maximize system utilization with fairness. Future: Study a real system. Extend definition of criticality to include energy use. Develop improved fairness/utility heuristics.

36 Thank You Questions?


38 MC Task Model Program is a set of tasks: τ ∈ Γ
Task's level of criticality: ζ(τ) ∈ {life, mission, non-critical}. Task's release times: constant release frequency f = 1/p; the deadline is the next release time. Life-critical task released at r, r+p, r+2p, ... over time.

39 MC Task Model Program is a set of tasks: τ ∈ Γ
Task's level of criticality: ζ(τ) ∈ {life, mission, non-critical}. Task's release times: ideal next release time (and deadline) at r+p_min; upper bound on the deadline miss at r+p_max. Bounded release frequency: 1/p_max ≤ f ≤ 1/p_min. If a mission-critical task completes between the bounds, then it is immediately released again.

40 MC Task Model Program is a set of tasks: τ ∈ Γ
Task's level of criticality: ζ(τ) ∈ {life, mission, non-critical}. Task's release times: ideal next release time at r+p; no upper bound on the deadline miss. Non-critical task goal release frequency: f = 1/p.

41 Multiprocessor Scheduling Approach
Traditional static scheduling approaches: base period and hyper period. Task C = (ζ = life, f = 2 Hz, c = 0.25 s); Task D = (ζ = life, f = 5 Hz, c = 0.10 s). Hyper period: makespan = LCM of task periods; longer schedules; slack appears between task releases. Base period: makespan = GCD of task periods; shorter schedules; more preemptions; slack accumulates at the end of each base period (easier to track).
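The two schedule lengths can be computed side by side (function names are illustrative):

```python
from math import gcd

def lcm(a, b):
    """Least common multiple via the GCD identity."""
    return a * b // gcd(a, b)

def schedule_lengths(periods_ms):
    """Return (hyper period, base period): the LCM and GCD of all task
    periods, i.e. the makespans of the two traditional approaches."""
    hyper = base = periods_ms[0]
    for p in periods_ms[1:]:
        hyper, base = lcm(hyper, p), gcd(base, p)
    return hyper, base
```

For tasks C and D (periods 500 ms and 200 ms), the hyper period schedule spans 1000 ms while the base period schedule repeats every 100 ms.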

42 Obtaining a Static Schedule

43 Fairness Example Task C: t_C^min = 4, t_C^max = 8, x_C^max = 4.
Task D: t_D^min = 4, t_D^max = 6, x_D^max = 2. Equality constraint: x_C^max / x_D^max = x_C^n / x_D^n. If processor n only has 4 units of slack, then x_C^n = 2, x_D^n = 1, and 1 unit of slack is left over. An inequality would allow task C to take the remaining unit of slack.

44 ILP Scalability Time for Gurobi to find the first (locally optimal) solution compared to the final (globally optimal) solution. Generated 250 random task sets containing 2 to 50 tasks (even numbered). U = 50%, N = 32, 50% life critical tasks. Quick to find the first solution. Similar to using a heuristic.

45 Preemptions Normalized system utilization varied:
N = 4, 1000 base periods, 50% life critical tasks. Task’s actual execution time between 0.8 𝑐 𝜏 and 𝑐 𝜏 . Average Number of Preemptions on each Processor Proposed approach is nearly twice that of EDF. Implementation determines the true cost.

46 Extra Levels of Criticality
Refining the timing criticality of tasks, or mixing timing criticality with other kinds of criticality (e.g., security, safety, and power), using v = f_min / f_max:

Failure Condition | DO-178B Software Level | Task Criticality | v = f_min / f_max
Catastrophic      | A                      | Life             | v = 1
Hazardous         | B                      | Mission          | 0.66 ≤ v < 1
Major             | C                      | Mission          | 0.33 ≤ v < 0.66
Minor             | D                      | Mission          | 0 ≤ v < 0.33
No effect         | E                      | Non-Critical     | -

