Multiprocessor Real-Time Scheduling

Presentation transcript:

Multiprocessor Real-Time Scheduling Embedded System Software Design. Used with the permission of Prof. Ya-Shu Chen (NTUST).

Outline Multiprocessor Real-Time Scheduling Global Scheduling Partitioned Scheduling Semi-partitioned Scheduling

Multiprocessor Models Identical (Homogeneous): all processors have the same characteristics, i.e., the execution time of a job is independent of the processor on which it executes. Uniform: each processor has its own speed, i.e., the execution time of a job on a processor is inversely proportional to the speed of that processor, so a faster processor always executes a job faster than a slower processor does. For example, multiprocessors with the same instruction set but different supply voltages/frequencies. Unrelated (Heterogeneous): each job has its own execution time on each processor; a job might execute faster on one processor, while other jobs might be slower on that same processor. For example, multiprocessors with different instruction sets.

Scheduling Models Global Scheduling: a job may execute on any processor; the system maintains a global ready queue and executes the M highest-priority jobs in it, where M is the number of processors. It requires high on-line overhead. Partitioned Scheduling: each task is assigned to a dedicated processor, and schedulability analysis is done individually on each processor. It requires no additional on-line overhead. Semi-partitioned Scheduling: adopt task partitioning first and reserve time slots (bandwidths) for the tasks that are allowed to migrate. It requires some on-line overhead.

Scheduling Models (figure): three panels illustrating partitioned, semi-partitioned, and global scheduling, showing tasks distributed over CPU 1, CPU 2, and CPU 3, with a shared waiting queue in the global case.

Global Scheduling All ready tasks are kept in a global queue, and a job can be migrated to any processor. Priority-based global scheduling: among the jobs in the global queue, the M highest-priority jobs are chosen to execute on the M processors; task migration is assumed to have no overhead. Global-EDF: when a job finishes or arrives at the global queue, the M jobs in the queue with the earliest absolute deadlines are chosen to execute on the M processors. Global-RM: when a job finishes or arrives at the global queue, the M jobs in the queue with the highest priorities are chosen to execute on the M processors.
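
The dispatching rule above amounts to repeatedly selecting the M earliest-deadline jobs from the shared queue. A minimal sketch in Python, assuming jobs are represented as (absolute deadline, name) pairs and ignoring migration overhead; the function name global_edf_pick is illustrative only.

import heapq

def global_edf_pick(ready_queue, M):
    """Return the M jobs with the earliest absolute deadlines.

    ready_queue holds (absolute_deadline, job_name) pairs; it is re-examined
    whenever a job arrives at or leaves the queue, and the chosen jobs run
    on the M processors while the rest keep waiting.
    """
    return heapq.nsmallest(M, ready_queue)

# Example: three ready jobs, two processors.
ready = [(10.0, "J1"), (12.0, "J3"), (10.0, "J2")]
print(global_edf_pick(ready, M=2))   # [(10.0, 'J1'), (10.0, 'J2')]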

Global Scheduling Advantages: effective utilization of processing resources (if it works); unused processor time can easily be reclaimed at run-time (a mixture of hard and soft real-time tasks can be scheduled to optimize resource utilization). Disadvantages: adding processors or reducing computation times and other parameters can actually decrease performance in some scenarios (scheduling anomalies); poor resource utilization for hard timing constraints; few results from single-processor scheduling carry over. Global scheduling is nevertheless supported by most multiprocessor operating systems (Windows NT, Solaris, Linux, ...).

Schedule Anomaly Anomaly 1: a decrease in processor demand from higher-priority tasks can increase the interference on a lower-priority task, because of the change in the times at which the tasks execute. Anomaly 2: a decrease in processor demand of a task can negatively affect the task itself, because the change in the task's arrival times makes it suffer more interference.

Anomaly 1 Task τ3 misses its deadline when τ1 increases its period (i.e., decreases its demand). This can happen for the following schedulable task set (with priorities assigned according to RM): (T1 = 3, C1 = 2); (T2 = 4, C2 = 2); (T3 = 12, C3 = 8). When τ1 increases its period to 4, τ3 becomes unschedulable, because τ3 is saturated and the interference it suffers increases from 4 to 6.

Anomaly 2 Consider the following schedulable task set (with priorities assigned according to RM): (T1 = 4, C1 = 2); (T2 = 5, C2 = 3); (T3 = 10, C3 = 7). If we increase T3, the resulting task set becomes unschedulable: when T3 increases to 11, the second instance of τ3 misses its deadline, because the interference it suffers increases from 3 to 5 and τ3 is already saturated.

Dhall effect For Global-EDF or Global-RM, the utilization least upper bound is at most 1, regardless of the number of processors. Construction: one heavy task τk with Dk = Tk = Ck, and M light tasks τi with Ci = e and Di = Ti = Ck − e, where e is a positive number very close to 0. Example on 2 processors (τ2 is identical to τ1; τ3 is not schedulable):
Task  T   D   C   U
τ1    10  10  5   0.5
τ2    10  10  5   0.5
τ3    12  12  8   0.67
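
To make the effect concrete, the following small discrete-time simulation (a sketch, not part of the original slides) runs Global-EDF on 2 processors for the example task set above, using unit time steps and implicit deadlines; it reports that τ3 misses its deadline at time 12.

def simulate_global_edf(tasks, processors, horizon):
    """tasks: list of (period, wcet) with implicit deadlines (D = T).

    Returns (task index, absolute deadline) pairs for jobs that miss.
    """
    jobs = []          # active jobs: [remaining work, absolute deadline, task id]
    misses = []
    for t in range(horizon):
        for i, (period, wcet) in enumerate(tasks):
            if t % period == 0:                      # new job released at time t
                jobs.append([wcet, t + period, i])
        jobs.sort(key=lambda j: j[1])                # EDF: earliest deadline first
        for job in jobs[:processors]:                # run the M earliest-deadline jobs
            job[0] -= 1
        still_active = []
        for job in jobs:
            if job[0] <= 0:
                continue                             # job finished
            if t + 1 >= job[1]:
                misses.append((job[2], job[1]))      # deadline passed with work left
                continue
            still_active.append(job)
        jobs = still_active
    return misses

print(simulate_global_edf([(10, 5), (10, 5), (12, 8)], processors=2, horizon=12))
# Expected output: [(2, 12)], i.e., tau3 misses its deadline at t = 12.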

Schedulability Test A set of periodic tasks τ1, τ2, ..., τN with implicit deadlines is schedulable on M processors by preemptive Global-EDF if Σ_{i=1..N} Ci/Ti ≤ M − (M − 1) · Ck/Tk, where τk is the task with the largest utilization Ck/Tk.
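
The test is easy to evaluate. A small helper, assuming implicit-deadline tasks given as (C, T) pairs; the function name is illustrative only.

def global_edf_schedulable(tasks, M):
    """tasks: iterable of (wcet, period) pairs with implicit deadlines."""
    utilizations = [c / t for c, t in tasks]
    u_max = max(utilizations)                   # utilization of the heaviest task
    return sum(utilizations) <= M - (M - 1) * u_max

# The Dhall-style example from the previous slide fails the test on 2 CPUs:
print(global_edf_schedulable([(5, 10), (5, 10), (8, 12)], M=2))   # False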

Weaknesses of Global Scheduling: migration overhead and scheduling anomalies.

Partitioned Scheduling Two steps: determine a mapping of tasks to processors, then perform run-time single-processor scheduling on each processor. Partitioned EDF: assign tasks to the processors such that no processor's capacity is exceeded (utilization bounded by 1.0), and schedule each processor using EDF.

Bin-packing Problem The problem is NP-complete!

Bin-packing to Multiprocessor Scheduling The bin-packing problem concerns packing objects of varying sizes into boxes ("bins") with the objective of minimizing the number of bins used. Solutions (heuristics): First Fit. Application to multiprocessor systems: bins are represented by processors and objects by tasks; the decision whether a processor is "full" or not is derived from a utilization-based schedulability test.

Partitioned Scheduling Advantages: most techniques for single-processor scheduling are also applicable here; partitioning of tasks can be automated by solving a bin-packing problem. Disadvantages: cannot exploit/share all unused processor time; may have very low utilization, bounded by 50% in the worst case.

Partitioned Scheduling Problem Given a set of tasks with arbitrary deadlines, the objective is to decide a feasible task assignment onto M processors such that all the tasks meet their timing constraints, where Ci is the execution time of task ti on any processor.

Partitioned Algorithm Among the processors on which the task fits: First-Fit chooses the one with the smallest index; Best-Fit chooses the one with the maximal current utilization; Worst-Fit chooses the one with the minimal current utilization.

Partitioned Example (figure): tasks with utilizations 0.2 -> 0.6 -> 0.4 -> 0.7 -> 0.1 -> 0.3 arrive in that order and are packed onto three processors under First Fit, Best Fit, and Worst Fit.
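
The three heuristics can be sketched as follows, treating each task as a utilization value and each processor as a bin of capacity 1.0. The function name, the capacity bound, and the printed assignments are choices of this example (they follow the rules stated above and need not reproduce the original figure exactly).

def partition(utilizations, num_processors, policy):
    load = [0.0] * num_processors
    assignment = [[] for _ in range(num_processors)]
    for u in utilizations:
        feasible = [m for m in range(num_processors) if load[m] + u <= 1.0]
        if not feasible:
            return None                                   # no feasible assignment found
        if policy == "first":
            m = feasible[0]                               # smallest index
        elif policy == "best":
            m = max(feasible, key=lambda i: load[i])      # maximal current utilization
        else:                                             # "worst"
            m = min(feasible, key=lambda i: load[i])      # minimal current utilization
        load[m] += u
        assignment[m].append(u)
    return assignment

tasks = [0.2, 0.6, 0.4, 0.7, 0.1, 0.3]
for policy in ("first", "best", "worst"):
    print(policy, partition(tasks, 3, policy))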

EDF with First Fit

Schedulability Test López et al. [3] prove that, if all tasks have a utilization factor C/T of at most α, the worst-case achievable utilization for EDF scheduling with First-Fit allocation (EDF-FF) on m processors is (β·m + 1)/(β + 1), where β = ⌊1/α⌋.
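
As a quick sanity check, the quoted bound can be computed directly; m is the number of processors, alpha the maximum per-task utilization, and the helper name is illustrative only.

from math import floor

def edf_ff_utilization_bound(m, alpha):
    beta = floor(1.0 / alpha)
    return (beta * m + 1) / (beta + 1)

print(edf_ff_utilization_bound(m=4, alpha=1.0))   # 2.5: no restriction on task size
print(edf_ff_utilization_bound(m=4, alpha=0.5))   # 3.0: lighter tasks pack better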

Demand Bound Function Define the demand bound function of task τi as DBF(τi, t) = max(0, ⌊(t − Di)/Ti⌋ + 1) · Ci. We need an approximation DBF* to enforce a polynomial-time schedulability test, with DBF(τi, t) ≤ DBF*(τi, t) < 2 · DBF(τi, t).
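
A direct implementation of DBF for a sporadic task (C, D, T), together with a common linear approximation that satisfies the inequality above; the exact DBF* used in the original slides is not shown, so the linear form below is an assumption of this sketch.

from math import floor

def dbf(C, D, T, t):
    """Maximum execution demand of jobs with both release and deadline in [0, t]."""
    if t < D:
        return 0
    return (floor((t - D) / T) + 1) * C

def dbf_approx(C, D, T, t):
    """Linear upper bound on dbf(); satisfies dbf <= dbf_approx < 2 * dbf for t >= D."""
    if t < D:
        return 0
    return C + (C / T) * (t - D)

C, D, T = 2, 7, 10
for t in (5, 7, 17, 27):
    print(t, dbf(C, D, T, t), dbf_approx(C, D, T, t))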

Deadline Monotonic Partition

Schedulability Test

Weakness of Partitioned Scheduling Restricting a task to one processor reduces schedulability, and finding a feasible partition is NP-hard. Example: suppose there are M processors and M + 1 tasks with the same period T, and the (worst-case) execution time of each of these M + 1 tasks is T/2 + e with e > 0. With partitioned scheduling the set is not schedulable, since some processor must be assigned two of the tasks and their combined demand exceeds T.

Semi-partitioned Scheduling Tasks are first partitioned onto the processors; when assigning a task, the processor with the minimum current utilization is picked. If a task cannot fit on the picked processor, it has to be split into multiple (two or even more) parts. If ti is split and assigned to a processor m, and the utilization on processor m after assigning ti is at most U(scheduler, N), then ti is so far schedulable.

Semi-partitioned EDF Let Tmin be the minimum period among all the tasks. Using a user-defined parameter k, we divide time into slots of length S = Tmin/k. We apply the first-fit approach, taking SEP as the upper bound on the utilization of a processor. If a task does not fit on the processor under consideration, it is split into two subtasks: one is assigned to that processor and the other to a newly allocated processor. Execution of a split task is only possible in the time windows reserved for it within each time slot.

Semi-partitioned EDF For each time slot, two parts are reserved. If a task ti is split, it can be served only within these two pre-defined windows of lengths xi and yi, with (xi + yi)/S = Ci/Ti. A processor can host two split tasks, ti and tj: ti is served at the beginning of the time slot and tj at the end. The schedule is EDF, but whenever an instance of a split task is in the ready queue, it is executed inside its reserved time region.

Semi-partitioned EDF We can assign every task ti with Ui > SEP to a dedicated processor, so we only consider tasks with Ui no larger than SEP. For a split task, (xi + yi)/S = Ci/Ti and lo_split(ti) + high_split(ti) = Ci/Ti ≤ SEP. The reservation to serve ti sets xi = S × (f + lo_split(ti)) and yi = S × (f + high_split(ti)). SEP is set as a constant.
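
The reservation arithmetic can be sketched as follows, assuming the slot length S = Tmin/k, the constant f, and the split utilizations lo_split and high_split are already known; the concrete numbers used here are illustrative only, not values from the original slides.

def slot_length(t_min, k):
    return t_min / k

def reservations(S, f, lo_split, hi_split):
    """Per-slot reservations x (start of slot) and y (end of slot) for a split task."""
    x = S * (f + lo_split)      # served at the beginning of each slot
    y = S * (f + hi_split)      # served at the end of each slot
    return x, y

S = slot_length(t_min=10.0, k=2)           # S = 5.0 (illustrative values)
x, y = reservations(S, f=0.05, lo_split=0.20, hi_split=0.15)
print(S, x, y)                             # 5.0 1.25 1.0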

Two Split Tasks on a Processor For split tasks to be schedulable, the following sufficient conditions have to be satisfied: lo_split(ti) + f + high_split(ti) + f ≤ 1 for any split task ti, and lo_split(tj) + f + high_split(ti) + f ≤ 1 when ti and tj are assigned to the same processor. These conditions yield the "magic value" SEP. However, we still have to guarantee the schedulability of the non-split tasks; a sufficient condition for them can also be shown.

Schedulability Test

Magic Values: f

Magic Values: SEP

Reference Multiprocessor Real-Time Scheduling Global Scheduling Dr. Jian-Jia Chen: Multiprocessor Scheduling. Karlsruhe Institute of Technology (KIT): 2011-2012 Global Scheduling Sanjoy K. Baruah: Techniques for Multiprocessor Global Schedulability Analysis. RTSS 2007: 119-128 Partitioned Scheduling Sanjoy K. Baruah, Nathan Fisher: The Partitioned Multiprocessor Scheduling of Sporadic Task Systems. RTSS 2005: 321-329 Semi-partitioned Scheduling Björn Andersson, Konstantinos Bletsas: Sporadic Multiprocessor Scheduling with Few Preemptions. ECRTS 2008: 243-252

See You Next Week

Critical Instants? The analysis for uniprocessor scheduling is based on the classical critical-instant theorem. Synchronous release of jobs does not necessarily lead to the critical instant under global multiprocessor scheduling.