Implications of Classical Scheduling Results For Real-Time Systems


Implications of Classical Scheduling Results For Real-Time Systems John A. Stankovic, Marco Spuri, Marco Di Natale and Giorgio Buttazzo

Introduction
- Classical scheduling theory: a vast body of literature, but not always directly applicable to real-time systems
- Goal: summarize its implications and new results
- Provide insight for making good design choices
- Address common problems and design issues

Contents
- Preliminaries
- Uniprocessor systems: preemptive vs. non-preemptive, precedence constraints, shared resources, overload
- Multiprocessor systems: static vs. dynamic, preemptive vs. non-preemptive, anomalies, an analogy
Complexity increases gradually throughout.

Static vs. Dynamic scheduling
Static: the algorithm has complete a priori knowledge of the task set and its constraints: deadlines, computation times, release times (e.g. laboratory experiments, process control).
Dynamic: the algorithm has complete knowledge of the current state, but nothing about the future (e.g. multi-agent problems).
Classical results typically address static problems.

On-line vs. Off-line
On-line: the schedule is computed while the system runs; decisions are based on the current conditions.
Off-line: the schedule is always computed in advance, from a preliminary analysis of what to expect.
A scheduling algorithm can be applied to both static and dynamic problems, on-line or off-line. In the dynamic case, static scheduling can still be applied off-line to the worst case.

Metrics
Metrics must be chosen carefully. Classical theory minimizes, for example:
- the sum of completion times
- the weighted sum of completion times
- the schedule length
- the number of required processors
- the maximum lateness (useful in RT systems)
- the number of tasks that miss their deadlines (the usual RT metric)
In real-time systems, deadlines are usually included as constraints.

The problem with the Lmax metric: minimizing maximum lateness can prefer a schedule in which every task misses its deadline (first case) over one in which 4 out of 5 tasks meet theirs (second case).

Complexity theory
- P: a polynomial-time algorithm exists that solves the problem
- NP: a proposed solution can be verified in polynomial time, but no polynomial-time algorithm is known to solve the problem
- NP-Complete: R is NP-Complete if every NP problem can be polynomially transformed to R and R ∈ NP
- NP-Hard: R is NP-Hard if every NP problem can be polynomially transformed to R

Uniprocessor systems
Problem definition syntax: α | β | γ
- α: machine environment (number of processors)
- β: job characteristics (preemption, constraints, ...)
- γ: optimality criterion (maximum lateness, etc.)

Independent tasks 1. Preemption vs. non-preemption
The first design decision is whether to use preemption.
Problem 1 | nopmtn | Lmax: single machine, no preemption, minimize maximum lateness.
Jackson's rule: any sequence that puts the jobs in order of nondecreasing due dates is optimal. This is the Earliest Due Date (EDD) policy, the static counterpart of EDF (Earliest Deadline First); the problem is in P.
EDD is optimal in some other cases too. Proof: interchange argument (not presented).
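Jackson's rule fits in a few lines of code. The sketch below (the job-tuple format and names are my own, not from the paper) sorts by due date and reports the resulting maximum lateness:

```python
def edd_schedule(jobs):
    """Jackson's rule (EDD) for 1 | nopmtn | Lmax.
    jobs: list of (name, processing_time, due_date) tuples.
    Returns the optimal sequence and its maximum lateness."""
    order = sorted(jobs, key=lambda j: j[2])   # nondecreasing due dates
    t = 0
    lmax = float("-inf")
    for name, p, d in order:
        t += p                     # completion time C_j
        lmax = max(lmax, t - d)    # lateness L_j = C_j - d_j
    return [name for name, _, _ in order], lmax

seq, lmax = edd_schedule([("a", 2, 6), ("b", 1, 2), ("c", 3, 7)])
# seq is ["b", "a", "c"]; no job is late (lmax = -1)
```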

Independent tasks 2. Release times
A task has release time rj if its execution cannot start before time rj.
- 1 | nopmtn, rj | Lmax is NP-hard
- 1 | pmtn, rj | Lmax ∈ P
Jackson's rule, modified: any schedule that at every instant executes the eligible job with the earliest due date is optimal with respect to Lmax.
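The modified rule is easy to simulate with unit time steps. A sketch (integer times and the job format are my own assumptions):

```python
def preemptive_edf_lmax(jobs):
    """1 | pmtn, rj | Lmax under the modified Jackson's rule.
    jobs: list of (name, release, processing, due) with integer times.
    At each unit step, run the released, unfinished job with the
    earliest due date; return the maximum lateness."""
    left = {n: p for n, r, p, d in jobs}
    meta = {n: (r, d) for n, r, p, d in jobs}
    t, lmax = 0, float("-inf")
    while any(left.values()):
        ready = [n for n, c in left.items() if c > 0 and meta[n][0] <= t]
        if not ready:
            t += 1                 # idle until the next release
            continue
        n = min(ready, key=lambda x: meta[x][1])   # earliest due date
        left[n] -= 1
        t += 1
        if left[n] == 0:
            lmax = max(lmax, t - meta[n][1])
    return lmax

# Job "b" arrives at t = 1 and preempts "a"; both finish on time.
print(preemptive_edf_lmax([("a", 0, 2, 5), ("b", 1, 1, 2)]))
```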

Independent tasks 3. EDF and LLF
The proof of Jackson's rule is an interchange argument (not discussed in the paper).
Allowing preemption usually decreases complexity; both EDF and LLF are optimal in these cases.
LLF = Least Laxity First, where laxity ("slack time") = d − t − c: the due date minus the current time minus the remaining computation time.

Independent tasks 4. The rate-monotonic approach
Rate-monotonic priority assignment (Liu and Layland): the shorter the period, the higher the priority.
A set of n independent periodic tasks can be scheduled by the rate-monotonic policy if
Σ pi/Ti ≤ n(2^(1/n) − 1)
where pi is the worst-case execution time and Ti the period of task i. As n grows the bound tends to ln 2, so a utilization of about 69% can always be achieved.
Both rate-monotonic and EDF scheduling are widely used.
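The Liu and Layland bound can be checked mechanically. A sketch of the sufficient utilization test (the task-tuple format is my own notation):

```python
def rm_utilization_test(tasks):
    """Liu & Layland sufficient schedulability test for rate-monotonic
    scheduling. tasks: list of (p_i, T_i) with p_i the worst-case
    execution time and T_i the period. Sufficient, not necessary."""
    n = len(tasks)
    utilization = sum(p / t for p, t in tasks)
    bound = n * (2 ** (1 / n) - 1)   # tends to ln 2 ~ 0.693 as n grows
    return utilization <= bound

# Total utilization ~0.49 is below the 3-task bound of ~0.78.
print(rm_utilization_test([(1, 8), (2, 10), (2, 12)]))   # True
```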

Precedence constraints 1.
Tasks are no longer independent: i → j means that task i must precede task j, and a precedence graph G(V, E) can be constructed.
1 | prec, nopmtn | Lmax: Lawler's algorithm solves it in O(n²) (∈ P), but all tasks must have identical start (release) times.

Precedence constraints 2.
Introducing release times makes the problem NP-hard: 1 | prec, nopmtn, rj | Lmax.
The general case cannot be solved in polynomial time, BUT a polynomial algorithm exists when the precedence graph is a series-parallel graph.

Precedence constraints 3. Series-parallel graphs (1)
Series-parallel graphs can be constructed from an empty graph with two operators: series composition and parallel composition. Equivalently, a graph is series-parallel if its transitive closure does not contain a Z-graph.

Precedence constraints 4. Series-parallel graphs (2)
In-trees and out-trees are series-parallel, but a graph containing both need not be.
The precedence problem can then be solved with Lawler's algorithm in O(|N| + |A|) ∈ P, where |N| is the number of nodes and |A| the number of arcs.

Precedence constraints 5.
Bad news: Z-graphs almost always occur in RT systems, e.g. an asynchronous send followed by a synchronous receive.
Preemption again reduces the complexity of the scheduling problem: 1 | prec, pmtn, rj | Lmax is solvable in O(n²) by Baker's algorithm (not discussed).

Precedence constraints 6.
Another idea is to encode the precedences into the deadlines and release times, and then use plain EDF.
Blazewicz: EDF is optimal for this case if the deadlines and release dates of tasks are revised as
d*_i = min( d_i, min { d*_j − p_j : i → j } ), computed step by step starting from the tasks having no successor, and
r*_j = max( r_j, max { r*_i + p_i : i → j } ), computed step by step starting from the tasks having no predecessor.
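A fixpoint sketch of this revision. The exact formulas on the slide were not captured, so the update rules below are my reconstruction of the standard form: d*_i = min(d_i, min over successors j of d*_j − p_j) and r*_j = max(r_j, max over predecessors i of r*_i + p_i).

```python
def revise_timing(tasks, prec):
    """Encode precedence constraints into deadlines and release times
    so that plain EDF respects them (after Blazewicz).
    tasks: {name: (release, processing, deadline)}
    prec:  list of (i, j) pairs meaning i must precede j.
    Returns {name: (revised_release, revised_deadline)}."""
    r = {n: v[0] for n, v in tasks.items()}
    d = {n: v[2] for n, v in tasks.items()}
    changed = True
    while changed:                       # fixpoint; fine for small DAGs
        changed = False
        for i, j in prec:
            nd = d[j] - tasks[j][1]      # successor's deadline - p_j
            if nd < d[i]:
                d[i], changed = nd, True
            nr = r[i] + tasks[i][1]      # predecessor's release + p_i
            if nr > r[j]:
                r[j], changed = nr, True
    return {n: (r[n], d[n]) for n in tasks}

# a -> b: a's deadline tightens to 4, b's release moves to 2.
revised = revise_timing({"a": (0, 2, 10), "b": (0, 1, 5)}, [("a", "b")])
```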

Precedence constraints 7.
Shared resources are still not taken into account. The general problem of scheduling tasks with precedence constraints and resource conflicts is NP-hard; solutions usually rely on heuristics and branch-and-bound methods.
Branch and bound (B&B) is a general algorithm for finding optimal solutions of various optimization problems, especially in discrete and combinatorial optimization. It consists of a systematic enumeration of all candidate solutions, in which large subsets of fruitless candidates are discarded en masse using upper and lower estimated bounds on the quantity being optimized.

Shared resources 1.
The data-sharing problem is solved with mutual exclusion primitives, but several new problems arise.
Mok: when there are mutual exclusion constraints, it is impossible to find a totally on-line optimal run-time scheduler.
It is even worse: the problem of deciding whether it is possible to schedule a set of periodic processes that use semaphores (only to enforce mutual exclusion) is NP-hard.

Shared resources 2.
Even deciding whether a feasible schedule exists is NP-hard.
Proof: polynomial transformation (a.k.a. Karp reduction) from the 3-partition problem, which is NP-complete: given a multiset of 3m integers, partition it into m triples all having the same sum.

Shared resources 3.
Mok also points out that the reason for the NP-hardness is the differing possible computation times of the mutually exclusive blocks.
Confirmation: with unit-length tasks, both 1 | nopmtn, rj, pj=1 | Lmax and 1 | nopmtn, prec, rj, pj=1 | Cmax are in P.

Shared resources 4.
The algorithm should therefore somehow force critical sections to have the same length.
Sha and Baker found efficient suboptimal solutions that guarantee a minimum level of performance.
Kernelized monitor: use a time quantum on the processor longer than the longest critical section, so that every critical section runs entirely within a single quantum.

Shared resources 5. Mok: If a feasible schedule exists for an instance of the process model with precedence constraints and critical sections, then the kernelized monitor scheduler can be used to produce a feasible schedule

Shared resources 6. Priority Ceiling Protocol
Rate-monotonic approach: the Priority Ceiling Protocol (PCP).
Each mutex is assigned a priority ceiling (the highest priority of any task that may lock it), and access to all mutexes is controlled on the basis of these ceilings.
PCP is proved to be deadlock-free and prevents unbounded priority inversion (a job can be blocked at most once).
Chen and Lin extended PCP to work with EDF.

Shared resources 7. Stack Resource Policy
The Stack Resource Policy (SRP) is a more general solution by Baker.
A job is not permitted to start until the resources currently available are sufficient to meet (1) its own maximum requirements and (2) the maximum requirements of any single job that might preempt it.
The first property prevents deadlocks; the second prevents multiple priority inversion.

Shared resources - summary
Dealing with shared resources is essential. The classical results are usually applicable to RT systems, but only on uniprocessors.

Overload and value 1.
Even when a large transient overload occurs, we still want a good (if suboptimal) schedule: some tasks should meet their deadlines under all conditions.
Associating a value with each task lets us express these preferences.

Overload and value 2.
EDF (and LLF) perform very poorly under overload. EDF gives the highest priority to the task with the closest deadline, so a "domino effect" may occur: for example, every task misses its deadline, even though a suboptimal schedule in which some of them meet theirs exists.
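A tiny numeric illustration (the job parameters are mine, not the paper's): three jobs of length 2 with deadlines 1.9, 3.9 and 5.9 overload the processor. EDF runs them in deadline order and every one of them misses, while simply rejecting the first job saves the other two.

```python
def count_met(sequence):
    """Run jobs back-to-back from t = 0 and count the ones that
    finish by their deadline. Each job is (processing, deadline)."""
    t = met = 0
    for p, d in sequence:
        t += p
        met += t <= d
    return met

jobs = [(2, 1.9), (2, 3.9), (2, 5.9)]   # EDF order: closest deadline first
print(count_met(jobs))        # 0 -- the domino effect: all deadlines missed
print(count_met(jobs[1:]))    # 2 -- shedding the first job saves the rest
```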

Overload and value 3.
Different metrics are needed; note, however, that Lmax ≤ 0 already expresses that every task meets its deadline.
Attach a weight wi to each task. Smith's rule: an optimal schedule for 1 | | ΣwjCj is given by any sequence that puts the jobs in order of nondecreasing ratios pj/wj.
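Smith's rule (also known as WSPT, weighted shortest processing time first) in a few lines; the job-tuple format is my own notation:

```python
def smith_wspt(jobs):
    """Smith's rule for 1 | | sum(w_j * C_j): sequence the jobs by
    nondecreasing p_j / w_j. jobs: list of (name, p, w).
    Returns the order and the weighted sum of completion times."""
    order = sorted(jobs, key=lambda j: j[1] / j[2])
    t = total = 0
    for name, p, w in order:
        t += p            # completion time C_j
        total += w * t
    return [name for name, _, _ in order], total

# The short, heavy job "b" goes first: p/w = 0.5 versus 3.0 for "a".
order, cost = smith_wspt([("a", 3, 1), ("b", 1, 2)])
```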

Overload and value 4.
Smith's rule does not work in general. These problems are NP-hard:
- 1 | prec | ΣwjCj
- 1 | dj | ΣwjCj
- 1 | prec | ΣCj
- 1 | prec, pj=1 | ΣwjCj
These are solved by polynomial algorithms:
- 1 | chains | ΣCj
- 1 | series-parallel | ΣCj
- 1 | dj | ΣCj

Overload and value 5.
Baruah: there is an upper bound on the performance of any on-line preemptive algorithm operating under overload.
Competitive factor: the ratio between the cumulative value achieved by the algorithm and that achieved by a clairvoyant scheduler.
No on-line scheduling algorithm can have a competitive factor greater than 0.25.

Overload and value 6.
(Figure: the achievable competitive factor as a function of load - the ratio to the clairvoyant scheduler is 1 for load up to 1, then falls through 0.385 down to 0.25 as the load approaches 2.)

Summary of uniprocessor results
There is a huge body of theoretical results; many deployed algorithms are based on EDF or rate-monotonic scheduling.
Operation under overload and fault-tolerant scheduling are the fields where additional research is needed.

Multiprocessor RT scheduling
Far fewer results have been presented in this field, and almost all of the problems are NP-hard, so the most important goal is to develop clever heuristics.
There are serious anomalies that must be avoided. Processors are assumed to be identical.

Deterministic (static) scheduling 1. Non-preemptive
Multiprocessor scheduling results usually assume tasks with constant execution times.
Theorems for non-preemptive, partially ordered tasks with resource constraints and one single(!) deadline show that these problems are almost always NP-hard.
The theorems consider arbitrary partial orders, forest partial orders (precedence graphs that are forests), and independent tasks.

Deterministic (static) scheduling 2. Non-preemptive
(Table lost in transcription: complexity results classified by number of processors, resources, ordering and computation times, citing theorems of Hu, Coffman and Graham, Ullman, and Garey and Johnson; the unit-time cases with forest precedence or two processors are in P, while the general cases are NP-hard.)

Deterministic (static) scheduling 3. Non-preemptive
Even these cases are far less complex than a typical embedded-system scheduling problem, which has non-unit tasks, more shared resources, and tasks with different deadlines(!).
Heuristic algorithms must be used.

Deterministic (static) scheduling 4. Preemptive
Introducing preemption usually makes the problem easier, but consider P | pmtn | ΣwjCj.
McNaughton: for any instance of this multiprocessor scheduling problem, there exists a schedule with no preemptions whose weighted sum of completion times is as small as that of any schedule with a finite number of preemptions.
There is no advantage to preemption in this case; we should rather avoid it and minimize overhead (such as context switches).

Deterministic (static) scheduling 5. Preemptive
Lawler: the multiprocessor problem of scheduling P processors with task preemption allowed, so as to minimize the number of late tasks, is NP-hard: P | pmtn | ΣUj, where Uj indicates a late task.
Solutions always require heuristics!

Dynamic scheduling 1.
There are very few theoretical results in this field.
Consider the EDF algorithm, which is optimal in the uniprocessor case. Mok: earliest-deadline-first scheduling is not optimal in the multiprocessor case. Example:

Example of EDF in the multiprocessor case
Tasks Ti(Ci, di): T1(1, 1), T2(1, 2), T3(3, 3.5), two processors.
EDF: P1 runs T1 then T3, P2 runs T2; T3 finishes at t = 4 > 3.5 and misses its deadline.
Optimal: P1 runs T1 then T2, P2 runs T3; every task finishes by its deadline (T3 at t = 3 ≤ 3.5).
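The example can be checked by replaying both placements; the helper below (my own sketch) runs each processor's queue back-to-back and verifies the deadlines:

```python
def all_deadlines_met(assignment):
    """assignment: {processor: [(C, d), ...]} executed back-to-back.
    True iff every task finishes by its deadline."""
    for queue in assignment.values():
        t = 0
        for c, d in queue:
            t += c
            if t > d:
                return False
    return True

T1, T2, T3 = (1, 1), (1, 2), (3, 3.5)
edf_placement = {"P1": [T1, T3], "P2": [T2]}      # T3 ends at 4 > 3.5
optimal_placement = {"P1": [T1, T2], "P2": [T3]}  # T3 ends at 3 <= 3.5
print(all_deadlines_met(edf_placement), all_deadlines_met(optimal_placement))
```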

Dynamic scheduling 2.
Mok: for two or more processors, no deadline scheduling algorithm can be optimal without complete a priori knowledge of the deadlines, computation times, and start times of the tasks.
This implies that none of the classical scheduling algorithms can be optimal when used on-line.

Dynamic scheduling 3. Possibilities
- Analyse the worst-case scenario off-line: if a schedule exists for it, then every run-time situation can be scheduled.
- Use well-developed heuristics; this can substantially increase computational requirements (sometimes additional hardware is required).
Baruah: no on-line scheduling algorithm can guarantee a cumulative value greater than one-half in the dual-processor case.

Multiprocessing anomalies 1.
Richard's anomalies: start from an optimal schedule with a fixed number of processors, fixed execution times, and precedence constraints.
Graham: for the stated problem, changing the priority list, increasing the number of processors, reducing execution times, or weakening the precedence constraints can increase the schedule length.

Multiprocessing anomalies 2.
Weakening the constraints can ruin the schedule.
(Example, Gantt charts lost in transcription: with the static allocation P1: T1, T2 and P2: T3, T4, T5, decreasing the execution time C1 lengthens the overall schedule.)

Multiprocessing anomalies 3.
Richard's anomalies prove that it is not always sufficient to schedule for the worst case.
These anomalies can be avoided by having tasks simply idle when they finish earlier than their allocated computation time, but this can be very inefficient. Solutions exist, however [Shen].

Similarity to bin-packing
Bin-packing is a famous algorithmic problem: boxes must be placed into bins of limited capacity.
Two variations:
- minimize the number of (equal-size) bins required
- given a fixed number of bins, minimize the maximum bin level

Bin-packing implications
Several heuristics can be used: first-fit (FF), best-fit (BF), first-fit decreasing (FFD), best-fit decreasing (BFD).
Theoretical bounds exist: the FF and BF worst case is (17/10)·L*, and the FFD and BFD worst case is (11/9)·L*, where L* is the optimum.
RT systems have many more constraints than this analogy takes into account, but the implications may still be useful in off-line analysis.
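First-fit decreasing, for instance, fits in a dozen lines (the item sizes below are arbitrary):

```python
def first_fit_decreasing(items, capacity):
    """FFD bin-packing heuristic: take items in decreasing size and
    place each in the first bin with enough room, opening a new bin
    when none fits. Returns the list of bins."""
    bins = []
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:                      # no existing bin fits
            bins.append([item])
    return bins

# Six items of total size 20 pack into two bins of capacity 10.
print(first_fit_decreasing([4, 8, 1, 4, 2, 1], capacity=10))
```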

Summary
Uniprocessor RT scheduling can draw on the vast amount of theoretical knowledge from classical theory.
Much less is known about multiprocessor scheduling, and most of its problems are NP-hard.
We need to develop clever heuristics and to do additional research in these fields.

Thank you for your attention!