Is It Time Yet? Wing On Chan. Distributed Systems – Chapter 18: Scheduling (Hermann Kopetz).

3 Scheduling Algorithm Classifications Real-time scheduling is classified as soft or hard; hard real-time scheduling is further divided into dynamic and static, each of which can be either preemptive or non-preemptive.

4 Scheduling Problem Distributed hard real-time systems must execute a set of concurrent RT transactions such that all time-critical transactions meet their deadlines. Transactions need computational, communication, and data resources. The problem decomposes into scheduling the resources within a node and scheduling the communication resources between nodes.

5 Hard RT vs. Soft RT Scheduling Hard RT systems: deadlines must be guaranteed before execution starts; a high probability that a transaction finishes before its deadline is not enough. Guarantees come from off-line schedulability tests and feasible static schedules.

6 Hard RT vs. Soft RT Scheduling Soft RT systems: violation of timing constraints is not critical, so cheaper, resource-inadequate solutions can be used; under adverse conditions it is tolerable for some transactions to miss their timing constraints.

7 Dynamic vs. Static Scheduling Dynamic (on-line) scheduling considers only the actual task requests and their execution-time parameters, so finding a schedule at run time can be costly. Static (off-line) scheduling assumes complete prior knowledge: maximum execution times, precedence constraints, mutual exclusion constraints, and deadlines.

8 Preemptive vs. Non-Preemptive Preemptive: a running task can be interrupted by more urgent tasks (relevant when reasoning about safety assertions). Non-preemptive: no interruptions; the worst-case response time of the shortest task is the execution time of the longest task plus its own, so this is reasonable only in scenarios with many short tasks.

9 Central vs. Distributed In dynamic distributed RT systems, scheduling can be performed by a central scheduler or by distributed algorithms, which require up-to-date scheduling information in all nodes and therefore incur significant communication costs.

10 Schedulability Test A schedulability test determines whether a schedule exists; tests can be exact, necessary, or sufficient. A scheduler is optimal if it finds a schedule whenever the exact schedulability test says one exists. In general, the exact schedulability test belongs to the class of NP-complete problems.

11 Schedulability Test A sufficient schedulability test gives a sufficient but not necessary condition; a necessary schedulability test gives a necessary but not sufficient condition. An example of a necessary condition: the laxity, the difference between deadline d_i and computation time c_i, must be non-negative.

12 Periodic Tasks After the initial task request, all future request times are known: they are obtained by adding multiples of the known period to the initial request time.

13 Periodic Tasks A task set {T_i} of periodic tasks is characterized by periods p_i, deadlines d_i, and processing requirements c_i. It is sufficient to examine schedules whose length equals the least common multiple of the periods in {T_i}.
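As a quick illustration (not from the slides), the Python sketch below computes that schedule length for a small hypothetical task set; the tuples (name, p_i, c_i) are invented for the example.

```python
from math import lcm

# Hypothetical periodic task set as (name, period p_i, computation time c_i).
tasks = [("T1", 5, 2), ("T2", 10, 4), ("T3", 20, 3)]

# The schedule only has to be examined over one hyperperiod: the least
# common multiple of all periods in {T_i}.
hyperperiod = lcm(*(p for _, p, _ in tasks))
print(hyperperiod)  # 20
```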

14 Periodic Tasks Necessary schedulability test: the sum of the utilization factors µ_i must be less than or equal to n, the number of processors: µ = Σ(c_i / p_i) ≤ n, where µ_i = c_i / p_i is the fraction of time that task T_i requires the service of a CPU.
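A minimal sketch of this necessary test, assuming the same hypothetical (name, p_i, c_i) task tuples as above:

```python
def necessary_test(tasks, n_processors=1):
    """Necessary (but not sufficient) schedulability test for periodic tasks:
    the total utilization mu = sum(c_i / p_i) must not exceed the number of
    processors n."""
    mu = sum(c / p for _, p, c in tasks)
    return mu <= n_processors

# Hypothetical task set (name, p_i, c_i): mu = 2/5 + 4/10 = 0.8 <= 1 -> True
print(necessary_test([("T1", 5, 2), ("T2", 10, 4)]))
```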

15 Sporadic Tasks Request times are not known beforehand, but there must be a minimum interval p_i between any two request times of a sporadic task. If no such p_i exists, the necessary schedulability test fails. Aperiodic tasks have no constraints at all on their request times.

16 Optimal Dynamic Scheduling Consider a dynamic scheduler that has complete knowledge of the past but none of the future: an exact schedulability test is impossible, so a new definition of optimality is needed. A dynamic scheduler is optimal if it can find a schedule whenever a clairvoyant scheduler, with complete knowledge of the future, can find one.

17 Adversary Argument If there are mutual exclusion constraints between periodic and sporadic tasks, then in general, it is impossible to find an optimal totally online dynamic scheduler.

18 Adversary Argument

19 Adversary Argument Necessary schedulability test: µ = Σ(c_i / p_i) ≤ n; here µ = (2/4) + (1/4) = 3/4 ≤ 1, so the test passes. Now suppose that just when the periodic task T_1 starts, the sporadic task T_2 requests service. The two tasks are mutually exclusive, and T_2 has a laxity of d_2 − c_2 = 0, so T_2 will miss its deadline.

20 Adversary Argument A clairvoyant scheduler, knowing the future request times, schedules the periodic task between the sporadic requests. Because the laxity of the periodic task is greater than the execution time of the sporadic task, the clairvoyant scheduler will always find a schedule.

21 Adversary Argument If the on-line scheduler has no future knowledge about sporadic tasks, scheduling becomes unsolvable. Predictable hard RT systems are only feasible if there are regularity assumptions

22 Dynamic Scheduling A dynamic scheduling algorithm determines, after the occurrence of each significant event, which task to execute next, based on the current task requests.

23 Rate Monotonic Algorithm The classic algorithm for hard RT systems with a single CPU: a dynamic, preemptive algorithm based on static task priorities.

24 Rate Monotonic Algorithm Assumptions: (1) all requests in the set {T_i} are periodic; (2) all tasks are independent, with no precedence or mutual exclusion constraints; (3) d_i = p_i; (4) the maximum c_i is known and constant; (5) context-switching time is ignored; (6) µ = Σ(c_i / p_i) ≤ n(2^(1/n) − 1), a bound that approaches ln 2 (about 0.7) as n grows.
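The following sketch checks the sufficient utilization bound from assumption 6 (the Liu-Layland bound); the task set is hypothetical, and n here is the number of tasks.

```python
def rate_monotonic_test(tasks):
    """Liu-Layland sufficient test (assumption 6 above) for rate-monotonic
    scheduling on one CPU: mu = sum(c_i/p_i) <= n*(2**(1/n) - 1),
    where n is the number of tasks."""
    n = len(tasks)
    mu = sum(c / p for _, p, c in tasks)
    bound = n * (2 ** (1 / n) - 1)   # tends to ln 2 (about 0.69) for large n
    return mu <= bound

# Hypothetical task set (name, p_i, c_i): mu = 0.4 + 0.25 = 0.65 <= 0.83 -> True
print(rate_monotonic_test([("T1", 5, 2), ("T2", 20, 5)]))
```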

25 Rate Monotonic Algorithm The algorithm assigns static priorities from the periods: tasks with shorter periods p_i get higher priority, tasks with longer periods get lower priority, and at run time the highest-priority ready task is always executed. If all assumptions are met, every T_i meets its deadline. The algorithm is optimal among fixed-priority schemes for single-processor systems.

26 Earliest-Deadline-First Algorithm An optimal dynamic preemptive algorithm for single processors that uses dynamic priorities. Assumptions 1-5 of the rate monotonic algorithm must also hold. The utilization µ can go up to 1, even when the task periods p_i are not multiples of the shortest period. After each significant event, the task with the earliest deadline d_i gets the highest dynamic priority.
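A minimal sketch of the EDF decision rule; the ready-task dictionaries and their field names are assumptions made for the example.

```python
def edf_pick(ready_tasks):
    """Earliest-deadline-first: after a significant event, the ready task
    with the earliest absolute deadline gets the highest dynamic priority."""
    return min(ready_tasks, key=lambda t: t["deadline"])

ready = [{"name": "T1", "deadline": 12.0}, {"name": "T2", "deadline": 9.0}]
print(edf_pick(ready)["name"])  # T2
```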

27 Least-Laxity Algorithm Also optimal in single-processor systems, under the same assumptions as the earliest-deadline-first algorithm. At each scheduling decision point, the task with the smallest laxity (d_i − c_i) is given the highest dynamic priority. In multiprocessor systems, neither earliest-deadline-first nor least-laxity is optimal, but the least-laxity algorithm can handle some task scenarios that the earliest-deadline-first algorithm cannot.
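The corresponding least-laxity decision rule, again with hypothetical ready-task records that carry a remaining computation time:

```python
def least_laxity_pick(ready_tasks, now):
    """Least-laxity: at a scheduling decision point, the ready task with the
    smallest laxity (time to deadline minus remaining computation time) gets
    the highest dynamic priority."""
    return min(ready_tasks,
               key=lambda t: (t["deadline"] - now) - t["remaining_c"])

ready = [{"name": "T1", "deadline": 12.0, "remaining_c": 3.0},
         {"name": "T2", "deadline": 9.0,  "remaining_c": 6.0}]
print(least_laxity_pick(ready, now=2.0)["name"])  # T2 (laxity 1 vs. 7)
```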

28 Scheduling Dependent Tasks Analysis of tasks with precedence and mutual exclusion constraints is more useful in practice, since real tasks compete for shared resources. Possible solutions: provide extra resources so that simpler sufficient schedulability tests and algorithms can be used; divide the problem into two parts, one solved at compile time and one (the simpler of the two) solved at run time; or add restricting regularity assumptions.

29 Kernelized Monitor Intended for a set of short critical sections in which the longest critical section is shorter than a given duration q. The kernelized monitor allocates processor time in uninterruptible quanta of length q, assuming every critical section can be started and completed within a single uninterruptible quantum; a process may only be interrupted at times x·q, where x is an integer.
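A minimal dispatcher sketch under these assumptions: choose_task stands in for whatever policy (e.g. earliest-deadline-first) picks the next scheduling block, and scheduling decisions are confined to quantum boundaries. The task names are hypothetical.

```python
def dispatch(choose_task, horizon, q=2):
    """Kernelized-monitor dispatcher sketch: processor time is handed out
    in uninterruptible quanta of length q, so a scheduling decision (and
    therefore a preemption) can only happen at times t = x*q, x integer."""
    trace = []
    t = 0
    while t < horizon:
        trace.append((t, choose_task(t)))   # decision point at a quantum boundary
        t += q                              # the chosen block runs uninterrupted
    return trace

# choose_task stands in for a real policy; here it just alternates two tasks.
print(dispatch(lambda t: "T1" if (t // 2) % 2 == 0 else "T2", horizon=10))
```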

30 Kernelized Monitor Example: assume two periodic tasks, T_1 with c_1 = 2, d_1 = 2, p_1 = 5, and T_2 with c_21 = 2, c_22 = 2, d_2 = 10, p_2 = 10. T_2 consists of two scheduling blocks, and the second block c_22 is mutually exclusive with T_1. The quantum is q = 2.

31 Kernelized Monitor At t = 5 the earliest-deadline-first algorithm would need to schedule T_1 again, but it cannot, because the uninterruptible critical section c_22 of T_2, which is mutually exclusive with T_1, is still executing.

32 Kernelized Monitor The region just before the second activation of T_1, during which the critical section must not be started, is called a forbidden region. The dispatcher must be told about all forbidden regions at compile time.

33 Priority Inversion Consider three tasks T_1, T_2, and T_3, with T_1 having the highest priority and T_3 the lowest, scheduled with the rate-monotonic algorithm. T_1 and T_3 require exclusive access to a resource protected by a semaphore S.

34 Priority Inversion T_3 starts and gains exclusive access to the resource; T_1 requests service but is blocked by T_3 at the semaphore; T_2 requests service and, having higher priority than T_3, is granted the processor; T_2 finishes; T_3 finishes its critical section and releases S; only then can T_1 start and finish. The effective execution order is T_2, T_3, then T_1, even though T_1 has the highest priority. Solution: the priority ceiling protocol.

35 Priority Ceiling Protocol The priority ceiling (PC) of a semaphore S is the priority of the highest-priority task that can lock S. A task T may only enter a new critical section if its priority is higher than the priority ceilings of all semaphores currently locked by tasks other than T. A task runs at its assigned priority unless it is inside a critical region and blocks higher-priority tasks, in which case it inherits the highest priority of the tasks it blocks while in the critical region and returns to its assigned priority when it exits.
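A simplified sketch of the entry rule only (blocking queues and priority inheritance are omitted); the semaphore ceilings, holders, and priorities below are hypothetical.

```python
def may_lock(task, locked_semaphores):
    """Priority ceiling protocol entry rule (simplified sketch): a task may
    enter a new critical section only if its priority is higher than the
    priority ceilings of all semaphores currently locked by other tasks.
    Larger number = higher priority."""
    other_ceilings = [s["ceiling"] for s in locked_semaphores
                      if s["holder"] != task["name"]]
    return all(task["priority"] > c for c in other_ceilings)

# T3 (lowest priority) holds S3, whose ceiling equals T2's priority,
# so T2 is denied entry to any new critical section.
locked = [{"name": "S3", "ceiling": 2, "holder": "T3"}]
print(may_lock({"name": "T2", "priority": 2}, locked))  # False
```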

36 Priority Ceiling Protocol 1. T_3 starts. 2. T_3 locks S_3. 3. T_2 starts and preempts T_3. 4. T_2 is blocked when trying to lock S_3; T_3 resumes, inheriting T_2's priority.

37 Priority Ceiling Protocol 5. T_3 enters a nested critical region and locks S_2. 6. T_1 starts and preempts T_3. 7. T_1 is blocked when trying to lock S_1 (its priority is not higher than the priority ceiling of S_2, which T_3 currently holds); T_3 resumes. 8. T_3 unlocks S_2; T_1 awakens, preempts T_3, and locks S_1.

38 Priority Ceiling Protocol 9. T_1 unlocks S_1. 10. T_1 locks S_2. 11. T_1 unlocks S_2. 12. T_1 completes; T_3 resumes at the priority of T_2.

39 Priority Ceiling Protocol 13. T_3 unlocks S_3; T_2 preempts T_3 and locks S_3. 14. T_2 unlocks S_3. 15. T_2 completes; T_3 resumes. 16. T_3 completes.

40 Priority Ceiling Protocol One sufficient schedulability test: for a set of n periodic tasks {T_i} with periods p_i, computation times c_i, and worst-case blocking time B_i by lower-priority tasks, ∀i, 1 ≤ i ≤ n: (c_1/p_1) + (c_2/p_2) + … + (c_i/p_i) + (B_i/p_i) ≤ i(2^(1/i) − 1). This is not the only test; more complex ones exist. The priority ceiling protocol is a predictable, though non-deterministic, scheduling protocol.
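A sketch of this sufficient test, assuming tasks are given as hypothetical (name, p_i, c_i) tuples and the worst-case blocking times B_i are supplied separately:

```python
def pcp_sufficient_test(tasks, blocking):
    """Sufficient test under the priority ceiling protocol, as stated above:
    for each i (tasks ordered by increasing period, i.e. decreasing priority),
    c_1/p_1 + ... + c_i/p_i + B_i/p_i <= i * (2**(1/i) - 1)."""
    tasks = sorted(tasks, key=lambda t: t[1])   # shorter period = higher priority
    for i in range(1, len(tasks) + 1):
        name, p_i, _ = tasks[i - 1]
        util = sum(c / p for _, p, c in tasks[:i])
        if util + blocking[name] / p_i > i * (2 ** (1 / i) - 1):
            return False
    return True

# Hypothetical task set (name, p_i, c_i) and worst-case blocking times B_i.
tasks = [("T1", 5, 1), ("T2", 10, 2), ("T3", 20, 4)]
print(pcp_sufficient_test(tasks, {"T1": 1, "T2": 1, "T3": 0}))  # True
```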

41 Dynamic Scheduling in Distributed Systems It is already hard to guarantee deadlines in single-processor systems; it is even harder in distributed or multiprocessor systems because of the communication between nodes. Applications are required to tolerate transient faults such as message losses and to detect permanent faults.

42 Dynamic Scheduling in Distributed Systems Positive Acknowledgment or Retransmission (PAR): there is a large temporal uncertainty between the shortest and longest execution time, and assuming the worst case (the longest time) leads to poor responsiveness of the system. Masking protocols: send each message k + 1 times if k failures must be tolerated; there is no temporal problem, but permanent faults cannot be detected because the communication is unidirectional.

43 Dynamic Scheduling In Distributed Systems Solutions? –No idea –Providing good temporal performance is a “fashionable research topic”

44 Static Scheduling A static schedule, which guarantees all deadlines based on known resource, precedence, and synchronization requirements, is calculated off-line. This requires strong regularity assumptions, and the times at which external events will be serviced are known in advance.

45 Static Scheduling System design constraint: the maximum delay until a request is recognized plus the maximum transaction response time must be less than the service deadline. Time is generally organized as a periodic, time-triggered schedule: the time line is divided into a sequence of granules (the cycle time), and there is only one interrupt, a periodic clock interrupt marking the start of a new granule. In distributed systems the clocks are synchronized to a precision finer than one granule.

46 Static Scheduling Tasks are periodic, with each p_i a multiple of the basic granule, and the schedule period is the least common multiple of all p_i. All scheduling decisions are made at compile time and merely executed at run time. Finding an optimal schedule in a distributed system is an NP-complete problem.

47 Search Tree In the precedence graph, tasks are nodes and edges are dependencies. In the search tree, each level corresponds to one unit of time and the depth corresponds to the period; a path to a leaf node represents a complete schedule. The goal is to find a complete schedule that observes all precedence and mutual exclusion constraints and completes before the deadline.

48 Heuristic Function The heuristic function has two terms: the actual cost of the path so far and the estimated cost to the goal. Example: estimate the time needed to complete the precedence graph, the time until response (TUR). A necessary (optimistic) estimate of the TUR is the sum of the remaining maximum execution times and communication delays. If even this necessary estimate exceeds the deadline, the branches of the node are pruned and the search backtracks to the parent.
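A sketch of the pruning decision, with hypothetical numbers for the remaining execution and communication times:

```python
def should_prune(elapsed, remaining_exec, remaining_comm, deadline):
    """Search-tree pruning sketch: the necessary (optimistic) estimate of the
    time-until-response (TUR) is the sum of the remaining maximum execution
    times and communication delays; if even this estimate misses the deadline,
    the branch is pruned and the search backtracks to the parent node."""
    tur_estimate = sum(remaining_exec) + sum(remaining_comm)
    return elapsed + tur_estimate > deadline

# Hypothetical numbers: 6 elapsed + (2 + 3 + 1) still needed > deadline 10 -> prune.
print(should_prune(elapsed=6, remaining_exec=[2, 3], remaining_comm=[1], deadline=10))
```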

49 Increasing Adaptability A weakness of static scheduling is the assumption of strictly periodic tasks. Proposed solutions for more flexibility: transformation of sporadic requests into periodic requests, a sporadic server task, and mode changes.

50 Transformation of Sporadic Requests to Periodic Requests A schedule can only be guaranteed if the sporadic task has a non-zero laxity. One solution is to replace the sporadic task T = (c, d, p) with a pseudo-periodic task T' with c' = c, d' = c, and p' = min(d − c + 1, p). A sporadic task with a short laxity will therefore demand a large share of the resources, even though it requests service only infrequently.
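A sketch of this transformation as stated above (the standard transformation attributed to Mok); the numeric task parameters are hypothetical.

```python
def to_pseudo_periodic(c, d, p):
    """Transform a sporadic task (c, d, p) into a pseudo-periodic task with
    c' = c, d' = c, p' = min(d - c + 1, p), as described above.  Requires a
    non-zero laxity (d > c)."""
    assert d > c, "the sporadic task must have a non-zero laxity"
    return c, c, min(d - c + 1, p)

# Hypothetical sporadic task: a short laxity forces a short pseudo-period,
# so the task claims a large utilization (2/3) although it arrives rarely.
print(to_pseudo_periodic(c=2, d=4, p=50))  # (2, 2, 3)
```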

51 Sporadic Server Task A periodic server task of high priority is created. It maintains an execution-time budget for the duration of the server's period; when a sporadic task arrives, it is serviced at the server's priority, which depletes the budget, and the budget is replenished based on when the server was active. In effect, the sporadic server task is dynamically scheduled in response to a sporadic request.

52 Mode Change During system design, identify all operating modes and generate a static schedule off-line for each mode; analyze the possible mode changes and develop a mode-change schedule. At run time, when a mode change is requested, switch to the corresponding static schedule.
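A minimal, table-driven sketch of the idea; the mode names, tasks, and granule offsets are all hypothetical.

```python
# One static schedule per mode is built off-line; at run time a mode change
# just switches the dispatcher to the corresponding pre-analysed table.
schedules = {
    "normal":   [(0, "T1"), (2, "T2"), (5, "T1")],   # (start granule, task)
    "degraded": [(0, "T1"), (4, "T3")],
}
current_mode = "normal"

def request_mode_change(new_mode):
    global current_mode
    if new_mode in schedules:            # only pre-analysed modes are accepted
        current_mode = new_mode

request_mode_change("degraded")
print(schedules[current_mode])
```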

53 Comparisons Predictability. Static scheduling: the schedule is planned accurately in advance, so predictability is precise. Dynamic scheduling: no schedulability tests exist for distributed systems with mutual exclusion and precedence relations, and the dynamic nature of the approach cannot guarantee timeliness.

54 Comparisons Testability. Static scheduling: performance tests of every task can be compared with the established plan; testing is systematic and constructive, since all input cases can be observed. Dynamic scheduling: confidence in timeliness rests on simulations; testing under real loads is not enough, since rare events occur too seldom to be observed, and it is unclear whether the simulated loads are representative of the real loads.

55 Comparisons Resource utilization. Static scheduling: the schedule is planned for peak load, with the time reserved for each task at least its maximum execution time; many operating modes can lead to a "combinatorial explosion" of static schedules. Dynamic scheduling: the processor becomes available again more quickly, but resources are needed for the dynamic scheduling itself.

56 Comparisons Resource utilization, dynamic scheduling (cont'd): if loads are low, utilization is better than with a static schedule; if loads are high, more resources are spent on dynamic scheduling and fewer remain for the execution of the tasks.

57 Comparisons Extensibility. Static scheduling: if a new task is added or a maximum execution time is modified, the schedule must be recalculated; if a new node sends information into the system, the communication schedule must also be recalculated; and it is impossible to calculate a static schedule at all if the number of tasks changes dynamically at run time.

58 Comparisons Extensibility. Dynamic scheduling: it is easy to add or modify tasks, but a change can ripple through the system. The probability of change, and the system test time needed to assess its consequences, increase more than linearly with the number of tasks, so the approach scales poorly for large applications.