Reducing the Number of Preemptions in Real-Time Systems Scheduling by CPU Frequency Scaling
Abhilash Thekkilakattil, Anju S Pillai, Radu Dobrin, Sasikumar Punnekkat
EURECA

Real-time Scheduling Paradigms
- Preemptive: ability to achieve high utilization, but incurs preemption costs.
- Non-preemptive: low runtime overhead, but increased blocking times.
How to take advantage of both paradigms?
Dynamic Voltage (and Frequency) Scaling:
- Speeds up or slows down task executions
- Supported by most modern processors
- Traditionally used to slow down task executions for energy conservation
Our work: applying frequency scaling for preemption control.

Preemption-related costs
- Scheduler-related cost: overhead of saving and retrieving the context of the preempted task.
- Cache-related preemption delays: overhead of reloading cache lines; vary with the point of preemption; increased bus contention.
- Pipeline-related cost: the pipeline is cleared upon preemption and refilled upon resumption.
- Energy consumption: e.g., off-chip accesses due to cache pollution cause a 10-100x increase in energy.

Related work
- DVS for energy conservation, which can increase the number of preemptions (Pillai-01, Aydin-04, Bini-09, Marinoni-07)
- Context switches and cache-related preemption delays (Lee-98, Schneider-00, Katcher-93)
- The need for preemption elimination has been recognized (Ramamritham-94, Burns-95, Ramaprasad-06)
- RM causes more preemptions than EDF (Buttazzo-05)
- Preemption-aware scheduling algorithms (Yang-09, Woonseok-04)
- Delaying preemptions and forcing simultaneous release of tasks (Dobrin-04)
- Preemption threshold scheduling (Saksena-00, Wang-99)
Limitations of the existing works:
- May require changes to task attributes
- May require changes in the scheduler
- Incur potentially infeasible costs

Our approach
We use CPU frequency scaling to control the preemption behavior in fixed-priority schedulers.

Proposed methodology - overview
Step 1: Offline preemption identification.
Step 2: Preemption elimination:
- Calculate the minimum frequency required to eliminate the preemption.
- Analyze the effect of preemption elimination on the rest of the schedule (recalculate the start and finish times of task instances).
Step 3: Iterate until no more preemptions can be eliminated.

Classification of preemptions
Initial preemptions: occur if all tasks execute for their WCET; identified offline.
(Figure: Task A (high priority) preempting Task B (low priority).)

Classification of preemptions
Potential preemptions: additional preemptions that may occur when tasks execute for less than their WCET; identified offline.
(Figure: Task A (highest priority), Task B, Task C (lowest priority); when B executes for less than its WCET and finishes early, C starts earlier and may be preempted at a later release (and start) of A.)

Step 1: Offline preemption identification
A preemption of task B by task A is identified when all of the following hold:
- Condition 1: A has higher priority than B, AND
- Condition 2: A is released after the release of B, AND
- Condition 3: A starts executing after the start of B, AND
- Condition 4: B finishes its execution after the release of A.
(Figure: Task A (high priority) preempting Task B (low priority).)
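A minimal sketch of this offline check, assuming each task instance is available as a simple record (the Instance fields and the "lower number = higher priority" convention are illustrative, not the paper's notation):

```python
from dataclasses import dataclass

@dataclass
class Instance:
    priority: int    # lower number = higher priority (assumed convention)
    release: float   # release time of the instance
    start: float     # start time in the current schedule
    finish: float    # finish time in the current schedule

def preempts(a: Instance, b: Instance) -> bool:
    """Offline test of Conditions 1-4: does instance A preempt instance B?"""
    return (a.priority < b.priority      # Condition 1: A has higher priority than B
            and a.release > b.release    # Condition 2: A is released after B
            and a.start > b.start        # Condition 3: A starts after B starts
            and b.finish > a.release)    # Condition 4: B finishes after A's release

# B starts at t=0 and would finish at t=5; A arrives at t=2 -> A preempts B.
b = Instance(priority=2, release=0.0, start=0.0, finish=5.0)
a = Instance(priority=1, release=2.0, start=2.0, finish=3.0)
print(preempts(a, b))   # True
```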

Step 2a: Calculate the minimum frequency required to eliminate the preemption
To eliminate the preemption, the preempted execution of B is sped up so that it completes before A is released, i.e., its execution time must shrink from Cold to Cnew.
Execution time is inversely proportional to clock frequency:
Cold ∝ 1/Fold, Cnew ∝ 1/Fnew, hence Fnew = (Cold / Cnew) · Fold.
(Figure: Task A (high priority) and Task B (low priority), with B's execution compressed to Cnew.)
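A worked sketch of this formula (the function name and example values are illustrative):

```python
def min_frequency(c_old: float, c_new: float, f_old: float) -> float:
    """Cold is proportional to 1/Fold and Cnew to 1/Fnew,
    hence Fnew = (Cold / Cnew) * Fold."""
    return (c_old / c_new) * f_old

# e.g. an execution of 3 time units must fit into the 2 time units left before
# the higher-priority release; at a 100 Hz base frequency this needs 150 Hz.
print(min_frequency(c_old=3.0, c_new=2.0, f_old=100.0))   # 150.0
```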

Step 2b: Analyze the effect of preemption elimination on the rest of the schedule
(Figure: schedules with Task A (highest priority), Task B and Task C (lowest priority), before and after eliminating a preemption.)

Step 2b: Recalculate start times
CASE 1: No higher-priority task is executing when the task is released.
  Start time = release time.
CASE 2: Higher-priority tasks are executing when the task is released.
  Start time = latest finish time among all higher-priority tasks.
(Figure: Task A (highest priority), Task B and Task C (lowest priority) illustrating both cases.)
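A simplified sketch of the two cases, assuming hp_finish_times holds the finish times of the higher-priority instances that started before the release (names and values are illustrative):

```python
def recalc_start_time(release: float, hp_finish_times: list[float]) -> float:
    """CASE 1: no higher-priority instance is still executing at the release
    time -> start = release. CASE 2: otherwise start = latest finish time
    among the higher-priority instances executing at that point."""
    still_running = [f for f in hp_finish_times if f > release]
    return max(still_running) if still_running else release

print(recalc_start_time(4.0, hp_finish_times=[3.0, 3.5]))   # CASE 1 -> 4.0
print(recalc_start_time(4.0, hp_finish_times=[5.0, 6.0]))   # CASE 2 -> 6.0
```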

Step 2b: Recalculate finish times
Finish time = Start time + Interference + Execution time.
Response time analysis is performed at the instance level.
(Figure: Task A (highest priority) through Task E (lowest priority).)
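A minimal sketch of an instance-level fixed-point iteration on this equation (no overload handling; the representation of higher-priority instances as (release, execution time) pairs is an assumption):

```python
def recalc_finish_time(start: float, exec_time: float,
                       hp_instances: list[tuple[float, float]]) -> float:
    """Iterate Finish = Start + Execution time + Interference, where the
    interference sums the execution of higher-priority instances released
    in [start, finish). hp_instances = [(release, exec_time), ...]."""
    finish = start + exec_time
    while True:
        interference = sum(c for r, c in hp_instances if start <= r < finish)
        candidate = start + exec_time + interference
        if candidate == finish:
            return finish
        finish = candidate

# Instance starting at t=0 with C=3, one higher-priority instance released
# at t=1 with C=1 -> it finishes at t=4.
print(recalc_finish_time(0.0, 3.0, hp_instances=[(1.0, 1.0), (10.0, 1.0)]))   # 4.0
```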

Step 3: Iterate until no more preemptions can be eliminated
Minimizing the number of preemptions is an NP-hard problem, and the order of preemption elimination matters. In this work we use heuristics (see the ordering sketch below).
Heuristics evaluated:
- Highest Priority First (HPF)
- Lowest Priority First (LPF)
- First Occurring Preemption First (FOPF)
- Last Occurring Preemption First (LOPF)
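A sketch of how the four orderings could be realized; the tuple layout, the "lower number = higher priority" convention, and whether the priority refers to the preempting task are assumptions, not the paper's definitions:

```python
# Each candidate preemption is recorded as (time_of_preemption, priority).
preemptions = [(6.0, 2), (2.0, 1), (14.0, 3)]

hpf  = sorted(preemptions, key=lambda p: p[1])                 # Highest Priority First
lpf  = sorted(preemptions, key=lambda p: p[1], reverse=True)   # Lowest Priority First
fopf = sorted(preemptions, key=lambda p: p[0])                 # First Occurring Preemption First
lopf = sorted(preemptions, key=lambda p: p[0], reverse=True)   # Last Occurring Preemption First

print(fopf)   # [(2.0, 1), (6.0, 2), (14.0, 3)]
```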

Example
Task set (rate monotonic priorities):
Task   Exec. time   Period
A      1            4
B      2            8
C      6            20
D      -            40
Order of elimination: First Occurring Preemption First.
Processor modes: M1 = 100 Hz, M2 = 150 Hz, M3 = 300 Hz.
Resulting task instance frequencies (Hz): all instances of A, B and D run at 100 Hz; the instances of C are scaled up to 150 Hz and 300 Hz.
(Figure: rate monotonic schedules over the hyperperiod [0, 40], before and after elimination, annotated with FC1 = 6·F1 = 600 Hz, FC1 = 1.5·F1 = 150 Hz, FC2 = 3·F1 = 300 Hz, FD1 = 4·F1 = 400 Hz, and "D starts earlier".)
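One plausible way, not necessarily the paper's exact rule, to map the continuous minimum frequency onto the discrete processor modes of this example; the execution-time values below are hypothetical:

```python
def select_mode(c_old: float, c_new: float, f_old: float, modes: list[float]):
    """Pick the slowest available mode that meets the minimum required
    frequency; return None if no mode is fast enough (elimination infeasible)."""
    f_needed = (c_old / c_new) * f_old
    feasible = [m for m in sorted(modes) if m >= f_needed]
    return feasible[0] if feasible else None

modes = [100.0, 150.0, 300.0]                  # M1, M2, M3 from the example
print(select_mode(3.0, 2.0, 100.0, modes))     # needs 150 Hz -> mode M2
print(select_mode(4.0, 1.0, 100.0, modes))     # needs 400 Hz -> None (infeasible)
```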

Simulations
Task sets:
- Generated with the UUniFast algorithm [1] (sketched below)
- 60% to 100% utilization
- 1000 task sets, 5-15 tasks each
- LCM of task periods <= 1500
Processor:
- Continuous frequency modes
- Ratio of maximum to minimum frequency is 10:6
- Initially runs at the minimum frequency
- Four ranges of power consumption; can be easily generalized
Experiments:
- Based on task priority order (HPF, LPF)
- Based on the order of occurrence of preemptions (FOPF, LOPF)
[1] Bini and Buttazzo, Measuring the Performance of Schedulability Tests, Real-Time Systems, 2005.
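For reference, a minimal sketch of the UUniFast generator from [1]; the parameter names are ours:

```python
import random

def uunifast(n_tasks: int, total_utilization: float) -> list[float]:
    """UUniFast (Bini & Buttazzo, 2005): draw n_tasks individual task
    utilizations, uniformly distributed, summing to total_utilization."""
    utilizations = []
    remaining = total_utilization
    for i in range(1, n_tasks):
        next_remaining = remaining * random.random() ** (1.0 / (n_tasks - i))
        utilizations.append(remaining - next_remaining)
        remaining = next_remaining
    utilizations.append(remaining)
    return utilizations

u = uunifast(n_tasks=5, total_utilization=0.8)
print(u, sum(u))   # five utilizations summing to 0.8
```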

Preliminary results - initial preemptions
55% preemption reduction after elimination.
(Chart: average number of initial preemptions before elimination and after elimination with HPF, LPF, FOPF and LOPF.)

Preliminary results - potential preemptions
27% preemption reduction after elimination; LOPF and LPF fare slightly better.
(Chart: average number of potential preemptions before elimination and after elimination with HPF, LPF, FOPF and LOPF.)

Conclusions
A method to control preemption behavior in real-time systems using CPU frequency scaling:
- Offline
- Provides a trade-off between preemptions and energy consumption
- Requires no modifications to the scheduler
- Requires no modifications to task attributes
The main element of cost is energy (a 4.2 times increase in our simulations).

On-going efforts
- Extensions to the sporadic task model
- Combined offline/online approach: reducing task execution speed could be a better alternative for preemption control and compensates for the energy increase due to speeding up task executions
- Optimization: minimize the number of preemptions, minimize overall energy consumption, and explore the trade-offs

Questions or suggestions? Thank you.