
Welcome!

PhD Dissertation Defense PhD Candidate: Wenming Li Advisor: Dr. Krishna M. Kavi Committee: Dr. Krishna M. Kavi Dr. Robert Akl Dr. Phil Sweany

Group-EDF - A New Approach And An Efficient Non-Preemptive Algorithm for Soft Real-Time Systems

Contributions A new approach for soft real-time systems. A new scheduling algorithm for soft real-time systems and soft Real- Time Operating System (RTOS).

Contributions (Cont’d) Our work is a new approach for soft real-time systems. - First to propose dynamic grouping of tasks by their deadlines. - First to propose a two-level scheduling scheme for real-time tasks.

Contributions (Cont’d) Group-EDF is a new scheduling algorithm for soft RTOS and real-time systems. - First to use Earliest Deadline First (EDF) across dynamic groups and Shortest Job First (SJF) within each group.

Focus Soft real-time systems and soft RTOS. Non-preemptive scheduling. Real-time periodic, aperiodic, or sporadic tasks.

The Taxonomy of Real-time Scheduling Our EDF/gEDF algorithm is applicable to the shaded region

Terminology of the Real-Time Model

Hard Real-Time Systems Every resource management component must work in the correct order to meet time constraints. No deadline misses are allowed. Disadvantage - Low utilization

Soft Real-Time Systems Similar to hard real-time systems, but not every time constraint must be met; some deadline misses are tolerated. Advantage - High utilization

Non-Preemptive Scheduling Why non-preemptive? - Non-preemptive scheduling is more efficient than preemptive scheduling because preemption incurs context-switching overhead, which can be significant in fine-grained multithreading systems.

Basic Real-Time Scheduling First Come First Served (FCFS) Round Robin (RR) Shortest Job First (SJF)

First Come First Served (FCFS) Simple “first in, first out” queue. Long average waiting times. Penalizes I/O-bound processes. Non-preemptive.

Round Robin (RR) FCFS + preemption with a time quantum. Performance (average waiting time) depends heavily on the size of the time quantum.

Shortest Job First (SJF) Optimal with respect to average waiting time. Requires profiling of the execution times of tasks.
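
To make the claim concrete, here is a small worked illustration with hypothetical burst times (not from the slides): for bursts {6, 8, 7, 3}, SJF runs 3, 6, 7, 8 and averages (0 + 3 + 9 + 16) / 4 = 7 time units of waiting, versus (0 + 6 + 14 + 21) / 4 = 10.25 for FCFS in arrival order. A minimal C sketch of the comparison:

#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *a, const void *b) { return *(const int *)a - *(const int *)b; }

/* Average waiting time when jobs run back-to-back in the given order. */
static double avg_wait(const int *burst, int n)
{
    double total = 0.0;
    int elapsed = 0;
    for (int i = 0; i < n; i++) {
        total += elapsed;      /* job i waits for everything scheduled before it */
        elapsed += burst[i];
    }
    return total / n;
}

int main(void)
{
    int fcfs[] = {6, 8, 7, 3};          /* hypothetical bursts, arrival order */
    int sjf[]  = {6, 8, 7, 3};
    qsort(sjf, 4, sizeof(int), cmp);    /* SJF: shortest burst first */
    printf("FCFS avg wait: %.2f\n", avg_wait(fcfs, 4));  /* 10.25 */
    printf("SJF  avg wait: %.2f\n", avg_wait(sjf, 4));   /* 7.00  */
    return 0;
}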

Static Priority Scheduling – Rate-Monotonic (RM) The shorter the period of a task, the higher its priority (relative deadline = period). A set of n independent, periodic jobs can be scheduled by the rate-monotonic policy if e_1/P_1 + e_2/P_2 + … + e_n/P_n ≤ n(2^(1/n) - 1). - The utilization bound approaches ln 2 ≈ 0.69 as n approaches infinity.
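
A minimal sketch of this admission test, with hypothetical execution times and periods (the task values are illustrative, not from the dissertation):

#include <math.h>
#include <stdio.h>

/* Liu & Layland test: a periodic task set is RM-schedulable if its total
 * utilization does not exceed n * (2^(1/n) - 1). Sufficient, not necessary. */
static int rm_schedulable(const double *e, const double *P, int n)
{
    double u = 0.0;
    for (int i = 0; i < n; i++)
        u += e[i] / P[i];
    return u <= n * (pow(2.0, 1.0 / n) - 1.0);
}

int main(void)
{
    double e[] = {1.0, 2.0, 3.0};   /* execution times (hypothetical) */
    double P[] = {4.0, 8.0, 12.0};  /* periods (hypothetical)         */
    printf("RM-schedulable: %s\n", rm_schedulable(e, P, 3) ? "yes" : "no");
    return 0;
}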

Static Priority Scheduling – Deadline-Monotonic (DM) The shorter the relative deadline of a task, the higher its priority. Suitable when relative deadline ≤ period. For arbitrary relative deadlines, DM outperforms RM in terms of utilization.

Dynamic Priority Scheduling – Earliest Deadline First (EDF) The first and most widely used dynamic priority-driven scheduling algorithm. Effective for both preemptive and non-preemptive scheduling of periodic, aperiodic, and sporadic tasks.

Preemptive EDF For a set of preemptive periodic, aperiodic, and sporadic tasks, EDF is optimal in the sense that it will find a feasible schedule whenever any other algorithm can. - Can approach 100% utilization for periodic tasks.

Non-Preemptive EDF Optimal for sporadic non-preemptive tasks. Scheduling periodic and aperiodic non-preemptive tasks is NP-hard. - Non-preemptive EDF is near-optimal for non-preemptive scheduling on a uniprocessor system.

Theory of EDF Minimize the maximum lateness L_max = max{L_i | i = 1, …, n} = max{C_i - d_i | i = 1, …, n}. The problem: 1 | nonpmtn | L_max. Any sequence of jobs in nondecreasing order of due dates d_i results in an optimal schedule (Jackson's rule). The scheduling problem 1 | nonpmtn, r_i | L_max is NP-hard. Requiring L_max = max{C_i - d_i | i = 1, …, n} ≤ 0 means all task deadlines are met.
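
A brief sketch of the nondecreasing-due-date (EDD) rule for 1 | nonpmtn | L_max, assuming all jobs are released at time zero and using hypothetical job parameters:

#include <stdio.h>
#include <stdlib.h>

typedef struct {
    double e;   /* execution time */
    double d;   /* due date       */
} job_t;

static int by_due_date(const void *a, const void *b)
{
    const job_t *x = a, *y = b;
    return (x->d > y->d) - (x->d < y->d);
}

/* Earliest Due Date (Jackson's rule): sequencing in nondecreasing due-date
 * order minimizes L_max = max(C_i - d_i) when all jobs are released at t = 0. */
static double edd_max_lateness(job_t *jobs, int n)
{
    double t = 0.0, lmax = -1e300;
    qsort(jobs, n, sizeof(job_t), by_due_date);
    for (int i = 0; i < n; i++) {
        t += jobs[i].e;                 /* completion time C_i        */
        if (t - jobs[i].d > lmax)
            lmax = t - jobs[i].d;       /* lateness L_i = C_i - d_i   */
    }
    return lmax;
}

int main(void)
{
    job_t jobs[] = { {2, 6}, {1, 3}, {3, 9} };  /* hypothetical jobs */
    printf("L_max = %.1f\n", edd_max_lateness(jobs, 3));
    return 0;
}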

POSIX 1003.1b Portable Operating System Interface (POSIX) 1003.1b, from the IEEE Computer Society’s Portable Application Standards Committee (PASC): - SCHED_FIFO - SCHED_RR - SCHED_OTHER

Related Work Domino effect of EDF under overload. Overload detection and control: - Best-effort by value density (V/C) - Admission control - Disadvantage: requires accurate utilization computation Switching between two scheduling schemes. Using Worst-Case Execution Time (WCET).

Related Work SCAN-EDF for disk scheduling - Use SJF to break deadline ties Quantized deadlines (from CMU) - Static deadline windows

Our Real-time Model A task (job) in a real-time system, or a thread in a multithreaded system, τ_i is defined as: τ_i = (r_i, e_i, D_i, P_i), where r_i is the release time, e_i the execution time, D_i the relative deadline, and P_i the period.

Overview of gEDF Dynamically divide real-time jobs into groups by their deadlines. Groups are ordered by EDF, but tasks within a group may be scheduled by a different scheme - SJF, value, priority, etc. gEDF is used in both underload and overload.

Overview of gEDF (Cont’d) We use SJF to enhance EDF, but the approach is extensible to other scheduling schemes. gEDF is suitable for non-preemptive soft real-time systems. The criterion for selecting a grouping policy is flexible: - Static deadline windows - Dynamic windows as jobs arrive

Overview of gEDF (Cont’d) A group in the gEDF algorithm depends on a group-range parameter Gr. A job τ_j belongs to the same group as job τ_i if d_i ≤ d_j ≤ d_i + Gr*(d_i - t), where t is the current time and 1 ≤ i, j ≤ N. We group jobs whose deadlines are very close to each other. - Jobs with very close deadlines fall in the same group (though not necessarily if they straddle a group boundary).
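
A one-function sketch of this membership test, written directly from the condition on this slide (the function name and the driver values are illustrative):

#include <stdio.h>

/* True if job j (absolute deadline d_j) falls into the same gEDF group as
 * job i (absolute deadline d_i) at current time t, for group range Gr. */
static int same_group(double d_i, double d_j, double t, double Gr)
{
    return d_i <= d_j && d_j <= d_i + Gr * (d_i - t);
}

int main(void)
{
    /* Hypothetical numbers: at t = 0, with head deadline 10 and Gr = 0.5,
     * the group covers every job with a deadline in [10, 15]. */
    printf("%d %d\n", same_group(10, 14, 0, 0.5), same_group(10, 16, 0, 0.5));
    return 0;
}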

The gEDF Algorithm We assume a uniprocessor system. Q_gEDF is the queue for gEDF scheduling. The current time is represented by t. |Q_gEDF| is the length of the queue Q_gEDF. τ = (r, e, D, P) is the job at the head of the queue. - gEDF group = {τ_k | τ_k ∈ Q_gEDF, d_k - d_1 ≤ D_1 * Gr, 1 ≤ k ≤ m, where m ≤ |Q_gEDF|}, and D_1 is the deadline of the first job in the group.

The gEDF Algorithm (Cont’d) Function Enqueue(Q_gEDF, τ):
  if τ's deadline d > t then
    insert job τ into Q_gEDF in Earliest Deadline First order, i.e. d_i ≤ d_{i+1} ≤ d_{i+2}, where τ_i, τ_{i+1}, τ_{i+2} ∈ Q_gEDF, 1 ≤ i ≤ |Q_gEDF| - 2;
  end
- Enqueue is invoked on job arrivals.

The gEDF Algorithm (Cont’d) Function Dequeue(Q_gEDF):
  if Q_gEDF ≠ ∅ then
    find the job τ_min with e_min = min{e_k | τ_k ∈ Q_gEDF, d_k - d_1 ≤ Gr*D_1, 1 ≤ k ≤ m, where m ≤ |Q_gEDF|};
    run it and delete τ_min from Q_gEDF;
  end
- Dequeue is called when the processor becomes idle.
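
The following is a compact, illustrative reconstruction of these two operations over a deadline-sorted array. It is not the dissertation's kernel code; it follows the job model τ = (r, e, D, P) with absolute deadline d, and it assumes D_1 in the group condition is the relative deadline D of the job at the head of the queue:

#include <stddef.h>

#define MAXQ 256

typedef struct {
    double r, e, D, P;   /* release time, execution time, relative deadline, period */
    double d;            /* absolute deadline, d = r + D */
} job_t;

static job_t q[MAXQ];    /* Q_gEDF kept in nondecreasing-deadline (EDF) order */
static size_t qlen;

/* Enqueue: invoked on job arrival; jobs whose deadline has already passed are dropped. */
void gedf_enqueue(job_t j, double t)
{
    if (j.d <= t || qlen == MAXQ)
        return;
    size_t i = qlen;
    while (i > 0 && q[i - 1].d > j.d) {   /* insertion keeps EDF order */
        q[i] = q[i - 1];
        i--;
    }
    q[i] = j;
    qlen++;
}

/* Dequeue: called when the processor becomes idle. Within the group of jobs
 * whose deadlines lie within Gr * D_1 of the head job's deadline, pick the
 * shortest job (SJF), remove it, and return it. Returns 0 if the queue is empty. */
int gedf_dequeue(double Gr, job_t *out)
{
    if (qlen == 0)
        return 0;
    size_t best = 0;
    for (size_t k = 1; k < qlen && q[k].d - q[0].d <= Gr * q[0].D; k++)
        if (q[k].e < q[best].e)
            best = k;                     /* shortest execution time in the group */
    *out = q[best];
    for (size_t k = best; k + 1 < qlen; k++)
        q[k] = q[k + 1];                  /* remove the selected job */
    qlen--;
    return 1;
}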

The Experiment Used MATLAB-provided tools to generate tasks. - Each experiment generates N tasks. - The jobs are scheduled using EDF and gEDF. - The experiment is truncated at a predetermined time T. The success ratio is computed from m, the number of the N jobs that complete successfully.

The Experiment (Cont’d) Varied - Load (or utilization) - Execution time - Deadline (tight deadlines & loose deadlines) - Group range - Deadline tolerance (hard vs. soft real-time)

The Experiment (Cont’d) For each set of parameters, the experiment is repeated 100 times and the results shown are the averages from the 100 experiments.

Success Ratio: gEDF vs. EDF Deadline Tolerance Tr = 0.2

Success Ratio: gEDF vs. EDF Deadline Tolerance Tr = 0.5

Success Ratio: gEDF vs. EDF Deadline Tolerance Tr = 1.0

Success Ratio: gEDF vs. EDF Summary of the three previous figures

The gEDF algorithm obtains a higher success ratio under higher system loads, making it suitable for soft real-time systems.

Success Ratio: gEDF vs. EDF/Best-Effort/Guarantee Summary when Tr = 0.5

Effect of Deadline Laxity on Success Ratio Tight deadlines: deadline factor = 1 (Deadline = Execution Time), hard real-time.

Effect of Deadline Laxity on Success Ratio Tight deadlines: deadline factor = 1 (Deadline = Execution Time), softer real-time.

Effect of Deadline Laxity on Success Ratio Loose deadlines: deadline factor = 5 (Deadline = 5 × Execution Time).

Effect of Deadline on Success Ratio Success ratio of EDF for deadline factors 1, 2, 5, 10, and 15 (i.e. Deadline = factor × Execution Time).

Effect of Deadline on Success Ratio Success ratio of gEDF for deadline factors 1, 2, 5, 10, and 15 (i.e. Deadline = factor × Execution Time).

Effect of Deadline on Success Ratio The gEDF algorithm achieves a higher success ratio than EDF as deadline laxity and deadline tolerance increase.

Effect of Group Range (Gr) Gr = 0.1, 0.2, 0.5, 1.0, Tr = 0.1

Effect of Group Range (Gr) Gr = 0.1, 0.2, 0.5, 1.0, Tr = 0.5

Effect of Group Range (Gr) Within our experimental range, the size of the group does not greatly affect performance. Intuitively: - A very large range makes gEDF behave like SJF - A very small range makes gEDF behave like EDF The optimal window depends on job execution times, deadline tightness, and deadline tolerance.

Response Time: gEDF vs. EDF Deadline Tolerance Tr = 0

Response Time: gEDF vs. EDF Deadline Tolerance Tr = 0.5

Response Time: gEDF vs. EDF Deadline Tolerance Tr = 1.0

Response Time: gEDF vs. EDF The gEDF algorithm can yield better (i.e., faster) response times than EDF, both in underloaded and overloaded situations. Deadline tolerance Tr has a greater impact on gEDF than on EDF.

Response Time: gEDF vs. EDF/Best-Effort/Guarantee Summary when Tr = 0.2

The Effect of Deadline on Response Time Response time of EDF for deadline factors 1, 2, 5, and 10.

The Effect of Deadline on Response Time Response time of gEDF for deadline factors 1, 2, 5, and 10.

The Effect of Deadline on Response Time When the expected value of the deadline factor is sufficiently large (> 2), gEDF yields faster response times than EDF.

The gEDF Implementation in the Linux Kernel Keep the original functions for non-real-time applications. Modify the task_struct structure and add a new dedicated runqueue for EDF/gEDF. Add the system call (an extension to POSIX) sys_sched_setscheduler_plus.

The gEDF Implementation in the Linux Kernel (Cont’d) Add a new structure:
struct edf_param {
    unsigned long policy;
    unsigned long period;
    unsigned long length;
};
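
The slides do not give the signature of sys_sched_setscheduler_plus, so the user-space fragment below is purely hypothetical: it assumes the call behaves like sched_setscheduler(2), taking a pid and a pointer to struct edf_param, and that the policy value and syscall number shown exist only in the modified kernel.

#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Mirrors the structure added to the modified kernel (see slide above). */
struct edf_param {
    unsigned long policy;   /* hypothetical policy constant, e.g. SCHED_GEDF */
    unsigned long period;   /* task period (units assumed)                   */
    unsigned long length;   /* execution length / budget (units assumed)     */
};

#define SCHED_GEDF 6                        /* hypothetical policy number  */
#define __NR_sched_setscheduler_plus 289    /* hypothetical syscall number */

int main(void)
{
    struct edf_param p = { SCHED_GEDF, 100, 20 };
    /* Hypothetical invocation of the new system call for the calling process. */
    long ret = syscall(__NR_sched_setscheduler_plus, getpid(), &p);
    if (ret != 0)
        perror("sched_setscheduler_plus");
    return 0;
}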

The gEDF Implementation in the Linux Kernel (Cont’d) Dequeue_edf_task() and Enqueue_edf_task() (for EDF & gEDF). Schedule() (includes the gEDF algorithm). - Every jiffy (1 ms) the kernel runs the schedule function (a user process can also yield to another process). - Complexity O(n); with a heap, O(log n) (cf. Ingo Molnar's O(1) scheduler).

Benchmark Testing Test Suites

Benchmark Testing (Cont’d) Another Test Suite

Testing Results

Testing Results (Cont’d) gEDF’s Success Ratio / EDF’s Success Ratio. Y-axis: Load; X-axis: gEDF’s Success Ratio / EDF’s Success Ratio.

Conclusions gEDF performs as well as or better than EDF and adaptive algorithms such as the Best-Effort and Guarantee schemes. Under underload, gEDF matches EDF in success ratio and shows higher success rates than EDF for soft real-time tasks. Under underload, gEDF performs much better than EDF in terms of response time.

Conclusions (Cont’d) Under underload, gEDF obtains overall better performance than the adaptive algorithms in both success ratio and response time. Under overload, gEDF consistently outperforms EDF in both success ratio and response time. Under overload, gEDF obtains overall better performance than the adaptive algorithms in both success ratio and response time.

Conclusions (Cont’d) Summary

Algorithm | Success Ratio (Underload / Overload) | Response Time (Underload / Overload)
Group-EDF vs. EDF | = / >> | = / >>
Group-EDF vs. Adaptive: Best-Effort | = / >> | = / >
Group-EDF vs. Adaptive: Guarantee Scheme | = / >> | >= / >>

Legend: =: at least as good as; >=: better or as good as; >: better; >>: much better

Future Work Explore the applicability of the gEDF algorithm to the Scheduled Dataflow (SDF) architecture. Explore whether gEDF can obtain acceptable (and near-optimal) results for multiprocessor systems with soft real-time tasks. Explore different scheduling schemes within each gEDF group.

gEDF for SDF SU: Scheduling Unit; EP: Execution Processor; SP: Synchronization Processor; PLC: Preload; PSC: Poststore; EXC: Execution

gEDF for Multiprocessor EDF is not optimal for multiprocessor real-time systems. The EDF scheme can be used to schedule dynamic groups on multiprocessors. An optimal or near optimal algorithm may be adopted to schedule jobs distributed on different processors within each dynamic group.

gEDF for Multiprocessor (Cont’d) Advantages of using gEDF - Not limited to SJF - Potentially higher success ratios in underloaded and overloaded situations

Scheduling within a Group Explore different scheduling schemes within each gEDF group. - A promising direction for applying the gEDF approach. Reduce overall power consumption. - Explore a scheduling scheme that minimizes the power consumed by tasks in a group.

Thank You !