Chapter 6: Scheduling Algorithms Dr. Amjad Ali

Context switching A context switch is the process of storing and restoring the state (context) of a CPU so that execution can be resumed from the same point at a later time. This enables multiple processes to share a single CPU, and it is an essential feature of a multitasking operating system. Context switches are usually computationally expensive, and much of operating-system design is aimed at minimizing context-switch overhead.

Context switching The following steps are needed for a context switch to occur: Save the values of the program counter, registers, and stack pointer. Change the state of the current process in its PCB from running to ready (or to waiting, if it blocked on I/O). Update the process-accounting information (CPU time used, and so on). Select a new process for execution. Update the state field in its PCB to running. Update memory-management related information. Change the context of the CPU to the new process by loading the program counter and register values that were saved earlier. Context switching is a costly operation, so choose the time slice wisely; the switch itself typically takes less than 10 microseconds.
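To make the sequence above concrete, here is a minimal Python sketch of the context-switch bookkeeping. It is illustrative only: the PCB is a plain dictionary, the CPU is a fake register file, and the names (make_pcb, context_switch) are ours, not taken from any real kernel.

```python
# Toy illustration of the context-switch steps listed above.
# All structures (PCB as a dict, a simulated register file) are hypothetical.

cpu = {"pc": 0, "sp": 0, "regs": [0] * 8}   # simulated CPU state

def make_pcb(pid):
    return {"pid": pid, "state": "ready", "pc": 0, "sp": 0,
            "regs": [0] * 8, "cpu_time": 0}

def context_switch(old_pcb, new_pcb, elapsed):
    # 1. Save the program counter, stack pointer and registers of the old process.
    old_pcb["pc"], old_pcb["sp"], old_pcb["regs"] = cpu["pc"], cpu["sp"], cpu["regs"][:]
    # 2. Mark the old process as ready (or waiting, if it blocked on I/O).
    old_pcb["state"] = "ready"
    # 3. Update accounting information.
    old_pcb["cpu_time"] += elapsed
    # 4./5. The new process was already selected by the scheduler; mark it running.
    new_pcb["state"] = "running"
    # 6. Memory-management updates (page tables, etc.) are omitted in this toy model.
    # 7. Load the saved context of the new process onto the CPU.
    cpu["pc"], cpu["sp"], cpu["regs"] = new_pcb["pc"], new_pcb["sp"], new_pcb["regs"][:]

p1, p2 = make_pcb(1), make_pcb(2)
p1["state"] = "running"
context_switch(p1, p2, elapsed=10)   # switch the CPU from P1 to P2
```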

CPU Switch From Process to Process

Process Scheduling Maximize CPU use, quickly switch processes onto CPU for time sharing Process scheduler selects among available processes for next execution on CPU Maintains scheduling queues of processes Job queue – set of all processes in the system Ready queue – set of all processes residing in main memory, ready and waiting to execute Device queues – set of processes waiting for an I/O device Processes migrate among the various queues
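A small sketch of how these queues relate, assuming processes are just names and the queues are Python deques; the device names and the flow shown are illustrative, not a specification.

```python
from collections import deque

# Illustrative model of the scheduling queues described above.
ready_queue = deque()                               # in memory, ready to run
device_queues = {"disk": deque(), "net": deque()}   # waiting for an I/O device

ready_queue.extend(["P1", "P2", "P3"])      # processes admitted to the ready queue

running = ready_queue.popleft()             # dispatcher picks the next process
device_queues["disk"].append(running)       # it issues an I/O request and blocks

# when the I/O completes, the process migrates back to the ready queue
finished_io = device_queues["disk"].popleft()
ready_queue.append(finished_io)
print(list(ready_queue))                    # ['P2', 'P3', 'P1']
```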

Representation of Process Scheduling Queueing diagram represents queues, resources, flows

Schedulers Long-term scheduler (or job scheduler) – selects which processes should be brought into the ready queue Short-term scheduler (or CPU scheduler) – selects which process should be executed next and allocates the CPU Sometimes the only scheduler in a system Short-term scheduler is invoked very frequently (milliseconds) ⇒ must be fast Long-term scheduler is invoked very infrequently (seconds, minutes) ⇒ may be slow The long-term scheduler controls the degree of multiprogramming

Schedulers Processes can be described as either: I/O-bound process – spends more time doing I/O than computations; many short CPU bursts CPU-bound process – spends more time doing computations; few very long CPU bursts Types of process scheduling: Preemptive – a process can be taken off the processor after its time slice expires Non-preemptive – a process cannot be taken off the processor until it completes its execution (or blocks)

Scheduling Criteria CPU utilization – keep the CPU as busy as possible Throughput – number of processes that complete their execution per unit time Turnaround time – amount of time to execute a particular process Waiting time – amount of time a process has been waiting in the ready queue Response time – amount of time from when a request was submitted until the first response is produced, not the final output (for time-sharing environments)

Scheduling Algorithm Optimization Criteria Max CPU utilization Max throughput Min turnaround time Min waiting time Min response time
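The per-process criteria above reduce to two simple formulas used in the rest of the chapter: turnaround time = completion time − arrival time, and waiting time = turnaround time − CPU burst time. A tiny illustrative helper (the names are ours, not from the slides):

```python
# Illustrative helper for the scheduling criteria above.
def metrics(arrival, burst, completion):
    turnaround = completion - arrival      # time from submission to completion
    waiting = turnaround - burst           # time spent in the ready queue
    return turnaround, waiting

# Example: a process arriving at 0 with a 24 ms burst that finishes at 24 ms
print(metrics(0, 24, 24))   # -> (24, 0): turnaround 24, waiting 0
```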

First-Come, First-Served (FCFS) Scheduling Process Burst Time P1 24 P2 3 P3 3 Suppose that the processes arrive in the order: P1, P2, P3 The Gantt chart for the schedule is: P1 (0–24), P2 (24–27), P3 (27–30) Waiting time for P1 = 0; P2 = 24; P3 = 27 Average waiting time: (0 + 24 + 27)/3 = 17

FCFS Scheduling (Cont.) Suppose that the processes arrive in the order: P2, P3, P1 The Gantt chart for the schedule is: P2 (0–3), P3 (3–6), P1 (6–30) Waiting time for P1 = 6; P2 = 0; P3 = 3 Average waiting time: (6 + 0 + 3)/3 = 3 Much better than the previous case Convoy effect – short processes stuck behind a long process
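A minimal sketch of the FCFS computation for both arrival orders above: under FCFS each process simply waits for the total burst time of everything queued ahead of it (the variable names are ours).

```python
# FCFS: each process waits for the total burst time of everything ahead of it.
bursts = {"P1": 24, "P2": 3, "P3": 3}

for order in (["P1", "P2", "P3"], ["P2", "P3", "P1"]):
    waits, elapsed = {}, 0
    for name in order:
        waits[name] = elapsed        # time already consumed by earlier arrivals
        elapsed += bursts[name]
    avg = sum(waits.values()) / len(waits)
    print(order, waits, avg)         # averages 17.0 and 3.0, as in the two Gantt charts
```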

Shortest-Job-First (SJF) Scheduling Associate with each process the length of its next CPU burst Use these lengths to schedule the process with the shortest time SJF is optimal – gives minimum average waiting time for a given set of processes The longer processes may starve if there is a steady supply of shorter processes.

Example of SJF Process Burst Time P1 6 P2 8 P3 7 P4 3 SJF scheduling chart: P4 (0–3), P1 (3–9), P3 (9–16), P2 (16–24) Waiting time for P1 = 3; P2 = 16; P3 = 9; P4 = 0 Average waiting time = (3 + 16 + 9 + 0) / 4 = 7
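A sketch of the non-preemptive SJF computation, assuming all processes are available at time 0 as in the example above (the function name is ours). The same helper also reproduces the class exercise that follows.

```python
# Non-preemptive SJF with all processes available at time 0 (toy sketch).
def sjf_waiting_times(bursts):
    # bursts: dict mapping process name -> CPU burst length
    waits, elapsed = {}, 0
    for name, burst in sorted(bursts.items(), key=lambda kv: kv[1]):
        waits[name] = elapsed        # waits for all shorter jobs scheduled before it
        elapsed += burst
    return waits

waits = sjf_waiting_times({"P1": 6, "P2": 8, "P3": 7, "P4": 3})
print(waits, sum(waits.values()) / len(waits))
# {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16} -> average 7.0

# The class exercise below (bursts 7, 8, 4, 3) gives an average of 6.0:
# print(sjf_waiting_times({"P1": 7, "P2": 8, "P3": 4, "P4": 3}))
```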

Class Exercise Process Execution Time P1 7 P2 8 P3 4 P4 3

Class Exercise Process Execution Time P1 7 P2 8 P3 4 P4 3 SJF chart: P4 (0–3), P3 (3–7), P1 (7–14), P2 (14–22) Waiting time for P1 = 7; P2 = 14; P3 = 3; P4 = 0 Average waiting time = (7 + 14 + 3 + 0) / 4 = 6

Shortest Remaining Time First The preemptive version of SJF. Calculate the remaining execution time for each process Schedule the process with the minimum remaining time Like SJF, longer processes can starve in SRT algorithm

Example of Shortest-remaining-time-first Now we add the concepts of varying arrival times and preemption to the analysis Process Arrival Time Burst Time P1 0 8 P2 1 4 P3 2 9 P4 3 5 Preemptive SJF Gantt chart: P1 (0–1), P2 (1–5), P4 (5–10), P1 (10–17), P3 (17–26) Average waiting time = [(10 − 1) + (1 − 1) + (17 − 2) + (5 − 3)] / 4 = 26/4 = 6.5 msec
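A small time-stepped simulation of SRTF that reproduces the 6.5 ms average above. It is a sketch under simplifying assumptions (1-unit time steps, no context-switch cost, ties broken arbitrarily); the function name is ours.

```python
# Time-stepped shortest-remaining-time-first simulation (illustrative sketch).
def srtf_waiting_times(procs):
    # procs: dict mapping name -> (arrival time, CPU burst)
    remaining = {n: burst for n, (arrival, burst) in procs.items()}
    waits = {n: 0 for n in procs}
    t = 0
    while any(remaining.values()):
        ready = [n for n in procs if procs[n][0] <= t and remaining[n] > 0]
        if not ready:                # CPU idle until the next arrival
            t += 1
            continue
        current = min(ready, key=lambda n: remaining[n])   # shortest remaining time
        for n in ready:
            if n != current:
                waits[n] += 1        # every other ready process waits this tick
        remaining[current] -= 1
        t += 1
    return waits

waits = srtf_waiting_times({"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)})
print(waits, sum(waits.values()) / len(waits))
# {'P1': 9, 'P2': 0, 'P3': 15, 'P4': 2} -> average 6.5
```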

Class Exercise Process Arrival Time Execution Time P1 0 7 P2 1 3 P3 2 8 P4 3 4

Class Exercise Process Arrival Time Execution Time P1 0 7 P2 1 3 P3 2 8 P4 3 4 SRTF Gantt chart: P1 (0–1), P2 (1–4), P4 (4–8), P1 (8–14), P3 (14–22) Waiting time for P1 = 8 − 1 = 7; P2 = 1 − 1 = 0; P3 = 14 − 2 = 12; P4 = 4 − 3 = 1 Average waiting time = (7 + 0 + 12 + 1) / 4 = 5

Priority Scheduling A priority number (integer) is associated with each process The CPU is allocated to the process with the highest priority (smallest integer ≡ highest priority) Preemptive Non-preemptive SJF is priority scheduling where priority is the inverse of the predicted next CPU burst time Problem ⇒ Starvation – low-priority processes may never execute Solution ⇒ Aging – as time progresses, increase the priority of the waiting process

Example of Priority Scheduling Process Burst Time Priority P1 10 3 P2 1 1 P3 2 4 P4 1 5 P5 5 2 Priority scheduling Gantt chart: P2 (0–1), P5 (1–6), P1 (6–16), P3 (16–18), P4 (18–19) Average waiting time = 8.2 msec
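A sketch of non-preemptive priority scheduling for the example above, assuming all processes arrive at time 0 and a smaller number means higher priority (the names are ours; aging is only noted in a comment).

```python
# Non-preemptive priority scheduling, all processes at time 0
# (smaller priority number = higher priority).
def priority_waiting_times(procs):
    # procs: dict mapping name -> (burst time, priority number)
    waits, elapsed = {}, 0
    for name, (burst, prio) in sorted(procs.items(), key=lambda kv: kv[1][1]):
        waits[name] = elapsed
        elapsed += burst
    return waits

procs = {"P1": (10, 3), "P2": (1, 1), "P3": (2, 4), "P4": (1, 5), "P5": (5, 2)}
waits = priority_waiting_times(procs)
print(waits, sum(waits.values()) / len(waits))
# {'P2': 0, 'P5': 1, 'P1': 6, 'P3': 16, 'P4': 18} -> average 8.2

# Aging could be modelled by periodically decreasing the priority number of
# processes that have waited a long time, which prevents starvation.
```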

Round Robin (RR) Each process gets a small unit of CPU time (time quantum or time slice) q, usually 10–100 milliseconds. After this time has elapsed, the process is preempted and added to the tail of the ready queue, which is treated as a circular queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units. No process waits more than (n − 1)q time units until its next time quantum. A timer interrupts every quantum to schedule the next process. Performance: q large ⇒ RR behaves like FIFO; q small ⇒ context-switch overhead dominates, so q must be large with respect to the context-switch time.

Example of RR Process Burst Time P1 24 P2 3 P3 3 Time slice q = 4 The Gantt chart is: P1 (0–4), P2 (4–7), P3 (7–10), P1 (10–14), P1 (14–18), P1 (18–22), P1 (22–26), P1 (26–30) Typically, RR gives higher average turnaround time than SJF, but better response time q should be large compared to the context-switch time q is usually 10 ms to 100 ms, while a context switch takes < 10 μs
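A small round-robin simulation that reproduces the Gantt chart above (all processes assumed to arrive at time 0, context-switch time ignored, function name ours).

```python
from collections import deque

# Round-robin simulation (sketch): all processes arrive at time 0, quantum q.
def rr_waiting_times(bursts, q):
    remaining = dict(bursts)
    last_ready = {n: 0 for n in bursts}      # when each process last became ready
    waits = {n: 0 for n in bursts}
    queue, t = deque(bursts), 0
    while queue:
        name = queue.popleft()
        waits[name] += t - last_ready[name]  # time spent waiting since it became ready
        run = min(q, remaining[name])
        t += run
        remaining[name] -= run
        if remaining[name] > 0:
            last_ready[name] = t
            queue.append(name)               # back to the tail of the circular queue
    return waits

waits = rr_waiting_times({"P1": 24, "P2": 3, "P3": 3}, q=4)
print(waits, sum(waits.values()) / len(waits))
# {'P1': 6, 'P2': 4, 'P3': 7} -> average about 5.67
```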

Algorithm Evaluation How do we select a CPU-scheduling algorithm for an OS? Determine the criteria, then evaluate the algorithms Deterministic modeling A type of analytic evaluation Takes a particular predetermined workload and defines the performance of each algorithm for that workload Consider 5 processes, all arriving at time 0, with CPU burst times (in ms): P1 = 10, P2 = 29, P3 = 3, P4 = 7, P5 = 12

Deterministic Evaluation For each algorithm, calculate the average waiting time for the given workload and pick the minimum Simple and fast, but requires exact numbers for input, and the results apply only to those inputs For this workload: FCFS gives 28 ms, non-preemptive SJF gives 13 ms, and RR (q = 10) gives 23 ms
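A sketch of the deterministic evaluation itself: the helpers below recompute the quoted averages for the workload on the previous slide (burst times 10, 29, 3, 7, 12 ms, all at time 0, RR quantum q = 10). The function names are ours, and context-switch overhead is ignored.

```python
from collections import deque

# Deterministic evaluation of the three algorithms on the workload above.
bursts = {"P1": 10, "P2": 29, "P3": 3, "P4": 7, "P5": 12}

def avg_wait_fcfs(bursts):
    waits, t = [], 0
    for b in bursts.values():        # run in the given (arrival) order
        waits.append(t)
        t += b
    return sum(waits) / len(waits)

def avg_wait_sjf(bursts):
    # Non-preemptive SJF is just FCFS over the jobs sorted by burst length.
    return avg_wait_fcfs(dict(sorted(bursts.items(), key=lambda kv: kv[1])))

def avg_wait_rr(bursts, q):
    remaining = dict(bursts)
    last = dict.fromkeys(bursts, 0)  # when each process last became ready
    waits = dict.fromkeys(bursts, 0)
    queue, t = deque(bursts), 0
    while queue:
        n = queue.popleft()
        waits[n] += t - last[n]
        run = min(q, remaining[n])
        t += run
        remaining[n] -= run
        if remaining[n]:
            last[n] = t
            queue.append(n)
    return sum(waits.values()) / len(waits)

print(avg_wait_fcfs(bursts))         # 28.0
print(avg_wait_sjf(bursts))          # 13.0
print(avg_wait_rr(bursts, q=10))     # 23.0
```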