CPU Scheduling.


Outline
- Scheduling Objectives
- Levels of Scheduling
- Scheduling Criteria
- Scheduling Algorithms: FCFS, Shortest Job First, Priority, Round Robin, Multilevel
- Multiple-Processor Scheduling
- Real-time Scheduling
- Algorithm Evaluation

Introduction to Scheduling In a multiprogramming environment, many processes compete for the CPU at the same time. Whenever two or more processes are in the ready state and only one CPU is available, a choice must be made about which process to run next. The part of the OS that makes this choice is called the scheduler, and the algorithm it uses is called the scheduling algorithm. Process scheduling and thread scheduling are largely the same.

Introduction to Scheduling Batch systems: input arrived in the form of cards or a magnetic tape, so scheduling was very simple: just run the next job on the tape. Time-sharing systems: scheduling algorithms became more complex because multiple users are waiting for service. Because CPU time is a scarce resource, a good scheduler can make a big difference in perceived performance and user satisfaction, so a great deal of work went into devising clever and efficient scheduling algorithms.

Introduction to Scheduling With the advent of PCs the situation changed in two ways. First, most of the time there is only one active process. Second, computers became so much faster that the CPU is rarely a scarce resource any more. For example, if Word and Excel are running at the same time, it hardly matters which goes first, since the user is waiting for both of them to finish. Scheduling therefore does not matter much on simple PCs.

Introduction to Scheduling When we turn to high-end networked workstations and servers, the situation changes: multiple processes often compete for the CPU, so scheduling matters again. For example, the CPU may have to choose between running a process that updates the screen after the user has closed a window and running a process that sends out queued email. The choice makes a huge difference in the perceived response.

Introduction to Scheduling Closing the window must not take noticeable time, because the user will perceive the delay, whereas email delayed by two seconds is fine and may not even be noticed. In such cases scheduling matters very much. In addition to picking the right process, the scheduler also has to worry about making efficient use of the CPU, because process switching is expensive.

Process Behavior CPU-I/O burst cycle: process execution consists of a cycle of CPU execution and I/O wait; bursts of CPU usage alternate with periods of waiting for I/O. A process that spends most of its time computing is CPU-bound; one that spends most of its time waiting for I/O is I/O-bound.

Process Behavior (figure: alternating sequence of CPU and I/O bursts)

When to Schedule CPU scheduling decisions may take place under the following circumstances, when a process:
1. Switches from the running to the waiting state (for example, on an I/O request): non-preemptive
2. Switches from the running to the ready state (for example, when an interrupt occurs): preemptive
3. Switches from the waiting to the ready state (for example, at the completion of I/O): preemptive
4. Terminates (the running program is done): non-preemptive

CPU Scheduler Selects from among the processes in memory that are ready to execute and allocates the CPU to one of them. Non-preemptive scheduling: once the CPU has been allocated to a process, the process keeps the CPU until it exits or switches to the waiting state. Preemptive scheduling: a process can be interrupted and forced to release the CPU (because its time slice expired, a higher-priority process arrived, or an interrupt occurred); access to shared data must then be coordinated.

Categories of Scheduling Algorithms Scheduling algorithms differ for different environments and systems. Batch: no users are waiting for a quick response, so non-preemptive algorithms, or preemptive algorithms with long time periods for each process, are often acceptable; this reduces process switches and thus improves performance. Interactive: preemption is essential to keep one process from hogging the CPU and denying service to the others. Real time: processes run with real-time constraints.

Scheduling Algorithm Goals / Scheduling Criteria / Objectives of Scheduling What should a good algorithm do? CPU utilization: keep the CPU and other resources as busy as possible. Throughput: number of processes that complete their execution per time unit. Turnaround time: amount of time to execute a particular process, measured from its entry time (time of completion of the job minus time of submission of the job).

Scheduling Algorithm Goals / Scheduling Criteria / Objectives of Scheduling Waiting time: amount of time a process has been waiting in the ready queue. Response time (in a time-sharing environment): amount of time from when a request was submitted until the first response is produced; it is the time it takes to start responding, not the time it takes to output the complete response.

Scheduling Algorithm Goals / Scheduling Criteria / Objectives of Scheduling Priority: give preferential treatment to processes with higher priority. Balanced utilization: utilization of memory, I/O devices and other system resources is considered, not only CPU utilization. Fairness: treat all processes the same; no process should suffer indefinite postponement.

Scheduling Algorithm Goals (figure)

Optimization Criteria Maximize CPU utilization, maximize throughput, minimize turnaround time, minimize waiting time, minimize response time.

Scheduling Algorithms FCFS (First Come, First Served); SJF (Shortest Job First); Shortest Remaining Time Next; three-level scheduling.

First Come First Serve (FCFS) Scheduling The simplest of all scheduling algorithms. Policy: the process that requests the CPU first is allocated the CPU first (a single queue of ready processes). FCFS is a non-preemptive algorithm. Implementation: a FIFO queue; an incoming process is added to the tail of the queue, and the process selected for execution is taken from the head of the queue. When the first job enters the system from the outside, it is started immediately and allowed to run as long as it wants to.

First Come First Serve (FCFS) Scheduling As other jobs come in, they are put at the end of the queue. When the running process blocks, the first process in the queue is run next. When the blocked process becomes ready, it is put at the end of the queue, like a newly arrived job. The performance metric is the average waiting time in the queue. FCFS is easy to understand and easy to program. Gantt charts are used to visualize schedules.

First-Come, First-Served (FCFS) Scheduling Example Burst times: P1 = 24, P2 = 3, P3 = 3 (all arrive at time 0). Suppose the arrival order of the processes is P1, P2, P3.
Gantt chart: P1 (0-24), P2 (24-27), P3 (27-30)
Waiting times: P1 = 0, P2 = 24, P3 = 27
Average waiting time = (0 + 24 + 27) / 3 = 17

FCFS Scheduling (cont.) Now suppose the arrival order of the processes is P2, P3, P1.
Gantt chart: P2 (0-3), P3 (3-6), P1 (6-30)
Waiting times: P1 = 6, P2 = 0, P3 = 3
Average waiting time = (6 + 0 + 3) / 3 = 3, much better.
Convoy effect: short processes stuck behind a long process, e.g. one CPU-bound process and many I/O-bound processes. FCFS is simple and fair, but performance can be poor: the average queuing time may be long.

Disadvantages of FCFS Scheduling The average waiting time under FCFS is generally not minimal. There is a convoy effect, which results in lower CPU and device utilization. FCFS is non-preemptive, so it cannot be used in a time-sharing environment.

First-Come, First-Served (FCFS) Scheduling Example Burst times: P1 = 3, P2 = 6, P3 = 4, P4 = 2; the processes are served in the order P1, P2, P3, P4.
Gantt chart: P1 (0-3), P2 (3-9), P3 (9-13), P4 (13-15)
Waiting times: P1 = 0, P2 = 3, P3 = 9, P4 = 13
Average waiting time = (0 + 3 + 9 + 13) / 4 = 6.25 ms
Turnaround time = waiting time + burst time: P1 = 0 + 3 = 3, P2 = 3 + 6 = 9, P3 = 9 + 4 = 13, P4 = 13 + 2 = 15
Average turnaround time = (3 + 9 + 13 + 15) / 4 = 10 ms
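A minimal sketch (not from the slides) of how the FCFS numbers above can be computed; the fcfs helper name is illustrative and all processes are assumed to arrive at time 0:

```python
# Minimal FCFS sketch: given burst times in arrival order, compute
# per-process waiting and turnaround times (all processes arrive at t = 0).
def fcfs(bursts):
    waiting, turnaround, clock = [], [], 0
    for burst in bursts:
        waiting.append(clock)            # time spent in the ready queue
        clock += burst                   # process runs to completion (non-preemptive)
        turnaround.append(clock)         # turnaround = waiting + burst
    return waiting, turnaround

w, t = fcfs([3, 6, 4, 2])                # P1..P4 from the example above
print(w, sum(w) / len(w))                # [0, 3, 9, 13] 6.25
print(t, sum(t) / len(t))                # [3, 9, 13, 15] 10.0
```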

Example Burst times: P1 = 20, P2 = 7, P3 = 5. Calculate the average waiting time under FCFS if the processes arrive in the order: P1, P2, P3; P2, P3, P1; P3, P1, P2; P3, P2, P1.

Example Burst times: P1 = 6, P2 = 8, P3 = 7, P4 = 3. Calculate the average waiting time for FCFS and SJF.

Example Burst times: P1 = 6, P2 = 8, P3 = 7, P4 = 3. FCFS average waiting time: (0 + 6 + 14 + 21) / 4 = 10.25. SJF average waiting time: (3 + 16 + 9 + 0) / 4 = 7.

Example Burst times: P1 = 8, P2 = 4, P3 = 4, P4 = 4. Calculate the average waiting time for FCFS and SJF.

Example Burst times: P1 = 8, P2 = 4, P3 = 4, P4 = 4. Average waiting time for FCFS is 9 ms; average turnaround time for FCFS is 14 ms. Average waiting time for SJF is 6 ms; average turnaround time for SJF is 11 ms.

Shortest-Job-First (SJF) Scheduling Associate with each process the length of its next CPU burst and use these lengths to schedule the process with the shortest time. Two schemes. Scheme 1, non-preemptive: once the CPU is given to a process it cannot be preempted until it completes its CPU burst. Scheme 2, preemptive: if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt; this is also called Shortest-Remaining-Time-First (SRTF). SJF is optimal: it gives the minimum average waiting time for a given set of processes.

Non-Preemptive SJF Scheduling Example Processes (arrival time, burst time): P1 (0, 7), P2 (2, 4), P3 (4, 1), P4 (5, 4).
Gantt chart: P1 (0-7), P3 (7-8), P2 (8-12), P4 (12-16)
Waiting times: P1 = 0, P2 = 6, P3 = 3, P4 = 7
Average waiting time = (0 + 6 + 3 + 7) / 4 = 4

Shortest-Job-First (SJF) Scheduling Advantages: SJF is provably optimal, in that it gives the minimum average waiting time for a given set of processes. Moving a short process in front of a long one decreases the waiting time of the short process more than it increases the waiting time of the long process, so the average waiting time decreases. Difficulty: SJF requires knowing the length of the next CPU request, which can only be estimated by approximating the next CPU burst.
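The slides only say the next burst "can be worked out by approximating"; a common way to do that is exponential averaging of the measured bursts, sketched below. The predict_next name, the alpha value and the initial guess are illustrative assumptions:

```python
# Exponential-averaging sketch: predict the next CPU burst from the history
# of measured bursts, tau_next = alpha * t_n + (1 - alpha) * tau_n.
def predict_next(measured_bursts, alpha=0.5, initial_guess=10.0):
    tau = initial_guess                      # prediction before any history exists
    for t in measured_bursts:                # t is the length of each past burst
        tau = alpha * t + (1 - alpha) * tau  # blend recent behavior with the old estimate
    return tau

print(predict_next([6, 4, 6, 4]))            # estimate used to rank the process for SJF
```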

Example: Non-Preemptive SJF Processes (arrival time, burst time): P1 (0, 8), P2 (1, 4), P3 (2, 9), P4 (3, 5).
Gantt chart: P1 (0-8), P2 (8-12), P4 (12-17), P3 (17-26)
Waiting times: P1 = 0, P2 = 8 - 1 = 7, P3 = 17 - 2 = 15, P4 = 12 - 3 = 9
Average waiting time = (0 + 7 + 15 + 9) / 4 = 7.75 ms

Example: Non-Preemptive SJF Burst times: P1 = 3, P2 = 6, P3 = 4, P4 = 2; the arrival time of every process is 0 ms.
Gantt chart: P4 (0-2), P1 (2-5), P3 (5-9), P2 (9-15)
Waiting times: P1 = 2, P2 = 9, P3 = 5, P4 = 0
Average waiting time = (2 + 9 + 5 + 0) / 4 = 4 ms
Turnaround times: P1 = 2 + 3 = 5, P2 = 9 + 6 = 15, P3 = 5 + 4 = 9, P4 = 0 + 2 = 2
Average turnaround time = (5 + 15 + 9 + 2) / 4 = 7.75 ms
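A minimal non-preemptive SJF sketch (not from the slides), assuming all processes arrive at t = 0; it reproduces the waiting times from the example above:

```python
# Minimal non-preemptive SJF sketch for processes that all arrive at t = 0:
# run the jobs in order of increasing burst time and accumulate waiting times.
def sjf(bursts):
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])  # shortest first
    waiting, clock = [0] * len(bursts), 0
    for i in order:
        waiting[i] = clock          # everything run so far is this process's wait
        clock += bursts[i]
    return waiting

w = sjf([3, 6, 4, 2])               # P1..P4 from the example above
print(w, sum(w) / len(w))           # [2, 9, 5, 0] 4.0
```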

SRTF (Shortest Remaining Time First) SJF may be preemptive or non-preemptive. The choice arises when a new process arrives at the ready queue while a previous process is executing: the new process may have a shorter next CPU burst than what is left of the currently executing process. Preemptive SJF will preempt the currently executing process, whereas non-preemptive SJF allows the currently running process to finish its CPU burst. Preemptive SJF is also called Shortest-Remaining-Time-First.

Preemptive SJF Scheduling (SRTF) Example Processes (arrival time, burst time): P1 (0, 7), P2 (2, 4), P3 (4, 1), P4 (5, 4).
Gantt chart: P1 (0-2), P2 (2-4), P3 (4-5), P2 (5-7), P4 (7-11), P1 (11-16)
Average waiting time = (9 + 1 + 0 + 2) / 4 = 3

Example: SRTF Processes (arrival time, burst time): P1 (0, 8), P2 (1, 4), P3 (2, 9), P4 (3, 5).
Gantt chart: P1 (0-1), P2 (1-5), P4 (5-10), P1 (10-17), P3 (17-26)
Waiting times: P1 = 0 + (10 - 1) = 9 ms, P2 = 1 - 1 = 0 ms, P3 = 17 - 2 = 15 ms, P4 = 5 - 3 = 2 ms
Average waiting time = (9 + 0 + 15 + 2) / 4 = 6.5 ms

SRTF (Shortest Remaining Time First) SRTF is the preemptive counterpart of SJF and is useful in a time-sharing environment. Advantages: the process with the smallest estimated remaining burst time completes first, including new arrivals; a running process may be preempted on the arrival of a new process with a shorter estimated time; turnaround time is better than under SJF. Disadvantages: SRTF has higher overhead than SJF; it must keep track of the elapsed time of the running process and must handle preemptions.
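A minimal SRTF sketch (not from the slides), simulated one time unit at a time; the data matches the SRTF example above, and the tie-breaking rule (lowest index wins on equal remaining time) is an assumption:

```python
# Minimal SRTF sketch: at every time unit run the arrived process with the
# least remaining burst time; waiting time = completion - arrival - burst.
def srtf(arrival, burst):
    remaining, done, clock, finished = list(burst), [0] * len(burst), 0, 0
    while finished < len(burst):
        ready = [i for i in range(len(burst)) if arrival[i] <= clock and remaining[i] > 0]
        if not ready:                       # CPU idle until the next arrival
            clock += 1
            continue
        i = min(ready, key=lambda j: remaining[j])
        remaining[i] -= 1
        clock += 1
        if remaining[i] == 0:
            done[i] = clock                 # completion time of process i
            finished += 1
    return [done[i] - arrival[i] - burst[i] for i in range(len(burst))]

w = srtf([0, 1, 2, 3], [8, 4, 9, 5])        # P1..P4 from the example above
print(w, sum(w) / len(w))                   # [9, 0, 15, 2] 6.5
```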

Round-Robin Scheduling Algorithm Designed especially for time-sharing systems; it is like FCFS, but preemption is added to switch between processes. A small unit of time called a time quantum (or time slice) is defined, and the ready queue is treated as a circular queue. To implement RR scheduling, the ready queue is kept as a FIFO queue of processes: new processes are added to the tail of the ready queue, and the CPU scheduler picks the first process from the ready queue, sets a timer to interrupt after one time quantum, and dispatches the process.

Round-Robin Scheduling Algorithm One of two things may then happen. The process may have a CPU burst of less than one time quantum; in this case the process itself releases the CPU voluntarily and the scheduler proceeds to the next process in the ready queue. Otherwise, if the CPU burst of the currently running process is longer than one time quantum, the timer goes off and causes an interrupt to the OS, and the process is put back at the tail of the ready queue. The average waiting time under RR is often quite long.

Example: RR Burst times: P1 = 20, P2 = 3, P3 = 4; the arrival time is 0 ms, the order is P1, P2, P3, and the time quantum is 4 ms.
Gantt chart: P1 (0-4), P2 (4-7), P3 (7-11), P1 (11-27)
Waiting times: P1 = 0 + (11 - 4) = 7 ms, P2 = 4 ms, P3 = 7 ms
Average waiting time = (7 + 4 + 7) / 3 = 6 ms

Example: RR Burst times: P1 = 3, P2 = 6, P3 = 4, P4 = 2; the arrival time is 0 ms and the time quantum is 2 ms.
Gantt chart: P1 (0-2), P2 (2-4), P3 (4-6), P4 (6-8), P1 (8-9), P2 (9-11), P3 (11-13), P2 (13-15)
Waiting times: P1 = 0 + (8 - 2) = 6 ms, P2 = 2 + (9 - 4) + (13 - 11) = 9 ms, P3 = 4 + (11 - 6) = 9 ms, P4 = 6 ms
Average waiting time = (6 + 9 + 9 + 6) / 4 = 7.5 ms

Example: RR Turnaround times (burst time + waiting time): P1 = 3 + 6 = 9, P2 = 6 + 9 = 15, P3 = 4 + 9 = 13, P4 = 2 + 6 = 8
Average turnaround time = (9 + 15 + 13 + 8) / 4 = 11.25 ms
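A minimal round-robin sketch (not from the slides), assuming all processes arrive at t = 0; it reproduces the waiting times of the quantum-2 example above:

```python
from collections import deque

# Minimal round-robin sketch: all processes arrive at t = 0.
# Returns per-process waiting times (completion - burst).
def round_robin(bursts, quantum):
    remaining = list(bursts)
    queue = deque(range(len(bursts)))       # ready queue of process indices
    completion, clock = [0] * len(bursts), 0
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])    # run one quantum or until the burst ends
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)                 # timer interrupt: back to the tail
        else:
            completion[i] = clock
    return [completion[i] - bursts[i] for i in range(len(bursts))]

w = round_robin([3, 6, 4, 2], quantum=2)    # P1..P4 from the example above
print(w, sum(w) / len(w))                   # [6, 9, 9, 6] 7.5
```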

Round-Robin Scheduling Algorithm If the time quantum is very short, short processes move through the system relatively quickly, but the processing overhead involved in handling the clock interrupt and performing the scheduling and dispatch functions increases. A very short time quantum should therefore be avoided.

Comparison of FCFS and RR
1. FCFS is non-preemptive; RR is preemptive.
2. FCFS has minimum overhead; RR has low overhead.
3. Under FCFS the response time may be high; RR provides good response time for short processes.
4. FCFS is not used for time-sharing systems; RR is designed for time-sharing systems.
5. FCFS simply processes jobs in arrival order; RR is like FCFS but uses a time quantum.
6. There is no starvation in FCFS; there is no starvation in RR.

Priority Scheduling Algorithm A priority is attached to each process, and the CPU is allocated to the process with the highest priority; equal-priority processes are scheduled in FCFS order. Priorities are generally drawn from some fixed range of numbers, such as 0 to 7. Priority scheduling can be preemptive or non-preemptive: a non-preemptive scheduler simply puts the new process at the head of the ready queue, while a preemptive scheduler preempts the CPU if the priority of the newly arrived process is higher than the priority of the currently running process.

Example: Priority Processes (burst time, priority): P1 (10, 3), P2 (1, 1), P3 (2, 4), P4 (1, 5), P5 (5, 2). The arrival time is 0 ms, the order is P1, P2, ..., P5, and a low number represents a higher priority.
Gantt chart: P2 (0-1), P5 (1-6), P1 (6-16), P3 (16-18), P4 (18-19)
Waiting times: P1 = 6 ms, P2 = 0 ms, P3 = 16 ms, P4 = 18 ms, P5 = 1 ms
Average waiting time = (6 + 0 + 16 + 18 + 1) / 5 = 8.2 ms

Example: Priority (with arrival times) Processes (burst time, priority, arrival time): P1 (10, 3, 0), P2 (5, 2, 1), P3 (...).
Waiting time for P1 = 0 ms + (8 - 1) ms = 7 ms
Average waiting time = (7 + 2 + 0) / 3 = 3 ms

Example: Priority Processes (burst time, priority): P1 (3, 2), P2 (6, 4), P3 (4, 1), P4 (2, 3). The arrival time is 0 ms and the order is P1, P2, P3, P4.
Gantt chart: P3 (0-4), P1 (4-7), P4 (7-9), P2 (9-15)
Waiting times: P1 = 4 ms, P2 = 9 ms, P3 = 0 ms, P4 = 7 ms
Average waiting time = (4 + 9 + 0 + 7) / 4 = 5 ms

Example: Priority Turnaround times (burst time + waiting time): P1 = 3 + 4 ms = 7 ms, P2 = 6 + 9 ms = 15 ms, P3 = 4 + 0 ms = 4 ms, P4 = 2 + 7 ms = 9 ms
Average turnaround time = (7 + 15 + 4 + 9) / 4 = 8.75 ms

Problem with the Priority Scheduling Algorithm The problem with priority scheduling is that it can become a major cause of starvation: if a process is in the ready state but its execution is almost always preempted by the arrival of higher-priority processes, it will starve. Therefore a mechanism called aging is built into the system so that every process eventually gets the CPU within a bounded interval of time. This is done by gradually increasing the priority of a low-priority process the longer it waits, so that at some point it becomes a high-priority process compared to the others and finally gets the CPU for its execution.
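A minimal sketch (not from the slides) of non-preemptive priority scheduling with the aging idea just described; the aging rate and the "age everyone still waiting after each dispatch" rule are illustrative assumptions:

```python
# Minimal priority-with-aging sketch (non-preemptive, lower number = higher
# priority, all processes ready at t = 0). Each time a process is passed
# over, its effective priority number is reduced a little (aging), so a
# long-waiting low-priority process eventually wins the CPU.
def priority_with_aging(bursts, priorities, aging=0.1):
    effective = list(priorities)
    pending = set(range(len(bursts)))
    waiting, clock = [0] * len(bursts), 0
    while pending:
        i = min(pending, key=lambda j: effective[j])   # highest effective priority
        waiting[i] = clock
        clock += bursts[i]
        pending.discard(i)
        for j in pending:
            effective[j] -= aging                      # everyone left ages a bit
    return waiting

print(priority_with_aging([10, 1, 2, 1, 5], [3, 1, 4, 5, 2]))  # data from the priority example above
```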

Multilevel Queue Scheduling Two types of processes: foreground (interactive) and background (batch). They have different response-time requirements and hence different scheduling needs. This algorithm partitions the ready queue into several separate queues, and processes are permanently assigned to one queue based on some property of the process.

Multilevel Queue Scheduling (figure: ready queue partitioned into separate queues, e.g. system, interactive, batch and student processes)

Multilevel Queue Scheduling The figure shows several queues, each of which has priority over the queues below it: no process in the student queue can run unless the queues for system, interactive and batch processes are empty. The CPU time can also be partitioned among the queues; for example, the foreground queue can get 80% of the CPU time with RR scheduling and the background queue 20% with FCFS scheduling.

Multilevel Feedback Queue Scheduling In multilevel queue scheduling, processes do not move between queues: this has low overhead but is inflexible. A multilevel feedback queue allows processes to move between queues: if a process uses too much CPU time it is moved to a lower-priority queue, and a process that waits too long in a lower-priority queue may be moved to a higher-priority queue. This form of aging prevents starvation.

Multilevel Feedback Queues (figure: three queues with time quanta of 8 ms and 16 ms, plus an FCFS queue)

Multilevel Feedback Queues A process in queue 0 is given a time quantum of 8 ms; if it does not finish within this time, it is moved to the tail of queue 1. If queue 0 is empty, the process at the head of queue 1 is given a quantum of 16 ms; if it does not finish within 16 ms, it is moved to the tail of queue 2. Processes in queue 2 are run on an FCFS basis, and only when queues 0 and 1 are empty.
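A minimal sketch (not from the slides) of the three-queue feedback scheme just described, assuming every job starts in the top queue and all jobs arrive at t = 0; the mlfq name and the sample bursts are illustrative:

```python
from collections import deque

# Minimal multilevel-feedback sketch: queue 0 (quantum 8), queue 1 (quantum 16),
# queue 2 (FCFS). A job that exhausts its quantum is demoted one level.
def mlfq(bursts, quanta=(8, 16)):
    queues = [deque((i, b) for i, b in enumerate(bursts)), deque(), deque()]
    completion, clock = [0] * len(bursts), 0
    while any(queues):
        level = next(l for l in range(3) if queues[l])   # highest non-empty queue
        i, remaining = queues[level].popleft()
        slice_ = remaining if level == 2 else min(quanta[level], remaining)
        clock += slice_
        remaining -= slice_
        if remaining > 0:
            queues[level + 1].append((i, remaining))     # demote to the next queue
        else:
            completion[i] = clock                        # job finished at this time
    return completion

print(mlfq([5, 30, 12]))   # e.g. a short, a long and a medium CPU burst
```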

Other Scheduling Algorithms Guaranteed scheduling: make promises to the users and live up to them. If there are n users logged in while you are working, you will receive about 1/n of the CPU power; if there are n processes running, all things being equal, each one should get 1/n of the CPU cycles. Lottery scheduling: the basic idea is to give processes lottery tickets for various system resources, such as CPU time; whenever a scheduling decision has to be made, a lottery ticket is chosen at random, and the process holding that ticket gets the resource. Fair-share scheduling: each user is allocated some fraction of the CPU, and the scheduler picks processes in such a way as to enforce it.
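A minimal lottery-scheduling sketch (not from the slides): the process names and ticket counts are illustrative; each draw picks the next process to run with probability proportional to the tickets it holds.

```python
import random

# Minimal lottery-scheduling sketch: each scheduling decision draws one ticket
# at random; processes holding more tickets win the CPU proportionally more often.
def pick_winner(tickets):
    # tickets: dict mapping process name -> number of tickets it holds
    draw = random.randrange(sum(tickets.values()))      # winning ticket number
    for process, count in tickets.items():
        if draw < count:
            return process                              # this process gets the CPU slice
        draw -= count

tickets = {"video": 50, "editor": 30, "backup": 20}     # illustrative ticket allocation
wins = {name: 0 for name in tickets}
for _ in range(10_000):                                 # simulate many scheduling decisions
    wins[pick_winner(tickets)] += 1
print(wins)                                             # roughly 50% / 30% / 20% of the decisions
```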