Chapter 5 – CPU Scheduling (Pgs 183 – 218)

CPU Scheduling
- Goal: To get as much done as possible
- How: By never letting the CPU sit "idle" and not do anything
- Idea: When a process is waiting for something to happen, the CPU can execute a different process that isn't waiting for anything
- Reality: Many CPUs are often idle because all processes are waiting for something

Bursts
- CPU burst – period of time in which the CPU is executing instructions and "doing work"
- I/O burst – period of time in which the CPU is waiting for I/O to occur and is "idle"
- CPU bursts tend to follow a pattern: most bursts are short, and only a few are long

CPU Burst Duration Histogram

Fig. 3.2 Process State

Scheduling
- Short-term schedulers select (or order) the ready queue
- Scheduling occurs:
  1. When a process switches from running to waiting
  2. When a process switches from running to ready
  3. When a process switches from waiting to ready
  4. When a process terminates
- If scheduling occurs only under 1 and 4, it is cooperative
- If it can occur under all of 1 to 4, it is preemptive

Preemption
- A process switches to ready because its timeslice is over
- Permits fair sharing of CPU cycles
- Needed for multi-user systems
- Cooperative scheduling uses "run as long as possible"
- A variant of cooperation, "run to completion", never switches out a process that is not finished, even if it is waiting for I/O

Scheduling Criteria
- CPU Utilisation: keep the CPU busy (from 0% to 100%)
- Throughput: # of processes completed per time unit
- Turnaround Time: time from process submission to completion
- Waiting Time: time spent in the READY queue
- Response Time: time from submission until the first output is produced

Criteria Not Mentioned
- Overhead!
- O/S activities take time away from user processes
- Time for performing scheduling
- Time to do a context switch
- Interference due to interrupt handling
- Dispatch Latency: time to stop one process and start another running

First Come – First Served
- Simple queue needed to implement
- Average waiting times can be long
- The order in which processes arrive affects waiting times
- Non-preemptive, so poor for multi-user systems
- But not so bad when combined with preemption (Round Robin scheduling)
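
As a rough illustration of how arrival order drives FCFS waiting times, here is a minimal C sketch; the burst lengths 24, 3 and 3 and the helper name fcfs_avg_wait are made up for this example. Serving the long job first gives an average wait of 17 time units; serving the short jobs first gives 3.

    #include <stdio.h>

    /* Average waiting time under FCFS when bursts are served in array order. */
    static double fcfs_avg_wait(const int burst[], int n) {
        double total_wait = 0.0;
        int elapsed = 0;                  /* CPU time used by earlier jobs */
        for (int i = 0; i < n; i++) {
            total_wait += elapsed;        /* job i waits for everything ahead of it */
            elapsed += burst[i];
        }
        return total_wait / n;
    }

    int main(void) {
        int long_first[]  = {24, 3, 3};   /* hypothetical bursts, long job first */
        int short_first[] = {3, 3, 24};   /* same jobs, short ones first */
        printf("long job first  : %.1f\n", fcfs_avg_wait(long_first, 3));  /* 17.0 */
        printf("short jobs first: %.1f\n", fcfs_avg_wait(short_first, 3)); /*  3.0 */
        return 0;
    }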

Shortest Job First
- Provably optimal w.r.t. average waiting time
- May or may not be preemptive
- Problem – how does one know how long a job will take (CPU burst length)?
  1. Start with a system average
  2. Modify based on previous bursts (exponential average): τ(n+1) = α · t(n) + (1 − α) · τ(n), where t(n) is the length of the burst that just finished and τ(n) is the previous prediction (which already folds in all earlier bursts)
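
A small sketch of the exponential-average predictor above; the initial guess of 10 time units, the weight α = 0.5, and the measured burst lengths are all assumed values for illustration:

    #include <stdio.h>

    /* tau_next = a * t + (1 - a) * tau
     * tau - previous prediction, t - length of the burst that just finished,
     * a   - weight on the most recent burst (0..1). */
    static double next_burst_prediction(double tau, double t, double a) {
        return a * t + (1.0 - a) * tau;
    }

    int main(void) {
        double tau = 10.0;                        /* assumed initial guess */
        double measured[] = {6.0, 4.0, 6.0, 4.0}; /* hypothetical burst lengths */
        for (int i = 0; i < 4; i++) {
            tau = next_burst_prediction(tau, measured[i], 0.5);
            printf("after burst %d: predict %.2f\n", i + 1, tau);
        }
        return 0;
    }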

Shortest Remaining Time
- Shortest Job First, BUT if a newly arriving job will finish sooner than the remaining time of the running job, preempt the running job and run the new one

Priority Scheduling
- Select the next process based on priority
- SJF = priority scheduling where priority is the inverse of the (predicted) CPU burst length
- Many ways to assign priority: policy, memory factors, burst times, user history, etc.
- Starvation (being too low a priority to ever get CPU time) can be a problem
- Aging: waiting processes gradually increase in priority (prevents starvation)

Round Robin
- FCFS with preemption
- Based on time slices
- The length of the time slice is important
- Roughly 80% of CPU bursts should fit within a time slice
- Should not be so short that a large fraction of CPU cycles is consumed doing context switches
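
A minimal round-robin sketch, assuming three jobs that all arrive at time 0 with hypothetical bursts of 24, 3 and 3 and a quantum of 4; it ignores context-switch cost and exact ready-queue rotation, and simply cycles over whatever work remains:

    #include <stdio.h>

    #define N 3

    int main(void) {
        int remaining[N] = {24, 3, 3};    /* hypothetical CPU bursts, all arrive at t = 0 */
        int finish[N] = {0};
        int quantum = 4, time = 0, left = N;

        while (left > 0) {
            for (int i = 0; i < N; i++) {
                if (remaining[i] == 0)
                    continue;             /* this job is already done */
                int run = remaining[i] < quantum ? remaining[i] : quantum;
                time += run;              /* job i holds the CPU for one slice */
                remaining[i] -= run;
                if (remaining[i] == 0) {
                    finish[i] = time;     /* record completion time */
                    left--;
                }
            }
        }
        for (int i = 0; i < N; i++)
            printf("P%d completes at t = %d\n", i + 1, finish[i]);
        return 0;
    }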

Multi-Level Queueing
- Similar to priority scheduling, but keeps a separate queue for each priority instead of ordering one queue
- Can use different algorithms (or variants of the same algorithm) on each queue
- Various ways to select which queue the next job is taken from
- Can permit process migration between queues
- Queues do not need to have the same length timeslices, etc.

Thread Scheduling
- Process Contention Scope: scheduling of threads in "user space" (threads of the same process compete with each other)
- System Contention Scope: scheduling of threads in "kernel space" (threads compete system-wide)
- Pthreads lets the user control contention scope:
  - pthread_attr_setscope()
  - pthread_attr_getscope()
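
A short sketch of how a program might request system contention scope with the Pthreads calls named above (error handling trimmed; some systems, Linux for example, support only PTHREAD_SCOPE_SYSTEM, so the set call can fail):

    #include <pthread.h>
    #include <stdio.h>

    static void *worker(void *arg) {
        (void)arg;
        return NULL;
    }

    int main(void) {
        pthread_attr_t attr;
        pthread_t tid;
        int scope;

        pthread_attr_init(&attr);

        /* Ask for system contention scope (kernel-level scheduling). */
        if (pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM) != 0)
            fprintf(stderr, "requested scope not supported here\n");

        pthread_attr_getscope(&attr, &scope);
        printf("scope = %s\n",
               scope == PTHREAD_SCOPE_SYSTEM ? "system" : "process");

        pthread_create(&tid, &attr, worker, NULL);
        pthread_join(tid, NULL);
        pthread_attr_destroy(&attr);
        return 0;
    }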

Multi-Processor Scheduling
- Load sharing – scheduling must now deal with multiple CPUs as well as multiple processes
- Quite complex
- Can be affected by processor similarity
- Symmetric Multiprocessing (SMP) – each CPU is self-scheduling (most common)
- Asymmetric Multiprocessing – one processor is the master scheduler

Processor Affinity
- Best to keep a process on the same CPU for its lifetime to maximise cache benefits
- Hard affinity: a process can be set to never migrate between CPUs
- Soft affinity: migration is possible in some circumstances
- NUMA, CPU speed, and job mix all affect migration decisions
- Sometimes the cost of migration is recovered by moving from an overworked CPU to an idle (or faster) one
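
The text does not tie hard affinity to a particular API, but as one concrete, Linux-specific illustration, sched_setaffinity() can pin the calling process to a single CPU:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        cpu_set_t mask;
        CPU_ZERO(&mask);
        CPU_SET(0, &mask);   /* allow this process to run only on CPU 0 */

        /* pid 0 means "the calling process" */
        if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        printf("pinned pid %d to CPU 0\n", (int)getpid());
        return 0;
    }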

Load Balancing
- It makes no sense to have some CPUs with waiting processes while other CPUs sit idle
- Push migration: a monitor moves processes around and "pushes" them towards less busy CPUs
- Pull migration: idle CPUs pull in waiting jobs
- The benefit of balancing is often lost when cache reloads are needed after a move

Threading Granularity
- Some CPUs have very low-level instructions to support threads
- Can switch threads every few instructions at low cost = fine-grained multithreading
- Some CPUs do not provide much support and context switches are expensive = coarse-grained multithreading
- Many CPUs provide multiple hardware threads in support of fine-grained multithreading – the CPU is specifically designed for this (e.g., two register sets) and has hardware and microcode support

Algorithm Evaluation
1. Deterministic modeling – use a predetermined workload (e.g., historic data) to evaluate algorithms
2. Queueing analysis – use mathematical queueing theory and probability-based process characteristics to model the system
3. Simulation – simulate the system and measure performance using process characteristics drawn from probability distributions
4. Prototyping – program and test the algorithm in a real operating environment

To Do:
- Work on Assignment 1
- Finish reading Chapter 5 (pgs ; this lecture) if you haven't already
- Read Chapter 6 (pgs ; next lecture)