Chapter 5 – CPU Scheduling (Pgs 183 – 218)
CPU Scheduling
Goal: to get as much done as possible
How: by never letting the CPU sit "idle" and do nothing
Idea: when a process is waiting for something to happen, the CPU can execute a different process that isn't waiting for anything
Reality: many CPUs are often idle because all processes are waiting for something
Bursts
CPU burst: a period of time in which the CPU is executing instructions and "doing work"
I/O burst: a period of time in which the CPU is waiting for I/O to occur and is "idle"
CPU burst durations tend to follow a pattern: most bursts are short, and only a few are long
CPU Burst Duration Histogram
Fig. 3.2 Process State
Scheduling
Short-term schedulers select (or order) the ready queue
Scheduling occurs:
1. When a process switches from running to waiting
2. When a process switches from running to ready
3. When a process switches from waiting to ready
4. When a process terminates
If scheduling occurs at 1 and 4 only, it is cooperative
If it can occur at all of 1 to 4, it is preemptive
Preemption
A process switches to ready because its time slice is over
Permits fair sharing of CPU cycles
Needed for multi-user systems
Cooperative scheduling uses "run as long as possible"
A variant of cooperation, "run to completion", never switches away from an unfinished process, even one that is waiting for I/O
Scheduling Criteria
CPU utilisation: keep the CPU busy
Throughput: number of processes completed per time unit
Turnaround time: time from process submission to completion
Waiting time: time spent in the READY queue
Response time: time from submission until the first output is produced
Criteria Not Mentioned
Overhead! O/S activities take time away from user processes:
Time for performing scheduling
Time to do a context switch
Interference due to interrupt handling
Dispatch latency: time to stop one process and start another running
First Come, First Served
Needs only a simple queue to implement
Average waiting times can be long
The order in which processes are started affects waiting times (see the sketch below)
Non-preemptive, so poor for multi-user systems
But not so bad when used with preemption (Round Robin scheduling)
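The effect of arrival order can be shown with a minimal C sketch, assuming three jobs that all arrive at time 0; the burst lengths (24, 3, 3) are illustrative values, not from the slides:

```c
/* FCFS average waiting time for jobs that all arrive at time 0,
 * computed for two different arrival orders. */
#include <stdio.h>

static double avg_wait(const int *burst, int n) {
    double total = 0; /* sum of the jobs' waiting times */
    int clock = 0;    /* time at which the CPU next becomes free */
    for (int i = 0; i < n; i++) {
        total += clock;    /* job i waits for all earlier jobs to finish */
        clock += burst[i];
    }
    return total / n;
}

int main(void) {
    int long_first[]  = {24, 3, 3}; /* the long job arrives first */
    int short_first[] = {3, 3, 24}; /* the short jobs arrive first */
    printf("long job first:   %.2f\n", avg_wait(long_first, 3));  /* 17.00 */
    printf("short jobs first: %.2f\n", avg_wait(short_first, 3)); /*  3.00 */
    return 0;
}
```

Merely reordering the same three jobs cuts the average wait from 17 to 3 time units.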
Shortest Job First
Provably optimal w.r.t. average waiting time
May or may not be preemptive
Problem: how does one know how long a job will take (its CPU burst length)?
1. Start with the system average
2. Modify based on previous bursts (exponential average):
τ_{n+1} = α · t_n + (1 − α) · τ_n
where t_n is the length of the most recent burst, τ_n is the previous prediction, and 0 ≤ α ≤ 1 weights recent history against older history
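A small C sketch of the exponential average; the choice of α = 0.5, the initial guess of 10, and the observed burst lengths are all illustrative assumptions:

```c
/* Exponential averaging for CPU-burst prediction:
 * tau_next = ALPHA * t_last + (1 - ALPHA) * tau_prev */
#include <stdio.h>

#define ALPHA 0.5 /* weight given to the most recent burst */

int main(void) {
    double tau = 10.0;                      /* initial guess (system average) */
    double bursts[] = {6, 4, 6, 4, 13, 13}; /* observed burst lengths */
    for (int i = 0; i < 6; i++) {
        printf("predicted %5.2f, observed %2.0f\n", tau, bursts[i]);
        tau = ALPHA * bursts[i] + (1 - ALPHA) * tau; /* fold in new burst */
    }
    return 0;
}
```

With α = 0 the prediction ignores recent bursts entirely; with α = 1 only the last burst counts.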
Shortest Remaining Time
Shortest Job First, BUT if a new job's burst is shorter than the remaining time of the running job, preempt the running job and run the new one
Priority Scheduling
Select the next process based on priority
SJF = priority based on the inverse of the CPU burst length
Many ways to assign priority: policy, memory factors, burst times, user history, etc.
Starvation (being too low a priority to ever get CPU time) can be a problem
Aging: older processes increase in priority, which prevents starvation (see the sketch below)
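One possible aging scheme, sketched in C; the field names, the 10-tick interval, and the "larger number = higher priority" convention are illustrative assumptions, not a standard:

```c
/* Aging: periodically raise the priority of jobs stuck in the ready queue
 * so that even low-priority jobs eventually get to run. */
#include <stdio.h>

typedef struct {
    int pid;
    int priority;   /* convention here: larger value = scheduled sooner */
    int wait_ticks; /* ticks spent waiting in the ready queue */
} job_t;

/* Called once per scheduling tick for every job still in the ready queue. */
void age_ready_queue(job_t *q, int n) {
    for (int i = 0; i < n; i++) {
        q[i].wait_ticks++;
        if (q[i].wait_ticks % 10 == 0) /* every 10 ticks of waiting... */
            q[i].priority++;           /* ...raise the priority one step */
    }
}

int main(void) {
    job_t ready[] = { {1, 5, 0}, {2, 1, 0} }; /* pid 2 starts very low */
    for (int tick = 0; tick < 50; tick++)
        age_ready_queue(ready, 2);
    printf("pid 2 priority is now %d\n", ready[1].priority); /* 1 -> 6 */
    return 0;
}
```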
Round Robin
FCFS with preemption
Based on time slices
The length of the time slice is important:
Roughly 80% of CPU bursts should fit within a time slice
It should not be so short that a large fraction of CPU cycles is consumed by context switches
(A minimal simulation is sketched below)
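A minimal round-robin simulation in C; the quantum of 4 and the burst lengths are illustrative, and context-switch overhead is ignored:

```c
/* Round robin: cycle through the jobs, giving each at most QUANTUM
 * time units per turn, until every job has finished. */
#include <stdio.h>

#define QUANTUM 4

int main(void) {
    int remaining[] = {24, 3, 3}; /* remaining burst time per job */
    int n = 3, done = 0, clock = 0;
    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0) continue; /* job already finished */
            int run = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
            clock += run;
            remaining[i] -= run;
            if (remaining[i] == 0) {
                printf("job %d finishes at t=%d\n", i, clock);
                done++;
            }
        }
    }
    return 0;
}
```

Here the two short jobs finish at t=7 and t=10 instead of waiting behind the long one, which finishes at t=30.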
Multi-Level Queueing
Similar to priority scheduling, but keeps a separate queue for each priority level instead of ordering everything in one queue
Can use different algorithms (or variants of the same algorithm) on each queue
Various ways to select which queue to take the next job from
Can permit process migration between queues
Queues do not need to have the same length of time slice, etc. (see the sketch below)
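One way such queues might be laid out, sketched in C; the three levels, the per-level quanta, and the "serve the highest non-empty level first" selection rule are illustrative assumptions:

```c
/* Multi-level queueing: one FIFO ready queue per priority level,
 * each with its own time slice. */
#include <stdio.h>

#define LEVELS 3
#define QMAX 64

typedef struct {
    int pids[QMAX]; /* circular FIFO of ready process ids */
    int head, tail; /* free-running counters; queue is empty when equal */
    int quantum;    /* time slice used for jobs at this level */
} level_t;

level_t mlq[LEVELS] = {
    { .quantum = 8 },  /* level 0: highest priority, shortest slice */
    { .quantum = 16 },
    { .quantum = 32 }, /* level 2: lowest priority, longest slice */
};

void enqueue(int level, int pid) {
    mlq[level].pids[mlq[level].tail % QMAX] = pid;
    mlq[level].tail++;
}

/* Pick from the highest-priority non-empty level; -1 if all are idle. */
int pick_next(int *quantum_out) {
    for (int l = 0; l < LEVELS; l++) {
        if (mlq[l].head != mlq[l].tail) {
            *quantum_out = mlq[l].quantum;
            int pid = mlq[l].pids[mlq[l].head % QMAX];
            mlq[l].head++;
            return pid;
        }
    }
    return -1;
}

int main(void) {
    enqueue(1, 42); /* a normal-priority job */
    enqueue(0, 7);  /* an interactive, high-priority job */
    int q, pid = pick_next(&q);
    printf("run pid %d for %d units\n", pid, q); /* pid 7, quantum 8 */
    return 0;
}
```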
Thread Scheduling
Process contention scope: scheduling of threads in "user space" (threads compete for the CPU within one process)
System contention scope: scheduling of threads in "kernel space" (threads compete with all threads in the system)
Pthreads lets the user control contention scope:
pthread_attr_setscope()
pthread_attr_getscope()
(See the example below)
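A short C example using the two calls named above to request system contention scope for a new thread; note that some systems support only one of the two scopes:

```c
/* Request system contention scope (kernel-level scheduling) for a thread. */
#include <pthread.h>
#include <stdio.h>

static void *work(void *arg) { (void)arg; return NULL; }

int main(void) {
    pthread_attr_t attr;
    int scope;

    pthread_attr_init(&attr);
    /* PTHREAD_SCOPE_SYSTEM: compete with every thread in the system;
     * PTHREAD_SCOPE_PROCESS: compete only within this process. */
    if (pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM) != 0)
        fprintf(stderr, "system scope not supported here\n");

    pthread_attr_getscope(&attr, &scope);
    printf("contention scope: %s\n",
           scope == PTHREAD_SCOPE_SYSTEM ? "system" : "process");

    pthread_t t;
    pthread_create(&t, &attr, work, NULL);
    pthread_join(t, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}
```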
Multi-Processor Scheduling
Load sharing: scheduling must now deal with multiple CPUs as well as multiple processes
Quite complex
Can be affected by processor similarity
Symmetric multiprocessing (SMP): each CPU is self-scheduling (most common)
Asymmetric multiprocessing: one processor is the master scheduler
Processor Affinity
Best to keep a process on the same CPU for its life to maximise cache benefits
Hard affinity: a process can be set to never migrate between CPUs (see the Linux sketch below)
Soft affinity: migration is possible in some instances
NUMA, CPU speed, and job mix will all affect migration
Sometimes the cost of migration is recovered by moving from an overworked CPU to an idle (or faster) one
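A sketch of hard affinity using Linux's sched_setaffinity() call; this call is Linux-specific, and pinning to CPU 0 is just an example:

```c
/* Pin the calling process to CPU 0 so the kernel never migrates it. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void) {
    cpu_set_t set;
    CPU_ZERO(&set);   /* start with an empty CPU mask */
    CPU_SET(0, &set); /* allow CPU 0 only */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) { /* 0 = this process */
        perror("sched_setaffinity");
        return 1;
    }
    printf("pinned to CPU 0\n");
    return 0;
}
```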
Load Balancing
It makes no sense to have some CPUs with waiting processes while other CPUs sit idle
Push migration: a monitor moves processes around, "pushing" them towards less busy CPUs
Pull migration: idle CPUs pull in jobs
The benefit of load balancing is often lost to the cache reloads that migration makes necessary
Threading Granularity
Some CPUs have very low-level instructions to support threads:
Can switch threads every few instructions at low cost = fine-grained multithreading
Some CPUs do not provide much support, and context switches are expensive = coarse-grained multithreading
Many CPUs provide multiple hardware threads in support of fine-grained multithreading: the CPU is specifically designed for this (e.g., two register sets) and has hardware and microcode support
Algorithm Evaluation
1. Deterministic modeling: use a predetermined workload (e.g., historic data) to evaluate algorithms
2. Queueing analysis: use mathematical queueing theory and process characteristics (based on probability) to model the system
3. Simulation: simulate the system and measure performance based on probability distributions of process characteristics
4. Prototyping: program and test the algorithm in an operating environment
To Do:
Work on Assignment 1
Finish reading Chapter 5 (pgs 183-218; this lecture) if you haven't already
Read Chapter 6 (pgs 225-267; next lecture)