Scheduling

Model of Process Execution
[Diagram: a new process (job) enters the ready list ("Ready"); the scheduler dispatches it to the CPU ("Running"); preemption or a voluntary yield returns it to the ready list; a resource request sends it to the resource manager ("Blocked") until an allocation puts it back on the ready list; a finished process exits ("Done").]

Recall: Resource Manager
[Diagram: processes issue Request and Release calls to the resource manager, which allocates and releases units from a resource pool; processes waiting on an allocation are held as blocked processes.]

Scheduler as CPU Resource Manager
[Diagram: the scheduler manages units of time on a time-multiplexed CPU; a process that is ready to run waits in the ready list, Dispatch allocates the CPU to it, and Release returns the CPU to the scheduler.]

Scheduler Components
[Diagram: a ready process (arriving from other states) is placed on the ready list by the enqueuer; the context switcher saves and loads register contents to and from process descriptors; the dispatcher selects the next process descriptor and hands the CPU to it as the running process.]
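The slides do not give code for these components, but a minimal sketch helps show how they relate. The ProcessDescriptor fields, the FIFO ready list, and the function names (enqueue, context_switch, dispatch) below are illustrative assumptions, not the text's implementation:

    #include <stdio.h>
    #include <string.h>

    #define NREGS    8
    #define MAXREADY 16

    struct process_descriptor {
        int  pid;
        long regs[NREGS];        /* saved general registers */
        long pc;                 /* saved program counter */
    };

    static struct process_descriptor *ready_list[MAXREADY];
    static int head, tail, count;

    /* Enqueuer: a process that becomes ready is placed on the ready list. */
    static void enqueue(struct process_descriptor *p) {
        ready_list[tail] = p;
        tail = (tail + 1) % MAXREADY;
        count++;
    }

    /* Context switcher: save the outgoing process's CPU state, load the next one's. */
    static void context_switch(struct process_descriptor *from,
                               struct process_descriptor *to,
                               long cpu_regs[NREGS], long *cpu_pc) {
        if (from) {
            memcpy(from->regs, cpu_regs, sizeof from->regs);
            from->pc = *cpu_pc;
        }
        memcpy(cpu_regs, to->regs, sizeof to->regs);
        *cpu_pc = to->pc;
    }

    /* Dispatcher: pick the next ready process and give it the CPU. */
    static struct process_descriptor *dispatch(struct process_descriptor *running,
                                               long cpu_regs[NREGS], long *cpu_pc) {
        if (count == 0) return running;              /* nothing to switch to */
        struct process_descriptor *next = ready_list[head];
        head = (head + 1) % MAXREADY;
        count--;
        if (running) enqueue(running);               /* preempted process returns to the ready list */
        context_switch(running, next, cpu_regs, cpu_pc);
        return next;
    }

    int main(void) {
        long cpu_regs[NREGS] = {0}, cpu_pc = 0;
        struct process_descriptor a = {.pid = 1, .pc = 100}, b = {.pid = 2, .pc = 200};
        enqueue(&a); enqueue(&b);
        struct process_descriptor *running = dispatch(NULL, cpu_regs, &cpu_pc);
        printf("running pid %d at pc %ld\n", running->pid, cpu_pc);   /* pid 1 at pc 100 */
        running = dispatch(running, cpu_regs, &cpu_pc);
        printf("running pid %d at pc %ld\n", running->pid, cpu_pc);   /* pid 2 at pc 200 */
        return 0;
    }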

Process Context
[Diagram: the CPU state that must be saved and restored on a context switch – the general registers R1..Rn, the status registers, PC and IR in the control unit, and the ALU with its left operand, right operand, and result registers.]

Context Switch
[Diagram: the context switcher copies the CPU registers into the old process's descriptor and loads the CPU registers from the new process's descriptor.]

Invoking the Scheduler
Need a mechanism to call the scheduler
Voluntary call
–Process blocks itself
–Calls the scheduler
Involuntary call
–External force (interrupt) blocks the process
–Calls the scheduler

Voluntary CPU Sharing

yield(p_i.pc, p_j.pc) {
    memory[p_i.pc] = PC;    // save the yielding process's program counter
    PC = memory[p_j.pc];    // resume p_j where it last yielded
}

p_i can be "automatically" determined from the processor status registers, so the call simplifies to:

yield(*, p_j.pc) {
    memory[p_i.pc] = PC;
    PC = memory[p_j.pc];
}
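The yield() above is pseudocode for swapping program counters in memory. A rough user-space analogue, assuming a POSIX system, can be built with the ucontext(3) routines, where swapcontext() plays the role of yield(); the task names, stack sizes, and switching pattern below are illustrative, not the text's implementation:

    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t ctx_main, ctx_a, ctx_b;
    static char stack_a[16384], stack_b[16384];

    static void task_a(void) {
        for (int i = 0; i < 3; i++) {
            printf("task_a step %d\n", i);
            swapcontext(&ctx_a, &ctx_b);     /* yield(a, b): save A's registers, load B's */
        }
    }

    static void task_b(void) {
        for (int i = 0; i < 3; i++) {
            printf("task_b step %d\n", i);
            swapcontext(&ctx_b, &ctx_a);     /* yield(b, a) */
        }
    }

    int main(void) {
        getcontext(&ctx_a);
        ctx_a.uc_stack.ss_sp   = stack_a;
        ctx_a.uc_stack.ss_size = sizeof stack_a;
        ctx_a.uc_link = &ctx_main;           /* return to main when task_a falls off the end */
        makecontext(&ctx_a, task_a, 0);

        getcontext(&ctx_b);
        ctx_b.uc_stack.ss_sp   = stack_b;
        ctx_b.uc_stack.ss_size = sizeof stack_b;
        ctx_b.uc_link = &ctx_main;
        makecontext(&ctx_b, task_b, 0);

        swapcontext(&ctx_main, &ctx_a);      /* start with task_a */
        return 0;
    }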

More on Yield

yield(*, p_j.pc);
...
yield(*, p_i.pc);
...
yield(*, p_j.pc);
...

p_i and p_j can resume one another's execution

Suppose p_j is the scheduler:

// p_i yields to scheduler
yield(*, p_j.pc);
// scheduler chooses p_k
yield(*, p_k.pc);
// p_k yields to scheduler
yield(*, p_j.pc);
// scheduler chooses ...

Voluntary Sharing
Every process periodically yields to the scheduler
Relies on correct process behavior
–Malicious
–Accidental
Need a mechanism to override the running process

Involuntary CPU Sharing
Interval timer
–Device to produce a periodic interrupt
–Programmable period

IntervalTimer() {
    InterruptCount--;
    if (InterruptCount <= 0) {
        InterruptRequest = TRUE;    // raise the timer interrupt
        InterruptCount = K;         // reload the count
    }
}

SetInterval(programmableValue) {
    K = programmableValue;
    InterruptCount = K;
}

Involuntary CPU Sharing (cont)
Interval timer device handler
–Keeps an in-memory clock up-to-date (see Chap 4 lab exercise)
–Invokes the scheduler

IntervalTimerHandler() {
    Time++;                         // update the clock
    TimeToSchedule--;
    if (TimeToSchedule <= 0) {
        <invoke scheduler>;         // run the scheduler to pick the next process
        TimeToSchedule = TimeSlice;
    }
}
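As a user-space analogue of the handler above (not the kernel code itself), a POSIX program can ask for a periodic SIGALRM with setitimer(2) and run a stand-in "scheduler" every few ticks; the 100 ms period, the 5-tick slice, and the function names are assumptions for illustration:

    #include <signal.h>
    #include <string.h>
    #include <sys/time.h>
    #include <unistd.h>

    static volatile sig_atomic_t time_ticks = 0;
    static volatile sig_atomic_t time_to_schedule = 5;   /* ticks per "time slice" */

    static void invoke_scheduler(void) {
        /* placeholder: a real kernel would pick the next ready process here */
        write(STDOUT_FILENO, "schedule!\n", 10);
    }

    static void interval_timer_handler(int sig) {
        (void)sig;
        time_ticks++;                       /* keep the in-memory clock up to date */
        if (--time_to_schedule <= 0) {
            invoke_scheduler();
            time_to_schedule = 5;           /* reload the time slice */
        }
    }

    int main(void) {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = interval_timer_handler;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGALRM, &sa, NULL);

        struct itimerval iv;
        memset(&iv, 0, sizeof iv);
        iv.it_interval.tv_usec = 100000;    /* 100 ms programmable period */
        iv.it_value.tv_usec    = 100000;
        setitimer(ITIMER_REAL, &iv, NULL);

        for (;;) pause();                   /* the "running process" just waits for interrupts */
    }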

Contemporary Scheduling
Involuntary CPU sharing – timer interrupts
–Time quantum determined by interval timer – usually a fixed size for every process using the system
–Sometimes called the time slice length

Choosing a Process to Run
Mechanism never changes
Strategy = the policy the dispatcher uses to select a process from the ready list
Different policies for different requirements
[Diagram: the scheduler components again – enqueuer, ready list, dispatcher, context switcher, process descriptors, CPU, running process.]

Policy Considerations
Policy can control/influence:
–CPU utilization
–Average time a process waits for service
–Average amount of time to complete a job
Could strive for any of:
–Equitability
–Favor very short or long jobs
–Meet priority requirements
–Meet deadlines

Optimal Scheduling
Suppose the scheduler knows each process p_i's service time, τ(p_i) – or it can estimate each τ(p_i)
Policy can optimize on any criterion, e.g.,
–CPU utilization
–Waiting time
–Deadline
To find an optimal schedule:
–Have a finite, fixed number of p_i
–Know τ(p_i) for each p_i
–Enumerate all schedules, then choose the best

However...
The τ(p_i) are almost certainly just estimates
A general algorithm to choose the optimal schedule is O(n²)
Other processes may arrive while these processes are being serviced
Usually, the optimal schedule is only a theoretical benchmark – scheduling policies try to approximate an optimal schedule

Model of Process Execution
[Diagram repeated from earlier: job → "Ready" (ready list) → scheduler/dispatch → "Running" (CPU) → Done, with preemption or voluntary yield back to Ready, and request/allocate through the resource manager for "Blocked".]

Talking About Scheduling...
Let P = {p_i | 0 ≤ i < n} = the set of processes
Let S(p_i) ∈ {running, ready, blocked}
Let τ(p_i) = time the process needs to be in the running state (the service time)
Let W(p_i) = time p_i is in the ready state before its first transition to running (wait time)
Let T_TRnd(p_i) = time from p_i first entering ready to its last exit from ready (turnaround time)
Batch throughput rate = inverse of the average T_TRnd
Timesharing response time = W(p_i)

Simplified Model
[Diagram: the process-execution model with the "Blocked" state and resource manager removed; only Ready and Running remain, with the preemption-or-voluntary-yield path back to the ready list.]
Simplified, but still provides analysis results
Easy to analyze performance
No issue of voluntary/involuntary sharing

Estimating CPU Utilization
[Diagram: new processes enter the ready list at λ p_i per second; each p_i uses 1/μ units of the CPU before leaving the system.]
Let λ = the average rate at which processes are placed in the ready list (the arrival rate)
Let μ = the average service rate → 1/μ = the average τ(p_i)

Estimating CPU Utilization (cont)
Let λ = the average arrival rate and μ = the average service rate, so 1/μ = the average τ(p_i)
Let ρ = the fraction of the time that the CPU is expected to be busy
ρ = (# of p_i that arrive per unit time) × (average time each spends on the CPU)
ρ = λ × 1/μ = λ/μ
Notice we must have λ < μ (i.e., ρ < 1)
What if ρ approaches 1?
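A worked example, sketched in C: 1/μ is taken as the average of the five service times used later in this lecture (350, 125, 475, 250, 75), and the arrival rate λ = one process per 300 time units is an assumed value chosen only to illustrate the formula:

    #include <stdio.h>

    int main(void) {
        double service[] = {350, 125, 475, 250, 75};
        double sum = 0;
        for (int i = 0; i < 5; i++) sum += service[i];

        double avg_service = sum / 5.0;     /* 1/mu = 255 time units */
        double mu     = 1.0 / avg_service;  /* average service rate */
        double lambda = 1.0 / 300.0;        /* assumed: one arrival per 300 time units */
        double rho    = lambda / mu;        /* expected fraction of time the CPU is busy */

        printf("1/mu = %.0f, rho = %.2f\n", avg_service, rho);   /* rho = 0.85 */
        return 0;
    }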

Nonpreemptive Schedulers
[Diagram: new processes and blocked-or-preempted processes enter the ready list; the scheduler dispatches to the CPU; each process runs until it is done.]
Try to use the simplified scheduling model
Only consider the running and ready states
Ignores time in the blocked state:
–"New process created when it enters ready state"
–"Process is destroyed when it enters blocked state"
–Really just looking at "small phases" of a process

First-Come-First-Served
i   τ(p_i)
0   350
1   125
2   475
3   250
4   75
[Gantt chart: p0 runs 0–350.]
T_TRnd(p_0) = τ(p_0) = 350
W(p_0) = 0

First-Come-First-Served
[Gantt chart: p0 0–350, p1 350–475.]
T_TRnd(p_0) = τ(p_0) = 350
T_TRnd(p_1) = τ(p_1) + T_TRnd(p_0) = 125 + 350 = 475
W(p_0) = 0
W(p_1) = T_TRnd(p_0) = 350

First-Come-First-Served
[Gantt chart: p0 0–350, p1 350–475, p2 475–950.]
T_TRnd(p_0) = τ(p_0) = 350
T_TRnd(p_1) = τ(p_1) + T_TRnd(p_0) = 125 + 350 = 475
T_TRnd(p_2) = τ(p_2) + T_TRnd(p_1) = 475 + 475 = 950
W(p_0) = 0
W(p_1) = T_TRnd(p_0) = 350
W(p_2) = T_TRnd(p_1) = 475

First-Come-First-Served
[Gantt chart: p0 0–350, p1 350–475, p2 475–950, p3 950–1200.]
T_TRnd(p_0) = τ(p_0) = 350
T_TRnd(p_1) = τ(p_1) + T_TRnd(p_0) = 125 + 350 = 475
T_TRnd(p_2) = τ(p_2) + T_TRnd(p_1) = 475 + 475 = 950
T_TRnd(p_3) = τ(p_3) + T_TRnd(p_2) = 250 + 950 = 1200
W(p_0) = 0
W(p_1) = T_TRnd(p_0) = 350
W(p_2) = T_TRnd(p_1) = 475
W(p_3) = T_TRnd(p_2) = 950

First-Come-First-Served
[Gantt chart: p0 0–350, p1 350–475, p2 475–950, p3 950–1200, p4 1200–1275.]
T_TRnd(p_0) = τ(p_0) = 350
T_TRnd(p_1) = τ(p_1) + T_TRnd(p_0) = 125 + 350 = 475
T_TRnd(p_2) = τ(p_2) + T_TRnd(p_1) = 475 + 475 = 950
T_TRnd(p_3) = τ(p_3) + T_TRnd(p_2) = 250 + 950 = 1200
T_TRnd(p_4) = τ(p_4) + T_TRnd(p_3) = 75 + 1200 = 1275
W(p_0) = 0
W(p_1) = T_TRnd(p_0) = 350
W(p_2) = T_TRnd(p_1) = 475
W(p_3) = T_TRnd(p_2) = 950
W(p_4) = T_TRnd(p_3) = 1200

FCFS Average Wait Time
(service times: 350, 125, 475, 250, 75)
T_TRnd(p_0) = 350, T_TRnd(p_1) = 475, T_TRnd(p_2) = 950, T_TRnd(p_3) = 1200, T_TRnd(p_4) = 1275
W(p_0) = 0, W(p_1) = 350, W(p_2) = 475, W(p_3) = 950, W(p_4) = 1200
W_avg = (0 + 350 + 475 + 950 + 1200)/5 = 2975/5 = 595
Easy to implement
Ignores service time, etc.
Not a great performer
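A short sketch that reproduces the FCFS numbers above (all five processes assumed to arrive at time 0, as in the example); the program itself is not from the lecture:

    #include <stdio.h>

    #define N 5

    int main(void) {
        int service[N] = {350, 125, 475, 250, 75};
        int wait = 0, sum_wait = 0, sum_turnaround = 0;

        for (int i = 0; i < N; i++) {               /* FCFS: run in arrival order */
            int turnaround = wait + service[i];     /* T_TRnd(p_i) = W(p_i) + tau(p_i) */
            printf("p%d: W=%4d  T_TRnd=%4d\n", i, wait, turnaround);
            sum_wait += wait;
            sum_turnaround += turnaround;
            wait = turnaround;                      /* the next process waits until this one finishes */
        }
        printf("W_avg = %.0f\n", (double)sum_wait / N);             /* 595 */
        printf("T_TRnd_avg = %.0f\n", (double)sum_turnaround / N);
        return 0;
    }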

Predicting Wait Time in FCFS
In FCFS, when a process arrives, everything already in the ready list will be processed before this job
Let μ be the service rate
Let L be the ready list length
W_avg(p) = L × 1/μ = L/μ
(For example, a process arriving to find L = 4 jobs queued, with an average service time 1/μ = 255, would expect to wait about 1020 time units.)
Compare predicted wait with actual wait in the earlier examples

Shortest Job Next
(same service times: 350, 125, 475, 250, 75)
[Gantt chart: p4 runs 0–75.]
T_TRnd(p_4) = τ(p_4) = 75
W(p_4) = 0

Shortest Job Next
[Gantt chart: p4 0–75, p1 75–200.]
T_TRnd(p_1) = τ(p_1) + τ(p_4) = 125 + 75 = 200
T_TRnd(p_4) = τ(p_4) = 75
W(p_1) = 75
W(p_4) = 0

Shortest Job Next
[Gantt chart: p4 0–75, p1 75–200, p3 200–450.]
T_TRnd(p_1) = τ(p_1) + τ(p_4) = 125 + 75 = 200
T_TRnd(p_3) = τ(p_3) + τ(p_1) + τ(p_4) = 250 + 125 + 75 = 450
T_TRnd(p_4) = τ(p_4) = 75
W(p_1) = 75
W(p_3) = 200
W(p_4) = 0

Shortest Job Next
[Gantt chart: p4 0–75, p1 75–200, p3 200–450, p0 450–800.]
T_TRnd(p_0) = τ(p_0) + τ(p_3) + τ(p_1) + τ(p_4) = 350 + 250 + 125 + 75 = 800
T_TRnd(p_1) = τ(p_1) + τ(p_4) = 125 + 75 = 200
T_TRnd(p_3) = τ(p_3) + τ(p_1) + τ(p_4) = 250 + 125 + 75 = 450
T_TRnd(p_4) = τ(p_4) = 75
W(p_0) = 450
W(p_1) = 75
W(p_3) = 200
W(p_4) = 0

Shortest Job Next
[Gantt chart: p4 0–75, p1 75–200, p3 200–450, p0 450–800, p2 800–1275.]
T_TRnd(p_0) = τ(p_0) + τ(p_3) + τ(p_1) + τ(p_4) = 350 + 250 + 125 + 75 = 800
T_TRnd(p_1) = τ(p_1) + τ(p_4) = 125 + 75 = 200
T_TRnd(p_2) = τ(p_2) + τ(p_0) + τ(p_3) + τ(p_1) + τ(p_4) = 475 + 350 + 250 + 125 + 75 = 1275
T_TRnd(p_3) = τ(p_3) + τ(p_1) + τ(p_4) = 250 + 125 + 75 = 450
T_TRnd(p_4) = τ(p_4) = 75
W(p_0) = 450
W(p_1) = 75
W(p_2) = 800
W(p_3) = 200
W(p_4) = 0

Shortest Job Next
T_TRnd(p_0) = 800, T_TRnd(p_1) = 200, T_TRnd(p_2) = 1275, T_TRnd(p_3) = 450, T_TRnd(p_4) = 75
W(p_0) = 450, W(p_1) = 75, W(p_2) = 800, W(p_3) = 200, W(p_4) = 0
W_avg = (450 + 75 + 800 + 200 + 0)/5 = 1525/5 = 305
Minimizes wait time
May starve large jobs
Must know service times
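The same kind of sketch for SJN: sort the jobs (assumed to arrive together) by service time, then charge waits as in FCFS. It reproduces the numbers above but is not the lecture's code:

    #include <stdio.h>
    #include <stdlib.h>

    #define N 5

    struct job { int id; int service; };

    static int by_service(const void *a, const void *b) {
        return ((const struct job *)a)->service - ((const struct job *)b)->service;
    }

    int main(void) {
        struct job jobs[N] = {{0,350},{1,125},{2,475},{3,250},{4,75}};
        qsort(jobs, N, sizeof jobs[0], by_service);   /* shortest job first */

        int wait = 0, sum_wait = 0;
        for (int i = 0; i < N; i++) {
            printf("p%d: W=%4d  T_TRnd=%4d\n", jobs[i].id, wait, wait + jobs[i].service);
            sum_wait += wait;
            wait += jobs[i].service;
        }
        printf("W_avg = %.0f\n", (double)sum_wait / N);   /* 305 */
        return 0;
    }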

Priority Scheduling
(same service times; the assigned priorities run the processes in the order p3, p1, p2, p4, p0)
T_TRnd(p_0) = τ(p_0) + τ(p_4) + τ(p_2) + τ(p_1) + τ(p_3) = 350 + 75 + 475 + 125 + 250 = 1275
T_TRnd(p_1) = τ(p_1) + τ(p_3) = 125 + 250 = 375
T_TRnd(p_2) = τ(p_2) + τ(p_1) + τ(p_3) = 475 + 125 + 250 = 850
T_TRnd(p_3) = τ(p_3) = 250
T_TRnd(p_4) = τ(p_4) + τ(p_2) + τ(p_1) + τ(p_3) = 75 + 475 + 125 + 250 = 925
W(p_0) = 925, W(p_1) = 250, W(p_2) = 375, W(p_3) = 0, W(p_4) = 850
W_avg = (925 + 250 + 375 + 0 + 850)/5 = 2400/5 = 480
Reflects importance of external use
May cause starvation
Can address starvation with aging
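The slide only names aging as the fix for starvation; one plausible sketch is to bump the priority of anything that has waited too long. The struct layout, the starting priorities, and the 100-tick boost interval below are made-up illustrative values, not the lecture's scheme:

    #include <stdio.h>

    #define N 5

    struct pcb { int id; int priority; int waited; };   /* assume: smaller number = higher priority */

    static void age(struct pcb ready[], int n, int boost_after) {
        for (int i = 0; i < n; i++) {
            if (ready[i].waited >= boost_after && ready[i].priority > 0) {
                ready[i].priority--;        /* raise the priority one level */
                ready[i].waited = 0;        /* restart its aging clock */
            }
        }
    }

    int main(void) {
        struct pcb ready[N] = {{0,4,0},{1,1,0},{2,2,0},{3,0,0},{4,3,0}};   /* arbitrary priorities */
        for (int tick = 0; tick < 300; tick++) {        /* pretend 300 time units pass */
            for (int i = 0; i < N; i++) ready[i].waited++;
            age(ready, N, 100);                         /* boost anything waiting 100+ units */
        }
        for (int i = 0; i < N; i++)
            printf("p%d priority is now %d\n", ready[i].id, ready[i].priority);
        return 0;
    }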

Deadline Scheduling
[Table and timing diagrams: each process has a service time and a deadline (one process has none); the charts show candidate orderings of p0–p4 measured against the deadlines.]
Allocates service by deadline
May not be feasible
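One common deadline policy is earliest-deadline-first; below is a sketch of the idea (sort by deadline, then check that every completion time meets its deadline). The deadlines in the example are invented, since the slide's table did not survive extraction:

    #include <stdio.h>
    #include <stdlib.h>

    #define N 3

    struct job { int id; int service; int deadline; };

    static int by_deadline(const void *a, const void *b) {
        return ((const struct job *)a)->deadline - ((const struct job *)b)->deadline;
    }

    int main(void) {
        struct job jobs[N] = {{0, 50, 200}, {1, 100, 150}, {2, 40, 60}};   /* illustrative values */
        qsort(jobs, N, sizeof jobs[0], by_deadline);

        int t = 0, feasible = 1;
        for (int i = 0; i < N; i++) {
            t += jobs[i].service;                          /* completion time of this job */
            printf("p%d finishes at %d (deadline %d)\n", jobs[i].id, t, jobs[i].deadline);
            if (t > jobs[i].deadline) feasible = 0;        /* missed its deadline */
        }
        printf(feasible ? "schedule is feasible\n" : "no feasible schedule this way\n");
        return 0;
    }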

Preemptive Schedulers
[Diagram: the ready list/scheduler/CPU model with the preemption-or-voluntary-yield path back to the ready list.]
The highest-priority process is guaranteed to be running at all times
–Or at least at the beginning of a time slice
Dominant form of contemporary scheduling
But complex to build & analyze

Round Robin (TQ=50)
(same service times: 350, 125, 475, 250, 75)
[Gantt chart: p0 runs 0–50.]
W(p_0) = 0

Round Robin (TQ=50)
[Gantt chart: p0 0–50, p1 50–100.]
W(p_0) = 0
W(p_1) = 50

Round Robin (TQ=50)
[Gantt chart: p0 0–50, p1 50–100, p2 100–150.]
W(p_0) = 0
W(p_1) = 50
W(p_2) = 100

Round Robin (TQ=50)
[Gantt chart: p0 0–50, p1 50–100, p2 100–150, p3 150–200.]
W(p_0) = 0
W(p_1) = 50
W(p_2) = 100
W(p_3) = 150

Round Robin (TQ=50)
[Gantt chart: p0 0–50, p1 50–100, p2 100–150, p3 150–200, p4 200–250.]
W(p_0) = 0
W(p_1) = 50
W(p_2) = 100
W(p_3) = 150
W(p_4) = 200

Round Robin (TQ=50)
[Gantt chart: the first round is complete; p0 begins its second quantum at 250.]
W(p_0) = 0
W(p_1) = 50
W(p_2) = 100
W(p_3) = 150
W(p_4) = 200

Round Robin (TQ=50)
[Gantt chart: the second round of 50-unit slices; p4 finishes at 475.]
T_TRnd(p_4) = 475
W(p_0) = 0, W(p_1) = 50, W(p_2) = 100, W(p_3) = 150, W(p_4) = 200

Round Robin (TQ=50)
[Gantt chart: in the third round p1 finishes at 550.]
T_TRnd(p_1) = 550
T_TRnd(p_4) = 475
W(p_0) = 0, W(p_1) = 50, W(p_2) = 100, W(p_3) = 150, W(p_4) = 200

Round Robin (TQ=50)
[Gantt chart: p3 finishes at 950.]
T_TRnd(p_1) = 550
T_TRnd(p_3) = 950
T_TRnd(p_4) = 475
W(p_0) = 0, W(p_1) = 50, W(p_2) = 100, W(p_3) = 150, W(p_4) = 200

Round Robin (TQ=50)
[Gantt chart: p0 finishes at 1100.]
T_TRnd(p_0) = 1100
T_TRnd(p_1) = 550
T_TRnd(p_3) = 950
T_TRnd(p_4) = 475
W(p_0) = 0, W(p_1) = 50, W(p_2) = 100, W(p_3) = 150, W(p_4) = 200

Round Robin (TQ=50)
[Gantt chart: p2, the last process, finishes at 1275.]
T_TRnd(p_0) = 1100
T_TRnd(p_1) = 550
T_TRnd(p_2) = 1275
T_TRnd(p_3) = 950
T_TRnd(p_4) = 475
W(p_0) = 0, W(p_1) = 50, W(p_2) = 100, W(p_3) = 150, W(p_4) = 200

Round Robin (TQ=50)
[Gantt chart: the complete timeline, 0 to 1275.]
T_TRnd(p_0) = 1100, T_TRnd(p_1) = 550, T_TRnd(p_2) = 1275, T_TRnd(p_3) = 950, T_TRnd(p_4) = 475
W(p_0) = 0, W(p_1) = 50, W(p_2) = 100, W(p_3) = 150, W(p_4) = 200
W_avg = (0 + 50 + 100 + 150 + 200)/5 = 500/5 = 100
T_TRnd_avg = (1100 + 550 + 1275 + 950 + 475)/5 = 4350/5 = 870
Equitable
Most widely used
Fits naturally with the interval timer

RR with Overhead=10 (TQ=50)
[Gantt chart: 50-unit slices separated by 10-unit context-switch overhead.]
T_TRnd(p_0) = 1320, T_TRnd(p_1) = 660, T_TRnd(p_2) = 1535, T_TRnd(p_3) = 1140, T_TRnd(p_4) = 565
W(p_0) = 0, W(p_1) = 60, W(p_2) = 120, W(p_3) = 180, W(p_4) = 240
W_avg = (0 + 60 + 120 + 180 + 240)/5 = 600/5 = 120
T_TRnd_avg = (1320 + 660 + 1535 + 1140 + 565)/5 = 5220/5 = 1044
Overhead must be considered
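A small simulator, not from the lecture, that reproduces both round-robin slides: with OVERHEAD set to 0 it yields the TQ=50 turnaround times, and with OVERHEAD = 10 it yields the numbers above (all five processes are assumed to arrive at time 0):

    #include <stdio.h>

    #define N        5
    #define TQ       50
    #define OVERHEAD 10    /* set to 0 for the plain round-robin slide */

    int main(void) {
        int remaining[N]  = {350, 125, 475, 250, 75};
        int turnaround[N] = {0};
        int t = 0, left = N;

        while (left > 0) {
            for (int i = 0; i < N; i++) {
                if (remaining[i] == 0) continue;
                int slice = remaining[i] < TQ ? remaining[i] : TQ;
                t += slice;                         /* process i runs one quantum (or less) */
                remaining[i] -= slice;
                if (remaining[i] == 0) {            /* finished: record its turnaround time */
                    turnaround[i] = t;
                    left--;
                }
                if (left > 0) t += OVERHEAD;        /* context-switch cost before the next dispatch */
            }
        }

        double sum = 0;
        for (int i = 0; i < N; i++) {
            printf("p%d: T_TRnd=%d\n", i, turnaround[i]);
            sum += turnaround[i];
        }
        printf("T_TRnd_avg = %.0f\n", sum / N);
        return 0;
    }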

Multi-Level Queues
[Diagram: ready lists 0–3 feed a single scheduler and CPU; new processes enter a level, and preemption or a voluntary yield returns a process to its queue.]
All processes at level i run before any process at level j (for i < j, i.e., higher-priority levels first)
At a level, use another policy, e.g., RR
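A sketch of multi-level-queue dispatch under the rule just stated: scan the ready lists from the highest-priority level down and take the first process found. The queue sizes, pids, and level assignments below are illustrative:

    #include <stdio.h>

    #define LEVELS 4
    #define MAXQ   8

    struct queue { int pids[MAXQ]; int head, tail, count; };

    static struct queue ready[LEVELS];

    static void enqueue(int level, int pid) {
        struct queue *q = &ready[level];
        q->pids[q->tail] = pid;
        q->tail = (q->tail + 1) % MAXQ;
        q->count++;
    }

    static int dispatch(void) {                 /* returns the pid to run, or -1 if all queues are empty */
        for (int level = 0; level < LEVELS; level++) {
            struct queue *q = &ready[level];
            if (q->count > 0) {
                int pid = q->pids[q->head];
                q->head = (q->head + 1) % MAXQ;
                q->count--;
                return pid;                     /* everything at this level runs before lower levels */
            }
        }
        return -1;
    }

    int main(void) {
        enqueue(2, 10);                         /* a user process at level 2 */
        enqueue(0, 1);                          /* a system process at level 0 (highest priority) */
        enqueue(2, 11);
        for (int pid; (pid = dispatch()) != -1; )
            printf("dispatch pid %d\n", pid);   /* prints 1, 10, 11 */
        return 0;
    }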

Contemporary Scheduling
Involuntary CPU sharing – timer interrupts
–Time quantum determined by interval timer – usually fixed for every process using the system
–Sometimes called the time slice length
Priority-based process (job) selection
–Select the highest-priority process
–Priority reflects policy
With preemption
Usually a variant of multi-level queues

BSD 4.4 Scheduling
Involuntary CPU sharing
Preemptive algorithms
32 multi-level queues
–Queues 0–7 are reserved for system functions
–Queues 8–31 are for user space functions
–nice influences (but does not dictate) queue level

Windows NT/2K Scheduling
Involuntary CPU sharing across threads
Preemptive algorithms
32 multi-level queues
–Highest 16 levels are "real-time"
–Next lower 15 are for system/user threads; the range is determined by the process base priority
–Lowest level is for the idle thread