[Diagram: the abstract computing environment — a program becomes a process managed by the process manager (process descriptions, scheduler, resource managers, synchronization, deadlock, protection) alongside the memory manager, device manager, and file manager, running on the CPU, memory, devices, and other hardware.]
The scheduling mechanism is the part of the process manager that handles removing the running process from the CPU and selecting another process according to a particular strategy. The scheduler chooses one of the ready threads to use the CPU when it is available. The scheduling policy determines when it is time for a thread to be removed from the CPU and which ready thread should be allocated the CPU next.
[Diagram: a new thread enters the ready list ("Ready"); the scheduler allocates the CPU ("Running"); the running thread may be preempted or voluntarily yield back to the ready list, block on a resource request ("Blocked") until the resource manager allocates the resource, or finish ("Done").]
New threads are put into the ready state and added to the ready list. A running thread may cease using the CPU for any of four reasons:
- the thread has completed its execution
- the thread requests a resource but can't get it
- the thread voluntarily releases the CPU
- the thread is preempted by the scheduler and involuntarily releases the CPU
The scheduling policy determines which thread gets the CPU and when a thread is preempted.
[Diagram: the parts of the scheduler — a ready process arrives from other states at the enqueuer, which places its process descriptor on the ready list; the dispatcher, using the context switcher, moves a process between the ready list and the CPU, yielding the running process.]
The enqueuer adds processes that are ready to run to the ready list. It may compute the priority for the waiting process (the priority could also be determined by the dispatcher).
The context switcher saves the contents of the processor registers for a process being taken off the CPU. If the hardware has more than one set of processor registers, the OS may just switch between sets; typically one set is used for supervisor mode and the others for applications. Depending on the OS, a process could be switched out voluntarily or involuntarily.
The dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves: switching context (from the process, to the dispatcher, to the new process), switching to user mode, and jumping to the proper location in the user program to restart that program.
[Diagram: CPU internals — general registers R1 … Rn, status registers, PC and IR, the control unit, and the ALU functional unit with left operand, right operand, and result.]
[Diagram: a context switch — the CPU's register contents are saved into the old thread's descriptor, and the new thread's descriptor is loaded into the CPU.]
Context switching is a time-consuming process. Assuming n general registers and m status registers, each requiring b store operations at K time units per store, the time required is (n + m) × b × K time units. For a processor requiring 50 ns to store one register, with n = 32, m = 8, and b = 1, the time required is 40 × 50 ns = 2000 ns = 2 μs. A complete context switch involves removing the process, loading the dispatcher, removing the dispatcher, and loading the new process, so at least 8 μs is required. Note that a 1 GHz processor could have executed about 4,000 instructions in this time. Some processors use several sets of registers to reduce this switching time (one set for supervisor mode, another for user mode).
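The arithmetic above can be sketched as a quick calculation (the function and variable names here are illustrative, not from the slides):

```python
def context_switch_time(n_general, n_status, stores_per_reg, store_ns):
    """Time in ns to save one register context: (n + m) * b * K."""
    return (n_general + n_status) * stores_per_reg * store_ns

# The slide's numbers: n = 32, m = 8, b = 1 store per register, K = 50 ns.
one_save = context_switch_time(32, 8, 1, 50)   # 2000 ns = 2 us
# A full switch saves/loads the process and dispatcher contexts: ~4 such operations.
full_switch = 4 * one_save                     # 8000 ns = 8 us
```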
Need a mechanism to call the scheduler:
- Voluntary call — the process blocks itself and calls the scheduler (non-preemptive scheduling)
- Involuntary call — an external force (interrupt) blocks the process and calls the scheduler (preemptive scheduling)
Each process voluntarily shares the CPU by calling the scheduler periodically. This is the simplest approach; it requires a yield instruction to allow the running process to release the CPU:
yield(p_i.pc, p_j.pc) {
    memory[p_i.pc] = PC;   // save p_i's program counter
    PC = memory[p_j.pc];   // resume p_j where it last yielded
}
p_i can be "automatically" determined from the processor status registers, so the function can be written as:
yield(*, p_j.pc) {
    memory[p_i.pc] = PC;
    PC = memory[p_j.pc];
}
p_i and p_j can resume one another's execution. Suppose p_j is the scheduler:
yield(*, p_j.pc);   // p_i yields to the scheduler
yield(*, p_k.pc);   // the scheduler chooses p_k
yield(*, p_j.pc);   // p_k yields to the scheduler
...                 // the scheduler chooses again
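One way to play with the voluntary-yield idea without writing assembly is to model threads as Python generators, where each `yield` hands control back to a tiny scheduler loop — a loose analogy to the slides' `yield(*, p_j.pc)`, not the actual mechanism:

```python
def worker(name, steps):
    for i in range(steps):
        # ... do one unit of work, then voluntarily yield the CPU ...
        yield f"{name}:{i}"

def scheduler(threads):
    """Round-robin over generators; each resumption models dispatching a thread."""
    trace = []
    ready = list(threads)
    while ready:
        t = ready.pop(0)            # take the next ready "thread"
        try:
            trace.append(next(t))   # run it until it yields
            ready.append(t)         # back to the end of the ready list
        except StopIteration:
            pass                    # thread finished ("Done")
    return trace

trace = scheduler([worker("p0", 2), worker("p1", 2)])
# → ['p0:0', 'p1:0', 'p0:1', 'p1:1']
```

Note that, exactly as the next slide warns, this only works because every worker cooperates: a generator that never yields would monopolize the loop.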
Every process periodically yields to the scheduler. This relies on correct process behavior — a process may fail to yield maliciously or accidentally. We need a mechanism to override the running process.
Periodic involuntary interruption, through an interrupt from an interval timer device, which generates an interrupt whenever the timer expires. The scheduler is called in the interrupt handler. A scheduler that uses involuntary CPU sharing is called a preemptive scheduler.
An interrupt occurs every K clock ticks:
IntervalTimer() {
    InterruptCount--;
    if (InterruptCount <= 0) {
        InterruptRequest = TRUE;
        InterruptCount = K;
    }
}
SetInterval(interval) {
    K = interval;
    InterruptCount = K;
}
The interval timer device handler keeps an in-memory clock up to date and invokes the scheduler:
IntervalTimerHandler() {
    Time++;                       // update the clock
    TimeToSchedule--;
    if (TimeToSchedule <= 0) {
        <invoke the scheduler>;
        TimeToSchedule = TimeSlice;
    }
}
Involuntary CPU sharing — timer interrupts. The time quantum is determined by the interval timer — usually a fixed size for every process using the system. Sometimes called the time slice length.
The mechanism never changes. Strategy = the policy the dispatcher uses to select a process from the ready list. Different policies suit different requirements.
Policy can control/influence: CPU utilization, the average time a process waits for service, and the average amount of time to complete a job. It could strive for any of: equitability, favoring very short or long jobs, meeting priority requirements, or meeting deadlines.
Suppose the scheduler knows each process p_i's service time, τ(p_i) — or it can estimate each τ(p_i). The policy can then optimize on any criterion, e.g., CPU utilization, waiting time, or deadlines. To find an optimal schedule: have a finite, fixed number of p_i; know τ(p_i) for each p_i; enumerate all schedules, then choose the best.
The τ(p_i) are almost certainly just estimates. The general algorithm to choose an optimal schedule is O(n^2), and other processes may arrive while these processes are being serviced. Usually the optimal schedule is only a theoretical benchmark — scheduling policies try to approximate an optimal schedule.
The scheduling criteria will depend in part on the goals of the OS and on the priorities of processes, fairness, overall resource utilization, throughput, turnaround time, response time, and deadlines.
P will be a set of processes, p_0, p_1, ..., p_{n-1}.
S(p_i) — the state of p_i: {running, ready, blocked}.
τ(p_i) — the service time: the amount of time p_i needs to be in the running state before it is completed.
W(p_i) — the waiting time: the time p_i spends in the ready state before its first transition to the running state.
T_TRnd(p_i) — the turnaround time: the amount of time between the moment p_i first enters the ready state and the moment the process exits the running state for the last time.
Simplified, but still provides analysis results: performance is easy to analyze, and there is no issue of voluntary/involuntary sharing. [Diagram: the nonpreemptive model — a new process enters the ready list ("Ready"), the scheduler allocates the CPU ("Running"), and the process runs until it finishes ("Done") or blocks on a resource request ("Blocked").]
[Diagram: new processes enter the ready list, the scheduler allocates the CPU, and processes leave when done.] The system receives λ processes per second, and each p_i uses 1/μ units of the CPU. Let λ = the average rate at which processes are placed in the ready list (the arrival rate), and μ = the average service rate, so 1/μ = the average τ(p_i).
Let λ = the average rate at which processes are placed in the ready list (the arrival rate), and μ = the average service rate, so 1/μ = the average τ(p_i). Let ρ = the fraction of the time that the CPU is expected to be busy:
ρ = (number of p_i that arrive per unit time) × (average time each spends on the CPU) = λ × 1/μ = λ/μ
Note: we must have λ < μ (i.e., ρ < 1). What if ρ approaches 1?
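A small sanity check of the ρ = λ/μ relationship (the function name and sample rates are mine):

```python
def utilization(arrival_rate, service_rate):
    """rho = lambda / mu; must stay below 1 or the ready list grows without bound."""
    assert arrival_rate < service_rate, "unstable: arrivals outpace service"
    return arrival_rate / service_rate

# E.g. 8 processes/s arriving, mean service time 1/mu = 0.1 s (so mu = 10/s):
rho = utilization(8, 10)   # CPU expected busy 80% of the time
```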
Possible goals: max CPU utilization, max throughput, min turnaround time, min waiting time, min response time. Which one to use depends on the system's design goal.
Try to use the simplified scheduling model: only consider the running and ready states, and ignore time in the blocked state. A new process is created when it enters the ready state, and a process is destroyed when it enters the blocked state — we are really just looking at "small phases" of a process.
First-come, first-served (FCFS); shorter jobs first (SJF), also called shortest job next (SJN); higher-priority jobs first; job with the closest deadline first.
Gantt charts are used to illustrate deterministic schedules and the dependencies of a process on other processes. A Gantt chart plots processor(s) against time, showing which processes are executing on which processors at which times. It also shows idle time, so it illustrates the utilization of each processor. In the following, we assume only one processor.
First-come-first-served assigns priority to processes in the order in which they request the processor:
i  τ(p_i)
0  350
1  125
2  475
3  250
4  75
FCFS Gantt chart: p0 runs 0–350, p1 350–475, p2 475–950, p3 950–1200, p4 1200–1275.
T_TRnd(p0) = τ(p0) = 350
T_TRnd(p1) = τ(p1) + T_TRnd(p0) = 125 + 350 = 475
T_TRnd(p2) = τ(p2) + T_TRnd(p1) = 475 + 475 = 950
T_TRnd(p3) = τ(p3) + T_TRnd(p2) = 250 + 950 = 1200
T_TRnd(p4) = τ(p4) + T_TRnd(p3) = 75 + 1200 = 1275
W(p0) = 0
W(p1) = T_TRnd(p0) = 350
W(p2) = T_TRnd(p1) = 475
W(p3) = T_TRnd(p2) = 950
W(p4) = T_TRnd(p3) = 1200
W_avg = (0 + 350 + 475 + 950 + 1200)/5 = 2975/5 = 595
Easy to implement, but it ignores service time and is not a great performer.
In FCFS, when a process arrives, everything already in the ready list will be processed before this job. Let μ be the service rate and L be the ready list length:
W_avg(p) = L × 1/μ + 0.5 × 1/μ = L/μ + 1/(2μ)
(the first term is the queue ahead of p; the second is the expected remaining time of the active process). Compare this predicted wait with the actual waits in the earlier examples.
Example:
Process  Burst Time
P1       24
P2       3
P3       3
Suppose that the processes arrive in the order P1, P2, P3. The Gantt chart for the schedule is: P1 0–24, P2 24–27, P3 27–30.
Waiting times: P1 = 0, P2 = 24, P3 = 27.
Average waiting time: (0 + 24 + 27)/3 = 17.
Suppose instead that the processes arrive in the order P2, P3, P1. The Gantt chart for the schedule is: P2 0–3, P3 3–6, P1 6–13.
Waiting times: P1 = 6, P2 = 0, P3 = 3.
Average waiting time: (6 + 0 + 3)/3 = 3 — much better than the previous case.
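Both FCFS orderings can be checked with a few lines (the helper name is mine; all jobs are assumed to arrive at t = 0):

```python
def fcfs_waits(bursts):
    """Waiting time of each job when served in list order (all arrive at t = 0)."""
    waits, clock = [], 0
    for b in bursts:
        waits.append(clock)   # a job waits for everything ahead of it
        clock += b
    return waits

w1 = fcfs_waits([24, 3, 3])   # order P1, P2, P3 -> waits [0, 24, 27], average 17
w2 = fcfs_waits([3, 3, 24])   # order P2, P3, P1 -> waits [0, 3, 6],  average 3
```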
Associate with each process the length of its next CPU burst, and use these lengths to schedule the process with the shortest time. SJN is optimal — it gives the minimum average waiting time for a given set of processes.
Two schemes: non-preemptive — once the CPU is given to a process, it cannot be preempted until it completes its CPU burst; preemptive — if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. The preemptive scheme is known as shortest remaining time next (SRTN).
SJN Gantt chart (same workload, τ = 350, 125, 475, 250, 75): p4 runs 0–75, p1 75–200, p3 200–450, p0 450–800, p2 800–1275.
T_TRnd(p4) = τ(p4) = 75
T_TRnd(p1) = τ(p1) + τ(p4) = 125 + 75 = 200
T_TRnd(p3) = τ(p3) + τ(p1) + τ(p4) = 250 + 125 + 75 = 450
T_TRnd(p0) = τ(p0) + τ(p3) + τ(p1) + τ(p4) = 350 + 250 + 125 + 75 = 800
T_TRnd(p2) = τ(p2) + τ(p0) + τ(p3) + τ(p1) + τ(p4) = 475 + 350 + 250 + 125 + 75 = 1275
W(p4) = 0, W(p1) = 75, W(p3) = 200, W(p0) = 450, W(p2) = 800
W_avg = (450 + 75 + 800 + 200 + 0)/5 = 1525/5 = 305
Minimizes wait time, but may starve large jobs and requires knowing the service times.
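The SJN numbers above can be reproduced mechanically (a sketch, assuming all five processes are ready at t = 0; the function name is mine):

```python
def sjn_metrics(service):
    """Non-preemptive SJN with all jobs ready at t = 0: run in increasing service time."""
    order = sorted(range(len(service)), key=lambda i: service[i])
    wait, turnaround, clock = {}, {}, 0
    for i in order:
        wait[i] = clock           # everything shorter ran first
        clock += service[i]
        turnaround[i] = clock
    return wait, turnaround

tau = [350, 125, 475, 250, 75]
W, T = sjn_metrics(tau)
avg_wait = sum(W.values()) / len(tau)   # 305, vs. 595 under FCFS
```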
The length of the next CPU burst can only be estimated. This can be done using the lengths of previous CPU bursts, with exponential averaging.
τ_{n+1} = α·t_n + (1 − α)·τ_n
α = 0: τ_{n+1} = τ_n — recent history does not count.
α = 1: τ_{n+1} = t_n — only the actual last CPU burst counts.
If we expand the formula, we get:
τ_{n+1} = α·t_n + (1 − α)·α·t_{n−1} + … + (1 − α)^j·α·t_{n−j} + … + (1 − α)^{n+1}·τ_0
Since both α and (1 − α) are less than or equal to 1, each successive term has less weight than its predecessor.
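A minimal sketch of the exponential-averaging estimator (the function name, initial estimate, and sample burst values are mine):

```python
def estimate_next_burst(alpha, tau0, bursts):
    """Apply tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n over observed bursts."""
    tau = tau0
    for t in bursts:
        tau = alpha * t + (1 - alpha) * tau
    return tau

# alpha = 0 ignores the measured bursts; alpha = 1 keeps only the last burst.
assert estimate_next_burst(0.0, 10, [6, 4, 6]) == 10
assert estimate_next_burst(1.0, 10, [6, 4, 6]) == 6
est = estimate_next_burst(0.5, 10, [6, 4, 6])   # blends history and recent bursts
```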
In priority scheduling, processes/threads are allocated the CPU on the basis of an externally assigned priority. A commonly used convention is that lower numbers indicate higher priority. Static priorities are computed once at the beginning and are not changed; dynamic priorities allow a thread to become more or less important depending on how much service it has recently received.
There are non-preemptive and preemptive priority scheduling algorithms. SJN is priority scheduling where the priority is the predicted next CPU burst time; FCFS is priority scheduling where the priority is the arrival time.
Non-preemptive priority scheduling:
i  τ(p_i)  Priority
0  350     5
1  125     2
2  475     3
3  250     1
4  75      4
With lower numbers meaning higher priority, the run order is p3, p1, p2, p4, p0:
T_TRnd(p3) = τ(p3) = 250
T_TRnd(p1) = τ(p1) + τ(p3) = 125 + 250 = 375
T_TRnd(p2) = τ(p2) + τ(p1) + τ(p3) = 475 + 125 + 250 = 850
T_TRnd(p4) = τ(p4) + τ(p2) + τ(p1) + τ(p3) = 75 + 475 + 125 + 250 = 925
T_TRnd(p0) = τ(p0) + τ(p4) + τ(p2) + τ(p1) + τ(p3) = 350 + 75 + 475 + 125 + 250 = 1275
T_TRnd_avg = (1275 + 375 + 850 + 250 + 925)/5 = 3675/5 = 735
W(p3) = 0, W(p1) = 250, W(p2) = 375, W(p4) = 850, W(p0) = 925
W_avg = (925 + 250 + 375 + 0 + 850)/5 = 2400/5 = 480
Reflects the importance of external use, but may cause starvation — which can be addressed with aging.
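The priority-schedule numbers can be recomputed the same way (a sketch assuming all processes arrive at t = 0 and lower number = higher priority; names are mine):

```python
def priority_waits(service, priority):
    """Non-preemptive priority scheduling, all jobs ready at t = 0."""
    order = sorted(range(len(service)), key=lambda i: priority[i])
    wait, clock = {}, 0
    for i in order:          # run strictly in priority order
        wait[i] = clock
        clock += service[i]
    return wait

tau = [350, 125, 475, 250, 75]
pri = [5, 2, 3, 1, 4]
W = priority_waits(tau, pri)
avg = sum(W.values()) / 5    # 480
```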
Deadline scheduling:
i  τ(p_i)  Deadline
0  350     575
1  125     550
2  475     1050
3  250     (none)
4  75      200
[Gantt chart: one feasible schedule, e.g. earliest deadline first — p4 0–75, p1 75–200, p0 200–550, p2 550–1025, p3 1025–1275 — meets every deadline.] Allocates service by deadline; a feasible schedule may not exist.
Hard real-time systems are required to complete a critical task within a guaranteed amount of time. Soft real-time computing requires only that critical processes receive priority over less critical ones.
[Diagram: ready list, scheduler, and CPU, with preemption or voluntary yield returning the running process to the ready list.] The highest-priority process is guaranteed to be running at all times — or at least at the beginning of each time slice. This is the dominant form of contemporary scheduling, but it is complex to build and analyze.
Also called shortest remaining time next: when a new process arrives, its next CPU burst is compared to the remaining time of the currently running process. If the new arrival's time is shorter, it preempts the CPU from the running process.
Process  Arrival Time  Burst Time
P1       0             7
P2       2             4
P3       4             1
P4       5             4
SRTN (preemptive) Gantt chart: P1 0–2, P2 2–4, P3 4–5, P2 5–7, P4 7–11, P1 11–16.
Average time spent in the ready queue = (9 + 1 + 0 + 2)/4 = 3.
The same workload under SJN (non-preemptive):
Process  Arrival Time  Burst Time
P1       0             7
P2       2             4
P3       4             1
P4       5             4
Gantt chart: P1 0–7, P3 7–8, P2 8–12, P4 12–16.
Average time spent in the ready queue = (0 + 6 + 3 + 7)/4 = 4.
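The SRTN waits in the preceding example can be verified with a simple unit-time simulation (a sketch; ties are broken by list order, and names are mine):

```python
def srtn_waits(jobs):
    """Preemptive SJF (SRTN). jobs = [(arrival, burst)]; returns wait per job."""
    remaining = {i: b for i, (a, b) in enumerate(jobs)}
    finish, t = {}, 0
    while remaining:
        ready = [i for i in remaining if jobs[i][0] <= t]
        if not ready:
            t += 1                                   # CPU idle until next arrival
            continue
        i = min(ready, key=lambda j: remaining[j])   # shortest remaining time next
        remaining[i] -= 1                            # run one time unit
        t += 1
        if remaining[i] == 0:
            finish[i] = t
            del remaining[i]
    # wait = turnaround - burst = (finish - arrival) - burst
    return [finish[i] - a - b for i, (a, b) in enumerate(jobs)]

waits = srtn_waits([(0, 7), (2, 4), (4, 1), (5, 4)])
avg = sum(waits) / 4   # 3
```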
Each process gets a small unit of CPU time (a time quantum), usually 10–100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once, and no process waits more than (n − 1)q time units.
Good way to upset customers!
Round robin with time quantum 50 (same workload: τ = 350, 125, 475, 250, 75). Each process runs 50 units at a time, so the first cycle starts p0 at 0, p1 at 50, p2 at 100, p3 at 150, and p4 at 200:
W(p0) = 0, W(p1) = 50, W(p2) = 100, W(p3) = 150, W(p4) = 200
W_avg = (0 + 50 + 100 + 150 + 200)/5 = 500/5 = 100
The processes complete in the order p4, p1, p3, p0, p2:
T_TRnd(p4) = 475, T_TRnd(p1) = 550, T_TRnd(p3) = 950, T_TRnd(p0) = 1100, T_TRnd(p2) = 1275
T_TRnd_avg = (1100 + 550 + 1275 + 950 + 475)/5 = 4350/5 = 870
Equitable, the most widely used policy, and fits naturally with an interval timer.
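A compact RR simulator reproduces these figures (a sketch assuming all processes arrive at t = 0 and zero context-switch cost; names are mine):

```python
from collections import deque

def round_robin(service, quantum):
    """RR with all arrivals at t = 0: returns (time to first service, turnaround)."""
    ready = deque(range(len(service)))
    remaining = list(service)
    first_run, finish, t = {}, {}, 0
    while ready:
        i = ready.popleft()
        first_run.setdefault(i, t)       # first moment this process gets the CPU
        run = min(quantum, remaining[i])
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)              # back to the end of the ready queue
        else:
            finish[i] = t
    wait = [first_run[i] for i in range(len(service))]
    turnaround = [finish[i] for i in range(len(service))]
    return wait, turnaround

W, T = round_robin([350, 125, 475, 250, 75], 50)
# W = [0, 50, 100, 150, 200]; average turnaround = 870
```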
Performance depends on the quantum: if q is large, RR degenerates to FIFO; if q is small, context-switch overhead dominates. q must be large with respect to the context-switch time, otherwise the overhead is too high.
Overhead must be considered. If each 50-unit quantum carries 10 time units of context-switch overhead (same workload), the first cycle starts p0 at 0, p1 at 60, p2 at 120, p3 at 180, and p4 at 240:
W(p0) = 0, W(p1) = 60, W(p2) = 120, W(p3) = 180, W(p4) = 240
W_avg = (0 + 60 + 120 + 180 + 240)/5 = 600/5 = 120
T_TRnd_avg = (1320 + 660 + 1535 + 1140 + 565)/5 = 5220/5 = 1044
[Diagram: multiple ready lists (Ready List 0 … Ready List n) feed the scheduler and CPU; preemption or voluntary yield returns the running process to a ready list.] Each list may use a different policy: FCFS, SJN, RR.
The ready queue is partitioned into separate queues: foreground (interactive) and background (batch). Each queue has its own scheduling algorithm: foreground — RR; background — FCFS.
Scheduling must also be done between the queues. Fixed-priority scheduling (i.e., serve all from the foreground, then from the background) carries a possibility of starvation. Time slicing: each queue gets a certain amount of CPU time which it schedules amongst its processes, e.g., 80% to the foreground in RR and 20% to the background in FCFS.
A process can move between the various queues; aging can be implemented this way. A multilevel feedback queue scheduler is defined by the following parameters: the number of queues; the scheduling algorithm for each queue; the method used to determine when to upgrade a process; the method used to determine when to demote a process; and the method used to determine which queue a process enters when it needs service.
Three queues: Q0 — time quantum 8 milliseconds; Q1 — time quantum 16 milliseconds; Q2 — FCFS. A new job enters queue Q0, which is served FCFS. When it gains the CPU, the job receives 8 milliseconds; if it does not finish in 8 milliseconds, it is moved to queue Q1. At Q1 the job is again served FCFS and receives 16 additional milliseconds; if it still does not complete, it is preempted and moved to queue Q2.
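The three-queue example can be simulated in a few lines (a sketch with illustrative names, assuming all jobs arrive together and ignoring preemption by later arrivals):

```python
from collections import deque

def mlfq_finish_times(bursts, quanta=(8, 16)):
    """Three-level MLFQ sketch: Q0 (q=8), Q1 (q=16), Q2 FCFS; demote on expiry."""
    queues = [deque(), deque(), deque()]
    remaining = list(bursts)
    for i in range(len(bursts)):
        queues[0].append(i)                 # new jobs enter Q0
    finish, t = {}, 0
    while any(queues):
        level = next(l for l in range(3) if queues[l])   # highest non-empty queue
        i = queues[level].popleft()
        run = remaining[i] if level == 2 else min(quanta[level], remaining[i])
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            queues[level + 1].append(i)     # quantum expired: demote one level
        else:
            finish[i] = t
    return finish

done = mlfq_finish_times([5, 30, 20])       # burst lengths in milliseconds
```

With these sample bursts, the 5 ms job finishes entirely within Q0, while the longer jobs are demoted through Q1 (and one into Q2) before completing.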
CPU scheduling is more complex when multiple CPUs are available. Homogeneous processors within a multiprocessor allow load sharing. Asymmetric multiprocessing — only one processor accesses the system data structures, alleviating the need for data sharing.
Deterministic modeling — takes a particular predetermined workload and defines the performance of each algorithm for that workload. Other approaches: queueing models and implementation.
Involuntary CPU sharing — timer interrupts. The time quantum is determined by the interval timer — usually fixed for every process using the system (sometimes called the time slice length). Priority-based process (job) selection: select the highest-priority process, where priority reflects policy. With preemption. Usually a variant of multi-level queues.
All use a multiple-queue system. UNIX SVR4: 160 levels, three classes (time-sharing, system, real-time). Solaris: 170 levels, four classes (time-sharing, system, real-time, interrupt). OS/2 2.0: 128 levels, four classes (background, normal, server, real-time). Windows NT 3.51: 32 levels, two classes (regular and real-time). Mach uses "hand-off scheduling".
Involuntary CPU sharing with preemptive algorithms. 32 multi-level queues: queues 0–7 are reserved for system functions, and queues 8–31 are for user-space functions. nice influences (but does not dictate) the queue level.
Involuntary CPU sharing across threads with preemptive algorithms. 32 multi-level queues: the highest 16 levels are "real-time"; the next lower 15 are for system/user threads, with the range determined by the process base priority; the lowest level is for the idle thread.
In file kernel/sched.c: the policy is a variant of RR scheduling.
The scheduler is responsible for multiplexing the CPU among a set of ready processes/threads. It is invoked periodically by a timer interrupt, by a system call, by other device interrupts, or whenever the running process terminates. It selects from the ready list according to its scheduling policy, which may be a non-preemptive or a preemptive algorithm.