Chapter 5: CPU Scheduling
Silberschatz, Galvin and Gagne ©2009, Operating System Concepts, 8th Edition
5.2 Basic Concepts
- Multiprogramming/multithreading aims at maximum CPU utilization.
- Process execution is a cycle of CPU execution and I/O wait.
- Process execution begins with a CPU burst, which may be followed by an I/O burst, then another CPU burst, another I/O burst, and so on.
- The CPU burst distribution matters for scheduling.
5.3 Example: CPU and I/O cycles

    printf("\nEnter the first integer: ");    /* I/O cycle */
    scanf("%d", &a);
    printf("\nEnter the second integer: ");
    scanf("%d", &b);
    c = a + b;                                /* CPU cycle */
    d = (a * b) - c;
    e = a - b;
    f = d / e;
    printf("\n a+b = %d", c);                 /* I/O cycle */
    printf("\n (a*b)-c = %d", d);
    printf("\n a-b = %d", e);
    printf("\n d/e = %d", f);
5.4 Alternating Sequence of CPU and I/O Bursts
- A process always starts with a CPU burst.
- Eventually, the final CPU burst ends with a system request to terminate execution.
5.5 Histogram of CPU-burst Times
- Typically there are a large number of short CPU bursts and a small number of long CPU bursts.
- An I/O-bound program typically has many short CPU bursts.
- A CPU-bound program might have a few long CPU bursts.
5.6 Three-level Scheduling
- Long-term, mid-term, and short-term scheduling.
5.7 CPU Scheduler (short-term scheduler)
- The scheduler selects from among the processes in the ready queue and allocates the CPU to one of them. The queue may be ordered in various ways.
- CPU scheduling decisions may take place when a process:
  1. switches from running to waiting state
  2. switches from running to ready state
  3. switches from waiting to ready state
  4. terminates
- Scheduling only under 1 and 4 is non-preemptive (cooperative); all other scheduling is preemptive.
- Under non-preemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state.
- Mac OS X also uses preemptive scheduling.
5.8 Preemptive vs. Non-preemptive Scheduling Issues
- Consider access to shared data: while one process is updating the data, it may be preempted so that a second process can run and see inconsistent data.
- Consider preemption while in kernel mode: what happens if a process is preempted in the middle of changing important kernel data (for instance, I/O queues) and the kernel (or a device driver) needs to read or modify the same structure?
- Consider interrupts occurring during crucial OS activities: these sections of code must not be accessed concurrently by several processes.
5.9 Dispatcher
- The dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:
  - switching context
  - switching to user mode
  - jumping to the proper location in the user program to restart that program
- Dispatch latency: the time it takes for the dispatcher to stop one process and start another running.
5.10 Scheduling Criteria
- CPU utilization: keep the CPU as busy as possible.
- Throughput: number of processes that complete their execution per time unit.
- Turnaround time: amount of time to execute a particular process, i.e., the time between the submission of a job or process and its completion by the system. Meaningful for non-interactive jobs or processes only.
- Waiting time: amount of time a process has been waiting in the ready queue.
- Response time: amount of time from when a request was submitted until the first response is produced, not the final output (for time-sharing environments). Applicable to interactive processes.
- It is desirable to maximize CPU utilization and throughput and to minimize turnaround time, waiting time, and response time.
5.11 Scheduling Algorithm Optimization Criteria
- Max CPU utilization, max throughput.
- Min turnaround time, min waiting time, min response time.
- In most cases, systems optimize the average measure.
- It is also important to minimize variance: users prefer predictable response times to a faster system with high variance.
5.12 First-Come, First-Served (FCFS) Scheduling

    Process   Burst Time
    P1        24
    P2         3
    P3         3

Suppose that the processes arrive in the order P1, P2, P3 (all at time 0). The Gantt chart for the schedule is:

    | P1 (0-24) | P2 (24-27) | P3 (27-30) |

Waiting time for P1 = 0, P2 = 24, P3 = 27.
Average waiting time = (0 + 24 + 27)/3 = 17
5.13 FCFS Scheduling (Cont.)
Suppose that the processes arrive in the order P2, P3, P1. The Gantt chart for the schedule is:

    | P2 (0-3) | P3 (3-6) | P1 (6-30) |

Waiting time for P1 = 6, P2 = 0, P3 = 3.
Average waiting time = (6 + 0 + 3)/3 = 3, much better than the previous case.
5.14 Problems with FCFS
- Non-preemptive.
- Does not minimize average waiting time (AWT).
- Cannot utilize resources in parallel: assume one CPU-bound process and many I/O-bound processes. Result: the convoy effect, with low CPU and I/O device utilization.
- Convoy effect: short processes stuck behind a long process.
5.15 Why the Convoy Effect with FCFS?
Consider n-1 jobs in the system that are I/O bound and 1 job that is CPU bound:
1. The I/O-bound jobs pass quickly through the ready queue and suspend themselves waiting for I/O.
2. The CPU-bound job arrives at the head of the queue and executes until completion.
3. The I/O-bound jobs rejoin the ready queue and wait for the CPU-bound job to complete.
4. The I/O devices stay idle until the CPU-bound job completes.
5. When the CPU-bound job completes, the ready I/O-bound processes quickly move through the running state and become blocked on I/O events again. The CPU becomes idle.
5.16 Shortest Job First (SJF)
- Schedule the job with the shortest computation time first.
- Other names: shortest-job-first scheduling, shortest-next-CPU-burst algorithm.
- Two types:
  - non-preemptive
  - preemptive (also called shortest-remaining-time-first scheduling)
- Optimal if all jobs are available simultaneously: gives the best possible AWT (average waiting time).
5.17 Non-preemptive SJF: Example

    Process   Duration   Order   Arrival Time
    P1        6          1       0
    P2        8          2       0
    P3        7          3       0
    P4        3          4       0

    | P4 (0-3) | P1 (3-9) | P3 (9-16) | P2 (16-24) |

Waiting times: P4 = 0, P1 = 3, P3 = 9, P2 = 16. The total time is 24.
Average waiting time (AWT) = (0 + 3 + 9 + 16)/4 = 7
5.18 Comparing to FCFS

    Process   Duration   Order   Arrival Time
    P1        6          1       0
    P2        8          2       0
    P3        7          3       0
    P4        3          4       0

    | P1 (0-6) | P2 (6-14) | P3 (14-21) | P4 (21-24) |

Waiting times: P1 = 0, P2 = 6, P3 = 14, P4 = 21. The total time is the same.
Average waiting time (AWT) = (0 + 6 + 14 + 21)/4 = 10.25, compared with AWT(SJF) = 7.
5.19 Another Example of SJF

    Process   Arrival Time   Burst Time
    P1        0              6
    P2        2              8
    P3        4              7
    P4        5              3

SJF scheduling chart:

    | P1 (0-6) | P4 (6-9) | P3 (9-16) | P2 (16-24) |

Average waiting time = (0 + 14 + 5 + 1)/4 = 5
5.20 Shortest-Job-First (SJF) Scheduling: Problems
- The difficulty is knowing the length of the next CPU request.
- Could ask the user.
- Can be implemented for long-term (job) scheduling in a batch system, but it cannot be implemented at the level of short-term CPU scheduling.
- Suitable for scheduling in batch systems.
5.21 Determining the Length of the Next CPU Burst
- Can only estimate the length; it should be similar to the previous ones.
- Approximate the next CPU-burst duration from the durations of the previous bursts: the past can be a good predictor of the future.
- No need to remember the entire past history; use an exponential average:

    t_n       = duration of the n-th CPU burst
    tau_(n+1) = predicted duration of the (n+1)-st CPU burst
    tau_(n+1) = alpha * t_n + (1 - alpha) * tau_n,   where 0 <= alpha <= 1

- alpha determines the weight placed on recent versus past behavior. Commonly alpha is set to 1/2, so recent history and past history are weighted equally.
5.22 Prediction of the Length of the Next CPU Burst
(figure: predicted vs. actual CPU-burst lengths over time)
5.23 Examples of Exponential Averaging
- alpha = 0: tau_(n+1) = tau_n. Recent history does not count (assumes the current state is stable).
- alpha = 1: tau_(n+1) = t_n. Only the actual last CPU burst counts.
- If we expand the formula, we get:

    tau_(n+1) = alpha*t_n + (1-alpha)*alpha*t_(n-1) + ... + (1-alpha)^j * alpha * t_(n-j) + ... + (1-alpha)^(n+1) * tau_0

- Since both alpha and (1 - alpha) are less than or equal to 1, each successive term has less weight than its predecessor.
5.24 Example of Preemptive SJF (Shortest-Remaining-Time-First)
Now we add the concepts of varying arrival times and preemption to the analysis.

    Process   Arrival Time   Burst Time
    P1        0              8
    P2        1              4
    P3        2              9
    P4        3              5

Preemptive SJF Gantt chart:

    | P1 (0-1) | P2 (1-5) | P4 (5-10) | P1 (10-17) | P3 (17-26) |

Average waiting time = [(10-1) + (1-1) + (17-2) + (5-3)]/4 = 26/4 = 6.5 msec
5.25 A Problem with Preemptive SJF: Starvation
- A job may keep getting preempted by shorter ones.
- Example: process A, with a computation time of 1 hour, arrives at time 0, but every 1 minute a short process with a computation time of 2 minutes arrives. Result under SJF: A never gets to run.
5.26 Priority Scheduling (Event Driven)
- A priority number (integer) is associated with each process.
- The CPU is allocated to the process with the highest priority (here, smallest integer = highest priority).
- Can be preemptive or non-preemptive.
- SJF is priority scheduling where the priority is the inverse of the predicted next CPU burst time.
- Problem: starvation. Low-priority processes may never execute.
- Solution: aging. As time progresses, increase the priority of waiting processes.
5.27 Example of Priority Scheduling

    Process   Burst Time   Priority
    P1        10           3
    P2         1           1
    P3         2           4
    P4         1           5
    P5         5           2

Priority scheduling Gantt chart:

    | P2 (0-1) | P5 (1-6) | P1 (6-16) | P3 (16-18) | P4 (18-19) |

Average waiting time = (6 + 0 + 16 + 18 + 1)/5 = 8.2 msec
5.28 Priority Scheduling: Example (highest value = highest priority)

    Process   Burst Time (tau)   Priority
    P0        350                5
    P1        125                2
    P2        475                3
    P3        250                1
    P4         75                4

Execution order: P0, P4, P2, P1, P3.
Average turnaround time: AT = (350 + 425 + 900 + 1025 + 1275)/5 = 795
Average waiting time: AWT = (0 + 350 + 425 + 900 + 1025)/5 = 540
5.29 Round-Robin
- One of the oldest, simplest, most commonly used scheduling algorithms.
- Select the process/thread from the ready queue in round-robin fashion (take turns).
- Each process gets a small unit of CPU time (time quantum q), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.
- Problems:
  - does not consider priority
  - context-switch overhead from preemption at each quantum expiration
5.30 Round Robin (RR)
- If there are n processes in the ready queue and the time quantum is q, each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
- A timer interrupts every quantum to schedule the next process.
- Performance:
  - q large: FIFO behavior, poor initial waiting time
  - q small: too many context switches (overhead), inefficient CPU utilization
- q must be large with respect to the context-switch time, otherwise the overhead is too high.
5.31 Example of RR with Time Quantum = 4

    Process   Burst Time
    P1        24
    P2         3
    P3         3

The Gantt chart is:

    | P1 (0-4) | P2 (4-7) | P3 (7-10) | P1 (10-14) | P1 (14-18) | P1 (18-22) | P1 (22-26) | P1 (26-30) |

- Typically higher average turnaround than SJF, but better response.
- Heuristic: 70-80% of jobs block within one time slice.
- Typical time slice: 10-100 ms (depends on job priority).
5.32 Time Quantum and Context Switch Time
5.33 Turnaround Time Varies with the Time Quantum
- Rule of thumb: 80% of CPU bursts should be shorter than q.
5.34 Multilevel Queue
- The ready queue is partitioned into separate queues, e.g.:
  - foreground (interactive)
  - background (batch)
- A process stays permanently in its assigned queue.
- Each queue has its own scheduling algorithm:
  - foreground: RR
  - background: FCFS
- Scheduling must also be done between the queues:
  - Fixed-priority scheduling (i.e., serve all from foreground, then from background). Possibility of starvation.
  - Time slice: each queue gets a certain amount of CPU time which it can schedule among its processes, e.g., 80% to foreground in RR and 20% to background in FCFS.
5.35 Multilevel Queue Scheduling
5.36 Practical Example: BSD UNIX Scheduling
- MLQ with feedback: 32 run queues
  - 0 through 7 for system processes
  - 8 through 31 for processes executing in user space
- The dispatcher selects a process from the queue with the highest priority.
  - RR is used within a queue, therefore only processes in the highest-priority queue can execute.
  - The time slice is less than 100 us.
5.37 Multilevel Feedback Queue
- A process can move between the various queues; aging can be implemented this way.
- A multilevel-feedback-queue scheduler is defined by the following parameters:
  - number of queues
  - scheduling algorithm for each queue
  - method used to determine when to upgrade a process
  - method used to determine when to demote a process
  - method used to determine which queue a process will enter when that process needs service
5.38 Example of Multilevel Feedback Queue
- Three queues:
  - Q0: RR with time quantum 8 milliseconds
  - Q1: RR with time quantum 16 milliseconds
  - Q2: FCFS
- Scheduling:
  - A new job enters queue Q0, which is served FCFS. When it gains the CPU, the job receives 8 milliseconds. If it does not finish in 8 milliseconds, the job is moved to queue Q1.
  - At Q1 the job is again served FCFS and receives 16 additional milliseconds. If it still does not complete, it is preempted and moved to queue Q2.
5.39 Multilevel Feedback Queues
5.40 Example
Suppose there are two queues, with a time slice of 2 for queue 1 (Q1) and a time slice of 4 for queue 2 (Q2). Each queue uses the RR algorithm. Compute the AWT under MLFQ scheduling.
Supposition: process P3 has an I/O burst of 1 unit after each CPU burst of 2 units; after each I/O burst, P3 returns to queue 1.

    Process   Arrival Time   Burst Time
    P1        0              30
    P2        0              20
    P3        0              10

MLFQ waiting times: P1 = 30, P2 = 32, P3 = 4.
AWT = (30 + 32 + 4)/3 = 22
5.41 Case Study: CPU Scheduling in Solaris
- Priority-based scheduling.
- Four classes (in order of priority): real time, system, time sharing, interactive.
- Different priorities and algorithms in different classes.
- Default class: time sharing.
- Policy in the time-sharing class: multilevel feedback queue with variable time slices.
5.42 Windows XP Scheduling
- Priority-based, preemptive scheduling: the highest-priority thread will always run.
- Also has multiple classes, with priorities within classes.
- Similar idea for user processes: a multilevel feedback queue
  - lower the priority when the quantum runs out
  - increase the priority after a wait event
- Some twists to improve "user-perceived" performance:
  - boost priority and quantum for the foreground process (the window that is currently selected)
  - boost priority more for a wait on keyboard I/O than for disk I/O
5.43 Linux Scheduling
- Priority-based, preemptive, with global round-robin scheduling.
- Each process has a priority; processes with a higher priority also get larger time slices.
- Before its time slice is used up, a process is scheduled based on priority. After its time slice is used up, the process must wait until all ready processes have used up their time slices (or been blocked): a round-robin approach, so there is no starvation problem.
- For a user process, the priority may be adjusted by up to plus or minus 5, depending on whether the process is I/O-bound or CPU-bound, giving I/O-bound processes higher priority.
5.44 Thread Scheduling
- Distinction between user-level and kernel-level threads.
- When threads are supported by the OS, it schedules threads, not processes.
- In the many-to-one and many-to-many models, the thread library schedules user-level threads to run on an LWP.
  - Known as process-contention scope (PCS), since scheduling competition is within the process.
  - Typically done via a priority set by the programmer.
- A kernel thread scheduled onto an available CPU uses system-contention scope (SCS): competition among all threads in the system.
5.45 Pthread Scheduling
- The API allows specifying either PCS or SCS during thread creation:
  - PTHREAD_SCOPE_PROCESS schedules threads using PCS scheduling
  - PTHREAD_SCOPE_SYSTEM schedules threads using SCS scheduling
- Can be limited by the OS: Linux and Mac OS X only allow PTHREAD_SCOPE_SYSTEM.
5.46 Multiple-Processor Scheduling
- CPU scheduling is more complex when multiple CPUs are available. Assume homogeneous processors within the multiprocessor.
- Asymmetric multiprocessing: only one processor accesses the system data structures, alleviating the need for data sharing.
- Symmetric multiprocessing (SMP): each processor is self-scheduling; all processes are in a common ready queue, or each processor has its own private queue of ready processes. Currently, SMP is most common.
- Processor affinity: a process has an affinity for the processor on which it is currently running.
  - soft affinity and hard affinity
  - variations include processor sets
5.47 NUMA and CPU Scheduling
- NUMA (non-uniform memory access): a CPU has faster access to some parts of main memory than to other parts.
- Note that memory-placement algorithms can also consider affinity.
5.48 Load Balancing
- On SMP systems, it is important to keep the workload balanced among all processors to fully utilize the benefits of having more than one processor.
- Load balancing attempts to keep the workload evenly distributed across all processors in an SMP system. On systems with a common run queue, load balancing is often unnecessary.
- Push migration: a specific task periodically checks the load on each processor and moves (pushes) processes from overloaded processors to idle or less-busy ones.
- Pull migration: an idle processor pulls a waiting task from a busy processor.
- Load balancing often counteracts the benefits of processor affinity. Solution: move processes only if the imbalance exceeds a certain threshold.
5.49 Multicore Processors
- Recent trend: place multiple processor cores on the same physical chip. Faster and consumes less power.
- Multiple hardware threads per core are also growing: they take advantage of a memory stall to make progress on another thread while the memory retrieval happens.
- Memory stall: may occur for various reasons, such as a cache miss (accessing data that is not in cache memory).
5.50 Multithreaded Multicore System
5.51 How to Multithread a Processor
There are two ways to multithread a processor: coarse-grained and fine-grained multithreading.
- Coarse-grained: a thread executes on a processor until a long-latency event such as a memory stall occurs. However, the cost of switching between threads is high.
- Fine-grained (or interleaved): switching between threads happens at a much finer level of granularity, typically at the boundary of an instruction cycle. The architectural design of fine-grained systems includes logic for thread switching, so the cost of switching between threads is small.
5.52 Virtualization and Scheduling
- Virtualization software schedules multiple guests onto the CPU(s).
- Each guest does its own scheduling, not knowing it does not own the CPUs.
- This can result in poor response time and can affect time-of-day clocks in guests.
- Virtualization can undo the good scheduling-algorithm efforts of the guests.
End of Chapter 5