Scheduling

Overview
- Scheduling mechanism
- Scheduling goals
- Scheduling policies
- Multiprocessor scheduling

Scheduling Mechanism: Review
- What is the purpose of the scheduler?
- When does the scheduler run?
- What is preemptive scheduling? What are the benefits and drawbacks of preemption?

[Figure: thread state diagram with states Running, Ready, Blocked, and Exited; transitions labeled thread_create, thread_yield (and preemption), thread_sleep, thread_wakeup, thread_exit, and thread_destroy]

Notes: Purpose of the scheduler: it implements the thread abstraction, running different threads using context switching. When the scheduler runs: whenever the thread_* API calls are invoked; note that a context switch only happens when thread_sleep, thread_yield, or thread_exit is invoked (the arrows out of the Running state). Preemptive scheduling: a context switch initiated by the OS, not under the control of the running thread. Pros: protection (i.e., one thread cannot stop other threads from running). Cons: more context switches, which are expensive, and applications do no useful work during a context switch.

CPU-Bound and IO-Bound Programs
- Programs alternate between computation and IO, called CPU bursts and IO bursts
- A CPU-bound program has frequent CPU bursts; an IO-bound program has frequent IO bursts
- When a program performs IO, the CPU is not needed, so the scheduler runs another program to keep the CPU busy, improving CPU utilization

Scheduling Goals
The scheduler aims to improve different metrics depending on the system environment.

Batch systems: long-running programs, no interactive users, no time constraints
- CPU utilization: % of time that the CPU is busy (not idle)
- Throughput: number of programs that complete per unit time
- Turnaround time: time needed from start to finish of a program
  turnaround time = processing time (running) + waiting time (not running)

Interactive (or general-purpose) systems: short-running programs, interactive users, weak time constraints
- Response time: time between receiving a request and starting to produce output

Scheduling Policy
The scheduler achieves its goals by deciding which thread to run next when multiple threads are runnable. This decision rule is called the scheduling policy. An OS may have one scheduling mechanism but multiple scheduling policies, chosen based on the system environment.

Scheduling Policies
Batch systems:
- First-Come, First-Served (FIFO)
- Shortest Job First (non-preemptive)
- Shortest Remaining Time (preemptive)

Interactive systems:
- Round-Robin Scheduling
- Static Priority Scheduling
- Feedback Scheduling

First-Come, First-Served (FIFO)
- Select threads in the order they arrive
- Run each thread until completion (non-preemptive)
- What happens when a thread blocks?

Thread  Arrival Time  Processing Time  Waiting Time  Turnaround Time
1       0             3                ?             ?
2       2             6                ?             ?
3       4             4                ?             ?
4       6             5                ?             ?
5       8             2                ?             ?

[Figure: Gantt chart to be filled in, time axis 0 to 20]

Notes: When a thread blocks (sleeps), then upon wakeup it is placed at the end of the run queue, so each CPU burst is served in FIFO order. In these slides, each thread is assumed to arrive just before the next time interval starts. For example, thread 3 is assumed to arrive just before time 4.

First-Come, First-Served (FIFO)
- Select threads in the order they arrive
- Run each thread until completion

Thread  Arrival Time  Processing Time  Waiting Time  Turnaround Time
1       0             3                0             3
2       2             6                1             7
3       4             4                5             9
4       6             5                7             12
5       8             2                10            12

average waiting time = (0 + 1 + 5 + 7 + 10)/5 = 4.6

[Figure: Gantt chart showing threads 1, 2, 3, 4, 5 running back to back from time 0 to 20]
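
To make the arithmetic concrete, here is a minimal FIFO simulation in Python (illustrative code, not from the slides) that reproduces the table above from the arrival and processing times:

    # FIFO: run threads to completion in arrival order.
    # Thread table from the slide: id -> (arrival time, processing time).
    threads = {1: (0, 3), 2: (2, 6), 3: (4, 4), 4: (6, 5), 5: (8, 2)}

    time, waits = 0, []
    for tid in sorted(threads, key=lambda t: threads[t][0]):  # arrival order
        arrival, processing = threads[tid]
        time = max(time, arrival)       # CPU idles until the thread arrives
        waiting = time - arrival        # time spent in the ready queue
        time += processing              # run to completion (non-preemptive)
        turnaround = time - arrival     # waiting + processing
        waits.append(waiting)
        print(tid, waiting, turnaround)
    print(sum(waits) / len(waits))      # -> 4.6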

Shortest Job First
- Select the thread with the shortest running time
- Run the thread to completion (non-preemptive)

Thread  Arrival Time  Processing Time  Waiting Time  Turnaround Time
1       0             3                ?             ?
2       2             6                ?             ?
3       4             4                ?             ?
4       6             5                ?             ?
5       8             2                ?             ?

[Figure: Gantt chart to be filled in, time axis 0 to 20]

Notes: Non-preemptive means that once a thread runs, it runs to completion, even if a shorter job arrives while it is running.

Shortest Job First
- Select the thread with the shortest running time
- Run the thread to completion (non-preemptive)
- Why is the average waiting time lower than with FIFO?

Thread  Arrival Time  Processing Time  Waiting Time  Turnaround Time
1       0             3                0             3
2       2             6                1             7
3       4             4                7             11
4       6             5                9             14
5       8             2                1             3

average waiting time = (0 + 1 + 7 + 9 + 1)/5 = 3.6

[Figure: Gantt chart showing the run order 1, 2, 5, 3, 4 from time 0 to 20]

Notes: Why does the average waiting time go down? Consider a simple example of one job with processing time 100 and two jobs with processing time 1, all three arriving at time 0. If the long job runs first, the two short jobs have to wait about 100 units, so the average waiting time is (0 + 100 + 101)/3 = 201/3 = 67. If we run the two short jobs first, the average waiting time is (0 + 1 + 2)/3 = 1. Running shorter jobs first ensures that other jobs have to wait less.
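
The same table can be checked with a small non-preemptive SJF sketch (again illustrative, and assuming processing times are known in advance): whenever the CPU is free, run the shortest arrived job to completion.

    # Non-preemptive SJF over the same thread table.
    threads = {1: (0, 3), 2: (2, 6), 3: (4, 4), 4: (6, 5), 5: (8, 2)}

    time, done, waits = 0, set(), {}
    while len(done) < len(threads):
        # In this workload the CPU never idles, so ready is never empty.
        ready = [t for t in threads if t not in done and threads[t][0] <= time]
        tid = min(ready, key=lambda t: threads[t][1])  # shortest job first
        arrival, processing = threads[tid]
        waits[tid] = time - arrival       # waited until the CPU became free
        time += processing                # run to completion
        done.add(tid)
    print(waits, sum(waits.values()) / len(waits))     # -> 3.6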

Shortest Remaining Time
- Select the thread with the shortest remaining time to finish
- Run the thread until it ends or until another thread arrives
- Preemptive version of Shortest Job First

Thread  Arrival Time  Processing Time  Waiting Time  Turnaround Time
1       0             3                ?             ?
2       2             6                ?             ?
3       4             4                ?             ?
4       6             5                ?             ?
5       8             2                ?             ?

[Figure: Gantt chart to be filled in, time axis 0 to 20]

Shortest Remaining Time
- Select the thread with the shortest remaining time to finish
- Run the thread until it ends or until another thread arrives
- Provably optimal w.r.t. average waiting time

Thread  Arrival Time  Processing Time  Waiting Time  Turnaround Time
1       0             3                0             3
2       2             6                7             13
3       4             4                0             4
4       6             5                9             14
5       8             2                0             2

average waiting time = (0 + 7 + 0 + 9 + 0)/5 = 3.2

[Figure: Gantt chart showing the run order 1, 2, 3, 5, 2, 4 from time 0 to 20]
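
The preemptive policy is easiest to simulate one time unit at a time: at every unit, run the arrived thread with the least remaining work. This sketch (illustrative) reproduces the table:

    # Shortest Remaining Time: simulate unit by unit, always running the
    # arrived thread with the least remaining processing time.
    threads = {1: (0, 3), 2: (2, 6), 3: (4, 4), 4: (6, 5), 5: (8, 2)}
    remaining = {t: p for t, (a, p) in threads.items()}
    finish = {}

    for time in range(sum(p for a, p in threads.values())):  # 20 units total
        ready = [t for t in threads
                 if threads[t][0] <= time and remaining[t] > 0]
        tid = min(ready, key=lambda t: remaining[t])  # no idle time here
        remaining[tid] -= 1
        if remaining[tid] == 0:
            finish[tid] = time + 1

    waits = {t: finish[t] - a - p for t, (a, p) in threads.items()}
    print(waits, sum(waits.values()) / len(waits))    # -> 3.2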

Interactive Scheduling Policies
Batch policies don't work well for interactive systems:
- Some require an estimate of processing time; if jobs do IO, we would need the length of each CPU burst
- SJF and SRT can starve long-running threads
- Non-preemptive policies lead to long response times

Three interactive scheduling policies:
- Round-robin scheduling
- Static priority scheduling
- Feedback scheduling

Round-Robin Scheduling
- Preemptive version of FIFO scheduling
- Processes run in FIFO order, but each process is only allowed to run for a limited time called a time slice
- If a process does not complete by the end of its time slice, it is placed at the tail of the run queue (this requires timer interrupts)
- The next process is chosen from the head of the run queue
- What happens when a process blocks?
- How does RR fix the problems with batch scheduling (estimation of processing time, starvation, long response time)?

Notes: When the running process blocks, it is moved to a wait queue. As with the previous batch scheduling policies, when the blocked process is woken up, it is placed at the tail of the run queue. How does RR fix the problems with batch scheduling? It does not require an estimate of job processing time, it does not cause starvation, and it enables interactivity by limiting how long a thread can run at a time (a time slice), so a thread gets to run at least once every (time slice * number of ready threads) seconds.

Time Slice
- The time slice (ts) should be much larger than the context switch time (cs): ts >> cs
- context switch overhead = cs / (ts + cs)
- Typical ts is 1-100 ms; typical cs is roughly 10 us
- Assuming ts = 1 ms: context switch overhead = 10/(1000 + 10), about 1%

Notes: On Linux, the SCHED_RR policy has a time slice of 100 ms. Note that this is not the same as the timer interrupt period, which is about 4 ms.
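
As a quick check of the overhead formula with the numbers above (cs = 10 us, expressed in ms):

    cs = 0.01                                # context switch time: 10 us in ms
    for ts in (1, 10, 100):                  # candidate time slices in ms
        print(ts, round(cs / (ts + cs), 4))  # 1 ms -> ~0.0099, i.e., about 1%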

Round-Robin Scheduling
- Run each thread one time slice at a time (or until the thread blocks), in round-robin order
- A new thread is added to the end of the ready list; assume it arrives just before another thread's slice finishes

Thread  Arrival Time  Processing Time  Waiting Time  Turnaround Time
1       0             3                ?             ?
2       2             6                ?             ?
3       4             4                ?             ?
4       6             5                ?             ?
5       8             2                ?             ?

[Figure: Gantt chart to be filled in, time axis 0 to 20]

Round-Robin Scheduling
- Run each thread one time slice at a time (or until the thread blocks), in round-robin order
- A new thread is added to the end of the ready list; assume it arrives just before another thread's slice finishes

Thread  Arrival Time  Processing Time  Waiting Time  Turnaround Time
1       0             3                1             4
2       2             6                10            16
3       4             4                9             13
4       6             5                9             14
5       8             2                5             7

average waiting time = (1 + 10 + 9 + 9 + 5)/5 = 6.8

[Figure: Gantt chart with a one-unit time slice, running threads in the order 1, 1, 2, 1, 2, 3, 2, 4, 3, 2, 5, 4, 3, 2, 5, 4, 3, 2, 4, 4 from time 0 to 20]

Notes: Trace of the ready queue (head on the right, tail on the left) and the running thread at each time:

Time  Ready queue  Running
0                  1
1                  1
2     1            2      (2 arrives just before time slice 2 starts)
3     2            1      (1 finishes at time 4)
4     3            2      (3 arrives)
5     2            3
6     34           2      (4 arrives)
7     23           4
8     452          3      (5 arrives)
9     345          2
10    234          5
11    523          4
12    452          3
13    345          2
14    234          5      (5 finishes at time 15)
15    23           4
16    42           3      (3 finishes at time 17)
17    4            2      (2 finishes at time 18)
18                 4
19                 4      (4 finishes at time 20)
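
The trace above can be reproduced with a short round-robin sketch (illustrative) using a one-unit time slice and the slide's arrival rule: a thread arriving at a slice boundary enters the queue just ahead of the preempted thread.

    # Round-robin with a 1-unit time slice over the same thread table.
    from collections import deque

    threads = {1: (0, 3), 2: (2, 6), 3: (4, 4), 4: (6, 5), 5: (8, 2)}
    remaining = {t: p for t, (a, p) in threads.items()}
    queue = deque(t for t, (a, p) in sorted(threads.items()) if a == 0)
    finish, time = {}, 0

    while len(finish) < len(threads):
        tid = queue.popleft()              # head of the run queue
        remaining[tid] -= 1                # run for one time slice
        time += 1
        for t, (a, p) in sorted(threads.items()):
            if a == time:                  # arrives just before the boundary
                queue.append(t)
        if remaining[tid] == 0:
            finish[tid] = time
        else:
            queue.append(tid)              # preempted thread goes to the tail

    waits = {t: finish[t] - a - p for t, (a, p) in threads.items()}
    print(waits, sum(waits.values()) / len(waits))     # -> 6.8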

Round-Robin Scheduling
The effectiveness of round-robin depends on:
- The number of threads: more threads => slower response
- The size of the time slice: a long slice => slower response; a short slice => higher context switch overhead

Static Priority Scheduling
- Each thread is assigned a priority when it is started
- When the scheduler runs, it always chooses the highest-priority ready thread

Multi-Level Queue Scheduling
- Combines priority with round-robin scheduling
- Multiple ready queues, with decreasing priority
- The scheduler chooses a thread from the highest-priority queue that has a ready thread
- Round-robin scheduling within each queue
- Typically, IO-bound threads have higher priority

[Figure: a stack of ready queues, from high priority to low priority, feeding the CPU]
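
A minimal sketch of the selection logic (a hypothetical structure, not any particular kernel's implementation): scan the queues from highest to lowest priority and take the head of the first non-empty one.

    from collections import deque

    queues = [deque(), deque(), deque()]   # index 0 = highest priority

    def pick_next():
        """Return the next thread id to run, or None if all queues are empty."""
        for q in queues:                   # highest-priority queue first
            if q:
                return q.popleft()
        return None                        # all queues empty -> CPU idles

    # When a thread's time slice expires, round-robin within its own level:
    #     queues[level].append(tid)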

Dynamic Priority Scheduling
With static priority algorithms, choosing priorities is tricky:
- Starvation: a low-priority process may never run
- Priority inversion: a low-priority thread may prevent a high-priority thread from running by holding a shared resource

Dynamic priority (feedback) scheduling:
- The priority of a thread is changed based on the thread's behavior
- A thread's priority is reduced when it uses the CPU heavily
- Benefits?

Notes: Priority inversion: say we have three threads with high priority (H), middle priority (M), and low priority (L). Say L acquires lock(a), and then H tries to acquire lock(a). H blocks, so M gets to run, since it has higher priority than L. But then L does not get to run, so it cannot release its lock, and so H does not get to run. In essence, a lower-priority thread (M) runs while preventing a higher-priority thread (H) from running, which is why this is called priority inversion. Benefits of dynamic priority: (1) it prevents starvation of lower-priority threads, and (2) it helps improve the response time of interactive or IO-bound threads.

Unix Feedback Scheduling
Goals:
- Allocate the CPU fairly among threads
- Give CPU priority to IO-bound threads

Each thread has the following parameters associated with it:
- CPU usage (C)
- Current priority (P_i), initial priority (P_0)
- Nice value (N)

Notes: The period of the timer interrupt is one time unit (sometimes called a clock tick). A time slice consists of some number of time units, e.g., 5, 10, or 100.

Unix Feedback Scheduling
- On each timer interrupt (e.g., every 10 ms): update the CPU usage of the running thread: C = C + 1
- Every time slice (e.g., every 1 s):
  update the current priority of all threads: P_i = P_{i-1}/2 + C + N
  reset the CPU usage of all threads: C = 0
- Choose the thread with the smallest P_i value
- If the thread blocks, or another thread becomes runnable, choose another thread with the smallest P_i value

[Figure: timeline of timer interrupts from 0 to 20, with one time slice spanning several timer interrupts]

Notes: Here i is the i-th time slice, and P_0 = 0. Why is the time slice so long? A long time slice improves throughput because there are fewer context switches.

Unix Feedback Scheduling Example
- Say the time slice is 5 timer-interrupt units and N = 0
- A thread's CPU usage over four time slices is C_1 = 3, C_2 = 2, C_3 = 3, C_4 = 2
- Calculate its priority at time 20:
  P_4 = C_4 + C_3/2 + C_2/4 + C_1/8 = 2 + 3/2 + 2/4 + 3/8 = 4.375
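
The decay rule P_i = P_{i-1}/2 + C + N is easy to check in code (a small sketch using the example's values):

    def priority(cpu_usage_per_slice, nice=0):
        p = 0.0                            # P_0 = 0
        for c in cpu_usage_per_slice:      # one value of C per time slice
            p = p / 2 + c + nice           # halve old usage, add new usage
        return p

    print(priority([3, 2, 3, 2]))          # -> 4.375, matching the slide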

Some Comments About Unix Scheduling
- A thread runs for a full time slice unless it blocks or some blocked thread becomes runnable. Why? What is the benefit of this approach?
- How does the scheduler give priority to IO-bound threads? What are the benefits of this approach?

Notes: Why a full time slice? Because the current priority value is only updated once every time slice. The benefit is reduced context switch overhead. Priority to IO-bound threads: a thread's priority value becomes smaller when it does not use much CPU, so a thread that has been sleeping is more likely to be chosen when it becomes runnable. Benefits: this ensures fairness, and IO-bound threads get low response times.

Multiprocessor Scheduling
Asymmetric multiprocessing:
- One processor runs all OS code, IO processing code, etc.; the other processors run user code
- Simple to implement

Symmetric multiprocessing (SMP):
- All processors run both OS and user code
- More efficient, but harder to implement

SMP scheduling issues:
- Processor affinity
- Load balancing

Processor Affinity
- When a thread is running on a processor P1, the processor caches the thread's data
- If the thread is migrated from P1 to P2, the cached data has to be invalidated on P1 and repopulated on P2
- Processor affinity: the OS tries to ensure that the thread keeps running on P1
- A thread can also specify which processors it wants to use (hard affinity)
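
On Linux, hard affinity is exposed via the sched_setaffinity(2) system call; Python wraps it as os.sched_setaffinity (Linux-only). A minimal example:

    import os

    os.sched_setaffinity(0, {0})           # pin the calling process to CPU 0
    print(os.sched_getaffinity(0))         # -> {0}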

Load Balancing
Should there be one ready queue, or a ready queue per processor?
- A single ready queue makes load balancing easy: when a processor becomes idle, it picks the next ready thread. However, the ready queue can become a bottleneck.
- A ready queue per processor is more scalable, but task migration and load balancing become trickier. Task migration needs care, or deadlock is possible.

There are two complementary options for load balancing (see the sketch after this list):
- Push migration: a migration thread periodically checks the load on each processor and schedules threads on less-busy processors
- Pull migration: an idle processor pulls threads from overloaded processors

How does load balancing affect processor affinity?

Notes: One ready queue can become a bottleneck as the number of cores grows, because the cores must acquire a (spin) lock on the run queue to run the scheduling functions. Deadlock: what if one thread is being moved from RQ1 (run queue 1) to RQ2 while another is being moved from RQ2 to RQ1? With a ready queue per processor, one migration method is to use sleep and wakeup: sleep removes a thread from one queue, and wakeup can add the thread to another queue. Load balancing improves throughput by balancing the load across CPUs, but it reduces processor affinity, because threads are moved to different cores, which can slow them down.
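
A toy pull-migration sketch (illustrative only; a real kernel must lock both run queues, e.g., always in a fixed order, to avoid the deadlock mentioned in the notes):

    from collections import deque

    run_queues = [deque([1, 2, 3]), deque()]   # CPU 0 overloaded, CPU 1 idle

    def pull_migration(idle_cpu):
        # Find the busiest run queue and steal one thread from it.
        busiest = max(range(len(run_queues)), key=lambda c: len(run_queues[c]))
        if busiest != idle_cpu and len(run_queues[busiest]) > 1:
            run_queues[idle_cpu].append(run_queues[busiest].popleft())

    pull_migration(1)
    print(run_queues)   # -> [deque([2, 3]), deque([1])]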

Summary
- A scheduler chooses which threads to run based on a scheduling policy
- Batch scheduling policies aim to improve throughput; they often use non-preemptive scheduling, such as FCFS or SJF, because it minimizes context switches
- Interactive scheduling policies aim to reduce response time; they use preemptive scheduling, such as round-robin, to ensure good response times for IO-bound jobs
- A scheduler essentially prioritizes jobs, so fairness and starvation are important issues; a dynamic prioritization scheme helps improve the response time of IO-bound jobs while providing fairness to CPU-bound jobs

Think Time
1. Run the "top" program. At the top right, you will see the load average; consider the first of the three values shown. When the system is idle, this value is close to 0; when the system is busy, it is close to 1 (on a uniprocessor). How do you think the OS calculates this value? What are the other two values shown by the load average?
2. Feedback scheduling: say 10 timer interrupts occur in a time slice, and a process takes 30% of the CPU in each time slice. What will its priority value be over time?

Notes: top shows the system load average over the last 1, 5, and 15 minutes. The load average is calculated from the average number of ready and running processes over the last 1, 5, and 15 minutes; the averaging is performed using a formula similar to the Unix feedback scheduler's priority calculation. For the feedback scheduling question: C = 3 (on average, 3 timer interrupts of CPU usage per time slice). Over a long time, the P value stabilizes, so P = P/2 + C, i.e., P/2 = 3, so P = 6.
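
Both answers can be checked numerically with a small sketch: iterating P = P/2 + 3 converges to 6, and a load average can be maintained with the same kind of exponential decay (the decay constant here is illustrative, not Linux's actual value).

    # Fixed point of the feedback priority update with C = 3 per slice.
    p = 0.0
    for _ in range(20):
        p = p / 2 + 3                  # P = P/2 + C
    print(round(p, 3))                 # -> 6.0

    # Exponentially decayed load average over per-tick ready+running counts.
    def load_avg(samples, decay=0.5):
        load = 0.0
        for n in samples:              # n = number of ready + running tasks
            load = load * decay + n * (1 - decay)
        return load

    print(load_avg([1, 1, 0, 1, 1]))   # recent samples weigh the most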