Scheduling
CS 111 On-Line MS Program: Operating Systems
Peter Reiher

Outline
- What is scheduling?
- What resources should we schedule?
- What are our scheduling goals?
- Example scheduling algorithms and their implications

What Is Scheduling?
- An operating system often has choices about what to do next
- In particular, for a resource that can serve one client at a time, when there are multiple potential clients:
  - Who gets to use the resource next?
  - And for how long?
- Making those decisions is scheduling

OS Scheduling Examples
- What job should run next on an idle core? How long should we let it run?
- In what order should we handle a set of block requests for a disk drive?
- If multiple messages are to be sent over the network, in what order should they be sent?

How Do We Decide How To Schedule?
- Generally, we choose the goals we wish to achieve and design a scheduling algorithm that is likely to achieve them
- Different scheduling algorithms try to optimize different quantities
- So changing our scheduling algorithm can drastically change system behavior

The Process Queue
- The OS typically keeps a queue of processes that are ready to run
- It is ordered by whichever process should run next, which depends on the scheduling algorithm used
- When the time comes to schedule a new process, grab the first one on the process queue
- Processes that are not ready to run either aren't in that queue, are at its end, or are ignored by the scheduler
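To make this concrete, here is a minimal sketch of one way a ready queue could be kept, assuming a priority-ordered singly linked list; the struct layout and function names are illustrative, not taken from any particular OS:

```c
#include <stdlib.h>

/* One entry in the ready queue; the fields are illustrative. */
struct proc {
    int pid;
    int priority;          /* smaller value = should run sooner */
    struct proc *next;
};

static struct proc *ready_head = NULL;

/* Insert in priority order, so the head is always the next to run. */
void ready_enqueue(struct proc *p) {
    struct proc **pp = &ready_head;
    while (*pp && (*pp)->priority <= p->priority)
        pp = &(*pp)->next;
    p->next = *pp;
    *pp = p;
}

/* Scheduling a new process is just grabbing the head of the queue. */
struct proc *ready_dequeue(void) {
    struct proc *p = ready_head;
    if (p)
        ready_head = p->next;
    return p;
}
```

A first-come-first-served policy would simply append at the tail instead; either way the dequeue side, and thus the dispatch code, stays the same.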

Potential Scheduling Goals
- Maximize throughput: get as much work done as possible
- Minimize average waiting time: try to avoid delaying too many processes for too long
- Ensure some degree of fairness: e.g., minimize worst-case waiting time
- Meet explicit priority goals: scheduled items are tagged with a relative priority
- Real-time scheduling: scheduled items are tagged with a deadline to be met

Different Kinds of Systems, Different Scheduling Goals
- Time sharing: fast response time for interactive programs; each user gets an equal share of the CPU
- Batch: maximize total system throughput; delays of individual processes are unimportant
- Real-time: critical operations must happen on time; non-critical operations may not happen at all

Preemptive vs. Non-Preemptive Scheduling
- When we schedule a piece of work, we could let it use the resource until it finishes
- Or we could use virtualization techniques to interrupt it partway through, allowing other pieces of work to run instead
- If scheduled work always runs to completion, the scheduler is non-preemptive
- If the scheduler temporarily halts running jobs to run something else, it is preemptive

Pros and Cons of Non-Preemptive Scheduling
- Pros:
  - Low scheduling overhead
  - Tends to produce high throughput
  - Conceptually very simple
- Cons:
  - Poor response time for processes
  - Bugs can cause the machine to freeze up (e.g., if a process contains an infinite loop)
  - Not good fairness (by most definitions)
  - May make real-time and priority scheduling difficult

Pros and Cons of Preemptive Scheduling
- Pros:
  - Can give good response time
  - Can produce very fair usage
  - Works well with real-time and priority scheduling
- Cons:
  - More complex
  - Requires the ability to cleanly halt a process and save its state
  - May not get good throughput
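To see these tradeoffs in numbers, here is a small self-contained simulation contrasting non-preemptive FCFS with preemptive round robin; the burst lengths and the quantum of 4 are made-up illustrative assumptions:

```c
#include <stdio.h>

#define NJOBS 3

int main(void) {
    /* CPU time each job needs: one long job, two short ones. */
    int burst[NJOBS] = { 24, 3, 3 };

    /* Non-preemptive FCFS: each job waits for everyone ahead of it. */
    int t = 0, fcfs_wait = 0;
    for (int i = 0; i < NJOBS; i++) {
        fcfs_wait += t;              /* time job i spends waiting */
        t += burst[i];
    }

    /* Preemptive round robin with a quantum of 4 time units. */
    int left[NJOBS]    = { 24, 3, 3 };
    int done_at[NJOBS] = { 0, 0, 0 };
    int quantum = 4, now = 0, remaining = NJOBS;
    while (remaining > 0) {
        for (int i = 0; i < NJOBS; i++) {
            if (left[i] == 0)
                continue;
            int slice = left[i] < quantum ? left[i] : quantum;
            now += slice;
            left[i] -= slice;
            if (left[i] == 0) {
                done_at[i] = now;
                remaining--;
            }
        }
    }
    int rr_wait = 0;
    for (int i = 0; i < NJOBS; i++)
        rr_wait += done_at[i] - burst[i];  /* waiting = turnaround - burst */

    printf("average wait: FCFS %.1f, round robin %.1f\n",
           fcfs_wait / (double)NJOBS, rr_wait / (double)NJOBS);
    return 0;
}
```

With these numbers the average wait drops from 17.0 to about 5.7 because the short jobs finish early under round robin; that is exactly the response-time benefit, while the extra context switches round robin implies are the throughput cost noted above.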

An Intermediate Choice
- When a process blocks for some reason, schedule someone else
  - E.g., when the current process blocks waiting for I/O
  - Resume it once it is ready and the new process has yielded
- Going a step further: ask all programmers to voluntarily yield the CPU after some interval
  - This is cooperative scheduling
  - Since yielding is purely voluntary, it is not necessarily effective
  - Windows 3.1 used cooperative scheduling
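For a flavor of what voluntary yielding looks like from the program's side, here is a sketch using the POSIX sched_yield() call; on a modern preemptive OS this call is only a hint, but under purely cooperative scheduling such calls would be the only way anyone else ever ran:

```c
#include <sched.h>   /* POSIX sched_yield() */

/* A long-running loop that periodically gives the CPU back.
 * Cooperative scheduling trusts every program to do this. */
void long_computation(void) {
    for (long i = 0; i < 1000000000L; i++) {
        /* ... one unit of work ... */
        if (i % 1000000 == 0)
            sched_yield();   /* voluntarily let someone else run */
    }
}
```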

Scheduling: Policy and Mechanism
- The scheduler moves jobs into and out of a processor (dispatching), which requires various mechanics
- How dispatching is done should not depend on the policy used to decide who to dispatch
- It is desirable to separate the choice of who runs (the policy) from the dispatching mechanism
- It is also desirable that the OS process queue structure not be policy-dependent
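One common way to keep this separation in code is to hand the dispatcher the policy as a function pointer; this is a hypothetical sketch, not the lecture's implementation, and the names are invented for illustration:

```c
#include <stdio.h>

struct proc { int pid; };

/* Policy: a function that picks who runs next. */
typedef struct proc *(*sched_policy)(void);

static struct proc demo = { 42 };

/* A trivial stand-in policy; a real one would consult the ready queue. */
static struct proc *pick_first(void) { return &demo; }

/* Mechanism: dispatching is identical no matter which policy chose. */
static void dispatch(sched_policy pick_next) {
    struct proc *next = pick_next();                /* policy decision */
    if (next)
        printf("dispatching pid %d\n", next->pid);  /* stand-in for the context switch */
}

int main(void) {
    dispatch(pick_first);  /* swapping in a new policy never touches dispatch() */
    return 0;
}
```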

Scheduling the CPU
[Diagram: new processes enter the ready queue; the dispatcher and context switcher hand the CPU to the chosen process; a running process may yield (or be preempted) back onto the ready queue, or issue a resource request to a resource manager and rejoin the queue once the resource is granted.]

Scheduling and Performance
- How you schedule important system activities has a major effect on performance
- Performance has different aspects, and you may not be able to optimize for all of them
- Scheduling performance has very different characteristics under light vs. heavy load
- It is important to understand the performance basics of scheduling

General Comments on Performance
- Performance goals should be quantitative and measurable: if we want "goodness," we must be able to quantify it
- You cannot optimize what you do not measure
- Metrics are the way and the units in which we measure:
  - Choose a characteristic to be measured; it must correlate well with the goodness or badness of service
  - Find a unit to quantify that characteristic; it must be a unit that can actually be measured
  - Define a process for measuring the characteristic
- That's enough for now, but actually measuring performance is complex

How Should We Quantify Scheduler Performance?
- Candidate metric: throughput (processes/second)
  - But different processes need different run times
  - Process completion time is not controlled by the scheduler
- Candidate metric: delay (milliseconds)
  - But specifically which delays should we measure?
  - Some delays are not the scheduler's fault: time to complete a service request, time to wait for a busy resource
- Different parties care about these metrics

An Example: Measuring CPU Scheduling
- Process execution can be divided into phases:
  - Time spent running: the process controls how long it needs to run
  - Time spent waiting for resources or completions: resource managers control how long these take
  - Time spent waiting to be run: this time is controlled by the scheduler
- Proposed metric: the time that "ready" processes spend waiting for the CPU
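A sketch of how this metric might be accounted for per process, using the POSIX clock_gettime() call; the struct and function names here are invented for illustration:

```c
#include <time.h>

/* Per-process accounting for the proposed metric: accumulate the
 * time each process spends ready but not running. */
struct proc_acct {
    struct timespec ready_since;  /* set when the process becomes ready */
    long wait_ns;                 /* total time spent waiting for the CPU */
};

/* Call when the process joins the ready queue. */
void mark_ready(struct proc_acct *p) {
    clock_gettime(CLOCK_MONOTONIC, &p->ready_since);
}

/* Call when the scheduler dispatches it. */
void mark_running(struct proc_acct *p) {
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    p->wait_ns += (now.tv_sec - p->ready_since.tv_sec) * 1000000000L
                + (now.tv_nsec - p->ready_since.tv_nsec);
}
```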

A Little Bit of Queueing Theory
- Queueing theory is the study of the behavior of lines and service queues
  - Like how long it takes you to get your burrito at a busy taco truck
  - Or how long you wait in line at Disneyland
  - Or how long a thread waits before it gets assigned to a core
- A mathematical subject with relevant results

Basic Queueing Theory Terms
- Standard terms use Greek letters, as mathematicians prefer
- λ (lambda): the rate at which requests arrive at a queueing system
  - E.g., how many disk blocks per second does the system ask for?
- μ (mu): the rate at which requests can be serviced
  - E.g., how many messages per second can your network card send?
- ρ (rho): the system load, ρ = λ/μ
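As a made-up worked example: if disk requests arrive at λ = 80 per second and the disk can service μ = 100 per second, the load is

```latex
\rho = \frac{\lambda}{\mu} = \frac{80\ \text{requests/s}}{100\ \text{requests/s}} = 0.8
```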

Some Basic Queueing Results
- If ρ > 1, the system is overloaded: requests arrive faster than they can be handled, which isn't good
  - The queue of unfilled requests will continue to grow as long as the overload continues
  - If the queue is of limited size, some requests will be dropped
- Little's Law: if T is the average time a request spends in the system (average waiting time plus average service time) and λ is the arrival rate, then the average number of customers in the system is N = λT
  - That is, the average number of customers in the system is the arrival rate times the average time in the system
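Continuing the same made-up numbers: if requests arrive at λ = 80 per second and each spends T = 50 ms in the system (waiting plus service), then on average

```latex
N = \lambda T = 80\ \text{s}^{-1} \times 0.05\ \text{s} = 4\ \text{requests in the system}
```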

Typical Throughput vs. Load Curve
[Figure: throughput vs. offered load. The ideal curve rises with load up to the maximum possible capacity; the typical curve falls below the ideal as load grows.]

Why Don't We Achieve Ideal Throughput?
- Scheduling is not free
  - It takes time to dispatch a process (overhead)
  - More dispatches means more overhead (lost time), so less time per second is available to run processes
- How to minimize the performance gap:
  - Reduce the overhead per dispatch
  - Minimize the number of dispatches per second
- This phenomenon is seen in many areas besides process scheduling
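A hypothetical example of the overhead arithmetic: assume each dispatch costs d = 10 μs. With a time slice of s = 1 ms the overhead is barely noticeable, but shrinking the slice tenfold makes it significant:

```latex
\text{useful CPU fraction} = \frac{s}{s+d}:\qquad
\frac{1000\,\mu\text{s}}{1000\,\mu\text{s}+10\,\mu\text{s}} \approx 99\%,
\qquad
\frac{100\,\mu\text{s}}{100\,\mu\text{s}+10\,\mu\text{s}} \approx 91\%
```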

Typical Response Time vs. Load Curve
[Figure: delay (response time) vs. offered load, showing the ideal curve and the typical curve, whose response time explodes as load grows.]

Why Does Response Time Explode?
- Real systems have finite limits, such as queue size
- When those limits are exceeded, requests are typically dropped, which is an infinite response time for them
- There may be automatic retries (e.g., TCP), but those could be dropped too
- If load arrives much faster than it is serviced, lots of requests get dropped
- Unless we are careful, overheads explode during heavy load
- Effects like receive livelock can also hurt in this case

Graceful Degradation
- When is a system "overloaded"? When it is no longer able to meet its service goals
- What can we do when overloaded?
  - Continue service, but with degraded performance
  - Maintain performance by rejecting work
  - Resume normal service when load drops back to normal
- What should we not do when overloaded?
  - Allow throughput to drop to zero (i.e., stop doing work)
  - Allow response time to grow without limit