Thread Implementation and Scheduling CSE451 Andrew Whitaker.

Scheduling: A Few Lectures Ago…
 The scheduler decides when to run runnable threads
 Programs should make no assumptions about the scheduler
   The scheduler is a "black box" that feeds the CPU some sequence of threads, e.g. {4, 2, 3, 5, 1, 3, 4, 5, 2, …}

This Lecture: Digging into the Black Box
 Previously, we glossed over the choice of which process or thread is run next ("some thread from the ready queue")
 This decision is called scheduling
   Scheduling is policy; context switching is mechanism
 We will focus on CPU scheduling, but will touch on network and disk scheduling

Scheduling Goals
 Maximize CPU utilization
 Maximize throughput (requests/sec)
 Minimize latency
   Average turnaround time: time from request submission to completion
   Average response time: time from request submission until the first result
 Favor some particular class of requests (priorities)
 Avoid starvation
 These goals can conflict!

Context
 Goals differ across applications
   I/O bound: web server, database (maybe)
   CPU bound: calculating π
   Interactive systems
 The ideal scheduler works well across a wide range of scenarios

Preemption
 Non-preemptive: once a thread is given the processor, it keeps it for as long as it wants
   A thread can give up the processor with Thread.yield or a blocking I/O operation
 Preemptive: a long-running thread can have the processor taken away
   Typically triggered by a timer interrupt

To Preempt or Not to Preempt?
 Non-preemptive schedulers are subject to starvation, e.g. by a thread running:
   while (true) { /* do nothing */ }
 Non-preemptive schedulers are easier to implement
 Non-preemptive scheduling is easier to program
   Fewer scheduler interleavings => fewer race conditions
 Non-preemptive scheduling can perform better
   Fewer locks => better performance

Preemption in Linux
 User-mode code can always be preempted
 Linux 2.4 (and before): the kernel was non-preemptive
   Any thread/process running in the kernel runs to completion
   The kernel is "trusted"
 Linux 2.6: the kernel is preemptive
   Unless the thread is holding a lock

Linux 2.6 Preemption Details
 The task_struct contains a preempt_count variable
   Incremented when grabbing a lock
   Decremented when releasing a lock
 A thread in the kernel can be preempted iff preempt_count == 0
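
The rule above can be modeled as a tiny counter in the spirit of preempt_count (a Java sketch, not kernel code; the class and method names are invented for illustration):

```java
public class PreemptCount {
    private int count = 0;                       // analogous to task_struct's preempt_count

    void lockAcquired() { count++; }             // incremented when grabbing a lock
    void lockReleased() { count--; }             // decremented when releasing a lock
    boolean preemptible() { return count == 0; } // preemptible iff no locks held

    public static void main(String[] args) {
        PreemptCount pc = new PreemptCount();
        System.out.println(pc.preemptible()); // true: no locks held
        pc.lockAcquired();
        pc.lockAcquired();                    // nested locks just increment
        System.out.println(pc.preemptible()); // false
        pc.lockReleased();
        pc.lockReleased();
        System.out.println(pc.preemptible()); // true again
    }
}
```

Note that the counter nests naturally: a thread holding two locks stays non-preemptible until both are released.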

Algorithm #1: FCFS/FIFO
 First-come first-served / first-in first-out
   Jobs are scheduled in the order that they arrive
   Like "real-world" scheduling of people in lines: supermarkets, bank tellers, McDonald's, Starbucks, …
   Typically non-preemptive (no context switching at the supermarket!)

FCFS Example
 Jobs A, B, and C arrive at roughly the same time; A takes 5 time units, B and C take 1 each
 What is the average turnaround time for each schedule?
   Schedule 1 (A, B, C): (5+6+7)/3 = 18/3 = 6
   Schedule 2 (B, C, A): (1+2+7)/3 = 10/3 ≈ 3.3
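
The averages in the example can be reproduced with a short sketch (the helper name is made up; jobs are assumed to arrive at time 0 and run back-to-back):

```java
import java.util.List;

public class FcfsTurnaround {
    // Average turnaround time when jobs run back-to-back in the given order,
    // assuming all jobs arrive at time 0.
    static double averageTurnaround(List<Integer> durations) {
        int clock = 0, total = 0;
        for (int d : durations) {
            clock += d;      // this job completes at the current clock
            total += clock;  // turnaround = completion time - arrival time (0)
        }
        return (double) total / durations.size();
    }

    public static void main(String[] args) {
        // Schedule 1: A (5), then B (1), then C (1)
        System.out.println(averageTurnaround(List.of(5, 1, 1))); // prints 6.0
        // Schedule 2: B, C, then A
        System.out.println(averageTurnaround(List.of(1, 1, 5))); // ≈ 3.33
    }
}
```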

FCFS Analysis
 + No starvation (assuming tasks terminate)
 - Average response time can be lousy: small requests wait behind big ones
 - May result in poor overlap of CPU and I/O activity
   I/O devices are left idle during long CPU bursts

FCFS at Amazon: Head-of-Line Blocking
 Clients and servers at Amazon communicate over HTTP
 HTTP is a request/reply protocol that does not allow re-ordering
 Problem: short requests can get stuck behind a long request
   This is called "head-of-line blocking"

Algorithm #2: Shortest Job First (SJF)
 Choose the job with the smallest service requirement
   Variant: shortest processing time first
 Provably optimal with respect to average waiting time

SJF Drawbacks
 It's non-preemptive
   …but there's a preemptive version, SRPT (Shortest Remaining Processing Time first), that accommodates arrivals
 Starvation: short jobs continually crowd out long jobs
   Possible solution: aging
 How do we know processing times?
   In practice, we must estimate them
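
To see the optimality claim concretely, sorting the earlier example's jobs by service time minimizes average waiting time (a sketch; the helper names are invented):

```java
import java.util.Arrays;

public class SjfWait {
    // Average waiting time when jobs (all arriving at time 0) run in the given order.
    static double averageWait(int[] durations) {
        int clock = 0, total = 0;
        for (int d : durations) {
            total += clock;  // this job waited until the clock reached it
            clock += d;
        }
        return (double) total / durations.length;
    }

    public static void main(String[] args) {
        int[] fcfsOrder = {5, 1, 1};           // the long job happens to arrive first
        int[] sjfOrder = fcfsOrder.clone();
        Arrays.sort(sjfOrder);                 // shortest job first
        System.out.println(averageWait(fcfsOrder)); // ≈ 3.67
        System.out.println(averageWait(sjfOrder));  // prints 1.0
    }
}
```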

Longest Job First?
 Why would a company like Amazon favor Longest Job First?

Algorithm #3: Priority
 Assign priorities to requests
   Choose the request with the highest priority to run next
     If there is a tie, use another scheduling algorithm (e.g., FCFS) to break it
   To implement SJF, set priority = expected length of the CPU burst
 Abstractly modeled (and usually implemented) as multiple "priority queues"
   Put a ready request on the queue associated with its priority
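
The multiple-queue model can be approximated with a single priority queue keyed on (priority, arrival order), so that ties fall back to FCFS (a sketch; the request names are illustrative):

```java
import java.util.Comparator;
import java.util.PriorityQueue;

public class PriorityReady {
    record Request(String name, int priority, long seq) {}

    public static void main(String[] args) {
        // Highest priority first; ties broken by arrival order (FCFS).
        PriorityQueue<Request> ready = new PriorityQueue<>(
            Comparator.comparingInt((Request r) -> r.priority()).reversed()
                      .thenComparingLong(r -> r.seq()));

        long seq = 0;
        ready.add(new Request("logRotate", 1, seq++));
        ready.add(new Request("httpGet",   5, seq++));
        ready.add(new Request("httpPost",  5, seq++));

        System.out.println(ready.poll().name()); // httpGet: high priority, arrived first
        System.out.println(ready.poll().name()); // httpPost
        System.out.println(ready.poll().name()); // logRotate
    }
}
```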

Priority Drawbacks
 How are you going to assign priorities?
 Starvation: given an endless supply of high-priority jobs, no low-priority job will ever run
   Solution: "age" threads over time
     Increase priority as a function of accumulated wait time
     Decrease priority as a function of accumulated processing time
   Many ugly heuristics have been explored in this space

Priority Inversion
 A low-priority task acquires a resource (e.g., a lock) needed by a high-priority task
 Can have catastrophic effects (see Mars Pathfinder)
 Solution: priority inheritance
   Temporarily "loan" some priority to the lock-holding thread

Algorithm #4: Round Robin (RR)
 The ready queue is treated as a circular FIFO queue
 Each request is given a time slice, called a quantum
   A request executes for the duration of the quantum, or until it blocks
 Advantages:
   Simple: no need to set priorities, estimate run times, etc.
   No starvation
 Great for timesharing
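
RR over the FCFS example's jobs can be simulated with a FIFO queue and a quantum (a sketch; the quantum of 1 is arbitrary):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class RoundRobin {
    // Returns completion times of jobs (all arriving at time 0), in completion order.
    static List<Integer> run(int[] durations, int quantum) {
        Deque<int[]> ready = new ArrayDeque<>();   // each entry: {jobIndex, remaining}
        for (int i = 0; i < durations.length; i++) ready.addLast(new int[]{i, durations[i]});
        List<Integer> done = new ArrayList<>();
        int clock = 0;
        while (!ready.isEmpty()) {
            int[] job = ready.pollFirst();
            int slice = Math.min(quantum, job[1]);
            clock += slice;
            job[1] -= slice;
            if (job[1] == 0) done.add(clock);      // job finished
            else ready.addLast(job);               // quantum expired: back of the queue
        }
        return done;
    }

    public static void main(String[] args) {
        // Jobs A=5, B=1, C=1 with quantum 1: the short jobs finish early, A last.
        System.out.println(run(new int[]{5, 1, 1}, 1)); // prints [2, 3, 7]
    }
}
```

With a huge quantum, RR degenerates to FCFS; here `run(new int[]{5, 1, 1}, 10)` gives completion times [5, 6, 7].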

RR Issues
 What do you set the quantum to be? No value is "correct"
   If small, context switches occur often, incurring high overhead
   If large, response time degrades
 Turnaround time can be poor compared to FCFS or SJF
 All jobs are treated equally
   If I run 100 copies of a program, it degrades your service
   Need priorities to fix this…

Combining Algorithms
 In practice, any real system uses some sort of hybrid approach, with elements of FCFS, SPT, RR, and priority
 Example: multi-level queue scheduling
   Use a set of queues corresponding to different priorities
   Priority scheduling across queues
   Round-robin scheduling within a queue
 Problems:
   How to assign jobs to queues?
   How to avoid starvation?
   How to keep the user happy?

UNIX Scheduling: Multi-level Feedback Queues
 Segregate processes according to their CPU utilization
   The highest-priority queue has the shortest CPU quantum
 Processes begin in the highest-priority queue
 Priority scheduling across queues, RR within
   The process with the highest priority always runs first
   Processes with the same priority are scheduled RR
 Processes dynamically change priority
   Priority increases over time if the process blocks before the end of its quantum
   Priority decreases if the process uses its entire quantum
 Goal: reward interactive behavior over CPU hogs
   Interactive jobs typically have short bursts of CPU
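
The demotion half of the feedback rule can be sketched with a few queues (promotion on blocking is omitted for brevity; the levels, quanta, and job names are invented):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class Mlfq {
    // Shorter quantum at higher priority; level 0 is highest.
    static final int[] QUANTUM = {1, 2, 4};

    static class Job {
        final String name; int remaining; int level = 0;  // all jobs start at the top
        Job(String n, int r) { name = n; remaining = r; }
    }

    // Runs jobs to completion; returns "name:level" strings in completion order,
    // where level is the queue the job finished in.
    static List<String> run(List<Job> jobs) {
        List<Deque<Job>> queues = new ArrayList<>();
        for (int i = 0; i < QUANTUM.length; i++) queues.add(new ArrayDeque<>());
        for (Job j : jobs) queues.get(0).addLast(j);

        List<String> finished = new ArrayList<>();
        while (true) {
            int lvl = 0;                                    // highest non-empty queue wins
            while (lvl < queues.size() && queues.get(lvl).isEmpty()) lvl++;
            if (lvl == queues.size()) return finished;      // everything is done
            Job j = queues.get(lvl).pollFirst();
            int slice = Math.min(QUANTUM[lvl], j.remaining);
            j.remaining -= slice;
            if (j.remaining == 0) {
                finished.add(j.name + ":" + j.level);
            } else {                                        // used its full quantum: demote
                j.level = Math.min(lvl + 1, queues.size() - 1);
                queues.get(j.level).addLast(j);
            }
        }
    }

    public static void main(String[] args) {
        // A CPU hog sinks to the bottom queue; a short job finishes at the top.
        System.out.println(run(List.of(new Job("cpuHog", 7), new Job("shortJob", 1))));
        // prints [shortJob:0, cpuHog:2]
    }
}
```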

Multi-processor Scheduling
 Each processor is scheduled independently
 Two implementation choices:
   A single, global ready queue
   Per-processor run queues
 Increasingly, per-processor run queues are favored (e.g., Linux 2.6)
   Promotes processor affinity (better cache locality)
   Decreases demand for (slow) main memory

What Happens When a Queue Is Empty?
 Tasks migrate from busy processors to idle processors
   The book talks about push vs. pull migration
 Java 1.6 support: a thread-safe double-ended queue (java.util.Deque)
   Use a bounded buffer per consumer
   If nothing is in a consumer's queue, steal work from somebody else
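
The steal-from-the-other-end idea can be sketched single-threaded with ArrayDeque (a real pool would use a concurrent deque; the worker and task names are illustrative):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

public class WorkStealing {
    // A worker takes from the front of its own deque, and when that is empty
    // steals from the back of a victim's deque (the opposite end, to reduce contention).
    static String nextTask(Deque<String> own, List<Deque<String>> others) {
        String t = own.pollFirst();          // local work first
        if (t != null) return t;
        for (Deque<String> victim : others) {
            t = victim.pollLast();           // steal from the tail
            if (t != null) return t;
        }
        return null;                         // system-wide idle
    }

    public static void main(String[] args) {
        Deque<String> w0 = new ArrayDeque<>(List.of("a", "b"));
        Deque<String> w1 = new ArrayDeque<>();      // idle worker
        System.out.println(nextTask(w1, List.of(w0))); // steals "b" from w0's tail
        System.out.println(nextTask(w0, List.of(w1))); // w0 runs its own "a"
    }
}
```

Stealing from the tail while the owner pops from the head means the two usually touch different ends of the deque.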