Scheduler Activations Jeff Chase

Threads in a Process

Threads are useful at user level: parallelism, hiding I/O latency, interactivity.

Option A (early Java): user-level library, within a single-threaded process.
– The library does the thread context switch.
– The kernel time-slices between processes, e.g., on system-call I/O.

Option B (Linux, MacOS, Windows): use kernel threads (see the sketch after this list).
– System calls for thread fork, join, exit (and lock, unlock, …).
– The kernel does the context switching.
– Simple, but a lot of transitions between user and kernel mode.

Option C (Windows): scheduler activations.
– The kernel allocates processors to the user-level library.
– The thread library implements the context switch.
– System-call I/O that blocks triggers an upcall.

Option D: asynchronous I/O.
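As a concrete illustration of Option B, here is a minimal kernel-thread sketch using POSIX threads. It assumes a POSIX system (pthread_create/pthread_join); each call crosses into the kernel, which is exactly the overhead the later options try to reduce.

    /* Minimal sketch of Option B: kernel threads via POSIX threads.
       Each pthread is known to the kernel, so create/join/exit involve
       user-to-kernel transitions. */
    #include <pthread.h>
    #include <stdio.h>

    static void *worker(void *arg) {
        printf("worker %ld running\n", (long)arg);
        return NULL;                                         /* thread exit */
    }

    int main(void) {
        pthread_t t[2];
        for (long i = 0; i < 2; i++)
            pthread_create(&t[i], NULL, worker, (void *)i);  /* thread "fork" */
        for (int i = 0; i < 2; i++)
            pthread_join(t[i], NULL);                        /* thread "join" */
        return 0;
    }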

Threads and the kernel

Modern operating systems have multithreaded processes. A program starts with one main thread, but once running it may create more threads. Threads may enter the kernel (e.g., via a syscall). Threads are known to the kernel and have separate kernel stacks, so they can block independently in the kernel.
– The kernel has syscalls to create threads (e.g., Linux clone). Implementations vary.
– This model applies to Linux, MacOS X, Windows, Android, and pthreads or Java on those systems.

[Figure: a process virtual address space (VAS) with multiple threads crossing between user mode/user space and kernel mode/kernel space via trap, fault, and resume.]

A thread

[Figure: a thread's program, user stack and user TCB, kernel stack and kernel TCB, with states "active (ready or running)" and "blocked", connected by wait/sleep and wakeup/signal transitions.]

When a thread is blocked, its TCB is placed on a sleep queue of threads waiting for a specific wakeup event. This slide applies to the process abstraction too, or, more precisely, to the main thread of a process.
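A minimal sketch of what such a TCB and sleep queue might look like; the struct fields and function names are hypothetical, chosen only to mirror the state machine in the figure.

    /* Hypothetical TCB and sleep queue, mirroring the ready/running/blocked
       states above. Names and fields are illustrative, not from a real kernel. */
    #include <stddef.h>

    typedef enum { THREAD_READY, THREAD_RUNNING, THREAD_BLOCKED } thread_state_t;

    typedef struct tcb {
        thread_state_t state;
        void          *user_stack;
        void          *kernel_stack;
        void          *saved_context;   /* registers saved at the last switch */
        struct tcb    *next;            /* link for the ready list or a sleep queue */
    } tcb_t;

    typedef struct {
        tcb_t *head;                    /* threads waiting for one wakeup event */
    } sleep_queue_t;

    /* sleep/wait: mark the caller blocked and park its TCB on the event's queue. */
    void thread_sleep(tcb_t *self, sleep_queue_t *q) {
        self->state = THREAD_BLOCKED;
        self->next  = q->head;
        q->head     = self;
        /* ...then switch to some other ready thread... */
    }

    /* wakeup/signal: move every waiter back to the ready state. */
    void thread_wakeup(sleep_queue_t *q, void (*make_ready)(tcb_t *)) {
        tcb_t *t = q->head;
        q->head = NULL;
        while (t != NULL) {
            tcb_t *next = t->next;      /* make_ready() may relink t->next */
            t->state = THREAD_READY;
            make_ready(t);
            t = next;
        }
    }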

The case for user-level threads

Inherent costs of kernel threads:
– Cost of accessing thread management functions: a kernel trap, plus copying and checking of parameters. Costly compared to the user level.
– Cost of generality.

Thread management is not inherently expensive:
– It can be within an order of magnitude of a procedure call (see the user-level switch sketch below).
– So the overhead added by the kernel is significant.
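To make the comparison concrete, here is a minimal user-level context switch sketch using the POSIX ucontext API; the library, not the kernel, decides what runs next. This is only illustrative: glibc's swapcontext also saves the signal mask via a system call, so real user-level thread packages typically use a lighter hand-rolled switch.

    /* A user-level "thread" switch: save one register context, restore another.
       The kernel makes no scheduling decision here. */
    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, thread_ctx;
    static char thread_stack[64 * 1024];

    static void thread_body(void) {
        printf("user-level thread running\n");
        swapcontext(&thread_ctx, &main_ctx);   /* yield back to main */
    }

    int main(void) {
        getcontext(&thread_ctx);
        thread_ctx.uc_stack.ss_sp   = thread_stack;
        thread_ctx.uc_stack.ss_size = sizeof thread_stack;
        thread_ctx.uc_link          = &main_ctx;
        makecontext(&thread_ctx, thread_body, 0);

        swapcontext(&main_ctx, &thread_ctx);   /* library-level context switch */
        printf("back in main\n");
        return 0;
    }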

Scheduler Activations

Scheduler Activations is an "interesting" kernel syscall interface [Anderson et al., SOSP 1991]. It's not an API! Not for application programmers, and not for the faint of heart. It is suitable for implementing a thread API in a library.

What is the right division of function between the kernel and the library?
– The kernel manages only resource (processor) allocation and I/O.
– User-level code (the library) manages its own resources, etc.
– It is an interesting use of upcalls.

A landmark paper for the concept of a library OS.

Performance comparison

Comparing user-level threads (FastThreads), kernel threads (Topaz threads), and processes (Ultrix processes):

Operation    | FastThreads | Topaz Threads | Ultrix Processes
Null fork    |             |               |
Signal-wait  |             |               |

A procedure call takes 7 microseconds; a kernel trap takes 19 microseconds. (So maybe the kernel trap itself is not so significant.)
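For intuition only, here is a rough sketch of how a "null fork" latency could be measured for kernel threads on a POSIX system: create a thread that does nothing and wait for it, many times. This is not the paper's measurement harness; it merely illustrates the shape of the benchmark.

    /* Rough null-fork latency microbenchmark for kernel threads (Option B). */
    #include <pthread.h>
    #include <stdio.h>
    #include <time.h>

    static void *null_body(void *arg) { (void)arg; return NULL; }

    int main(void) {
        enum { ITERS = 10000 };
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < ITERS; i++) {
            pthread_t t;
            pthread_create(&t, NULL, null_body, NULL);   /* "null fork" */
            pthread_join(t, NULL);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double us = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_nsec - t0.tv_nsec) / 1e3;
        printf("null fork + join: %.1f microseconds per iteration\n", us / ITERS);
        return 0;
    }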

Goals of Scheduler Activations

When kernel intervention is not needed, perform as well as user-level threads. But the system maintains these invariants:
– No processor idles when a thread is ready.
– No higher-priority thread waits while a lower-priority thread runs.
– When a thread blocks, e.g., on a blocking syscall or page fault, other threads can run.

Sources of poor integration

Thesis: kernel threads are the wrong abstraction for supporting user-level threads.
– Kernel threads block, resume, and are preempted without notification to the user level.
– Kernel threads are scheduled obliviously with respect to the user-level state.

When a user thread blocks, its kernel thread also blocks:
– If there is only one kernel thread, the whole application is stuck.
– If there are more kernel threads than cores, they must be time-sliced.

A kernel thread can be preempted while in a critical section, increasing overhead (e.g., with spinlocks). There are also priority issues.

Effective kernel support for user-level threads

Each process is supplied with a virtual multiprocessor.
– The kernel allocates processors to address spaces.
– The user-level thread system (ULTS) has complete control over scheduling.
– The kernel notifies the ULTS whenever it changes the number of processors, or whenever a user thread blocks or unblocks.
– The ULTS notifies the kernel when the application needs more or fewer processors.

Approach

Each application has complete knowledge of, and control over, the processors in its virtual multiprocessor.
– The kernel notifies the address space's (AS's) thread scheduler of all kernel events.
– Upcalls are used to deliver these notifications.
– Each upcall also delivers a processor to run the ULTS scheduler code: a scheduler activation.

Scheduler Activations

A scheduler activation is an allocated processor (core), delivered to the ULTS via an upcall. It plays three roles:
– A vessel, or execution context, for running user-level threads, like a kernel thread.
– A notification to the ULTS of a kernel event.
– A data structure for saving state, etc. Each activation has two execution stacks, one in the kernel and one at user level.

Note: user-level threads are never directly resumed by the kernel.
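A hypothetical sketch of the per-activation state this implies; the field names are illustrative, not the paper's data structures.

    /* Illustrative per-activation state: an execution context with two stacks
       plus room to report a preempted/unblocked thread's machine state. */
    struct user_thread;                     /* the ULTS's own thread descriptor */

    typedef struct scheduler_activation {
        int    sa_id;                       /* activation number used in upcalls */
        void  *kernel_stack;                /* used while running in the kernel */
        void  *user_stack;                  /* used to run ULTS code and user threads */
        void  *saved_machine_state;         /* state handed up on preempt/unblock */
        struct user_thread *running;        /* user-level thread in this vessel, if any */
    } scheduler_activation_t;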

Upcalls

Add this processor (processor #)
– Execute a runnable user-level thread.

Processor has been preempted (preempted activation # and its machine state)
– Return to the ready list the user-level thread that was executing in the context of the preempted scheduler activation.

Scheduler activation has blocked (blocked activation #)
– The blocked scheduler activation is no longer using its processor.

Scheduler activation has unblocked (unblocked activation # and its machine state)
– Return to the ready list the user-level thread that was executing in the context of the blocked scheduler activation.
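A hedged sketch of how a ULTS might dispatch these four upcalls. The kernel delivers each upcall on a fresh scheduler activation; the enum, handler, and helper names below are assumptions for illustration, not the paper's actual interface.

    /* Dispatch of the four upcalls listed above, as seen from the ULTS. */
    struct user_thread;
    extern void run_next_ready_thread(void);                       /* hypothetical ULTS helpers */
    extern void ready_list_add(struct user_thread *, void *machine_state);
    extern struct user_thread *thread_of_activation(int activation_id);

    typedef enum {
        UPCALL_ADD_PROCESSOR,        /* args: processor #                        */
        UPCALL_PROCESSOR_PREEMPTED,  /* args: preempted activation #, its state  */
        UPCALL_ACTIVATION_BLOCKED,   /* args: blocked activation #               */
        UPCALL_ACTIVATION_UNBLOCKED  /* args: unblocked activation #, its state  */
    } upcall_kind_t;

    void ults_upcall(upcall_kind_t kind, int act_id, void *machine_state) {
        switch (kind) {
        case UPCALL_ADD_PROCESSOR:
            run_next_ready_thread();                   /* new processor: run something */
            break;
        case UPCALL_PROCESSOR_PREEMPTED:
            ready_list_add(thread_of_activation(act_id), machine_state);
            run_next_ready_thread();
            break;
        case UPCALL_ACTIVATION_BLOCKED:
            run_next_ready_thread();                   /* its processor is now ours */
            break;
        case UPCALL_ACTIVATION_UNBLOCKED:
            ready_list_add(thread_of_activation(act_id), machine_state);
            run_next_ready_thread();
            break;
        }
    }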

Example (T1) At time T1, the kernel allocates the application two processors. On each processor, the kernel upcalls to user-level code that removes a thread from the ready list and starts running it.

Example (T2) At time T2, one of the user-level threads (thread 1) blocks in the kernel. To notify the user level of this event, the kernel takes the processor that had been running thread 1 and performs an upcall in the context of a fresh scheduler activation. The user-level thread scheduler can then use the processor to take another thread off the ready list and start running it.

Example (T3) At time T3, the I/O completes. Again, the kernel must notify the user-level thread system of the event, but this notification requires a processor. The kernel preempts one of the processors running in the address space and uses it to do the upcall. (If there are no processors assigned to the address space when the I/O completes, the upcall must wait until the kernel allocates one.) This upcall notifies the user level of two things: the I/O completion and the preemption. The upcall invokes code in the user-level thread system that (1) puts the thread that had been blocked on the ready list and (2) puts the thread that was preempted on the ready list. At this point, scheduler activations A and B can be discarded.

Example (T4) Finally, at time T4, the upcall takes a thread off the ready list and starts running it.

Complications

Additional preemptions may be required to maintain correct priorities. A preemption or upcall can also occur inside the thread manager itself:
– The thread manager needs to be reentrant.
– If it was preempted while idling, nothing needs to be done.
– Otherwise, it needs a user-level context switch back to finish what it was doing.
– Also, a page fault inside the thread manager needs to be handled specially.

AS to Kernel Calls

Needed only when the relationship between the number of processors and the number of runnable threads changes.

Add more processors (additional # of processors needed)
– Allocate more processors to this address space and start them running scheduler activations.

This processor is idle ()
– Preempt this processor if another address space needs it.

(A sketch of this downcall interface follows below.)
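A hedged sketch of wrappers for these two downcalls and of when the ULTS might issue them; the function names and signatures are hypothetical, since the real calls are specific to the paper's Topaz implementation.

    /* Hypothetical wrappers for the two address-space-to-kernel calls. */
    extern void sa_add_more_processors(int additional_needed);  /* request extra processors */
    extern void sa_this_processor_is_idle(void);                /* offer this processor back */

    /* Called by the ULTS only when the runnable-threads vs. allocated-processors
       relationship changes. */
    void maybe_adjust_processors(int runnable_threads, int allocated_processors) {
        if (runnable_threads > allocated_processors)
            sa_add_more_processors(runnable_threads - allocated_processors);
        else if (runnable_threads < allocated_processors)
            sa_this_processor_is_idle();
    }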

Critical Sections

When an upcall informs the ULTS that a thread has been preempted, the ULTS does a user-level context switch to that thread so it can run to the end of any critical section it is in, then switches back (see the sketch below).
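A minimal sketch of that recovery path, assuming a hypothetical per-thread flag that marks critical sections and a user-level switch primitive that returns once the thread has exited the section; none of these names come from the paper.

    /* If the preempted thread was inside a critical section, let it finish the
       section at user level before treating it as an ordinary ready thread. */
    struct user_thread;
    extern int  in_critical_section(struct user_thread *t);          /* e.g., a per-thread flag */
    extern void switch_to_until_section_exit(struct user_thread *t); /* user-level switch; returns
                                                                        when t leaves the section */
    extern void ready_list_push(struct user_thread *t);

    void handle_preempted_thread(struct user_thread *t) {
        if (in_critical_section(t))
            switch_to_until_section_exit(t);   /* finish the critical section first */
        ready_list_push(t);                    /* then queue it like any other ready thread */
    }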

Compute-Bound

– No spin-waiting for Topaz, so there is a lot of overhead.
– New FastThreads is better for fewer than six processors, due to Topaz's poor scheduling.
– A daemon thread causes preemption, even if processors are available.

I/O Bound

– FastThreads degrades due to its inability to reallocate the processor when a thread blocks.
– Topaz parallels scheduler activations, but with poorer performance due to kernel involvement.

Multiprogramming

Ran two copies; the ideal speedup is 3.
– Topaz: 1.29
– Original FastThreads: 1.26
– Scheduler activations: 2.45

Topaz: thread operations are more expensive.
FastThreads: time slicing causes waiting.