Scheduler Activations: Effective Kernel Support for the User-level Management of Parallelism. Thomas E. Anderson, Brian N. Bershad, Edward D. Lazowska, and Henry M. Levy.

Presentation transcript:

Scheduler Activations: Effective Kernel Support for the User-level Management of Parallelism. Thomas E. Anderson, Brian N. Bershad, Edward D. Lazowska, and Henry M. Levy. Presenter: Quan (Cary) Zhang

Parallel computing drove the emergence of threads, but the essential goal is higher performance. Thread support can be assessed from two views: – the view from performance – the view from flexibility

View from performance: – Processes lose on the latency of communication through shared memory. – Kernel-level threads lose on the cost of crossing between kernel and user space. – A user-level thread switch needs nothing more than a switch between functions.
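The claim that a user-level thread switch needs nothing more than jumping between functions can be illustrated with Python generators, where every "context switch" happens entirely in user space with no system call. This is a toy sketch, not the paper's mechanism; all names are illustrative.

```python
def worker(name, steps):
    """A user-level 'thread': each yield is a context switch back to the scheduler."""
    for i in range(steps):
        yield f"{name}:{i}"

def round_robin(threads):
    """A tiny user-level scheduler: no kernel involvement in any switch."""
    trace = []
    while threads:
        t = threads.pop(0)
        try:
            trace.append(next(t))   # resume the thread until its next yield
            threads.append(t)       # still runnable: back on the ready list
        except StopIteration:
            pass                    # thread finished
    return trace

print(round_robin([worker("A", 2), worker("B", 2)]))
# → ['A:0', 'B:0', 'A:1', 'B:1']
```

Each resume and yield here is an ordinary function transfer, which is exactly why user-level switching is cheap compared to trapping into the kernel.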

View from flexibility: – Kernel threads must compromise toward generality. – Different user-level thread packages can satisfy different application requirements, without any need for kernel modification.

Q: What is the borderline between kernel threads and user-level threads?

However, user-level threads are at the beck and call of the kernel. In other words, the user-level thread system cannot communicate with the kernel to arrange resources proactively according to the actual situation, which can hurt the performance of user-level threads or even cause incorrect behavior.

Q: Why does communication between the kernel and the user-level thread management system via scheduler activations have less overhead than the possible ways of communicating between the kernel and user level for user-level threads built on the traditional kernel interface?

Q: What kinds of user-level thread operations might affect processor allocation decisions?

So the design of scheduler activations is to make the kernel less arbitrary: on a kernel event, control is vectored from the kernel to the user-level thread scheduler, letting that scheduler take effect and act as a real scheduler. Q: Are scheduler activations lightweight? How are they implemented in their system?

A scheduler activation is delivered to the user level when the state of the kernel thread mapped to it changes. A user-level thread scheduler receives every scheduler activation, so that it has complete control over which of its threads run on the allocated processors.

Supplement: – The user-level thread scheduler can schedule each user thread preemptively with the help of the kernel. – From the kernel's point of view, no knowledge is needed of what is happening at user level. – Some specific situations arise on return from the kernel: no runnable user thread; an event handler; an interrupt that can cause further interrupts.
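The kernel events that trigger upcalls can be sketched as an event dispatcher inside the user-level scheduler. The four event names follow Table I of the paper (add this processor; processor has been preempted; scheduler activation has blocked; scheduler activation has unblocked); the counter-based scheduler state below is an illustrative simplification, not the real interface.

```python
class UserScheduler:
    """Minimal sketch of a user-level scheduler reacting to kernel upcalls."""
    def __init__(self, ready):
        self.ready = ready      # threads on the user-level ready list
        self.running = 0        # threads currently on processors

    def upcall(self, event):
        if event == "add_this_processor":
            if self.ready:               # pick a ready thread for the new processor
                self.ready -= 1
                self.running += 1
        elif event == "processor_preempted":
            self.running -= 1            # preempted thread's state comes back
            self.ready += 1              # ...and returns to the ready list
        elif event == "activation_blocked":
            self.running -= 1            # thread is stuck in the kernel (e.g. I/O)
        elif event == "activation_unblocked":
            self.ready += 1              # resumed state is handed back to user level

sched = UserScheduler(ready=2)
sched.upcall("add_this_processor")    # 1 running, 1 ready
sched.upcall("activation_blocked")    # the running thread blocks in the kernel
sched.upcall("activation_unblocked")  # it becomes runnable again
print(sched.running, sched.ready)     # → 0 2
```

The point of the design is visible even in this toy: every scheduling-relevant kernel event becomes an explicit notification, so the user-level scheduler always knows exactly how many processors it really has.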

To prevent the situation where a processor runs only the user-level thread scheduler (and nothing else), the thread system also uses two system calls: – Add more processors – This processor is idle
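The two calls named above come from Table II of the paper; the decision logic below is only an illustrative sketch of when a user-level scheduler might issue them, not the paper's actual policy.

```python
def allocation_downcall(runnable_threads, processors):
    """Sketch: which processor-allocation system call, if any, to issue."""
    if runnable_threads > processors:
        # more parallelism available than processors to run it
        return ("add_more_processors", runnable_threads - processors)
    if runnable_threads < processors:
        # a processor would otherwise spin in the scheduler doing nothing
        return ("this_processor_is_idle",)
    return None

print(allocation_downcall(5, 2))  # → ('add_more_processors', 3)
print(allocation_downcall(1, 2))  # → ('this_processor_is_idle',)
```

Together with the upcalls, these downcalls form the whole kernel/user contract: the kernel owns processor allocation, the user level owns thread scheduling.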

The critical-section problem can cause poor performance or even deadlock; an example is the free lists of thread control blocks (as implemented in the Windows kernel). Prevention breaks the semantics of the kernel. Recovery, a mechanism that lets the preempted user thread continue execution until it leaves the critical section, is the more natural way to solve the problem.

Implementation details: – Add an additional suite of thread-related system calls rather than revising the original ones. – Copy the critical-section code, eliminating overhead on lock acquisition and release; this simplifies the common case, in which no preemption occurs inside the section, at the cost of complicating the uncommon case.
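The recovery idea — run a preempted thread to the end of its critical section before rescheduling — can be simulated in miniature. This is a sketch of the idea only: the real mechanism resumes the thread's machine state at the identical program counter inside a compiler-made copy of the section, whereas here the "section" is just an explicit list of steps, and all names are illustrative.

```python
shared = {"balance": 100}

def critical_section(amount, preempt_after=None):
    """A two-step withdraw; simulate preemption after `preempt_after` steps
    by handing the remaining steps back instead of finishing."""
    steps = [
        lambda: shared.__setitem__("tmp", shared["balance"] - amount),
        lambda: shared.__setitem__("balance", shared.pop("tmp")),
    ]
    for i, step in enumerate(steps):
        if preempt_after is not None and i == preempt_after:
            return steps[i:]        # preempted mid-section: rest goes to the scheduler
        step()
    return []

def on_preemption_upcall(remaining):
    """Recovery: run the rest of the section to completion before any
    rescheduling, so no lock is ever held across a user-level switch."""
    for step in remaining:
        step()

remaining = critical_section(30, preempt_after=1)   # preempted mid-section
on_preemption_upcall(remaining)                     # scheduler finishes it first
print(shared["balance"])  # → 70
```

Without the recovery step, the scheduler could run another thread against the half-updated state; with it, the section's invariant is restored before any scheduling decision is made.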

Q: Regarding the mechanism for handling threads preempted in a critical section — running a copy of that section — where does the copy reside in the address space? (Is it in the same address space as the original code?) How does the switch to the copy happen? I do not see much difference between their explanation of dealing with critical sections and this approach; I would say it introduces overhead.

While handling preempted threads that are in a critical section, they state that a context switch is made to let the thread finish execution — more precisely, control is transferred to the identical position in a copy of the critical section, which completes before handing control back. On which activation does this run? If it runs on the same one as the preempted thread, it is effectively the same as waiting for the original thread to unlock before exiting, as in Psyche and Symunix. If it runs on another activation, what happens if all threads are currently in a critical section? Or are they time-sliced over existing activations?

Performance improvements (see paper): – Thread performance – Upcall performance – Application performance

Discussion: Since the user-level thread system can notify the kernel that the application needs more or fewer processors, is the system robust against malicious user-level behavior that simply creates a large number of threads and requests more processors in order to influence the kernel's allocation decision? How do we distinguish this kind of malicious behavior from a task that genuinely needs heavyweight resources?

To make each program consume a fair amount of resources, the authors propose favoring address spaces that use fewer processors and penalizing those that use more. This heuristic may work well in some cases, but it is not good for computation-intensive tasks. Are there alternatives? Could we measure actual resource usage and penalize those processes that use more than the necessary amount?

As for debugging, the kernel assigns each scheduler activation being debugged a logical processor. What happens if the debugging system itself becomes corrupted, since no upcalls into the user-level thread system will then be made?

Early in the paper the authors lay out the case against kernel-level thread management (1:1 threading). They chiefly complain about the costs incurred (crossing protection domains for thread management, the cost of a "general" scheduler) and state that these are inherent to the 1:1 model. Couldn't a proportion of these costs be artifacts of early 1:1 threading implementations?

BSD seems to be replacing scheduler activations with the model of implementing user threads over kernel threads (1:1 threading). What are the particular reasons for reverting to an older, and possibly slower, scheduling mechanism? Was some of the poor performance of kernel threads also a result of implementation issues?

- SAs have been implemented time and again for various systems (Mach, FreeBSD, NetBSD, Linux), but in each case they were eventually dropped. Are SAs just hard to get right, or is the problem more fundamental?

To deal with inopportune preemption, the paper proposes continuing execution of the user thread until it exits the critical section. I think this will decrease concurrency; is there a performance study of this issue? And are there other solutions?


The paper implies a one-to-one mapping of scheduler activations onto processors. In Section 3.1, the example says that in normal operation "threads can be created, run, and completed, all without kernel intervention". If the user-level application creates a thread during its execution, doesn't it need to request another processor (because of the one-to-one mapping)? Doesn't that mean any user-level application using more than one thread does require kernel intervention, because the kernel needs to assign a new scheduler activation for the new processor?

The paper says (in Section 3.2) that when a user-level application notifies the kernel that it has idle processors, the kernel leaves them assigned to the application if they are not needed elsewhere. Wouldn't putting them back on the free list be more efficient, given the probability that a different, or a new, user-level application will need those processors?