Scheduler Activations: Effective Kernel Support for the User-level Management of Parallelism Thomas E. Anderson, Brian N. Bershad, Edward D. Lazowska, and Henry M. Levy

Presentation transcript:

Scheduler Activations: Effective Kernel Support for the User-level Management of Parallelism
Thomas E. Anderson, Brian N. Bershad, Edward D. Lazowska, and Henry M. Levy
Presented by Reinette Grobler

Managing Concurrency Using Threads
User-level library
–Management in the application's address space
–High performance and very flexible
–Lacks functionality
Operating system kernel
–Poor performance (compared to user-level threads)
–Poor flexibility
–High functionality
New system: a kernel interface combined with a user-level thread package
–Same functionality as kernel threads
–Performance and flexibility of user-level threads

User-level Threads
Thread management routines linked into the application
No kernel intervention == high performance
Supports customized scheduling algorithms == flexible
(Virtual) processor blocked during system services == lack of functionality
–I/O, page faults, and multiprogramming cause the entire process to block
[Figure: user space vs. kernel space — the kernel keeps only a process table; each process ("virtual processor") holds its own threads, thread table, and runtime system]
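To make the performance point concrete, here is a minimal sketch of a cooperative user-level thread package, assuming the (obsolescent but still widely available) POSIX ucontext(3) interface; the names uthread_create and uthread_yield are illustrative, not taken from the paper or from FastThreads. Creating and switching threads is ordinary user-space bookkeeping, with no kernel trap anywhere.

```c
/* Minimal cooperative user-level threads: a sketch using POSIX ucontext(3).
 * All names are illustrative. The point: creating and switching threads never
 * enters the kernel, which is where the performance advantage comes from.
 * (Thread exit and stack reclamation are omitted in this sketch.) */
#include <stdlib.h>
#include <ucontext.h>

#define MAX_THREADS 16
#define STACK_SIZE  (64 * 1024)

static ucontext_t contexts[MAX_THREADS];   /* per-thread saved register state */
static int nthreads = 1;                   /* slot 0 is the main thread       */
static int current = 0;

void uthread_create(void (*fn)(void)) {
    int id = nthreads++;
    getcontext(&contexts[id]);                         /* initialize the context    */
    contexts[id].uc_stack.ss_sp = malloc(STACK_SIZE);  /* user-allocated stack      */
    contexts[id].uc_stack.ss_size = STACK_SIZE;
    contexts[id].uc_link = &contexts[0];               /* return to main when done  */
    makecontext(&contexts[id], fn, 0);
}

void uthread_yield(void) {
    int prev = current;
    current = (current + 1) % nthreads;                /* trivial round-robin policy */
    swapcontext(&contexts[prev], &contexts[current]);  /* pure user-level switch     */
}
```

The flip side is visible here too: if any of these threads makes a blocking system call or takes a page fault, the kernel blocks the only entity it knows about, the whole process, and every other user-level thread stops with it.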

Kernel Threads
No system integration problems (system calls can block) == high functionality
Extra kernel trap, plus copying and checking of all parameters, on every thread operation == poor performance
The kernel schedules threads from the same or another address space (process)
Single, general-purpose scheduling algorithm == lack of flexibility
[Figure: user space vs. kernel space — the kernel keeps the process table and the thread table; threads belong to processes but are scheduled by the kernel]
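For contrast, a sketch of the kernel-thread model using POSIX pthreads (which on Linux map 1:1 onto kernel threads): the kernel can block one thread in read() while its sibling keeps running, but every create and join pays for a trap into the kernel.

```c
/* Kernel threads via POSIX pthreads: a sketch of the functionality/performance
 * trade-off. A thread blocked in a system call does not stall its siblings,
 * because the kernel schedules each thread individually -- but creation,
 * joining and synchronization all cross the kernel boundary. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *blocking_worker(void *arg) {
    (void)arg;
    char buf[64];
    ssize_t n = read(STDIN_FILENO, buf, sizeof buf);  /* blocks only this thread */
    printf("blocking worker read %zd bytes\n", n);
    return NULL;
}

static void *busy_worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 5; i++) {
        printf("busy worker still running: %d\n", i);  /* keeps making progress */
        sleep(1);
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, blocking_worker, NULL);  /* kernel trap */
    pthread_create(&b, NULL, busy_worker, NULL);      /* kernel trap */
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}
```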

Kernel Threads Supporting User-level Threads
Question: Can we achieve system integration by implementing user-level threads on top of kernel threads?
Typically one kernel thread per processor (virtual processor)
–When a user-level thread blocks, so does its kernel thread: the processor sits idle
–Adding more kernel threads implicitly hands scheduling of user-level threads back to the kernel
–Increasing communication between the kernel and user level negates the performance and flexibility advantages of user-level threads
Answer: No
Also:
–No dynamic reallocation of processors among address spaces
–Cannot ensure logical correctness of a user-level thread system built on top of kernel threads

New Thread Implementation
Goal: a system with the functionality of kernel threads and the performance and flexibility of user-level threads.
–High performance without system calls.
–A blocked thread frees its processor for another thread from the same or a different address space.
–No high-priority thread waits for a processor while a low-priority thread runs.
–Application-customizable scheduling.
–No idle processor in the presence of ready threads.
Challenge: control and scheduling information is distributed between the kernel and the application's address space.

Virtual Multiprocessor
The application knows how many processors the kernel has allocated to it, and which ones.
The application has complete control over which threads are running on those processors.
The kernel notifies the thread scheduler of events affecting the address space.
The thread scheduler notifies the kernel about its processor-allocation needs.
[Figure: user space vs. kernel space — the address space is presented as a "virtual multiprocessor"]
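A sketch of the bookkeeping such a user-level scheduler might keep, with illustrative names (the paper does not prescribe these structures): because this state lives entirely in the application's address space, the application alone decides which thread runs on each processor it currently holds.

```c
/* Sketch of per-address-space state for the "virtual multiprocessor".
 * All names are illustrative assumptions, not the paper's data structures. */
#include <stdbool.h>

#define MAX_PROCS 64

struct uthread;                        /* user-level thread control block (defined elsewhere) */

struct vproc {
    bool            allocated;         /* has the kernel currently given us this processor? */
    struct uthread *running;           /* which user-level thread we chose to run on it     */
};

struct virtual_multiprocessor {
    struct vproc    procs[MAX_PROCS];  /* what the kernel has allocated to us               */
    int             nprocs;            /* how many processors we hold right now             */
    struct uthread *ready_head;        /* application-managed ready queue                   */
};
```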

Scheduler Activations
Vessels for running user-level threads.
One scheduler activation per processor assigned to the address space.
Also created by the kernel to perform upcalls into the application's address space:
–"Scheduler activation has blocked"
–"Scheduler activation has unblocked"
–"Add this processor"
–"Processor has been preempted"
Result: scheduling decisions are made at user level, and the application is free to build any concurrency model on top of scheduler activations.
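The four events above come from the design; the C rendering below is only a sketch of how the user-level runtime might dispatch them, with assumed helper names (ready_enqueue, run_on_current_processor, and so on). Each upcall arrives on a fresh scheduler activation running in the application's address space, and the handler's job is to decide what that processor should run next.

```c
/* Sketch of a user-level handler for the kernel's upcalls. Only the events
 * themselves come from the design; the enum, function names and parameters
 * are illustrative assumptions. */

enum upcall_event {
    ACTIVATION_BLOCKED,      /* "scheduler activation has blocked" (e.g. I/O, page fault) */
    ACTIVATION_UNBLOCKED,    /* "scheduler activation has unblocked"                      */
    PROCESSOR_ADDED,         /* "add this processor"                                      */
    PROCESSOR_PREEMPTED      /* "processor has been preempted"                            */
};

struct uthread;                               /* user-level thread control block  */
extern void ready_enqueue(struct uthread *);  /* assumed runtime helpers          */
extern struct uthread *ready_dequeue(void);
extern void run_on_current_processor(struct uthread *);
extern void idle_or_return_processor(void);

/* Invoked by the kernel via a new scheduler activation; 'affected' is the
 * user-level thread whose state the kernel is reporting on, if any. */
void upcall(enum upcall_event ev, struct uthread *affected)
{
    switch (ev) {
    case ACTIVATION_BLOCKED:
        /* The blocked thread's processor is free; nothing to enqueue yet. */
        break;
    case ACTIVATION_UNBLOCKED:
    case PROCESSOR_PREEMPTED:
        /* The unblocked or preempted thread is runnable again at user level. */
        if (affected) ready_enqueue(affected);
        break;
    case PROCESSOR_ADDED:
        /* We simply gained a processor to schedule onto. */
        break;
    }

    /* The application, not the kernel, picks what this processor runs next. */
    struct uthread *next = ready_dequeue();
    if (next) run_on_current_processor(next);
    else      idle_or_return_processor();
}
```

The real protocol is a bit richer: an upcall that interrupts a running processor also hands back the user-level state of whatever that processor was executing, so no thread context is ever lost; the sketch glosses over that detail.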

Scheduler activations (2)

New Thread Implementation
Goal: a system with the functionality of kernel threads and the performance and flexibility of user-level threads.
–High performance without system calls.
–No high-priority thread waits for a processor while a low-priority thread runs.
–A blocked thread frees its processor for another thread from the same or a different address space.
–Application-customizable scheduling.
–✓ No idle processor in the presence of ready threads.
Challenge: control and scheduling information is distributed between the kernel and the application's address space.

Processor Allocation
Goal: no idle processor in the presence of runnable threads.
Processor allocation is based on the available parallelism in each address space.
The kernel is notified when:
–the user level has more runnable threads than processors
–the user level has more processors than runnable threads
The kernel uses these notifications as hints for its actual processor allocation.
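In the other direction, a sketch of when the user-level scheduler might issue those two notifications; the wrapper names stand in for whatever system-call interface the kernel exposes and are assumptions. The point is the comparison of runnable threads against held processors, and that the kernel may treat both calls purely as hints.

```c
/* Sketch of the user-to-kernel notifications. The two sys_* calls are assumed
 * wrappers, not a real API; the policy of comparing work against processors
 * is what matters. */

extern int  runnable_thread_count(void);       /* assumed runtime helpers                  */
extern int  held_processor_count(void);
extern void sys_request_processors(int more);  /* hint: we could use 'more' extra CPUs     */
extern void sys_release_processor(void);       /* hint: this CPU is idle, take it back     */

void reconsider_parallelism(void)
{
    int threads = runnable_thread_count();
    int procs   = held_processor_count();

    if (threads > procs)
        sys_request_processors(threads - procs);   /* more work than processors */
    else if (threads < procs)
        sys_release_processor();                   /* more processors than work */

    /* The kernel is free to ignore either hint; they only inform its
     * space-sharing allocation of processors across address spaces. */
}
```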

Implementation
Scheduler activations added to Topaz kernel thread management:
–performs upcalls instead of doing its own scheduling
–explicit processor allocation to address spaces
Modifications to the FastThreads user-level thread package:
–processing of upcalls
–resuming interrupted critical sections (see the sketch below)
–passing processor-allocation information to Topaz
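One delicate case is an upcall that catches a thread inside the runtime's own critical section, for example while it holds the ready-list lock. The sketch below shows the simple idea of detecting this and letting the thread run just long enough to leave the critical section before it is re-queued; the paper's implementation achieves the same effect with a lower-overhead technique, so treat the flag and helpers here as illustrative assumptions.

```c
/* Sketch: deferring to a thread that was preempted or unblocked while inside
 * a runtime critical section. 'in_critical_section' and the helpers are
 * illustrative; the goal is simply to never leave runtime state locked by a
 * thread that is not running. */
#include <stdbool.h>

struct uthread {
    bool in_critical_section;   /* set/cleared by the runtime around its internal locks */
    /* ... saved registers, stack pointer, etc. ... */
};

extern void resume_until_section_exit(struct uthread *t);  /* assumed helper */
extern void ready_enqueue(struct uthread *t);

void recover_interrupted_thread(struct uthread *t)
{
    if (t->in_critical_section) {
        /* Run the thread just long enough to release the runtime lock,
         * then regain control and make it schedulable again. */
        resume_until_section_exit(t);
    }
    ready_enqueue(t);
}
```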

Performance
Thread performance without kernel involvement is similar to FastThreads before the changes.
Upcall performance is significantly worse than Topaz kernel threads:
–untuned implementation
–Topaz is written in assembler, this system in Modula-2+
Application performance:
–Negligible I/O: as fast as the original FastThreads.
–With I/O: performs better than either FastThreads or Topaz threads.

Application Performance (negligible I/O)

Application Performance (with I/O)