1
Jonas Johansson Summarizing presentation of Scheduler Activations – A different approach to parallelism
2
Contents
- Introduction
- The Problem
- The Approach
- Result
- Legacy
- Summary
- Conclusions and comments
3
Introduction – About the paper
- Written in 1991
- Authors: Anderson, Bershad, Lazowska, Levy (University of Washington, Seattle)
Summary:
- Describes drawbacks of the way parallelism was managed (as of 1991)
- Suggests a new solution ("scheduler activations")
- Compares performance (old methods vs. scheduler activations)
4
Introduction – Ambition of this presentation
Give an understanding of:
- the characteristics of kernel threads and user-level threads
- the scheduler activations approach
Tie this to topics covered in this course:
- Parallelism
- Lock-holding threads (e.g. mutex locks)
- I/O and blocking
5
Introduction - Threads review
- Several threads can run in each process (parallelism)
- Threads share the address space and system resources (e.g. files and terminals)
- Exchanging data between threads has less overhead than inter-process communication (thanks to the shared memory)
- Two types of threads: kernel threads and user-level threads
[Figure: a multithreaded process sharing its address space, files, and code]
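As an aside that is not from the paper, here is a minimal sketch of the shared-address-space point: two POSIX threads update the same counter directly, with no inter-process communication involved (on most modern systems pthreads map 1:1 onto kernel threads).

```c
/* Minimal sketch: two POSIX threads sharing one address space.
   Both threads update the same counter directly; no IPC is needed. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                       /* shared: lives in the common address space */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);             /* protect the shared data */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);        /* 200000: both threads saw the same memory */
    return 0;
}
```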
6
Introduction – Kernel and User-level threads
[Figure: applications at user level, the kernel at kernel level managing the system resources (CPU, memory, devices)]
7
Introduction – Kernel threads
[Figure: an application whose threads the kernel runs on CPU1 and CPU2]
- The kernel creates the threads
- The kernel is also responsible for thread scheduling
8
Introduction – Kernel threads
Kernel threads are supported directly by the operating system. The kernel handles:
- Thread creation
- Thread scheduling
- Thread management
9
Introduction – Kernel threads
Pros:
- The kernel controls all thread management -> good functionality
- No system integration problems, since the kernel manages the threads
Cons:
- Many kernel calls -> slow
- Modifying the thread system means modifying the kernel -> difficult
In short: + good functionality, – poor performance and flexibility
10
Introduction – User-level threads
[Figure: an application with a thread library at user level; its threads are mapped onto kernel threads (here one per processor) running on CPU1 and CPU2]
- Threads are created at user level
- Thread scheduling is done at user level
- The user-level threads are mapped onto kernel threads: "user-level threads are built on top of kernel threads"
11
Introduction – User-level threads
- Implemented by a thread library at user level
- Creation, scheduling and management of threads happen without support from the kernel
- Creation and management are fast
- Built on top of kernel threads
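A minimal sketch, not from the paper, of what "scheduling at user level" can look like: two cooperative tasks are switched with the (obsolescent but still widely available) POSIX <ucontext.h> calls, so the kernel never sees the switches.

```c
/* Minimal sketch: two cooperative user-level "threads" switched entirely in
   user space; the kernel only ever sees one thread of execution. */
#include <ucontext.h>
#include <stdio.h>

static ucontext_t main_ctx, a_ctx, b_ctx;
static char a_stack[64 * 1024], b_stack[64 * 1024];

static void task_a(void)
{
    printf("A: start\n");
    swapcontext(&a_ctx, &b_ctx);      /* user-level "yield" to B: no kernel scheduling involved */
    printf("A: resumed\n");
    swapcontext(&a_ctx, &main_ctx);   /* hand control back to main */
}

static void task_b(void)
{
    printf("B: start\n");
    swapcontext(&b_ctx, &a_ctx);      /* yield back to A */
}

int main(void)
{
    getcontext(&a_ctx);
    a_ctx.uc_stack.ss_sp = a_stack;
    a_ctx.uc_stack.ss_size = sizeof a_stack;
    a_ctx.uc_link = &main_ctx;
    makecontext(&a_ctx, task_a, 0);

    getcontext(&b_ctx);
    b_ctx.uc_stack.ss_sp = b_stack;
    b_ctx.uc_stack.ss_size = sizeof b_stack;
    b_ctx.uc_link = &main_ctx;
    makecontext(&b_ctx, task_b, 0);

    swapcontext(&main_ctx, &a_ctx);   /* start A; output: A: start, B: start, A: resumed */
    return 0;
}
```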
12
Introduction – User-level threads
Pros:
- Can be modified at user level -> flexible
- Fewer kernel calls -> fast
Cons:
- System integration problems
- I/O, page faults etc. are handled by the kernel and are not visible from the user level -> poor functionality
In short: + high performance and flexibility, – poor functionality
13
Introduction – Simplified
Kernel threads "work right" but perform poorly; user-level threads perform well but don't always "work right".
14
Problem
15
Problem – in brief ”Threads can be supported either at user level or in the kernel. Neither approach has been fully satisfactory.”
16
Problem – Simplified summary
Two main problems in traditional thread systems:
1. The user level is not informed of kernel events such as I/O blocking, so the processor is lost during the block.
2. Kernel threads are scheduled without regard to the user-level thread state, so lock-holding threads can be descheduled.
In other words: communication between the kernel and the user level is not good enough...
17
Approach
18
Approach – Scheduler activations
- A newly designed kernel interface
- A new user-level thread system
Note that this is still a user-level approach, but with a new kind of kernel support.
Scheduler activations ≈ a user-level thread system with better communication and interaction with the kernel.
19
Approach – What does what?
The user level (scheduler activations):
- Thread scheduling: decides the number of threads and when to execute each thread
- Communication: notifies the kernel when more or fewer processors are needed
The kernel:
- Processor allocation: decides how many processors an application is given
- Communication: notifies the user level of all the events that affect thread scheduling
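To make this division of labour concrete, here is a hedged sketch of the shape of the kernel interface the paper describes; the C names and types are hypothetical, not an existing OS API.

```c
/* Hypothetical sketch of the interface shape; not a real OS API. */

typedef struct {
    unsigned long regs[32];   /* hypothetical: saved registers, PC and SP of a user-level thread */
} machine_state_t;

/* Upcalls: kernel -> user-level thread system. Each upcall is delivered on a
   fresh scheduler activation (a virtual processor handed to the application),
   so the user-level scheduler always gets to run and handle the event. */
void upcall_add_processor(int processor);                        /* a new processor was allocated    */
void upcall_processor_preempted(int activation,
                                const machine_state_t *state);   /* what was running on it           */
void upcall_activation_blocked(int activation);                  /* its thread blocked in the kernel */
void upcall_activation_unblocked(int activation,
                                 const machine_state_t *state);  /* that thread can continue         */

/* Downcalls: user-level thread system -> kernel. */
void kernel_add_more_processors(int how_many);    /* more runnable threads than processors      */
void kernel_this_processor_is_idle(void);         /* nothing runnable: the kernel may reclaim it */
```

The upcalls and downcalls mirror the slide above: the kernel reports every event that affects scheduling, and the user level reports how many processors it actually needs.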
20
Approach – Why is it called a "scheduler activation"?
- The scheduler is activated whenever it needs to make a scheduling decision.
- The key point is communication between the kernel and the user level.
21
Approach – Illustration of startup
[Figure: the kernel assigns scheduler activations (virtual processors VCPU1, VCPU2) to the application's address space, backed by CPU1 and CPU2]
- An application is started.
- The kernel creates a scheduler activation and assigns it to a processor.
- The same processor is used to start running the application's first thread.
- As the program runs, more threads are created.
- When there are more threads than processors, a new scheduler activation is created...
22
Approach – Illustration of communication
[Figure: the application's virtual processors (VCPU1-VCPU3) and the physical CPUs (CPU1-CPU5), with messages flowing between user level and kernel level]
Examples of the messages exchanged:
- User level to kernel: "I want more CPUs", "This processor is idle"
- Kernel to user level: "This processor has been preempted", "Add this processor"
23
Problem – Recap of problem 1
The user level is not informed of kernel events such as I/O blocking, so the processor is lost during the block.
24
Approach – Blocking
With user-level threads:
- When a user-level thread blocks, its kernel thread also blocks.
- The physical processor is lost during the block.
With scheduler activations:
- The user level is informed about the block and can therefore run another thread on the processor.
- When the thread unblocks, the user level is informed and can choose which thread to schedule next.
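A hedged sketch, reusing the hypothetical names from the interface sketch earlier, of how a user-level scheduler might react to the blocking-related upcalls. The paper's actual protocol carries more detail (for example, an upcall also reports which activation was preempted in order to deliver it); this only illustrates the idea on the slide above.

```c
/* Hypothetical user-level scheduler reacting to blocking events.
   uthread_t and the helpers below are assumptions, not a real library. */
typedef struct uthread uthread_t;

extern uthread_t *lookup_thread(int activation);        /* activation -> the user thread that was on it */
extern uthread_t *dequeue_ready(void);                  /* NULL if the ready list is empty              */
extern void       enqueue_ready(uthread_t *t);
extern void       save_state(uthread_t *t, const machine_state_t *s);
extern void       run_on_this_processor(uthread_t *t);  /* switches to t; does not return               */

/* The thread on some activation blocked in the kernel (page fault, I/O, ...).
   This upcall arrives on a fresh activation, so the processor is NOT lost:
   the user level simply picks another ready thread for it. */
void upcall_activation_blocked(int activation)
{
    (void)activation;                        /* the blocked thread stays parked in the kernel */
    uthread_t *next = dequeue_ready();
    if (next)
        run_on_this_processor(next);
    kernel_this_processor_is_idle();         /* nothing runnable: hand the processor back */
}

/* A previously blocked thread can continue; the user level decides when and
   where it actually runs again. */
void upcall_activation_unblocked(int activation, const machine_state_t *state)
{
    uthread_t *t = lookup_thread(activation);
    save_state(t, state);                    /* stash the context the kernel handed back      */
    enqueue_ready(t);                        /* from now on it is just another ready thread   */
    run_on_this_processor(dequeue_ready());  /* run whatever the user-level policy picks next */
}
```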
25
Problem – Recap of problem 2
Kernel threads are scheduled without regard to the user-level thread state, so lock-holding threads can be descheduled.
26
Approach – Scheduling
With user-level threads:
- Kernel threads are time-sliced.
- A processor can sit idle waiting for a lock held by a thread that has been descheduled.
With scheduler activations, the user level:
- Knows when a thread is holding a lock
- Is responsible for all of the scheduling
- Will not deschedule lock holders
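A hedged sketch of one way a user-level thread library could keep track of lock holders so its scheduler never time-slices them; all names are hypothetical, and this is not taken from the paper or any particular library.

```c
/* Hypothetical user-level mutex that flags the running thread as a lock
   holder; the user-level scheduler checks the flag before descheduling. */
#include <stdatomic.h>
#include <stdbool.h>

typedef struct { atomic_flag taken; } ul_mutex_t;

extern _Thread_local bool in_critical_section;  /* consulted by the user-level scheduler         */
extern void ul_yield(void);                     /* hypothetical: switch to another user thread   */

static inline void ul_mutex_lock(ul_mutex_t *m)
{
    in_critical_section = true;                 /* "do not deschedule me": set before acquiring,
                                                    so there is no window after the lock is taken */
    while (atomic_flag_test_and_set_explicit(&m->taken, memory_order_acquire))
        ul_yield();                             /* contended: let other user-level threads run   */
}

static inline void ul_mutex_unlock(ul_mutex_t *m)
{
    atomic_flag_clear_explicit(&m->taken, memory_order_release);
    in_critical_section = false;                /* safe to time-slice this thread again          */
}
```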
27
Result
28
Result – Effects of scheduler activations
- Performance
- Functionality
- Flexibility
29
Result – Measuring performance
Performance was measured by running a parallel application with:
- Little use of kernel services (100% of memory available)
- Much use of kernel services (limited memory) – blocking...
- Two simultaneous applications (100% of memory) – lock-holding threads...
30
Result – Performance – Little use of kernel services
Legend: Topaz = kernel threads, orig = user-level threads, new = scheduler activations
- Almost no I/O
- Speedup is measured relative to a sequential implementation
- Scheduler activations perform as well as user-level threads
31
Result – Performance – Much use of kernel services
Legend: Topaz = kernel threads, orig = user-level threads, new = scheduler activations
- Much use of I/O: the working set stops fitting into memory at about 50%, so kernel services come into play
- Scheduler activations and kernel threads handle the I/O well
- With user-level threads, the physical processor is lost during I/O blocks
32
Result – Performance – Simultaneous applications
- Two copies of the same program run at the same time
- This also generates kernel events (system-induced events)
Performance difference:
- User-level threads: time slicing leaves processors idle
- Kernel threads: "expensive" thread operations
Speedup per thread system:
- Kernel threads: 1.29
- User-level threads: 1.26
- Scheduler activations: 2.45
33
Result – Effects of scheduler activations
- Performance: pass
- Functionality
- Flexibility
34
Result – Functionality
- Same functionality as kernel threads (even with I/O, page faults and multiprogramming)
- No processor is idle when there are ready threads
- When a thread blocks, the processor that was running that thread must be able to run another thread during the block
Verdict: pass
35
Result – Effects of scheduler activations
- Performance: pass
- Functionality: pass
- Flexibility
36
Result – Flexibility
Goal: simple application-specific customization
- The kernel is unaware of the scheduling policies
- All scheduling is decided at user level (by the programmer)
- The programmer can modify the policy for scheduling decisions relatively easily
Verdict: pass
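As a small, purely hypothetical illustration of what "the programmer can modify the policy" could mean in practice (none of these names come from the paper): because the scheduler lives in the application's thread library, swapping the policy is an ordinary user-level code change.

```c
/* Hypothetical hook: the thread library exposes its "pick next thread" decision. */
typedef struct uthread uthread_t;
typedef uthread_t *(*sched_policy_fn)(void);           /* returns the next ready thread to run */

extern uthread_t *ready_queue_pop_fifo(void);          /* hypothetical default policy          */
extern uthread_t *ready_queue_pop_max_priority(void);  /* hypothetical alternative             */
extern void ul_set_policy(sched_policy_fn pick_next);

void use_priority_scheduling(void)
{
    /* An application-specific choice made entirely at user level:
       no kernel change and no kernel call is involved. */
    ul_set_policy(ready_queue_pop_max_priority);
}
```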
37
Result – Effects of scheduler activations
- Performance: pass
- Functionality: pass
- Flexibility: pass
38
Results – Summary
The goals were met:
- Performance: equal to or better than user-level thread performance, with a great increase when dealing with I/O
- Functionality: equal to that of kernel threads
- Flexibility: programming an application using scheduler activations is similar to traditional multiprogramming
However...
39
Legacy
- Implementing scheduler activations requires modifying both user-space code and the kernel – hard!
- Implemented in the NetBSD kernel by Nathan Williams
- FreeBSD: the KSE project (a threading system similar to scheduler activations), later replaced with kernel threads
- Scheduler activations have been implemented mostly for research purposes
40
Summary – Comparison of thread systems
- User-level threads: + high performance (often very fast), – poor functionality (blocking etc.)
- Kernel threads: + good functionality ("works well"), – low performance (overhead)
- Scheduler activations: + high performance and good functionality, – hard to implement in an OS
41
Conclusion and comments
- Scheduler activations suffer from none of the weaknesses of user-level and kernel-level threads.
- A logical approach – it "makes sense".
- Still, the complex implementation makes it less attractive in practice than the traditional approaches.
- Implementation: though important, it is only briefly covered in the paper.
42
Jonas Johansson Summarizing presentation of Scheduler Activations – A different approach to parallelism