1 Threads Section 2.2

2 Introduction to threads
A thread (of execution) is a light-weight process.
–Threads reside within processes.
–They share one address space, which means they share data as well as files.
A process initially has one thread of execution.
–The initial thread can create multiple threads to accomplish distinct tasks (more on this soon).
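As a minimal sketch, not part of the slides and assuming POSIX threads, here is an initial thread creating a second thread within the same process; say_hello and its argument string are made up for the example (compile with -pthread):

```c
#include <pthread.h>
#include <stdio.h>

/* Task carried out by the newly created thread. */
static void *say_hello(void *arg) {
    printf("hello from a second thread: %s\n", (const char *)arg);
    return NULL;
}

int main(void) {
    pthread_t tid;

    /* The initial thread creates one more thread ... */
    pthread_create(&tid, NULL, say_hello, "shared address space");

    /* ... and both threads now run within the same process. */
    printf("hello from the initial thread\n");

    pthread_join(tid, NULL);   /* wait for the second thread to finish */
    return 0;
}
```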

3 The goals
The ability for multiple threads of execution to share a set of resources so they can work together to perform some larger task (pp. 82-83).
The use of threads allows each one to use blocking system calls without affecting the other threads of the same process.
–While one thread is blocked on I/O, another thread can execute within the same process.
–The effect is that the process completes execution more quickly.
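To make the blocking-call point concrete, here is a hedged POSIX-threads sketch (not from the slides) in which one thread blocks on a read of standard input while the initial thread keeps doing work:

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* This thread issues a blocking read; only it waits, not the whole process. */
static void *reader(void *arg) {
    (void)arg;
    char buf[64];
    ssize_t n = read(STDIN_FILENO, buf, sizeof buf);   /* blocks here */
    printf("reader got %zd bytes\n", n);
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, reader, NULL);

    /* Meanwhile the initial thread keeps doing useful work. */
    for (int i = 0; i < 5; i++) {
        printf("main thread still computing (%d)\n", i);
        sleep(1);
    }
    pthread_join(tid, NULL);
    return 0;
}
```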

4 Why use threads?
Many applications are composed of multiple distinct activities, and threads simplify the programming model for them.
–Examples: word processors, servers
Threads are
–more efficient to create and destroy than processes
–especially efficient in processes that have a good mix of CPU and I/O activity
–able to exploit the processing power of multiple CPUs

5 A multithreaded word processor
Three threads cooperate with each other: one communicates with the user, one reformats the document, and one performs auto-saves.

6 A multithreaded web server
(Figure: clients send requests for service to the server process.)

7 Rough outline of code for multithreaded web server
(Slide shows two code fragments: the dispatcher thread and a worker thread.)
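The slide's two code fragments are not reproduced in this transcript; the following is only a hedged reconstruction of the same dispatcher/worker structure using POSIX threads and a shared request queue, with get_next_request, process_request, and the queue sizes invented for the sketch:

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define QUEUE_SIZE 16
#define NUM_WORKERS 4

/* A shared queue of request ids, protected by a mutex/condition pair. */
static int queue[QUEUE_SIZE];
static int head = 0, tail = 0, count = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t work_ready = PTHREAD_COND_INITIALIZER;

/* Hypothetical stand-ins for the network and page-serving code. */
static int get_next_request(void) { static int id = 0; sleep(1); return id++; }
static void process_request(int id) { printf("worker served request %d\n", id); }

/* Dispatcher thread: accept requests and hand them off to workers. */
static void *dispatcher(void *arg) {
    (void)arg;
    for (;;) {
        int req = get_next_request();
        pthread_mutex_lock(&lock);
        if (count < QUEUE_SIZE) {          /* a real server would wait for space */
            queue[tail] = req;
            tail = (tail + 1) % QUEUE_SIZE;
            count++;
            pthread_cond_signal(&work_ready);
        }
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

/* Worker thread: wait for work, then serve the request. */
static void *worker(void *arg) {
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (count == 0)
            pthread_cond_wait(&work_ready, &lock);   /* block until work arrives */
        int req = queue[head];
        head = (head + 1) % QUEUE_SIZE;
        count--;
        pthread_mutex_unlock(&lock);
        process_request(req);                        /* may block on disk I/O */
    }
    return NULL;
}

int main(void) {
    pthread_t d, w[NUM_WORKERS];
    pthread_create(&d, NULL, dispatcher, NULL);
    for (int i = 0; i < NUM_WORKERS; i++)
        pthread_create(&w[i], NULL, worker, NULL);
    pthread_join(d, NULL);                           /* runs indefinitely */
    return 0;
}
```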

8 What threads offer ~ web server example ~
A single-threaded web server would leave the CPU idle much of the time.
–No other request could be served while the one thread is blocked waiting for I/O.
Multiple threads make it possible to achieve parallelism.
–This improves performance.

9 The Thread Model
(Figure: (a) three unrelated processes, each with one initial thread; (b) three related threads within one process.)

10 Contrast processes with threads
(Figure: a process groups related resources: program text, data, file resources, and its process status in the kernel; a thread is the executable entity, with its own stack, program counter (PC), and thread status.)

11 Contrast processes with threads
Items shared by all threads within a process: the address space, its data, and open files.
Items private to each thread: stack, program counter, and thread status.

12 Each thread has its own stack
Why does each thread need its own stack?
–Each thread calls and returns from its own sequence of procedures, so it needs its own stack frames to hold that call history and its local variables.

13 Procedures for manipulating threads
thread_create
–Issued by a thread wishing to create another thread
thread_exit
–Issued by a thread that is done executing
thread_wait
–Issued by a thread waiting for another thread to exit
thread_yield
–Issued by a thread voluntarily surrendering the CPU to another thread (no time-sharing within a process)
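The thread_* names above are generic. As an aside not stated on the slides, they correspond roughly to POSIX threads calls: thread_create to pthread_create, thread_exit to pthread_exit, thread_wait to pthread_join, and thread_yield to sched_yield. A small sketch under that assumption:

```c
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *child(void *arg) {
    (void)arg;
    printf("child thread running\n");
    sched_yield();            /* thread_yield: voluntarily give up the CPU */
    pthread_exit(NULL);       /* thread_exit: this thread is done */
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, child, NULL);  /* thread_create */
    pthread_join(tid, NULL);                  /* thread_wait: wait for the child to exit */
    printf("child has exited\n");
    return 0;
}
```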

14 Where are threads managed?
Either in user space or in the kernel; there are advantages and disadvantages to either approach.
Originally, no operating systems supported threads, so user-space libraries were developed to define threads packages.
–Today, both Windows and Linux offer kernel support for threads.

15 Implementing Threads in User Space

16 Each process maintains its own thread table.
If Thread A is running but must wait for Thread B to complete some work, we say that Thread A is "locally blocked".
–Thread A puts itself into a blocked state by saving its register contents in the process's thread table, searching the table for a ready thread to run, and reloading the CPU registers with the new thread's saved register contents. The new thread now begins executing.
–This takes just a few quick instructions, so it is very efficient.
If a thread is done running for the time being, it calls thread_yield.
–The code of thread_yield saves the thread's information in the thread table, and then calls the thread scheduler to pick another thread to run.
–Saving the thread's state and scheduling threads are accomplished by local procedures, and do not require a time-consuming system call.
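As a toy illustration of the switching described above (not from the slides, and assuming the POSIX ucontext API with getcontext, makecontext, and swapcontext is available), the sketch below keeps a tiny "thread table" of saved contexts and implements thread_yield as a local library routine; exit handling is omitted for brevity:

```c
#include <stdio.h>
#include <ucontext.h>

#define NTHREADS 2
#define STACK_SIZE (64 * 1024)

static ucontext_t main_ctx;
static ucontext_t thread_table[NTHREADS];   /* saved register state per thread */
static char stacks[NTHREADS][STACK_SIZE];   /* one stack per thread */
static int current = 0;

/* Save the caller's registers in the thread table and switch to the
 * next ready thread -- done with ordinary library calls, not a kernel
 * thread scheduler. */
static void thread_yield(void) {
    int prev = current;
    current = (current + 1) % NTHREADS;
    swapcontext(&thread_table[prev], &thread_table[current]);
}

static void worker(void) {
    for (int i = 0; i < 3; i++) {
        printf("thread %d, step %d\n", current, i);
        thread_yield();                      /* voluntarily give up the CPU */
    }
}

int main(void) {
    for (int t = 0; t < NTHREADS; t++) {
        getcontext(&thread_table[t]);
        thread_table[t].uc_stack.ss_sp = stacks[t];
        thread_table[t].uc_stack.ss_size = STACK_SIZE;
        thread_table[t].uc_link = &main_ctx; /* return here when a thread finishes */
        makecontext(&thread_table[t], worker, 0);
    }
    swapcontext(&main_ctx, &thread_table[0]);  /* start running thread 0 */
    return 0;
}
```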

17 More advantages of user-space thread management
Each process can implement a customized thread scheduling algorithm, because it knows the tasks its threads are performing.
Even if many processes generate many threads, each process maintains its own thread table, so a large thread table in the kernel is not a potential problem.

18 One big problem
How to handle blocking system calls?
–If a user-level thread makes a blocking system call, the kernel blocks the entire process, so there is no convenient way to ensure that the other threads within the same process can keep executing.
–One solution is the select system call, which tells a thread whether or not a read call would block, so the read can be issued only when it will not block.
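A minimal sketch (not from the slides) of how a threads package might use select to test whether a read would block; the helper name read_would_not_block is invented here:

```c
#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

/* Return 1 if read(fd, ...) would complete without blocking, else 0. */
int read_would_not_block(int fd) {
    fd_set readfds;
    struct timeval timeout = {0, 0};   /* poll: return immediately */

    FD_ZERO(&readfds);
    FD_SET(fd, &readfds);
    return select(fd + 1, &readfds, NULL, NULL, &timeout) > 0
           && FD_ISSET(fd, &readfds);
}

int main(void) {
    if (read_would_not_block(STDIN_FILENO)) {
        char buf[128];
        ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
        printf("read %zd bytes without blocking\n", n);
    } else {
        printf("a read on stdin would block right now\n");
    }
    return 0;
}
```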

19 Implementing Threads in the Kernel

20 The kernel maintains one thread table for all the threads in the system.
–When a thread wants to create, destroy or block another thread, it must make a system call, which is more time-consuming than the equivalent user-space operation.
When a thread blocks, the kernel chooses the next thread to run, either from the current process or from another process.
Because creating and destroying kernel threads is relatively costly, the kernel can also recycle threads to improve performance: a destroyed thread is marked as not runnable and its data structures are reused when a new thread is needed.

21 Attempts to combine both approaches
–Hybrid implementations (multiplexing user-level threads onto kernel threads)
–Scheduler activations

22 Pop-up threads
Useful in server processes.
–The arrival of a message (request) causes a brand-new thread to "pop up" and handle it, instead of waking up an existing blocked thread.
–Because the new thread has no saved state to restore, less overhead is involved.
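A hedged sketch of the pop-up idea using POSIX threads, where each arriving message gets a freshly created, detached thread; handle_message and the loop that simulates message arrival are invented for the example (compile with -pthread):

```c
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static void *handle_message(void *arg) {
    int msg = *(int *)arg;             /* the "incoming message" */
    printf("pop-up thread handling message %d\n", msg);
    free(arg);
    return NULL;                       /* detached: nobody joins this thread */
}

int main(void) {
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);

    for (int i = 0; i < 3; i++) {      /* pretend three messages arrive */
        int *msg = malloc(sizeof *msg);
        *msg = i;
        pthread_t tid;
        pthread_create(&tid, &attr, handle_message, msg);  /* one pop-up thread per message */
    }

    pthread_attr_destroy(&attr);
    pthread_exit(NULL);                /* let the pop-up threads finish before the process exits */
}
```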

