Threads
Prepared and instructed by Shmuel Wimer, Engineering Faculty, Bar-Ilan University, July 2016
Thread Usage
Threads are a kind of process within a process, sharing the same address space and running as though they were separate processes. Threads are useful on systems with multiple CPUs, where real parallelism is possible, but also on a single CPU with multiple ALUs. In many applications multiple activities are going on at once, and some of them may block from time to time. 1. By decomposing such an application into multiple sequential threads that run in quasi-parallel, the programming model becomes simpler.
Instead of thinking about interrupts, timers, and context switches, we think about parallel processes with the ability to share an address space and its underlying data. 2. Threads are 10-100 times faster to create and destroy than processes. 3. Another argument is performance. Threads yield no performance gain when all of them are CPU bound, but when there is substantial computing and also substantial I/O, threads allow these activities to overlap, thus speeding up the application.
Example: a word processor. In a single-threaded design, commands from the keyboard and mouse would be ignored during auto-saving, until the disk backup finishes. Alternatively, keyboard and mouse events could interrupt the disk backup, giving good performance, but the resulting interrupt-driven programming model is complex. With two threads the programming model is much simpler: the first thread just interacts with the user, while the second thread periodically writes the document from RAM to disk. Having two separate processes would not work here, because both activities must operate on the document in a common address space.
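A minimal sketch of this two-thread structure, assuming POSIX threads. handle_user_input and save_document_to_disk are hypothetical placeholders for the word processor's real editing and backup code, and the one-minute auto-save interval is just an example.

/* Hypothetical sketch of the two-thread word processor (POSIX threads). */
#include <pthread.h>
#include <unistd.h>

static char document[1 << 20];                 /* shared in-memory document */
static pthread_mutex_t doc_lock = PTHREAD_MUTEX_INITIALIZER;

static void handle_user_input(void)      { document[0] = '\0'; /* placeholder edit */ }
static void save_document_to_disk(void)  { /* placeholder: write document to backup file */ }

static void *interactive_thread(void *arg)
{
    for (;;) {                                 /* first thread: interact with the user */
        pthread_mutex_lock(&doc_lock);
        handle_user_input();
        pthread_mutex_unlock(&doc_lock);
    }
    return NULL;
}

static void *backup_thread(void *arg)
{
    for (;;) {                                 /* second thread: periodic disk backup */
        sleep(60);                             /* e.g. auto-save once a minute */
        pthread_mutex_lock(&doc_lock);
        save_document_to_disk();
        pthread_mutex_unlock(&doc_lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, interactive_thread, NULL);
    pthread_create(&t2, NULL, backup_thread, NULL);
    pthread_join(t1, NULL);                    /* both threads share 'document' */
    pthread_join(t2, NULL);
    return 0;
}

The mutex is only there so the backup thread sees a consistent document; both threads operate on the same array in the shared address space, which is exactly why two separate processes would not do.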
Example: an application processing big data. With a single thread it reads in a block of data, processes it, and writes it out, blocking the process (CPU idle) while data are being read or written. With multiple threads, the process could be structured with an input thread, a processing thread, and an output thread. The input thread reads data into an input buffer. The processing thread takes data out of the input buffer, processes it, and puts the results in an output buffer. The output thread writes these results back to disk. In this way, input, output, and processing can all be going on at the same time.
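One possible sketch of this three-stage pipeline, again assuming POSIX threads. read_block, compute, and write_block are hypothetical stubs for the application's real I/O and processing, and the buffer size is arbitrary.

/* Sketch of the pipeline: input thread -> processing thread -> output thread. */
#include <pthread.h>

#define SLOTS 8

struct buffer {                                /* bounded buffer connecting two stages */
    int data[SLOTS];
    int head, tail, count;
    pthread_mutex_t lock;
    pthread_cond_t not_full, not_empty;
};

static void buf_init(struct buffer *b)
{
    b->head = b->tail = b->count = 0;
    pthread_mutex_init(&b->lock, NULL);
    pthread_cond_init(&b->not_full, NULL);
    pthread_cond_init(&b->not_empty, NULL);
}

static void buf_put(struct buffer *b, int v)   /* block while the buffer is full */
{
    pthread_mutex_lock(&b->lock);
    while (b->count == SLOTS)
        pthread_cond_wait(&b->not_full, &b->lock);
    b->data[b->tail] = v;
    b->tail = (b->tail + 1) % SLOTS;
    b->count++;
    pthread_cond_signal(&b->not_empty);
    pthread_mutex_unlock(&b->lock);
}

static int buf_get(struct buffer *b)           /* block while the buffer is empty */
{
    pthread_mutex_lock(&b->lock);
    while (b->count == 0)
        pthread_cond_wait(&b->not_empty, &b->lock);
    int v = b->data[b->head];
    b->head = (b->head + 1) % SLOTS;
    b->count--;
    pthread_cond_signal(&b->not_full);
    pthread_mutex_unlock(&b->lock);
    return v;
}

static struct buffer in_buf, out_buf;

static int  read_block(void)        { return 0; }      /* stub: read next block from disk */
static int  compute(int block)      { return block; }  /* stub: process one block          */
static void write_block(int block)  { (void)block; }   /* stub: write result back to disk  */

static void *input_thread(void *a)   { for (;;) buf_put(&in_buf, read_block()); }
static void *process_thread(void *a) { for (;;) buf_put(&out_buf, compute(buf_get(&in_buf))); }
static void *output_thread(void *a)  { for (;;) write_block(buf_get(&out_buf)); }

int main(void)
{
    pthread_t t[3];
    buf_init(&in_buf);
    buf_init(&out_buf);
    pthread_create(&t[0], NULL, input_thread, NULL);
    pthread_create(&t[1], NULL, process_thread, NULL);
    pthread_create(&t[2], NULL, output_thread, NULL);
    for (int i = 0; i < 3; i++)
        pthread_join(t[i], NULL);              /* input, processing and output overlap */
    return 0;
}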
Thread Modeling
Although a thread must execute in some process, the thread and its process are different concepts. A process groups related resources together: an address space, open files, child processes, pending alarms, signal handlers, accounting information, and more. A process also has a thread of execution, with a PC, registers, and a stack containing one frame for each procedure called but not yet returned from. What threads add to the process model is to allow multiple executions to take place in the same process environment.
Having multiple threads running in parallel in one process is analogous to having multiple processes running in parallel in one computer.
[Figure: a traditional process with a single thread vs. a multithreaded process]
Switching among multiple processes gives the illusion of separate sequential processes running in parallel. Multithreading works the same way: the CPU switches rapidly among the threads, giving the illusion that the threads are running in parallel, albeit each on a slower CPU. All threads have exactly the same address space, which means that they also share the same global variables; there is no protection between threads. All the threads also share the same set of open files, child processes, alarms, signals, and so on.
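A small sketch, assuming POSIX threads, of what "no protection" means in practice: two threads increment the same global counter without a lock, so updates can be lost.

/* Data race on a shared global: nothing stops the threads from corrupting it. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                       /* shared global, visible to all threads */

static void *worker(void *arg)
{
    for (int i = 0; i < 1000000; i++)
        counter++;                             /* unsynchronized read-modify-write */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld\n", counter);        /* often less than 2000000 */
    return 0;
}

Compiled with -pthread, this frequently prints a total below 2000000 because the increments from the two threads race with each other; the process itself provides no protection between its threads.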
Like a process, a thread can be in the running, blocked, ready, or terminated state. The transitions between thread states are the same as those between process states. A running thread currently has the CPU and is active. A blocked thread is waiting for some event to unblock it, e.g. a read from the keyboard. A ready thread is scheduled to run and will do so when its turn comes up. Each thread has its own stack, containing one frame for each procedure called but not yet returned from; the frame holds the procedure's local variables and the return address.
Thread creation and termination are similar to those of processes. A thread can create new threads by calling thread_create; the new thread runs in the address space of the creating thread. When a thread has finished its work, it exits by calling thread_exit, after which it vanishes and is no longer schedulable.
A thread can wait for another thread to exit by calling thread_join, which blocks the caller until the other thread exits. A thread can voluntarily give up the CPU to let another thread run by calling thread_yield. Threads also introduce programming complications. Consider the UNIX fork system call: does the child inherit all the threads of the parent? Their states? Their stacks? What happens if one thread closes a file while another one is still reading from it? What if two threads both notice that memory is short and each allocates more, doubling the memory allocation? There are many more questions…
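The calls above are generic. As a sketch, their rough POSIX (pthreads) counterparts are pthread_create, pthread_exit, pthread_join, and sched_yield; the worker's payload below is illustrative only.

/* thread_create / thread_exit / thread_join / thread_yield, pthreads style. */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *worker(void *arg)
{
    long id = (long)arg;
    printf("thread %ld running\n", id);
    sched_yield();                             /* voluntarily give up the CPU (thread_yield) */
    pthread_exit((void *)(id * 10));           /* finish and vanish (thread_exit) */
}

int main(void)
{
    pthread_t t;
    void *status;

    pthread_create(&t, NULL, worker, (void *)1L);   /* thread_create */
    pthread_join(t, &status);                       /* thread_join: block until it exits */
    printf("worker returned %ld\n", (long)status);
    return 0;
}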
User-Space Thread Implementation
There are two main places to implement threads: user space and the kernel. A hybrid implementation is also possible. With threads in user space, the kernel knows nothing about them: a user-level threads library runs on top of an OS whose kernel manages only ordinary single-threaded processes and does not support threads.
Each process needs its own private thread table to keep track of the threads in that process (analogous to the kernel's process table); it is managed by the run-time system. Thread switching is one to two orders of magnitude faster than the hardware-supported trap into the kernel: no trap is needed, no context switch, the memory cache need not be flushed, etc. User-space threads also scale better, since kernel threads invariably require some table space and stack space in the kernel, which can be a problem if there is a very large number of threads.
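As an illustration of how a run-time system can switch between user threads without the kernel's scheduler being involved, here is a minimal sketch using the glibc ucontext calls; a real user-level threads package would add a thread table, a scheduler, and many threads, and ucontext is only one possible mechanism for the switch.

/* Two "user threads" as ucontext contexts; switching is an ordinary library call. */
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, thread_ctx;
static char thread_stack[64 * 1024];           /* private stack for the user thread */

static void user_thread(void)
{
    printf("user thread: running\n");
    swapcontext(&thread_ctx, &main_ctx);       /* "yield" back to the run-time system */
    printf("user thread: resumed\n");
}

int main(void)
{
    getcontext(&thread_ctx);
    thread_ctx.uc_stack.ss_sp   = thread_stack;
    thread_ctx.uc_stack.ss_size = sizeof thread_stack;
    thread_ctx.uc_link          = &main_ctx;   /* return here when the thread finishes */
    makecontext(&thread_ctx, user_thread, 0);

    printf("run-time system: dispatching user thread\n");
    swapcontext(&main_ctx, &thread_ctx);       /* switch done by the library, not the kernel scheduler */
    printf("run-time system: got control back\n");
    swapcontext(&main_ctx, &thread_ctx);       /* resume the user thread */
    return 0;
}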
Problems: how are blocking system calls handled? The kernel is not aware of the other threads, so a thread that blocks waiting for I/O blocks the rest of the threads along with it. Changing how the OS handles blocking is possible but undesirable. It is, however, possible to tell in advance whether a call will block: the select system call allows the caller to tell whether a read would block, so the run-time system can switch to another thread instead (see the sketch below). Similarly, on a page fault the kernel, not being aware of the existence of threads, blocks the entire process until the disk I/O completes. All of this is a big burden for programmers.
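A sketch of the kind of check a user-level threads run-time could make with select before issuing a read, assuming a POSIX system. With a zero timeout, select reports immediately whether the read would block, and the run-time can then pick another thread to run instead of blocking the whole process.

/* Poll a file descriptor: would read() return without blocking? */
#include <sys/select.h>

/* Returns 1 if read(fd, ...) would not block, 0 otherwise. */
static int read_would_not_block(int fd)
{
    fd_set rfds;
    struct timeval tv = {0, 0};                /* zero timeout: poll, do not wait */

    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);
    return select(fd + 1, &rfds, NULL, NULL, &tv) > 0;
}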
Kernel-Space Thread Implementation
The kernel has a thread table that keeps track of all the threads in the system. When a thread wants to create or destroy threads, it makes a kernel call, which then creates or destroys the thread by updating the kernel thread table. Thread calls are implemented as system calls, at a considerably higher cost than a call to a run-time system procedure.
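As a concrete, Linux-specific illustration, creating a kernel-supported thread ultimately boils down to a system call. The sketch below uses the glibc clone wrapper with CLONE_VM so the child shares the caller's address space; real thread libraries pass more flags (CLONE_FILES, CLONE_SIGHAND, CLONE_THREAD, ...), so this is only an approximation of what they do.

/* Linux/glibc sketch: a kernel call creates the new thread of execution. */
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>

static int shared = 0;                         /* visible to both, since the VM is shared */

static int child_fn(void *arg)
{
    shared = 42;                               /* writes into the caller's address space */
    return 0;
}

int main(void)
{
    const size_t stack_size = 64 * 1024;
    char *stack = malloc(stack_size);          /* the child needs its own stack */

    /* The kernel call: create a new schedulable entity sharing our memory. */
    int pid = clone(child_fn, stack + stack_size, CLONE_VM | SIGCHLD, NULL);
    if (pid == -1) { perror("clone"); return 1; }

    waitpid(pid, NULL, 0);
    printf("shared = %d\n", shared);           /* prints 42: same address space */
    free(stack);
    return 0;
}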
When a thread blocks, the kernel, at its option, can run either another thread from the same process or a thread from a different process. With user-level threads, the run-time system keeps running threads from its own process until the kernel takes the CPU away from it or there are no ready threads left to run. Kernel threads do not require any new, non-blocking system calls. If a thread in a process causes a page fault, the kernel can easily run another runnable thread of that process while waiting for the page to be brought in.
Disadvantage: the cost of a system call is substantial, so thread creation and termination carry more overhead. Problems: what happens when a multithreaded process forks? Does the new process get copies of all the threads the old one has, or does it have just one? If it is going to call exec to start a new program, one thread is the correct choice (see the sketch below); if it continues to execute, reproducing all the threads is probably best. Also, signals are sent to processes, not to threads: when a signal comes in, which thread should handle it?
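A sketch of the fork-then-exec case mentioned above, assuming POSIX. In the child only the thread that called fork continues, so the safe pattern is to exec almost immediately; the program being exec'ed (ls) is just an example.

/* fork in a multithreaded program: the child has one thread and execs at once. */
#include <pthread.h>
#include <unistd.h>
#include <sys/wait.h>

static void *background(void *arg)
{
    for (;;) pause();                          /* some other thread doing its own work */
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, background, NULL);

    pid_t pid = fork();                        /* child starts with only the calling thread */
    if (pid == 0) {
        execlp("ls", "ls", "-l", (char *)NULL);  /* replace the image immediately */
        _exit(127);                            /* reached only if exec failed */
    }
    waitpid(pid, NULL, 0);                     /* parent keeps all of its threads */
    return 0;
}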
Threads may register interest in certain signals; when a signal comes in, it is given to the thread that registered for it. But what happens if several threads register for the same signal?
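One common way to realize this "registering interest" on a POSIX system is sketched below: the signal is blocked in every thread and one dedicated thread picks it up with sigwait. SIGUSR1 is just an example signal.

/* Direct a signal to the one thread that registered for it. */
#include <pthread.h>
#include <signal.h>
#include <stdio.h>

static sigset_t set;

static void *signal_thread(void *arg)
{
    int sig;
    sigwait(&set, &sig);                       /* this thread "registered" for SIGUSR1 */
    printf("handled signal %d in the registered thread\n", sig);
    return NULL;
}

int main(void)
{
    pthread_t t;

    sigemptyset(&set);
    sigaddset(&set, SIGUSR1);
    pthread_sigmask(SIG_BLOCK, &set, NULL);    /* mask is inherited by threads created later */

    pthread_create(&t, NULL, signal_thread, NULL);
    pthread_kill(t, SIGUSR1);                  /* deliver the signal to that thread */
    pthread_join(t, NULL);
    return 0;
}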