Chapter 4: Threads & Concurrency

1 Chapter 4: Threads & Concurrency

2 Chapter 4: Threads
Overview
Multicore Programming
Multithreading Models
Thread Libraries
Implicit Threading
Threading Issues
Operating System Examples

3 Objectives
Identify the basic components of a thread, and contrast threads and processes
Describe the benefits and challenges of designing multithreaded applications
Illustrate different approaches to implicit threading, including thread pools, fork-join, and Grand Central Dispatch
Describe how the Windows and Linux operating systems represent threads
Design multithreaded applications using the Pthreads, Java, and Windows threading APIs

4 Motivation
A thread is a basic unit of CPU utilization; it comprises a thread ID, a program counter (PC), a register set, and a stack. It shares with other threads belonging to the same process its code section, data section, and other operating-system resources, such as open files and signals.
Traditionally, a process has a single thread of control, but most modern applications are multithreaded
Threads run within an application/process
Multiple tasks within the application can be implemented by separate threads:
Update the display
Fetch data
Spell checking
Answer a network request
Process creation is heavy-weight, while thread creation is light-weight
Multithreading can simplify code and increase efficiency
Kernels are generally multithreaded

5 Single and Multithreaded Processes

6 Multithreaded Server Architecture

7 Benefits
Responsiveness – may allow continued execution if part of the process is blocked; especially important for user interfaces
Resource Sharing – threads share the memory and resources of the process to which they belong by default, whereas processes can share resources only through explicit techniques such as shared memory and message passing. No explicit arrangement is needed, since all threads of activity run within the same address space
Economy – thread creation is cheaper than process creation, and context switching is typically faster between threads of the same process than between processes
Scalability – a multithreaded process can take advantage of multicore architectures, where threads may run in parallel on different processing cores

8 Multicore Programming: Challenges
Multicore and multiprocessor systems put pressure on programmers; challenges include:
Dividing activities – find areas that can be divided into separate, concurrent tasks. Ideally, tasks are independent of one another and thus can run in parallel
Balance – tasks should perform roughly equal work of equal value. In some instances, a task may not contribute as much value to the overall process as other tasks, and using a separate execution core to run it may not be worth the cost
Data splitting – just as applications are divided into separate tasks, the data accessed and manipulated by the tasks must be divided to run on separate cores
Data dependency – when one task depends on data from another, programmers must ensure that execution of the tasks is synchronized to accommodate the dependency
Testing and debugging – with many different execution paths possible, testing and debugging are more difficult than for single-threaded applications

9 Multicore Programming
Parallelism implies a system can perform more than one task simultaneously (at the same time)
Concurrency supports more than one task making progress
On a single processor/core, the scheduler provides concurrency
Thus, it is possible to have concurrency without parallelism: CPU schedulers were designed to provide the illusion of parallelism by rapidly switching between processes, allowing each process to make progress. Such processes run concurrently, but not in parallel.

10 Concurrency vs. Parallelism
Concurrent execution on a single-core system:
Parallelism on a multi-core system:

11 Types of Parallelism

12 Multicore Programming: Types of parallelism
Data parallelism – distributes subsets of the same data across multiple cores, with the same operation performed on each subset. Example: summing the contents of an array of size N, with each core summing a different portion
Task parallelism – distributes threads across cores, with each thread performing a unique operation; different threads may operate on the same data or on different data. Example: each thread performs a different statistical operation on the same array of elements. The threads again operate in parallel on separate computing cores, but each performs a unique operation
Fundamentally, then, data parallelism involves the distribution of data across multiple cores, and task parallelism involves the distribution of tasks across multiple cores

13 Data and Task Parallelism

14 Amdahl's Law
Identifies potential performance gains from adding additional cores to an application that has both serial and parallel components
If S is the serial portion and N is the number of processing cores, the law states:
speedup <= 1 / (S + (1 - S) / N)
That is, if an application is 75% parallel / 25% serial, moving from 1 to 2 cores results in a speedup of 1 / (0.25 + 0.75/2) = 1.6 times
As N approaches infinity, speedup approaches 1 / S
The serial portion of an application therefore has a disproportionate effect on the performance gained by adding additional cores
But does the law take into account contemporary multicore systems?

15 Amdahl’s Law

16 Multithreading Models

17 User Threads and Kernel Threads
User threads – management done by a user-level threads library
Three primary thread libraries:
POSIX Pthreads
Windows threads
Java threads
Kernel threads – supported and managed by the kernel
Examples – virtually all contemporary general-purpose operating systems, including:
Windows
Linux
Mac OS X
iOS
Android

18 User and Kernel Threads
Ultimately, a relationship must exist between user threads and kernel threads

19 Multithreading Models
Many-to-One – many user-level threads are mapped to one kernel-level thread
One-to-One – each user-level thread is mapped to one kernel-level thread
Many-to-Many – many user-level threads are mapped to many kernel-level threads

20 Many-to-One
Many user-level threads are mapped to a single kernel thread
Thread management is done by the thread library in user space
The entire process will block if a thread makes a blocking system call
Multiple threads cannot run in parallel on a multicore system, because only one may be in the kernel at a time
Few systems currently use this model
Examples:
Solaris Green Threads
GNU Portable Threads

21 One-to-One
Each user-level thread maps to a kernel thread
Creating a user-level thread creates the corresponding kernel thread; this is a drawback of the model, since a large number of kernel threads may burden the performance of a system
More concurrency than many-to-one: another thread can run when a thread makes a blocking system call, and multiple threads can run in parallel on multiprocessors
The number of threads per process is sometimes restricted due to this overhead
Examples:
Windows
Linux

22 Many-to-Many Model
Allows many user-level threads to be mapped to many kernel threads
Allows the operating system to create a sufficient number of kernel threads
The number of kernel threads may be specific to either a particular application or a particular machine; an application may be allocated more kernel threads on a system with eight processing cores than on a system with four
Windows supports this model with the ThreadFiber package
Otherwise not very common

23 Shortcomings
Whereas the many-to-one model allows the developer to create as many user threads as she wishes, it does not result in parallelism
The one-to-one model allows greater concurrency, but the developer has to be careful not to create too many threads within an application; in fact, on some systems, she may be limited in the number of threads she can create
The many-to-many model suffers from neither of these shortcomings: developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor. Also, when a thread performs a blocking system call, the kernel can schedule another thread for execution

24 Two-level Model
Similar to M:M, except that it also allows a user thread to be bound to a kernel thread
This variation on the many-to-many model still multiplexes many user-level threads onto a smaller or equal number of kernel threads, but additionally allows a user-level thread to be bound to a kernel thread; this is called the two-level model
However, with an increasing number of processing cores appearing on most systems, limiting the number of kernel threads has become less important. As a result, most operating systems now use the one-to-one model

25 Thread Libraries

26 Thread Libraries
A thread library provides the programmer with an API for creating and managing threads
Two primary ways of implementing a thread library:
Library entirely in user space – all code and data structures for the library exist in user space. Invoking a function in the library results in a local function call in user space, not a system call
Kernel-level library supported by the OS – code and data structures for the library exist in kernel space. Invoking a function in the API for the library typically results in a system call to the kernel
Examples of thread libraries:
POSIX Pthreads – may be provided as either a user-level or a kernel-level library
Windows threads – a kernel-level library available on Windows systems
Java threads – since the JVM typically runs on top of a host OS, the Java thread API is generally implemented using a thread library available on the host system

27 Examples (This part is self-reading, you need to practice with Java threads and Pthreads in HW??) Note: you may stop by my office if you are stuck somewhere

28 Pthreads
May be provided as either a user-level or a kernel-level library
A POSIX standard (IEEE 1003.1c) API for thread creation and synchronization
A specification, not an implementation (what is the difference?)
The API specifies the interface and behavior of the thread library; the implementation is up to the developers of the library
Common in UNIX operating systems:
Linux & Mac OS X
Solaris
The command $ ps -ef can be used to display the kernel threads on a running Linux system. Examining the output of this command will show the kernel thread kthreadd (with pid = 2), which serves as the parent of all other kernel threads.

29 Pthreads Example

30 Pthreads Example (cont)

31 Pthreads Code for Joining 10 Threads

32 Windows Multithreaded C Program

33 Windows Multithreaded C Program (Cont.)

34 Java Threads
Java threads are managed by the JVM
Typically implemented using the threads model provided by the underlying OS
Java threads may be created by:
Extending the Thread class
Implementing the Runnable interface
Standard practice is to implement the Runnable interface

35 Java Threads
Implementing the Runnable interface:
Creating a thread:
Waiting on a thread:

36 Implicit Threading

37 Implicit Threading
Growing in popularity as the number of threads increases; program correctness is more difficult to ensure with explicit threads
Creation and management of threads is done by compilers and run-time libraries rather than by programmers
Application developers identify tasks (not threads) that can run in parallel
A task is usually written as a function, which the run-time library then maps to a separate thread
Developers only need to identify parallel tasks; the libraries determine the specific details of thread creation and management
Examples: thread pools, fork-join, OpenMP, Grand Central Dispatch, and Intel Threading Building Blocks

38 Thread Pools
Create a number of threads in a pool, where they sit and wait for work
Advantages:
Usually slightly faster to service a request with an existing thread than to create a new thread
Allows the number of threads in the application(s) to be bound to the size of the pool
Separating the task to be performed from the mechanics of creating the task allows different strategies for running the task, e.g. tasks could be scheduled to run periodically
The Windows API supports thread pools

39 Threading Issues

40 Threading Issues / fork() and exec()
Semantics of the fork() and exec() system calls
If one thread in a program calls fork(), does the new process duplicate all threads, or is the new process single-threaded?
Some UNIX systems have two versions of fork(): one that duplicates all threads and another that duplicates only the thread that invoked fork(). Which version to use depends on the application
If a thread invokes the exec() system call, the program specified in the parameter to exec() replaces the entire process, including all threads

41 Threading Issues / Signal handling
Signals are software interrupts used in UNIX systems to notify a process that a particular event has occurred
A signal may be received either synchronously or asynchronously, depending on the source of and the reason for the event being signaled
All signals, whether synchronous or asynchronous, follow the same pattern:
A signal is generated by the occurrence of a particular event
The signal is delivered to a process
Once delivered, the signal must be handled
Examples of synchronous signals: illegal memory access and division by zero. Synchronous signals are delivered to the same process that performed the operation that caused the signal
When a signal is generated by an event external to a running process (e.g. Ctrl+C), that process receives the signal asynchronously. Typically, an asynchronous signal is sent to another process

42 Signal Handling (Cont.)
Signals are used in UNIX systems to notify a process that a particular event has occurred. A signal handler is used to process signals:
A signal is generated by a particular event
The signal is delivered to a process
The signal is handled by one of two signal handlers: default or user-defined
Every signal has a default handler that the kernel runs when handling the signal
A user-defined signal handler can override the default
In a single-threaded program, signals are delivered to the process

43 Signal Handling (Cont.)
Delivering signals is more complicated in multithreaded programs. Where should a signal be delivered?
Deliver the signal to the thread to which the signal applies?
Deliver the signal to every thread in the process?
Deliver the signal to certain threads in the process?
Assign a specific thread to receive all signals for the process?
The method for delivering a signal depends on the type of signal generated:
Synchronous signals need to be delivered to the thread causing the signal and not to other threads in the process
The situation with asynchronous signals is not as clear
The standard UNIX function for delivering a signal is kill(pid_t pid, int signal)
Pthreads provides the following function, which allows a signal to be delivered to a specified thread (tid): pthread_kill(pthread_t tid, int signal)
Although Windows does not explicitly provide support for signals, it allows us to emulate them using asynchronous procedure calls (APCs)

44 Thread Cancellation
Terminating a thread before it has finished
The thread to be canceled is the target thread
Two general approaches:
Asynchronous cancellation terminates the target thread immediately
Deferred cancellation allows the target thread to periodically check whether it should be cancelled
Pthreads code to create and cancel a thread:

45 Thread Cancellation (Cont.)
Invoking thread cancellation requests cancellation, but actual cancellation depends on the thread's state
If the thread has cancellation disabled, the cancellation remains pending until the thread enables it
The default cancellation type is deferred: cancellation occurs only when the thread reaches a cancellation point, e.g. pthread_testcancel()
Then a cleanup handler is invoked
On Linux systems, thread cancellation is handled through signals

46 Thread-Local Storage (TLS)
Threads belonging to a process share the data of the process
However, in some circumstances, each thread might need its own copy of certain data; thread-local storage (TLS) allows each thread to have its own copy
For example, in a transaction-processing system, we might service each transaction in a separate thread, and each transaction might be assigned a unique identifier; we could use thread-local storage for the identifier
Useful when you do not have control over the thread creation process (i.e., when using a thread pool)
TLS is different from local variables:
Local variables are visible only during a single function invocation
TLS is visible across function invocations
TLS is similar to static data, except that TLS data is unique to each thread

47 Scheduler Activations
Both the M:M and two-level models require communication to maintain the appropriate number of kernel threads allocated to the application
Typically an intermediate data structure between user and kernel threads is used: the lightweight process (LWP)
An LWP appears to be a virtual processor on which the process can schedule a user thread to run
Each LWP is attached to a kernel thread
How many LWPs should be created?
Scheduler activations provide upcalls, a communication mechanism from the kernel to the upcall handler in the thread library
This communication allows an application to maintain the correct number of kernel threads

48 End of Chapter 4

