Chapter 4: Threads & Concurrency


Chapter 4: Threads & Concurrency

Chapter 4: Threads
Overview
Multicore Programming
Multithreading Models
Thread Libraries
Implicit Threading
Threading Issues
Operating System Examples

Objectives
Identify the basic components of a thread, and contrast threads and processes.
Describe the benefits and challenges of designing multithreaded applications.
Illustrate different approaches to implicit threading, including thread pools, fork-join, and Grand Central Dispatch.
Describe how the Windows and Linux operating systems represent threads.
Design multithreaded applications using the Pthreads, Java, and Windows threading APIs.

Motivation
A thread is a basic unit of CPU utilization; it comprises a thread ID, a program counter (PC), a register set, and a stack. It shares with the other threads of the same process its code section, data section, and other operating-system resources, such as open files and signals.
Traditionally, a process has a single thread of control, but most modern applications are multithreaded.
Threads run within an application/process; multiple tasks within the application can be implemented by separate threads, for example:
- updating the display
- fetching data
- spell checking
- answering a network request
Process creation is heavy-weight, while thread creation is light-weight. Multithreading can simplify code and increase efficiency.
Kernels are generally multithreaded.

Single and Multithreaded Processes

Multithreaded Server Architecture

Benefits
Responsiveness – a multithreaded application may continue executing even if part of it is blocked; this is especially important for user interfaces.
Resource sharing – processes can share resources only through explicit techniques such as shared memory and message passing, whereas threads share the memory and resources of the process to which they belong by default. No explicit arrangement is needed, and the different threads of activity live within the same address space.
Economy – creating and context-switching threads is more economical than creating and context-switching processes; thread switching typically has lower overhead than a full process context switch.
Scalability – a multithreaded process can take advantage of multicore architectures, where threads may run in parallel on different processing cores.

Multicore Programming: Challenges
Multicore and multiprocessor systems put pressure on programmers. Challenges include:
Dividing activities – find areas that can be divided into separate, concurrent tasks. Ideally, tasks are independent of one another and thus can run in parallel.
Balance – tasks should perform roughly equal work of equal value. A task that contributes little to the overall result may not be worth a separate execution core.
Data splitting – just as the application is divided into separate tasks, the data accessed and manipulated by those tasks must be divided to run on separate cores.
Data dependency – when one task depends on data from another, execution of the tasks must be synchronized to accommodate the dependency.
Testing and debugging – with many different execution paths possible, testing and debugging is more difficult than for single-threaded applications.

Multicore Programming
Parallelism implies a system can perform more than one task simultaneously (at the same time).
Concurrency means supporting more than one task making progress; on a single processor/core, the scheduler provides concurrency.
Thus, it is possible to have concurrency without parallelism: CPU schedulers were designed to provide the illusion of parallelism by rapidly switching between processes, allowing each process to make progress. Such processes run concurrently, but not in parallel.

Concurrency vs. Parallelism Concurrent execution on single-core system: Parallelism on a multi-core system:

Types of Parallelism

Multicore Programming: Types of Parallelism
Data parallelism – distributes subsets of the same data across multiple cores, with the same operation performed on each subset. Example: summing the contents of an array of size N, with each core summing its own portion.
Task parallelism – distributes threads across cores, each thread performing a unique operation; different threads may operate on the same data or on different data. Example: each core performing a different statistical operation on the same array of elements. The threads again operate in parallel on separate computing cores, but each performs a unique operation.
Fundamentally, then, data parallelism involves the distribution of data across multiple cores, and task parallelism involves the distribution of tasks across multiple cores.

Data and Task Parallelism

Amdahl’s Law
Identifies the potential performance gain from adding cores to an application that has both serial and parallel components.
If S is the serial portion and N the number of processing cores:
speedup ≤ 1 / (S + (1 − S) / N)
For example, if an application is 75% parallel / 25% serial, moving from 1 to 2 cores yields a speedup of 1.6.
As N approaches infinity, the speedup approaches 1 / S, so the serial portion of an application has a disproportionate effect on the performance gained by adding cores.
But does the law take contemporary multicore systems into account?

Amdahl’s Law

Multithreading Models

User Threads and Kernel Threads
User threads – management is done by a user-level threads library. Three primary thread libraries:
POSIX Pthreads
Windows threads
Java threads
Kernel threads – supported and managed by the kernel. Examples include virtually all contemporary general-purpose operating systems:
Windows
Linux
Mac OS X
iOS
Android

User and Kernel Threads Ultimately, a relationship must exist between user threads and kernel threads

Multithreading Models
Many-to-One: many user-level threads are mapped to one kernel-level thread.
One-to-One: each user-level thread is mapped to one kernel-level thread.
Many-to-Many: many user-level threads are mapped to many kernel-level threads.

Many-to-One
Many user-level threads are mapped to a single kernel thread.
Thread management is done by the thread library in user space.
The entire process blocks if a thread makes a blocking system call.
Multiple threads cannot run in parallel on a multicore system, because only one thread may be in the kernel at a time.
Few systems currently use this model. Examples: Solaris Green Threads, GNU Portable Threads.

One-to-One
Each user-level thread maps to a kernel thread, so creating a user-level thread creates the corresponding kernel thread. This is the model's drawback: a large number of kernel threads may burden the performance of a system, so the number of threads per process is sometimes restricted.
It provides more concurrency than many-to-one: another thread can run when a thread makes a blocking system call, and multiple threads can run in parallel on multiprocessors.
Examples: Windows, Linux.

Many-to-Many Model
Allows many user-level threads to be mapped to many kernel threads.
Allows the operating system to create a sufficient number of kernel threads; the number may be specific to a particular application or machine (e.g., an application may be allocated more kernel threads on a system with eight processing cores than on one with four).
Windows supports it with the ThreadFiber package; otherwise it is not very common.

Shortcomings
The many-to-one model allows the developer to create as many user threads as she wishes, but it does not yield parallelism.
The one-to-one model allows greater concurrency, but the developer must be careful not to create too many threads within an application; on some systems, she may be limited in the number of threads she can create.
The many-to-many model suffers from neither shortcoming: developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor. Also, when a thread performs a blocking system call, the kernel can schedule another thread for execution.

Two-level Model
One variation on the many-to-many model still multiplexes many user-level threads onto a smaller or equal number of kernel threads, but also allows a user-level thread to be bound to a kernel thread; this is called the two-level model.
However, with the increasing number of processing cores on most systems, limiting the number of kernel threads has become less important. As a result, most operating systems now use the one-to-one model.

Thread Libraries

Thread Libraries
A thread library provides the programmer with an API for creating and managing threads. There are two primary ways of implementing one:
Library entirely in user space – all code and data structures for the library exist in user space, so invoking a library function results in a local function call, not a system call.
Kernel-level library supported by the OS – code and data structures for the library exist in kernel space, and invoking a function in the library's API typically results in a system call to the kernel.
Examples of thread libraries:
POSIX Pthreads – may be provided as either a user-level or a kernel-level library.
Windows threads – a kernel-level library available on Windows systems.
Java threads – since the JVM typically runs on top of a host OS, the Java thread API is generally implemented using a thread library available on the host system.

Examples (This part is self-reading; you need to practice with Java threads and Pthreads in HW??) Note: you may stop by my office if you are stuck somewhere.

Pthreads
May be provided as either a user-level or a kernel-level library.
A POSIX standard (IEEE 1003.1c) API for thread creation and synchronization.
It is a specification, not an implementation (what is the difference?): the API specifies the interface and behavior of the thread library; the implementation is up to the developers of the library.
Common in UNIX operating systems: Linux, Mac OS X, Solaris.
The command $ ps -ef can be used to display the kernel threads on a running Linux system. Examining its output shows the kernel thread kthreadd (with pid = 2), which serves as the parent of all other kernel threads.

Pthreads Example

Pthreads Example (cont)

Pthreads Code for Joining 10 Threads

Windows Multithreaded C Program

Windows Multithreaded C Program (Cont.)

Java Threads
Java threads are managed by the JVM and typically implemented using the thread model provided by the underlying OS.
Java threads may be created by extending the Thread class or by implementing the Runnable interface; standard practice is to implement Runnable.

Java Threads Implementing Runnable interface: Creating a thread: Waiting on a thread:

Implicit Threading

Implicit Threading
Growing in popularity: as the number of threads increases, program correctness becomes more difficult to ensure with explicit threads.
With implicit threading, creation and management of threads is done by compilers and run-time libraries rather than by programmers. Application developers identify tasks, not threads, that can run in parallel. A task is usually written as a function, which the run-time library then maps to a separate thread; the library determines the specific details of thread creation and management.
Examples: thread pools, fork-join, OpenMP, Grand Central Dispatch, and Intel Threading Building Blocks.

Thread Pools
Create a number of threads in a pool, where they await work.
Advantages:
It is usually slightly faster to service a request with an existing thread than to create a new thread.
The number of threads in the application(s) is bound by the size of the pool.
Separating the task to be performed from the mechanics of creating it allows different strategies for running tasks; e.g., tasks could be scheduled to run periodically.
The Windows API also supports thread pools.

Threading Issues

Threading Issues / fork() and exec()
Semantics of the fork() and exec() system calls: if one thread in a program calls fork(), does the new process duplicate all threads, or is it single-threaded?
Some UNIX systems have two versions of fork(): one that duplicates all threads and one that duplicates only the thread that invoked fork(). Which version to use depends on the application.
If a thread invokes exec(), the program specified in the parameter to exec() replaces the entire process, including all threads.

Threading Issues / Signal handling
Signals are software interrupts used in UNIX systems to notify a process that a particular event has occurred.
A signal may be received either synchronously or asynchronously, depending on the source of and the reason for the event being signaled. All signals follow the same pattern:
A signal is generated by the occurrence of a particular event.
The signal is delivered to a process.
Once delivered, the signal must be handled.
Synchronous signals (e.g., an illegal memory access or division by zero) are delivered to the same process that performed the operation that caused the signal.
When a signal is generated by an event external to a running process (e.g., Ctrl+C), that process receives the signal asynchronously. Typically, an asynchronous signal is sent to another process.

Signal Handling (Cont.)
A signal handler is used to process signals:
The signal is generated by a particular event.
The signal is delivered to a process.
The signal is handled by one of two signal handlers: default or user-defined.
Every signal has a default handler that the kernel runs when handling the signal; a user-defined signal handler can override it.
In a single-threaded program, the signal is simply delivered to the process.

Signal Handling (Cont.)
Delivering signals is more complicated in multithreaded programs. Where should a signal be delivered?
To the thread to which the signal applies?
To every thread in the process?
To certain threads in the process?
To a specific thread assigned to receive all signals for the process?
Delivery depends on the type of signal generated: synchronous signals need to be delivered to the thread causing the signal and not to other threads in the process; the situation with asynchronous signals is not as clear.
The standard UNIX function for delivering a signal is kill(pid_t pid, int signal). Pthreads provides pthread_kill(pthread_t tid, int signal), which allows a signal to be delivered to a specified thread.
Although Windows does not explicitly provide support for signals, it allows them to be emulated using asynchronous procedure calls (APCs).

Thread Cancellation
Thread cancellation terminates a thread before it has finished; the thread to be canceled is the target thread. Two general approaches:
Asynchronous cancellation terminates the target thread immediately.
Deferred cancellation allows the target thread to periodically check whether it should be cancelled.
Pthread code to create and cancel a thread:

Thread Cancellation (Cont.)
Invoking thread cancellation only requests cancellation; actual cancellation depends on the state of the target thread.
If the thread has cancellation disabled, the cancellation remains pending until the thread enables it.
The default cancellation type is deferred: cancellation occurs only when the thread reaches a cancellation point, e.g., pthread_testcancel(); then a cleanup handler is invoked.
On Linux systems, thread cancellation is handled through signals.

Thread-Local Storage (TLS)
Threads belonging to a process share the data of the process. In some circumstances, however, each thread needs its own copy of certain data; thread-local storage (TLS) allows this. For example, in a transaction-processing system we might service each transaction in a separate thread, assign each transaction a unique identifier, and keep that identifier in thread-local storage.
TLS is useful when you do not have control over the thread creation process (e.g., when using a thread pool).
TLS differs from local variables: local variables are visible only during a single function invocation, while TLS data is visible across function invocations. TLS is similar to static data, except that it is unique to each thread.

Scheduler Activations
Both the many-to-many and two-level models require communication to maintain the appropriate number of kernel threads allocated to the application.
They typically use an intermediate data structure between user and kernel threads: the lightweight process (LWP). An LWP appears to the thread library as a virtual processor on which the process can schedule a user thread to run, and each LWP is attached to a kernel thread. How many LWPs should be created?
Scheduler activations provide upcalls, a communication mechanism from the kernel to the upcall handler in the thread library. This communication allows an application to maintain the correct number of kernel threads.

End of Chapter 4