Lecture 14: Pthreads Mutex and Condition Variables

Review: Mutual Exclusion
- Software solutions
  - Disabling interrupts
  - Strict alternation
  - Peterson's solution
- Hardware solution
  - TSL/XCHG
- Semaphores

Review: Semaphore Implementation

down(&S):
- If S == 0: suspend the thread, put it into a waiting queue, and schedule another thread to run
- Else: decrement S and return

up(&S):
- Increment S
- If any threads are in the waiting queue, release one of them (make it 'ready')

Both operations are performed atomically, by disabling interrupts or by using TSL/XCHG.
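
The down()/up() operations reviewed above are exposed in POSIX as sem_wait()/sem_post() from <semaphore.h>. The following small usage sketch is added here for reference (it is not slide code); the counting semaphore limits how many threads are inside the region at once. Compile with: gcc sem_demo.c -pthread

    #include <semaphore.h>
    #include <pthread.h>
    #include <stdio.h>

    sem_t slots;                     /* counting semaphore */

    void *worker(void *arg)
    {
        sem_wait(&slots);            /* down(): blocks while the count is 0 */
        printf("thread %ld entered\n", (long)arg);
        sem_post(&slots);            /* up(): increment, wake a waiter if any */
        return NULL;
    }

    int main(void)
    {
        pthread_t t[4];
        sem_init(&slots, 0, 2);      /* at most 2 threads inside at once */
        for (long i = 0; i < 4; i++)
            pthread_create(&t[i], NULL, worker, (void *)i);
        for (int i = 0; i < 4; i++)
            pthread_join(t[i], NULL);
        sem_destroy(&slots);
        return 0;
    }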

In this lecture
- Pthreads APIs
- Mutex
- Condition variables

Some Pthreads APIs for mutex
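
The slide's table of mutex APIs is not reproduced in the transcript; as a stand-in, these are the standard POSIX mutex calls (declared in <pthread.h>) that it presumably listed:

    #include <pthread.h>

    int pthread_mutex_init(pthread_mutex_t *mutex, const pthread_mutexattr_t *attr);
    int pthread_mutex_lock(pthread_mutex_t *mutex);      /* block until the lock is acquired */
    int pthread_mutex_trylock(pthread_mutex_t *mutex);   /* return EBUSY instead of blocking */
    int pthread_mutex_unlock(pthread_mutex_t *mutex);
    int pthread_mutex_destroy(pthread_mutex_t *mutex);

    /* Static initialization is also possible:
       pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;    */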

Mutex usage in POSIX Threads

    pthread_mutex_t m;
    pthread_mutex_init(&m, NULL);    /* the attributes argument was missing on the slide; NULL selects defaults */

    pthread_mutex_lock(&m);
    critical_region();
    pthread_mutex_unlock(&m);

To solve our synchronization problem, we introduce mutexes, a synchronization construct providing mutual exclusion. A mutex is used to ensure either that only one thread is executing a particular piece of code at a time (code locking) or that only one thread is accessing a particular data structure at a time (data locking). A mutex belongs either to a particular thread or to no thread (i.e., it is either locked or unlocked). A thread may lock a mutex by calling pthread_mutex_lock. If no other thread has the mutex locked, the calling thread obtains the lock and returns; otherwise it waits until no other thread holds the mutex, and finally returns with the mutex locked. There may, of course, be multiple threads waiting for the mutex to be unlocked. Only one thread can hold the mutex at a time; there is no specified order for which waiter gets the mutex next, though the ordering is assumed to be at least somewhat "fair."

To unlock a mutex, a thread calls pthread_mutex_unlock. It is considered incorrect to unlock a mutex that is not held by the caller (i.e., to unlock someone else's mutex). However, checking for this is somewhat costly, so most implementations, if they check at all, do so only when certain levels of debugging are turned on.

Like any other data structure, a mutex must be initialized. This can be done with a call to pthread_mutex_init or statically by assigning PTHREAD_MUTEX_INITIALIZER to the mutex. The initial state of such an initialized mutex is unlocked. Of course, a mutex should be initialized only once! (That is, make certain that, for each mutex, no more than one thread calls pthread_mutex_init.)
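
A minimal, self-contained example of data locking (an illustration added here, not slide code): two threads increment a shared counter, and the mutex makes each read-modify-write a critical region. Compile with: gcc counter.c -pthread

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;
    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;   /* static initialization */

    static void *bump(void *arg)
    {
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&m);      /* enter critical region */
            counter++;                   /* protected read-modify-write */
            pthread_mutex_unlock(&m);    /* leave critical region */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, bump, NULL);
        pthread_create(&t2, NULL, bump, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);   /* always 2000000 with the mutex */
        return 0;
    }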

pthread_mutex_trylock
- Returns immediately with an error (EBUSY) if the mutex is already locked, instead of blocking
- Can be used to implement busy waiting
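
A short sketch of busy waiting with trylock (an added illustration, not slide code): poll for the lock and yield the CPU between attempts instead of blocking inside pthread_mutex_lock.

    #include <pthread.h>
    #include <errno.h>
    #include <sched.h>

    pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

    void enter_when_free(void)
    {
        while (pthread_mutex_trylock(&m) == EBUSY) {
            /* mutex is currently held: do something else, or just yield */
            sched_yield();
        }
        /* ... critical region ... */
        pthread_mutex_unlock(&m);
    }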

Mutex vs condition variables
- A mutex guarantees mutual exclusion: it blocks and allows access to critical regions
- Condition variables block threads because some condition is not yet met

Condition Variables
- Allow a thread to wait until a condition is satisfied
- Testing the condition must be done while holding a mutex
- Every condition variable has an associated mutex

Pthreads APIs for condition variables
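
As with the mutex slide, the API table itself is not in the transcript; these are the standard condition-variable calls (declared in <pthread.h>) it presumably listed:

    #include <pthread.h>
    #include <time.h>   /* struct timespec for the timed wait */

    int pthread_cond_init(pthread_cond_t *cond, const pthread_condattr_t *attr);
    int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex);
    int pthread_cond_timedwait(pthread_cond_t *cond, pthread_mutex_t *mutex,
                               const struct timespec *abstime);
    int pthread_cond_signal(pthread_cond_t *cond);      /* wake one waiting thread */
    int pthread_cond_broadcast(pthread_cond_t *cond);   /* wake all waiting threads */
    int pthread_cond_destroy(pthread_cond_t *cond);

    /* Static initialization: pthread_cond_t cv = PTHREAD_COND_INITIALIZER; */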

Comparison with Semaphores
- If a signal is sent to a condition variable on which no thread is waiting, the signal is lost
- A semaphore, in contrast, accumulates 'signals': each up() is remembered in the count
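
To make the contrast concrete, here is a hedged sketch (not from the slides) of a counting semaphore built from a mutex and a condition variable; the names csem_t, csem_down, and csem_up are illustrative. The integer count is what gives the semaphore its memory: an up() with no waiter is recorded in count, whereas a bare pthread_cond_signal() with no waiter is simply lost.

    #include <pthread.h>

    typedef struct {
        int count;
        pthread_mutex_t m;
        pthread_cond_t  cv;
    } csem_t;

    void csem_init(csem_t *s, int value)
    {
        s->count = value;
        pthread_mutex_init(&s->m, NULL);
        pthread_cond_init(&s->cv, NULL);
    }

    void csem_down(csem_t *s)                 /* like down(&S) */
    {
        pthread_mutex_lock(&s->m);
        while (s->count == 0)                 /* re-check: wakeups may be spurious */
            pthread_cond_wait(&s->cv, &s->m);
        s->count--;
        pthread_mutex_unlock(&s->m);
    }

    void csem_up(csem_t *s)                   /* like up(&S) */
    {
        pthread_mutex_lock(&s->m);
        s->count++;                           /* the "signal" is stored in count */
        pthread_cond_signal(&s->cv);
        pthread_mutex_unlock(&s->m);
    }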

Condition Variables

    pthread_cond_t condition_variable;
    pthread_mutex_t mutex;

Signaling thread:
    pthread_mutex_lock(&mutex);
    /* change variable value */
    if (condition satisfied) {
        pthread_cond_signal(&condition_variable);
    }
    pthread_mutex_unlock(&mutex);
    /* An alternative to pthread_cond_signal is
       pthread_cond_broadcast(&condition_variable); */

Waiting thread:
    pthread_mutex_lock(&mutex);
    while (condition not satisfied) {
        pthread_cond_wait(&condition_variable, &mutex);
    }
    pthread_mutex_unlock(&mutex);

Condition variables are another means of synchronization in POSIX; they represent queues of threads waiting to be woken by other threads and are used in conjunction with the routines shown on the slide. Though they look rather complicated at first glance, when used in standard ways (as discussed on upcoming slides) they are fairly straightforward. A thread puts itself to sleep and joins the queue of threads associated with a condition variable by calling pthread_cond_wait. When it makes this call, it must have some mutex locked, and it passes that mutex as the second argument. As part of the call, the mutex is unlocked and the thread is put to sleep, all in a single atomic step: nothing can happen that might affect the thread between the moment the mutex is unlocked and the moment the thread goes to sleep. Threads queued on a condition variable are released in first-in-first-out order within priority levels.

So far, though complicated, the description is rational. Now for the weird part: a thread may be released from the condition-variable queue at any moment, perhaps spontaneously, perhaps due to sun spots. In addition, the first thread in line is released in response to some other thread calling pthread_cond_signal, and all threads currently in line are released in response to a call to pthread_cond_broadcast. Though threads are almost always released in response to calls to these routines, the official POSIX semantics allow threads to be released at any time without provocation. (This, apparently, makes the implementation easier on some platforms.) Note that if pthread_cond_signal or pthread_cond_broadcast is called when no thread is waiting, nothing happens: there is no memory that such a call occurred. Once a thread is released, things still are not simple: it does not return from pthread_cond_wait until it has reacquired the lock on the mutex (the one it passed as the second argument).

Condition variable and mutex
- A mutex is passed into the wait call: pthread_cond_wait(&cond_var, &mutex)
- The mutex is unlocked before the thread sleeps
- The mutex is locked again before pthread_cond_wait() returns
- It is therefore safe to call pthread_cond_wait() in a while loop and re-check the condition before proceeding

Example Usage
- Write a program using two threads
- Thread 1 prints "hello"
- Thread 2 prints "world"
- Thread 2 should wait till thread 1 finishes before printing
- Use a condition variable

Using condition variables

    int thread1_done = 0;
    pthread_cond_t cv;
    pthread_mutex_t mutex;

Thread 1:
    printf("hello ");
    pthread_mutex_lock(&mutex);
    thread1_done = 1;
    pthread_cond_signal(&cv);
    pthread_mutex_unlock(&mutex);

Thread 2:
    pthread_mutex_lock(&mutex);
    pthread_cond_wait(&cv, &mutex);
    printf(" world\n");
    pthread_mutex_unlock(&mutex);

What is the problem in this implementation?

Using condition variables

    int thread1_done = 0;
    pthread_cond_t cv;
    pthread_mutex_t mutex;

Thread 1:
    printf("hello ");
    pthread_mutex_lock(&mutex);
    thread1_done = 1;
    pthread_cond_signal(&cv);
    pthread_mutex_unlock(&mutex);

Thread 2:
    pthread_mutex_lock(&mutex);
    while (thread1_done == 0) {
        pthread_cond_wait(&cv, &mutex);
    }
    printf(" world\n");
    pthread_mutex_unlock(&mutex);

The flag and the while loop fix the previous version: if thread 1 signals before thread 2 ever calls pthread_cond_wait, that signal is lost and thread 2 would sleep forever; now thread 2 first checks thread1_done and skips the wait, and the loop also re-checks the condition after any spurious wakeup.
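
For completeness, the example can be wrapped into a full program; everything outside the two thread bodies (main, thread creation, the joins, and the static initializers) is an addition for illustration. Compile with: gcc hello_world.c -pthread

    #include <pthread.h>
    #include <stdio.h>

    int thread1_done = 0;
    pthread_cond_t  cv    = PTHREAD_COND_INITIALIZER;
    pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

    void *thread1(void *arg)
    {
        printf("hello ");
        pthread_mutex_lock(&mutex);
        thread1_done = 1;                 /* record the condition ... */
        pthread_cond_signal(&cv);         /* ... then signal it */
        pthread_mutex_unlock(&mutex);
        return NULL;
    }

    void *thread2(void *arg)
    {
        pthread_mutex_lock(&mutex);
        while (thread1_done == 0)         /* loop handles lost and spurious wakeups */
            pthread_cond_wait(&cv, &mutex);
        printf(" world\n");
        pthread_mutex_unlock(&mutex);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, thread1, NULL);
        pthread_create(&t2, NULL, thread2, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }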

Multiple Locks

Thread 1 runs proc1; thread 2 runs proc2:

    proc1()
    {
        pthread_mutex_lock(&m1);
        /* use object 1 */
        pthread_mutex_lock(&m2);
        /* use objects 1 and 2 */
        pthread_mutex_unlock(&m2);
        pthread_mutex_unlock(&m1);
    }

    proc2()
    {
        pthread_mutex_lock(&m2);
        /* use object 2 */
        pthread_mutex_lock(&m1);
        /* use objects 1 and 2 */
        pthread_mutex_unlock(&m1);
        pthread_mutex_unlock(&m2);
    }

In this example the threads use two mutexes to control access to two different objects. Thread 1, executing proc1, first takes mutex 1 and then, while still holding mutex 1, obtains mutex 2. Thread 2, executing proc2, first takes mutex 2 and then, while still holding mutex 2, obtains mutex 1. However, things do not always work out as planned. If thread 1 obtains mutex 1 and, at about the same time, thread 2 obtains mutex 2, then when thread 1 attempts to take mutex 2 and thread 2 attempts to take mutex 1, we have a deadlock.
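
A common remedy (not on the slide) is to impose a global lock order so that every thread acquires m1 before m2; a hedged sketch of a fixed proc2, reusing the slide's m1 and m2:

    #include <pthread.h>

    extern pthread_mutex_t m1, m2;    /* the two mutexes from the slide */

    void proc2_fixed(void)
    {
        pthread_mutex_lock(&m1);      /* same order as proc1: m1 first ... */
        pthread_mutex_lock(&m2);      /* ... then m2, so no circular wait can form */
        /* use objects 1 and 2 (and object 2 alone) */
        pthread_mutex_unlock(&m2);
        pthread_mutex_unlock(&m1);
    }

The cost is that proc2 now holds m1 even while it only needs object 2, slightly reducing concurrency, but the circular wait, and hence the deadlock, is gone.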

Deadlock

[Figure: resource graph with nodes Thread 1, Mutex m1, Mutex m2, and Thread 2; Thread 1 holds m1 and waits for m2, while Thread 2 holds m2 and waits for m1]

The slide shows what is known as a resource graph: a directed graph with two sorts of nodes, representing threads and mutexes (which protect resources). There is an arc from a mutex to a thread if the thread has that mutex locked, and an arc from a thread to a mutex if the thread is waiting to lock that mutex. Clearly, such a graph has a cycle if and only if there is a deadlock.