Presentation on theme: "CS252: Systems Programming"— Presentation transcript:

1 CS252: Systems Programming
Ninghui Li, based on slides by Prof. Gustavo Rodriguez-Rivera. Topic 12: Condition Variable, Read/Write Lock, and Deadlock

2 Pseudo-Code Implementing Semaphore Using Mutex Lock
Pseudo-code (assume that wait() causes the calling thread to be blocked):

sem_wait(sem_t *sem) {
  lock(sem->mutex);
  sem->count--;
  if (sem->count < 0) {
    unlock(sem->mutex);
    wait();
  } else {
    unlock(sem->mutex);
  }
}

sem_post(sem_t *sem) {
  lock(sem->mutex);
  sem->count++;
  if (sem->count <= 0) {
    wake up a thread;
  }
  unlock(sem->mutex);
}

What could go wrong? How do we fix it?
If wait() is called before releasing the mutex, the thread is blocked while holding the lock; no other thread can successfully execute post, so we have a deadlock. If we first unlock() and then wait(), as written above, the thread may miss the wakeup signal: a post can slip in between the unlock and the wait.

3 Condition Variable What we need is the ability to wait on a condition while simultaneously giving up the mutex lock. Condition Variable (CV): a thread can wait on a CV; it is blocked until another thread calls signal on the CV. A condition variable is always used in conjunction with a mutex lock. The thread calling wait should hold the lock, and the wait call releases the lock as the thread goes to wait.

4 Using Condition Variable
Declaration: #include <pthread.h> pthread_cond_t cv; Initialization: int pthread_cond_init(pthread_cond_t *cv, const pthread_condattr_t *attr); (pass NULL for default attributes) Wait on the condition variable: int pthread_cond_wait(pthread_cond_t *cv, pthread_mutex_t *mutex); The calling thread should hold mutex; it is released atomically as the thread starts waiting on cv. Upon successful return, the thread has re-acquired the mutex; however, waking up and re-acquiring the lock is not atomic. This function atomically releases mutex and causes the calling thread to block on the condition variable cv; atomically here means "atomically with respect to access by another thread to the mutex and then the condition variable". That is, if another thread is able to acquire the mutex after the about-to-block thread has released it, then a subsequent call to pthread_cond_signal() or pthread_cond_broadcast() in that thread behaves as if it were issued after the about-to-block thread has blocked.

5 Using Condition Variable
Waking up waiting threads: int pthread_cond_signal(pthread_cond_t *cv); unblocks one thread waiting on cv. int pthread_cond_broadcast(pthread_cond_t *cv); unblocks all threads waiting on cv. These functions can be called with or without holding the mutex that the waiting threads passed to pthread_cond_wait; it is usually better to call them while holding the mutex. A sketch of the standard usage pattern follows.
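A minimal sketch of the standard wait/signal pattern described above (the flag ready, mutex m, condition variable cv, and thread functions are illustrative names, not from the slides). The waiter re-checks the condition in a loop because a wakeup may be spurious or the condition may already have changed again.

#include <pthread.h>

pthread_mutex_t m  = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;
int ready = 0;                        // the shared condition

void *waiter(void *arg) {
    pthread_mutex_lock(&m);
    while (!ready)                    // re-check: wakeups may be spurious
        pthread_cond_wait(&cv, &m);   // releases m while blocked, re-acquires it on return
    /* ... use the shared state here, still holding m ... */
    pthread_mutex_unlock(&m);
    return NULL;
}

void *signaler(void *arg) {
    pthread_mutex_lock(&m);
    ready = 1;                        // change the condition while holding the mutex
    pthread_cond_signal(&cv);         // wake one waiter
    pthread_mutex_unlock(&m);
    return NULL;
}

int main() {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, waiter, NULL);
    pthread_create(&t2, NULL, signaler, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}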

6 What is a Condition Variable?
Each condition variable is a queue of blocked threads. The cond_wait(cv, mutex) call adds the calling thread to cv's queue while releasing mutex; the call returns once the thread has been unblocked (by another thread calling cond_signal) and has re-acquired the mutex. The cond_signal(cv) call removes one thread from the queue and unblocks it.

7 Implementing Semaphore using Mutex and Cond Var
struct semaphore {
  pthread_cond_t cond;
  pthread_mutex_t mutex;
  int count;
};
typedef struct semaphore semaphore_t;

int semaphore_wait(semaphore_t *sem) {
  int res = pthread_mutex_lock(&(sem->mutex));
  if (res != 0) return res;
  sem->count--;
  if (sem->count < 0)   // negative count: this thread must wait
    res = pthread_cond_wait(&(sem->cond), &(sem->mutex));
  pthread_mutex_unlock(&(sem->mutex));
  return res;
}

8 Implementing Semaphore using Mutex and Cond Var
int semaphore_post(semaphore_t *sem) {
  int res = pthread_mutex_lock(&(sem->mutex));
  if (res != 0) return res;
  sem->count++;
  if (sem->count <= 0) {   // someone is waiting: wake one thread up
    res = pthread_cond_signal(&(sem->cond));
  }
  pthread_mutex_unlock(&(sem->mutex));
  return res;
}
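For completeness, a small sketch of how this semaphore type might be initialized and used (the init_slots/use_resource helpers and the count of 3 are illustrative and not shown on the slides; the struct and the wait/post functions are the ones defined above).

// Assumes the semaphore_t type and semaphore_wait/semaphore_post above.
semaphore_t slots;

void init_slots(void) {
    slots.count = 3;                         // allow 3 concurrent users
    pthread_mutex_init(&slots.mutex, NULL);
    pthread_cond_init(&slots.cond, NULL);
}

void use_resource(void) {
    semaphore_wait(&slots);    // blocks when all 3 slots are taken
    /* ... access the limited resource ... */
    semaphore_post(&slots);    // release the slot, possibly waking a waiter
}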

9 Usage of Semaphore: Bounded Buffer
Implement a queue that has two functions: enqueue() adds one item to the queue and blocks if the queue is full; dequeue() removes one item from the queue and blocks if the queue is empty. Strategy: use an _emptySem semaphore that dequeue() will use to wait until there are items in the queue, and a _fullSem semaphore that enqueue() will use to wait until there is space in the queue.

10 Bounded Buffer
#include <pthread.h>
#include <semaphore.h>

enum { MaxSize = 10 };

class BoundedBuffer {
  int _queue[MaxSize];
  int _head;
  int _tail;
  mutex_t _mutex;
  sem_t _emptySem;
  sem_t _fullSem;
public:
  BoundedBuffer();
  void enqueue(int val);
  int dequeue();
};

BoundedBuffer::BoundedBuffer() {
  _head = 0;
  _tail = 0;
  pthread_mutex_init(&_mutex, NULL);
  sem_init(&_emptySem, 0, 0);        // no items yet
  sem_init(&_fullSem, 0, MaxSize);   // MaxSize free slots
}

11 Bounded Buffer
void BoundedBuffer::enqueue(int val) {
  sem_wait(&_fullSem);      // wait for a free slot
  mutex_lock(&_mutex);
  _queue[_tail] = val;
  _tail = (_tail + 1) % MaxSize;
  mutex_unlock(&_mutex);
  sem_post(&_emptySem);     // one more item available
}

int BoundedBuffer::dequeue() {
  sem_wait(&_emptySem);     // wait for an item
  mutex_lock(&_mutex);
  int val = _queue[_head];
  _head = (_head + 1) % MaxSize;
  mutex_unlock(&_mutex);
  sem_post(&_fullSem);      // one more free slot
  return val;
}
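A short producer/consumer sketch of how the class might be used (the producer/consumer functions and the item count of 100 are illustrative, not from the slides; it assumes the BoundedBuffer class above together with its mutex_t/sem_t helpers).

#include <pthread.h>
#include <stdio.h>

BoundedBuffer buf;                       // shared buffer of capacity MaxSize

void *producer(void *arg) {
    for (int i = 0; i < 100; i++)
        buf.enqueue(i);                  // blocks whenever the buffer is full
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < 100; i++)
        printf("%d\n", buf.dequeue());   // blocks whenever the buffer is empty
    return NULL;
}

int main() {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}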

12 Bounded Buffer (trace; assume the queue is empty)
T1: v = dequeue(); sem_wait(&_emptySem); _emptySem.count == -1; T1 waits
T2: v = dequeue(); sem_wait(&_emptySem); _emptySem.count == -2; T2 waits
T3: enqueue(6); sem_wait(&_fullSem); puts the item in the queue; sem_post(&_emptySem) wakes up T1
T1: continues and gets the item from the queue

13 Bounded Buffer (trace; assume the queue is empty)
T1: enqueue(1); sem_wait(&_fullSem); _fullSem.count == 9; puts the item in the queue
T2: enqueue(2); _fullSem.count == 8
...
T10: enqueue(10); _fullSem.count == 0 (the queue is now full)

14 Bounded Buffer (trace, continued)
T11: enqueue(11); sem_wait(&_fullSem); _fullSem.count == -1; T11 waits
T12: val = dequeue(); sem_wait(&_emptySem); _emptySem.count == 9; gets the item from the queue; sem_post(&_fullSem); _fullSem.count == 0; wakes up T11

15 Bounded Buffer Notes The counter for _emptySem represents the number of items in the queue. The counter for _fullSem represents the number of spaces in the queue. Mutex locks are still necessary, since sem_wait(_emptySem) or sem_wait(_fullSem) may allow more than one thread to execute the critical section.

16 Read/Write Locks Read/write locks are locks for data structures that can be read by multiple threads simultaneously (multiple readers) but modified by only one thread at a time. Example uses: databases, lookup tables, dictionaries, etc., where lookups are more frequent than modifications.

17 Read/Write Locks Multiple readers may read the data structure simultaneously Only one writer may modify it and it needs to exclude the readers. Interface: ReadLock() – Lock for reading. Wait if there are writers holding the lock ReadUnlock() – Unlock for reading WriteLock() - Lock for writing. Wait if there are readers or writers holding the lock WriteUnlock() – Unlock for writing

18 Read/Write Locks
Example with readers R1–R4 and writer W1 (rl = readLock, ru = readUnlock, wl = writeLock, wu = writeUnlock):
R1, R2, R3: RL (all three hold the lock for reading)
W1: WL; waits, because readers hold the lock
R1, R2, R3: RU; W1 continues and now holds the lock for writing
R4: RL; waits, because W1 holds the lock
W1: WU; R4 continues

19 Read/Write Locks Implementation
class RWLock {
  int _nreaders;      // number of readers currently holding the lock
  sem_t _semAccess;   // controls access to readers/writers
  mutex_t _mutex;
public:
  RWLock();
  void readLock();
  void writeLock();
  void readUnlock();
  void writeUnlock();
};

RWLock::RWLock() {
  _nreaders = 0;
  sem_init(&_semAccess, 0, 1);
  mutex_init(&_mutex);
}

Can we replace the semaphore here with a mutex? It may seem so, but note that _semAccess acquired by the first reader is released by the last reader, which may be a different thread; mutexes are meant to be unlocked by the thread that locked them, so a binary semaphore is the safer choice.

20 Read/Write Locks Implementation
void RWLock::readLock() {
  mutex_lock(&_mutex);
  _nreaders++;
  if (_nreaders == 1) {
    // This is the first reader: acquire _semAccess
    sem_wait(&_semAccess);
  }
  mutex_unlock(&_mutex);
}

void RWLock::readUnlock() {
  mutex_lock(&_mutex);
  _nreaders--;
  if (_nreaders == 0) {
    // This is the last reader: allow one writer to proceed, if any
    sem_post(&_semAccess);
  }
  mutex_unlock(&_mutex);
}

When there is a writer, the first reader is blocked on the semaphore while holding the mutex lock; later readers will be blocked on the mutex lock.

21 Read/Write Locks Implementation
void RWLock::writeLock()   { sem_wait(&_semAccess); }
void RWLock::writeUnlock() { sem_post(&_semAccess); }
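A tiny sketch of how this class might be used to protect a shared table (the table and the readEntry/writeEntry functions are illustrative, not from the slides; it assumes the RWLock class above).

RWLock tableLock;
int table[100];

int readEntry(int i) {
    tableLock.readLock();      // shared with other readers
    int v = table[i];
    tableLock.readUnlock();
    return v;
}

void writeEntry(int i, int v) {
    tableLock.writeLock();     // exclusive: no readers or writers
    table[i] = v;
    tableLock.writeUnlock();
}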

22 Read/Write Locks Example
Threads R1, R2, R3 and writers W1, W2:
R1: readLock; nreaders++ (1); if (nreaders == 1) sem_wait; the semaphore is free, so R1 continues
R2: readLock; nreaders++ (2)
R3: readLock; nreaders++ (3)
W1: writeLock; sem_wait; blocks

23 Read/Write Locks Example
W2: writeLock; sem_wait; blocks
R1: readUnlock(); nreaders-- (2)
R2: readUnlock(); nreaders-- (1)
R3: readUnlock(); nreaders-- (0); if (nreaders == 0) sem_post; W1 continues and holds the lock for writing
W1: writeUnlock; sem_post; W2 continues

24 Read/Write Locks Example
(W2 is holding the lock in write mode)
R1: readLock; mutex_lock; nreaders++ (1); if (nreaders == 1) sem_wait; blocks while still holding the mutex
R2: readLock; blocks on mutex_lock
W2: writeUnlock; sem_post; R1 continues; mutex_unlock; R2 continues

25 Notes on Read/Write Locks
Fairness in locking: first-come, first-served. Mutexes and semaphores are fair: the thread that has been waiting the longest is the first one to wake up. Spin locks do not guarantee fairness; the thread that has waited the longest may not be the one that gets the lock. This should not be an issue in the situations where spin locks are appropriate, namely low contention and short lock-holding times. This implementation of read/write locks suffers from starvation of writers: a writer may never be able to write if the number of readers is always greater than 0.

26 Write Lock Starvation (Overlapping readers)
Threads R1–R4 and writer W1 (rl = readLock, ru = readUnlock, wl = writeLock, wu = writeUnlock):
R1, R2, R3: RL
W1: WL; waits
R1, R2: RU
R4: RL (a new reader arrives before nreaders reaches 0)
R3: RU
R1: RL
As long as new readers keep overlapping the existing ones, nreaders never drops to 0 and W1 waits forever.
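One common way to avoid this starvation (a sketch of an alternative design, not the implementation used in these slides) is to make newly arriving readers wait whenever a writer is waiting. The class below uses a mutex and two condition variables; all names are illustrative.

#include <pthread.h>

// Writer-preferring read/write lock (illustrative sketch).
class WriterPrefRWLock {
    pthread_mutex_t _m;
    pthread_cond_t  _readOK, _writeOK;
    int _nreaders;        // readers currently holding the lock
    int _nwriters;        // writers holding the lock (0 or 1)
    int _waitingWriters;  // writers waiting for the lock
public:
    WriterPrefRWLock() : _nreaders(0), _nwriters(0), _waitingWriters(0) {
        pthread_mutex_init(&_m, NULL);
        pthread_cond_init(&_readOK, NULL);
        pthread_cond_init(&_writeOK, NULL);
    }
    void readLock() {
        pthread_mutex_lock(&_m);
        // New readers wait while a writer holds the lock or is waiting for it.
        while (_nwriters > 0 || _waitingWriters > 0)
            pthread_cond_wait(&_readOK, &_m);
        _nreaders++;
        pthread_mutex_unlock(&_m);
    }
    void readUnlock() {
        pthread_mutex_lock(&_m);
        if (--_nreaders == 0)
            pthread_cond_signal(&_writeOK);   // last reader lets a writer in
        pthread_mutex_unlock(&_m);
    }
    void writeLock() {
        pthread_mutex_lock(&_m);
        _waitingWriters++;
        while (_nreaders > 0 || _nwriters > 0)
            pthread_cond_wait(&_writeOK, &_m);
        _waitingWriters--;
        _nwriters = 1;
        pthread_mutex_unlock(&_m);
    }
    void writeUnlock() {
        pthread_mutex_lock(&_m);
        _nwriters = 0;
        if (_waitingWriters > 0)
            pthread_cond_signal(&_writeOK);   // prefer a waiting writer
        else
            pthread_cond_broadcast(&_readOK); // otherwise wake all waiting readers
        pthread_mutex_unlock(&_m);
    }
};

Note that this design trades one problem for another: if writers keep arriving, readers can now starve instead.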

27 Deadlock and Starvation
Deadlock: happens when one or more threads must block forever (or until the process is terminated) because they are waiting for a resource that will never become available. Once a deadlock happens, the process has to be killed; therefore we have to prevent deadlocks in the first place. Starvation: this condition is not as serious as a deadlock. Starvation happens when a thread may need to wait for a long time before a resource becomes available. Example: the read/write locks above (writer starvation).

28 Example of a Deadlock
Assume two bank accounts protected with two mutexes:
int balance1 = 100;
int balance2 = 20;
mutex_t m1, m2;

Transfer1_to_2(int amount) {
  mutex_lock(&m1);
  mutex_lock(&m2);
  balance1 -= amount;
  balance2 += amount;
  mutex_unlock(&m1);
  mutex_unlock(&m2);
}

Transfer2_to_1(int amount) {
  mutex_lock(&m2);
  mutex_lock(&m1);
  balance2 -= amount;
  balance1 += amount;
  mutex_unlock(&m2);
  mutex_unlock(&m1);
}

29 Example of a Deadlock
Thread 1: Transfer1_to_2(amount): mutex_lock(&m1); then a context switch occurs
Thread 2: Transfer2_to_1(amount): mutex_lock(&m2); then mutex_lock(&m1) blocks waiting for m1, which Thread 1 holds
Thread 1: mutex_lock(&m2) blocks waiting for m2, which Thread 2 holds
Neither thread can proceed: deadlock.
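For illustration, a small self-contained program (not from the slides) that reproduces this interleaving. The sleep() calls make it very likely that each thread grabs its first mutex before the other asks for it, so both then block forever on their second lock.

#include <pthread.h>
#include <unistd.h>
#include <stdio.h>

pthread_mutex_t m1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t m2 = PTHREAD_MUTEX_INITIALIZER;

void *thread1(void *arg) {          // locks m1, then m2
    pthread_mutex_lock(&m1);
    sleep(1);                       // give thread2 time to grab m2
    pthread_mutex_lock(&m2);        // blocks forever: thread2 holds m2
    pthread_mutex_unlock(&m2);
    pthread_mutex_unlock(&m1);
    return NULL;
}

void *thread2(void *arg) {          // locks m2, then m1 (opposite order)
    pthread_mutex_lock(&m2);
    sleep(1);                       // give thread1 time to grab m1
    pthread_mutex_lock(&m1);        // blocks forever: thread1 holds m1
    pthread_mutex_unlock(&m1);
    pthread_mutex_unlock(&m2);
    return NULL;
}

int main() {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, thread1, NULL);
    pthread_create(&t2, NULL, thread2, NULL);
    pthread_join(t1, NULL);         // never returns: the two threads are deadlocked
    pthread_join(t2, NULL);
    printf("done\n");               // never reached
    return 0;
}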

30 Example of a Deadlock Once a deadlock happens, the process becomes unresponsive and you have to kill it. Before killing it, get as much information as possible, since this event is usually difficult to reproduce. Use gdb to attach the debugger to the process and see where the deadlock happened:
gdb progname <pid>            (attach to the running process)
gdb> info threads             (list all threads)
gdb> thread <thread number>   (switch to a thread)
gdb> where                    (print that thread's stack trace)
Do this for every thread; then you can kill the process.

31 Deadlock A deadlock happens when a particular interleaving of instructions causes resources and threads to wait for each other. You may need to run your program for a long time and stress-test it in order to find possible deadlocks. You can also increase the probability of a deadlock by running your program on a multi-processor (multi-core) machine. We need to prevent deadlocks from happening in the first place.

32 Graph Representation of Deadlocks
Draw threads and mutexes as nodes. An edge from thread T1 to mutex M1 (T1 → M1) means T1 is waiting for M1; an edge from mutex M1 to thread T1 (M1 → T1) means T1 is holding M1.

33 Deadlock Representation
[Diagram: T1 holds M1 and waits for M2, while T2 holds M2 and waits for M1, so the edges form a cycle.] Deadlock = cycle in the graph.

34 Larger Deadlock
[Diagram: a larger wait-for cycle involving threads T1, T2, T3, T4 and mutexes M1, M2, M3, M4.]

35 Deadlock Prevention A deadlock is represented as a cycle in the graph.
To prevent deadlocks we need to assign an order to the locks: m1, m2, m3, … Notice in the previous graph that the cycle follows the ordering of the mutexes except at one point.

36 Deadlock Prevention
When acquiring mutexes, lock the ones with lower index i before the ones with higher index. If m1 and m3 have to be locked, lock m1 before locking m3. This prevents deadlocks because no thread ever waits for a lower-numbered mutex while holding a higher-numbered one, which breaks any possible cycle.

37 Lock Ordering => Deadlock Prevention
Claim: by following the lock ordering, deadlocks are prevented. Proof by contradiction: assume the ordering was followed but a cycle exists in the graph. Each thread on the cycle holds one mutex mi and waits for the next mutex mj; because every thread locks mutexes in increasing order, i < j for each such pair. Following the cycle, the indices must keep increasing, yet the cycle eventually returns to its starting mutex, so at some point a thread holds mi and waits for mj with i > j. That thread locked a higher-numbered mutex before a lower-numbered one, so it did not follow the ordering. This contradicts our assumption; therefore, lock ordering prevents deadlock.
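The same argument in symbols (a sketch; m_{i_1}, ..., m_{i_k} denote the mutexes encountered along the hypothetical cycle, where the thread holding m_{i_j} is waiting for m_{i_{j+1}}):

\[ i_1 < i_2 < \cdots < i_k \quad \text{(each thread waits only for mutexes with higher indices than the ones it holds)} \]
\[ i_k < i_1 \quad \text{(the holder of } m_{i_k} \text{ waits for } m_{i_1}\text{, closing the cycle)} \]

Together these give i_1 < i_1, which is impossible, so no cycle, and hence no deadlock, can occur.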

38 Preventing a Deadlock
Rearranging the bank code to prevent the deadlock: we make sure that the mutexes are locked in order.
int balance1 = 100;
int balance2 = 20;
mutex_t m1, m2;

Transfer1_to_2(int amount) {
  mutex_lock(&m1);
  mutex_lock(&m2);
  balance1 -= amount;
  balance2 += amount;
  mutex_unlock(&m1);
  mutex_unlock(&m2);
}

Transfer2_to_1(int amount) {
  mutex_lock(&m1);
  mutex_lock(&m2);
  balance2 -= amount;
  balance1 += amount;
  mutex_unlock(&m2);
  mutex_unlock(&m1);
}

39 Preventing a Deadlock We can rewrite the Transfer function s more generically as: balance1 -= amount; balance2 += amount; mutex_unlock(&mutex[i] ); mutex_unlock(&mutex[j] ); } int balance[MAXACOUNTS]; mutex_t mutex[MAXACOUNTS]; Transfer_i_to_j(int i, int j, int amount) { if ( i< j) { mutex_lock(&mutex[i]); mutex_lock(&mutex[j]); } else {

40 Ordering of Unlocking Since mutex_unlock never forces a thread to wait, the order of unlocking does not matter.

41 Review Questions What are condition variables? What is the behavior of wait/signal on a CV? How do you implement semaphores using a CV and a mutex? How do you implement a bounded buffer using semaphores? What is a deadlock? How do you prevent deadlocks by enforcing a global ordering of locks? Why does this prevent deadlocks?

42 Review Questions What are read/write locks? What is the behavior of read/write lock/unlock? How do you implement R/W locks using a semaphore? Why can the implementation given in the slides cause writer starvation? How do you implement a read/write lock where writers are preferred (i.e., when a writer is waiting, no new reader can acquire the read lock; readers must wait until all writers have finished)?

