Operating Systems ECE344
Mutual Exclusion
Ashvin Goel, ECE, University of Toronto
Overview
Concurrent programming and race conditions
Mutual exclusion
Implementing mutual exclusion
Deadlocks, starvation, livelock
Concurrent Programming
Programming with two or more threads that cooperate to perform a common task
 o Threads cooperate by sharing data via a shared address space
 o What types of data/variables are shared?
Problems
 o Race conditions: e.g., two threads T1 and T2 read and update the same variable, so access to the variable must be exclusive (i.e., one thread at a time)
 o Synchronization: e.g., T1 initializes a variable and T2 runs after the variable is initialized, so the ordering between T1 and T2 must be enforced
Race Condition Example
What thread interleaving would lead to problems?
Both Thread 1 and Thread 2 run the same code:

    worker() { ...; counter = counter + 1; ... }

Dump of assembler code for function worker:
    0x00401398: mov 0x406018,%eax   ; 1. read counter from memory
    0x0040139d: add $0x1,%eax       ; 2. increment register
    0x004013a0: mov %eax,0x406018   ; 3. write counter back to memory
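One problematic interleaving, as a worked trace (assuming counter starts at 0; this trace is an illustration, not part of the original slides):

    Thread 1: mov 0x406018,%eax   ; reads counter = 0
    Thread 2: mov 0x406018,%eax   ; also reads counter = 0
    Thread 2: add $0x1,%eax       ; register = 1
    Thread 2: mov %eax,0x406018   ; writes counter = 1
    Thread 1: add $0x1,%eax       ; register = 1 (stale value)
    Thread 1: mov %eax,0x406018   ; writes counter = 1, Thread 2's update is lost

After two increments the counter is 1, not 2.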
Why do Races Occur?
The result depends on the timing of thread execution
Some execution sequences lead to unexpected results
How can we avoid this problem?
Atomicity and Mutual Exclusion
Need to ensure that reading and updating the counter is an atomic operation
 o An operation is atomic if it appears to occur instantaneously to the rest of the system
 o The operation appears indivisible, so the rest of the system observes either none of its effects or all of them
One way to ensure atomicity is to ensure that only one thread can read and update the counter at a time
 o This is called mutual exclusion
 o The code region on which mutual exclusion is enforced is called a critical section
Mutex Lock Abstraction
A mutex lock helps ensure mutual exclusion
 o mutex = lock_create(): create a free lock, called mutex
 o lock_destroy(mutex): destroy the mutex lock
 o lock(mutex): acquire the lock if it is free; otherwise wait (or sleep) until it can be acquired; the lock is now acquired
 o unlock(mutex): release the lock; if there are waiting threads, wake up one of them; the lock is now free
The critical section is accessed between lock and unlock
 o A toilet is a critical section!
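For reference, this abstraction corresponds closely to POSIX mutexes. A minimal sketch using the pthreads API (an aside, not the course's lock_create/lock/unlock interface used in the labs):

    #include <pthread.h>

    pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;  /* plays the role of lock_create() */

    void enter_and_leave_critical_section(void) {
        pthread_mutex_lock(&mutex);     /* like lock(mutex): waits until acquired */
        /* ... critical section ... */
        pthread_mutex_unlock(&mutex);   /* like unlock(mutex): wakes a waiter, if any */
    }
    /* pthread_mutex_destroy(&mutex) plays the role of lock_destroy(mutex) */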
Mutex Locks
(diagram: a timeline showing that the lock is held, i.e., Acquired, between each Lock and Unlock call)
Using a Mutex Lock

    // counter and lock are located in shared address space
    int counter;
    struct lock *l;

Both Thread 1 and Thread 2 run:

    while (1) {
        lock(l);
        // critical section
        counter++;
        unlock(l);
        // remainder section
    }
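As a concrete, runnable counterpart, here is a sketch using POSIX threads rather than the course's struct lock API (the thread count and iteration count are arbitrary choices):

    #include <pthread.h>
    #include <stdio.h>

    static int counter = 0;                                 /* shared variable */
    static pthread_mutex_t l = PTHREAD_MUTEX_INITIALIZER;   /* shared lock */

    static void *worker(void *arg) {
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&l);    /* enter critical section */
            counter++;
            pthread_mutex_unlock(&l);  /* leave critical section */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %d\n", counter);  /* 200000 every time with the lock held */
        return 0;
    }

Compile with -pthread; without the lock/unlock calls the printed value would vary from run to run.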
Mutual Exclusion Conditions
No two threads may be simultaneously in the critical section
No assumptions may be made about the speed of thread execution
No thread running outside its critical section may block another thread
 o Why?
No thread must wait forever to enter its critical section
 o Why?
Implementing Mutex Locks
Naive implementation: use a variable to track whether a thread is in the critical section

    lock(l) {
        while (l == TRUE)
            ;  // no-op
        l = TRUE;
    }

    unlock(l) {
        l = FALSE;
    }

Is there a problem with this implementation?
lock() and unlock() access a shared variable
 o So they themselves need to be atomic!
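To see the problem, consider one possible interleaving (a worked trace, assuming l starts out FALSE):

    Thread 1: while (l == TRUE) ...   ; sees l == FALSE, exits the loop
    Thread 2: while (l == TRUE) ...   ; also sees l == FALSE, exits the loop
    Thread 1: l = TRUE;               ; "acquires" the lock
    Thread 2: l = TRUE;               ; also "acquires" the lock

Both threads are now in the critical section at the same time, so the lock itself has a race.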
Implementing Mutex Locks
Naive implementation: make lock() atomic

    lock(l) {
        disable interrupts;
        while (l == TRUE)
            ;  // no-op
        l = TRUE;
        enable interrupts;
    }

    unlock(l) {
        l = FALSE;
    }

Disabling interrupts ensures that preemption doesn't occur in the lock() code, so it runs atomically
Is there a problem with this implementation?
Implementation 1: Interrupt Disabling
What about this implementation?

    lock() {
        disable_interrupts;
    }

    unlock() {
        enable_interrupts;
    }

What is the problem with this implementation?
Atomic Instructions
The previous implementation only works on a single CPU
 o Interrupts are disabled only on the local CPU
 o Threads could still run on another CPU, causing a race
Hardware support for locking
 o Interrupts provide h/w support for locking on a single CPU
 o We need h/w support for locking on multi-processors
Multi-processor h/w provides atomic instructions
 o Atomic Test and Set Lock, Atomic Compare and Swap
 o These instructions operate on a memory word
 o Notice that they perform two operations on the word indivisibly
 o How does h/w perform these operations indivisibly?
Test-and-Set Lock Instruction
The tset instruction operates on an integer
 o It reads and returns the old value of the integer
 o It updates the value of the integer to 1
 o These two operations are performed atomically

    int tset(int *lock) {  // atomic in hardware
        int old = *lock;
        *lock = 1;
        return old;
    }
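On real machines this behaviour is exposed through atomic instructions (e.g., xchg on x86) or compiler builtins. A sketch of the same semantics using the GCC/Clang __atomic builtins (an illustration of the idea, not the instruction the slides assume):

    // Atomically store 1 into *lock and return the previous value,
    // exactly the effect of tset() above.
    int tset(int *lock) {
        return __atomic_exchange_n(lock, 1, __ATOMIC_ACQUIRE);
    }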
Implementation 2: Spin Locks
Lock uses tset in a loop
 o *l is initialized to 0
 o If the returned value is 0, the lock has been acquired
 o If the returned value is 1, someone else holds the lock, so try again

    lock(int *l) {
        while (tset(l))
            ;  // no-op
    }

    unlock(int *l) {
        *l = FALSE;
    }

This mutex lock is called a spin lock because threads wait in a tight loop
Problem: while a thread waits, the CPU performs no useful work
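Putting the pieces together, a minimal self-contained spin lock (a sketch built on the compiler builtins from the previous example rather than the course's h/w tset; the type and function names are illustrative):

    typedef struct { int held; } spinlock_t;   /* 0 = free, 1 = held */

    void spin_lock(spinlock_t *l) {
        /* Keep atomically swapping in 1 until we observe the old value 0 */
        while (__atomic_exchange_n(&l->held, 1, __ATOMIC_ACQUIRE))
            ;  /* spin: the CPU does no useful work while waiting */
    }

    void spin_unlock(spinlock_t *l) {
        __atomic_store_n(&l->held, 0, __ATOMIC_RELEASE);  /* mark the lock free */
    }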
Implementation 3: Yielding Locks
Yield the CPU voluntarily while waiting for the lock
Recall that thread_yield runs another thread, so the CPU can perform useful work

    lock_s(int *l) {
        while (tset(l))
            thread_yield();
    }

    unlock_s(int *l) {
        *l = FALSE;
    }

This mutex is a yielding lock
Problem: the scheduler determines when thread_yield() returns
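On a POSIX system the same idea can be expressed with sched_yield(); a sketch reusing the spinlock_t from the previous example (thread_yield is the course's own primitive; sched_yield is the standard substitute assumed here):

    #include <sched.h>

    void yield_lock(spinlock_t *l) {
        while (__atomic_exchange_n(&l->held, 1, __ATOMIC_ACQUIRE))
            sched_yield();   /* give up the CPU instead of spinning */
    }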
Implementation 4: Blocking Locks
Both spin and yielding locks essentially poll for the lock to become available
 o Choosing the right polling frequency is not simple: spin locks waste CPU, yielding locks can delay lock acquisition
Ideally, lock() would block until unlock() is called
 o Invoke thread_sleep() when the lock is not available
 o Invoke thread_wakeup() on unlock()
These functions access the shared ready list, so they need to be critical sections, i.e., we need locking while trying to implement blocking!
How can we solve this problem?
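One common structure is sketched below; it is only a sketch under assumptions: the thread_sleep/thread_wakeup signatures and the wait-queue type are hypothetical stand-ins for the lab's interface, and spinlock_t is reused from the earlier example.

    struct blocking_lock {
        spinlock_t guard;             /* low-level lock protecting this structure */
        int held;                     /* is the blocking lock currently held? */
        struct wait_queue *waiters;   /* threads sleeping until the lock is free */
    };

    void blocking_lock_acquire(struct blocking_lock *bl) {
        spin_lock(&bl->guard);
        while (bl->held) {
            /* assumed semantics: thread_sleep releases the guard, blocks the
               thread on the wait queue, and reacquires the guard on wakeup */
            thread_sleep(bl->waiters, &bl->guard);
        }
        bl->held = 1;
        spin_unlock(&bl->guard);
    }

    void blocking_lock_release(struct blocking_lock *bl) {
        spin_lock(&bl->guard);
        bl->held = 0;
        thread_wakeup(bl->waiters);   /* wake a waiting thread, if any */
        spin_unlock(&bl->guard);
    }

The guard is only held for a few instructions, so spinning on it briefly is cheap; threads that must wait a long time sleep instead of burning CPU.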
Using a Previous Solution
Previous solutions work correctly but don't block
 o Interrupt disabling works correctly on a single CPU
 o Spin locks work correctly on a multi-processor
We can use these solutions to access the shared data structures in the thread scheduler
 o The scheduler implements blocking, so it can't use a blocking lock!
Notice how locking solutions depend on lower-level locking:
blocking lock -> spin lock -> atomic instruction
Lab 3 requires you to implement blocking locks
Using Locks
Note that to protect shared variables, we need to create lock variables that are also shared variables
How many lock variables should be created?
 o Say we want to protect a linked list: we could create one lock for the entire list, or one lock per list node (a sketch of the coarse-grained choice follows below)
 o More locks allow more parallelism, but also more potential for bugs
If locks are used incorrectly, strange and really hard-to-find bugs can happen
 o It is hard to reason about every possible interleaving
 o Finding concurrency bugs, and better concurrent programming models, are active research areas
Let's see one problem when using multiple locks
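Before looking at that problem, here is a quick sketch of the coarse-grained choice mentioned above, one lock for the entire list (the list type and function names are hypothetical, and error handling is omitted):

    #include <pthread.h>
    #include <stdlib.h>

    struct node { int value; struct node *next; };

    /* Coarse-grained locking: one lock protects the whole list */
    struct list {
        pthread_mutex_t lock;
        struct node *head;
    };

    void list_push(struct list *lst, int value) {
        struct node *n = malloc(sizeof(*n));
        n->value = value;
        pthread_mutex_lock(&lst->lock);   /* the whole list is one critical section */
        n->next = lst->head;
        lst->head = n;
        pthread_mutex_unlock(&lst->lock);
    }

A per-node (hand-over-hand) scheme allows more parallelism but is much easier to get wrong.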
Deadlocks
A set of threads is deadlocked if each thread is waiting for a resource (an event) that another thread in the set holds (can perform)
 o So no thread can run
 o Breaking deadlocks generally requires killing threads

    Thread_A() {
        lock(resource_2);
        lock(resource_1);
        use resource 1 and 2;
        unlock(resource_1);
        unlock(resource_2);
    }

    Thread_B() {
        lock(resource_1);
        lock(resource_2);
        use resource 1 and 2;
        unlock(resource_2);
        unlock(resource_1);
    }
Deadlock Conditions
A deadlock can occur if and only if the following conditions hold simultaneously
 o Mutual exclusion: each resource is assigned to at most one thread
 o Hold and wait: threads hold resources they have acquired while waiting for additional ones
 o No preemption: acquired resources cannot be preempted
 o Circular wait: threads form a circular chain, each waiting for a resource held by the next thread in the chain
Examples of Deadlock
(figure-only slide; examples not reproduced here)
Detecting Deadlocks
Deadlocks can be detected using wait-for graphs
Deadlock: a cycle in the wait-for graph
(figure: threads P1 and P2 and resources R1 and R2, with "holds" and "requests" edges forming a cycle)
Preventing Deadlocks
Avoid hold and wait (see the sketch below)
 o If a lock is unavailable, release previously acquired locks, and try to reacquire all locks again
 o What are the problems with this approach?
Prevent circular wait (see the sketch below)
 o Number each of the resources
 o Require each thread to acquire lower-numbered resources before higher-numbered resources
 o Problems?
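Sketches of both strategies for the two-resource example above, using POSIX mutexes (the lock names and the numbering rule are illustrative assumptions, not from the slides):

    #include <pthread.h>

    pthread_mutex_t resource_1 = PTHREAD_MUTEX_INITIALIZER;
    pthread_mutex_t resource_2 = PTHREAD_MUTEX_INITIALIZER;

    /* Prevent circular wait: every thread acquires resource_1 (lower number)
       before resource_2 (higher number), so no cycle can form */
    void ordered_use(void) {
        pthread_mutex_lock(&resource_1);
        pthread_mutex_lock(&resource_2);
        /* ... use resource 1 and 2 ... */
        pthread_mutex_unlock(&resource_2);
        pthread_mutex_unlock(&resource_1);
    }

    /* Avoid hold and wait: if the second lock is busy, release the first
       and retry, rather than waiting while holding it */
    void release_and_retry_use(void) {
        for (;;) {
            pthread_mutex_lock(&resource_1);
            if (pthread_mutex_trylock(&resource_2) == 0)
                break;                           /* got both locks */
            pthread_mutex_unlock(&resource_1);   /* back off and try again */
        }
        /* ... use resource 1 and 2 ... */
        pthread_mutex_unlock(&resource_2);
        pthread_mutex_unlock(&resource_1);
    }

Note that release-and-retry can itself lead to livelock or starvation if two threads keep backing off in lockstep, which ties into the next slide.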
Deadlock, Starvation, Livelock
Deadlock
 o A particular set of threads performs no work because of a circular wait condition
 o Once a deadlock occurs, it does not go away
Starvation
 o A particular set of threads performs no work because the resources they need are constantly being used by others
 o Starvation can be a temporary condition
Livelock
 o A set of threads continues to run but makes no progress!
 o Examples include interrupt livelock
 o How can we solve interrupt livelock?
Summary
Concurrent programming model
 o Threads enable concurrent execution
 o Threads cooperate by accessing shared variables
Races
 o Concurrent accesses to shared variables can lead to races, i.e., incorrect execution under some thread interleavings
Critical sections and mutual exclusion
 o Avoiding races requires defining critical code sections that are run atomically (indivisibly) using mutual exclusion, i.e., only one thread accesses the critical section at a time
Mutual exclusion is implemented using locks
 o Locking requires h/w support (interrupts, atomic instructions)
Think Time
What is a race condition? How can we protect against race conditions?
Can locks be implemented by reading and writing a binary variable?
Why is it better to block rather than spin on a uniprocessor?
Why is a blocking lock better than interrupt disabling or using spin locks? Is the blocking lock always better?
Think Time
How can one avoid starvation?
How can one avoid livelock?