1
Concurrency in Shared Memory Systems Synchronization and Mutual Exclusion
2
Processes, Threads, Concurrency Traditional processes are sequential: one instruction at a time is executed. Multithreaded processes may have several sequential threads that can execute concurrently. Processes (threads) are concurrent if their executions overlap – start time of one occurs before finish time of another.
3
Concurrent Execution On a uniprocessor, concurrency occurs when the CPU is switched from one process to another, so the instructions of several threads are interleaved (alternate). On a multiprocessor, execution of instructions in concurrent threads may be overlapped (occur at the same time) if the threads are running on separate processors.
4
Concurrent Execution An interrupt, followed by a context switch, can take place between any two instructions. Hence the pattern of instruction overlapping and interleaving is unpredictable. Processes and threads execute asynchronously – we cannot predict if event a in process i will occur before event b in process j.
5
Sharing and Concurrency System resources (files, devices, even memory) are shared by processes, threads, and the OS. Uncontrolled access to shared entities can cause data integrity problems. Example: suppose two threads (1 and 2) have access to a shared (global) variable "balance", which represents a bank account. Each thread has its own private (local) variable "withdrawal_i", where i is the thread number.
6
Example
Let balance = 100, withdrawal_1 = 50, and withdrawal_2 = 75. Thread i will execute the following algorithm:
    if (balance >= withdrawal_i)
        balance = balance - withdrawal_i
    else
        // print "Can't overdraw account!"
If thread 1 executes first, balance will be 50 and thread 2 can't withdraw funds. If thread 2 executes first, balance will be 25 and thread 1 can't withdraw funds.
7
But: what if the two threads execute concurrently instead of sequentially? Break the algorithm down into machine-level operations:
    if (balance >= withdrawal_i)
        - move balance to a register
        - compare register to withdrawal_i
        - branch if less-than
    balance = balance - withdrawal_i
        - register = register - withdrawal_i
        - store register contents in balance
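To make the race concrete, here is a minimal sketch of the unprotected withdrawal as a C program with two POSIX threads (the slides do not prescribe a language; the function and variable names are illustrative). Because the check and the update compile to several machine operations, both threads can read balance = 100 before either stores its result, so repeated runs can end with balance = 50 or 25 even though both withdrawals appear to succeed.

    #include <pthread.h>
    #include <stdio.h>

    int balance = 100;                    /* shared (global) variable */

    void *withdraw(void *arg) {
        int amount = *(int *)arg;         /* private withdrawal amount */
        if (balance >= amount)            /* load balance, compare, branch   */
            balance = balance - amount;   /* load again, subtract, store back */
        else
            printf("Can't overdraw account!\n");
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        int w1 = 50, w2 = 75;
        pthread_create(&t1, NULL, withdraw, &w1);
        pthread_create(&t2, NULL, withdraw, &w2);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("final balance = %d\n", balance);  /* may be 50 or 25 */
        return 0;
    }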
8
Example - Multiprocessor (a possible instruction sequence showing interleaved execution; the numbers give the global order of the operations)
Thread 1:
    (2) move balance to register 1 (register 1 = 100)
    (4) compare register 1 to withdraw_1
    (5) register 1 = register 1 - withdraw_1 (100 - 50)
    (7) store register 1 in balance (balance = 50)
Thread 2:
    (1) move balance to register 2 (register 2 = 100)
    (3) compare register 2 to withdraw_2
    (6) register 2 = register 2 - withdraw_2 (100 - 75)
    (8) store register 2 in balance (balance = 25)
Result: balance = 25, even though both threads withdrew (50 + 75 = 125).
9
Example - Uniprocessor (a possible instruction sequence showing interleaved execution)
    Thread 1: moves balance to register (Reg = 100)
    Thread 1's time slice expires - its state is saved
    Thread 2: moves balance to register
    Thread 2: balance >= withdraw_2
    Thread 2: balance = balance - withdraw_2 (100 - 75 = 25)
    Thread 1 is re-scheduled; its state is restored (Reg = 100)
    Thread 1: balance = balance - withdraw_1 (100 - 50)
    Result: balance = 50 (Thread 2's withdrawal is lost)
10
Race Conditions
The previous examples illustrate a race condition (data race): an undesirable condition that exists when several processes access shared data, and
    - at least one access is a write, and
    - the accesses are not mutually exclusive.
Race conditions can lead to inconsistent results.
11
Mutual Exclusion Mutual exclusion forces serial resource access as opposed to concurrent access. When one thread locks a critical resource, no other thread can access it until the lock is released. Critical section (CS): code that accesses shared resources. Mutual exclusion guarantees that only one process/thread at a time can execute its critical section, with respect to a given resource.
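As a minimal sketch (not from the slides), this is how a critical section looks when a pthreads mutex provides the lock; shared_counter and update() are illustrative names:

    #include <pthread.h>

    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    long shared_counter = 0;              /* the shared resource */

    void *update(void *arg) {
        pthread_mutex_lock(&lock);        /* only one thread at a time passes this  */
        shared_counter++;                 /* critical section: touches shared data  */
        pthread_mutex_unlock(&lock);      /* release so other threads may enter     */
        return NULL;
    }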
12
Mutual Exclusion Requirements
A solution must ensure that only one process/thread at a time can access a shared resource. In addition, a good solution will ensure that
    - if no thread is in the CS, a thread that wants to execute its CS must be allowed to do so
    - when 2 or more threads want to enter their CSs, the decision about which one enters cannot be postponed indefinitely
    - every thread should have a chance to execute its critical section (no starvation)
13
Solution Model
    Begin_mutual_exclusion   /* some mutex primitive */
    execute critical section
    End_mutual_exclusion     /* some mutex primitive */
The problem: how to implement the mutex primitives?
    - Busy-wait solutions (e.g., the test-and-set operation, spinlocks of various sorts, Peterson's algorithm)
    - Semaphores (usually an OS feature; blocks the waiting process)
    - Monitors (a language feature - e.g., Java)
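As an illustration of the first option, here is a sketch of a busy-wait (spinlock) implementation of the two mutex primitives using a test-and-set style operation. C11's atomic_flag is an assumption of this sketch; the slides do not prescribe a particular mechanism.

    #include <stdatomic.h>

    atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear = unlocked */

    void begin_mutual_exclusion(void) {
        /* test-and-set: atomically set the flag and return its old value;
           busy wait (spin) while another thread already holds the lock */
        while (atomic_flag_test_and_set(&lock))
            ;
    }

    void end_mutual_exclusion(void) {
        atomic_flag_clear(&lock);          /* release the lock */
    }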
14
Semaphores Definition: an integer variable on which processes can perform two indivisible operations, P( ) and V( ), plus initialization. (P and V are sometimes called Wait and Signal.) Each semaphore has a wait queue associated with it. Semaphores are protected by the operating system.
15
Semaphores
    - Binary semaphore: the only values are 1 and 0.
    - Traditional semaphore: may be initialized to any non-negative value; can count down to zero.
    - Counting semaphore: P and V operations may reduce the semaphore value below 0, in which case the negative value records the number of blocked processes. (See the CS 490 textbook.)
16
Semaphores
Semaphores are used to synchronize and coordinate processes and/or threads.
    - Calling the P (wait) operation may cause a process to block.
    - Calling the V (signal) operation never causes a process to block, but it may wake a process that was blocked by a previous P operation.
17
Traditional Semaphore
    P(S): if S >= 1 then S = S - 1
          else block the process on the S queue
    V(S): if some processes are blocked on the S queue then unblock a process
          else S = S + 1
Counting Semaphore
    P(S): S = S - 1
          if (S < 0) then block the process on the S queue
    V(S): S = S + 1
          if (S <= 0) then move a process from the S queue to the Ready queue
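A sketch of how the traditional-semaphore definition above could be realized in user code with a pthreads mutex and condition variable (the csem_* type and function names are invented for this example; a real OS implements this inside the kernel). The while loop also guards against spurious wakeups, which the abstract definition does not have to consider.

    #include <pthread.h>

    typedef struct {
        int count;                       /* current semaphore value (never < 0 here) */
        pthread_mutex_t lock;            /* makes P and V indivisible                */
        pthread_cond_t  queue;           /* stands in for the semaphore's wait queue */
    } csem_t;

    void csem_init(csem_t *s, int value) {
        s->count = value;
        pthread_mutex_init(&s->lock, NULL);
        pthread_cond_init(&s->queue, NULL);
    }

    void csem_P(csem_t *s) {             /* P / wait */
        pthread_mutex_lock(&s->lock);
        while (s->count < 1)             /* S < 1: block on the S queue */
            pthread_cond_wait(&s->queue, &s->lock);
        s->count = s->count - 1;         /* S = S - 1 */
        pthread_mutex_unlock(&s->lock);
    }

    void csem_V(csem_t *s) {             /* V / signal */
        pthread_mutex_lock(&s->lock);
        s->count = s->count + 1;         /* S = S + 1 */
        pthread_cond_signal(&s->queue);  /* unblock one waiter, if any */
        pthread_mutex_unlock(&s->lock);
    }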
18
Usage - Mutual Exclusion
Using a semaphore to enforce mutual exclusion:
    P(mutex)      // mutex initially = 1
    execute CS;
    V(mutex)
Each process that uses a shared resource must first check (using P) that no other process is in the critical section, and then must use V to release the critical section.
19
Bank Problem Revisited (semaphore S = 1)
Thread 1:
    P(S)
    move balance to register 1
    compare register 1 to withdraw_1
    register 1 = register 1 - withdraw_1
    store register 1 in balance
    V(S)
Thread 2:
    P(S)
    move balance to register 2
    compare register 2 to withdraw_2
    register 2 = register 2 - withdraw_2
    store register 2 in balance
    V(S)
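The same protected sequence as a runnable C sketch using POSIX semaphores, where sem_wait and sem_post play the roles of P and V (thread and variable names are illustrative, not from the slides):

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    int balance = 100;
    sem_t S;                         /* binary semaphore guarding balance */

    void *withdraw(void *arg) {
        int amount = *(int *)arg;
        sem_wait(&S);                /* P(S): enter the critical section */
        if (balance >= amount)
            balance = balance - amount;
        else
            printf("Can't overdraw account!\n");
        sem_post(&S);                /* V(S): leave the critical section */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        int w1 = 50, w2 = 75;
        sem_init(&S, 0, 1);          /* S = 1: at most one thread in the CS */
        pthread_create(&t1, NULL, withdraw, &w1);
        pthread_create(&t2, NULL, withdraw, &w2);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("final balance = %d\n", balance);  /* 50 or 25; never a lost update */
        return 0;
    }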
20
Example - Uniprocessor
    1. Thread 1: P(S) - S is decremented: S = 0; T1 continues to execute.
    2. Thread 1: moves balance to register (Reg = 100).
    3. T1's time slice expires - its state is saved.
    4. Thread 2: P(S) - since S = 0, T2 is blocked.
    5. T1 is re-scheduled; its state is restored (Reg = 100).
    6. Thread 1: balance = balance - withdraw_1 (100 - 50).
    7. Thread 1: V(S) - Thread 2 returns to the run state; S remains 0.
    8. Thread 2 resumes executing some time after T1 executes V(S): moves balance to register (50).
    9. Thread 2: balance >= withdraw_2? Since !(50 >= 75), T2 does not make the withdrawal.
    10. Thread 2: V(S) - since no thread is waiting, S is set back to 1.
21
Critical Sections are Indivisible The effect of mutual exclusion is to make a critical section appear to be "indivisible" - much like a hardware instruction. (Recall the atomic nature of a transaction.) In the bank example, once T1 enters its critical section, no other thread is allowed to operate on balance until T1 signals that it has left the CS. (This assumes that all users employ mutual exclusion.)
22
Implementing Semaphores: P and V Must Be Indivisible Semaphore operations themselves must be indivisible, or atomic; i.e., they execute under mutual exclusion. Once the OS begins to execute a P or V operation, it cannot allow another P or V to begin on the same semaphore.
23
P and V Must Be Indivisible
The P operation must be indivisible; otherwise there is no guarantee that two processes won't test S at the "same" time and both find it equal to 1.
    P(S): if S >= 1 then S = S - 1
          else block the process on the S queue
Two V operations executed at the same time could unblock two processes, leading to two processes in their critical sections concurrently.
    V(S): if some processes are blocked on the queue for S then unblock a process
          else S = S + 1
24
Expanded view of a semaphore-protected critical section:
    P(S):  if S >= 1 then S = S - 1
           else block the process on the S queue
    execute critical section
    V(S):  if processes are blocked on the queue for S then unblock a process
           else S = S + 1
25
Semaphore Usage - Event Wait (synchronization that isn't mutual exclusion) Suppose a process P2 wants to wait on an event of some sort (call it A) which is to be executed by another process P1. Initialize a shared semaphore to 0. By executing a wait (P) on the semaphore, P2 will wait until P1 executes event A and signals, using the V operation.
26
Event Wait - Example
    semaphore signal = 0;
Process 1:
    ...
    execute event A
    V(signal)
Process 2:
    ...
    P(signal)
    ...
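A C sketch of this event-wait pattern with a POSIX semaphore initialized to 0; two threads stand in for the two processes, and the names are illustrative:

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    sem_t event;                      /* initialized to 0: "event A has not happened yet" */

    void *p1(void *arg) {             /* Process 1: performs event A, then signals */
        printf("event A happens\n");
        sem_post(&event);             /* V(signal) */
        return NULL;
    }

    void *p2(void *arg) {             /* Process 2: waits until event A has happened */
        sem_wait(&event);             /* P(signal): blocks if A has not happened yet */
        printf("continuing after event A\n");
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        sem_init(&event, 0, 0);       /* semaphore signal = 0 */
        pthread_create(&t2, NULL, p2, NULL);
        pthread_create(&t1, NULL, p1, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }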
27
Semaphores Are Not Perfect
    - The programmer must know something about the other processes using the semaphore.
    - Semaphores must be used carefully (be sure to use them when needed; don't leave out a V(), etc.).
    - It is hard to prove program correctness when using semaphores.
28
Other Synchronization Problems (in addition to simple mutual exclusion)
    - Dining philosophers: resource deadlock
    - Producer-consumer: buffering (as of messages, input data, etc.)
    - Readers-writers: database or file sharing
        - readers' priority
        - writers' priority
29
Producer-Consumer Producer processes and consumer processes share a (usually finite) pool of buffers. Producers add data to the pool. Consumers remove data, in FIFO order.
30
Producer-Consumer Requirements The processes are asynchronous. A solution must ensure that producers don't deposit data if the pool is full and consumers don't take data if the pool is empty. Access to the buffer pool must be mutually exclusive, since multiple consumers (or producers) may try to access the pool simultaneously.
31
Bounded Buffer P/C Algorithm
Initialization: s = 1; n = 0; e = sizeofbuffer;
Producer:
    while (true)
        produce v;
        P(e);        // wait for an empty buffer slot
        P(s);        // wait for buffer pool access
        append(v);
        V(s);        // release the buffer pool
        V(n);        // signal a full buffer
Consumer:
    while (true)
        P(n);        // wait for a full buffer
        P(s);        // wait for buffer pool access
        w := take();
        V(s);        // release the buffer pool
        V(e);        // signal an empty buffer
        consume(w);
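A runnable C sketch of this bounded-buffer algorithm using POSIX semaphores. BUFSIZE, the item counts, and the array representation of the buffer are choices of this sketch, not part of the slide:

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define BUFSIZE 8

    int buffer[BUFSIZE];
    int in = 0, out = 0;                     /* append/take positions (FIFO) */

    sem_t s;                                 /* s = 1: mutual exclusion on the buffer */
    sem_t n;                                 /* n = 0: number of full slots           */
    sem_t e;                                 /* e = BUFSIZE: number of empty slots    */

    void *producer(void *arg) {
        for (int v = 0; v < 20; v++) {       /* produce v */
            sem_wait(&e);                    /* P(e): wait for an empty slot */
            sem_wait(&s);                    /* P(s): wait for buffer access */
            buffer[in] = v;                  /* append(v) */
            in = (in + 1) % BUFSIZE;
            sem_post(&s);                    /* V(s): release the buffer     */
            sem_post(&n);                    /* V(n): signal a full slot     */
        }
        return NULL;
    }

    void *consumer(void *arg) {
        for (int i = 0; i < 20; i++) {
            sem_wait(&n);                    /* P(n): wait for a full slot   */
            sem_wait(&s);                    /* P(s): wait for buffer access */
            int w = buffer[out];             /* w = take()                   */
            out = (out + 1) % BUFSIZE;
            sem_post(&s);                    /* V(s): release the buffer     */
            sem_post(&e);                    /* V(e): signal an empty slot   */
            printf("consumed %d\n", w);      /* consume(w)                   */
        }
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        sem_init(&s, 0, 1);
        sem_init(&n, 0, 0);
        sem_init(&e, 0, BUFSIZE);
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }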
32
Readers and Writers Problem
Characteristics:
    - concurrent processes access a shared data area (files, a block of memory, a set of registers)
    - some processes only read information; others write (modify and add) information
Restrictions:
    - Multiple readers may read concurrently, but when a writer is writing, there should be no other writers or readers.
33
Compare to Prod/Cons
Differences between readers/writers (R/W) and producer/consumer (P/C):
    - Data in P/C is ordered - placed into the buffer and retrieved according to a FIFO discipline. All data is read exactly once.
    - In R/W, the same data may be read many times by many readers, or data may be written by a writer and changed before any reader reads it. No order is enforced on reads.
34
// Initialization code (done only once)
integer readcount = 0;
semaphore x = 1, wsem = 1;

procedure writer;
begin
  repeat
    P(wsem);
    write data;
    V(wsem);
  forever
end;

procedure reader;
begin
  repeat
    P(x);
    readcount = readcount + 1;
    if readcount == 1 then P(wsem);   // first reader locks out writers
    V(x);
    read data;
    P(x);
    readcount = readcount - 1;
    if readcount == 0 then V(wsem);   // last reader lets writers back in
    V(x);
  forever
end;
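The same readers-priority solution translated into a C sketch with POSIX semaphores and pthreads; the read/write bodies and thread counts are illustrative:

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    int shared_data = 0;
    int readcount = 0;
    sem_t x;                          /* protects readcount                    */
    sem_t wsem;                       /* writers (and the first reader) wait here */

    void *reader(void *arg) {
        sem_wait(&x);
        readcount++;
        if (readcount == 1)           /* first reader locks out writers */
            sem_wait(&wsem);
        sem_post(&x);

        printf("read %d\n", shared_data);   /* read data */

        sem_wait(&x);
        readcount--;
        if (readcount == 0)           /* last reader lets writers back in */
            sem_post(&wsem);
        sem_post(&x);
        return NULL;
    }

    void *writer(void *arg) {
        sem_wait(&wsem);              /* exclusive access */
        shared_data++;                /* write data */
        sem_post(&wsem);
        return NULL;
    }

    int main(void) {
        pthread_t r[3], w;
        sem_init(&x, 0, 1);
        sem_init(&wsem, 0, 1);
        pthread_create(&w, NULL, writer, NULL);
        for (int i = 0; i < 3; i++) pthread_create(&r[i], NULL, reader, NULL);
        pthread_join(w, NULL);
        for (int i = 0; i < 3; i++) pthread_join(r[i], NULL);
        return 0;
    }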
35
Any Questions? Can you think of any real examples of producer-consumer or reader-writer situations?
36
Semaphores and User Thread Library
Thread libraries can simulate real semaphores.
    - In a multi-(user-level)-threaded process, the OS sees only a single thread of execution; e.g., T1, T1, T1, L, L, T2, T2, L, L, T1, T1, ... (L = library code)
    - Library functions execute when a u-thread voluntarily yields control.
    - Use a variable as a semaphore; access it via P and V functions. If a thread executes P(S) and finds S = 0, it yields control.
37
Semaphores and User Thread Library Why is this safe? Because there is really never more than one thread of control - violations of mutual exclusion happen when separate threads are scheduled concurrently. A user-level thread decides when to yield control; a kernel-level thread can be preempted at any time. If the library is asked to execute P(S) or V(S), it will not be interrupted by another thread in the same process, so there is no danger.