CPSC 668 (CSCE 668): Distributed Algorithms and Systems, Fall 2011
Prof. Jennifer Welch
Set 6: Mutual Exclusion in Shared Memory

Shared Memory Model
- Processors communicate via a set of shared variables, instead of passing messages.
- Each shared variable has a type, defining a set of operations that can be performed atomically.

Shared Memory Model Example
[figure: processors p_0, p_1, p_2 share variables X and Y; arrows indicate read and write operations]

Shared Memory Model
Changes to the model from the message-passing case:
- no inbuf and outbuf state components
- a configuration includes a value for each shared variable
- the only event type is a computation step by a processor
- an execution is admissible if every processor takes an infinite number of steps

Computation Step in Shared Memory Model
When processor p_i takes a step:
- p_i's state in the old configuration specifies which shared variable is to be accessed and with which operation
- the operation is performed: the shared variable's value in the new configuration changes according to the operation's semantics
- p_i's state in the new configuration changes according to its old state and the result of the operation

Observations on SM Model
- Accesses to the shared variables are modeled as occurring instantaneously (atomically) during a computation step, one access per step.
- The definition of admissible execution implies that the system is asynchronous and that there are no failures.

Mutual Exclusion (Mutex) Problem
Each processor's code is divided into four sections:
- entry: synchronize with others to ensure mutually exclusive access to the...
- critical: use some resource; when done, enter the...
- exit: clean up; when done, enter the...
- remainder: not interested in using the resource
[figure: cycle entry -> critical -> exit -> remainder -> entry]

Mutual Exclusion Algorithms
A mutual exclusion algorithm specifies code for the entry and exit sections to ensure:
- mutual exclusion: at most one processor is in its critical section at any time, and
- some kind of "liveness" or "progress" condition. There are three commonly considered ones...

Mutex Progress Conditions
- no deadlock: if a processor is in its entry section at some time, then later some processor is in its critical section
- no lockout: if a processor is in its entry section at some time, then later that same processor is in its critical section
- bounded waiting: no lockout, and in addition, while a processor is in its entry section, each other processor enters the critical section no more than a bounded number of times
These conditions are increasingly strong.

Mutual Exclusion Algorithms
The code for the entry and exit sections is allowed to assume that:
- no processor stays in its critical section forever
- shared variables used in the entry and exit sections are not accessed during the critical and remainder sections

Complexity Measure for Mutex
An important complexity measure for shared memory mutex algorithms is the amount of shared space needed. Space complexity is affected by:
- how powerful the type of the shared variables is
- how strong the progress property to be satisfied is (no deadlock vs. no lockout vs. bounded waiting)

Mutex Results Using RMW
When using powerful shared variables of "read-modify-write" type, the number of shared memory states needed is:

  progress condition      | upper bound             | lower bound
  no deadlock             | 2 (test&set algorithm)  | 2 (obvious)
  no lockout (memoryless) | n/2 + c (Burns et al.)  | n/2 (Burns et al.)
  bounded waiting         | n^2 (queue algorithm)   | n (Burns & Lynch)

Mutex Results Using Read/Write
When using read/write shared variables, the number of distinct variables needed is:

  progress condition | upper bound                              | lower bound
  no deadlock        |                                          | n (Burns & Lynch)
  no lockout         | 3n booleans (tournament algorithm)       |
  bounded waiting    | 2n unbounded variables (bakery algorithm) |

Test-and-Set Shared Variable
A test-and-set variable V holds two values, 0 or 1, and supports two (atomic) operations:

  test&set(V):
    temp := V
    V := 1
    return temp

  reset(V):
    V := 0

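For readers who want to relate this to a real primitive, the following minimal C sketch (an addition, not from the slides) wraps C11's atomic_flag so that its operations match the test&set/reset notation above; the names ts_var, test_and_set, and reset are choices made for this sketch.

    #include <stdatomic.h>
    #include <stdbool.h>

    /* A test-and-set variable: clear corresponds to 0, set to 1. */
    typedef atomic_flag ts_var;              /* initialize with ATOMIC_FLAG_INIT */

    /* test&set(V): temp := V; V := 1; return temp */
    static bool test_and_set(ts_var *V) {
        return atomic_flag_test_and_set(V);  /* returns the previous value */
    }

    /* reset(V): V := 0 */
    static void reset(ts_var *V) {
        atomic_flag_clear(V);
    }
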
Mutex Algorithm Using Test&Set
Code for entry section:
  repeat
    t := test&set(V)
  until (t = 0)
(Equivalently, the entry section can be written as: wait until test&set(V) = 0.)

Code for exit section:
  reset(V)

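As a concrete illustration (my sketch, not part of the original slides), the entry and exit sections above map directly onto C11 atomics: atomic_flag_test_and_set is the test&set and atomic_flag_clear is the reset. The thread demo around the lock is purely illustrative.

    #include <stdatomic.h>
    #include <pthread.h>
    #include <stdio.h>

    /* V plays the role of the test-and-set variable: clear = 0, set = 1. */
    static atomic_flag V = ATOMIC_FLAG_INIT;
    static int counter = 0;              /* the "resource" protected by the lock */

    static void entry_section(void) {
        /* repeat t := test&set(V) until t = 0 */
        while (atomic_flag_test_and_set(&V))
            ;                            /* spin */
    }

    static void exit_section(void) {
        atomic_flag_clear(&V);           /* reset(V) */
    }

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            entry_section();
            counter++;                   /* critical section */
            exit_section();
        }
        return NULL;
    }

    int main(void) {
        pthread_t t[4];
        for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
        for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
        printf("counter = %d\n", counter);   /* expect 400000 */
        return 0;
    }
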
Mutual Exclusion is Ensured
Suppose not, and consider the first violation: some p_i enters the CS while another processor p_j is already in the CS.
- When p_j entered the CS, it saw V = 0 and set V to 1.
- No processor has left the CS since then, so V stays 1.
- Therefore p_i's test&set cannot return 0, so p_i cannot enter the CS. Impossible!

No Deadlock
Claim: V = 0 iff no processor is in the CS.
The proof of the claim is by induction on the events of the execution, and relies on the fact that mutual exclusion holds.
Now suppose there is a time after which some processor is in its entry section but no processor ever enters the CS.
- Since no processor enters the CS, from some point on no processor is in the CS.
- By the claim, V then always equals 0, so the next test&set by a processor in its entry section returns 0 and that processor enters the CS. Contradiction!

What About No Lockout?
One processor could always grab V (i.e., win the test&set competition) and starve the others.
So no lockout does not hold, and therefore bounded waiting does not hold either.

Read-Modify-Write Shared Variable
The state of this kind of variable can be anything, of any size.
Variable V supports the (atomic) operation rmw(V, f), where f is any function:

  temp := V
  V := f(V)
  return temp

This variable type is so strong that, from a theoretical perspective, there is no point in having multiple variables.

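To make the rmw(V, f) semantics concrete, here is a small modeling sketch (my own, not from the slides) in C; the atomicity of the operation is simulated with a pthread mutex, and the names rmw_var and rmw_fn are hypothetical.

    #include <pthread.h>

    typedef int (*rmw_fn)(int);          /* f maps the old value to the new value */

    typedef struct {
        int value;                       /* the state of the shared variable */
        pthread_mutex_t lock;            /* simulates the atomicity of rmw */
    } rmw_var;

    /* Atomically: temp := V; V := f(V); return temp */
    static int rmw(rmw_var *V, rmw_fn f) {
        pthread_mutex_lock(&V->lock);
        int temp = V->value;
        V->value = f(temp);
        pthread_mutex_unlock(&V->lock);
        return temp;
    }

    /* Example declaration of such a variable: */
    static rmw_var V = { 0, PTHREAD_MUTEX_INITIALIZER };
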
Mutex Algorithm Using RMW
- Conceptually, the list of waiting processors is stored in a circular queue of length n.
- Each waiting processor remembers its location in the queue in its local state (instead of keeping this information in the shared variable).
- The shared RMW variable V keeps track of the active part of the queue with first and last pointers, which are indices into the queue (between 0 and n-1). So V has two components, first and last.

Conceptual Data Structure
[figure: circular queue of n slots with first and last indices]
The RMW shared object just contains these two "pointers".

Mutex Algorithm Using RMW
Code for entry section:
  // increment last to enqueue self
  position := rmw(V, (V.first, V.last+1))
  // wait until first equals this value
  repeat
    queue := rmw(V, V)
  until (queue.first = position.last)

Code for exit section:
  // dequeue self
  rmw(V, (V.first+1, V.last))

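A rough C sketch of this queue algorithm (an illustration, not the slides' code) follows. For simplicity it keeps the first and last components in two separate C11 atomic counters rather than in a single two-component RMW variable, and it lets the counters grow without wrapping modulo n, assuming they do not overflow; this is essentially a ticket lock.

    #include <stdatomic.h>

    /* The two components of the conceptual RMW variable V, kept here as two
       separate atomic counters (an assumption of this sketch). */
    static atomic_uint first = 0;   /* position currently allowed into the CS */
    static atomic_uint last  = 0;   /* next position to hand out */

    static void entry_section(void) {
        /* enqueue self: atomically read and increment last */
        unsigned my_position = atomic_fetch_add(&last, 1);
        /* spin until first reaches my position */
        while (atomic_load(&first) != my_position)
            ;                        /* spin */
    }

    static void exit_section(void) {
        /* dequeue self: let the next waiting processor in */
        atomic_fetch_add(&first, 1);
    }
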
Correctness Sketch
- Mutual exclusion: Only the processor at the head of the queue (the one whose position equals V.first) can enter the CS, and only one processor is at the head at any time.
- n-Bounded waiting: The FIFO order of enqueueing, together with the fact that no processor stays in the CS forever, gives this result.

Space Complexity
- The shared RMW variable V has two components in its state, first and last.
- Both are integers that take on values from 0 to n-1, i.e., n different values each.
- The total number of different states of V is thus n^2, so the required size of V is about 2*log2(n) bits.

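For example, with n = 16 (an illustrative value, not from the slides), V has 16^2 = 256 possible states, which fit in log2(256) = 8 = 2*log2(16) bits.
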
Spinning
- A drawback of the RMW queue algorithm is that processors in the entry section repeatedly access the same shared variable; this is called spinning.
- Having multiple processors spinning on the same shared variable can be very time-inefficient in certain multiprocessor architectures.
- Idea: alter the queue algorithm so that each waiting processor spins on a different shared variable.

RMW Mutex Algorithm With Separate Spinning
Shared RMW variables:
- Last: corresponds to the last "pointer" from the previous algorithm
  - cycles through 0 to n-1
  - keeps track of the index to be given to the next processor that starts waiting
  - initially 0

RMW Mutex Algorithm With Separate Spinning
Shared RMW variables (continued):
- Flags[0..n-1]: an array of binary variables
  - these are the variables that processors spin on
  - make sure no two processors spin on the same variable at the same time
  - initially Flags[0] = 1 (that processor "has lock") and Flags[i] = 0 ("must wait") for i > 0

Overview of Algorithm
Entry section:
- get the next index from Last, storing it in a local variable myPlace, and increment Last (with wrap-around)
- spin on Flags[myPlace] until it equals 1 (meaning this processor "has lock" and can enter the CS)
- set Flags[myPlace] to 0 ("doesn't have lock")
Exit section:
- set Flags[(myPlace+1) mod n] to 1, i.e., give priority to the next processor (modular arithmetic handles the wrap-around)

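The following C sketch (my illustration, not from the slides) captures this array-based scheme, which is essentially an Anderson-style array lock. It assumes a fixed number of processors N, simulates the wrap-around of Last by taking the fetch-and-increment result mod N (safe as long as the counter does not overflow), and keeps each thread's myPlace in thread-local storage.

    #include <stdatomic.h>

    #define N 8                                  /* number of processors (assumed) */

    static atomic_uint Last = 0;                 /* next queue index to hand out */
    static atomic_uint Flags[N] = { 1 };         /* Flags[0] = 1 ("has lock"), rest 0 */

    static _Thread_local unsigned myPlace;       /* this thread's slot in the queue */

    static void entry_section(void) {
        /* take the next index and advance Last (wrap-around via mod N) */
        myPlace = atomic_fetch_add(&Last, 1) % N;
        /* spin on my own flag until I "have the lock" */
        while (atomic_load(&Flags[myPlace]) == 0)
            ;                                    /* spin */
        atomic_store(&Flags[myPlace], 0);        /* "doesn't have lock" */
    }

    static void exit_section(void) {
        /* pass the lock to the next waiting processor */
        atomic_store(&Flags[(myPlace + 1) % N], 1);
    }
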
Question
Do the shared variables Last and Flags have to be RMW variables?
Answer: The RMW semantics (atomically reading and updating a variable) are needed for Last, to make sure two processors don't get the same index at overlapping times.

Invariants of the Algorithm
1. At most one element of Flags has value 1 ("has lock").
2. If no element of Flags has value 1, then some processor is in the CS.
3. If Flags[k] = 1, then exactly (Last - k) mod n processors are in the entry section, spinning on Flags[i], for i = k, (k+1) mod n, ..., (Last - 1) mod n.

Example of Invariant
Flags = [0, 0, 1, 0, 0, 0, 0, 0] (indices 0 through 7) and Last = 5.
Here k = 2 and Last = 5, so (5 - 2) mod 8 = 3 processors are in the entry section, spinning on Flags[2], Flags[3], and Flags[4].

Correctness
These three invariants can be used to prove that:
- mutual exclusion is satisfied, and
- n-bounded waiting is satisfied.

Lower Bound on Number of Memory States
Theorem (4.4): Any mutex algorithm with k-bounded waiting (and no deadlock) uses at least n states of shared memory.
Proof: Assume for contradiction that there is an algorithm using fewer than n states of shared memory.

Lower Bound on Number of Memory States
Consider the following execution of the algorithm: starting from the initial configuration C, processor p_0 runs alone until it is in the critical section (possible by no deadlock), giving configuration C_0; then p_1 takes one step of its entry section, giving C_1; then p_2 takes one step, giving C_2; and so on, until p_{n-1} takes one step, giving C_{n-1}.
Since there are fewer than n shared memory states, there exist i < j such that C_i and C_j have the same state of shared memory.

Lower Bound on Number of Memory States
In C_i: p_0 is in the CS, p_1 through p_i are in their entry sections, and the rest are in the remainder.
In C_j: p_0 is in the CS, p_1 through p_j are in their entry sections, and the rest are in the remainder. (C_j is reached from C_i by one step each of p_{i+1}, p_{i+2}, ..., p_j.)
Let σ be a schedule, applied from C_i, in which p_0 through p_i take steps in round robin; by no deadlock, some processor p_h enters the CS k+1 times in σ.
Now apply σ from C_j instead: p_h enters the CS k+1 times while p_{i+1} remains in its entry section, violating k-bounded waiting.

Lower Bound on Number of Memory States
But why does p_h do the same thing when executing the sequence of steps in σ starting from C_j as when starting from C_i?
All the processors p_0, ..., p_i do the same thing because:
- they are in the same states in the two configurations
- the shared memory state is the same in the two configurations
- the only differences between C_i and C_j are (potentially) the states of p_{i+1}, ..., p_j, and those processors take no steps in σ

Discussion of Lower Bound
- The lower bound of n just shown on the number of memory states holds only for algorithms that must provide bounded waiting in every execution.
- If the liveness condition is weakened to just no lockout in every execution, the bound becomes n/2 distinct shared memory states.
- If liveness is weakened further to just no deadlock in every execution, the bound is just 2.

"Beating" the Lower Bound with Randomization
- An alternative way to weaken the requirement is to give up on requiring liveness in every execution.
- Consider probabilistic no lockout: every processor has a non-zero probability of succeeding each time it is in its entry section.
- With this weaker requirement, there is an algorithm using O(1) states of shared memory.