CPSC 668 Set 6: Mutual Exclusion in Shared Memory (CPSC 668 Distributed Algorithms and Systems, Fall 2009, Prof. Jennifer Welch)


Slide 1: CPSC 668 Distributed Algorithms and Systems, Fall 2009, Prof. Jennifer Welch. Set 6: Mutual Exclusion in Shared Memory.

Slide 2: Shared Memory Model
Processors communicate via a set of shared variables, instead of passing messages. Each shared variable has a type, defining a set of operations that can be performed atomically.

Slide 3: Shared Memory Model Example
[Diagram: processors p_0, p_1, p_2 connected to shared variables X and Y by read and write operations.]

Slide 4: Shared Memory Model
Changes to the model from the message-passing case:
– no inbuf and outbuf state components
– a configuration includes a value for each shared variable
– the only event type is a computation step by a processor
– an execution is admissible if every processor takes an infinite number of steps

Slide 5: Computation Step in Shared Memory Model
When processor p_i takes a step:
– p_i's state in the old configuration specifies which shared variable is to be accessed and with which operation
– the operation is done: the shared variable's value in the new configuration changes according to the operation's semantics
– p_i's state in the new configuration changes according to its old state and the result of the operation

Slide 6: Observations on SM Model
Accesses to the shared variables are modeled as occurring instantaneously (atomically) during a computation step, one access per step.
The definition of admissible execution implies:
– asynchrony
– no failures

Slide 7: Mutual Exclusion (Mutex) Problem
Each processor's code is divided into four sections:
– entry: synchronize with others to ensure mutually exclusive access to the…
– critical: use some resource; when done, enter the…
– exit: clean up; when done, enter the…
– remainder: not interested in using the resource
[Diagram: cycle entry → critical → exit → remainder → entry.]
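This four-section structure can be pictured as each processor running an infinite loop. A minimal Java sketch (not from the slides; the Lock interface and all names in it are illustrative assumptions):

    // A processor's life cycle in the mutual exclusion problem: an infinite loop
    // through entry, critical, exit, and remainder sections. "Lock" is a
    // hypothetical interface standing in for a mutex algorithm's entry/exit code.
    interface Lock {
        void entry(); // entry section: may block until the critical section is free
        void exit();  // exit section: release the critical section
    }

    class Processor implements Runnable {
        private final Lock lock;

        Processor(Lock lock) { this.lock = lock; }

        @Override
        public void run() {
            while (true) {
                lock.entry();       // entry section
                criticalSection();  // critical section: use some resource
                lock.exit();        // exit section
                remainder();        // remainder section: not interested in the resource
            }
        }

        private void criticalSection() { /* use the shared resource */ }
        private void remainder()       { /* unrelated local work */ }
    }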

Slide 8: Mutual Exclusion Algorithms
A mutual exclusion algorithm specifies code for the entry and exit sections to ensure:
– mutual exclusion: at most one processor is in its critical section at any time, and
– some kind of "liveness" or "progress" condition. There are three commonly considered ones…

Slide 9: Mutex Progress Conditions
– no deadlock: if a processor is in its entry section at some time, then later some processor is in its critical section
– no lockout: if a processor is in its entry section at some time, then later that same processor is in its critical section
– bounded waiting: no lockout, plus: while a processor is in its entry section, other processors enter the critical section no more than a certain number of times
These conditions are increasingly strong.

Slide 10: Mutual Exclusion Algorithms
The code for the entry and exit sections is allowed to assume that:
– no processor stays in its critical section forever
– shared variables used in the entry and exit sections are not accessed during the critical and remainder sections

Slide 11: Complexity Measure for Mutex
An important complexity measure for shared memory mutex algorithms is the amount of shared space needed. Space complexity is affected by:
– how powerful the type of the shared variables is
– how strong the progress property to be satisfied is (no deadlock vs. no lockout vs. bounded waiting)

Slide 12: Mutex Results Using RMW
When using powerful shared variables of "read-modify-write" type, the number of shared memory states needed is:
– no deadlock: upper bound 2 (test&set alg.), lower bound 2 (obvious)
– no lockout (memoryless): upper bound n/2 + c (Burns et al.), lower bound n/2 (Burns et al.)
– bounded waiting: upper bound n^2 (queue alg.), lower bound n (Burns & Lynch)

Slide 13: Mutex Results Using Read/Write
When using read/write shared variables, the number of distinct variables needed is:
– no deadlock: lower bound n (Burns & Lynch)
– no lockout: upper bound 3n boolean variables (tournament alg.)
– bounded waiting: upper bound 2n unbounded variables (bakery alg.)

Slide 14: Test-and-Set Shared Variable
A test-and-set variable V holds two values, 0 or 1, and supports two (atomic) operations:
– test&set(V): temp := V; V := 1; return temp
– reset(V): V := 0
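As a concrete illustration, here is a minimal Java sketch of such a variable, with atomicity supplied by AtomicInteger; the class and method names are assumptions for illustration, not part of the slides:

    import java.util.concurrent.atomic.AtomicInteger;

    // Sketch of a test-and-set variable holding 0 or 1. getAndSet makes
    // "read the old value and write 1" a single atomic step, matching the
    // semantics of test&set(V) above.
    class TestAndSetVariable {
        private final AtomicInteger v = new AtomicInteger(0);

        // test&set(V): temp := V; V := 1; return temp
        int testAndSet() {
            return v.getAndSet(1);
        }

        // reset(V): V := 0
        void reset() {
            v.set(0);
        }
    }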

Slide 15: Mutex Algorithm Using Test&Set
Code for the entry section:
  repeat
    t := test&set(V)
  until (t = 0)
An alternative formulation is: wait until test&set(V) = 0
Code for the exit section:
  reset(V)
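A minimal Java sketch of these entry and exit sections, built on the TestAndSetVariable sketch above (again, the names are illustrative assumptions):

    // Entry spins doing test&set until it wins (gets back 0); exit resets V.
    class TestAndSetLock {
        private final TestAndSetVariable v = new TestAndSetVariable();

        // entry section: repeat t := test&set(V) until (t = 0)
        void entry() {
            while (v.testAndSet() != 0) {
                // spin
            }
        }

        // exit section: reset(V)
        void exit() {
            v.reset();
        }
    }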

Slide 16: Mutual Exclusion is Ensured
Suppose not. Consider the first violation: some p_i enters the CS while another p_j is already in the CS.
– When p_j entered the CS, it saw V = 0 and set V to 1.
– No processor leaves the CS in the meantime, so V stays 1.
– But to enter the CS, p_i must also see V = 0: impossible!

Slide 17: No Deadlock
Claim: V = 0 iff no processor is in the CS.
– The proof is by induction on the events of the execution, and relies on the fact that mutual exclusion holds.
Suppose there is a time after which some processor is in its entry section but no processor ever enters the CS. Then eventually no processor is in the CS (any processor in the CS leaves and never re-enters), so by the claim V equals 0 from then on, so the next test&set returns 0 and that processor enters the CS: contradiction!

Slide 18: What About No Lockout?
One processor could always grab V (i.e., win the test&set competition) and starve the others.
– No lockout does not hold.
– Thus bounded waiting does not hold either.

Slide 19: Read-Modify-Write Shared Variable
The state of this kind of variable can be anything, and of any size. Variable V supports the (atomic) operation rmw(V, f), where f is any function:
  temp := V
  V := f(V)
  return temp
This variable type is so strong that there is no point in having multiple variables.
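A minimal Java sketch of such a variable, modeling the atomicity of rmw with a synchronized method; the class name and the use of UnaryOperator are illustrative assumptions:

    import java.util.function.UnaryOperator;

    // Sketch of a read-modify-write register: rmw(f) atomically reads the
    // current value, replaces it with f(value), and returns the old value.
    // A plain read is just rmw with the identity function.
    class RmwRegister<T> {
        private T value;

        RmwRegister(T initial) {
            this.value = initial;
        }

        synchronized T rmw(UnaryOperator<T> f) {
            T temp = value;        // temp := V
            value = f.apply(temp); // V := f(V)
            return temp;           // return temp
        }
    }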

Slide 20: Mutex Algorithm Using RMW
Conceptually, the list of waiting processors is stored in a circular queue of length n.
– Each waiting processor remembers its location in the queue in its local state (instead of keeping this information in the shared variable).
– The shared RMW variable V keeps track of the active part of the queue with first and last pointers, which are indices into the queue (between 0 and n-1); so V has two components, first and last.

Slide 21: Conceptual Data Structure
[Diagram: circular queue of waiting processors with first and last pointers.] The RMW shared object just contains these two "pointers".

Slide 22: Mutex Algorithm Using RMW
Code for the entry section:
  // increment last to enqueue self
  position := rmw(V, (V.first, V.last+1))
  // wait until first equals this value
  repeat
    queue := rmw(V, V)
  until (queue.first = position.last)
Code for the exit section:
  // dequeue self
  rmw(V, (V.first+1, V.last))
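A minimal Java sketch of this queue lock, built on the RmwRegister sketch above; the FirstLast helper is a hypothetical stand-in for V's two components, and the increments are taken mod n so that both components stay in the range 0..n-1 as described on the previous slides:

    // Ticket-style queue lock on a single RMW register holding (first, last).
    class RmwQueueLock {
        private static final class FirstLast {
            final int first, last;
            FirstLast(int first, int last) { this.first = first; this.last = last; }
        }

        private final int n;                    // number of processors
        private final RmwRegister<FirstLast> v; // shared variable V

        RmwQueueLock(int n) {
            this.n = n;
            this.v = new RmwRegister<>(new FirstLast(0, 0));
        }

        void entry() {
            // enqueue self: increment last, remember the old value of V as my ticket
            FirstLast position = v.rmw(q -> new FirstLast(q.first, (q.last + 1) % n));
            // spin: read V (rmw with the identity function) until first reaches my ticket
            FirstLast queue;
            do {
                queue = v.rmw(q -> q);
            } while (queue.first != position.last);
        }

        void exit() {
            // dequeue self: increment first
            v.rmw(q -> new FirstLast((q.first + 1) % n, q.last));
        }
    }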

Slide 23: Correctness Sketch
– Mutual exclusion: only the processor at the head of the queue (V.first) can enter the CS, and only one processor is at the head at any time.
– n-Bounded waiting: follows from the FIFO order of enqueueing and the fact that no processor stays in the CS forever.

Slide 24: Space Complexity
The shared RMW variable V has two components in its state, first and last. Both are integers that take on values from 0 to n-1, i.e., n different values each. The total number of different states of V is thus n^2, and so the required size of V is 2 log_2 n bits.

Slide 25: Spinning
A drawback of the RMW queue algorithm is that processors in the entry section repeatedly access the same shared variable; this is called spinning. Having multiple processors spinning on the same shared variable can be very time-inefficient in certain multiprocessor architectures. So alter the queue algorithm so that each waiting processor spins on a different shared variable.

Slide 26: RMW Mutex Algorithm With Separate Spinning
Shared RMW variables:
– Last: corresponds to the last "pointer" from the previous algorithm; cycles through 0 to n-1; keeps track of the index to be given to the next processor that starts waiting; initially 0

Slide 27: RMW Mutex Algorithm With Separate Spinning
Shared RMW variables (continued):
– Flags[0..n-1]: array of binary variables; these are the variables that processors spin on; the algorithm ensures no two processors spin on the same variable at the same time; initially Flags[0] = 1 (that processor "has the lock") and Flags[i] = 0 ("must wait") for i > 0

Slide 28: Overview of Algorithm
Entry section:
– get the next index from Last and store it in a local variable myPlace; increment Last (with wrap-around)
– spin on Flags[myPlace] until it equals 1 (meaning the processor "has the lock" and can enter the CS)
– set Flags[myPlace] to 0 ("doesn't have the lock")
Exit section:
– set Flags[myPlace+1] to 1 (i.e., give priority to the next processor), using modular arithmetic to wrap around
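A minimal Java sketch of this algorithm; AtomicInteger and AtomicIntegerArray stand in for the shared RMW variables Last and Flags, ThreadLocal models each processor's local variable myPlace, and any other names are assumptions for illustration:

    import java.util.concurrent.atomic.AtomicInteger;
    import java.util.concurrent.atomic.AtomicIntegerArray;

    // Queue lock with separate spinning: each waiter spins only on its own flag.
    class SpinArrayLock {
        private final int n;                            // number of processors
        private final AtomicInteger last;               // next index to hand out
        private final AtomicIntegerArray flags;         // 1 = "has lock", 0 = "must wait"
        private final ThreadLocal<Integer> myPlace = new ThreadLocal<>();

        SpinArrayLock(int n) {
            this.n = n;
            this.last = new AtomicInteger(0);
            this.flags = new AtomicIntegerArray(n);     // all entries start at 0
            this.flags.set(0, 1);                       // initially Flags[0] = 1
        }

        void entry() {
            // atomically take the next index from Last, incrementing with wrap-around
            int place = last.getAndUpdate(x -> (x + 1) % n);
            myPlace.set(place);
            while (flags.get(place) == 0) {
                // spin on my own flag only
            }
            flags.set(place, 0);                        // "doesn't have lock"
        }

        void exit() {
            int place = myPlace.get();
            flags.set((place + 1) % n, 1);              // hand the lock to the next waiter
        }
    }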

Slide 29: Question
Do the shared variables Last and Flags have to be RMW variables?
Answer: The RMW semantics (atomically reading and updating a variable) are needed for Last, to make sure two processors don't get the same index at overlapping times. (Flags, on the other hand, only needs atomic reads and writes, since no two processors spin on the same flag at the same time.)

Slide 30: Invariants of the Algorithm
1. At most one element of Flags has value 1 ("has lock").
2. If no element of Flags has value 1, then some processor is in the CS.
3. If Flags[k] = 1, then exactly (Last - k) mod n processors are in the entry section, spinning on Flags[i] for i = k, (k+1) mod n, …, (Last-1) mod n.

Slide 31: Example of Invariant
[Diagram: Flags array with Last = 5.] With k = 2 and Last = 5, (5 - 2) mod n = 3 processors are in the entry section, spinning on Flags[2], Flags[3], and Flags[4].

Slide 32: Correctness
These three invariants can be used to prove that:
– mutual exclusion is satisfied
– n-bounded waiting is satisfied

Slide 33: Lower Bound on Number of Memory States
Theorem (4.4): Any mutex algorithm with k-bounded waiting (and no deadlock) uses at least n states of shared memory.
Proof: Assume for contradiction that there is an algorithm using fewer than n states of shared memory.

Slide 34: Lower Bound on Number of Memory States
Consider the following execution of the algorithm. Starting from the initial configuration C, run p_0 alone until it is in the CS (possible by no deadlock), reaching configuration C_0. Then run p_1 until it is in its entry section, reaching C_1; then p_2 until it is in its entry section, reaching C_2; and so on, up to p_{n-1}, reaching C_{n-1}.
Since there are fewer than n shared memory states, there exist i and j with i < j such that C_i and C_j have the same state of shared memory.

Slide 35: Lower Bound on Number of Memory States
In C_i: p_0 is in the CS, p_1 through p_i are in their entry sections, and the rest are in the remainder.
In C_j: p_0 is in the CS, p_1 through p_j are in their entry sections, and the rest are in the remainder; in particular, p_{i+1}, p_{i+2}, …, p_j are in their entry sections in C_j but not in C_i.
Let σ be a schedule in which p_0 through p_i take steps in round robin. Applying σ from C_i, by no deadlock some p_h enters the CS k+1 times. Applying the same σ from C_j, p_h again enters the CS k+1 times while p_{i+1} is in its entry section the whole time, violating k-bounded waiting.

Slide 36: Lower Bound on Number of Memory States
But why does p_h do the same thing when executing the sequence of steps in σ starting from C_j as when starting from C_i? All the processors p_0, …, p_i do the same thing because:
– they are in the same states in the two configurations
– the shared memory state is the same in the two configurations
– the only differences between C_i and C_j are (potentially) the states of p_{i+1}, …, p_j, and they don't take any steps in σ

Slide 37: Discussion of Lower Bound
The lower bound of n just shown on the number of memory states only holds for algorithms that must provide bounded waiting in every execution.
– Suppose we weaken the liveness condition to just no lockout in every execution: then the bound becomes n/2 distinct shared memory states.
– And if liveness is weakened to just no deadlock in every execution, then the bound is just 2.

Slide 38: "Beating" the Lower Bound with Randomization
An alternative way to weaken the requirement is to give up on requiring liveness in every execution.
Consider probabilistic no lockout: every processor has a non-zero probability of succeeding each time it is in its entry section.
Now there is an algorithm using O(1) states of shared memory.