Concurrency in Distributed Systems: Mutual Exclusion


Shared Memory Model
Changes to the model with respect to the message-passing system (MPS):
– Processors communicate via a set of shared variables instead of passing messages
– Each shared variable has a type, defining a set of operations that can be performed atomically (i.e., instantaneously, without interference)
– No inbuf and outbuf state components
– A configuration includes a value for each shared variable
– The only event type is a computation step by a processor

[Figure: processes communicating through a shared memory]

Types of Shared Variables
1. Read/Write
2. Read-Modify-Write
3. Test&Set
4. Fetch-and-add
5. Compare-and-swap
We will focus on the Read/Write type (the simplest to realize).

Read/Write Variables
Read(v): return(v)
Write(v,a): v := a
In one atomic step a processor can:
– read the variable, or
– write the variable
…but not both!

[Figure sequence: a write of 10 to a shared register, followed by reads that return 10.]
Simultaneous writes are scheduled in some order: if several processors write concurrently (e.g., 10 and 20), either write may be serialized last, so the register ends up holding some value x ∈ {a_1, …, a_k} among the values written.
Simultaneous reads are no problem: all concurrent readers return the same value a.

A More Powerful Type: Read-Modify-Write Variables
RMW(v, f):          // f is a function on v; the whole operation is atomic
  temp := v
  v := f(v)
  return(temp)
Remark: a R/W type cannot simulate a RMW type, since reads and writes might interleave!
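The RMW semantics can be illustrated with a small Python sketch. The class name RMWRegister and the lock used to model atomicity are illustrative, not part of the lecture's model; the lock only stands in for the hardware's guarantee that the whole step happens indivisibly.

```python
import threading

class RMWRegister:
    """Sketch of a read-modify-write variable: in one atomic step it reads
    the old value, applies f, stores the result, and returns the old value."""

    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def rmw(self, f):
        with self._lock:            # the whole read-modify-write is one step
            old = self._value
            self._value = f(old)
            return old

    # Test&Set and Fetch-and-add are instances of RMW(v, f):
    def test_and_set(self):
        return self.rmw(lambda v: 1)          # set to 1, return previous value

    def fetch_and_add(self, delta):
        return self.rmw(lambda v: v + delta)  # add delta, return previous value
```

With test_and_set, for example, a one-variable mutex is immediate: spin until test_and_set() returns 0, and write 0 to release.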

Computation Step in the Shared Memory Model
When processor p_i takes a step:
– p_i's state in the old configuration specifies which shared variable is to be accessed and with which operation
– the operation is performed: the shared variable's value in the new configuration changes according to the operation's semantics
– p_i's state in the new configuration changes according to its old state and the result of the operation

Assumptions on the Execution
– Asynchronous
– Fault-free

Mutual Exclusion (Mutex) Problem
Each processor's code is divided into four sections:
– entry: synchronize with others to ensure mutually exclusive access to the…
– critical: use some resource; when done, enter the…
– exit: clean up; when done, enter the…
– remainder: not interested in using the resource
[Figure: cycle entry → critical → exit → remainder]
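As a reading aid, here is a schematic sketch of this life cycle in Python. The function names are illustrative placeholders; entry_section and exit_section are what a mutex algorithm must supply, while critical_section and remainder_section are application code.

```python
def processor_life_cycle(i, entry_section, exit_section,
                         critical_section, remainder_section):
    """Schematic life cycle of processor p_i in the mutex problem."""
    while True:
        remainder_section(i)   # p_i is not interested in the resource
        entry_section(i)       # synchronize to obtain exclusive access
        critical_section(i)    # use the shared resource (finite time)
        exit_section(i)        # clean up so others can enter
```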

Mutex Algorithms
A mutual exclusion algorithm specifies code for the entry and exit sections to ensure:
– mutual exclusion: at most one processor is in its critical section at any time, and
– some kind of liveness condition.
There are three commonly considered liveness conditions…

Mutex Liveness Conditions
– no deadlock: if a processor is in its entry section at some time, then later some processor is in its critical section
– no lockout: if a processor is in its entry section at some time, then later that same processor is in its critical section
– bounded waiting: no lockout + while a processor is in its entry section, any other processor can enter the critical section no more than a bounded number of times
These conditions are increasingly strong.

Mutex Algorithms: Assumptions
The code for the entry and exit sections is allowed to assume that:
– no processor stays in its critical section forever
– shared variables used in the entry and exit sections are not accessed during the critical and remainder sections

Complexity Measure for Mutex
The main complexity measure of interest for shared-memory mutex algorithms is the amount of shared space needed. Space complexity is affected by:
– how powerful the type of the shared variables is
– how strong the progress property to be satisfied is (no deadlock vs. no lockout vs. bounded waiting)

Mutex Results Using R/W
Number of distinct shared variables needed:

  liveness condition   upper bound                     lower bound
  no deadlock                                          n
  no lockout           3n booleans (tournament alg.)
  bounded waiting      2n unbounded (bakery alg.)

Bakery Algorithm
Guarantees:
– mutual exclusion
– bounded waiting
Uses 2n shared read/write variables:
– booleans Choosing[i]: initially false, written by p_i and read by the others
– integers Number[i]: initially 0, written by p_i and read by the others

Bakery Algorithm
Code for the entry section of p_i:
  Choosing[i] := true
  Number[i] := max{Number[0], ..., Number[n-1]} + 1
  Choosing[i] := false
  for j := 0 to n-1 (except i) do
    wait until Choosing[j] = false
    wait until Number[j] = 0 or (Number[j], j) > (Number[i], i)
  endfor
Code for the exit section:
  Number[i] := 0
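For concreteness, a minimal executable Python sketch of this entry/exit code follows. It assumes that plain Python list slots behave like the atomic read/write registers of the model (a rough approximation under CPython's GIL, since each slot is written by a single thread); it is a pedagogical sketch, not a practical Python lock.

```python
import threading

N = 4                        # number of processors (illustrative)
choosing = [False] * N       # Choosing[i]: written by p_i, read by all
number = [0] * N             # Number[i]:  written by p_i, read by all

def bakery_lock(i):
    # Entry section
    choosing[i] = True
    number[i] = max(number) + 1          # take a ticket above all seen so far
    choosing[i] = False
    for j in range(N):
        if j == i:
            continue
        while choosing[j]:               # wait until p_j is done choosing
            pass
        while number[j] != 0 and (number[j], j) < (number[i], i):
            pass                         # wait until p_j is served or behind us

def bakery_unlock(i):
    # Exit section
    number[i] = 0

# Usage: N threads incrementing a shared counter under mutual exclusion.
counter = 0

def worker(i):
    global counter
    for _ in range(100):
        bakery_lock(i)
        counter += 1                     # critical section
        bakery_unlock(i)

if __name__ == "__main__":
    threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)                       # expect N * 100 = 400
```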

BA Provides Mutual Exclusion
Lemma 1: If p_i is in the critical section and Number[k] ≠ 0 (k ≠ i), then (Number[k], k) > (Number[i], i).
Proof: Since p_i is in the CS, it passed the second wait statement for j = k. Consider p_i's most recent read of Number[k] before entering the CS. There are two cases:
– Case 1: the read returned 0
– Case 2: the read returned a value with (Number[k], k) > (Number[i], i)

Case 1: p_i's most recent read of Number[k] returned 0, so at that moment p_k was in its remainder section or still choosing its number. But p_i's earlier read of Choosing[k] returned false, so p_k was not then in the middle of choosing a number. Hence p_k chooses its number in the interval after p_i's write to Number[i] and before p_i's read of Number[k]; it therefore sees p_i's number and chooses a larger one, so (Number[k], k) > (Number[i], i).

Case 2: p_i's most recent read of Number[k] returned a value with (Number[k], k) > (Number[i], i). Then p_k had already taken its number in that interval, and it does not change it before p_i exits the CS, so the inequality still holds while p_i is in the CS. End of proof.

Mutual Exclusion for BA
Lemma 2: If p_i is in the critical section, then Number[i] > 0. (The proof is a straightforward induction.)
Mutual exclusion: Suppose p_i and p_k are simultaneously in the CS.
– By Lemma 2, both have Number > 0.
– By Lemma 1, (Number[k], k) > (Number[i], i) and (Number[i], i) > (Number[k], k): a contradiction.

No Lockout for BA
Assume, for contradiction, that there is a starved processor.
– Starved processors are stuck at the wait statements, not while choosing a number.
– Let p_i be a starved processor with the smallest (Number[i], i).
– Any processor entering the entry section after p_i has chosen its number chooses a larger number, and therefore cannot overtake p_i.
– Every processor with a smaller number eventually enters the CS (it is not starved) and exits.
– Thus p_i cannot be stuck at the wait statements.

What about bounded waiting? YES: It’s easy to see that any processor in the entry section can be overtaken at most once by any other processor (and so in total it can be overtaken at most n-1 times).

Space Complexity of BA
The number of shared variables is 2n:
– the Choosing variables are booleans
– the Number variables are unbounded: as long as the CS is occupied and some processor keeps entering the entry section, the ticket numbers grow without bound
Is it possible for an algorithm to use less shared space?

Bounded 2-Processor ME Algorithm with ND
Plan: start with a bounded algorithm for 2 processors with no deadlock (ND), then extend it to no lockout (NL), then extend it to n processors.
Ideas used in the 2-processor algorithm:
– each processor p_i has a shared boolean W[i] indicating whether it wants the CS
– p_0 always has priority over p_1; the code is asymmetric

Bounded 2-Processor ME Algorithm with ND
Code for p_0's entry section:
  3. W[0] := 1
  6. wait until W[1] = 0
Code for p_0's exit section:
  8. W[0] := 0
(The sparse line numbering matches the full numbering used by the NL version below.)

Bounded 2-Processor ME Algorithm with ND
Code for p_1's entry section:
  1. W[1] := 0
  2. wait until W[0] = 0
  3. W[1] := 1
  5. if (W[0] = 1) then goto Line 1
Code for p_1's exit section:
  8. W[1] := 0

Discussion of the 2-Processor ND Algorithm
– Satisfies mutual exclusion: the processors use the W variables to ensure this.
– Satisfies no deadlock.
– But it is unfair with respect to p_1 (lockout).
– Fix: have the processors alternate in having the priority.

Bounded 2-Processor ME Algorithm with NL
Uses 3 binary shared read/write variables:
– W[0]: initially 0, written by p_0 and read by p_1
– W[1]: initially 0, written by p_1 and read by p_0
– Priority: initially 0, written and read by both

Bounded 2-Processor ME Algorithm with NL
Code for p_i's entry section:
  1. W[i] := 0
  2. wait until W[1-i] = 0 or Priority = i
  3. W[i] := 1
  4. if (Priority = 1-i) then
  5.   if (W[1-i] = 1) then goto Line 1
  6. else wait until (W[1-i] = 0)
Code for p_i's exit section:
  7. Priority := 1-i
  8. W[i] := 0
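A Python rendering of the same entry/exit code, with the slide's line numbers kept as comments. Again a sketch: module-level Python variables stand in for the atomic read/write registers, and busy-waiting is kept exactly as in the pseudocode.

```python
# Shared binary read/write variables (module-level stand-ins)
W = [0, 0]        # W[i] = 1 means p_i is claiming the critical section
priority = 0      # which processor has priority when both contend

def nl_lock(i):
    other = 1 - i
    while True:
        W[i] = 0                                    # Line 1
        while W[other] != 0 and priority != i:      # Line 2
            pass
        W[i] = 1                                    # Line 3
        if priority == other:                       # Line 4
            if W[other] == 1:                       # Line 5: goto Line 1
                continue
        else:
            while W[other] != 0:                    # Line 6
                pass
        return                                      # proceed to the CS

def nl_unlock(i):
    global priority
    priority = 1 - i                                # Line 7
    W[i] = 0                                        # Line 8
```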

Analysis: Mutual Exclusion
Suppose, for contradiction, that p_0 and p_1 are simultaneously in the CS; then W[0] = W[1] = 1.
Consider the case in which p_1's most recent write of 1 to W[1] (Line 3) precedes p_0's most recent write of 1 to W[0] (Line 3). To enter the CS, p_0 must afterwards read W[1] (Line 5 or 6) and see 0; but that read must return 1, since p_1 has not reset W[1] in the meantime: a contradiction. (The other case is symmetric.)

Analysis: No Deadlock
No deadlock is useful for showing no lockout.
– If one processor eventually stays in its remainder section forever, the other one cannot be starved. For example, if p_1 enters the remainder forever, then p_0 will keep seeing W[1] = 0.
– So any deadlock would have to starve both processors in the entry section.

Analysis: No Deadlock
Suppose, for contradiction, that there is a deadlock, i.e., both processors are stuck in their entry sections. W.l.o.g. suppose Priority gets stuck at 0 after both processors are stuck.
– Since Priority = 0, p_0 is not stuck at Line 2 and does not take the branch at Line 5, so it is stuck at Line 6 with W[0] = 1, waiting for W[1] to become 0.
– p_1 cannot be stuck at Line 6 (that requires Priority = 1), so it is stuck at Line 2 with W[1] = 0, waiting for W[0] to become 0.
– But then p_0 sees W[1] = 0 and enters the CS: a contradiction.

Analysis: No Lockout
Suppose, for contradiction, that p_0 is starved (the case of p_1 is symmetric).
– Since there is no deadlock, p_1 enters the CS infinitely often.
– The first time p_1 executes Line 7 in the exit section after p_0 is stuck in its entry section, Priority is set to 0 and stays 0 forever after.
– From then on, p_0 is stuck at Line 6 with W[0] = 1, waiting for W[1] to become 0.
– The next time p_1 enters its entry section, it gets stuck at Line 2 with W[1] = 0, waiting for W[0] to become 0.
– But then p_0 sees W[1] = 0 and enters the CS: a contradiction.

Bounded Waiting?
NO: a processor, even if it has priority, can be overtaken repeatedly (in principle, an unbounded number of times) while it is between Lines 2 and 3.

Bounded n-Processor ME Algorithm
Can we get a bounded-space NL mutex algorithm for n > 2 processors? Yes!
It is based on the notion of a tournament tree: a complete binary tree with n-1 nodes.
– The tree is conceptual only; it does not represent message-passing channels.
– A copy of the 2-processor algorithm is associated with each tree node, including separate copies of the 3 shared variables.

Tournament Tree
[Figure: tournament tree for n = 8; the leaves are shared by the pairs (p_0, p_1), (p_2, p_3), (p_4, p_5), (p_6, p_7).]

Tournament Tree ME Algorithm
– Each processor begins its entry section at a specific leaf (two processors per leaf).
– A processor proceeds to the next level of the tree by winning the 2-processor competition at its current tree node: on the left side it plays the role of p_0, on the right side the role of p_1.
– When a processor wins the 2-processor algorithm associated with the tree root, it enters the CS.

The code [the slide's code listing is not reproduced in the transcript; a sketch is given after the next slide]

More on the Tournament Tree Algorithm
– The code is recursive.
– p_i begins at tree node 2^k + ⌊i/2⌋, playing the role of p_(i mod 2), where k = ⌈log n⌉ - 1.
– After winning node v, the "critical section" for node v consists of:
  – the entry code for all nodes on the path from ⌊v/2⌋ to the root,
  – the real critical section,
  – the exit code for all nodes on the path from the root back down to ⌊v/2⌋.
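Since the transcript omits the code itself, here is a Python sketch of one way to assemble the construction from the 2-processor NL algorithm above, using the heap-style numbering implied by the slide (node 1 is the root, node v's parent is ⌊v/2⌋). All names are illustrative, and the per-node variables are ordinary Python objects standing in for atomic registers.

```python
class TournamentLock:
    """Mutex for n >= 2 processors built from a copy of the 2-processor
    NL algorithm at every node of a complete binary tree (a sketch)."""

    def __init__(self, n):
        self.k = (n - 1).bit_length() - 1       # k = ceil(log2 n) - 1
        size = 2 ** (self.k + 1)                # node indices 1 .. size-1
        self.W = [[0, 0] for _ in range(size)]  # per-node W[0], W[1]
        self.priority = [0] * size              # per-node Priority

    def _path(self, i):
        """Nodes at which p_i competes, leaf first, with the role it plays."""
        v, role, path = 2 ** self.k + i // 2, i % 2, []
        while v >= 1:
            path.append((v, role))
            role, v = v % 2, v // 2             # role at the parent is v mod 2
        return path

    def _node_lock(self, v, role):              # entry code of the NL algorithm
        W, pr, other = self.W[v], self.priority, 1 - role
        while True:
            W[role] = 0
            while W[other] != 0 and pr[v] != role:
                pass
            W[role] = 1
            if pr[v] == other:
                if W[other] == 1:
                    continue
            else:
                while W[other] != 0:
                    pass
            return

    def _node_unlock(self, v, role):            # exit code of the NL algorithm
        self.priority[v] = 1 - role
        self.W[v][role] = 0

    def lock(self, i):
        for v, role in self._path(i):           # win the nodes from leaf to root
            self._node_lock(v, role)

    def unlock(self, i):
        for v, role in reversed(self._path(i)): # release from the root back down
            self._node_unlock(v, role)
```

Note that unlock releases the root's copy first and the leaf's copy last, matching the recursive structure described above.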

Tournament Tree Analysis
Correctness is based on the correctness of the 2-processor algorithm and on the tournament structure:
– ME for the tournament algorithm follows from ME for the 2-processor algorithm at the tree root.
– NL for the tournament algorithm follows from NL for the 2-processor algorithms at all nodes of the tree.
Space complexity: 3n boolean read/write shared variables.
Bounded waiting? No, for the same reason as the 2-processor algorithm.