Hwajung Lee

Mutual Exclusion
(Figure: processes p0, p1, p2, p3 contending for a critical section, CS.)

Some applications are:  Resource sharing  Avoiding concurrent update on shared data  Controlling the grain of atomicity  Medium Access Control in Ethernet  Collision avoidance in wireless broadcasts

Some applications are:  Resource sharing  Avoiding concurrent update on shared data  Medium Access Control in Ethernet  Collision avoidance in wireless broadcasts

ME1. [Mutual exclusion] At most one process is in the CS at any time. (Safety property)
ME2. [Freedom from deadlock] No deadlock. (Safety property)
ME3. [Progress] Every process trying to enter its CS must eventually succeed. (Liveness property)
A violation of ME3 is livelock (starvation).
Progress is quantified by the criterion of bounded waiting, which measures a form of fairness by answering the question: between two consecutive CS entries by one process, how many times can other processes enter the CS?
There are many solutions, in both the shared-memory model and the message-passing model.

Centralized solution. Clients send requests to a single server; the server replies to grant the CS, and a client sends release on exit. Server state: busy (boolean) and a queue of waiting clients. Messages: req, reply, release.

Client:
do true → send request
[] reply received → enter CS; send release
od

Server:
do request received and not busy → send reply; busy := true
[] request received and busy → enqueue the sender
[] release received and queue is empty → busy := false
[] release received and queue is not empty → send reply to the head of the queue
od
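
A minimal sketch of the server's logic in Python, assuming some transport delivers request/release events and provides a send(pid, msg) callback (both are illustrative, not part of the slides):

```python
from collections import deque

class Coordinator:
    """Centralized mutual-exclusion server: grants the CS to one client at a time.

    The send(pid, msg) callback is an illustrative assumption for the transport.
    """

    def __init__(self, send):
        self.send = send
        self.busy = False          # is some client currently in its CS?
        self.queue = deque()       # clients waiting for a reply

    def on_request(self, pid):
        if not self.busy:
            self.busy = True
            self.send(pid, "reply")        # grant the CS
        else:
            self.queue.append(pid)         # enqueue the sender

    def on_release(self, pid):
        if self.queue:
            self.send(self.queue.popleft(), "reply")   # hand the CS to the head of the queue
        else:
            self.busy = False
```

A client behaves exactly as in the guarded commands above: send a request, wait for the reply, run the CS, send a release.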

 The centralized solution is simple.
 But the server is a single point of failure. This is BAD.
 ME1-ME3 are satisfied, but FIFO fairness is not guaranteed. Why?
 Can we do better? Yes!

{Lamport’s algorithm} (assumes a completely connected topology)
1. To request entry into its CS, broadcast a timestamped request to all processes.
2. At each process i: on receiving a request, enqueue it in the local queue Q. If process i is not in its CS, send an ack; otherwise postpone the ack until it exits the CS.
3. Enter the CS when (i) your request is at the head of your Q, and (ii) you have received an ack from all other processes.
4. To exit the CS, (i) delete your request from your Q, and (ii) broadcast a timestamped release.
5. On receiving a release message, remove the sender's request from your Q.
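
A hedged per-process sketch of these rules in Python; the send(dest, msg) transport and the message tuple formats are assumptions made for illustration, the slides give only the rules above:

```python
import heapq

class LamportMutex:
    """State machine for one process in Lamport's algorithm (sketch)."""

    def __init__(self, pid, peers, send):
        self.pid, self.peers, self.send = pid, peers, send  # send(dest, msg) assumed given
        self.clock = 0                # Lamport clock
        self.queue = []               # local request queue: heap of (timestamp, pid)
        self.acks = set()             # peers that have acked our current request
        self.in_cs = False
        self.deferred = []            # acks postponed while we are inside the CS
        self.my_req = None

    def request_cs(self):
        self.clock += 1
        self.my_req = (self.clock, self.pid)
        heapq.heappush(self.queue, self.my_req)
        for p in self.peers:                       # 1. broadcast a timestamped request
            self.send(p, ("request", self.my_req))

    def on_request(self, sender, req):
        self.clock = max(self.clock, req[0]) + 1   # Lamport clock update
        heapq.heappush(self.queue, req)            # 2. enqueue the request
        if self.in_cs:
            self.deferred.append(sender)           # postpone the ack until we exit
        else:
            self.send(sender, ("ack", self.pid))

    def on_ack(self, sender):
        self.acks.add(sender)

    def may_enter(self):
        # 3. at the head of our own queue, and acks received from all peers
        return self.queue and self.queue[0] == self.my_req and self.acks == set(self.peers)

    def exit_cs(self):
        self.in_cs = False
        self.queue.remove(self.my_req)             # 4(i). delete our own request
        heapq.heapify(self.queue)
        self.acks.clear()
        for p in self.deferred:                    # send the postponed acks
            self.send(p, ("ack", self.pid))
        self.deferred.clear()
        for p in self.peers:                       # 4(ii). broadcast the release
            self.send(p, ("release", self.pid))

    def on_release(self, sender):
        self.queue = [r for r in self.queue if r[1] != sender]  # 5. drop sender's request
        heapq.heapify(self.queue)
```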

Can you show that it satisfies all the properties (ME1, ME2, ME3) of a correct solution?

Observation. When all acks have been received, the processes deciding whether to enter their CS have identical views of their local queues.

Proof of ME1 (at most one process can be in its CS at any time), by contradiction. Suppose not, and both j and k enter their CS. This implies
 j in CS ⇒ Q.ts.j < Q.ts.k
 k in CS ⇒ Q.ts.k < Q.ts.j
Both cannot hold. Impossible.

Proof of ME2 (no deadlock). The waiting chain is acyclic:
i waits for j ⇒ i is behind j in all queues (or j is in its CS) ⇒ j does not wait for i.

Proof of ME3 (progress). New requests join the end of the queues, so new requests cannot pass the old ones.

Alternative proof of ME2 (no deadlock) and ME3 (progress), by induction on the number of requests ahead of process i.
Basis. When process i makes a request, at most n−1 processes are ahead of it in its request queue.
Inductive step. Assume K (1 ≤ K ≤ n−1) processes are ahead of process i in the request queue. Each of them enters and exits its CS within a bounded number of steps, after which only K−1 requests remain ahead of i. Hence (1) the number of processes ahead of i strictly decreases, and (2) i enters its CS within a bounded number of steps.

Proof of FIFO fairness, by contradiction. Claim: timestamp(j) < timestamp(k) ⇒ j enters its CS before k does.
Suppose not, and k enters its CS before j. Then k did not receive j's request, yet it received the ack from j for its own request. This is impossible if the channels are FIFO.
(Figure: j's Req(20), k's Req(30), and j's ack to k.)
Message complexity per CS entry = 3(N−1): (N−1) requests + (N−1) acks + (N−1) releases.

{Ricart & Agrawala’s algorithm} What is new?
1. Broadcast a timestamped request to all processes.
2. Upon receiving a request, send an ack if
   - you do not want to enter your CS, or
   - you are trying to enter your CS, but your timestamp is higher than the sender's.
   (If you are already in the CS, buffer the request.)
3. Enter the CS when you have received an ack from all other processes.
4. Upon exiting the CS, send an ack to each pending request before making a new request. (No release message is necessary.)
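
A minimal sketch of the ack-or-defer decision in Python, under the same illustrative send(dest, msg) transport assumption as before:

```python
class RicartAgrawala:
    """Deferral logic of one process in Ricart-Agrawala (sketch).

    The send(dest, msg) transport and message tuples are illustrative assumptions.
    """

    def __init__(self, pid, peers, send):
        self.pid, self.peers, self.send = pid, peers, send
        self.clock = 0
        self.wanting = False       # have we broadcast a still-pending request?
        self.in_cs = False
        self.my_ts = None          # (clock, pid): ties broken by process id
        self.acks = set()
        self.deferred = []         # requests answered only after we leave the CS

    def request_cs(self):
        self.clock += 1
        self.my_ts = (self.clock, self.pid)
        self.wanting = True
        self.acks.clear()
        for p in self.peers:       # 1. broadcast a timestamped request
            self.send(p, ("request", self.my_ts))

    def on_request(self, sender, ts):
        self.clock = max(self.clock, ts[0]) + 1
        # 2. defer if we are in the CS, or requesting with a smaller (older) timestamp;
        #    otherwise ack immediately.
        if self.in_cs or (self.wanting and self.my_ts < ts):
            self.deferred.append(sender)
        else:
            self.send(sender, ("ack", self.pid))

    def on_ack(self, sender):
        self.acks.add(sender)
        if self.acks == set(self.peers):
            self.in_cs = True      # 3. acks from all peers: enter the CS

    def exit_cs(self):
        self.in_cs = False
        self.wanting = False
        for p in self.deferred:    # 4. answer every pending request; no release message
            self.send(p, ("ack", self.pid))
        self.deferred.clear()
```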

{Ricart & Agrawala’s algorithm}
ME1. Prove that at most one process can be in the CS.
Proof (by contradiction). Suppose not: processes k and j enter the CS at the same time. That requires both k and j to have received n−1 acks, but k and j cannot both have sent an ack to each other. Thus, impossible.
ME2. Prove that deadlock is not possible.
Proof. The waiting chain is acyclic: k waits for j ⇒ k is behind j (implicitly) in the wait-for chain ⇒ j does not wait for k.
(Figure: TS(j) < TS(k); j and k send Req(j) and Req(k); k sends Ack to j.)

{Ricart & Agrawala’s algorithm}
ME3. Prove that FIFO fairness holds even if the channels are not FIFO.
Proof. If TS(j) < TS(k), then process j is ranked higher than (i.e., ahead of) k in the wait-for chain. Therefore, process j enters the CS before process k.
Message complexity = 2(N−1): (N−1) requests + (N−1) acks; no release messages.
(Figure: TS(j) < TS(k); j and k send Req(j) and Req(k); k sends Ack to j.)

{Maekawa’s algorithm}
- The first solution with sublinear, O(√N), message complexity.
- "Close to" Ricart-Agrawala's solution, but each process is required to obtain permission from only a subset of its peers.

With each process i, associate a subset S_i of processes. The subsets must satisfy two conditions:
 i ∈ S_i
 ∀ i, j : 0 ≤ i, j ≤ n−1 :: S_i ∩ S_j ≠ ∅
Main idea. Process i is required to receive permission from the members of S_i only. Correctness requires that multiple processes can never receive permission from all members of their respective subsets.
(Figure: three overlapping subsets S_0, S_1, S_2.)
Note: the construction divides the processes into subsets of identical size K, and every process i appears in the same number D of subsets.

Example. Let there be seven processes 0, 1, 2, 3, 4, 5, 6.
S0 = {0, 1, 2}   S1 = {1, 3, 5}   S2 = {2, 4, 5}   S3 = {0, 3, 4}
S4 = {1, 4, 6}   S5 = {0, 5, 6}   S6 = {2, 3, 6}
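
The two conditions (plus the symmetry noted above) can be checked mechanically. A small illustrative Python check of this example:

```python
from itertools import combinations

# The seven quorums from the slide.
S = {
    0: {0, 1, 2}, 1: {1, 3, 5}, 2: {2, 4, 5}, 3: {0, 3, 4},
    4: {1, 4, 6}, 5: {0, 5, 6}, 6: {2, 3, 6},
}

# Condition 1: every process belongs to its own quorum.
assert all(i in S[i] for i in S)

# Condition 2: every pair of quorums has a non-empty intersection.
assert all(S[i] & S[j] for i, j in combinations(S, 2))

# Symmetry: each quorum has size K = 3, and each process appears in D = 3 quorums.
assert all(len(s) == 3 for s in S.values())
assert all(sum(i in s for s in S.values()) == 3 for i in range(7))
```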

Version 1 {life of process i}
1. Send a timestamped request to each process in S_i.
2. Request received → send an ack to the pending request with the lowest timestamp. Thereafter, "lock" (i.e., commit) yourself to that process and keep the others waiting.
3. Enter the CS once you have received an ack from every member of S_i.
4. To exit the CS, send a release to every process in S_i.
5. Release received → unlock yourself, then send an ack to the next waiting request with the lowest timestamp.
(Running example: S0 = {0,1,2}  S1 = {1,3,5}  S2 = {2,4,5}  S3 = {0,3,4}  S4 = {1,4,6}  S5 = {0,5,6}  S6 = {2,3,6})
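
A simplified sketch of the arbiter role (steps 2 and 5) in Python, again assuming an illustrative send(dest, msg) transport; the requesting side simply waits for acks from every member of its S_i:

```python
import heapq

class MaekawaArbiter:
    """The arbiter role of one process in Version 1 (steps 2 and 5), simplified.

    Transport via send(dest, msg) is an illustrative assumption.
    """

    def __init__(self, pid, send):
        self.pid, self.send = pid, send
        self.locked_for = None     # requester we are currently committed to
        self.waiting = []          # pending requests as a heap of (timestamp, requester)

    def on_request(self, requester, ts):
        if self.locked_for is None:
            self.locked_for = requester                    # lock (commit) to this requester
            self.send(requester, ("ack", self.pid))
        else:
            heapq.heappush(self.waiting, (ts, requester))  # keep the others waiting

    def on_release(self, requester):
        self.locked_for = None                             # unlock
        if self.waiting:
            _, nxt = heapq.heappop(self.waiting)           # lowest-timestamp waiter is next
            self.locked_for = nxt
            self.send(nxt, ("ack", self.pid))
```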

ME1. At most one process can enter its critical section at any time.
Proof (by contradiction). Let i and j both attempt to enter their critical sections. Since S_i ∩ S_j ≠ ∅, there is a process k ∈ S_i ∩ S_j. Process k never sends an ack to both, so it acts as the arbitrator and establishes ME1.

ME2. No deadlock? Unfortunately, deadlock is possible!
Assume 0, 1, and 2 all want to enter their critical sections.
- From S0 = {0,1,2}: 0 and 2 send ack to 0, but 1 sends ack to 1.
- From S1 = {1,3,5}: 1 and 3 send ack to 1, but 5 sends ack to 2.
- From S2 = {2,4,5}: 4 and 5 send ack to 2, but 2 sends ack to 0.
Now 0 waits for 1, 1 waits for 2, and 2 waits for 0. So deadlock is possible!

Avoiding deadlock. If processes received messages in increasing order of timestamp, deadlock could be avoided, but that is too strong an assumption.
Version 2 uses three additional messages:
- failed
- inquire
- relinquish

New features in Version 2
- Send ack and set the lock as usual.
- If the lock is set and a request with a larger timestamp arrives, send failed (that requester has no chance for now). If the incoming request has a lower timestamp, send inquire ("are you in the CS?") to the locked process.
- A process that receives an inquire and at least one failed message sends relinquish; the recipient then resets its lock.

- Let K = |S_i|, and let each process be a member of D subsets. When N = 7, K = D = 3.
- When K = D, N = K(K−1) + 1, so K is of the order √N.
- The message complexity of Version 1 is 3√N. Maekawa's analysis of Version 2 reveals a complexity of 7√N, i.e., O(√N) message complexity.
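
Solving N = K(K−1) + 1 for K gives K = (1 + √(4N − 3)) / 2, which is O(√N). A quick illustrative check (the chosen values of N are ones for which a symmetric construction exists):

```python
from math import sqrt

def quorum_size(n):
    """Quorum size K for a symmetric Maekawa construction with n = K(K-1)+1."""
    return (1 + sqrt(4 * n - 3)) / 2

for n in (7, 13, 21, 133):
    k = quorum_size(n)
    print(f"N={n:4d}  K={k:.1f}  Version-1 messages ~ 3*sqrt(N) = {3 * sqrt(n):.1f}")
```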

Suzuki-Kasami algorithm: the main idea
 Completely connected network of processes.
 There is one token in the network. The holder of the token has permission to enter the CS.
 Any other process trying to enter the CS must acquire that token; thus the token moves from one process to another based on demand.
(Figure: a process announcing "I want to enter CS".)

To request the CS, process i broadcasts (i, num), where num is the sequence number of the request.
Each process maintains
- an array req: req[j] is the sequence number of the latest request received from process j (some requests soon become stale).
Additionally, the holder of the token maintains
- an array last: last[j] is the sequence number of process j's latest visit to the CS, and
- a queue Q of waiting processes.
req: array[0..n-1] of integer
last: array[0..n-1] of integer

 When a process i receives a request (k, num) from process k, it sets req[k] := max(req[k], num).
 The holder of the token, upon leaving its CS,
  -- completes its CS,
  -- sets last[i] := its own num,
  -- updates Q so that it contains each process k with a fresh request, i.e., 1 + last[k] = req[k] (this guarantees the freshness of the request),
  -- sends the token to the head of Q, along with the array last and the tail of Q.
In fact, token = (Q, last).

{Program of process j}
Initially, ∀i: req[i] = last[i] = 0

* Entry protocol *
req[j] := req[j] + 1
Send (j, req[j]) to all
Wait until token (Q, last) arrives
Critical Section

* Exit protocol *
last[j] := req[j]
∀k ≠ j: k ∉ Q ∧ req[k] = last[k] + 1 → append k to Q;
if Q is not empty → send (tail-of-Q, last) to head-of-Q fi

* Upon receiving a request (k, num) *
req[k] := max(req[k], num)
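
A hedged Python sketch of this program; the send(dest, msg) transport and message tuples are illustrative, and forwarding by an idle token holder on receipt of a fresh request is only noted in a comment:

```python
from collections import deque

class SuzukiKasami:
    """One process in the Suzuki-Kasami token algorithm (sketch).

    The send(dest, msg) transport and message tuples are illustrative assumptions.
    """

    def __init__(self, pid, n, send, has_token=False):
        self.pid, self.n, self.send = pid, n, send
        self.req = [0] * n                                        # req[k]: latest request number from k
        self.token = ([0] * n, deque()) if has_token else None   # token = (last, Q)

    def request_cs(self):
        if self.token is None:
            self.req[self.pid] += 1                    # entry protocol: new sequence number
            for k in range(self.n):
                if k != self.pid:
                    self.send(k, ("request", self.pid, self.req[self.pid]))
        # enter the CS once the token arrives (or immediately if we already hold it)

    def on_request(self, k, num):
        self.req[k] = max(self.req[k], num)
        # An idle token holder would forward the token here when req[k] == last[k] + 1
        # (omitted in this sketch).

    def on_token(self, last, Q):
        self.token = (last, Q)                         # holding the token grants the CS

    def exit_cs(self):
        last, Q = self.token
        last[self.pid] = self.req[self.pid]            # last[j] := req[j]
        for k in range(self.n):                        # append every fresh, unqueued request
            if k != self.pid and k not in Q and self.req[k] == last[k] + 1:
                Q.append(k)
        if Q:
            head = Q.popleft()
            self.token = None
            self.send(head, ("token", last, Q))        # send (tail-of-Q, last) to head-of-Q
```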

Initial state: every process has req = [1,0,0,0,0]; the token holder, process 0 (in its CS), has last = [0,0,0,0,0] and an empty Q.

1 and 2 send requests: req = [1,1,1,0,0] at every process; last = [0,0,0,0,0].

0 prepares to exit its CS: req = [1,1,1,0,0], last = [1,0,0,0,0], Q = (1, 2).

0 passes the token (Q and last) to 1: req = [1,1,1,0,0], last = [1,0,0,0,0], Q = (2).

0 and 3 send requests: req = [2,1,1,1,0], last = [1,0,0,0,0], Q = (2, 0, 3).

1 sends the token to 2: req = [2,1,1,1,0], last = [1,1,0,0,0], Q = (0, 3). Message complexity = (n−1) + 1 → O(n).

(Example run of a tree-based token algorithm, shown in figures: processes 1, 4, and 7 want to enter their CS; the token is sent to process 6, which forwards it toward the requesters.)

[Question] Prove ME1, ME2 and ME3 for this algorithm.

The message complexity is O(diameter) of the tree. Extensive empirical measurements show that the average diameter of randomly chosen trees of size n is O(log n); therefore, the authors claim that the average message complexity is O(log n).