ITEC452 Distributed Computing Lecture 7 Mutual Exclusion

1 ITEC452 Distributed Computing Lecture 7 Mutual Exclusion
Hwajung Lee

2 Mutual Exclusion
[Figure: four processes p0, p1, p2, p3, each competing to enter its critical section (CS)]

3 Why mutual exclusion? Some applications are:
- Resource sharing
- Avoiding concurrent updates of shared data
- Medium Access Control in Ethernet
- Collision avoidance in wireless broadcasts

4 Specifications
ME1. [Mutual Exclusion] At most one process is in the CS at any time. (Safety property)
ME2. [Freedom from Deadlock] No deadlock. (Safety property)
ME3. [Progress] Every process trying to enter its CS must eventually succeed. (Liveness property)
A violation of ME3 is livelock (also called starvation).
Progress is quantified by the criterion of bounded waiting. It measures a form of fairness by answering the question: between two consecutive CS entries by one process, how many times can other processes enter the CS?
There are many solutions, in both the shared-memory model and the message-passing model.
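
One way to state the bounded-waiting criterion in symbols (my own notation, not the textbook's): writing $t^i_k$ for the time of process $i$'s $k$-th entry into the CS,

\[
\exists B \;\; \forall i \ne j \;\; \forall k : \quad
\#\{\, \text{CS entries of } j \text{ in the interval } (t^i_k,\, t^i_{k+1}) \,\} \;\le\; B .
\]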

5 Message passing solution: Centralized decision making
Client program:
do true → send request; reply received → enter CS; send release; <other work> od

Server program (state: busy: boolean, plus a FIFO queue of waiting clients):
do request received and not busy → send reply; busy := true
   request received and busy → enqueue the sender
   release received and queue is empty → busy := false
   release received and queue not empty → send reply to the head of the queue
od

[Figure: several clients exchanging req, reply, and release messages with the central server]
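
The guarded commands above translate almost directly into a thread-based simulation. Below is a minimal sketch; the use of Python threads, the message tuples, and names such as server_in and client_in are illustrative assumptions, not part of the lecture's program.

import threading, queue, time

N_CLIENTS = 3
server_in = queue.Queue()                        # requests and releases arrive here
client_in = [queue.Queue() for _ in range(N_CLIENTS)]   # replies go back here

def server():
    busy, waiting = False, []                    # the boolean flag and the FIFO queue
    released = 0
    while released < N_CLIENTS:                  # stop once every client has released
        kind, cid = server_in.get()
        if kind == "request" and not busy:       # free: grant the CS immediately
            busy = True
            client_in[cid].put("reply")
        elif kind == "request" and busy:         # occupied: enqueue the sender
            waiting.append(cid)
        elif kind == "release":
            released += 1
            if waiting:                          # hand the CS to the head of the queue
                client_in[waiting.pop(0)].put("reply")
            else:
                busy = False

def client(cid):
    server_in.put(("request", cid))              # ask the server for the CS
    assert client_in[cid].get() == "reply"       # block until permission arrives
    print("client", cid, "in its critical section")
    time.sleep(0.01)                             # simulated critical section
    server_in.put(("release", cid))              # give the CS back

threads = [threading.Thread(target=server)]
threads += [threading.Thread(target=client, args=(i,)) for i in range(N_CLIENTS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

The server grants the CS to one client at a time and serves waiting clients in the order their requests reached it.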

6 Comments
The centralized solution is simple. But the server is a single point of failure. This is BAD. ME1–ME3 are satisfied, but FIFO fairness is not guaranteed. Why? Can we do better? Yes! (For the answer to "Why?", see the textbook, page 106, 2nd and 3rd paragraphs.)

7 Decentralized solution 1
{Lamport's algorithm}
1. To request entry into its CS, a process broadcasts a timestamped request to all processes.
2. At each process i: when a request is received, enqueue it in the local queue Q. If process i is not in its CS, send an ack; otherwise postpone the ack until it exits the CS.
3. Enter the CS when (i) your own request is at the head of your Q, and (ii) you have received acks from all other processes.
4. To exit the CS, (i) delete your request from your Q, and (ii) broadcast a timestamped release.
5. When a process receives a release message, it removes the sender's request from its Q.
The distributed program for Lamport's algorithm is on page 107 of the textbook. The algorithm assumes a completely connected topology.
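
The five steps above can be exercised in a single-machine simulation. The sketch below uses Python threads and per-process FIFO mailboxes; names such as mailboxes, handle, and the shared cs_occupancy counter (used only to check ME1 in the demo) are my own assumptions, not the textbook's program on page 107.

import threading, queue, time

N = 3
mailboxes = [queue.Queue() for _ in range(N)]    # one FIFO inbox per process
cs_occupancy = 0                                 # demo-only counter to check ME1
check = threading.Lock()

class Process(threading.Thread):
    def __init__(self, pid):
        super().__init__()
        self.pid = pid
        self.clock = 0          # Lamport logical clock
        self.q = []             # local request queue of (timestamp, pid) pairs
        self.acks = set()       # processes that have acknowledged my request
        self.releases = 0       # how many release messages I have seen
        self.deferred = []      # acks postponed while I am inside the CS
        self.in_cs = False

    def send(self, dest, kind, ts=None):
        self.clock += 1
        mailboxes[dest].put((kind, self.clock if ts is None else ts, self.pid))

    def handle(self, kind, ts, sender):
        self.clock = max(self.clock, ts) + 1
        if kind == "request":                    # step 2: enqueue and ack (or defer)
            self.q.append((ts, sender))
            if self.in_cs:
                self.deferred.append(sender)
            else:
                self.send(sender, "ack")
        elif kind == "ack":
            self.acks.add(sender)
        elif kind == "release":                  # step 5: drop the sender's request
            self.q = [e for e in self.q if e[1] != sender]
            self.releases += 1

    def run(self):
        global cs_occupancy
        # Step 1: broadcast a timestamped request and enqueue it locally.
        self.clock += 1
        my_req = (self.clock, self.pid)
        self.q.append(my_req)
        for p in range(N):
            if p != self.pid:
                self.send(p, "request", my_req[0])
        # Step 3: wait until my request heads my queue and all acks are in.
        while not (len(self.acks) == N - 1 and min(self.q) == my_req):
            self.handle(*mailboxes[self.pid].get())
        self.in_cs = True
        with check:                              # demo check of ME1
            cs_occupancy += 1
            assert cs_occupancy == 1, "ME1 violated"
        time.sleep(0.01)                         # simulated critical section
        with check:
            cs_occupancy -= 1
        self.in_cs = False
        # Step 4: delete my request, send postponed acks, broadcast a release.
        self.q.remove(my_req)
        for dest in self.deferred:
            self.send(dest, "ack")
        for p in range(N):
            if p != self.pid:
                self.send(p, "release")
        # Keep serving messages until every other process has also released.
        while self.releases < N - 1:
            self.handle(*mailboxes[self.pid].get())

procs = [Process(i) for i in range(N)]
for p in procs:
    p.start()
for p in procs:
    p.join()
print("all", N, "processes used the critical section exactly once")

Each thread requests the CS exactly once; the assertion on cs_occupancy would fail if two processes were ever inside the CS at the same time.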

8 Analysis of Lamport’s algorithm
Can you show that it satisfies all the properties (i.e., ME1, ME2, ME3) of a correct solution?
Observation. Processes making a decision to enter the CS must have identical views of their local queues, once all acks have been received.
Proof of ME1 (at most one process can be in its CS at any time), by contradiction: suppose not, and both j and k are in their CS at the same time. This implies
  j in CS ⟹ Q.ts.j < Q.ts.k, and
  k in CS ⟹ Q.ts.k < Q.ts.j.
Impossible.

9 Analysis of Lamport’s algorithm
Proofs of ME2 (no deadlock) and ME3 (progress).
Basis. When process i makes a request, there are at most n-1 processes ahead of it in its request queue.
Inductive step. Assume there are K (1 ≤ K ≤ n-1) processes ahead of process i in the request queue. Within a bounded number of steps, the process at the head of the queue enters and then exits its CS, so the number of processes ahead of i drops to K-1.
The full proofs of ME2 and ME3 by induction are on page 107 of the textbook.
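
A compact way to state this induction (my own notation, not the textbook's): let $W_i$ be the number of requests ahead of process $i$'s request in its local queue, and let $\Diamond$ mean "eventually". Then

\[
W_i \le n-1 \quad\text{(basis)}, \qquad
W_i = K \;\Rightarrow\; \Diamond\,(W_i = K-1) \quad\text{(inductive step)},
\]

so by induction $\Diamond\,(W_i = 0)$, at which point process $i$ is at the head of its queue and enters its CS.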

