Maekawa's algorithm. Divide the set of processes into subsets S0, S1, ..., S(n-1) that satisfy the following two conditions:
∀i : i ∈ Si
∀i, j : 0 ≤ i, j ≤ n−1 :: Si ∩ Sj ≠ ∅
Main idea. Each process i is required to receive permission from the members of Si only. Because every pair of subsets intersects, two processes can never both collect an OK from all members of their subsets at the same time.
[Figure: three overlapping subsets, S0 = {0, 1, 2}, S1 = {1, 3, 5}, S2 = {2, 4, 5}]

Maekawa's algorithm. Example. Let there be seven processes 0, 1, 2, 3, 4, 5, 6.
S0 = {0, 1, 2}   S1 = {1, 3, 5}   S2 = {2, 4, 5}   S3 = {0, 3, 4}   S4 = {1, 4, 6}   S5 = {0, 5, 6}   S6 = {2, 3, 6}
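The two conditions can be checked mechanically for this example. A minimal Python sketch (the set literals are copied from the slide above; nothing else is assumed):

    # Maekawa quorums for N = 7, taken from the example
    S = {
        0: {0, 1, 2}, 1: {1, 3, 5}, 2: {2, 4, 5}, 3: {0, 3, 4},
        4: {1, 4, 6}, 5: {0, 5, 6}, 6: {2, 3, 6},
    }
    # Condition 1: every process is a member of its own subset
    assert all(i in S[i] for i in S)
    # Condition 2: every pair of subsets has a nonempty intersection
    assert all(S[i] & S[j] for i in S for j in S)
    print("both quorum conditions hold for N = 7")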

Maekawa's algorithm, Version 1. {Life of process i}
1. Send a timestamped request to each process in Si.
2. Request received → send ack to the process with the lowest timestamp. Thereafter, "lock" (i.e. commit) yourself to that process, and keep the others waiting.
3. Enter the CS when you have received an ack from every member of Si.
4. To exit the CS, send release to every process in Si.
5. Release received → unlock yourself, then send ack to the waiting process with the lowest timestamp.
(A message-handler sketch of these rules follows below.)
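The five rules translate naturally into message handlers. The following is a rough Python sketch, not Maekawa's own pseudocode: the class name, the send(dest, msg) transport, and the assumption that the quorum is passed in as a set are illustrative choices only.

    import heapq

    class MaekawaV1:
        """One process in Version 1; send(dest, msg) is an assumed transport.
        Note that Si includes i itself, so a process also grants permission to itself."""
        def __init__(self, me, quorum, send):
            self.me, self.quorum, self.send = me, set(quorum), send
            self.locked_by = None     # requester this process is currently committed to
            self.waiting = []         # deferred requests, ordered by (timestamp, id)
            self.acks = set()         # acks collected for my own pending request
            self.in_cs = False

        def request_cs(self, ts):                       # rule 1
            self.acks = set()
            for q in self.quorum:
                self.send(q, ('request', ts, self.me))

        def on_request(self, ts, frm):                  # rule 2
            if self.locked_by is None:
                self.locked_by = frm
                self.send(frm, ('ack', self.me))
            else:
                heapq.heappush(self.waiting, (ts, frm))

        def on_ack(self, frm):                          # rule 3
            self.acks.add(frm)
            if self.acks == self.quorum:
                self.in_cs = True                       # enter the critical section

        def exit_cs(self):                              # rule 4
            self.in_cs = False
            for q in self.quorum:
                self.send(q, ('release', self.me))

        def on_release(self, frm):                      # rule 5
            self.locked_by = None
            if self.waiting:
                ts, nxt = heapq.heappop(self.waiting)
                self.locked_by = nxt
                self.send(nxt, ('ack', self.me))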

Maekawa's algorithm. Proof of ME1 (at most one process can enter its critical section at any time).
Let i and j both attempt to enter their critical sections. Since Si ∩ Sj ≠ ∅, there is a process k ∈ Si ∩ Sj. Process k will not send ack to both, so it acts as the arbitrator and at most one of i, j enters its CS.

Maekawa's algorithm. Proof of ME2 (no deadlock). Unfortunately, deadlock is possible! Suppose processes 0, 1, and 2 request the CS concurrently:
From S0 = {0, 1, 2}: 0 and 2 send ack to 0, but 1 sends ack to 1.
From S1 = {1, 3, 5}: 1 and 3 send ack to 1, but 5 sends ack to 2.
From S2 = {2, 4, 5}: 4 and 5 send ack to 2, but 2 sends ack to 0.
Now 0 waits for 1, 1 waits for 2, and 2 waits for 0: a circular wait. So deadlock is possible!
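The circular wait can be reconstructed mechanically. A small Python sketch of exactly the scenario above (locked_to records which requester each quorum member committed itself to; the variable names are illustrative):

    quorum = {0: {0, 1, 2}, 1: {1, 3, 5}, 2: {2, 4, 5}}
    locked_to = {0: 0, 1: 1, 2: 0, 3: 1, 4: 2, 5: 2}

    for requester, members in quorum.items():
        blockers = {locked_to[m] for m in members if locked_to[m] != requester}
        print(f"process {requester} waits for {blockers}")
    # prints: process 0 waits for {1}, process 1 waits for {2}, process 2 waits for {0}
    # -> a cycle in the wait-for graph, hence the deadlock.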

Maekawa's algorithm, Version 2: avoiding deadlock. If processes received messages in increasing order of timestamp, deadlock could be avoided, but that is too strong an assumption. Version 2 therefore uses three additional message types:
- failed
- inquire
- relinquish

Maekawa's algorithm, Version 2. What is new in version 2?
- Send ack and set the lock as usual.
- If the lock is set and a request with a larger timestamp arrives, send failed (the newcomer has no chance for now). If the incoming request has a lower timestamp, send inquire (are you in the CS?) to the process currently holding the lock.
- A process that receives inquire and has received at least one failed message sends relinquish; the recipient of the relinquish resets its lock.
(The arbiter's side of these rules is sketched below.)
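One way to phrase the arbiter's side in Python. This is a sketch under the same assumed send(dest, msg) transport as before, not Maekawa's published pseudocode; tie-breaking of equal timestamps is ignored.

    import heapq

    class MaekawaV2Arbiter:
        """Arbiter-side sketch of Version 2; send(dest, msg) is an assumed transport."""
        def __init__(self, me, send):
            self.me, self.send = me, send
            self.locked_by = None        # requester currently holding this arbiter's lock
            self.locked_ts = None        # timestamp of that requester's request
            self.waiting = []            # deferred requests, ordered by (timestamp, id)

        def on_request(self, ts, frm):
            if self.locked_by is None:                   # free: grant as in Version 1
                self.locked_by, self.locked_ts = frm, ts
                self.send(frm, ('ack', self.me))
            elif ts > self.locked_ts:                    # newcomer is younger: no chance now
                self.send(frm, ('failed', self.me))
                heapq.heappush(self.waiting, (ts, frm))
            else:                                        # newcomer is older: query the holder
                self.send(self.locked_by, ('inquire', self.me))
                heapq.heappush(self.waiting, (ts, frm))

        def on_relinquish(self, frm):
            # the holder gave the lock back, but it still wants the CS: requeue it,
            # then grant the lock to the oldest waiting request
            heapq.heappush(self.waiting, (self.locked_ts, self.locked_by))
            ts, nxt = heapq.heappop(self.waiting)
            self.locked_by, self.locked_ts = nxt, ts
            self.send(nxt, ('ack', self.me))

On the requester's side, a process that has received inquire and at least one failed (and is not yet in its CS) returns the lock by sending relinquish and removes that arbiter's ack from its collected set.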

Maekawa's algorithm, Version 2: Example.

Comments. Let K = |Si|, and let each process be a member of D subsets. When N = 7, K = D = 3. When K = D, N = K(K−1) + 1, so K is of the order √N. The message complexity of Version 1 is 3√N. Maekawa's analysis of Version 2 reveals a complexity of 7√N.
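As a quick numeric check of these figures (plain Python arithmetic, nothing assumed beyond the slide):

    import math

    K = 3                          # quorum size |Si| in the 7-process example
    N = K * (K - 1) + 1            # with K = D this gives N = 7
    print(N, round(math.sqrt(N)))  # 7 3  -> K is roughly sqrt(N)
    # Version 1 sends request + ack + release to each of the K quorum members,
    # Version 2 needs up to 7 messages per member in Maekawa's analysis:
    print(3 * K, 7 * K)            # 9 21 -> about 3*sqrt(N) and 7*sqrt(N) messages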

Token-passing algorithms: Suzuki-Kasami algorithm. Assumes a completely connected network of processes. There is one token in the network; the holder of the token has permission to enter the CS. The token passes from one process to another on demand.

Token-passing algorithms: Suzuki-Kasami algorithm. To request the CS, process i broadcasts (i, num), where num is a sequence number. Each process maintains
- an array req: req[j] is the sequence number of the latest request heard from process j (some requests soon become stale);
- an array last: last[j] is the sequence number of the latest visit to the CS by process j;
- a queue of waiting processes.
req: array[0..n−1] of integer; last: array[0..n−1] of integer

Token-passing algorithms: Suzuki-Kasami algorithm. When a process receives a request (i, num) from process k, it sets req[k] := num. When process i receives the token, it sets last[i] := its own num. Process i retains process k in the token's queue only if 1 + last[k] = req[k]; this guarantees the freshness of the request. (A bookkeeping sketch follows below.)
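A compact Python sketch of this bookkeeping may help. It is an illustration, not the full Suzuki-Kasami algorithm (in particular, handing over an idle token when a request arrives is omitted); broadcast and send are assumed helpers, and the token is represented as a (last, queue) pair.

    from collections import deque

    class SuzukiKasami:
        """Bookkeeping sketch; broadcast(msg) and send(dest, msg) are assumed helpers."""
        def __init__(self, me, n, broadcast, send):
            self.me, self.n = me, n
            self.broadcast, self.send = broadcast, send
            self.req = [0] * n        # req[j]: latest sequence number heard from j
            self.token = None         # (last, queue) if this process holds the token
            self.num = 0              # my own request sequence number

        def request_cs(self):
            self.num += 1
            self.broadcast(('request', self.me, self.num))      # broadcast (i, num)

        def on_request(self, k, num):
            # the slide simply assigns num; max() additionally ignores out-of-order duplicates
            self.req[k] = max(self.req[k], num)
            # (if this process holds an idle token it could pass it on here; omitted)

        def on_token(self, last, queue):
            last[self.me] = self.num           # record my latest visit's sequence number
            self.token = (last, deque(queue))
            # ... the critical section can now be entered ...

        def exit_cs(self):
            last, queue = self.token
            for k in range(self.n):
                if k not in queue and self.req[k] == last[k] + 1:   # request still fresh
                    queue.append(k)
            if queue:
                nxt = queue.popleft()
                self.token = None
                self.send(nxt, ('token', last, list(queue)))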

Shared-memory algorithms for mutual exclusion. The complexity of the solution depends on the grain of atomicity:
- atomic reads and writes (what does it mean?) — numerous solutions have been proposed;
- read-modify-write operations (what is this?) — somewhat simpler to use (a test-and-set sketch follows this list);
- LL (Load Linked) and SC (Store Conditional) primitives — easy to use, but somewhat unconventional.
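As an illustration of the read-modify-write style, here is a test-and-set spinlock sketch in Python. Real hardware provides the atomicity of test-and-set directly; the snippet only simulates it with an internal lock, so it shows the idea rather than a usable implementation.

    import threading

    class TestAndSetLock:
        """Spinlock built from a read-modify-write primitive (illustration only)."""
        def __init__(self):
            self._flag = False
            self._atomic = threading.Lock()   # stands in for hardware atomicity

        def _test_and_set(self):
            with self._atomic:                # the read and the write happen as one step
                old = self._flag
                self._flag = True
                return old

        def acquire(self):
            while self._test_and_set():       # spin until the old value was False
                pass

        def release(self):
            self._flag = False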

Solution using atomic read/write
define turn0, turn1 : shared boolean {initially false}

{process 0}
do true →
  turn0 := true;
  do turn1 → skip od;
  critical section;
  turn0 := false
od

{process 1}
do true →
  turn1 := true;
  do turn0 → skip od;
  critical section;
  turn1 := false
od

What is the problem here?

Peterson's two-process algorithm
program peterson;
define flag[0], flag[1]: shared boolean {initially false}
       turn: shared integer {initially 0}

{Program for process 0}
do true →
  1: flag[0] := true;
  2: turn := 0;
  3: do (flag[1] ∧ turn = 0) → skip od;
  4: critical section;
  5: flag[0] := false;
  6: non-critical section code
od

{Program for process 1}
do true →
  7: flag[1] := true;
  8: turn := 1;
  9: do (flag[0] ∧ turn = 1) → skip od;
  10: critical section;
  11: flag[1] := false;
  12: non-critical section code
od
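The two busy-wait loops map directly onto threads. A Python demo of the slide's version is below; note that ordinary Python variables are not the atomic shared registers the algorithm assumes (the demo relies on CPython's global interpreter lock for sequentially consistent interleavings), so treat it as an illustration only.

    import threading

    # Shared registers, following the slide's convention: each process sets turn
    # to its OWN id and waits while the other is interested and turn still points
    # at itself.
    flag = [False, False]
    turn = 0
    counter = 0            # shared data the critical section protects
    ITER = 1000            # kept small: busy-waiting under the GIL is slow

    def process(me):
        global turn, counter
        other = 1 - me
        for _ in range(ITER):
            flag[me] = True                       # steps 1 / 7
            turn = me                             # steps 2 / 8
            while flag[other] and turn == me:     # steps 3 / 9: busy wait
                pass
            counter += 1                          # steps 4 / 10: critical section
            flag[me] = False                      # steps 5 / 11
            # steps 6 / 12: non-critical section

    t0 = threading.Thread(target=process, args=(0,))
    t1 = threading.Thread(target=process, args=(1,))
    t0.start(); t1.start(); t0.join(); t1.join()
    print(counter)    # 2000 if mutual exclusion held throughout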

Does it work? ME1: at most one process in the CS.
Let 0 be in its CS. Can 1 enter its CS? Let's see.
0 in CS ⇒ flag[1] = false, or turn = 1, or both (that is how 0 exited the wait in step 3).
To enter its CS, 1 must see flag[0] = false, or turn = 0, or both. But 0 in CS ⇒ flag[0] = true! So turn = 0 would have to hold. The two cases below show that it cannot while 0 is in its CS.

Does it work? Case 1: process 0 reads flag[1] = false in step 3
⇒ process 1 has not yet executed step 7
⇒ process 1 eventually sets turn to 1 (step 8)
⇒ process 1 checks turn (step 9) and finds turn = 1
⇒ process 1 waits in step 9 and cannot enter its CS.

Does it work? Case 2: process 0 reads turn = 1 in step 3
⇒ process 1 executed step 8 after 0 executed step 2
⇒ in step 9 process 1 reads flag[0] = true and turn = 1
⇒ process 1 waits in step 9 and cannot enter its CS.

Does it work? ME2: no deadlock. The two wait conditions can never hold at the same time:
(flag[1] ∧ turn = 0) ∧ (flag[0] ∧ turn = 1) = false,
since turn cannot equal 0 and 1 simultaneously.
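The claim is easy to check exhaustively, since the only shared variables involved are the two flags and turn:

    # Both wait conditions cannot hold at once, whatever the flags are:
    for turn in (0, 1):
        for f0 in (False, True):
            for f1 in (False, True):
                assert not ((f1 and turn == 0) and (f0 and turn == 1))
    print("the two guards are never simultaneously true")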

Does it work? ME3. Progress (eventual entry into CS) Argue about this yourself.