Chapter 12 Message Ordering

Causal Ordering
- A single message should not be overtaken by a sequence of messages.
- Stronger than FIFO ordering.
- Example: a computation can be FIFO-ordered but not causally ordered.

Causal and FIFO ordering
- FIFO: any two messages from a process P_i to a process P_j are received in the order in which they were sent.
- Causal: if r_1, r_2 are receive events on the same process and s_1, s_2 are the corresponding send events, then s_1 happened before s_2 implies that r_1 occurs before r_2.

Algorithm for causal ordering
- Maintain a matrix M[1..N, 1..N] at each process.
- When P_i sends a message to P_j:
  - M[i,j] = M[i,j] + 1
  - Piggyback M with the message.
- When P_i receives a message carrying matrix W from P_j:
  - If W[k,i] <= M[k,i] for every k != j, deliver the message and set M = max(M, W) (entry-wise maximum).
  - Otherwise block (delay) the message until the condition holds.

Algorithm for causal ordering
- The entry M[k,j] at process i records the number of messages sent by process k to process j, as known to process i.
- If process i receives a message from j carrying matrix W, and W[k,i] > M[k,i] for some k != j, then j knows of a message that k has sent to i but that i has not yet received. Hence process i blocks the message from j until that earlier message arrives.

Algorithm
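The algorithm figure itself is not in the transcript, so below is a minimal Java sketch of the matrix rule described above. The class and method names (CausalDelivery, canDeliver, and so on) are assumptions for illustration, not the book's CausalLinker code; message transport and the queue of blocked messages are left to the caller.

```java
import java.util.*;

public class CausalDelivery {
    final int n;          // number of processes
    final int myId;       // id of this process
    final int[][] M;      // M[k][j]: messages sent by process k to process j, as known locally

    public CausalDelivery(int n, int myId) {
        this.n = n;
        this.myId = myId;
        this.M = new int[n][n];
    }

    // Before sending to process j: count the message and piggyback a copy of M.
    public int[][] send(int j) {
        M[myId][j]++;
        int[][] copy = new int[n][];
        for (int k = 0; k < n; k++) copy[k] = M[k].clone();
        return copy;
    }

    // A message from j carrying matrix W is deliverable iff W does not mention a message
    // to us from some other process k that we have not yet received.
    public boolean canDeliver(int j, int[][] W) {
        for (int k = 0; k < n; k++)
            if (k != j && W[k][myId] > M[k][myId]) return false;
        return true;
    }

    // On delivery, merge the piggybacked matrix entry-wise: M = max(M, W).
    public void deliver(int[][] W) {
        for (int k = 0; k < n; k++)
            for (int l = 0; l < n; l++)
                M[k][l] = Math.max(M[k][l], W[k][l]);
    }

    // Tiny demo: P0 sends m_a to P2, then m_b to P1; P1 (after receiving m_b) sends m_c to P2.
    // m_c is causally after m_a, so P2 must hold m_c back until m_a has been delivered.
    public static void main(String[] args) {
        CausalDelivery p0 = new CausalDelivery(3, 0);
        CausalDelivery p1 = new CausalDelivery(3, 1);
        CausalDelivery p2 = new CausalDelivery(3, 2);
        int[][] wa = p0.send(2);                   // m_a: P0 -> P2 (delayed in transit)
        int[][] wb = p0.send(1);                   // m_b: P0 -> P1
        p1.deliver(wb);
        int[][] wc = p1.send(2);                   // m_c: P1 -> P2
        System.out.println(p2.canDeliver(1, wc));  // false: m_c would overtake m_a
        p2.deliver(wa);                            // m_a finally arrives
        System.out.println(p2.canDeliver(1, wc));  // true
    }
}
```

A message that fails canDeliver is simply set aside and re-checked whenever another message is delivered.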

Applications
- Causal chat (Figure 12.5), which uses the causal linker of Figure 12.4.
- Messages in the example: P_0 → P_1, P_1 → P_2, P_0 → P_2.
- If P_0 sends a message to P_1 and P_2, and P_1 sends a reply to both P_0 and P_2, then the causal linker guarantees that P_1's reply cannot be delivered at P_2 before the original query.

Synchronous Ordering
- Equivalent to a computation in which all messages are logically instantaneous.
- Stronger than causal and FIFO ordering.
- Formally, let E be the set of all external events (sends and receives). A computation is synchronous iff there exists a mapping T from E to the set of natural numbers such that T(s) = T(r) for every matching send event s and receive event r, and T(e) < T(f) whenever e causally precedes f and e, f are not a matching send-receive pair.
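The same condition in symbols, as a sketch of the standard formalization (the notation on the original slide did not survive extraction; here s ~ r means that s and r are the send and receive of the same message, and e ≺ f means e causally precedes f):

```latex
\exists\, T : E \to \mathbb{N} \text{ such that }
\forall s, r \in E:\; s \sim r \implies T(s) = T(r),
\qquad
\forall e, f \in E:\; \bigl(e \prec f \wedge \neg(e \sim f)\bigr) \implies T(e) < T(f)
```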

Examples (figures): a synchronous computation and a non-synchronous computation.

Synchronous order: Algorithm
- The algorithm cannot be totally symmetric (consider two processes that wish to send messages to each other simultaneously).
- Use process numbers to order all processes.
- Use control messages to enforce synchronous ordering.

Synchronous order: Algorithm
- Messages:
  - Big: sent by a bigger process to a smaller process.
  - Small: sent by a smaller process to a bigger process.
- All processes are initially active.
- An active process can send a big message:
  - After sending, it turns passive until an ack is received.
  - A passive process cannot send or receive any message (except, of course, the ack).

Synchronous order: Algorithm
- Small messages:
  - Request permission from the bigger process before sending.
  - Permission can be granted only by an active process; the bigger process turns passive after granting the permission.
  - Once the small message is received, the bigger process can turn active again.

Algorithm
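This figure is also missing from the transcript; the Java sketch below shows one possible shape of the per-process state machine under the rules above. All names are assumptions, actual message and ack transport is omitted, and deferring permission requests received while passive is left to the caller.

```java
// Sketch of one process in the big/small synchronous-ordering protocol (assumed names).
public class SyncOrderProcess {
    enum State { ACTIVE, PASSIVE }

    final int myId;            // processes are totally ordered by id
    State state = State.ACTIVE;

    public SyncOrderProcess(int myId) { this.myId = myId; }

    // Big message (to a smaller process): allowed only while active; then wait passively for the ack.
    public boolean sendBig(int dest) {
        if (dest >= myId || state != State.ACTIVE) return false;  // caller retries later
        state = State.PASSIVE;       // stays passive until onAck()
        return true;                 // the message itself would be transmitted here
    }

    // Ack for a big message: the sender becomes active again.
    public void onAck() { state = State.ACTIVE; }

    // Permission request from a smaller process that wants to send a small message:
    // grant only while active, and turn passive until that message arrives.
    public boolean onPermissionRequest(int from) {
        if (from >= myId || state != State.ACTIVE) return false;  // defer the request
        state = State.PASSIVE;
        return true;                 // permission granted
    }

    // The small message has arrived: the bigger process turns active again.
    public void onSmallMessage() { state = State.ACTIVE; }
}
```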

Total order for multicast messages
- If process P_i sends messages x and y to processes P_j, P_k, ..., then all of P_j, P_k, ... receive the two messages in the same order (x, y or y, x).
- Observe that this does not imply causal or even FIFO ordering.
- Algorithms:
  - Similar to the mutual exclusion problem.
  - Assume FIFO channels.

Centralized and Lamport Algorithms
- Assume FIFO channels.
- Broadcasting a message plays the role that requestCS plays in the mutual exclusion algorithms.
- Centralized: the coordinator multicasts the message (instead of sending the lock).
- Lamport: the broadcast is stored in a queue by all processes and a timestamped ack is sent back. A process can deliver (act on) the message with timestamp t in its queue once it has received a message with timestamp greater than t from all other processes (analogous to entering the CS in Lamport's mutex algorithm).
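A small Java sketch of the queue bookkeeping behind the Lamport variant, implementing the delivery rule just stated. The class and method names are assumptions; sending the message and the acks to all other processes over FIFO channels is left to the caller.

```java
import java.util.*;

public class LamportTotalOrder {
    record Msg(int ts, int sender, String body) {}

    final int n, myId;
    int clock = 0;
    final int[] lastSeen;   // highest timestamp received from each process
    final PriorityQueue<Msg> queue = new PriorityQueue<>(
            Comparator.comparingInt((Msg m) -> m.ts()).thenComparingInt(m -> m.sender()));

    public LamportTotalOrder(int n, int myId) { this.n = n; this.myId = myId; lastSeen = new int[n]; }

    // Multicast: stamp the message with the local clock and queue our own copy.
    public Msg broadcast(String body) {
        Msg m = new Msg(++clock, myId, body);
        queue.add(m);
        return m;                        // caller sends m to all other processes
    }

    // Receive a multicast message: advance the clock, queue it, and return the timestamp
    // of the ack that the caller should send to all processes.
    public int onMessage(Msg m) {
        clock = Math.max(clock, m.ts()) + 1;
        lastSeen[m.sender()] = Math.max(lastSeen[m.sender()], m.ts());
        queue.add(m);
        return ++clock;                  // ack timestamp, strictly greater than m.ts()
    }

    // An ack only advances the clock and the per-sender high-water mark.
    public void onAck(int ts, int sender) {
        clock = Math.max(clock, ts) + 1;
        lastSeen[sender] = Math.max(lastSeen[sender], ts);
    }

    // Deliver messages from the head of the queue once a message with a larger timestamp
    // has been received from every other process (the rule stated on the slide).
    public List<Msg> deliverable() {
        List<Msg> out = new ArrayList<>();
        while (!queue.isEmpty()) {
            Msg head = queue.peek();
            boolean ok = true;
            for (int p = 0; p < n; p++)
                if (p != myId && lastSeen[p] <= head.ts()) { ok = false; break; }
            if (!ok) break;
            out.add(queue.poll());
        }
        return out;
    }
}
```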

Skeen's Algorithm
- Lamport's algorithm is wasteful when a message is multicast to only some of the processes (the remaining processes simply ignore it).
- Skeen's algorithm uses a number of messages proportional to the number of recipients of the message.

Skeen's Algorithm
- The initiator sends a timestamped message to all the destination processes.
- On receiving the message, a destination marks it as undeliverable and sends the value of its logical clock back to the initiator as its proposed timestamp.
- The initiator sets the final timestamp to the maximum of all proposals and sends it to all destinations.
- On receiving the final timestamp of a message, a destination marks the message as deliverable. A deliverable message is delivered when it has the smallest timestamp in the message queue.

Skeen's Algorithm: Example
- Process 0 multicasts msg to processes 1 and 2.
- On receiving msg, processes 1 and 2 mark it as undeliverable and send propose messages with values 2 and 4 respectively.
- If process 1 receives another multicast from a lower-priority process (say one with id 3) in the meantime, it does not act on it until it has received final from process 0.
- Process 0 takes the max of the proposed timestamps and sends out final 4 to processes 1 and 2.
- Processes 1 and 2 mark msg as deliverable and deliver it once it has the smallest timestamp in their queues.
- (Space-time diagram for processes 0, 1, 2.)
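The following Java sketch covers both roles described above (the initiator that collects proposals and the destinations that queue the message). The names are assumptions, transport is left to the caller, and ties between equal final timestamps would additionally have to be broken by process id to get a strict total order.

```java
import java.util.*;

public class SkeenProcess {
    static final class Pending { int ts; boolean deliverable; Pending(int ts) { this.ts = ts; } }

    int clock = 0;
    final Map<String, Pending> queue = new HashMap<>();           // destination: msgId -> state
    final Map<String, List<Integer>> proposals = new HashMap<>(); // initiator: msgId -> proposals

    // Initiator, step 1: remember the multicast; the caller sends it to all destinations.
    public void multicast(String msgId) { proposals.put(msgId, new ArrayList<>()); }

    // Destination, step 2: mark the message undeliverable and propose a timestamp.
    public int onMessage(String msgId) {
        queue.put(msgId, new Pending(++clock));
        return clock;                     // proposal sent back to the initiator
    }

    // Initiator, step 3: collect proposals; once every destination has answered, the
    // final timestamp is the maximum proposal (returned here, null until then).
    public Integer onPropose(String msgId, int proposedTs, int numDestinations) {
        List<Integer> ps = proposals.get(msgId);
        ps.add(proposedTs);
        return ps.size() == numDestinations ? Collections.max(ps) : null;
    }

    // Destination, step 4: adopt the final timestamp and mark the message deliverable.
    public void onFinal(String msgId, int finalTs) {
        clock = Math.max(clock, finalTs);
        Pending p = queue.get(msgId);
        p.ts = finalTs;
        p.deliverable = true;
    }

    // Deliver deliverable messages for as long as they have the smallest timestamp in the queue.
    public List<String> deliverable() {
        List<String> out = new ArrayList<>();
        while (!queue.isEmpty()) {
            Map.Entry<String, Pending> min = Collections.min(queue.entrySet(),
                    Comparator.comparingInt((Map.Entry<String, Pending> e) -> e.getValue().ts));
            if (!min.getValue().deliverable) break;
            out.add(min.getKey());
            queue.remove(min.getKey());
        }
        return out;
    }
}
```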

Application: Replicated State Machine
- Provide a fault-tolerant service using multiple servers.
- All machines should process all requests in the same order.
- Use total ordering of messages.
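To see why identical delivery order matters, here is a tiny self-contained Java example (all names are illustrative assumptions). Because "set" and "inc" do not commute, replicas that applied these commands in different orders would diverge; with a totally ordered stream they necessarily end in the same state.

```java
import java.util.*;

public class ReplicatedRegister {
    long value = 0;

    // Deterministic state machine: "set <n>" overwrites the value, "inc" increments it.
    void apply(String command) {
        if (command.startsWith("set ")) value = Long.parseLong(command.substring(4));
        else if (command.equals("inc")) value++;
    }

    public static void main(String[] args) {
        // Totally ordered multicast guarantees every replica sees this exact sequence.
        List<String> totallyOrdered = List.of("set 10", "inc", "set 0", "inc");
        ReplicatedRegister r1 = new ReplicatedRegister();
        ReplicatedRegister r2 = new ReplicatedRegister();
        totallyOrdered.forEach(r1::apply);
        totallyOrdered.forEach(r2::apply);
        System.out.println(r1.value + " == " + r2.value);   // both replicas end at 1
    }
}
```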