1
Time, Clocks, and the Ordering of Events in a Distributed System Leslie Lamport Massachusetts Computer Associates, Inc. Presented by Xiaofeng Xiao
2
Abstract
Concept of time and distributed systems
The Partial Ordering
Logical Clocks ---- Clock Conditions
Total Ordering ---- Solving the mutual exclusion problem
Anomalous Behavior
Physical Clocks
Conclusion
3
Time
Time is one-dimensional: it cannot move backward and it cannot stop.
It is derived from the concept of the order in which events occur.
The concepts "before" and "after" need to be reconsidered in a distributed system.
4
Distributed System
A distributed system consists of a collection of distinct processes which are spatially separated and which communicate with one another by exchanging messages.
It could be a network of interconnected computers, like the ARPA ("Advanced Research Projects Agency") net, or just a single computer with separate processes.
It is sometimes impossible to say which of two events occurred first in a distributed system. "Happened before" is therefore only a partial ordering of the events in the system.
5
ARPA net
6
The Partial Ordering
Our system is composed of a collection of processes.
Each process consists of a sequence of events.
An event could be the execution of a subprogram on a computer or the execution of a single machine instruction; it depends upon the application.
A single process is defined to be a set of events with an a priori total ordering.
7
Definition of "happened before"
The relation "→" on the set of events of a system is the smallest relation satisfying the following three conditions:
(1) If a and b are events in the same process, and a comes before b, then a → b.
(2) If a is the sending of a message by one process and b is the receipt of the same message by another process, then a → b.
(3) If a → b and b → c, then a → c.
Two distinct events a and b are said to be concurrent if a ↛ b and b ↛ a.
We assume that a ↛ a for any event a.
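The three conditions above can be sketched directly: conditions (1) and (2) supply an initial edge set, and condition (3) is a transitive closure. This is a minimal illustration with hypothetical event names (p1, p2, q1, q2), not code from the paper.

```python
from itertools import product

# Hypothetical two-process run. Edges encode condition (1)
# (same-process order) and condition (2) (send precedes receive).
edges = {
    ("p1", "p2"), ("q1", "q2"),  # (1) program order within each process
    ("p1", "q2"),                # (2) p1 sends a message received at q2
}

def happened_before(edges):
    """Condition (3): close the edge set under transitivity."""
    hb = set(edges)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(list(hb), repeat=2):
            if b == c and (a, d) not in hb:
                hb.add((a, d))
                changed = True
    return hb

hb = happened_before(edges)

def concurrent(a, b):
    """Two distinct events are concurrent iff neither happened before the other."""
    return (a, b) not in hb and (b, a) not in hb

assert ("p1", "q2") in hb        # ordered by the message rule
assert concurrent("p2", "q1")    # no chain of edges connects them
```

Note that p2 and q1 are concurrent even though they may occur at different physical times: no information flows between them.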
8
Space-time diagram
p1 → r4, since p1 → q2, q2 → q4, q4 → r3, and r3 → r4.
p3 and q3 are concurrent.
9
Logical Clocks
A clock is just a way of assigning a number to an event.
Definition of logical clocks:
A clock Ci for each process Pi is a function which assigns a number Ci(a) to any event a in that process.
The entire system of clocks is represented by the function C, which assigns to any event b the number C(b) = Cj(b) if b is an event in process Pj.
10
Clock Condition
For any events a, b: if a → b then C(a) < C(b).
The Clock Condition is satisfied if:
C1. If a and b are events in process Pi, and a comes before b, then Ci(a) < Ci(b).
C2. If a is the sending of a message by process Pi and b is the receipt of that message by process Pj, then Ci(a) < Cj(b).
11
Example of "ticks"
C1 means that there must be a tick line between any two events on a process line.
C2 means that every message line must cross a tick line.
12
Redraw Figure 2
13
Clock Condition
Now assume that the processes are algorithms, and the events represent certain actions during their execution. Process Pi's clock is represented by a register Ci, so that Ci(a) is the value contained by Ci during the event a.
To meet conditions C1 and C2, the processes need to obey the following rules:
IR1. Each process Pi increments Ci between any two successive events.
IR2. (a) If event a is the sending of a message m by process Pi, then the message m contains a timestamp Tm = Ci(a). (b) Upon receiving a message m, process Pj sets Cj greater than or equal to its present value and greater than Tm.
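Rules IR1 and IR2 can be sketched as a small clock class. This is an illustrative implementation, not the paper's own pseudocode; the class and method names are assumptions.

```python
class LamportClock:
    """A logical clock register obeying IR1 and IR2 (sketch)."""

    def __init__(self):
        self.c = 0

    def tick(self):
        # IR1: increment the register between successive events.
        self.c += 1
        return self.c

    def send(self):
        # IR2(a): a send is an event; its timestamp Tm is the clock value.
        return self.tick()

    def receive(self, tm):
        # IR2(b): advance past both the local value and the timestamp Tm.
        self.c = max(self.c, tm) + 1
        return self.c

p, q = LamportClock(), LamportClock()
t = p.send()        # p's send event gets timestamp 1
q.tick(); q.tick()  # two local events at q: its clock reaches 2
r = q.receive(t)    # the receipt must exceed both 2 and Tm = 1
assert t == 1 and r == 3
```

With this update rule, any send is guaranteed a smaller clock value than the corresponding receive, which is exactly condition C2.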
14
Total Ordering
We can use a system of clocks satisfying the Clock Condition to place a total ordering on the set of all system events.
We simply order the events by the times at which they occur.
To break ties, we use any arbitrary total ordering ≺ of the processes.
15
Definition of total ordering "=>"
If a is an event in process Pi and b is an event in process Pj, then a => b if and only if either (i) Ci(a) < Cj(b), or (ii) Ci(a) = Cj(b) and Pi ≺ Pj.
The Clock Condition implies that if a → b then a => b.
In other words, the relation => is a way of completing the "happened before" partial ordering to a total ordering.
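The definition amounts to a lexicographic comparison: clock value first, process identity as tie-breaker. A minimal sketch, where an event is represented as a (timestamp, process id) pair and the process ids' natural ordering stands in for ≺:

```python
def before(ev_a, ev_b):
    """a => b under the total ordering: compare (Ci(a), Pi) pairs
    lexicographically; the process id breaks timestamp ties."""
    return ev_a < ev_b  # tuple comparison: clock first, then process id

assert before((3, "P1"), (5, "P0"))       # (i) smaller timestamp wins
assert before((4, "P0"), (4, "P2"))       # (ii) tie broken by P0 before P2
assert not before((4, "P2"), (4, "P0"))
```

Because the pairs are all distinct (ties are always broken), every two distinct events are comparable, so => is total.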
16
Unique → and non-unique =>
The ordering => depends upon the system of clocks and is not unique.
Example: if Ci(a) = Cj(b) and we choose Pi ≺ Pj, then a => b; if we choose Pj ≺ Pi instead, then b => a.
The partial ordering →, by contrast, is uniquely determined by the system of events.
17
Total Ordering
Solving the mutual exclusion problem
Mutual exclusion: a collection of processes share a single resource. Only one process can use the resource at a time; the other processes are excluded from doing the same thing.
Requirements:
(I) A process which has been granted the resource must release it before it can be granted to another process.
(II) Different requests for the resource must be granted in the order in which they are made.
(III) If every process which is granted the resource eventually releases it, then every request is eventually granted.
18
Total Ordering
Implement a system of clocks with rules IR1 and IR2, and use them to define a total ordering => of all events.
Assumptions:
1. For any two processes Pi and Pj, the messages sent from Pi to Pj are received in the same order as they are sent.
2. Every message is eventually received.
3. A process can send messages directly to every other process.
4. Each process maintains its own request queue which is never seen by any other process. The request queues initially contain the single message T0:P0 requests resource.
19
Total Ordering
The algorithm:
1. To request the resource, process Pi sends the message Tm:Pi requests resource to every other process, and puts that message on its request queue, where Tm is the timestamp of the message.
2. When process Pj receives the message Tm:Pi requests resource, it places it on its request queue and sends a (timestamped) acknowledgment message to Pi.
3. To release the resource, process Pi removes any Tm:Pi requests resource message from its request queue and sends a (timestamped) Pi releases resource message to every other process.
20
Total Ordering
The algorithm (cont'd):
4. When process Pj receives a Pi releases resource message, it removes any Tm:Pi requests resource message from its request queue.
5. Process Pi is granted the resource when the following two conditions are satisfied: (i) there is a Tm:Pi requests resource message in its request queue which is ordered before any other request in its queue by the relation =>; (ii) Pi has received a message from every other process timestamped later than Tm.
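The five rules can be sketched as a small simulation. This is an illustrative reduction, not the paper's presentation: message delivery is modeled as synchronous method calls, the initial T0:P0 entry is omitted, and class and method names are assumptions.

```python
import heapq

class Process:
    """Sketch of rules 1-5; request queue entries are (Tm, pid) pairs,
    ordered by => via tuple comparison."""

    def __init__(self, pid):
        self.pid = pid
        self.peers = []
        self.clock = 0
        self.queue = []        # own request queue (rule: never shared)
        self.last_seen = {}    # latest timestamp received from each peer

    def stamp(self):
        self.clock += 1        # IR1
        return self.clock

    def request(self):         # rule 1: broadcast and enqueue locally
        tm = self.stamp()
        heapq.heappush(self.queue, (tm, self.pid))
        for p in self.peers:
            p.on_request(tm, self.pid, self)
        return tm

    def on_request(self, tm, pid, sender):  # rule 2: enqueue, then ack
        self.clock = max(self.clock, tm) + 1   # IR2(b)
        heapq.heappush(self.queue, (tm, pid))
        sender.on_ack(self.stamp(), self.pid)

    def on_ack(self, tm, pid):
        self.clock = max(self.clock, tm) + 1
        self.last_seen[pid] = max(self.last_seen.get(pid, 0), tm)

    def granted(self, tm):     # rule 5, conditions (i) and (ii)
        head_ok = self.queue and self.queue[0] == (tm, self.pid)
        acks_ok = all(self.last_seen.get(p.pid, 0) > tm for p in self.peers)
        return bool(head_ok and acks_ok)

a, b = Process("P1"), Process("P2")
a.peers, b.peers = [b], [a]
t1 = a.request()           # P1 requests first (timestamp 1)
t2 = b.request()           # P2 requests later (larger timestamp)
assert a.granted(t1)       # P1 heads both queues and holds a later ack
assert not b.granted(t2)   # P1's earlier request still heads P2's queue
```

Condition (ii) together with the in-order delivery assumption is what guarantees P2 already knows about every request that could precede its own.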
21
Total Ordering Example
Initially, P0 has the resource; P1 sends its request first, then P2 sends its request.
At time T1, P1 sends the message T1:P1 requests resource to P0 and P2, and puts this message on its request queue. (P1's queue: T1:P1)
At time T2, P2 sends the message T2:P2 requests resource to P0 and P1, and puts this message on its request queue. (P2's queue: T2:P2)
At time T3, P0 receives the message T2:P2; it places it on its request queue and sends a timestamped acknowledgment message T3:M to P2.
At time T4, P0 receives the message T1:P1; it places it on its request queue and sends a timestamped acknowledgment message T4:M to P1.
22
Total Ordering Example (cont'd)
At time T5, P2 receives the message T1:P1; it places it on its request queue and sends a timestamped acknowledgment message T5:M to P1. (P2's queue: T2:P2, T1:P1)
At time T6, P1 receives the message T2:P2; it places it on its request queue and sends a timestamped acknowledgment message T6:M to P2. (P1's queue: T1:P1, T2:P2)
At time T7, P0 releases the resource and sends a timestamped message T7:M to P1 and P2. By condition (ii) of rule 5, P1 has received a message from every other process timestamped later than T1, but P2 does not yet satisfy this condition. Therefore, P1 is granted the resource.
23
Anomalous Behavior
Consider a nationwide system of interconnected computers. Suppose a person issues a request A on computer A, and then telephones a friend in another city to have him issue a request B on a different computer B. It is quite possible for request B to receive a lower timestamp and be ordered before request A.
Relevant external events may influence the ordering of system events.
Two possible solutions:
1. The user gives request B a timestamp TB later than TA.
2. Construct a system of clocks which satisfies the Strong Clock Condition: for any events a, b in φ, if a → b then C(a) < C(b), where φ includes the relevant external events.
24
Physical Clocks
We can construct a system of physical clocks which, running quite independently of one another, will satisfy the Strong Clock Condition. We can then use physical clocks to eliminate anomalous behavior.
Properties:
1. The clock runs continuously.
2. The clock runs at approximately the correct rate, i.e. dCi(t)/dt ≈ 1 for all t.
PC1. There exists a constant ĸ << 1 such that for all i: |dCi(t)/dt − 1| < ĸ.
3. The clocks must be synchronized so that Ci(t) ≈ Cj(t) for all i, j, and t.
PC2. For all i, j: |Ci(t) − Cj(t)| < ε (where ε is a sufficiently small constant).
25
Physical Clocks
Clock synchronization algorithm:
Let m be a message which is sent at physical time t and received at time t'. We define νm = t' − t to be the total delay of the message m. This delay is not known to the process which receives m. However, we assume that the receiving process knows some minimum delay μm with 0 ≤ μm ≤ νm. We call ξm = νm − μm the unpredictable delay of the message.
The physical clocks need to obey:
IR1'. For each i, if Pi does not receive a message at physical time t, then Ci is differentiable at t and dCi(t)/dt > 0.
IR2'. (a) If Pi sends a message m at physical time t, then m contains a timestamp Tm = Ci(t). (b) Upon receiving a message m at time t', process Pj sets Cj(t') equal to max(Cj(t' − 0), Tm + μm).
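Rule IR2'(b) is a one-line update. A minimal sketch with integer clock readings (function name and values are illustrative):

```python
def on_receive(cj_now, tm, mu_m):
    """IR2'(b): on receiving m at time t', the receiver's clock becomes
    max(Cj(t' - 0), Tm + mu_m), where mu_m is the known minimum delay.
    The sender's clock read Tm at send time, so the true time of receipt
    is at least Tm + mu_m; the receiver may never move backward."""
    return max(cj_now, tm + mu_m)

# Receiver reads 100; message stamped 98 with minimum delay 5:
# the receiver must advance to 103.
assert on_receive(100, 98, 5) == 103
# Receiver reads 100; message stamped 90 with minimum delay 5:
# the local reading already dominates, so the clock is unchanged.
assert on_receive(100, 90, 5) == 100
```

Only the unpredictable part ξm of the delay can push clocks apart; the paper's analysis bounds the resulting skew ε in terms of ĸ, ξ, and the communication topology.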
26
Conclusion
The concept of "happened before" defines an invariant partial ordering of the events in a distributed system.
An algorithm was given for extending that partial ordering to a somewhat arbitrary total ordering.
The total ordering was used to solve a simple synchronization problem.
A total ordering can produce anomalous behavior, but this can be prevented by the use of properly synchronized physical clocks.
27
Happy Thanksgiving
28
Questions? Please raise your hand!