Distributed Algorithms – 2g1513
Lecture 1b – by Ali Ghodsi
Models of distributed systems (continued) and logical time in distributed systems

2 State transition system - example

Example algorithm:

X := 0;
while (X < 3) do
  X := X + 1;
endwhile
X := 1

Formally:
- States: {X0, X1, X2}
- Possible transitions: {X0 → X1, X1 → X2, X2 → X1}
- Start states: {X0}

(Graph: start → X0 → X1 → X2, with an edge from X2 back to X1.)

3 State transition system - formally

An STS is formally described as the triple (C, →, I) such that:
1. C is a set of states.
2. → is a subset of C × C, describing the possible transitions (→ ⊆ C × C).
3. I is a subset of C, describing the initial states (I ⊆ C).

Note that the system may have several transitions from one state, e.g. → = {X2 → X1, X2 → X0}.
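
As a quick illustration (a minimal sketch, not from the slides), the triple can be written out directly in Python, reusing the state names from the example slide:

```python
# A state transition system (STS) as the triple (C, ->, I).
C = {"X0", "X1", "X2"}                          # set of states
T = {("X0", "X1"), ("X1", "X2"), ("X2", "X1")}  # transitions, a subset of C x C
I = {"X0"}                                      # initial states, a subset of C

def successors(state):
    """States reachable from `state` in one transition."""
    return {d for (c, d) in T if c == state}

# A state may have several outgoing transitions, e.g. with
# T = {("X2", "X1"), ("X2", "X0")} we would get successors("X2") == {"X1", "X0"}.
print(successors("X0"))  # {'X1'}
```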

4 Local Algorithms

The local algorithm is modeled as an STS in which the following three kinds of events happen:
- A processor changes state from one state to another (internal event).
- A processor changes state from one state to another and sends a message into the network, destined to another processor (send event).
- A processor receives a message destined to it and changes state from one state to another (receive event).

5 Model of the distributed system

Based on the STSs of the local algorithms, we can now define for the whole distributed system:
- Its configurations: the state of all processes together with the state of the network.
- Its initial configurations: all configurations in which every local algorithm is in a start state and the network is empty.
- Its transitions: each local event (send, receive, internal) becomes a configuration transition of the distributed system.

6 Execution Example 1/9

Let's build a simple distributed application in which a client sends a ping and receives a pong from a server. This is repeated indefinitely.

(Diagram: client p1 sends to server p2.)

7 Execution Example 2/9

M = {ping_p2, pong_p1}

p1 (client): (Z_p1, I_p1, ⊢I_p1, ⊢S_p1, ⊢R_p1)
- Z_p1 = {c_init, c_sent}
- I_p1 = {c_init}
- ⊢I_p1 = ∅
- ⊢S_p1 = {(c_init, ping_p2, c_sent)}
- ⊢R_p1 = {(c_sent, pong_p1, c_init)}

p2 (server): (Z_p2, I_p2, ⊢I_p2, ⊢S_p2, ⊢R_p2)
- Z_p2 = {s_init, s_rec}
- I_p2 = {s_init}
- ⊢I_p2 = ∅
- ⊢S_p2 = {(s_rec, pong_p1, s_init)}
- ⊢R_p2 = {(s_init, ping_p2, s_rec)}
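
To make the notation concrete, here is a hedged Python transcription of the two local algorithms above; the dictionary layout and string names are my own encoding of the slide's tuples:

```python
# Client p1: (Z, I, |-I, |-S, |-R); the send/receive relations hold
# (state before, message, state after) triples, as on the slide.
p1 = {
    "Z": {"c_init", "c_sent"},
    "I": {"c_init"},
    "internal": set(),                            # |-I is empty
    "send": {("c_init", "ping_p2", "c_sent")},    # send ping, move to c_sent
    "recv": {("c_sent", "pong_p1", "c_init")},    # receive pong, move back
}

# Server p2: the mirror image of the client.
p2 = {
    "Z": {"s_init", "s_rec"},
    "I": {"s_init"},
    "internal": set(),
    "send": {("s_rec", "pong_p1", "s_init")},     # send pong, move back
    "recv": {("s_init", "ping_p2", "s_rec")},     # receive ping, move to s_rec
}
```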

8 Execution Example 3/9

C = {(c_init, s_init, ∅), (c_init, s_rec, ∅), (c_sent, s_init, ∅), (c_sent, s_rec, ∅),
     (c_init, s_init, {ping_p2}), (c_init, s_rec, {ping_p2}), (c_sent, s_init, {ping_p2}), (c_sent, s_rec, {ping_p2}),
     (c_init, s_init, {pong_p1}), (c_init, s_rec, {pong_p1}), (c_sent, s_init, {pong_p1}), (c_sent, s_rec, {pong_p1}), …}

I = {(c_init, s_init, ∅)}

→ = {(c_init, s_init, ∅) → (c_sent, s_init, {ping_p2}),
     (c_sent, s_init, {ping_p2}) → (c_sent, s_rec, ∅),
     (c_sent, s_rec, ∅) → (c_sent, s_init, {pong_p1}),
     (c_sent, s_init, {pong_p1}) → (c_init, s_init, ∅)}

9 Execution Example 4/9

The execution starts in the initial configuration and repeatedly applies the transitions:

E = ((c_init, s_init, ∅), (c_sent, s_init, {ping_p2}), (c_sent, s_rec, ∅), (c_sent, s_init, {pong_p1}), (c_init, s_init, ∅), (c_sent, s_init, {ping_p2}), …)

p1 state: c_init, p2 state: s_init.

10 Execution Example 5/9

(Same I, →, and E as on the previous slide.) Current configuration: (c_init, s_init, ∅); p1 state: c_init, p2 state: s_init.

11 Execution Example 6/9

p1 performs its send event and moves to c_sent; p2 stays in s_init; the network now holds ping_p2. Configuration: (c_sent, s_init, {ping_p2}).

12 Execution Example 7/9

p2 performs its receive event on ping_p2 and moves to s_rec; p1 stays in c_sent. Configuration: (c_sent, s_rec, ∅).

13 Execution Example 8/9

p2 performs its send event and moves back to s_init; p1 stays in c_sent; the network now holds pong_p1. Configuration: (c_sent, s_init, {pong_p1}).

14 Execution Example 9/9

p1 performs its receive event on pong_p1 and moves back to c_init; p2 stays in s_init. Configuration: (c_init, s_init, ∅), and the cycle can repeat.

15 Applicable events

- Any internal event e = (c, d) ∈ ⊢I_pi is said to be applicable in a configuration C = (c_p1, …, c_pi, …, c_pn, M) if c_pi = c. If event e is applied, we get e(C) = (c_p1, …, d, …, c_pn, M).
- Any send event e = (c, m, d) ∈ ⊢S_pi is said to be applicable in a configuration C = (c_p1, …, c_pi, …, c_pn, M) if c_pi = c. If event e is applied, we get e(C) = (c_p1, …, d, …, c_pn, M ∪ {m}).
- Any receive event e = (c, m, d) ∈ ⊢R_pi is said to be applicable in a configuration C = (c_p1, …, c_pi, …, c_pn, M) if c_pi = c and m ∈ M. If event e is applied, we get e(C) = (c_p1, …, d, …, c_pn, M − {m}).
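
A small sketch of how these rules might be executed; the encoding of a configuration as (tuple of states, Counter for the message multiset M) and of events as tagged tuples is my own:

```python
from collections import Counter

def apply_event(config, event):
    """Apply an event to a configuration, if applicable.

    config: (states, msgs) -- a tuple of per-process states and a Counter
            holding the multiset M of messages in the network.
    event:  ("internal", i, c, d), ("send", i, c, m, d) or ("recv", i, c, m, d).
    """
    states, msgs = config
    kind, i = event[0], event[1]
    if kind == "internal":
        _, _, c, d = event
        assert states[i] == c, "event not applicable"
        return (states[:i] + (d,) + states[i+1:], msgs)
    if kind == "send":
        _, _, c, m, d = event
        assert states[i] == c, "event not applicable"
        return (states[:i] + (d,) + states[i+1:], msgs + Counter([m]))
    _, _, c, m, d = event                      # receive event
    assert states[i] == c and msgs[m] > 0, "event not applicable"
    return (states[:i] + (d,) + states[i+1:], msgs - Counter([m]))
```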

16 Order of events

The following theorem shows an important result: the order in which two applicable events are executed is not important!

Theorem: Let e_p and e_q be two events on two different processors p and q which are both applicable in configuration γ. Then e_p can be applied in e_q(γ), and e_q can be applied in e_p(γ). Moreover, e_p(e_q(γ)) = e_q(e_p(γ)).

17 Order of events

To avoid a proof by cases (3 × 3 = 9 cases), we represent all three event types in one abstraction. We let the quadruple (c, X, Y, d) represent any event:
- c is the state of the processor before the event
- X is the set of messages received by the event
- Y is the set of messages sent by the event
- d is the state of the processor after the event

Examples:
- (c_init, ∅, {ping}, c_sent) represents a send event
- (s_init, {pong}, ∅, s_rec) represents a receive event
- (c1, ∅, ∅, c2) represents an internal event

Any such event e_p = (c, X, Y, d) at p is applicable in a configuration γ = (…, c_p, …, M) if and only if c_p = c and X ⊆ M.

18 Order of events

Proof:
- Let e_p = (c, X, Y, d), e_q = (e, Z, W, f), and γ = (…, c_p, …, c_q, …, M).
- As both e_p and e_q are applicable in γ, we know that c_p = c, c_q = e, X ⊆ M, and Z ⊆ M.
- e_q(γ) = (…, c_p, …, f, …, (M − Z) ∪ W): c_p is untouched and c_p = c, and X ⊆ (M − Z) ∪ W since X ∩ Z = ∅; hence e_p is applicable in e_q(γ).
- A similar argument shows that e_q is applicable in e_p(γ).

19 Order of events

Proof (continued): we prove e_p(e_q(γ)) = e_q(e_p(γ)):
- e_q(γ) = (…, c_p, …, f, …, (M − Z) ∪ W)
- e_p(e_q(γ)) = (…, d, …, f, …, (((M − Z) ∪ W) − X) ∪ Y)
- e_p(γ) = (…, d, …, c_q, …, (M − X) ∪ Y)
- e_q(e_p(γ)) = (…, d, …, f, …, (((M − X) ∪ Y) − Z) ∪ W)
- (((M − Z) ∪ W) − X) ∪ Y = (((M − X) ∪ Y) − Z) ∪ W, because X ∩ Z = ∅, W ∩ X = ∅, and Y ∩ Z = ∅: both the LHS and the RHS can be transformed to ((M ∪ W ∪ Y) − Z) − X. ∎
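
Using the apply_event sketch from slide 15, the theorem can be spot-checked on the ping/pong system; the particular configuration and events below are illustrative, not from the slides:

```python
from collections import Counter  # apply_event as defined in the earlier sketch

# gamma: p1 in c_init, p2 in s_rec, empty network -- both events applicable.
gamma = (("c_init", "s_rec"), Counter())
e_p = ("send", 0, "c_init", "ping_p2", "c_sent")   # p1 sends a ping
e_q = ("send", 1, "s_rec", "pong_p1", "s_init")    # p2 sends a pong

# Either order yields the same final configuration.
assert apply_event(apply_event(gamma, e_p), e_q) == \
       apply_event(apply_event(gamma, e_q), e_p)
```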

20 Exact order does not always matter

In two cases the theorem does not apply:
- If p = q, i.e., when the two events occur on the same process: they would not both be applicable if they were executed out of order.
- If one is a send event and the other is the corresponding receive event: they cannot both be applicable.

In such cases, we say that the two events are causally related!

21 Causal Order

The relation ≤_H on the events of an execution, called the causal order, is defined as the smallest relation such that:
- If e occurs before f on the same process, then e ≤_H f.
- If s is a send event and r its corresponding receive event, then s ≤_H r.
- ≤_H is transitive: if a ≤_H b and b ≤_H c, then a ≤_H c.
- ≤_H is reflexive: a ≤_H a for any event a in the execution.

Two events a and b are concurrent iff neither a ≤_H b nor b ≤_H a holds.
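
The definition above can be turned into a small checker. This is a sketch under my own encoding (event ids, per-process positions, and an explicit set of send/receive matches are invented for illustration); it builds the smallest relation containing process order and send/receive pairs, reflexively and transitively closed as the slide requires:

```python
def causal_order(events, matches):
    """Smallest reflexive, transitive relation containing process order
    and send/receive pairs.

    events:  event id -> (process, position of the event on that process)
    matches: set of (send id, receive id) pairs
    """
    ids = list(events)
    # Process order (includes reflexivity, since position <= position).
    rel = {(a, b) for a in ids for b in ids
           if events[a][0] == events[b][0] and events[a][1] <= events[b][1]}
    rel |= set(matches)                       # s <=H r for matched send/receive
    changed = True
    while changed:                            # transitive closure
        changed = False
        for (a, b) in list(rel):
            for (c, d) in list(rel):
                if b == c and (a, d) not in rel:
                    rel.add((a, d))
                    changed = True
    return rel

ev = {"a": ("p1", 0), "b": ("p1", 1), "c": ("p2", 0)}
rel = causal_order(ev, {("a", "c")})                 # a is a send received at c
assert ("a", "c") in rel and ("b", "c") not in rel   # b and c are concurrent
```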

22 Example of Causally Related events

(Time-space diagram over processes p1, p2, p3, highlighting pairs of causally related events and pairs of concurrent events.)

23 Equivalence of Executions: Computations

Computation Theorem: Let E be an execution E = (γ_1, γ_2, γ_3, …) and let V = (e_1, e_2, e_3, …) be the sequence of events associated with it, i.e., e_k(γ_k) = γ_{k+1} for all k ≥ 1. A permutation P of V that preserves the causal order and starts in γ_1 defines a unique execution with the same number of events, and, if the executions are finite, the execution defined by P ends in the same final configuration as E.

P = (f_1, f_2, f_3, …) preserves the causal order of V when, for every pair of events, f_i ≤_H f_j implies i < j.

24 Equivalence of executions

If two executions F and E consist of the same collection of events and preserve their causal order, F and E are said to be equivalent executions, written F ~ E. F and E may be different permutations of the same events, as long as causality is preserved!

25 Computations

Equivalent executions form equivalence classes in which every execution is equivalent to every other execution in the class. That is, the following always holds for executions:
- ~ is reflexive: a ~ a for any execution a.
- ~ is symmetric: if a ~ b, then b ~ a, for any executions a and b.
- ~ is transitive: if a ~ b and b ~ c, then a ~ c, for any executions a, b, c.

The equivalence classes are called computations.

26 Example of equivalent executions

(Three time-space diagrams over processes p1, p2, p3; events of the same color are causally related.)

All three executions are part of the same computation, as causality is preserved.

27 Two important results (1)

The computation theorem gives two important results.

Result 1: There is no distributed algorithm that can observe the order of the sequence of events (i.e., that can "see" the time-space diagram).

Proof:
- Assume such an algorithm exists, and assume process p knows the order in the final configuration.
- Run two different executions of the algorithm that preserve causality but order the events differently.
- According to the computation theorem, their final configurations are the same; but then the algorithm cannot have observed the actual order of events, as the orders differ. ∎

28 Two important results (2)

Result 2: The computation theorem does not hold if the model is extended so that each process can read a local hardware clock.

Proof:
- Assume a distributed algorithm in which each process reads its local clock each time a local event occurs.
- The final configurations of different causality-preserving executions would then contain different clock values, which contradicts the computation theorem. ∎

29 Lamport Logical Clock (informal)

Each process has a local logical clock, kept in a local variable θ_p for process p, which is initially 0. The logical clock is updated for each local event on process p such that:
- If an internal or send event occurs: θ_p := θ_p + 1
- If a receive event happens on a message from process q: θ_p := max(θ_p, θ_q) + 1, where θ_q is the clock value attached to the message

Lamport logical clocks guarantee that if a ≤_H b, then θ_p ≤ θ_q, where a and b happen on p and q and θ_p, θ_q are the clock values assigned to them.
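
A minimal sketch of these update rules in Python, assuming (as is usual, though the slide leaves it implicit) that the sender's clock value travels with the message:

```python
class LamportClock:
    """Per-process logical clock theta_p, following the slide's rules."""
    def __init__(self):
        self.time = 0                      # initially 0

    def tick(self):
        """Internal or send event: theta_p = theta_p + 1."""
        self.time += 1
        return self.time                   # timestamp to attach if sending

    def receive(self, msg_time):
        """Receive event: theta_p = max(theta_p, theta_q) + 1."""
        self.time = max(self.time, msg_time) + 1
        return self.time

p, q = LamportClock(), LamportClock()
t = p.tick()            # p sends a message carrying timestamp t = 1
q.tick(); q.tick()      # q is already at 2 when the message arrives
q.receive(t)            # q jumps to max(2, 1) + 1 = 3
assert q.time == 3
```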

30 Example of equivalent executions

(Time-space diagram over processes p1, p2, p3.)

31 Vector Timestamps: useful implication

Each process p keeps a vector v_p of n positions for a system with n processes; initially v_p[p] = 1 and v_p[i] = 0 for all other i. The vector is updated for each local event on process p such that:
- For any event: v_p[p] := v_p[p] + 1
- If a receive event happens on a message from process q: v_p[x] := max(v_p[x], v_q[x]) for 1 ≤ x ≤ n, where v_q is the vector attached to the message

We say v_p ≤ v_q iff v_p[x] ≤ v_q[x] for 1 ≤ x ≤ n. If neither v_p ≤ v_q nor v_q ≤ v_p, then v_p is concurrent with v_q.

Vector timestamps guarantee that v_p ≤ v_q iff a ≤_H b, where a and b happen on p and q.
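
A sketch of these rules in Python; following the slide, v_p[p] starts at 1, every event increments the process's own entry, and a receive first takes the componentwise max with the vector v_q carried by the message (the class layout is my own):

```python
class VectorClock:
    """Vector timestamp for process number `pid` in a system of n processes."""
    def __init__(self, pid, n):
        self.pid = pid
        self.v = [0] * n
        self.v[pid] = 1                     # the slide's initialization

    def tick(self):
        """Any local event: v_p[p] = v_p[p] + 1."""
        self.v[self.pid] += 1
        return list(self.v)                 # copy to attach to an outgoing message

    def receive(self, msg_v):
        """Receive: v_p[x] = max(v_p[x], v_q[x]) for all x, then count the event."""
        self.v = [max(a, b) for a, b in zip(self.v, msg_v)]
        self.v[self.pid] += 1

def leq(a, b):
    """a <= b iff a[x] <= b[x] for every position x."""
    return all(x <= y for x, y in zip(a, b))

def concurrent(a, b):
    return not leq(a, b) and not leq(b, a)
```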

32 Example of Vector Timestamps

(Time-space diagram over p1, p2, p3: p1's events carry [1,0,0], [2,0,0], [3,0,0], [4,0,0], [5,0,0]; p2's carry [0,1,0], [4,2,0], [4,3,0]; p3's carry [0,0,1], [0,0,2], [4,3,3].)

This is great! But it cannot be done with vectors smaller than size n, for n processes.

33 Useful Scenario: what is most recent?

(Time-space diagram over p1, p2, p3: p1's events carry [1,0,0] through [5,0,0]; p2's carry [0,1,0], [5,2,0], [5,3,0]; p3's carry [0,0,1], [5,0,2], [5,0,3].)

p2 examines the two messages it receives, one from p1 with timestamp [4,0,0] and one from p3 with timestamp [5,0,3], and deduces that the information from p1 is the older ([4,0,0] ≤ [5,0,3]).
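
Replaying the comparison with the leq/concurrent helpers from the sketch above (the timestamps are the slide's):

```python
from_p1 = [4, 0, 0]
from_p3 = [5, 0, 3]
assert leq(from_p1, from_p3)            # [4,0,0] <= [5,0,3]: p1's info is older
assert not concurrent(from_p1, from_p3)
```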

34 Summary

- The total order of events in an execution is not always important: two different executions can yield the same "result".
- Causal order does matter:
  - the order of two events on the same process,
  - the order of two events where one is a send and the other the corresponding receive,
  - the order of two events that are transitively related according to the above.
- Executions that are permutations of each other's events such that causality is preserved are called equivalent executions.
- Equivalent executions form equivalence classes called computations; every execution in a computation is equivalent to every other execution in that computation.
- Vector timestamps can be used to determine causality; this cannot be done with vectors smaller than size n, for n processes.