1 For Distributed Algorithms 2014 Presentation by Ziv Ronen Based on “Impossibility of Distributed Consensus with One Faulty Process” by Michael J. Fischer, Nancy A. Lynch, and Michael S. Paterson

2 Main Menu The problem Why the problem is unsolvable If time allows: how to solve the problem with initially dead processors

3 Consensus in the real world Our mission Model: – Objectives – Network – Possible faults 3

4 Consensus in the real world There are many cases in which we want several processors to agree on an action. Usually, it is more important that all processors agree on the same action than which action is chosen. For example, in a replicated database we want every transaction to be committed by all processors or by none of them.

5 Consensus in the real world – Cont. Such agreement is trivial in a fault-free network. – For instance, we can choose a leader that tells all the others what to do. However, real-world processors are subject to failures: – They might stop working (good case). – They might go haywire (bad case). – They might become malevolent (worse case).

6 Our mission We want an algorithm that, in every network and for every input, chooses a single action to perform. However, there must be at least two possible outcomes, and both of them must actually be reachable.

7 Our Model – objectives We will work on a simplified problem in which the processors only need to agree on a number that can be either 1 (commit) or 0 (discard). Initially, each processor chooses its input value arbitrarily (simulating a decision based on the system condition): – 1 if it can commit, 0 if it can’t. Each processor needs to decide on a value; once a decision is made, it cannot be undone. In the end, all the processors must agree on the same value, meaning they all decide 1 or they all decide 0.

8 Our Model – objectives (cont.) We require that the algorithm can return both 1 and 0 (for different cases). – So “always discard” or “always commit” is not an acceptable policy for our database.

9 Our Model – Network We assume a fully asynchronous network: – If we send a message to a non-faulty processor, it will arrive after a finite but unbounded time. We also assume the network is fully connected. For generality, we further assume full knowledge of the network, so any other topology can be simulated.

10 Why asynchronous? [Diagram: processors P1 and P2 exchanging message M2, in the synchronous model (with a shared clock tick) and in the asynchronous model]

11 Why asynchronous? But if one fails… [Diagram: the same exchange, where P2 is faulty]

12 Our Model – Possible faults We will assume that a processor can only fail by stopping entirely, and that at most a single processor malfunctions in any given run. However, we also assume that: – Other processors can’t tell that a processor has stopped working. – A processor can fail at any time.

13 Our Model – more formally N≥2 processors. For each processor: – Input value x_p ∈ {0,1}, part of the problem input. – Output value y_p ∈ {0,1,b}, initially b, which can change only once. – Infinite storage. Messages are of the form (p,m), where p is the target processor and m is the message content. Any processor can send such a message to any other processor. We assume that every message stays in a “message buffer” between the time it is sent and the time it is received. – Initially, the buffer is empty. Goal: at the end, for every p1,p2: y_p1 = y_p2 ≠ b
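The formal model above can be sketched in code. A minimal Python sketch follows; the class and field names are my own, not from the paper: each processor carries an input bit x, an output y that starts at b, and unbounded local state, while undelivered messages wait in a shared buffer.

```python
from dataclasses import dataclass, field

# Minimal sketch of the model (names are assumptions, not from the paper).
@dataclass
class Processor:
    pid: int
    x: int                 # input value in {0, 1}
    y: object = 'b'        # output value: 'b' until the processor decides
    state: dict = field(default_factory=dict)  # "infinite" local storage

@dataclass
class Configuration:
    procs: dict                                 # pid -> Processor
    buffer: list = field(default_factory=list)  # multiset of (target, msg)

def decided(config):
    """The goal: every y_p equals the same value != b."""
    outs = {p.y for p in config.procs.values()}
    return len(outs) == 1 and 'b' not in outs

# The 4-processor initial state from the example slides: nobody has decided.
c = Configuration({i: Processor(i, x) for i, x in [(1, 1), (2, 0), (3, 1), (4, 0)]})
assert not decided(c)
```

The example slides that follow are exactly such configurations: the initial state has an empty buffer and all outputs at b, and the final state is one where `decided` holds.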

14 Our model – example, initial state 1 X1=1 Y1=b 2 X2=0 Y2=b 3 X3=1 Y3=b 4 X4=0 Y4=b Messages buffer 14

15 Our model – example, different state 1 X1=1 Y1=b 2 X2=0 Y2=0 3 X3=1 Y3=0 4 X4=0 Y4=b Messages buffer 2,m1 4,m2 4,m3 2,m2 2,m3 15

16 Our model – example, final state 1 X1=1 Y1=0 2 X2=0 Y2=0 3 X3=1 Y3=0 4 X4=0 Y4=0 Messages buffer 2,m1 4,m2 2,m3 16

17 Intuition Proof – Definitions – Lemma 1 – Lemma 2 – Lemma 3 17

18 Intuition Let us show the intuition for why this is an impossible task. I will demonstrate with the problem of database consensus: – All databases should output 1 if all working databases have input value 1. – All databases should output 0 if at least one working database has input value 0. – Here, “working” means not failing at the beginning of the algorithm.

19 Initial state We choose an initial state from which both results are possible. In our case, if processor 1 fails during the algorithm, the result might be 1; otherwise, the result should be 0. 1 X1=0 Y1=b 2 X2=1 Y2=b 3 X3=1 Y3=b 4 X4=1 Y4=b

20 Case 1: If processor 1 sends its first message, all processors learn that it can’t commit, and the algorithm should decide 0. 1 X1=0 Y1=b 2 X2=1 Y2=b 3 X3=1 Y3=b 4 X4=1 Y4=b “I failed to commit”

21 Case 2: If processor 1 fails before sending this message, the algorithm must decide without it. Since all the other processors can commit, the algorithm should decide 1. 1 X1=0 Y1=b 2 X2=1 Y2=b 3 X3=1 Y3=b 4 X4=1 Y4=b

22 Quasi failure We say that a processor has “quasi-failed” if: – It may be alive or dead. – If it is alive, it will execute its next step only after the algorithm has “finished” without it. 1 X1=0 Y1=b

23 Quasi failure – Intuition [Diagram: processor 1 shown in superposed alive/dead states — Schrödinger’s cat vs. processor]

24 Quasi failure – our example If processor 1 quasi-fails, the algorithm has 3 choices: 1 X1=0 Y1=b 2 X2=1 Y2=b 3 X3=1 Y3=b 4 X4=1 Y4=b

25 Quasi failure choices (1/3) Decide 0. In this case, if processor 1 actually failed, the result is wrong! 1 X1=0 Y1=b 2 X2=1 Y2=b 3 X3=1 Y3=b 4 X4=1 Y4=b

26 Quasi failure choices (2/3) Decide 1. In this case, if the processor wakes up, the result is wrong! 1 X1=0 Y1=b 2 X2=1 Y2=b 3 X3=1 Y3=b 4 X4=1 Y4=b

27 Quasi failure choices (3/3) Never decide. In this case, if the processor actually failed, the algorithm never decides. 1 X1=0 Y1=b 2 X2=1 Y2=b 3 X3=1 Y3=b 4 X4=1 Y4=b

28 Intuition – summary There is an initial state where both answers are possible (Lemma 2). There is an event at a specific processor (in our case, processor 1 waking up and sending its message) whose occurrence, no matter when it happens (Lemma 1), determines the outcome. If a processor quasi-fails, we can’t decide (because the answer depends on whether it actually failed, and we can’t know that). If we don’t decide, we reach another such state (Lemma 3) and are stuck forever.

29 Intuition – summary(cont.) Remember that in the example, we forced them to agree according to some policy. In the real problem (and in the following proof) we just need them to agree on the same value, no matter which. 29

30 Proof – definitions (1/6) Configuration: the combination of the internal state (input, output, memory) of each processor and the messages in the buffer. Step: an action of one processor. For processor p, a step consists of: – Trying to receive a message (removing it from the message buffer): on success, p receives (p,m); on failure, p receives the null event (p,∅). – Performing computation, possibly sending any finite number of messages.

31 Configuration and step 1 X1=1 Y1=b 2 X2=0 Y2=b Y2=1 34 Messages buffer 2,m1 31 Step 1 Step 2

32 Proof – definitions (2/6) Event e=(p,m): the receipt of message m by p. – Since our processors are deterministic, the change of configuration caused by a step depends only on the received message. – The null event e=(p,∅) is always possible for any p. e(C): the configuration reached from C by applying the event e. Schedule: a finite or infinite sequence σ of events. – σ(C): the final configuration reached from initial configuration C.
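A toy rendering of these definitions; the dict-based configuration and the pluggable deterministic `step` function are assumptions made for illustration only. e(C) delivers one message and runs the target's deterministic step; σ(C) applies the events in order.

```python
# Toy sketch of e(C) and sigma(C); the representation is an assumption.
def apply_event(config, event, step):
    """e(C): deliver event (p, m) to p, then run p's deterministic step."""
    p, m = event
    if m is not None:                    # (p, None) plays the null event
        config['buffer'].remove((p, m))  # receiving removes it from the buffer
    config['buffer'].extend(step(config['states'], p, m))  # finitely many sends
    return config

def apply_schedule(config, schedule, step):
    """sigma(C): the configuration reached by applying events in order."""
    for e in schedule:
        config = apply_event(config, e, step)
    return config

# Toy step function: remember the last message received, send nothing.
def step(states, p, m):
    states[p] = m
    return []

c = {'states': {1: None, 2: None}, 'buffer': [(2, 'm1')]}
c = apply_schedule(c, [(1, None), (2, 'm1')], step)
assert c['states'][2] == 'm1' and c['buffer'] == []
```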

33 Event and sequences 1 X1=1 Y1=b 2 X2=0 Y2=b Y2=1 34 Messages buffer 2,m1 (2,m1) (1,∅) σ = ((1,∅),(2,m1))

34 Proof – definitions (3/6) Reachable: configuration C is reachable from C’ if there exists a schedule σ with σ(C’) = C. Accessible: configuration C is accessible if there exists an initial configuration C’ such that C is reachable from C’. DV(C): the set {v | v ≠ b and ∃p: v = y_p}, i.e. the values already decided by some processor. A protocol is partially correct if: – For every accessible configuration C, |DV(C)| ≤ 1. – There exist two accessible configurations C, C’ with DV(C) = {0} and DV(C’) = {1}.

35 Partial correctness 1 X1=1 Y1=b Y1=0 2 X2=0 Y2=b Y2=1 34 Messages buffer 2,m1 DV(C)={} DV(C)={0} DV(C)={0,1}

36 Proof – definitions (4/6) Nonfaulty: a processor is nonfaulty if it takes an infinite number of steps. Faulty: a processor that is not nonfaulty (it stops taking steps after some time). Admissible: a run is admissible if it contains at most one faulty processor and the message buffer is fair (every message in it is eventually received). Deciding: a run is deciding if eventually, for some processor p, y_p ≠ b. A protocol P is totally correct in spite of one fault if: – P is partially correct. – Every admissible run of P is a deciding run.

37 Main Theorem No consensus protocol is totally correct in spite of one fault. Assume the contrary: suppose a protocol P’ is totally correct in spite of one fault.

38 Lemma 1 For any two disjoint finite schedules σ1, σ2 and configuration C: σ1(σ2(C)) = σ2(σ1(C)). – Disjoint: involving different processors. Proof: – Follows from the system definition, since σ1 and σ2 don’t interact.
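Lemma 1 can be checked on a toy model (the list-append "step" below is an assumption): since each event reads and writes only its own processor's log, schedules over disjoint processor sets commute.

```python
import copy

# Toy check of Lemma 1: disjoint schedules commute.
def run(state, schedule):
    state = copy.deepcopy(state)
    for p, m in schedule:
        state[p].append(m)  # a step here just logs the delivered message
    return state

C = {1: [], 2: [], 3: []}
s1 = [(1, 'a'), (1, 'b')]   # touches only processor 1
s2 = [(3, 'c')]             # touches only processor 3
# sigma1(sigma2(C)) = sigma2(sigma1(C))
assert run(run(C, s1), s2) == run(run(C, s2), s1)
```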

39 Lemma 1 – visually 1 X1=1 Y1=b 2 X2=0 Y2=b Y2=1 34 Messages buffer 2,m1 1,m2 1,m3 2,m1 1,m2 1,m3 2,m1 4,m4 4,m5 39

40 Lemma 1 – visually (opposite order) 1 X1=1 Y1=b 2 X2=0 Y2=b Y2=1 34 Messages buffer 2,m1 1,m2 1,m3 2,m1 1,m2 1,m3 2,m1 4,m4 4,m5 40

41 Lemma 1 – visually Normal order:Opposite order: 41

42 Proof – definitions (5/6) Let FDV(C) be the union of DV(C’) over all C’ reachable from C. – If FDV(C) = {0,1}, C is bivalent. – If |FDV(C)| = 1, C is univalent. – If FDV(C) = {0}, C is 0-valent. – If FDV(C) = {1}, C is 1-valent. – P’ is totally correct, so FDV(C) ≠ ∅. Intuitively, FDV(C) is the set of decisions still possible from configuration C.
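FDV and the valency classes can be illustrated over an explicit toy graph of configurations (the graph and the DV labels below are invented for the example): FDV(C) is the union of DV over everything reachable from C.

```python
# FDV over an explicit toy configuration graph (invented for illustration).
# edges: config -> successor configs; dv: config -> set of decided values.
def fdv(c, edges, dv):
    seen, stack, out = set(), [c], set()
    while stack:
        cur = stack.pop()
        if cur in seen:
            continue
        seen.add(cur)
        out |= dv.get(cur, set())       # collect DV of every reachable config
        stack.extend(edges.get(cur, []))
    return out

edges = {'C': ['D0', 'D1'], 'D0': ['E0'], 'D1': ['E1']}
dv = {'E0': {0}, 'E1': {1}}
assert fdv('C', edges, dv) == {0, 1}   # C is bivalent
assert fdv('D0', edges, dv) == {0}     # D0 is 0-valent
```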

43 Lemma 2 Lemma: There is a bivalent initial configuration. 43

44 Lemma 2 – Proof (1/3) Assume otherwise. From partial correctness, P’ has both 0-valent and 1-valent initial configurations. Call two initial configurations adjacent if they differ only in a single processor’s input value. Any two initial configurations can be joined by a chain of adjacent configurations. Hence, there exist adjacent 0-valent and 1-valent initial configurations.
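The chain argument can be sketched directly (with toy valency labels assumed): along any chain of adjacent initial configurations that starts 0-valent and ends 1-valent, some adjacent pair must already differ in valency.

```python
# Sketch of the chain argument with toy valency labels.
def first_flip(valencies):
    for i in range(len(valencies) - 1):
        if valencies[i] != valencies[i + 1]:
            return i, i + 1            # an adjacent pair of differing valency
    raise ValueError("endpoints have the same valency")

# toy chain: flipping one input at a time from an all-0 to an all-1 case
assert first_flip([0, 0, 1, 1, 1]) == (1, 2)
```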

45 Lemma 2 – Proof (2/3) Reminder 1: there exist adjacent 0-valent and 1-valent initial configurations. – Call them C0 and C1 respectively. C0 and C1 are adjacent, so there is exactly one processor, p, whose input value differs between them. Reminder 2: P’ is totally correct in spite of one fault. – So P’ must reach a decision even if one processor fails.

46 Lemma 2 – Proof (3/3) Let R be an admissible run from C0 in which p fails. From total correctness in spite of one fault, R must be a deciding run. Let σ be the corresponding schedule. If 1 ∈ DV(σ(C0)), then 1 ∈ FDV(C0), but C0 is 0-valent. So 1 ∉ DV(σ(C0)), and therefore DV(σ(C0)) = {0}. However, since the only difference between C0 and C1 is p, and p fails, σ is also legal on C1 and σ(C0) ≈ σ(C1) (equal except at p, which failed and therefore didn’t decide). So DV(σ(C1)) = DV(σ(C0)) = {0}, hence 0 ∈ FDV(C1) — but C1 is 1-valent, a contradiction.

47 Proof – definitions (6/6) For any configuration C and event e=(p,m) such that e(C) is legal: Let Rn_e(C) be the set of all configurations reachable from C without applying e. – Note that e can be applied to any C’ ∈ Rn_e(C). Let eR(C) be {e(C’) | C’ ∈ Rn_e(C)}. Call two configurations C, C’ neighbors if one is reachable from the other in a single step. – Equivalently, there exists an event e such that C’=e(C) or C=e(C’).

48 Lemma 3 If C is bivalent, then for each applicable e=(p,m), eR(C) contains a bivalent configuration.

49 Lemma 3 – Proof (1/7) Assume that every D ∈ eR(C) is univalent. C is bivalent, and therefore, for each i ∈ {0,1} there exists an i-valent configuration E_i reachable from C. Let σ_i be a schedule that fulfills E_i = σ_i(C). Define the configuration F_i: – If e ∉ σ_i: F_i = e(E_i). – If e ∈ σ_i: then σ_i = σ_i’(e(σ_i’’)); let F_i = e(σ_i’’(C)). In both cases, F_i ∈ eR(C), and therefore F_i is i-valent – since either F_i is reachable from E_i or vice versa.

50 Lemma 3 – Proof (2/7) So eR(C) contains both 0-valent and 1-valent configurations. By an easy induction on the length of the schedule leading to F_i (applied when e(C) is j-valent for j≠i), there exist two neighbors C0, C1 such that D_i = e(C_i) is i-valent for i ∈ {0,1}. Without loss of generality, assume C1 = e’(C0).

51 “Easy Induction” (in pictures) for e(C) is 0-valent: case A (base) C=C 0 1-valent C1C1 e 0-valent e F1F1 51

52 “Easy Induction” (in pictures) for e(C) is 0-valent: case B (step) C 1- valent C1C1 e 0- valent e C0C0 e e C F1F1 Induction 52

53 “Easy Induction” (in pictures) for e(C) is 0-valent: case C (contradiction) C 0-valent e R bivalent e e e(R) ∈ eR(C), e(R) is bivalent, contradiction F1

54 Lemma 3 – Proof (3/7) Reminders: – e=(p,m). – C0, C1 are neighbors. – D_i = e(C_i) is i-valent for i ∈ {0,1}. – C1 = e’(C0). – Lemma 1: two disjoint schedules can be executed in either order.

55 Lemma 3 – Proof (4/7) Let e’=(p’,m’). – If p’≠p: the schedules σ=(e) and σ’=(e’) are disjoint, so by Lemma 1: D1 = e(e’(C0)) = σ(σ’(C0)) = σ’(σ(C0)) = e’(e(C0)) = e’(D0). But then 1 ∈ FDV(D0), a contradiction (D0 is 0-valent). – If p’=p: consider a finite, deciding run from C0 in which p takes no step. Since this mimics a single fault (quasi-failure) of p, and P’ is totally correct in spite of one fault, such a run exists.

56 If p’≠p: From “Impossibility of Distributed Consensus with One Faulty Process” By: Michael J. Fischer, Nancy A. Lynch, Michael S. Paterson 56

57 Lemma 3 – Proof (5/7) A deciding run where p quasi-fails: – Let σ be the corresponding schedule. – Let A = σ(C0). – A is a deciding configuration, meaning |DV(A)| > 0 and therefore |FDV(A)| = 1 (from the partial correctness of P’). – σ’=(e’,e) and σ’’=(e) are disjoint from σ, since σ contains no event of p (p quasi-failed), while σ’ and σ’’ contain only events of p (since p’=p).

58 Lemma 3 – Proof (6/7) The same deciding run where p quasi-fails: – σ is its schedule, and A = σ(C0). – A is a deciding configuration, meaning A is univalent (from the partial correctness of P’). – σ’=(e’,e) and σ’’=(e) are disjoint from σ, since σ contains no event of p (p quasi-failed), while σ’ and σ’’ contain only events of p (since p’=p).

59 Lemma 3 – Proof (7/7) From Lemma 1: e(A) = σ’’(σ(C0)) = σ(σ’’(C0)) = σ(e(C0)) = σ(D0), so 0 ∈ FDV(A). From Lemma 1: e(e’(A)) = σ’(σ(C0)) = σ(σ’(C0)) = σ(D1), so 1 ∈ FDV(A). But then A is bivalent — contradiction!

60 If p’=p: From “Impossibility of Distributed Consensus with One Faulty Process” By: Michael J. Fischer, Nancy A. Lynch, Michael S. Paterson 60

61 If p’=p: From “Impossibility of Distributed Consensus with One Faulty Process” By: Michael J. Fischer, Nancy A. Lynch, Michael S. Paterson From Lemma 1 61

62 If p’=p: From “Impossibility of Distributed Consensus with One Faulty Process” By: Michael J. Fischer, Nancy A. Lynch, Michael S. Paterson From Lemma 1 62

63 If p’=p: From “Impossibility of Distributed Consensus with One Faulty Process” By: Michael J. Fischer, Nancy A. Lynch, Michael S. Paterson Two configurations that are reachable from A

64 If p’=p: From “Impossibility of Distributed Consensus with One Faulty Process” By: Michael J. Fischer, Nancy A. Lynch, Michael S. Paterson A is bivalent, but σ is deciding

65 Proof – conclusion (1/4) To finish the proof, we now show an execution that never reaches a decision. Reminder: – A protocol P is totally correct in spite of one fault if: P is partially correct, and every admissible run of P is a deciding run. – A run is admissible if it contains at most one faulty processor and the message buffer is fair. – A run is deciding if eventually, for some processor p, y_p ≠ b (and therefore reaches a univalent configuration). We assume that P is partially correct and construct an admissible run that is not deciding.

66 Proof – conclusion (2/4) First, we define a way to ensure that the run is admissible. Keep the processors in a queue and define stages in the following way: – A stage ends when the first processor in the queue receives the earliest message sent to it (or a null message if none was sent). – At the end of a stage, that processor is removed from the head of the queue and enters at the tail. Since each stage ends with the next processor in the queue receiving its earliest pending message, infinitely many stages imply: – Infinitely many steps by each processor. – Every message is eventually received. Therefore, the run is admissible.
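The stage discipline above can be sketched as follows (the queue and buffer representations are assumptions): the head of the processor queue receives its earliest pending message, or a null message, and rotates to the tail, so every processor keeps stepping and every message is eventually delivered.

```python
from collections import deque

# Sketch of one stage of the admissible-run construction.
def next_stage(queue, buffer):
    p = queue.popleft()
    msg = None
    for i, (target, m) in enumerate(buffer):
        if target == p:        # earliest message addressed to p
            msg = m
            del buffer[i]
            break
    queue.append(p)            # back to the tail of the queue
    return p, msg

queue = deque([2, 3, 1, 4])
buffer = [(1, 'm1'), (2, 'm2'), (2, 'm3')]
assert next_stage(queue, buffer) == (2, 'm2')  # head receives its earliest message
assert next_stage(queue, buffer) == (3, None)  # nothing pending for 3: null event
assert list(queue) == [1, 4, 2, 3]
```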

67 The run will be admissible 12 34 Processor Queue 2 3 1 4 P4P3P2P1 m4m3m2m1 m5m10m7 m6m8 m9 The processor at queue position j will run after at most j stages (e.g. 3). The message at position j of a processor’s list will be received after at most N · j stages (e.g. 4 · 3 = 12).

68 The run will be admissible 1 12 34 Processor Queue 2 3 1 4 P4P3P2P1 m4m3m2m1 m5m10m7 m6m8 m9 stage 68

69 The run will be admissible 2 12 34 Processor Queue 3 1 4 2 P4P3P2P1 m4m3m10m1 m5m7 m6m8 m9 stage 69

70 The run will be admissible 3 12 34 Processor Queue 1 4 2 3 P4P3P2P1 m4m5m10m1 m6m7 m8 m9 stage 70

71 The run will be admissible 4 12 34 Processor Queue 4 2 3 1 P4P3P2P1 m4m5m10m7 m6m8 m9 stage 71

72 The run will be admissible 12 34 Processor Queue 2 3 1 4 P4P3P2P1 m5m10m7 m6m8 m9 72

73 Proof – conclusion (3/4) We assume that P is partially correct and construct an admissible run that is not deciding. – Now we make sure it is not deciding: 1. Start from a bivalent initial configuration C (Lemma 2). 2. Let e denote the earliest message addressed to the first processor in the processor queue (or the null event if there is none). There is a bivalent configuration C’ reachable from C by a schedule that ends with e (Lemma 3). 3. Set C = C’ (the stage ends). 4. Return to step 2.

74 Proof – conclusion (4/4) We assumed that P is partially correct and found an admissible run that is not deciding. – Since each stage ends in a bivalent configuration, the run never decides. Therefore, P is not totally correct! Q.E.D.

75 THE END! Questions? Appendix: Initially dead processors

76 Chain of adjacent configuration (d=4) 0-valent1-valent 1 X1=1 Y1=b 2 X2=1 Y2=b 3 X3=1 Y3=b 4 X4=0 Y4=b 1 X1=0 Y1=b 2 X2=0 Y2=b 3 X3=0 Y3=b 4 X4=1 Y4=b 76

77 Chain of adjacent configuration 0-valent1-valent 1 X1=1 Y1=b 2 X2=1 Y2=b 3 X3=1 Y3=b 4 X4=0 Y4=b 1 X1=0 Y1=b 2 X2=0 Y2=b 3 X3=0 Y3=b 4 X4=1 Y4=b ?-valent 1 X1=0 Y1=b 2 X2=1 Y2=b 3 X3=1 Y3=b 4 X4=0 Y4=b 77

78 Chain of adjacent configuration – case1: 1-valent 0-valent1-valent 1 X1=1 Y1=b 2 X2=1 Y2=b 3 X3=1 Y3=b 4 X4=0 Y4=b 1 X1=0 Y1=b 2 X2=0 Y2=b 3 X3=0 Y3=b 4 X4=1 Y4=b 1-valent 1 X1=0 Y1=b 2 X2=1 Y2=b 3 X3=1 Y3=b 4 X4=0 Y4=b 78

79 Chain of adjacent configuration case2: 0-valent 0-valent1-valent 1 X1=1 Y1=b 2 X2=1 Y2=b 3 X3=1 Y3=b 4 X4=0 Y4=b 1 X1=0 Y1=b 2 X2=0 Y2=b 3 X3=0 Y3=b 4 X4=1 Y4=b 0-valent 1 X1=0 Y1=b 2 X2=1 Y2=b 3 X3=1 Y3=b 4 X4=0 Y4=b 79

80 Chain of adjacent configuration (d=3) 0-valent1-valent 1 X1=0 Y1=b 2 X2=0 Y2=b 3 X3=0 Y3=b 4 X4=1 Y4=b 0-valent 1 X1=0 Y1=b 2 X2=1 Y2=b 3 X3=1 Y3=b 4 X4=0 Y4=b 80

81 Chain of adjacent configuration (d=3…2…1) 0-valent1-valent 1 X1=0 Y1=b 2 X2=0 Y2=b 3 X3=0 Y3=b 4 X4=1 Y4=b 0-valent 1 X1=0 Y1=b 2 X2=0 Y2=b 3 X3=0 Y3=b 4 X4=0 Y4=b 81

82 Chain of adjacent configuration (d=1) 0-valent1-valent 1 X1=0 Y1=b 2 X2=0 Y2=b 3 X3=0 Y3=b 4 X4=1 Y4=b 1 X1=0 Y1=b 2 X2=0 Y2=b 3 X3=0 Y3=b 4 X4=0 Y4=b 82

83 Initially dead processors Assume: – N processors. – At least L (a majority) of the processors are alive. – The processors don’t know which are alive. We want to reach a consensus.

84 Two-stage Algorithm – stage 1 In the first stage, we build a distributed directed graph G, in the following way: – Each processor has a corresponding node. – Each processor sends its id to every other processor. – Each processor waits for messages from L−1 other processors. – If a message from processor i reaches processor j (among the first L−1), the edge (i,j) is added to the graph.
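Stage 1 can be sketched centrally as follows. The `order` map, standing in for the nondeterministic arrival order of the id broadcasts, and the exact majority formula are assumptions for illustration.

```python
# Sketch of stage 1: build G from the first L-1 ids each processor hears.
def stage1(alive, n, order):
    """order[j] = sender ids in the order processor j hears them."""
    L = n // 2 + 1                   # majority threshold (an assumption)
    edges = set()
    for j in alive:
        for i in order[j][:L - 1]:   # each processor waits for L-1 ids
            edges.add((i, j))        # message i -> j becomes edge (i, j)
    return edges

# 4 processors, all alive, L = 3, so each waits for 2 ids.
order = {1: [2, 3], 2: [1, 4], 3: [4, 1], 4: [3, 2]}
edges = stage1({1, 2, 3, 4}, 4, order)
assert all(len([e for e in edges if e[1] == j]) == 2 for j in (1, 2, 3, 4))
```

Every node of the resulting G has in-degree exactly L−1, which is the property the clique argument below relies on.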

85 stage 1 – Example (2 processor view point) 1 2 3 4 5 6 7 85

86 stage 1 – Example (2 processor view point) 1 23 4 5 6 7 86

87 stage 1 – Example (2 processor view point) 1 23 4 5 6 7 87

88 stage 1 – Example (Global View) 1 23 4 5 6 7 88

89 Two-stage Algorithm – stage 2 In the second stage, we build the graph G+, the transitive closure of G, so that every processor knows enough of the graph. It is built in the following way: – Each processor sends to all the others its: 1. id, 2. initial value, 3. list of L−1 in-neighbors. – Each processor waits until it has received such a message from every one of its ancestors in G.
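A sketch of the closure computed in stage 2, done here centrally as plain reachability saturation; the edge-set representation is an assumption (in the algorithm itself, each processor assembles its ancestors' views from the stage-2 messages).

```python
# Sketch of G+ as the transitive closure of G.
def transitive_closure(edges, nodes):
    reach = {v: {w for (u, w) in edges if u == v} for v in nodes}
    changed = True
    while changed:                 # saturate: add two-hop reachability
        changed = False
        for v in nodes:
            new = set()
            for w in reach[v]:
                new |= reach[w]
            if not new <= reach[v]:
                reach[v] |= new
                changed = True
    return {(v, w) for v in nodes for w in reach[v]}

g = {(1, 2), (2, 3)}
assert (1, 3) in transitive_closure(g, {1, 2, 3})
```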

90 stage 2 – Example (processor 2 view point) 1 23 4 5 6 7 2, x2, (3,4,5) 90

91 stage 2 – Example (processor 2 view point) 1 23 4 5 6 7 3,x3,[2,4,5] 4,x4,[2,3,5] 5,x5,[2,4,6] 91

92 stage 2 – Example (processor 2 view point) 1 23 4 5 6 7 92

93 stage 2 – Example: transitive closure (processor 2 view point) 1 23 4 5 6 7 6,x6,[2,3,5] 93

94 stage 2 – Example: transitive closure (processor 2 view point) 1 23 4 5 6 7 94

95 stage 2 – Example: transitive closure (processor 2 view point) 1 23 4 5 6 7 95

96 stage 2 – Example: transitive closure (processor 2 view point) 1 23 4 5 6 7 96

97 stage 2 – Example: transitive closure (processor 7 view point) 1 23 4 5 6 7 97

98 Clique in G+ (1/2) Claim: G+ contains one, and only one, clique of size L or more that is not fully contained in another clique. Proof in the following steps. Contains at least one: – For each k < N: because the in-degree of each node in G is L−1, if G contains a path of length k, then either G contains a cycle of length at least L, or G contains a path of length k+1. – Corollary: if G contained a path of length N, it would contain a cycle of length at least L (because a longer path is impossible). – Corollary: G contains a cycle of length at least L. – Since G+ is the transitive closure of G, if G contains a cycle of length k, then G+ contains a clique of size k.
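The cycle step of this argument can be shown in miniature; the `pred` map below, which picks one in-neighbour per node, is an assumption. In a finite graph where every node has in-degree at least 1, walking backward along in-edges must revisit a node, closing a cycle.

```python
# Miniature of the cycle argument: backward walk along in-edges.
def find_cycle(pred, start):
    seen, v = [], start
    while v not in seen:
        seen.append(v)
        v = pred[v]                    # follow one incoming edge backward
    return seen[seen.index(v):]        # the nodes on the closed cycle

pred = {1: 2, 2: 3, 3: 4, 4: 2}        # each node names one in-neighbour
assert set(find_cycle(pred, 1)) == {2, 3, 4}
```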

99 Contain at least one Clique: Path of size L A1 L-1 … 99

100 Contain at least one Clique: Path of size L 1A1A L-2 … A2 At least L-2 … At most 1 1 100

101 Contain at least one Clique: Path of size L A1 L-2 … A2 At least L-3 … At most 2 A3 At least L-4 … At most 1 1 11 101

102 Contain at least one Clique: Path of size L A1 L-2 … At least L-i … A L-1 At least 0 … Ai … At most i-1 A L … At most L-2 … At most L-1 …… 1 1 1 At least 0 … Path of size L 102

103 Contain at least one Clique: Induction for k≥L Path of size k-(L-1) A Path of size (L-2 ) … At most L-2 103

104 Contain at least one Clique: Induction for k≥L, case 1: Cycle of at least L Path of size k-(L-1) A Path of size (L-2 ) … At most L-2 104

105 Contain at least one Clique: Induction for k≥L, case 2: Path of size k+1 Path of size k-(L-1) A Path of size (L-2 ) … At most L-2 B 105

106 Contain at least one Clique: A1 A L Ai … … Aj … 106

107 Clique in G+ (2/2) Contains at most one clique: – If it contained two, then since each clique holds a majority of the nodes, some node would belong to both cliques. – By transitivity, the union of the two cliques’ node sets is itself a clique.

108 Contain at most one clique i j Transitivity 108

109 Two-stage Algorithm – Finish Claim: each living processor knows about the clique. – That is because each node in the graph is a descendant of a processor in the clique; therefore all nodes in the clique are its ancestors, and it will wait for their messages. The consensus: let f be any function of the form f: ({0,1} × 2^|V|) → {0,1}, known by all processors (part of their state). Then f(unique clique) is a binary value known by all processors. Consensus is reached!
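A sketch of this final step. Majority-of-inputs is just one hypothetical choice of f; any function fixed in advance works, since every processor evaluates the same f on the same unique clique and so obtains the same bit with no further communication.

```python
# Sketch of the final step: a shared deterministic f over the unique clique.
def decide(clique, inputs):
    ones = sum(inputs[p] for p in clique)
    return 1 if 2 * ones > len(clique) else 0   # majority, ties toward 0

inputs = {1: 1, 2: 0, 3: 1, 4: 1}
clique = [1, 3, 4]                # the unique clique found in G+
assert decide(clique, inputs) == 1
```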

110 THE END! Questions?

