1 Gossip and its application Presented by Anna Kaplun

2 Agenda Technical preliminaries Gossip algorithms ◦ Randomized unbalanced gossip ◦ unbalanced gossip Consensus Distributed computing

3 Technical preliminaries The system is synchronous. There are n processors, each with a unique integer name in the range {1,…,n}; n is known to all processors. Each processor can send a message to any subset of processors in one round. Messages are assumed to be large enough to carry a complete local state.

4 Technical preliminaries – performance metrics Communication is the total number of point-to-point messages sent. Time is measured as the number of rounds, where a round is the number of clock cycles sufficient to ◦ receive the messages delivered in the previous round, ◦ perform local computations, and ◦ send messages to an arbitrary set of processors and deliver them.
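The round structure above can be pictured as a small receive/compute/send loop. Below is a minimal, illustrative simulator of this synchronous model; the names (Processor, step, run, the toy forwarding rule) are mine and not from the paper.

    # A tiny synchronous-round simulator illustrating the model above.
    # All names (Processor, step, run, mailboxes) are illustrative, not from the paper.
    class Processor:
        def __init__(self, pid, n):
            self.pid, self.n = pid, n

        def step(self, received):
            # Local computation on the messages delivered this round;
            # returns {destination pid: message}. Toy rule: forward what
            # was heard to processor 1.
            return {} if self.pid == 1 else {1: (self.pid, received)}

    def run(processors, rounds):
        mailboxes = {p.pid: [] for p in processors}
        for _ in range(rounds):
            # Receive, compute, then send; messages sent in this round are
            # delivered at the start of the next round.
            outgoing = {p.pid: p.step(mailboxes[p.pid]) for p in processors}
            mailboxes = {p.pid: [] for p in processors}
            for sender, msgs in outgoing.items():
                for dest, msg in msgs.items():
                    mailboxes[dest].append(msg)
        return mailboxes

    # Example: 4 processors, 2 rounds.
    final = run([Processor(i, 4) for i in range(1, 5)], rounds=2)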

5 Gossip In the beginning each processor has an input value called its rumor. The goal: for every non-faulty processor p and every other processor q, ◦ p knows the rumor of q, OR ◦ p knows that q has crashed.

6 Randomized unbalanced gossip Processors are partitioned into m groups of balanced size, where m = min{n, 2t} ◦ n – number of processors ◦ t – maximum number of faulty processors. Every group has a leader. Only leaders initiate messages; regular nodes may only answer leaders' requests.

7 Randomized unbalanced gossip Processors are also partitioned into w chunks of balanced size, where ◦ if 2t < n then w = 2t, ◦ else w = n − t. Note: if 2t < n then chunks and groups are the same.
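A small sketch of how m, w and a balanced partition could be computed (the helper names are mine, not from the paper; it assumes 1 ≤ t < n).

    def gossip_parameters(n, t):
        # m: number of groups (each with a leader), w: number of chunks.
        m = min(n, 2 * t)
        w = 2 * t if 2 * t < n else n - t
        return m, w

    def balanced_partition(items, k):
        # Split `items` into k parts whose sizes differ by at most one.
        q, r = divmod(len(items), k)
        parts, start = [], 0
        for i in range(k):
            size = q + (1 if i < r else 0)
            parts.append(items[start:start + size])
            start += size
        return parts

    # Example: n = 10 processors, t = 3 crashes -> m = 6 groups, w = 6 chunks,
    # confirming that chunks and groups coincide when 2t < n.
    m, w = gossip_parameters(10, 3)
    groups = balanced_partition(list(range(1, 11)), m)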

8 Communication graph Every node is connected to its appropriate leader, and the leaders form a graph on m nodes with the following properties: its degree is constant, and for each subgraph R induced by at least m − t nodes there is a connected subgraph P(R) of R that contains at least (m − t)/7 nodes and has radius at most 2 + 30 ln(m).

9 Communication graph (m = 2t < n) m groups, m leaders; nodes are connected to their leaders, and the leaders form the communication graph. Chunks and groups are the same.

10 Communication graph (m = n ≤ 2t) m groups, m leaders; nodes are connected to their leaders, and the leaders form the communication graph. There are n − t chunks.
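The sketch below shows how the two-level structure could be wired up. The paper's analysis needs leaders connected by special constant-degree graphs with the subgraph property of slide 8, whose construction is not shown here; the ring-with-chords used below is purely a stand-in, not that construction, and all names are mine.

    def leader_graph(m):
        # Placeholder constant-degree graph on the m leaders (NOT the
        # expander-like family the analysis actually requires).
        edges = set()
        for i in range(m):
            edges.add(frozenset((i, (i + 1) % m)))        # ring edge
            edges.add(frozenset((i, (i + m // 2) % m)))   # long chord
        return {e for e in edges if len(e) == 2}          # drop self-loops

    def attach_members(groups):
        # The first member of each group acts as its leader; every other
        # member is connected only to that leader.
        links = []
        for g in groups:
            leader = g[0]
            links.extend((leader, member) for member in g[1:])
        return links

    # Example with three groups of two processors each.
    links = attach_members([[1, 2], [3, 4], [5, 6]])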

11 Randomized unbalanced gossip – local view Rumors – list of all known rumors, initialized: Rumors_p[p] = myRumor (and nil elsewhere). Active – list of processors known to have crashed, initialized: Active_p[q] = nil (for every q). Pending – list of processors known to be fully informed, initialized: Pending_p[q] = nil (for every q).

12 Randomized unbalanced gossip – messages ◦ graph – carries the whole local state, sent along the communication graph. ◦ inquiry – requests the local state of a specific node. ◦ notification – carries the whole local state, sent when the sender knows all rumors (or knows that a certain processor crashed). ◦ reply – carries the whole local state, sent as a reply to an inquiry message.
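A compact way to hold the local view and the four message kinds is sketched below; the field names and the use of None for "nil" are mine, chosen to mirror the two slides above.

    from dataclasses import dataclass

    @dataclass
    class LocalState:
        # One entry per processor q in 1..n; None plays the role of "nil".
        rumors: dict    # rumors[q]  = rumor of q, once known
        active: dict    # active[q]  = True once q is known to have crashed
        pending: dict   # pending[q] = True once q is known to be fully informed

        @staticmethod
        def initial(pid, n, my_rumor):
            return LocalState(
                rumors={q: (my_rumor if q == pid else None) for q in range(1, n + 1)},
                active={q: None for q in range(1, n + 1)},
                pending={q: None for q in range(1, n + 1)},
            )

    @dataclass
    class Message:
        # "graph", "inquiry", "notification" or "reply"; every kind except
        # inquiry carries the sender's whole local state.
        kind: str
        sender: int
        state: LocalState = None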

13 Randomized unbalanced gossip – the algorithm Only leaders initiate messages; regular nodes only answer queries. A leader starts as a collector; once it has heard about all nodes it becomes a disseminator. The algorithm consists of two kinds of phases: 1. regular phase – executed T times, 2. ending phase – executed 4 times.

14 Randomized unbalanced gossip – the algorithm (regular phase, executed T times). Chunks are ordered according to a local permutation π_p. In each phase, processor p does the following (a code sketch follows):
  a. Update the local arrays.
  b. If p is a collector that has already heard about all nodes, then become a disseminator.
  For each processor q:
  a. If q is active and q is my neighbor in the communication graph, then send a graph message to q.
  b. If I am a collector and q is in the first chunk containing a processor about which I have not heard yet, then send an inquiry message to q.
  c. If I am a disseminator and q is in the first chunk containing a processor that needs to be notified, then send a notification message to q.
  d. If q is a collector from which an inquiry message was received, then send a reply message to q.
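The code below sketches one regular phase for a leader p, following rules (a)-(c) above; the argument and helper names (`chunks`, `pi_p`, `send`) are mine and `send` stands for whatever transport is used. It assumes the LocalState layout sketched earlier.

    def regular_phase(p, state, neighbors, chunks, pi_p, is_collector, send):
        # One regular phase as seen by leader p (a sketch). `chunks` maps a
        # chunk index to its member ids, `pi_p` is p's private permutation
        # of the chunk indices.
        first_unknown = next((c for c in pi_p
                              if any(state.rumors[q] is None for q in chunks[c])), None)
        first_uninformed = next((c for c in pi_p
                                 if any(not state.pending[q] for q in chunks[c])), None)
        for q in range(1, len(state.rumors) + 1):
            if state.active[q] is None and q in neighbors:
                send(q, "graph", state)                                    # rule (a)
            if is_collector and first_unknown is not None and q in chunks[first_unknown]:
                send(q, "inquiry", None)                                   # rule (b)
            if not is_collector and first_uninformed is not None and q in chunks[first_uninformed]:
                send(q, "notification", state)                             # rule (c)
        # Rule (d), replying to received inquiries, is handled when the
        # inquiry arrives (see also the ending-phase sketch below).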

15 Randomized unbalanced gossip – the algorithm (regular phase), illustrated for some leader p: send graph messages; take the first unknown chunk from π_p and send an inquiry to it; answer queries.

16 Randomized unbalanced gossip – the algorithm (regular phase), illustrated for some leader p that has collected all rumors: send graph messages; take the first uninformed chunk from π_p and send a notification to it; answer queries.

17 Randomized unbalanced gossip – the algorithm (ending phase, executed 4 times). In each phase, processor p does the following (a code sketch follows):
  a. Update the local arrays.
  b. If p is a collector that has already heard about all nodes, then become a disseminator.
  For each processor q:
  a. If I am a collector and I have not heard about q, then send an inquiry message to q.
  b. If I am a disseminator and q needs to be notified, then send a notification message to q.
  c. If q is a collector from which an inquiry message was received, then send a reply message to q.
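For comparison with the regular phase, a sketch of one ending phase under the same assumptions (helper names mine):

    def ending_phase(p, state, is_collector, send):
        # One ending phase (sketch): chunks are no longer used, every
        # relevant processor is contacted directly.
        for q in range(1, len(state.rumors) + 1):
            if is_collector and state.rumors[q] is None:
                send(q, "inquiry", None)           # ask anyone I have not heard about
            if not is_collector and not state.pending[q]:
                send(q, "notification", state)     # notify anyone not yet informed
        # Replies to received inquiries are sent as in the regular phase.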

18 Randomized unbalanced gossip – updating lists Rumors – when a message is received, the new rumors it carries are merged into the local list of known rumors.

19 Randomized unbalanced gossip – updating lists (cont.) Active – q is marked as faulty if: 1. a message is received in which q is marked as faulty, 2. q is my neighbor in the communication graph and I did not receive a graph message from it, or 3. I sent an inquiry to q and did not receive a reply within two rounds.

20 Randomized unbalanced gossip – updating lists (cont.) Pending – q is marked as fully informed if: 1. a message is received in which q is marked as fully informed, 2. a notification message is received from q, or 3. I am a disseminator and I sent a notification to q.
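The three update rules can be collected into two small helpers; this is a sketch under the LocalState layout assumed earlier, and the function and parameter names are mine.

    def update_on_message(state, msg):
        # Merge newly learned rumors, crash marks and "fully informed" marks
        # carried by a received message.
        if msg.state is not None:
            for q, r in msg.state.rumors.items():
                if state.rumors[q] is None:
                    state.rumors[q] = r                # new rumor learned
            for q, crashed in msg.state.active.items():
                if crashed:
                    state.active[q] = True             # q reported as crashed
            for q, informed in msg.state.pending.items():
                if informed:
                    state.pending[q] = True            # q reported as fully informed
        if msg.kind == "notification":
            state.pending[msg.sender] = True           # the sender knows all rumors

    def mark_silent(state, expected_neighbors, queried_two_rounds_ago, heard_from):
        # Crash detection: a neighbor that sent no graph message, or a
        # processor queried two rounds ago that never replied, is marked faulty.
        for q in expected_neighbors | queried_two_rounds_ago:
            if q not in heard_from:
                state.active[q] = True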

21 Randomized unbalanced gossip – correctness Claim: if t < n, at least one leader never fails. ◦ If 2t < n then m = 2t, hence at least half of the leaders do not fail. ◦ If 2t ≥ n then m = n, hence at least one leader does not fail. Conclusion: at least one leader runs the ending phases; during them it learns about all processors and disseminates this knowledge.
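Two concrete instances of this counting (the numbers are illustrative, not from the slides):

\[
\begin{aligned}
n=10,\ t=3:&\quad 2t=6<n \;\Rightarrow\; m=2t=6,\ \text{so at least } m-t=3=\tfrac{m}{2} \text{ leaders survive};\\
n=10,\ t=8:&\quad 2t=16\ge n \;\Rightarrow\; m=n=10,\ \text{so at least } n-t=2\ge 1 \text{ leader survives}.
\end{aligned}
\]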

22 Randomized unbalanced gossip – complexity R_p – a conceptual list of the chunks that contain at least one node that p has not heard about; for a subgraph K of the leaders' graph, r_K(i) denotes the number of chunks that remain in the R_p list of every leader p in K after stage i. S_p – a conceptual list of the chunks that contain at least one node that p still has to notify; s_K(i) is defined analogously from the S_p lists.

23 Randomized unbalanced gossip – complexity Consider the graph formed by the m leaders. At least m − t leaders never fail, so there are at least (m − t)/7 of them in a connected component of radius at most 2 + 30 ln(m); call this subgraph K.

24 Randomized unbalanced gossip – complexity (cont.) Lemma: if a stage takes sufficiently many phases, then r_K decreases accordingly with high probability. Proof: if a chunk is not in all R_p lists, it will be removed from all the remaining lists within one stage, so the worst case is when every chunk is in every list.

25 Randomized unbalanced gossip – complexity (cont.) Consider the choices of chunks made by the processors in K as occurring sequentially, and look at a sequence of 30·|K|·ln(m) consecutive trials X1, X2, …, which represents the case c = 1.

26 Randomized unbalanced gossip – complexity (cont.) Case |K|·ln(m) > r(i−1)/2: call trial Xi a success if either a new chunk is selected or the number of chunks already selected by this trial is at least r(i−1)/2. The probability of success in each trial is at least 1/2 (as long as fewer than r(i−1)/2 chunks have been selected, at least half of the chunks a processor can pick are still new).

27 Randomized unbalanced gossip – complexity (cont.) Case |K|·ln(m) ≤ r(i−1)/2: call trial Xi a success if either a new chunk is selected or the number of chunks already selected by this trial is at least |K|·ln(m). The probability of success in each trial is at least 1/2.

28 Randomized unbalanced gossip – complexity (cont.) In both cases we have 30·|K|·ln(m) Bernoulli trials, each succeeding with probability at least 1/2, so by a Chernoff bound the required number of successes occurs with high probability.
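A standard Chernoff bound yields a probability of this shape (a sketch; the constants used in the paper may differ). With \(N = 30|K|\ln m\) independent trials, each a success with probability at least \(1/2\), the expected number of successes is \(\mu \ge 15|K|\ln m\), and for \(0<\delta<1\):

\[
\Pr\bigl[X < (1-\delta)\mu\bigr] \;\le\; e^{-\delta^2\mu/2},
\qquad\text{e.g.}\quad
\Pr\bigl[X < 7.5\,|K|\ln m\bigr] \;\le\; e^{-15|K|\ln m/8} \;=\; m^{-15|K|/8}.
\]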

29 Randomized unbalanced gossip – complexity (cont.) Lemma: for every choice of the constant there is a corresponding regular phase by which gossiping is successfully completed, while the communication spent up to this phase is bounded, with high probability.

30 Randomized unbalanced gossip – complexity (cont.) Proof: let L be a fixed subgraph of the leaders' graph induced by m − t vertices. There are O(1) stages of one kind and at most 1 + log(w) stages of another kind; all other stages are called useless.

31 Randomized unbalanced gossip – complexity (cont.) There is a constant β > 0 such that, if there is no useless stage among the first β + lg(w) even stages, then r(β + lg(w)) = 0. The probability that some stage is useless is bounded as above, and there are at most (m choose t) possible subgraphs L.

32 Randomized unbalanced gossip – complexity (cont.) Hence, taking a union bound over the subgraphs L, the probability that there is a useless stage among the first β + lg(w) even stages, for an arbitrary subgraph L, remains small; this is the probability that some collector has not become a disseminator.

33 Randomized unbalanced gossip – complexity (cont.) The same bound applies to the event that, after β + lg(w) even stages, all surviving leaders have finished collecting, but in the following β + lg(w) even stages there is still some uninformed node. Consequently the probability that some leader is still a disseminator after 4(β + lg(w)) stages is bounded in the same way.

34 Randomized unbalanced gossip – time complexity The algorithm runs for 4(β + lg(w)) stages, each stage taking a fixed number of phases, so the time is the number of stages times the number of phases per stage.

35 Randomized unbalanced gossip – message complexity The bound is obtained by counting ◦ the number of graph messages and ◦ the number of inquiry messages, and adding them up.

36 From random to deterministic – unbalanced gossip Fix a suitable constant and let α > 0 be the corresponding number such that there exists a family of local permutations Π for which the termination threshold T = α·lg(w)·ln(m) guarantees completion of gossiping without invoking the ending phases. Make this threshold T and such a family Π part of the code of algorithm UNBALANCED-GOSSIP. UNBALANCED-GOSSIP then achieves the stated time and message complexity; for the parameter choice a = 0 the message complexity bound specializes further. Note that Π is only proved to exist (the construction is not explicit).

37 Consensus Every processor starts with an initial value in {0,1} and must decide on a decision value. Termination: each processor eventually chooses a decision value, unless it crashes. Agreement: no two processors choose different decision values. Validity: only a value among the initial ones may be chosen as the decision value.

38 Consensus A natural idea: gossip and then decide on the maximum value. But what if some processor has crashed and its input value is known only to a subset of the processors? Moreover, gossip can be solved in O(1) time, while consensus with failures cannot be solved in fewer than t+1 rounds.

39 Consensus The algorithm below is designed to tolerate t failures; its time complexity and communication complexity are those quoted in the summary.

40 Consensus – White knights consensus Leaders reach consensus among themselves and then tell their decision to the regular nodes. Leaders send messages along a communication graph; in order to handle a partition of this graph in case of failures, the nodes also run a gossip algorithm.

41 Consensus – White knights consensus (one leader's code; a sketch in code follows):
  1) Set the rumor to the initial value.
  2) Repeat for the prescribed number of iterations:
     a) If the rumor is 1, then send a message to every neighbor.
     b) Repeat m times:
        a) Receive short messages.
        b) If the rumor is 0 and a message was received, then set the rumor to 1.
        c) If the rumor was set to 1 in this round, send a message to all neighbors.
     c) Repeat 2 + 30·log(m) times:
        a) Receive compactness messages.
        b) Merge the Nearby lists.
        c) Send compactness messages to all neighbors.
     d) If the Nearby list contains fewer than (m − t)/7 nodes, set the rumor to 0.
     e) Perform gossiping.
  3) Decide on the rumor value.
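The sketch below traces one iteration of the loop above from a single leader's point of view. It is only illustrative: `receive`, `send` and `gossip` stand for the communication primitives, the adoption of a learned 1 during gossip follows the intuition slides that come next, and all names are mine.

    import math

    def white_knights_iteration(p, rumor, neighbors, m, t, receive, send, gossip):
        # Phase 1: propagate preference 1 along the communication graph for m rounds.
        if rumor == 1:
            for q in neighbors:
                send(q, "one")
        for _ in range(m):
            got_one = any(msg == "one" for msg in receive())
            if rumor == 0 and got_one:
                rumor = 1
                for q in neighbors:
                    send(q, "one")
        # Phase 2: compactness check, 2 + 30*log(m) rounds of merging Nearby lists.
        nearby = {p}
        for _ in range(2 + int(30 * math.log(m))):
            for received_list in receive():
                nearby |= received_list
            for q in neighbors:
                send(q, set(nearby))
        if len(nearby) < (m - t) / 7:
            rumor = 0                       # not compact: drop the preference for 1
        # Phase 3: gossip the rumor; adopt 1 if a white knight's rumor was learned.
        all_rumors = gossip(rumor)
        return 1 if 1 in all_rumors else rumor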

42 Consensus – White knights consensus, illustrated: nodes first send their preference (0 or 1), then check compactness, then gossip.

43 White knights consensus – intuition In every round, if a node's preference value is 1 it sends it to its neighbors, and if it receives the preference value 1 for the first time it forwards it to its neighbors. This is done for m rounds to ensure that a '1' propagates to all nodes in its connected component, so all nodes in a connected component have the same rumor value after step 2.b.

44 White knights consensus – intuition: why so many phases? Consider a node that has rumor = 1 before gossiping. By the algorithm, its connected component contains at least (m − t)/7 nodes, and they all have rumor = 1 too. What if they all crash while gossiping? Then after gossiping some nodes may have rumor = 1 while others do not.

45 White knights consensus – intuition: why so many phases? (cont.) If this scenario happened in every iteration, the nodes would never reach consensus. But in every such iteration at least (m − t)/7 nodes must fail, and with enough iterations this would require more crashes than can ever occur.
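A back-of-the-envelope version of this counting (a sketch of the slide's argument; the exact iteration count in the paper may differ): if a white knight does not survive gossiping in some iteration, all of the at least \((m-t)/7\) processors in its compact component crash during that iteration. Crashes are permanent and at most \(t\) of them ever occur, so this bad event can happen in at most

\[
\frac{t}{(m-t)/7} \;=\; \frac{7t}{m-t}
\]

iterations; running more iterations than that forces an iteration that either has no white knight or whose white knight survives.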

46 White knights consensus - correctness A processor is said to be a white knight in an iteration, if it starts gossiping in this iteration with the rumor equal to 1.

47 White knights consensus – correctness (cont.) The decision value is among the input values: if all inputs are "0", no "1" ever appears; if all inputs are "1", at least one processor that never fails stays compact through all iterations and spreads its "1" value at the last gossiping step.

48 White knights consensus – correctness (cont.) All processors decide on the same value. If there is an iteration without a white knight, then all nodes end it with rumor "0". If there is an iteration with a white knight that survives gossiping, every processor learns its rumor and in the next iteration all processors start with rumor = 1; at least one of them stays compact through all remaining iterations and spreads its "1" value at the last gossiping step.

49 White knights consensus – correctness (cont.) The remaining case is that there are white knights in each iteration but no white knight survives gossiping in any iteration; we have shown that this cannot happen, since there are too many iterations for it.

50 White knights consensus – time complexity Measured as the number of phases; see the annotated pseudocode on the next slide.

51 White knights consensus – time complexity The pseudocode of slide 41, annotated with per-step costs: the initial send in step 2.a is O(1), the propagation loop in step 2.b repeats m = O(t) times, the compactness check in step 2.c takes 2 + 30·log(m) rounds, and each iteration additionally runs one gossip.

52 White knights consensus – communication complexity The pseudocode of slide 41, annotated for messages: every processor sends a message at least once, and since the communication graph's degree is constant, each processor sends only O(1) messages per round along the graph.

53 Distributed computation The DO-ALL problem There are n processors, at most t of which may crash (t < n), and there are j jobs to perform ◦ jobs are idempotent, i.e., executing a task many times and/or concurrently has the same effect as executing it once.

54 Distributed computation The DO-ALL problem The goal is to perform all the jobs: every job should be executed at least once by some processor, and the algorithm terminates when all non-faulty processors are aware that all tasks are done. Trivial solution: every processor executes all the jobs.

55 The DO-ALL problem Performance metrics Message complexity – the number of point-to-point messages sent during the execution. Work complexity – we assume that a processor performs a unit of work per unit of time; note that idling processors also consume a unit of work per step. For an n-processor, j-task computation subject to a failure pattern F, denote by P_i(j, n, F) the number of processors that survive step i of the computation.
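With these conventions, the work of an execution can be written as follows (a formulation consistent with the definition above; the paper's exact notation may differ):

\[
W(j, n, F) \;=\; \sum_{i \ge 1} P_i(j, n, F),
\]

where the sum runs over the steps until the last non-faulty processor halts.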

56 The DO-ALL problem Trivial approach: every processor executes all the jobs. No messages are sent, and the work complexity is O(j·n), since each of the n processors performs all j jobs. To achieve better work complexity we trade messages and communication steps for work.

57 The DO-ALL problem The algorithm Its work and message complexity are those quoted in the summary; it can also be implemented on top of other gossiping algorithms, achieving correspondingly different bounds.

58 The DO-ALL problem The algorithm – local state of processor v: Task_v – a list of the j tasks ordered according to some permutation π_v; Proc_v – the list of processors that v believes are non-faulty; Done_v – a variable indicating whether all jobs are done according to v.

59 The DO-ALL problem The algorithm (a code sketch follows):
  1. done = false
  2. task = {π(1), π(2), …, π(j)}
  3. proc = {1, 2, …, n}
  4. Repeat:
     Repeat β·log(n) + 1 times:
       1. Repeat the prescribed number of times: if task is not empty, perform the task whose id is first in task and remove it from task; else set done to true.
       2. Run gossip with rumor = (task, proc, done).
       3. If done = true AND done is true for all received rumors, TERMINATE; else update task and proc.
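The sketch below traces this loop from one processor's point of view. It is illustrative only: `gossip` and `perform` stand for the gossip subroutine and the task execution, `inner_steps` stands in for the repetition count the slide leaves unspecified, the view-merging rule follows the correctness slide (a task missing from someone's list was executed), and all names are mine.

    import math

    def do_all(v, n, jobs, pi_v, beta, inner_steps, gossip, perform):
        done = False
        task = [pi_v(k) for k in range(1, len(jobs) + 1)]   # my private task order
        proc = set(range(1, n + 1))
        while True:
            for _ in range(int(beta * math.log(n)) + 1):
                for _ in range(inner_steps):
                    if task:
                        perform(task.pop(0))     # first remaining task in my order
                    else:
                        done = True
                rumors = gossip((tuple(task), frozenset(proc), done))
                if done and all(r[2] for r in rumors):
                    return                       # everyone reports done: terminate
                # Merge views: keep only tasks still present in every received
                # list, and only processors everyone still believes alive.
                task = [x for x in task if all(x in r[0] for r in rumors)]
                if rumors:
                    proc &= set.intersection(*(set(r[1]) for r in rumors))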

60 The DO-ALL problem The algorithm, illustrated: a task list (1–8) shrinks as processors alternate work steps and gossip steps until it is done.

61 The DO-ALL problem The algorithm – correctness: if a job is removed from some task list, it means that it was executed; progress at each node is ensured by the algorithm, so every node eventually terminates and all jobs are executed.

62 The DO-ALL problem The algorithm Its work and message complexity are those quoted in the summary; it can also be implemented on top of other gossiping algorithms, achieving correspondingly different bounds.

63 Summary Gossip algorithms achieve the stated time and communication complexity ◦ randomized unbalanced gossip attains these bounds with high probability, ◦ deterministic unbalanced gossip attains them in every run, but it is not constructive. Consensus: the stated time and message complexity. Distributed computing – the DO-ALL problem: the stated work and message complexity.

64 References Bogdan S. Chlebus, Dariusz R. Kowalski: Robust gossiping with an application to consensus. Journal of Computer and System Sciences 72(8): 1262-1281 (2006). Chryssis Georgiou, Dariusz R. Kowalski, Alexander A. Shvartsman: Efficient gossip and robust distributed computation. Theoretical Computer Science 347(1-2): 130-166 (2005).

