Gossip Protocols Bimodal Multicast and T-MAN


1 Gossip Protocols Bimodal Multicast and T-MAN
Ken Birman

2 Gossip: A valuable tool…
So-called gossip protocols can be robust even when everything else is malfunctioning. The idea is to build a distributed protocol a bit like gossip among humans: “Did you hear that Sally and John are going out?” Gossip spreads like lightning…

3 Gossip: basic idea First rigorous analysis was by Alan Demers!
Node A encounters a “randomly selected” node B (might not be totally random)
Gossip push (“rumor mongering”): A tells B something B doesn’t know
Gossip pull (“anti-entropy”): A asks B for something it is trying to “find”
Push-pull gossip: combines both mechanisms

4 Definition: A gossip protocol…
Uses [mostly] random pairwise state merge
Runs at a steady rate (much slower than the network)
Uses [mostly] bounded-size messages
Does not depend on messages getting through reliably

5 Gossip benefits… and limitations
Benefits: information flows around disruptions; scales very well; typically reacts to new events in log(N) time; can be made self-repairing.
Limitations: rather slow; very redundant; guarantees are at best probabilistic; depends heavily on the randomness of the peer selection.

6 For example We could use gossip to track the health of system components. We can use gossip to report when something important happens. In the remainder of today’s talk we’ll focus on event notifications; next week we’ll look at some of these other uses.

7 Typical push-pull protocol
Nodes have some form of database of participating machines (could have a hacked bootstrap, then use gossip to keep this up to date!)
Set a timer, and when it goes off, select a peer within the database
Send it some form of “state digest”
It responds with data you need and its own state digest
You respond with data it needs (see the sketch below)
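To make the loop concrete, here is a minimal in-process sketch of this digest exchange in Python. It is an illustration only: the toy state format of key → (version, value) and the function names are invented for the example, not part of any particular gossip implementation.

    import random

    def digest(state):
        # A digest identifies what we hold (key -> version), not the data itself.
        return {k: ver for k, (ver, _) in state.items()}

    def push_pull(a, b):
        # One push-pull round between the states of two nodes.
        da, db = digest(a), digest(b)
        for k, (ver, val) in list(b.items()):   # pull: fetch what a lacks
            if da.get(k, -1) < ver:
                a[k] = (ver, val)
        for k, (ver, val) in list(a.items()):   # push: send what b lacks
            if db.get(k, -1) < ver:
                b[k] = (ver, val)

    # Each node's timer fires and it merges with a random peer taken
    # from its database of participating machines:
    nodes = [{"up": (2, True)}, {}, {"load": (7, 0.3)}]
    for _ in range(5):
        i, j = random.sample(range(len(nodes)), 2)
        push_pull(nodes[i], nodes[j])

After a few rounds every node holds the newest version of every key, which is exactly the anti-entropy effect described above.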

8 Where did the “state” come from?
The data eligible for gossip is usually kept in some sort of table accessible to the gossip protocol. This way a separate thread can run the gossip protocol. It does upcalls to the application when incoming information is received.

9 Gossip often runs over UDP
Recall that UDP is an “unreliable” datagram protocol supported in the Internet. Unlike with TCP, data can be lost. Also, packets have a maximum size, usually 4 KB or 8 KB (you can control this), and larger packets are more likely to get lost! What if a packet would get too large? The gossip layer needs to pick the most valuable stuff to include, and leave out the rest!
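One plausible way to implement “pick the most valuable stuff” is a greedy pack under a byte budget. This sketch assumes each pending item carries a value score and a size function, which is not something UDP or the slide specifies:

    MAX_PAYLOAD = 4096   # bytes; larger UDP packets are more likely to be lost

    def pack_gossip(items, encoded_size):
        # items: list of (value_score, payload); keep the most valuable
        # entries that fit in the budget, and simply leave out the rest.
        chosen, used = [], 0
        for score, payload in sorted(items, key=lambda it: it[0], reverse=True):
            size = encoded_size(payload)
            if used + size <= MAX_PAYLOAD:
                chosen.append(payload)
                used += size
        return chosen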

10 Algorithms that use gossip
Gossip can be used to…
Notify applications about some event
Track the status of applications in a system
Organize the nodes in some way (like into a tree, or even sorted by some index)
Find “things” (like files)
Let’s look closely at an example

11 Bimodal multicast This is Cornell work from about 15 years ago
Goal is to combine gossip with UDP multicast to make a very robust multicast protocol

12 Stock Exchange Problem: Sometimes, someone is slow…
Most members are healthy… but one is slow, i.e. something is contending with the red process, delaying its handling of incoming messages…

13 Remedy? Use gossip Start by using unreliable UDP multicast to rapidly distribute the message. But some messages may not get through, and some processes may be faulty. So initial state involves partial distribution of multicast(s)

14 Remedy? Use gossip Periodically (e.g. every 100ms) each process sends a digest describing its state to some randomly selected group member. The digest identifies messages. It doesn’t include them.

15 Remedy? Use gossip Recipient checks the gossip digest against its own history and solicits a copy of any missing message from the process that sent the gossip

16 Remedy? Use gossip Processes respond to solicitations received during a round of gossip by retransmitting the requested message. The round lasts much longer than a typical RPC time.
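Putting slides 13–16 together, a stripped-down sketch of the gossip repair loop might look as follows. The class layout and method names are assumptions for illustration; real pbcast runs over UDP rather than direct method calls:

    import random

    class Process:
        def __init__(self):
            self.history = {}           # msg id -> message body
            self.members = []           # group membership list

        def gossip_round(self):         # fires periodically, e.g. every 100 ms
            peer = random.choice([m for m in self.members if m is not self])
            peer.on_digest(self, set(self.history))    # digest: ids only

        def on_digest(self, sender, ids):
            missing = ids - set(self.history)
            if missing:
                sender.on_solicit(self, missing)        # ask for what we lack

        def on_solicit(self, requester, ids):
            for i in ids & set(self.history):
                requester.history[i] = self.history[i]  # retransmit copies

The initial unreliable UDP multicast seeds some subset of the histories; the rounds above then fill in the gaps with high probability.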

17 Delivery? Garbage Collection?
Deliver a message when it is in FIFO order
Report an unrecoverable loss if a gap persists for so long that recovery is deemed “impractical”
Garbage collect a message when you believe that no “healthy” process could still need a copy (we used to wait 10 rounds, but now are using gossip to detect this condition)
Match parameters to intended environment
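A sketch of the FIFO delivery rule with a gap timeout; the ten-round constant and the helper names are assumptions matching the description above, not actual pbcast code:

    MAX_GAP_ROUNDS = 10    # match this parameter to the intended environment

    def try_deliver(pending, next_seq, gap_age, deliver, report_loss):
        # Deliver everything that is now in FIFO order.
        while next_seq in pending:
            deliver(pending.pop(next_seq))
            next_seq, gap_age = next_seq + 1, 0
        # If a gap has persisted too long, declare the loss unrecoverable.
        if pending:
            gap_age += 1
            if gap_age > MAX_GAP_ROUNDS:
                report_loss(next_seq)
                next_seq, gap_age = next_seq + 1, 0
        return next_seq, gap_age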

18 Need to bound costs Worries:
Someone could fall behind and never catch up, endlessly loading everyone else
What if some process has lots of stuff others want and they bombard him with requests?
What about scalability in buffering and in the list of members of the system, or the costs of updating that list?

19 Optimizations Request retransmissions of the most recent multicast first
Idea is to “catch up quickly” leaving at most one gap in the retrieved sequence

20 Optimizations Participants bound the amount of data they will retransmit during any given round of gossip. If too much is solicited they ignore the excess requests

21 Optimizations Label each gossip message with the sender’s gossip round number. Ignore solicitations that carry an expired round number, reasoning that they arrived very late and hence are probably no longer correct.

22 Optimizations Don’t retransmit the same message twice in a row to any given destination (the copy may still be in transit, hence the request may be redundant)

23 Optimizations Use UDP multicast when retransmitting a message if several processes lack a copy
For example, if solicited twice
Also, if a retransmission is received from “far away”
Tradeoff: excess messages versus low latency
Use regional TTL to restrict multicast scope

24 Scalability Protocol is scalable except for its use of the membership of the full process group
Updates could be costly
Size of the list could be costly
In large groups, we would also prefer not to gossip over long high-latency links

25 Router overload problem
Random gossip can overload a central router. Yet information flowing through this router is of diminishing quality as the rate of gossip rises. Insight: a constant rate of gossip is achievable and adequate.

26 (figure: two-cluster topology; a sender and receiver separated by a limited-bandwidth WAN link (1500 Kbits, versus 10 Mbits on other links) and by a high noise rate; plots show latency to delivery (ms) and load on the WAN link (msgs/sec))

27 Hierarchical Gossip Weight gossip so that probability of gossip to a remote cluster is smaller Can adjust weight to have constant load on router Now propagation delays rise… but just increase rate of gossip to compensate


29 Idea behind analysis Can use the mathematics of epidemic theory to predict the reliability of the protocol. Assume an initial state. Now look at the result of running B rounds of gossip: it converges exponentially quickly toward atomic delivery.
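As a concrete instance (this is the standard anti-entropy calculation from Demers et al., not the full pbcast model): let p_t be the probability that a given process still lacks the multicast after round t. With pull gossip, a process stays uninfected only if the randomly chosen peer is also uninfected, so in expectation, for large N,

    p_{t+1} = p_t^2,   hence   p_t = p_0^(2^t)

The fraction of processes missing the message falls doubly exponentially in the number of rounds, which is why outcomes concentrate on “almost nobody” or “almost everybody” receiving the message.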

30 Either sender fails… or data gets through w.h.p.

31 Failure analysis Suppose someone tells me what they hope to “avoid”
Model it as a predicate on the final system state. Can compute the probability that pbcast would terminate in that state, again from the model.

32 Two predicates Predicate I: A faulty outcome is one where more than 10% but less than 90% of the processes get the multicast … Think of a probabilistic Byzantine General’s problem: a disaster if many but not most troops attack

33 Two predicates Predicate II: A faulty outcome is one where roughly half get the multicast and failures might “conceal” the true outcome … this would make sense if using pbcast to distribute quorum-style updates to replicated data. The costly, hence undesired, outcome is the one where we need to roll back because the outcome is “uncertain”.

34 Two predicates
Predicate I: More than 10% but less than 90% of the processes get the multicast
Predicate II: Roughly half get the multicast but crash failures might “conceal” the outcome
Easy to add your own predicate. Our methodology supports any predicate over the final system state.


38 Figure 5: Graphs of analytical results

39 Example of a question Create a group of 8 members
Perturb one member in the style of Figure 1
Now look at “stability” of throughput
Measure the rate of received messages during periods of 100 ms each
Plot a histogram over the life of the experiment

40 Source to dest latency distributions
Notice that in practice, bimodal multicast is fast!

41 Now revisit Figure 1 in detail
Take 8 machines
Perturb 1
Pump data in at varying rates, look at the rate of received messages

42 Revisit our original scenario with perturbations (32 processes)

43 Throughput variation as a function of scale

44 Impact of packet loss on reliability and retransmission rate
Notice that when network becomes overloaded, healthy processes experience packet loss!

45 What about growth of overhead?
Look at messages other than the original data distribution multicast
Measure the worst-case scenario: costs at the main generator of multicasts
Side remark: all of these graphs look identical with multiple senders, or if overhead is measured elsewhere…


47 Growth of Overhead? Clearly, overhead does grow
We know it will be bounded except for probabilistic phenomena. At peak, load is still fairly low.

48 Pbcast versus SRM, 0.1% packet loss rate on all links
(figure panels: tree networks, star networks)

49 Pbcast versus SRM: link utilization

50 Pbcast versus SRM: 300 members on a 1000-node tree, 0.1% packet loss rate

51 Pbcast versus SRM: Interarrival spacing

52 Pbcast versus SRM: Interarrival spacing (500 nodes, 300 members, 1.0% packet loss)

53 Real Data: Bimodal Multicast on a 10 Mbit Ethernet (35 UltraSPARCs)
(figure panels: injected noise with the retransmission limit disabled; injected noise with the retransmission limit re-enabled)

54 Networks structured as clusters
(figure: two clusters of senders and receivers joined by a WAN link with limited bandwidth (1500 Kbits, versus 10 Mbits on other links) and a high noise rate)

55 Delivery latency in a 2-cluster LAN, 50% noise between clusters, 1% elsewhere

56 Requests/repairs and latencies with bounded router bandwidth

57 Gossipping in Bologna Ozalp Babaoglu

58 Background
2003: Márk Jelasity brings the gossipping gospel to Bologna from Amsterdam
2003-2005: We get good mileage from gossipping in the context of Project BISON
2005-present: Continue to get mileage in the context of Project DELIS

59 What have we done? We have used gossipping to obtain fast, robust, decentralized solutions for:
Aggregation
Overlay topology management
Heartbeat synchronization
Cooperation in selfish environments

60 Collaborators Márk Jelasity, Alberto Montresor, Gianpaolo Jesi, Toni Binci, David Hales, Stefano Arteconi

61 Proactive gossip framework
// active thread
do forever
    wait(T time units)
    q = SelectPeer()
    push S to q
    pull S_q from q
    S = Update(S, S_q)

// passive thread
do forever
    (p, S_p) = pull * from *
    push S to p
    S = Update(S, S_p)
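A hedged Python rendering of the framework, as an in-process sketch (real deployments push and pull over the network; the class and parameter names here are invented):

    import random, time

    class GossipNode:
        def __init__(self, state, peers, update, T=1.0):
            self.S, self.peers, self.update, self.T = state, peers, update, T

        def select_peer(self):                  # SelectPeer(): default policy
            return random.choice(self.peers)

        def active_thread(self):                # do forever
            while True:
                time.sleep(self.T)              # wait(T time units)
                q = self.select_peer()
                S_q = q.on_push(self.S)         # push S to q, pull S_q from q
                self.S = self.update(self.S, S_q)

        def on_push(self, S_p):                 # passive thread's body
            reply = self.S                      # push S back to the caller
            self.S = self.update(self.S, S_p)
            return reply

Each instantiation below then just supplies the local state S, SelectPeer(), and Update().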

62 Proactive gossip framework
To instantiate the framework, need to define:
Local state S
Method SelectPeer()
Style of interaction: push-pull, push, or pull
Method Update()

63 #1 Aggregation

64 Gossip framework instantiation
Style of interaction: push-pull
Local state S: current estimate of the global aggregate
Method SelectPeer(): single random neighbor
Method Update(): numerical function defined according to the desired global aggregate (arithmetic/geometric mean, min, max, etc.)
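For averaging, Update is just the arithmetic mean applied symmetrically. This toy simulation (sizes and variable names are arbitrary) shows the estimates collapsing onto the global average:

    import random

    def update(s_local, s_peer):
        return (s_local + s_peer) / 2.0     # both parties adopt the mean

    estimates = [random.uniform(0, 100) for _ in range(1000)]
    for cycle in range(20):
        for i in range(len(estimates)):
            j = random.randrange(len(estimates))       # random peer
            m = update(estimates[i], estimates[j])
            estimates[i] = estimates[j] = m            # push-pull: both update
    # max(estimates) - min(estimates) is now tiny: every node holds
    # (approximately) the true global average.

Network size estimation (slides 67–68) is the same protocol with one node starting at 1 and everyone else at 0; the common estimate converges to 1/N.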

65 Exponential convergence of averaging

66 Properties of gossip-based aggregation
In gossip-based averaging, if the selected peer is a globally random sample, then the variance of the set of estimates decreases exponentially. Convergence factor: E(σ²_{i+1}) / E(σ²_i) ≈ 1/(2√e) ≈ 0.303 per cycle.

67 Robustness of network size estimation
1000 nodes crash at the beginning of each cycle

68 Robustness of network size estimation
20% of messages are lost

69 #2 Topology Management

70 Gossip framework instantiation
Style of interaction: push-pull
Local state S: current neighbor set
Method SelectPeer(): single random neighbor
Method Update(): ranking function defined according to the desired topology (ring, mesh, torus, DHT, etc.)
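A sketch of a T-MAN-style Update for building a ring; the ranking function and the view size K are illustrative assumptions:

    K = 20   # size of the neighbor set (the local view)

    def ring_distance(a, b, n):
        # Ranking function for a ring over node ids 0..n-1.
        d = abs(a - b)
        return min(d, n - d)

    def update(my_id, my_view, peer_view, n):
        # Merge the two views and keep the K nodes that rank best,
        # i.e. closest to us in the target topology.
        candidates = (set(my_view) | set(peer_view)) - {my_id}
        return sorted(candidates, key=lambda x: ring_distance(my_id, x, n))[:K]

Swapping ring_distance for a grid or DHT metric yields the other topologies listed above.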

71 Mesh Example

72 Sorting example

73 Exponential convergence - time

74 Exponential convergence - network size

75 #3 Heartbeat Synchronization

76 Synchrony in nature
Nature displays astonishing cases of synchrony among independent actors:
Heart pacemaker cells
Chirping crickets
Menstrual cycles of women living together
Flashing of fireflies
Actors may belong to the same organism or they may be parts of different organisms

77 Coupled oscillators
The “coupled oscillator” model can be used to explain the phenomenon of “self-synchronization”. Each actor is an independent “oscillator”, like a pendulum. Oscillators are coupled through their environment:
Mechanical vibrations
Air pressure
Visual cues
Olfactory signals
They influence each other, causing minor local adjustments that result in global synchrony.

78 Fireflies
Certain species of (male) fireflies (e.g., Luciola pupilla) are known to synchronize their flashes despite:
Small connectivity (each firefly has a small number of “neighbors”)
Communication that is not instantaneous
Independent local “clocks” with random initial periods

79 Gossip framework instantiation
Style of interaction: push
Local state S: current phase of the local oscillator
Method SelectPeer(): (small) set of random neighbors
Method Update(): function to reset the local oscillator based on the phase of an arriving flash
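A minimal sketch of the oscillator reset, in the spirit of coupled-oscillator models (the linear coupling rule and its constant are assumptions for illustration; real phase-response curves vary):

    COUPLING = 0.1    # strength of the nudge per observed flash

    def tick(phase, dt, period, saw_flash):
        # Advance our oscillator; return (new_phase, fired).
        phase += dt / period
        if saw_flash:
            phase *= (1 + COUPLING)   # a neighbor's flash pulls us forward
        if phase >= 1.0:
            return 0.0, True          # flash, and push it to random neighbors
        return phase, False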

80 Experimental results

81 Exponential convergence

82 #4 Cooperation in Selfish Environments

83 Outline P2P networks are usually open systems
Possibility to free-ride
High levels of free-riding can seriously degrade global performance
A gossip-based algorithm can be used to sustain high levels of cooperation despite selfish nodes
Based on simple “copy” and “rewire” operations

84 Gossip framework instantiation
Style of interaction: pull
Local state S: current utility, strategy and neighborhood within an interaction network
Method SelectPeer(): single random sample
Method Update(): copy the strategy and neighborhood if the peer is achieving better utility
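A sketch of the SLAC-style pull update under assumed names (the Node fields and mutation details are illustrative; the compare-and-copy logic follows the slide):

    import random
    from dataclasses import dataclass, field

    @dataclass(eq=False)           # identity-based hashing so nodes fit in sets
    class Node:
        strategy: str              # "C" (cooperate) or "D" (defect)
        utility: float = 0.0
        links: set = field(default_factory=set)

    def slac_update(node, peer, all_nodes, mutate_prob=0.01):
        if peer.utility > node.utility:
            node.strategy = peer.strategy              # "copy" the strategy
            node.links = set(peer.links) | {peer}      # "rewire" into peer's neighborhood
        if random.random() < mutate_prob:
            node.strategy = "D" if node.strategy == "C" else "C"   # "mutate"
            node.links = {random.choice(all_nodes)}    # drop links, pick a random node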

85 SLAC Algorithm: “Copy and Rewire”
(figure: node A compares utilities with a peer, “copies” its strategy, and “rewires” its links into the peer’s neighborhood)

86 SLAC Algorithm: “Mutate”
(figure: node A “mutates” its strategy, drops its current links, and links to a random node)

87 Prisoner’s Dilemma in SLAC
Nodes play PD with neighbors chosen randomly in the interaction network
Only pure strategies (always C or always D)
Strategy mutation: flip the current strategy
Utility: average payoff achieved

88 Cycle 180: Small defective clusters

89 Cycle 220: Cooperation emerges

90 Cycle 230: Cooperating cluster starts to break apart

91 Cycle 300: Defective nodes isolated, small cooperative clusters formed

92 Phase transition of cooperation
(figure; y-axis: % of cooperating nodes)

93 Broadcast Application
How to communicate a piece of information from a single node to all other nodes, while:
Minimizing the number of messages sent (MC)
Maximizing the percentage of nodes that receive the message (NR)
Minimizing the elapsed time (TR)

94 Broadcast Application
Given a network with N nodes and L links:
A spanning tree has MC = N
A flood-fill algorithm has MC = L
For fixed networks containing reliable nodes, it is possible to use an initial flood-fill to build a spanning tree from any node
This is practical if broadcasting is initiated by a few nodes only
In P2P applications it is not practical, due to network dynamicity and the fact that all nodes may need to broadcast

95 The broadcast game
A node initiates a broadcast by sending a message to each neighbor
Two different node behaviors determine what happens when a node receives the message for the first time:
Pass: forward the message to all neighbors
Drop: do nothing
Utilities are updated as follows (see the sketch below):
Nodes that receive the message gain a benefit β
Nodes that pass the message incur a cost γ
Assume β > γ > 0, indicating nodes have an incentive to receive messages but also an incentive not to forward them
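The utility bookkeeping is then a few lines; BETA and GAMMA stand in for the slide’s β and γ, and the node fields and send helper are assumptions:

    BETA, GAMMA = 1.0, 0.2    # any values with BETA > GAMMA > 0

    def on_first_receipt(node, message, send):
        node.utility += BETA              # benefit of receiving the message
        if node.strategy == "pass":
            node.utility -= GAMMA         # cost of forwarding it
            for nbr in node.links:
                send(nbr, message)        # forward to all neighbors
        # "drop": take the benefit but do nothing further

Every node prefers to drop while hoping others pass, a classic free-riding incentive, which the copy-and-rewire dynamics above counteracts.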

96 1000-node static random network

97 1000-node high churn network

98 Fixed random network Average over 500 broadcasts x 10 runs

99 High churn network Average over 500 broadcasts x 10 runs

100 Some food for thought What is it that makes a protocol “gossip based”?
Cyclic execution structure (whether proactive or reactive)
Bounded information exchange per peer, per cycle
Bounded number of peers per cycle
Random selection of peer(s)

101 Some food for thought Bounded information exchange per peer, per round implies information condensation — aggregation. Is aggregation the mother of all gossip protocols?

102 Some food for thought Is exponential convergence a universal characterization of all gossip protocols? No, it depends on the properties of the peer selection step. What are the minimum properties of peer selection that are necessary to guarantee exponential convergence?

103 Gossip versus evolutionary computing
What is the relationship between gossip and evolutionary computing? Is one more powerful than the other? Are they equal?

