1
Abstractions for Fault-Tolerant Distributed Computing
Idit Keidar, MIT LCS
2
The Big Question
Q: How can we make it easier to build good* distributed systems?
*good = efficient; fault-tolerant; correct; flexible; extensible; …
A: We need good abstractions, implemented as generic services (building blocks)
3
In This Talk
Abstraction: Group Communication
Application: VoD
Algorithm: Moshe
Implementation, Performance
Other work, new directions
4
Abstraction: Group Communication (GC)
GC interface:
– Send(Grp, Msg)
– Receive(Msg)
– Join / Leave(Grp)
– View(Members, Id)
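A minimal sketch of what such a GC interface could look like in C++. GroupCommunication, View, and the callback style are illustrative assumptions, not the API of any particular GC library:

    #include <functional>
    #include <string>
    #include <vector>

    // Hypothetical view: the membership the GC layer currently reports.
    struct View {
        std::vector<std::string> members;  // current group members
        long id;                           // monotonically increasing view id
    };

    // Hypothetical GC interface mirroring the four operations above.
    class GroupCommunication {
    public:
        virtual ~GroupCommunication() = default;

        // Multicast a message to all current members of a group.
        virtual void Send(const std::string& group, const std::string& msg) = 0;

        // Join or leave a group; the GC layer later delivers new views.
        virtual void Join(const std::string& group) = 0;
        virtual void Leave(const std::string& group) = 0;

        // Register callbacks for incoming messages and view changes.
        virtual void OnReceive(std::function<void(const std::string& msg)> cb) = 0;
        virtual void OnView(std::function<void(const View& view)> cb) = 0;
    };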
5
Example: Highly Available Video-On-Demand (VoD) [Anker, Dolev, Keidar ICDCS 99]
True VoD: clients make online requests
Dynamic set of loosely-coupled servers
– Fault-tolerance, dynamic load balancing
– Clients talk to an "abstract" VoD service
6
Abstraction: Group Addressing (Dynamic)
[Diagram: per-movie groups (Chocolat, Gladiator, Spy Kids), a service group, and session groups; clients ask "Movies?" and send control messages, servers send start/update messages]
7
Abstraction: Virtual Synchrony
Connected group members receive the same sequence of events - messages, views
Abstraction: state-machine replication
– VoD servers in a movie group share information about clients using messages and views
– Load-balancing decisions are made from the local copy
  - Upon a start message
  - When a view reports a server failure
– Joining servers get a state transfer
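Illustrative only, not the actual VoD server code; all names are hypothetical. A movie-group member might keep a replicated load table that every member updates identically from the shared sequence of messages and views:

    #include <map>
    #include <set>
    #include <string>

    // Illustrative replicated state: how many client sessions each server carries.
    class MovieGroupReplica {
    public:
        // Every connected member delivers the same "start" messages in the same
        // order, so all local copies of the load table stay identical.
        void OnStartMessage(const std::string& serving_server) {
            ++load_[serving_server];
        }

        // Views are also delivered to all connected members, so everyone drops
        // the load of a failed server at the same logical point.
        void OnView(const std::set<std::string>& members) {
            for (auto it = load_.begin(); it != load_.end();) {
                if (members.count(it->first) == 0) it = load_.erase(it);
                else ++it;
            }
        }

        // Load-balancing decision made purely on the local copy.
        std::string LeastLoaded() const {
            std::string best;
            for (const auto& [server, load] : load_)
                if (best.empty() || load < load_.at(best)) best = server;
            return best;
        }

    private:
        std::map<std::string, int> load_;
    };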
8
The VoD server is implemented in ~2500 lines of C++, including all fault-tolerance logic, using a GC library and commodity hardware
9
General Lessons Learned
GC saves a lot of work
– Especially for replication with dynamic groups and "local" consistency
– E.g., VoD servers, shared whiteboard, ...
Good performance, but… only on a LAN
– The next generation will run on WANs (geoplexes)
10
WAN: the Challenge
Message latency is large and unpredictable
Frequent message loss
⇒ Time-out failure detection is inaccurate
⇒ The number of inter-LAN messages matters
⇒ Algorithms may change views frequently; view changes require communication (e.g., state transfer), which is costly in a WAN
11
New Architecture: GC Multicast & Membership, "Divide and Conquer" [Anker, Chockler, Dolev, Keidar DIMACS 98]
[Diagram: layered architecture - Virtual Synchrony [Keidar, Khazan] and Multicast on top of the Membership service, Moshe [Keidar et al.], and a Notification Service (NS)]
12
New Architecture Benefits
Fewer inter-LAN messages
Fewer remote time-outs
Membership is out of the way of regular multicast
Two semantics:
– Notification Service - "who is around"
– Group membership views - for virtual synchrony
13
Moshe: A Group Membership Algorithm for WANs [Keidar, Sussman, Marzullo, Dolev ICDCS 00]
Designed for the WAN from the ground up
Avoids delivery of "obsolete" views
– Views that are known to be changing
– Hence, not always terminating
– Avoids excessive load during unstable periods
Runs in 1 round, optimistically
– All previous algorithms ran in 2
14
New Membership Spec
Conditional Liveness: if the situation eventually stops changing and the NS_Set is eventually accurate, then all NS_Set members have the same last view
Composable
– Can prove application liveness
Termination is not required - no obsolete views
Temporary disagreement is allowed - optimism
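A rough temporal-logic rendering of the conditional-liveness guarantee (a hedged sketch only; the precise formulation is in the paper). S stands for the final, accurate NS_Set:

    % If from some point on every server's NS_Set equals the same accurate set S,
    % then eventually all members of S have the same last view, with membership S.
    \[
    \Diamond\Box\,\bigl(\forall p:\ \mathit{NS\_Set}_p = S \;\wedge\; S \text{ is accurate}\bigr)
    \;\Longrightarrow\;
    \Diamond\,\bigl(\forall p, q \in S:\ \mathit{last\_view}_p = \mathit{last\_view}_q\bigr)
    \]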
15
Feedback Cycle: Breaking Conceptions
[Diagram: feedback cycle among Application, Abstraction specs, Algorithms, and Implementation; it yields "no obsolete views" and "optimism allowed"]
16
The Model
Asynchronous - no bound on latency
Local NS module at every server
– Failure detection, join/leave propagation
– Output: NS_Set
Reliable communication
– Message received or failure detected
– E.g., TCP
17
Algorithm – Take 1
Upon an NS_Set, send a prop to the other servers with the NS_Set and the current view id
Store incoming props
When props for the NS_Set have been received from all servers, deliver the new view:
– Members - the NS_Set
– Id higher than all previous
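A minimal C++ sketch of Take 1 under the model above (transport and NS wiring omitted; MosheTake1 and all other names are illustrative, not taken from the paper's code):

    #include <map>
    #include <set>
    #include <string>

    using NSSet = std::set<std::string>;

    struct Proposal {
        std::string sender;
        NSSet ns_set;
        long view_id;   // sender's current view id
    };

    struct ViewOut {
        NSSet members;
        long id;
    };

    // Take 1: propose on every NS_Set, deliver when props arrive from everyone.
    class MosheTake1 {
    public:
        explicit MosheTake1(std::string self) : self_(std::move(self)) {}

        // Called by the local NS module with a new NS_Set.
        // Returns the proposal to send to the other servers.
        Proposal OnNSSet(const NSSet& ns_set) {
            current_set_ = ns_set;
            props_.clear();
            Proposal p{self_, ns_set, view_id_};
            OnProposal(p);           // count our own proposal too
            return p;
        }

        // Called for every proposal received (including our own).
        // Returns true and fills 'out' when a view can be delivered.
        bool OnProposal(const Proposal& p, ViewOut* out = nullptr) {
            // Sketch: a full version would store props for other NS_Sets.
            if (p.ns_set != current_set_) return false;
            props_[p.sender] = p;
            if (props_.size() == current_set_.size()) {   // props from all members
                view_id_ = view_id_ + 1;                  // id higher than all previous (simplified)
                if (out) *out = ViewOut{current_set_, view_id_};
                return true;
            }
            return false;
        }

    private:
        std::string self_;
        NSSet current_set_;
        std::map<std::string, Proposal> props_;
        long view_id_ = 0;
    };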
18
Optimistic Case
Once all servers get the same last NS_Set:
– All send props for this NS_Set
– All props reach all servers
– All servers use the props to deliver the same last view
19
Out-of-Sync Case: Unexpected Proposal
[Diagram: timeline of servers A, B, C with +C and -C NS events; A receives a prop it did not expect]
To avoid deadlock: A must respond. But how?
20
Algorithm – Take 2
Upon an unexpected prop for an NS_Set, join in:
– Send a prop to the other servers with that NS_Set and the current view id
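As an illustrative fragment (types mirror the Take 1 sketch; everything here is assumed), the Take 2 rule amounts to adopting the NS_Set carried by the unexpected prop and answering with our own:

    #include <map>
    #include <set>
    #include <string>

    using NSSet = std::set<std::string>;
    struct Proposal { std::string sender; NSSet ns_set; long view_id; };

    // Sketch of the Take 2 rule only: a server that receives a proposal for an
    // NS_Set it has not proposed for "joins in" with its own proposal.
    class Take2Rule {
    public:
        Take2Rule(std::string self, long view_id)
            : self_(std::move(self)), view_id_(view_id) {}

        // Returns the proposal this server should send in response.
        Proposal OnUnexpectedProposal(const Proposal& p) {
            current_set_ = p.ns_set;     // adopt the NS_Set carried by the prop
            props_.clear();
            props_[p.sender] = p;        // keep the prop that triggered us
            Proposal mine{self_, p.ns_set, view_id_};
            props_[self_] = mine;
            return mine;                 // to be sent to the other servers
        }

    private:
        std::string self_;
        long view_id_;
        NSSet current_set_;
        std::map<std::string, Proposal> props_;
    };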
21
Does this Work?
[Diagram: servers A, B, C see interleaved +C / -C NS events and keep sending new props for different NS_Sets and views]
Live-lock!
22
Q: Can all deadlocks be detected by extra proposals? ... Turns out, no
[Diagram: feedback among Abstraction specs, Algorithms, and Verification]
Instead: add deadlock detection, with no extra messages
23
Algorithm – Take 3
[State-machine diagram with states Quiescent, Optimistic Algorithm, and Conservative Algorithm; transition labels: NS_Set, Opt props / All Opt props, Deadlock detection, Unexpected prop, C props / All C props. Props have increasing numbers.]
24
The Conservative Algorithm
Upon deadlock detection
– Send a C prop for the latest NS_Set with number = max(last_received, last_sent + 1)
– Update last_sent
Upon receipt of a C prop for the NS_Set with a number higher than last_sent
– Send a C prop with this number; update last_sent
Upon receipt of C props for the NS_Set with the same number from all, deliver the new view
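A hedged C++ sketch of just these conservative rules (bookkeeping only; sending, deadlock detection itself, and all names are assumed for illustration):

    #include <algorithm>
    #include <map>
    #include <set>
    #include <string>

    using NSSet = std::set<std::string>;

    struct CProp {
        std::string sender;
        NSSet ns_set;
        long number;     // C props carry increasing numbers
    };

    // Sketch of the conservative part of Take 3.
    class ConservativeRules {
    public:
        explicit ConservativeRules(std::string self) : self_(std::move(self)) {}

        // Upon deadlock detection: C prop for the latest NS_Set with
        // number = max(last_received, last_sent + 1).
        CProp OnDeadlockDetected(const NSSet& latest) {
            long n = std::max(last_received_, last_sent_ + 1);
            return Emit(latest, n);
        }

        // Upon receipt of a C prop: echo it with the same number if that number
        // is higher than anything we have sent; deliver when all agree.
        // Returns true (and fills 'view_members') when a view can be delivered.
        bool OnCProp(const CProp& p, NSSet* view_members) {
            last_received_ = std::max(last_received_, p.number);
            received_[p.number][p.sender] = p;
            if (p.number > last_sent_) {
                Emit(p.ns_set, p.number);   // in a real system, also send this out
            }
            const auto& same_number = received_[p.number];
            if (same_number.size() == p.ns_set.size()) {  // C props from all, same number
                *view_members = p.ns_set;
                return true;
            }
            return false;
        }

    private:
        CProp Emit(const NSSet& ns_set, long number) {
            last_sent_ = number;
            CProp mine{self_, ns_set, number};
            received_[number][self_] = mine;
            return mine;
        }

        std::string self_;
        long last_sent_ = 0;
        long last_received_ = 0;
        std::map<long, std::map<std::string, CProp>> received_;
    };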
25
Rationale for Termination
All deadlock cases are detected (see paper)
The conservative algorithm is invoked upon detection
Once all servers are in the conservative algorithm (without exiting), the number does not increase
– Exit only upon an NS_Set
Servers match the highest number received
Eventually, all send props with the max number
26
How Typical is the "Typical" Case?
Depends on the notification service (NS)
– Classify NS good behaviors: symmetric and transitive perception of failures
The typical case should be very common
Need to measure
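To make the classification concrete, here is an illustrative helper (not part of Moshe or CONGRESS) that checks whether a snapshot of the "suspects" relation reported by the NS modules is symmetric and transitive:

    #include <map>
    #include <set>
    #include <string>

    using Suspects = std::map<std::string, std::set<std::string>>;  // server -> servers it suspects

    // Symmetric: p suspects q  =>  q suspects p.
    bool IsSymmetric(const Suspects& s) {
        for (const auto& [p, sus] : s)
            for (const auto& q : sus)
                if (!s.count(q) || !s.at(q).count(p)) return false;
        return true;
    }

    // Transitive: p suspects q and q suspects r  =>  p suspects r (self excluded).
    bool IsTransitive(const Suspects& s) {
        for (const auto& [p, sus] : s)
            for (const auto& q : sus) {
                if (!s.count(q)) continue;
                for (const auto& r : s.at(q))
                    if (r != p && !sus.count(r)) return false;
            }
        return true;
    }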
27
Implementation
Uses CONGRESS [Anker et al.]
– Overlay network and NS for WAN
– Always symmetric, can be non-transitive
– Logical topology can be configured
28
The Experiment
Run over the Internet
– US: MIT, Cornell (CU), UCSD
– Taiwan: NTU
– Israel: HUJI
10 clients at each location (50 total)
– Continuously join/leave 10 groups
Ran for 10 days in one configuration, 2.5 days in another
29
Two Experiment Configurations
30
Percentage of "Typical" Cases
Configuration 1:
– 10,786 views, 10,661 in one round - 98.84%
Configuration 2:
– 2,559 views, 2,555 in one round - 99.84%
An overwhelming majority take one round!
Depends on topology; good for sparse overlays
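The one-round percentages follow directly from the counts (a quick check):

    \[
    \frac{10{,}661}{10{,}786} \approx 0.9884 = 98.84\%,
    \qquad
    \frac{2{,}555}{2{,}559} \approx 0.9984 = 99.84\%
    \]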
31
Performance
[Histogram of Moshe duration (x-axis: milliseconds, y-axis: number of cases): MIT, configuration 1, runs up to 4 seconds (97%)]
32
Performance: Configuration II
[Histogram of Moshe duration (x-axis: milliseconds, y-axis: number of cases): MIT, configuration 2, runs up to 3 seconds (99.7%)]
33
Performance over the Internet: What's Going On?
Without message loss, running time is close to the biggest round-trip time, ~650 ms
– As expected
Message loss has a big impact
Configuration 2 has much less loss
– More cases of good performance
34
Observation: the Triangle Inequality does not Hold over the Internet
Concurrently observed by the Detour and RON projects
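Stated for measured round-trip latencies d(·,·) between hosts A, B, C, the observation is that there are triples for which the inequality below fails, i.e. relaying through B can be faster than the direct path:

    \[
    d(A, C) \;\le\; d(A, B) + d(B, C)
    \]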
35
Conclusion: Moshe Features
Scalable divide-and-conquer architecture
– Less WAN communication
Avoids obsolete views
– Less load at unstable times
Usually runs in one round (optimism)
Uses an NS for the WAN
– Good abstraction
– Flexibility to configure in multiple ways
36
The Bigger Picture
[Diagram: Applications, Abstraction specs, Algorithms, and Implementation, connected through the following works]
– Optimistic: Sussman, Keidar, Marzullo 00
– Survey: Chockler, Keidar, Vitenberg 01
– VS Algorithm, Formal Study: Keidar, Khazan 00
– Moshe: Keidar, Sussman, Marzullo, Dolev 00
– Replication: Keidar, Dolev 96
– CSCW: Anker, Chockler, Dolev, Keidar 97
– VoD: Anker, Dolev, Keidar 99
37
Other Abstractions
Atomic Commit [Keidar, Dolev 98]
Atomic Broadcast [Keidar, Dolev 96]
Consensus [Keidar, Rajsbaum 01]
Dynamic Voting [Yeger-Lotem, Keidar, Dolev 97], [Ingols, Keidar 01]
Failure Detectors [Dolev, Friedman, Keidar, Malkhi 97], [Chockler, Keidar, Vitenberg 01]
38
New Directions
Performance study: practice → theory → practice
– Measure different parameters on WANs, etc.
– Find good models & metrics for performance study
– Find the best solutions
– Adapt to varying situations
Other abstractions
– User-centric: improve ease-of-use
– Framework for policy adaptation, e.g., for collaborative computing
– Real-time, mobile, etc.
39
Bon Appétit
40
Group Communication in the Real World
Isis is used in the NY Stock Market, the Swiss stock exchange, French air traffic control, a Navy radar system, ...
Enabling technology for
– Fault-tolerant cluster computing, e.g., IBM, Windows 2000 Cluster
– SANs, e.g., IBM, North Folk Networks
– Navy battleship DD-21
OMG standard for fault-tolerant CORBA
Emerging: Sun Jini group-RMI
Freeware, e.g., mod_log_spread for Apache
Research projects at Nortel, BBN, the military, ...
* LAN only; WAN should come next