1
Faults and fault-tolerance One of the selling points of a distributed system is that the system will continue to perform even if some components / processes fail.
2
Cause and effect Study what causes what. We view the effect of failures at our level of abstraction, and then try to mask it, or recover from it. Be familiar with the terms MTBF (Mean Time Between Failures) and MTTR (Mean Time To Repair)
3
Classification of failures Crash failure Omission failure Transient failure Byzantine failure Software failure Temporal failure Security failure
4
Crash failures Crash failure is irreversible. How can we distinguish between a process that has crashed and a process that is running very slowly? In a synchronous system, it is easy to detect a crash failure (using heartbeat signals and timeouts), but in asynchronous systems, detection is never accurate. Some failures may be complex and nasty. Arbitrary deviation from program execution is a form of failure that may not be as “nice” as a crash. Fail-stop failure is a simple abstraction that mimics crash failure when program execution becomes arbitrary. Such implementations help detect which processor has failed. If a system cannot tolerate fail-stop failures, then it cannot tolerate crashes.
5
Omission failures Message lost in transit. May happen due to various causes, like: transmitter malfunction, buffer overflow, collisions at the MAC layer, receiver out of range.
6
Transient failure (Hardware) Arbitrary perturbation of the global state. May be induced by power surges, weak batteries, lightning, radio-frequency interference, etc. (Software) Heisenbugs are a class of temporary internal faults and are intermittent. They are essentially permanent faults whose conditions of activation occur rarely or are not easily reproducible, so they are harder to detect during the testing phase. Over 99% of bugs in IBM DB2 production code are non-deterministic and transient.
7
Byzantine failure Anything goes! Includes every conceivable form of erroneous behavior. Numerous possible causes. Includes malicious behaviors (like a process executing a different program instead of the specified one) too. Most difficult kind of failure to deal with.
8
Software failures Coding error or human error, design flaws, memory leaks, incomplete specification (example: Y2K). Many failures (like crash, omission, etc.) can be caused by software bugs too.
9
Specification of faulty behavior

program example1;
define x : boolean (initially x = true);
{a, b are messages}
do
  {S}: x → send a   {specified action}
  {F}: true → send b   {faulty action}
od

A possible trace: a a a a b a a a b b a a a a a a a …
10
Specifying Byzantine Faults

program example2;
define j : integer, flag : boolean, x : buffer;
{a, b are messages}
initially j = 0, flag = false;
do
  ~flag ∧ message = a → x := a; flag := true
  (j < N) ∧ flag → send x to j; j := j + 1
  j = N → j := 0; flag := false
od
F: (Byzantine) flag → x := b   {b ≠ a}
11
Specifying Byzantine Faults

program example3;
define k : integer, x : boolean;
initially k = 0, x = true;
S: do
  k < 2 → send k; k := k + 1
  x ∧ (k = 2) → send k; k := k + 1
  k ≥ 3 → k := 0
F: x → x := false
  ~x ∧ (k = 2) → send 9; k := k + 1
od
12
Specifying Temporal Failures

program example4 {for process j};
define f[i] : boolean {initially f[i] = false};
S: do
  ~f[i] ∧ message received from process i → skip
F: timeout (i, j) → f[i] := true
od
13
Fault-tolerance F-intolerant vs. F-tolerant systems. A system that tolerates failures of type F is called F-tolerant. Four types of tolerance: masking, non-masking, fail-safe, graceful degradation.
14
Fault-tolerance P is the invariant of the original fault-free system. Q represents the worst possible behavior of the system when failures occur; it is called the fault span. Q is closed under S or F, and P ⊆ Q.
15
Fault-tolerance Masking tolerance: P = Q (neither safety nor liveness is violated). Non-masking tolerance: P ⊂ Q (the safety property may be temporarily violated, but not liveness; eventually the safety property is restored).
16
Classifying fault-tolerance Masking tolerance. Application runs as it is. The failure does not have a visible impact. All properties (both liveness & safety) continue to hold. Non-masking tolerance. Safety property is temporarily affected, but not liveness. Example 1. Clocks lose synchronization, but recover soon thereafter. Example 2. Multiple processes temporarily enter their critical sections, but thereafter, the normal behavior is restored. Backward error-recovery vs. forward error-recovery
17
Backward vs. forward error recovery Backward error recovery: when the safety property is violated, the computation rolls back and resumes from a previous correct state. Forward error recovery: the computation does not care about getting the history right, but moves on, as long as eventually the safety property is restored. True for stabilizing systems.
18
Classifying fault-tolerance Fail-safe tolerance: the given safety predicate is preserved, but liveness may be affected. Example: due to failure, no process can enter its critical section for an indefinite period; in a traffic crossing, failure changes the lights in both directions to red. Graceful degradation: the application continues, but in a “degraded” mode. Much depends on what kind of degradation is acceptable. Example: consider message-based mutual exclusion; processes will enter their critical sections, but not in timestamp order.
19
Failure detection The design of fault-tolerant systems will be easier if failures can be detected. Detection depends on (1) the system model, and (2) the type of failures. Asynchronous systems are trickier. We first focus on synchronous systems only.
20
Detection of crash failures Failure can be detected using heartbeat messages (a periodic “I am alive” broadcast) and timeouts, provided that (1) the largest time to execute a step is known, and (2) channel delays have a known upper bound.
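A heartbeat-plus-timeout detector of this kind can be sketched in a few lines of Python (an illustrative sketch; the class and parameter names are invented for this example):

```python
class CrashDetector:
    """Suspects a process when its heartbeat is overdue.

    Sound only under the synchronous assumptions above: `timeout` must
    exceed the maximum step time plus the channel-delay upper bound.
    """

    def __init__(self, timeout):
        self.timeout = timeout
        self.last_seen = {}            # process id -> time of last heartbeat

    def heartbeat(self, pid, now):
        self.last_seen[pid] = now      # "I am alive" received from pid

    def suspects(self, now):
        # A process is suspected crashed if its heartbeat is overdue.
        return {p for p, t in self.last_seen.items() if now - t > self.timeout}

d = CrashDetector(timeout=2.0)
d.heartbeat("p1", now=0.0)
d.heartbeat("p2", now=0.0)
d.heartbeat("p1", now=1.5)             # p1 keeps beating; p2 falls silent
print(d.suspects(now=3.0))             # p2 is overdue
```

In an asynchronous system the same code yields only a suspicion, not certainty: a very slow process is indistinguishable from a crashed one.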
21
Detection of omission failures For FIFO channels: use sequence numbers with messages. For non-FIFO channels with bounded propagation delay: use timeouts. What about non-FIFO channels for which the upper bound of the delay is not known? Use unbounded sequence numbers and acknowledgments. But acknowledgments may be lost too, causing unnecessary retransmission of messages :-( Let us look at how a real protocol deals with omission…
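On a FIFO channel, sequence numbers make omissions directly observable: any gap in the arriving sequence can only mean a lost message. A minimal sketch (the function name is invented for this example):

```python
def missing(received_seqs):
    """Return the sequence numbers lost on a FIFO channel.

    Assumes numbering starts at 0 and the channel preserves order,
    so any skipped number corresponds to an omitted message.
    """
    expected, lost = 0, []
    for s in received_seqs:
        lost.extend(range(expected, s))   # everything skipped was dropped
        expected = s + 1
    return lost

print(missing([0, 1, 3, 4, 6]))   # messages 2 and 5 were lost
```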
22
Tolerating crash failures Triple modular redundancy (TMR) masks any single failure. N-modular redundancy masks up to m failures when N = 2m + 1: take a vote among the N outputs. What if the voting unit fails?
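The voting step can be sketched directly; with N = 2m + 1 replicas, up to m faulty outputs are outvoted (illustrative code, not tied to any particular hardware voter):

```python
from collections import Counter

def vote(outputs):
    """Return the strict-majority value among replica outputs, or raise."""
    value, count = Counter(outputs).most_common(1)[0]
    if count > len(outputs) // 2:
        return value
    raise ValueError("no strict majority")

# TMR (N = 3, m = 1): a single faulty replica is masked.
print(vote([42, 42, 7]))
```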
23
Tolerating omission failures A central theme in networking. Routers may drop messages, but reliable end-to-end transmission (say, from host A to host B across routers) is an important requirement. This implies that the communication must tolerate loss, duplication, and re-ordering of messages.
24
Stenning’s protocol

{program for process S}
define ok : boolean; next : integer;
initially next = 0, ok = true, both channels are empty;
do
  ok → send (m[next], next); ok := false
  (ack, next) is received → ok := true; next := next + 1
  timeout (R, S) → send (m[next], next)
od

{program for process R}
define r : integer;
initially r = 0;
do
  (m[ ], s) is received ∧ s = r → accept the message; send (ack, r); r := r + 1
  (m[ ], s) is received ∧ s ≠ r → send (ack, r − 1)
od
25
Observations on Stenning’s protocol Both messages and acks may be lost. Q. Why is the last ack reinforced by R when s ≠ r? A. It is needed to guarantee progress. Progress is guaranteed, but the protocol is inefficient due to low throughput.
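A toy run of the sender/receiver loop makes the progress argument concrete. The sketch below models loss with an explicit drop schedule and folds the timeout/retransmission into one loop (all names are invented; lost acks and duplicate handling are elided for brevity):

```python
def stenning(msgs, drop):
    """Deliver msgs over a channel that loses the attempts listed in `drop`.

    The sender keeps retransmitting m[next] until its ack arrives, so
    every message is eventually delivered, in order, exactly once.
    """
    delivered, next_seq, attempt = [], 0, 0
    while next_seq < len(msgs):
        attempt += 1
        if attempt in drop:                # transmission lost: sender times out, resends
            continue
        delivered.append(msgs[next_seq])   # receiver sees expected seq, accepts
        next_seq += 1                      # ack received; sender advances
    return delivered

# Attempts 1 and 3 are lost, yet all messages get through in order.
print(stenning(["a", "b", "c"], drop={1, 3}))
```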
26
Sliding window protocol The sender continues the send action without receiving acknowledgements for at most w messages (w > 0); w is called the window size.
27
Sliding window protocol

{program for process S}
define next, last, w : integer;
initially next = 0, last = -1, w > 0;
do
  last + 1 ≤ next ≤ last + w → send (m[next], next); next := next + 1
  (ack, j) is received →
    if j > last → last := j
     | j ≤ last → skip
    fi
  timeout (R, S) → next := last + 1   {retransmission begins}
od

{program for process R}
define j : integer;
initially j = 0;
do
  (m[next], next) is received →
    if j = next → accept message; send (ack, j); j := j + 1
     | j ≠ next → send (ack, j − 1)
    fi
od
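A loss-free run of the sliding window protocol can be traced in a few lines (a sketch with invented names; retransmission is omitted since no message is dropped here):

```python
def sliding_window(msgs, w):
    """Send msgs with window size w; return (accepted list, send order)."""
    last, nxt, r = -1, 0, 0            # last acked, next to send, receiver cursor
    accepted, sends = [], []
    while last < len(msgs) - 1:
        # sender: transmit while the window last+1 .. last+w has room
        while nxt < len(msgs) and nxt <= last + w:
            sends.append(nxt)
            nxt += 1
        # receiver: accept the next in-order message; its ack advances `last`
        accepted.append(msgs[r])
        last, r = r, r + 1
    return accepted, sends

acc, sends = sliding_window(["a", "b", "c", "d"], w=2)
print(acc)      # all messages accepted in order
print(sends)    # at most w messages outstanding at any time
```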
28
Why does it work? Lemma. Every message is accepted exactly once. Lemma. m[k] is always accepted before m[k+1]. (Argue that these are true.) Observation. It uses unbounded sequence numbers. This is bad. Can we avoid it?
29
Theorem If the communication channels are non-FIFO, and the message propagation delays are arbitrarily large, then using bounded sequence numbers, it is impossible to design a window protocol that can withstand the (1) loss, (2) duplication, and (3) reordering of messages.
30
Why unbounded sequence numbers? Consider (m[k], k); then (m′, k), a retransmitted version of m[k]; and (m″, k), a new message using the same sequence number k. We want to accept m″ but reject m′. How is that possible?
31
Alternating Bit Protocol ABP is a link-layer protocol. It works on FIFO channels only, and guarantees reliable message delivery with a 1-bit sequence number (this is the traditional version, with window size = 1). A sample exchange: (m[0], 0), (ack, 0), (m[1], 1), (m[2], 0), … Study how this works.
32
Alternating Bit Protocol

program ABP;
{program for process S}
define sent, b : 0 or 1; next : integer;
initially next = 0, sent = 1, b = 0, and channels are empty;
do
  sent ≠ b → send (m[next], b); next := next + 1; sent := b
  (ack, j) is received →
    if j = b → b := 1 − b
     | j ≠ b → skip
    fi
  timeout (R, S) → send (m[next−1], b)
od

{program for process R}
define j : 0 or 1; {initially j = 0}
do
  (m[ ], b) is received →
    if j = b → accept the message; send (ack, j); j := 1 − j
     | j ≠ b → send (ack, 1 − j)
    fi
od
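The receiver's use of the 1-bit sequence number can be isolated in a small sketch (names invented for this example). On a FIFO channel, the single bit is enough to reject retransmitted duplicates:

```python
def abp_receiver(frames):
    """Process (payload, bit) frames arriving in FIFO order."""
    j, accepted = 0, []
    for payload, b in frames:
        if b == j:
            accepted.append(payload)   # expected bit: accept and flip
            j = 1 - j                  # (the real protocol also sends (ack, b))
        # else: duplicate; drop it (the real protocol re-sends (ack, 1-j))
    return accepted

# m[0] arrives twice (a retransmission), but is accepted only once.
frames = [("m0", 0), ("m0", 0), ("m1", 1), ("m2", 0)]
print(abp_receiver(frames))
```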
33
How TCP works Supports an end-to-end logical connection between any two computers on the Internet. The basic idea is the same as that of sliding window protocols, but TCP uses bounded sequence numbers! It is safe to re-use a sequence number when it is unique. With high probability, a random 32- or 64-bit number is unique. Also, current sequence numbers are flushed out of the system after a time 2d, where d is the round-trip delay.
34
How TCP works
35
Three-way handshake. Sequence numbers are unique w.h.p. Why is knowledge of the round-trip delay important? What if the window is too small / too large? What if the timeout period is too small / too large? Adaptive retransmission: the receiver can throttle the sender and control the window size to save its buffer space.
36
Distributed Consensus Reaching agreement is a fundamental problem in distributed computing. Some examples are: leader election / mutual exclusion; commit or abort in distributed transactions; reaching agreement about which process has failed; clock phase synchronization; air traffic control systems, where all aircraft must have the same view. If there is no failure, then reaching consensus is trivial: all-to-all broadcast, followed by applying a choice function. Consensus in the presence of failures can however be complex.
37
Problem Specification Each process pi starts with an input value ui and produces an output v. All outputs must be identical, and v must equal the value at some input line.
38
Problem Specification Termination. Every non-faulty process must eventually decide. Agreement. The final decision of every non-faulty process must be identical. Validity. If every non-faulty process begins with the same initial value v, then their final decision must be v.
39
Asynchronous Consensus Seven members of a busy household decided to hire a cook, since they do not have time to prepare their own food. Each member separately interviewed every applicant for the cook’s position. Depending on how it went, each member voted "yes" (means “hire”) or "no" (means “don't hire”). These members will now have to communicate with one another to reach a uniform final decision about whether the applicant will be hired. The process will be repeated with the next applicant, until someone is hired. Consider various modes of communication…
40
Asynchronous Consensus Theorem. In a purely asynchronous distributed system, the consensus problem is impossible to solve if even a single process crashes. This famous result is due to Fischer, Lynch, and Paterson (commonly known as the FLP result, 1985).
41
Proof Bivalent and Univalent states A decision state is bivalent, if starting from that state, there exist two distinct executions leading to two distinct decision values 0 or 1. Otherwise it is univalent. A univalent state may be either 0-valent or 1-valent.
42
Proof Lemma. No execution can lead from a 0-valent to a 1-valent state or vice versa. Proof. Follows from the definition of 0-valent and 1-valent states.
43
Proof Lemma. Every consensus protocol must have a bivalent initial state. Proof by contradiction. Suppose not. Then consider the following sequence of initial states, where consecutive states differ in exactly one position:

s[0] = 0 0 0 0 0 0 … 0 0 0   {0-valent}
…
s[j] = 0 0 0 0 0 0 … 0 0 1   {0-valent}
s[j+1] = 0 0 0 0 0 0 … 0 1 1   {1-valent}
…
s[n-1] = 1 1 1 1 1 1 … 1 1 1   {1-valent}

Somewhere in this sequence a 0-valent state s[j] is adjacent to a 1-valent state s[j+1], and they differ in the jth position only. What if process (j+1) crashes at the first step?
44
Lemma. In a consensus protocol, starting from any initial bivalent state, there must exist a reachable bivalent state T, such that every action taken by some process p in state T leads to either a 0-valent or a 1-valent state. Actions 0 and 1 from T must be taken by the same process p. Why? (The adversary tries to prevent the system from reaching consensus.)
45
Proof of FLP (continued) Lemma (restated). In a consensus protocol, starting from any initial bivalent state I, there must exist a reachable bivalent state T, such that every action taken by some process p in state T leads to either a 0-valent or a 1-valent state. Actions 0 and 1 from T must be taken by the same process p. Why?
46
Case 1. p reads, q writes. Assume shared-memory communication, and p ≠ q; various cases are possible. From T, the step by p leads to the 0-valent state T0 (decision = 0), and the step by q leads to the 1-valent state T1 (decision = 1). Starting from T, let e1 be a computation that excludes any step by p; such a computation must exist, since p can crash at any time. Let p crash after reading. Then e1 is a valid computation from T0 too. To all non-faulty processes, these two computations are identical, but the outcomes are different! This is not possible!
47
Proof (continued) Case 2. p writes, q writes, both on the same variable, and p writes first. From T, the write by p leads to the 0-valent state T0 (decision = 0), and the write by q leads to the 1-valent state T1 (decision = 1). From T, let e1 be a computation that excludes any step by p. Let p crash after writing. Then e1 is a valid computation from T0 too. To all non-faulty processes, these two computations are identical, but the outcomes are different!
48
Proof (continued) Case 3. Both p and q write, but on different variables. From T, the write by p leads to a 0-valent state T0, and the write by q leads to a 1-valent state T1. Regardless of the order of these writes, both computations lead to the same intermediate global state Z. Is Z 1-valent or 0-valent? Either answer yields a contradiction.
49
Proof (continued) Similar arguments can be made for communication using the message-passing model too (see Lynch’s book). These cases lead to the fact that p and q cannot be distinct processes, so p = q. Call p the decider process. What if p crashes in state T? No consensus is reached!
50
Conclusion In a purely asynchronous system, there is no solution to the consensus problem if a single process crashes. Note that this is true for deterministic algorithms only. Solutions do exist for the consensus problem using randomized algorithms, or using the synchronous model.
51
Byzantine Generals Problem Describes and solves the consensus problem in the synchronous model of communication: processor speeds have lower bounds and communication delays have upper bounds; the network is completely connected; processes undergo Byzantine failures, the worst possible kind of failure.
52
Byzantine Generals Problem n generals {0, 1, 2,..., n-1} decide about whether to "attack" or to "retreat" during a particular phase of a war. The goal is to agree upon the same plan of action. Some generals may be "traitors" and therefore send either no input, or send conflicting inputs to prevent the "loyal" generals from reaching an agreement. Devise a strategy, by which every loyal general eventually agrees upon the same plan, regardless of the action of the traitors.
53
Byzantine Generals Attack = 1, Retreat = 0. Every general broadcasts his judgment to everyone else; these are the inputs to the consensus protocol. A traitor may send out conflicting inputs, e.g., sending {1, 1, 0, 0} to some generals and {1, 1, 0, 1} to others.
54
Byzantine Generals We need to devise a protocol so that every peer (call it a lieutenant) receives the same value from any given general (call it a commander). Clearly, the lieutenants will have to use secondary information. Note that the roles of the commander and the lieutenants will rotate among the generals.
55
Interactive consistency specifications IC1. Every loyal lieutenant receives the same order from the commander. IC2. If the commander is loyal, then every loyal lieutenant receives the order that the commander sends.
56
The Communication Model Oral Messages: messages are not corrupted in transit; messages can be lost, but the absence of a message can be detected; when a message is received (or its absence is detected), the receiver knows the identity of the sender (or of the defaulter). OM(m) represents an interactive consistency protocol in the presence of at most m traitors.
57
An Impossibility Result Using oral messages, no solution to the Byzantine Generals problem exists with three or fewer generals and one traitor. Consider the two cases: the commander is loyal but one lieutenant is a traitor, and the commander himself is the traitor.
58
Impossibility result Using oral messages, no solution to the Byzantine Generals problem exists with 3m or fewer generals and m traitors ( m > 0 ). Hint. Divide the 3m generals into three groups of m generals each, such that all the traitors belong to one group. This scenario is no better than the case of three generals and one traitor.
59
The OM(m) algorithm A recursive algorithm: OM(m) calls OM(m−1), which calls OM(m−2), and so on down to OM(0). OM(0) = direct broadcast.
60
The OM(m) algorithm 1. Commander i sends out a value v (0 or 1). 2. If m > 0, then every lieutenant j ≠ i, after receiving v, acts as a commander and initiates OM(m−1) with everyone except i. 3. Every lieutenant collects (n−1) values: (n−2) values sent by the lieutenants using OM(m−1), and one direct value from the commander. Then he picks the majority of these values as the order from i.
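Steps 1 to 3 can be sketched recursively in Python. This is an illustrative simulation, not the textbook pseudocode: the toy adversary simply flips the bit (one of many possible Byzantine behaviors), and all helper names are invented.

```python
from collections import Counter

def majority(values):
    return Counter(values).most_common(1)[0][0]

def send(sender, value, traitors):
    # A traitor may relay an arbitrary value; this toy adversary flips the bit.
    return 1 - value if sender in traitors else value

def om(m, commander, lieutenants, value, traitors):
    """Return {lieutenant: decided value} after running OM(m)."""
    # Step 1: the commander sends v to every lieutenant.
    received = {j: send(commander, value, traitors) for j in lieutenants}
    if m == 0:
        return received                # OM(0): take the value directly
    decision = {}
    for j in lieutenants:
        # Step 2: each other lieutenant k relays its value via OM(m-1).
        relayed = [om(m - 1, k, [x for x in lieutenants if x != k],
                      received[k], traitors)[j]
                   for k in lieutenants if k != j]
        # Step 3: majority of the n-2 relayed values plus the direct one.
        decision[j] = majority(relayed + [received[j]])
    return decision

# n = 4, m = 1, traitor = general 3; loyal commander 0 sends 1.
print(om(1, 0, [1, 2, 3], 1, traitors={3}))   # loyal lieutenants 1, 2 decide 1
```

One simplification to note: this toy traitor sends the same (flipped) value to everyone, whereas a full Byzantine adversary could send different values to different peers.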
61
Example of OM(1)
62
Example of OM(2) The recursion unfolds as OM(2) → OM(1) → OM(0).
63
Proof of OM(m) Lemma. Let the commander be loyal, and n > 2m + k, where m = maximum number of traitors. Then OM(k) satisfies IC2
64
Proof of OM(m) Proof. If k = 0, then the result trivially holds. Let it hold for k = r (r ≥ 0), i.e. OM(r) satisfies IC2. We have to show that it holds for k = r + 1 too. Since n > 2m + r + 1, we have n − 1 > 2m + r, so OM(r) holds for the lieutenants in the bottom row. Each loyal lieutenant will collect n − m − 1 identical good values and m bad values, so the bad values are voted out (n − m − 1 > m + r implies n − m − 1 > m).
65
The final theorem Theorem. If n > 3m where m is the maximum number of traitors, then OM(m) satisfies both IC1 and IC2. Proof. Consider two cases: Case 1. Commander is loyal. The theorem follows from the previous lemma (substitute k = m). Case 2. Commander is a traitor. We prove it by induction. Base case. m=0 trivial. ( Induction hypothesis ) Let the theorem hold for m = r. We have to show that it holds for m = r+1 too.
66
Proof (continued) There are n > 3(r + 1) generals and r + 1 traitors. Excluding the commander, there are > 3r + 2 generals, of which r are traitors. So > 2r + 2 lieutenants are loyal. Since 3r + 2 > 3r, OM(r) satisfies IC1 and IC2.
67
Proof (continued) In OM(r+1), a loyal lieutenant chooses the majority from: (1) > 2r + 1 values obtained from the loyal lieutenants via OM(r), (2) the r values from the traitors, and (3) the value directly from the commander. The values collected in parts (1) and (3) are the same for all loyal lieutenants: it is the same value that these lieutenants received from the commander. Also, by the induction hypothesis, in part (2) each loyal lieutenant receives identical values from each traitor. So every loyal lieutenant collects the same set of values.
68
Acknowledgements This part relies heavily on Dr. Sukumar Ghosh’s University of Iowa Distributed Systems course 22C:166.