© Chinese University, CSE Dept. Distributed Systems
Topic 8: Fault Tolerance and Replication
Dr. Michael R. Lyu
Computer Science & Engineering Department, The Chinese University of Hong Kong
Outline
1 Introduction
2 Transaction Recovery
3 Failure Classification and Masking
4 Replication
5 Summary
1 Introduction
Achieving reliability: fault tolerance and recovery.
Fault-tolerant applications:
– transaction based
– process control
Recovery aspects of distributed transactions.
The design of real-time services:
– fail-stop vs Byzantine failure
Masking failures in a service.
1 Basic Approaches
Fault detection:
– Push model: server objects send heartbeat messages to the fault manager.
– Pull model: the fault manager polls (or pings) server objects through their is_alive() interface.
Data recovery:
– Checkpoint and rollback: save the server object states; roll back to the checkpointed states at recovery.
– Message logging and replay: log all messages; replay them at recovery.
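The pull model above can be sketched in Python. All names here (ServerObject, FaultManager) are illustrative; a real fault manager would poll over the network and handle timeouts rather than call a local method.

```python
# Hypothetical sketch of the pull model: the fault manager polls each
# server object's is_alive() and reports those that no longer respond.

class ServerObject:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

    def is_alive(self):
        # Stands in for a remote liveness probe.
        return self.healthy

class FaultManager:
    def __init__(self, servers):
        self.servers = servers

    def poll(self):
        """Return the names of servers that failed the liveness check."""
        return [s.name for s in self.servers if not s.is_alive()]

servers = [ServerObject("a"), ServerObject("b", healthy=False)]
fm = FaultManager(servers)
print(fm.poll())  # ['b']
```

In the push model the roles are reversed: servers send heartbeats on a timer, and the fault manager flags any server whose last heartbeat is older than a threshold.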
2 Transaction Recovery
Recovery concerns data durability (permanent and volatile data) and failure atomicity.
A server keeps data in volatile memory and records committed data in a recovery file.
The recovery manager:
– saves data items in permanent storage
– restores the server's data items after a crash
– reorganizes the recovery file for better performance
– reclaims storage space (in the recovery file)
2 Entries in Recovery File

Type of entry      | Description of contents of entry
Object             | A value of an object.
Transaction status | Transaction identifier, transaction status (prepared, committed, aborted) and other status values used for two-phase commit.
Intentions list    | Transaction identifier and a sequence of intentions, each of which consists of a pair <identifier of object, Pi>, where Pi is the position in the recovery file of the value of the object.
2 Intentions List
An intentions list of a server is a list of the data item names and values altered by a transaction.
The server uses the intentions list when a transaction commits or aborts.
When a server prepares to commit, it must have saved the intentions list in its recovery file.
The recovery files contain sufficient information to ensure the transaction is committed by all the servers.
Two approaches: logging and shadow versions.
2.1 Logging
A log contains the history of all the transactions performed by a server.
The recovery file contains a recent snapshot of the values of all the data items in the server, followed by a history of transactions.
When a server is prepared to commit, the recovery manager appends all the data items in its intentions list to the recovery file.
The recovery manager associates a unique identifier with each data item.
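The logging scheme can be sketched as follows. The entry formats are assumed for illustration (the real recovery file lives in permanent storage, not an in-memory list):

```python
# Sketch of log-based recovery: the recovery file holds a snapshot of all
# data items, then a history of appended intentions-list entries, each
# tagged with a unique identifier.
import itertools

_ids = itertools.count(1)  # unique identifiers for appended data items

def checkpoint(recovery_file, data_items):
    # Record a snapshot of the current committed values.
    recovery_file.append(("snapshot", dict(data_items)))

def prepare_commit(recovery_file, txn_id, intentions):
    # Append every (name, value) in the intentions list, then the
    # transaction-status entry, before voting to commit.
    for name, value in intentions:
        recovery_file.append(("object", next(_ids), name, value))
    recovery_file.append(("status", txn_id, "prepared"))

log = []
checkpoint(log, {"A": 100, "B": 200})
prepare_commit(log, "T", [("A", 80), ("B", 220)])
print(log[-1])  # ('status', 'T', 'prepared')
```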
2.1 Log for Banking Service
[Figure: example recovery-file log for the banking service.]
2.1 Recovery by Logging
Recovery of data items:
– The recovery manager is responsible for restoring the server's data items.
– The most recent information is at the end of the log.
– The recovery manager gets the corresponding intentions list from the recovery file.
Reorganizing the recovery file:
– Checkpointing: the process of writing the current committed values (a checkpoint) to a new recovery file.
– Can be done periodically or right after recovery.
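Because the most recent information is at the end of the log, recovery can be sketched as a backward scan (entry formats are assumed; a full implementation would also check transaction-status entries so only committed values are restored):

```python
# Scan the log backwards, keeping the newest value of each data item and
# stopping at the checkpoint; older entries are not needed.
def recover(log):
    state, seen = {}, set()
    for entry in reversed(log):
        kind = entry[0]
        if kind == "object":
            _, _, name, value = entry
            if name not in seen:      # newest value wins
                state[name] = value
                seen.add(name)
        elif kind == "snapshot":
            for name, value in entry[1].items():
                state.setdefault(name, value)
            break                     # checkpoint reached: stop scanning
    return state

log = [("snapshot", {"A": 100, "B": 200, "C": 300}),
       ("object", 1, "A", 80), ("object", 2, "B", 220)]
print(recover(log))  # {'B': 220, 'A': 80, 'C': 300}
```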
2.2 Shadow Versions
The shadow versions technique uses a map to locate versions of the server's data items in a file called a version store.
The versions written by each transaction are shadows of the previous committed versions.
When prepared to commit, any changed data items are appended to the version store.
When committing, a new map is made; when complete, the new map replaces the old map.
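A sketch of the shadow-versions idea (class and method names are illustrative): committed values live in an append-only version store, and the single map swap is what makes a commit take effect atomically.

```python
# Shadow versions: new values are appended as "shadows"; readers keep using
# the old map until the new map replaces it in one step.
class VersionStore:
    def __init__(self, initial):
        self.store = []                 # append-only version store
        self.map = {}                   # item name -> position in store
        for name, value in initial.items():
            self.map[name] = self._append(value)

    def _append(self, value):
        self.store.append(value)
        return len(self.store) - 1      # position of the new version

    def read(self, name):
        return self.store[self.map[name]]

    def commit(self, intentions):
        # Write shadows first; the map swap makes them visible atomically.
        new_map = dict(self.map)
        for name, value in intentions.items():
            new_map[name] = self._append(value)
        self.map = new_map

vs = VersionStore({"A": 100, "B": 200, "C": 300})
vs.commit({"A": 80, "B": 220})
print(vs.read("A"), vs.read("C"))  # 80 300
```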
2.2 Shadow Versions Example

Map at start:       A → P0,  B → P0', C → P0"
Map when T commits: A → P1,  B → P2,  C → P0"

Version store: P0 = 100, P0' = 200, P0" = 300 (checkpoint), P1 = 80, P2 = 220, P3 = 278, P4 = 242
2.2 Log and 2PC
[Figure: recovery-file entries used with two-phase commit. The coordinator of transaction T logs "prepared" with an intentions list, a participant list, then "committed"; a participant of transaction U logs "prepared" with an intentions list and its coordinator, then "uncertain", then "committed".]
2.3 Recovery of 2PC

Role        | Status    | Action of recovery manager
Coordinator | prepared  | No decision had been reached before the server failed. It sends abortTransaction to all the servers in the participant list and adds the transaction status aborted to its recovery file. Same action for state aborted. If there is no participant list, the participants will eventually time out and abort the transaction.
Coordinator | committed | A decision to commit had been reached before the server failed. It sends doCommit to all the participants in its participant list (in case it had not done so before) and resumes the two-phase commit protocol at step 4 (Fig 17.5).
Coordinator | done      | No action is required.
Participant | committed | The participant sends a haveCommitted message to the coordinator (in case this was not done before it failed). This will allow the coordinator to discard information about this transaction at the next checkpoint.
Participant | uncertain | The participant failed before it knew the outcome of the transaction. It cannot determine the status of the transaction until the coordinator informs it of the decision. It will send getDecision to the coordinator to determine the status; when it receives the reply it will commit or abort accordingly.
Participant | prepared  | The participant has not yet voted and can abort the transaction.
3 Failure Classification and Masking
Two contrasting points on distributed systems:
– The operation of a service depends on the correct operation of other services.
– Joint execution of a set of servers is less likely to fail than any one of the individual components.
Designers of a service should specify its correct behavior and the ways it may fail.
Failure semantics: a description of the ways a service may fail; clients can use it to mask the service's failures.
3.1 Characteristics of Failures

Class of failure | Subclass                 | Description
Omission failure |                          | A server omits to respond to a request.
Response failure |                          | Server responds incorrectly to a request.
                 | Value failure            | Returns wrong value.
                 | State transition failure | Has wrong effect on resources (for example, sets wrong values in data items).
Timing failure   |                          | Response not within a specified time interval.
Crash failure    |                          | Repeated omission failure: a server repeatedly fails to respond to requests until it is restarted.
                 | Amnesia-crash            | A server starts in its initial state, having forgotten its state at the time of the crash.
                 | Pause-crash              | A server restarts in the state before the crash.
                 | Halting-crash            | Server never restarts.
3.2 Fail-Stop vs Byzantine Failures
A fail-stop failure is one in which the server fails cleanly: it either functions correctly or it crashes.
Byzantine failure behavior describes the worst possible failure semantics of a server: it fails maliciously or arbitrarily.
Byzantine agreement is intended to achieve correct behavior within response-time requirements in the presence of faulty hardware.
Its cost depends on whether messages can be authenticated.
3.2 Byzantine Generals
[Figure: the Byzantine generals problem.]
3.2 Byzantine Agreement Algorithms
Byzantine agreement algorithms send more messages and use more active servers.
When messages can be authenticated, 2N+1 servers are required to tolerate N bad servers.
When messages cannot be authenticated, 3N+1 servers are required.
With enough good servers, solutions require O(N^2) messages with constant delay time.
The good news is...
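The replica counts above can be captured in a one-line helper (illustrative):

```python
# Tolerating N Byzantine servers needs 2N+1 replicas when messages are
# authenticated (signed), and 3N+1 when they are not.
def replicas_needed(n_faulty, authenticated):
    return 2 * n_faulty + 1 if authenticated else 3 * n_faulty + 1

print(replicas_needed(1, authenticated=True))   # 3
print(replicas_needed(1, authenticated=False))  # 4
```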
3.3 Hierarchical Masking of Failures
We describe two approaches to masking failures: hierarchical failure masking and group failure masking.
In hierarchical failure masking, a higher-level server tries to mask failures at the lower level.
When a lower-level failure cannot be masked, it is converted to a higher-level exception.
Example: a server crash is masked in the request-reply (RR) protocol by raising an exception to the client.
3.4 Group Failure Masking
A service can be made fault tolerant by implementing it with a group of servers.
A group is t-fault tolerant if it can tolerate up to t member failures.
For fail-stop failures, t+1 servers are needed; for Byzantine failures, 2t+1 servers are needed.
To ensure correctness, the server program must be deterministic, and each operation must be atomic with respect to other operations.
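Group masking by majority voting can be sketched as follows: with 2t+1 replicas, up to t arbitrary (Byzantine) replies cannot outvote the correct majority. The function name is illustrative.

```python
# Majority voting over replica replies: accept a value only if a strict
# majority of the replies agree on it.
from collections import Counter

def vote(replies):
    value, count = Counter(replies).most_common(1)[0]
    if count <= len(replies) // 2:
        raise RuntimeError("no majority: too many faulty replies")
    return value

# 2t+1 = 3 replicas masking t = 1 arbitrary (wrong) reply:
print(vote([42, 42, 99]))  # 42
```

For fail-stop failures no voting is needed, which is why t+1 replicas suffice: any surviving replica's answer is correct.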
3.4 Group Failure Masking
A group can be closely synchronized or loosely synchronized.
In a closely synchronized group of servers:
– All members execute requests immediately.
– Server programs are both deterministic and atomic.
– Suitable for real-time systems and Byzantine failures.
In a loosely synchronized group of servers:
– One server (the primary) performs requests; the others (backups) log the requests and take over if needed.
– Requires fewer resources but takes longer to recover.
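The loosely synchronized (primary-backup) scheme can be sketched as follows. All names are assumed for illustration; a real system would replicate over the network and detect primary failure via the fault-detection mechanisms above.

```python
# Loosely synchronized group: the primary executes requests, backups only
# log them; on failure a backup replays its log before taking over.
class Replica:
    def __init__(self):
        self.state, self.log = 0, []

    def apply(self, delta):
        self.state += delta

class Group:
    def __init__(self, n_backups=2):
        self.primary = Replica()
        self.backups = [Replica() for _ in range(n_backups)]

    def request(self, delta):
        self.primary.apply(delta)          # primary executes immediately
        for b in self.backups:
            b.log.append(delta)            # backups only record the request

    def fail_over(self):
        new_primary = self.backups.pop(0)
        for delta in new_primary.log:      # recovery is slow: replay the log
            new_primary.apply(delta)
        self.primary = new_primary

g = Group()
g.request(5)
g.request(-2)
g.fail_over()
print(g.primary.state)  # 3
```

The replay step is why loose synchronization "takes longer to recover": a closely synchronized group would have every member's state current at all times.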
4 Replication
Replication is the maintenance of on-line copies of data and resources, for performance, availability, and fault tolerance.
– Basic architectural model
– The process group approach
– The primary-backup replication model
– The active replication model
– The gossip architecture
4 Replication Issues
Replica management models consider the trade-off between accuracy and response time:
– simple asynchronous model
– totally synchronous model
– quorum-based schemes
– causality-ordered
Multicast updates to a process group.
Read/write ratio.
4.1 Basic Architectural Model
[Figure: clients (C) issue requests and receive replies through front ends (FE), which forward them to the service's replica managers (RM).]
4.1 System Model
The FE issues the request to one or more RMs.
Coordination: RMs coordinate in preparation for executing the request consistently.
– FIFO ordering: if an FE issues request r then r', all RMs handle r before r'.
– Causal ordering: if r → r', all RMs handle r before r'.
– Total ordering: if one RM handles r before r', all other RMs handle r before r'.
Execution: RMs execute the request tentatively.
Agreement: RMs reach consensus on the effect.
Response: one or more RMs respond to the FE.
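The causal-ordering test can be sketched with vector timestamps: r happened before r' when r's timestamp is less than or equal to r''s in every component and the two differ. This is illustrative; a real FE would attach such timestamps to requests.

```python
# Happened-before on vector timestamps: component-wise <= plus inequality.
def happened_before(v1, v2):
    return all(a <= b for a, b in zip(v1, v2)) and v1 != v2

r  = (1, 0, 0)   # issued first at FE 0
r2 = (2, 1, 0)   # issued after observing r
print(happened_before(r, r2))   # True
print(happened_before(r2, r))   # False
```

When neither timestamp happened before the other, the requests are concurrent and causal ordering places no constraint on them.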
4.2 Process Group Approach
Process group and group communication.
Group structures:
– peer group
– server group
– client-server group
– subscription group
– hierarchical groups
4.2 Process Group Services
Group membership management:
– create
– join
– leave
Group address expansion.
Multicast communication:
– unreliable multicast
– reliable multicast
– atomic multicast
4.3 Passive (Primary-Backup) Replication
[Figure: clients (C) send requests via front ends to the primary RM, which propagates updates to the backup RMs.]
4.4 Active Replication
[Figure: clients (C) multicast requests via front ends to all RMs, each of which executes every request.]
4.5 The Gossip Architecture
[Figure: clients perform query and update operations via front ends. A query carries a previous vector timestamp (prev) and returns a value with a new timestamp; an update carries prev and returns an update id. RMs exchange gossip messages carrying vector timestamps.]
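A core operation in the gossip architecture is merging vector timestamps when an RM receives a gossip message: the merge is the component-wise maximum, so a replica learns which updates the message already reflects (a sketch with illustrative names):

```python
# Merging vector timestamps: take the component-wise maximum.
def merge(ts_a, ts_b):
    return tuple(max(a, b) for a, b in zip(ts_a, ts_b))

rm1 = (2, 0, 1)   # RM 1 has seen 2 of its own updates and 1 of RM 3's
rm2 = (1, 3, 1)
print(merge(rm1, rm2))  # (2, 3, 1)
```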
5 Summary
Transaction recovery:
– long-lived applications and data integrity
– the atomic commit protocol is the key
– checkpoints and logging in a recovery file
Fault tolerance and replication:
– real-time applications
– the importance of failure semantics
– primary-backup servers for fail-stop failures
– closely synchronized groups for Byzantine failures
Read textbook Chapter 15.5 for the Byzantine Generals Problem, Chapter 17.6 for Transaction Recovery, and Chapter 18 for Replication.