1
Practical Byzantine Fault Tolerance and Proactive Recovery
Miguel Castro and Barbara Liskov, ACM TOCS '02
Presented by: Imranul Hoque
2
Problem
- Computer systems provide crucial services
- Computer systems fail: natural disasters, hardware failures, software errors, malicious attacks
- Need highly available service
- Solution: replicate to increase availability
3
Assumptions are a Problem
Before BFT vs. BFT:
- Behavior of faulty process: fail-stop failure [Paxos, VS Replication] vs. Byzantine failure
- Synchrony: synchronous system [Rampart, SecureRing] vs. asynchronous system! Safety without synchrony; liveness with an eventual time bound
- Number of faults: bounded vs. unbounded, provided fewer than one third fail in a time period (proactive and frequent recovery) [When will this not work?]
4
Contributions
- Practical replication algorithm: weak assumptions, good performance [Really?]
- Implementation: BFT, a generic replication toolkit, and BFS, a replicated file system
- Performance evaluation
Setting: Byzantine fault tolerance in an asynchronous environment
5
Challenges
[Diagram: a client sends Request A and Request B to a set of replicas]
6
Challenges
[Diagram: two clients issue requests, labeled 1: Request A and 2: Request B, which replicas may receive in different orders]
7
State Machine Replication
[Diagram: replicas must execute requests in the same order, 1: Request A, 2: Request B]
How to assign sequence numbers to requests?
8
Primary Backup Mechanism
[Diagram: in view 0, the primary orders the clients' requests as 1: Request A, 2: Request B]
What if the primary is faulty?
- Agreeing on sequence numbers
- Agreeing on changing the primary (view change)
9
Agreement
Certificate: a set of messages from a quorum. Algorithm steps are justified by certificates.
- Quorums have at least 2f + 1 replicas
- Any two quorums intersect in at least one correct replica
[Diagram: two overlapping quorums, A and B]
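As a quick sanity check (my own sketch, not from the slides), the quorum arithmetic can be verified for any f: with n = 3f + 1 replicas and quorums of size 2f + 1, any two quorums overlap in at least f + 1 replicas, so at least one member of the overlap is correct.

```python
# Sketch: verify the quorum-intersection argument for n = 3f + 1.
def check_quorum_intersection(f: int) -> None:
    n = 3 * f + 1                  # total replicas
    quorum = 2 * f + 1             # quorum size from the slide
    min_overlap = 2 * quorum - n   # two size-q quorums share >= 2q - n members
    assert min_overlap == f + 1    # the overlap always has f + 1 replicas
    assert min_overlap - f >= 1    # even if f of them are faulty,
                                   # one correct replica remains

for f in range(1, 100):
    check_quorum_intersection(f)
```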
10
Algorithm Components
- Normal case operation
- View changes
- Garbage collection
- Recovery
All have to be designed to work together
11
Normal Case Operation
Three-phase algorithm:
- PRE-PREPARE picks the order of requests
- PREPARE ensures order within views
- COMMIT ensures order across views
Replicas remember messages in a log. Messages are authenticated: {.}σk denotes a message signed by replica k. The PREPARE and COMMIT phases are all-to-all, so the exchange is quadratic in the number of replicas (counted in the sketch below).
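To make "quadratic message exchange" concrete, here is a rough count of normal-case messages per request (my approximation; the paper's optimizations such as MACs and batching change the totals):

```python
# Rough normal-case message count per request for n = 3f + 1 replicas.
# Approximate: ignores the client's request and the n replies.
def normal_case_messages(f: int) -> int:
    n = 3 * f + 1
    pre_prepare = n - 1             # primary -> each backup
    prepare = (n - 1) * (n - 1)     # each backup -> everyone else
    commit = n * (n - 1)            # each replica -> everyone else
    return pre_prepare + prepare + commit

for f in (1, 2, 3):
    print(f"f={f}: {normal_case_messages(f)} messages")  # grows as O(n^2)
```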
12
Pre-prepare Phase
The primary (Replica 0) assigns sequence number n to request m and multicasts {PRE-PREPARE, v, n, m}σ0 to the backups (Replicas 1-3)
[Diagram: Replica 3 has failed]
13
Prepare Phase
[Diagram: the correct backups have accepted the PRE-PREPARE for request m; Replica 3 has failed]
14
Prepare Phase
Each backup that accepted the PRE-PREPARE multicasts a PREPARE message, e.g. {PREPARE, v, n, D(m), 1}σ1 from Replica 1
[Diagram: Replica 3 has failed]
15
Prepare Phase
Each replica collects the PRE-PREPARE plus 2f matching PREPARE messages
[Diagram: Replica 3 has failed]
16
Commit Phase
Each replica then multicasts a COMMIT message, e.g. {COMMIT, v, n, D(m)}σ2 from Replica 2
[Diagram: Replica 3 has failed]
17
Commit Phase (2)
Collect 2f + 1 matching COMMIT messages, then execute the request and reply to the client
[Diagram: Replica 3 has failed]
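The two collection rules can be expressed as predicates over a replica's log. The sketch below uses illustrative names (Msg, a dict-based log) rather than the paper's actual data structures:

```python
# Sketch of the certificate checks; Msg and the log layout are illustrative.
from collections import namedtuple

Msg = namedtuple("Msg", "phase view seq digest sender")

def prepared(log, v, n, d, f):
    """PRE-PREPARE for digest d plus 2f matching PREPAREs."""
    msgs = log.get((v, n), set())
    has_pre_prepare = any(m.phase == "PRE-PREPARE" and m.digest == d
                          for m in msgs)
    prepare_senders = {m.sender for m in msgs
                       if m.phase == "PREPARE" and m.digest == d}
    return has_pre_prepare and len(prepare_senders) >= 2 * f

def committed_local(log, v, n, d, f):
    """Prepared plus 2f + 1 matching COMMITs: safe to execute and reply."""
    msgs = log.get((v, n), set())
    commit_senders = {m.sender for m in msgs
                      if m.phase == "COMMIT" and m.digest == d}
    return prepared(log, v, n, d, f) and len(commit_senders) >= 2 * f + 1
```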
18
View Change
Provides liveness when the primary fails. Brief protocol:
- Timeouts trigger view changes
- The new primary is selected deterministically: primary = view number mod 3f + 1 (sketched below)
- Replicas send a VIEW-CHANGE message along with the requests they have prepared so far
- The new primary collects 2f + 1 VIEW-CHANGE messages
- It constructs information about requests committed in previous views
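The primary-selection rule needs no extra agreement because it is a deterministic function of the view number; a minimal sketch:

```python
# Sketch: round-robin primary selection, primary = view mod (3f + 1).
def primary_for_view(v: int, f: int) -> int:
    n = 3 * f + 1      # replica ids are 0 .. n - 1
    return v % n

# Every replica computes the same primary for view v + 1 after a timeout.
assert primary_for_view(0, f=1) == 0   # view 0: Replica 0, as in the slides
assert primary_for_view(5, f=1) == 1   # view 5 with n = 4: Replica 1
```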
19
View Change Safety
Goal: no two different committed requests with the same sequence number across views
[Diagram: the quorum behind a committed certificate (m, v, n) and any view-change quorum intersect, so at least one correct replica in the view-change quorum holds the prepared certificate (m, v, n)]
20
Recovery
Corrective measure for faulty replicas: proactive and frequent recovery.
- Over its lifetime the system tolerates any number of failures, provided at most f replicas fail within one vulnerability window
- Recovery is performed by a system administrator, or automatically (recovering even from network attacks) using a secure co-processor, read-only memory, and a watchdog timer
Caveat: clients will not get a reply if more than f replicas are recovering at the same time
21
Sketch of Recovery Protocol
- Save state, then reboot with correct code and restore the state: the replica now runs correct code without losing its state
- Change keys for incoming messages: prevents an attacker from impersonating other replicas
- Send a recovery request r: the other replicas change their incoming keys when r executes
- Check state and fetch out-of-date or corrupt items: the replica ends with correct, up-to-date state
(See the sketch after this list.)
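A heavily simplified sketch of this sequence; every name below is illustrative, standing in for the machinery on the previous slide (secure co-processor, read-only code, watchdog timer, state transfer):

```python
# Heavily simplified sketch of proactive recovery; all names illustrative.
class Replica:
    def save_state(self): ...                 # keep state across the reboot
    def reboot_from_readonly_code(self): ...  # correct code, state preserved
    def change_incoming_keys(self): ...       # stop impersonation of others
    def send_recovery_request(self): ...      # runs through the protocol
    def check_and_fetch_state(self): ...      # repair out-of-date/corrupt items

def recover(replica: Replica) -> None:
    replica.save_state()
    replica.reboot_from_readonly_code()
    replica.change_incoming_keys()    # attacker can no longer impersonate
                                      # other replicas to this one
    replica.send_recovery_request()   # others change their incoming keys
                                      # when the request executes
    replica.check_and_fetch_state()
    # Postcondition: correct code and correct, up-to-date state.
```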
22
Performance
- Andrew benchmark: Andrew100 and Andrew500
- 4 machines: 600 MHz Pentium III
- 3 systems compared: BFS (based on BFT), NO-REP (BFS without replication), and NFS (the NFS-V2 implementation in Linux)
- No experiments with faulty replicas
- Scalability issue: only 4 and 7 replicas
23
Benchmark Results (w/o PR)
Measured without view changes or faulty replicas! The paper reports some data on view change time, but those are artificially triggered view changes.
24
Benchmark Results (with PR)
[Graph: throughput over time, with recovery periods marked]
Note: recoveries are staggered!
25
Related Works
Fault tolerance:
- Fail-stop fault tolerance: Paxos (TR 1989), VS Replication (PODC 1988)
- Byzantine fault tolerance:
  - Byzantine agreement: Rampart (TPDS 1995), SecureRing (HICSS 1998), BFT (TOCS '02), BASE (TOCS '03)
  - Byzantine quorums: Malkhi-Reiter (JDC 1998), Phalanx (SRDS 1998), Fleet (ToKDI '00), Q/U (SOSP '05)
  - Hybrid quorums: HQ Replication (OSDI '06)
26
Today's Talk
Replication in Wide Area Network
Prof. Keith Marzullo, Distinguished Lectureship Series, 4 PM, 1404 SC
27
Questions?
29
Backup Slides
- Optimization
- Proof of 3f + 1 optimality
- Garbage collection
- View change with MAC
- Detailed recovery protocol
- State transfer
30
Optimization
- Using MACs instead of digital signatures
- Replying with a digest
- Tentative execution of read-write requests
- Tentative execution of read-only requests
- Piggybacking COMMIT on the next PRE-PREPARE
- Request batching
31
Proof of 3f+1 Optimality
Suppose f replicas are faulty:
- A reply must be sent after n − f replicas respond, to ensure liveness
- Two (n − f)-quorums intersect in at least n − 2f replicas
- Worst case: f of those n − 2f replicas are faulty
- The two quorums must share at least one non-faulty replica
- Therefore n − 3f ≥ 1, i.e., n ≥ 3f + 1 (checked numerically below)
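The counting argument can be checked numerically (a sketch; the bound n ≥ 3f + 1 is the paper's):

```python
# Sketch: find the smallest n where two (n - f)-quorums are guaranteed
# to share at least one non-faulty replica, given f Byzantine replicas.
def smallest_n(f: int) -> int:
    n = f + 1
    while True:
        overlap = 2 * (n - f) - n    # = n - 2f, worst-case intersection
        if overlap - f >= 1:         # up to f of the overlap may be faulty
            return n
        n += 1

for f in range(1, 20):
    assert smallest_n(f) == 3 * f + 1
```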
32
Garbage Collection
Replicas should remove information about executed operations from their logs. However, even after replica i has executed a request, it may not be safe to remove that request's information, because it may be needed in a subsequent view change. The solution is periodic checkpointing (after every K requests).
33
Garbage Collection
When replica i produces a checkpoint, it multicasts {CHECKPOINT, n, d, i}αi, where n is the highest sequence number of the requests i has executed and d is a digest of the current state. Each replica collects such messages until it has a quorum certificate: 2f + 1 authenticated CHECKPOINT messages with the same n and d, from distinct replicas. This is called the stable certificate: with it, all other replicas can obtain a weak certificate proving that the stable checkpoint is correct if they need to fetch it.
34
Garbage Collection
Once a replica has a stable certificate for a checkpoint with sequence number n, that checkpoint is stable. The replica can then discard all log entries with sequence numbers up to n and all checkpointed states earlier than n. The stable checkpoint also defines low and high water marks for sequence numbers: L = n and H = n + LOG, where LOG is the log size.
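A small sketch of the water-mark bound this implies (the names and the LOG value are illustrative):

```python
# Sketch: once the checkpoint at sequence n is stable, a replica only
# accepts messages whose sequence numbers lie between the water marks.
LOG = 200   # log size; illustrative value

def water_marks(n: int) -> tuple[int, int]:
    return n, n + LOG                # L = n, H = n + LOG

def accepts_sequence(seq: int, stable_n: int) -> bool:
    low, high = water_marks(stable_n)
    return low < seq <= high
```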
35
View Change with MAC
For a request m = {REQUEST, o, t, c}αc, a replica has logged:
- {PRE-PREPARE, v, n, D(m)}αp: m is pre-prepared at this replica
- {PREPARE, v, n, D(m), i}αi from 2f replicas: m is prepared at this replica
- {COMMIT, v, n, i}αi from 2f + 1 replicas: m is committed at this replica
36
View Change with MAC
At a high level:
- The primary of view v + 1 reads information about stable and prepared certificates from a quorum
- It then computes the stable checkpoint and fills in the command sequence
- It sends this information to the backups, which verify it
37
View Change with MAC
Each replica i maintains two sets, P and Q (maintained as sketched below):
- P contains tuples {n, d, v} such that i has collected a prepared certificate for the request with digest d in view v with sequence number n, and there is no such request with sequence number n for a later view
- Q contains tuples {n, d, v} such that i has pre-prepared the request with digest d in view v with sequence number n, and there is no such request with sequence number n for a later view
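One way to maintain these sets (a sketch with illustrative names): key by sequence number, so that a later view's entry for the same n replaces the earlier one, which enforces the "no such request for a later view" condition.

```python
# Sketch: P and Q keyed by sequence number, so only the latest view's
# entry survives for each n.
P: dict[int, tuple[str, int]] = {}   # n -> (digest d, view v), prepared cert
Q: dict[int, tuple[str, int]] = {}   # n -> (digest d, view v), pre-prepared

def on_prepared_certificate(n: int, d: str, v: int) -> None:
    if n not in P or P[n][1] < v:
        P[n] = (d, v)

def on_pre_prepare(n: int, d: str, v: int) -> None:
    if n not in Q or Q[n][1] < v:
        Q[n] = (d, v)
```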
38
View Change with MAC
When i suspects that the primary for view v has failed, it:
- Enters view v + 1
- Multicasts {VIEW-CHANGE, v+1, h, C, P, Q, i}αi, where h is the sequence number of i's stable checkpoint and C is the set of i's checkpoints with their sequence numbers
39
View Change with MAC
Replicas collect VIEW-CHANGE messages and acknowledge them to the primary p of view v + 1. A replica accepts a VIEW-CHANGE message only if the view numbers in its P and Q components are earlier than the new view number. The acknowledgment is {VIEW-CHANGE-ACK, v+1, i, j, d}μi, where i is the sender of the ack, j is the source of the VIEW-CHANGE message, and d is the digest of that VIEW-CHANGE message.
40
View Change with MAC
The primary p:
- Stores the VIEW-CHANGE message from i in S[i] once it receives 2f − 1 corresponding VIEW-CHANGE-ACKs; the result is called a view-change certificate
- Chooses from S a checkpoint and sets of requests, and retries this whenever it updates S
41
View Change with MAC
Select a checkpoint as the starting state for processing requests in the new view: pick the checkpoint with the highest sequence number h from the set of checkpoints that are known to be correct (vouched for by at least f + 1 replicas) and that have numbers higher than the low water mark of at least f + 1 non-faulty replicas (so ordering information is available).
42
View Change with MAC
Select a request to pre-prepare in the new view for each sequence number n between h and h + LOG:
- If a request m was committed in an earlier view, choose m
- If there is a quorum of replicas that did not prepare any request for n, p pre-prepares a null request
The primary multicasts its decision to the backups; each backup can verify it by running the same decision procedure (outlined below).
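A much-simplified sketch of the shape of this decision (the real procedure validates the certificates inside the VIEW-CHANGE messages; counts alone are not sufficient, so treat this only as an outline with illustrative names):

```python
# Much-simplified outline of the new primary's per-sequence-number choice.
# `view_changes` stands for the collected set S; the P entries are the
# (n, d, v) tuples defined two slides back.
from dataclasses import dataclass, field

@dataclass
class ViewChange:
    P: list = field(default_factory=list)   # [(n, d, v), ...]

NULL_REQUEST = None

def choose_request(n, view_changes, f):
    # Re-propose the request with the highest reported view for n.
    candidates = [(v, d) for vc in view_changes
                  for (seq, d, v) in vc.P if seq == n]
    if candidates:
        return max(candidates)[1]
    # If a quorum prepared nothing for n, pre-prepare a null request.
    unprepared = sum(1 for vc in view_changes
                     if all(seq != n for (seq, _, _) in vc.P))
    if unprepared >= 2 * f + 1:
        return NULL_REQUEST
    return "undecided"   # wait for more VIEW-CHANGE messages
```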
43
Recovery Protocol
- Save state; reboot with correct code and restore the state: the replica has correct code without losing state
- Change keys for incoming messages: prevents an attacker from impersonating other replicas
- Estimate the high water mark in a correct log (H): bounds the sequence numbers of bad information by H
44
Recovery Protocol
- Send recovery request r: others change their incoming keys when r executes, bounding the sequence numbers of forged messages by Hr (the high water mark in the log when r executes)
- Participate in the algorithm: the replica may be needed to complete recovery if it is not actually faulty
- Check state and fetch out-of-date or corrupt items
- End: the checkpoint max(H, Hr) becomes stable; the replica has correct, up-to-date state
45
State Transfer