The SMART Way to Migrate Replicated Stateful Services Jacob R. Lorch, Atul Adya, Bill Bolosky, Ronnie Chaiken, John Douceur, Jon Howell Microsoft Research.


The SMART Way to Migrate Replicated Stateful Services
Jacob R. Lorch, Atul Adya, Bill Bolosky, Ronnie Chaiken, John Douceur, Jon Howell
Microsoft Research
First EuroSys Conference, 19 April 2006

Replicated stateful services
- Problem: Machine failure leads to unavailability
  - Solution: Replicate the service for fault tolerance
- Problem: Replica state can become inconsistent
  - Solution: Use the replicated state machine approach

Migrating replicated services
- Migration: changing the configuration, i.e., the set of machines running replicas
- Uses of migration:
  - Replacing failed machines for long-term fault tolerance
  - Load balancing
  - Increasing or decreasing the number of replicas

Limitations of current approaches, and how SMART addresses them
- Current approaches cannot remove non-failed machines without creating a window of vulnerability
  - They can only remove known-failed machines
  - They cannot use migration for load balancing
- SMART can remove non-failed machines
  - This enables autonomic migration, i.e., migration without human involvement
  - This enables load balancing
- Current approaches cannot process requests in parallel; SMART can do concurrent request processing
- SMART can perform arbitrary migrations, even ones replacing the entire configuration
- SMART is completely described in our paper

Outline
- Introduction
- Background on Paxos
- Limitations of existing approaches
- SMART: Service Migration And Replication Technique
  - Configuration-specific replicas
  - Shared execution modules
- Implementation and evaluation
- Conclusions

Background on Paxos

Background: Paxos overview
- Goal: Every service replica runs the same sequence of requests
  - A deterministic service ensures state changes and replies are consistent
- Approach: Paxos assigns requests to virtual "slots"
  - No two replicas assign different requests to the same slot
  - Each replica executes requests in slot order
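The slot discipline above can be sketched as follows. This is an illustrative sketch, not the authors' code: a replica buffers out-of-order DECIDED notifications and applies requests only in contiguous slot order, which is what keeps every replica's state identical.

```python
# Minimal sketch (not the paper's implementation): a replica that applies
# decided requests strictly in slot order, buffering out-of-order decisions.
class Replica:
    def __init__(self):
        self.decided = {}    # slot -> request, filled as DECIDED messages arrive
        self.next_slot = 0   # next slot to execute
        self.log = []        # executed requests; identical on every replica

    def on_decided(self, slot, request):
        self.decided[slot] = request
        # Execute contiguously decided slots; a gap blocks execution until
        # the missing slot is decided.
        while self.next_slot in self.decided:
            self.execute(self.decided.pop(self.next_slot))
            self.next_slot += 1

    def execute(self, request):
        self.log.append(request)  # stand-in for a deterministic state change

r = Replica()
r.on_decided(1, "B")   # slot 1 arrives first; cannot run yet
r.on_decided(0, "A")   # slot 0 unblocks slots 0 and 1
r.on_decided(2, "C")
print(r.log)           # ['A', 'B', 'C'] on every replica
```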

Background: Paxos protocol
- One replica is the leader
- Clients send requests to the leader
- The leader proposes a request by sending a PROPOSE message to all replicas
- Each replica logs it and sends a LOGGED message to the leader
- When the leader receives LOGGED messages from a majority, it decides the request and sends a DECIDED message
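The leader's decide rule can be sketched in a few lines. This is a hedged illustration of the majority check only (class and method names are mine, not the paper's API); ballots and retransmission are omitted.

```python
# Sketch of the leader's decide rule: a proposal is decided once LOGGED
# acknowledgments arrive from a majority of replicas.
class Leader:
    def __init__(self, replicas):
        self.replicas = replicas
        self.acks = {}  # slot -> set of replicas that logged the proposal

    def propose(self, slot, request):
        # In the real protocol this sends PROPOSE to all replicas.
        self.acks[slot] = set()
        return [(r, "PROPOSE", slot, request) for r in self.replicas]

    def on_logged(self, slot, replica):
        self.acks[slot].add(replica)
        majority = len(self.replicas) // 2 + 1
        if len(self.acks[slot]) >= majority:
            return ("DECIDED", slot)  # broadcast to replicas, reply to client
        return None

leader = Leader(["A", "B", "C"])
leader.propose(0, "write x=1")
print(leader.on_logged(0, "A"))  # None: one ack is not a majority of 3
print(leader.on_logged(0, "B"))  # ('DECIDED', 0): two of three logged it
```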

Background: Paxos leader change
- If the leader fails, another replica "elects" itself
- The new leader must poll the replicas and hear replies from a majority
  - This ensures it learns enough about previous leaders' actions to avoid conflicting proposals
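A much-simplified sketch of that recovery rule for a single slot follows. This is an assumption-laden illustration, not Paxos in full: it shows only why the new leader must hear a majority and re-propose the highest-ballot logged value it sees.

```python
# Hedged sketch of leader-change recovery: a self-elected leader polls all
# replicas and must hear from a majority before proposing, so it learns any
# value a previous leader may already have gotten logged.
def new_leader_poll(replies, n_replicas):
    """replies: list of (replica, ballot_seen, logged_request_or_None)."""
    majority = n_replicas // 2 + 1
    if len(replies) < majority:
        return None    # cannot proceed safely yet
    # Re-propose the logged request with the highest ballot, if any, to
    # avoid conflicting with a possibly-decided earlier proposal.
    logged = [(ballot, req) for (_, ballot, req) in replies if req is not None]
    if logged:
        return max(logged)[1]
    return "FREE"      # no constraint: the leader may propose its own request

print(new_leader_poll([("A", 3, None)], 3))                  # None
print(new_leader_poll([("A", 3, None), ("B", 5, "w")], 3))   # 'w'
print(new_leader_poll([("A", 3, None), ("C", 1, None)], 3))  # 'FREE'
```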

Background: Paxos migration
- The service state includes the current configuration
  - A request that changes that part of the state migrates the service
- The configuration after request n is responsible for requests n+α and beyond
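The α rule can be made concrete with a small sketch. The value of α and the configuration history below are illustrative only (the slide's example has request 80 replace C with D, taking effect at slot 85, which corresponds to α=5 here).

```python
# Sketch of the α rule: the configuration established by the request in
# slot n governs slots n+α and beyond.
ALPHA = 5  # illustrative value

# (slot of the config-changing request, resulting configuration);
# the initial entry is backdated so it covers slot 0 onward.
config_history = [(-ALPHA, {"A", "B", "C"}),   # initial configuration
                  (80, {"A", "B", "D"})]       # request 80 replaced C with D

def config_for_slot(slot):
    current = None
    for decided_at, conf in config_history:
        if slot >= decided_at + ALPHA:
            current = conf   # this change has taken effect by `slot`
    return current

print(sorted(config_for_slot(84)))  # ['A', 'B', 'C']: change at 80 starts at 85
print(sorted(config_for_slot(85)))  # ['A', 'B', 'D']
```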

Rationale for α
- With α=1, slot n can change the configuration responsible for slot n+1
- The leader can't propose slot n+1 until slot n is decided
  - It doesn't know whom to make the proposal to, let alone whether it can make a proposal at all
- This prevents pipelining of requests
  - A request may wait a network round trip and a disk write

Limitations of existing approaches

No request pipelining
- Leader change is complicated
  - How do we ensure that the new leader knows the right configuration to poll?
  - How do we handle some outstanding proposals being from one configuration and some from another?
  - Other problems
- To avoid this complexity, current approaches use α=1
- But this prevents request pipelining

Window of vulnerability
- Removing a machine creates a window of vulnerability
  - Effectively, it induces a failure of the removed replica
  - Consequently, the service can become permanently unavailable even if fewer than half the machines fail
- This was considered acceptable because machines were only removed once a human knew they had permanently failed
- It is not suitable for autonomic migration using imperfect failure detectors, or for load balancing

SMART

Configuration-specific replicas
- Each configuration has its own set of replicas and its own separate instance of Paxos
- This simplifies leader change, so we can pipeline requests
  - Election always happens within a static configuration
- There is no window of vulnerability, because a replica can remain alive until the next configuration is established

SMART migration protocol
- After creating the new configuration, send JOIN messages
- After executing request n+α−1, send FINISHED messages
  - These tell the new replicas where they can get their starting state
  - They also make up for possibly lost JOIN messages
- When a majority of the successor configuration have their starting state, the replica kills itself
- If a machine misses this phase, it can still join later
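The phases above, as seen by a replica in the old configuration, can be sketched as a small state machine. The message names (JOIN, FINISHED, READY) follow the talk; the class and its structure are illustrative assumptions.

```python
# Illustrative state machine for an old-configuration replica during a
# SMART migration (message names follow the talk; the rest is a sketch).
class OldReplica:
    def __init__(self, successors):
        self.successors = successors
        self.ready = set()   # successor replicas that have their starting state
        self.alive = True

    def on_config_change_decided(self):
        return [(s, "JOIN") for s in self.successors]     # invite new replicas

    def on_executed_last_slot(self):
        # After executing request n+α-1, tell successors where to fetch
        # starting state; this also covers lost JOIN messages.
        return [(s, "FINISHED") for s in self.successors]

    def on_ready(self, successor):
        self.ready.add(successor)
        majority = len(self.successors) // 2 + 1
        if len(self.ready) >= majority:
            self.alive = False   # safe to kill itself: successors can proceed

old = OldReplica(["2A", "2B", "2D"])
old.on_ready("2A"); print(old.alive)  # True: one READY is not a majority
old.on_ready("2B"); print(old.alive)  # False: a majority of successors is ready
```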

Shared execution modules
- Configuration-specific replicas have a downside
  - There is one copy of the service state for each replica
  - State must be copied to new replicas
- Solution: Shared execution modules
  - Divide each replica into an agreement module and an execution module
  - Use one execution module for all replicas on a machine
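The agreement/execution split can be sketched as follows. This is a minimal illustration under assumed names: each configuration gets its own agreement module, but all replicas on a machine share one execution module (and hence one copy of the state), so a migration on the same machine needs no state copy and duplicate slots from overlapping configurations are applied once.

```python
# Sketch of the agreement/execution split (names are illustrative).
class ExecutionModule:
    """One per machine: holds the single copy of service state."""
    def __init__(self):
        self.state = {}
        self.executed_through = -1

    def execute(self, slot, key, value):
        if slot <= self.executed_through:
            return               # already applied via another configuration
        self.state[key] = value
        self.executed_through = slot

class AgreementModule:
    """One per configuration on a machine: runs Paxos, forwards decisions."""
    def __init__(self, config_id, execution):
        self.config_id = config_id
        self.execution = execution   # shared, so no state copy at migration

    def on_decided(self, slot, key, value):
        self.execution.execute(slot, key, value)

machine_exec = ExecutionModule()
old_conf = AgreementModule(1, machine_exec)
new_conf = AgreementModule(2, machine_exec)
old_conf.on_decided(0, "x", 1)
new_conf.on_decided(0, "x", 99)  # duplicate slot during overlap: ignored
new_conf.on_decided(1, "y", 2)
print(machine_exec.state)        # {'x': 1, 'y': 2}
```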

Implementation and evaluation
- SMART is implemented in a replicated state machine library, LibSMART
  - It lets you build a service as if it were single-machine, then turns it into a replicated, migratable service
- The Farsite distributed file system service was ported to LibSMART
  - This was straightforward because LibSMART uses the BFT interface
- Experimental results using a simple key/value service:
  - Pipelining reduces average client latency by 14%
  - Migration happens quickly, so clients see only a little extra latency, less than 30 ms

Conclusions
- Migration is useful for replicated services
  - Long-term fault tolerance, load balancing
- Current approaches to migration have limitations
- SMART removes these limitations by using configuration-specific replicas
  - It can remove live machines, enabling autonomic migration and load balancing
  - It can overlap processing of concurrent requests
- SMART is practical
  - The implementation supports a large, complex file system service