Low-Latency Multi-Datacenter Databases using Replicated Commit

Presentation transcript:

Low-Latency Multi-Datacenter Databases using Replicated Commit
Hatem Mahmoud, Faisal Nawab, Alexander Pucher, Divyakant Agrawal, Amr El Abbadi (UCSB)
Presented by Ashutosh Dhekne

Main Contributions
Reduce the number of cross-datacenter communication trips
Compare Replicated Commit with Replicated Log
Extensive experimental evaluation of the approach, varying:
– Number of datacenters involved
– Read vs. write operations
– Number of operations per transaction
– Number of data objects in the database
– Number of shards used

Two-Phase Commit
[diagram: the coordinator sends Prepare to Worker1, Worker2, and Worker3; after doing their work the workers reply Ready, the coordinator sends Do Commit, collects the acknowledgments, and the transaction is Complete]
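A minimal sketch of the message flow in the diagram, assuming hypothetical worker objects with prepare/commit/abort methods (not an API from the paper or any library):

```python
# Minimal sketch of the two-phase-commit flow in the diagram above.
# The `workers` objects and their prepare/commit/abort methods are
# hypothetical placeholders, not an API from the paper or any library.

def two_phase_commit(workers, txn):
    # Phase 1: ask every worker to do its work, take locks, and vote.
    ready = all(w.prepare(txn) for w in workers)   # each worker replies Ready / No

    if ready:
        # Phase 2: every worker voted yes, so the coordinator orders a commit.
        for w in workers:
            w.commit(txn)                          # workers ack the commit
        return "complete"

    # Any negative vote (timeouts are not modeled here) forces an abort.
    for w in workers:
        w.abort(txn)
    return "aborted"
```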

Why merge 2PC and Paxos?
Two-phase commit is very good for reaching agreement when nothing fails:
– Agreement is guaranteed: only the coordinator decides
– Validity is guaranteed: commit happens only if every participant is ready
– Termination is guaranteed: no loops are possible
But failures can wreak havoc: a crashed coordinator leaves participants blocked while holding locks.
Paxos keeps making progress even when failures occur: it is quorum based, so a majority wins.
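To make the contrast concrete, a tiny illustrative comparison of the two decision rules; both functions are toy stand-ins rather than real protocol code:

```python
# Tiny illustration of the two decision rules: 2PC needs a unanimous vote,
# while Paxos-style replication only needs a majority quorum of acks.

def twopc_can_commit(votes):
    # votes: one boolean per participant; a single "no" (or silence) blocks 2PC
    return all(votes)

def paxos_quorum_reached(acks, n_replicas):
    # a strict majority of replicas is enough to make progress
    return acks > n_replicas // 2

print(twopc_can_commit([True, True, False]))   # False: one failure blocks commit
print(paxos_quorum_reached(2, 3))              # True: 2 of 3 replicas suffice
```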

Replicated Log vs. Replicated Commit

Replicated Log – Step by Step
[diagram: a client and three datacenters, each holding shards X, Y, and Z; each shard's log is replicated across the datacenters via Paxos, and the client runs a data transaction]

Replicated Log – Step by Step
A 2PC Prepare message is sent to the Paxos leaders of the shards involved.

Replicated Log – Step by Step
The leaders acquire exclusive locks and log the 2PC Prepare locally.

Replicated Log – Step by Step
Each leader forwards the Prepare record to the replicas of its shard in the other datacenters; those replicas reply with accept.

Replicated Log – Step by Step
All of those messages shown together.

Replicated Log – Step by Step
The Paxos leaders now know the status of their shard replicas in the other datacenters and ack the coordinator that they are ready.

Replicated Log – Step by Step
The coordinator can now log the commit, then asks its peer shard replicas to commit and receives their acks.

Replicated Log – Step by Step
The coordinator releases its locks and informs the other leaders that the commit has been made.

Replicated Log – Step by Step
The other leaders then send another round of messages to notify their shard replicas in the other datacenters. Finally, all locks are released.
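Put together, the Replicated Log commit path looks roughly like 2PC layered over per-shard Paxos-replicated logs, where each log record pays a wide-area round trip. The sketch below illustrates that structure under those assumptions; the classes and methods are invented for illustration, not taken from the paper's implementation:

```python
# Illustrative sketch of the Replicated Log commit path traced above:
# 2PC runs across the shards' Paxos leaders, and every 2PC log record
# (Prepare, then Commit) is itself replicated to the shard's copies in the
# other datacenters. All classes and methods here are hypothetical.

class ShardPaxosLeader:
    def __init__(self, shard, remote_replicas):
        self.shard = shard                      # local copy of the shard
        self.remote_replicas = remote_replicas  # same shard in other datacenters

    def replicate(self, record):
        # One wide-area Paxos round per log record: send accepts, count acks.
        acks = 1 + sum(1 for r in self.remote_replicas if r.accept(record))
        return acks > (1 + len(self.remote_replicas)) // 2

    def prepare(self, txn):
        self.shard.lock(txn)                      # acquire exclusive locks
        return self.replicate(("prepare", txn))   # cross-datacenter round trip

    def commit(self, txn):
        self.replicate(("commit", txn))           # another cross-datacenter round
        self.shard.unlock(txn)

def replicated_log_commit(coordinator, other_leaders, txn):
    leaders = [coordinator] + other_leaders
    # 2PC among the Paxos leaders; every step pays wide-area latency.
    if all(l.prepare(txn) for l in leaders):
        for l in leaders:
            l.commit(txn)
        return "committed"
    return "aborted"
```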

Replicated Commit – Step by Step
The client sends a Paxos accept request to the coordinator in each datacenter.

Replicated Commit – Step by Step
Each coordinator sends the 2PC Prepare message to the other shards within its own datacenter.

Replicated Commit – Step by Step
All shards acquire locks and log the 2PC Prepare message.

Replicated Commit – Step by Step
All shards reply ready to their local coordinator.

Replicated Commit – Step by Step
All coordinators inform each other and the client; the client tracks whether a majority quorum of datacenters has accepted.

Replicated Commit – Step by Step
The commit is then sent to all shards locally within each datacenter.

Replicated Commit – Step by Step
Finally, the locks are released.
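By contrast, Replicated Commit keeps 2PC inside each datacenter and crosses the wide area only for the Paxos accept round. A sketch of that structure, again with invented placeholder classes rather than the paper's implementation:

```python
# Illustrative sketch of the Replicated Commit path traced above: the client
# runs one Paxos accept round across per-datacenter coordinators, and 2PC is
# confined to cheap messages inside each datacenter. Names are placeholders.

class DatacenterCoordinator:
    def __init__(self, local_shards):
        self.local_shards = local_shards       # shards in this datacenter only

    def handle_accept(self, txn):
        # 2PC Prepare never leaves the datacenter.
        return all(s.prepare_and_lock(txn) for s in self.local_shards)

    def commit_locally(self, txn):
        for s in self.local_shards:
            s.commit_and_unlock(txn)

def replicated_commit(coordinators, txn):
    # One wide-area round trip: accept requests out, votes back to the client.
    votes = [dc.handle_accept(txn) for dc in coordinators]
    if sum(votes) > len(coordinators) // 2:    # a majority of datacenters commits
        for dc in coordinators:
            dc.commit_locally(txn)             # one more one-way wide-area message
        return "committed"
    return "aborted"
```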

Replicated Log vs. Replicated Commit
Number of messages – Replicated Log requires many inter-datacenter message transfers
Latency – wide-area links between continents add tens to hundreds of milliseconds per round trip
Access locality – messages exchanged inside a datacenter are very fast
A solution to the high latency – reduce the number of cross-datacenter messages; Replicated Commit achieves this (a rough latency estimate follows below)
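As noted in the last point above, a back-of-envelope estimate shows how the message pattern translates into latency; the round-trip times and the per-protocol round counts below are assumptions chosen only for illustration, not measurements from the paper:

```python
# Back-of-envelope estimate of why the message pattern matters. The RTT
# values and the number of wide-area rounds per protocol are illustrative
# assumptions, not measurements from the paper.

WIDE_AREA_RTT_MS = 150   # assumed cross-continent round-trip time
LOCAL_RTT_MS = 1         # assumed intra-datacenter round-trip time

# Replicated Log: each 2PC record is a wide-area Paxos round of its own,
# so several cross-datacenter round trips accumulate per transaction.
replicated_log_ms = 3 * WIDE_AREA_RTT_MS + 2 * LOCAL_RTT_MS

# Replicated Commit: a single wide-area round (client -> coordinators -> client),
# with the 2PC traffic staying inside each datacenter.
replicated_commit_ms = 1 * WIDE_AREA_RTT_MS + 2 * LOCAL_RTT_MS

print(replicated_log_ms, replicated_commit_ms)   # e.g. 452 vs 152 ms
```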

Experiments

Experimental Setup
5 datacenters
2500 transactions
3000 items in the database, 1000 per shard
3 shards per datacenter
50 operations per second in each datacenter

Average Commit Latency – various datacenter combinations
Replicated Commit provides much faster responses than Replicated Log.
Whether the participating servers are drawn from different regions or from the same region has a large impact on latency.

Scalability of Replicated Commit

Scalability of Replicated Commit
Replicated Commit has very low read latency and comparable latencies otherwise.
Replicated Commit supports many more concurrent clients than Replicated Log.

Changing Database Properties
Increasing the number of data objects does not affect Replicated Commit latency.
Replicated Commit latency is also insensitive to the number of shards used.

Fault Tolerance
After a sudden fault at one of the datacenters, all of its clients are immediately served by another datacenter (Ireland in the experiment).
Their latency increases proportionally, and total latency for all clients is affected, possibly because of the added load.

Comments
A comparison between SQL and NoSQL settings is missing.
The effect of individual shards failing is not evaluated.
What is the tradeoff between Replicated Log and Replicated Commit? What do we lose by adopting Replicated Commit, and why does everyone not use it?
Comparisons with other techniques discussed in the related work could have bolstered the paper even further.
The intra-datacenter and inter-datacenter protocols are different; exploiting knowledge of physical location helps the protocol but gives up some abstraction.

Discussion

Conclusion
The paper presents a modified approach to achieving ACID transactions in multi-datacenter settings.
The gains over Replicated Log are proportional to the latency between the datacenters.
More processing is done locally inside each datacenter, and then cross-datacenter consensus is reached.

Additional Material

Fault-Tolerant Logs
[diagram: a client application submits a value to a Paxos framework; the framework replicates the value to Replica 1, Replica 2, and Replica 3, and each replica receives a callback once the value is chosen]
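A minimal sketch of the log abstraction in the diagram, assuming a hypothetical FaultTolerantLog class with the Paxos round itself stubbed out:

```python
# Minimal sketch of the fault-tolerant log abstraction in the diagram: a client
# application submits a value, and each replica gets a callback once the value
# is chosen. The class and method names are hypothetical, and the Paxos round
# is stubbed out.

class FaultTolerantLog:
    def __init__(self, replica_callbacks):
        self.replica_callbacks = replica_callbacks   # one callback per replica
        self.log = []                                # the agreed-upon sequence

    def submit(self, value):
        # A real framework would run a Paxos round here; appending stands in
        # for "this value was chosen for the next log slot".
        slot = len(self.log)
        self.log.append(value)
        for callback in self.replica_callbacks:      # deliver to every replica
            callback(slot, value)

# Usage: three replicas apply every chosen value in the same order.
states = [[], [], []]
log = FaultTolerantLog([lambda slot, value, st=st: st.append(value) for st in states])
log.submit("put x=1")
log.submit("put y=2")
print(states)   # all three replicas hold the same sequence
```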