EECS 262a Advanced Topics in Computer Systems Lecture 9: CRDTs and Coordination Avoidance September 20th, 2018 John Kubiatowicz Based on slides by Ali Ghodsi and Ion Stoica http://www.eecs.berkeley.edu/~kubitron/cs262

Replicated Data Replicate data at many nodes Performance: local reads Fault-tolerance: no data loss unless all replicas fail or become unreachable Availability: data still available unless all replicas fail or become unreachable Scalability: load balance across nodes for reads Updates: push to all replicas Consistency: expensive!

Conflicts Updating replicas may lead to different results → inconsistent data. (Figure: replicas s1, s2, s3 each receive updates 5, 3, 7, but s3 applies them in a different order and ends up in a different state.)

Strong Consistency All replicas execute updates in same total order Deterministic updates: same update on same objects → same result. (Figure: replicas s1, s2, s3 coordinate to apply updates 5, 3, 7 in the same total order, so all converge.)

Strong Consistency All replicas execute updates in same total order Deterministic updates: same update on same objects → same result Requires coordination and consensus to decide on total order of operations N-way agreement, basically serialize updates → very expensive!

CAP theorem Can only have two of the three properties in a distributed system: Consistency. Always return a consistent result (linearizable), as if there were only a single copy of the data. Availability. Always return an answer to requests (faster than really long-lived partitions). Partition-tolerance. Continue operating correctly even if the network partitions.

CAP theorem v2 When the network is partitioned, you must choose one of these: Consistency. Always return a consistent result (linearizable), as if there were only a single copy of the data. Availability. Always return an answer to requests (faster than really long-lived partitions). How can we get around CAP?

Eventual Consistency to the rescue If no new updates are made to an object, all replicas will eventually converge to the same value Update locally and propagate No consensus in the background → scales well for both reads and writes Expose intermediate state Assume eventual, reliable delivery On conflict, applications arbitrate & rollback

Eventual Consistency If no new updates are made to an object, all replicas will eventually converge to the same value However: High complexity Unclear semantics if the application reads data and then we have a rollback!

Must be available when partitions happen “For example, customers should be able to view and add items to their shopping cart even if disks are failing, network routes are flapping, or data centers are being destroyed by tornados. Therefore, the service responsible for managing shopping carts requires that it can always write to and read from its data store, and that its data needs to be available across multiple data centers.” Handles 3 million checkouts a day (2009). Availability!

Must be available when partitions happen “Many traditional […]. In such systems, writes may be rejected if the data store cannot reach all (or a majority of) the replicas at a given time. On the other hand, Dynamo targets the design space of an “always writeable” data store (i.e., a data store that is highly available for writes). […] For instance, the shopping cart service must allow customers to add and remove items from their shopping cart even amidst network and server failures. This requirement forces us to push the complexity of conflict resolution to the reads in order to ensure that writes are never rejected”

Must be available when partitions happen “There is a category of applications in Amazon’s platform that can tolerate such inconsistencies and can be constructed to operate under these conditions. For example, the shopping cart application requires that an “Add to Cart” operation can never be forgotten or rejected. If the most recent state of the cart is unavailable, and a user makes changes to an older version of the cart, that change is still meaningful and should be preserved. Note that both “add to cart” and “delete item from cart” operations are translated into put requests to Dynamo. When a customer wants to add an item to (or remove from) a shopping cart and the latest version is not available, the item is added to (or removed from) the older version and the divergent versions are reconciled later.”

Today’s Papers CRDTs: Consistency without concurrency control Marc Shapiro, Nuno Preguica, Carlos Baquero, Marek Zawirski Research Report, RR-6956, INRIA, 2009 Coordination Avoidance in Database Systems Peter Bailis, Alan Fekete, Michael J. Franklin, Ali Ghodsi, Joseph M. Hellerstein, Ion Stoica, Proceedings of VLDB’14 Thoughts?

Main idea of CRDTs What does CRDT stand for? Commutative Replicated Data Type? Conflict-free Replicated Data Type? Both… In a CRDT data structure: If concurrent updates to replicas commute and all replicas execute updates in causal order, then replicas converge Leverages simple mathematical properties that ensure absence of conflict, such as monotonicity in a semi-lattice and/or commutativity How do CRDTs get around the consistency problems of eventual consistency? Create many specialized APIs with custom semantics: a shopping cart might need a SET instead of PUT/GET; a search engine might need a distributed DAG CS Research Trick: assume more semantics. More limited applicability, but can do things that were impossible before!
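A minimal sketch (all names are mine, not Dynamo's or the paper's API) of why the extra semantics matter: merging carts as a plain PUT/GET register with last-writer-wins can drop a concurrent add, while merging them as a set (union) preserves every add.

def merge_register(local, remote):
    # last-writer-wins register: one of the concurrent updates is simply dropped
    return remote

def merge_set(local, remote):
    # set semantics: union keeps every concurrently added item
    return local | remote

base = {"book"}
replica_a = base | {"toaster"}   # customer adds a toaster at one replica
replica_b = base | {"socks"}     # ...and socks at another, concurrently

print(merge_register(replica_a, replica_b))   # {'book', 'socks'}  -- toaster lost
print(merge_set(replica_a, replica_b))        # {'book', 'socks', 'toaster'}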

Strong Eventual Consistency Strong Eventual Consistency (SEC): Eventual Consistency with the guarantee that correct replicas that have received the same updates (maybe in different order) have an equivalent correct state! Like eventual consistency but with deterministic outcomes of concurrent updates No need for background consensus No need to rollback Available, fault-tolerant, scalable

Treedoc: A CRDT for Wikipedia A data structure for storing articles Many replicas of documents Document fragments (words/paragraphs) stored in tree Fragments assembled in depth-first, left-to-right order Each node in tree has unique ID based on path to node Elements inserted by adding to tree at proper place (may unbalance tree) Deletes labeled in tree but not immediately removed Uniqueness (between replicas) enforced by adding tie-breaker based on replica name Epidemic replication now works – just send all updates (tuple [ID, Text] or [ID, Delete]) to everyone Periodic, all-replica compression by flattening and rebuilding tree Requires consensus, but only rarely Needs to be aborted if ongoing updates at any replica
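A minimal sketch, assuming a simplified version of the scheme described above (class and method names are mine, not the paper's): atoms live at binary-tree paths, the document is the left-to-right traversal of the tree, deletes leave tombstones, and (path, replica) pairs make IDs unique across replicas so concurrent inserts never collide.

class TreedocSketch:
    def __init__(self, replica_name):
        self.replica = replica_name          # tie-breaker for uniqueness
        self.atoms = {}                      # (path, replica) -> (text, deleted?)

    def insert(self, path, text):
        node_id = (tuple(path), self.replica)
        self.atoms[node_id] = (text, False)
        return node_id                       # broadcast [ID, text] to the other replicas

    def delete(self, node_id):
        text, _ = self.atoms[node_id]
        self.atoms[node_id] = (text, True)   # tombstone; real removal waits for rebuilding

    def apply_remote(self, node_id, text, deleted):
        self.atoms[node_id] = (text, deleted)

    def read(self):
        def order_key(node_id):
            path, replica = node_id
            # left child (0) sorts before its parent, parent before right child (1):
            # map bits 0 -> 0, 1 -> 2 and terminate with 1, then compare lexicographically
            return tuple(2 * b for b in path) + (1,), replica
        visible = [(order_key(nid), text)
                   for nid, (text, dead) in self.atoms.items() if not dead]
        return "".join(text for _, text in sorted(visible))

d = TreedocSketch("alice")
root = d.insert((), "world")
d.insert((0,), "hello ")                     # left child: rendered before the root
print(d.read())                              # "hello world"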

Treedoc: A CRDT for Wikipedia

Treedoc size over time (GWB page) The George W. Bush Wikipedia page gets lots of editing (read: "hacking") and becomes a good test case.

Was this a good paper?? What were the authors' goals? What about the evaluation/metrics? Did they convince you that this was a good system/approach? Were there any red flags? What mistakes did they make? Does the system/approach meet the "Test of Time" challenge? How would you review this paper today?

CS262a Project Mini-Research Projects: Actually advance the state of the art Need two or three people/project (may allow four undergrads/project) Complete research project in 2/3 of a term Typically investigate hypothesis by building an artifact and measuring it against a "base case" Generate conference-length paper and give oral presentation at poster session Often, can lead to an actual publication. I will meet with groups 2 or 3 times during term to brainstorm Many projects supported by other faculty and/or grad students

CS262a Project (con't) Proposal due in a week and a half Finally going to put up some projects today or tomorrow, suggested by systems faculty Most important things: What are you going to do? What are your metrics for success? What resources do you need?

Coordination Avoidance in DB Systems Serializability is really expensive in distributed databases. Conclusion: do as little coordination as possible!

Example of what we want to do:

System Model: I-confluent execution A set of transactions T is I-confluent with respect to invariant I if, for all reachable states with a common ancestor state, the merged state is still I-valid.
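A toy, brute-force illustration of this test (my own simplification, not the paper's formal machinery): state is a single integer, transactions are deltas, and divergent branches are merged by applying both branches' deltas to the common ancestor. Increments alone are I-confluent for the invariant x >= 0; allowing decrements is not, since two individually valid branches can merge into a negative value.

from itertools import product

def i_confluent(txns, invariant, ancestors=range(0, 3)):
    for s0 in ancestors:
        if not invariant(s0):
            continue
        for ta, tb in product(txns, repeat=2):       # one txn per branch, for brevity
            sa, sb = s0 + ta, s0 + tb
            if invariant(sa) and invariant(sb):      # both branches individually I-valid
                merged = s0 + ta + tb                # merge = apply both deltas
                if not invariant(merged):
                    return False
        # (a fuller check would also try sequences of transactions per branch)
    return True

nonneg = lambda x: x >= 0
print(i_confluent([+1, +2], nonneg))   # True: increments never violate x >= 0
print(i_confluent([+1, -2], nonneg))   # False: two valid branches can merge below zero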

Useful idea?

Result for TPC-C

Was this a good paper?? What were the authors' goals? What about the evaluation/metrics? Did they convince you that this was a good system/approach? Were there any red flags? What mistakes did they make? Does the system/approach meet the "Test of Time" challenge? How would you review this paper today?

Generalization (Optional Paper) Conflict-free Replicated Data Types Marc Shapiro, Nuno Preguica, Carlos Baquero, Marek Zawirski, 2011 What are general properties of such conflict-free data structures? Two classes of replication: State-based and Operation-based

Partial Order (poset) Set of objects S and an order relationship ≤ between them, such that for all a, b, c in S Reflexive: a ≤ a Antisymmetric: ( a ≤ b ∧ b ≤ a ) ⇒ ( a = b ) Transitive: ( a ≤ b ∧ b ≤ c ) ⇒ ( a ≤ c )

Semi-lattice Partial order ≤ on a set S with a least upper bound (LUB), denoted ⊔ m = x ⊔ y is the LUB of {x, y} under ≤ iff x ≤ m ∧ y ≤ m ∧ ∀ m′ ( ( x ≤ m′ ∧ y ≤ m′ ) ⇒ m ≤ m′ ) The nice thing about semi-lattices is that it follows that ⊔ is: commutative: x ⊔ y = y ⊔ x idempotent: x ⊔ x = x associative: ( x ⊔ y ) ⊔ z = x ⊔ ( y ⊔ z )

Example Partial order ≤ on the set of integers ⊔: max( ) Then, we have: commutative: max(x, y) = max(y, x) idempotent: max(x, x) = x associative: max(max(x, y), z) = max(x, max(y, z))
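A quick, illustrative sanity check of these three laws for join = max over a small integer domain (purely to make the claim concrete):

from itertools import product

join = max
domain = range(-3, 4)

assert all(join(x, y) == join(y, x) for x, y in product(domain, repeat=2))   # commutative
assert all(join(x, x) == x for x in domain)                                   # idempotent
assert all(join(join(x, y), z) == join(x, join(y, z))
           for x, y, z in product(domain, repeat=3))                          # associative
print("max satisfies the join (LUB) laws on this domain")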

Example Partial order ⊆ on sets ⊔: ∪ (set union) Then, we have: commutative: A ∪ B = B ∪ A idempotent: A ∪ A = A associative: (A ∪ B) ∪ C = A ∪ (B ∪ C)

Aha! How can this help us in building replicated distributed systems? Just use the LUB ⊔ to merge state between replicas For instance, could build a CRDT using max: Supports add(integer) Supports get → returns the maximum integer How? Always correct: available and strongly eventually consistent Can we support remove(integer)?
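A minimal sketch of such a max-based CRDT (the class name and API are mine): add() moves the state up the lattice, get() returns the largest value seen, and merge() is the join max. Note that remove() does not fit this lattice, since it is not monotonic; that is exactly why removal needs a different construction (see the add-and-remove set later).

class MaxRegister:
    def __init__(self, initial=float("-inf")):
        self.value = initial                  # state lives in the (int, max) semi-lattice

    def add(self, x):                         # update: monotonically non-decreasing
        self.value = max(self.value, x)

    def get(self):                            # query
        return self.value

    def merge(self, remote):                  # merge = LUB: commutative, idempotent, associative
        self.value = max(self.value, remote.value)

a, b = MaxRegister(), MaxRegister()
a.add(5); b.add(7); a.add(3)
a.merge(b); b.merge(a)
print(a.get(), b.get())                       # 7 7 -- both replicas converge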

State-based Replication Replicated object: a tuple (S, s0, q, u, m). Replica at process pi has state si ∈ S s0: initial state Each replica can execute one of the following commands q: query object’s state u: update object’s state m: merge state from a remote replica

State-based Replication Algorithm Periodically, replica at pi sends its current state to pj Replica pj merges received state into its local state by executing m After receiving all updates (irrespective of order), each replica will have the same state
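A sketch of this gossip loop for a generic state-based object (S, s0, q, u, m); all names are mine and transport is elided, so "send current state to pj" is a direct method call between in-process replicas. The instantiation reuses the max-register idea from a few slides back.

class StateBasedReplica:
    def __init__(self, s0, update_fn, merge_fn, query_fn):
        self.state = s0
        self.u, self.m, self.q = update_fn, merge_fn, query_fn

    def update(self, *args):                      # local update u
        self.state = self.u(self.state, *args)

    def query(self):                              # q
        return self.q(self.state)

    def receive(self, remote_state):              # m: merge remote state into local state
        self.state = self.m(self.state, remote_state)

    def gossip_to(self, peer):                    # "periodically send current state"
        peer.receive(self.state)

r1 = StateBasedReplica(0, lambda s, x: max(s, x), max, lambda s: s)
r2 = StateBasedReplica(0, lambda s, x: max(s, x), max, lambda s: s)
r1.update(5); r2.update(7)
r1.gossip_to(r2); r2.gossip_to(r1)
print(r1.query(), r2.query())                     # 7 7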

Monotonic Semi-lattice Object A state-based object with partial order ≤, noted (S, ≤, s0, q, u, m), that has the following properties is called a monotonic semi-lattice: The set S of values forms a semi-lattice ordered by ≤ Merging state s with remote state s′ computes the LUB of the two states, i.e., s • m(s′) = s ⊔ s′ State is monotonically non-decreasing across updates, i.e., s ≤ s • u

Convergent Replicated Data Type (CvRDT) Theorem: Assuming eventual delivery and termination, any state-based object that satisfies the monotonic semi-lattice property is SEC

Why does it work? Don’t care about order: merge is both commutative and associative Don’t care about delivering more than once: merge is idempotent

Numerical Example: Union Set u: add new element to local replica q: return entire set merge: union between remote set and local replica. (Figure: three replicas each start at {5}; one adds 3, another adds 7, and after pairwise merges such as {3, 5} ∪ {5, 7} = {3, 5, 7} every replica converges to {3, 5, 7}, regardless of merge order.)
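A minimal sketch of this grow-only set as a state-based CRDT (names are mine), replaying the scenario in the figure:

class GSet:
    def __init__(self, elems=()):
        self.elems = set(elems)

    def add(self, x):                 # u: add new element locally
        self.elems.add(x)

    def value(self):                  # q: return entire set
        return set(self.elems)

    def merge(self, remote):          # m: union with remote replica's state
        self.elems |= remote.elems

r1, r2, r3 = GSet({5}), GSet({5}), GSet({5})
r1.add(3)                             # update at r1
r3.add(7)                             # concurrent update at r3
r2.merge(r1); r1.merge(r3)            # states propagate in one order here...
r2.merge(r3); r3.merge(r1)            # ...and in a different order there
print(r1.value(), r2.value(), r3.value())   # all three print {3, 5, 7}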

Operation-based Replication An op-based object is a tuple (S, s0, q, t, u, P), where S, s0 and q have the same meaning: state domain, initial state and query method No merge method; instead an update is split into a pair (t, u), where t: side-effect-free prepare-update method (at local copy) u: effect-update method (at all copies) P: delivery precondition (see next)

Operation-based Replication Algorithm Updates are delivered to all replicas Use causally-ordered broadcast communication protocol, i.e., deliver every message to every node exactly once, consistent with happen-before order Happen-before: updates from same replica are delivered in the order they happened to all recipients (effectively delivery precondition, P) Note: concurrent updates can be delivered in any order

Commutative Replicated Data Type (CmRDT) Theorem: Assuming causal delivery of updates and method termination, any op-based object that satisfies the commutativity property for all concurrent updates is SEC

Numerical Example: Union Set t: add a set to local replica u: add delta to every remote replica. (Figure: three replicas each start at {5}; deltas {3} and {7} are broadcast and applied everywhere, so every replica converges to {3, 5, 7}.)
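A minimal sketch of the op-based version (names are mine): prepare runs only at the origin and returns the operation to broadcast, and effect applies it at every replica. The "broadcast" here is a direct loop, so causal delivery is trivially satisfied, and concurrent adds commute anyway.

class OpBasedSet:
    def __init__(self, elems=()):
        self.elems = set(elems)

    def prepare_add(self, x):         # t: side-effect free, computes the op to broadcast
        return ("add", x)

    def effect(self, op):             # u: applied at every replica (including the origin)
        kind, x = op
        if kind == "add":
            self.elems.add(x)

replicas = [OpBasedSet({5}) for _ in range(3)]

def do_add(origin, x):
    op = replicas[origin].prepare_add(x)
    for r in replicas:                # reliable broadcast of the effect
        r.effect(op)

do_add(0, 3)
do_add(2, 7)                          # concurrent in spirit; adds commute
print([sorted(r.elems) for r in replicas])    # [[3, 5, 7], [3, 5, 7], [3, 5, 7]]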

State-based vs Op-based State Based CRDT (CvRDT) Op Based CRDT (CmRDT) What are the differences and why might it matter?

State-based vs Operation-based Replication Both are equivalent! You can use one to emulate the other Operation-based: more efficient since you can ship only small updates, but requires causally-ordered broadcast State-based: just requires reliable broadcast (causally-ordered broadcast is much more complex!), but requires sending all state

CRDT Examples (cont’d) Integer vector (virtual clock): u: increment value at corresponding index by one, inc(i) m: maximum across all values, e.g., m([1, 2, 4], [3, 1, 2]) = [3, 2, 4] Counter: use an integer vector, with query operation q: returns sum of all vector values (1-norm), e.g., q([1, 2, 4]) = 7 Counter that decrements as well: Use two integer vectors: I updated when incrementing D updated when decrementing q: returns difference between 1-norms of I and D
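A sketch of these counters (class names are mine): a grow-only vector counter where each replica increments only its own slot and merge is element-wise max, plus the increment/decrement counter built from two such vectors.

class GCounter:
    def __init__(self, n, i):
        self.v = [0] * n              # one slot per replica
        self.i = i                    # this replica's index

    def inc(self):                    # u: bump own slot only
        self.v[self.i] += 1

    def value(self):                  # q: 1-norm, i.e., sum of all slots
        return sum(self.v)

    def merge(self, remote):          # m: element-wise max
        self.v = [max(a, b) for a, b in zip(self.v, remote.v)]

class PNCounter:
    def __init__(self, n, i):
        self.incs, self.decs = GCounter(n, i), GCounter(n, i)

    def inc(self): self.incs.inc()
    def dec(self): self.decs.inc()
    def value(self): return self.incs.value() - self.decs.value()   # |I| - |D|
    def merge(self, remote):
        self.incs.merge(remote.incs); self.decs.merge(remote.decs)

a, b = PNCounter(2, 0), PNCounter(2, 1)
a.inc(); a.inc(); b.dec()
a.merge(b); b.merge(a)
print(a.value(), b.value())           # 1 1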

CRDT Examples (cont’d) Add only set object u: add new element to set m: union between two sets q: return local set Add and remove set object Two add only sets A: when adding an element, add it to A R: when removing an element, add it to R q: returns A\R (only supports adding an element at most once)
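A sketch of the add-and-remove set (often called a two-phase set; the class name and API here are mine): two grow-only sets A and R, query returns A \ R, and once an element is in R it can never come back, which is the limitation noted above.

class TwoPhaseSet:
    def __init__(self):
        self.A, self.R = set(), set()

    def add(self, x):    self.A.add(x)
    def remove(self, x): self.R.add(x)        # tombstone; element can never be re-added
    def value(self):     return self.A - self.R
    def merge(self, remote):                  # union both grow-only sets
        self.A |= remote.A
        self.R |= remote.R

a, b = TwoPhaseSet(), TwoPhaseSet()
a.add("x"); b.merge(a)                        # "x" propagates to b
b.remove("x"); a.add("y")                     # concurrent remove at b, add of "y" at a
a.merge(b); b.merge(a)
print(a.value(), b.value())                   # {'y'} {'y'} -- the remove of "x" wins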

Summary Serializability, strong consistency: easy to use by applications, but doesn't scale well due to conflicts Two solutions to dramatically improve performance: CRDTs: eliminate coordination by restricting the types of objects supported for concurrent updates Coordination avoidance: rely on application hints to avoid coordination for transactions