Replicated Data Consistency Explained Through Baseball. Slides by Landon Cox, with some others from elsewhere prepended and appended (a cultural history lesson).

Preview/overview
K-V stores are a common data tier for mega-services. They evolved during the Web era, starting with DDS. Today they often feature geographic replication.
– Multiple replicas in different data centers.
– Geo-replication offers better scale and reliability/availability.
But updates are slower to propagate, and network partitions may interfere.
– A read might not see the "latest" write.
So we have to think carefully about what consistency properties we need: "BASE" might be "good enough".
– FLP and CAP tell us that there are fundamental limits on what we can guarantee, but there have been many recent innovations in this space.

Key-value stores
Many mega-services are built on key-value stores.
– Store variable-length content objects: think "tiny files" (value).
– Each object is named by a "key", usually fixed-size.
– The key is also called a token: not to be confused with a crypto key! Although it may be a content hash (SHAx or MD5).
– Simple put/get interface with no offsets or transactions (yet).
– Goes back to the literature on Distributed Data Structures [Gribble 2000] and Distributed Hash Tables (DHTs).
Over the next couple of years, Amazon transformed internally into a service-oriented architecture. They learned a tremendous amount…
- Pager escalation gets way harder… you build a lot of scaffolding and metrics and reporting.
- Every single one of your peer teams suddenly becomes a potential DOS attacker. Nobody can make any real forward progress until very serious quotas and throttling are put in place in every single service.
- Monitoring and QA are the same thing. You'd never think so until you try doing a big SOA. But when your service says "oh yes, I'm fine", it may well be the case that the only thing still functioning in the server is the little component that knows how to say "I'm fine, roger roger, over and out" in a cheery droid voice. In order to tell whether the service is actually responding, you have to make individual calls. The problem continues recursively until your monitoring is doing comprehensive semantics checking of your entire range of services and data, at which point it's indistinguishable from automated QA. So they're a continuum.
- If you have hundreds of services, and your code MUST communicate with other groups' code via these services, then you won't be able to find any of them without a service-discovery mechanism. And you can't have that without a service registration mechanism, which itself is another service. So Amazon has a universal service registry where you can find out reflectively (programmatically) about every service, what its APIs are, and also whether it is currently up, and where.
- Debugging problems with someone else's code gets a LOT harder, and is basically impossible unless there is a universal standard way to run every service in a debuggable sandbox.
That's just a very small sample. There are dozens, maybe hundreds of individual learnings like these that Amazon had to discover organically. There were a lot of wacky ones around externalizing services, but not as many as you might think. Organizing into services taught teams not to trust each other in most of the same ways they're not supposed to trust external developers. This effort was still underway when I left to join Google in mid-2005, but it was pretty far advanced. From the time Bezos issued his edict through the time I left, Amazon had transformed culturally into a company that thinks about everything in a services-first fashion. It is now fundamental to how they approach all designs, including internal designs for stuff that might never see the light of day externally.
[image from Sean Rhea, opendht.org, 2004]
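The put/get interface mentioned in the key-value store slide is tiny. Here is a minimal sketch in Python, assuming an in-memory dict standing in for the storage tier; the class and method names are illustrative, not any particular system's API.

    class KVStore:
        """Toy key-value store: fixed-size keys map to opaque values."""
        def __init__(self):
            self._data = {}              # key -> value (whole objects, no offsets)

        def put(self, key, value):
            # Whole-object overwrite: no partial writes, no transactions.
            self._data[key] = value

        def get(self, key):
            # Returns None if the key has never been written.
            return self._data.get(key)

    store = KVStore()
    store.put("user:42", b"profile blob")
    print(store.get("user:42"))          # b'profile blob'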

ACID vs. BASE
Jim Gray: ACM Turing Award 1998.
Eric Brewer: ACM SIGOPS Mark Weiser Award 2009.

HPTS Keynote, October 2001: ACID vs. BASE
ACID:
– Strong consistency
– Isolation
– Focus on "commit"
– Nested transactions
– Availability?
– Conservative (pessimistic)
– Difficult evolution (e.g. schema)
– "small" invariant boundary
– The "inside"
BASE:
– Weak consistency: stale data OK
– Availability first
– Best effort
– Approximate answers OK
– Aggressive (optimistic)
– "Simpler" and faster
– Easier evolution (XML)
– "wide" invariant boundary
– Outside the consistency boundary
…but it's a spectrum.

Dr. Werner Vogels is Vice President & Chief Technology Officer at Amazon.com. Prior to joining Amazon, he worked as a researcher at Cornell University.

Vogels on consistency
The scenario: A updates a "data object" in a "storage system". Consistency "has to do with how observers see these updates".
Strong consistency: "After the update completes, any subsequent access will return the updated value."
Eventual consistency: "If no new updates are made to the object, eventually all accesses will return the last updated value."

PNUTS: Yahoo!’s Hosted Data Serving Platform Brian F. Cooper, Raghu Ramakrishnan, Utkarsh Srivastava, Adam Silberstein, Philip Bohannon, Hans-Arno Jacobsen, Nick Puz, Daniel Weaver and Ramana Yerneni Yahoo! Research

Example: social network updates
[Figure: Brian's friend list (Sonja, Jimi, Brandon, Kurt). Brian asks "What are my friends up to?" and sees recent updates from Sonja and Brandon.]

Example: social network updates
[Figure: a table of update records keyed by user (e.g., Mike, Jimi, Mary, Sonja, Brandon, Bob), each row carrying a sequence number and a truncated payload such as a photo or post.]

Asynchronous replication

Consistency model
Goal: make it easier for applications to reason about updates and cope with asynchrony.
What happens to a record with primary key "Brian"?
[Figure: a timeline for the record. It is inserted (v.1), updated repeatedly (v.2 through v.8 of generation 1), and eventually deleted; a later insert would start a new generation.]

Consistency model
[Figure: the same version timeline (v.1 … v.8, generation 1). A plain Read may return either the current version or a stale version.]

Consistency model
[Figure: same timeline. A read-up-to-date call always returns the current version, never a stale one.]

Consistency model
Read-critical(required version): returns a version at least as new as the required version.
[Figure: same timeline. Read-critical(v.6) may return v.6, v.7, or v.8 (≥ v.6), but nothing older.]

Consistency model
Test-and-set-write(required version): the write is applied only if the record's current version equals the required version; otherwise it returns an error.
[Figure: same timeline. A test-and-set-write conditioned on v.7 fails with an error because the current version has already advanced.]
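A rough sketch of these per-record calls in Python, assuming a versioned record copy; the function names mirror the slide's operations (read, read-up-to-date, read-critical, test-and-set-write) rather than PNUTS's exact API, and replica selection is elided.

    class RecordCopy:
        """One copy of a record: a version number and a value."""
        def __init__(self, version=0, value=None):
            self.version = version
            self.value = value

    def read_any(replica):
        # May be served by any replica, so the result can be stale.
        return replica.value, replica.version

    def read_up_to_date(master):
        # Served by the record's master copy: always the current version.
        return master.value, master.version

    def read_critical(required_version, replica, master):
        # Any replica that has caught up to required_version will do;
        # otherwise fall back to the master.
        if replica.version >= required_version:
            return replica.value, replica.version
        return read_up_to_date(master)

    def test_and_set_write(required_version, new_value, master):
        # Applied only if nothing has been written since required_version.
        if master.version != required_version:
            raise RuntimeError("version mismatch: write rejected")
        master.version += 1
        master.value = new_value
        return master.version

    master = RecordCopy(8, "v8")
    stale = RecordCopy(6, "v6")
    print(read_critical(6, stale, master))     # ('v6', 6) is acceptable: >= v.6
    try:
        test_and_set_write(7, "new", master)   # current version is 8, not 7
    except RuntimeError as e:
        print(e)                               # version mismatch: write rejected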

Wyatt Lloyd * Michael J. Freedman * Michael Kaminsky † David G. Andersen ‡ * Princeton, † Intel Labs, ‡ CMU Don't Settle for Eventual: Scalable Causal Consistency for Wide-Area Storage with COPS

Wide-Area Storage
[Figure: three social services and what each stores. One stores status updates, likes, comments, photos, and friends lists; another stores tweets, favorites, and following lists; a third stores posts, +1s, comments, photos, and circles.]

Wide-Area Storage Serves Requests Quickly

Inside the Datacenter
[Figure: each datacenter has a web tier in front of a storage tier partitioned by key range (A-F, G-L, M-R, S-Z); writes are replicated to the matching partitions in a remote datacenter.]

Desired Properties: ALPS
– Availability
– Low Latency
– Partition Tolerance
– Scalability
("Always On")

Scalability
Increase capacity and throughput in each datacenter.
[Figure: the key space is partitioned ever more finely as the system grows, e.g., from a single A-Z partition to two (A-L, M-Z), then four (A-F, G-L, M-R, S-Z), then eight (A-C, D-F, G-J, K-L, M-O, P-S, T-V, W-Z), mirrored in each datacenter.]

Desired Property: Consistency
Restricts the order/timing of operations.
Stronger consistency:
– Makes programming easier
– Makes the user experience better

Consistency with ALPS
[Figure: a spectrum from strongest to weakest consistency.]
– Strong: impossible with ALPS [Brewer00, GilbertLynch02]
– Sequential: impossible with ALPS [LiptonSandberg88, AttiyaWelch94]
– Causal: COPS
– Eventual: Amazon Dynamo, LinkedIn Voldemort, Facebook/Apache Cassandra

Comparison of systems (ALPS properties vs. consistency):
– Scatter: ✖✖✖✔✔, Strong
– Walter: ✖✖✖ ?, PSI + Txn
– COPS: ✔✔✔✔, Causal+
– Bayou: ✔✔✔✖, Causal+
– PNUTS: ✔✔ ? ✔, Per-Key Seq.
– Dynamo: ✔✔✔✔✖, Eventual

Replicated-data consistency
A set of invariants on each read operation:
– Which writes are guaranteed to be reflected?
– What write orders are guaranteed?
Consistency is an application-level concern.
– When consistency is too weak, applications break. Example: an auction site must not tell two people they won.
– What are the consequences of too-strong consistency? Worse performance (for reads and writes) and worse availability (for reads and writes).

The following are slides on the Doug Terry paper by Landon Cox. We went through these pretty fast in class, but you should understand these models and why we might use them.

Assumptions for our discussion
1. Clients perform reads and writes.
2. Data is replicated among a set of servers.
3. Writes are serialized (logically, one writer):
   a. performed in the same order at all servers;
   b. write order consistent with write-request order.
4. Reads return the result of one or more past writes.

Consistency models
1. Strong consistency: reader sees the effect of all prior writes.
2. Eventual consistency: reader sees the effect of some subset of prior writes.
3. Consistent prefix: reader sees the effect of an initial sequence of writes.
4. Bounded staleness: reader sees the effect of all "old" writes.
5. Monotonic reads: reader sees the effect of an increasing subset of writes.
6. Read my writes: reader sees the effect of all writes performed by the reader.
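These guarantees can be read as filters over the ordered write log: each model constrains which prefix or subset of writes a read may reflect. Below is a minimal sketch, assuming writes are kept in a single ordered list; the helper names and the random stale cut-off are illustrative only, and bounded staleness and read-my-writes are omitted since they need timestamps and session state.

    import random

    def visible_writes(write_log, model, seen_count=0):
        """Return the writes a read may reflect under each model.
        write_log: list of (key, value) writes in serialization order.
        seen_count: for monotonic reads, how many writes this session already saw.
        """
        n = len(write_log)
        if model == "strong":
            return write_log                      # all prior writes
        if model == "consistent_prefix":
            k = random.randint(0, n)              # some prefix, possibly stale
            return write_log[:k]
        if model == "monotonic":
            k = random.randint(seen_count, n)     # the prefix never shrinks
            return write_log[:k]
        if model == "eventual":
            k = random.randint(0, n)
            return random.sample(write_log, k)    # any subset, in any order
        raise ValueError("unknown model")

    def apply_writes(writes):
        """Fold a list of writes into the state a reader would observe."""
        state = {}
        for key, value in writes:
            state[key] = value
        return state

    log = [("home", 1), ("visitors", 1), ("home", 2), ("home", 3),
           ("visitors", 2), ("home", 4), ("home", 5)]
    print(apply_writes(visible_writes(log, "strong")))            # {'home': 5, 'visitors': 2}
    print(apply_writes(visible_writes(log, "consistent_prefix"))) # a score that really occurred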

Setting: baseball game (the primary game thread; the only thread that issues writes)
Write ("visitors", 0);
Write ("home", 0);
for inning = 1..9
    outs = 0;
    while outs < 3
        visiting player bats;
        for each run scored
            score = Read ("visitors");
            Write ("visitors", score + 1);
    outs = 0;
    while outs < 3
        home player bats;
        for each run scored
            score = Read ("home");
            Write ("home", score + 1);
end game;
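A runnable version of this write loop, as a minimal sketch: a plain dict stands in for the replicated store, and the runs per half-inning are canned, chosen so the write order matches the sequence used later in these slides.

    store = {"visitors": 0, "home": 0}            # stands in for the replicated store

    def read(key):
        return store[key]

    def write(key, value):
        store[key] = value

    # Canned (visitors' runs, home's runs) per inning, purely illustrative:
    innings = [(0, 1), (1, 1), (0, 1), (1, 2), (0, 0),
               (0, 0), (0, 0), (0, 0), (0, 0)]

    for visitor_runs, home_runs in innings:
        for _ in range(visitor_runs):             # top of the inning
            write("visitors", read("visitors") + 1)
        for _ in range(home_runs):                # bottom of the inning
            write("home", read("home") + 1)

    print(read("visitors"), read("home"))         # 2 5, the final score used below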

[Figure: six servers S1-S6 holding replicas of the home and visitors' scores (H, V). Two clients are writers (who also read) and two are readers; their reads (R) and writes (W) may be served by different replicas.]

Example 1: score keeper
score = Read ("visitors");
Write ("visitors", score + 1);
…
score = Read ("home");
Write ("home", score + 1);

Example 1: score keeper
Write ("home", 1); Write ("visitors", 1); Write ("home", 2); Write ("home", 3); Write ("visitors", 2); Write ("home", 4); Write ("home", 5); → Visitors = 2, Home = 5
What invariant is the score keeper maintaining on the game's score?
Both values increase monotonically.

Example 1: score keeper
Write ("home", 1); Write ("visitors", 1); Write ("home", 2); Write ("home", 3); Write ("visitors", 2); Write ("home", 4); Write ("home", 5); → Visitors = 2, Home = 5
What invariant must the store provide so the score keeper can ensure monotonically increasing scores?
Reads must show the effect of all prior writes (strong consistency).

Example 1: score keeper
Write ("home", 1); Write ("visitors", 1); Write ("home", 2); Write ("home", 3); Write ("visitors", 2); Write ("home", 4); Write ("home", 5); → Visitors = 2, Home = 5
Under strong consistency, what possible scores can the score keeper read after the final write completes?
2-5

Example 1: score keeper
Write ("home", 1); Write ("visitors", 1); Write ("home", 2); Write ("home", 3); Write ("visitors", 2); Write ("home", 4); Write ("home", 5); → Visitors = 2, Home = 5
Under read-my-writes, what possible scores can the score keeper read after the final write completes?
2-5

[Figure: six servers S1-S6 with replicas of the home and visitors' scores. The writers (who also read) send their writes (W) to particular replicas; a reader reads from others.]

H’HH V’ V V Writer (also reads) Writer (also reads) Reader Visitors’ scoreHome score R Under strong consistency, who must S3 have spoken to (directly or indirectly) to satisfy read request? S2, S5 Writer (also reads) Writer (also reads) S1S2S3S4S5S6

H’HH V’ V V Writer (also reads) Writer (also reads) Reader Visitors’ scoreHome score When does S3 have to talk to S2 and S5? Before writes return or before read returns? Implementation can be flexible. Guarantee is that inform-flow occurs before read completes. Writer (also reads) Writer (also reads) S1S2S3S4S5S6 R

H’HH V’ V V Writer (also reads) Writer (also reads) Reader Visitors’ scoreHome score Under read-my- writes, who must S3 have spoken to (directly or indirectly) to satisfy read request? S5 Writer (also reads) Writer (also reads) S1S2S3S4S5S6 R

H’HH V’ V V Writer (also reads) Writer (also reads) Reader Visitors’ scoreHome score S1S2S3S4S5S6 Reader For baseball, why is read-my-writes equivalent to strong consistency, even though it is “weaker”? Application only has one writer. Not true in general. R

Example 1: score keeper
Write ("home", 1); Write ("visitors", 1); Write ("home", 2); Write ("home", 3); Write ("visitors", 2); Write ("home", 4); Write ("home", 5); → Visitors = 2, Home = 5
Common theme:
(1) Consider the application's invariants.
(2) Reason about what the store must ensure to support those invariants.

Example 2: umpire
if first half of 9th inning complete then
    vScore = Read ("visitors");
    hScore = Read ("home");
    if vScore < hScore
        end game;
Idea: the home team doesn't need another chance to bat if they are already ahead going into the final half inning.

Example 2: umpire
if first half of 9th inning complete then
    vScore = Read ("visitors");
    hScore = Read ("home");
    if vScore < hScore
        end game;
What invariant must the umpire uphold?
The game should end if the home team leads going into the final half inning.

Example 2: umpire
if first half of 9th inning complete then
    vScore = Read ("visitors");
    hScore = Read ("home");
    if vScore < hScore
        end game;
What subset of writes must be visible to the umpire to ensure the game ends appropriately?
Reads must show the effect of all prior writes (strong consistency).

Example 2: umpire
if first half of 9th inning complete then
    vScore = Read ("visitors");
    hScore = Read ("home");
    if vScore < hScore
        end game;
Would read-my-writes work as it did for the score keeper?
No, since the umpire doesn't issue any writes.

Example 3: radio reporter
do {
    vScore = Read ("visitors");
    hScore = Read ("home");
    report vScore, hScore;
    sleep (30 minutes);
}
Idea: periodically read the score and broadcast it to listeners.

Example 3: radio reporter
do {
    vScore = Read ("visitors");
    hScore = Read ("home");
    report vScore, hScore;
    sleep (30 minutes);
}
What invariants must the radio reporter uphold?
It should only report scores that actually occurred, and reported scores should increase monotonically.

Example 3: radio reporter
do {
    vScore = Read ("visitors");
    hScore = Read ("home");
    report vScore, hScore;
    sleep (30 minutes);
}
Do we need strong consistency?
No, since listeners can accept slightly old scores.

Example 3: radio reporter
do {
    vScore = Read ("visitors");
    hScore = Read ("home");
    report vScore, hScore;
    sleep (30 minutes);
}
Can we get away with eventual consistency (some subset of writes is visible)?
No, eventual consistency can return scores that never occurred.

Example 3: radio reporter
Write ("home", 1); Write ("visitors", 1); Write ("home", 2); Write ("home", 3); Write ("visitors", 2); Write ("home", 4); Write ("home", 5); → Visitors = 2, Home = 5
Under eventual consistency, what possible scores could the radio reporter read after the final write completes?
0-0, 0-1, 0-2, …, 2-4, 2-5 (any combination of values ever written to each key)
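To make the combinatorics concrete, here is a quick sketch enumerating what an eventually consistent read could return: each key can independently reflect any value ever written to it (including the initial 0), giving 3 × 6 = 18 possible reported scores, most of which never occurred together.

    visitor_values = [0, 1, 2]            # every value ever written to "visitors"
    home_values = [0, 1, 2, 3, 4, 5]      # every value ever written to "home"

    possible = [(v, h) for v in visitor_values for h in home_values]
    print(len(possible))                  # 18
    print(possible[:5])                   # [(0, 0), (0, 1), (0, 2), (0, 3), (0, 4)]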

[Figure: six servers S1-S6 with replicas of the home and visitors' scores. The score keeper issues writes W1, W2, W3; the radio reporter is a reader.]

[Figure: the replicas hold different values (H=0, H=1, H=2; V=0, V=1) because the writes have propagated unevenly.]
How could the reporter read a score of 1-0?

[Figure: the reporter's two reads (R) hit replicas holding V=1 and H=0, so it reports 1-0, a score that never occurred.]

Example 3: radio reporter
do {
    vScore = Read ("visitors");
    hScore = Read ("home");
    report vScore, hScore;
    sleep (30 minutes);
}
How about only a consistent prefix (some prefix of the write sequence is visible)?
No. It would give us scores that actually occurred, but not monotonically increasing ones.

[Figure: the score keeper's writes W1, W2, W3 propagate to the replicas while the radio reporter reads.]

[Figure: the reporter's reads return V=0 and H=1.]
What prefix of writes is visible? Only W1, so the reporter reports 0-1.

[Figure: on a later round, the reporter's reads return V=0 and H=0.]
What prefix of writes is visible? The empty prefix (the initial state), so the reporter reports 0-0 after having already reported 0-1.

[Figure: same scenario.]
What additional guarantee do we need?
We also need monotonic reads (each read sees an increasing subset of writes).

Monotonic reads
Also called "session consistency": reads are grouped under a "session".
What extra state/logic is needed for monotonic reads?
– The system has to know which reads are related.
– Related reads have to be assigned a sequence (i.e., a total order).
What extra state/logic is needed for read-my-writes?
– The system has to know which reads/writes are related.
– Related reads/writes have to be assigned a total order.
Does read-my-writes guarantee monotonic reads? (get into groups for five minutes to discuss)
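One common way to provide these session guarantees is to track, per session, the latest write the session has seen or issued, and to reject (or redirect) reads from replicas that have not caught up. A minimal sketch, assuming every write carries a monotonically increasing sequence number; all names here are illustrative.

    class Replica:
        def __init__(self):
            self.applied_seq = 0      # highest write sequence number applied here
            self.data = {}

    class Session:
        """Tracks what this client has already observed or written."""
        def __init__(self):
            self.read_seq = 0         # for monotonic reads
            self.write_seq = 0        # for read-my-writes

    def session_read(session, replica, key):
        # Monotonic reads: never accept a replica older than what we've seen.
        # Read-my-writes: never accept a replica missing our own writes.
        required = max(session.read_seq, session.write_seq)
        if replica.applied_seq < required:
            raise RuntimeError("replica too stale for this session; try another")
        session.read_seq = replica.applied_seq
        return replica.data.get(key)

    def session_write(session, replica, key, value, new_seq):
        # new_seq would be assigned by the (single, logical) write serializer.
        replica.data[key] = value
        replica.applied_seq = max(replica.applied_seq, new_seq)
        session.write_seq = new_seq

    r = Replica(); s = Session()
    session_write(s, r, "home", 1, new_seq=1)
    print(session_read(s, r, "home"))     # 1; a staler replica would be rejected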

Example 3: radio reporter
do {
    vScore = Read ("visitors");
    hScore = Read ("home");
    report vScore, hScore;
    sleep (30 minutes);
}
Can we get away with bounded staleness (see all "old" writes)?
Yes, if we also have a consistent prefix and the staleness bound is < 30 minutes.

Example 3: radio reporter
T0 Read ("home");
T1 Read ("visitors");
T2 sleep (30 minutes);
T3 Read ("home");
T4 Read ("visitors");
T5 sleep (30 minutes);
T6 Read ("visitors");
T7 Read ("home");
T8 sleep (30 minutes);
…
Under bounded staleness (bound = 15 minutes, no consistent prefix), what writes must these reads reflect?
Any write that occurred before T3 - 15 minutes.

Example 3: radio reporter
T0 Read ("home");
T1 Read ("visitors");
T2 sleep (30 minutes);
T3 Read ("home");
T4 Read ("visitors");
T5 sleep (30 minutes);
T6 Read ("visitors");
T7 Read ("home");
T8 sleep (30 minutes);
…
Why isn't bounded staleness by itself sufficient?
The score must reflect writes that occurred before T3 - 15 minutes, but it could also reflect more recent writes.
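A bounded-staleness read can be enforced by tagging each replica with the time up to which it has applied all writes, and rejecting replicas that lag by more than the bound. A minimal sketch, assuming timestamps come from a roughly synchronized clock; the names are illustrative.

    import time

    class TimestampedReplica:
        def __init__(self):
            self.caught_up_to = 0.0   # wall-clock time through which all writes are applied
            self.data = {}

    def bounded_read(replica, key, bound_seconds):
        """Return a value only if the replica reflects all writes older than the bound."""
        now = time.time()
        if now - replica.caught_up_to > bound_seconds:
            raise RuntimeError("replica exceeds the staleness bound; try another replica")
        return replica.data.get(key)

    r = TimestampedReplica()
    r.data["home"] = 5
    r.caught_up_to = time.time() - 60                         # ~1 minute behind
    print(bounded_read(r, "home", bound_seconds=15 * 60))     # OK within a 15-minute bound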

[Figure: before any runs score, the reporter reads 0-0 from the replicas, then sleeps for 30 minutes.]

[Figure: while the reporter sleeps, the score keeper issues writes W1, W2, W3; the reporter last reported 0-0 and will wake up in 10 minutes.]

[Figure: the reporter wakes up; the replicas now hold various values (H=0, H=1, H=2; V=0, V=1).]
Under bounded staleness, what writes can the reporter see?
W1, W2, and W3.

[Figure: the reporter issues its two reads (R) against replicas that may hold different versions.]

[Figure: same scenario.]
What additional guarantee do we need?
We also need monotonic reads (each read sees an increasing subset of writes).

Example 4: game-recap writer
while not end of game {
    drink beer;
    smoke cigar;
}
go out to dinner;
vScore = Read ("visitors");
hScore = Read ("home");
write recap;
Idea: write about the game several hours after it has ended.

Example 4: game-recap writer
while not end of game {
    drink beer;
    smoke cigar;
}
go out to dinner;
vScore = Read ("visitors");
hScore = Read ("home");
write recap;
What invariant must the recapper uphold?
Reads must reflect all writes (the final score).

Example 4: game-recap writer
while not end of game {
    drink beer;
    smoke cigar;
}
go out to dinner;
vScore = Read ("visitors");
hScore = Read ("home");
write recap;
What consistency guarantees could she use?
Strong consistency, or bounded staleness with a bound shorter than the time spent at dinner.

Example 4: game-recap writer
while not end of game {
    drink beer;
    smoke cigar;
}
go out to dinner;
vScore = Read ("visitors");
hScore = Read ("home");
write recap;
What about eventual consistency?
Probably OK most of the time, but use bounded staleness to ensure you always get the right output.

Example 5: team statistician
wait for end of game;
hScore = Read ("home");
stat = Read ("season-runs");
Write ("season-runs", stat + hScore);
What invariants must the statistician uphold?
Season-runs increases monotonically, by the amount the home team scored, at the end of each game.

Example 5: team statistician
wait for end of game;
hScore = Read ("home");
stat = Read ("season-runs");
Write ("season-runs", stat + hScore);
What consistency is appropriate for the read of "home"?
Could use strong consistency, bounded staleness (with an appropriate bound), or maybe eventual consistency.

Example 5: team statistician
wait for end of game;
hScore = Read ("home");
stat = Read ("season-runs");
Write ("season-runs", stat + hScore);
What consistency is appropriate for the read of "season-runs"?
Could use strong consistency, bounded staleness, or read-my-writes if the statistician is the only writer.

Geo-replicated stores face fundamental limits common to all distributed systems.
FLP result: consensus cannot be guaranteed in asynchronous distributed systems. Distributed systems may "partly fail", and the network may block or delay traffic arbitrarily. In particular, a network partition may cause a "split brain" in which parts of the system function without being able to contact other parts (see material on leases).
Example of a consensus question: what was the last value written for X?
A popular form of FLP: "Brewer's conjecture", also known as the "CAP theorem". We can build systems that are CA, CP, or AP, but we cannot have all three properties at once, ever.
To a large extent these limits drive the consistency models. (Following slides by Chase)

C-A-P: choose two
[Figure: a triangle with Consistency, Availability, and Partition-resilience at its corners.]
CA: available and consistent, unless there is a partition.
AP: a reachable replica provides service even in a partition, but may be inconsistent.
CP: always consistent, even in a partition, but a reachable replica may deny service if it is unable to agree with the others (e.g., quorum).
Dr. Eric Brewer, "CAP theorem"

Fischer-Lynch-Paterson (1985)
No consensus can be guaranteed in an asynchronous system in the presence of failures.
Intuition: a "failed" process may just be slow, and can rise from the dead at exactly the wrong time.
Consensus may occur recognizably, rarely or often.
[Figure: a network partition causing a "split brain".]

Getting precise about CAP #1
What does consistency mean?
Consistency = the ability to implement an atomic data object served by multiple nodes. Requires linearizability of operations on the object.
– A total order for all operations, consistent with causal order, observed by all nodes.
– Also called one-copy serializability (1SR): the object behaves as if there is only one copy, with operations executing in sequence.
– Also called atomic consistency ("atomic").
Brewer's Conjecture and the Feasibility of Consistent, Available, Partition-Tolerant Web Services. Seth Gilbert, Nancy Lynch. MIT manuscript.

Getting precise about CAP #2
Availability = every request received by a node must result in a response.
– Every algorithm used by the service must terminate.
Network partition = the network loses or delays arbitrary runs of messages between arbitrary pairs of nodes.
– Asynchronous network model assumed.
– The service consists of at least two nodes.
Brewer's Conjecture and the Feasibility of Consistent, Available, Partition-Tolerant Web Services. Seth Gilbert, Nancy Lynch. MIT manuscript.

Getting precise about CAP #3
Theorem. It is impossible to implement an atomic data object that is available in all executions.
– Proof. Partition the network. A write on one side is not seen by a read on the other side, but the read must return a response.
Corollary. This applies even if messages are delayed arbitrarily but no message is lost.
– Proof. The service cannot tell the difference.
Brewer's Conjecture and the Feasibility of Consistent, Available, Partition-Tolerant Web Services. Seth Gilbert, Nancy Lynch. MIT manuscript.

Getting precise about CAP #4
Atomic and partition-tolerant (CP):
– Trivial: ignore all requests.
– Or: pick a primary to execute all requests.
Atomic and available (CA):
– The multi-node case is not discussed in the paper, but one can use the primary approach.
– This needs a terminating algorithm to select the primary; it does not require a quorum if no partition can occur. Left as an exercise.
Brewer's Conjecture and the Feasibility of Consistent, Available, Partition-Tolerant Web Services. Seth Gilbert, Nancy Lynch. MIT manuscript.

Getting precise about CAP #5
Available and partition-tolerant (AP):
– Trivial: ignore writes; return the initial value for reads.
– Or: make a best effort to propagate writes among the replicas; reads return any value at hand.
Brewer's Conjecture and the Feasibility of Consistent, Available, Partition-Tolerant Web Services. Seth Gilbert, Nancy Lynch. MIT manuscript.

Quorum
How do we build a replicated store that is atomic (consistent) always, and available unless there is a partition?
– Read and write operations complete only if they are acknowledged by some minimum number (a quorum) of replicas.
– Set the quorum sizes so that any read set is guaranteed to overlap with any write set.
– This property is sufficient to ensure that any read "sees" the value of the "latest" write.
– So it ensures consistency, but it must deny service if "too many" replicas fail or become unreachable.
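A minimal sketch of quorum reads and writes over N replicas, assuming per-value version numbers so a read can pick the newest acknowledged copy; the overlap condition R + W > N is what guarantees every read quorum intersects every write quorum. Class and parameter names here are illustrative.

    class QuorumStore:
        def __init__(self, n_replicas, r, w):
            assert r + w > n_replicas, "read and write quorums must overlap"
            self.replicas = [dict() for _ in range(n_replicas)]   # key -> (version, value)
            self.r, self.w = r, w
            self.version = 0                                       # issued by the single logical writer

        def write(self, key, value, reachable):
            # Succeeds only if at least W reachable replicas acknowledge the write.
            if len(reachable) < self.w:
                raise RuntimeError("not enough replicas reachable: write denied")
            self.version += 1
            for i in reachable[:self.w]:
                self.replicas[i][key] = (self.version, value)

        def read(self, key, reachable):
            # Succeeds only if at least R replicas respond; returns the newest version seen.
            if len(reachable) < self.r:
                raise RuntimeError("not enough replicas reachable: read denied")
            answers = [self.replicas[i].get(key, (0, None)) for i in reachable[:self.r]]
            return max(answers)[1]

    store = QuorumStore(n_replicas=3, r=2, w=2)      # R + W = 4 > N = 3
    store.write("home", 5, reachable=[0, 1, 2])      # write lands on replicas 0 and 1
    print(store.read("home", reachable=[1, 2]))      # 5: read quorum {1,2} overlaps write quorum {0,1} at replica 1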

Quorum consistency [Keith Marzullo]
[Figure: a simple majority quorum. With n = 2f + 1 replicas, read and write quorums of rv = wv = f + 1 tolerate f failures and always intersect.]

Weighted quorum voting [Keith Marzullo]
Any write quorum must intersect every other quorum: rv + wv = n + 1.