1
Causal Consistency Without Dependency Check Messages
Willy Zwaenepoel
2
INTRODUCTION
3
Geo-replicated data stores
– Geo-replicated data centers
– Full replication between data centers
– Data in each data center partitioned
4
The Consistency Dilemma
Strong Consistency
– synchronous replication
– all replicas share the same consistent view
– sacrifice availability
Causal Consistency
– asynchronous replication
– all replicas eventually converge
– sacrifice consistency, but … replication respects causality
Eventual Consistency
– asynchronous replication
– all replicas eventually converge
– sacrifice consistency
5
The cost of causal consistency
6
Can we close the throughput gap?
The answer is: yes, but there is a price
7
STATE OF THE ART: WHY THE GAP?
8
How is causality enforced?
Each update has associated dependencies
What are dependencies?
– metadata to establish causality relations between operations
– used only for data replication
Internal dependencies
– previous updates of the same client session
External dependencies
– updates read from other client sessions
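The sketch below is not from the paper; class and field names are assumptions. It only illustrates how a client session could accumulate the two kinds of dependencies: its own earlier writes (internal) and the versions it has read that were written by other sessions (external).

```python
# A minimal sketch (not the paper's implementation; all names are assumptions)
# of how a client session could accumulate the two kinds of dependencies.
class ClientSession:
    def __init__(self):
        self.internal_deps = []   # versions this session wrote earlier
        self.external_deps = []   # versions this session read that others wrote

    def on_read(self, key, version):
        # Reading another session's update creates an external dependency.
        self.external_deps.append((key, version))

    def on_write(self, key, new_version):
        # The new update causally depends on everything accumulated so far.
        deps = self.internal_deps + self.external_deps
        self.internal_deps = [(key, new_version)]   # later writes depend on this write
        self.external_deps = []
        return deps   # shipped as metadata alongside the replicated update
```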
9
Internal dependencies
[Figure: example of 2 users performing operations at different datacenters on the same partition. Alice: W(x = 1), then W(y = 2) at the US datacenter; Bob: R(y) → y = 0, later R(y) → y = 2 at the Europe datacenter]
10
External dependencies
[Figure: example of 3 users performing operations at datacenters on the same partition. Bob: W(x = 1) at the US datacenter; Alice: R(x) → x = 1, then W(y = x + 1); Charlie: R(y) → y = 0, later R(y) → y = 2 at the Europe datacenter]
11
How dependencies are tracked & checked
In current implementations
– COPS [SOSP ’11], ChainReaction [Eurosys ‘13], Eiger [NSDI ’13], Orbe [SOCC ’13]
DepCheck(A) – “Do you have A installed yet?”
[Figure: a client issues Read(A), Read(B), Write(C, A+B) at the US datacenter; when the write on C is replicated, the Europe datacenter sends DepCheck(A) and DepCheck(B) to the partitions holding A and B before installing C]
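Below is a minimal sketch (assumed names, not any of the cited systems' code) of this conventional install path: before a replicated update is applied, the receiving partition issues one DepCheck per dependency and defers the update until all of them are already installed.

```python
class Partition:
    """One partition of the receiving datacenter (sketch only)."""
    def __init__(self):
        self.installed = {}                      # key -> highest installed version

    def has_installed(self, key, version):
        return self.installed.get(key, 0) >= version

    def apply(self, key, version, value):
        self.installed[key] = max(self.installed.get(key, 0), version)

def try_install(update, partitions):
    """Install a replicated update, or defer it if a dependency is missing."""
    for dep_key, dep_version in update["deps"]:
        owner = partitions[hash(dep_key) % len(partitions)]
        # Conceptually a remote DepCheck(dep_key) round trip to another partition.
        if not owner.has_installed(dep_key, dep_version):
            return False                         # defer and retry later
    target = partitions[hash(update["key"]) % len(partitions)]
    target.apply(update["key"], update["version"], update["value"])
    return True
```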
12
Encoding of dependencies
COPS [SOSP ’11], ChainReaction [Eurosys ‘13], Eiger [NSDI ’13]
– “direct” dependencies
– worst case: O( reads before a write )
Orbe [SOCC ‘13]
– dependency matrix
– worst case: O( partitions )
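As a rough illustration (sizes and field names are assumptions, not the systems' actual wire formats), the two encodings differ in what a single update carries:

```python
# "Direct" dependencies: one (key, version) pair per value read since the last
# write, so the metadata grows with O( reads before a write ).
direct_deps = [("x", 12), ("y", 7), ("z", 3)]

# Dependency matrix: one counter per (datacenter, partition), so the metadata
# grows with O( partitions ) regardless of how many keys were read.
N_DCS, N_PARTITIONS = 2, 4
dep_matrix = [[0] * N_PARTITIONS for _ in range(N_DCS)]
dep_matrix[0][2] = 15   # "depends on the first 15 updates of datacenter 0, partition 2"
```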
13
The main issues
Metadata size is considerable
– for both storage and communication
Remote dependency checks are expensive
– multiple partitions are queried for each update
14
The cost of causal consistency
15
The cost of dependency check messages
16
CAUSAL CONSISTENCY WITH 0/1 DEPENDENCY CHECK MESSAGES
17
Getting rid of external dependencies
Partitions serve only fully replicated updates
– Replication Confirmation messages are broadcast periodically
External dependencies are removed
– replication information implies dependency installation
Internal dependencies are minimized
– we only track the previous write
– requires at most one remote check: zero if the write is local, one if it is remote
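The sketch below (all names are assumptions; this is not the system's code) captures the idea: an update is exposed to other clients only after every datacenter has confirmed replication, so external dependencies never need check messages, and the only remaining check is for the client's previous write.

```python
N_DATACENTERS = 3

class Update:
    def __init__(self, key, value, prev_write=None):
        self.key, self.value = key, value
        self.prev_write = prev_write     # (partition_id, version) of the client's previous write, or None
        self.confirmed_by = set()        # datacenters that have confirmed replication

def remote_dep_check(partition_id, version):
    # Placeholder for the single DepCheck round trip (hypothetical helper);
    # needed only when the previous write lives on a different partition.
    return True

class Partition01Msg:
    def __init__(self):
        self.visible = {}                # values served to clients other than the writer

    def on_replication_confirmation(self, update, from_dc):
        # Replication Confirmations are broadcast periodically by every datacenter.
        update.confirmed_by.add(from_dc)
        fully_replicated = len(update.confirmed_by) == N_DATACENTERS
        if fully_replicated and self._internal_dep_installed(update):
            # External dependencies are implied by full replication: no checks needed.
            self.visible[update.key] = update.value

    def _internal_dep_installed(self, update):
        if update.prev_write is None:
            return True                  # zero dependency-check messages
        return remote_dep_check(*update.prev_write)   # at most one
```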
18
The new replication workflow
[Figure: example of 2 users performing operations at different datacenters on the same partition. Alice: W(x = 1) at the US datacenter; Bob: R(x) → x = 0 at the Europe datacenter, then R(x) → x = 1 once the US, Europe, and Asia datacenters have exchanged the periodic Replication Confirmation]
19
Reading your own writes
Clients need not wait for the replication confirmation
– they can see their own updates immediately
– other clients’ updates are visible once they are fully replicated
Multiple logical update spaces
– Global update space (fully visible)
– Replication update space (not yet visible)
– Alice’s update space (visible to Alice)
– Bob’s update space (visible to Bob)
– …
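A minimal sketch of the read path this implies (function and variable names are assumptions): the client's own pending updates take precedence over the globally visible store.

```python
def read(key, client_pending, visible_store):
    if key in client_pending:
        return client_pending[key]      # the writer sees her own update immediately
    return visible_store.get(key)       # others see it only once fully replicated

# Usage: Alice wrote x = 1, but replication is not yet confirmed everywhere.
alice_pending = {"x": 1}
visible = {"x": 0, "y": 5}
assert read("x", alice_pending, visible) == 1   # Alice reads her own write
assert read("x", {}, visible) == 0              # Bob still sees the old value
```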
20
The cost of causal consistency
21
The price paid: update visibility increased
With the new implementation:
~ max( network latency from origin to furthest replica
       + network latency from furthest replica to destination
       + interval of replication information broadcast )
With the conventional implementation:
~ network latency from origin to destination
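For intuition, a back-of-the-envelope calculation with assumed (not measured) latencies:

```python
# Illustrative numbers only (assumptions, not measurements), in milliseconds.
origin_to_furthest = 100   # origin datacenter -> furthest replica
furthest_to_dest   = 80    # furthest replica -> destination datacenter
origin_to_dest     = 30    # origin datacenter -> destination datacenter
broadcast_interval = 50    # period of the replication-information broadcast

new_visibility  = origin_to_furthest + furthest_to_dest + broadcast_interval   # ~230 ms
conv_visibility = origin_to_dest                                               # ~30 ms
print(new_visibility, conv_visibility)
```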
22
CAUSAL CONSISTENCY WITHOUT DEPENDENCY CHECK MESSAGES ?!
23
Is it possible?
Only make an update visible when one can locally determine that no causally preceding update will become visible later at another partition
24
How to do that?
Encode causality by means of a Lamport clock
– each partition maintains its Lamport clock
– each update is timestamped with the Lamport clock
Update visible
– update.timestamp ≤ min( Lamport clocks )
Periodically compute the minimum
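A minimal sketch of this scheme under assumed names: partitions stamp updates with their Lamport clocks, a minimum over the partitions' clocks is computed periodically, and an update becomes visible once its timestamp is at or below that minimum, a purely local test that needs no dependency-check messages.

```python
class Partition0Msg:
    def __init__(self):
        self.lamport = 0
        self.pending = []      # (timestamp, key, value) not yet visible
        self.visible = {}      # values safe to serve

    def local_write(self, key, value, client_clock):
        # Lamport rule: move past anything the client has already observed.
        self.lamport = max(self.lamport, client_clock) + 1
        self.pending.append((self.lamport, key, value))
        return self.lamport

    def on_new_minimum(self, global_min):
        # global_min is computed periodically over all partitions' Lamport clocks.
        still_pending = []
        for ts, key, value in self.pending:
            if ts <= global_min:
                self.visible[key] = value    # nothing causally older can still appear
            else:
                still_pending.append((ts, key, value))
        self.pending = still_pending

def compute_minimum(partitions):
    # Periodic computation over the partitions' clocks; no per-update messages.
    return min(p.lamport for p in partitions)
```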
25
0-msg causal consistency - throughput
26
The price paid: update visibility increased
With the new implementation:
~ max( network latency from origin to furthest replica
       + network latency from furthest replica to destination
       + interval of minimum computation )
With the conventional implementation:
~ network latency from origin to destination
27
How to deal with a “stagnant” Lamport clock?
The Lamport clock stagnates if a partition receives no updates
Combine
– the Lamport clock
– a loosely synchronized physical clock
– (easy to do)
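One way to realize this combination, sketched below with assumed names (essentially a hybrid logical/physical clock): the clock never falls behind the loosely synchronized physical time, so it keeps advancing even without local updates, while the Lamport rule still preserves causality.

```python
import time

class HybridClock:
    def __init__(self):
        self.logical = 0

    def now(self):
        # Never behind the (loosely synchronized) physical clock.
        physical_ms = int(time.time() * 1000)
        self.logical = max(self.logical, physical_ms)
        return self.logical

    def tick_for_update(self, observed):
        # Lamport rule on top of the physical floor: never behind causality either.
        self.logical = max(self.logical, observed, int(time.time() * 1000)) + 1
        return self.logical
```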
28
More on loosely synchronized physical clocks
Periodically broadcast the clock
Reduces update visibility latency to
– network latency from furthest replica to destination + maximum clock skew + clock broadcast interval
29
Can we close the throughput gap?
The answer is: yes, but there is a price
The price is increased update visibility
30
Conclusion: Throughput, messages, latency

Method                    | Throughput          | # of dep. check messages                        | Update visibility
Conventional              | < Evt. consistency  | O( reads since last write ) or O( partitions )  | ~ D
0/1-msg                   | ~ Evt. consistency  | 0 or 1                                          | ~ 2 D_max
0-msg                     | ~ Evt. consistency  | 0                                               | ~ 2 D_max
0-msg + physical clock    | ~ Evt. consistency  | 0                                               | ~ D_max

(D = network latency from origin to destination; D_max = network latency to/from the furthest replica)