Coding for Atomic Shared Memory Emulation
Viveck R. Cadambe (MIT)
Joint work with Prof. Nancy Lynch (MIT), Prof. Muriel Médard (MIT) and Dr. Peter Musial (EMC)
Erasure Coding for Distributed Storage
– Locality, repair bandwidth, caching and content distribution [Gopalan et al. 2011, Dimakis-Godfrey-Wu-Wainwright 10, Wu-Dimakis 09, Niesen-Ali 12]
– Queueing theory [Ferner-Medard-Soljanin 12, Joshi-Liu-Soljanin 12, Shah-Lee-Ramchandran 12]
This talk: the theory of distributed computing, and the considerations that arise when the stored data changes.
Consistency: when the value changes, readers should get the “latest” version, alongside failure tolerance, low storage costs, and fast reads and writes.
Shared Memory Emulation – History
Atomic (consistent) shared memory [Lamport 1986]: a cornerstone of distributed computing and multi-processor programming.
Emulation over distributed storage systems: the replication-based “ABD” algorithm [Attiya-Bar-Noy-Dolev 95], winner of the 2011 Dijkstra Prize; Amazon's Dynamo key-value store [DeCandia et al. 2008].
Costs of emulation (this talk): a low-cost coding-based algorithm, with an analysis of communication and storage costs [C-Lynch-Medard-Musial 2014], preprint available.
Atomicity [Lamport 86], also known as linearizability [Herlihy-Wing 90]: although writes and reads may overlap in time, every operation must appear to take effect at a single instant between its invocation and its response. In particular, a read that begins after a write completes must return that write's value or a later one.
[Figure: read/write timelines contrasting an atomic execution with a non-atomic one, in which a read that starts after a write completes returns the old value]
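To make the definition concrete, here is a minimal brute-force check (my sketch, not from the talk) for single-register histories: it searches for a total order of the operations that respects real-time precedence and in which every read returns the latest preceding write.

```python
from itertools import permutations

def is_atomic(history):
    """history: list of (kind, value, start, end), kind 'w' or 'r'.
    True iff some total order of the operations respects real-time
    precedence and makes every read return the latest prior write."""
    for order in permutations(history):
        # Real-time precedence: if a finished before b started, a must
        # not be placed after b in the candidate linearization.
        if any(a[3] < b[2] for i, b in enumerate(order) for a in order[i + 1:]):
            continue
        # Register semantics: replay the candidate order.
        current, valid = None, True
        for kind, value, _, _ in order:
            if kind == "w":
                current = value
            elif value != current:
                valid = False
                break
        if valid:
            return True
    return False

# An atomic history, and one that is not: in the second, the read
# begins after the write of 1 completes yet returns the stale value 0.
print(is_atomic([("w", 1, 0, 2), ("r", 1, 3, 4)]))  # True
print(is_atomic([("w", 1, 0, 2), ("r", 0, 3, 4)]))  # False
```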
Distributed Storage Model
– Client–server architecture: write clients, read clients, and servers; nodes can fail (the number of server failures is limited).
– Point-to-point reliable links (arbitrary delay); nodes do not know whether other nodes have failed.
– An operation should not have to wait for others to complete.
Requirements and cost measures
Design write, read and server protocols such that:
– atomicity holds, and
– concurrent operations complete without waiting.
Communication overhead: the number of bits sent over the links.
Storage overhead: the (worst-case) server storage cost.
The ABD algorithm (sketch)
Quorum set: every majority of server nodes; any two quorum sets intersect in at least one node. The algorithm works as long as at least one quorum set is available.
Write: send a time-stamped value to every server; return after receiving acks from a quorum.
Read: send a read query; wait for responses from a quorum; send the latest value received back to the servers; return that value after receiving acks from a quorum.
Servers: store the latest value received and send an ack; respond to read queries with the stored value.
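The following single-process Python walk-through (my illustration, not the authors' code) traces the two phases of each operation; network delivery and quorum collection are simulated by simply taking the first QUORUM servers, and the (counter, client id) time-stamp scheme is an assumption.

```python
# Illustrative ABD sketch with N in-process "servers". Real ABD is
# asynchronous; here a quorum's responses are simulated by taking the
# first QUORUM servers.

N = 5
QUORUM = N // 2 + 1          # majorities: any two intersect in >= 1 server

# Each server keeps only the (timestamp, value) pair with the largest timestamp.
servers = [{"ts": (0, 0), "val": None} for _ in range(N)]

def server_update(s, ts, val):
    if ts > s["ts"]:         # lexicographic comparison of (counter, client_id)
        s["ts"], s["val"] = ts, val
    return "ack"

def abd_write(client_id, counter, val):
    ts = (counter, client_id)            # client ids break timestamp ties
    acks = [server_update(s, ts, val) for s in servers[:QUORUM]]
    assert len(acks) >= QUORUM           # return once a quorum has acked

def abd_read():
    # Phase 1 (query): get (ts, val) from a quorum and pick the latest.
    ts, val = max((s["ts"], s["val"]) for s in servers[:QUORUM])
    # Phase 2 (write-back): propagate the latest pair to a quorum, so a
    # later read cannot return an older value -- the key to atomicity.
    for s in servers[:QUORUM]:
        server_update(s, ts, val)
    return val

abd_write(client_id=1, counter=1, val="x")
print(abd_read())  # x
```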
The ABD algorithm (summary)
The ABD algorithm ensures atomic operations, and termination of operations is guaranteed as long as a majority of the server nodes do not fail.
Implication: a networked distributed storage system can be used as a shared memory.
Replication is used to ensure failure tolerance.
Performance Analysis
[Table: storage, read-communication and write-communication costs of ABD; f denotes the number of server failures tolerated]
A lower-communication-cost algorithm appears in [Fan-Lynch 03].
Shared Memory Emulation – Erasure Coding
Prior erasure-coded emulations: [Hendricks-Ganger-Reiter 07, Dutta-Guerraoui-Levy 08, Dobre et al. 13, Androulaki et al. 14].
This work: a new algorithm with a formal analysis of costs, outperforming previous algorithms in certain aspects; previous algorithms incur infinite worst-case storage costs and large communication costs.
Erasure Coded Shared Memory
Example: a (6,4) MDS code. The value is recoverable from any 4 coded packets, and each coded packet is 1/4 the size of the value: smaller packets, smaller overheads.
New constraint: a reader now needs 4 packets with the same time-stamp.
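A toy (6,4) MDS code, sketched below under the assumption that symbols live in the prime field GF(257) (deployed systems typically use Reed-Solomon over GF(2^8)): the four data symbols are the coefficients of a degree-3 polynomial, the six packets are its evaluations, and any four evaluations interpolate the polynomial back.

```python
# Toy (6,4) MDS code over GF(257). Coded packet i is the polynomial's
# value at x = i + 1; any K = 4 packets recover the K data symbols.

P, N, K = 257, 6, 4

def encode(data):
    assert len(data) == K
    return [sum(c * pow(x, i, P) for i, c in enumerate(data)) % P
            for x in range(1, N + 1)]

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def decode(points):
    """Lagrange interpolation from any K pairs (x, y)."""
    assert len(points) == K
    data = [0] * K
    for j, (xj, yj) in enumerate(points):
        num, den = [1], 1                      # basis polynomial for point j
        for m, (xm, _) in enumerate(points):
            if m != j:
                num = poly_mul(num, [-xm % P, 1])
                den = den * (xj - xm) % P
        scale = yj * pow(den, P - 2, P) % P    # Fermat inverse mod prime P
        for i, c in enumerate(num):
            data[i] = (data[i] + scale * c) % P
    return data

value = [10, 20, 30, 40]
packets = encode(value)
# Drop any two packets (here, those of servers 0 and 3) and still decode.
survivors = [(i + 1, y) for i, y in enumerate(packets) if i not in (0, 3)]
print(decode(survivors[:K]) == value)  # True
```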
Coded Shared Memory – Quorum set-up
Quorum set: every subset of 5 server nodes; any two quorum sets intersect in 4 nodes. The algorithm works as long as at least one quorum set is available.
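The choice of quorum size follows from a simple counting argument, made explicit here (the slide states only the conclusion):

```latex
|Q_1 \cap Q_2| \;=\; |Q_1| + |Q_2| - |Q_1 \cup Q_2| \;\ge\; 2q - N,
\qquad
N = 6,\; q = 5 \;\Longrightarrow\; |Q_1 \cap Q_2| \;\ge\; 2 \cdot 5 - 6 \;=\; 4 \;=\; k.
```

So any two quorums share at least k coded packets with a common time-stamp, exactly what decoding the (6,4) MDS code requires.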
Coded Shared Memory – Why is it challenging?
Servers store multiple versions, and a reader must assemble enough coded packets of a single version.
Challenges: reveal elements to readers only when enough elements have been propagated; discard old versions safely.
Solutions: write in multiple phases; store all the write-versions concurrent with a read.
Coded Shared Memory – Protocol overview
Write: send a time-stamped coded symbol to every server; send a finalize message after getting acks from a quorum; return after receiving acks from a quorum.
Read: send a read query; wait for time-stamps from a quorum; send a request with the latest time-stamp to the servers; decode and return the value after receiving symbols from a quorum.
Servers: store the coded symbol, keeping the latest δ codeword symbols and deleting older ones, and send an ack. On receiving a finalize message, set the finalize flag for that time-stamp and send an ack. Respond to a read query with the latest finalized time-stamp. On a read request, finalize the requested time-stamp and respond with the codeword symbol if it exists, else send an ack.
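A sequential sketch of this flow (my simplification; the actual algorithm is asynchronous, failure-tolerant, and specified precisely in the paper). It reuses encode() and decode() from the MDS sketch above; DELTA and the dictionary-based server state are illustrative choices.

```python
# Sequential sketch of the coded protocol's flow. Reuses encode() and
# decode() from the (6,4) MDS sketch above.

N, K, DELTA = 6, 4, 2
QUORUM = 5                   # any two 5-subsets of 6 servers share 4 nodes

# Per-server state: time-stamp -> [coded symbol, finalized?].
store = [dict() for _ in range(N)]

def server_put(i, ts, symbol):
    store[i][ts] = [symbol, False]
    for old in sorted(store[i])[:-(DELTA + 1)]:   # keep latest DELTA+1 versions
        del store[i][old]

def coded_write(ts, value):
    packets = encode(value)
    for i in range(QUORUM):                       # phase 1: propagate symbols
        server_put(i, ts, packets[i])
    for i in range(QUORUM):                       # phase 2: finalize the tag
        if ts in store[i]:
            store[i][ts][1] = True

def coded_read():
    # Phase 1: ask a quorum for its latest finalized time-stamp.
    ts = max(t for i in range(QUORUM)
             for t, (_, fin) in store[i].items() if fin)
    # Phase 2: gather symbols for ts; any K of them decode the value.
    points = [(i + 1, store[i][ts][0]) for i in range(N) if ts in store[i]]
    return decode(points[:K])

coded_write(ts=1, value=[10, 20, 30, 40])
print(coded_read())  # [10, 20, 30, 40]
```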
The protocol uses an (N, k) MDS code, where N is the number of servers. It ensures atomic operations, and termination of operations is guaranteed as long as:
– the number of failed nodes is at most (N − k)/2, and
– the number of writes concurrent with a read is smaller than δ.
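For the running (6,4) example, the failure bound works out as follows (my arithmetic):

```latex
f \;\le\; \frac{N - k}{2} \;=\; \frac{6 - 4}{2} \;=\; 1,
\qquad
N - f \;=\; 6 - 1 \;=\; 5 \;=\; \text{quorum size},
```

so with one server down, a quorum of five servers remains available and operations still terminate.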
Performance Comparisons
[Table: storage, read-communication and write-communication costs of ABD versus our algorithm]
N denotes the number of nodes, f the number of failures tolerated, and δ the maximum number of writes concurrent with a read.
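As a back-of-the-envelope illustration (my numbers, under the assumption that each server keeps at most δ + 1 coded versions, each 1/k the size of the value):

```latex
\underbrace{N}_{\text{ABD: one full copy per server}}
\quad\text{vs.}\quad
\underbrace{(\delta + 1)\,\tfrac{N}{k}}_{\text{coded: up to } \delta + 1 \text{ versions of size } 1/k}
\qquad
N = 6,\ k = 4,\ \delta = 1:\quad 6 \ \text{vs.}\ 3.
```

The coded cost grows with δ, which is why moderate client activity is the favorable regime.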
Proof Steps
After every operation terminates:
– there is a quorum of servers holding the codeword symbol;
– there is a quorum of servers holding the finalize label;
– because every pair of quorums intersects in k servers, readers can decode the value.
When a codeword symbol is deleted at a server, every operation that needs that time-stamp has already terminated (or the concurrency bound δ is violated).
Main Insights
Significant savings in network-traffic overhead, reflecting the classical gain of erasure coding over replication.
(New insight) Storage overhead depends on client activity: it is proportional to the number of writes concurrent with a read, and beats classical techniques for moderate client activity.
Storage Costs
[Figure: storage overhead as a function of the number of writes concurrent with a read, for ABD and for our algorithm]
What is the fundamental cost curve?
Future Work – Many open questions
Refinements of our algorithm (ongoing): more robustness to client node failures.
Information-theoretic bounds on costs; new coding schemes.
Finer network models: erasure channels, different topologies, wireless channels.
Finer source models: correlations across versions.
Dynamic networks: an interesting replication-based algorithm appears in [Gilbert-Lynch-Shvartsman 03]; study the costs in terms of network dynamics.