1
Separating Data and Metadata for Robustness and Scalability Yang Wang University of Texas at Austin
2
Goal: A better storage system. Data is important, data keeps growing, and data is accessed in different ways.
3
Challenge: achieve multiple goals simultaneously Robustness – Durable and available despite failures Scalability – Thousands of machines or more Efficiency – Good performance with a reasonable cost
4
Solution Separating data and metadata
5
My work: Gnothi and Salus (design), Exalt (evaluation).
6
My work: Gnothi targets small-scale systems and crash failures.
7
My work: Salus targets large-scale systems and arbitrary failures.
8
How to design? Problem: Stronger protection -> Higher cost Key observation: – Data: big (4K to several MBs) – Metadata: small (tens of bytes); can validate data Solution – Strong protection for metadata -> Robustness – Minimal replication for data -> Scalability and Efficiency
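As a rough illustration of why this split pays off, the sketch below compares the storage cost of replicating everything 2f+1 times against replicating only metadata 2f+1 times and data f+1 times. The block size, volume size, and f value are assumptions chosen for the example.

```python
# Illustrative cost comparison (assumed numbers): replicate everything 2f+1
# times vs. replicate metadata 2f+1 times and data only f+1 times.
f = 2                      # tolerated failures
block_size = 4 * 1024      # 4 KB data block (slides: 4K to several MB)
meta_size = 24             # per-block metadata (slides: tens of bytes)
blocks = 1_000_000         # hypothetical volume size

full = (2 * f + 1) * blocks * block_size
split = (f + 1) * blocks * block_size + (2 * f + 1) * blocks * meta_size

print(f"full replication : {full / 2**30:.1f} GiB")    # ~19.1 GiB
print(f"split replication: {split / 2**30:.1f} GiB")   # ~11.6 GiB
# Strongly protecting the metadata costs about 1% extra storage, while the
# data itself drops from 2f+1 copies to f+1.
```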
9
How to evaluate? Exalt: evaluate large-scale storage systems on small to medium platforms.
10
Outline Gnothi: Efficient and Available Storage Replication – Small scale; tolerate crash faults and timing errors Salus: Robust and Scalable Block Store – Large scale; tolerate arbitrary failures Exalt: Evaluate large-scale storage systems
11
Resolving a long-standing trade-off. Efficiency – write to f+1 nodes and read from 1 node. Robustness – Availability: aggressive timeout for failure detection; Consistency: a read returns the data of the latest write. (Compared across Synchronous Primary Backup, Asynchronous Replication, and Gnothi.)
14
Gnothi Overview Gnothi resolves the trade-off … … but only for block storage, meaning … – A fixed number of fixed-size blocks. – A request reads/writes a single block. Key ideas: – Don’t insist that nodes have identical state. – A node knows which blocks are fresh/stale. Gnothi Seauton – Know yourself
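Concretely, the interface Gnothi assumes is just a fixed array of blocks; a minimal sketch (the names and the 4 KB block size are assumptions, not Gnothi's actual API) might look like this:

```python
from abc import ABC, abstractmethod

BLOCK_SIZE = 4096          # fixed block size; 4 KB is an assumption for the sketch
NUM_BLOCKS = 1 << 20       # fixed number of blocks

class BlockStore(ABC):
    """A virtual disk: a fixed number of fixed-size blocks,
    where each request reads or writes exactly one block."""

    @abstractmethod
    def write(self, block_no: int, data: bytes) -> None:
        """Overwrite block block_no; len(data) must equal BLOCK_SIZE."""

    @abstractmethod
    def read(self, block_no: int) -> bytes:
        """Return the data of the latest completed write to block_no."""
```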
15
Separating Data and Metadata. Metadata size: 24 bytes per block (blocks are 4K to 1M). Metadata fields: blockNo, client ID, ... (Diagram: clients send write requests, carrying data plus metadata, over a LAN to a cluster of 2f+1 nodes.)
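To make the 24-byte figure concrete, here is a hypothetical packing of such a metadata entry; blockNo and client ID come from the slide, while the per-client sequence number is an assumed third field that lets replicas order writes:

```python
import struct

# Hypothetical layout of a per-write metadata entry (24 bytes total):
# block number, client ID, and an assumed per-client sequence number.
META_FMT = "<QQQ"   # three 8-byte unsigned integers

def pack_meta(block_no: int, client_id: int, seq_no: int) -> bytes:
    return struct.pack(META_FMT, block_no, client_id, seq_no)

def unpack_meta(buf: bytes):
    return struct.unpack(META_FMT, buf)   # -> (block_no, client_id, seq_no)
```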
16
Rest of Gnothi Why is the trade-off challenging? How does Gnothi resolve the trade-off? How well does Gnothi perform?
17
Why is the trade-off challenging? How to handle a timeout? Can we have both f+1 replication and a short timeout? Synchronous Primary Backup (Remus, HBase, Hypervisor, …): either continue with 1 node or use a conservative timeout. Asynchronous Replication (Paxos, …): send to 2f+1 nodes and wait for f+1 ACKs.
18
Why is the trade-off challenging? Continue with 1 node? Not safe. Wait? Not live. Switch to another node (partial replication with f+1 copies: Cheap Paxos, ZZ, …)? However, the state of the newly enlisted node may be incomplete. One solution: on switch, copy all data to the new node – bad availability.
19
Rest of Gnothi Why is the trade-off challenging? How does Gnothi resolve the trade-off? How well does Gnothi perform?
20
Gnothi: Nodes can be incomplete. A new write will overwrite the block anyway. A read can be processed correctly as long as a node knows which blocks are stale. Recovery can be processed correctly as long as a node knows which version of a block is the latest. (Diagram: on a read of block 2, a node that lacks the current version says so; on recovery it fetches the latest version of block 2 from a peer.)
21
How does Gnothi work? How to perform writes and reads efficiently when no failures occur? - Write to f+1 and read from 1 How to continue processing requests during failures? - Still write to f+1 and read from 1 How to recover the failed node efficiently?
22
How to perform writes and reads efficiently when no failures occur? Maintain a single bit for each block: “do I have the current data?” Data is replicated f+1 times; the metadata ensures a read can be processed correctly. (Diagram: write and read paths from the client across Nodes 1–3; some nodes hold both data and metadata, others only metadata. Cf. Gaios: Bolosky et al., NSDI 2011.)
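A minimal sketch of that bookkeeping, assuming a simple in-memory stand-in for the disk (this is not Gnothi's actual code), is shown below; the point is that a metadata-only replica can still answer correctly by admitting it does not hold the current data.

```python
class StaleBlock(Exception):
    """Raised when this replica does not hold the current data for a block."""

class ReplicaState:
    """Tracks, per block, whether this replica holds the current data."""

    def __init__(self, num_blocks: int):
        self.have_current = bytearray(num_blocks)   # one flag per block
        self.data = {}                              # block_no -> bytes (stand-in for the disk)

    def apply_write(self, block_no: int, data=None):
        # Data replicas receive the bytes; metadata-only replicas just learn
        # that their old copy (if any) is now stale.
        if data is not None:
            self.data[block_no] = data
            self.have_current[block_no] = 1
        else:
            self.have_current[block_no] = 0

    def read(self, block_no: int) -> bytes:
        if not self.have_current[block_no]:
            raise StaleBlock(block_no)   # the client retries at a replica that has the data
        return self.data[block_no]
```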
23
Load-balanced Data Distribution. Divide the virtual disk's space into multiple slices and evenly distribute the slices to different preferred nodes; each node also keeps reserve storage. (Diagram: Gnothi block drivers connect over a LAN to Nodes 1–3, each holding preferred and reserve storage for different slices.)
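One simple way to realize such an even distribution, assuming a round-robin placement (Gnothi's real placement policy may differ), is to derive each slice's f+1 preferred nodes from its index:

```python
def preferred_nodes(slice_id: int, num_nodes: int, f: int):
    """Return the f+1 nodes that store this slice's data.
    All num_nodes (= 2f+1) replicas still store the slice's metadata."""
    start = slice_id % num_nodes
    return [(start + i) % num_nodes for i in range(f + 1)]

# With f = 1 and 3 nodes, slices rotate their data over node pairs:
# slice 0 -> nodes [0, 1], slice 1 -> nodes [1, 2], slice 2 -> nodes [2, 0], ...
```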
25
How to continue processing requests during failures? Writes: do not wait for the data or metadata transfer. Reads: metadata is replicated 2f+1 times, which allows a node to process requests correctly.
26
Catch-up problem in recovery. Can I catch up? Recovery speed vs. execution speed – traditional systems have the catch-up problem.
27
How to recover the failed node efficiently? Separate metadata and data recovery – Phase 1: metadata recovery – fast. Phase 2: data recovery – slow, in the background.
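A hedged sketch of the two-phase recovery follows; all method names are illustrative, and the key point is that the node starts serving again after the cheap metadata phase while data trickles in behind it.

```python
def recover(node, peers):
    # Phase 1: metadata recovery -- small and fast. After this the node
    # knows, for every block, whether its local copy is current, so it can
    # safely serve reads and accept new writes.
    for peer in peers:
        node.merge_metadata(peer.fetch_metadata_since(node.last_applied))
    node.start_serving()

    # Phase 2: data recovery -- slow, runs in the background at a tunable
    # rate. Only blocks marked stale need to be fetched, and a fresh write
    # to a stale block removes it from the to-do list.
    for block_no in node.stale_blocks():
        node.store(block_no, peers[0].fetch_block(block_no))
```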
29
Rest of Gnothi Why is the trade-off challenging? How does Gnothi resolve the trade-off? How well does Gnothi perform?
30
Evaluation. Throughput – Compare to G’, a system similar to Gaios (Bolosky et al., NSDI 2011). – Sequential/random reads and writes. – f=1 (Gnothi-3, G’-3) and f=2 (Gnothi-5, G’-5). – Block sizes 4K, 64K, and 1M. Failure recovery – Compare Gnothi to G’ and Cheap Paxos. – How long does recovery take? – What is the client throughput during recovery?
31
Gnothi achieves higher throughput. Gnothi can achieve 40%-64% more write throughput and scalable read throughput.
32
Higher throughput during recovery. Gnothi does not block for long on failures and achieves 100%-200% more throughput during recovery. (Chart: kill/restart events; Cheap Paxos blocks for the data copy, while Gnothi does not block and completes recovery at almost the same time.)
33
Gnothi can always catch up. Recovery speed is tunable, and in Gnothi the recovering node can always catch up with the others (whereas in G’ it cannot).
34
Gnothi conclusion. Separating data and metadata – Replication: improves efficiency and ensures availability during failures. – Recovery: ensures catch-up.
35
Outline Gnothi: Efficient and Available Storage Replication – Small scale; tolerate crash faults and timing errors Salus: Robust and Scalable Block Store – Large scale; tolerate arbitrary failures Exalt: Evaluate large-scale storage systems
36
Problem: Not enough machines. In practice – WAS (Windows Azure Storage) at Microsoft: 60PB – HDFS at Facebook: 4000 servers – … In research – Salus: 100 servers – COPS: 300 servers – Spanner: 200 servers. Research should go beyond practice.
37
Public testbeds Utah Emulab: 588 machines CMU Emulab: 1024 machines TACC (Texas Advanced Computing Center) – 6400 machines, but not enough storage Amazon EC2 – Cost $1400 for our Salus experiment (108 servers)
38
Solution 1: Extrapolation. Measure with a small cluster and predict the bottleneck. Assumption: resource consumption grows linearly with scale. (Chart: resource utilization vs. scale; e.g., at 100 nodes CPU is at 10% and network at 5%, so extrapolation predicts the system can scale to 1,000 nodes.)
39
Solution 1: Extrapolation (continued). Problem: the linear-growth assumption may not be true.
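To make the extrapolation concrete, the sketch below uses the slide's example numbers; the closing comment notes why the linearity assumption is the weak point.

```python
# Naive linear extrapolation: with 100 nodes, CPU is at 10% and the network
# at 5%; assume per-node utilization stays constant as the cluster grows and
# find where the first resource saturates.
measured = {"cpu": 0.10, "network": 0.05}
measured_nodes = 100

def predicted_limit(util, nodes):
    per_node = {r: u / nodes for r, u in util.items()}
    return int(min(1.0 / p for p in per_node.values()))

print(predicted_limit(measured, measured_nodes))   # -> 1000 (CPU saturates first)
# The flaw: per-node utilization in real systems is rarely linear in cluster
# size (all-to-all traffic, central coordinators, ...), so the prediction can
# be badly wrong.
```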
40
Solution 2: Stub. Build stub components to simulate real components. Problem: a stub component can be as complex as the original one.
41
Solution 3: Simulation
42
Exalt: Evaluate 10,000 nodes on 100 machines. Run real code. Use fewer resources. Seems impossible? – In general, yes. – For storage systems with big data, we can achieve it.
43
Key insight: I/O is the bottleneck, but the content of the data does not matter. Solution: – Choose a highly compressible data pattern. – Build emulated I/O devices that compress the data. (Diagram: an emulated network compresses a run of 1 million zeros on send and decompresses it on receive.)
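A minimal sketch of such an emulated device, assuming the injected data is the all-zero pattern (this is not Exalt's actual implementation), stores zero-filled writes as lengths instead of bytes:

```python
class EmulatedDisk:
    """Stores zero-filled payloads as lengths instead of real bytes, so a
    node can 'write' far more data than it can physically hold."""

    def __init__(self):
        self.zero_runs = {}      # offset -> length of an all-zero write
        self.raw = {}            # offset -> bytes that were not all zero (metadata, etc.)

    def write(self, offset: int, buf: bytes) -> None:
        if buf.count(0) == len(buf):
            self.zero_runs[offset] = len(buf)    # compress: keep only the length
        else:
            self.raw[offset] = buf               # keep metadata verbatim

    def read(self, offset: int) -> bytes:
        if offset in self.zero_runs:
            return bytes(self.zero_runs[offset])  # decompress: regenerate the zeros
        return self.raw[offset]
```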
44
Challenge: The system may add metadata. The system may split data (possibly nondeterministically). Existing approaches are either inaccurate or inefficient on such mixed patterns.
45
Goals: Cannot lose metadata. High compression ratio. Computationally efficient. Works with the mixed pattern.
46
Existing approaches. David (FAST 11): discards file content – loses metadata since it is mixed with data. Gzip, etc.: not efficient. Writing all zeros and scanning for zeros: still not efficient enough.
47
Solution: Tardis. Key: we cannot choose the metadata, but we can choose the data – make data distinguishable from metadata: a magic sequence of bytes that does not appear in metadata, followed by an integer giving the number of bytes left.
48
Tardis compression Search for magic sequence Retrieve number of bytes left (Nleft) Jump Nleft bytes Search for magic sequence again
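A hedged sketch of that scan is below; the magic value and the 8-byte length header are assumptions about the framing, and the mismatch check plus binary-search fallback described on the next slide is omitted.

```python
import struct

MAGIC = b"\xd3\x1f\x8a\x42\x7c\x09\xee\x55"   # hypothetical 8-byte magic sequence
HDR = struct.Struct("<Q")                     # number of data bytes left in the chunk

def tardis_compress(buf: bytes):
    """Return a list of ('meta', bytes) and ('data', skipped_length) records."""
    out, pos = [], 0
    while pos < len(buf):
        hit = buf.find(MAGIC, pos)
        if hit < 0:                           # no more generated data: the rest is metadata
            out.append(("meta", buf[pos:]))
            break
        if hit > pos:                         # metadata preceding the chunk: keep verbatim
            out.append(("meta", buf[pos:hit]))
        n_left = HDR.unpack_from(buf, hit + len(MAGIC))[0]
        out.append(("data", n_left))          # record the length, discard the payload
        pos = hit + len(MAGIC) + HDR.size + n_left   # jump N_left bytes, then search again
    return out
```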
49
Problems. How to find a magic sequence? – A randomly chosen 8-byte sequence works for HDFS. – Run the system, record a trace, and analyze it. What if the system inserts metadata into the data? – After jumping, check whether the skipped bytes match the expected data pattern. – If not, binary search until a match is found.
50
Use Exalt. Emulated devices have inaccurate performance. If one or a few nodes are the bottleneck – run those nodes in real mode – run the other nodes in emulation mode.
51
Use Exalt. What if the behavior depends on a large number of nodes? – E.g., 99th-percentile latency and parallel recovery. Need to model the behavior of emulated devices, e.g., mapping the number of bytes to disk/network latency and energy consumption.
52
Implementation Bytecode Instrumentation (BCI) Emulated devices: – Disk (transparent) – Network (transparent) – Memory (need to modify code)
53
Preliminary results on HDFS
55
Proposed work. Apply “separating data and metadata” to active storage in Salus. Complete Exalt: – Incorporate latency modeling. – Apply Exalt to more applications. – Complete the Tardis implementation. Multiple-RSM communication – Join the project led by Manos – Not part of my thesis.
56
Publications "Robustness in the Salus scalable block store". Y. Wang, M. Kapritsos, Z. Ren, P. Mahajan, J. Kirubanandam, L. Alvisi, and M. Dahlin, in NSDI 2013. "All about Eve: Execute-Verify Replication for Multi-Core Servers". M. Kapritsos, Y. Wang, V. Quema, A. Clement, L. Alvisi, and M. Dahlin, in OSDI 2012. "Gnothi: Separating Data and Metadata for Efficient and Available Storage Replication". Y. Wang, L. Alvisi, and M. Dahlin, in USENIX ATC 2012. "UpRight Cluster Services". A. Clement, M. Kapritsos, S. Lee, Y. Wang, L. Alvisi, M. Dahlin, T. Riche, in SOSP 2009.
57
Backup slides
58
Cost of Gnothi. Higher write latency: – In a LAN, the major latency comes from the disk. – Write metadata and data together to disk. – Rethink-the-Sync-style writes should also help. Loss of generality: – Gnothi is designed only for block storage.
59
How does Gnothi compare to GFS/HDFS/xFS/…? Those systems have a metadata server and multiple data servers. Gnothi updates metadata for every write and checks metadata for every read; they do that at a coarse granularity. – Advantages: high scalability. – Disadvantages: weaker consistency guarantees, append-only interface, worse availability, …
60
Efficient Recovery. Can I catch up? Recovery speed vs. execution speed – traditional systems have the catch-up problem.
61
Is timing error a real threat? It can cause data inconsistency. Reasons: – Network partitions – Server overload – … A real concern in practical systems. HBASE-2238: “Because HDFS and ZK are partitioned (in the sense that there's no communication between them) and there may be an unknown delay between acquiring the lock and performing the operation on HDFS you have no way of knowing that you still own the lock, like you say.”
62
Interface & Models. Disk interface – A fixed number of fixed-size blocks – A request can read/write a single block – Linearizable reads and writes. Asynchronous model: no maximum delay – Omission failures only – Always safe – Live when the network is synchronous.
63
Architecture. Fully replicated metadata; partially replicated data – load balancing across preferred storage and reserve storage. (Diagram: the virtual disk's slices mapped onto each node's metadata, preferred, and reserve storage.)
64
Data can be stored outside its preferred replicas. (Diagram: after a network problem, Replica 0 does not have the current data; only Replica 2 has the current data, and the metadata records this.)
65
Gnothi: Available and Efficient. Availability: same as asynchronous replication – safe regardless of timing errors; can use aggressive timeouts. (Diagram: applications use Gnothi block drivers, which talk to Gnothi storage servers over a LAN.)
66
Gnothi: Available and Efficient. Efficiency: – Storage/bandwidth efficiency: write to f+1 replicas. – Read efficiency: read from 1 replica.
67
Previous work cannot achieve both efficiency and availability. – Synchronous Primary Backup (Remus, Hypervisor, HBase, …): uses conservative timeouts. – Preferred Quorum (Cheap Paxos, ZZ, …): uses cold backups. – Asynchronous Replication (Paxos, …): uses 2f+1 replicas. – Gaios: scalable reads. – Gnothi: separating data and metadata achieves both.
68
Resolving a long-standing trade-off. Efficiency – Write to f+1 replicas and read from 1 replica. Availability – Aggressive timeout for failure detection. Consistency – A read always returns the data of the latest write. (Synchronous Primary Backup vs. Asynchronous Replication vs. Gnothi, this talk.)
69
Catch-up problem in recovery. Recovery speed vs. execution speed – traditional systems have the catch-up problem. Traditional approaches fetch missing data before processing new requests; the recovering node cannot catch up, so the system has to block or throttle.
70
Separate Metadata and Data Recovery. Metadata recovery: fast – the recovering node can process new requests after metadata recovery. Data recovery: slow, in the background – once it completes, the reserve storage can be released.
72
Gnothi ensures catch-up. Gnothi fetches only missing metadata before processing new requests (traditional approaches fetch missing data), so the recovering node is never left behind after metadata recovery.
73
How does Gnothi work? How to perform writes and reads efficiently when no failures occur? How to continue processing requests during failures? How to recover the failed node efficiently?