1
PRODUCK: Scalable Cluster Deduplication. Davide Frey, Anne-Marie Kermarrec, Konstantinos Kloudas. INRIA Rennes, France.
2
Motivation: The volume of stored data increases exponentially, and the services provided on top of it are highly dependent on that data.
3
Motivation: Traditional solutions combine data in tarballs and store them on tape. Pros: cost efficient. Cons: low throughput.
Cost comparison (source: www.backupworks.com):
  Acquisition cost: Tape $407,000 | Disk $1,620,000
  Operational cost: Tape $205,000 | Disk $573,000
  Total cost:       Tape $612,000 | Disk $2,193,000
4
Deduplication Store data only once and replace duplicates with references. 4
5
Deduplication Store data only once and replace duplicates with references. 5 file1
6
Deduplication file1 file2 6 Store data only once and replace duplicates with references.
7
Deduplication 7 file1 file2 Store data only once and replace duplicates with references.
8
Deduplication 8 file1 file2 Store data only once and replace duplicates with references.
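To make the mechanism concrete, here is a minimal, hypothetical sketch of a content-addressed store (illustrative only, not the system described in these slides): each unique chunk is stored once under its content hash, and files keep only lists of references.

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: each unique chunk is kept exactly once."""

    def __init__(self):
        self.chunks = {}  # content hash -> chunk bytes
        self.files = {}   # file name -> list of chunk-hash references

    def put(self, name, chunks):
        refs = []
        for chunk in chunks:
            h = hashlib.sha1(chunk).hexdigest()
            self.chunks.setdefault(h, chunk)  # a duplicate chunk is not stored again
            refs.append(h)
        self.files[name] = refs

    def get(self, name):
        return b"".join(self.chunks[h] for h in self.files[name])

store = DedupStore()
store.put("file1", [b"AAAA", b"BBBB"])
store.put("file2", [b"BBBB", b"CCCC"])  # b"BBBB" already stored: only a reference is added
assert store.get("file2") == b"BBBBCCCC"
print(len(store.chunks), "unique chunks stored for 4 chunk references")  # -> 3
```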
9
Challenges Single-node deduplication systems. – Compact indexing structures. – Efficient duplicate detection. 9
10
Challenges: Single-node deduplication systems. – Compact indexing structures. – Efficient duplicate detection. Cluster-based solutions. – Single-machine tradeoffs. – Deduplication vs. load balancing. We focus on cluster-based deduplication systems.
11
Example: Deduplication vs. Load Balancing. Setup: clients, a Coordinator, and Storage Nodes A, B, C, D. A client wants to store a file.
12
Example: Deduplication vs. Load Balancing. The client sends the file to the Coordinator.
13
Example: Deduplication vs. Load Balancing. The Coordinator computes the overlap between the contents of the file and those of each Storage Node: A 10%, B 30%, C 60%, D 0%.
14
Example: Deduplication vs. Load Balancing (overlaps: A 10%, B 30%, C 60%, D 0%). To maximize DEDUPLICATION, the new file should go to node C.
15
Example: Deduplication vs. Load Balancing (overlaps: A 10%, B 30%, C 60%, D 0%). To achieve LOAD BALANCING, the new file should go to node D.
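A toy illustration of the tension, using the overlaps from the example; the load figures are assumptions added for illustration only.

```python
# Overlap of the new file with each node's contents (from the example above) and
# each node's current load (hypothetical figures, as a fraction of capacity used).
overlap = {"A": 0.10, "B": 0.30, "C": 0.60, "D": 0.00}
load    = {"A": 0.40, "B": 0.50, "C": 0.70, "D": 0.10}

best_for_dedup   = max(overlap, key=overlap.get)  # "C": avoids storing the most duplicates
best_for_balance = min(load, key=load.get)        # "D": the least loaded node
print(best_for_dedup, best_for_balance)           # the two criteria disagree
```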
16
Goal: Scalable Cluster Deduplication. Good data deduplication: maximize the deduplication achieved; ideally, that of a single-node system. Load balancing: minimize the load imbalance; ideally, the imbalance ratio equals 1.
17
Goal: Scalable Cluster Deduplication. Good data deduplication: maximize the deduplication achieved; ideally, that of a single-node system. Load balancing: minimize the load imbalance; ideally, the imbalance ratio equals 1. Scalability: minimize memory usage at the Coordinator.
18
Goal: Scalable Cluster Deduplication. Good data deduplication: maximize the deduplication achieved; ideally, that of a single-node system. Load balancing: minimize the load imbalance; ideally, the imbalance ratio equals 1. Scalability: minimize memory usage at the Coordinator. Good throughput: minimize CPU/memory usage at the Coordinator.
19
State of the art: divided into stateless and stateful approaches. Stateless: assign data to nodes regardless of previous assignment decisions. Stateful: keep state for each storage node and assign data to nodes based on their current state.
20
State of the art: comparison (Memory / CPU / Deduplication). Stateless: low memory and CPU cost, but poorer deduplication. Stateful: better deduplication, but high memory and CPU cost.
21
State of the art: comparison (Memory / CPU / Deduplication). Stateless: low memory and CPU cost, but poorer deduplication. Stateful: better deduplication, but high memory and CPU cost. Goal: make stateful approaches viable.
22
PRODUCK architecture. Client: splits the file into chunks of data; stores and retrieves data. Storage Nodes: store the chunks; provide directory services. Coordinator: assigns chunks to nodes; keeps the system load balanced.
23
Client: chunking. Chunks: produced with content-based chunking techniques; the basic deduplication unit. Super-chunks: groups of consecutive chunks; the basic routing and storage unit (sketched below).
24
Client: chunking. Split the file into chunks.
25
Client: chunking. Organize the chunks into super-chunks.
26
Client: chunking 26
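A rough sketch of both units, under assumed parameters: the window size, boundary mask, and chunk-size bounds are illustrative, and an MD5 of a sliding window stands in for a real rolling fingerprint such as Rabin.

```python
import hashlib

def chunk(data, window=16, mask=0x3F, min_size=32, max_size=4096):
    """Toy content-based chunking: cut where the hash of a sliding window matches
    a pattern, so boundaries depend on content rather than on byte offsets."""
    chunks, start = [], 0
    for i in range(window, len(data)):
        if i - start < min_size:
            continue
        h = int.from_bytes(hashlib.md5(data[i - window:i]).digest()[:4], "big")
        if (h & mask) == 0 or i - start >= max_size:
            chunks.append(data[start:i])
            start = i
    chunks.append(data[start:])
    return chunks

def super_chunks(chunks, group=4):
    """Group consecutive chunks: the basic routing and storage unit."""
    return [chunks[i:i + group] for i in range(0, len(chunks), group)]

data = bytes(range(256)) * 64            # 16 KB of sample data
sc = super_chunks(chunk(data), group=4)  # list of super-chunks, each a list of chunks
```

Because boundaries depend on content rather than offsets, inserting bytes into a file only shifts nearby chunk boundaries; most chunks keep their hashes and still deduplicate.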
27
PRODUCK architecture. Client: splits the file into chunks of data; stores and retrieves data. Storage Nodes: store the chunks; provide directory services. Coordinator: assigns chunks to nodes; keeps the system load balanced.
28
Coordinator: goals Estimate the overlap between a super-chunk and the chunks of a given node. – Maximize deduplication. Equally distribute storage load among nodes. – Guarantee a load balanced system. 28
29
Coordinator: our contributions Novel chunk overlap estimation. – Based on probabilistic counting—PCSA [Flajolet et al. 1985, Michel et al. 2006]. – Never used before in storage systems. Novel load balancing mechanism. – Operating at chunk-level granularity. – Improving co-localization of duplicate chunks. 29
30
Coordinator: Overlap Estimation. Main observation: we do not need the exact matches, only an estimate of the size of the overlap. PCSA provides: compact set descriptors; accurate intersection estimation; computational efficiency.
31
Coordinator: Overlap Estimation. Original set of chunks: Chunk 1, Chunk 2, Chunk 3, Chunk 4, Chunk 5.
32
Coordinator: Overlap Estimation. Each chunk is hashed to a bit string (bit positions 7..0): 10111001, 10111011, 10111000, 10111010, 10111000.
33
Coordinator: Overlap Estimation. For each hash y, p(y) = min{k : bit(y, k) = 1}, the position of its least-significant 1 bit; the corresponding bit of the BITMAP (positions 0..7) is set, giving 11010000.
34
Coordinator: Overlap Estimation. Intuition: P(bitmap[0] = 1) = 1/2, P(bitmap[1] = 1) = 1/4, P(bitmap[2] = 1) = 1/8, and so on.
35
Coordinator: Overlap Estimation. l = 2: the index of the lowest 0 bit in the BITMAP 11010000.
36
Coordinator: Overlap Estimation. sizeOf(A) = 2^l / 0.77 = 2^2 / 0.77 ≈ 5.19.
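A minimal sketch of the single-bitmap Flajolet-Martin estimator these slides walk through; the hash function and hash width are illustrative choices, and the zero-hash guard is only there to keep the toy code safe.

```python
import hashlib

PHI = 0.77351  # Flajolet-Martin correction constant (the 0.77 in the slide)

def rho(x):
    """Position of the least-significant 1 bit of x (p(y) in the slides)."""
    return 63 if x == 0 else (x & -x).bit_length() - 1

def bitmap_of(chunks):
    """Build one Flajolet-Martin bitmap from a set of chunks."""
    bitmap = 0
    for c in chunks:
        h = int.from_bytes(hashlib.sha1(c).digest()[:8], "big")
        bitmap |= 1 << rho(h)
    return bitmap

def estimate(bitmap):
    """Cardinality estimate: 2^l / 0.77, where l is the index of the lowest 0 bit."""
    l = 0
    while bitmap & (1 << l):
        l += 1
    return (2 ** l) / PHI

chunks = [b"chunk1", b"chunk2", b"chunk3", b"chunk4", b"chunk5"]
print(round(estimate(bitmap_of(chunks)), 2))  # a rough estimate of 5
```

A single bitmap gives only a coarse estimate; PCSA keeps many such bitmaps and averages their l values (stochastic averaging), which is what sharply reduces the error.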
37
Coordinator : Overlap Estimation Intersection Cardinality Estimation ?
38
Coordinator: Overlap Estimation Intersection Cardinality Estimation ?
39
Coordinator: Overlap Estimation. Intersection cardinality: |A ∩ B| = |A| + |B| - |A ∪ B|. Union cardinality: BITMAP(A ∪ B) is the bitwise OR of BITMAP(A) and BITMAP(B), e.g. 11011000 OR 01001100 = 11011100.
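Reusing `estimate` from the sketch above, the intersection size follows by inclusion-exclusion, with the union bitmap obtained by a bitwise OR.

```python
def union_bitmap(a, b):
    """The union of two chunk sets corresponds to the bitwise OR of their bitmaps."""
    return a | b

def intersection_estimate(a, b):
    """|A intersect B| ~= |A| + |B| - |A union B|, applied to the bitmap estimates."""
    return estimate(a) + estimate(b) - estimate(union_bitmap(a, b))
```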
40
Coordinator: Overlap Estimation 40 PCSA set cardinality estimation. Set intersection estimation. Selection of best storage node.
41
In Practice. The client creates the bitmaps of each super-chunk (8192 vectors, 64 KB total): a trade-off between efficiency and error. The Coordinator stores only a bitmap for each Storage Node.
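Under these two points, the coordinator-side bookkeeping could look roughly as below; this is a sketch reusing `intersection_estimate` from above, with the single-bitmap simplification and the node names from the earlier example.

```python
node_bitmaps = {"A": 0, "B": 0, "C": 0, "D": 0}  # one compact bitmap per Storage Node

def assign(node, super_chunk_bitmap):
    """After routing a super-chunk to a node, fold its bitmap into the node's bitmap."""
    node_bitmaps[node] |= super_chunk_bitmap

def node_overlap(node, super_chunk_bitmap):
    """Estimated number of the super-chunk's chunks already stored on the node."""
    return intersection_estimate(node_bitmaps[node], super_chunk_bitmap)
```

Because the per-node state is a fixed-size bitmap, the Coordinator's memory stays constant regardless of how many chunks the cluster stores, which is what makes this stateful approach scale.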
42
Coordinator: our contributions Novel chunk overlap estimation. – Based on probabilistic counting—PCSA [Flajolet et al. 1985, Michel et al. 2006]. – Never used before in storage systems. Novel load balancing mechanism. – Operating at chunk-level granularity. – Improving co-localization of duplicate chunks. 42
43
Load Balancing. Existing solution: choose Storage Nodes that do not exceed the average load by a percentage threshold.
44
Load Balancing: Problems. Existing solution: choose Storage Nodes that do not exceed the average load by a percentage threshold. Problem: too aggressive, especially when little data is stored in the system.
45
Load Balancing: our solution. Bucket-based storage quota management: measure storage space in fixed-size buckets; the Coordinator grants buckets to nodes one by one; no node can exceed the least loaded by more than a maximum allowed bucket difference.
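A sketch of that policy, reusing `node_overlap` from the coordinator sketch above. `MAX_BUCKET_DIFF` and the bucket counts are illustrative assumptions, and for simplicity the sketch grants a bucket per routed super-chunk, whereas in the slides a node asks for a new bucket only when its current one fills up.

```python
MAX_BUCKET_DIFF = 2  # max allowed difference (in buckets) from the least loaded node

buckets_granted = {"A": 0, "B": 0, "C": 0, "D": 0}  # buckets granted so far per node

def may_grow(node):
    """A node may receive a new bucket only if it stays within MAX_BUCKET_DIFF
    buckets of the least loaded node."""
    return buckets_granted[node] + 1 <= min(buckets_granted.values()) + MAX_BUCKET_DIFF

def route(super_chunk_bitmap):
    """Send the super-chunk to the allowed node with the biggest estimated overlap,
    falling back to the second biggest, and so on."""
    by_overlap = sorted(buckets_granted,
                        key=lambda n: node_overlap(n, super_chunk_bitmap),
                        reverse=True)
    return next(n for n in by_overlap if may_grow(n))  # the least loaded node always qualifies
```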
46
Bucket-based storage quota management. Bucket 46 Load Balancing: our solution
47
Bucket-based storage quota management. Bucket Can I get a new Bucket? 47 Load Balancing: our solution
48
Bucket-based storage quota management. Bucket Yes, you can. 48 Load Balancing: our solution
49
Bucket-based storage quota management. Bucket Yes, you can. 49 Load Balancing: our solution
50
Bucket-based storage quota management. Bucket 50 Load Balancing: our solution
51
Bucket-based storage quota management. Bucket 51 Load Balancing: our solution
52
Bucket-based storage quota management. Bucket Can I get a new Bucket? 52 Load Balancing: our solution
53
Bucket-based storage quota management. Bucket NO you cannot! 53 Load Balancing: our solution
54
Bucket-based storage quota management. Bucket Searching for the second biggest overlap. 54 Load Balancing: our solution
55
Bucket-based storage quota management. Bucket 55 Load Balancing: our solution
56
Contribution Summary Novel chunk overlap estimation. – Based on probabilistic counting—PCSA [Flajolet et al. 1985, Michel et al. 2006]. – Never used before in storage systems. Novel load balancing mechanism. – Operating at chunk-level granularity. – Improving co-localization of duplicate chunks. 56
57
Evaluation: Datasets. Two real-world workloads: Wikipedia and Images. Two competitors [Dong et al. 2011]: MinHash and BloomFilter.
58
Evaluation: Competitors. MinHash: stateless. – Use the minimum hash from a super-chunk as its fingerprint. – Assign super-chunks to bins using the mod(#bins) operator. – Initially assign bins to nodes randomly and re-assign bins to nodes when unbalanced.
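A rough sketch of this stateless routing, under the assumption that SHA-1 stands in for whatever chunk hash the original scheme uses.

```python
import hashlib

def min_hash(super_chunk):
    """Fingerprint of a super-chunk: the minimum hash over its chunks."""
    return min(int.from_bytes(hashlib.sha1(c).digest()[:8], "big") for c in super_chunk)

def stateless_route(super_chunk, num_bins):
    """Stateless assignment: identical content always maps to the same bin,
    independently of any previous assignment decision."""
    return min_hash(super_chunk) % num_bins

print(stateless_route([b"chunk1", b"chunk2", b"chunk3"], num_bins=64))
```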
59
Evaluation: Competitors. BloomFilter: stateful. – The Coordinator keeps a Bloom filter for each of the Storage Nodes. – If a node deviates more than 5% from the average load, it is considered overloaded.
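A minimal, hypothetical Bloom-filter-per-node sketch; the filter size and hash construction are illustrative choices, not those of Dong et al.

```python
import hashlib

class BloomFilter:
    """Small Bloom filter; sizes and hashing are illustrative only."""

    def __init__(self, bits=1 << 20, hashes=4):
        self.bits, self.hashes = bits, hashes
        self.array = bytearray(bits // 8)

    def _positions(self, item):
        for i in range(self.hashes):
            h = int.from_bytes(hashlib.sha1(bytes([i]) + item).digest()[:8], "big")
            yield h % self.bits

    def add(self, item):
        for p in self._positions(item):
            self.array[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.array[p // 8] & (1 << (p % 8)) for p in self._positions(item))

# One filter per Storage Node; the overlap of a super-chunk with a node is
# approximated by counting membership hits (subject to false positives).
node_filters = {"A": BloomFilter(), "B": BloomFilter()}

def bloom_overlap(node, super_chunk):
    return sum(chunk in node_filters[node] for chunk in super_chunk)
```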
60
Evaluation: Metrics. Deduplication, load balancing, overall performance, and throughput. ED and TD are normalized to the performance of a single-node system to ease comparison.
61
Evaluation: Effective Deduplication (Wikipedia and Images workloads). 32 nodes: Wikipedia 7%, Images 16%. 64 nodes: Wikipedia 16%, Images 21%.
62
Evaluation: Throughput (Wikipedia and Images workloads). 32 nodes: Wikipedia 11X, Images 13X. 64 nodes: Wikipedia 16X, Images 21X.
63
Evaluation: Throughput (Wikipedia and Images workloads). Memory: 64 KB for PRODUCK vs. 9.6 bits/chunk (168 GB for 140 TB/node). 32 nodes: Wikipedia 11X, Images 13X. 64 nodes: Wikipedia 16X, Images 21X.
64
Evaluation: Load Balancing (Wikipedia and Images workloads).
65
To Take Away. Lessons learned from cluster-based deduplication: – Stateful: good deduplication but impractical. – Stateless: practical but poorer deduplication. Useful concepts for SocioPlug: – PCSA: data placement. – Load balancing: bucket-based.
66