Evaluating and Optimizing Indexing Schemes for a Cloud-based Elastic Key-Value Store
Apeksha Shetty and Gagan Agrawal, Ohio State University
David Chiu, Washington State University
CCGrid 2011, May 23-26, 2011, Newport Beach, CA
A Truncated Intro to Cloud Computing
‣ Pay-As-You-Go computing
‣ We focus on IaaS, e.g., Amazon EC2
‣ Elasticity
Elasticity and Distributed Hash Tables (DHT)
‣ IaaS Cloud: applications can incrementally scale and relax resource requirements on demand (elasticity)
‣ Distributed Hash Tables: manage distributed storage over many commodity nodes
‣ Increasingly popular for providing massive, reliable storage; e.g., Facebook's Cassandra, Amazon's Dynamo, many P2P networks
Elasticity and DHT (cont.)
‣ Clearly, DHTs can benefit from elasticity by harnessing more or fewer nodes as needed.
‣ But the performance of a DHT can be greatly affected by the indexing mechanism on each cooperating node.
‣ We evaluate the effects of three popular indexes: B+Trees, extendible hashing, and Bloom filters.
Outline
‣ Intro to Distributed Hash Tables
‣ Three Indexing Schemes
‣ Performance Evaluation
‣ Conclusion
Anatomy of a DHT using Consistent Hashing
[Figure: a hash ring over the key space 0 to r-1, with storage nodes A and B placed at ring positions such as 8 and 75.]
‣ Nodes: data on each node is further indexed using one of the indexing schemes.
Anatomy of a DHT using Consistent Hashing (cont.)
[Figure: the same hash ring, highlighting the buckets between node positions.]
‣ Buckets: each bucket points to at most one storage node.
Querying over the DHT: Clockwise Successor
[Figure: cache requests are hashed as k mod r onto the ring and routed clockwise to the successor node (A at 8, B at 75); a minimal lookup sketch follows.]
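The clockwise-successor lookup can be sketched in a few lines of Python. This is only a minimal illustration of the routing rule shown in the figure, not the system's implementation; the ring size r = 100 and the node positions are assumptions taken loosely from the figure labels.

```python
import bisect

class ConsistentHashRing:
    """Keys hash onto a ring of size r and are served by the clockwise
    successor node (minimal sketch; positions are illustrative)."""

    def __init__(self, r, nodes):
        self.r = r                          # key space is [0, r)
        self.nodes = dict(nodes)            # ring position -> node name
        self.positions = sorted(self.nodes)

    def lookup(self, key):
        h = key % self.r                    # "Cache Requests (k mod r)"
        i = bisect.bisect_left(self.positions, h)
        if i == len(self.positions):        # wrap around past the last node
            i = 0
        return self.nodes[self.positions[i]]

# Ring roughly matching the figure: nodes A and B at positions 8 and 75.
ring = ConsistentHashRing(r=100, nodes={8: "A", 75: "B"})
print(ring.lookup(25))   # -> 'B' (clockwise successor of position 25)
print(ring.lookup(90))   # -> 'A' (wraps around the end of the ring)
```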
Overflow and Splitting Nodes in the DHT
[Figure: a high-traffic region of the ring drives node A into overflow.]
Overflow and Splitting Nodes in the DHT (cont.)
[Figure: a new node C is allocated from the IaaS cloud (EC2) and placed on the ring; cache requests continue to arrive as k mod r.]
‣ Migrate nearly half of the keys hashing into the (76, 8] range to node C.
After Split and Incremental Scaling
[Figure: cache requests (k mod r) are now served by nodes A, B, and C on the ring; a split-and-migrate sketch follows.]
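Building on the ConsistentHashRing sketch above, the split-and-migrate step might look like the following. The capacity value, the midpoint placement rule, and the key lists are illustrative assumptions rather than the paper's policy; the point is simply that the new node takes over part of the overloaded arc and roughly half of the keys move to it.

```python
CAPACITY = 4   # keys a node may hold before it overflows (tiny, for the demo)

def split_node(ring, store, full_node, new_name):
    """Place a new node halfway along the arc owned by `full_node` and
    migrate the keys that now hash to the new node's side of the arc."""
    pos = next(p for p, n in ring.nodes.items() if n == full_node)
    idx = ring.positions.index(pos)
    prev_pos = ring.positions[idx - 1]             # predecessor on the ring
    arc_len = (pos - prev_pos) % ring.r            # length of the owned arc
    new_pos = (prev_pos + arc_len // 2) % ring.r   # midpoint of the arc
    ring.nodes[new_pos] = new_name
    ring.positions = sorted(ring.nodes)
    store[new_name] = []
    keep = []
    for k in store[full_node]:                     # re-hash the stored keys;
        if ring.lookup(k) == new_name:             # roughly half now belong
            store[new_name].append(k)              # to the new node
        else:
            keep.append(k)
    store[full_node] = keep

store = {"A": [80, 85, 95, 2], "B": [20, 40]}      # A's keys fall in (75, 8]
ring = ConsistentHashRing(r=100, nodes={8: "A", 75: "B"})
if len(store["A"]) >= CAPACITY:                    # node A has overflowed
    split_node(ring, store, "A", "C")              # C: a freshly launched EC2 node
print(store)   # {'A': [95, 2], 'B': [20, 40], 'C': [80, 85]}
```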
Outline
‣ Intro to Distributed Hash Tables
‣ Three Indexing Schemes
‣ Performance Evaluation
‣ Conclusion
Indexing Schemes per Node
‣ On each DHT node, an index is used to provide fast access to the key-value pairs.
‣ We evaluate three popular schemes: B+Trees, extendible hashing, and Bloom filters.
‣ Each scheme may impact split and migration time significantly.
B+Trees
‣ Always balanced
‣ Leaf level is sorted in ascending order on the search key
B+Trees (cont.)
‣ Key point queries: O(log n)
‣ Fast key range queries [k_low, k_high] (a stand-in sketch follows below):
   ‣ Point search for k_low: O(log n)
   ‣ Linearly sweep the leaf nodes in order until k_high
   ‣ Significant when splitting/migrating!
‣ Popular option for DHT nodes
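The range-query pattern above is what makes B+Trees attractive when a contiguous key range must be migrated. The snippet below is a stand-in that uses a plain sorted list and binary search rather than a real B+Tree, but it follows the same access pattern: an O(log n) search for k_low, then a linear sweep of the sorted "leaf level" up to k_high.

```python
import bisect

def range_query(leaf_keys, k_low, k_high):
    """Binary search stands in for the O(log n) descent to k_low, then the
    sorted 'leaf level' is swept in order until k_high is passed."""
    i = bisect.bisect_left(leaf_keys, k_low)
    result = []
    while i < len(leaf_keys) and leaf_keys[i] <= k_high:
        result.append(leaf_keys[i])   # contiguous sweep: exactly what a
        i += 1                        # bulk split/migration needs
    return result

leaves = sorted([3, 8, 15, 21, 42, 57, 63, 75, 81, 99])
print(range_query(leaves, 20, 70))    # -> [21, 42, 57, 63]
```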
Extendible Hashing
‣ Each bucket holds M keys
‣ A key is hashed into a bucket by examining the least significant bit(s) of the incoming key
Extendible Hashing (cont.)
‣ Key point queries: O(1)
‣ Slow key range queries [k_low, k_high]:
   ‣ Hashing disrupts the natural key ordering
   ‣ Finding every key in the range requires a separate hash lookup
‣ The directory can double repeatedly, so the data structure may grow exponentially (a minimal sketch follows below)
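For concreteness, here is a minimal extendible-hashing sketch: the directory is indexed by the least significant bits of the key, each bucket holds at most M keys, and an overflowing bucket splits, doubling the directory when necessary. The bucket size and splitting details are illustrative, not the EH100/EH300/EH500 configurations from the experiments.

```python
class Bucket:
    def __init__(self, local_depth):
        self.local_depth = local_depth
        self.keys = []

class ExtendibleHash:
    """Directory indexed by the low `global_depth` bits of an integer key;
    each bucket holds at most M keys (minimal sketch)."""

    def __init__(self, M=4):
        self.M = M
        self.global_depth = 1
        self.directory = [Bucket(1), Bucket(1)]   # index = key & (2^depth - 1)

    def _index(self, key):
        return key & ((1 << self.global_depth) - 1)

    def lookup(self, key):                        # O(1) point query
        return key in self.directory[self._index(key)].keys

    def insert(self, key):
        bucket = self.directory[self._index(key)]
        if key in bucket.keys:
            return
        bucket.keys.append(key)
        if len(bucket.keys) > self.M:
            self._split(bucket)

    def _split(self, bucket):
        if bucket.local_depth == self.global_depth:
            self.directory = self.directory + self.directory   # double it
            self.global_depth += 1
        bucket.local_depth += 1
        sibling = Bucket(bucket.local_depth)
        mask = 1 << (bucket.local_depth - 1)      # newly distinguishing bit
        for i, b in enumerate(self.directory):    # repoint half of the slots
            if b is bucket and i & mask:
                self.directory[i] = sibling
        old_keys, bucket.keys = bucket.keys, []
        for k in old_keys:                        # redistribute the keys
            self.directory[self._index(k)].keys.append(k)
        for b in (bucket, sibling):               # rare: one side may still
            if len(b.keys) > self.M:              # overflow, so split again
                self._split(b)

eh = ExtendibleHash(M=2)
for k in (5, 12, 9, 7, 31, 18):
    eh.insert(k)
print(eh.lookup(9), eh.lookup(10))   # -> True False
print(eh.global_depth)               # -> 2 (directory has doubled once)
```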
Bloom Filters
‣ Highly space-efficient, but probabilistic
‣ Often useful as a secondary index for efficiently determining key existence
‣ k hash functions are applied to a key; if any of the corresponding bits is 0/false, the key is definitely not in the set
Bloom Filters (cont.)
‣ Same analysis as Extendible Hashing, but must also deal with false positives (a minimal sketch follows below).
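A plain (non-counting) Bloom filter sketch is shown below; the bit-array size, the number of hash functions, and the double-hashing scheme are illustrative assumptions rather than the CBF configuration used in the experiments.

```python
import hashlib

class BloomFilter:
    """k hash functions set k bits per inserted key.  A lookup that finds
    any of those bits clear proves the key is absent; all bits set means
    'probably present' (false positives possible, false negatives not)."""

    def __init__(self, m_bits=1024, k=3):
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits)      # one byte per bit, for clarity

    def _positions(self, key):
        # Derive k positions from one digest via double hashing.
        d = hashlib.sha256(str(key).encode()).digest()
        h1 = int.from_bytes(d[:8], "big")
        h2 = int.from_bytes(d[8:16], "big") | 1
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, key):
        for p in self._positions(key):
            self.bits[p] = 1

    def might_contain(self, key):
        return all(self.bits[p] for p in self._positions(key))

bf = BloomFilter()
for key in (8, 25, 75):
    bf.add(key)
print(bf.might_contain(25))     # -> True  (it was inserted)
print(bf.might_contain(1234))   # -> almost certainly False; a True here
                                #    would be a false positive
```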
Outline
‣ Intro to Distributed Hash Tables
‣ Three Indexing Schemes
‣ Performance Evaluation
‣ Conclusion
Experimental Configuration
Application: Service Caching
‣ Shoreline Extraction
‣ The Shoreline Extraction service checks the DHT for a cached copy
‣ 65K distinct point queries, submitted randomly
‣ The DHT starts cold, with 1 EC2 node
‣ Amazon EC2 Small Instances (single-core 1.2 GHz, 1.7 GB memory, 32-bit), Ubuntu 9.10 Linux
Elastic Feasibility of DHT (Using B+Tree)
[Figure: execution timeline; a migration occurs at each EC2 node allocation.]
Elastic Feasibility of DHT (Using B+Tree), continued
[Figure: continuation of the same timeline; a migration occurs at each EC2 node allocation.]
Experimental Configuration
‣ The same experiment is run a minimum of 3 times (we report the average)
‣ Over the following indexing configurations:
   ‣ B+Tree
   ‣ Extendible Hashing, bucket size = 100 keys (EH100)
   ‣ Extendible Hashing, bucket size = 300 keys (EH300)
   ‣ Extendible Hashing, bucket size = 500 keys (EH500)
   ‣ Bloom Filter (CBF)
Execution Time (Querying Rate: 50 requests/sec)
[Figure: total execution time for each indexing configuration at 50 requests/sec.]
Execution Time (Querying Rate: 50 requests/sec), continued
[Figure: continuation of the execution-time results at 50 requests/sec.]
Node Split/Migration (50 requests/sec)
[Figure: node split and migration times per indexing configuration; 7 migrations occurred throughout execution.]
Execution Time (Querying Rate: 250 requests/sec)
[Figure: total execution time for each indexing configuration at 250 requests/sec.]
Node Split/Migration (250 requests/sec)
[Figure: node split and migration times per indexing configuration; 15 migrations occurred throughout execution.]
A Small Optimization
‣ We can anticipate node splits and pre-launch instances speculatively
‣ Launch a new instance when the total number of keys in any node reaches a threshold T (a sketch follows below)
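A sketch of the pre-launching heuristic follows. Everything here is an assumption for illustration: the capacity, the threshold fraction, and the launch_instance placeholder stand in for an actual EC2 instance request. The idea is simply to hide the instance boot latency behind normal traffic so the new node is ready by the time the split happens.

```python
CAPACITY = 10_000              # keys a node holds before it must split (made up)
T = int(0.8 * CAPACITY)        # pre-launch threshold (made-up fraction of capacity)

def launch_instance():
    """Placeholder for the slow cloud request (e.g., booting an EC2 node);
    returns a handle to the instance being started."""
    print("requesting a new instance...")
    return object()

pending = {}                   # node name -> instance already being booted

def after_insert(node, key_counts):
    """Run after each key insertion into `node`."""
    if key_counts[node] >= T and node not in pending:
        pending[node] = launch_instance()          # speculative pre-launch
    if key_counts[node] >= CAPACITY:
        instance = pending.pop(node, None) or launch_instance()
        # ...split `node` and migrate roughly half of its keys to `instance`...

counts = {"A": T}              # node A has just reached the threshold
after_insert("A", counts)      # triggers the speculative launch for A
```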
Pre-Launching (Query Rate: 50 requests/sec)
[Figure: split/migration overhead with pre-launching enabled; 7 migrations occurred throughout execution.]
Experimental Summary
‣ In general:
   ‣ As expected, B+Tree performs best for splitting and migration on average
   ‣ Bloom Filters should be avoided, but can be useful as a space-constrained secondary index
Experimental Summary (cont.)
‣ If the environment is point-query heavy, an optimally configured Extendible Hashing index can outperform B+Trees
   ‣ But it is hard to configure in practice
‣ Our instance-startup speculation heuristic can improve the node-splitting overhead by 4x and 14x for EH300 and Bloom Filters, respectively
Related Work
‣ Web caching, proxy caching: memcached, CRISP Proxy (Rabinovich et al.)
‣ DHTs for P2P file sharing: Chord (Stoica et al.)
‣ Other key-value data stores: Dynamo (Amazon), Cassandra (Facebook)
Thank You, and Acknowledgments
‣ Questions and Comments:
   ‣ David Chiu - david.chiu@wsu.edu
   ‣ Gagan Agrawal - agrawal@cse.ohio-state.edu
‣ This project was supported by: