1
Distributed Hash Tables CPE 401 / 601 Computer Network Systems Modified from Ashwin Bharambe and Robert Morris
2
P2P Search: how do we find an item in a P2P network? Unstructured (query flooding): not guaranteed to find the item; designed for the common case of popular items. Structured (DHTs, a kind of indexing): guaranteed to find an item; designed for both popular and rare items.
3
The lookup problem, at the heart of all DHTs: somewhere on the Internet a Publisher does Put(key="title", value=file data...), and elsewhere a Client does Get(key="title"). Which of the nodes (N1 ... N6 in the figure) should the request go to?
4
Centralized lookup (Napster): the Publisher at N4 registers SetLoc("title", N4) with a central DB; the Client sends Lookup("title") to the DB and then fetches key="title", value=file data... from N4. Simple, but O(N) state at the directory and a single point of failure.
5
Flooded queries (Gnutella): the Client floods Lookup("title") to its neighbors, which forward it until it reaches the Publisher holding key="title", value=file data... Robust, but worst case O(N) messages per lookup.
6
Routed queries (Freenet, Chord, etc.): the Client's Lookup("title") is forwarded hop by hop through the overlay toward the Publisher's node holding key="title", value=file data...
7
Hash Table: name-value (key-value) pairs. E.g., "Mehmet Hadi Gunes" and mgunes@unr.edu; "http://cse.unr.edu/" and the Web page; "HitSong.mp3" and "12.78.183.2". A hash table is a data structure that associates keys with values: lookup(key) returns the value stored under key.
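To make the interface concrete, here is a minimal, non-distributed sketch of the lookup(key) -> value behavior described above, using Python's built-in dict (the helper names put and lookup are my own):

```python
table = {}

def put(key, value):
    table[key] = value

def lookup(key):
    return table.get(key)          # returns None if the key is absent

put("Mehmet Hadi Gunes", "mgunes@unr.edu")
put("HitSong.mp3", "12.78.183.2")
print(lookup("HitSong.mp3"))       # -> 12.78.183.2
```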
8
Distributed Hash Table: a hash table spread over many nodes, distributed over a wide area. Main design goals: decentralization (no central coordinator), scalability (efficient even with a large number of nodes), and fault tolerance (tolerate nodes joining and leaving).
9
Distributed hash table (DHT), layered view: a distributed application calls put(key, data) and get(key) -> data on the DHT; the DHT in turn uses a lookup service that maps lookup(key) to the IP address of the responsible node. The application may be distributed over many nodes, and the DHT distributes data storage over many nodes.
10
Distributed Hash Table: two key design decisions. How do we map names onto nodes? How do we route a request to that node?
11
Hash Functions: hashing transforms the key into a number and uses that number to index an array. Example hash function: Hash(x) = x mod 101, mapping to 0, 1, ..., 100. Challenges: what if there are more than 101 nodes? Fewer? Which nodes correspond to each hash value? What if nodes come and go over time?
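A small sketch of the slide's example Hash(x) = x mod 101; turning a string key into an integer with SHA-1 first is my own choice, not part of the slide:

```python
import hashlib

NUM_BUCKETS = 101                  # the slide's example modulus

def bucket_for(key: str) -> int:
    x = int(hashlib.sha1(key.encode()).hexdigest(), 16)   # turn the key into a number
    return x % NUM_BUCKETS                                # Hash(x) = x mod 101

print(bucket_for("HitSong.mp3"))   # some bucket in 0..100
```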
12
Consistent Hashing: a "view" is the subset of hash buckets that are visible; for this conversation a view is O(n) neighbors, but we don't need strong consistency on views. Desired features: balanced (in any one view, load is equal across buckets), smoothness (little impact on hash bucket contents when buckets are added or removed), spread (a small set of hash buckets may hold an object regardless of views), and load (across views, the number of objects assigned to a hash bucket is small).
13
Fundamental Design Idea I: consistent hashing. Map keys and nodes to a common identifier space (from 000...0 to 111...1); responsibility is assigned implicitly by position. The mapping is performed using hash functions (e.g., SHA-1), which spread nodes and keys uniformly throughout the space.
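A sketch of hashing both keys and nodes into one identifier space with SHA-1, as the slide suggests; the 160-bit space and the example node address are illustrative assumptions:

```python
import hashlib

ID_BITS = 160                                   # SHA-1 produces 160-bit identifiers

def to_id(name: str) -> int:
    digest = hashlib.sha1(name.encode()).digest()
    return int.from_bytes(digest, "big") % (2 ** ID_BITS)

object_id = to_id("HitSong.mp3")                # Hash(name)       -> object_id
node_id   = to_id("192.168.1.7:5000")           # Hash(IP_address) -> node_id
print(hex(object_id), hex(node_id))
```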
14
Fundamental Design Idea II: prefix / hypercube routing from the source toward the destination, resolving the identifier a piece at a time.
15
Definition of a DHT: a hash table supporting two operations, insert(key, value) and value = lookup(key), distributed by mapping hash buckets to nodes. Requirements: uniform distribution of buckets; the cost of insert and lookup should scale well; the amount of local state (routing table size) should scale well.
16
Chord: map nodes and keys to identifiers using randomizing hash functions and arrange them on an identifier circle. For an identifier x (e.g., 010110110), succ(x) is the first node clockwise from x (e.g., 010111110) and pred(x) is the first node counter-clockwise (e.g., 010110000).
17
Consistent Hashing construction: assign each of C hash buckets to random points on a mod 2^n circle, where n is the hash key size; map each object to a random position on the circle; the hash of an object is the closest clockwise bucket. Desired features: balanced (no bucket is responsible for a large number of objects), smoothness (adding a bucket does not cause major movement among existing buckets), spread and load (a small set of buckets lie near an object). Similar to the scheme later used in P2P Distributed Hash Tables (DHTs); in DHTs, each node has only a partial view of its neighbors.
18
Consistent Hashing: use a large, sparse identifier space (e.g., 128 bits, identifiers 0 to 2^128 - 1) represented as a ring. Hash keys uniformly into the id space (Hash(name) -> object_id) and hash nodes into the same space (Hash(IP_address) -> node_id).
19
Where to store a (key, value) pair? Map keys in a load-balanced way: store the key at one or more nodes whose identifiers are "close" to the key, where distance is measured in the id space. Advantages: even distribution, and few changes as nodes come and go.
20
Hash a key to its successor on the circular ID space: the successor is the node with the next-highest ID. Example ring: nodes N10, N32, N60, N80, N100; keys K5 and K10 map to N10; K11 and K30 to N32; K33, K40, K52 to N60; K65 and K70 to N80; K100 to N100.
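A short sketch of the successor rule on this slide's example ring (the toy identifier size is my own assumption):

```python
import bisect

ID_SPACE = 2 ** 7                       # a toy 7-bit ring (assumption)
nodes = sorted([10, 32, 60, 80, 100])   # node IDs from the slide's example

def successor(key_id: int) -> int:
    """Return the first node clockwise from key_id (the node storing the key)."""
    i = bisect.bisect_left(nodes, key_id % ID_SPACE)
    return nodes[i] if i < len(nodes) else nodes[0]   # wrap around the ring

for k in (5, 11, 33, 65, 100):
    print(f"K{k} -> N{successor(k)}")   # K5->N10, K11->N32, K33->N60, K65->N80, K100->N100
```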
21
Joins and Leaves of Nodes: maintain a circularly linked list around the ring; every node has a predecessor and a successor.
22
Successor lists ensure robust lookup: each node remembers r successors, so a lookup can skip over dead nodes to find blocks. Example with r = 3 on the ring N5, N10, N20, N32, N40, N60, N80, N99, N110: N5 remembers (10, 20, 32), N10 remembers (20, 32, 40), and so on around the ring to N110, which remembers (5, 10, 20).
23
Joins and Leaves of Nodes. When an existing node leaves: it copies its (key, value) pairs to its successor, and its predecessor is pointed at that successor so the ring stays connected. When a node joins: it does a lookup on its own id and learns the node responsible for that id; this node becomes the new node's successor, and the new node learns that node's predecessor, which becomes the new node's predecessor.
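A rough sketch of these join and leave steps on a ring of linked nodes; this is not the full Chord stabilization protocol, and the class, helper, and method names are my own:

```python
def between(x, a, b):
    """True if x lies in the ring interval (a, b], with wrap-around."""
    return (a < x <= b) if a < b else (x > a or x <= b)

class Node:
    def __init__(self, node_id):
        self.id = node_id
        self.pred = self.succ = self            # a one-node ring points to itself
        self.store = {}                         # key_id -> value

    def find_successor(self, key_id):
        n = self
        while not between(key_id, n.id, n.succ.id):
            n = n.succ                          # naive O(n) walk around the ring
        return n.succ

    def join(self, bootstrap):
        self.succ = bootstrap.find_successor(self.id)   # lookup on my own id
        self.pred = self.succ.pred
        self.pred.succ = self
        self.succ.pred = self
        for k in [k for k in self.succ.store if between(k, self.pred.id, self.id)]:
            self.store[k] = self.succ.store.pop(k)      # take over my keys

    def leave(self):
        self.succ.store.update(self.store)      # hand my pairs to my successor
        self.pred.succ = self.succ
        self.succ.pred = self.pred
```

For example, Node(60).join(existing_node) finds its place via a lookup on id 60, and leave() splices the node out while handing its pairs to its successor.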
24
Nodes coming and going cause small changes: only the mapping of keys assigned to the node that comes or goes is affected.
25
How to find the nearest node? We need to find the closest node both to determine who should store a (key, value) pair and to direct a future lookup(key) query to that node. Strawman solution: walk through the circular linked list of nodes in the ring, giving O(n) lookup time with n nodes. Alternative solution: jump further around the ring using a "finger" table of additional overlay links.
26
Links in the overlay topology trade off the number of hops against the number of neighbors, e.g., log(n) for both, where n is the number of nodes. For example, keep overlay links 1/2, 1/4, 1/8, ... of the way around the ring; each hop then traverses at least half of the remaining distance.
27
Chord's "finger table" accelerates lookups: for example, node N80 keeps fingers 1/2, 1/4, 1/8, 1/16, 1/32, 1/64, and 1/128 of the way around the ring.
28
Chord lookups take O(log N) hops: e.g., on the ring N5, N10, N20, N32, N60, N80, N99, N110, a Lookup(K19) issued at N32 follows finger pointers, roughly halving the remaining distance each hop, until it reaches K19's successor N20.
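A compact sketch of finger-based lookup on the slide's example ring; for brevity the fingers are built from a global node list (a real Chord node knows only its own fingers), and the identifier size is an assumption:

```python
import bisect

M = 7                                     # bits in a toy identifier space (assumption)
RING = 2 ** M
nodes = sorted([5, 10, 20, 32, 60, 80, 99, 110])

def successor(x):                         # global helper used to build fingers
    i = bisect.bisect_left(nodes, x % RING)
    return nodes[i % len(nodes)]

# finger i of node n points to succ(n + 2^i)
fingers = {n: [successor(n + 2 ** i) for i in range(M)] for n in nodes}

def between(x, a, b):                     # x in ring interval (a, b]
    return (a < x <= b) if a < b else (x > a or x <= b)

def lookup(start, key):
    n, hops = start, 0
    while not between(key, n, successor(n + 1)):
        # forward to the finger that gets closest to the key without passing it
        candidates = [f for f in fingers[n] if between(f, n, key)]
        n = max(candidates, key=lambda f: (f - n) % RING) if candidates else successor(n + 1)
        hops += 1
    return successor(n + 1), hops         # the key's successor and the hop count

print(lookup(32, 19))                     # K19 lives at N20; a handful of hops
```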
29
Chord self-organization. Node join: set up finger i by routing to succ(n + 2^i); with log(n) fingers this costs O(log^2 n). Node leave: maintain a successor list for ring connectivity, and update successor lists and finger pointers.
30
Chord lookup algorithm properties. Interface: lookup(key) -> IP address. Efficient: O(log N) messages per lookup, where N is the total number of servers. Scalable: O(log N) state per node. Robust: survives massive failures. Simple to analyze.
31
Plaxton Trees motivation: access nearby copies of replicated objects; a time-space trade-off, where space = routing table size and time = access hops.
32
Plaxton Trees algorithm, step 1: assign labels to objects and nodes using randomizing hash functions; each label has log_{2^b} n digits (e.g., object 9AE4, node 247B).
33
Plaxton Trees algorithm, step 2: each node knows about other nodes with varying prefix matches. For example, node 247B keeps neighbors with a prefix match of length 0 (a different first digit), length 1 (prefix 2), length 2 (prefix 24), and length 3 (prefix 247).
34
Plaxton Trees object insertion and lookup: given an object (e.g., 9AE4), route successively towards nodes with longer prefix matches (e.g., 247B -> 9F10 -> 9A76 -> 9AE2) and store the object at each of these locations; log(n) steps to insert or locate an object.
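A sketch of the per-hop prefix progress, using the node labels on this slide; the global node list stands in for each node's routing table, which is a simplification on my part:

```python
def shared_prefix_len(a: str, b: str) -> int:
    n = 0
    while n < len(a) and n < len(b) and a[n] == b[n]:
        n += 1
    return n

# node labels from the slide's example; the object label is 9AE4
nodes = ["247B", "9F10", "9A76", "9AE2"]

def route(start: str, obj: str):
    path, current = [start], start
    while shared_prefix_len(current, obj) < len(obj):
        have = shared_prefix_len(current, obj)
        closer = [n for n in nodes if shared_prefix_len(n, obj) > have]
        if not closer:
            break                                 # no closer node: current is the object's root
        # resolve one more digit per hop, as in Plaxton routing
        current = min(closer, key=lambda n: shared_prefix_len(n, obj))
        path.append(current)
    return path

print(route("247B", "9AE4"))   # ['247B', '9F10', '9A76', '9AE2']
```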
35
Plaxton Trees: why is it a tree? Routes toward the same object from different starting nodes converge as prefixes are resolved (e.g., through 247B, 9F10, 9A76, 9AE2), forming a tree rooted at the object.
36
Plaxton Trees and network proximity: overlay tree hops could be totally unrelated to the underlying network hops (bouncing between the USA, Europe, and East Asia). Plaxton trees guarantee a constant-factor approximation, but only when the topology is uniform in some sense.
37
Pastry: based directly upon Plaxton Trees and exports a DHT interface; stores an object only at the node whose ID is closest to the object ID. In addition to the main routing table, each node maintains a leaf set of the L nodes closest in ID space (typically L = 2^(b+1), half with smaller IDs and half with larger IDs).
38
CAN [Ratnasamy et al.]: map nodes and keys to coordinates in a multi-dimensional Cartesian space, with each node owning a zone; route from source toward the key along the shortest Euclidean path. For d dimensions, routing takes O(d n^(1/d)) hops.
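A sketch of greedy Euclidean routing in a Cartesian space; real CAN neighbors are the owners of adjacent zones, so the nearest-node neighbor sets below are only an illustrative stand-in:

```python
import math
import random

random.seed(1)
d = 2                                       # dimensions
nodes = [tuple(random.random() for _ in range(d)) for _ in range(30)]

def neighbors(n):
    # stand-in for CAN's zone adjacency: the 4 nearest nodes
    return sorted((m for m in nodes if m != n), key=lambda m: math.dist(m, n))[:4]

def route(start, key_point):
    path, current = [start], start
    while True:
        step = min(neighbors(current), key=lambda m: math.dist(m, key_point))
        if math.dist(step, key_point) >= math.dist(current, key_point):
            return path                     # no neighbor is closer: current owns the key
        current = step
        path.append(current)

key = (0.9, 0.1)                            # the point a key hashes to
print(len(route(nodes[0], key)) - 1, "hops")
```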
39
Symphony [Manku et al.]: similar to Chord in how nodes and keys are mapped, but the k long-distance links are constructed probabilistically: a link spanning a fraction x of the ring is chosen with probability P(x) = 1/(x ln n). Expected routing guarantee: O((1/k) log^2 n) hops.
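A sketch of how such probabilistic links could be drawn: the harmonic density p(x) = 1/(x ln n) on [1/n, 1] inverts to x = n^(u-1) for u uniform in [0, 1]; the function name and constants are my own:

```python
import random

def draw_link_distances(n: int, k: int):
    """Draw k link lengths (as fractions of the ring) from p(x) = 1/(x ln n) on [1/n, 1]."""
    return [n ** (random.random() - 1) for _ in range(k)]   # inverse-transform sampling

random.seed(0)
for frac in draw_link_distances(n=1024, k=4):
    print(f"link spans {frac:.4f} of the ring")             # many short links, few long ones
```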
40
SkipNet [Harvey et al.]: previous designs distribute data uniformly throughout the system, which is good for load balancing, but my data can end up stored in Timbuktu! Many organizations want stricter control over data placement. What about the routing path? Should a Microsoft-to-Microsoft end-to-end path pass through Sun?
41
SkipNet content and path locality. Basic idea: probabilistic skip lists. Each node chooses a height at random, picking height h with probability 1/2^h.
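A tiny sketch of this height rule, flipping a fair coin until the first tails (the helper name is my own):

```python
import random

def random_height() -> int:
    """Choose height h with probability 1/2^h, as in a probabilistic skip list."""
    h = 1
    while random.random() < 0.5:    # another "heads": grow one level taller
        h += 1
    return h

random.seed(42)
print([random_height() for _ in range(10)])   # mostly 1s and 2s, occasionally taller
```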
42
SkipNet content and path locality: nodes (e.g., machine1.cmu.edu, machine2.cmu.edu, machine1.berkeley.edu) are sorted lexicographically by name, and there is still an O(log n) routing guarantee!
43
Comparison of Different DHTs (links per node / routing hops):
Pastry/Tapestry: O(2^b log_{2^b} n) links / O(log_{2^b} n) hops
Chord: log n links / O(log n) hops
CAN: d links / d n^(1/d) hops
SkipNet: O(log n) links / O(log n) hops
Symphony: k links / O((1/k) log^2 n) hops
Koorde: d links / log_d n hops
Viceroy: 7 links / O(log n) hops
44
What can DHTs do for us? Distributed object lookup based on object ID; decentralized file systems (CFS, PAST, Ivy); application-layer multicast (Scribe, Bayeux, SplitStream); databases (PIER).
45
Where are we now? Many DHTs offer efficient and relatively robust routing. Unanswered questions: node heterogeneity; network-efficient overlays vs. structured overlays (a conflict of interest!); what happens with a high user churn rate; security.
46
Are DHTs a panacea? They are a useful primitive, but there is a tension between network-efficient construction and uniform key-value distribution. Does every non-distributed application use only hash tables? Many rich data structures cannot be built on top of hash tables alone, and exact-match lookups are not enough. Does any P2P file-sharing system use a DHT?