1
Chord: A Scalable Peer-to-peer Lookup Service for Internet Applications Dr. Yingwu Zhu
2
A peer-to-peer storage problem
- 10,000 scattered music enthusiasts
- Willing to store and serve replicas
- Contributing resources, e.g., storage, bandwidth, etc.
- How do you find the data? An efficient lookup mechanism is needed!
3
The lookup problem
[Figure: a publisher on one node stores Key="title", Value=MP3 data…; a client elsewhere on the Internet issues Lookup("title"). On which of the N nodes is the data?]
4
Centralized lookup (Napster)
[Figure: the publisher registers SetLoc("title", N4) with a central DB; the client asks the DB Lookup("title") and is pointed at N4, which holds Key="title", Value=MP3 data….]
Simple, but O(N) state at the server and a single point of failure. Legal issues!
5
Napster: Publish
[Figure: a peer at 123.2.21.23 announces "I have X, Y, and Z!" by sending insert(X, 123.2.21.23), … to the central server.]
6
Napster: Search
[Figure: a peer asks the central server "Where is file A?"; the query search(A) returns 123.2.0.18, and the peer fetches the file directly from 123.2.0.18.]
7
Napster
Central Napster server:
- Can it ensure correct results?
- Bottleneck for scalability
- Single point of failure
- Susceptible to denial of service
- Malicious users
- Lawsuits, legislation
Search is centralized; file transfer is direct (peer-to-peer).
8
Flooded queries (Gnutella)
[Figure: the client's Lookup("title") is flooded from neighbor to neighbor until it reaches the publisher holding Key="title", Value=MP3 data….]
Robust, but worst case O(N) messages per lookup.
9
Gnutella: Query Flooding
[Figure legend: source, forwarded query, processed query, found result, forwarded response]
Breadth-First Search (BFS)
10
Gnutella: Query Flooding
- A node/peer connects to a set of Gnutella neighbors
- Queries are forwarded to all neighbors
- A peer that has the requested information responds
- The network is flooded, with a TTL to terminate queries (see the sketch below)
+ Results are complete
– Wastes bandwidth
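As a concrete illustration of the flooding described above, here is a minimal Python sketch of TTL-limited BFS forwarding over an unstructured overlay. The Node class and flood_query helper are illustrative stand-ins, not Gnutella's actual wire protocol.

```python
# Minimal sketch of TTL-limited query flooding (BFS) over an unstructured overlay.
# Illustrative only: the Node class is not Gnutella's actual protocol.
from collections import deque

class Node:
    def __init__(self, name, files=()):
        self.name = name
        self.files = set(files)
        self.neighbors = []          # other Node objects this peer connects to

def flood_query(source, wanted, ttl=4):
    """Flood a query for `wanted` from `source`; return names of peers that have it."""
    results = []
    seen = {source}                  # don't process the same query twice
    queue = deque([(source, ttl)])
    while queue:
        node, ttl_left = queue.popleft()
        if wanted in node.files:
            results.append(node.name)        # this peer responds back to the source
        if ttl_left == 0:
            continue                         # TTL expired: stop forwarding
        for nb in node.neighbors:
            if nb not in seen:
                seen.add(nb)
                queue.append((nb, ttl_left - 1))
    return results
```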
11
Gnutella: Random Walk
- An improvement over query flooding
- Same overlay structure as Gnutella
- Each peer forwards the query to a random subset of its neighbors
+ Reduced bandwidth requirements
– Incomplete results
– High latency
12
KaZaA (FastTrack Networks)
- Hybrid of centralized Napster and decentralized Gnutella
- Super-peers act as local search hubs
- Each super-peer is similar to a Napster server for a small portion of the network
- Super-peers are chosen automatically by the system based on their capacities (storage, bandwidth, etc.) and availability (connection time)
- Users upload their list of files to a super-peer
- Super-peers periodically exchange file lists
- You send queries to a super-peer; if it cannot satisfy them, it may flood the queries to other super-peers
- Exploits the heterogeneity of peer nodes
13
KaZaA
- Uses supernodes (chosen by uptime and bandwidth) to improve scalability and establish a hierarchy
- Closed-source
- Uses HTTP to carry out downloads
- Encrypted protocol; queuing, QoS
14
KaZaA: Network Design “Super Nodes”
15
KaZaA: File Insert
[Figure: a peer at 123.2.21.23 tells its super-peer "I have X!" by publishing insert(X, 123.2.21.23), ….]
16
KaZaA: File Search
[Figure: a peer asks its super-peer "Where is file A?"; the query search(A) is answered with replies 123.2.0.18 and 123.2.22.50.]
17
Routed queries (Freenet, Chord, etc.)
[Figure: the client's Lookup("title") is routed hop by hop through the overlay to the publisher holding Key="title", Value=MP3 data….]
18
Routing challenges
- Define a useful key nearness metric
- Keep the hop count small
- Keep the tables small
- Stay robust despite rapid change (node addition/removal)
Freenet emphasizes anonymity; Chord emphasizes efficiency and simplicity.
19
Chord properties
- Efficient: O(log N) messages per lookup, where N is the total number of servers
- Scalable: O(log N) state per node
- Robust: survives massive failures
- Proofs are in the paper / tech report, assuming no malicious participants
20
Chord overview
- Provides a peer-to-peer hash lookup: Lookup(key) → IP address
- Mapping: key → IP address
- How does Chord route lookups?
- How does Chord maintain routing tables?
21
Chord IDs
- Key identifier = SHA-1(key)
- Node identifier = SHA-1(IP address)
- Both are uniformly distributed and live in the same ID space
- How are key IDs mapped to node IDs?
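To make the ID scheme concrete, here is a small Python sketch of hashing keys and node addresses into one circular ID space. The 7-bit space matches the example on the next slide; real Chord keeps the full 160-bit SHA-1 output, and the IP address below is made up.

```python
# Sketch of Chord ID assignment: keys and nodes hash into the same circular ID space.
# A 7-bit space (0..127) is used to match the example ring on the next slide;
# real Chord keeps the full 160-bit SHA-1 output. The IP below is hypothetical.
import hashlib

M = 7                                    # number of ID bits

def chord_id(text: str) -> int:
    digest = hashlib.sha1(text.encode()).digest()
    return int.from_bytes(digest, "big") % (2 ** M)

key_id  = chord_id("title")              # key identifier  = SHA-1(key)
node_id = chord_id("18.26.4.9")          # node identifier = SHA-1(IP address)
```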
22
Consistent hashing [Karger 97]
[Figure: circular 7-bit ID space with nodes N32, N90, N105 and keys K5, K20, K80, e.g. Key 5 and Node 105.]
A key is stored at its successor: the node with the next-higher ID.
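A minimal sketch of the successor rule on the pictured ring; the node and key IDs below come from the figure, and the helper name is this sketch's own.

```python
# A key is stored at its successor: the first node whose ID is >= the key ID,
# wrapping around past 0 if necessary. Node IDs below match the figure above.
def successor_of(key_id, sorted_node_ids, m_bits=7):
    key_id %= 2 ** m_bits
    for nid in sorted_node_ids:
        if nid >= key_id:
            return nid
    return sorted_node_ids[0]            # wrapped around the top of the ring

nodes = [32, 90, 105]
assert successor_of(80, nodes) == 90     # K80 -> N90
assert successor_of(20, nodes) == 32     # K20 -> N32
assert successor_of(5, nodes) == 32      # K5  -> N32
```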
23
Basic lookup
[Figure: on a ring with N10, N32, N60, N90, N105, N120, a node asks "Where is key 80?"; the query is forwarded from successor to successor until N90, the successor of key 80, answers "N90 has K80".]
24
Simple lookup algorithm

  Lookup(my-id, key-id)
    n = my successor
    if my-id < n < key-id
      call Lookup(key-id) on node n   // next hop
    else
      return my successor             // done

Correctness depends only on successors.
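Below is a runnable sketch of the successor-only lookup above; the in_interval helper spells out the ring wrap-around that the slide's "my-id < n < key-id" shorthand glosses over, and the class name is this sketch's own.

```python
# Runnable sketch of the successor-only lookup. in_interval() handles the
# wrap-around of the circular ID space hidden in the slide's "<" notation.
def in_interval(x, a, b):
    """True if x lies in the ring interval (a, b] going clockwise from a."""
    if a < b:
        return a < x <= b
    return x > a or x <= b               # interval wraps past zero

class SimpleNode:
    def __init__(self, ident):
        self.id = ident
        self.successor = None            # filled in when the ring is built

    def lookup(self, key_id):
        # If the key falls between me and my successor, my successor owns it.
        if in_interval(key_id, self.id, self.successor.id):
            return self.successor              # done
        return self.successor.lookup(key_id)   # otherwise forward one hop
```

Following only successor pointers this way is correct but can take O(N) hops, which is what the finger table on the next slides avoids.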
25
"Finger table" allows log(N)-time lookups
[Figure: from N80, fingers span ½, ¼, 1/8, 1/16, 1/32, 1/64, 1/128 of the ring: a fast track / express lane across the ID space.]
26
Finger i points to the successor of n + 2^i
[Figure: from N80, the finger targeting 112 (= 80 + 2^5) points to its successor, N120.]
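As a sketch, the finger table can be built with the successor_of helper from the consistent-hashing sketch above, following this slide's convention that finger i targets n + 2^i.

```python
# Sketch: finger i of node n points to the successor of n + 2^i (mod 2^M),
# following the slide's convention. Reuses successor_of() from the
# consistent-hashing sketch above.
def build_fingers(n, sorted_node_ids, m_bits=7):
    return [successor_of(n + 2 ** i, sorted_node_ids, m_bits)
            for i in range(m_bits)]

# From the figure: N80's finger for i = 5 targets 80 + 32 = 112,
# whose successor on the pictured ring is N120.
```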
27
Lookup with fingers

  Lookup(my-id, key-id)
    look in local finger table for
      highest node n s.t. my-id < n < key-id
    if n exists
      call Lookup(key-id) on node n   // next hop
    else
      return my successor             // done
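A runnable sketch of the finger-based lookup, building on the SimpleNode and in_interval sketches above; the class and attribute names are this sketch's own.

```python
# Sketch of lookup with fingers: jump to the highest finger that still
# precedes the key, so each hop roughly halves the remaining ID distance.
# Builds on SimpleNode and in_interval() from the earlier sketch.
class FingerNode(SimpleNode):
    def __init__(self, ident):
        super().__init__(ident)
        self.fingers = []                          # FingerNode references

    def lookup(self, key_id):
        if in_interval(key_id, self.id, self.successor.id):
            return self.successor                  # done
        # highest finger n such that my-id < n < key-id
        for n in reversed(self.fingers):
            if n.id != key_id and in_interval(n.id, self.id, key_id):
                return n.lookup(key_id)            # next hop
        return self.successor.lookup(key_id)       # fall back to successor
```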
28
Lookups take O(log N) hops
[Figure: on a ring with N5, N10, N20, N32, N60, N80, N99, N110, a Lookup(K19) crosses the ring in a few finger hops, roughly halving the remaining ID-space distance each time, and ends at K19's successor, N20.]
29
Joining: linked-list insert
[Figure: N25 and N40 are adjacent on the ring; keys K30 and K38 are stored at N40.]
1. The new node N36 performs Lookup(36) to find its place in the ring.
30
Join (2)
2. N36 sets its own successor pointer (to N40).
31
Join (3)
3. Copy keys 26..36 from N40 to N36: here K30 moves, while K38 stays at N40.
32
Join (4)
4. Set N25's successor pointer to N36.
Finger pointers are updated in the background.
Correct successors produce correct lookups.
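The four join steps can be sketched as one routine, reusing in_interval from the earlier sketch. The keys dict and predecessor pointer are assumptions of this sketch, and real Chord performs step 4 through its periodic stabilization protocol rather than in one shot.

```python
# Sketch of the join steps above (assumes each node also has a `keys` dict
# and a `predecessor` pointer; in_interval() is from the earlier sketch).
def join(new_node, bootstrap):
    succ = bootstrap.lookup(new_node.id)       # 1. Lookup(new id) via any node
    old_pred = succ.predecessor
    new_node.successor = succ                  # 2. set own successor pointer
    new_node.predecessor = old_pred
    for k in list(succ.keys):                  # 3. copy keys in (old_pred, new id]
        if in_interval(k, old_pred.id, new_node.id):
            new_node.keys[k] = succ.keys.pop(k)
    old_pred.successor = new_node              # 4. predecessor's successor pointer
    succ.predecessor = new_node
    # Finger pointers are then fixed lazily in the background.
```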
33
Failures might cause incorrect lookup
[Figure: N10 issues Lookup(90); because nodes have failed, N80 doesn't know its correct successor, so the lookup returns the wrong node.]
34
Solution: successor lists
- Each node knows its r immediate successors
- After a failure, it will know the first live successor
- Correct successors guarantee correct lookups
- The guarantee holds only with some probability
35
Choosing the successor list length
- Assume 1/2 of the nodes fail
- P(successor list all dead) = (1/2)^r, i.e. P(this node breaks the Chord ring); depends on independent failures
- P(no broken nodes) = (1 - (1/2)^r)^N
- r = 2 log(N) makes this probability about 1 - 1/N
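Plugging in numbers makes the claim concrete; for example, with N = 1,000 nodes (as in the failure experiment later) and r = 2 log2(N), a quick check:

```python
# Quick numeric check of the successor-list argument for N = 1,000 nodes
# when half of the nodes fail.
import math

N = 1000
r = math.ceil(2 * math.log2(N))            # about 20 successors per node
p_node_breaks = 0.5 ** r                   # all r successors dead
p_ring_intact = (1 - p_node_breaks) ** N   # no node breaks the ring
print(r, p_ring_intact)                    # 20, ~0.999  (roughly 1 - 1/N)
```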
36
Lookup with fault tolerance

  Lookup(my-id, key-id)
    look in local finger table and successor-list for
      highest node n s.t. my-id < n < key-id
    if n exists
      call Lookup(key-id) on node n   // next hop
      if call failed,
        remove n from finger table
        return Lookup(my-id, key-id)
    else
      return my successor             // done
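A sketch of the fault-tolerant variant, extending the FingerNode sketch above; is_alive stands in for an RPC timeout check, and the successor_list attribute is this sketch's own.

```python
# Sketch of fault-tolerant lookup: candidates come from both the finger table
# and the successor list; a dead next hop is dropped and the lookup retried.
def is_alive(node):
    # Stand-in for a real RPC/timeout check on the remote node.
    return getattr(node, "alive", True)

class RobustNode(FingerNode):
    def __init__(self, ident):
        super().__init__(ident)
        self.successor_list = []                 # r immediate successors

    def lookup(self, key_id):
        if in_interval(key_id, self.id, self.successor.id):
            return self.successor                # done
        # candidates preceding the key, best (farthest around the ring) last
        hops = [n for n in set(self.fingers) | set(self.successor_list)
                if n.id != key_id and in_interval(n.id, self.id, key_id)]
        hops.sort(key=lambda n: (n.id - self.id) % 128)   # 7-bit ring
        while hops:
            n = hops.pop()                       # try the best candidate
            if not is_alive(n):                  # call failed:
                self.fingers = [f for f in self.fingers if f is not n]
                continue                         # remove n and retry
            return n.lookup(key_id)              # next hop
        return self.successor                    # done
```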
37
Chord status
- Working implementation as part of CFS
- Chord library: 3,000 lines of C++
- Deployed on a small Internet testbed
- Includes: correct concurrent join/fail, proximity-based routing for low delay, load control for heterogeneous nodes, resistance to spoofed node IDs
38
Experimental overview Quick lookup in large systems Low variation in lookup costs Robust despite massive failure See paper for more results Experiments confirm theoretical results
39
Chord lookup cost is O(log N)
[Plot: average messages per lookup vs. number of nodes; the cost grows logarithmically with a constant of about 1/2, i.e. roughly (1/2) log2 N.]
40
Failure experimental setup
- Start 1,000 CFS/Chord servers; each successor list has 20 entries
- Wait until they stabilize
- Insert 1,000 key/value pairs, five replicas of each
- Stop X% of the servers
- Immediately perform 1,000 lookups
41
Massive failures have little impact
[Plot: failed lookups (percent) vs. failed nodes (percent); the annotated point is about (1/2)^6 = 1.6% failed lookups.]
42
Related Work CAN (Ratnasamy, Francis, Handley, Karp, Shenker) Pastry (Rowstron, Druschel) Tapestry (Zhao, Kubiatowicz, Joseph) Chord emphasizes simplicity
43
Chord Summary
- Chord provides peer-to-peer hash lookup
- Efficient: O(log N) messages per lookup
- Robust as nodes fail and join
- A good primitive for peer-to-peer systems
http://www.pdos.lcs.mit.edu/chord
44
Reflection on Chord
- Strict overlay structure
- Strict data placement
- If data keys are uniformly distributed and the number of keys >> the number of nodes, load is balanced: each node holds an O(1/N) fraction of the keys
- Node addition/deletion moves only an O(1/N) fraction of the load, so load movement is minimized
45
Reflection on Chord
- The routing table (successor list + finger table) is deterministic and network-topology unaware, so routing latency can be a problem
- Proximity Neighbor Selection (PNS): among m neighbor candidates, choose the one with minimum latency
- Still O(log N) hops
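A rough sketch of PNS under the slide's description: for a finger slot there are several valid candidates just past n + 2^i, and the lowest-latency one is kept. The latency_to probe and parameter names are this sketch's assumptions, not part of the Chord protocol itself.

```python
# Sketch of Proximity Neighbor Selection (PNS): pick the lowest-latency node
# among the m candidates that follow n + 2^i on the ring. latency_to() is a
# hypothetical latency probe supplied by the caller.
def pns_finger(n, i, sorted_node_ids, latency_to, m_candidates=4, m_bits=7):
    start = (n + 2 ** i) % (2 ** m_bits)
    ring = sorted(sorted_node_ids)
    ordered = ([nid for nid in ring if nid >= start] +
               [nid for nid in ring if nid < start])    # ring order from `start`
    candidates = ordered[:m_candidates]
    return min(candidates, key=latency_to)   # still a valid finger, lower delay
```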
46
Reflection on Chord
- Predecessor and successor pointers must be correct and are aggressively maintained
- Finger tables are lazily maintained
- Tradeoff: maintenance bandwidth vs. routing correctness
47
Reflection on Chord
- Chord assumes a uniform node distribution; in the wild, nodes are heterogeneous, causing load imbalance
- Fix: virtual servers; a physical node hosts multiple (O(log N)) virtual servers
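A small sketch of the virtual-server idea: one physical host joins the ring under roughly log2(N) independent IDs so its share of the key space evens out. The "ip#index" hashing convention is made up for illustration.

```python
# Sketch of virtual servers for load balance: a physical host joins the ring
# under O(log N) independent IDs so its share of the key space evens out.
# The "ip#index" hash-input convention is illustrative, not from the paper.
import hashlib, math

def virtual_ids(ip: str, n_nodes: int, m_bits: int = 7):
    count = max(1, math.ceil(math.log2(n_nodes)))   # O(log N) virtual servers
    return [int.from_bytes(hashlib.sha1(f"{ip}#{v}".encode()).digest(), "big")
            % (2 ** m_bits)
            for v in range(count)]

# e.g. virtual_ids("18.26.4.9", n_nodes=1000) yields about 10 ring positions
```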
49
Join: lazy finger update is OK
[Figure: after N36 joins between N25 and N40, N2's finger should now point to N36, not N40.]
A Lookup(K30) visits only nodes < 30, so it merely undershoots and still reaches the key via successor pointers.
50
CFS: a peer-to-peer storage system
- Inspired by Napster, Gnutella, Freenet
- Separates publishing from serving
- Uses spare disk space and network capacity
- Avoids centralized mechanisms
(Presenter note: delete this slide? Mention "distributed hash lookup".)
51
CFS architecture (move later?)
Layers: the DHash distributed block store sits on top of Chord lookup.
Concerns handled across the layers: block storage, availability/replication, authentication, caching, consistency, server selection, keyword search, lookup.
A powerful lookup layer simplifies the other mechanisms.