
Chord: A Scalable Peer-to-peer Lookup Service for Internet Applications. Ion Stoica, Robert Morris, David Karger, M. Frans Kaashoek and Hari Balakrishnan.


1 Chord: A Scalable Peer-to-peer Lookup Service for Internet Applications. Ion Stoica, Robert Morris, David Karger, M. Frans Kaashoek and Hari Balakrishnan, MIT Laboratory for Computer Science. SIGCOMM Proceedings, 2001. http://pdos.lcs.mit.edu/chord/

2 Chord Contribution Chord is a scalable protocol for lookup in a dynamic peer-to-peer system with frequent node arrivals and departures.

3 Peer-to-peer Application Characteristics: direct connections, without a central point of management; large scale; concurrent node joins; concurrent node leaves.

4 The Lookup Problem. [figure: nodes N1-N6 connected over the Internet; a publisher stores (Key="Ma Yo-yo", Value=Sonata...); a client asks Lookup("Ma Yo-yo") - which node holds the key?]

5 Some Approaches: Napster (DNS) - centralized; Gnutella - flooded; Freenet, Chord - decentralized.

6 Centralized lookup (Napster): simple, with the least info maintained in each node, but a single point of failure. [figure: DB server and nodes N1-N7; 1. N7 registers "Ma Yo-yo"; 2. the client sends Lookup("Ma Yo-yo") to the server; 3. the server answers Here("Ma Yo-yo", N7); 4. the client downloads from N7]

7 Flooded queries (Gnutella): robust, with only neighbor state maintained in each node, but queries are flooded over the network. [figure: the client's Lookup("Ma Yo-yo") floods across N1-N8; N7, which holds Key="Ma Yo-yo", Value=Sonata..., replies Here("Ma Yo-yo", N7)]

8 Routed queries (Freenet, Chord): decentralized, with route info maintained in each node; scalable. [figure: the client's Lookup("Ma Yo-yo") is routed hop by hop across N1-N7 to N7, which holds the key and replies Here("Ma Yo-yo", N7)]

9 Chord: basic protocol; node joins; stabilization; failures and replication.

10 Chord Properties. Assumption: no malicious participants. Efficiency: O(log N) messages per lookup, where N is the total number of servers. Scalability: O(log N) state per node. Robustness: survives massive failures. Load balance: spreads keys evenly.

11 Basic Protocol. Main operation: given a key, map the key onto a node. Consistent hashing: key identifier = SHA-1(title); node identifier = SHA-1(IP). find_successor(key) maps key IDs to node IDs.
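The hashing step above can be sketched as follows. This is a minimal illustration, not the deployed implementation: the `M = 7` ID-space width is chosen only to match the 7-bit ring on the next slides (real Chord keeps all 160 bits of SHA-1), and the IP address is a made-up example.

```python
import hashlib

M = 7  # ID-space width in bits; matches the 7-bit ring figures that follow.
       # The actual protocol uses SHA-1's full 160 bits.

def chord_id(name: str, m: int = M) -> int:
    """Map a key title or a node's IP string to an m-bit Chord identifier."""
    digest = hashlib.sha1(name.encode("utf-8")).digest()
    return int.from_bytes(digest, "big") % (2 ** m)

key_id = chord_id("Ma Yo-yo")    # key identifier = SHA-1(title)
node_id = chord_id("18.26.4.9")  # node identifier = SHA-1(IP); example address
```

Both identifiers land in the same circular ID space, which is what lets find_successor treat keys and nodes uniformly.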

12 Consistent Hashing [Karger 97]. Original target: web page caching. Like normal hashing, it assigns items to buckets so that each bucket receives roughly the same number of items. Unlike normal hashing, a small change in the bucket set does not induce a total remapping of items to buckets.

13 Consistent Hashing: circular 7-bit ID space (2^7 = 128 IDs). A key is stored at its successor: the node with the next-higher ID. [figure: nodes N32, N90, N105 on the ring with keys K5, K20, K80; K5 and K20 are stored at N32, K80 at N90]
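The successor rule on this slide can be written directly: sort the node IDs, find the first node at or after the key, and wrap past the top of the ring. A small sketch using the node IDs from the figure:

```python
import bisect

def successor(key_id: int, node_ids: list) -> int:
    """Return the node that stores key_id: the first node whose ID is >= the
    key, wrapping around to the lowest ID at the top of the ring."""
    ring = sorted(node_ids)
    i = bisect.bisect_left(ring, key_id)
    return ring[i % len(ring)]

ring = [32, 90, 105]               # N32, N90, N105 from the figure
assert successor(5, ring) == 32    # K5  -> N32
assert successor(20, ring) == 32   # K20 -> N32
assert successor(80, ring) == 90   # K80 -> N90
assert successor(110, ring) == 32  # past N105, wraps around to N32
```

The wrap-around in the last case is why a node's departure only shifts its keys to the next node, rather than remapping everything.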

14 Basic Lookup: inefficient, O(N) hops. [figure: "Where is key 80?" is forwarded node by node around the ring N10, N32, N60, N90, N105, N120 until the answer "N90 has K80" comes back]
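A sketch of why the basic lookup is O(N): each node knows only its immediate successor, so the query walks the circle one hop at a time. The ring below uses the node IDs from the figure; representing the ring as a successor map is an illustration choice, not the paper's data structure.

```python
def in_interval(x: int, a: int, b: int) -> bool:
    """True if x lies in the half-open ring interval (a, b], wrapping at zero."""
    if a < b:
        return a < x <= b
    return x > a or x <= b

def basic_lookup(ring: dict, start: int, key: int):
    """O(N) lookup: forward the query to the successor until the key falls
    between the current node and its successor. Returns (owner, hop_count)."""
    n, hops = start, 0
    while not in_interval(key, n, ring[n]):
        n, hops = ring[n], hops + 1
    return ring[n], hops

# Ring from the figure: each node ID maps to its successor's ID
ids = [10, 32, 60, 90, 105, 120]
ring = {a: b for a, b in zip(ids, ids[1:] + ids[:1])}
owner, hops = basic_lookup(ring, 10, 80)  # "Where is key 80?"
assert owner == 90                        # "N90 has K80"
```

In the worst case the query visits nearly every node, which is what the finger table on the next slide fixes.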

15 Finger Table (1): speed up the lookup with a routing table of m entries, the fingers. At node n, the i-th finger points to successor((n + 2^(i-1)) mod 2^m), for 1 ≤ i ≤ m. This allows O(log N)-time lookups.

16 Finger starts for node n = 1 with m = 3: (1 + 2^(1-1)) mod 2^3 = 2, (1 + 2^(2-1)) mod 2^3 = 3, (1 + 2^(3-1)) mod 2^3 = 5; each finger points to the successor of its start.
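The arithmetic on this slide is just the finger-start formula evaluated at node n = 1 with m = 3. A quick check, filling the table against the node set {0, 1, 3} (that node set is the Chord paper's running example, assumed here since the slide does not list it):

```python
import bisect

def finger_starts(n: int, m: int) -> list:
    """Ring positions (n + 2^(i-1)) mod 2^m, i = 1..m, that node n's fingers
    cover; the finger itself points to the successor of each start."""
    return [(n + 2 ** (i - 1)) % (2 ** m) for i in range(1, m + 1)]

def succ(x: int, node_ids: list) -> int:
    """First node at or after ring position x, wrapping around."""
    ring = sorted(node_ids)
    return ring[bisect.bisect_left(ring, x) % len(ring)]

assert finger_starts(1, 3) == [2, 3, 5]   # the three values computed above
# With nodes {0, 1, 3}, node 1's fingers resolve to successors 3, 3, 0:
assert [succ(s, [0, 1, 3]) for s in finger_starts(1, 3)] == [3, 3, 0]
```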

17 Lookups take O(log N) hops. [figure: Lookup(K19) routed across the ring N5, N10, N20, N32, N60, N80, N99, N110 to K19's successor, N20]
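Finger-based routing can be simulated on the figure's ring. This is a sketch on a static, globally known ring (the real protocol builds each finger table from local lookups over RPC): each hop jumps to the closest finger preceding the key, roughly halving the remaining distance, hence O(log N) hops.

```python
import bisect

M = 7  # 7-bit ID space, matching the ring figures

def succ(ring, x):
    """First node ID at or after x on the sorted ring, wrapping around."""
    i = bisect.bisect_left(ring, x)
    return ring[i % len(ring)]

def between(x, a, b):
    """x in the half-open ring interval (a, b], wrapping at zero."""
    return (a < x <= b) if a < b else (x > a or x <= b)

def between_open(x, a, b):
    """x strictly inside the ring interval (a, b)."""
    return (a < x < b) if a < b else (x > a or x < b)

def find_successor(node_ids, start, key):
    """Route a lookup for key from `start` using finger tables.
    Returns (owning node, hop count)."""
    ring = sorted(node_ids)
    fingers = {n: [succ(ring, (n + 2 ** i) % (2 ** M)) for i in range(M)]
               for n in ring}
    n, hops = start, 0
    while not between(key, n, fingers[n][0]):  # fingers[n][0] = successor(n)
        nxt = fingers[n][0]
        for f in reversed(fingers[n]):         # try the farthest finger first
            if between_open(f, n, key):        # closest finger preceding key
                nxt = f
                break
        n, hops = nxt, hops + 1
    return fingers[n][0], hops

ring = [5, 10, 20, 32, 60, 80, 99, 110]     # nodes from the figure
owner, hops = find_successor(ring, 32, 19)  # Lookup(K19) starting at N32
assert owner == 20                          # K19's successor is N20
```

With 8 nodes in a 7-bit space the route stays well within M hops, in line with the O(log N) bound.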

18 Node Join (1): [ring N25, N40, holding K30 and K38] 1. The joining node N36 runs Lookup(36) to find its successor.

19 Node Join (2): 2. N36 sets its own successor pointer (to N40).

20 Node Join (3): 3. Copy keys 26..36 - here K30 - from N40 to N36.

21 Node Join (4): 4. Set N25's successor pointer to N36. Finger pointers are updated in the background; correct successors are enough to produce correct lookups.
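The four join steps above can be sketched on the slides' local view (N25, N40, keys K30 and K38, with N36 joining). This deliberately ignores ring wrap-around and background finger repair; it only shows which keys change hands and why.

```python
def join(nodes: set, keys_at: dict, new_id: int):
    """Sketch of slides 18-21. Assumes new_id falls strictly between two
    existing node IDs, so no wrap-around handling is needed."""
    succ = min(n for n in nodes if n > new_id)  # 1-2. Lookup(new_id) yields
                                                #      the new node's successor
    pred = max(n for n in nodes if n < new_id)
    moved = {k for k in keys_at[succ] if pred < k <= new_id}
    keys_at[succ] -= moved                      # 3. keys (pred, new_id] move
    keys_at[new_id] = moved                     #    from the old successor
    nodes.add(new_id)                           # 4. the predecessor now points
                                                #    at the new node
    return succ, moved

nodes = {25, 40}
keys_at = {25: set(), 40: {30, 38}}
succ, moved = join(nodes, keys_at, 36)  # N36 joins
assert succ == 40 and moved == {30}     # K30 moves to N36
assert keys_at[40] == {38}              # K38 stays at N40
```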

22 Stabilization: a stabilization algorithm periodically verifies and refreshes each node's knowledge: successor pointers, predecessor pointers, and finger tables.
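The successor/predecessor refresh can be sketched as a stabilize/notify pair (the names follow the Chord paper; RPC, timers, and failure handling are omitted, so this is an in-process sketch, not the protocol itself):

```python
class Node:
    def __init__(self, ident: int):
        self.id = ident
        self.successor = self
        self.predecessor = None

def between_open(x, a, b):
    """x strictly inside the ring interval (a, b), wrapping at zero."""
    return (a < x < b) if a < b else (x > a or x < b)

def stabilize(n: Node):
    """Periodically ask the successor for its predecessor; if a node has
    slipped in between, adopt it, then tell the successor about n."""
    x = n.successor.predecessor
    if x is not None and between_open(x.id, n.id, n.successor.id):
        n.successor = x
    notify(n.successor, n)

def notify(n: Node, candidate: Node):
    """Refresh n's predecessor pointer if candidate is a closer predecessor."""
    if n.predecessor is None or between_open(candidate.id, n.predecessor.id, n.id):
        n.predecessor = candidate

# N36 joins between N25 and N40 (slides 18-21), having set only its own
# successor; two stabilization rounds repair the remaining pointers.
a, b, c = Node(25), Node(36), Node(40)
a.successor, b.successor = c, c
stabilize(b)  # N40 learns its predecessor is now N36
stabilize(a)  # N25 discovers N36 and adopts it as successor
assert a.successor is b and c.predecessor is b
```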

23 Failures and Replication: successor pointers must be kept correct. [figure: on the ring N10, N80, N85, N102, N113, N120, N80's successor has failed, so N80 does not know its correct successor and Lookup(90) gives an incorrect result]

24 Solution: successor lists. Each node knows its r immediate successors; after a failure, it switches to the first live successor. Correct successors guarantee correct lookups.
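A minimal sketch of the fallback: with a list of r immediate successors, a node survives its direct successor failing by skipping to the first entry that is still alive. The successor list below is for N80 on the slide-23 ring, with r = 3 chosen arbitrarily for illustration (the paper suggests r on the order of log N).

```python
def first_live_successor(successor_list: list, alive: set) -> int:
    """Fall back to the first entry of the r-long successor list that is
    still alive; only if all r fail at once is the node stuck."""
    for s in successor_list:
        if s in alive:
            return s
    raise RuntimeError("all r successors failed simultaneously")

succ_list = [85, 102, 113]            # N80's r = 3 immediate successors
alive = {10, 80, 102, 113, 120}       # N85 has failed
assert first_live_successor(succ_list, alive) == 102
```

This is why r simultaneous, adjacent failures are required to break the ring, which becomes vanishingly unlikely as r grows.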

25 Path length as a function of network size

26 Failed Lookups versus Node Fail/Join Rate

27 Chord Summary: Chord provides peer-to-peer hash lookup. Efficient: O(log N) messages per lookup. Robust as nodes fail and join. Does not deal with malicious users. http://pdos.lcs.mit.edu/chord/

