MIT LCS Proceedings of the 2001 ACM SIGCOMM Conference


Chord: A Scalable Peer-to-peer Lookup Service for Internet Applications Ion Stoica, Robert Morris, David Karger, M. Frans Kaashoek, Hari Balakrishnan MIT LCS, Proceedings of the 2001 ACM SIGCOMM Conference. Presented 2002. 12. 18. by Jiyong Park

Contents Chord Overview / Chord System Model / Chord Protocol (Lookup, Lookup with Scalability, Node Joining, Concurrent Node Joining, Failure) / Simulation Results / Conclusion

Overview Chord is a lookup protocol for P2P systems, especially file-sharing applications: lookup(key) returns the node which stores {key, value}. Characteristics (N = number of nodes, K = number of keys): each node holds at most (1+ε)K/N keys; a lookup takes O(log N) messages; when the Nth node joins, O(K/N) keys move to a different location and O(log² N) messages are needed for reorganization.

Overview – Related Work Centrally indexed (Napster, …): single point of failure. Flooded requests (Gnutella, …): many broadcasts, not scalable. Document routing (Freenet, Chord, …): scalable, but the document ID must be known. (Figure: document-routing example, File ID = h(data))

System Model {key, value}: key is an m-bit ID; value is an array of bytes (a file, an IP address, …). Operations: insert(key, value), update(key, value), lookup(key), join() / leave(). System parameter: r, the degree of redundancy. Scope: Chord is only a lookup service; security, authentication, … are not its concern.

Protocol Evolution Static network: all nodes know each other; no join/leave; not scalable. Consistent hashing: scalable, nodes need not know each other, but still no join/leave. Chord (dynamic network): nodes do not know each other; join/leave supported, but not concurrently. In practice: concurrent join/leave, failure handling, replication.

Protocol – Consistent Hashing m-bit ID space: node ID n = hash(IP address), key ID k = hash(key). successor(k) is the first node whose ID equals or follows k on the identifier circle; it stores {key, value}. Example: m = 3, nodes = {0, 1, 3}, keys = {1, 2, 6}.
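The successor mapping can be sketched in a few lines of Python, using the slide's example ring (m = 3, nodes {0, 1, 3}); the function here is an illustrative simulation, not the paper's distributed protocol:

```python
def successor(key_id, node_ids, m):
    """Return the first node clockwise from key_id on the 2^m identifier circle."""
    space = 2 ** m
    nodes = sorted(n % space for n in node_ids)
    for n in nodes:
        if n >= key_id % space:
            return n
    return nodes[0]  # wrap around the circle

nodes = [0, 1, 3]
print(successor(1, nodes, 3))  # key 1 -> node 1
print(successor(2, nodes, 3))  # key 2 -> node 3
print(successor(6, nodes, 3))  # key 6 -> node 0 (wraps past the top of the ring)
```

This matches the example: keys 1 and 2 land on nodes 1 and 3, and key 6 wraps around to node 0.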

Protocol – Consistent Hashing Properties (K keys, N nodes): each node is responsible for at most (1+ε)K/N keys; when a node joins or leaves, O(K/N) keys move. Example (m = 3, nodes {0, 1, 3}): successor(1) = 1, successor(2) = 3, successor(6) = 0; after node 6 joins, successor(6) = 6.

Protocol – Basic Chord Each node keeps an m-entry routing (finger) table, storing information about only a small number of nodes; the amount of information falls off exponentially with distance in key space.

Protocol – Basic Chord (figure: finger tables for the example network)

Protocol – Basic Chord node 0.lookup(12): try to find the closest preceding finger of 12. Is finger[i].node in the test interval? If yes, continue the search at finger[i].node; if no, try i = i − 1. Test interval (0, 12): from node 0's finger table, invoke node 8.lookup(12), since node 8 has more information about 12.

Protocol – Basic Chord node 8.lookup(12): test interval (8, 12); from node 8's finger table, invoke node 10.lookup(12). node 10.lookup(12): 12 lies between this node (10) and its successor (14), so key 12 is stored at node 14. O(log N) nodes are contacted during a lookup.
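The lookup walk above can be simulated with Python over a global node set; the node IDs {0, 8, 10, 14} and m = 4 are assumptions inferred from the slide's example, and the finger scan stands in for the per-node tables:

```python
def successor(ident, nodes, space):
    """First node at or after ident on the ring."""
    for n in sorted(nodes):
        if n >= ident % space:
            return n
    return min(nodes)

def in_interval(x, a, b, space):
    """True if x lies in the open circular interval (a, b)."""
    a, b, x = a % space, b % space, x % space
    return (a < x < b) if a < b else (x > a or x < b)

def lookup(start, key, nodes, m):
    """Walk closest-preceding fingers until key falls between a node and its successor."""
    space = 2 ** m
    path, node = [start], start
    while True:
        succ = successor(node + 1, nodes, space)
        if in_interval(key, node, succ, space) or key % space == succ:
            return succ, path                 # key is owned by succ
        nxt = node
        for i in range(m - 1, -1, -1):        # closest preceding finger, far to near
            f = successor(node + 2 ** i, nodes, space)
            if in_interval(f, node, key, space):
                nxt = f
                break
        if nxt == node:
            return succ, path
        node = nxt
        path.append(node)

owner, path = lookup(0, 12, [0, 8, 10, 14], 4)
print(owner, path)  # key 12 is owned by node 14; the path visits 0 -> 8 -> 10
```

Each hop at least halves the remaining identifier distance to the key, which is why O(log N) nodes suffice.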

Protocol – Join / Leave Assumptions: the network is in a stable state; no concurrent joins/leaves; a joining node must know at least one existing node n'. Join procedure: (1) initialize the finger table of the new node n, using n'; (2) update the finger tables of other nodes; (3) copy to n the keys for which n has become the successor.

Protocol – Join / Leave Initialize the finger table (from the local node's viewpoint): finger[i].node = successor(finger[i].start), obtained as n'.find_successor(finger[i].start). Update the finger tables of other nodes, counter-clockwise: for i = 1 to m, if this node is the ith finger of another node, update that entry. Move keys from the immediate successor (finger[1].node).
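Finger-table initialization can be sketched as follows, with finger[i].start = (n + 2^(i−1)) mod 2^m; the ring is simulated with a local helper rather than the paper's n'.find_successor RPC, which is an assumption of this sketch:

```python
def successor(ident, nodes, space):
    """First node at or after ident on the ring."""
    for x in sorted(nodes):
        if x >= ident % space:
            return x
    return min(nodes)

def init_finger_table(n, nodes, m):
    """Build node n's finger table as (start, node) pairs, i = 1..m."""
    space = 2 ** m
    table = []
    for i in range(1, m + 1):
        start = (n + 2 ** (i - 1)) % space
        table.append((start, successor(start, nodes, space)))
    return table

# Node 6 joins the m = 3 ring, giving nodes {0, 1, 3, 6}.
print(init_finger_table(6, [0, 1, 3, 6], 3))
# starts 7, 0, 2 map to successors 0, 0, 3
```

For node 6 this yields fingers (7 → 0), (0 → 0), (2 → 3), matching the example ring after the join.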

Protocol – Join / Leave Example: node 6 joins. O(log² N) messages are needed to re-establish the invariants after a join or leave.

Protocol – Concurrent Join Requirement: every node must know its immediate predecessor and successor at all times. Drawback: a less strict time bound. Algorithm: find n's predecessor and successor; notify them that they have a new immediate neighbour; fill n's finger table; initialize n's predecessor; run stabilize() periodically.
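The periodic stabilize step can be sketched with successor/predecessor pointers kept in a dict; the dict-based ring is an illustrative assumption (Chord does this with RPCs between live nodes):

```python
def between(x, a, b, space):
    """True if x lies in the open circular interval (a, b)."""
    a, b, x = a % space, b % space, x % space
    return (a < x < b) if a < b else (x > a or x < b)

def stabilize(ring, n, space):
    """One stabilize round at node n: check the successor's predecessor, then notify."""
    s = ring[n]['succ']
    p = ring[s]['pred']
    if p is not None and between(p, n, s, space):
        ring[n]['succ'] = s = p          # a closer successor has appeared
    sp = ring[s]['pred']
    if sp is None or between(n, sp, s, space):
        ring[s]['pred'] = n              # notify: successor adopts n as predecessor

# Node 6 joins a stable {0, 3} ring (m = 3), knowing only its successor 0.
ring = {0: {'succ': 3, 'pred': 3},
        3: {'succ': 0, 'pred': 0},
        6: {'succ': 0, 'pred': None}}
stabilize(ring, 6, 8)   # node 0 learns that its predecessor is now 6
stabilize(ring, 3, 8)   # node 3 re-points its successor to 6 and notifies it
print(ring[3]['succ'], ring[0]['pred'])
```

After two rounds the ring is consistent again (3 → 6 → 0), which is exactly the "fix wrong links" role stabilize() plays for concurrent joins.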

Protocol – Concurrent Join Case 1 (distant nodes join concurrently): each new node finds its immediate predecessor and successor, then notifies them to change their successor (s) and predecessor (p) pointers.

Protocol – Concurrent Join Case 2 (adjacent nodes join concurrently): find the immediate predecessor and successor; notify them to change their s and p pointers; periodic calls to stabilize() fix any wrong links.

Protocol – Failure & Replication A failure is detected by stabilize() running on an immediate neighbour, which then finds a new neighbour.

Protocol – Failure & Replication When node n fails, n’ (successor of n) should have {key/value} in n each node has ‘r’ nearest successors pointer When insertion, propagate copy to r successors r r

Simulation Results Load balancing (keys per node = O(K/N) )

Simulation Results Path length (= O(log N))

Simulation Results Path length (N = 2^12; path lengths do not exceed 12)

Simulation Results Failure (measured after the network has stabilized)

Conclusion Benefits: scalable and efficient, with O(log N) lookups; load balance (for uniform requests); bounded routing latency. Limitations: destroys locality; discards useful application-specific information (e.g. hierarchy); load imbalance for skewed requests.