Chord: A Scalable Peer-to-peer Lookup Protocol for Internet Applications
吳俊興, Department of Computer Science and Information Engineering, National University of Kaohsiung
Spring 2006, EEF582 – Internet Applications and Services


Reference: the course 「同儕計算網路及其應用」 (Peer-to-Peer Computing Networks and Their Applications)

Chord

Chord provides support for just one operation: given a key, it maps the key onto a node. Applications can be easily implemented on top of Chord:
- Cooperative File System (CFS)
- DNS
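
To make the division of labor concrete, here is a minimal sketch, in Python, of how an application-level put/get store could sit on top of Chord's single key-to-node primitive. This is not the authors' code: the names lookup, store, and fetch are hypothetical stand-ins for the Chord lookup call and the application's own data transfer.

```python
import hashlib

def key_id(name: str, m: int = 160) -> int:
    """Hash an application-level name into Chord's m-bit identifier space."""
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big") % (2 ** m)

class KeyValueStore:
    """Illustrative application layer; Chord itself only maps keys to nodes."""
    def __init__(self, chord_node):
        self.chord = chord_node                  # any Chord node is an entry point

    def put(self, name, value):
        node = self.chord.lookup(key_id(name))   # Chord: key -> responsible node
        node.store(name, value)                  # everything else is the app's job

    def get(self, name):
        node = self.chord.lookup(key_id(name))
        return node.fetch(name)
```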

Chord-based distributed storage system: CFS

[Figure: each CFS peer runs a three-layer stack, with the File System layer on top of a Block Store layer on top of Chord; the Chord layers of the peers connect to one another.]

The Block Store Layer

[Figure: an example CFS file structure.]
- The root block is identified by a public key and signed by the corresponding private key
- Other blocks are identified by cryptographic hashes of their contents
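
The content-hash naming of non-root blocks is what lets a client fetch blocks from untrusted peers and still verify them. A minimal sketch of that rule (the root block additionally carries a signature checkable with the owner's public key; signature handling is omitted here):

```python
import hashlib

def block_id(data: bytes) -> bytes:
    # A non-root block is named by the SHA-1 hash of its contents.
    return hashlib.sha1(data).digest()

def verify_block(bid: bytes, data: bytes) -> bool:
    # A block fetched from an untrusted peer is accepted only if its
    # contents hash back to the identifier it was requested under.
    return hashlib.sha1(data).digest() == bid
```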

Chord properties
- Efficient: O(log N) messages per lookup, where N is the total number of servers
- Scalable: O(log N) state per node
- Robust: survives massive changes in membership

Hashing

Hashing is generally used to distribute objects evenly over a set of servers, e.g., the linear congruential function h(x) = ax + b (mod p), or SHA-1. When the number of servers changes (p in the above case), almost every item is hashed to a new location, so the objects cached at each server become useless whenever a server is removed from or added to the system.

[Figure: keys placed by hashing mod 5 almost all move when two new buckets are added and the hash becomes mod 7.]
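
A small illustrative experiment (not from the slides) makes the problem concrete: under plain mod-N hashing, growing from 5 to 7 buckets remaps the large majority of keys.

```python
# Count how many of 1000 keys land in a different bucket when the
# bucket count changes from 5 to 7 under plain mod-N hashing.
keys = range(1000)
moved = sum(1 for k in keys if k % 5 != k % 7)
print(f"{moved} of {len(keys)} keys change buckets")   # 855 of 1000
```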

Consistent Hashing
- Load is balanced
- Relocation is minimal: when an N-th server joins or leaves the system, with high probability only an O(1/N) fraction of the data objects need to be relocated (a small simulation follows the join/leave examples below)

A possible implementation

Objects and servers are first mapped (hashed) to points in the same interval. Then each object is placed into the server closest to it w.r.t. the mapped points in the interval.

[Figure: objects and servers on an interval; e.g., D001 → S0, D012 → S1, D303 → S3.]

When server 4 joins

Only D103 needs to be moved from S3 to S4. The rest remains unchanged.

When server 3 leaves

Only D313 and D044 need to be moved from S3 to S4.
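
The O(1/N) relocation claim can be checked with a short simulation. This sketch uses the clockwise-successor placement rule that Chord adopts (the figures above use the nearest-point variant; the churn behavior is the same):

```python
import random
from bisect import bisect_left

SPACE = 2 ** 16                          # size of the identifier interval (illustrative)

def owner(servers: list[int], point: int) -> int:
    """First server at or clockwise after the point, wrapping around."""
    i = bisect_left(servers, point)
    return servers[i % len(servers)]

random.seed(1)
servers = sorted(random.sample(range(SPACE), 50))
objects = random.sample(range(SPACE), 5000)

before = {o: owner(servers, o) for o in objects}
servers_after = sorted(servers + [random.randrange(SPACE)])   # one server joins
moved = sum(1 for o in objects if owner(servers_after, o) != before[o])
print(f"{moved / len(objects):.1%} of objects relocated")     # roughly 1/51, about 2%
```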

Consistent Hashing in Chord
- Node's ID = SHA-1(IP address)
- Key's ID = SHA-1(object's key/name)
- Chord views the IDs as uniformly distributed, occupying a circular identifier space
- Keys are placed at the node whose ID is closest to the key's ID in the clockwise direction: successor(k) is the first node clockwise from k, and object k is placed at successor(k)
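
A minimal sketch of the placement rule, checked against the example ring used on the next few slides (node and key IDs are taken from the figures; in real Chord they would come from SHA-1):

```python
from bisect import bisect_left

NODES = [1, 8, 14, 21, 32, 38, 42, 48, 51, 56]   # the example ring, IDs 0..63

def successor(k: int, nodes: list[int] = NODES) -> int:
    """First node clockwise from key k on the identifier circle."""
    i = bisect_left(nodes, k)        # nodes must be sorted
    return nodes[i % len(nodes)]     # wrap around past the largest ID

assert successor(10) == 14   # k10 -> N14
assert successor(24) == 32   # k24 -> N32
assert successor(30) == 32   # k30 -> N32
assert successor(38) == 38   # k38 -> N38 (a node stores keys equal to its own ID)
assert successor(54) == 56   # k54 -> N56
```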

An ID Ring of length 2^6 = 64

[Figure: circular ID space with nodes N1, N8, N14, N21, N32, N38, N42, N48, N51, N56 holding keys k10, k24, k30, k38, k54.]

Simple Lookup
- A lookup is correct if the successor pointers are correct
- Takes an average of N/2 message exchanges

[Figure: lookup(k54) issued at N8 walks the ring successor by successor until it reaches N56.]
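
A sketch of this naive scheme, where every node knows only its immediate successor (the hop counting is illustrative):

```python
def in_arc(k: int, a: int, b: int) -> bool:
    """Is k in the clockwise arc (a, b] on the circle?"""
    return a < k <= b if a < b else k > a or k <= b

class SimpleNode:
    def __init__(self, id: int):
        self.id = id
        self.successor: "SimpleNode" = None

    def lookup(self, k: int, hops: int = 0):
        # k belongs to my successor iff k lies in the arc (my id, successor's id].
        if in_arc(k, self.id, self.successor.id):
            return self.successor, hops + 1
        # Otherwise keep walking the ring one successor at a time.
        return self.successor.lookup(k, hops + 1)
```

On the example ring, lookup(54) issued at N8 visits every node up to N51 before arriving at N56, which is why the expected cost is N/2 messages.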

Scalable Lookup

The i-th entry in the finger table of node n points to successor((n + 2^(i-1)) mod 2^6).

Finger table of N8:
  N8+1  → N14
  N8+2  → N14
  N8+4  → N14
  N8+8  → N21
  N8+16 → N32
  N8+32 → N42

[Figure: the same ring, with keys k10, k24, k30, k38, k54.]
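
The finger starts are cheap to compute; their successors form the table. A quick check against the N8 table above:

```python
M = 6                                    # identifier space of size 2**6 = 64

def finger_starts(n: int) -> list[int]:
    """Start of the i-th finger interval of node n, for i = 1..M."""
    return [(n + 2 ** (i - 1)) % 2 ** M for i in range(1, M + 1)]

print(finger_starts(8))                  # [9, 10, 12, 16, 24, 40]
# Their successors on the example ring are N14, N14, N14, N21, N32, N42,
# matching the finger table of N8 shown above.
```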

Scalable Lookup (cont.)

Look in the local finger table for the largest node n such that my_id < n < key_id. If such an n exists, call n.lookup(key_id); otherwise return successor(my_id).

[Figure: lookup(k54) at N8; the finger table of N8 (as above) sends the query to N42.]
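
A sketch of the full routing loop (the same idea as the rule above, with the wrap-around interval tests spelled out):

```python
def in_arc(k: int, a: int, b: int) -> bool:
    """k in the clockwise arc (a, b]."""
    return a < k <= b if a < b else k > a or k <= b

def in_open_arc(k: int, a: int, b: int) -> bool:
    """k in the clockwise arc (a, b)."""
    return a < k < b if a < b else k > a or k < b

class FingerNode:
    def __init__(self, id: int):
        self.id = id
        self.successor: "FingerNode" = None
        self.fingers: list["FingerNode"] = []    # i-th entry: successor(id + 2**(i-1))

    def lookup(self, k: int) -> "FingerNode":
        # Base case: k lies between me and my successor, who therefore owns it.
        if in_arc(k, self.id, self.successor.id):
            return self.successor
        # Otherwise forward to the closest finger preceding k; each such hop
        # at least halves the remaining clockwise distance to k.
        return self.closest_preceding(k).lookup(k)

    def closest_preceding(self, k: int) -> "FingerNode":
        for f in reversed(self.fingers):         # try the largest jumps first
            if in_open_arc(f.id, self.id, k):
                return f
        return self.successor
```

Tracing lookup(k54) from N8 with the finger tables on these slides reproduces the route in the figures: N8 forwards to its N8+32 finger N42, N42 forwards to N51, and N51 returns its successor N56.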

Scalable Lookup (cont.)

Finger table of N42:
  N42+1  → N48
  N42+2  → N48
  N42+4  → N48
  N42+8  → N51
  N42+16 → N1
  N42+32 → N14

[Figure: lookup(k54) is forwarded from N42 to N51.]

Scalable Lookup (cont.)

Each node can forward a query at least halfway along the remaining distance between the node and the target identifier, so a lookup takes O(log N) steps.

Finger table of N51:
  N51+1  → N56
  N51+2  → N56
  N51+4  → N56
  N51+8  → N1
  N51+16 → N8
  N51+32 → N21

[Figure: N51 returns its successor N56, which holds k54.]

Node joins

When a node i joins the system via any existing node j:
- Node j finds successor(i) for i, say k
- i sets its successor to k, and informs k to set its predecessor to i
- k's old predecessor learns of i's existence by periodically running a stabilization algorithm that checks whether k's predecessor is still itself

(A code sketch of this lazy scheme follows.)
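
A sketch following the paper's stabilization protocol (simplified: real Chord also refreshes fingers and maintains a successor list). It reuses the FingerNode and in_open_arc definitions from the scalable-lookup sketch above:

```python
class JoiningNode(FingerNode):
    def join(self, existing: "JoiningNode") -> None:
        """Join the ring via any existing node j."""
        self.predecessor = None
        self.successor = existing.lookup(self.id)      # j finds successor(i) for i

    def stabilize(self) -> None:
        """Run periodically on every node."""
        x = self.successor.predecessor
        if x is not None and in_open_arc(x.id, self.id, self.successor.id):
            self.successor = x                         # a node slipped in between us
        self.successor.notify(self)                    # remind my successor that I exist

    def notify(self, other: "JoiningNode") -> None:
        """other believes it may be our predecessor."""
        if self.predecessor is None or in_open_arc(other.id, self.predecessor.id, self.id):
            self.predecessor = other
```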

Node joins (cont.)

Example: N25 joins via N8.

[Figure: N25 enters the ring between N21 and N32, near keys k24 and k30; N8's finger table (N8+1 → N14 through N8+32 → N42) is shown.]
- Aggressive update mechanisms (immediately fixing every affected finger table) require too many messages and updates

Node failures

Can be handled simply as the inverse of node joins, i.e., by running the stabilization algorithm.

Handling Failures

Use a successor list:
- Each node knows its r immediate successors
- After a failure, it will know the first live successor
- Correct successors guarantee correct lookups

The guarantee is probabilistic: r can be chosen to make the probability of lookup failure arbitrarily small.
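
A minimal sketch of the fallback (is_alive is a hypothetical liveness probe, e.g. a ping with a timeout):

```python
R = 3   # number of backup successors each node keeps (r in the slide)

class RobustNode:
    def __init__(self, id: int):
        self.id = id
        self.successor_list: list["RobustNode"] = []   # [succ, succ.succ, ... R deep]

    def live_successor(self) -> "RobustNode":
        # If the immediate successor has failed, fall back to the first live
        # entry; lookups stay correct as long as one of the R entries is alive.
        for s in self.successor_list:
            if s.is_alive():                           # hypothetical liveness check
                return s
        raise RuntimeError(f"all {R} successors appear to have failed")
```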

Weaknesses
- NOT that simple (compared to CAN)
- Member joining is complicated:
  - aggressive mechanisms require too many messages and updates
  - no analysis of convergence for the lazy finger mechanism
- The key management mechanism is mixed between layers:
  - the upper layer does insertion and handles node failures
  - Chord transfers keys when a node joins (there is no leave mechanism!)
- The routing table grows with the number of members in the group
- Worst-case lookup can be slow

Chord Summary

Advantages:
- Files are guaranteed to be found in O(log N) steps
- Routing table size is O(log N)
- Robust: handles large numbers of concurrent joins and leaves

Disadvantages:
- Performance: routing in the overlay network can be more expensive than in the underlying network
- No correlation between node IDs and their locality; a query can repeatedly jump from Taiwan to America, even though both the initiator and the node that stores the item are in Taiwan!
  - Partial solution: weight neighbor nodes by round-trip time (RTT); when routing, choose the neighbor that is closer to the destination with the lowest RTT, which reduces path latency