Chord Advanced issues

Analysis
Search takes O(log(N)) time
– Proof 1 (intuition): At each step, the distance between the query and the peer hosting the object reduces by a factor of at least 2. So the lookup takes at most m steps; assuming 2^m is at most a constant multiplicative factor above N, the lookup is O(log(N)).
– Proof 2 (intuition): After log(N) forwarding steps, the distance to the key is at most 2^m / 2^log(N) = 2^m / N. The number of node identifiers in a range of 2^m / N is O(log(N)) with high probability (why? SHA-1 spreads node identifiers uniformly at random!), so walking along successors in that range will be ok.
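To get a feel for the O(log(N)) bound, here is a small simulation sketch (not from the slides; the ring size, node count, and helper names are illustrative) that routes lookups greedily through ideal finger tables and reports the average hop count, which lands close to (1/2)·log2(N):

```python
# Sketch: greedy finger routing on an ideal Chord ring (all fingers correct).
# Each hop at least halves the clockwise distance to the key, so hops = O(log N).
import bisect
import math
import random

m = 16                       # identifier bits; ring size is 2^m
RING = 2 ** m
N = 1000                     # number of peers
nodes = sorted(random.sample(range(RING), N))

def successor(ident):
    """First node clockwise from ident (wrapping around the ring)."""
    i = bisect.bisect_left(nodes, ident % RING)
    return nodes[i % len(nodes)]

def finger(n, i):
    """i-th finger of node n: successor(n + 2^i)."""
    return successor((n + 2 ** i) % RING)

def lookup_hops(start, key):
    """Count greedy-routing hops until we reach the node responsible for key."""
    cur, hops = start, 0
    while successor(key) != cur:
        nxt = successor((cur + 1) % RING)          # fallback: immediate successor
        for i in reversed(range(m)):               # farthest finger not past the key
            f = finger(cur, i)
            if f != cur and (f - cur) % RING <= (key - cur) % RING:
                nxt = f
                break
        cur, hops = nxt, hops + 1
    return hops

trials = [lookup_hops(random.choice(nodes), random.randrange(RING)) for _ in range(500)]
print("average hops: %.2f   vs   0.5*log2(N) = %.2f"
      % (sum(trials) / len(trials), 0.5 * math.log2(N)))
```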

Analysis (contd.)
The O(log(N)) search time holds only if the finger and successor entries are correct.
When might these entries be wrong?

Analysis (contd.)
The O(log(N)) search time holds only if the finger and successor entries are correct.
When might these entries be wrong?
– When you have failures

Search under peer failures
[Ring diagram, m = 7, with nodes N16, N32, N45, N80, N96, N112. The file abcnews.com hashes to key K42 and is stored at N45. N16 asks "Who has abcnews.com?", but with failures along the lookup path (marked X), the lookup for K42 fails: N16 does not know N45.]

Search under peer failures
[Same ring diagram: a node on the lookup path for K42 has failed (X).]
One solution: maintain r multiple successor entries; in case of failure, use the other successor entries.

Search under peer failures
Choosing r = 2log(N) successor entries suffices to maintain lookup correctness with high probability:
– Say 50% of nodes fail (i.e., probability of failure = 1/2)
– Pr(for a given node, at least one successor is alive) = 1 − (1/2)^(2log(N)) = 1 − 1/N^2 (logs base 2)
– Pr(the above is true at all alive nodes) ≈ (1 − 1/N^2)^(N/2) ≈ e^(−1/(2N)) → 1 for large N
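A quick numeric check of this argument (my own sketch, not from the slides), assuming nodes fail independently with probability 1/2 and that about N/2 nodes stay alive:

```python
# With r = 2*log2(N) successors and independent failure probability 1/2:
# Pr(some successor of a given node survives) = 1 - (1/2)^r = 1 - 1/N^2,
# and even raised to the ~N/2 surviving nodes this still tends to 1.
import math

for N in (100, 1_000, 10_000):
    r = 2 * math.log2(N)
    p_node = 1 - 0.5 ** r                  # at least one successor alive
    p_all = p_node ** (N / 2)              # ...at every surviving node
    print(f"N={N:>6}  r={r:5.1f}  per-node={p_node:.8f}  all-nodes={p_all:.8f}")
```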

Search under peer failures (2)
[Ring diagram, m = 7: the file abcnews.com with key K42 is stored at N45, but N45 itself has failed (X), so N16's lookup for K42 fails.]

Search under peer failures (2)
[Ring diagram, m = 7: the file cnn.com/index.html hashes to key K42 and is stored at N45; the figure shows K42 replicated at neighboring nodes.]
One solution: replicate the file/key at the r successors and predecessors.
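A minimal sketch of the successor-replica placement rule (an illustrative helper, not the authors' code; replication at predecessors is symmetric). With the node identifiers from the figure, key K42 lands on N45 and the next two successors:

```python
import bisect

def successor_replicas(node_ids, key, r):
    """The r nodes holding a copy of key: its successor and the next r-1 successors."""
    node_ids = sorted(node_ids)
    i = bisect.bisect_left(node_ids, key)
    return [node_ids[(i + k) % len(node_ids)] for k in range(r)]

print(successor_replicas([16, 32, 45, 80, 96, 112], key=42, r=3))   # -> [45, 80, 96]
```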

Need to deal with dynamic changes
– Peers fail
– New peers join
– Peers leave
Need to update successors and fingers, and ensure keys reside in the right places

New peers joining
[Ring diagram, m = 7, with nodes N16, N32, N45, N80, N96, N112 and a new node N40 joining between N32 and N45.]
– Some gateway node directs N40 to N32
– N32 updates its successor to N40
– N40 initializes its successor to N45, and obtains fingers from it
– N40 periodically talks to its neighbors to update its finger table (stabilization protocol)
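The join-plus-stabilization sequence above can be sketched in code. This is an in-memory toy along the lines of the stabilize/notify pseudocode in the Chord paper, not the authors' implementation: RPCs become method calls, finger tables are omitted, and the Node class and helper names are my own.

```python
def between(x, a, b):
    """True if identifier x lies in the open interval (a, b) on the identifier circle."""
    if a < b:
        return a < x < b
    return x > a or x < b              # the interval wraps around 0

class Node:
    def __init__(self, ident):
        self.id = ident
        self.successor = self          # a one-node ring points at itself
        self.predecessor = None

    def join(self, gateway):
        """Join via any existing node: set only our successor; stabilization fixes the rest."""
        self.predecessor = None
        self.successor = gateway.find_successor(self.id)

    def find_successor(self, ident):
        """Walk the ring along successor pointers (fingers omitted in this sketch)."""
        node = self
        while not (between(ident, node.id, node.successor.id) or ident == node.successor.id):
            node = node.successor
        return node.successor

    def stabilize(self):
        """Run periodically: adopt a node that has slipped in between us and our successor."""
        x = self.successor.predecessor
        if x is not None and between(x.id, self.id, self.successor.id):
            self.successor = x
        self.successor.notify(self)

    def notify(self, candidate):
        """candidate believes it may be our predecessor."""
        if self.predecessor is None or between(candidate.id, self.predecessor.id, self.id):
            self.predecessor = candidate

# N40 joins via gateway N32; a few stabilization rounds make the ring consistent again.
n32, n45 = Node(32), Node(45)
n32.successor, n32.predecessor = n45, n45
n45.successor, n45.predecessor = n32, n32
n40 = Node(40)
n40.join(n32)
for _ in range(3):
    for node in (n32, n40, n45):
        node.stabilize()
print(n32.successor.id, n40.successor.id, n45.successor.id)   # -> 40 45 32
```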

New peers joining (2)
[Same ring diagram: keys K34 and K38 move from N45 to the new node N40.]
N40 may need to copy some files/keys from N45 (files with fileid between 32 and 40)
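A sketch of that key handoff (hypothetical helper, not from the Chord code): under Chord's ownership rule a node is responsible for the keys in (predecessor, node], so N40 takes over exactly the keys in (32, 40] and K42 stays at N45.

```python
def keys_to_transfer(successor_keys, predecessor_id, new_id):
    """Keys held by the old successor that now belong to the newly joined node."""
    def owned_by_new(k):
        if predecessor_id < new_id:
            return predecessor_id < k <= new_id
        return k > predecessor_id or k <= new_id   # interval wraps around 0
    return [k for k in successor_keys if owned_by_new(k)]

# N45 currently holds K34, K38, K42; N40 joins with predecessor N32.
print(keys_to_transfer([34, 38, 42], predecessor_id=32, new_id=40))   # -> [34, 38]
```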

Concurrent join
[Ring diagram, m = 7: new nodes N20, N24, and N28 join concurrently between N16 and N32; keys K24 and K38 are shown.]
Argue that each node will eventually be reachable.

Stabilization Protocol
Concurrent peer joins, leaves, or failures might cause the ring invariant to be violated, leading to loopiness of pointers (defined below) or failure of lookups
– So, periodically, a stabilization algorithm checks and updates pointers and keys
– [The technical report on Chord in the list of readings] defines weak and strong stability
– This ensures non-loopiness of fingers, eventual success of lookups, and O(log(N)) lookups w.h.p.

Effect on lookup
If, in a stable network with N nodes, another set of N nodes joins, and the join protocol correctly sets their successors, then lookups will take O(log N) steps w.h.p.

Effect on lookup
[Ring diagram: new nodes N20, N24, and N28 have joined; the transfer of K24 to N24 is still pending.]
A linear scan along successors will still locate K24.
Consistent hashing guarantees that there are O(log N) new nodes w.h.p. between two consecutive old nodes.

Loopy network
[Figure: a ring of nodes N1, N3, N5, N24, N63, N78, N96 whose successor pointers wind around the identifier circle more than once.]
What is funny / awkward about this?
– ∃ v: u < v < successor(u), i.e., some successor pointer skips over a live node; this must be false for strong stability
– Yet successor(predecessor(u)) = u for every u, so the network is (weakly) stable

Weak and Strong Stabilization
[Same loopy ring: N1, N3, N5, N24, N63, N78, N96.]
∀ u: successor(predecessor(u)) = u. Still, it is weakly stable but not strongly stable: a loopy network.
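The two notions can be written down as predicates. This is my own formulation, reusing the toy Node class from the join/stabilize sketch above: weak stability only checks predecessor/successor agreement, while strong stability additionally forbids skipping over any live identifier.

```python
def is_weakly_stable(ring_nodes):
    """For every u: successor(predecessor(u)) = u."""
    return all(n.predecessor is not None and n.predecessor.successor is n
               for n in ring_nodes)

def is_strongly_stable(ring_nodes):
    """Weakly stable, and no live node v satisfies u < v < successor(u) for any u."""
    if not is_weakly_stable(ring_nodes):
        return False
    ids = {n.id for n in ring_nodes}
    for u in ring_nodes:
        a, b = u.id, u.successor.id
        for v in ids:
            skipped = (a < v < b) if a < b else (v > a or v < b)
            if skipped:
                return False    # loopy: u's successor pointer skips over live node v
    return True

# For the healthy three-node ring built in the earlier sketch:
# is_strongly_stable([n32, n40, n45]) -> True
```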

Strong stabilization
The key idea of the algorithm: let each node u ask its successor to walk around the ring until it reaches a node v with u < v ≤ successor(u).
– If ∃ v: u < v < successor(u), then loopiness exists; reset successor(u) := v
– Takes O(N^2) steps, but loopiness is a rare event
– No protocol exists for recovery from a split ring
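A literal rendering of that walk (my reading of the slide, reusing the toy Node class; the actual protocol in the Chord technical report spreads the walk across many stabilization rounds rather than doing it in one call):

```python
def strong_stabilize(u):
    """Walk the pointer cycle starting past u's successor; if we meet a node whose
    identifier falls strictly between u and successor(u), the ring is loopy, so adopt it."""
    def in_half_open(x, a, b):                 # x in (a, b] on the identifier circle
        return (a < x <= b) if a < b else (x > a or x <= b)

    v = u.successor.successor                  # u asks its successor to start the walk
    while not in_half_open(v.id, u.id, u.successor.id):
        v = v.successor                        # worst case O(N) hops per node, O(N^2) overall
    if v is not u.successor:                   # a skipped node was found: fix the loop
        u.successor = v
```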

New peers joining (3)
A new peer affects O(log(N)) other finger entries in the system, so the number of messages per peer join is O(log(N) * log(N))
A similar set of operations is used to deal with peers leaving
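For a feel of the numbers (a back-of-the-envelope sketch, logs base 2, constants ignored):

```python
import math

for N in (1_000, 100_000, 10_000_000):
    print(f"N={N:>10,}  ~log(N)^2 = {math.log2(N) ** 2:5.0f} messages per peer join")
```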

Cost of lookup
o Cost is O(log N), as predicted by theory
o The constant is about 1/2
[Plot: average messages per lookup vs. number of nodes.]

Robustness
o Simulation results: static scenario
o A failed lookup means the original node holding the key failed (keys were not replicated)
o The result implies good balance of keys among the nodes!

Strengths
o Based on theoretical work (consistent hashing)
o Proven performance in many different aspects
o “With high probability” proofs
o Good tolerance to random node failures

Weaknesses
o Member join and leave are complicated
  – The aggressive mechanism requires too many messages and updates
  – No analysis of convergence for the lazy finger mechanism
o Key management mechanism is mixed between layers
  – The upper layer does insertion and handles node failures
  – Chord transfers keys when a node joins (but there is no leave mechanism!)
o Routing table grows with the number of members in the group
o Worst-case lookup can be slow

Discussions
o Network proximity (can we factor in latency?)
o Protocol security
  – Malicious data insertion
  – Malicious Chord table information
o Keyword search
o ...