1 CS 525 Advanced Distributed Systems Spring 2014 Indranil Gupta (Indy) Lecture 5 Peer to Peer Systems II February 4, 2014 All Slides © IG.


2 Two types of P2P Systems
Systems used widely in practice but with no provable properties
– Non-academic P2P systems, e.g., Gnutella, Napster, Grokster, BitTorrent (previous lecture)
Systems with provable properties but with less wide usage
– Academic P2P systems, e.g., Chord, Pastry, Kelips (this lecture)

3 DHT = Distributed Hash Table
A hash table allows you to insert, lookup, and delete objects with keys
A distributed hash table allows you to do the same in a distributed setting (objects = files)
Performance concerns:
– Load balancing
– Fault-tolerance
– Efficiency of lookups and inserts
– Locality
Napster, Gnutella, FastTrack are all DHTs (sort of)
So is Chord, a structured peer-to-peer system that we study next

4 Comparative Performance

          Memory    Lookup Latency    #Messages for a lookup
Napster   O(1)      O(1)              O(1)
Gnutella  O(N)      O(N)              O(N)

5 Comparative Performance

          Memory       Lookup Latency    #Messages for a lookup
Napster   O(1)         O(1)              O(1)
Gnutella  O(N)         O(N)              O(N)
Chord     O(log(N))    O(log(N))         O(log(N))

6 Chord
Developers: I. Stoica, D. Karger, F. Kaashoek, H. Balakrishnan, R. Morris, Berkeley and MIT
Intelligent choice of neighbors to reduce latency and message cost of routing (lookups/inserts)
Uses Consistent Hashing on node's (peer's) address
– SHA-1(ip_address, port) → 160-bit string
– Truncated to m bits
– Called peer id (number between 0 and 2^m - 1)
– Not unique, but id conflicts very unlikely
– Can then map peers to one of 2^m logical points on a circle
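The peer-id computation on this slide can be sketched in a few lines of Python. The exact serialization of (ip, address, port) before hashing is an assumption here; the slide only specifies SHA-1 followed by truncation to m bits.

```python
import hashlib

def peer_id(ip_address: str, port: int, m: int = 7) -> int:
    """SHA-1 the peer's address, truncate to m bits:
    yields an id in [0, 2**m - 1] on the Chord ring."""
    digest = hashlib.sha1(f"{ip_address}:{port}".encode()).hexdigest()
    return int(digest, 16) % (2 ** m)

print(peer_id("10.0.0.1", 8000))   # some id between 0 and 127 for m=7
```

The same id can collide for two peers (truncation loses bits), but with m large enough relative to the number of peers, collisions are very unlikely.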

7 Ring of peers
[Figure: say m=7; 6 peers N16, N32, N45, N80, N96, N112 placed on a ring of 2^7 = 128 points]

8 Peer pointers (1): successors (similarly predecessors)
[Figure: on the m=7 ring, each peer points to the next peer clockwise, e.g., N32 → N45, N45 → N80]

9 Peer pointers (2): finger tables
ith entry at peer with id n is first peer with id >= n + 2^i (mod 2^m)
[Figure: finger table at N80 on the m=7 ring with peers N16, N32, N45, N80, N96, N112; ft[i] for i = 0..6 is 96, 96, 96, 96, 96, 112, 16]
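A minimal sketch of the finger-table rule, using the six-peer ring from the figure (the helper names are mine, not Chord's):

```python
def successor(node_ids, x, m=7):
    """First peer id clockwise at or after point x on the 2**m ring."""
    ids = sorted(node_ids)
    x %= 2 ** m
    for n in ids:
        if n >= x:
            return n
    return ids[0]   # wrap around past 0

def finger_table(n, node_ids, m=7):
    """ft[i] = first peer with id >= (n + 2**i) mod 2**m, for i in [0, m)."""
    return [successor(node_ids, n + 2 ** i, m) for i in range(m)]

ring = [16, 32, 45, 80, 96, 112]   # the six peers from the figure
print(finger_table(80, ring))      # → [96, 96, 96, 96, 96, 112, 16]
```

Note how the last finger (n + 64 = 144 mod 128 = 16) wraps around the ring, landing at N16.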

10 What about the files?
Filenames also mapped using same consistent hash function
– SHA-1(filename) → 160-bit string (key)
– File is stored at first peer with id greater than its key (mod 2^m)
File cnn.com/index.html that maps to key K42 is stored at first peer with id greater than 42
– Note that we are considering a different file-sharing application here: cooperative web caching
– The same discussion applies to any other file sharing application, including that of mp3 files.
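The storage rule can be sketched as follows. Note that SHA-1 of the real string "cnn.com/index.html" will not generally truncate to 42; the slide's K42 is an illustrative key, so the demo below uses the key value directly.

```python
import hashlib

def file_key(filename: str, m: int = 7) -> int:
    """Same consistent hash as for peers, applied to the filename."""
    return int(hashlib.sha1(filename.encode()).hexdigest(), 16) % (2 ** m)

def store_at(key: int, node_ids, m: int = 7) -> int:
    """A file lives at the first peer clockwise with id >= its key."""
    for n in sorted(node_ids):
        if n >= key:
            return n
    return min(node_ids)   # wrap around past 0

ring = [16, 32, 45, 80, 96, 112]
print(store_at(42, ring))    # → 45, matching K42 -> N45 in the figure
print(store_at(120, ring))   # → 16 (wraps past the top of the ring)
```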

11 Mapping Files
[Figure: on the m=7 ring of peers N16, N32, N45, N80, N96, N112, the file with key K42 is stored at N45]

12 Search
[Figure: "Who has cnn.com/index.html?" (hashes to K42) is asked at N80; the file is stored at N45 on the m=7 ring]

13 Search
At node n, send query for key k to largest successor/finger entry <= k; if none exist, send query to successor(n)
[Figure: the K42 query from N80 follows finger/successor entries around the m=7 ring toward N45]

14 Search
At node n, send query for key k to largest successor/finger entry <= k; if none exist, send query to successor(n)
All "arrows" are RPCs
[Figure: the K42 query from N80 hops along fingers toward N45 on the m=7 ring]
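The routing rule above ("largest entry <= k", in ring order) can be simulated. This sketch is iterative and omits failures; the interval arithmetic and helper names are mine. On the figure's ring, a K42 query from N80 takes 3 RPC hops (N80 → N16 → N32 → N45).

```python
def successor(ring, x, m=7):
    """First peer id clockwise at or after point x on the 2**m ring."""
    ids = sorted(ring)
    x %= 2 ** m
    for n in ids:
        if n >= x:
            return n
    return ids[0]

def in_range(x, a, b, m=7):
    """True if x lies in the ring interval (a, b] (mod 2**m)."""
    M = 2 ** m
    x, a, b = x % M, a % M, b % M
    return a < x <= b if a < b else (x > a or x <= b)

def lookup(start, k, ring, m=7):
    """Route a query for key k from peer `start`: at each hop, forward to
    the finger closest to k without overshooting it (the slide's rule).
    Returns (peer storing k, number of RPC hops)."""
    node, hops = start, 0
    while True:
        succ_node = successor(ring, node + 1, m)
        if in_range(k, node, succ_node, m):
            return succ_node, hops + 1          # final RPC delivers the query
        nxt = succ_node                          # fall back to plain successor
        for i in range(m - 1, -1, -1):           # try largest finger first
            f = successor(ring, node + 2 ** i, m)
            if f != node and in_range(f, node, k - 1, m):   # f in (node, k)
                nxt = f
                break
        node, hops = nxt, hops + 1

ring = [16, 32, 45, 80, 96, 112]
print(lookup(80, 42, ring))   # → (45, 3)
```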

15 Analysis
Search takes O(log(N)) time
Proof
– (intuition): at each step, distance between query and peer-with-file reduces by a factor of at least 2 (why?)
  Takes at most m steps: if 2^m is at most a constant multiplicative factor above N, lookup is O(log(N))
– (intuition): after log(N) forwardings, distance to key is at most 2^m / N (why?)
  Number of node identifiers in a range of 2^m / N is O(log(N)) with high probability (why? SHA-1!)
  So using successors in that range will be ok
[Figure: ring showing the current hop, the next hop, and the key]

16 Analysis (contd.)
O(log(N)) search time holds for file insertions too (in general, for routing to any key)
– "Routing" can thus be used as a building block for all operations: insert, lookup, delete
O(log(N)) time true only if finger and successor entries are correct
When might these entries be wrong? When you have failures

17 Search under peer failures
[Figure: X marks failed peers on the m=7 ring; the K42 query from N80 fails because N16 does not know N45]

18 Search under peer failures
One solution: maintain r multiple successor entries; in case of failure, use other successor entries
[Figure: the K42 query from N80 routes around a failed peer using successor entries on the m=7 ring]

19 Search under peer failures
Choosing r = 2*log2(N) suffices to maintain lookup correctness w.h.p. (i.e., ring connected)
– Say 50% of nodes fail
– Pr(at given node, at least one successor alive) = 1 - (1/2)^(2*log2(N)) = 1 - 1/N^2
– Pr(above is true at all alive nodes) = (1 - 1/N^2)^(N/2) ≈ e^(-1/(2N)) ≈ 1
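This calculation is easy to check numerically. A sketch under the slide's assumptions (independent failures, each with probability 1/2, r = 2*log2(N) successor entries per node, roughly N/2 survivors):

```python
import math

def ring_connected_prob(N: int) -> float:
    """Probability the ring stays connected when half the nodes fail,
    given r = 2*log2(N) successor entries per node."""
    r = 2 * math.log2(N)
    p_one = 1 - 0.5 ** r        # = 1 - 1/N^2: some successor of a node survives
    return p_one ** (N / 2)     # must hold at every (~N/2) surviving node

for N in (100, 1000, 10_000):
    print(N, ring_connected_prob(N))   # approaches 1 as N grows
```

Even at N = 1000 the ring survives a 50% failure event with probability about 1 - 1/(2N), i.e., very close to 1.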

20 Search under peer failures (2)
[Figure: N45, which stores K42, has failed; the K42 query from N80 fails because N45 is dead]

21 Search under peer failures (2)
One solution: replicate file/key at r successors and predecessors
[Figure: K42 is replicated at peers near N45 on the m=7 ring, so the lookup succeeds despite a failure]

22 Need to deal with dynamic changes
Peers fail, new peers join, peers leave
– P2P systems have a high rate of churn (node join, leave, and failure)
  25% per hour in Overnet (eDonkey)
  100% per hour in Gnutella
  Lower in managed clusters, e.g., CSIL
  Common feature in all distributed systems, including wide-area (e.g., PlanetLab), clusters (e.g., Emulab), clouds (e.g., Cirrus), etc.
So, all the time, need to update successors and fingers, and copy keys

23 New peers joining
[Figure: N40 joins the m=7 ring between N32 and N45]
– Introducer directs N40 to N45 (and N32)
– N32 updates successor to N40
– N40 initializes successor to N45, and inits fingers from it
– N40 periodically talks to neighbors to update finger table
Stabilization Protocol (followed by all nodes)
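The join steps above can be sketched as an idealized simulation. This is a simplification of my own: real Chord stabilization is incremental (each peer periodically asks its successor for its predecessor and notifies it), whereas here one "round" simply re-derives every successor pointer over the new membership.

```python
def successor_in(ids, x, M=128):
    """First id in `ids` clockwise at or after point x on the ring (m=7)."""
    s = sorted(ids)
    x %= M
    for n in s:
        if n >= x:
            return n
    return s[0]

def join_then_stabilize(ring_ids, new_id, M=128):
    """Newcomer sets its successor via the introducer; one stabilization
    round later, every peer's successor reflects the new membership."""
    succ = {n: successor_in(ring_ids, n + 1, M) for n in ring_ids}
    succ[new_id] = successor_in(ring_ids, new_id + 1, M)  # init from introducer
    members = list(ring_ids) + [new_id]
    for n in members:                    # one idealized stabilization pass
        succ[n] = successor_in(members, n + 1, M)
    return succ

succ = join_then_stabilize([16, 32, 45, 80, 96, 112], 40)
print(succ[32], succ[40])   # → 40 45: N32 now points to N40, N40 to N45
```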

24 New peers joining (2)
N40 may need to copy some files/keys from N45 (files with fileid between 32 and 40)
[Figure: keys K34 and K38 move from N45 to N40 on the m=7 ring]

25 New peers joining (3)
A new peer affects O(log(N)) other finger entries in the system, on average [Why?]
Number of messages per peer join = O(log(N)*log(N))
Similar set of operations for dealing with peers leaving
– For dealing with failures, also need failure detectors (we'll see these later in the course!)

26 Experimental Results
SIGCOMM 01 paper had results from simulation of a C++ prototype
SOSP 01 paper had more results from a 12-node Internet testbed deployment
We'll touch briefly on the first set of results

27 Load Balancing
[Figure: without virtual nodes, per-node load shows large variance]
Solution: each real node pretends to be r virtual nodes
Smaller load variation; lookup cost is O(log(N*r)) = O(log(N) + log(r))
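The load-balancing effect of virtual nodes is easy to demonstrate. A sketch (the hash function and parameters are illustrative, not from the paper): hash each real node to r points on the ring, assign each key to the first point clockwise, and compare the max/mean load ratio with and without virtual nodes.

```python
import bisect, hashlib, statistics

def h(s: str, M: int = 2 ** 32) -> int:
    """A stand-in 32-bit consistent-hash point (truncated SHA-1)."""
    return int(hashlib.sha1(s.encode()).hexdigest(), 16) % M

def loads(n_nodes: int, n_keys: int, vnodes: int):
    """Per-real-node key counts when each node owns `vnodes` ring points."""
    points = sorted((h(f"node{n}-v{v}"), n)
                    for n in range(n_nodes) for v in range(vnodes))
    ids = [p for p, _ in points]
    count = [0] * n_nodes
    for j in range(n_keys):
        i = bisect.bisect_left(ids, h(f"key{j}")) % len(points)  # wrap past top
        count[points[i][1]] += 1
    return count

for r in (1, 8):
    c = loads(50, 10_000, r)
    print(r, round(max(c) / statistics.mean(c), 2))  # ratio shrinks with r
```

With r = 1 a few unlucky nodes own long arcs of the ring and get several times the average load; with r = 8 each node's load is the sum of 8 shorter arcs, so the variance drops.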

28 Lookups
[Figure: average messages per lookup vs. number of nodes grows logarithmically, as expected]

29 Stabilization Protocol
Concurrent peer joins, leaves, failures might cause loopiness of pointers, and failure of lookups
– Chord peers periodically run a stabilization algorithm that checks and updates pointers and keys
– Ensures non-loopiness of fingers, eventual success of lookups, and O(log(N)) lookups w.h.p.
– [TechReport on Chord webpage] defines weak and strong notions of stability
– Each stabilization round at a peer involves a constant number of messages
– Strong stability takes O(N^2) stabilization rounds (!)

30 Churn
When nodes are constantly joining, leaving, failing
– Significant effect to consider: traces from the Overnet system show hourly peer turnover rates (churn) could be as high as 25% of the total number of nodes in the system
– Leads to excessive (unnecessary) key copying (remember that keys are replicated)
– Stabilization algorithm may need to consume more bandwidth to keep up
– Main issue is that files are replicated, while it might be sufficient to replicate only meta information about files
– Alternatives
  Introduce a level of indirection (any p2p system)
  Replicate metadata more, e.g., Kelips (later in this lecture)

31 Fault-tolerance
[Figure: fraction of failed lookups under churn]
500 nodes (avg. path length = 5); stabilization runs every 30 s
1 node joins and 1 fails every 10 s (3 fails per stabilization round) => 6% of lookups fail
Numbers look fine here, but does stabilization scale with N? What about concurrent joins & leaves?

32 Pastry – another DHT
From Microsoft Research
Assigns ids to nodes, just like Chord (think of a ring)
Leaf Set – each node knows its successor(s) and predecessor(s)
Routing tables based on prefix matching
– But select the closest (in network RTT) peers among those that have the same prefix
Routing is thus based on prefix matching, and is thus O(log(N)) hops
– And hops are short (in the underlying network)
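A single Pastry routing hop can be sketched as follows. The toy parameters are assumptions (8-digit base-4 ids; real Pastry uses 128-bit ids split into digits), and the RTT-based tie-break is simplified to "longest match":

```python
def shared_prefix_len(a: str, b: str) -> int:
    """Number of leading digits a and b have in common."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def pastry_hop(current: str, key: str, peers: list) -> str:
    """Forward to a peer sharing a strictly longer digit-prefix with the key;
    if none exists, fall back to the numerically closest peer."""
    p = shared_prefix_len(current, key)
    better = [q for q in peers if shared_prefix_len(q, key) > p]
    if better:
        # real Pastry picks the lowest-RTT candidate at the right prefix level;
        # here we just take the longest match
        return max(better, key=lambda q: shared_prefix_len(q, key))
    return min(peers, key=lambda q: abs(int(q, 4) - int(key, 4)))

peers = ["10233102", "10231120", "32101210", "10233001"]
print(pastry_hop("31203203", "10233033", peers))   # → 10233001
```

Each hop fixes at least one more digit of the key, so routing terminates in O(log(N)) hops, and because each routing-table slot holds the lowest-RTT peer with that prefix, the hops are also short in the underlying network.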

33 [Figure-only slide]

34 Wrap-up Notes
Memory: O(log(N)) successor pointers, m finger/routing entries
Indirection: store a pointer instead of the actual file
Do not handle partitions (can you suggest a possible solution?)

35 Summary of Chord and Pastry
Chord and Pastry protocols
– More structured than Gnutella
– Black box lookup algorithms
– Churn handling can get complex
– O(log(N)) memory and lookup cost
Can we do better?

36 Kelips – 1 hop Lookup
K "affinity groups", K ~ √N
Each node hashed to a group
Node's neighbors:
– (Almost) all other nodes in its own affinity group
– One contact node per foreign affinity group
File can be stored at any (few) node(s)
Each filename hashed to a group
– All nodes in that group replicate pointer information, i.e., <filename, file location>
– Affinity group does not store the files themselves
Lookup = 1 hop (or a few); memory cost O(√N) is low: 1.93 MB for 100K nodes, 10M files
[Figure: affinity groups #0, #1, …, #k-1; PennyLane.mp3 hashes to group k-1, and everyone in that group stores its pointer]
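A minimal sketch of the one-hop lookup, under assumed data structures of my own (a dict of filetuples per group and one contact per foreign group; real Kelips gossips this soft state and ages it out):

```python
import hashlib, math

def group_of(name: str, k: int) -> int:
    """Hash a node address or filename into one of k affinity groups."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % k

def kelips_lookup(my_group, filename, contacts, filetuples, k):
    """One-hop lookup: consult the replicated (filename -> host) tuple,
    either locally or via our one contact in the file's affinity group."""
    g = group_of(filename, k)
    if g == my_group:
        return filetuples[g].get(filename)          # we replicate it ourselves
    return contacts[g]["filetuples"].get(filename)  # 1 RPC to our contact

n_nodes = 100
k = int(math.sqrt(n_nodes))          # k ~ sqrt(N) affinity groups
g = group_of("PennyLane.mp3", k)
filetuples = {g: {"PennyLane.mp3": "host42"}}       # replicated across group g
contacts = {g: {"filetuples": filetuples[g]}}       # our contact in group g
print(kelips_lookup((g + 1) % k, "PennyLane.mp3", contacts, filetuples, k))
```

Each node stores ~√N neighbor entries plus its group's share of the filetuples, which is where the O(√N) memory cost and the 1-hop lookup both come from.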

37 Summary – P2P
Range of tradeoffs available
– Memory vs. lookup cost vs. background bandwidth (to keep neighbors fresh)

38 Announcements
No office hours today (talk to me now)
Next week Thu: Student-led presentations start
– Please read instructions on the website!
Next week Thu: Reviews start
– Please read instructions on the website!

39 Backup Slides

40 Wrap-up Notes (3)
Virtual nodes good for load balancing, but
– Effect on peer traffic?
– Result of churn?
Current status of project:
– File systems (CFS, Ivy) built on top of Chord
– DNS lookup service built on top of Chord
– Internet Indirection Infrastructure (I3) project at UCB
– Spawned research on many interesting issues about p2p systems