COS 461: Computer Networks


Applications of DHTs
COS 461: Computer Networks, Recitation #6
http://www.cs.princeton.edu/courses/archive/spr14/cos461/

The Chord DHT
- ID space: integers mod 2^160
- keyID = SHA1(name); nodeID = SHA1(IP addr, i) for i = 1..v virtual IDs
- Each virtual ID maintains its own routing table and keyspace
- A key is assigned to the next node found traversing the DHT ring clockwise
- With virtual IDs there are vN nodes rather than N, but each machine still owns roughly 1/N of the total keyspace
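These rules translate almost directly into code. The sketch below builds a minimal consistent-hashing ring in Python; the IP addresses and the choice of v = 4 virtual IDs are illustrative, not from the slides.

```python
import hashlib
from bisect import bisect_right

V = 4  # virtual IDs per machine (illustrative choice)

def sha1_id(s: str) -> int:
    """160-bit identifier, i.e., an integer mod 2^160."""
    return int(hashlib.sha1(s.encode()).hexdigest(), 16)

class ChordRing:
    def __init__(self, ips):
        # nodeID = SHA1(IP addr, i) for i = 1..v virtual IDs
        self.points = sorted((sha1_id(f"{ip},{i}"), ip)
                             for ip in ips for i in range(1, V + 1))

    def lookup(self, name: str) -> str:
        """A key is owned by the next node clockwise from SHA1(name)."""
        k = sha1_id(name)
        i = bisect_right(self.points, (k,))       # first virtual ID >= k
        return self.points[i % len(self.points)][1]  # wrap at the top

ring = ChordRing(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
owner = ring.lookup("cats.jpg")  # deterministic: same key, same owner
```

Because each machine contributes v points scattered around the ring, adding or removing a machine only moves the keys adjacent to its own virtual IDs, which is the property the slide's "1/N of the keyspace" claim relies on.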

Exercise: fill in the routing tables
[Figure: a Chord ring of 5 nodes with IDs 1, 3, 4, 6, 7 (mod 8), with an empty finger-table template beside each node]
Consider the above illustration of a Chord DHT comprising only 5 nodes. We show the finger-table templates for each node (i.e., the particular ID distances from which a node's routing-table neighbors should be drawn), but the entries are not yet filled in. Each node may be storing some items according to the Chord assignment rule. Fill in the routing tables for the nodes above.
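Assuming the figure shows nodes 1, 3, 4, 6, 7 on a 3-bit ring (identifiers mod 8), the tables can be computed mechanically: entry i of node n points at successor((n + 2^i) mod 8).

```python
RING = 8                  # 3-bit identifier space
NODES = [1, 3, 4, 6, 7]   # node IDs assumed from the figure

def succ(k):
    """First node at or clockwise after identifier k."""
    return next((n for n in sorted(NODES) if n >= k), min(NODES))

def finger_table(n):
    """Entry i points at the successor of (n + 2^i) mod 8."""
    return [succ((n + 2**i) % RING) for i in range(3)]

tables = {n: finger_table(n) for n in NODES}
# e.g. node 1 targets ids 2, 3, 5 and so points at nodes 3, 3, 6
```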

[Same 5-node Chord ring as above]
Which node(s) will receive a query from node 1 for the item with key 4.5? Assume iterative routing.
Answer: nodes 3, 4, and 6.
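A small simulation of iterative Chord routing (again assuming nodes 1, 3, 4, 6, 7 on a mod-8 ring) reproduces this answer: at each step the querier contacts the closest preceding finger of the key, until it reaches the key's successor.

```python
RING = 8
NODES = sorted([1, 3, 4, 6, 7])   # node IDs assumed from the figure

def succ(k):
    return next((n for n in NODES if n >= k), NODES[0])

def node_succ(n):                 # immediate successor, strictly after n
    return next((m for m in NODES if m > n), NODES[0])

def fingers(n):
    return [succ((n + 2**i) % RING) for i in range(3)]

def between_open(x, a, b):        # x in (a, b) on the circle
    return a < x < b if a < b else x > a or x < b

def between_half(x, a, b):        # x in (a, b] on the circle
    return a < x <= b if a < b else x > a or x <= b

def iterative_lookup(start, key):
    """Nodes that receive a query, in order, under iterative routing."""
    contacted, cur = [], start
    while True:
        if between_half(key, cur, node_succ(cur)):
            contacted.append(node_succ(cur))  # this node holds the key
            return contacted
        # otherwise forward to the closest preceding finger of the key
        nxt = next((f for f in reversed(fingers(cur))
                    if between_open(f, cur, key)), node_succ(cur))
        contacted.append(nxt)
        cur = nxt

# a query from node 1 for key 4.5 visits nodes 3, 4, and 6
```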

[Same 5-node Chord ring as above]
Suppose node 4 crashes, and the system converges to a new state.
1. Which routing entries change?
2. Which nodes will receive a query from node 7 for item 5?
Answers: 1. only the first row of node 3's table; 2. nodes 3 and 6.
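The first answer can be checked by recomputing every finger table with and without node 4 (same assumed mod-8 ring): only node 3's first entry, which targeted id 4, moves to the new successor 6.

```python
RING = 8
BEFORE = [1, 3, 4, 6, 7]   # node IDs assumed from the figure
AFTER = [1, 3, 6, 7]       # node 4 has crashed

def succ(k, nodes):
    return next((n for n in sorted(nodes) if n >= k), min(nodes))

def fingers(n, nodes):
    return [succ((n + 2**i) % RING, nodes) for i in range(3)]

# tables whose entries differ after node 4's crash
changed = {n: (fingers(n, BEFORE), fingers(n, AFTER))
           for n in AFTER if fingers(n, BEFORE) != fingers(n, AFTER)}
# only node 3's table changes: its first entry, 4, becomes 6
```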

Using DHTs for application-layer multicast routing
"SplitStream: High-Bandwidth Multicast in Cooperative Environments," SOSP 2003

The problem with P2P multicast
Goals for the application:
- Capacity-aware bandwidth utilization
- High tolerance of network churn
- Load balance across all end hosts
The P2P environment:
- Peers contribute resources
- Different peers may have different limitations
Challenge: tree-based multicast places high demand on a few interior nodes

High-level design
- Split the data into stripes, each distributed over its own tree
- Recover the data from any m out of n stripes
- Each node is interior to only one tree
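The m-out-of-n recovery relies on an erasure code over the stripes. The sketch below uses a single XOR parity stripe, the simplest n = m + 1 case; SplitStream itself permits general codes such as Reed-Solomon. The data string and helper names are illustrative.

```python
from functools import reduce

def xor_bytes(blocks):
    """Bytewise XOR of equal-length byte strings."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def make_stripes(data: bytes, m: int = 3):
    """Split data into m stripes plus one XOR parity stripe (n = m+1)."""
    data += b"\0" * ((-len(data)) % m)      # pad to a multiple of m
    size = len(data) // m
    stripes = [data[i * size:(i + 1) * size] for i in range(m)]
    return stripes + [xor_bytes(stripes)]

def recover(received: dict, m: int = 3):
    """received maps stripe index -> bytes; any m of the m+1 suffice."""
    missing = next((i for i in range(m + 1) if i not in received), None)
    if missing is not None and missing < m:  # a data stripe is lost
        received[missing] = xor_bytes(list(received.values()))
    return b"".join(received[i] for i in range(m))

stripes = make_stripes(b"splitstream!", m=3)
partial = {0: stripes[0], 2: stripes[2], 3: stripes[3]}  # stripe 1 lost
assert recover(partial) == b"splitstream!"
```

Losing any single stripe, data or parity, leaves enough information to rebuild the original; this is why no one tree (and hence no one interior node) is critical to the stream.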

Background: DHT routing
- SplitStream is built on the Pastry DHT
- Visualize it like a Chord ring, but each hop corrects the most significant differing digit of the address
- log(n) state, log(n) hops
- Proximity selection when choosing neighbors
[Figure: Pastry ring with nodes 65a1fc, d13da3, d4213f, d462ba, d467c4, d471f1; Route(d46a1c) converges digit by digit toward the key]
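A sketch of this prefix routing over the node IDs shown in the figure. Real Pastry consults a per-node routing table; the tie-breaking here (numeric closeness among candidates that fix one more digit) is a simplifying assumption.

```python
NODES = ["65a1fc", "d13da3", "d4213f", "d462ba",
         "d467c4", "d471f1", "d46a1c"]   # IDs from the figure

def prefix(a: str, b: str) -> int:
    """Number of leading hex digits a and b share."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def route(start: str, key: str):
    """Each hop matches at least one more hex digit of the key."""
    path, cur = [start], start
    while cur != key:
        better = [n for n in NODES if prefix(n, key) > prefix(cur, key)]
        if not better:
            break                        # deliver to numerically closest
        q = min(prefix(n, key) for n in better)       # fix one digit
        cur = min((n for n in better if prefix(n, key) == q),
                  key=lambda n: abs(int(n, 16) - int(key, 16)))
        path.append(cur)
    return path

path = route("65a1fc", "d46a1c")   # converges in O(log n) hops
```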

The Scribe publish/subscribe system
- Topics map to nodes: a topicID is a key, owned by some node
- Subscribers register to receive updates for a topic
- Publishers push updates to the topicID
- Messages propagate down the subscriber tree
- Insight: lookup paths from different subscribers converge toward the key's node, forming the tree
[Figure: subscribe and publish paths meeting at the node that owns the topicID]
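The convergence insight can be sketched as taking the union of overlay routes from each subscriber toward the topic's root; a join stops at the first node already in the tree. The node names and routes below are hypothetical.

```python
def build_tree(routes):
    """Union of routes (subscriber ... root) into a multicast tree."""
    children = {}                       # node -> set of child nodes
    in_tree = {r[-1] for r in routes}   # roots are trivially in the tree
    for route in routes:
        for hop, parent in zip(route, route[1:]):
            children.setdefault(parent, set()).add(hop)
            if parent in in_tree:       # grafted onto the existing tree
                break
            in_tree.add(parent)
    return children

# hypothetical overlay routes from three subscribers to topic root "T";
# A's and B's routes converge at intermediate node "X" before reaching T
routes = [["A", "X", "T"], ["B", "X", "T"], ["C", "T"]]
tree = build_tree(routes)
```

Because B's route meets the tree at X, B's join never reaches the root; this is what keeps load on the topicID node low even with many subscribers.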

Design of SplitStream
Choosing groupIds for the Scribe multicast trees that all differ in the most significant digit ensures that each node is internal to only one tree and a leaf in all the rest.

Design of SplitStream
- Divide the data into stripes, each using one Scribe multicast tree
- Choosing groupIds for the trees that all differ in the most significant digit ensures that each node is internal to only one tree
- Inbound bandwidth: a node can achieve its desired indegree while this property holds
- Outbound bandwidth is harder: we'll have to look at the node-join algorithm to see how this works

New Children Adoption
1. 001* requests to join stripe 0800, but 080* has reached its outdegree limit of 4
2. 080* accepts 001* and drops child 9*, because 9* does not share the first digit with 080*
3. 085* then requests to join
4. 080* accepts 085* and drops 001*, because 001* has a shorter prefix match with stripe 0800 than the other children
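The eviction rule in this example can be sketched as follows; the five-digit IDs and helper names are illustrative, not the paper's pseudocode. A full parent accepts the newcomer, then drops the child whose ID shares the shortest prefix with the stripe ID.

```python
def prefix_len(a: str, b: str) -> int:
    """Number of leading digits a and b share."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def adopt(children, newcomer, stripe, limit=4):
    """Accept newcomer; if over the outdegree limit, evict the child
    with the shortest prefix match to the stripe ID."""
    children = children + [newcomer]
    if len(children) > limit:
        victim = min(children, key=lambda c: prefix_len(c, stripe))
        children.remove(victim)
        return children, victim
    return children, None

kids = ["0800a", "081bb", "084cc", "9dddd"]   # 080*'s current children
kids, dropped1 = adopt(kids, "001ee", stripe="08000")
# 9dddd is dropped: it shares no digit with the stripe
kids, dropped2 = adopt(kids, "085ff", stripe="08000")
# 001ee is dropped: now the shortest prefix match among the children
```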

Spare Capacity Group
Orphaned nodes recursively try to reattach to their former siblings; if that fails, they anycast into the spare capacity group, whose members have unused outbound bandwidth and can adopt them.