Tapestry: An off-the-wall routing protocol? Presented by Peter, Erik, and Morten.

Overview
- What is Tapestry?
- Terminology
- A Tapestry networking API
- Message routing
- Object publishing
- Object locating
- Node insertion & deletion

What is Tapestry?
- A peer-to-peer routing protocol
- Provides Decentralized Object Location and Routing (DOLR)
  - Routes messages to endpoints using key-based routing
  - Endpoints can be both nodes and objects
- Operations succeed nearly 100% of the time, even under extreme network conditions

Terminology
Nodes and objects:
- Each node is randomly assigned a node ID, N_ID
- Each endpoint (object) is assigned a GUID, O_G, chosen randomly from the same ID space
- IDs typically come from a 160-bit SHA-1 hash space (see the sketch below)
- An optional application ID, A_id, is used to select the appropriate process on each node for delivery
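
As a rough illustration of this ID scheme, here is a minimal sketch that derives 160-bit identifiers with SHA-1. Hashing a random UUID for node IDs and an object's name for GUIDs is an assumption made for illustration, not something the slides or the Tapestry protocol prescribe.

import hashlib
import uuid

def new_node_id() -> int:
    """Assign a (pseudo)random 160-bit node ID, N_ID."""
    return int(hashlib.sha1(uuid.uuid4().bytes).hexdigest(), 16)

def object_guid(name: str) -> int:
    """Derive a 160-bit object GUID, O_G, from an object name."""
    return int(hashlib.sha1(name.encode("utf-8")).hexdigest(), 16)

n_id = new_node_id()
o_g = object_guid("Phil's Books")
print(f"N_ID = {n_id:040x}")
print(f"O_G  = {o_g:040x}")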

DOLR networking API
The Tapestry API provides four operations (sketched below):
- PublishObject(O_G, A_id): publish object O_G on the local node
- UnpublishObject(O_G, A_id): remove location mappings for object O_G
- RouteToObject(O_G, A_id): route a message to a node where O_G is stored
- RouteToNode(N_ID, A_id, Exact): route a message to node N_ID; Exact determines whether a "close match" is sufficient for delivery
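
One way to picture this API is as an abstract interface. The sketch below keeps the operation names and arguments from the slide; the Python signatures and the Message placeholder type are assumptions made for illustration, not Tapestry's actual bindings.

from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class Message:
    body: bytes   # opaque application payload

class DOLR(ABC):
    @abstractmethod
    def publish_object(self, o_g: int, a_id: int) -> None:
        """PublishObject(O_G, A_id): publish object O_G on the local node."""

    @abstractmethod
    def unpublish_object(self, o_g: int, a_id: int) -> None:
        """UnpublishObject(O_G, A_id): remove location mappings for O_G."""

    @abstractmethod
    def route_to_object(self, o_g: int, a_id: int, msg: Message) -> None:
        """RouteToObject(O_G, A_id): route msg to a node where O_G is stored."""

    @abstractmethod
    def route_to_node(self, n_id: int, a_id: int, exact: bool, msg: Message) -> None:
        """RouteToNode(N_ID, A_id, Exact): route msg to node N_ID;
        exact=False allows delivery to the closest matching node."""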

Node Characteristics
- Each node stores a routing table (sketched below)
  - Each level holds neighbors whose IDs match the node's own ID up to a given prefix length
  - A hole in the routing table means no node with that prefix exists in the network
  - For redundancy, backup neighbor links are stored alongside the primary ones
- Each node also stores backpointers to the nodes that point to it
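
The per-node state might look roughly like the following. The field names, the use of hexadecimal digits, and the pointer map for object locations are assumptions made for illustration; 40 hex digits correspond to the 160-bit ID space mentioned earlier.

from dataclasses import dataclass, field

BASE = 16     # one hex digit resolved per routing level
DIGITS = 40   # 160-bit IDs written as 40 hex digits

@dataclass
class NodeState:
    node_id: str                      # e.g. "4227...", 40 hex digits
    # routing_table[level][digit] -> [primary neighbor, backup, ...]
    routing_table: list = field(default_factory=lambda: [
        [[] for _ in range(BASE)] for _ in range(DIGITS)
    ])
    backpointers: set = field(default_factory=set)    # nodes whose tables point to us
    pointers: dict = field(default_factory=dict)       # object GUID -> list of server IDs

    def neighbor(self, level, digit):
        """Primary neighbor sharing our first `level` digits and having `digit`
        next, or None if that slot is a hole (no such node is known)."""
        slot = self.routing_table[level][digit]
        return slot[0] if slot else None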

Routing Table Example
[Figure: an example routing table, showing a node's neighbor entries at levels L1 through L4.]

Routing Table Illustration
[Figure: the routing table for node 4227.]

Message Routing I
- Every object ID is mapped to a root node: the root of object O_G is the node whose N_ID equals O_G (or is the closest match)
- Routing uses prefix matching (see the routing-step sketch below)
  - Lookup for 42AD: 4*** => 42** => 42A* => 42AD
- If the required neighbor entry is empty, surrogate routing is used
  - Route to the next highest entry (if there is no entry for 42**, try 43**)
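
A single routing step might then look like the sketch below, which assumes the routing-table layout from the NodeState sketch above. The wrap-around "next highest digit" rule is the surrogate routing described on this slide; treating the local node as the closest match once the search wraps back to its own digit is a simplification made for illustration.

def shared_prefix_len(a: str, b: str) -> int:
    n = 0
    while n < len(a) and n < len(b) and a[n] == b[n]:
        n += 1
    return n

def next_hop(local_id: str, dest_id: str, routing_table):
    """Next node on the route toward dest_id, or None when the local node
    is the destination (or acts as its surrogate root)."""
    if local_id == dest_id:
        return None
    level = shared_prefix_len(local_id, dest_id)   # digits already matched
    wanted = int(dest_id[level], 16)               # next digit we would like to match
    local_digit = int(local_id[level], 16)
    for offset in range(16):                       # surrogate routing: try next highest digit, wrapping
        digit = (wanted + offset) % 16
        if digit == local_digit:
            return None                            # simplification: we are the closest match
        slot = routing_table[level][digit]
        if slot:
            return slot[0]
    return None                                    # nothing better known: act as surrogate root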

Message Routing II
[Figure: routing a message from node 5230 to 42AD.]

Publishing Objects I
- A node sends a publish message towards the object's root node; at each hop, nodes store a pointer back to the source node (see the sketch below)
  - The data itself remains at the source; no replication takes place
  - If an object is stored on multiple servers, each hop keeps the pointers ordered by network latency
- Nodes periodically republish the objects they store
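
Building on the NodeState and next_hop sketches above, the publish path might be sketched as follows; the in-memory nodes dictionary stands in for real network hops and is purely an illustrative assumption.

def publish(o_g: str, server_id: str, nodes: dict) -> None:
    """nodes maps a node ID to its NodeState; server_id is the node storing O_G."""
    current = server_id
    while True:
        node = nodes[current]
        node.pointers.setdefault(o_g, [])
        if server_id not in node.pointers[o_g]:
            node.pointers[o_g].append(server_id)   # store a location mapping at this hop
        nxt = next_hop(current, o_g, node.routing_table)
        if nxt is None:                            # reached the object's (surrogate) root
            break
        current = nxt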

Publishing Objects II
[Figure: publishing Phil's Books (O_G = 4378).]

Locating Objects I
- The client sends a message towards the object's root node (see the sketch below)
- Each hop checks its list of pointers
  - On a match, the message is forwarded directly to the node storing the object
  - Otherwise, the message continues towards the object's root
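
Under the same assumptions as the publish sketch, location could be sketched like this; deliver() is a hypothetical delivery primitive, not part of Tapestry's API.

def route_to_object(o_g: str, client_id: str, msg, nodes: dict) -> None:
    current = client_id
    while True:
        node = nodes[current]
        servers = node.pointers.get(o_g)
        if servers:                                # pointer found before reaching the root
            deliver(msg, to=servers[0])            # hypothetical: hand msg to the storing server
            return
        nxt = next_hop(current, o_g, node.routing_table)
        if nxt is None:                            # reached the root without a pointer
            return                                 # object is not published anywhere
        current = nxt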

Locating Objects II
[Figure: locating O_G = 4378 from nodes 4B4F and 4664.]

Node Insertion I
Inserting a new node N requires three things:
- Notify the existing nodes of N's presence
- Update pointers to objects for which N becomes the new root
- Have N build its routing table

Node Insertion II
- N's "root node" multicasts a message to all nodes sharing N's prefix of length p
- If N becomes the new root for some objects, their pointers are updated during the multicast
- All nodes contacted during the multicast contact N and are added to N's routing table at level p
- Levels p-1, p-2, ..., 1 are filled by requesting backpointers from the nodes on the succeeding level (see the sketch below)
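
A very rough sketch of the table-filling step is given below: level p comes from the nodes reached by the multicast, and each lower level is filled by asking the nodes one level above for their backpointers. request_backpointers() and add_neighbor() are hypothetical helpers, not Tapestry API calls.

def fill_routing_table(new_node, p: int, multicast_contacts: list) -> None:
    for contact in multicast_contacts:
        new_node.add_neighbor(level=p, node=contact)             # hypothetical helper
    level_nodes = list(multicast_contacts)
    for level in range(p - 1, 0, -1):                            # levels p-1 down to 1
        candidates = set()
        for node_id in level_nodes:
            candidates |= request_backpointers(node_id, level)   # hypothetical RPC
        for candidate in candidates:
            new_node.add_neighbor(level=level, node=candidate)
        level_nodes = list(candidates)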

Node Deletion
- Voluntary deletion
  - Node N tells all of its backpointer nodes that it is leaving and gives each of them a replacement node from its own routing table
- Involuntary deletion (e.g., network failure)
  - Redundancy is built in through backup links in the routing tables
  - Nodes use periodic beacons to detect failed links and nodes (see the sketch below)
  - The repair process is augmented by the periodic republishing of object references
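
A minimal sketch of the beacon-driven repair on a single routing-table slot follows; the missed-beacon threshold and the bookkeeping dictionary are assumptions made for illustration.

MISSED_BEACON_LIMIT = 3   # assumed threshold

def on_missed_beacon(slot: list, missed_counts: dict, neighbor: str) -> None:
    """slot is one routing-table entry: [primary, backup1, ...]."""
    missed_counts[neighbor] = missed_counts.get(neighbor, 0) + 1
    if missed_counts[neighbor] >= MISSED_BEACON_LIMIT and neighbor in slot:
        slot.remove(neighbor)          # drop the failed link; the next backup becomes primary
        missed_counts.pop(neighbor, None)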