Tapestry: A Resilient Global-scale Overlay for Service Deployment. Ben Y. Zhao, Ling Huang, Jeremy Stribling, Sean C. Rhea, Anthony D. Joseph, and John D. Kubiatowicz.


Tapestry: A Resilient Global-scale Overlay for Service Deployment
Ben Y. Zhao, Ling Huang, Jeremy Stribling, Sean C. Rhea, Anthony D. Joseph, and John D. Kubiatowicz
Presented by Shawn Jeffery, CS294-4, Fall 2003

Tapestry (Shawn Jeffery, 9/10/03)

What have we seen before?
Key-based routing similar to Chord and Pastry, with similar guarantees:
log_b N routing hops (b is the base parameter)
b * log_b N state on each node
O(log_b^2 N) messages on insert
Locality-based routing tables, similar to Pastry.
Discussion point (for throughout the presentation): what sets Tapestry above the rest of the structured overlay p2p networks?

Decentralized Object Location and Routing (DOLR)
The core of Tapestry.
Routes messages to endpoints, both nodes and objects.
Virtualizes resources: objects are known by name, not location.

DOLR Identifiers
A single ID space for both nodes and endpoints (objects): 160-bit values with a globally defined radix (e.g. hexadecimal, giving 40-digit IDs).
Each node is randomly assigned a nodeID.
Each endpoint is assigned a Globally Unique IDentifier (GUID) from the same ID space, typically produced with SHA-1.
Applications can also have (application-specific) IDs, which are used to select an appropriate process on each node for delivery.
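As a rough illustration of the ID scheme above: SHA-1 already produces 160-bit digests, which print as exactly 40 hexadecimal digits, one digit per routing level at radix 16 (the function name `guid` here is ours, not the paper's):

```python
import hashlib

def guid(name: str) -> str:
    """Map an object name into the 160-bit ID space.

    SHA-1 yields 160 bits, which render as 40 hex digits: one digit
    per routing level when the radix is hexadecimal.
    """
    return hashlib.sha1(name.encode("utf-8")).hexdigest()
```

Because the mapping is deterministic, any node can compute an object's GUID independently from its name.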

DOLR API
PublishObject(O_G, A_id)
UnpublishObject(O_G, A_id)
RouteToObject(O_G, A_id)
RouteToNode(N, A_id, Exact)
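One hypothetical typing of these four calls; only the operation names and parameters come from the slide, while the snake_case names and string types are assumptions made for illustration:

```python
from abc import ABC, abstractmethod

class DOLR(ABC):
    """Sketch of the four-call DOLR interface from the slide:
    PublishObject(O_G, A_id), UnpublishObject(O_G, A_id),
    RouteToObject(O_G, A_id), RouteToNode(N, A_id, Exact)."""

    @abstractmethod
    def publish_object(self, o_guid: str, a_id: str) -> None: ...

    @abstractmethod
    def unpublish_object(self, o_guid: str, a_id: str) -> None: ...

    @abstractmethod
    def route_to_object(self, o_guid: str, a_id: str) -> None: ...

    @abstractmethod
    def route_to_node(self, node_id: str, a_id: str, exact: bool) -> None: ...
```

The `exact` flag distinguishes routing to a specific node from routing to whichever node is closest to the given ID (its root).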

Node State
Each node stores a neighbor map, similar to Pastry: each level stores neighbors that match a prefix up to a certain position in the ID.
Invariant: if there is a hole in the routing table, no node with that prefix exists in the network.
For redundancy, backup neighbor links are stored (currently 2 per entry).
Each node also stores backpointers to the nodes that point to it.
Together, these links create a routing mesh of neighbors.
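A minimal sketch of this per-node state, assuming 40-digit hexadecimal IDs and the two backup links mentioned above (class and field names are illustrative, not from the paper):

```python
NUM_LEVELS = 40  # one routing level per hex digit of the ID
RADIX = 16       # hexadecimal digit values
BACKUPS = 2      # backup neighbor links per entry, as on the slide

def shared_prefix_len(a: str, b: str) -> int:
    """Number of leading digits two IDs have in common."""
    n = 0
    while n < len(a) and n < len(b) and a[n] == b[n]:
        n += 1
    return n

class NodeState:
    def __init__(self, node_id: str):
        self.node_id = node_id
        # table[level][digit]: neighbors sharing `level` leading digits
        # with us whose next digit is `digit` (one primary + backups).
        self.table = [[[] for _ in range(RADIX)] for _ in range(NUM_LEVELS)]
        self.backpointers = set()  # nodes whose tables point at us

    def add_neighbor(self, other_id: str) -> None:
        level = shared_prefix_len(self.node_id, other_id)
        if level >= NUM_LEVELS:
            return  # other_id is our own ID
        digit = int(other_id[level], RADIX)
        slot = self.table[level][digit]
        if other_id not in slot and len(slot) < 1 + BACKUPS:
            slot.append(other_id)
```

Under the invariant above, an empty slot (a "hole") means no node with that prefix exists anywhere in the network.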

Tapestry Shawn Jeffery 9/10/037 Routing Mesh

Routing
Every ID is mapped to a root. An ID's root is either the node whose nodeID equals the ID, or the "closest" node to which that ID routes.
Routing uses prefix matching (like Pastry). A lookup for 42AD proceeds 4*** => 42** => 42A* => 42AD.
If a neighbor entry is empty, surrogate routing is used: route to the next highest digit (if there is no entry for 42**, try 43**).
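The lookup rule above, including the surrogate fallback, can be sketched as a next-hop function; the table layout (a per-level array of digit slots) is an assumption carried over from the node-state description:

```python
RADIX = 16

def shared_prefix_len(a: str, b: str) -> int:
    n = 0
    while n < len(a) and n < len(b) and a[n] == b[n]:
        n += 1
    return n

def next_hop(table, node_id: str, dest: str):
    """Pick the next hop towards dest: extend the matched prefix by one
    digit if possible, else fall through to the next higher digit
    (surrogate routing), wrapping around the digit space."""
    level = shared_prefix_len(node_id, dest)
    if level == len(dest):
        return None  # this node is the destination (or its root)
    want = int(dest[level], RADIX)
    for offset in range(RADIX):
        slot = table[level][(want + offset) % RADIX]
        if slot:
            return slot[0]  # primary neighbor for that digit
    return None  # whole level is empty: this node is the surrogate root
```

For the slide's example, a miss on the 42** entry falls through to a 43** neighbor at the same level.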

Object Publication
A node sends a publish message towards the root of the object.
At each hop, nodes store a pointer to the source node; the data itself remains at the source. This exploits locality without replicating data (as systems such as Pastry and Freenet do).
With multiple replicas, the pointers are stored in sorted order of network latency.
Pointers are soft state: the object must be periodically republished.
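The per-hop pointer storage, with replicas kept in latency order, might look like the sketch below; how `latency_ms` is measured is assumed to come from elsewhere, and the class name is ours:

```python
import bisect

class PointerCache:
    """Per-hop object pointers, kept sorted by network latency so that
    lookups passing through this hop find the closest replica first."""

    def __init__(self):
        self.pointers = {}  # guid -> sorted list of (latency_ms, server_id)

    def publish(self, guid: str, server: str, latency_ms: float) -> None:
        entries = self.pointers.setdefault(guid, [])
        bisect.insort(entries, (latency_ms, server))

    def closest(self, guid: str):
        entries = self.pointers.get(guid)
        return entries[0][1] if entries else None
```

Keeping the list sorted at publish time is what lets the location step below return the nearest replica without any extra work per lookup.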

Object Location
A client sends a message towards the object's root.
Each hop checks its list of pointers. If there is a match, the message is forwarded directly to the object's location; otherwise, it is routed onward towards the object's root.
Because pointers are sorted by proximity, each object lookup is directed to the closest copy of the data.
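Publication and location fit together as follows: each hop on the path to the root holds a pointer map, and the first hop that recognizes the GUID short-circuits the route. A toy sketch with plain dicts (real Tapestry consults these caches while routing, rather than receiving the path up front):

```python
def locate(path_caches, guid):
    """Walk the hops towards the object's root; the first hop holding a
    pointer forwards the request straight to the replica."""
    for cache in path_caches:   # caches in route order, root last
        if guid in cache:
            return cache[guid]  # direct forward to the object's location
    return None                 # root reached without finding a pointer

# Publishing "abc" at server "S1" left pointers at the last two hops:
hops = [{}, {"abc": "S1"}, {"abc": "S1"}]
```

Here `locate(hops, "abc")` stops at the second hop instead of travelling all the way to the root, which is exactly the locality benefit the slide describes.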

Use of Mesh for Object Location (figure, liberally borrowed from the Tapestry website)

Node Insertions
An insertion of a new node N must accomplish the following:
All nodes that have null entries matching N need to be alerted of N's presence. This is done by an acknowledged multicast from the "root" node of N's ID to all nodes sharing the common prefix.
N may become the new root for some objects; those pointers are moved during the multicast.
N must build its routing table: all nodes contacted during the multicast contact N and become its initial neighbor set, followed by an iterative nearest-neighbor search based on that set.
Nodes near N may want to use N in their routing tables as an optimization; this is also done during the iterative search.

Node Deletions
Voluntary: backpointer nodes are notified; they fix their routing tables and republish objects.
Involuntary: periodic heartbeats detect failed links and initiate mesh repair (to clean up routing tables); soft-state publishing lets object pointers expire if not republished (to clean up object pointers).
Discussion point: node insertions/deletions + heartbeats + soft-state republishing = network overhead. Is it acceptable? What are the tradeoffs?
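The soft-state cleanup can be sketched as a TTL per pointer; the interval and TTL values below are illustrative choices, not the paper's:

```python
import time

REPUBLISH_INTERVAL = 60.0          # seconds; illustrative value
POINTER_TTL = 3 * REPUBLISH_INTERVAL  # survive a couple of missed rounds

class SoftStatePointers:
    """Object pointers that vanish unless periodically republished."""

    def __init__(self, now=time.monotonic):
        self.now = now        # injectable clock, for testing
        self.expiry = {}      # guid -> absolute expiry timestamp

    def republish(self, guid: str) -> None:
        self.expiry[guid] = self.now() + POINTER_TTL

    def live(self, guid: str) -> bool:
        t = self.expiry.get(guid)
        return t is not None and t > self.now()
```

A pointer to a failed or departed server simply times out once its owner stops republishing, which is how involuntary deletions get cleaned up without any explicit invalidation message.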

Tapestry Architecture
Layered, from bottom to top:
TCP, UDP
Connection Mgmt
Tier 0/1: Routing, Object Location (deliver(), forward(), route(), etc.)
OceanStore, etc.
The prototype is implemented in Java.

Experimental Results (I)
Three environments: local cluster, PlanetLab, and a simulator.
Micro-benchmarks on the local cluster:
Message processing overhead is proportional to processor speed, so it can benefit from Moore's Law.
Message throughput: the optimal message size is 4KB.

Experimental Results (II)
Routing/object location tests:
Routing overhead (PlanetLab): routing through the overlay takes about twice as long as IP.
Object location/optimization (PlanetLab/simulator): object pointers significantly help routing to close objects.
Network dynamics:
Node insertion overhead (PlanetLab): sublinear latency to stabilization, O(log N) bandwidth consumption.
Node failures, joins, and churn (PlanetLab/simulator): a brief dip in lookup success rate, followed by a quick return to near 100%; under churn, the lookup success rate stays near 100%.

Experimental Results Discussion
How do you satisfactorily test one of these systems? What metrics are important?
Most of these experiments were run with between nodes. Is this enough to show that a system is capable of global scale?
Does the usage of virtual nodes greatly affect the results?

Best of all, Tapestry can be used to deploy large-scale applications!
OceanStore: a global-scale, highly available storage utility.
Bayeux: an efficient self-organizing application-level multicast system.
We will be looking at both of these systems.

Comments? Questions? Insults?