
1 More on Plaxton routing
There are n nodes, and log_B n digits in the id, where B = 2^b. The neighbor table of each node consists of:
- primary neighbors: (log n)/b per level, i.e., O(log n)
- secondary neighbors: O(log n)
- reverse neighbors: O(log n) expected and O(log^2 n) w.h.p. (the proof appears in the technical report by the authors)
In addition, each node contains a pointer list; the size of the pointer list is O(log^2 n) w.h.p. So the total auxiliary memory per node is O(log^2 n) w.h.p.

2 Tapestry

3 History
Tapestry: A Resilient Global-scale Overlay for Service Deployment. Ben Y. Zhao, Ling Huang, Jeremy Stribling, Sean C. Rhea, Anthony D. Joseph, and John Kubiatowicz. IEEE Journal on Selected Areas in Communications, January 2004.
A self-organizing, robust, scalable wide-area infrastructure for efficient location and delivery of content in the presence of heavy load and node or link failures. It is the backbone of OceanStore, a persistent wide-area storage system.

4 Plaxton vs. Tapestry
The basic idea is similar to that proposed in Plaxton's paper. However, Tapestry provides innovative solutions to some of the bottlenecks of classical Plaxton routing. What are the limitations of Plaxton routing?
- Need for global knowledge to construct the neighbor table
- Static architecture: no provision for node addition or deletion
- A single root per object is both a bottleneck and a single point of failure

5 Routing and Location
Namespace (both nodes and objects): 160 bits, using the hash function SHA-1. Each object has its own hierarchy rooted at its root: f(ObjectID) = RootID, via a dynamic mapping function (in Plaxton's scheme it was static).
Suffix routing from A to B: at the h-th hop, arrive at the nearest node N_h that shares a suffix of length h digits with B.
- Example: 5324 routes to 0629 via nodes matching suffixes xxx9, xx29, x629, and finally 0629.
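The suffix-matching hop rule above can be sketched as follows. This is a simplified model: the greedy loop consults the full node set for clarity, whereas a real overlay would consult only the current node's neighbor table, and the 4-digit node IDs in the usage example are hypothetical.

```python
def shared_suffix_len(a: str, b: str) -> int:
    """Length of the common suffix of two equal-length digit strings."""
    n = 0
    while n < len(a) and a[-1 - n] == b[-1 - n]:
        n += 1
    return n

def route(source: str, target: str, nodes: set) -> list:
    """Greedy suffix routing: each hop extends the matched suffix by one digit."""
    path, current = [source], source
    while current != target:
        need = shared_suffix_len(current, target) + 1
        # candidates share at least one more suffix digit with the target;
        # prefer the loosest match so every intermediate hop is visible
        candidates = [n for n in nodes if shared_suffix_len(n, target) >= need]
        current = min(candidates, key=lambda n: (shared_suffix_len(n, target), n))
        path.append(current)
    return path

# Hypothetical overlay illustrating 5324 -> xxx9 -> xx29 -> x629 -> 0629
overlay = {"5324", "1119", "2329", "1629", "0629"}
print(route("5324", "0629", overlay))
```

Each hop is forced to match one more trailing digit of the target, which bounds the route length by the number of digits in the ID.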

6 Choosing root in Plaxton
Given the object ID N_i:
- Find the set S of existing network nodes n matching the most suffix digits with N_i
- Choose as root the node in S with the highest-valued ID
Issues:
- The mapping must be generated statically using global knowledge
- It must be kept as hard state, immune to a changing environment
- The mapping is not well distributed: many nodes in the network are never chosen as roots
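The static root rule above is simple enough to state in a few lines. The sketch below makes the slide's drawback explicit: it takes the full node set as input, which is exactly the global knowledge a decentralized system cannot assume.

```python
def shared_suffix_len(a: str, b: str) -> int:
    """Length of the common suffix of two equal-length digit strings."""
    n = 0
    while n < len(a) and a[-1 - n] == b[-1 - n]:
        n += 1
    return n

def plaxton_root(object_id: str, nodes: set) -> str:
    """Static Plaxton root choice: among the nodes sharing the most suffix
    digits with the object ID, pick the one with the highest-valued ID.
    Requires the complete node set (global knowledge)."""
    return max(nodes, key=lambda n: (shared_suffix_len(n, object_id), n))
```

For object ID 0629 and nodes {5324, 1119, 7629, 1629}, both 7629 and 1629 match three suffix digits, and the tie goes to the higher ID, 7629.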

7 Choosing Root in Tapestry
Given the object ID N_i:
- Attempt to route to node ID N_i (without knowing whether it exists).
- If it exists, it becomes the root. Otherwise:
- Whenever a null entry is encountered, choose the next "higher" non-null pointer entry (thus, if xx53 does not exist, try xx63)
- If the current node S holds the only non-null pointer in the rest of the map, terminate the route and choose root(N_i) = S
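The "next higher non-null entry" rule can be sketched per routing-table level. One assumption here: when no higher digit has an entry, this sketch wraps around to lower digits before giving up, which the slide does not specify.

```python
def surrogate_next(level_entries: dict, wanted_digit: int, base: int = 16):
    """Surrogate routing at one table level, sketched.
    `level_entries` maps a digit to a neighbor's node ID; a missing key
    means the pointer is null. If the wanted digit's entry is null, try
    successively higher digits (wrapping around, by assumption)."""
    for offset in range(base):
        digit = (wanted_digit + offset) % base
        if digit in level_entries:
            return digit, level_entries[digit]
    return None  # every entry is null: the current node is the surrogate root

# If the entry for digit 5 (e.g. xx53) does not exist, digit 6 (xx63) is tried.
print(surrogate_next({6: "xx63", 9: "xx93"}, 5))
```

Because every node applies the same deterministic rule, all queries for the same object ID converge on the same surrogate root without any global coordination.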

8 More on Tapestry
- Return the locations of all replicas (instead of only the closest replica), and allow the application to choose a replica (usually the first one received)
- "Hot spot" detector
- Backpointers

9 Tapestry Mesh
Incremental suffix-based routing (slide borrowed from the original authors). (The accompanying figure shows a mesh of hexadecimal node IDs such as 0x43FE, 0x13FE, and 0xABFE, connected by incremental suffix-matching hops.)

10 Fault-tolerant Routing
Strategy:
- Detect failures via soft-state probe packets
- Route around a problematic hop via backup pointers
Handling:
- 3 forward pointers per outgoing route (2 backups)
- Upgrade backup pointers and replace the failed primary
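The 3-pointer slot above can be sketched as a small selection routine. Two assumptions in this sketch: liveness is represented by a caller-supplied predicate (standing in for the soft-state probes), and "upgrade" is modeled by rotating the working pointer to the front rather than discarding failed ones.

```python
def next_hop(pointers: list, alive):
    """Pick the next hop from a slot of 3 forward pointers (primary + 2
    backups). `alive` is a liveness predicate fed, in a real system, by
    periodic probe packets. Returns the chosen node and the updated
    pointer list with the working pointer promoted to primary."""
    for i, node in enumerate(pointers):
        if alive(node):
            # route around failed hops; upgrade the backup to primary
            return node, pointers[i:] + pointers[:i]
    return None, pointers  # all three forward pointers failed
```

If the primary "A" is down, the first backup "B" is used and promoted, so subsequent packets skip the failed hop without re-probing it on the fast path.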

11 Fault detection
Soft-state vs. explicit fault recovery: soft-state periodic republishing is more attractive, since the expected additional traffic it generates is low.
Redundant roots for better resilience:
- Object names are hashed with small salts, i.e., multiple names/roots per object
- Queries and publishing utilize all roots in parallel
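The salted multiple-roots idea can be sketched directly with SHA-1, the hash the slides name for the 160-bit namespace. The `name:salt` string encoding is an illustrative choice, not Tapestry's actual wire format.

```python
import hashlib

def object_roots(object_name: str, num_salts: int = 3) -> list:
    """Derive several independent 160-bit root IDs for one object by
    hashing its name with small salts (sketch). Each hash lands in a
    different part of the ID space, so each maps to a different root."""
    return [hashlib.sha1(f"{object_name}:{salt}".encode()).hexdigest()
            for salt in range(num_salts)]

print(object_roots("my-object"))
```

A query can then be issued toward all derived roots in parallel; losing any single root node leaves the object reachable via the others.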

12 Dynamic Insertion of N
Step 1: Build up N's routing maps
- Send messages to each hop along the path from the gateway to the current node N
- The i-th hop along the path sends its i-th level route table to N
- N optimizes those tables where necessary
Step 2: Move appropriate data from N' to N (here N' is the surrogate, the existing node currently responsible for N's ID)
Step 3: Use backpointers from N' to find nodes that have null entries for N's ID, and tell them to add a new entry pointing to N
Step 4: Notify local neighbors to modify paths to route through N where appropriate

13 Dynamic Insertion Example
Example borrowed from the original slides. (The accompanying figure shows a mesh of 5-digit hexadecimal node IDs in which the new node 0x143FE joins through the gateway 0xD73FF.)

14 Neighbor map of 5642
Level i matches i suffix digits. The number of entries per level equals the ID base (here it is 8). Each entry holds the suffix-matching node with the least cost; if no such node exists, pick the one with the highest ID and the largest suffix match. These are all primary entries.
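The primary-entry rule above can be sketched as a table builder. Assumptions: level/digit indexing conventions vary across descriptions of Plaxton routing, so this follows the slide's "level i matches i suffix digits", and the cost map standing in for measured network latency is hypothetical. (The secondary fallback rule from the slide is omitted for brevity.)

```python
def build_neighbor_map(node_id: str, costs: dict, base: int = 8):
    """Sketch of a primary neighbor map: the level-i entry for digit j
    holds the least-cost node whose ID ends with digit j followed by
    the last i digits of node_id. `costs` maps node IDs to a
    hypothetical network cost; missing entries stay None."""
    digits = len(node_id)
    table = [[None] * base for _ in range(digits)]
    for level in range(digits):
        suffix = node_id[digits - level:]  # last `level` digits of our own ID
        for j in range(base):
            wanted = str(j) + suffix
            matches = [n for n in costs if n.endswith(wanted)]
            if matches:
                table[level][j] = min(matches, key=lambda n: costs[n])
    return table
```

For node 5642, the level-1 entry for digit 4 is the cheapest node ending in "42"; an entry stays None when no node in the network has the required suffix.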

15 Benefits and limitations
+ Fault handling using redundant routes
+ Scalable: all routing is done using locally available data
+ Optimal routing distance