1/17/01 Changing the Tapestry— Inserting and Deleting Nodes Kris Hildrum, UC Berkeley Joint work with John Kubiatowicz, Satish Rao, and Ben Zhao

Tapestry with Inserts and Deletes

Outline
Insert
 – Finding surrogates
 – Constructing neighbor tables
Delete
Unplanned delete

Requirements for Insert and Delete
 – Use no central directory
   – No hot spot / single point of failure
   – Reduces the danger of DoS attacks
 – Must be fast / touch few nodes
 – Minimize system administrator duties
 – Keep objects available

Acknowledged Multicast Algorithm
Locates and contacts all nodes with a given suffix.
 – Create a tree based on IDs as we go.
 – The starting node knows when all nodes have been reached.
 – Example: a node matching ??345 sends to any ?0345, any ?1345, any ?3345, etc., if such nodes exist; a node matching ?4345 in turn sends to 04345, 54345, … if they exist.
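A minimal in-memory sketch of this idea in Python. The helper names, the node list, and the base-10 five-digit IDs are assumptions for illustration, not the real Tapestry interface; the returned set plays the role of the acknowledgments that flow back up the tree.

```python
DIGITS = "0123456789"   # base of the ID space (base 10 here for readability)
ID_LEN = 5              # fixed ID length in digits

def any_node_with_suffix(nodes, suffix):
    """Stand-in for overlay routing: return some node whose ID ends in `suffix`."""
    return next((n for n in nodes if n.endswith(suffix)), None)

def acked_multicast(nodes, root_suffix, visit):
    """Contact every node matching `root_suffix`, extending the suffix one digit
    at a time; return the set of nodes reached, so the starting node knows when
    everyone has acknowledged."""
    reached = set()

    def recurse(owner, suffix):
        if owner not in reached:
            visit(owner)                 # deliver the message at this node
            reached.add(owner)
        if len(suffix) == ID_LEN:
            return                       # suffix is a full ID: leaf of the tree
        for d in DIGITS:
            longer = d + suffix          # e.g. ??345 expands to ?0345, ?1345, ...
            child = any_node_with_suffix(nodes, longer)
            if child is not None:
                recurse(child, longer)   # child covers the subtree of IDs ending in `longer`

    start = any_node_with_suffix(nodes, root_suffix)
    if start is not None:
        recurse(start, root_suffix)
    return reached

# Reach every node whose ID ends in 345.
nodes = ["02345", "41345", "13345", "54345", "04345", "99999"]
print(sorted(acked_multicast(nodes, "345", visit=lambda n: None)))
# ['02345', '04345', '13345', '41345', '54345']
```

Each recursive call fixes one more digit of the suffix, which is what makes the contacted nodes form a tree rooted at the starting node.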

Three Parts to Insertion
1. Establish pointers from the surrogates to the new node.
2. Notify the need-to-know nodes.
3. Create routing tables and notify other nodes.

Finding the Surrogates
 – The new node sends a join message to a surrogate.
 – The primary surrogate multicasts to all other surrogates.
 – Each surrogate establishes a pointer to the new node.
 – When all pointers are established, continue.
(Figure: gateway node, surrogates matching ????4 and ???34, and the new node.)
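As a rough illustration of who the surrogates are (my own simplification, not the slide's exact definition): under suffix routing they are the existing nodes sharing the longest possible suffix with the new node's ID.

```python
# Assumption for this sketch: "surrogates" = the existing node(s) sharing the
# longest common suffix with the new node's ID; ties are kept together.

def surrogates_for(existing_ids, new_id):
    best, best_len = [], -1
    for nid in existing_ids:
        k = 0
        while k < min(len(nid), len(new_id)) and nid[-1 - k] == new_id[-1 - k]:
            k += 1                       # length of the shared suffix
        if k > best_len:
            best, best_len = [nid], k
        elif k == best_len:
            best.append(nid)
    return best, best_len

# New node 12334: the nodes ending in 34 match in two digits, so they are the
# surrogates the join message must reach.
print(surrogates_for(["00004", "77734", "55534", "66666"], "12334"))
# (['77734', '55534'], 2)
```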

Need-to-know Nodes
 – Need-to-know = a node with a hole in its neighbor table that the new node fills.
 – If the new node's ID ends in 234 and no nodes ending in 234 existed, all ???34 nodes must be notified.
 – Use acknowledged multicast to reach all matching nodes.
 – During this time, object requests may go either to the new node or to the former surrogate, but that's okay.
 – Once done, delete the pointers from the surrogates.
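A small sketch of the hole test under the same toy model (the data layout and names are mine, not Tapestry's): node X needs to know if it shares its last j digits with the new node and no pre-existing node already filled the slot for the next digit.

```python
def need_to_know(existing_ids, new_id):
    notify = []
    for x in existing_ids:
        j = 0
        while j < min(len(x), len(new_id)) and x[-1 - j] == new_id[-1 - j]:
            j += 1                            # shared suffix length with the new node
        if j == len(new_id):
            continue                          # x is (a copy of) the new node itself
        slot = new_id[-(j + 1):]              # suffix naming the slot the new node fills
        if not any(y.endswith(slot) for y in existing_ids):
            notify.append(x)                  # the slot was a hole: x must be told
    return notify

# Matching the slide: the new node ends in 234 and no node ended in 234,
# so every node ending in 34 must be notified.
print(need_to_know(["77734", "55534", "11111"], "00234"))
# ['77734', '55534']
```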

Constructing the Neighbor Table via Nearest-Neighbor Search
 – Suppose we have a good algorithm A for finding the three nearest neighbors of a given node.
 – To fill in a slot, apply A to the subnetwork of nodes that could fill that slot.
   – For the ????1 slot, run A on the network of nodes ending in 1.
 – Something cheaper is possible that requires less computation but still uses nearest-neighbor search.
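A brute-force sketch of the restriction step. Scalar "locations" stand in for network latency, and the brute force stands in for algorithm A; all names here are illustrative assumptions.

```python
def fill_slot(all_nodes, suffix, my_location, keep=3):
    """all_nodes: dict of node id -> location. Return the `keep` closest
    candidates whose IDs end in `suffix` (primary entry plus backups)."""
    candidates = [(abs(loc - my_location), nid)
                  for nid, loc in all_nodes.items() if nid.endswith(suffix)]
    return [nid for _, nid in sorted(candidates)[:keep]]

# Fill the ????1 slot, i.e. run the search only on nodes ending in "1".
nodes = {"00001": 12.0, "30521": 3.5, "44441": 40.0, "99990": 1.0}
print(fill_slot(nodes, "1", my_location=0.0))
# ['30521', '00001', '44441']
```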

Finding the Nearest Neighbor
 – Let j be such that the surrogate matches the new node in the last j digits of the node ID; set G = surrogate.
 – A. G sends its j-list to the new node; the new node pings all nodes on the j-list.
 – B. If one is closer, set G = the closest such node and go to A. If not, this level is done; let j = j - 1 and go to A.
 – (The j-list is the closest k = O(log n) nodes matching G in the last j digits.)
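A simulated version of this loop, under assumptions: a scalar location stands in for measured ping time, the j-list is computed by brute force over the whole node set (the real G would already store it), and k is small. The names are mine, not the paper's.

```python
def common_suffix_len(a, b):
    j = 0
    while j < min(len(a), len(b)) and a[-1 - j] == b[-1 - j]:
        j += 1
    return j

def nearest_neighbor(all_nodes, new_id, new_loc, surrogate, k=4):
    """all_nodes: dict of node id -> location. Walk down from the surrogate,
    one suffix level at a time, always moving to a closer node."""
    dist = lambda nid: abs(all_nodes[nid] - new_loc)

    def j_list(g, j):
        # k nodes closest to g among those matching g in the last j digits
        matches = [n for n in all_nodes
                   if n != new_id and common_suffix_len(n, g) >= j]
        return sorted(matches, key=lambda n: abs(all_nodes[n] - all_nodes[g]))[:k]

    G = surrogate
    j = common_suffix_len(G, new_id)
    while j >= 0:
        closest = min(j_list(G, j), key=dist)
        if dist(closest) < dist(G):
            G = closest          # step B: a closer node exists, repeat at the same level
        else:
            j -= 1               # no progress at this level: relax the suffix constraint
    return G

nodes = {"02345": 9.0, "41345": 2.0, "77345": 5.0, "66666": 1.5}
print(nearest_neighbor(nodes, new_id="12345", new_loc=0.0, surrogate="02345"))
# '66666' (the closest node overall, found by descending suffix levels)
```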

Is This the Nearest Node?
Yes, with high probability, under an assumption.
 – Pink circle = the ball around the new node of radius d(G, new node).
 – Progress = finding any node in the pink circle.
 – Consider the ball around G containing all of its j-list. Two cases:
   – The ball around G contains the pink ball; we have found the closest node.
   – There is high overlap between the pink ball and the G-ball, so it is unlikely that the pink ball is empty while the G-ball contains k nodes.
(Figure: G, which matches the new node in j digits, and the new node.)

The Grid-like Assumption
 – The algorithm for finding the first entry works for any grid-like network.
 – This is the same assumption that Plaxton, Rajaraman, and Richa make.

Delete: Terminology
(Figure: in-neighbors, such as xxx45 and xxxx5, point to the exiting node; the exiting node points to out-neighbors, such as xxxx1.)

Planned Delete
Notify its neighbors (O(log² n) of them):
 – To out-neighbors: the exiting node says "I'm no longer pointing to you."
 – To in-neighbors: the exiting node says it is leaving and proposes at least one replacement.
 – The exiting node republishes all object pointers it stores.
 – Use republish-on-delete to clean things up.
Objects rooted at the exiting node get new roots:
 – Either proactive pointer copying, or
 – wait for republishes and, in the meantime, switch routing planes.
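A toy sketch of the in-neighbor side of this step. Routing tables are modeled as dicts from slot suffix to next hop, the replacement search is a brute-force scan to keep the example self-contained, and object republishing is omitted; all of this is my simplification, not the protocol itself.

```python
def planned_delete(tables, exiting):
    """tables: dict node_id -> {slot_suffix: next_hop_id}. Give every surviving
    in-neighbor of `exiting` a replacement for the slot it used to fill."""
    survivors = {n: t for n, t in tables.items() if n != exiting}
    for node, table in survivors.items():
        for slot, hop in list(table.items()):
            if hop == exiting:
                repl = next((c for c in survivors if c.endswith(slot)), None)
                if repl is not None:
                    table[slot] = repl       # replacement proposed for the slot
                else:
                    del table[slot]          # no candidate: the slot becomes a hole
    return survivors

tables = {
    "00001": {"21": "55521"},
    "55521": {"1": "00001"},
    "88821": {"21": "55521"},
}
print(planned_delete(tables, "55521"))
# {'00001': {'21': '88821'}, '88821': {'21': '88821'}}  (88821 fills its own slot)
```

In the real protocol the exiting node proposes the replacements itself, so its in-neighbors do not have to search.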

Republish-on-Delete
(Figure: objects whose pointer paths passed through the exiting node are republished, so their pointers are re-established on surviving nodes.)

Unplanned Delete
Planned delete relied on the exiting node's neighbor table:
 – the list of out-neighbors,
 – the list of in-neighbors,
 – the closest matching node for each level.
Can we reconstruct this information?
 – Not easily.
 – Fortunately, we probably don't need to.

Handle Unplanned Delete Lazily
When A notices that B is dead, A fixes its own state:
 – A removes B from its routing tables. If removing B produces a hole, A must fill the hole, or be sure that the hole cannot be filled (use acknowledged multicast).
 – A republishes all objects with next hop = B, using republish-on-delete as before.
Good: each node makes a local decision, so no DoS problems.
Problems:
 – The delete may never "finish," and new nodes may get outdated information.
 – A partial delete goes undetected.
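Under the same toy model as the planned-delete sketch, the purely local repair that A performs when it times out on B might look like this (assumed names; object-pointer republishing is elided):

```python
def lazy_repair(table, dead, live_nodes):
    """table: {slot_suffix: next_hop_id} belonging to node A. A repairs only
    its own state when it notices that `dead` has failed."""
    for slot, hop in list(table.items()):
        if hop == dead:
            repl = next((n for n in live_nodes if n.endswith(slot)), None)
            if repl is not None:
                table[slot] = repl           # hole filled locally
            else:
                del table[slot]              # no candidate: A must confirm the hole,
                                             # e.g. via acknowledged multicast
    return table

print(lazy_repair({"21": "55521", "3": "00003"}, dead="55521",
                  live_nodes=["00003", "88821"]))
# {'21': '88821', '3': '00003'}
```

Because every node repairs only its own entries, no single node has to coordinate the departure, which is what keeps the scheme DoS-resistant.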

Conclusion: Insert and Delete Work!
 – No central point of failure.
 – Touches only polylog(n) nodes.
 – Minimizes system administrator duties.
 – Objects remain available throughout.