Locality Optimizations in Tapestry (Sahara/OceanStore Winter Retreat)


Locality Optimizations in Tapestry
Jeremy Stribling
Joint work with: Kris Hildrum, Ben Y. Zhao, Anthony D. Joseph, John D. Kubiatowicz
Sahara/OceanStore Winter Retreat, January 14, 2003

Object Location in Tapestry
(diagram with nodes labeled UW, Berkeley, and MIT)

Is This Always Optimal?
(diagram with nodes labeled UW, Berkeley, and MIT)

Why Is This a Problem?
Example application: OceanStore web caching. If a nearby replica exists, we must find it quickly.
Measure of locality: Relative Delay Penalty (RDP), the ratio of the distance to an object through Tapestry to the minimum possible distance (i.e., over IP).
Problem: finding nearby objects incurs a high RDP. Two extra hops have a huge relative impact when the object is close, and this is an issue for all similar systems, not just Tapestry.
Solution: trade storage overhead for low RDP.
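To make the metric concrete, here is a minimal Python sketch of how RDP could be computed; the function and argument names are illustrative assumptions, not part of the Tapestry code.

```python
# Hypothetical sketch of the RDP metric. Latencies are assumed to be
# measured in milliseconds; names are illustrative only.

def relative_delay_penalty(d_tapestry_ms: float, d_ip_ms: float) -> float:
    """RDP = distance traveled through the overlay / direct IP distance."""
    return d_tapestry_ms / d_ip_ms

# Example: locating a replica 5 ms away via a 40 ms overlay route gives an
# RDP of 8, even though the absolute penalty is only 35 ms.
print(relative_delay_penalty(40.0, 5.0))  # 8.0
```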

Optimization 1: Publish to Backups
Redundancy: each routing table entry stores up to c nodes. The closest node is the primary neighbor; the other c-1 nodes are backups.
A simple optimization: publish to k backups as well, limited to the first n hops of the publish path.
Result: nodes near the object are more likely to encounter pointers while routing toward the root.
Storage overhead: k*n additional pointers per object.
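A rough, self-contained Python sketch of this publish rule follows; the Node class, routing entries, and all names are hypothetical placeholders, not the actual Tapestry implementation.

```python
# Minimal sketch of "publish to backups". Each routing table entry is modeled
# as a list [primary, backup_1, ..., backup_{c-1}]; all names are hypothetical.

class Node:
    def __init__(self, name):
        self.name = name
        self.pointers = {}     # object_id -> location of a replica
        self.route_entry = []  # candidate next hops toward the root

    def store_pointer(self, object_id, location):
        self.pointers[object_id] = location


def publish(object_id, location, path, k, n):
    """Leave pointers along the publish path toward the object's root.

    For the first n hops, also leave pointers on up to k backup neighbors,
    so nodes near the object are more likely to hit a pointer early.
    Storage cost: at most k * n extra pointers per published object.
    """
    for hop, node in enumerate(path):
        node.store_pointer(object_id, location)           # normal publish
        if hop < n:
            for backup in node.route_entry[1:k + 1]:      # up to k backups
                backup.store_pointer(object_id, location)


# Tiny example: a publish path A -> B -> C -> root, one backup per entry.
a, b, c, root = Node("A"), Node("B"), Node("C"), Node("root")
backup1, backup2 = Node("B'"), Node("C'")
a.route_entry = [b, backup1]
b.route_entry = [c, backup2]
c.route_entry = [root]

publish("obj42", "A", [a, b, c, root], k=1, n=1)
print(sorted(n.name for n in [a, b, c, root, backup1, backup2]
             if "obj42" in n.pointers))
# ['A', 'B', "B'", 'C', 'root']  (backups past the first n=1 hops are skipped)
```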

Optimization 1: Publish to Backups (results)
Experiments run in simulation on a PlanetLab-based topology (results graph omitted).

Optimization 2: Local Misroute
(diagram with nodes labeled UW, Berkeley, and MIT)

Optimization 2: Local Misroute
Solution: before taking a "long" hop, misroute to a closer node, looking a little harder in the local area before leaving it. When publishing, also place a pointer on the local surrogate.
Issue: what determines a "long" hop? One metric: if the next hop is more than m times longer than the last hop, consider it "long"; call m the threshold factor.
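The threshold test can be sketched as follows; the latencies, candidate lists, and function names are illustrative assumptions, not the real Tapestry routing code.

```python
# Self-contained sketch of the "long hop" test with threshold factor m.
# Latencies are hypothetical measured values in milliseconds.

def is_long_hop(last_hop_ms: float, next_hop_ms: float, m: float) -> bool:
    """A hop is 'long' if it is more than m times longer than the last hop."""
    return next_hop_ms > m * last_hop_ms


def choose_next_hop(candidates_ms: dict, primary: str,
                    last_hop_ms: float, m: float) -> str:
    """Pick the next hop toward the root.

    candidates_ms maps each candidate (the primary plus backups from the same
    routing table entry) to its latency. If the primary would be a 'long' hop,
    misroute to the closest strictly closer candidate instead.
    """
    if is_long_hop(last_hop_ms, candidates_ms[primary], m):
        closer = {node: d for node, d in candidates_ms.items()
                  if d < candidates_ms[primary]}
        if closer:
            return min(closer, key=closer.get)  # local misroute
    return primary


# Example: the last hop was 3 ms, the primary next hop is 60 ms away, and a
# backup sits 8 ms away. With m = 3, 60 > 3 * 3, so we misroute locally.
print(choose_next_hop({"primary": 60.0, "backup_a": 8.0, "backup_b": 90.0},
                      "primary", last_hop_ms=3.0, m=3.0))  # backup_a
```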

Optimization 2: Local Misroute (results)
Experiments run in simulation on a transit-stub topology (results graph omitted). "Using sim knowledge" indicates direct use of the topology file.

Future Work
Analyze more locality optimizations and different parameter configurations.
Take measurements on PlanetLab.
Test the optimizations with real workloads (e.g., web caching).
Complete a cost analysis of storage overhead vs. RDP benefit across all optimizations.