Brocade: Landmark Routing on Peer to Peer Networks

Presentation transcript:

Brocade: Landmark Routing on Peer to Peer Networks
Ling Huang, Ben Y. Zhao, Yitao Duan, Anthony Joseph, John Kubiatowicz

State of the Art Routing
- High-dimensionality and coordinate-based P2P routing
- Decentralized object location and routing: Tapestry, Pastry, Chord, CAN, etc.
- Sub-linear storage and number of overlay hops per route
- Properties depend on random name distribution
- Optimized for uniform, mesh-style networks

Reality
- Transit-stub topology, disparate resources per node
- Result: inefficient inter-domain routing (bandwidth, latency)
- Inter-domain traffic → higher latency
- Stub nodes are low in CPU and high in latency
- Nodes are disparate in resources
[Figure: P2P overlay network spanning AS-1, AS-2, and AS-3, with sender S and receiver R]

Talk Outline
- Motivation
- Brocade Architecture
- Brocade Routing
- Evaluation
- Summary / Open Questions

Brocade: Landmark Routing
Goals:
- Eliminate unnecessary wide-area hops for inter-domain messages
- Eliminate traffic through high-latency, congested stub links
- Reduce wide-area bandwidth utilization
- Maintain the interface: RouteToID (globally unique ID)

Brocade Architecture
[Figure: the brocade layer sits above the P2P network; the brocade route between S and R replaces the original route across AS-1, AS-2, and AS-3]
Note: Tapestry location is not perfect, but it reduces routing at the stubs.

Mechanisms
Intuition: route quickly to the destination domain
- Organize a group of supernodes into a secondary overlay
- Sender S sends the message to its local supernode SN1
- SN1 finds and routes the message to supernode SN2 near receiver R
- SN1 uses Tapestry object location to find SN2
- SN2 sends the message to R via normal routing
A toy sketch of this path follows below.
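A minimal runnable sketch of this path, assuming illustrative names throughout: the Supernode class, local_route, and the plain dict standing in for Tapestry object location are not the paper's implementation.

```python
# Toy brocade forwarding path. The directory dict stands in for Tapestry
# object location; all names here are illustrative assumptions.
class Supernode:
    def __init__(self, name, cover_set):
        self.name = name
        self.cover_set = cover_set            # node IDs inside this SN's domain

    def local_route(self, dest_id, msg):      # normal intra-domain P2P routing
        print(f"{self.name}: delivered {msg!r} to node {dest_id}")

def brocade_route(sn1, dest_id, msg, directory):
    if dest_id in sn1.cover_set:              # intra-domain: brocade not used
        sn1.local_route(dest_id, msg)
        return
    sn2 = directory[dest_id]                  # SN1 finds SN2 via object location
    sn2.local_route(dest_id, msg)             # SN2 routes to R normally

sn_a = Supernode("SN1", {1, 2, 3})
sn_b = Supernode("SN2", {7, 8, 9})
directory = {n: sn for sn in (sn_a, sn_b) for n in sn.cover_set}
brocade_route(sn_a, 8, "ping", directory)     # S -> SN1 -> SN2 -> R
```

The design point is that only one wide-area decision is made per message: which supernode covers the destination.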

Classifying Traffic
Brocade is not useful for intra-domain messages:
- The P2P layer should already exploit some locality (Tapestry)
- Relaying them through a supernode adds undesirable processing overhead
Classifying traffic by destination (both mechanisms sketched below):
- Proximity caches: every node keeps a list of nodes it knows to be local; the cache need not be optimal (worst case: one extra relay through the SN)
- Cover set: the supernode keeps a list of all nodes in its domain and acts as the authority on local vs. distant traffic
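A sketch of the two classification mechanisms under assumed names; destination_is_local and the set-based structures are illustrative, not the paper's code.

```python
# Proximity cache (per node, best-effort) vs. cover set (per SN, authoritative).
class Node:
    def __init__(self, proximity_cache):
        self.proximity_cache = proximity_cache   # node IDs known to be local

    def destination_is_local(self, dest_id):
        # Cache need not be complete: a miss costs at most one relay via the SN.
        return dest_id in self.proximity_cache

class Supernode:
    def __init__(self, cover_set):
        self.cover_set = cover_set               # all nodes in this SN's domain

    def destination_is_local(self, dest_id):     # authority on local vs. distant
        return dest_id in self.cover_set

node = Node(proximity_cache={1, 2})
sn = Supernode(cover_set={1, 2, 3})
print(node.destination_is_local(3))              # False: cache miss, relay via SN
print(sn.destination_is_local(3))                # True: SN knows node 3 is local
```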

Entering the Brocade
Route: Sender → Supernode (sender). How does a message reach the brocade layer?
- IP-snooping brocade: the supernode snoops on P2P headers and redirects; use machines close to border gateways
  +: transparent to the sender
  –: may touch local nodes first
- Directed brocade: the sender sends the message directly to the supernode, located via DNS resolution (nslookup supernode.cs.berkeley.edu); see the sketch below
  +: maximum performance
  –: state maintenance
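A sketch of directed-brocade discovery as described on this slide. The hostname is the slide's example and would have to actually resolve for this to return an address.

```python
# Resolve a well-known DNS name for the local domain's supernode,
# equivalent to the slide's `nslookup supernode.cs.berkeley.edu`.
import socket

def find_local_supernode(domain: str = "cs.berkeley.edu") -> str:
    return socket.gethostbyname(f"supernode.{domain}")
```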

Inter-supernode Routing
Route: Supernode (sender) → Supernode (receiver)
Locate the receiver's supernode given the destination nodeID, using Tapestry object location:
- Tapestry: a routing mesh with built-in proximity metrics
- Location exploits locality (finds closer objects faster)
Finding supernodes (toy version sketched below):
- Each supernode "publishes" its cover set on the brocade layer as locally stored objects
- To route to node N, locate the supernode on the brocade layer storing N
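A toy of supernode publication and lookup. The module-level dict stands in for Tapestry's distributed object location; names are assumptions for illustration.

```python
# Each SN "publishes" every node in its cover set as a locally stored object;
# routing to node N means locating the SN that stores N.
brocade_directory = {}                     # node GUID -> supernode storing it

def publish_cover_set(supernode, cover_set):
    for node_id in cover_set:
        brocade_directory[node_id] = supernode

def locate_supernode(dest_id):
    # Real Tapestry location also exploits locality: closer copies found faster.
    return brocade_directory.get(dest_id)

publish_cover_set("SN-berkeley", {0x1A2B, 0x3C4D})
assert locate_supernode(0x3C4D) == "SN-berkeley"
```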

Feasibility Analysis
Some numbers:
- Internet: ~220M hosts, ~20K ASes, ~10K nodes per AS
- Java implementation of Tapestry on a PIII 800: ~1000 msgs/second
State maintenance:
- AS of 10K nodes, assume 10% enter/leave every minute: ~17 publish msgs/s, i.e. ~1.7% of one supernode's capacity
- If a single inter-supernode message costs X, publishing costs ~5X, so only ~1.7% × 5 ≈ 9% of CPU is spent processing publishes on the brocade layer
- Bandwidth: 1 KB/msg × 1K msgs/min = 1 MB/min ≈ 133 kb/s
Storage requirement of Tapestry:
- 20K ASes, octal Tapestry: log8(20K²) ≈ 10 digits
- 10K objects (Tapestry GUIDs) published per supernode
- Tapestry GUID = 160 bits = 20 B
- Expected storage per SN: 10 × 10K × 20 B = 2 MB
(The sketch below re-checks these numbers.)
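A back-of-envelope check of the slide's arithmetic, using Python purely as a calculator; the variable names are ours.

```python
# Storage per supernode: 10 digits * 10K GUIDs * 20 B = 2 MB.
digits = 10                         # octal Tapestry: log8(20K^2) ~= 10 digits
objects_per_sn = 10_000             # GUIDs published per supernode
guid_bytes = 160 // 8               # 160-bit Tapestry GUID = 20 B
print(digits * objects_per_sn * guid_bytes / 1e6, "MB")   # -> 2.0 MB per SN

# Publish bandwidth: 10% of a 10K-node AS churns per minute, 1 KB per message.
churn_msgs_per_min = 10_000 * 0.10
bytes_per_msg = 1_000
kb_per_s = churn_msgs_per_min * bytes_per_msg * 8 / 60 / 1e3
print(round(kb_per_s), "kb/s")      # -> ~133 kb/s (~17 KB/s)
```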

Evaluation: Routing RDP
[Figure: routing RDP (relative delay penalty) for IP-snooping brocade vs. directed brocade]
Setup: packet simulator, GT-ITM transit-stub topology, 4096 Tapestry nodes, 16 supernodes, CPU overhead = 1; local proximity cache on; inter-domain:intra-domain traffic = 3:1

Evaluation: Bandwidth Usage
[Figure: aggregate bandwidth used, measured against physical hops in the optimal route]
Setup: local proximity cache on; bandwidth unit: SizeOf(Msg) × Hops

Brocade Summary
P2P systems assume uniformity:
- Extraneous hops through the backbone to domains
- Routing across congested stub links
Brocade constrains inter-domain routing:
- Removes unnecessary routing through stubs
- Reduces expected inter-domain hops
- Limits misdirection to the less congested backbone
Result: lower latency, less bandwidth utilization

Ongoing Questions
Performance at what cost?
- Keep virtualization and the level of indirection (named routing)
- May lose some fault tolerance (how much?)
Making P2P real:
- Deployment issues?
- Impact of BGP routing policies on performance?
Future/ongoing work:
- Fault-tolerant supernodes
- Finer-grain node differentiation?
- Brocade as a replacement for BGP?

{ ravenben, hling }@eecs.berkeley.edu
HTTP://www.cs.berkeley.edu/~ravenben/tapestry