CITRIS Poster: Supporting Wide-area Applications (Tapestry)
http://www.cs.berkeley.edu/~ravenben/tapestry

Presentation transcript:

1 Supporting Wide-area Applications
Complexities of global deployment:
- Network unreliability: BGP converges slowly; path redundancy goes unexploited
- Management of large-scale resources/components: locate and utilize resources despite failures
Decentralized Object Location and Routing (DOLR): a wide-area overlay application infrastructure
- Self-organizing, scalable
- Fault-tolerant routing and object location
- Efficient data delivery (bandwidth, latency)
- Extensible; supports application-specific protocols
Recent work: Tapestry, Chord, CAN, Pastry, Kademlia, Viceroy, …

2 Tapestry: Decentralized Object Location and Routing
Zhao, Kubiatowicz, Joseph, et al.
Mapping keys to the physical network:
- Large sparse ID space N
- Nodes in the overlay network have NodeIds ∈ N
- Given k ∈ N, the overlay deterministically maps k to its root node (a live node in the network)
Base API:
- Publish / Unpublish (ObjectId)
- RouteToNode (NodeId)
- RouteToObject (ObjectId)
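To make the base API concrete, here is a minimal Java sketch (Java being the language of the deployment described later); the names Dolr, NodeId, ObjectId and the byte[] message parameter are assumptions for illustration, not the actual Tapestry interfaces:

```java
// Hypothetical sketch of the base API above; names and signatures are illustrative,
// not the real Tapestry classes.
public interface Dolr {

    /** Make an object locatable: install location pointers from this node toward the object's root. */
    void publish(ObjectId id);

    /** Remove this node's location pointers for the object. */
    void unpublish(ObjectId id);

    /** Route a message to the live node that roots the given NodeId. */
    void routeToNode(NodeId destination, byte[] message);

    /** Route a message to a replica of the object, ideally a nearby one. */
    void routeToObject(ObjectId id, byte[] message);
}

/** Identifiers drawn from the large sparse ID space N. */
record NodeId(byte[] bits) {}
record ObjectId(byte[] bits) {}
```

In the DOLR model, publish advertises where an object lives rather than moving its data, so a later routeToObject can be steered toward a nearby replica.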

3 Tapestry Mesh: incremental prefix-based routing
[Figure: a routing mesh of nodes labeled with 4-digit hex NodeIDs (e.g. 0x43FE, 0xEF31, 0xEF34, 0xEF37, 0xE932, 0x0921), with links annotated by routing level 1-4]
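Each hop in the mesh resolves one more hex digit of the destination ID (level 1 fixes the first digit, level 2 the second, and so on). A rough Java sketch of that next-hop choice follows; the routing-table layout (levels x 16 hex slots) and all names here are assumptions:

```java
import java.util.Optional;

/** Sketch of incremental prefix routing over 4-digit hex IDs, as in the mesh above. */
public class PrefixRouter {
    static final int DIGITS = 4;        // 4 hex digits per NodeID in the example mesh
    private final String localId;       // e.g. "43FE"
    private final String[][] table;     // table[level][digit] = neighbor whose ID matches
                                        // localId's first `level` digits followed by `digit`

    PrefixRouter(String localId, String[][] table) {
        this.localId = localId;
        this.table = table;
    }

    /** Length of the prefix shared between our ID and the destination. */
    private int sharedPrefix(String dest) {
        int i = 0;
        while (i < DIGITS && localId.charAt(i) == dest.charAt(i)) i++;
        return i;
    }

    /** Pick the next hop: a neighbor that matches one more digit of the destination. */
    Optional<String> nextHop(String dest) {
        int level = sharedPrefix(dest);
        if (level == DIGITS) return Optional.empty();         // we are the destination/root
        int digit = Character.digit(dest.charAt(level), 16);  // next digit to resolve
        return Optional.ofNullable(table[level][digit]);      // null if no neighbor known for that slot
    }
}
```

With the NodeIDs in the figure, for example, a message from 0x0921 toward 0xEF34 would pass through nodes matching E***, then EF**, then EF3*, before reaching 0xEF34.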

4 Object Location: Randomization and Locality

5 Single Node Architecture
Components of a single Tapestry node, from applications down to the network:
- Applications: decentralized file systems, application-level multicast, approximate text matching
- Application interface / upcall API
- Router, backed by the routing table & object pointer DB, plus dynamic node management
- Network link management
- Transport protocols
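Purely to illustrate how these components might fit together inside one node, here is a skeletal Java composition; every type and field name below is an assumption, not the real Tapestry code layout:

```java
/** Illustrative composition of the single-node components listed above. */
public class TapestryNodeSketch {
    // Lowest layer: moving bytes and tracking link quality to neighbors.
    interface Transport {}
    interface LinkManager {}

    // Core layer: routing decisions plus the state they consult.
    interface Router {}
    interface RoutingTable {}
    interface ObjectPointerDb {}
    interface DynamicNodeManager {}   // node joins, departures, failure repair

    // Top layer: upcalls into applications (file systems, multicast, text matching, ...).
    interface UpcallApi {}

    private final Transport transport;
    private final LinkManager links;
    private final Router router;
    private final RoutingTable routingTable;
    private final ObjectPointerDb pointerDb;
    private final DynamicNodeManager membership;
    private final UpcallApi upcalls;

    TapestryNodeSketch(Transport t, LinkManager l, Router r, RoutingTable rt,
                       ObjectPointerDb db, DynamicNodeManager m, UpcallApi u) {
        transport = t; links = l; router = r; routingTable = rt;
        pointerDb = db; membership = m; upcalls = u;
    }
}
```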

6 Status and Deployment
PlanetLab global network:
- 98 machines at 42 institutions in North America, Europe, and Australia (~60 machines utilized)
- 1.26 GHz PIII (1 GB RAM) and 1.8 GHz PIV (2 GB RAM)
- North American machines (2/3) on Internet2
Tapestry Java deployment:
- 6-7 nodes on each physical machine
- IBM Java JDK 1.3.0
- Node virtualization inside the JVM and SEDA
- Scheduling between virtual nodes increases latency

7 Node-to-Node Routing
- Metric: ratio of end-to-end overlay routing latency to the shortest ping distance between the two nodes
- All node pairs measured; results placed into buckets
- Median = 31.5, 90th percentile = 135
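Both this slide and the next report a latency stretch: overlay latency divided by the direct ping time for the same pair, grouped into buckets. A small Java sketch of computing and bucketing that ratio; the helper names, bucket width, and the choice to bucket by ping distance are assumptions:

```java
import java.util.Map;
import java.util.TreeMap;

/** Sketch: compute (overlay latency / shortest ping) per pair and average it per ping-distance bucket. */
public class RoutingStretch {

    /** Ratio of end-to-end overlay latency to the shortest ping distance for one pair. */
    static double ratio(double overlayLatencyMs, double pingMs) {
        return overlayLatencyMs / pingMs;
    }

    /** pairs[i] = {overlay latency in ms, shortest ping in ms}; returns bucket start (ms) -> mean ratio. */
    static Map<Integer, Double> bucketByPing(double[][] pairs, int bucketWidthMs) {
        Map<Integer, double[]> sums = new TreeMap<>();   // bucket start -> {sum of ratios, count}
        for (double[] p : pairs) {
            int bucketStart = (int) (p[1] / bucketWidthMs) * bucketWidthMs;
            double[] s = sums.computeIfAbsent(bucketStart, b -> new double[2]);
            s[0] += ratio(p[0], p[1]);
            s[1]++;
        }
        Map<Integer, Double> averages = new TreeMap<>();
        sums.forEach((start, s) -> averages.put(start, s[0] / s[1]));
        return averages;
    }
}
```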

8 Object Location
- Metric: ratio of end-to-end latency for object location to the shortest ping distance between the client and the object's location
- Each node publishes 10,000 objects; lookups issued on all objects
- 90th percentile = 158
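The workload on this slide (every node publishes 10,000 objects, then looks all of them up) could be driven through the hypothetical Dolr interface sketched earlier; the blocking-style timing below is a simplification, since a real lookup would typically complete asynchronously:

```java
/** Sketch of the measurement loop described above, using the hypothetical Dolr/ObjectId types
 *  from the earlier API sketch (assumed to be in the same package). */
public class ObjectLocationExperiment {
    static void run(Dolr node, ObjectId[] myObjects, ObjectId[] allObjects) {
        for (ObjectId id : myObjects) {
            node.publish(id);                        // 10,000 objects per node in the experiment
        }
        for (ObjectId id : allObjects) {
            long start = System.nanoTime();
            node.routeToObject(id, new byte[0]);     // lookup; later compared against client-object ping
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("located one object in " + elapsedMs + " ms");
        }
    }
}
```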

9 Parallel Insertion Latency
- Nodes inserted dynamically, in unison, into an existing Tapestry of 200 nodes
- Latency shown as a function of insertion group size / network size
- Node virtualization effect: CPU scheduling contention causes ping measurements to time out, producing high deviation
- 90th percentile = 55042
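A sketch of how such a unison insertion might be timed; the JoiningNode interface and the thread-per-join setup are assumptions for illustration, not the measurement harness actually used:

```java
import java.util.concurrent.CountDownLatch;

/** Sketch: release a group of node joins at the same instant and record per-node join latency. */
public class ParallelInsertion {
    interface JoiningNode { void join(); }   // hypothetical: integrate this node into the existing mesh

    static long[] insertInUnison(JoiningNode[] group) throws InterruptedException {
        long[] latenciesMs = new long[group.length];
        CountDownLatch start = new CountDownLatch(1);
        CountDownLatch done = new CountDownLatch(group.length);
        for (int i = 0; i < group.length; i++) {
            int idx = i;
            new Thread(() -> {
                try {
                    start.await();                             // all joins begin together
                    long t0 = System.nanoTime();
                    group[idx].join();
                    latenciesMs[idx] = (System.nanoTime() - t0) / 1_000_000;
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    done.countDown();
                }
            }).start();
        }
        start.countDown();                                     // trigger the unison insertion
        done.await();
        return latenciesMs;
    }
}
```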

