PlanetLab Applications and Federation

Kiyohide NAKAUCHI, NICT (nakauchi@nict.go.jp)
Aki NAKAO, UTokyo / NICT (nakao@iii.u-tokyo.ac.jp, aki.nakao@nict.go.jp)

23rd ITRC Symposium, 2008/05/16
(1) PlanetLab Applications

CoMon: monitoring slice-level statistics (a fetch sketch follows)
- http://summer.cs.princeton.edu/status/index_slice.html
- Over 400 nodes
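CoMon also exports its tables in machine-readable form. A minimal sketch of pulling node-level data, assuming CoMon's CSV query interface at comon.cs.princeton.edu and "name"/"liveslices" column names; both the URL and the columns are assumptions, not taken from the slides:

import csv
import io
import urllib.request

# Assumed CoMon CSV export URL; adjust to the current query interface.
COMON_CSV = ("http://comon.cs.princeton.edu/status/"
             "tabulator.cgi?table=table_nodeviewshort&format=formatcsv")

def busiest_nodes(limit=10):
    """Return the `limit` nodes with the most live slices."""
    with urllib.request.urlopen(COMON_CSV) as resp:
        reader = csv.DictReader(io.TextIOWrapper(resp, encoding="utf-8"))
        rows = [r for r in reader
                if r.get("liveslices", "").strip().isdigit()]
    rows.sort(key=lambda r: int(r["liveslices"]), reverse=True)
    return [(r["name"], int(r["liveslices"])) for r in rows[:limit]]

if __name__ == "__main__":
    for name, slices in busiest_nodes():
        print(f"{name}: {slices} live slices")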
Typical Long-running Applications
- CDN: CoDeeN [Princeton], Coral [NYU], CobWeb [Cornell]
- Large-file transfer: CoBlitz, CoDeploy [Princeton], SplitStream [Rice]
- Routing overlays: i3 [UCB], Pluto [Princeton]
- DHT / P2P middleware: Bamboo [UCB], Meridian [Cornell], Overlay Weaver [UWaseda]
- Brokerage service: Sirius [UGA]
- Measurement, monitoring: ScriptRoute [Maryland, UWash], S-cube [HP Labs], CoMon, CoTop, PlanetFlow [Princeton]
- DNS, anomaly detection, streaming, multicast, anycast, …

In addition, there are many short-term research projects on PlanetLab.
CoDeeN: Academic Content Distribution Network
- Improves web performance & reliability
- 100+ proxy servers on PlanetLab
- Running 24/7 since June 2003
- Roughly 3-4 million requests/day aggregate
- One of the highest-traffic projects on PlanetLab
How CoDeeN Works
- Each CoDeeN proxy is a forward proxy, reverse proxy, and redirector
- On a local cache miss, the proxy redirects the request to a peer proxy; the peer answers from its cache on a hit, and fetches from the origin server on a miss (see the sketch below)
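A minimal sketch of that request path, collapsing the three roles into one class. Peer selection by URL hashing, the node names, and the in-memory cache are illustrative assumptions, not CoDeeN's actual mechanism:

import hashlib

PEER_NAMES = ["proxyA.example.org", "proxyB.example.org", "proxyC.example.org"]

class CoDeeNProxy:
    def __init__(self, name):
        self.name = name
        self.cache = {}  # url -> cached response body

    def handle_client(self, url):
        """Forward-proxy role: answer locally or redirect to a peer."""
        if url in self.cache:           # local cache hit
            return self.cache[url]
        peer = self.pick_peer(url)      # redirector role
        return peer.handle_peer(url)

    def pick_peer(self, url):
        """Hash the URL so requests for the same object converge on the
        same reverse proxy."""
        idx = int(hashlib.sha1(url.encode()).hexdigest(), 16) % len(NODES)
        return NODES[idx]

    def handle_peer(self, url):
        """Reverse-proxy role: serve from cache, or fetch from origin."""
        if url not in self.cache:       # cache miss -> origin fetch
            self.cache[url] = fetch_from_origin(url)
        return self.cache[url]

def fetch_from_origin(url):
    return f"<contents of {url}>"       # placeholder for the real fetch

NODES = [CoDeeNProxy(n) for n in PEER_NAMES]
print(NODES[0].handle_client("http://example.com/page"))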
CoBlitz: Scalable Large-file CDN
- Faster than BitTorrent by 55-86% (up to ~500%)
- CDN = redirector + reverse proxy; DNS resolves coblitz.codeen.org to a nearby CDN node
- A client-side agent splits the file into chunks, which the CDN nodes fetch from the origin server via HTTP range queries (sketched below)
- Only the reverse proxy (CDN) caches the chunks!
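A minimal sketch of that chunking step, treating the CDN node as a plain HTTP server that honors Range headers; the 60 KB chunk size and the sequential loop are illustrative assumptions:

import urllib.request

CHUNK = 60 * 1024  # illustrative chunk size

def fetch_in_chunks(url, total_size):
    """Download `url` as a series of HTTP Range requests, one per chunk,
    so each chunk is independently cacheable by a reverse proxy."""
    data = bytearray()
    for start in range(0, total_size, CHUNK):
        end = min(start + CHUNK, total_size) - 1
        req = urllib.request.Request(url)
        req.add_header("Range", f"bytes={start}-{end}")  # HTTP range query
        with urllib.request.urlopen(req) as resp:
            data += resp.read()
    return bytes(data)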
How Does PlanetLab Behave? Node Availability
[Figure: node availability, from Larry Peterson et al., "Experiences Building PlanetLab", OSDI '06]
Live Slices
- 50% of nodes have 5-10 live slices
[Figure from Larry Peterson et al., "Experiences Building PlanetLab", OSDI '06]
Bandwidth
- Median: 500-1000 Kbps, for both inbound and outbound bandwidth
[Figure from Larry Peterson et al., "Experiences Building PlanetLab", OSDI '06]
(2) Extending PlanetLab
- Federation: distributed operation/management
- Private PlanetLab: private use, original configuration; e.g. CORE [UTokyo, NICT]
- Hardware support (control/data separation): custom hardware such as Intel IXP, NetFPGA, 10GbE; e.g. Supercharging PlanetLab [WashU]
- Edge diversity: integration of wireless technologies [OneLab], e.g. HSDPA, WiFi, Bluetooth, ZigBee, 3GPP LTE
- Related efforts: GENI, VINI
Federation
- Split PlanetLab: several regional PlanetLabs, each with its own policy
- Interconnection: PlanetLabs trade and share node resources with each other
[Figure: multiple PLCs (PlanetLab 1, 2, 3, …) connected over the Internet; each node runs a VMM and a Node Manager hosting VMs 1..n]
PlanetLab-EU Starts Federation
- Emerging European portion of the public PlanetLab
- 33 nodes today (migrated from PlanetLab)
- Supported by the OneLab project (UPMC, INRIA); control center in Paris
- PlanetLab-JP will also follow with federation
MyPLC for Your Own PlanetLab
- "PlanetLab in a box": a complete, portable PlanetLab Central (PLC) package
- Easy to install and administer
- Single configuration file
- All code isolated in a chroot jail (/plc): Linux, Apache, OpenSSL, PostgreSQL, pl_db, plc_www, plc_api, bootmanager, bootcd_v3
Resource Management
- Resource sharing policy: by contributing 2 nodes to any one PlanetLab, a site can create 10 slices that span the federated PlanetLab
- RSpec: a general, extensible resource description
- Portals present a higher-level front-end view of resources and will use RSpec as part of the back-end
RSpec Example

<component type="virtual access point"
           requestID="siteA-ap1"
           physicalID="geni.us.utah.wireless.node45">
  1000000000 Full 10 R/W FreqShared broadcast 802.11g 16
</component>
Summary
- PlanetLab applications: 800+ network services, each running in its own slice; many are long-running infrastructure services
- Measurement with a set of monitoring tools reveals the extensive use of PlanetLab
- Federation distributes operation and management: future PlanetLab = current PL + PL-EU + PL-JP + …
Backup
Monitoring Tools
- CoTop: monitors which slices are consuming resources on each node, like "top"
- CoMon: monitors statistics for PlanetLab at both the node level and the slice level
OpenDHT/OpenHash
- Publicly accessible distributed hash table (DHT) service
- A simple put/get interface is accessible over both Sun RPC and XML-RPC
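A minimal sketch of put/get over XML-RPC. The gateway address and the put(key, value, ttl, app) / get(key, maxvals, placemark, app) signatures follow the published OpenDHT interface as best recalled here; treat them, and the 20-byte SHA-1 key convention, as assumptions:

import hashlib
import xmlrpc.client

GATEWAY = "http://opendht.nyuld.net:5851/"  # assumed public gateway

proxy = xmlrpc.client.ServerProxy(GATEWAY)
key = xmlrpc.client.Binary(hashlib.sha1(b"my-key").digest())  # 20-byte key
value = xmlrpc.client.Binary(b"hello planetlab")

# put(key, value, ttl_seconds, application) -> 0 on success
proxy.put(key, value, 3600, "demo-app")

# get(key, maxvals, placemark, application) -> [values, placemark]
values, placemark = proxy.get(key, 10, xmlrpc.client.Binary(b""), "demo-app")
for v in values:
    print(v.data)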