PlanetLab: An open platform for developing, deploying, and accessing planetary-scale services
(Adapted from Peterson et al.'s talks)
Overview
PlanetLab is a global research network that supports the development of new network services. Since the beginning of 2003, more than 1,000 researchers at top academic institutions and industrial research labs have used PlanetLab to develop new technologies for distributed storage, network mapping, peer-to-peer systems, distributed hash tables, and query processing.
PlanetLab Today
– Machines spanning 4140 sites and 40 countries
– Supports distributed virtualization: each of 600+ network services runs in its own slice
Long-Running Services
Content Distribution
– CoDeeN: Princeton
– Coral: NYU, Stanford
– Cobweb: Cornell
Storage & Large File Transfer
– LOCI: Tennessee
– CoBlitz: Princeton
Information Plane
– PIER: Berkeley, Intel
– PlanetSeer: Princeton
– iPlane: Washington
DHT
– Bamboo (OpenDHT): Berkeley, Intel
– Chord (DHash): MIT
Services (cont)
Routing / Mobile Access
– i3: Berkeley
– DHARMA: UIUC
– VINI: Princeton
DNS
– CoDNS: Princeton
– CoDoNs: Cornell
Multicast
– End System Multicast: CMU
– Tmesh: Michigan
Anycast / Location Service
– Meridian: Cornell
– Oasis: NYU
Services (cont)
Internet Measurement
– ScriptRoute: Washington, Maryland
Pub-Sub
– Corona: Cornell
– ePost: Rice
Management Services
– Stork (environment service): Arizona
– Emulab (provisioning service): Utah
– Sirius (brokerage service): Georgia
– CoMon (monitoring service): Princeton
– PlanetFlow (auditing service): Princeton
– SWORD (discovery service): Berkeley, UCSD
Usage Stats
– Slices: 600+
– Users:
– Bytes-per-day: 4 TB
– IP-flows-per-day: 190M
– Unique IP-addrs-per-day: 1M
Slices
User Opt-in (diagram: Client, NAT, Server)
Per-Node View (diagram)
A Virtual Machine Monitor (VMM) hosts the Node Manager, a Local Admin VM, and slice VMs (VM 1, VM 2, …, VM n).
Virtualization (diagram)
Virtual Machine Monitor (VMM): Linux kernel (Fedora Core) + Vservers (namespace isolation) + Schedulers (performance isolation) + VNET (network virtualization)
The VMM hosts the Node Manager, an Owner VM, and slice VMs (VM 1, VM 2, …, VM n).
Services shown running on the node: auditing service, monitoring services, brokerage services, provisioning services
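The per-node structure above can be summarized in a short sketch. This is a minimal model, not PlanetLab's actual code; the class and method names (Node, SliceVM, NodeManager) are illustrative only.

```python
# Minimal, illustrative model of the per-node view described above.
# NOT PlanetLab's implementation; names are hypothetical.
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class SliceVM:
    """One virtual machine (vserver) belonging to a slice on this node."""
    slice_name: str
    cpu_share: float      # fraction of CPU given by the scheduler
    namespace_id: int     # vserver context providing namespace isolation


@dataclass
class Node:
    """A PlanetLab node: a VMM hosting the node manager and per-slice VMs."""
    hostname: str
    vms: Dict[str, SliceVM] = field(default_factory=dict)

    def create_vm(self, slice_name: str, cpu_share: float) -> SliceVM:
        # Only the node manager (below) calls this, after checking a ticket.
        vm = SliceVM(slice_name, cpu_share, namespace_id=len(self.vms) + 1)
        self.vms[slice_name] = vm
        return vm


class NodeManager:
    """One per node; the only entity allowed to create slice VMs."""
    def __init__(self, node: Node):
        self.node = node

    def instantiate_slice(self, slice_name: str, cpu_share: float) -> SliceVM:
        return self.node.create_vm(slice_name, cpu_share)


if __name__ == "__main__":
    node = Node("planetlab1.example.edu")
    nm = NodeManager(node)
    nm.instantiate_slice("princeton_codeen", cpu_share=0.05)
    print(node.vms)
```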
Global View (diagram: PLC and the nodes it manages)
Design Challenges
– Minimize centralized control without violating trust assumptions
– Balance the need for isolation with the reality of scarce resources
– Maintain a stable and usable system while continuously evolving it
PlanetLab Architecture
Node Manager (one per node)
– Creates slices for service managers when they provide valid tickets
– Allocates resources to virtual servers
Resource Monitor (one per node)
– Tracks the node's available resources
– Tells agents about available resources
PlanetLab Architecture (cont)
Agents (centralized)
– Track nodes' free resources
– Advertise resources to resource brokers
– Issue tickets to resource brokers; tickets may be redeemed with node managers to obtain the resources
PlanetLab Architecture (cont)
Resource Broker (per service)
– Obtains tickets from agents on behalf of service managers
Service Manager (per service)
– Obtains tickets from the broker
– Redeems tickets with node managers to acquire resources
– If the resources can be acquired, starts the service (see the ticket-flow sketch below)
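To make the roles above concrete, here is a minimal sketch of the ticket flow. It is an illustration only, not PlanetLab's API: the class names, ticket fields, and the HMAC stand-in for the agent's signature are all assumptions.

```python
# Illustrative sketch of the agent / broker / service-manager ticket flow.
# Not PlanetLab code: names, fields, and the HMAC "signature" are stand-ins.
import hashlib
import hmac
import json

AGENT_KEY = b"agent-secret"          # stand-in for the agent's signing key


def sign(payload: dict) -> str:
    msg = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(AGENT_KEY, msg, hashlib.sha256).hexdigest()


class Agent:
    """Centralized: learns free resources from resource monitors, issues tickets."""
    def __init__(self):
        self.free = {}                               # node -> advertised resources

    def receive_advert(self, node: str, resources: dict):
        self.free[node] = resources

    def issue_ticket(self, node: str, slice_name: str, request: dict) -> dict:
        avail = self.free.get(node, {})
        if any(request.get(r, 0) > avail.get(r, 0) for r in request):
            raise ValueError("requested more than the node advertised")
        payload = {"node": node, "slice": slice_name, "resources": request}
        return {"payload": payload, "sig": sign(payload)}


class NodeManager:
    """One per node: creates a slice only when presented with a valid ticket."""
    def __init__(self, node: str):
        self.node = node
        self.slices = {}

    def redeem(self, ticket: dict) -> str:
        payload = ticket["payload"]
        if ticket["sig"] != sign(payload) or payload["node"] != self.node:
            raise PermissionError("invalid ticket")
        self.slices[payload["slice"]] = payload["resources"]
        return f"slice {payload['slice']} created on {self.node}"


class ServiceManager:
    """Per service: gets tickets via a broker, redeems them at node managers."""
    def __init__(self, slice_name: str):
        self.slice_name = slice_name

    def deploy(self, agent: Agent, node_mgr: NodeManager, request: dict) -> str:
        # The broker step is folded in here: it would fetch the ticket for us.
        ticket = agent.issue_ticket(node_mgr.node, self.slice_name, request)
        return node_mgr.redeem(ticket)


if __name__ == "__main__":
    agent = Agent()
    agent.receive_advert("planetlab1.example.edu", {"cpu": 1.0, "mem_mb": 512})
    nm = NodeManager("planetlab1.example.edu")
    sm = ServiceManager("princeton_codeen")
    print(sm.deploy(agent, nm, {"cpu": 0.1, "mem_mb": 64}))
```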
Slice Management: Obtaining a Slice (animated diagram)
Entities: Service Manager, Broker, Agent, Resource Monitor, Node Manager
Sequence shown: resource monitors advertise free resources to the agent; the agent issues a ticket; the broker obtains the ticket on behalf of the service manager; the service manager redeems the ticket with the node manager to obtain the slice.
Trust Relationships (diagram)
Sites: Princeton, Berkeley, Washington, MIT, Brown, CMU, NYU, EPFL, Harvard, HP Labs, Intel, NEC Labs, Purdue, UCSD, SICS, Cambridge, Cornell, …
Slices: princeton_codeen, nyu_d, cornell_beehive, att_mcash, cmu_esm, harvard_ice, hplabs_donutlab, idsl_psepr, irb_phi, paris6_landmarks, mit_dht, mcgill_card, huji_ender, arizona_stork, ucb_bamboo, ucsd_share, umd_scriptroute, …
The N x N pairwise trust relationships are replaced by a trusted intermediary: PlanetLab Central (PLC, the agent).
Trust Relationships (cont)
Parties: Node Owner, PLC (agent), Service Developer (User)
1) PLC expresses trust in a user by issuing it credentials to access a slice
2) Users trust PLC to create slices on their behalf and to inspect credentials
3) The owner trusts PLC to vet users and to map network activity to the right user
4) PLC trusts the owner to keep nodes physically secure
Mechanisms:
1. Each node boots from an immutable file system, loading a boot manager, a public key for PLC, and a node-specific secret key
2. User accounts are created through an authorized PI associated with each site
3. PLC runs an auditing service that records information about packet flows
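As a rough illustration of why the node's boot image carries a public key for PLC, here is a minimal sketch of a node checking that a slice credential really was issued by PLC. This is not PlanetLab's actual credential format or protocol; it uses Ed25519 signatures from the third-party "cryptography" package, and all names are hypothetical.

```python
# Sketch: a node verifying that a slice credential was issued by PLC.
# Hypothetical, not PlanetLab's real mechanism; requires `pip install cryptography`.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- PLC side: sign a credential binding a user to a slice --------------
plc_key = Ed25519PrivateKey.generate()          # PLC's signing key
plc_public_key = plc_key.public_key()           # shipped in the node's boot image

credential = {"user": "alice@princeton.edu", "slice": "princeton_codeen"}
blob = json.dumps(credential, sort_keys=True).encode()
signature = plc_key.sign(blob)

# --- Node side: only plc_public_key is available here --------------------
def node_accepts(blob: bytes, signature: bytes) -> bool:
    try:
        plc_public_key.verify(signature, blob)   # raises if forged or altered
        return True
    except InvalidSignature:
        return False


print(node_accepts(blob, signature))                   # True
print(node_accepts(b'{"slice": "evil"}', signature))   # False
```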
Decentralized Control
Owner autonomy
– Owners allocate resources to favored slices
– Owners selectively disallow unfavored slices
Delegation
– PLC grants tickets that are redeemed at nodes
– Enables third-party management services
Federation
– Create "private" PlanetLabs; the MyPLC software package is now distributed
– Establish peering agreements
Resource Allocation
Decouple slice creation and resource allocation
– A slice is given a "fair share" (1/Nth) by default when created
– Slices acquire/release additional resources over time, including resource guarantees
Protect against thrashing and over-use (see the sketch below)
– Link bandwidth: upper bound on the sustained rate (protects campus bandwidth)
– Memory: kill the largest user of physical memory when swap usage reaches 85%
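Here is a minimal sketch of the memory-pressure policy described above, assuming a Linux-style /proc/meminfo. The 85% threshold comes from the slide; the helper names (get_slice_memory_usage, reset_slice) are hypothetical.

```python
# Sketch of the memory-pressure policy: when swap usage reaches 85%,
# reset the slice using the most physical memory. Illustrative only;
# get_slice_memory_usage() and reset_slice() are hypothetical helpers.
SWAP_KILL_THRESHOLD = 0.85


def swap_usage(meminfo_path: str = "/proc/meminfo") -> float:
    """Return the fraction of swap currently in use (0.0 if no swap)."""
    fields = {}
    with open(meminfo_path) as f:
        for line in f:
            key, value = line.split(":", 1)
            fields[key] = int(value.split()[0])        # values are in kB
    total = fields.get("SwapTotal", 0)
    if total == 0:
        return 0.0
    return 1.0 - fields.get("SwapFree", 0) / total


def enforce_memory_policy(get_slice_memory_usage, reset_slice) -> None:
    """get_slice_memory_usage() -> {slice_name: resident_kb}; reset_slice(name)."""
    if swap_usage() < SWAP_KILL_THRESHOLD:
        return
    usage = get_slice_memory_usage()
    if not usage:
        return
    worst = max(usage, key=usage.get)                  # largest physical-memory user
    reset_slice(worst)                                  # e.g. kill its processes
```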
CoMon
A performance-monitoring service that provides monitoring statistics for PlanetLab at both the node level and the slice level.
– Two daemons run on each node, one for node-centric data and the other for slice-centric data
– The archived files contain the results of checks made every 5 minutes; they are available as D.bz2 for the slice-centric data and D.bz2 for the node-centric data
– The collected data can be accessed via a query interface (see the sketch below)
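A minimal sketch of pulling node-centric data from a CoMon-style query interface and filtering it. The URL, query parameters, and column names ("name", "loadavg") are assumptions for illustration, not CoMon's documented API.

```python
# Sketch: query a CoMon-style interface for node-centric data and pick
# lightly loaded nodes. The endpoint and column names are assumptions.
import csv
import io
import urllib.request

COMON_URL = "http://comon.cs.princeton.edu/status/tabulator.cgi"   # assumed endpoint
QUERY = "?table=table_nodeviewshort&format=formatcsv"               # assumed parameters


def fetch_node_stats() -> list[dict]:
    with urllib.request.urlopen(COMON_URL + QUERY, timeout=30) as resp:
        text = resp.read().decode("utf-8", errors="replace")
    return list(csv.DictReader(io.StringIO(text)))


def lightly_loaded(rows: list[dict], max_load: float = 5.0) -> list[str]:
    # "name" and "loadavg" are assumed column names in the CSV output.
    picked = []
    for row in rows:
        try:
            if float(row.get("loadavg", "inf")) < max_load:
                picked.append(row["name"])
        except ValueError:
            continue
    return picked


if __name__ == "__main__":
    nodes = lightly_loaded(fetch_node_stats())
    print(f"{len(nodes)} lightly loaded nodes")
```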
Node Availability