1
PlanetLab: An open platform for developing, deploying, and accessing planetary-scale services
http://www.planet-lab.org
Overview (adapted from Peterson et al.'s talks)
2
Overview
PlanetLab is a global research network that supports the development of new network services. Since the beginning of 2003, more than 1,000 researchers at top academic institutions and industrial research labs have used PlanetLab to develop new technologies for distributed storage, network mapping, peer-to-peer systems, distributed hash tables, and query processing.
3
PlanetLab Today
www.planet-lab.org
838 machines spanning 414 sites and 40 countries
Supports distributed virtualization: each of 600+ network services runs in its own slice
4
Long-Running Services
Content Distribution
– CoDeeN: Princeton
– Coral: NYU, Stanford
– Cobweb: Cornell
Storage & Large File Transfer
– LOCI: Tennessee
– CoBlitz: Princeton
Information Plane
– PIER: Berkeley, Intel
– PlanetSeer: Princeton
– iPlane: Washington
DHT
– Bamboo (OpenDHT): Berkeley, Intel
– Chord (DHash): MIT
5
Services (cont)
Routing / Mobile Access
– i3: Berkeley
– DHARMA: UIUC
– VINI: Princeton
DNS
– CoDNS: Princeton
– CoDoNs: Cornell
Multicast
– End System Multicast: CMU
– Tmesh: Michigan
Anycast / Location Service
– Meridian: Cornell
– Oasis: NYU
6
Services (cont)
Internet Measurement
– ScriptRoute: Washington, Maryland
Pub-Sub
– Corona: Cornell
Email
– ePost: Rice
Management Services
– Stork (environment service): Arizona
– Emulab (provisioning service): Utah
– Sirius (brokerage service): Georgia
– CoMon (monitoring service): Princeton
– PlanetFlow (auditing service): Princeton
– SWORD (discovery service): Berkeley, UCSD
7
Usage Stats
Slices: 600+
Users: 2500+
Bytes per day: 4 TB
IP flows per day: 190M
Unique IP addresses per day: 1M
8
Slices
11
User Opt-in
(diagram: Client, NAT, Server)
12
Per-Node View
(diagram: a Virtual Machine Monitor (VMM) hosting a Node Manager, a Local Admin VM, and slice VMs: VM 1, VM 2, … VM n)
13
Virtualization
(diagram: the VMM hosting a Node Manager, an Owner VM, and slice VMs: VM 1, VM 2, … VM n)
Virtual Machine Monitor (VMM): Linux kernel (Fedora Core)
+ Vservers (namespace isolation)
+ Schedulers (performance isolation)
+ VNET (network virtualization)
Services shown in the diagram: auditing service, monitoring services, brokerage services, provisioning services
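To make the per-node view concrete, here is a minimal conceptual sketch in Python. Every name in it (Node, SliceVM, the hostname, the slice parameters) is an illustrative assumption rather than PlanetLab's actual node-manager code; it only captures the idea that a privileged node manager instantiates one isolated virtual machine per slice on top of the shared VMM.

```python
# Conceptual model of the per-node view; names and numbers are illustrative.
from dataclasses import dataclass, field


@dataclass
class SliceVM:
    """One virtual machine (a Linux-Vserver context) belonging to a slice."""
    slice_name: str
    cpu_share: float       # enforced by the proportional-share scheduler
    net_cap_kbps: int      # enforced by VNET / traffic shaping


@dataclass
class Node:
    """A PlanetLab node: a VMM hosting a node manager plus per-slice VMs."""
    hostname: str
    vms: dict[str, SliceVM] = field(default_factory=dict)

    def create_vm(self, slice_name: str, cpu_share: float, net_cap_kbps: int) -> SliceVM:
        # In the real system only the node manager (itself running in a
        # privileged VM) instantiates new vservers on behalf of a slice.
        vm = SliceVM(slice_name, cpu_share, net_cap_kbps)
        self.vms[slice_name] = vm
        return vm


if __name__ == "__main__":
    node = Node("planetlab1.example.edu")  # hypothetical hostname
    node.create_vm("princeton_codeen", cpu_share=0.05, net_cap_kbps=10_000)
    node.create_vm("cornell_beehive", cpu_share=0.05, net_cap_kbps=10_000)
    print([vm.slice_name for vm in node.vms.values()])
```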
14
Global View
(diagram: PLC and the nodes it manages)
15
Design Challenges
Minimize centralized control without violating trust assumptions.
Balance the need for isolation with the reality of scarce resources.
Maintain a stable and usable system while continuously evolving it.
16
PlanetLab Architecture
Node Manager (one per node)
– Creates slices for service managers when they present valid tickets
– Allocates resources to virtual servers
Resource Monitor (one per node)
– Tracks the node's available resources
– Tells agents about available resources
17
PlanetLab Architecture (cont’) Agents (centralized) –Track nodes’ free resources –Advertise resources to resource brokers –Issue tickets to resource brokers Tickets may be redeemed with node managers to obtain the resource
18
PlanetLab Architecture (cont’) Resource Broker (per service) –Obtain tickets from agents on behalf of service managers Service Managers (per service) –Obtain tickets from broker –Redeem tickets with node managers to acquire resources –If resources can be acquired, start service
19
Slice Management: Obtaining a Slice
(animation, slides 19-31: the Resource Monitor on each node reports free resources to the Agent; the Agent issues a ticket to the Broker; the Broker passes the ticket to the Service Manager; the Service Manager redeems the ticket with the Node Manager on each node to create the slice)
32
Trust Relationships
N sites: Princeton, Berkeley, Washington, MIT, Brown, CMU, NYU, EPFL, Harvard, HP Labs, Intel, NEC Labs, Purdue, UCSD, SICS, Cambridge, Cornell, …
N slices: princeton_codeen, nyu_d, cornell_beehive, att_mcash, cmu_esm, harvard_ice, hplabs_donutlab, idsl_psepr, irb_phi, paris6_landmarks, mit_dht, mcgill_card, huji_ender, arizona_stork, ucb_bamboo, ucsd_share, umd_scriptroute, …
Rather than N x N pairwise trust relationships, a trusted intermediary, PlanetLab Central (PLC, the agent), mediates between sites and slices.
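A quick back-of-the-envelope count shows why the intermediary matters; the site and slice counts below are illustrative, loosely based on the usage figures earlier in the deck.

```python
# Pairwise trust needs N_sites * N_slices relationships; with PLC as a
# trusted intermediary, each party only needs one relationship with PLC.
def trust_relationships(n_sites: int, n_slices: int) -> tuple[int, int]:
    pairwise = n_sites * n_slices
    via_plc = n_sites + n_slices
    return pairwise, via_plc

print(trust_relationships(400, 600))   # (240000, 1000)
```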
33
Trust Relationships (cont)
(diagram: Node Owner, PLC (agent), and Service Developer (User), linked by trust relationships 1-4)
1) PLC expresses trust in a user by issuing it credentials to access a slice
2) Users trust PLC to create slices on their behalf and to inspect credentials
3) The owner trusts PLC to vet users and to map network activity to the right user
4) PLC trusts the owner to keep nodes physically secure
Supporting mechanisms:
1. Each node boots from an immutable file system, loading a boot manager, a public key for PLC, and a node-specific secret key
2. User accounts are created through an authorized PI associated with each site
3. PLC runs an auditing service that records information about packet flows
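As a rough illustration of mechanism 1, the sketch below shows a boot manager verifying a downloaded boot script against the PLC public key shipped on the immutable file system. The file names, the choice of RSA with PKCS#1 v1.5 and SHA-256, and the use of the pyca/cryptography package are assumptions for the example; the real PlanetLab boot chain differs in detail.

```python
# Sketch only: verify that a boot script was signed by PLC before running it.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding


def verify_boot_script(pubkey_path: str, script_path: str, sig_path: str) -> bool:
    with open(pubkey_path, "rb") as f:
        plc_pub = serialization.load_pem_public_key(f.read())
    with open(script_path, "rb") as f:
        script = f.read()
    with open(sig_path, "rb") as f:
        signature = f.read()
    try:
        plc_pub.verify(signature, script, padding.PKCS1v15(), hashes.SHA256())
        return True          # safe to hand control to the boot script
    except InvalidSignature:
        return False         # refuse to boot into an unverified image


if __name__ == "__main__":
    # Hypothetical file names for the example.
    ok = verify_boot_script("plc_pub.pem", "bootscript.sh", "bootscript.sig")
    print("boot script verified" if ok else "verification failed")
```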
34
Decentralized Control
Owner autonomy
– Owners allocate resources to favored slices
– Owners selectively disallow unfavored slices
Delegation
– PLC grants tickets that are redeemed at nodes
– Enables third-party management services
Federation
– Create "private" PlanetLabs (the MyPLC software package is now distributed)
– Establish peering agreements
35
Resource Allocation
Decouple slice creation and resource allocation
– A slice is given a "fair share" (1/Nth) by default when created
– Additional resources, including resource guarantees, can be acquired and released over time
Protect against thrashing and over-use
– Link bandwidth: an upper bound on the sustained rate (protects campus bandwidth)
– Memory: the largest consumer of physical memory is killed when swap usage reaches 85% (see the sketch below)
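The following Python snippet sketches the stated memory policy only, not PlanetLab's actual node watchdog: it finds the largest consumer of physical memory and kills it once swap usage reaches 85%. The check interval and the use of the psutil package are assumptions.

```python
# Memory-pressure watchdog sketch; requires the psutil package.
import time

import psutil

SWAP_THRESHOLD = 85.0   # percent, from the slide
CHECK_INTERVAL = 30     # seconds; interval is an assumption


def largest_memory_consumer():
    """Return the process with the largest resident set size."""
    procs = list(psutil.process_iter(["pid", "name", "memory_info"]))
    return max(procs,
               key=lambda p: p.info["memory_info"].rss if p.info["memory_info"] else 0)


def watchdog():
    while True:
        if psutil.swap_memory().percent >= SWAP_THRESHOLD:
            victim = largest_memory_consumer()
            print(f"swap over {SWAP_THRESHOLD}%: killing pid {victim.info['pid']} "
                  f"({victim.info['name']})")
            victim.kill()
        time.sleep(CHECK_INTERVAL)

# Note: watchdog() really does kill processes; run it only in a throwaway VM.
```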
36
CoMon
A performance monitoring service that provides monitoring statistics for PlanetLab at both the node level and the slice level.
Two daemons run on each node: one for node-centric data and the other for slice-centric data.
The archived files contain the results of checks taken every 5 minutes. They are available at http://summer.cs.princeton.edu/status/dump_cotop_YYYYMMDD.bz2 for the slice-centric data and http://summer.cs.princeton.edu/status/dump_comon_YYYYMMDD.bz2 for the node-centric data.
The collected data can also be accessed via the query interface at http://comon.cs.princeton.edu
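For illustration, the snippet below fetches and decompresses one day's node-centric archive using the URL pattern above. The example date is arbitrary, the record format inside the dump is not described on this slide, and the service is historical, so the URL may no longer resolve.

```python
# Fetch one day's node-centric CoMon dump and show the first few lines.
import bz2
import urllib.request


def fetch_comon_dump(date: str) -> str:
    """date is YYYYMMDD, matching the archive naming scheme on the slide."""
    url = f"http://summer.cs.princeton.edu/status/dump_comon_{date}.bz2"
    with urllib.request.urlopen(url) as resp:
        return bz2.decompress(resp.read()).decode("utf-8", errors="replace")


if __name__ == "__main__":
    text = fetch_comon_dump("20070101")   # arbitrary example date
    print("\n".join(text.splitlines()[:5]))
```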
37
Node Availability