1
An Overview of PlanetLab
2008. 9. 17.
SeungHo Lee
2
References
PlanetLab Design Notes (PDNs)
– PlanetLab: An Overlay Testbed for Broad-Coverage Services (2003.1)
– Towards a Comprehensive PlanetLab Architecture (2005.6)
– PlanetLab Architecture: An Overview (2006.5)
Presentations
– An Overview of the PlanetLab Architecture (2004.1)
Tutorial
– Step-by-step instructions to deploying a “Hello World” application on PlanetLab
And www.planet-lab.org
3
Today’s Internet
Best-Effort Packet Delivery Service
Limitations
– The Internet is “opaque,” making it difficult to adapt to current network conditions
– Applications cannot be widely distributed (typically split into two pieces: client and server)
4
Tomorrow’s Internet
Collection of Planetary-Scale Service Opportunities
Multiple vantage points
– anomaly detection, robust routing
Proximity to data sources/sinks
– content distribution, data fusion
Multiple, independent domains
– survivable storage
5
Evolving the Internet
Add a new layer to the network architecture
Overlay networks
– Purpose-built virtual networks that use the existing Internet for transmission (see the sketch below)
– The Internet was once deployed as an overlay on top of the telephony network
Challenge
– How to innovate & deploy at scale
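To make “virtual network on top of the Internet” concrete, here is a toy overlay hop in C: it receives overlay packets over ordinary UDP and relays them to a single hard-coded next hop. The port number (9000) and the next-hop address (192.0.2.10) are invented for illustration; a real routing or content-distribution overlay would carry its own headers and make per-packet forwarding decisions.

/* Toy overlay hop: relay UDP datagrams to one hard-coded next hop.
 * Port 9000 and address 192.0.2.10 are placeholders, not PlanetLab values. */
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s < 0) { perror("socket"); return 1; }

    struct sockaddr_in me;
    memset(&me, 0, sizeof(me));
    me.sin_family = AF_INET;
    me.sin_addr.s_addr = htonl(INADDR_ANY);
    me.sin_port = htons(9000);                 /* this hop's overlay port */
    if (bind(s, (struct sockaddr *)&me, sizeof(me)) < 0) {
        perror("bind");
        return 1;
    }

    struct sockaddr_in next;                   /* next hop in the overlay */
    memset(&next, 0, sizeof(next));
    next.sin_family = AF_INET;
    next.sin_port = htons(9000);
    inet_pton(AF_INET, "192.0.2.10", &next.sin_addr);

    char buf[1500];
    for (;;) {
        ssize_t n = recv(s, buf, sizeof(buf), 0);
        if (n <= 0)
            continue;
        /* A real overlay would inspect its own header here and choose a
         * next hop; this sketch always forwards to the same neighbor. */
        sendto(s, buf, (size_t)n, 0, (struct sockaddr *)&next, sizeof(next));
    }
}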
6
PlanetLab
800+ machines spanning 400 sites and 40 countries
Supports distributed virtualization
– Each of 600+ network services runs in its own slice
7
History
2002.3. Larry Peterson (Princeton) and David Culler (UC Berkeley and Intel Research) organize an “underground” meeting of researchers interested in planetary-scale network services and propose PlanetLab as a community testbed.
2002.6. Brent Chun and Timothy Roscoe (Intel Research), Eric Fraser (UC Berkeley), and Mike Wawrzoniak (Princeton) bring up the first PlanetLab nodes at Intel Research - Berkeley, UC Berkeley, and Princeton. The initial system (dubbed Version 0.5) leverages the Ganglia monitoring service and the RootStock installation mechanism from the Millennium cluster project.
2002.10. Initial deployment of 100 nodes at 42 sites is complete. Version 1.0 of the PlanetLab software, with support for vserver-based virtual machines and safe raw sockets, is deployed.
2007.6. PlanetLab passes the 800-node mark.
2008.9. PlanetLab is upgraded to Version 4.2.
8
Slices
9
PlanetLab is… “A common software architecture”
Distributed software package
– A Linux-based operating system
– Mechanisms for bootstrapping nodes and distributing software updates
– A collection of management tools: monitor node health, audit system activity, control system parameters
– A facility for managing user accounts and distributing keys
10
PlanetLab is… “An overlay network testbed”
Experiment with a variety of planetary-scale services
– File sharing and network-embedded storage
– Content distribution networks
– Routing and multicasting overlays
– QoS overlays
– Scalable object location
– Scalable event propagation
– Anomaly detection mechanisms
– Network measurement tools
Advantages
– Under real-world conditions
– At large scale
11
PlanetLab is… “A deployment platform”
Supports the seamless migration of an application
– From early prototype, through multiple design iterations, to a popular service that continues to evolve
Continuously running services today
– CoDeeN content distribution network (Princeton)
– ScriptRoute network measurement tool (Washington)
– Chord scalable object location service (MIT, Berkeley)
12
PlanetLab is… “A microcosm of the next Internet”
Fold services back into PlanetLab
– Evolve core technologies to support overlays and slices
– Discover common sub-services
Long-term goals
– Become the way users interact with the Internet
– Define standards that support multiple “PlanetLabs”
Examples
– Sophia is used to monitor the health of PlanetLab nodes
– Chord provides scalable object location
13
Organizing Principles
Distributed virtualization
– Slice: a network of virtual machines
– Isolation: isolate services from each other; protect the Internet from PlanetLab
Unbundled management
– The OS defines only local (per-node) behavior; global (network-wide) behavior is implemented by services
– Multiple competing services run in parallel
– Shared, unprivileged interfaces
14
Principals
Owner
– An organization that owns one or more PlanetLab nodes
– Each owner retains control over its own nodes but delegates responsibility for managing them to the trusted PLC intermediary
User
– A researcher who deploys a service on a set of PlanetLab nodes
– Users create slices on PlanetLab nodes via mechanisms provided by the trusted PLC intermediary
PlanetLab Consortium (PLC)
– A trusted intermediary that manages nodes on behalf of a set of owners
– PLC creates slices on those nodes on behalf of a set of users
15
Trust Relationships
1. PLC expresses trust in a user by issuing it credentials that let it access slices.
2. A user trusts PLC to act as its agent, creating slices on its behalf and checking credentials so that only that user can install and modify the software running in its slice.
3. An owner trusts PLC to install software that is able to map network activity to the responsible slice.
4. PLC trusts owners to keep their nodes physically secure.
16
Virtual Machine
A virtual machine (VM) is an execution environment in which a slice runs on a particular node
– Typically implemented by a virtual machine monitor (VMM)
VMs are isolated from each other, such that
– The resources consumed by one VM do not unduly affect the performance of another VM
– One VM cannot eavesdrop on network traffic to or from another VM
– One VM cannot access objects (files, ports, processes) belonging to another VM
17
Per-Node View
18
Virtualization Levels
Hypervisors (e.g., VMWare)
– Don't scale well
– Don't need multi-OS functionality
Paravirtualization (e.g., Xen, Denali)
– Not yet mature
– Requires OS tweaks
Virtualization at the system call interface (e.g., Jail, Vservers)
– A reasonable compromise
– Doesn't provide the isolation that hypervisors do
Unix processes
– Isolation is problematic
Java Virtual Machine
– Too high-level
19
Vservers
Virtualization
– Virtualizes at the system call interface
– Each vserver runs in its own security context: private UID/GID name space, limited superuser capabilities
– Uses chroot for filesystem isolation (a minimal sketch follows below)
– Scales to 1000s of vservers per node (29 MB each)
Vserver root
– A weaker version of root allows each vserver to have its own superuser
– Denied all capabilities that could undermine the security of the machine; granted all other capabilities
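As a rough illustration of the containment ideas above, the C sketch below confines a process to a private directory tree with chroot and then drops superuser identity. The sandbox path /var/sandbox and UID/GID 1000 are made-up example values; real Linux-VServer contexts go further, with per-context UID/GID spaces, capability bounding, and resource limits.

/* Illustration only: chroot-based filesystem isolation plus privilege drop.
 * /var/sandbox and UID/GID 1000 are assumed example values. */
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    /* Confine the process to a private directory tree (requires privilege,
     * and the tree must already contain the binaries it will run). */
    if (chroot("/var/sandbox") != 0 || chdir("/") != 0) {
        perror("chroot");
        return 1;
    }

    /* Give up the superuser identity so the confined process cannot, for
     * example, reconfigure the node or interfere with other sandboxes. */
    if (setgid(1000) != 0 || setuid(1000) != 0) {
        perror("drop privileges");
        return 1;
    }

    /* Everything from here on sees only the sandbox filesystem and runs as
     * an ordinary user, roughly the flavor of a vserver's weakened root. */
    execl("/bin/sh", "sh", (char *)NULL);
    perror("execl");    /* only reached if the shell is missing */
    return 1;
}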
20
Protected Raw Sockets
Key design decision: users of PlanetLab should not have root access to the machines
– A large number of users cannot all be trusted not to misuse root privilege
– But many users need access to services that normally require root privilege (e.g., access to raw sockets; see the sketch below)
“Protected” version of a privileged service
– Services are forced to create sockets that are bound to specific TCP/UDP ports
– Incoming packets are classified and delivered only to the service that created the socket
– Outgoing packets are filtered to ensure that they are properly formed (e.g., the process does not spoof the source IP address or TCP/UDP port numbers)
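For context, the sketch below opens a conventional Linux raw socket, the privileged facility this slide refers to: the socket() call fails with EPERM unless the process is root (or has CAP_NET_RAW), and the caller then receives whole IP packets. PlanetLab's protected variant instead gives a slice a raw socket bound to specific ports, with incoming packets demultiplexed to the owning slice and outgoing packets checked by the node software; that slice-side API is not shown here.

/* Conventional raw ICMP socket: normally requires root or CAP_NET_RAW.
 * Shown only to illustrate what "raw socket access" means. */
#include <stdio.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_RAW, IPPROTO_ICMP);
    if (s < 0) {
        perror("socket(SOCK_RAW)");   /* EPERM for unprivileged users */
        return 1;
    }

    /* A raw socket receives copies of matching packets, IP header included. */
    char buf[1500];
    struct sockaddr_in src;
    socklen_t len = sizeof(src);
    ssize_t n = recvfrom(s, buf, sizeof(buf), 0,
                         (struct sockaddr *)&src, &len);
    if (n > 0)
        printf("received %zd bytes (IP header included)\n", n);

    close(s);
    return 0;
}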
21
Infrastructure Services
Unbundled management
– PlanetLab decomposes the management function into a collection of largely independent infrastructure services
Benefits
– Keeps the node manager as minimal as possible
– Maximizes owner and provider choice, and hence autonomy
– Makes the system as a whole easier to evolve over time
Currently running services
– Resource brokerage services used to acquire resources
– Environment services that keep a slice's software packages up to date
– Monitoring services that track the health of nodes and slices
– Discovery services used to learn what resources are available