1
What is PlanetLab?
A planet-wide testbed for the research and development of network applications and distributed computing
Over 1068 nodes at 493 sites, primarily in Europe and the US, with others in Asia, South America, India, Australia, and New Zealand
Sites are largely universities, with some big IT companies mixed in
Designed to provide a realistic network environment, including the usual congestion and failures
2
Architecture
Working Parts
– Slices
– Nodes
– Virtual Machines
Administration
– Auditing Service
– Slice Authority
– Node Manager
Image: http://www.usenix.org/event/lisa07/tech/full_papers/jaffe/jaffe_html/=16.jaffe.Planetlab.gif
3
Node
Physical or virtual machine capable of hosting one or more VMs
At least one non-shared IP address
No specific architecture requirement, though in practice all nodes are x86
Identified by a node_id
Very limited requirements overall
4
Virtual Machine
Isolated x86 Linux environment
Guaranteed a resource package
Provides the basic execution environment
Slice
Collection of VMs, each on a separate node
The environment users are given to work in
Accessed by its slice name
Created and registered to a user by the Slice Creation Service
5
Auditing Service
Runs on each node
Sorts out network traffic to and from the node
Keeps records of which VM sends what, which helps keep the system secure
Slice Authority
Keeps records about each slice
The interface by which users register, create, and control slices
6
Node Manager
Runs on each node
Creates VMs and resource pools
Pairs VMs with resource pools when VMs are called to be part of a slice (see the sketch below)
The node manager ensures that the resources of a node are shared fairly.
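As a rough illustration of that pairing, the sketch below models a node manager that binds each newly created VM to a resource pool. This is a conceptual Python sketch only; the class and field names are assumptions made for illustration, not PlanetLab's actual node manager interface.

# Conceptual sketch only: how a node manager might bind each VM to a
# resource pool when a slice is instantiated on this node. All names
# here are illustrative assumptions, not PlanetLab's real interfaces.
from dataclasses import dataclass

@dataclass
class ResourcePool:
    cpu_shares: int        # proportional-share CPU weight
    disk_mb: int           # disk quota
    bandwidth_kbps: int    # outbound bandwidth cap

@dataclass
class VM:
    slice_name: str
    pool: ResourcePool

class NodeManager:
    def __init__(self):
        self.vms = {}      # slice name -> VM on this node

    def create_vm(self, slice_name, pool):
        """Create a VM for a slice and pair it with its resource pool."""
        vm = VM(slice_name, pool)
        self.vms[slice_name] = vm
        return vm

# Example: a slice is instantiated on this node with a default allocation.
nm = NodeManager()
nm.create_vm("example_slice", ResourcePool(cpu_shares=32, disk_mb=5000, bandwidth_kbps=1500))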
7
Interconnectedness
The nodes are all connected to the Internet, and each has a unique IP address
Each node has a node manager, which differentiates the network access of the different VMs running on that node
The Management Authority maintains a database of registered nodes
The Slice Creation Service / Slice Authority manages the grouping of VMs into slices, allowing a user to contact all of their VMs at once
8
Sharing of Resources
All users must agree to the AUP, meaning no hacking, spoofing, or scanning
The PlanetLab OS kernel is the basic Linux kernel plus a few specialized extensions; it is very minimalistic overall to reduce overhead for the VMs
It is an isolation kernel – it avoids conflicts when, for example, two vservers need to access the same raw socket
Any application or service is given only the minimum privileges it needs and must run at the highest level possible
9
Scalability
Up to 1000 vservers on a single node, each requiring only 29 MB of space for its root file system, plus 508 MB for the shared reference root file system
Under heavy load, limits on the kernel resources of the VMM can hinder scalability
Hardware requirements for physical nodes are lax
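A quick back-of-the-envelope check of those disk figures, assuming the 29 MB is per-vserver state layered over a single shared 508 MB reference file system:

# Rough arithmetic for the disk figures above, assuming 29 MB of
# per-vserver state plus one shared 508 MB reference root file system.
vservers = 1000
per_vserver_mb = 29
shared_reference_mb = 508

total_mb = vservers * per_vserver_mb + shared_reference_mb
print(f"{total_mb} MB (~{total_mb / 1024:.1f} GB) for {vservers} vservers")
# -> 29508 MB (~28.8 GB), versus roughly 500 GB if each vserver
#    kept a full private copy of the 508 MB root file system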
10
Communication
Uses the XML-RPC protocol to transport data between VMs in a slice, to coordinate a distributed system
Bindings exist for many languages, such as Python, C++, etc.
http://www.xmlrpc.com/
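As a minimal sketch of this kind of coordination, the snippet below uses Python's standard xmlrpc modules to expose a method on one VM and call it from another. The hostname and the report_status method are hypothetical, and the serve/call lines are commented out because they need a live peer.

# Minimal sketch of slice-internal coordination over XML-RPC.
# The hostname and the report_status() method are hypothetical.
from xmlrpc.server import SimpleXMLRPCServer
import xmlrpc.client

# On one VM in the slice: expose a simple RPC endpoint.
def report_status():
    return {"host": "node1.example.org", "load": 0.42}

server = SimpleXMLRPCServer(("0.0.0.0", 8000), allow_none=True)
server.register_function(report_status)
# server.serve_forever()          # run this on the listening VM

# On another VM in the same slice: call the peer.
peer = xmlrpc.client.ServerProxy("http://node1.example.org:8000/")
# print(peer.report_status())     # returns the status dict above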
11
Users and Uses
Grad students and professors, corporate researchers
Since PlanetLab is in some ways a microcosm of the Internet, researchers can run network mapping experiments in PlanetLab to learn more about the Internet itself
Another example is to use distributed programming to perform a brute-force attack on an encryption key
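Purely as an illustration of that last example (not anything PlanetLab itself provides), one way to split such a search across a slice is to give each VM a disjoint share of the keyspace:

# Illustrative sketch only: dividing a brute-force key search across
# the VMs of a slice. Keyspace size and node count are made-up values.
def key_range_for(node_index: int, node_count: int, keyspace_bits: int = 40):
    """Return the [start, end) portion of the keyspace this VM should scan."""
    total = 1 << keyspace_bits
    chunk = total // node_count
    start = node_index * chunk
    end = total if node_index == node_count - 1 else start + chunk
    return start, end

# Each VM learns its index (e.g. its position in the slice's node list)
# and searches only its own share of the candidate keys.
start, end = key_range_for(node_index=7, node_count=100)
print(f"this VM scans keys {start:#x} .. {end:#x}")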
12
Programming Model
Connect and authenticate
– Responsibility for code is important to PlanetLab
Get info about nodes
Create a slice, assign users
– Only as an administrator
Manipulate a slice
– Designate specific nodes to run on
Command-line utility and shell: plcsh
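A sketch of that sequence against the PLC XML-RPC interface is below. The endpoint URL and method names (GetNodes, AddSliceToNodes, AddPersonToSlice) follow the public PLCAPI, but the exact signatures, return fields, credentials, and slice name used here are assumptions to verify against the documentation; the same calls are also available interactively from plcsh.

# Sketch of the programming model via the PLC XML-RPC API.
# Method names follow the public PLCAPI, but treat the exact
# signatures and the endpoint URL as assumptions to verify.
import xmlrpc.client

api = xmlrpc.client.ServerProxy("https://www.planet-lab.org/PLCAPI/", allow_none=True)

# 1. Connect and authenticate (credentials are placeholders).
auth = {"AuthMethod": "password",
        "Username": "researcher@example.edu",
        "AuthString": "secret"}

# 2. Get info about nodes: id, hostname, and boot state of every node.
nodes = api.GetNodes(auth, {}, ["node_id", "hostname", "boot_state"])

# 3. Manipulate a slice: designate specific nodes to run on.
node_ids = [n["node_id"] for n in nodes[:10]]
api.AddSliceToNodes(auth, "example_slice", node_ids)

# 4. Assign users to the slice (creating the slice itself, AddSlice,
#    is an administrator-only operation and is omitted here).
api.AddPersonToSlice(auth, "researcher@example.edu", "example_slice")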
13
References
PlanetLab Architecture: An Overview. Larry Peterson, Steve Muir, Timothy Roscoe, and Aaron Klingaman. Princeton University and Intel Research Berkeley. Ongoing draft, May 2006.
planet-lab.org
A Blueprint for Introducing Disruptive Technology into the Internet. Larry Peterson, Tom Anderson, David Culler, and Timothy Roscoe. Proceedings of the First ACM Workshop on Hot Topics in Networks (HotNets), October 2002.
Operating System Support for Planetary-Scale Services. Andy Bavier, Mic Bowman, Brent Chun, David Culler, Scott Karlin, Steve Muir, Larry Peterson, Timothy Roscoe, Tammo Spalink, and Mike Wawrzoniak. Proceedings of the First Symposium on Networked Systems Design and Implementation (NSDI), March 2004.
Experiences Building PlanetLab. Larry Peterson, Andy Bavier, Marc Fiuczynski, and Steve Muir. Proceedings of the Seventh Symposium on Operating Systems Design and Implementation (OSDI), November 2006.
Image: http://www.usenix.org/event/lisa07/tech/full_papers/jaffe/jaffe_html/=16.jaffe.Planetlab.gif
CoDeeN visual mapping of the PlanetLab network.