PlanetLab: Evolution vs Intelligent Design in Global Network Infrastructure Larry Peterson Princeton University

PlanetLab
- 670 machines spanning 325 sites and 35 countries
- nodes within a LAN-hop of > 3M users
- Supports distributed virtualization: each of 600+ network services runs in its own slice

Slices (diagram)

User Opt-in (diagram: Client → NAT → Server)

Per-Node View (diagram: Virtual Machine Monitor (VMM) hosting a Node Mgr, a Local Admin VM, and service VMs 1 … n)

Global View (diagram: PLC coordinating many nodes)

Long-Running Services
- Content Distribution
  - CoDeeN: Princeton
  - Coral: NYU
  - Cobweb: Cornell
- Storage & Large File Transfer
  - LOCI: Tennessee
  - CoBlitz: Princeton
- Anomaly Detection & Fault Diagnosis
  - PIER: Berkeley, Intel
  - PlanetSeer: Princeton
- DHT
  - Bamboo (OpenDHT): Berkeley, Intel
  - Chord (DHash): MIT

Services (cont)
- Routing / Mobile Access
  - i3: Berkeley
  - DHARMA: UIUC
  - VINI: Princeton
- DNS
  - CoDNS: Princeton
  - CoDoNs: Cornell
- Multicast
  - End System Multicast: CMU
  - Tmesh: Michigan
- Anycast / Location Service
  - Meridian: Cornell
  - Oasis: NYU

Services (cont)
- Internet Measurement
  - ScriptRoute: Washington, Maryland
- Pub-Sub
  - Corona: Cornell
  - ePost: Rice
- Management Services
  - Stork (environment service): Arizona
  - Emulab (provisioning service): Utah
  - Sirius (brokerage service): Georgia
  - CoMon (monitoring service): Princeton
  - PlanetFlow (auditing service): Princeton
  - SWORD (discovery service): Berkeley, UCSD

Usage Stats
- Slices: 600+
- Users:
- Bytes-per-day: TB
- IP-flows-per-day: 190M
- Unique IP-addrs-per-day: 1M

Two Views of PlanetLab
- Useful research instrument
- Prototype of a new network architecture
What's interesting about this architecture?
- more an issue of synthesis than a single clever technique
- technical decisions that address non-technical requirements

Requirements
1) It must provide a global platform that supports both short-term experiments and long-running services.
   - services must be isolated from each other
   - multiple services must run concurrently
   - must support real client workloads

Requirements
2) It must be available now, even though no one knows for sure what "it" is.
   - deploy what we have today, and evolve over time
   - make the system as familiar as possible (e.g., Linux)
   - accommodate third-party management services

Requirements
3) We must convince sites to host nodes running code written by unknown researchers from other organizations.
   - protect the Internet from PlanetLab traffic
   - must get the trust relationships right

Requirements
4) Sustaining growth depends on support for site autonomy and decentralized control.
   - sites have final say over the nodes they host
   - must minimize (eliminate) centralized control

Requirements
5) It must scale to support many users with minimal resources available.
   - expect under-provisioned state to be the norm
   - shortage of logical resources too (e.g., IP addresses)

Design Challenges
- Develop a management (control) plane that accommodates these often conflicting requirements.
- Balance the need for isolation with the reality of scarce resources.
- Maintain a stable and usable system while continuously evolving it.

Trust Relationships (diagram: N x N pairwise trust between hosting sites (Princeton, Berkeley, Washington, MIT, Brown, CMU, NYU, EPFL, Harvard, HP Labs, Intel, NEC Labs, Purdue, UCSD, SICS, Cambridge, Cornell, …) and slices (princeton_codeen, nyu_d, cornell_beehive, att_mcash, cmu_esm, harvard_ice, hplabs_donutlab, idsl_psepr, irb_phi, paris6_landmarks, mit_dht, mcgill_card, huji_ender, arizona_stork, ucb_bamboo, ucsd_share, umd_scriptroute, …) is replaced by PLC acting as a trusted intermediary)
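The value of the intermediary can be seen with simple arithmetic; the round numbers below are illustrative, not PlanetLab's exact counts.

```python
# Pairwise trust vs. a trusted intermediary (illustrative round numbers).
sites, slices = 300, 600

pairwise = sites * slices   # every site vets every slice directly: N x M
via_plc = sites + slices    # each party establishes one relationship with PLC

print(f"pairwise trust relationships: {pairwise:,}")  # 180,000
print(f"with PLC as intermediary:     {via_plc:,}")   # 900
```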

Trust Relationships (cont)
Principals: Node Owner, PLC, Service Developer (User)
1) PLC expresses trust in a user by issuing it credentials to access a slice
2) Users trust PLC to create slices on their behalf and inspect credentials
3) Owner trusts PLC to vet users and map network activity to the right user
4) PLC trusts owner to keep nodes physically secure

Decentralized Control
- Owner autonomy
  - owners allocate resources to favored slices
  - owners selectively disallow unfavored slices
- Delegation
  - PLC grants tickets that are redeemed at nodes
  - enables third-party management services
- Federation
  - create "private" PlanetLabs using MyPLC
  - establish peering agreements
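As a concrete (if simplified) picture of the delegation bullet, here is a minimal sketch of ticket issuance and redemption, assuming an HMAC-signed JSON ticket and a shared PLC/node key; PlanetLab's real ticket format, key management, and interfaces differ.

```python
import hmac, hashlib, json

PLC_KEY = b"plc-secret"   # hypothetical key shared between PLC and nodes

def grant_ticket(slice_name: str, resources: dict) -> dict:
    """PLC signs a ticket that a third-party service can later redeem at a node."""
    body = json.dumps({"slice": slice_name, "resources": resources}, sort_keys=True)
    sig = hmac.new(PLC_KEY, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}

def redeem_ticket(ticket: dict) -> dict:
    """A node verifies the PLC signature before honoring the ticket."""
    expected = hmac.new(PLC_KEY, ticket["body"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, ticket["sig"]):
        raise PermissionError("ticket not signed by PLC")
    return json.loads(ticket["body"])

ticket = grant_ticket("princeton_codeen", {"cpu_share": 2})
print(redeem_ticket(ticket))
```

Because the node checks only the signature, any third-party management service can carry the ticket to the node: delegation without giving that service root.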

Virtualization
Per-node stack (diagram): Virtual Machine Monitor (VMM) hosting a Node Mgr, an Owner VM, and service VMs 1 … n
VMM = Linux kernel (Fedora Core) + Vservers (namespace isolation) + Schedulers (performance isolation) + VNET (network virtualization)
Management services running in slices: auditing, monitoring, brokerage, and provisioning services

Active Slices (graph)

Resource Allocation
- Decouple slice creation and resource allocation
  - given a "fair share" by default when created
  - acquire additional resources, including guarantees
- Fair share with protection against thrashing
  - 1/Nth of CPU
  - 1/Nth of link bandwidth
    - owner limits peak rate
    - upper bound on average rate (protect campus bandwidth)
  - disk quota
  - memory limits not practical
    - kill largest user of physical memory when swap at 90%
    - reset node when swap at 95%
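The memory policy above is essentially a watchdog loop. Below is a minimal sketch, assuming Linux /proc interfaces and simplified kill and reboot actions; it acts per process, whereas the deployed PlanetLab monitor applied the policy per slice and differed in other details.

```python
import os, signal, time

def swap_used_fraction() -> float:
    """Parse /proc/meminfo for SwapTotal/SwapFree (Linux-specific)."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":")
            info[key] = int(value.split()[0])   # values are in kB
    if info["SwapTotal"] == 0:
        return 0.0
    return 1.0 - info["SwapFree"] / info["SwapTotal"]

def largest_rss_pid() -> int:
    """Return the pid with the largest resident set size."""
    best_pid, best_rss = -1, -1
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/status") as f:
                for line in f:
                    if line.startswith("VmRSS:"):
                        rss = int(line.split()[1])
                        if rss > best_rss:
                            best_pid, best_rss = int(pid), rss
        except FileNotFoundError:
            continue   # process exited while we were scanning
    return best_pid

while True:
    used = swap_used_fraction()
    if used >= 0.95:
        os.system("reboot")   # last resort; a real daemon would reset more carefully
    elif used >= 0.90:
        os.kill(largest_rss_pid(), signal.SIGKILL)
    time.sleep(30)
```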

CPU Availability (graph)

Scheduling Jitter (graph)

Memory Availability (graph)

Evolution vs Intelligent Design
- Favor evolution over clean slate
- Favor design principles over a fixed architecture
- Specifically…
  - leverage existing software and interfaces
  - keep VMM and control plane orthogonal
  - exploit virtualization
    - vertical: management services run in slices
    - horizontal: stacks of VMs
  - give no one root (least privilege + level playing field)
  - support federation (divergent code paths going forward)

Other Lessons
- Inferior tracks lead to superior locomotives
- Empower the user: yum
- Build it and they (research papers) will come
- Overlays are not networks
- Networks are just overlays
- PlanetLab: We debug your network
- From universal connectivity to gated communities
- If you don't talk to your university's general counsel, you aren't doing network research
- Work fast, before anyone cares

Collaborators
- Andy Bavier, Marc Fiuczynski, Mark Huang, Scott Karlin, Aaron Klingaman, Martin Makowiecki, Reid Moran, Steve Muir, Stephen Soltesz, Mike Wawrzoniak
- David Culler, Berkeley; Tom Anderson, UW; Timothy Roscoe, Intel; Mic Bowman, Intel; John Hartman, Arizona; David Lowenthal, UGA; Vivek Pai, Princeton; David Parkes, Harvard; Amin Vahdat, UCSD; Rick McGeer, HP Labs

Node Availability (graph)

Live Slices (graph)

Memory Availability (graph)

Bandwidth Out (graph)

Bandwidth In (graph)

Disk Usage (graph)

Trust Relationships (cont)
Principals: Node Owner, PLC (split into SA and MA), Service Developer (User)
1) PLC expresses trust in a user by issuing it credentials to access a slice
2) Users trust PLC to create slices on their behalf and inspect credentials
3) Owner trusts PLC to vet users and map network activity to the right user
4) PLC trusts owner to keep nodes physically secure
MA = Management Authority | SA = Slice Authority

Slice Creation
Flow (diagram): the PI calls SliceCreate( ) and SliceUsersAdd( ) at PLC (acting as slice authority, SA); the user/agent calls GetTicket( ) to obtain a ticket, redeems it with the slice-creation service (plc.scs) on each node, and the node manager (NM) then asks the VMM to CreateVM(slice)
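The flow can be sketched as XML-RPC calls using the method names from the slide; the endpoint URL, argument lists, and authentication structure below are assumptions for illustration, not the actual PLCAPI signatures.

```python
import xmlrpc.client

plc = xmlrpc.client.ServerProxy("https://www.planet-lab.org/PLCAPI/")
auth = {"Username": "pi@example.edu", "AuthString": "secret"}   # hypothetical

plc.SliceCreate(auth, "princeton_codeen")               # PI creates the slice
plc.SliceUsersAdd(auth, "princeton_codeen",
                  ["user@example.edu"])                 # PI adds users to it
ticket = plc.GetTicket(auth, "princeton_codeen")        # user/agent fetches a ticket

# The ticket is then redeemed with the slice-creation service (plc.scs)
# on each node, whose node manager asks the VMM to CreateVM(slice).
```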

Brokerage Service
Flow (diagram): the user calls BuyResources( ) at the broker; the broker contacts the relevant nodes and calls Bind(slice, pool) at each node manager, binding the brokered resource pool to the slice's VM
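A minimal sketch of this flow, with hypothetical Broker and NodeManager classes; only the Bind(slice, pool) step and the broker's role come from the slide.

```python
class NodeManager:
    def bind(self, slice_name: str, pool: str) -> None:
        # Attach the resource pool reserved by the broker to the slice's VM.
        print(f"bound pool {pool!r} to slice {slice_name!r}")

class Broker:
    def __init__(self, nodes: list[NodeManager]):
        self.nodes = nodes

    def buy_resources(self, slice_name: str, pool: str) -> None:
        # The broker holds a pre-allocated pool (granted by PLC via tickets)
        # and binds it to the buyer's slice on each relevant node.
        for nm in self.nodes:
            nm.bind(slice_name, pool)

broker = Broker([NodeManager(), NodeManager()])
broker.buy_resources("princeton_codeen", "sirius_pool")
```

Note the division of labor this enables: PLC only delegates capacity to the broker once, and the broker (e.g., a service like Sirius) handles per-user allocation without PLC in the loop.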