InstaGENI and GENICloud: An Architecture for a Scalable Testbed
Rick McGeer, HP Labs

© Copyright 2012 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.

Slide 3: The "Grand Challenge" Phase of Research
- Transition from the individual experimenter to an institution or multi-institution team
- Typically necessitated because problems go beyond the scale of an individual research group
- Investigation of new phenomena requires dramatic resources
- Example: particle physics

Slide 4: Experimental Physics Before 1928
- Dominated by tabletop apparatus
- Example: Rutherford's discovery of the nucleus, 1910, done with the tabletop apparatus shown on the slide
- Major complication: observations had to be made in a darkened room

Slide 5: Example - Chadwick and the Neutron
- Chadwick used high-energy particles from polonium to bombard the nucleus
- The neutron was the only way to account for the high-energy radiation produced by the bombardment
- The key apparatus was "leftover plumbing": a pipe used to focus the radiation beam
- Date: February 1932

Slide 6: Entry of Institutional Physics
- Splitting the nucleus: Cockcroft and Walton, April 1932
- Key: needed high voltages (estimated 250,000+ volts) to split the nucleus
- The room(!) to hold the apparatus was a major constraint
- Needed major industrial help (Metropolitan-Vickers)

Slide 7: What a Difference Two Months Makes
- Chadwick, February 1932
- Cockcroft/Walton, April 1932

Slide 8: Since Then…

Slide 9: The Era of Institutional Systems Research
- Computer systems research, 1980 to ~1995: dominated by desktop-scale systems; the desktop was the experimental system
  - Example: the original URL of Yahoo! was akebono.cs.stanford.edu/yahoo.html; Akebono was Jerry Yang's Sun workstation, named for a prominent American sumo wrestler (Jerry had spent a term in Kyoto in 1992)
  - "Servers" were sometimes used to offload desktops, but rarely: a "server" of that era was a VAX-11, less powerful than a Sun or DEC workstation
- ~1995 to ~2005: servers used primarily because desktop OSes were unsuitable for serious work
- ~2005 on: clusters (and more) needed for any reasonable experiment
- The era of institutional systems research has begun

Slide 10: Why?
- Activity in 21st-century systems research is focused on massively parallel, loosely coupled, distributed computing:
  - Content distribution networks
  - Key-value stores
  - Cloud resource allocation and management
  - Wide-area redundant stores
  - Fault recovery and robust protocols
  - End-system multicast
  - Multicast messaging
- Key problem: emergent behavior at scale; phenomena at scale can't be anticipated from small-scale behavior
- Hence: moderate-to-large-scale testbeds: G-Lab, PlanetLab, OneLab, …

Slide 11: Why Computer Science Is Undergoing a Phase Change
- Key: we need to understand planetary-scale systems, i.e. systems and services that run all over the planet: critical, pervasive, robust
- Emergent behavior at scale can only be understood by experimenting near scale
- This requires millions of simultaneous users (or at least simulations at that scale)
  - Example: Twitter crashed at 1M users (and needed to rebuild its infrastructure)
- It therefore requires a planetary-scale testbed and deployment platform

Slide 12: Key Differences
- Apparatus now takes many years to construct and costs billions; it requires multi-national consortia
- Discoveries are made by large teams of scientists: hundreds on the top-quark team, thousands on the Higgs team
- Experiments last 30+ years; examples: ALICE at the LHC, BaBar at SLAC
- Experimental devices are measured by the energies of the collisions they produce
- All of this is driven by the cost and complexity of the apparatus
- Cockcroft and Walton heralded the era of institutional Grand Challenge physics

Slide 13: Problem: We Can't Build It
- Industrial scale is rapidly outstripping academic/research scale
- Example: a Yahoo! "clique" is 20,000 servers at 20 VMs/server = 400,000 VMs, far beyond any existing testbed
- PlanetLab + OneLab: ~1,000 nodes; Emulab: ~500 nodes; G-Lab: …
- A single Yahoo! clique is 20x our best testbeds
- So what do we do? Federation…

Slide 14: Why Federate?
- Because we can each afford a piece…
- Federate a large number of small clouds and testbeds
- Agree on common APIs and a common form of authorization
- Ad-hoc federation

Slide 15: InstaGENI and GENICloud/TransCloud
- Two complementary elements of a federation architecture
- Inspiration: the Web. Can we do for clouds what the Web did for computation?
  - Make it easy, safe, and cheap for people to build small clouds
  - Make it easy, safe, and cheap for people to run cloud jobs at many different sites
- GENICloud/TransCloud: a common API across cloud systems; access control without identity; the equivalent of HTTP
- InstaGENI: a "just works" out-of-the-box small cloud; the "Apple II"/reference web server of clouds

Slide 16: Key Assumption
- Each facility implements the Slice-Based Facility Interface: a standard, unified means of allocating
  - virtual machines at each layer of the stack ("slivers")
  - networks/sets of virtual machines ("slices")
- Already supported by PlanetLab, ORCA, and ProtoGENI
- Now supported by Eucalyptus and OpenStack (our contribution)
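To make the slice/sliver vocabulary concrete, here is a minimal sketch of what allocating a sliver through an SFA-style aggregate manager can look like, using GENI AM API v2-style XML-RPC calls. The endpoint URL, certificate, credential, and RSpec file names are hypothetical placeholders, and real aggregates differ in the exact options they require.

```python
# Minimal sketch: reserve a sliver from an SFA-style aggregate manager
# over XML-RPC. All file names and URLs below are hypothetical.
import ssl
import xmlrpc.client

AM_URL = "https://am.example-testbed.net:12346"  # hypothetical AM endpoint
SLICE_URN = "urn:publicid:IDN+example-testbed.net+slice+demo"

# GENI-style AMs identify callers by client TLS certificate.
ctx = ssl.create_default_context()
ctx.load_cert_chain(certfile="experimenter-cert.pem")  # cert + key, hypothetical
am = xmlrpc.client.ServerProxy(AM_URL, context=ctx)

slice_cred = open("slice-cred.xml").read()    # hypothetical slice credential
request_rspec = open("request.rspec").read()  # the resources we want, as an RSpec

print(am.GetVersion())  # discover what the aggregate supports
# Advertised resources, then a one-shot reservation (AM API v2 style):
ad = am.ListResources([slice_cred], {"geni_available": True})
manifest = am.CreateSliver(SLICE_URN, [slice_cred], request_rspec, [], {})
```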

Slide 17: What We Need, What We Don't
- What we need:
  - A method of creating slices on clouds and distributed infrastructures
  - A method of communicating between clouds and distributed infrastructures
  - A method of inter-slice communication between clouds
- What we don't:
  - Single sign-on!
  - A single AUP
  - A single resource-allocation policy or procedure
  - A unified security policy
- Principle of Minimal Agreement: what is the minimum set of standards we can agree on to make this happen?

Slide 18: What Do We Need from the Clouds?
- Building blocks:
  - Eucalyptus: an open-source clone of EC2
  - OpenStack: open source, with widespread developer mindshare (easy to use, familiar)
- What we want:
  - The Slice-Based Federation Architecture: a means of creating/allocating slices
  - Authorization by Attribute-Based Access Control (ABAC), with a delegation primitive
  - Explicit cost/resource-allocation primitives: we need to be able to control costs for the developer
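Since ABAC with delegation is the authorization model named above, a toy sketch may help convey the idea: principals are authorized by attribute credentials rather than by entries in a shared identity database, and an issuer can delegate an attribute to the holders of another issuer's attribute. This shows only the flavor of the logic, with hypothetical issuer and principal names; it is not the real libabac/RT credential machinery.

```python
# Toy ABAC: authorization by attributes and delegation, no shared identity DB.
# All issuer and principal names are hypothetical.

# Direct grants: issuer says "principal has attribute".
direct = {("genicloud", "create_sliver"): {"hplabs_rick"}}
# Delegations: issuer says "holders of (other_issuer, other_attr) get my attr".
delegated = [("genicloud", "create_sliver", "princeton", "member")]
# Attributes asserted by other issuers about their own principals.
memberships = {("princeton", "member"): {"princeton_alice"}}

def has_attribute(issuer: str, attr: str, principal: str) -> bool:
    if principal in direct.get((issuer, attr), set()):
        return True
    return any(
        (i, a) == (issuer, attr) and principal in memberships.get((di, da), set())
        for (i, a, di, da) in delegated
    )

# GENICloud never enrolled princeton_alice, yet can authorize her via delegation.
assert has_attribute("genicloud", "create_sliver", "hplabs_rick")
assert has_attribute("genicloud", "create_sliver", "princeton_alice")
```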

Slide 19: Why GENICloud?
- The minimal set of facilities to permit seamless interconnection without trust
- Motivation: the Web, where sites are mutually untrusting
- The Web's key facilities: DNS, HTTP, HTML
- What are the equivalents for clouds? Our cut: slices, ABAC, and DNS conventions (…transcloud.net)

Slide 20: Introduction: TransCloud
- TransCloud: a cloud where services migrate, anytime, anywhere, in a world where distance is eliminated
- A joint project between GENICloud, iGENI, and G-Lab:
  - GENICloud provides seamless interoperation of cloud resources across N sites and N administrative domains
  - iGENI optimizes private networks of intelligent devices
  - G-Lab contributes networking and advanced cloud resources

Slide 21: Seamless Computation Services Available Anytime, Anywhere
- "The Cloud" offers the prospect of ubiquitous information and services… BUT…
- The performance of cloud services is highly dependent on the locations of the end user, the applications, middle processes, the network topology, cloud data, compute processes, storage, etc.
- Why? The performance of legacy protocols is highly dependent on latency (see the calculation below)
- Therefore: we want to compute anywhere convenient, and we want to be able to compute everywhere
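The latency-dependence claim can be made concrete with the classic TCP window bound: a flow with a fixed window can carry at most window/RTT bytes per second, no matter how fast the link is. A small illustrative calculation:

```python
# Why latency dominates legacy-protocol performance: a TCP flow with a
# fixed window moves at most window/RTT bytes per second, regardless of
# raw link capacity.
def tcp_throughput_mbps(window_bytes: int, rtt_seconds: float) -> float:
    return window_bytes * 8 / rtt_seconds / 1e6

WINDOW = 64 * 1024  # classic 64 KB TCP window
for rtt_ms, path in [(1, "same rack"), (20, "same coast"), (150, "intercontinental")]:
    mbps = tcp_throughput_mbps(WINDOW, rtt_ms / 1000)
    print(f"{path:16s} RTT {rtt_ms:4d} ms -> {mbps:8.1f} Mb/s")
# same rack: ~524 Mb/s; intercontinental: ~3.5 Mb/s over the very same link.
```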

Slide 22: What Do We Need to Make This Work?
- Advanced networking and caching: firm guarantees on bandwidth and latency on a per-application basis
- Application support at Layer 3 and Layer 2; means: a private network where possible
- Access to platforms wherever data lives; but data lives everywhere, and no organization has points of presence (PoPs) everywhere
- So an individual must be able to make arrangements with a cloud service provider, anywhere, efficiently, with minimal overhead
- No common form of identity required; no common AUP required

Slide 23: What Do We Need to Make This Work? (continued)
- The ability to instantiate and run a program anywhere
- A common API at each level of the stack:
  - IaaS/NaaS (VM/VN creation)
  - PaaS (a guaranteed OS/programming environment)
  - OaaS (a standard query/data-management API)
- An easy, standard naming scheme: I need to know the names of my VMs, logins, store, etc. without asking (see the sketch below)
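The naming-scheme requirement can be illustrated with a small sketch: given only a slice name and a site, every resource name is computable without asking anyone. The <role>.<slice>.<site>.transcloud.net layout below extends the transcloud.net DNS convention from Slide 19, but the exact layout is a hypothetical illustration, not a published TransCloud standard.

```python
# Sketch of a deterministic naming scheme: resource names are derivable
# from (role, slice, site) alone. The DNS layout here is hypothetical.
def resource_name(role: str, slice_name: str, site: str) -> str:
    return f"{role}.{slice_name}.{site}.transcloud.net"

def ssh_login(slice_name: str, site: str, index: int = 0) -> str:
    return f"{slice_name}@{resource_name(f'vm{index}', slice_name, site)}"

print(resource_name("store", "greencities", "hplabs"))
# -> store.greencities.hplabs.transcloud.net
print(ssh_login("greencities", "hplabs", index=2))
# -> greencities@vm2.greencities.hplabs.transcloud.net
```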

Slide 24: Solution: TransCloud
- Introducing the TransCloud prototype, an early instantiation of the architecture:
  - A distributed environment that enables component and interoperability evaluation
  - A testbed on which early experimental research can be conducted
  - An environment that can be used to explain/showcase innovative new architectures and concepts through demonstrations

Slide 25: Demo: What Is the World's Greenest City?
- Answering this question through analysis of Landsat data
- A perfect job for a distributed cloud
- Currently running on the HP Labs GENICloud, but we can distribute it anywhere…

Slides 26-28: (demo screenshots)

Slide 29: TransCloud Today
- Approximately 40 nodes at 4 sites, with 10 Gb/s connectivity

Slide 30: The InstaGENI Rack
- Designed for the GENI meso-scale deployment
  - Eight deployments in 2012, 24 planned for 2013
  - ProtoGENI and FOAM as native aggregate managers and control frameworks
  - Boots to a ProtoGENI instance with an OpenFlow switch
- Designed for wide-area PlanetLab federation
  - A PlanetLab image provided with boot
  - An InstaGENI PlanetLab Central stood up
- Designed for expandability: approximately 30U free in the rack

Slide 31: Understanding the InstaGENI Rack
- Two big things:
  - It's just ProtoGENI
  - It's this thing (shown on the slide)

Slide 32: It's Just ProtoGENI
- The key design criterion behind the InstaGENI rack: a reliable, proven control framework
  - A familiar UI for GENI experimenters and administrators
  - A well-understood support and administrative model
- We're not inventing new control frameworks; we're deploying control frameworks and aggregate managers you understand and know how to use
- A network of baby ProtoGENIs, with SDN native to the racks
  - Allocation of resources with familiar tools: Flack, …
  - Easy distribution and a proven ability to run many images
  - A well-understood support model: if something goes wrong, we know how to fix it
- PlanetLab and OpenFlow integration out of the box

Slide 33: The "Apple II of Clouds"
- Key insight: the Apple II wasn't the first mass-market computer because it was innovative, but because it was packaged
- Pre-Apple II, computers were all hobbyist kits: "much assembly, configuration, software writing, and installation required"
- But the Apple II worked out of the box: plug it in and turn it on; and that's what made a revolution
- Same idea here:
  - Plug in the InstaGENI rack
  - Put in the wide-area network connection
  - Rob will install the software and bring it up over the net
  - You're on the mesoscale!

Slide 34: The InstaGENI Rack
- Designed for easy deployability
  - Power: one 220V L6-20 receptacle (or two 110V)
  - Network: 10/100/1000Base-T
  - Pre-wired from the factory
- On the mesoscale: network connections pre-allocated, VLANs and connectivity pre-wired before the rack arrives
- Designed for remote management: HP iLO on each node
- Designed for flexible networking: 4 x 1G NICs per node, a 20-port 1G OpenFlow switch with v2 linecards

Slide 35: InstaGENI Rack Hardware
- Control node (hosts the ProtoGENI boss, ProtoGENI users, the FOAM controller, image storage, …): HP ProLiant DL360 G7, quad-core, single-socket, dual 1 Gb/s NICs, 12 GB RAM, 4 TB disk (RAID), iLO
- Five experiment nodes: HP ProLiant DL360 G7, six-core, dual-socket, quad 1 Gb/s NICs, 48 GB RAM, 1 TB disk, iLO
- OpenFlow switch: HP E5406 with 20 x 1 Gb/s ports, v2 linecards, running in hybrid mode

Slide 36: InstaGENI Planned Deployment
- GENI funding: 8 sites in Year 1, 24 sites in Year 2, all in the USA
- Other racks: for the US public sector (except the federal government) there is a special HP program; contact Michaela Mezo, HP SLED

Slide 37: InstaGENI Year 1 Sites (map on slide)

Slide 38: InstaGENI Rack Diagram (diagram on slide)

Slide 39: InstaGENI Rack Topology (diagram on slide)

Slide 40: InstaGENI Photo

Slide 41: InstaGENI Software Architecture (diagram on slide)
- ProtoGENI (Hardware as a Service, Infrastructure as a Service) and FOAM (Networks as a Service)
- Images: a ProtoGENI image and PlanetLab images, plus the InstaGENI PLC
- Layer 2 and 3 connectivity; GENI L2/L3 slices

Slide 42: Control Infrastructure (diagram on slide)
- A control/external switch and a data-plane switch
- Control node: a Xen hypervisor hosting ProtoGENI "boss", ProtoGENI "ops", FOAM, and FlowVisor

Slide 43: (Re)Provisioning Nodes (diagram on slide)
- Example allocation across the five experiment nodes: one ProtoGENI shared node, three ProtoGENI exclusive nodes, one PlanetLab shared node

Slide 44: GENI Integration
- Will ship with full support for the GENI AM API (likely v3), with updates as the GENI APIs evolve (see the sketch below)
- Support for Tom Lehman's RSpec stitching extension
- Will have local FOAM and FlowVisor instances for OpenFlow integration
- Will start by affiliating with the ProtoGENI clearinghouse, then switch affiliation to the GENI Clearinghouse once it is up
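Because racks must track evolving GENI APIs, a client has to discover which version an aggregate speaks before reserving resources. A hedged sketch of that negotiation, using the v3 Allocate/Provision split with a v2 CreateSliver fallback; the `am` proxy and credentials are placeholders, as in the earlier sliver-allocation sketch:

```python
# Sketch: negotiate the AM API version via GetVersion, then reserve
# resources with the matching call sequence. "am" is an XML-RPC proxy
# for an aggregate manager, as in the earlier sliver-allocation sketch.
def create_slivers(am, slice_urn, creds, rspec):
    info = am.GetVersion()
    if info.get("geni_api", 2) >= 3:
        # AM API v3 splits reservation into Allocate (hold) + Provision (instantiate).
        am.Allocate(slice_urn, creds, rspec, {})
        return am.Provision([slice_urn], creds, {})
    # AM API v2: single-shot CreateSliver (the empty list is the users argument).
    return am.CreateSliver(slice_urn, creds, rspec, [], {})
```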

Slide 45: Software Management
- Frequent control-software updates, which rarely affect running slivers
  - VM snapshots to roll back failed updates
  - Updates driven by major software changes rather than a set schedule
  - All updates done by InstaGENI personnel (sites can make local modifications, but this "voids the warranty")
  - A testing period on the Utah rack first
- Updating disk images
  - New versions of the standard images distributed nightly
  - Voluntary updates for exclusive-use nodes and VM images; scheduled updates for VM host images
  - Security updates handled differently, on a case-by-case basis

Slide 46: Operations and Management
- Providing the GMOC with:
  - Visibility into current users and slices
  - Health and historical data
  - "Kill switch" credentials for emergency shutdown
- Local administrators get the same access
- Automatic verification of slices upon setup
- Local admins get mail about hardware failures
- PlanetFlow-based mapping of addresses/packets to slices

Slide 47: InstaGENI Sites and Network, Year 1
- Sites: University of Utah, Princeton University, GPO, Northwestern University, Clemson University, Georgia Tech, University of Kansas, New York University, University of Victoria
- Networks: GENInet (the GENI backbone), StarLight/MREN, MAGPI, NOX, GPN, UEN, SOX, NYSERNET, CANARIE, BCNET, MAX

Slide 48: StarLight/MREN (diagram on slide)
- Diagram of the StarLight/MREN facility: the MREN E1200 switch, the StarLight E1200 switch, optical switches, and the InstaGENI rack with its GENI OpenFlow switch at iCAIR
- Connections: University of Illinois Urbana-Champaign (ICCN/I-WIRE), I2 at StarLight (including I2 ION), ESnet at StarLight, NLR at StarLight, NDDI, DYNES, multiple EU, Asian, and South American sites, and multiple national, regional, and state network connections

Slide 49: Selected Other Interconnections

Slide 50: Conclusions and Future Work
- Described:
  - TransCloud, a set of proposed standards to permit computation anywhere
  - GENICloud, the first TransCloud federate
  - InstaGENI, a works-out-of-the-box miniature cloud
- For the future:
  - GENICloud/TransCloud is an open set of standards and an open federation; the standards are very much a work in progress
  - "Slice-Around-The-World" demos throughout this year
- Join us!