
1  InstaGENI and GENICloud: An Architecture for a Scalable Testbed
Rick McGeer, HP Labs
© Copyright 2012 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.


3  The "Grand Challenge" Phase of Research
Transition from the individual experimenter to an institution or multi-institution team
Typically necessitated because problems go beyond the scale of an individual research group
Investigation of new phenomena required dramatic resources
Ex: particle physics, 1928-1932

4  Experimental Physics Before 1928
Dominated by tabletop apparatus
Ex: Rutherford's discovery of the nucleus, 1910, done with the tabletop apparatus shown here
Major complication: had to observe in a darkened room

5  Example: Chadwick and the Neutron
Chadwick used high-energy particles from polonium to bombard the nucleus
The neutron was the only way to account for the high-energy radiation from the bombardment
Key apparatus: "leftover plumbing", a pipe used to focus the radiation beam
Date: February 1932

6  Entry of Institutional Physics
Nuclear fission: Cockcroft and Walton, April 1932
Key: needed high voltages (est. 250,000+ volts) to split the nucleus
Room(!) to hold the apparatus was a major constraint
Needed major industrial help (Metropolitan-Vickers)

7  What a Difference Two Months Makes...
Chadwick, 2/32; Cockcroft/Walton, 4/32

8  Since Then…

9  The Era of Institutional Systems Research
Computer systems research, 1980-2010, was dominated by desktop-scale systems
1980-~1995: the desktop was the experimental system
Ex: the original URL of Yahoo! was akebono.cs.stanford.edu/yahoo.html
Akebono was Jerry Yang's Sun workstation, named for a prominent American sumo wrestler (Jerry had spent a term in Kyoto in 1992)
Sometimes "servers" were used to offload desktops, but rarely: a "server" ca. 1990 was a VAX-11, less powerful than a Sun or DEC workstation
~1995-~2005: servers used primarily because desktop OSes were unsuitable for serious work
~2005-: clusters (and more) needed for any reasonable experiment
The era of institutional systems research has begun

10  Why?
Activity in 21st-century systems research is focused on massively parallel, loosely coupled, distributed computing:
Content distribution networks
Key-value stores
Cloud resource allocation and management
Wide-area redundant stores
Fault recovery and robust protocols
End-system multicast
Multicast messaging
Key problem: emergent behavior at scale
Can't anticipate phenomena at scale from small-scale behavior
Hence: moderate-to-large-scale testbeds: G-Lab, PlanetLab, OneLab, ...

11  Why Computer Science Is Undergoing a Phase Change
Key: need to understand planetary-scale systems
Systems and services that run all over the planet: critical, pervasive, robust
Emergent behavior at scale: can only be understood by experimenting near scale
Requires millions of simultaneous users (or at least simulations at that scale)
Ex: Twitter crashed at 1M users (and needed to rebuild its infrastructure)
Requires a planetary-scale testbed and deployment platform

12  Key Differences
Apparatus now takes many years to construct and costs billions
Requires multi-national consortia
Discoveries made by large teams of scientists: hundreds on the Top Quark team, thousands on the Higgs team
Experiments last 30+ years
Ex: ALICE at the LHC, BaBar at SLAC
Experimental devices are measured by the energies of the collisions produced
Driven by the cost and complexity of the apparatus
Cockcroft and Walton heralded the era of institutional Grand Challenge physics

13  Problem: We Can't Build It
Industrial scale is rapidly outstripping academic/research scale
Ex: a Yahoo! "clique" is 20,000 servers; at 20 VMs/server, that's 400,000 VMs
Far beyond any existing testbed:
PlanetLab + OneLab: 1,000+ nodes
Emulab: ~500 nodes
G-Lab...
A single Yahoo! clique is 20x our best testbeds
So what do we do? Federation...

14  Why Federate?
Because we can each afford a piece...
Federate a large number of small clouds and testbeds
Agree on common APIs and a common form of authorization
Ad-hoc federation

15  InstaGENI and GENICloud/TransCloud
Two complementary elements of a federation architecture
Inspiration: the Web. Can we do for clouds what the Web did for computation?
Make it easy, safe, and cheap for people to build small clouds
Make it easy, safe, and cheap for people to run cloud jobs at many different sites
GENICloud/TransCloud: a common API across cloud systems, and access control without identity; the equivalent of HTTP
InstaGENI: a "just works" out-of-the-box small cloud; the "Apple II"/reference web server of clouds

16  Key Assumption
Each facility implements the Slice-Based Facility Interface
A standard, unified means of allocating:
Virtual machines at each layer of the stack ("slivers")
Networks/sets of virtual machines ("slices")
Already supported by PlanetLab, ORCA, ProtoGENI
Now supported by Eucalyptus and OpenStack (our contribution)
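The sliver/slice model above can be sketched in miniature. The class and method names below are illustrative stand-ins, not the actual Slice-Based Facility Interface; the point is only the relationship: slivers live at one aggregate, while a slice ties slivers together across independently administered aggregates.

```python
from dataclasses import dataclass, field

@dataclass
class Sliver:
    """One virtual machine (or similar resource) allocated at a single aggregate."""
    aggregate: str
    slice_name: str
    node: str

@dataclass
class Slice:
    """A named set/network of slivers that may span many aggregates."""
    name: str
    slivers: list = field(default_factory=list)

class Aggregate:
    """Toy stand-in for one facility exposing a slice-based allocation interface."""
    def __init__(self, name, nodes):
        self.name = name
        self.free = set(nodes)

    def create_sliver(self, slice_obj, node):
        # Allocate one node to the slice; real interfaces also check credentials.
        if node not in self.free:
            raise ValueError(f"{node} not available at {self.name}")
        self.free.remove(node)
        sliver = Sliver(self.name, slice_obj.name, node)
        slice_obj.slivers.append(sliver)
        return sliver

# One slice spanning two independently administered facilities:
planetlab = Aggregate("planetlab", ["plnode1", "plnode2"])
protogeni = Aggregate("protogeni", ["pgnode1"])
exp = Slice("myexperiment")
planetlab.create_sliver(exp, "plnode1")
protogeni.create_sliver(exp, "pgnode1")
print([(s.aggregate, s.node) for s in exp.slivers])
```

The federation win is that the experimenter issues the same `create_sliver` request everywhere, regardless of which facility backs it.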

17  What We Need, What We Don't
What we need:
A method of creating slices on clouds and distributed infrastructures
A method of communicating between clouds and distributed infrastructures
A method of inter-slice communication between clouds
What we don't:
Single sign-on!
A single AUP
A single resource allocation policy or procedure
A unified security policy
Principle of Minimal Agreement: what is the minimum set of standards we can agree on to make this happen?

18  What We Need from the Clouds
Building blocks:
Eucalyptus: open-source clone of EC2
OpenStack: open source, with widespread developer mindshare (easy to use, familiar)
What we want:
The Slice-Based Federation Architecture: a means of creating/allocating slices
Authorization by Attribute-Based Access Control (ABAC), with a delegation primitive
Explicit cost/resource-allocation primitives: need to be able to control costs for the developer
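The ABAC-plus-delegation idea can be illustrated with a much-simplified sketch. Real ABAC (the RT-style logic used in GENI) has a richer credential language and cryptographic signing; here, credentials and delegation rules are plain tuples, and the names (`hp_labs`, `transcloud`, `experimenter`) are invented for illustration.

```python
# A credential (issuer, subject, attribute) means: issuer attests that
# subject holds attribute.
# A delegation (issuer, attribute, via_issuer, via_attribute) means: issuer
# grants `attribute` to anyone that via_issuer says holds via_attribute.
def holds(subject, issuer, attribute, credentials, delegations):
    """Does `issuer` conclude that `subject` holds `attribute`?"""
    if (issuer, subject, attribute) in credentials:
        return True
    for (iss, attr, via_iss, via_attr) in delegations:
        if iss == issuer and attr == attribute:
            if holds(subject, via_iss, via_attr, credentials, delegations):
                return True
    return False

creds = {("hp_labs", "alice", "experimenter")}
# The testbed never learns who "alice" is institutionally; it only delegates
# slice creation to anyone hp_labs vouches for as an experimenter.
dels = [("transcloud", "create_slice", "hp_labs", "experimenter")]
print(holds("alice", "transcloud", "create_slice", creds, dels))    # True
print(holds("mallory", "transcloud", "create_slice", creds, dels))  # False
```

This is the sense in which the deck's "access control without identity" works: authorization flows from attributes and delegation chains, not from a shared user database.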

19  Why GENICloud?
The minimal set of facilities to permit seamless interconnection without trust
Motivation: the Web
Web sites are mutually untrusting
Key facilities: DNS, HTTP, HTML
What are the equivalents for clouds?
Our cut: slices, ABAC, DNS conventions (...transcloud.net)

20  Introduction: TransCloud
TransCloud: a cloud where services migrate, anytime, anywhere, in a world where distance is eliminated
Joint project between GENICloud, iGENI, and G-Lab
GENICloud provides seamless interoperation of cloud resources across N sites and N administrative domains
iGENI optimizes private networks of intelligent devices
G-Lab contributes networking and advanced cloud resources

21  Seamless Computation: Services Available Anytime, Anywhere
"The Cloud" offers the prospect of ubiquitous information and services... BUT...
The performance of cloud services is highly dependent on the locations of the end-user, applications, middle processes, the network topology of the cloud data, compute processes, storage, etc.
Why? The performance of legacy protocols is highly dependent on latency
Therefore: we want to compute anywhere convenient, and we want to be able to compute everywhere

22  What Do We Need to Make This Work?
Advanced networking and caching
Firm guarantees on bandwidth and latency on a per-application basis
Application support at Layer 3 and Layer 2; means: a private network where possible
Access to platforms wherever data lives, but data lives everywhere!
No organization has Points of Presence (PoPs) everywhere
An individual needs to be able to make arrangements with a cloud service provider, anywhere, efficiently, with minimal overhead
A common identity is not required; a common AUP is not required

23  What Do We Need to Make This Work?
The ability to instantiate and run a program anywhere
A common API at each level of the stack:
IaaS/NaaS (VM/VN creation)
PaaS (guaranteed OS/programming environment)
OaaS (standard query/data management API)
An easy, standard naming scheme: I need to know the names of my VMs, logins, store, etc. without asking
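The "names without asking" requirement amounts to a deterministic naming convention: given the slice, site, and resource index, anyone can compute the DNS name. The exact `vm<i>.<slice>.<site>.transcloud.net` layout below is an assumption for illustration, not the project's documented scheme.

```python
def sliver_name(slice_name, site, index, domain="transcloud.net"):
    """Deterministic DNS name for the index-th VM of a slice at a site.
    The vm<i>.<slice>.<site>.<domain> layout is assumed for illustration."""
    return f"vm{index}.{slice_name}.{site}.{domain}"

# An experimenter can predict every name without querying any registry:
names = [sliver_name("greenest-city", site, i)
         for site in ("hplabs", "northwestern") for i in range(2)]
print(names)
```

Because the names are a pure function of slice and site, tools at any federate can address resources at any other federate with no central directory lookup.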

24  Solution: TransCloud
Introducing the TransCloud prototype:
An early instantiation of the architecture
A distributed environment that enables component and interoperability evaluation
A testbed on which early experimental research can be conducted
An environment that can be used to explain/showcase innovative new architectures and concepts through demonstrations

25  Demo: What Is the World's Greenest City?
Answering this question through analysis of Landsat data
A perfect job for a distributed cloud
Currently running on the HP Labs GENICloud, but we can distribute it anywhere...


29  TransCloud Today
Approximately 40 nodes at 4 sites, with 10 Gb/s connectivity

30  The InstaGENI Rack
Designed for GENI meso-scale deployment: eight deployments in 2012, 24 in 2013
ProtoGENI and FOAM as native aggregate managers and control frameworks
Boots to a ProtoGENI instance with an OpenFlow switch
Designed for wide-area PlanetLab federation: a PlanetLab image provided with boot, and an InstaGENI PlanetLab Central stood up
Designed for expandability: approximately 30U free in the rack

31  Understanding the InstaGENI Rack
Two big things:
IT'S JUST ProtoGENI
It's this thing

32  It's Just ProtoGENI
The key design criterion behind the InstaGENI rack:
A reliable, proven control framework
A familiar UI for GENI experimenters and administrators
A well-understood support and administrative model
We're not inventing new control frameworks; we're deploying control frameworks and aggregate managers you understand and know how to use
A network of baby ProtoGENIs, with SDN native to the racks
Allocation of resources with familiar tools (Flack...)
Easy distribution and a proven ability to run many images
A well-understood support model: if something goes wrong, we know how to fix it
PlanetLab and OpenFlow integration out of the box

33  The "Apple II of Clouds"
Key insight: the Apple II wasn't the first mass-market computer because it was innovative, but because it was packaged
Pre-Apple II, computers were all hobbyist kits: "much assembly, configuration, software writing, and installation required"
But the Apple II worked out of the box: plug it in and turn it on
And that's what made a revolution
Same idea:
Plug in the InstaGENI rack
Put in the wide-area network connection
Rob will install the software and bring it up over the net
You're on the mesoscale!

34  The InstaGENI Rack
Designed for easy deployability
Power: one 220V L6-20 receptacle (or two 110V)
Network: 10/100/1000Base-T, pre-wired from the factory
On the mesoscale: network connections pre-allocated, VLANs and connectivity pre-wired before the rack arrives
Designed for remote management: HP iLO on each node
Designed for flexible networking: four 1G NICs per node, 20 1G switch ports with v2 line cards, and an OpenFlow switch

35  InstaGENI Rack Hardware
Control node, hosting the ProtoGENI boss, ProtoGENI users, the FOAM controller, image storage, ...
HP ProLiant DL360 G7: quad-core, single-socket, dual NIC (1 Gb/s), 12 GB RAM, 4 TB disk (RAID), iLO
Five experiment nodes
HP ProLiant DL360 G7: six-core, dual-socket, quad NIC (1 Gb/s), 48 GB RAM, 1 TB disk, iLO
OpenFlow switch
HP E5406 with 20 1 Gb/s ports, v2 line cards, running in hybrid mode

36  InstaGENI Planned Deployment
GENI funding: 8 sites in Year 1, 24 sites in Year 2, all in the USA
Other racks: US public sector except the federal government, via a special HP program
Contact Michaela Mezo, HP SLED

37  InstaGENI Year 1 Sites

38  InstaGENI Rack Diagram

39  InstaGENI Rack Topology

40  InstaGENI Photo

41  InstaGENI Software Architecture
ProtoGENI (Hardware as a Service, Infrastructure as a Service)
FOAM (Networks as a Service)
ProtoGENI image; PlanetLab images; InstaGENI PLC
Layer 2 and Layer 3 connectivity; GENI L2/L3 slices

42  Control Infrastructure
Control/external switch; data-plane switch
Control node: a Xen hypervisor running ProtoGENI "boss", ProtoGENI "ops", FOAM, and FlowVisor

43  (Re)Provisioning Nodes
Example allocation of the five experiment nodes: one ProtoGENI shared, three ProtoGENI exclusive, one PlanetLab shared

44  GENI Integration
Will ship with full support for the GENI AM API (likely v3), with updates as the GENI APIs evolve
Support for Tom Lehman's RSpec stitching extension
Will have local FOAM and FlowVisor instances for OpenFlow integration
Will start by affiliating with the ProtoGENI clearinghouse, switching affiliation to the GENI Clearinghouse once it is up
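The GENI AM API is carried over XML-RPC. As a hedged sketch of its shape, here is a toy in-process server exposing a GetVersion-like call; the real v3 API takes credential and option arguments, returns a much richer structure, and includes many more methods (Allocate, Provision, etc.), none of which are modeled here.

```python
import threading
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer

# Toy aggregate manager endpoint: only mimics the version-discovery handshake.
def GetVersion():
    return {"geni_api": 3, "geni_api_versions": {"3": "/am/v3"}}

# Bind to an ephemeral port on localhost and serve in a background thread.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False, allow_none=True)
server.register_function(GetVersion, "GetVersion")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# A client tool first discovers which API version the aggregate speaks:
proxy = xmlrpc.client.ServerProxy(f"http://127.0.0.1:{port}")
version = proxy.GetVersion()
print(version["geni_api"])  # 3
server.shutdown()
```

Version discovery first, allocation second, is what lets tools interoperate with aggregates as the API evolves from v2 to v3.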

45  Software Management
Frequent control software updates: they rarely affect running slivers, and VM snapshots allow rolling back failed updates
Updates happen on major software changes rather than on a set schedule
All updates done by InstaGENI personnel (sites can make local modifications, but this "voids the warranty")
A testing period on the Utah rack comes first
Updating disk images:
New versions of the standard images distributed nightly
Voluntary updates for exclusive-use nodes and VM images; scheduled updates for VM host images
Security updates handled differently, on a case-by-case basis

46  Operations and Management
Providing the GMOC with:
Visibility into current users and slices
Health and historical data
"Kill switch" credentials for emergency shutdown
Local administrators get the same access
Automatic verification of slices upon setup
Local admins get mail about hardware failures
PlanetFlow-based mapping of addresses/packets to slices

47  InstaGENI Sites and Network: Y1
Sites: University of Utah, Princeton University, GPO, Northwestern University, Clemson University, Georgia Tech, University of Kansas, New York University, University of Victoria
Networks: GENInet (GENI backbone), SL/MREN, MAGPI, NOX, GPN, UEN, SOX, NYSERNET, CANARIE, BCNET, MAX

48  StarLight/MREN Interconnect (diagram)
Elements include: University of Illinois Urbana-Champaign; I2 at StarLight; ESnet at StarLight; GENInet at the StarLight/MREN facility; the MREN E1200 switch; optical switches; ICCN/I-WIRE; the StarLight E1200 switch; NLR at StarLight; I2 ION; multiple EU, Asian, and South American sites; NDDI; DYNES; the InstaGENI rack with OpenFlow switch at iCAIR; the iCAIR GENI OpenFlow switch; multiple national, regional, and state network connections

49  Selected Other Interconnections

50  Conclusions and Future Work
Described:
TransCloud, a set of proposed standards to permit computation anywhere
GENICloud, the first TransCloud federate
InstaGENI, a works-out-of-the-box miniature cloud
For the future:
GENICloud/TransCloud is an open set of standards and an open federation
The standards are very much a work in progress
"Slice-Around-The-World" demos throughout this year
Join us!

