ExoGENI Racks Ilia Baldine

Testbed
- 14 GPO-funded racks
  - Partnership between RENCI, Duke and IBM
  - IBM x3650 M3/M4 servers
    - 1x146GB 10K SAS hard drive + 1x500GB secondary drive
    - 48GB 1333MHz RAM
    - Dual-socket 6-core Intel X5650 2.66GHz CPU
    - Dual 1Gbps adapter
    - 10G dual-port Chelsio adapter
  - BNT 8264 10G/40G OpenFlow switch
  - DS3512 6TB sliverable storage
    - iSCSI interface for head node image storage as well as experimenter slivering
- Each rack is a small networked cloud
  - OpenStack-based (some older racks run Eucalyptus)
  - EC2 nomenclature for node sizes (m1.small, m1.large, etc.; illustrated below)
  - Interconnected by a combination of dynamic and static L2 circuits through regionals and national backbones
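The EC2-style sizing can be made concrete with a small flavor table. A minimal sketch in Python, where the core/RAM/disk values are assumptions for illustration only (the deck names the flavors but not their exact resource mappings):

```python
# Illustrative EC2-style flavor table for an ExoGENI rack.
# NOTE: the resource values are assumed for illustration; the deck
# only names the flavors (m1.small, m1.large, etc.).
FLAVORS = {
    "m1.small": {"cores": 1, "ram_mb": 1024, "disk_gb": 10},
    "m1.large": {"cores": 2, "ram_mb": 4096, "disk_gb": 50},
}

def fits_on_worker(flavor, free_cores, free_ram_mb):
    """True if one more instance of `flavor` fits on a worker node."""
    need = FLAVORS[flavor]
    return need["cores"] <= free_cores and need["ram_mb"] <= free_ram_mb

# A worker with 12 free cores and 48GB free RAM (the server spec above):
print(fits_on_worker("m1.large", free_cores=12, free_ram_mb=48 * 1024))
```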

ExoGENI Status
- 2 new racks deployed
  - RENCI and GPO
- 2 existing racks (not yet OpenFlow-enabled)
  - Duke and UNC
- 2 more racks available by GEC14
  - FIU and UH
- Connected via BEN (http://ben.renci.org), LEARN, and NLR FrameNet
- Partner racks
  - NICTA (under construction)
  - U of Alaska Fairbanks

Rack diagram and connectivity
- Each rack has a management connection to the campus network
- It may also have a connection to the campus OpenFlow network for experiments
- A connection to FrameNet or I2 ION (see the sketch below), either:
  - Direct
  - Via a pool of VLANs with static tags
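The static-tag option amounts to managing a small pool of pre-provisioned VLAN IDs. A minimal sketch, assuming a hypothetical tag range (real tags are negotiated with the regional provider):

```python
# Minimal static VLAN tag pool, as described above. The tag range
# is hypothetical; real pools are agreed with the regional network.
class VlanPool:
    def __init__(self, tags):
        self.free = set(tags)
        self.in_use = {}  # tag -> slice name

    def allocate(self, slice_name):
        if not self.free:
            raise RuntimeError("static VLAN pool exhausted")
        tag = self.free.pop()
        self.in_use[tag] = slice_name
        return tag

    def release(self, tag):
        self.in_use.pop(tag, None)
        self.free.add(tag)

pool = VlanPool(range(2000, 2010))  # hypothetical tags 2000-2009
tag = pool.allocate("myslice")
pool.release(tag)
```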

Rack IP address assignment
- A /24 of publicly routable IP addresses is the best choice (see the example below)
- 2 addresses are assigned to elements of the rack:
  - Management/head node
  - SSG5 VPN appliance (to create a secure mesh for management access between racks)
- The rest are used to assign IP addresses to experimenter instances
  - VMs and hardware nodes
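The arithmetic of this scheme is easy to check with Python's ipaddress module; a sketch using the RFC 5737 documentation prefix 198.51.100.0/24 rather than any real rack assignment:

```python
import ipaddress

# Hypothetical public /24 for a rack (documentation prefix, RFC 5737).
rack_net = ipaddress.ip_network("198.51.100.0/24")
hosts = list(rack_net.hosts())       # 254 usable addresses

head_node = hosts[0]                 # management/head node
vpn_appliance = hosts[1]             # SSG5 VPN appliance
experimenter_pool = hosts[2:]        # VMs and bare-metal nodes

print("head node:     ", head_node)
print("VPN appliance: ", vpn_appliance)
print("experimenters: ", len(experimenter_pool), "addresses")
```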

Example rack connection (GPO/BBN)

Rack software
- CentOS 6.X base install
- Resource provisioning
  - xCAT for bare-metal provisioning
  - OpenStack + NEuca for VMs
  - FlowVisor; NOX used internally by ORCA
- GENI software
  - ORCA for VMs, bare metal and OpenFlow
  - FOAM for OpenFlow experiments
- Worker and head nodes can be reinstalled remotely via IPMI + Kickstart
  - Working on security related to updates
- Monitoring via Nagios (Check_MK); a local-check sketch follows
  - ExoGENI ops staff can monitor all racks
  - Site owners can monitor their own rack
- Syslogs collected centrally
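Check_MK lets a site drop "local checks" onto the agent, which Nagios then picks up. A minimal sketch of one such check; the monitored host/port is an assumed placeholder, since the deck does not specify which endpoints are polled:

```python
#!/usr/bin/env python3
# Sketch of a Check_MK local check: report whether the rack's ORCA
# container answers on its XML-RPC port. Host and port are assumed
# placeholders, not values taken from the deck.
import socket

ORCA_HOST, ORCA_PORT = "localhost", 11443

try:
    socket.create_connection((ORCA_HOST, ORCA_PORT), timeout=5).close()
    # Local-check output format: <state> <service> <perfdata> <detail>
    print("0 ORCA_container - ORCA XML-RPC port is reachable")
except OSError as err:
    print("2 ORCA_container - cannot reach ORCA:", err)
```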

Rack Software Stack

Rack installation
- Particulars:
  - Power options (negotiated ahead of time): 208V 3-phase, 208V 1-phase, or 120V 1-phase
    - Total of ~10kW of power needed
  - Space: 42U rack cabinet, 79.5" H x 25.5" W x 43.5" D (2020 mm x 648 mm x 1105 mm)
- Racks arrive on-site pre-assembled and pre-tested by IBM, with most software already pre-installed
- An IBM representative will need to come on-site to complete the install and hookup
  - NBD (next business day) hardware support
- ExoGENI Ops finishes the ORCA configuration
- GPO acceptance testing

Experimentation
- Compute nodes
  - Up to 100 VMs in each full rack
  - A few (2) bare-metal nodes
- True Layer 2 slice topologies can be created (see the request sketch after this list)
  - Within individual racks
  - Between racks
  - With automatic and user-specified resource binding and slice topology embedding
- OpenFlow experimentation
  - Within racks
  - Between racks
  - Including OpenFlow overlays in NLR (and I2)
  - On-ramp to the campus OpenFlow network (if available)
- Experimenters are allowed and encouraged to use their own virtual appliance images
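A Layer 2 topology request of this kind is expressed as a GENI v3 request RSpec. A sketch that builds a two-VM, one-link request with the standard library; the "m1.small" sliver type name is illustrative (a rack's advertisement RSpec lists the real values):

```python
# Sketch: minimal GENI v3 request RSpec for two VMs joined by an
# L2 link, the kind of topology ExoGENI instantiates.
import xml.etree.ElementTree as ET

NS = "http://www.geni.net/resources/rspec/3"
ET.register_namespace("", NS)

rspec = ET.Element(f"{{{NS}}}rspec", {"type": "request"})
for name in ("vm0", "vm1"):
    node = ET.SubElement(rspec, f"{{{NS}}}node",
                         {"client_id": name, "exclusive": "false"})
    ET.SubElement(node, f"{{{NS}}}sliver_type", {"name": "m1.small"})
    ET.SubElement(node, f"{{{NS}}}interface", {"client_id": f"{name}:if0"})

link = ET.SubElement(rspec, f"{{{NS}}}link", {"client_id": "link0"})
for name in ("vm0", "vm1"):
    ET.SubElement(link, f"{{{NS}}}interface_ref",
                  {"client_id": f"{name}:if0"})

ET.ElementTree(rspec).write("request.xml", xml_declaration=True,
                            encoding="utf-8")
```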

ExoGENI slice isolation
- Strong isolation is the goal
- Compute instances are KVM-based and get a dedicated number of cores
  - Caveat: currently all instances get 1 core (with differing RAM and disk); to be remedied by Summer/Fall 2012
- VLANs are the basis of connectivity
  - VLANs can be best-effort or bandwidth-provisioned (within and between racks)
  - Caveat: current hardware in the racks allows best-effort VLANs only; to be remedied by Fall 2012 with support from the vendor

ORCA Overview
- Originally developed by Jeff Chase and his students at Duke
- Funded as a control framework candidate for GENI
  - Jointly developed by RENCI and Duke for GENI since 2008
- Supported under several current NSF and DOE grants to enable ORCA to run computational networked clouds
- Fully distributed architecture
- Federated with GENI
  - We do not run SAs or issue GENI credentials
  - We honor GPO- and Emulab-issued credentials
- Supports the ORCA-native interface, resource specification and tools
  - Flukes
- Supports the GENI AM API and GENI RSpec (see the Omni sketch below)
  - Omni
  - (Almost) compatible with Gush
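Because the racks speak the GENI AM API, the usual Omni workflow applies. A sketch of driving Omni from Python, assuming omni.py is installed and configured; the aggregate URL is a placeholder, not a real SM endpoint:

```python
# Sketch: exercise an ExoGENI SM through Omni, the GENI AM API
# command-line client. The aggregate URL below is a placeholder.
import subprocess

AM_URL = "https://sm.example.net:11443/orca/xmlrpc"  # placeholder

def omni(*args):
    cmd = ["omni.py", "-a", AM_URL, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True)

omni("listresources")                           # what the rack advertises
omni("createsliver", "myslice", "request.xml")  # request a topology
```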

ORCA Deployment in ExoGENI
- Each rack runs its own SM actor that exposes:
  - The ORCA native API
  - The GENI AM API
- Rack-local SMs
  - Can only create slice topologies with resources within that rack
- The 'ExoSM' has global visibility (endpoint selection is sketched below)
  - Has access to resources in all racks
  - Has access to network backbone resources for stitching topologies between racks
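For experimenters this split boils down to endpoint selection. A small sketch, with hypothetical URLs, of picking the rack-local SM for single-rack slices and the ExoSM when stitching across racks:

```python
# Sketch: choose an ORCA SM based on slice scope.
# All URLs are hypothetical placeholders for real SM endpoints.
RACK_SMS = {
    "renci": "https://rci-sm.example.net:11443/orca/xmlrpc",
    "gpo":   "https://bbn-sm.example.net:11443/orca/xmlrpc",
}
EXO_SM = "https://exo-sm.example.net:11443/orca/xmlrpc"

def pick_sm(racks):
    """Single-rack slices can use that rack's SM; multi-rack slices
    need the ExoSM, which can also stitch backbone resources."""
    return RACK_SMS[racks[0]] if len(racks) == 1 else EXO_SM

assert pick_sm(["renci"]) == RACK_SMS["renci"]
assert pick_sm(["renci", "gpo"]) == EXO_SM
```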

ORCA deployment

The team
- Grand Pooh-bah: Jeff Chase
- ExoGENI Ops
  - Brad Viviano (RENCI): rack hardware design
  - Chris Heermann (RENCI): rack networking design
  - Victor Orlikowski (Duke): software packaging and configuration
  - Jonathan Mills (RENCI): operations and monitoring
- ORCA Development
  - Yufeng Xin (RENCI)
  - Aydan Yumerefendi (Duke)
  - Anirban Mandal (RENCI)
  - Prateek Jaipuria (Duke)
  - Victor Orlikowski (Duke)
  - Paul Ruth (RENCI)