Presentation transcript:

ExoGENI

Ilya Baldin
RENCI, UNC – Chapel Hill


 14 GPO-funded racks built by IBM
  ◦ Partnership between RENCI, Duke and IBM
  ◦ IBM x3650 M4 servers (X-series 2U)
     1x146GB 10K SAS hard drive + 1x500GB secondary drive
     48GB RAM at 1333MHz
     Dual-socket 8-core CPU
     Dual 1Gbps adapter (management network)
     10G dual-port Chelsio adapter (dataplane)
  ◦ BNT 10G/40G OpenFlow switch
  ◦ DS3512 6TB sliverable storage
     iSCSI interface for head-node image storage as well as experimenter slivering
  ◦ Cisco (UCS-B) and Dell configurations also exist
 Each rack is a small networked cloud
  ◦ OpenStack-based with NEuca extensions
  ◦ xCAT for bare-metal node provisioning
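To make the rack specs above concrete, here is a back-of-the-envelope sliverable-capacity sketch. The worker count and per-VM sliver size are illustrative assumptions, not ExoGENI's actual allocation policy.

```python
def rack_vm_capacity(workers, cores_per_worker, ram_gb_per_worker,
                     vm_cores, vm_ram_gb):
    """VMs per rack, limited by whichever resource runs out first."""
    by_cores = (workers * cores_per_worker) // vm_cores
    by_ram = (workers * ram_gb_per_worker) // vm_ram_gb
    return min(by_cores, by_ram)

# Hypothetical rack: 10 workers, dual-socket 8-core (16 cores) and
# 48 GB RAM each; a small sliver of 1 core / 4 GB RAM:
print(rack_vm_capacity(10, 16, 48, 1, 4))  # RAM-limited: 120 VMs
```

With these assumed numbers RAM, not cores, is the binding constraint, which is consistent with the slide's "up to 100 VMs per full rack" order of magnitude.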

 ExoGENI is a collection of off-the-shelf institutional clouds
  ◦ With a GENI federation on top
  ◦ xCAT – an IBM product
  ◦ OpenStack – a Red Hat product
 Operators decide how much capacity to delegate to GENI and how much to retain for themselves
 Familiar industry-standard interfaces (EC2)
 GENI interface
  ◦ Mostly does what GENI experimenters expect


 CentOS 6.X base install
 Resource provisioning
  ◦ xCAT for bare-metal provisioning
  ◦ OpenStack + NEuca Quantum for VMs
  ◦ FlowVisor
     Floodlight used internally by ORCA
 GENI software
  ◦ ORCA for VMs, bare metal and OpenFlow
  ◦ FOAM for OpenFlow experiments
 Worker and head nodes can be reinstalled remotely via IPMI + Kickstart
 Monitoring via Nagios (Check_MK)
  ◦ ExoGENI ops staff can monitor all racks
  ◦ Site owners can monitor their own rack
 Syslogs collected centrally

 IBM
  ◦ All GPO-funded racks, NICTA
 Cisco
  ◦ NCSU, WVnet
 Dell
  ◦ University of Amsterdam, several others in planning

 ExoGENI racks are separate aggregates but also act as a single aggregate
  ◦ Transparent stitching of resources from multiple racks
 ExoGENI is designed to bridge distributed experimentation, computational sciences and Big Data
  ◦ Already running HPC workflows linked to OSG and national supercomputers
  ◦ Newly introduced support for storage slivering
  ◦ Strong performance isolation is one of the key goals

 GENI tools: Flack, GENI Portal, omni
  ◦ Give access to common GENI capabilities
  ◦ Also mostly compatible with:
     ExoGENI native stitching
     ExoGENI automated resource binding
 ExoGENI-specific tools: Flukes
  ◦ Accepts GENI credentials
  ◦ Access to ExoGENI-specific features
     Elastic Cluster slices
     Storage provisioning
     Stitching to campus infrastructure
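The GENI tools listed above exchange resource requests as RSpec v3 XML documents. As a sketch of what such a request looks like and how a tool might inspect it, here is a minimal (illustrative, not taken from the slides) two-node request parsed with Python's standard library:

```python
import xml.etree.ElementTree as ET

# Minimal GENI RSpec v3 request: two nodes joined by one layer-2 link.
RSPEC = """<?xml version="1.0"?>
<rspec xmlns="http://www.geni.net/resources/rspec/3" type="request">
  <node client_id="vm0"/>
  <node client_id="vm1"/>
  <link client_id="lan0"/>
</rspec>"""

NS = {"r": "http://www.geni.net/resources/rspec/3"}
root = ET.fromstring(RSPEC)
nodes = [n.get("client_id") for n in root.findall("r:node", NS)]
links = [l.get("client_id") for l in root.findall("r:link", NS)]
print(nodes, links)  # ['vm0', 'vm1'] ['lan0']
```

Real requests carry much more detail (sliver types, interfaces, component manager IDs); this only shows the basic node/link shape.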


 Compute nodes
  ◦ Up to 100 VMs in each full rack
  ◦ A few (2) bare-metal nodes
  ◦ BYOI (Bring Your Own Image)
 True Layer 2 slice topologies can be created
  ◦ Within individual racks
  ◦ Between racks
  ◦ With automatic and user-specified resource binding and slice topology embedding
  ◦ Stitching across I2, ESnet, NLR, regional providers; dynamic wherever possible
 OpenFlow experimentation
  ◦ Within racks
  ◦ Between racks
  ◦ Includes OpenFlow overlays in NLR (and I2)
  ◦ On-ramp to campus OpenFlow network (if available)
 Experimenters are allowed and encouraged to use their own virtual appliance images
 Since Dec 2012
  ◦ slices
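A layer-2 slice topology like the ones described above is essentially a graph of compute nodes joined by VLAN links. The following toy model (names and structure are illustrative only, not ExoGENI's internal representation) shows the kind of sanity check a resource-binding step performs before embedding:

```python
def validate_slice(nodes, links):
    """True if every VLAN link connects two distinct declared nodes."""
    declared = set(nodes)
    return all(a in declared and b in declared and a != b
               for a, b in links)

# Hypothetical slice spanning two racks:
nodes = ["vm-rack1", "vm-rack2", "baremetal-rack1"]
links = [("vm-rack1", "vm-rack2"),          # inter-rack VLAN
         ("vm-rack1", "baremetal-rack1")]   # intra-rack VLAN
print(validate_slice(nodes, links))  # True
```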

(Architecture diagram)
 Virtual network exchange
 Virtual colo: campus net to circuit fabric
 Multi-homed cloud hosts with network control
 Computed embedding
 Workflows, services, etc.

 ExoGENI rack in Sydney, Australia
 Multiple VLAN tags on a pinned path from Sydney to LA
 Internet2/OSCARS ORCA-provisioned dynamic circuit
  ◦ LA, Chicago
 NSI statically pinned segment with multiple VLAN tags
  ◦ Chicago, NY, Amsterdam
  ◦ Planning to add a dynamic NSI interface
 ExoGENI rack in Amsterdam
  ◦ ~14,000 miles
  ◦ 120ms delay
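The quoted 120 ms delay over ~14,000 miles is close to the physical floor: light in fiber travels at roughly two-thirds of c, about 200,000 km/s. A quick check:

```python
C_FIBER_KM_S = 200_000   # approx. speed of light in fiber, km/s
MILES_TO_KM = 1.609344

def propagation_delay_ms(miles):
    """One-way propagation delay over a fiber path of this length."""
    return miles * MILES_TO_KM / C_FIBER_KM_S * 1000

d = propagation_delay_ms(14_000)
print(round(d, 1))  # ~112.7 ms
```

So propagation alone accounts for ~113 ms of the observed 120 ms; switching and queueing plausibly make up the rest.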

 Strong isolation is the goal
 Compute instances are KVM-based and get a dedicated number of cores
 VLANs are the basis of connectivity
  ◦ VLANs can be best-effort or bandwidth-provisioned (within and between racks)
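Bandwidth-provisioned VLANs imply an admission decision: a new reservation is only granted if the link can still honor all existing guarantees. A minimal sketch of that check, with made-up numbers and a simplified single-link policy (not ExoGENI's actual admission logic):

```python
def admit(existing_mbps, request_mbps, capacity_mbps=10_000):
    """Admit a new provisioned VLAN only if link capacity remains.

    existing_mbps: bandwidths already guaranteed on this 10 Gb/s link.
    """
    return sum(existing_mbps) + request_mbps <= capacity_mbps

print(admit([4000, 3000], 2000))  # True: 9 Gb/s total fits
print(admit([4000, 3000], 4000))  # False: would oversubscribe
```

Best-effort VLANs would bypass this check and simply share whatever capacity is left over.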

 Workflow management systems
  ◦ Pegasus, custom scripts, etc.
 Lack of tools to integrate with dynamic infrastructures
  ◦ Orchestrate the infrastructure in response to the application
  ◦ Integrate data movement with workflows for optimized performance
  ◦ Manage the application in response to the infrastructure
 Scenarios
  ◦ Computational with varying demands
  ◦ Data-driven with large static data set(s)
  ◦ Data-driven with large amounts of input/output data
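"Orchestrating infrastructure in response to the application" typically reduces to a feedback rule: watch an application signal (e.g. queue depth) and grow or shrink the provisioned resources within bounds. The thresholds below are invented for illustration:

```python
def target_workers(queued_jobs, jobs_per_worker=10, min_w=1, max_w=20):
    """Pick a worker count from queue depth, clamped to [min_w, max_w].

    -(-a // b) is ceiling division for non-negative integers.
    """
    return max(min_w, min(max_w, -(-queued_jobs // jobs_per_worker)))

print(target_workers(0))    # 1  (scale in when the queue is empty)
print(target_workers(95))   # 10 (ceil(95 / 10) workers)
print(target_workers(500))  # 20 (capped at max_w)
```

A workflow manager would run this periodically and ask the provisioning system (e.g. via slice modification) to converge on the target.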

462,969 Condor jobs run using the on-ramp to engage-submit3 (OSG)

Hardware-in-the-loop slices
Hardware-in-the-Loop Facility Using RTDS & PMUs (FREEDM Center, NCSU)

New GENI-WAMS testbed
 Latency & processing delays
 Packet loss
 Network jitter
 Cyber-security
  ◦ Man-in-the-middle attacks
Aranya Chakrabortty, Aaron Hurz (NCSU), Yufeng Xin (RENCI/UNC)

 ◦ ExoBlog