Regional Software Defined Science DMZ (SD-SDMZ)


Regional Software Defined Science DMZ (SD-SDMZ)
Supercomputing 2017, November 12-17, 2017, Denver, Colorado
Tom Lehman, Director of Research, UMD/MAX
Xi Yang, Senior Research Scientist, UMD/MAX

Team and Research Projects

UMD/MAX Team:
- Tom Lehman
- Xi Yang
- Alberto Jimenez
- Multiple students

Results from several research projects, including:
- High Performance Computing with Data and Networking Acceleration (HPCDNA)
- Resource Aware Intelligent Network Services (RAINS)
- GENI Enabled Software Defined Exchange (SDX)

Software Defined Science DMZ

Traditional SDMZ:
- Bare metal Data Transfer Nodes (DTNs), perfSONAR nodes, manual switch/router control

SD-SDMZ:
- Local compute cluster, OpenStack, Ceph, SDN network control
- On-demand, scalable, traditional services and advanced hybrid cloud services

Regional Based Software Defined ScienceDMZ
- A regional resource serving the MAX participant community.
- Built upon technologies similar to those used in modern data centers, this "cloudified SDMZ" provides a scalable, multi-tenant environment.
- Embedding this resource in the regional network allows the SD-SDMZ to leverage rich connectivity to advanced cyberinfrastructure and to provide new and flexible services.

UMD SD-SDMZ

SD-SDMZ Services: Advanced Hybrid Cloud (AHC) Service

On-demand, application-specific hybrid topologies which include one or more of the following:
- Local OpenStack virtual machines (with SR-IOV interfaces to network and storage)
- Dedicated local Ceph storage resources and connections
- Integrated AWS resources (Virtual Private Cloud (VPC) or public):
  - User-controlled AWS resources, or SD-SDMZ facility-provided AWS resources (EC2, storage, S3 endpoints)
- Network connections:
  - AWS Direct Connect integration for access to AWS public or VPC resources
  - Layer 2/Layer 3 external connections across Internet2, ESnet, and others
- Customized topology templates for individual user requirements

Future:
- Service connections/integration with other public cloud infrastructures
- Service connections/integration with other R&E clouds, HPC, data repositories, etc.
- Schedulable services
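To make the AWS side of such a topology concrete, here is a minimal sketch of the VPC plumbing the AHC service automates, written against boto3. The region, CIDRs, and AMI ID are placeholder assumptions, and the actual service drives this through its orchestration engine rather than user scripts.

```python
# Sketch of the AWS-side resources in an AHC hybrid topology (boto3).
# Region, CIDRs, and AMI ID below are placeholders, not facility values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# VPC and subnet for the AWS segment of the hybrid topology.
vpc_id = ec2.create_vpc(CidrBlock="10.1.0.0/16")["Vpc"]["VpcId"]
subnet_id = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.1.1.0/24")["Subnet"]["SubnetId"]

# Virtual private gateway; the Direct Connect side terminates against this.
vgw_id = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]["VpnGatewayId"]
ec2.attach_vpn_gateway(VpnGatewayId=vgw_id, VpcId=vpc_id)

# An EC2 instance inside the topology (placeholder AMI).
ec2.run_instances(ImageId="ami-xxxxxxxx", InstanceType="t2.micro",
                  MinCount=1, MaxCount=1, SubnetId=subnet_id)
```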

SD-SDMZ Services: Data Transfer Node (DTN) Service
- SDMZ DTN service for data movement to/from HPC and other systems
- Built using the AHC Service (local OpenStack virtual machines with SR-IOV interfaces to network and Ceph storage)
- Globus endpoints
- Dataplane integration with HPC file systems (via IB/Ethernet gateway):
  - HPC system compute nodes mount SD-SDMZ CephFS
- On-demand, scalable DTN infrastructure (dedicated DTN nodes available on a per-project or per-user basis)
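As a concrete illustration of the CephFS integration above, mounting the SD-SDMZ file system on an HPC compute node looks roughly like the following sketch; the monitor host, client name, secret file, and mount point are placeholder assumptions.

```python
# Sketch: mount the SD-SDMZ CephFS share on an HPC compute node.
# Monitor address, client name, and paths are placeholders.
import subprocess

subprocess.run(
    ["mount", "-t", "ceph",
     "ceph-mon.example.org:6789:/",   # Ceph monitor (placeholder)
     "/mnt/sd-sdmz-cephfs",           # local mount point (placeholder)
     "-o", "name=hpcclient,secretfile=/etc/ceph/hpcclient.secret"],
    check=True,  # raise if the mount fails
)
```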

SD-SDMZ Services: Data Movement Systems
- Globus Data Transfer: https://www.globus.org/data-transfer
- Fast Data Transfer (FDT): https://github.com/fast-data-transfer/fdt
- Aspera: http://asperasoft.com
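For example, a transfer between two Globus endpoints can be scripted with the globus_sdk Python library. A minimal sketch follows; the endpoint UUIDs, paths, and access token are placeholders, and authentication setup is elided.

```python
# Sketch of a Globus transfer between two DTN endpoints (globus_sdk).
# Endpoint UUIDs, paths, and the access token are placeholders.
import globus_sdk

authorizer = globus_sdk.AccessTokenAuthorizer("TRANSFER_ACCESS_TOKEN")
tc = globus_sdk.TransferClient(authorizer=authorizer)

tdata = globus_sdk.TransferData(
    tc, "SOURCE-ENDPOINT-UUID", "DEST-ENDPOINT-UUID",
    label="SD-SDMZ DTN transfer")
tdata.add_item("/data/source/file.dat", "/data/dest/file.dat")

task = tc.submit_transfer(tdata)
print("Submitted transfer task:", task["task_id"])
```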

Our Approach and Solution
- Multi-Resource Orchestration: integrating and orchestrating the network and network services with the things that attach to the network: compute, storage, clouds, and instruments.
- Model Driven: using models to describe resources in order to allow integrated reasoning, abstraction, and user-centered services.
- Intelligent Computation Services: model-driven, multi-resource computation services that enable orchestration services in response to high-level user requests.

We want to "Orchestrate the Automaters".

SD-Science DMZ Hardware/Software

Brocade MLXe (OpenFlow 1.3 capable):
- 8x100G ports
- 4x40G ports
- 24x10G ports
- 48x1G ports

Cisco Unified Computing System (UCS):
- 12 compute blades, dual socket, multi-core
- 2x2 redundant Fabric Interconnects with FEX technology
- 2x Nexus 3548 OpenFlow-capable switches
- Running OpenStack Liberty

Ceph (Luminous)/Ethernet high performance file system:
- 6 Object Storage Devices (OSDs) at 36 TB each (12x3 TB drives)
- Approximately 200 TB of high performance cache storage in total
- Each OSD chassis:
  - 2U chassis, 2 Intel Xeon E5-2609 quad-core 2.4 GHz CPUs
  - 8 GB memory
  - LSI MegaRAID 9280-16i4 SAS 6 Gb/s PCIe RAID card
  - Dual-port 10 GbE NIC
  - 12x 3 TB SATA 6 Gb/s hard drives

StackV Software Architecture: Conceptual View

UMD/MAX-developed StackV is an open source, model driven orchestration system: github.com/MAX-UMD/stackV.community
- Native StackV application (northbound) API
- Access via the GENI Aggregate Manager API
- Multiple drivers, southbound APIs
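As a rough illustration of how a client might drive a northbound service API of this kind, consider the following sketch. The endpoint URL, payload schema, and authentication are invented for illustration and do not reflect the actual StackV API.

```python
# Hypothetical client call to a StackV-style northbound REST API.
# Endpoint path, payload fields, and auth are invented for illustration;
# they are not the real StackV API.
import requests

service_request = {
    "type": "AdvancedHybridCloud",   # hypothetical service type
    "vm_count": 2,
    "aws_vpc_cidr": "10.1.0.0/16",
    "external_vlan": 1234,
}

resp = requests.post(
    "https://stackv.example.org/api/service",  # hypothetical URL
    json=service_request,
    headers={"Authorization": "Bearer <token>"},
    timeout=30,
)
resp.raise_for_status()
print("Service instance:", resp.json())
```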

Model Driven Orchestration
- Modeling schemas are based on the OGF Network Markup Language (NML).
- Developed extensions to allow resources which are connected to the network to also be modeled: the Multi-Resource Markup Language (MRML), https://github.com/MAX-UMD/nml-mrml
- Utilizes the Resource Description Framework (RDF) and OWL 2 Web Ontology Language W3C specifications.
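For a sense of what working with these models looks like, here is a minimal sketch that loads an NML/MRML topology with rdflib and walks its ports. The model file name is a placeholder; the NML namespace is the published OGF one.

```python
# Sketch: load and inspect an NML/MRML topology model with rdflib.
# The model file name is a placeholder.
from rdflib import Graph, Namespace, RDF

NML = Namespace("http://schemas.ogf.org/nml/2013/03/base#")

g = Graph()
g.parse("sd-sdmz-model.ttl", format="turtle")  # placeholder model file

# Enumerate topologies and their bidirectional ports.
for topo in g.subjects(RDF.type, NML.Topology):
    print("Topology:", topo)
    for port in g.objects(topo, NML.hasBidirectionalPort):
        print("  port:", port)
```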

SD-SDMZ Demonstration: Advanced Hybrid Cloud Orchestration

[Diagram: a hybrid topology spanning the SD-SDMZ and AWS. OpenStack compute nodes on the UCS fabric host VMs (VM-x-1 ... VM-x-m, VM-y-1 ... VM-y-k) with SR-IOV profiles and Ceph RBD storage; a Quagga-based virtual router speaks BGP across an SDN/VLAN path and AWS Direct Connect to a virtual private gateway (vgw) and VPC subnet on the AWS side.]

SD-SDMZ Demonstration: Advanced Hybrid Cloud

Demonstration video: https://www.youtube.com/watch?v=osfRJhWHakA
See us for an individual demonstration.

Thanks

Example SD-SDMZ Use Cases
- Global Land Cover Facility (GLCF): hybrid cloud topology to facilitate data download from R&E and AWS S3 locations to the local HPC file system.
- Pan-STARRS Astronomy: local compute/storage resources to facilitate download and inline processing of telescope data.
- Large Scale Geosimulation: hybrid cloud topology to facilitate Hadoop cluster setup with local nodes and scalable bursting into AWS.