Daniel Murphy-Olson, Ryan Aydelott
Integrating HPC, Cloud, and Containers for Data-Intensive Computing Platforms

Daniel Murphy-Olson (d@anl.gov)1, Ryan Aydelott1
1 Computing, Environment and Life Sciences Directorate, Argonne National Laboratory

ABSTRACT

Argonne provides a broad portfolio of computing resources to researchers. Since 2011 we have offered a cloud computing resource, primarily based on OpenStack. Over the last year we have been working to better support containers in the context of HPC. Several of our operating environments now combine the three technologies, providing infrastructure tailored to the needs of each specific workload. This poster summarizes some of our experiences integrating HPC, cloud, and container environments.

OVERVIEW

Magellan is a 717-node OpenStack cluster with heterogeneous hardware resources. The system has two interconnects: gigabit Ethernet, used for instance-to-instance communication, and QDR InfiniBand, used for storage connectivity.

Jupiter is a traditional HPC cluster with GPFS storage. It also has two interconnects, gigabit Ethernet and QDR InfiniBand, with QDR InfiniBand used for both node-to-node communication and storage connectivity.
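Tenant connectivity on the Magellan side is built from OpenStack networks. As a rough illustration only (the network name, VLAN segment, physical-network label, and subnet range below are hypothetical, not the production values), a VLAN-backed provider network for storage traffic could be created with the standard OpenStack CLI:

```shell
# Hypothetical sketch: create a VLAN-backed provider network that tenant
# instances can use to reach storage exports. All names/IDs illustrative.
openstack network create storage-net \
    --provider-network-type vlan \
    --provider-physical-network physnet1 \
    --provider-segment 200

# Subnet on that network; the NFS server serving GPFS data would sit
# in this range. DHCP is disabled since addresses are managed statically.
openstack subnet create storage-subnet \
    --network storage-net \
    --subnet-range 10.20.0.0/24 \
    --no-dhcp
```

Because the segment is a distinct VLAN, traffic on this network is isolated from other tenants at layer 2, which is what makes the simple IP-based export controls described below workable.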
INTEGRATION OF MAGELLAN AND JUPITER

- An InfiniBand-to-Ethernet gateway (Mellanox SX6036G) is the primary link between the two systems.
- NFS is exported from the GPFS server to tenant networks.
- Docker-based applications, HPC jobs, and cloud images all have access to the same GPFS data store.
- Tenant users have full access to the files exported from the GPFS server.
- VLANs enforce separation between tenants and the various exports from the GPFS server.
- Projects commonly use group access controls on files, so this level of access control is sufficient to maintain data security.

CONCLUSIONS

- It is possible to architect integrated environments that provide both cloud and HPC-like resources.
- Simple IP-based access controls, combined with tenant networks, can allow VM access to HPC data stores.
- With this model, system users can run their own images and directly access HPC data stores without needing to transfer data between systems.

NEXT STEPS

- Improve performance between OpenStack instances and the GPFS NFS exports.
- Link scheduling between application portals on Jupiter and Magellan.
- OpenStack provisioning: evaluate provisioning elements of Jupiter with OpenStack.
- Experiment with additional HPC data stores (Lustre, GlusterFS).
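The per-tenant NFS exports from the GPFS server, as described above, can be sketched as an /etc/exports fragment. The paths, tenant subnets, and export options here are hypothetical stand-ins, not Argonne's actual configuration:

```shell
# /etc/exports on the NFS server fronting GPFS (hypothetical values).
# Each tenant network gets its own export line; because VLANs keep the
# tenant subnets separated, a plain IP-range match per export suffices.
/gpfs/projects/tenantA  10.20.1.0/24(rw,sync,no_subtree_check,root_squash)
/gpfs/projects/tenantB  10.20.2.0/24(rw,sync,no_subtree_check,root_squash)

# After editing, re-read the export table without restarting the server:
# exportfs -ra
```

Within each tenant's export, POSIX group permissions on the files themselves provide the finer-grained access control the projects rely on.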