Supporting the UH Research Mission with HPC and Data Services
Ron Merrill, merrill@hawaii.edu

What is HPC?
High Performance Computing (HPC): scales across many nodes; needs a low-latency network.
High Throughput Computing (HTC): scales out; each node is independent, running the same code on different data.
Advanced Computing: bigger than your laptop; includes HPC, HTC, cloud, storage, visualization, etc.
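To make the HTC pattern concrete, here is a minimal Python sketch (not from the original deck) of "same code, different data": each task is independent and selects its input by a task index, the way an array-job scheduler hands one out. The input names and the processing function are hypothetical.

```python
import os

# Hypothetical independent inputs; on the cluster these would typically be
# files staged on the shared (e.g. Lustre) filesystem.
INPUTS = ["sample_A.dat", "sample_B.dat", "sample_C.dat"]

def process(path: str) -> str:
    """Stand-in for the real analysis; each task touches only its own input."""
    return f"processed {path}"

if __name__ == "__main__":
    # Array-job schedulers usually expose the task index through an environment
    # variable (SLURM uses SLURM_ARRAY_TASK_ID); fall back to 0 for local testing.
    task_id = int(os.environ.get("SLURM_ARRAY_TASK_ID", "0"))
    print(process(INPUTS[task_id % len(INPUTS)]))
```

Because the tasks never talk to each other, they can land on any free nodes; an HPC job, by contrast, exchanges data mid-run and is the one that needs the low-latency interconnect.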

Advanced Computing at UH ITS
We use a condo model for campus computing: the base system is free to all UH students, staff, and affiliates, and PIs can purchase nodes or time for priority access.
The condo model is common at many US research universities. The alternative, departmental and lab-scale resources, is also common and co-exists with it here at UH.

Condo-nomics
Sharing economy: the condo model is most like Airbnb. It increases efficiency by increasing asset utilization, expands access to goods, and provides income to owners, all enabled by low transaction costs. Airbnb is enabled by the web; the condo model is enabled by job schedulers such as SLURM.
The owner-shared nodes are like an Airbnb where you stay for free; then the bears come home and quickly kick you to the curb. The income to owners is free scratch space, free sysadmin, free procurement, and free hosting. Kill-queue users whose jobs checkpoint, or are short enough, still end up making progress, as sketched below.
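As an illustration of how checkpointing lets kill-queue jobs make progress, here is a minimal Python sketch (not part of the original deck): the job periodically saves its state and, if preempted, the next submission resumes from the last checkpoint. The checkpoint path, the toy computation, and the assumption that the scheduler sends SIGTERM with a short grace period before killing a preempted job are all illustrative.

```python
import json
import os
import signal
import sys

# Hypothetical checkpoint file; in practice it would live on scratch space.
CHECKPOINT = "checkpoint.json"
TOTAL_STEPS = 1_000_000

# Resume from a previous (possibly preempted) run if a checkpoint exists.
if os.path.exists(CHECKPOINT):
    with open(CHECKPOINT) as f:
        state = json.load(f)
else:
    state = {"step": 0, "total": 0.0}

def save_and_exit(signum, frame):
    # Schedulers generally send SIGTERM before killing a preempted job,
    # so write the current state and exit cleanly.
    with open(CHECKPOINT, "w") as f:
        json.dump(state, f)
    sys.exit(0)

signal.signal(signal.SIGTERM, save_and_exit)

for step in range(state["step"], TOTAL_STEPS):
    state["total"] += step * step          # stand-in for the real computation
    state["step"] = step + 1
    if state["step"] % 100_000 == 0:       # periodic checkpoint as well
        with open(CHECKPOINT, "w") as f:
            json.dump(state, f)

print("done, total =", state["total"])
```

Each resubmission picks up where the previous run stopped, so even short stays on owner nodes add up to finished work.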

Cray CS-300
10/2014 to 4/14/2015: $1.79M; delivered, accepted, early adopters.
4/15/2015: open to all UH students, staff, and affiliates.
Hardware: 3800 cores / 28.9 TB RAM
178 nodes, 20 cores/node, 128 GB RAM
6 nodes, 40 cores/node, 1024 GB RAM
582 TB Lustre filesystem
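As a quick consistency check (not on the original slide), the totals follow from the node counts: \( 178 \times 20 + 6 \times 40 = 3800 \) cores, and \( 178 \times 128\,\mathrm{GB} + 6 \times 1024\,\mathrm{GB} = 28{,}928\,\mathrm{GB} \approx 28.9\,\mathrm{TB} \) of RAM.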

Cray CS-300

March 2016 HPC Upgrade
8 PIs, $712k
Hardware:
33 nodes, 20 cores / 256 GB RAM
58 nodes, 24 cores / 128 GB RAM
1 GPU node, 20 cores / 128 GB RAM, 2x NVIDIA K40
Core count: 3800 -> 5872 (+54%)

Cray CS-300

Rear View Layout. The white nodes are the original standard nodes.

HPC

Fall 2016 HPC Upgrade
4 PIs, $155k
Hardware:
8 nodes, 24 cores / 256 GB RAM
1 large-memory node, 72 cores / 1024 GB RAM
2 nodes, 24 cores / 128 GB RAM
1 GPU node, 20 cores / 128 GB RAM, 2x NVIDIA K40
Core count: 3800 -> 5872 (+54%) -> 6204 (+63%) -> ~6828 (+80%)

So far…
280 UH researchers, faculty, and students have attended onboarding training and received accounts.
The UH ITS HPC cluster has delivered 39 million CPU hours across 783,000 compute jobs.

Research Data Services
Data Management Services: enable workflows by integrating storage and compute assets; relational and NoSQL database design; sharing and distribution.
Data Management Plans: a federal funding requirement.
Science Gateways: domain-focused data repositories.

Storage: Data Services Foundation
Value Storage: NetApp NAS filer, available only to UH-HPC users for a fee; CIFS deployment on hold.
OwnCloud: “an open source, self-hosted file sync and share app platform,” 10 years old; deployment to EPSCoR and other early adopters.

CI People
Gwen Jacobs, Director of Cyberinfrastructure (CI)
Sean Cleveland, CI Research Scientist
David Schanzenbach, Lead Software Architect
Michelle Choe, EPSCoR Program Assistant
TBD, CI Software Engineer

Thank you!