Buying into “Summit” under the “Condo” model

Buying into “Summit” under the “Condo” model
Pat Burns, VP for IT
Rick Casey, HPC Manager
H.J. Siegel, Chair of the ISTeC MAC

Agenda
1. Welcome, introductions – HJ
2. Review NSF Summit award – Pat
3. Review Summit configuration – Rick
4. Present “Condo” buy-in models – Rick
5. Q&A – HJ

Welcome from ISTeC
ISTeC: CSU’s Information Science & Technology Center
Current HPC at CSU: ISTeC Cray, in service since 9/09 ($630K NSF grant)
New NSF award to ISTeC for greatly enhanced HPC
Thanks to those who helped us secure this award through ISTeC

The Joint NSF MRI Award
NSF MRI proposal submitted under the RMACC (http://www.rmacc.org)
Joint award to CSU and CU: 450 TFLOPS (~#200 on the Top500)
CSU: $850k (23%); CU: $2.5m (67%); 10% of cycles offered to RMACC participants
Giant RFP process: award to Dell/DDN for the “Summit” system
Housed and operated at CU (CSU fiber-connected), at no cost to CSU
Undergoing final acceptance testing now
Limited opportunity to buy into the system, subsidized for common infrastructure

Summary of Benefits
Great pricing via a large-scale purchase
Zero operational costs
Hardware subsidy
Central user and application support

Summit: Schematic Rack Layout
[Schematic: 1 storage rack and 7 compute racks]
Storage rack: 1 PB scratch GPFS on DDN SFA14K
Compute racks: Intel Haswell nodes (376), Nvidia K80 GPU nodes (10), Intel Knights Landing Phi nodes (20), HiMem nodes (5) with 2 TB RAM/node
Also: Ethernet management nodes, OmniPath leaf nodes, OPA fabric, gateway nodes
Note: actual rack layout may differ from this schematic

CPU Nodes
376 CPU nodes: Dell PowerEdge C6320
9,024 total Intel Haswell CPU cores
4 nodes (96 cores) per chassis
200 GB SATA SSD / chassis
2x Intel Xeon E5-2680 v3, 2.5 GHz, per node
24 CPU cores / node
128 GB RAM / node
5.3 GB RAM / CPU core
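The totals on this slide follow directly from the per-node specs; here is a minimal Python sanity check (assuming the 12-core E5-2680 v3 part listed above):

```python
# Sanity-check the CPU-node totals quoted on the slide.
nodes = 376
cores_per_node = 2 * 12          # 2x Intel Xeon E5-2680 v3, 12 cores each
ram_per_node_gb = 128

total_cores = nodes * cores_per_node
ram_per_core_gb = ram_per_node_gb / cores_per_node

print(f"Total CPU cores: {total_cores:,}")           # 9,024
print(f"RAM per core:    {ram_per_core_gb:.1f} GB")  # ~5.3 GB
```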

GPU Nodes
10 GPU nodes: Dell PowerEdge C4130
99,840 total GPU cores
2x Nvidia K80 GPU cards / node
2x Intel Xeon E5-2680 v3, 2.5 GHz / node
200 GB SATA SSD / node
24 CPU cores / node
128 GB RAM / node
5.3 GB RAM / CPU core
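The 99,840 figure is not derived on the slide; it follows from Nvidia's published spec of 4,992 CUDA cores per Tesla K80 card (a quick check):

```python
# Where the 99,840 GPU-core figure comes from: each Tesla K80 card
# carries 4,992 CUDA cores (two GK210 GPUs x 2,496 cores each).
gpu_nodes = 10
k80_cards_per_node = 2
cuda_cores_per_k80 = 4_992

total_gpu_cores = gpu_nodes * k80_cards_per_node * cuda_cores_per_k80
print(f"Total GPU cores: {total_gpu_cores:,}")  # 99,840
```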

HiMem Nodes
5 HiMem nodes: Dell PowerEdge R930
4x Intel Xeon E7-4830 v3, 2.1 GHz / node
2 TB DDR4 RAM / node
48 CPU cores / node
42 GB RAM / CPU core
200 GB SAS SSD / node
12 TB SAS HDD / node

Interconnect
Intel OmniPath interconnect
100 Gbit/s (12.5 GB/s) bandwidth per link
Fat-tree topology, 2:1 blocking

Storage
1 Petabyte (PB) of scratch storage
DDN SFA14K block storage appliance
GRIDScaler (GPFS integration)
Direct native connection to OmniPath

Access to Summit (In Process)
Via CSU’s fiber infrastructure; identical for CU and CSU users
Simple account application required
Start-up (small) allocations are “automatic”
Goal is to support widespread usage
Larger allocations will be granted according to need
Limited “whole machine” runs will be available
Authentication will require eID/Duo
File transfers will require Globus (see the sketch below)
Transfers are recommended from CSU’s Research DMZ network/storage
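Because Globus will be required for file transfers, a scripted transfer from a lab machine to Summit might look like the following minimal sketch using the Globus Python SDK; the client ID, endpoint UUIDs, and paths are placeholders, not actual Summit values:

```python
# Minimal sketch of a scripted Globus transfer with the globus-sdk package
# (pip install globus-sdk). Client ID, endpoint UUIDs, and paths are
# placeholders -- substitute your own values and the Summit endpoint ID.
import globus_sdk

CLIENT_ID = "YOUR-NATIVE-APP-CLIENT-ID"     # placeholder
SRC_ENDPOINT = "SOURCE-ENDPOINT-UUID"       # e.g., your lab's endpoint
DST_ENDPOINT = "SUMMIT-ENDPOINT-UUID"       # placeholder for Summit

# Interactive native-app login: open the printed URL, paste back the code.
auth_client = globus_sdk.NativeAppAuthClient(CLIENT_ID)
auth_client.oauth2_start_flow()
print("Log in at:", auth_client.oauth2_get_authorize_url())
auth_code = input("Paste the authorization code here: ").strip()
tokens = auth_client.oauth2_exchange_code_for_tokens(auth_code)
transfer_token = tokens.by_resource_server["transfer.api.globus.org"]["access_token"]

# Submit a one-way transfer of a single directory.
tc = globus_sdk.TransferClient(
    authorizer=globus_sdk.AccessTokenAuthorizer(transfer_token)
)
task = globus_sdk.TransferData(tc, SRC_ENDPOINT, DST_ENDPOINT, label="To Summit")
task.add_item("/local/data/run01/", "/scratch/user/run01/", recursive=True)
result = tc.submit_transfer(task)
print("Submitted transfer, task id:", result["task_id"])
```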

“Condo” Model Buy-in
Buy-in is in units of chassis; a CPU chassis has 4 nodes (groups may band together to buy one)
Allocations will equal 8,760 hrs./year x purchased size
Ex.: 1 CPU node purchased, allocation = 8,760 x 24 core-hrs./yr. (see the worked sketch below)
All resources are shared when available: jobs can scale up to larger sizes
All shared, common elements are subsidized, until the $$$ run out:
Power, cooling, data center space, staff
Power distribution units (PDUs)
Ethernet switches
OmniPath common fabric (you must purchase an OmniPath card for each node)
OmniPath cabling
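A worked version of the allocation example above, a minimal sketch using the 24 cores/node and 4 nodes/chassis figures from the earlier slides:

```python
# Worked example of the condo allocation formula:
# allocation (core-hours/year) = 8,760 hours/year x cores purchased.
HOURS_PER_YEAR = 8_760
CORES_PER_CPU_NODE = 24
NODES_PER_CPU_CHASSIS = 4

one_node = HOURS_PER_YEAR * CORES_PER_CPU_NODE
one_chassis = one_node * NODES_PER_CPU_CHASSIS

print(f"1 CPU node:    {one_node:,} core-hours/year")     # 210,240
print(f"1 CPU chassis: {one_chassis:,} core-hours/year")  # 840,960
```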

Deadlines (for both CU and CSU)
Nov. 10: commitments due
Send commitments (specs and account numbers) to Richard.Casey@colostate.edu, (970) 980-5975
PO by 12/1

Excel Spreadsheet Discussed

Q&A Most Welcome
Thank You!