
1 Buying into “Summit” under the “Condo” model
Pat Burns, VP for IT
Rick Casey, HPC Manager
H.J. Siegel, Chair of the ISTeC MAC

2 Agenda
1. Welcome, introductions – HJ
2. Review NSF Summit award – Pat
3. Review Summit configuration – Rick
4. Present “Condo” buy-in models – Rick
5. Q&A – HJ

3 Welcome from ISTeC
ISTeC: CSU's Information Science & Technology Center
Current HPC at CSU: ISTeC Cray, in service since 9/09 ($630K NSF grant)
NEW NSF award to ISTeC for greatly enhanced HPC
Thanks to those who helped us get this award through ISTeC

4 The Joint NSF MRI Award
NSF MRI proposal submitted under the RMACC
Joint award to CSU and CU: 450 TFLOPs (~#200 on the Top500 list)
CSU: $850K (23%); CU: $2.5M (67%)
10% of cycles offered to RMACC participants
Giant RFP process; award to Dell/DDN: the “Summit” system
Housed and operated at CU (CSU fiber-connected), at no cost to CSU
Undergoing final acceptance testing now
Limited opportunity to buy into the system, subsidized for common infrastructure

5 Summary of Benefits
Great pricing via large-scale purchase
Zero operational costs
Hardware subsidy
Central user and application support

6 Summit: Schematic Rack Layout
[Schematic: one storage rack plus compute racks 1–7. Recoverable contents:]
Storage rack: 1 PB scratch GPFS on DDN SFA14K
Intel Haswell CPU nodes (376)
Nvidia K80 GPU nodes (10)
Intel Knights Landing Phi nodes (20)
HiMem nodes (5), 2 TB RAM / node
Gateway nodes, Ethernet management nodes, OmniPath (OPA) fabric and leaf nodes
Note: actual rack layout may differ from this schematic

7 CPU Nodes
376 CPU nodes: Dell PowerEdge C6320
9,024 total Intel Haswell CPU cores
4 nodes (96 cores) per chassis
200 GB SATA SSD / chassis
2X Intel Xeon E5-2680 v3, 2.5 GHz, per node
24 CPU cores / node
128 GB RAM / node
5.3 GB RAM / CPU core

8 GPU Nodes
10 GPU nodes: Dell PowerEdge C4130
99,840 total Nvidia GPU cores
2X Nvidia K80 GPU cards / node
2X Intel Xeon E5-2680 v3, 2.5 GHz / node
200 GB SATA SSD / node
24 CPU cores / node
128 GB RAM / node
5.3 GB RAM / CPU core

9 HiMem Nodes
5 HiMem nodes: Dell PowerEdge R930
4X Intel Xeon E7-4830 v3, 2.1 GHz
2 TB RAM / node (DDR4)
48 CPU cores / node
42 GB RAM / CPU core
200 GB SAS SSD / node
12 TB SAS HDD / node
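
The per-node figures on slides 7–9 multiply out to the system totals quoted above. A minimal sketch in Python makes the arithmetic explicit; the node counts and per-node specs come from these slides, while the 4,992 CUDA cores per K80 card is an assumed vendor spec not stated on the slide:

    # Tally Summit's cores and RAM from the per-node specs on slides 7-9.
    nodes = {
        # name:  (count, cpu_cores_per_node, ram_gb_per_node)
        "cpu":   (376, 24, 128),
        "gpu":   (10,  24, 128),
        "himem": (5,   48, 2048),
    }

    total_cpu_cores = sum(count * cores for count, cores, _ram in nodes.values())
    total_ram_gb    = sum(count * ram for count, _cores, ram in nodes.values())
    gpu_cores = 10 * 2 * 4992  # GPU nodes x K80 cards/node x cores/card (assumed spec)

    print(f"CPU cores: {total_cpu_cores:,}")           # 9,504 (9,024 on the CPU nodes)
    print(f"Total RAM: {total_ram_gb / 1024:.1f} TB")  # ~58.2 TB
    print(f"GPU cores: {gpu_cores:,}")                 # 99,840, matching slide 8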

10 Interconnect
Intel OmniPath (OPA) interconnect
100 Gbit / sec. bandwidth per port
Fat-tree topology, 2:1 blocking
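
For readers new to blocking factors: in a fat tree with 2:1 blocking, each leaf switch has half as much uplink bandwidth toward the core as downlink bandwidth toward its nodes, so jobs spanning multiple leaves can see reduced bandwidth when the fabric is busy. A minimal sketch of the arithmetic; the port counts are illustrative assumptions, not Summit's actual switch configuration:

    link_gbps  = 100                 # OmniPath line rate per port (slide 10)
    down_ports = 32                  # leaf-switch ports facing nodes (assumed)
    up_ports   = down_ports // 2     # 2:1 blocking: half as many uplinks

    oversubscription = down_ports / up_ports         # 2.0
    worst_case_gbps  = link_gbps / oversubscription  # 50 Gbit/s per node
    print(f"{oversubscription:.0f}:1 blocking -> worst case "
          f"{worst_case_gbps:.0f} Gbit/s per node across leaf switches")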

11 Storage
1 Petabyte (PB) scratch storage
DDN SFA14K block storage appliance
GRIDScaler (GPFS integration)
Direct native connection to OmniPath

12 Access to Summit (In Process)
Via CSU’s fiber infrastructure; identical for CU and CSU users
Simple account application required
Start-up (small) allocations are “automatic”; the goal is to support widespread usage
Larger allocations will be granted according to need
Limited “whole machine” runs will be available
Will require eID/Duo for authentication
Will require Globus for file transfers
Access recommended from CSU’s Research DMZ network/storage

13 “Condo” Model Buy-in
Buy in in units of chassis
A CPU chassis has 4 nodes (band together to buy?)
Allocations equal 8,760 hrs./year x purchased size (see the sketch after this list)
Ex.: 1 CPU node purchased → allocation = 8,760 x 24 = 210,240 core-hrs./yr.
All resources are shared when available: scale up to larger sizes
All shared, common elements are subsidized, until $$$ run out:
Power, cooling, data center space, staff
Power distribution units (PDUs)
Ethernet switches
OmniPath common fabric (you must purchase a card for each node)
OmniPath cabling
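
A minimal sketch of the allocation arithmetic above, assuming the core counts from slides 7–9; the function and node-type names are illustrative, not part of the actual buy-in paperwork:

    # Core-hour allocation under the condo model: hours in a year times
    # the CPU cores purchased (slide 13).
    HOURS_PER_YEAR = 8_760   # 365 days x 24 hours

    CORES_PER_NODE = {"cpu": 24, "gpu": 24, "himem": 48}  # from slides 7-9

    def annual_allocation(node_type: str, nodes_purchased: int) -> int:
        """Core-hours per year for a given purchase."""
        return HOURS_PER_YEAR * CORES_PER_NODE[node_type] * nodes_purchased

    # The slide's example: one CPU node -> 210,240 core-hours per year.
    print(annual_allocation("cpu", 1))   # 210240
    # A full 4-node CPU chassis:
    print(annual_allocation("cpu", 4))   # 840960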

14 Deadlines (for both CU and CSU)
Nov. 10 for commitments
Get commitments (specs and account numbers) to (970)
PO by 12/1

15 Excel Spreadsheet Discussed

16 Q&A
Most welcome. Thank you!

