Fast computers, big/fast storage, fast networks
Marla Meehl
Manager, Network Engineering and Telecommunications, NCAR/UCAR
Manager of the Front Range GigaPoP
Computational & Information Systems Laboratory
National Center for Atmospheric Research
35 systems over 35 years
Current HPC Systems at NCAR

System Name    Vendor  System        # Frames  # Processors  Processor Type  GHz  Peak TFLOPs
Production Systems
bluefire (IB)  IBM     p6 Power 575  11        4096          POWER6          4.7  77.00
Research Systems, Divisional Systems & Test Systems
Firefly (IB)   IBM     p6 Power 575  1         192           POWER6          4.7  3.610
frost (BG/L)   IBM     BlueGene/L    4         8192          PowerPC-440     0.7  22.935
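The peak TFLOPS column is consistent with processors × clock × FLOPs per cycle. A quick Python sketch, assuming 4 floating-point operations per cycle for both the POWER6 and PowerPC-440 (an assumption, not stated on the slide), closely reproduces the table:

```python
# Peak TFLOPS = processors * clock (GHz) * FLOPs per cycle / 1000
# Assumes 4 FLOPs/cycle for POWER6 and PowerPC-440; this assumption
# closely reproduces the figures in the table above.

def peak_tflops(processors: int, ghz: float, flops_per_cycle: int = 4) -> float:
    return processors * ghz * flops_per_cycle / 1000.0

systems = {
    "bluefire": (4096, 4.7),   # IBM p6 Power 575
    "Firefly":  (192,  4.7),   # IBM p6 Power 575
    "frost":    (8192, 0.7),   # IBM BlueGene/L
}

for name, (procs, ghz) in systems.items():
    print(f"{name:9s} {peak_tflops(procs, ghz):6.2f} TFLOPS")
# bluefire ~77.0, Firefly ~3.6, frost ~22.9
```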
High Utilization, Low Queue Wait Times
• Bluefire system utilization is routinely >90%
• Average queue-wait time is less than 1 hour
Bluefire Usage
CURRENT NCAR COMPUTING: >100 TFLOPS
Disk Storage Systems
• Current disk storage for HPC and data collections:
  – 215 TB for Bluefire (GPFS)
  – 335 TB for Data Analysis and Vis (DAV) (GPFS)
  – 110 TB for Frost (local)
  – 152 TB for Data Collections (RDA, ESG) (SAN)
• Undergoing extensive redesign and expansion:
  – Add ~2 PB to the existing GPFS system to create a large central (cross-mounted) filesystem
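For scale, a quick tally of the pools listed above; the combined figure is simply the sum of the slide's numbers plus the planned ~2 PB addition, not a stated target:

```python
# Current disk pools (TB), as listed on the slide above.
pools = {
    "Bluefire (GPFS)":        215,
    "DAV (GPFS)":             335,
    "Frost (local)":          110,
    "Data Collections (SAN)": 152,
}

current_tb = sum(pools.values())        # 812 TB today
planned_tb = current_tb + 2000          # after the planned ~2 PB central GPFS addition
print(f"current: {current_tb} TB (~{current_tb / 1000:.2f} PB)")
print(f"planned: ~{planned_tb / 1000:.1f} PB total")
```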
Data Services Redesign
• Near-term goals:
  – Creation of a unified and consistent data environment for NCAR HPC
  – High-performance availability of the central filesystem from many projects/systems (RDA, ESG, CAVS, Bluefire, TeraGrid)
• Longer-term goals:
  – Filesystem cross-mounting
  – Global WAN filesystems
• Schedule:
  – Phase 1: Add 300 TB (now)
  – Phase 2: Add 1 PB (March 2010)
  – Phase 3: Add 0.75 PB (TBD)
Mass Storage Systems
• NCAR MSS (Mass Storage System)
  – In production since the 1970s
• Transitioning from the NCAR Mass Storage System to HPSS
  – Prior to NCAR-Wyoming Supercomputer Center (NWSC) commissioning in 2012
  – Cutover January 2011
  – Transition without data migration
• Tape drives/silos
  – Sun/StorageTek T10000B (1 TB media) with SL8500 libraries
  – Phasing out tape silos Q2 2010
  – "Oozing" 4 PB (unique), 6 PB with duplicates
    - "Online" process
    - Optimized data transfer
    - Average 20 TB/day with 10 streams
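At the quoted average of 20 TB/day, the "ooze" duration works out to several months; a rough Python estimate, ignoring downtime and rate variation:

```python
# Rough duration estimate for the MSS-to-HPSS "ooze" at the slide's
# average transfer rate of 20 TB/day across 10 streams.
RATE_TB_PER_DAY = 20

volumes_pb = {"unique data": 4, "with duplicates": 6}

for label, pb in volumes_pb.items():
    days = pb * 1000 / RATE_TB_PER_DAY
    print(f"{label}: {days:.0f} days (~{days / 30:.0f} months)")
# unique data: 200 days; with duplicates: 300 days
```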
Estimated Growth
Facility Challenge Beyond 2011
• NCAR Data Center
  – NCAR data center limits of power/cooling/space have been reached
  – Any future computing augmentation will exceed existing center capacity
  – Planning in progress
• Partnership
• Focus on sustainability and efficiency
• Project timeline
NCAR-Wyoming Supercomputer Center (NWSC) Partners
• NCAR & UCAR
• University of Wyoming
• State of Wyoming
• Cheyenne LEADS
• Wyoming Business Council
• Cheyenne Light, Fuel & Power Company
• National Science Foundation (NSF)
http://cisl.ucar.edu/nwsc/
[Map: Cheyenne, University of Wyoming, NCAR – Mesa Lab]
Focus on Sustainability
• Maximum energy efficiency
• LEED certification
• Achievement of the smallest possible carbon footprint
• Adaptable to the ever-changing landscape of High-Performance Computing
• Modular and expandable space that can be adapted as program needs or technology demands dictate
Focus on Efficiency
• Don't fight Mother Nature
• The cleanest electron is an electron never used
• The computer systems drive all of the overhead loads
  – NCAR will evaluate in procurement
  – Sustained performance / kW
• Focus on the biggest losses
  – Compressor-based cooling
  – UPS losses
  – Transformer losses
Design Progress
• Will be one of the most efficient data centers
  – Cheyenne climate
  – Eliminates refrigerant-based cooling for ~98% of the year
  – Efficient water use
  – Minimal electrical transformation steps
• Guaranteed 10% renewable power
• Option for 100% (wind power)
• Power Usage Effectiveness (PUE)
  – PUE = Total Load / Computer Load ≈ 1.08–1.10 for the Wyoming site
  – Good data centers: ~1.5
  – Typical commercial: ~2.0
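PUE is total facility power divided by the power delivered to the computing load. A worked comparison in Python, using a hypothetical 4.5 MW IT load (borrowed from the Module A install size on a later slide) purely for illustration:

```python
# PUE = total facility load / computer (IT) load.
# The 4.5 MW IT load is illustrative only (it matches the Module A install
# size described later), not a measured NWSC figure.
it_load_mw = 4.5

for label, pue in [("NWSC design target", 1.10),
                   ("good data center",   1.5),
                   ("typical commercial", 2.0)]:
    total_mw = it_load_mw * pue
    overhead_mw = total_mw - it_load_mw
    print(f"{label}: PUE {pue:.2f} -> {total_mw:.2f} MW total, "
          f"{overhead_mw:.2f} MW overhead for {it_load_mw} MW of computing")
```

At a PUE of 1.10 the same 4.5 MW of computing carries about 0.45 MW of overhead, versus roughly 4.5 MW of overhead at a typical commercial PUE of 2.0.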
Expandable and Modular
• Long term
  – 24-acre site
  – Dual substation electrical feeds
  – Utility commitment up to 36 MW
  – Substantial fiber-optic capability
• Medium term – Phase I / Phase II
  – Can be doubled
  – Enough structure for three or four generations
  – Future container solutions or other advances
• Short term – Module A / Module B
  – Each module 12,000 sq. ft.
  – Install 4.5 MW initially; can be doubled in Module A
  – Upgrade cooling and electrical systems in 4.5 MW increments
Project Timeline
• 65% design review complete (October 2009)
• 95% design complete
• 100% design complete (February 2010)
• Anticipated groundbreaking (June 2010)
• Certificate of occupancy (October 2011)
• Petascale system in production (January 2012)
NCAR-Wyoming Supercomputer Center
HPC Services at NWSC
• PetaFLOP computing – 1 PFLOP peak minimum
• Hundreds-of-petabytes archive (+20 PB yearly growth)
  – HPSS
  – Tape archive hardware acquisition TBD
• Petabyte shared file system(s) (5–15 PB)
  – GPFS/HPSS HSM integration (or Lustre)
  – Data Analysis and Vis
  – Cross-mounting to NCAR research labs, others
• Research Data Archive data servers and data portals
• High-speed networking and some enterprise services
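The hundreds-of-petabytes archive figure follows from the stated +20 PB/yr growth; a simple linear projection sketch, where the ~10 PB starting size in 2012 is an assumption for illustration, not a number from the slides:

```python
# Linear projection of archive holdings at the stated +20 PB/yr growth rate.
# The ~10 PB starting size in 2012 is an assumed value for illustration only.
START_YEAR, START_PB = 2012, 10
GROWTH_PB_PER_YEAR = 20

for year in range(START_YEAR, START_YEAR + 6):
    holdings_pb = START_PB + GROWTH_PB_PER_YEAR * (year - START_YEAR)
    print(f"{year}: ~{holdings_pb} PB")
# Under these assumptions the archive passes 100 PB in roughly five years.
```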
NWSC HPC Projection
NCAR Data Archive Projection
Tentative Procurement Timeline
[Timeline chart with milestones from Jul 2009 through Jan 2012]
Front Range GigaPoP (FRGP)
• 10 years of operation by UCAR
• 15 members
• NLR, I2, ESnet peering @ 10 Gbps
• Commodity – Qwest and Level3
• CPS and TransitRail peering
• Intra-FRGP peering
• Akamai
• 10 Gbps ESnet peering
• www.frgp.net
Fiber, fiber, fiber
• NCAR/UCAR intra-campus fiber – 10 Gbps backbone
• Boulder Research and Administration Network (BRAN)
• Bi-State Optical Network (BiSON) – upgrading to 10/40/100 Gbps capable
  – Enables 10 Gbps NCAR/UCAR TeraGrid connection
  – Enables 10 Gbps NOAA/DC link via NLR lifetime lambda
• DREAM – 10 Gbps
• SCONE – 1 Gbps
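For context, converting link speed into daily data volume shows what these 10/40/100 Gbps paths can move; a back-of-the-envelope Python sketch assuming full line rate and no protocol overhead:

```python
# Daily data volume a link can carry at full line rate
# (assumes no protocol overhead; decimal terabytes).
SECONDS_PER_DAY = 86_400

def tb_per_day(gbps: float) -> float:
    bytes_per_day = gbps * 1e9 / 8 * SECONDS_PER_DAY
    return bytes_per_day / 1e12

for gbps in (1, 10, 40, 100):
    print(f"{gbps:>3} Gbps ~ {tb_per_day(gbps):,.0f} TB/day")
# 10 Gbps is roughly 108 TB/day at line rate.
```

Even a single 10 Gbps path sits well above the ~20 TB/day MSS-to-HPSS transfer rate quoted earlier.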
Western Regional Network (WRN)
• A multi-state partnership to ensure robust, advanced, high-speed networking is available for research, education, and related uses
• Increased aggregate bandwidth
• Decreased costs
Questions