Fast computers, big/fast storage, fast networks
Marla Meehl
Manager, Network Engineering and Telecommunications, NCAR/UCAR; Manager of the Front Range GigaPoP
Computational & Information Systems Laboratory, National Center for Atmospheric Research

35 systems over 35 years

Current HPC Systems at NCAR
(Table columns: System Name, Vendor/System, # Frames, # Processors, Processor Type, GHz, Peak TFLOPs)
Production Systems:
  – bluefire (IB): IBM p6 Power, POWER6
Research Systems, Divisional Systems & Test Systems:
  – Firefly (IB): IBM p6 Power, POWER6
  – frost (BG/L): IBM BlueGene/L, 4 frames, 8192 processors, PowerPC

High Utilization, Low Queue Wait Times
• Bluefire system utilization is routinely >90%
• Average queue-wait time < 1 hr

Bluefire Usage

CURRENT NCAR COMPUTING: >100 TFLOPS

Disk Storage Systems
• Current disk storage for HPC and data collections:
  – 215 TB for Bluefire (GPFS)
  – 335 TB for Data Analysis and Vis (DAV) (GPFS)
  – 110 TB for Frost (local)
  – 152 TB for Data Collections (RDA, ESG) (SAN)
• Undergoing extensive redesign and expansion:
  – Add ~2 PB to the existing GPFS system to create a large central (cross-mounted) filesystem
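As a rough check (my own arithmetic, not from the deck), a minimal Python sketch totals the volumes listed above and compares them with the planned ~2 PB addition:

current_tb = {
    "Bluefire (GPFS)": 215,
    "DAV (GPFS)": 335,
    "Frost (local)": 110,
    "Data Collections (SAN)": 152,
}
planned_addition_tb = 2000  # the "~2 PB" central filesystem expansion quoted above

total_tb = sum(current_tb.values())
print(f"Current disk: {total_tb} TB (~{total_tb / 1000:.2f} PB)")
print(f"With the expansion: ~{(total_tb + planned_addition_tb) / 1000:.2f} PB")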

Data Services Redesign
• Near-term goals:
  – Creation of a unified and consistent data environment for NCAR HPC
  – High-performance availability of the central filesystem from many projects/systems (RDA, ESG, CAVS, Bluefire, TeraGrid)
• Longer-term goals:
  – Filesystem cross-mounting
  – Global WAN filesystems
• Schedule:
  – Phase 1: Add 300 TB (now)
  – Phase 2: Add 1 PB (March 2010)
  – Phase 3: Add 0.75 PB (TBD)
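A quick sanity check (my own arithmetic) that the three phases line up with the ~2 PB expansion quoted on the previous slide:

phases_tb = {"Phase 1": 300, "Phase 2": 1000, "Phase 3": 750}  # TB, from the schedule above
total_pb = sum(phases_tb.values()) / 1000
print(f"Planned additions total ~{total_pb:.2f} PB")  # ~2.05 PB, consistent with the "~2 PB" figure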

Mass Storage Systems
• NCAR MSS (Mass Storage System)
  – In production since the 1970s
• Transitioning from the NCAR Mass Storage System to HPSS
  – Prior to NCAR-Wyoming Supercomputer Center (NWSC) commissioning in 2012
  – Cutover January 2011
  – Transition without data migration
• Tape Drives/Silos
  – Sun/StorageTek T10000B (1 TB media) with SL8500 libraries
  – Phasing out tape silos
  – "Oozing" 4 PB (unique), 6 PB with duplicates
    · "Online" process
    · Optimized data transfer
    · Average 20 TB/day with 10 streams
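For scale, a small Python sketch (my own back-of-the-envelope arithmetic, assuming decimal units) of how long the "oozing" transfer takes at the quoted rate:

rate_tb_per_day = 20   # average transfer rate from the slide
streams = 10
unique_pb, with_dups_pb = 4, 6

days_unique = unique_pb * 1000 / rate_tb_per_day
days_with_dups = with_dups_pb * 1000 / rate_tb_per_day
per_stream_mb_s = rate_tb_per_day * 1e6 / streams / 86400  # TB/day -> MB/s, per stream

print(f"~{days_unique:.0f} days for 4 PB unique, ~{days_with_dups:.0f} days for 6 PB with duplicates")
print(f"~{per_stream_mb_s:.1f} MB/s sustained per stream")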

Estimated Growth

Facility Challenge Beyond 2011
• NCAR Data Center
  – NCAR data center limits of power/cooling/space have been reached
  – Any future computing augmentation will exceed existing center capacity
  – Planning in progress
• Partnership
• Focus on Sustainability and Efficiency
• Project Timeline

NCAR-Wyoming Supercomputer Center (NWSC) Partners
• NCAR & UCAR
• University of Wyoming
• State of Wyoming
• Cheyenne LEADS
• Wyoming Business Council
• Cheyenne Light, Fuel & Power Company
• National Science Foundation (NSF)
(Map: Cheyenne, University of Wyoming, NCAR Mesa Lab)

Focus on Sustainability
• Maximum energy efficiency
• LEED certification
• Achievement of the smallest possible carbon footprint
• Adaptable to the ever-changing landscape of high-performance computing
• Modular and expandable space that can be adapted as program needs or technology demands dictate

Focus on Efficiency
• Don't fight Mother Nature
• The cleanest electron is an electron never used
• The computer systems drive all of the overhead loads
  – NCAR will evaluate in procurement
  – Sustained performance / kW
• Focus on the biggest losses:
  – Compressor-based cooling
  – UPS losses
  – Transformer losses

Design Progress
• Will be one of the most efficient data centers
  – Cheyenne climate
  – Eliminates refrigerant-based cooling for ~98% of the year
  – Efficient water use
  – Minimal electrical transformation steps
• Guaranteed 10% renewable power
• Option for 100% (wind power)
• Power Usage Effectiveness (PUE)
  – PUE = Total Load / Computer Load = 1.08–1.10 for WY
  – Good data centers: 1.5
  – Typical commercial: 2.0
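To make the PUE comparison concrete, a minimal sketch using the formula on this slide; the 4.5 MW IT load is borrowed from the Module A figure later in the deck purely for illustration:

def total_facility_load_mw(it_load_mw, pue):
    # PUE = total facility load / computer (IT) load, so total = IT load * PUE
    return it_load_mw * pue

it_load_mw = 4.5  # illustrative IT load only (initial Module A install, from a later slide)
comparisons = [("NWSC target", 1.09),        # midpoint of the 1.08-1.10 range quoted above
               ("Good data center", 1.5),
               ("Typical commercial", 2.0)]
for label, pue in comparisons:
    total = total_facility_load_mw(it_load_mw, pue)
    print(f"{label}: PUE {pue} -> {total:.2f} MW total, {total - it_load_mw:.2f} MW overhead")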

Expandable and Modular
• Long term
  – 24-acre site
  – Dual substation electrical feeds
  – Utility commitment up to 36 MW
  – Substantial fiber-optic capability
• Medium term – Phase I / Phase II
  – Can be doubled
  – Enough structure for three or four generations
  – Future container solutions or other advances
• Short term – Module A / Module B
  – Each module 12,000 sq. ft.
  – Install 4.5 MW initially; can be doubled in Module A
  – Upgrade cooling and electrical systems in 4.5 MW increments
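Simple arithmetic on the figures above (not a statement of actual build-out plans): the 36 MW utility commitment corresponds to eight 4.5 MW increments.

increment_mw = 4.5         # per-increment electrical/cooling step from the slide
utility_commitment_mw = 36
print(f"{utility_commitment_mw / increment_mw:.0f} increments of {increment_mw} MW fit within the {utility_commitment_mw} MW commitment")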

Project Timeline
• 65% Design Review Complete (October 2009)
• 95% Design Complete
• 100% Design Complete (February 2010)
• Anticipated Groundbreaking (June 2010)
• Certificate of Occupancy (October 2011)
• Petascale System in Production (January 2012)

NCAR-Wyoming Supercomputer Center

HPC Services at NWSC
• PetaFlop computing – 1 PFLOP peak minimum
• 100s of petabytes of archive (+20 PB yearly growth)
  – HPSS
  – Tape archive hardware acquisition TBD
• Petabyte shared file system(s) (5–15 PB)
  – GPFS/HPSS HSM integration (or Lustre)
  – Data Analysis and Vis
  – Cross-mounting to NCAR Research Labs, others
• Research Data Archive data servers and data portals
• High-speed networking and some enterprise services
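A simple linear projection of the archive at the stated +20 PB/year growth; the ~6 PB starting point is taken from the earlier MSS/HPSS transition slide, and linear growth is my own simplifying assumption (the deck's own projection chart follows below):

start_pb = 6              # approximate archive size around the HPSS cutover (from the MSS slide)
growth_pb_per_year = 20   # yearly growth quoted above
for year in range(2012, 2017):
    print(f"{year}: ~{start_pb + growth_pb_per_year * (year - 2012)} PB")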

NWSC HPC Projection

NCAR Data Archive Projection

Tentative Procurement Timeline (Jul 2009 – Jan 2012)

Front Range GigaPoP (FRGP)
• 10 years of operation by UCAR
• 15 members
• NLR, I2, ESnet – 10 Gbps
• Commodity – Qwest and Level3
• CPS and TransitRail peering
• Intra-FRGP peering
• Akamai
• 10 Gbps ESnet peering

Fiber, fiber, fiber
• NCAR/UCAR intra-campus fiber – 10 Gbps backbone
• Boulder Research and Administration Network (BRAN)
• Bi-State Optical Network (BiSON) – upgrading to 10/40/100 Gbps capable
  – Enables 10 Gbps NCAR/UCAR TeraGrid connection
  – Enables 10 Gbps NOAA/DC link via NLR lifetime lambda
• DREAM – 10 Gbps
• SCONE – 1 Gbps

Western Regional Network (WRN)
• A multi-state partnership to ensure that robust, advanced, high-speed networking is available for research, education, and related uses
• Increased aggregate bandwidth
• Decreased costs

Questions