Power and Cooling at Texas Advanced Computing Center
Tommy Minyard, Ph.D., Director of Advanced Computing Systems
42nd HPC User Forum, September 8, 2011

TACC Mission & Strategy The mission of the Texas Advanced Computing Center is to enable scientific discovery and enhance society through the application of advanced computing technologies. To accomplish this mission, TACC: –Evaluates, acquires & operates advanced computing systems –Provides training, consulting, and documentation to users –Collaborates with researchers to apply advanced computing techniques –Conducts research & development to produce new computational technologies Resources & Services Research & Development

TACC Resources are Terascale, Comprehensive and Balanced
– HPC systems to enable larger simulations and analyses and faster turnaround times
– Scientific visualization resources to enable large data analysis and knowledge discovery
– Data & information systems to store large datasets from simulations, analyses, digital collections, instruments, and sensors
– Distributed/grid/cloud computing servers & software to integrate all resources into computational grids and clouds
– Network equipment for high-bandwidth data movement and transfers between systems

Recent History of Systems at TACC
– 2001 – IBM Power4 system, 1 TFlop, ~300 kW
– 2003 – Dell Linux cluster, 5 TFlops, ~300 kW
– 2006 – Dell Linux blade cluster, 62 TFlops, ~500 kW, 16 kW per rack
– 2008 – Sun Linux blade cluster, Ranger, 579 TFlops, 2.4 MW, 30 kW per rack
– 2011 – Dell Linux blade cluster, Lonestar 4, 302 TFlops, 800 kW, 20 kW per rack
(a rough efficiency comparison based on these figures is sketched below)
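As a rough illustration of how efficiency improved across these generations, the following sketch computes TFlops per MW from the figures above (peak performance and full-system power as listed on the slide; the ratios are approximate):

```python
# Rough TFlops-per-MW comparison using the figures listed on this slide.
# Assumes the listed power is full-system draw; all values are approximate.
systems = [
    # (system, peak TFlops, power in MW)
    ("2001 IBM Power4",               1.0, 0.30),
    ("2003 Dell Linux cluster",       5.0, 0.30),
    ("2006 Dell blade cluster",      62.0, 0.50),
    ("2008 Sun blade, Ranger",      579.0, 2.40),
    ("2011 Dell blade, Lonestar 4", 302.0, 0.80),
]

for name, tflops, megawatts in systems:
    print(f"{name:30s} ~{tflops / megawatts:6.0f} TFlops/MW")
```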

TACC Data Centers
Commons Center (CMS)
– Originally built in 1986 with 3,200 sq. ft.
– Designed to house large Cray systems
– Retrofitted multiple times to increase power/cooling infrastructure, ~1 MW total power
– 18” raised floor, standard CRAC cooling units
Research Office Complex (ROC)
– Built in 2007 as part of new office building
– 6,400 sq. ft., 1 MW original designed power
– Refitted to support 4 MW total power for Ranger
– 30” raised floor, CRAC and APC In-Row Coolers
(a rough power-density comparison of the two rooms follows below)
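For context, a quick back-of-the-envelope comparison of the floor power density of the two rooms, using the areas and power figures above (this simply spreads the full power budget over the listed raised-floor area, ignoring aisles and support space):

```python
# Approximate facility power density (W per sq. ft.) from the slide's figures.
# Spreads the full power budget evenly over the listed floor area.
rooms = {
    "CMS, ~1 MW over 3,200 sq. ft.": (1_000_000, 3_200),
    "ROC, 4 MW over 6,400 sq. ft.":  (4_000_000, 6_400),
}

for name, (watts, square_feet) in rooms.items():
    print(f"{name:32s} ~{watts / square_feet:4.0f} W/sq. ft.")
```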

CMS Data Center Previously

CMS Data Center Now

Lonestar 4
– Dell Intel 64-bit Xeon Linux cluster
– 22,656 CPU cores (302 TFlops)
– 44 TB memory, 1.8 PB disk
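As a quick sanity check, the per-core figures implied by these totals (assuming memory and peak performance are spread evenly across the cores, which is only an approximation of the real node configuration):

```python
# Per-core figures implied by the Lonestar 4 totals on this slide.
# Assumes memory and peak performance are distributed evenly across cores.
cores = 22_656
peak_tflops = 302
memory_tb = 44

print(f"~{peak_tflops * 1000 / cores:.1f} GFlops per core")      # roughly 13
print(f"~{memory_tb * 1000 / cores:.2f} GB of memory per core")  # roughly 2
```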

Lonestar 4 Front Row

Lonestar 4 End of Rows

Lonestar 4 Electrical Panels

ROC Data Center
– Houses Ranger, Longhorn, Corral, and other support systems
– Built in 2007 and already nearing capacity

Ranger

Data Center of the Future
– Exploring flexible and efficient data center designs
– Planning for 50 kW per rack, 10 MW total system power in the near future (see the sketch below)
– Prefer 480 V power distribution to racks
– Exotic cooling ideas not excluded:
  – Thermal storage tanks
  – Immersion cooling
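To put those targets in perspective, a small sketch of what they imply: the number of racks a 10 MW budget supports at 50 kW each, and the approximate per-rack current at 480 V. The three-phase distribution and unity power factor are assumptions for illustration; the slide only states the voltage.

```python
import math

# What the stated targets imply, roughly: rack count and per-rack current.
# Assumes 480 V three-phase distribution with power factor ~1.0 (an assumption;
# the slide only states a preference for 480 V to the racks).
total_power_w  = 10e6    # 10 MW total system power
rack_power_w   = 50e3    # 50 kW per rack
line_voltage_v = 480.0
power_factor   = 1.0

racks = total_power_w / rack_power_w
amps_per_rack = rack_power_w / (math.sqrt(3) * line_voltage_v * power_factor)

print(f"~{racks:.0f} racks at full density")             # ~200 racks
print(f"~{amps_per_rack:.0f} A per rack (three-phase)")  # ~60 A
```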

Immersion Cooling – Green Revolution Cooling
– Servers suspended in mineral oil
– Improves heat transfer; more efficient “transport” of heat than air (see the comparison below)
– Requires refit of servers to remove fans
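The heat-transport claim largely comes down to volumetric heat capacity, i.e. how much energy a given volume of coolant carries per degree of temperature rise. The sketch below compares typical textbook property values for air and mineral oil; the numbers are generic approximations, not figures from the slide.

```python
# Compare volumetric heat capacity (energy per m^3 per K) of air and mineral oil.
# Property values are approximate textbook figures, not from the slide.
fluids = {
    #                         density (kg/m^3), specific heat (J/(kg*K))
    "air (room temperature)": (1.2,    1005.0),
    "mineral oil (typical)":  (850.0,  1900.0),
}

volumetric = {name: rho * cp for name, (rho, cp) in fluids.items()}
for name, capacity in volumetric.items():
    print(f"{name:24s} ~{capacity / 1000:8.1f} kJ/(m^3*K)")

ratio = volumetric["mineral oil (typical)"] / volumetric["air (room temperature)"]
print(f"oil carries roughly {ratio:.0f}x more heat per unit volume per kelvin")
```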

Summary
– Data center and rack power densities are increasing
– The efficiency of delivering power and removing the heat generated is becoming a substantial concern
– Air cooling is reaching the limits of its capability
– Future data centers will require more “exotic” or customized cooling solutions for very high power density