The CDCE at BNL
HEPiX – LBL, October 28, 2009
Tony Chan (BNL)

Background
- Rapid growth in the last few years caused space, power and cooling problems
- Increasing capacity for RHIC/ATLAS and other activities cannot be accommodated with the current facility infrastructure
- Search for additional data center space began in 2007
- Update of the talk originally given at HEPiX in St. Louis (Nov. 2007)

Vital Statistics
- Currently housing 165 racks of equipment (disk storage, CPU, network, etc.) plus 9 robotic silos
- Approximately 35 PB of tape storage, 9 PB of disk storage capacity and 10,200 computing cores
- Average power usage ~650 kW (~60% of maximum UPS capacity), with a peak load of ~790 kW
- Cooling capacity for a maximum of ~1000 kW
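A quick way to see the headroom implied by these figures is to back out the UPS capacity from the stated ~60% utilization. The short Python sketch below does only that arithmetic; the ~1.1 MW UPS capacity it prints is an inference from the slide's numbers, not a figure quoted in the talk.

```python
# Back-of-the-envelope check of the capacity figures quoted above.
avg_load_kw = 650            # average power usage from the slide
peak_load_kw = 790           # peak load from the slide
ups_utilization = 0.60       # average load stated as ~60% of max UPS capacity
cooling_capacity_kw = 1000   # maximum cooling capacity from the slide

# Implied UPS capacity (inferred from the ~60% figure, not quoted directly).
ups_capacity_kw = avg_load_kw / ups_utilization

print(f"Implied UPS capacity    : ~{ups_capacity_kw:.0f} kW")
print(f"UPS headroom at peak    : ~{ups_capacity_kw - peak_load_kw:.0f} kW")
print(f"Cooling headroom at peak: ~{cooling_capacity_kw - peak_load_kw:.0f} kW")
```

The ~1.1 MW result is consistent with the "~1 MW of UPS-backed power" quoted for the old facility later in the talk.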

The Growth of Computing

Total Distributed Storage Capacity

Evolution of Space Usage (chart; annotations: capacity of old data center, Intel dual- and quad-core deployed)

Evolution of Power Usage (chart; annotation: existing UPS capacity)

The Search for Solutions (1)
- Engaged Lab management – Spring 2007
- Discussion of possible solutions – Summer 2007
  - Cost
  - Time
  - Location
- Recommendation to Lab management – Fall 2007/Winter 2008
  - Two-phase solution to meet cost and time constraints
  - Identify and renovate existing space to meet short-term requirements
  - New building to meet long-term requirements
- Funding for two construction projects approved – Spring 2008

The Search for Solutions (2)
- Renovate existing floor space (US $0.6 million)
  - Tender award – April 2008
  - Renovations begin – June 2008
  - Renovations end – October 2008
  - Occupancy – November 2008
- New building (US $5 million)
  - Finalize design – May 2008
  - Tender award – June 2008
  - Construction starts – August 2008
  - Construction ends – August 2009
  - Occupancy – October 2009
- From first proposal to occupancy took 2½ years

Facility Development Timeline
- Recent Past
  - More efficient use of facility resources
  - Supplemental cooling system in the existing facility
- Near-Term (2008 to present)
  - Renovation of 2000 ft² (185 m²) of unused floor space with 300 kW of power
  - New building with 6600 ft² (622 m²) and 1.0 MW of power
  - Mission-specific facility (redundant cooling, deep raised floors, etc.)
  - Room for ~150 racks and 7 robotic silos
- Long-Term (2017 and beyond)
  - New BNL data center of ~2300 m² (roughly 25,000 ft²) after 2018
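For a rough sense of the design density these numbers imply, the sketch below divides the new building's stated power by its stated floor area and rack capacity. It is a back-of-the-envelope calculation from the slide's figures, not a design specification from the talk.

```python
# Rough design-density estimate for the new building, using only the
# figures quoted on this slide.
floor_area_m2 = 622     # 6600 ft^2 (622 m^2) of floor space
power_kw = 1000         # 1.0 MW of power
rack_capacity = 150     # room for ~150 racks (plus 7 robotic silos)

print(f"Average power density : ~{power_kw / floor_area_m2:.1f} kW/m^2")
print(f"Average power per rack: ~{power_kw / rack_capacity:.1f} kW/rack")
```

At ~6.7 kW per rack on average, the design leaves margin for a mix of ordinary racks and the >10 kW high-density racks discussed later in the talk.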

Rack-Top Cooling Units

Rear Door Heat Exchanger

Data Center Layout in 2007

Data Center Layout in 2009

Data Center Expansion – Part 1 (September 12, 2008)

Data Center Expansion – Part 1 (July 15, 2009)

Data Center Expansion – Part 2 (January 14, 2009)

Data Center Expansion – Part 2 (October 10, 2009)

New Data Center Layout

Where We Are Today (1)
- Data Center Expansion (part 1) is similar to the existing facility:
  - 12-in (30.48 cm) raised floor
  - Redundant cooling capacity
  - No support for racks > 10 kW
  - No support for supplemental cooling
  - Cable trays for power and network
- Data Center Expansion (part 2) was designed for high-density equipment:
  - 30-in (76.2 cm) raised floor
  - Support for racks > 10 kW (blades, half-depths, etc.)
  - Redundant cooling capacity
  - Support for racks > 2,500 lbs (1,135 kg)
  - 13-ft (4 m) ceiling for high-profile racks
  - Cable trays for power and network
  - Support for supplemental cooling
  - Environmentally friendly building

Where We Are Today (2)
- Facility expanded from 5000 ft² (465 m²) to 13,600 ft² (1260 m²) of floor space
- Equipment capacity:
  - from ~150 to ~300 racks
  - from 6 to 13 robotic silos
- Infrastructure support:
  - from ~1.0 to ~2.0 MW of UPS-backed power (up to 4 MW capacity)
  - cooling capacity grew from ~1 to ~2 MW (up to 4 MW capacity)
- 3 robotic silos and 6 racks of worker nodes are the first occupants of the CDCE (October 2009)
- Is this sufficient until 2018?
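The growth factors behind these before/after figures are easy to tabulate; the sketch below uses only the numbers quoted on this slide.

```python
# Before/after comparison of the facility, using the figures on this slide.
before = {"floor space (m^2)": 465, "racks": 150, "robotic silos": 6,
          "UPS power (MW)": 1.0, "cooling (MW)": 1.0}
after  = {"floor space (m^2)": 1260, "racks": 300, "robotic silos": 13,
          "UPS power (MW)": 2.0, "cooling (MW)": 2.0}

for key in before:
    print(f"{key:18s}: {before[key]:>6} -> {after[key]:>6}  "
          f"(x{after[key] / before[key]:.1f})")
```

The ~2.7x growth in floor space matches the "footprint nearly tripled" statement in the summary.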

Unresolved Issues
- Insufficient funds to:
  - add a 2nd flywheel UPS for the CDCE
  - add a diesel generator to support the additional flywheel UPS units
  - install an additional 1 MW of cooling capacity (equipment already purchased)
- Estimated cost is an additional US $2-3 million
- Estimate that the CDCE will exceed 2 MW of UPS power and cooling by 2012
- Lead time to approve funds and pre-installation is 12 months, so a decision is needed by 2011
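The 2012 estimate is essentially a load-growth extrapolation. As a minimal sketch, the calculation below backs out what constant annual growth rate would take the 2009 peak load to the 2 MW ceiling by 2012; the ~36% rate is an inference from the talk's own numbers, not a figure it quotes.

```python
# Implied growth rate behind the "exceed 2 MW by 2012" estimate,
# using only numbers quoted elsewhere in the talk.
peak_load_mw_2009 = 0.79      # peak load from the "Vital Statistics" slide
capacity_mw = 2.0             # UPS power and cooling after the expansion
years = 2012 - 2009

implied_growth = (capacity_mw / peak_load_mw_2009) ** (1 / years) - 1
print(f"Implied annual load growth: ~{implied_growth:.0%}")

# With ~12 months of lead time for funding approval and pre-installation,
# the upgrade decision is needed about a year before the crossing, i.e. 2011.
print(f"Decision needed by: {2012 - 1}")
```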

Reason for Optimism?
- Multi-core CPUs and increasing storage density have helped restrain a feared unsustainable growth in power and space needs
- Rack counts have not increased at the same rate as computing and storage deployments
- Somewhat hopeful that continued technological gains will further restrain data center growth

Summary
- Facility footprint has nearly tripled since 2007
- Lessons learned were applied in the design of the data center expansion (part 2)
- Must increase cooling efficiency with new technologies:
  - Rack-top cooling units
  - Rear-door heat exchangers
  - Hot aisle containment
- Significant increases in power efficiency and technology (power supplies, multi-core CPUs, etc.) are a positive development, but some unresolved issues remain