The Genome Sequencing Center's Data Center Plans Gary Stiehr

The task
Design a data center to support sequencing over the next 5-7 years.

The challenges
What network, computing, and storage infrastructure will be required to support our efforts over the next 5-7 years? We know what data Sanger-based sequencing methods generate and how to process it. But what about the next-generation sequencing technologies from companies like 454, Solexa/Illumina, Applied Biosystems, and Helicos? How much data will they generate? How much processing will that data require?

The challenges

Technology            Run Size (GB)
Sanger (AB 3730xl)
Solexa/Illumina       700
AB SOLiD              4,000
HeliScope             10,000

What about other technologies?

The challenges
So in designing a data center, we found ourselves trying to answer questions not only about the future of network, computing, and storage technologies, but also about the very uncertain future of next-generation sequencing technologies. One thing was certain: we would need a lot of everything. Another thing was also certain: there was not a lot of space to put it all.

The challenges
Land locked → space limited → dense compute and disk → massive power and cooling needs → massive power and cooling require lots of space.

4444 Forest Park Ave
Initial estimates of computing and storage needs over the next three years were translated into power and cooling requirements. These requirements would have necessitated a $2.5M upgrade to the building's electrical and chilled water infrastructure, before any money was spent on the data center itself, and further expansion of the space was not possible.

The task 

222 S Newstead Ave
The University was in the process of acquiring a building across the street from 4444 Forest Park. Estimates for purchasing and razing this space came to about $1M. As an added bonus, this space provided about five times as large a floor plan.
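To put the two options side by side, here is a minimal sketch in Python using only the figures quoted on the two slides above (the ~$2.5M upgrade estimate, the ~$1M purchase-and-razing estimate, and the roughly 5x floor plan); data center construction costs themselves are excluded from both options.

    # Figures quoted on the two slides above; construction of the data center
    # itself is excluded from both options.
    options = {
        "4444 Forest Park Ave (infrastructure upgrade)": {"upfront_musd": 2.5, "relative_floor_plan": 1},
        "222 S Newstead Ave (purchase + raze)":          {"upfront_musd": 1.0, "relative_floor_plan": 5},
    }

    for site, o in options.items():
        print(f"{site}: ~${o['upfront_musd']}M upfront, "
              f"{o['relative_floor_plan']}x relative floor plan")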

Power/Cooling Requirements
Of course, predicting requirements 5-7 years in advance is difficult:
- New computer hardware types.
- New sequencer technologies.
- New projects.
Based on historical purchase trends, adjusted for current and anticipated projects, we planned for 20 racks per year:
- 2/3 of racks at 8 kW per rack
- 1/3 of racks at 25 kW per rack

Data Center Requirements
- Power and cool an average of around 13.7 kW per rack, with some racks up to 25 kW.
- Last at least six years before additional space is needed; at 20 racks per year, that means fitting at least 120 racks.
- Each rack needs redundant power paths backed by UPS and generator.
- The cooling system needs some redundancy.
- Avoid single points of failure.
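The 13.7 kW average and the 120-rack figure follow directly from the planning mix on the previous slide; a minimal sketch of that arithmetic (the only inputs are the 2/3 / 1/3 mix, the 8 kW and 25 kW per-rack figures, and 20 racks per year over six years):

    racks_per_year = 20
    years = 6

    # Planning mix from the previous slide: 2/3 of racks at 8 kW, 1/3 at 25 kW.
    avg_kw_per_rack = (2 / 3) * 8 + (1 / 3) * 25    # ~13.7 kW, matching the slide

    total_racks = racks_per_year * years             # 120 racks
    total_it_load_kw = total_racks * avg_kw_per_rack

    print(f"Average per rack: {avg_kw_per_rack:.1f} kW")
    print(f"Racks after {years} years: {total_racks}")
    print(f"Approximate total IT load: {total_it_load_kw:.0f} kW")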

Cooling Options
Chilled water-based:
- Single closed-loop water piping a single point of failure?
- Larger initial cost; potential initial unused capacity.
Refrigerant-based (e.g., Liebert XD series):
- Refrigerant piping a single point of failure (not certain)?
- Higher maintenance costs for numerous condensers?
- Components inside the data center (though they should not require much maintenance).
+ Smaller initial cost; scales as needed (assuming piping is pre-installed).

Cooling Design
- Designed to cool an average of 15 kW per rack (with the ability to cool 25 kW racks in the mix).
- N+1 redundancy of chilled water plants and air handlers.
- Floor grates rather than perforated tiles.
- Hot/cold aisles partitioned with plastic barriers above the racks and at the ends of aisles.
- Closed-loop water piping.
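For context, a back-of-the-envelope conversion of this design point into refrigeration tonnage: the 120-rack count and the 15 kW/rack design average come from the slides, the 3.517 kW-per-ton conversion is standard, and the three-unit N+1 split is an illustrative assumption rather than the actual plant configuration.

    KW_PER_TON = 3.517    # 1 ton of refrigeration = 3.517 kW

    racks = 120
    design_kw_per_rack = 15
    heat_load_kw = racks * design_kw_per_rack      # 1800 kW of heat to reject
    heat_load_tons = heat_load_kw / KW_PER_TON     # ~512 tons

    # With N+1 redundancy, the load must still be met with one unit out of
    # service; with 3 installed units (illustrative), any 2 must cover it.
    installed_units = 3
    per_unit_tons = heat_load_tons / (installed_units - 1)

    print(f"Heat load: {heat_load_kw} kW ≈ {heat_load_tons:.0f} tons")
    print(f"Per-unit size for N+1 with {installed_units} units: {per_unit_tons:.0f} tons")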

Electrical Design
Redundant paths all of the way to the utility:
- Dual utility power feeds + 2 MW generator
- Dual transformers and associated gear
- Dual UPSs (one battery, one flywheel)
- Multiple panels in each RPP (giving each rack access to both UPSs)
Branch circuit monitoring.
A platform (partial second floor) built to hold additional electrical equipment.
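A hypothetical headroom check for a single 2 MW generator: only the generator size and the ~13.7 kW/rack planning average come from the slides; the PUE value is an assumed overhead factor, so treat the result as an order-of-magnitude estimate.

    generator_kw = 2000
    avg_kw_per_rack = 13.7     # planning average from the earlier slide
    assumed_pue = 1.6          # assumed cooling/electrical overhead; not from the slides

    def racks_supported(gen_kw, kw_per_rack, pue):
        """Racks one generator can carry once facility overhead is included."""
        return int(gen_kw / (kw_per_rack * pue))

    n = racks_supported(generator_kw, avg_kw_per_rack, assumed_pue)
    print(f"~{n} racks per 2 MW generator at an assumed PUE of {assumed_pue}")
    # Roughly 91 racks under these assumptions, consistent with the later
    # construction phases adding generators as the rack count grows.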

Other Design Elements
- Can withstand 150+ MPH winds.
- Receiving and storage areas.
- Building monitoring integrated with the campus-wide system.
- LEED (Leadership in Energy and Environmental Design) certified.
- Dual fiber paths to connect to existing infrastructure.

Surprises
Out of 16,000 square feet of available floor space (including the platform), only approximately 3,200 square feet ended up as usable data center floor space. Electrical and cooling infrastructure consumed most of the space.
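Those figures imply some useful density numbers; a small sketch (all inputs are from the slides, with the ~13.7 kW/rack planning average reused for the power density):

    total_sqft = 16_000
    usable_sqft = 3_200
    racks = 120
    avg_kw_per_rack = 13.7    # planning average from the earlier slide

    usable_fraction = usable_sqft / total_sqft                      # 20% of the floor space
    sqft_per_rack = usable_sqft / racks                             # ~27 sq ft per rack
    watts_per_sqft = racks * avg_kw_per_rack * 1000 / usable_sqft   # ~510 W per sq ft

    print(f"Usable fraction: {usable_fraction:.0%}")
    print(f"Floor space per rack: {sqft_per_rack:.1f} sq ft")
    print(f"Power density: {watts_per_sqft:.0f} W per sq ft of usable floor")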

Construction
From groundbreaking to move-in in less than one year (phases 1 and 2).
Phased build-out (due to budget/timing):
Phase 1: 30 racks, 2 chillers, 3 air handlers, 1 generator
Phase 2: 60 racks, 2 air handlers

Construction (continued)
Phase 3: 90 racks, 1 chiller, 2 air handlers, 1 generator
Phase 4: 120 racks, 1 air handler, 1 generator
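Reading the phase capacities as cumulative rack counts, a quick sketch of how each phase lines up against the 20-racks-per-year planning estimate (the slides do not give the timing of each phase, so "years of growth" here is simply capacity divided by the planning rate):

    # Cumulative rack capacity after each phase, as read from the two slides above.
    phases = {
        "Phase 1": 30,
        "Phase 2": 60,
        "Phase 3": 90,
        "Phase 4": 120,
    }
    racks_per_year = 20   # planning estimate from the requirements slide

    for phase, capacity in phases.items():
        years_of_growth = capacity / racks_per_year
        print(f"{phase}: {capacity} racks ≈ {years_of_growth:.1f} years of growth")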

Other Considerations
- Standard racks and PDUs?
- Power whips to racks: anticipating outlet and PDU types (e.g., 1U servers vs. blades).
- Initially trying 30 A, 208 V 3-pole circuits and power whips with L21-30R connectors.
- Blades with IEC C19/C20; disks with C13/C14.
- Some systems have N+1 power supplies: how to maintain redundancy?
- For C13, 208 V or 120 V?
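As a rough check on the whip sizing above: a 30 A, 208 V three-phase circuit delivers about 8.6 kW after the customary 80% continuous-load derating, so one circuit per power path roughly covers an 8 kW rack while a 25 kW rack needs about three. The derating and the per-rack circuit counts are illustrative assumptions, not the actual circuit plan.

    import math

    volts = 208
    amps = 30
    derate = 0.8   # customary continuous-load derating; an assumption here

    # Usable power of one 30 A, 208 V three-phase whip.
    kw_per_circuit = math.sqrt(3) * volts * amps * derate / 1000   # ~8.6 kW

    for rack_kw in (8, 25):
        circuits = math.ceil(rack_kw / kw_per_circuit)
        print(f"{rack_kw} kW rack: {circuits} circuit(s) per power path "
              f"(~{kw_per_circuit:.1f} kW each)")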