05/15/09 Green Data Center Program Alan Crosswell

Agenda
The Opportunities
The CUIT Data Center
Other Data Centers around Columbia
Current and Developing Data Center Best Practices
Future State Goals
Our NYSERDA Advanced Concepts Datacenter project
Next Steps

The Opportunities
Data centers consume 3% of all electricity in New York State (1.5% nationally as of 2007).
Use of IT systems for administrative and especially research applications continues to grow.
We need space for academic purposes such as labs.
Columbia’s commitment to PlaNYC: a 30% carbon footprint reduction.
Gov. Paterson’s 15x15 goal: a 15% reduction in electricity demand by 2015.

CUIT Data Center
Architectural
–Built in 1963, updated somewhat in the 1980's.
–4,400 sf of raised-floor machine room space.
–1,750 sf of raised-floor space, now offices for GSB IT.
–12” raised floor.
–Adequate support spaces nearby: staff, staging, storage, mechanical & fire suppression, and a (future) UPS room.

CUIT Data Center
Electrical
–Supply: 3-phase 208V from an automatic transfer switch.
–Distribution: 208V to wall-mounted panels; 120V to most servers.
–No central UPS; lots of rack-mounted units.
–Generator: 1750 kW, shared with other users & at capacity.
–No metering. (Spot readings every decade or so :-)
–IT demand load has tripled.

CUIT Data Center

CUIT Data Center
Mechanical
–On-floor CRAC units served by campus chilled water.
–Also served by backup glycol dry coolers.
–Supplements a central overhead air system.
–Heat load is shared between the overhead system and the CRAC units.
–No hot/cold aisles.
–Rows are in various orientations.
–Due to the tripling of the demand load, the backup (generator-powered) CRAC units lack sufficient capacity.

CUIT Data Center
IT systems
–A mix of mostly administrative (non-research) systems.
–Most servers are dual-corded, with 120V power input.
–Many ancient servers.
–Due to the lack of a room UPS, each rack has UPSes taking up 30-40% of the space.
–Lots of spaghetti in the racks and under the floor.

Power Usage Effectiveness (PUE) = 2.17. This is a SWAG!
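For context, PUE is the ratio of everything the facility draws to what the IT equipment itself consumes; a PUE of 2.17 means that for every watt delivered to IT gear, roughly another 1.17 watts go to cooling, UPS losses and other overhead. The kW values below are made-up numbers used only to illustrate the arithmetic:

    \mathrm{PUE} = \frac{\text{total facility power}}{\text{IT equipment power}}, \qquad \text{e.g. } \frac{650\ \mathrm{kW}}{300\ \mathrm{kW}} \approx 2.17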

LBNL Average PUE for 12 Data Centers

Other Data Centers around Columbia
Many school, departmental & research server rooms all over the place.
–Range from huge (7,000 sf C2B2) … to tiny (2-3 servers in a closet)
–Several mid-sized
Most lack electrical or HVAC backup.
Many could be better used as academic space (labs, offices, classrooms).
Growth in research HPC is putting increasing pressure on these server rooms.
Lots of money is wasted building new server rooms for HPC clusters that are part of faculty startup packages, etc.

Making the server slice bigger, the pie smaller, and green.
Reduce the PUE ratio by improving electrical & mechanical efficiency.
–Google claims a PUE of 1.2
Consolidate data centers (server rooms)
–Claimed to be more efficient when larger (prove it!)
–Free up valuable space for wet labs, offices, classrooms.
Reduce the overall IT load through
–Server efficiency (newer, more efficient hardware)
–Server consolidation & sharing (virtualization, shared research clusters)
Move servers to a zero-carbon data center

Data center electrical best practices
95% efficient 480V room UPS
–Basement UPS room
–vs. wasting 40% of rack space on rack-mounted UPSes
480V distribution to PDUs at the ends of rack rows
–Transformed to 208/120V at the PDU
–Reduces the copper needed and transmission losses
208V power to servers vs. 120V
–More efficient (how much?)
Variable Frequency Drives (VFDs) for cooling fans and pumps
–Motor power consumption increases as the cube of the speed (see the worked example below).
Generator backup
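To illustrate the cube law for VFD-controlled fans and pumps, an idealized fan-affinity example (ignoring motor and drive losses; the 80% figure is just for illustration):

    P_2 = P_1 \left( \frac{N_2}{N_1} \right)^{3}, \qquad \text{e.g. at } 80\% \text{ speed: } P_2 = (0.8)^3\,P_1 \approx 0.51\,P_1

In other words, a modest reduction in fan or pump speed roughly halves the motor's power draw, which is why VFDs pay back quickly.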

Data center mechanical best practices
Air flow – reduce mixing, increase delta-T (see the rule of thumb below)
–Hot/cold or double hot aisle separation
–24-36” under-floor plenum
–Plug up leaks in the floor
–Increase temperature
–Perform CFD modeling
Alternative cooling technique: in-row or in-rack cooling
–Reduces or eliminates hot/cold air mixing
–More efficient transfer of heat (how much?)
–Supports much higher power density
–Water-cooled servers are making a comeback
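A rough sense of why delta-T matters: the standard sensible-heat approximation for air relates heat removed, airflow, and the hot-to-cold temperature difference (the 1.08 constant is approximate, for air near sea level):

    Q\ [\mathrm{BTU/hr}] \approx 1.08 \times \mathrm{CFM} \times \Delta T\ [^{\circ}\mathrm{F}]

For a fixed heat load, doubling the delta-T halves the airflow the CRAC fans must move, and the fan cube law above then compounds the savings.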

Data center green power best practices
Locate the data center near a renewable source
–Hydroelectric power upstate or in Canada
–Wind power – but most wind farms lack transmission capacity.
–40% of power is lost in transmission, so bring the servers to the power.
–Leverages our international high-speed data networks
Use “free cooling” (outside air)
–A Stanford proposal would free cool almost all the time
Implement “follow the Sun” data centers
–Move the compute load to wherever the greenest power is currently available.

Energy Saving Best Practices
Efficient lighting, HVAC, windows, appliances, etc.
–LBNL and other nations’ 1W standby power proposals
Behavior modification
–Turn off the lights!
–Enable power-saving options on computers
–Social experiment in Watt Hall
Co-generation
–Waste heat is recycled to generate energy
–Planned for the Manhattanville campus
–Possibly for the Morningside campus
Columbia participation in PlaNYC

Barriers to implementing best practices
Capital costs
Grant funding restrictions
Short-term and parochial thinking
Saving electricity is not well incentivized, since nobody is billed for their electrical use close enough to where they use it.
Distance (see the rough numbers sketched after this list)
–Synchronous writes for data replication are limited to about 30 miles
–Bandwidth*delay product impact on transmission of large amounts of data
–Reliability concerns
–Server hugging
–Staffing needs
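To make the distance bullets concrete, a back-of-the-envelope sketch (Python; the 30-mile, 10 Gb/s, and 70 ms figures are illustrative assumptions, and signal speed in fiber is taken as roughly two-thirds the speed of light):

    # Rough latency and bandwidth-delay-product estimates for a distant data center.
    # Assumes ~124 miles per millisecond one-way in fiber (about 2/3 of c).

    def round_trip_ms(miles: float, miles_per_ms: float = 124.0) -> float:
        """Approximate round-trip time in milliseconds over fiber."""
        return 2 * miles / miles_per_ms

    def bandwidth_delay_product_bytes(gbps: float, rtt_ms: float) -> float:
        """Bytes that must be 'in flight' to keep a link of the given speed full."""
        return gbps * 1e9 / 8 * rtt_ms / 1000

    if __name__ == "__main__":
        rtt = round_trip_ms(30)  # the ~30-mile synchronous-replication limit
        print(f"30-mile round trip: ~{rtt:.2f} ms added to every synchronous write")
        bdp = bandwidth_delay_product_bytes(10, 70)  # e.g. 10 Gb/s with a ~70 ms cross-country RTT
        print(f"10 Gb/s x 70 ms needs ~{bdp / 1e6:.0f} MB in flight to fill the pipe")

Within about 30 miles the added latency per write stays around half a millisecond, which is why that distance shows up as a practical limit for synchronous replication, while the bandwidth*delay product shows how much buffering is needed to move large data sets over long paths.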

Key Recommendations from the Bruns-Pak 2008 Study
Allocate currently unused spaces for storage, UPS, etc.
Consolidate racks to recapture floor space
Generally improve redundancy of electrical & HVAC systems
Upgrade electrical systems
–750 kVA UPS module
–New 480V 1500 kVA service
–Generator improvements
Upgrade HVAC systems
–200-ton cooling plant
–VFD pumps & fans
–Advanced control system

Future State Goals – Next 5 years
Begin phased upgrades of the Data Center to improve power and space efficiency. Overall cost ~$25M.
Consolidate and replace pizza-box servers with blades (& virtualization).
Consolidate and simplify storage systems.
Accommodate growing demand for HPC research clusters
–Share clusters among researchers to be more efficient
Accommodate the server needs of the new Interdisciplinary Science Building.
Develop internal cloud services.
Explore external cloud services.
–Stanford is giving Amazon EC2 credits for faculty startups

Future State Goals – Next 5-10 years
Build a new data center of 10,000-15,000 sf
–Perhaps cooperatively with others
–Possibly in Manhattanville
–Not necessarily in NYC
Consolidate many small server rooms.
Significant use of green-energy cloud computing resources.

Our NYSERDA Project
New York State Energy Research & Development Authority, Program Opportunity Notice 1206
~$1.2M ($447K from NYSERDA, awarded pending contract)
Improve space & power efficiency of primarily administrative servers.
Contribute to Columbia's PlaNYC carbon footprint reduction goal.
Make room for shared research computing in the existing data center.
Measure and test vendor claims of energy efficiency improvements.

Our NYSERDA Project – Specific Tasks
Identify 30 old servers to replace.
Instrument server power consumption and data center heat load, in “real time” with SNMP.
Establish a PUE profile (using the DoE DC Pro survey tool).
Implement 9 racks of high-density cooling (in-row/in-rack).
Implement a proper UPS and high-voltage distribution.
Compare old & new research clusters' power consumption for the same workload.
Implement advanced server power management and measure the improvements.
Review with internal, external and research faculty advisory groups.
Communicate results.

Measuring Power Consumption
Use SNMP, which enables comparison with other metrics.

Measuring Power Consumption
Can measure power use with SNMP at:
–Main electrical feeder, panels, subpanels, circuits
–UPS
–Power strips
–Some servers
–HP c7000 blade chassis (its SNMP MIB includes power supply load, cooling fan airflow, etc.)
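As a minimal sketch of what such polling looks like in practice (using the third-party pysnmp library; the hostname and OID below are placeholders, since the actual OIDs come from the particular UPS, PDU, or chassis vendor's MIB):

    # Poll a single SNMP value (e.g. a metered PDU's power draw) from a device.
    from pysnmp.hlapi import (
        SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
        ObjectType, ObjectIdentity, getCmd,
    )

    def read_snmp_value(host: str, oid: str, community: str = "public"):
        """Return the value of one OID from an SNMP v2c agent."""
        error_indication, error_status, _, var_binds = next(
            getCmd(
                SnmpEngine(),
                CommunityData(community, mpModel=1),   # SNMP v2c
                UdpTransportTarget((host, 161)),
                ContextData(),
                ObjectType(ObjectIdentity(oid)),
            )
        )
        if error_indication or error_status:
            raise RuntimeError(str(error_indication or error_status.prettyPrint()))
        return var_binds[0][1]

    # Hypothetical OID for a rack PDU's total load in watts (vendor MIBs differ).
    POWER_OID = "1.3.6.1.4.1.99999.1.1.1.0"
    # print(read_snmp_value("pdu-01.datacenter.example.edu", POWER_OID))

Logging readings like this on a fixed interval is what makes it possible to trend IT load against the facility feeders and compute PUE over time, rather than from occasional spot readings.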

Measuring Heat Rejection
Data Center chilled water goes through a plate heat exchanger to the campus chilled water loop.
Measure the amount of heat rejected to the campus loop with SNMP-instrumented:
–BTU meter &
–Flow meter
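For reference, the heat rejected to the campus loop follows from the flow and supply/return temperature readings via the standard water-side approximation (the constant is approximate, and the 300 GPM / 10 °F example values are made up):

    Q\ [\mathrm{BTU/hr}] \approx 500 \times \mathrm{GPM} \times \Delta T\ [^{\circ}\mathrm{F}], \qquad \text{e.g. } 500 \times 300 \times 10 = 1{,}500{,}000\ \mathrm{BTU/hr} \approx 440\ \mathrm{kW}

A BTU meter essentially integrates this flow × delta-T product over time; comparing it with the measured IT electrical load gives an independent cross-check on the data center's heat balance and PUE estimate.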

FIN
This work is supported in part by the New York State Energy Research and Development Authority.