Thermodynamic Feasibility 1 Anna Haywood, Jon Sherbeck, Patrick Phelan, Georgios Varsamopoulos, Sandeep K. S. Gupta.


INTRODUCTION
Thermal architecture side: II-EN: BlueTool: Infrastructure for Innovative Cyberphysical Data Center Management Research, NSF-funded award #.
In response to an acknowledged problem: data center energy use is increasing, currently at ~3% of all US energy consumption, and ~50% of that is used for cooling the data center.

The BlueTool project

Objective
The overall objective of the thermal project is to reduce the grid power consumption of the cooling system for data centers.
HOW? Use a heat-driven LiBr absorption chiller to reduce the cooling load on a typical Computer Room Air Conditioner (CRAC); the heat to drive the chiller originates from the data center itself.

Challenges
1. Generating enough high-temperature heat from the blade components inside the data center (target heat source = CPUs).
2. Capturing and transporting that heat effectively and efficiently to a LiBr heat-activated absorption unit, despite the dynamic/fluctuating heat output of the CPUs.

Overall Concept: Capture 90% of CPU heat and send it to the chiller
CPUs dissipate most of the heat on the board, giving a high heat fraction (HHF), and a capture fraction (CF) of 0.90 with low loss.
LiBr heat-activated chiller: temperature budget = 70-95 °C.

High Heat Fraction
Figure 1. IT server equipment: 42U rack, 7U chassis, IT blade servers, and CPUs.
Dell DataCenter Capacity Planner Tool: 103 W/CPU, 294 W/blade, and 19.78 kW/rack.
HHF = CPU heat / blade heat = 206 W / 294 W = 0.7
10 blades/chassis, 5 chassis/rack.
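The HHF figure can be checked with a short calculation, a sketch using the values quoted from the Dell capacity planner:

```python
# High heat fraction (HHF): share of a blade's total heat dissipated by its CPUs.
watts_per_cpu = 103        # W per CPU (Dell DataCenter Capacity Planner)
cpus_per_blade = 2
watts_per_blade = 294      # W total per blade

cpu_heat = watts_per_cpu * cpus_per_blade    # 206 W of CPU heat per blade
hhf = cpu_heat / watts_per_blade             # high heat fraction
print(f"CPU heat/blade = {cpu_heat} W, HHF = {hhf:.2f}")  # 206 W, 0.70
```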

Target Heat Source
Dell PE 1855 with Intel® Xeon® Nocona processors, 3.20 GHz
2 CPUs/server blade: 103 W/CPU = 206 W/blade, 72 °C

How much heat is required from the CPUs to run the chiller at best performance?
Cooling capacity: 10 ton = 35.2 kW, with COP_C = 0.7
Goal: 35.2 kW / 0.7 ≈ 50.3 kW of driving heat
This translates into 269 server blades of the Dell PE 1855 with dual Xeon Nocona CPUs.
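The 50.3 kW goal and the blade count follow from the chiller COP. A rough sketch: the 0.90 capture fraction is taken from the later slides, and rounding explains the small gap to the quoted 269 blades.

```python
# Driving heat required by the LiBr chiller, and the number of Dell PE 1855
# blades whose captured CPU heat could supply it.
cooling_capacity_kw = 35.2     # 10-ton chiller
cop_chiller = 0.7              # optimum chiller COP

heat_needed_kw = cooling_capacity_kw / cop_chiller   # ~50.3 kW of driving heat

cpu_heat_per_blade_w = 206     # 2 CPUs x 103 W
capture_fraction = 0.90        # fraction of CPU heat actually captured
blades = 1000 * heat_needed_kw / (cpu_heat_per_blade_w * capture_fraction)
print(f"Need {heat_needed_kw:.1f} kW -> roughly {blades:.0f} blades")
```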

Apply Steady-State System Analysis
Data center + cooling system layout; system equations applied to PUE; apply the equations to gauge system performance; analyze the power effectiveness of the data center.
PUE metric: Power Usage Effectiveness, the ratio of total power delivered to the facility to the power used exclusively by the IT equipment.
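As a minimal sketch of the metric (the function name and example values are illustrative, not from the slides):

```python
def pue(total_facility_kw: float, it_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_kw

# A facility drawing 150 kW in total for a 100 kW IT load
print(pue(150.0, 100.0))  # 1.5, "Efficient" on the Green Grid scale
```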

System diagram: work and heat flow paths
Contributors: Dr. Phelan, Anna Haywood, Jon Sherbeck, Phani Domalapally

PUE is traditionally defined assuming conventional electric supply configurations. For non-conventional configurations, using alternative sources or reusing heat, PUE may fall below 1.0.
Industry-benchmarked PUE values (Green Grid, 2009):
  PUE    Level of Efficiency
  3.0    Very Inefficient
  2.5    Inefficient
  2.0    Average
  1.5    Efficient
  1.2    Very Efficient

PUE applied to our system diagram
The diagram relates the electric power for the compressor to the heat load on the CRAC, including CPU heat-load removal.

Equations relating PUE to HHF, CF, and Q_EXT
Terms in the model: the total heat flow from the data center as a load on the cooling equipment; the chiller's cooling capacity, which reduces the heat load on the CRAC; the heat extraction from the CPUs to storage; and the portion of rack heat driving the chiller.

Rearranging terms and simplifying, PUE can become less than one. The cooling portion (heat removal) is divided by COP_CRAC to represent electric power: W_in = Q_L / COP_CRAC. The external-heating term suggests that external heat can generate excess cooling that can be "exported," i.e., used to cool adjacent rooms or facilities (power out).
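A minimal numerical sketch of why the exported-cooling term can push PUE below one. The COP values, HHF, and CF come from the slides; the IT power and external-heat values are assumptions chosen for illustration, and facility overheads other than cooling are ignored:

```python
it = 50.0                   # IT electrical power in kW (assumed)
hhf, cf = 0.70, 0.90        # high heat fraction and capture fraction (slides)
cop_c, cop_crac = 0.7, 3.9  # chiller and CRAC COPs (slides)
q_ext = 50.0                # external (e.g., solar) heat in kW (assumed)

cooling = (it * hhf * cf + q_ext) * cop_c   # cooling delivered by the chiller
crac_load = max(it - cooling, 0.0)          # heat left for the CRAC to remove
exported = max(cooling - it, 0.0)           # excess cooling, "exported"

# Heat-removal terms become electric-equivalent terms when divided by
# COP_CRAC; exported cooling counts as power out, so it enters negatively.
pue = (it + crac_load / cop_crac - exported / cop_crac) / it
print(f"PUE = {pue:.2f}")   # falls below 1 once exports exceed the CRAC draw
```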

Calculated values for our data center with 6 racks
  HHF: 0.70
  Capture fraction (CF): 0.90
  Coefficients of performance: COP_CRAC = 3.9 (typical refrigeration COP), COP_C = 0.7 (optimum)
  Heat power (CPUs, 6 racks) in kWth; electric power terms: 0.55 kWe, 0.50 kWe, 0.21 kWe
  Expected PUE: 0.99

PUE “very efficient” for our data center: PUE = 0.99

PUE even better with a solar source added: PUE = 0.81

Conclusion The potential exists to utilize some of the waste heat generated by data centers to drive absorption chillers, which would then relieve some of the cooling load on the conventional computer room air conditioner (CRAC). By reusing data center waste heat and supplementing the high-temperature heat captured from the CPUs with an external source of heating, such as from solar energy, it is theoretically possible to generate a PUE (Power Usage Effectiveness) ratio of less than one. 19

Extra material

Example using reused heat
Take an initial PUE, with a given percentage of the facility power going to the servers. If 30% of the dissipated heat can be utilized, the PUE drops accordingly.
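A sketch of that calculation: the initial PUE of 2.0 and the 50% server share below are assumptions chosen purely for illustration; only the 30% reuse figure is from the slide.

```python
facility_power = 2.0     # total facility power per unit of IT power (assumed PUE = 2.0)
it_power = 1.0           # IT power; here 50% of the facility draw reaches the servers
reuse_fraction = 0.30    # 30% of the dissipated heat is put to use (slide)

# Credit the reused heat against the facility's power consumption
effective_pue = (facility_power - reuse_fraction * it_power) / it_power
print(f"PUE drops from {facility_power:.1f} to {effective_pue:.2f}")
```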