Where Does the Power Go in DCs & How to Get It Back


Foo Camp 2008, 2008-07-12
James Hamilton
email: JamesRH@microsoft.com | web: http://mvdirona.com | blog: http://perspectives.mvdirona.com

Agenda
- Power is the important measure: power drives costs, in that roughly 80% of data center cost is providing power and cooling infrastructure, and concern about DC power consumption keeps growing.
- The metric that matters is work done per watt.
- Power in: power distribution & optimizations
- Servers: critical load & optimizations
- Heat out: mechanical systems & optimizations

Power Distribution: Utility to CPU
- Power conversions on the way to the server (each transformer step roughly 98% efficient):
  - High voltage (115kVAC) to medium voltage (13.2kVAC) [differs by geography]
  - Uninterruptible Power Supply & generators, running at 13.2kVAC
    - UPS can be rotary or battery; good ones are in the 97% range, but 93 to 94% is much more common
    - Common design: rectify to DC, trickle-charge the batteries, then invert back to AC (~93%)
    - No loss at the generators (please don't start them: ~130 gallons/hour each, times 10 or so)
  - 13.2kVAC to 480VAC
  - 480VAC to 208VAC
- Conversions inside the server to CPU & memory:
  - Power supply: 208VAC to 12VDC (80% efficient is common, ~95% is affordable)
  - VRM: 12VDC to ~1.5VDC (80% common, 90% affordable)
- These steps compound end to end; the sketch below chains them.
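A minimal back-of-envelope sketch (mine, not from the talk) that chains the per-step efficiencies above to show where a 10MW utility feed goes; the step names and values come from the slide, everything else is illustrative.

```python
# Chain the per-step conversion efficiencies to see how much of a
# 10MW utility feed actually reaches the silicon. Step values are the
# "common" figures from the slide; the code itself is illustrative.

STEPS = [
    ("115kVAC -> 13.2kVAC transformer", 0.98),
    ("UPS at 13.2kVAC (common)",        0.93),
    ("13.2kVAC -> 480VAC transformer",  0.98),
    ("480VAC -> 208VAC transformer",    0.98),
    ("Server PSU (208VAC -> 12VDC)",    0.80),
    ("VRM (12VDC -> ~1.5VDC)",          0.80),
]

FEED_MW = 10.0
power = FEED_MW
for name, eff in STEPS:
    lost = power * (1.0 - eff)   # watts dissipated at this step
    power *= eff                 # watts passed downstream
    print(f"{name:34s} -{lost:5.2f} MW -> {power:5.2f} MW")

print(f"Delivered to CPU & memory: {power / FEED_MW:.0%}")  # ~56%
```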

Power Redundancy at Geo-Level
- Over 20% of total data center cost goes to power redundancy:
  - Batteries able to supply up to 15 minutes at some facilities
  - N+2 generation (2.5MW units) at over $2M each
- Instead, use more, smaller, cheaper data centers and get redundancy at the geo level:
  - Eliminates redundant power and the bulk of shell costs
- The average UPS is in the 93% range: over 1MW wasted in a 15MW facility (15MW × 7% ≈ 1.05MW)

Power Distribution Optimization
- Rules to minimize power distribution losses:
  - Avoid conversions (fewer transformer steps; an efficient UPS or none at all)
  - Increase the efficiency of each conversion
  - Bring high voltage as close to the load as possible
  - Size voltage regulators (VRMs/VRDs) to the load and use efficient parts
  - DC distribution is potentially a small win, though with regulatory issues
- Two interesting approaches:
  - 480VAC (or higher) to the rack, then 48VDC (or 12VDC) within it
  - 480VAC to the PDU and 277VAC to the load (one leg of 480VAC 3-phase distribution)
- Common design loses 44% in distribution: 1 × .98 × .98 × .93 × .98 × .8 × .8 => 56% delivered (~4.4MW lost on a 10MW feed)
- Affordable technology: 1 × .99 × .99 × .95 × .95 => 88% delivered (~1.2MW lost on 10MW); the sketch below compares the two chains
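A minimal sketch (assumptions mine) that reproduces the two chains quoted above; the per-step efficiencies are the slide's figures.

```python
# Compare the "common" and "affordable" power distribution chains.
# Per-step efficiencies are taken from the slide; the code is a sketch.
from math import prod

CHAINS = {
    # transformer, transformer, UPS, transformer, PSU, VRM
    "common design":         [0.98, 0.98, 0.93, 0.98, 0.80, 0.80],
    # fewer steps, more efficient parts
    "affordable technology": [0.99, 0.99, 0.95, 0.95],
}

FEED_MW = 10.0
for name, chain in CHAINS.items():
    eff = prod(chain)  # efficiencies multiply along the chain
    print(f"{name}: {eff:.0%} delivered, "
          f"~{FEED_MW * (1 - eff):.1f} MW lost on a {FEED_MW:.0f} MW feed")
```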

Critical Load Optimization
- Power proportionality is great, but "off" is even better
  - Today an idle server consumes ~60% of its full-load power
  - Industry secret: "good" data center server utilization is around ~30%
- "Off" requires changing workload location. What limits 100% dynamic workload distribution?
  - Networking constraints: VIPs can't span L2 nets, ACLs are static, manual configuration, etc.
  - Data locality: it is hard to efficiently move several TB, and the workload needs to be close to its data
  - Workload management: scheduling work over resources, optimizing for power under SLA constraints
  - Server power management: most workloads don't fully utilize all resources on a server; need the ability to shut off or de-clock unused server resources, with very low-power states that recover quickly
- Goal: move from 30% utilization to 80% (the sketch below works out the gain)
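A minimal sketch of the consolidation arithmetic, using the slide's two data points (30% load at ~60% of peak power, 80% load at ~100%); comparing just those two endpoints is my simplification, which is why it lands a bit below the ~1.9x gain the summary slide quotes.

```python
# Work-per-watt at the slide's two operating points: a mostly idle
# server still burns ~60% of peak power, so consolidating load onto
# fewer, busier servers wins. The two-point model is an assumption.

def work_per_watt(load: float, power: float) -> float:
    """Relative work done per unit of power drawn."""
    return load / power

today  = work_per_watt(0.30, 0.60)   # 30% load at 60% of peak power
target = work_per_watt(0.80, 1.00)   # 80% load at full power

print(f"work/watt today:  {today:.2f}")
print(f"work/watt target: {target:.2f}")
print(f"gain: {target / today:.1f}x")  # ~1.6x with these exact endpoints
```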

CEMS: Thin Slice Computing
- Cooperative Expendable Micro-Slice Servers
- Correct the system balance problem with a less-capable CPU:
  - Too many cores, running too fast for memory, bus, disk, …
  - Power consumption scales roughly with the cube of clock frequency (see the sketch below)
- Goal: ¼ the price & much less than ½ the power
  - Utilize high-volume client parts in a server environment
  - Target 20 to 50W at under $500, in a 1U form factor or less with a service-free design
- Longer-term goals:
  - High density, with shared power supply & boot disk
  - Eliminate components not required in a server
  - Establish the viability of service-free designs
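A minimal illustration (mine) of the cube law cited above: dynamic CPU power goes roughly as C·V²·f, and supply voltage tracks frequency, so power scales roughly as f³; several slower parts can then beat one fast part at equal aggregate throughput, assuming the work parallelizes.

```python
# Cube-law sketch: under the P ~ f^3 model, two parts at half clock
# deliver the same aggregate throughput as one full-clock part (an
# idealization assuming perfectly parallel work) at far less power.

def relative_dynamic_power(freq_scale: float, parts: int = 1) -> float:
    """Dynamic power relative to one part at full clock, P ~ parts * f^3."""
    return parts * freq_scale ** 3

one_fast = relative_dynamic_power(1.0, parts=1)
two_slow = relative_dynamic_power(0.5, parts=2)

print(f"two half-clock parts draw {two_slow / one_fast:.0%} of the power")  # 25%
```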

Conventional Mechanical Design
Heat moves from the components out to the cooling tower through a chain of loops:
- Server fans (from components to air)
- CRACs (from air to chilled water)
  - Moving air over long distances is expensive
  - Air control is often poor, with hot/cold mixing
- Secondary water circuit (variable flow)
- Primary water circuit (fixed flow)
- Water-side economizer & A/C evaporator
- Condensate circuit
- A/C condenser
- Water-side economizer
- Cooling tower

Mechanical Optimization
- Simple rules to minimize cooling costs:
  - Raise data center temperatures
  - Tight control of airflow, with short paths
  - Cooling towers rather than A/C
  - Air-side economization (open the window)
  - Reclaim low-grade waste heat
- Best current designs bring water close to the load but don't use direct water cooling
- Lower heat densities could be 100% air cooled, but density trends suggest this won't happen
- Common mechanical designs lose 24% in cooling; assume a reduction to 1/3 of current: 24% drops to 8%, for a 16% savings (worked out in the sketch below)
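The cooling savings claim is simple arithmetic; a minimal sketch with the slide's parameters:

```python
# Cooling arithmetic from the slide: mechanical systems consume ~24%
# of facility power in common designs; assume the rules above (notably
# air-side economization) cut that to one third.

cooling_share = 0.24             # cooling's share of facility power today
remaining = cooling_share / 3.0  # assumed reduction to 1/3 of current
savings = cooling_share - remaining

print(f"cooling drops from {cooling_share:.0%} to {remaining:.0%} of "
      f"facility power: a {savings:.0%} facility-level savings")  # 16%
```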

Summary
- Some low-scale facilities are incredibly bad; assuming a current high-scale installation:
- Power distribution savings: ~32%
  - Save 8% in power distribution to the server
  - Save a further 24% of power distribution losses inside the server
- Cooling savings: ~16%
  - Conservatively estimate 1/3 the cooling power using air-side economization: 24% loss down to 8%, for a 16% power savings
- Server utilization: ~90%
  - Move from 30% to 80% utilization through DC-wide workload scheduling
  - 30% load @ 60% of full-load power to 80% load @ 100% of full-load power: ~2.6x the work at ~1.7x the power, for a gain of ~90%
- Cooperative Expendable Micro-slice Servers: ~30%
  - ½ the power but a less capable server (most workloads are memory or disk I/O bound)
  - Conservatively assume .8x work done at .5x power => 30% savings
- 4.0x gains in work done/watt look attainable: 1 × 1.32 × 1.16 × 1.90 × 1.30 => 3.8x (some overlap between CEMS & power distribution savings); the sketch below multiplies these out
- Power is the #3 expense in a DC, behind server hardware and the power distribution & cooling infrastructure
- Data center capital expense savings are nearly 100% driven by power
- Reductions in power cut capex and opex, and are good for the environment
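A minimal sketch composing the per-area factors quoted above into the overall gain; the figures are the slide's, the labels and code are mine.

```python
# Multiply the per-area work-per-watt factors from the summary into
# one overall improvement. Figures are taken from the slide.

GAINS = {
    "power distribution": 1.32,
    "cooling":            1.16,
    "server utilization": 1.90,
    "CEMS servers":       1.30,  # discounted for overlap with power dist.
}

total = 1.0
for area, factor in GAINS.items():
    total *= factor
    print(f"after {area:18s}: {total:.2f}x")

print(f"overall: ~{total:.1f}x work done per watt")  # ~3.8x
```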

Slides
- These slides: http://mvdirona.com/jrh/TalksAndPapers/JamesRH_DCPowerSavingsFooCamp08.ppt
- Perspectives blog: http://perspectives.mvdirona.com