Energy Efficient Data Centers - an Update. December 2006. William Tschudi


1 Energy Efficient Data Centers - an Update. December 2006. William Tschudi, wftschudi@lbl.gov

2 The Cast: Bill Tschudi, Dale Sartor, Steve Greenberg, Tim Xu, Evan Mills, Bruce Nordman, Jon Koomey, Paul Mathew, Arman Shehabi. Subcontractors: Ecos Consulting, EPRI Solutions, EYP Mission Critical Facilities, Rumsey Engineers, Syska & Hennessy.

3 Data Center Research Roadmap: A “research roadmap” developed for the California Energy Commission outlined key areas for energy efficiency research, development, and demonstration.

4 Data Center research activities  Benchmarking and 22 data center case studies  Self-benchmarking protocol  Power supply efficiency study  UPS systems efficiency study  Standby generation losses  Performance metrics – Computation/watt

5 LBNL Data Center demonstrations  DC powering demonstrations – Facility level – Rack level  “Air management” demonstration  Outside air economizer demonstration – Contamination concerns – Humidity control concerns

6 Case studies/benchmarks  Banks/financial institutions  Web hosting  Internet service provider  Scientific computing  Recovery center  Tax processing  Storage and router manufacturers  Others

7 IT equipment load density

8 Benchmarking energy end use

9 Overall power use in Data Centers Courtesy of Michael Patterson, Intel Corporation

10 Overall power use in Data Centers Courtesy of Intel Corporation

11 Data Center performance differences

12 Performance varies. The relative percentage of energy actually doing computing varied considerably.

13 Percentage of power delivered to IT equipment (average: 0.49)

14 HVAC system effectiveness. We observed a wide variation in HVAC performance.

15 Benchmark results helped to find best practices. The ratio of IT equipment power to the total is an indicator of relative overall efficiency. Examination of individual systems and components in the centers that performed well helped to identify best practices.
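This ratio (now commonly called DCiE, the inverse of PUE) is a one-line calculation. A minimal Python sketch, using hypothetical meter readings rather than actual benchmark data:

```python
# Sketch of the IT-power ratio used in the benchmarks.
# The example readings below are hypothetical, not measured data.

def it_power_ratio(it_kw: float, total_kw: float) -> float:
    """Fraction of total facility power delivered to IT equipment."""
    if total_kw <= 0:
        raise ValueError("total facility power must be positive")
    return it_kw / total_kw

# Hypothetical center: 490 kW at the IT racks, 1,000 kW at the meter.
ratio = it_power_ratio(it_kw=490.0, total_kw=1000.0)
print(f"IT power ratio: {ratio:.2f}")  # 0.49, matching the benchmark average
```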

16 Best practices topics identified through benchmarking

17 Best practices led to design guides. Design guides were developed based upon observed best practices, and a self-benchmarking protocol is available.

18 Design guidelines were developed in collaboration with PG&E. Guides are available through PG&E’s Energy Design Resources website.

19 Design guidance summarized in a training resource. A web-based training resource is available at http://hightech.lbl.gov/dctraining/TOP.html

20 Performance metrics  Couple existing computing benchmark programs with energy use  Computations/Watt  Energy Star interest
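Coupling a benchmark score with a power measurement is straightforward. A minimal sketch, where the throughput and wattage are hypothetical placeholders rather than results from any particular benchmark program:

```python
# Sketch of coupling a computing benchmark score with measured power
# to get computations per watt. Numbers are hypothetical placeholders.

def computations_per_watt(benchmark_ops_per_sec: float, avg_watts: float) -> float:
    """Benchmark throughput divided by average power draw during the run."""
    return benchmark_ops_per_sec / avg_watts

# Hypothetical server: 2.0e9 operations/second at 350 W average draw.
print(f"{computations_per_watt(2.0e9, 350.0):.3e} ops/sec per watt")
```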

21 “Air Management” demonstration. Goal: demonstrate better cooling and energy savings through improvements in air distribution in a high-density environment.

22 Demonstration procedure  Once the test area was isolated, air conditioner fan speed was reduced using the existing VFDs  Temperatures at the servers were monitored  IT equipment and fan energy were monitored  Chilled water temperatures were monitored  Hot aisle return air temperatures were monitored and ΔT was determined

23 Demonstration description  The as-found conditions were monitored: temperatures, fan energy, IT energy  An area containing two high-intensity rows and three computer room air conditioning units (approximately 175 W/sf) was physically isolated from the rest of the center

24 Demonstration description, cont’d  Two configurations were demonstrated  Air temperatures were monitored at key points  IT equipment and computer room air conditioner fan energy was measured  Chilled water temperature was monitored  Chilled water flow could not be measured

25 First alternate - cold aisle isolation

26 Second alternate - hot aisle isolation

27 Demonstration procedure  Once the test area was isolated, air conditioner fan speed was reduced using the existing VFDs  Temperatures at the servers were monitored  IT equipment and fan energy were monitored  Chilled water temperatures were monitored  Hot aisle return air temperatures were monitored and ΔT was determined

28 Fan energy savings – 75%. Since there was no mixing of cold supply air with hot return air, fan speed could be reduced.
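The 75% figure is consistent with the fan affinity laws, under which fan power scales roughly with the cube of speed. A quick idealized check (ignoring VFD and motor losses):

```python
# Fan affinity law check: power scales roughly as the cube of speed
# (idealized; real VFD and motor losses shift the numbers somewhat).

def fan_power_fraction(speed_fraction: float) -> float:
    """Fraction of full-speed fan power at a given speed fraction."""
    return speed_fraction ** 3

# Slowing fans to ~63% speed cuts power to ~25% -- a ~75% saving.
speed = 0.63
print(f"power at {speed:.0%} speed: {fan_power_fraction(speed):.0%} of full")
```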

29 Temperature variation improved

30 Better temperature control would allow raising the temperature in the entire data center. (Chart compares the ASHRAE recommended range with the ranges observed during the demonstration.)

31 Additional air system savings  Since air is not mixing, the overall supply air temperature could be raised (centers typically run below ASHRAE recommended minimums because of mixing)  Hotter air is returned to the computer room air conditioners; the larger temperature difference (ΔT) gives each air conditioning unit more capacity

32 Additional chilled water savings  For an entire center, it may be possible to raise the chilled water supply temperature, improving chiller efficiency, or to reduce chilled water flow, lowering pumping energy
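Both savings paths rest on the sensible-heat relation Q = ṁ·cp·ΔT: at fixed flow, capacity scales with ΔT. A rough air-side illustration, with hypothetical airflow and temperature values:

```python
# Sensible cooling capacity of an air stream: Q = m_dot * cp * dT.
# At sea level, a common shortcut is Q[BTU/hr] ≈ 1.08 * CFM * dT[°F].
# Airflow and temperatures below are hypothetical.

def cooling_capacity_kw(cfm: float, delta_t_f: float) -> float:
    """Approximate sensible capacity in kW for a given airflow and ΔT."""
    btu_per_hr = 1.08 * cfm * delta_t_f
    return btu_per_hr / 3412.0  # 1 kW ≈ 3412 BTU/hr

airflow = 10_000.0  # CFM through one CRAC unit (hypothetical)
for dt in (10.0, 20.0):  # return-minus-supply temperature difference, °F
    print(f"ΔT = {dt:>4.0f} °F -> {cooling_capacity_kw(airflow, dt):5.1f} kW")
# Doubling ΔT doubles capacity at the same airflow -- the point above.
```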

33 Reliability improvement  Better temperature uniformity in the cold aisle  Elimination of hot spots due to hot aisle air mixing  Potentially less wear and tear on air handlers and chilled water system equipment

34 Other air management improvement areas  Seal all unwanted openings in the floor  Blank off openings in racks  Training – the objective is cooling the equipment, not people comfort: hot air return is a good thing; ASHRAE’s recommended environmental conditions at the equipment inlet are generally above 68°F (so why are return temperatures set at 68°F and supply temperatures in the 50s?); recommended humidity ranges are broad

35 Encouraging use of outside air economizers  Goal: encourage use of outside air economizers where the climate is appropriate  Strategy: address data center professionals’ biggest concerns (contamination and humidity control), and quantify energy savings at one center

36 Preliminary work  Case studies of centers successfully using outside air  Literature review to determine failure mechanisms  Collaboration with the ASHRAE data center committee

37 Features of current work  Snapshot over several days at one time of the year  Continuous monitoring equipment in place at one center, with data collection over several months  Before-and-after capability at three centers  Additional savings summary from utility information for other centers

38 Calculating additional savings  Calculate estimated yearly savings at one site using utility data  Possible additional savings over the original calculated savings: raise the high-temperature cutoff to increase hours of operation (see the sketch below); improve hot/cold isolation to raise ΔT; remove the low-temperature cutoff limit; raise supply air temperature to ASHRAE guidelines; optimize chiller sequencing when using outside air
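A simple way to estimate the hours gained by raising the high-temperature cutoff is to count the hours in a local weather file that fall below each cutoff. A minimal sketch; the hourly temperatures and cutoffs below are placeholders, not site data:

```python
# Sketch of estimating economizer hours gained by raising the
# high-temperature cutoff. Hourly temperatures would come from a
# local weather file; the list and cutoffs here are placeholders.

def economizer_hours(hourly_temps_f: list[float], cutoff_f: float) -> int:
    """Hours in which outside air is cool enough to use directly."""
    return sum(1 for t in hourly_temps_f if t <= cutoff_f)

hourly_temps = [55.0, 62.0, 68.0, 71.0, 74.0, 77.0, 66.0, 58.0]  # placeholder
for cutoff in (65.0, 75.0):
    print(f"cutoff {cutoff:.0f} °F -> {economizer_hours(hourly_temps, cutoff)} "
          f"of {len(hourly_temps)} hours")
```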

39 Findings  Water-soluble salts in combination with high humidity can cause failures  Static electricity can occur with very low humidity  Humidity control for make-up air (or no controls at all) is usually sufficient  Particle concentrations with normal filtration are orders of magnitude lower than recommended limits

40 Contamination is orders of magnitude less at the servers

41 Particle concentrations by size

42 Economizer in service

43 Encouraging economizers – next steps  Analyze material captured on filters  “After” measurements at two centers  Estimates of savings  Final report  Collaboration with ASHRAE

44 DC powering data centers Goal: Show that a DC system could be assembled with commercially available components and measure actual energy savings – a proof of concept demonstration.

45 Data Center power conversions – AC voltage conversions (diagram): utility AC passes through the UPS (rectifier, battery/charger, inverter, with bypass), then into the server’s multi-output power supply (AC/DC with a PWM/PFC switcher producing regulated 12 V, 5 V, and 3.3 V outputs), and finally through DC/DC voltage regulator modules supplying 1.1–1.85 V and 1.5/2.5 V to the processor, SDRAM, graphics controller, memory controller, I/O, and internal/external drives.
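Losses at each stage of this chain compound multiplicatively: overall efficiency is the product of the stage efficiencies. A sketch with assumed per-stage numbers (illustrative assumptions, not measurements from this study):

```python
# Cascaded conversion losses: overall efficiency is the product of the
# stage efficiencies. The per-stage numbers are illustrative assumptions,
# not measurements from the LBNL demonstration.

from math import prod

ac_path = {
    "UPS (rectifier + inverter)": 0.90,
    "PDU / transformer":          0.98,
    "server AC/DC supply":        0.75,
    "DC/DC voltage regulators":   0.85,
}

overall = prod(ac_path.values())
for stage, eff in ac_path.items():
    print(f"{stage:<28} {eff:.0%}")
print(f"{'overall delivered':<28} {overall:.0%}")  # roughly 56% of input power
```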

46 Prior research illustrated large losses in power conversion: uninterruptible power supplies (UPS) and power supplies in IT equipment.

47 UPS and power supply efficiency  We observed a wide range of performance from the worst to the best  Our original goal was to move the market to the higher-performing systems  Incentive programs, labeling, and education programs were all options – and still are

48 UPS draft labeling standard  Based upon proposed European Standard  Possible use in incentive programs

49 Included in the demonstration  Side-by-side comparison of the traditional AC system with the new DC system – facility-level distribution and rack-level distribution  Power measurements at conversion points  Servers modified to accept 380 V DC  Artificial loads to more fully simulate a data center

50 Additional items included  Racks distributing 48 volts, to illustrate that other DC solutions are available (no energy monitoring was provided for this configuration)  DC lighting

51 Typical AC distribution today (480 V AC)

52 Facility-level DC distribution (480 V AC input, 380 V DC distribution)

53 Rack-level DC distribution (480 V AC input)

54 AC system loss compared to DC: 7–7.3% measured improvement; 2–5% measured improvement relative to a rotary UPS

55 Results  Facility-level overall efficiency improvement: 10 to 20%  Smaller rack-level efficiency improvement, but other benefits include: thermal benefits, a smaller power supply in the server, and a transition strategy for existing centers
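The facility-level gain follows from the same multiplicative model: the 380 V DC path replaces the UPS-plus-server-AC-supply chain with a single rectification step and a DC power supply. A sketch with assumed stage efficiencies (illustrative only; the measured improvement was 10–20%):

```python
# Comparing the AC and 380 V DC delivery paths with assumed stage
# efficiencies (illustrative assumptions, not the demonstration's
# measured values).

from math import prod

ac_path = [0.90, 0.80]  # UPS, server AC/DC supply (assumed)
dc_path = [0.94, 0.88]  # facility AC->380V DC rectifier, server DC supply (assumed)

ac_eff, dc_eff = prod(ac_path), prod(dc_path)
print(f"AC path efficiency: {ac_eff:.0%}")
print(f"DC path efficiency: {dc_eff:.0%}")
print(f"relative improvement: {dc_eff / ac_eff - 1:.0%}")  # ~15%, in the 10-20% range
```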

56 Implications could be even better for a typical data center  Redundant UPS and server power supplies operate at reduced efficiency  Cooling loads would be reduced  Both UPS systems used in the AC base case were “best in class” and performed better than benchmarked systems, so efficiency gains compared to typical systems could be higher  Further optimization of conversion devices/voltages is possible

57 Industry Partners in the Demonstration – equipment and services contributors: Alindeska Electrical Contractors, APC, Baldwin Technologies, Cisco Systems, Cupertino Electric, Dranetz-BMI, Emerson Network Power, Industrial Electric Manufacturing (IEM), Intel, Nextek Power Systems, Pentadyne, Rosendin Electric, SatCon Power Systems, Square D/Schneider Electric, Sun Microsystems, UNIVERSAL Electric Corp.

58 Other firms collaborated – stakeholders: 380voltsdc.com, CCG Facility Integration, Cingular Wireless, Dupont Fabros, EDG2, Inc., EYP Mission Critical, Gannett, Hewlett Packard, Morrison Hershfield Corporation, NTT Facilities, RTKL, SBC Global, TDI Power, Verizon Wireless

59 Picture of demonstration set-up

60 DC power – next steps  DC power pilot installation(s)  Standardize the distribution voltage  Standardize DC connectors and power strips  Server manufacturers develop a power supply specification  Power supply manufacturers develop prototypes  UL and communications certification

61 LBNL website: http://hightech.lbl.gov/datacenters/

62 Discussion/Questions?

