Vette LiquiCool™ Solution


1 Vette LiquiCool™ Solution
Rob Perry, Executive Manager
Arlene Allen, Director, Information Systems & Computing, University of California Santa Barbara

2 Data Center Trends - Exponential Growth
Explosive demand for services is driving Data Center spend:
31 Billion Google searches every month – 10X growth in the last 2 years
1 Billion Internet devices today – 1,000X growth since 1992
The number of daily text messages exceeds the world population
[Chart: Installed volume servers (000s), USA, 2000–2010 – 3X growth. Source: EPA 2007 Report to Congress]

3 Data Center Trends - Staggering Energy Consumption and Cost of Energy
Energy unit price has increased an average of 4% YOY in the USA and 11% YOY globally
Data Center energy consumption is growing by 12% annually
Source: EPA 2007 Report to Congress

4 Data Center Trends – Operating Expense Exceeds Capital Expense in Less Than 1 Year
Data Center facility costs are growing 20% vs. IT spend of 6%
Operating costs over the lifetime of a server are ~4X the original purchase cost
Cooling infrastructure can consume up to 55% of Data Center energy
Source: Belady, C., "In the Data Center, Power and Cooling Costs More than the IT Equipment it Supports", Electronics Cooling Magazine (Feb 2007)

5 Increasing Carbon Footprint
Today, the average Data Center consumes energy equivalent to 25,000 houses
90% of large Data Centers will require more power and cooling in the next 3 years
Without changes, Data Center greenhouse emissions are predicted to quadruple by 2020
Source: McKinsey & Company

6 UCSB – "The Problem"
UCSB's existing Data Center is being renovated for research computing, forcing the corporate/miscellaneous IT equipment into a new space. This new space was not designed to be a Data Center: the footprint is small, power is limited by the existing building wiring, and a traditional air-cooling topology is not feasible. These space limitations require the load density to increase from a typical 6kW or less per rack to 10-16kW per rack.

7 LiquiCool - “The Solution”
Move the corporate/miscellaneous IT racks into the new space
Tap into the existing building chilled water system
Install Vette LiquiCool Rear Door Heat Exchangers on every rack
Install a Vette LiquiCool Coolant Distribution Unit for a secondary loop
Install a rack-mount UPS in every rack

8 LiquiCool - “The Solution”
LiquiCool™ – A complete cooling solution for the consolidation and scale-out of compute infrastructure in today's sustainable Data Centers
Reduces white space requirements by more than 55%
Cuts cooling energy consumption by 50% or more compared to traditional air-cooled Data Centers
Allows 8X the amount of compute power in a typical IT enclosure
Lowers carbon footprint by 50% or more vs. air-cooling
Bottom Line: Payback in less than 1 year compared to traditional computer room air-conditioning

9 LiquiCool - How does it work?
Based on IBM IP & technology licenses (>30 years of water-cooling experience)
The Rear Door Heat Exchanger (RDHx) replaces the existing rear door of the IT enclosure
The RDHx has chilled-water Supply & Return quick connections at the bottom OR top
A raised floor becomes optional
Chilled water circulates through the tube+fin coil from the Supply connection
Equipment exhaust air passes through the coil and is cooled before re-entering the room
[Diagram: fin + tube heat exchanger at the rear of the enclosure; cold supply water in, heated water out]
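The heat transfer described above is straightforward sensible cooling of the exhaust air. As a rough illustration only (the 2,500 CFM airflow is a hypothetical assumption, not a Vette or IBM specification; the temperatures reuse the door-open/door-closed readings quoted later in this deck), a short sketch of the air-side heat balance:

```python
# Rough air-side heat balance for a rear-door coil (illustrative only).
# The 2,500 CFM airflow is a hypothetical assumption; the temperatures reuse
# the door-open / door-closed readings quoted later in this deck.

RHO_AIR = 1.2    # kg/m^3, air density near room conditions
CP_AIR = 1005.0  # J/(kg*K), specific heat of air

def coil_heat_removed_kw(airflow_cfm: float, t_in_c: float, t_out_c: float) -> float:
    """Sensible heat absorbed by the coil from the server exhaust air."""
    airflow_m3s = airflow_cfm * 0.000471947   # CFM -> m^3/s
    mass_flow_kg_s = airflow_m3s * RHO_AIR
    return mass_flow_kg_s * CP_AIR * (t_in_c - t_out_c) / 1000.0

print(f"{coil_heat_removed_kw(2500, 38.9, 23.5):.1f} kW")  # ~21.9 kW removed
```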

10 LiquiCool System
Passive RDHx provides 100% sensible cooling – no condensation, no need for reheat or humidification
CDU creates a fully isolated, temperature-controlled Secondary Loop: temperature 10-17°C (50-63°F), water pressure 30-70 psi
Chilled water source – city water, building chilled water, packaged chiller…: temperature 7°C (45°F), water pressure: — psi
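Because the coil runs above the room dew point, the cooling stays purely sensible and no condensate forms. A minimal sketch of the dew-point check, assuming a hypothetical room condition of 24°C at 40% RH (not a figure from the deck) and the standard Magnus approximation:

```python
import math

def dew_point_c(temp_c: float, rh_percent: float) -> float:
    """Dew point via the Magnus approximation (good to a fraction of a degree indoors)."""
    a, b = 17.62, 243.12
    gamma = math.log(rh_percent / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

# Hypothetical room condition: 24 C (75 F) at 40% relative humidity.
print(f"Dew point ~{dew_point_c(24.0, 40.0):.1f} C")  # ~9.6 C

# Keeping the loop supply at or above the room dew point keeps the coil
# surface dry, so no condensate pans, drains or reheat are needed.
```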

11 RDHx - External View
Passive: no electrical connections, no moving parts, no fans, no power, no noise
Attaches to the rear of the enclosure – no need to rearrange racks
Does not consume valuable floor space; adds only 4-6" to the rear
Close-coupled – neutralizes heat at the source
[Photos: top-feed and bottom-feed connection options]

12 RDHx - Internal View
[Callouts: protective barrier, air-bleed valves, bottom-feed hose connections and drain valve, tube & fin coil]

13 Thermal Image - Before & After 100% Heat Neutralization

14 RDHx Cooling in Action
Temperature readings taken at the rear of a fully populated enclosure
Rear Door Heat Exchanger door open – server leaving temp: 102°F (38.9°C)
Rear Door Heat Exchanger door closed – server leaving temp: 74°F (23.5°C)
The RDHx reduces leaving temperature by 28°F (15.4°C)!

15 RDHx is Compatible with most major IT Enclosures
Industry-standard enclosure
Mount transition frame (if needed)
Remove existing rack rear door & hinges

16 RDHx General Specifications
Max. Cooling Capacity: 33kW
Coolant: chilled water (above dew point)
Dimensions: 76.6" H x 4.6" D x 23.6" W (1945mm x 117mm x 600mm)
Weight (empty): 63lbs (29kg)
Liquid Volume: 1.5 Gallons (5.7 Liters)
Liquid Flow Rate: 6-10 GPM (23-38 L/min)
Head Loss: 7 psi (48 kPa) at 10 GPM (38 L/min)
System Input Power: none required
Noise: none
Couplings: drip-free stainless steel quick connects
Connection Location: bottom or top feed
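As a back-of-envelope sanity check on these ratings (not a manufacturer calculation), the water-side heat balance below estimates the coolant temperature rise implied by carrying the full 33kW at the quoted 10 GPM flow:

```python
# Water-side heat balance Q = m_dot * c_p * dT, used as an illustrative check
# on the 33kW rating at the quoted 10 GPM (38 L/min) flow rate.

CP_WATER = 4186.0  # J/(kg*K)

def water_delta_t_k(q_kw: float, flow_gpm: float) -> float:
    """Coolant temperature rise needed to carry q_kw at flow_gpm."""
    flow_kg_s = flow_gpm * 3.78541 / 60.0  # GPM -> L/s; ~1 kg per litre of water
    return q_kw * 1000.0 / (flow_kg_s * CP_WATER)

print(f"dT ~ {water_delta_t_k(33.0, 10.0):.1f} K")  # ~12.5 K water temperature rise
```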

17 Coolant Distribution Unit (CDU)
Water-to-water heat exchanger with pumps, controls and chilled water valve
Creates an isolated secondary cooling loop
100% sensible cooling, no condensation
Small water volume (tens of gallons) makes it easier to control water quality
Redundant, fault-tolerant design
120kW or 150kW capacity; supports 6-12 RDHx
Optional internal manifold for quick expansion
SNMP & ModBus communications
Power Consumption: 2.6 kW
Pump Capacity: 63 GPM at 30 psi (240 L/min at 207 kPa)
Primary Head Loss: ~10 psi at 63 GPM (70 kPa at 240 L/min)
Minimum Approach Temperature (100% load, 63 GPM / 240 L/min on primary and secondary): 120kW unit – 12°F (6.7°C); 150kW unit – 8°F (4.4°C)
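The approach temperature sets how warm the isolated secondary loop runs relative to the building chilled water. A small illustrative calculation using the approach figures quoted on this slide and the 7°C (45°F) chilled-water supply quoted earlier:

```python
# Secondary supply temperature = primary supply + CDU approach temperature.
# Uses the approach temperatures quoted on this slide and the 7 C (45 F)
# chilled-water supply quoted earlier; purely illustrative arithmetic.

PRIMARY_SUPPLY_C = 7.0  # building chilled water supply

for model, approach_c in [("120kW CDU", 6.7), ("150kW CDU", 4.4)]:
    secondary_c = PRIMARY_SUPPLY_C + approach_c
    print(f"{model}: secondary supply ~{secondary_c:.1f} C at 100% load")

# 120kW CDU -> ~13.7 C, 150kW CDU -> ~11.4 C, both within the 10-17 C
# (50-63 F) loop temperature range quoted on the earlier system slide.
```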

18 CDU Simplified

19 Floor-mount CDU Internal - Front
[Callouts: controller, brazed plate heat exchanger, inverter drive, redundant valves, reservoir tank, redundant variable-speed pumps, casters and drain]

20 Floor-mount CDU Internal - Rear
[Callouts: optional secondary-loop distribution manifold, primary-side water filter, primary supply and return connections, optional secondary-loop flex tails]

21 Hose Kits & External Manifolds
Connects to the flex tails on the CDU secondary side via ISO B or sweated connections
Each Vette Hose Kit consists of a flexible Supply hose and a Return hose
Factory assembled and tested to IBM specifications and standards
Quick-connect drip-free couplings on one end OR both ends
Straight hoses for raised-floor environments, right-angle hoses for non-raised-floor environments
Standard lengths from 3ft. to 50ft.; standard & custom configurations

22 Treatment of Cooling Water
Potential effects of non-treatment: loss of heat transfer, reduced system efficiency, reduced equipment life, equipment failures or leaks
Failure mechanisms: scale, fouling, microbiological growth, corrosion
De-ionized water without inhibitors is corrosive!

23 Scenario I – Out of Space
Add RDHx – double your load per rack
Eliminate CRAC units
56% recovery of white space!

24 Scenario II – Out of Power/Capacity
Add RDHx
Remove (2) CRAC units
Reduces cooling energy consumption to free up capacity for growth

25 Scenario III – High Density
CRAC units can typically provide efficient environmental control for rack densities of up to 5kW per rack
Adding RDHx allows 8X the compute power!

26 Reference Sites
Warwick University, Coventry, UK:
Rack Entering Air Temperature (EAT): 80°F, 30% RH – within the revised ASHRAE TC9.9 recommended operating range
RDHx Entering Water Temperature (EWT): 45°F
RDHx Water Flow Rate: 10 GPM
National Center for HPC, Taiwan

27 Reference Sites
Georgia Tech Super Computing Facility – 12 racks at ~24kW each
[Photos: front view and rear view]

28 Silicon Valley Leadership Group Case Study - Modular Cooling Systems
Sun Microsystems hosted the assessment of four commercially available modular cooling systems for data centers. The four systems were installed at Sun Microsystems' data center in Santa Clara, California: APC InRow RC, IBM/Vette RDHx, Liebert XD and Rittal LCP+. LBNL (Lawrence Berkeley National Laboratory) initiated the study to investigate the energy implications of commercially available modular cooling systems compared to those of traditional data centers. Each rack density was 10kW.
RDHx Coefficient of Performance (COP, the ratio of cooling provided by the module to the power required to drive the module) varied from 64.4 to 229.0, compared to the other systems' values starting at 4.7.
RDHx power index (PI, the ratio of the cooling module's power demand to the compute load) was well below the competitors' figures of 0.10 to 0.20, with the Liquid Cooled Rack at over 0.40.
The RDHx COP & PI values indicate dramatically higher energy efficiency than the other systems.
The results were released on July 1, 2008, during the Silicon Valley Leadership Group (SVLG) Data Center Energy Summit in Santa Clara, California.
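The two study metrics are simple ratios, restated below as a minimal sketch; the example inputs are hypothetical placeholders, not measurements from the LBNL report:

```python
# COP and power index (PI) as defined in the SVLG/LBNL study write-up above.
# The example inputs below are hypothetical, not measured values from the report.

def cop(cooling_kw: float, cooling_module_power_kw: float) -> float:
    """Cooling delivered per unit of power drawn by the cooling module."""
    return cooling_kw / cooling_module_power_kw

def power_index(cooling_module_power_kw: float, compute_load_kw: float) -> float:
    """Cooling-module power demand relative to the IT load it supports."""
    return cooling_module_power_kw / compute_load_kw

# Hypothetical 10kW rack: a passive RDHx draws no power itself, so the power
# attributed to it is only the pumping energy allocated by the CDU (assumed 0.1kW).
print(cop(10.0, 0.1))          # 100.0 -- inside the 64.4-229.0 range reported for RDHx
print(power_index(0.1, 10.0))  # 0.01  -- well below the 0.10-0.20 of the other systems
```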

29 SVLG “Chill Off” Results
Vette's LiquiCool™ solution led the field in cooling capacity and in cooling efficiency!

30 LiquiCool - Conclusive Savings for Energy, Space & Cost
The largest share of Data Center OPEX growth is power & cooling related
The cost of energy for cooling is a large (and growing) cost component
Data Center consolidation, virtualization and advanced hardware technology are driving higher power densities per rack and the associated white space constraints
Traditional air-cooling is less and less feasible at these densities
Purchasing decisions can no longer be made solely on CAPEX – TCO must be a core consideration
Value Summary:
Reduces white space requirements by more than 55%
Cuts cooling energy consumption by 50% or more compared to traditional air-cooled Data Centers
Allows 8X the amount of compute power in a typical IT enclosure
Lowers carbon footprint by 50% or more vs. air-cooling
Bottom Line: Payback in less than 1 year compared to traditional computer room air-conditioning
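The payback claim reduces to a ratio of incremental capital cost to annual cooling-energy savings. The sketch below only illustrates that arithmetic; every input is a hypothetical placeholder rather than a Vette or UCSB figure, and real results depend entirely on site-specific costs and loads:

```python
# Simple-payback arithmetic behind a "payback in less than 1 year" style claim.
# All inputs are hypothetical placeholders, not Vette or UCSB figures.

def simple_payback_years(incremental_capex_usd: float,
                         cooling_kwh_saved_per_year: float,
                         usd_per_kwh: float) -> float:
    """Years to recover the extra capital from avoided cooling energy."""
    annual_savings_usd = cooling_kwh_saved_per_year * usd_per_kwh
    return incremental_capex_usd / annual_savings_usd

# Hypothetical example: $5,000 of cooling hardware per rack, 35,000 kWh of
# cooling energy avoided per rack per year, $0.15 per kWh.
print(f"{simple_payback_years(5_000, 35_000, 0.15):.2f} years")  # ~0.95 years
```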

31 End Thank You

