Clustered Systems Introduction

Presentation transcript:

Clustered Systems Introduction
Phil Hughes, CEO, Clustered Systems Company, Inc.
phil@clusteredsystems.com, 415 613 9264

Clustered Management
Phil Hughes, CEO; Robert Lipp, COO/CTO
- Business partners since 1998
- Invented and developed a revolutionary pumped-refrigerant cooling architecture (3 patents)
- Invented and developed a distributed switching architecture for a 2.5 Tb/s SONET/SDH switch
- Raised over $110M in venture funding
- Collectively hold 20 patents

Cooling System Development
- Built a prototype system with 2-phase contact cooling technology
- Licensed the technology to Liebert, Inc. for the XDS, which won Chill-Off II
- Built a 100kW rack installed at SLAC ($3M DoE grant; Intel 130W CPUs)

SLAC System Detail (photo with labels): Ethernet switches, 480VDC to 380VDC converter, 100kW chassis, PCIe switches, 80 ports

Chassis Construction (photo with labels): PDU, cold plates, dual server blade

Cooling System
- 2 rows of 16 cold plates for motherboard cooling: >1,200W per plate, 2.4kW per blade
- 4 plates for active plane cooling: 500W per plate
- Interoperable with the Liebert XD refrigerant-based cooling system
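
A quick back-of-the-envelope check on the chassis cooling budget implied by those per-plate figures, sketched in Python. The chassis-per-rack ratio at the end is an inference from the 100kW rack mentioned on the SLAC slide, not a number from the deck.

```python
# Chassis cooling budget from the per-plate figures above.
# Assumption: every plate runs at its quoted rating; real headroom will differ.

MOBO_PLATES_PER_CHASSIS = 2 * 16      # 2 rows of 16 cold plates
MOBO_PLATE_KW = 1.2                   # ">1,200W per plate" (treated as a lower bound)
ACTIVE_PLANE_PLATES = 4
ACTIVE_PLANE_PLATE_KW = 0.5           # 500W per plate

chassis_kw = (MOBO_PLATES_PER_CHASSIS * MOBO_PLATE_KW
              + ACTIVE_PLANE_PLATES * ACTIVE_PLANE_PLATE_KW)
print(f"Per-chassis cooling capacity: {chassis_kw:.1f} kW")        # 40.4 kW

# Inference only: the SLAC slide quotes a 100kW rack.
print(f"Chassis implied per 100kW rack: {100 / chassis_kw:.1f}")   # ~2.5
```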

Plate Thermal Testing

Blade and Cold Plate
- 22" x 6" active area
- >1.2kW cooling capacity per plate
- 2 plates per blade, 32 per chassis
- S2600JF motherboard
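
For context, a rough sketch of the areal heat flux those figures imply. It assumes uniform loading over the full 22" x 6" active area and exactly 1.2kW per plate (the slide quotes ">1.2kW" as a lower bound).

```python
# Average areal heat flux implied by the cold-plate figures above.
# Assumptions: uniform loading over the full active area, exactly 1.2 kW per plate.

IN2_TO_CM2 = 6.4516

area_cm2 = 22 * 6 * IN2_TO_CM2                             # ~852 cm^2
flux_w_per_cm2 = 1200 / area_cm2
print(f"Average heat flux: {flux_w_per_cm2:.2f} W/cm^2")   # ~1.4 W/cm^2
```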

Chassis Rear (photo with labels): exit manifold, switch 1, switch 2, coolant inlet manifold, switch cold plates

PCIe Switch Architecture (block diagram)
- 20 rows of CPU blades, 16 CPU blades per row
- Each row includes an optional switch blade with an 8696 PCIe switch, a COM module, a dual 10Gb Ethernet I/O module, and optional dual 10Gb Ethernet ports
- The diagram shows PCIe x4 and x8 links between the CPU blades, the row switch blades, and the external switches
- 8 or 16 external 20-port PCIe switch units interconnect the rows

PCIe Switch Board
- 16 PCIe 2.0 x4 links to servers
- 8 PCIe x4 links to external switches
- 16 x 1GbE from servers, 2 external ports
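
A small port-budget sketch tying the switch-board counts to the system-level diagram. The idea that each row lands one x4 uplink on every external switch in the 8-unit configuration is an inference from the numbers, not something stated in the deck.

```python
# Port-budget check for the PCIe fabric described above.

ROWS = 20                      # 20 rows of CPU blades
BLADES_PER_ROW = 16            # 16 PCIe 2.0 x4 downlinks per switch board
UPLINKS_PER_ROW = 8            # 8 PCIe x4 links to external switches
EXTERNAL_UNITS = 8             # "8 or 16" external units; 8 modeled here
PORTS_PER_EXTERNAL_UNIT = 20   # 20-port external PCIe switch units

uplinks_needed = ROWS * UPLINKS_PER_ROW
external_ports = EXTERNAL_UNITS * PORTS_PER_EXTERNAL_UNIT
print(f"Row uplinks: {uplinks_needed}, external switch ports: {external_ports}")
# 160 vs 160: the counts balance exactly, consistent with one uplink from
# every row to every external unit (an inference, as noted above).

print(f"Total CPU blades: {ROWS * BLADES_PER_ROW}")   # 320
```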

Recommended Cooling System
- Diagram: refrigerant lines run between the racks and a Liebert XDP; cooling water connects the XDP to a dry/adiabatic cooler
- May also use CRAH return water
- Due to the low CPU-to-refrigerant thermal resistance, 30°C water provides sufficient cooling
- One Liebert XDP can cool 2-3 racks

Competition
Water Closeteers (in-rack water loops):
- Circulate water in the rack; all have server-to-rack liquid connectors
- Expensive (if reliability is required)
- 20-30kW/rack, fan assist
- Players: IBM, Bull, Fujitsu, Asetek, CoolIT, Cray, SGI, Cool Flo, Hitachi, Eurotech, etc.
Dunkers (immersion):
- Servers are immersed in a dielectric fluid
- 20-30kW/rack
- Players: Green Revolution, Iceotope, LiquidCool

2-Phase Contact Cooling Offers:
HPC:
- Very high power density, which eases communication
- 0.3 PFLOPS/rack now (100kW); 0.75 PFLOPS (200kW) feasible with today's GPUs
- "Exascale enabler" (John Gustafson); "Game Changer" (Jack Pouchet, VP Exascale, Emerson)
Data Center:
- Very high power density, which allows DCs to be much smaller: FB Prineville 75 W/sq ft vs. a CSys rack at 4,000 W/sq ft
- PUE of 1.07 (mech & elec) measured at SLAC
- No air movement, so systems can be placed virtually anywhere
- Systems are totally silent (no more OSHA issues)
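
A brief sketch of the arithmetic behind the density and PUE claims. The 1 MW reference IT load is an assumption used only to make the comparison concrete.

```python
# Arithmetic behind the density and PUE claims above.
# Assumption: a 1 MW IT load, chosen only to make the comparison concrete.

IT_LOAD_W = 1_000_000

# Floor area needed at the two quoted power densities.
for label, w_per_sqft in [("Air-cooled (FB Prineville)", 75),
                          ("Clustered Systems rack", 4000)]:
    print(f"{label}: {IT_LOAD_W / w_per_sqft:,.0f} sq ft per MW of IT load")

# PUE = total facility power / IT power, so PUE 1.07 means the facility
# adds only 7% overhead on top of the IT load.
pue = 1.07
overhead_kw = IT_LOAD_W * (pue - 1) / 1000
print(f"Facility overhead at PUE {pue}: {overhead_kw:.0f} kW per MW of IT load")
```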

Advantage Summary

Clustered                    | Air
-----------------------------|-----------------------------------
Cuts DC CAPEX up to 50%      | High CAPEX
Cuts OPEX up to 30%          | 100% OPEX
PUE ~1.07                    | Best PUE ~1.15 (incl. server fans)
3-month lead times           | 2-year lead times (high risk)
Pay as you go                | Costs 90% up front
3-6 year depreciation        | Up to 39-year depreciation
Low maintenance              | High maintenance (fans, filters)
Small or existing building   | Giant buildings
Simple                       | Complex
Easily handles GPUs          | GPUs need heroic efforts
Disruptive technology        | Known, many practitioners
Narrow industry support      | 90% of products are air only

Change will happen, but it will have to be driven from the top down.

By the Numbers

CAPEX ($K/MW)        | Free Cooling | Air w/ CRAH | Air w/ rear door | Clustered
---------------------|--------------|-------------|------------------|----------
Data room (W/sq ft)  | 93           | 150         | 430              | 2,500
Built area (sq ft)   | 10,714       | 8,000       | 2,784            | 400
Nr of servers        | 2,857        | 2,857       | 2,857            | 3,200
DC construction      | -            | $2,000      | $696             | $32
Electrical system    | -            | $1,000      | $910             | $537
Mechanical           | -            | $791        | $1,050           | $342
Cabinets             | -            | $250        | $390             | $1,000
Other                | -            | $1,051      | $792             | $344
Total CAPEX          | $7,500       | $5,091      | $3,839           | $2,255
Per server ($K)      | $2.63        | $1.78       | $1.34            | $0.70

OPEX ($K/MW)         | Free Cooling | Air w/ CRAH | Air w/ rear door | Clustered
---------------------|--------------|-------------|------------------|----------
Amortization         | $300         | $422        | $372             | $295
Power cost           | $1,008       | $1,175      | $1,105           | $947
Maintenance          | $1,084       | $1,376      | $1,250           | $985
Total OPEX           | $2,392       | $2,974      | $2,727           | $2,227
Per server ($K)      | $0.84        | $1.04       | $0.95            | $0.70
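
The per-server figures in the table can be recovered from the totals and server counts; a short sketch that reproduces them:

```python
# Recompute the per-server figures from the "By the Numbers" table above,
# confirming the table's internal arithmetic.

options = {
    # name: (number of servers, total CAPEX in $K/MW, total OPEX in $K/MW)
    "Free Cooling":     (2857, 7500, 2392),
    "Air w/ CRAH":      (2857, 5091, 2974),
    "Air w/ rear door": (2857, 3839, 2727),
    "Clustered":        (3200, 2255, 2227),
}

for name, (servers, capex, opex) in options.items():
    print(f"{name:16s} CAPEX/server ${capex / servers:.2f}K   "
          f"OPEX/server ${opex / servers:.2f}K")
# Matches the table: 2.63/0.84, 1.78/1.04, 1.34/0.95, 0.70/0.70 ($K per server).
```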

Summary
- Installed cost significantly less than traditional data centers (includes building and infrastructure)
- Enables increased investment in hardware
- Low TCO: cooling costs, maintenance, amortization, etc.
