Transforming B513 into a Computer Centre for the LHC Era — Tony Cass


2 Tony Cass B513 Ground Floor Layout [Floor plan showing the Offices, OPS/UCO, Machine Room, Delivery area and Barn.]

3 Tony Cass Machine Room Use Today. Total floor area: 1400 m². [Floor map showing areas for: Network, Oracle, AFS & CSF, NICE, VXCERN and Mail Services, PC Farms, SHIFT, STK Silos, ATLAS & HPPLUS, Access, AS, Web/ASIS, AFS and CORE Networking, DELPHI, DXPLUS and RSPLUS/BATCH, Operations.]

4 Tony Cass What will we have in 2005?
 Computing farms
  – For ATLAS, CMS, ALICE and LHCb
  – For other experiments
 Will still need
  – Networking Infrastructure
  – “Home Directory” servers
  – Database Servers
  – Administrative Servers
 What else?
 What will an LHC Computing Farm look like?

5 Tony Cass CMS Offline Farm at CERN circa 2006 (lmr for Monarc study, April 1999). [Architecture diagram: 0.5 M SPECint95 across 5,600 processors in 1,400 boxes, organised as 160 clusters and 40 sub-farms; 0.5 PByte of disk on 5,400 disks in 340 arrays; 100 tape drives; farm network, storage network and LAN-WAN routers, with link speeds ranging from 0.8 Gbps (DAQ) up to 960 Gbps aggregate.]
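The totals on this diagram imply some handy per-unit figures. The sketch below is simply arithmetic on the slide's numbers, not part of the original Monarc study:

    # Rough per-unit figures implied by the Monarc-study totals above
    # (a back-of-the-envelope check, not from the original slides).
    processors = 5_600
    boxes      = 1_400
    clusters   = 160
    disks      = 5_400
    arrays     = 340
    capacity_specint95 = 0.5e6      # 0.5 M SPECint95
    disk_capacity_pb   = 0.5        # 0.5 PByte

    print(processors / boxes)                 # 4 processors per box
    print(boxes / clusters)                   # ~8.75 boxes per cluster
    print(capacity_specint95 / processors)    # ~89 SPECint95 per processor
    print(disk_capacity_pb * 1e6 / disks)     # ~93 GB per disk
    print(disks / arrays)                     # ~16 disks per array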

6 Tony Cass Layout and power — CMS or ATLAS (lmr for Monarc study, April 1999). [Floor layout, 18 metres × 24 metres: processors 200 m², tapes 120 m², disks 50 m²; per-area power loads of 245 kW and 14 kW shown (one figure lost in extraction); totals 400 kWatts and 370 m².]

7 Tony Cass Space and Power Needed
 Assume 5 Farms
  – 2000 m² floor space
  – 2MW power
 Plus same space as today for everything else.
 Space available in B513:
  – Computer Room: 1,400 m²
  – Vault: 1,050 m²
  – MG Room: 200 m²
  – Barn: 650 m²
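A quick cross-check of the last two slides, assuming all five farms match the CMS/ATLAS per-farm figures (the per-farm values and room areas come from the slides; the comparison itself is added here):

    # Five farms at the Monarc-study per-farm figures versus B513 floor space.
    per_farm_area_m2  = 370
    per_farm_power_kw = 400
    farms = 5

    farm_area  = farms * per_farm_area_m2     # 1850 m2, rounded up to ~2000 m2 on the slide
    farm_power = farms * per_farm_power_kw    # 2000 kW = 2 MW

    available_m2 = {"Computer Room": 1400, "Vault": 1050, "MG Room": 200, "Barn": 650}
    print(farm_area, farm_power, sum(available_m2.values()))   # 1850 2000 3300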

8 Tony Cass Electricity Supply
 Transformers in B513 limit supply to 2MW
  – Replacing these would be expensive!
 2MW can be distributed to the machine room, but
  – most services do not have an independent supply:
    » power lines run “vertically” but
    » services are arranged “horizontally”
  – Emergency Power Off tests are required each year
 How much work is required to fix these problems?
  – Remove Emergency Power Off? (only Local Power Off required)
  – Rearrange services “vertically”?
  – See later for a discussion of fire detection.

9 Tony Cass UPS
 Installed in 1992
  – Consists of 3 modules, each rated at 400kVA; a 4th is available but no batteries are installed.
  – Autonomy is 10 minutes at rated load, but around 30 minutes with the current load.
  – Battery life is 8 years… Replacement is indicated now (some corrosion visible); cost estimated at 220K.
    » But we could survive with just two modules… 150K, or 80K for cheap batteries with a 5 year life.
 What autonomy is required in 2005?
  – Auto transfer between the Swiss and French supplies has been in operation since January this year (only!).
    » Do we need more than 5–10 minutes autonomy?
  – Are the Swiss and French supplies really independent?
  – 2MW diesel backup would require a dedicated installation for IT
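A hedged sketch of what the autonomy figures above imply. If battery run-time scaled roughly inversely with load (a simplification; real battery discharge curves are not linear), the current load would be around a third of the UPS rating:

    # Illustrative only: inverse scaling of autonomy with load is an assumption,
    # not a statement from the talk.
    modules = 3
    module_rating_kva = 400
    rated_load_kva = modules * module_rating_kva    # 1200 kVA

    autonomy_at_rated_min = 10
    autonomy_observed_min = 30

    implied_load_kva = rated_load_kva * autonomy_at_rated_min / autonomy_observed_min
    print(implied_load_kva)   # ~400 kVA under the inverse-scaling assumption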

10 Tony Cass Air Conditioning — Total Load
 2MW power means 2MW of heat to remove!
 But we also need to remember:
  – B513 offices and B31: 500kW
  – The Sun heats the machine room as well: 200kW
  – Equipment overhead: 500kW
 Total cooling load of 3.2MW, compared to a capacity of 2.4MW today
  – Need to add a new water chiller on the roof. Space is available but this will cost at least 350KCHF
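The cooling-load arithmetic behind this slide, spelled out (all figures are from the slide; only the summation is added here):

    loads_kw = {
        "computing equipment":            2000,
        "B513 offices and B31":            500,
        "solar gain on the machine room":  200,
        "equipment overhead":              500,
    }
    total_kw = sum(loads_kw.values())               # 3200 kW = 3.2 MW
    capacity_today_kw = 2400                        # 2.4 MW
    print(total_kw, total_kw - capacity_today_kw)   # shortfall of 800 kW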

11 Tony Cass Air Conditioning — Where to cool?
 The cooling capacity for the combined machine room and barn area could be increased from 1.6MW to 2.1MW.
  – This requires increasing
    » the air flow and
    » the temperature difference between incoming and outgoing air.
 We can install up to 500kW of cooling in the vault.
  – Total capacity is limited by the ceiling height.
  – Installation would cost ~600KCHF
    » and take 1 year from giving the go-ahead to ST.
 MG room capacity should be reduced to 140kW
  – Combine with vault cooling to reduce equipment space?
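Why both air flow and temperature difference matter: sensible cooling capacity goes roughly as Q = rho * flow * cp * dT. The sketch below is illustrative only; the air-flow and delta-T values are placeholders, not figures from the talk, and only the 1.6 MW to 2.1 MW target comes from the slide:

    rho_air = 1.2       # kg/m^3
    cp_air  = 1.005     # kJ/(kg K)

    def cooling_kw(flow_m3_per_s, delta_t_k):
        """Sensible heat removed by an air stream, in kW."""
        return rho_air * flow_m3_per_s * cp_air * delta_t_k

    # Example: ~133 m^3/s at a 10 K difference gives ~1.6 MW; raising both
    # the flow and the delta-T moves the capacity towards 2.1 MW.
    print(cooling_kw(133, 10))    # ~1600 kW
    print(cooling_kw(145, 12))    # ~2100 kW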

12 Tony Cass Fire Detection
 We have one Very Early Smoke Detection (VESDA) system which samples air extracted from the machine room.
  – A second unit should be installed for redundancy.
 Localised fire detection linked to an automatic cut of the electricity supply is possible. Do we want this?
  – Does it require work on the general power distribution, or can we use the local distribution boxes (and circuit breakers) we are now putting on each rack?
  – We should not use closed racks, though, as this creates problems for cooling.
 Fire detection in the false floor void should be improved.

13 Tony Cass The False Floor — I
 The state of the false floor void is unacceptable! [Photos showing a coffee stirrer, cable ends, 30 years of dust accumulation and general rubbish.]

14 Tony Cass The False Floor — II
 In addition to all the rubbish, there are many cables that should be removed. Particular problems are
  – the many obsolete FDDI fibres arriving at the backbone area, and
  – the mess in the old CS/2 area—which is where we are planning to install many PCs…
 There are no more shutdowns. Can cables be removed
  – at any time?
  – outside “accelerator operation and conference season” periods?
  and at what risk to services?
 What powered equipment is installed in the void?

15 Tony Cass Between 2000 and 2005
 The planned LHC Testbed would lead to
  – 200 PCs being installed at the end of 2000,
  – 300 more PCs being installed at the end of 2001, and
  – 1000 more PCs being installed at the end of
 At current packing densities, the testbed requires some 25 m² this year, 40 m² in 2001 and 135 m² in
  – But this doesn’t include space for any extra silos…
 Fortunately, the PC area will shrink as screens are removed, so we can probably cope for 2000/2001.
  – But this assumes no-one else wants more space!
 On current plans, LEP equipment isn’t going to be removed until the end of
  – This, particularly the disks, occupies a fair bit of space.
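A hedged reading of the testbed numbers: if the quoted areas are for each year's new PCs (an assumption, since the slide could also mean cumulative totals), the implied packing density is fairly constant:

    # (PCs added, m2 quoted) per installation step, taken from the slide.
    increments = [(200, 25), (300, 40), (1000, 135)]
    for pcs, area in increments:
        print(pcs, area, round(pcs / area, 1))   # ~7.4 to 8 PCs per m2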

16 Tony Cass Racking Options and Density
 Air-conditioning considerations exclude a mezzanine floor in the machine room.
 Today’s racks are ~2m high. Packing density would double if we used taller racks. However,
  – this isn’t true for tape robotics,
  – access considerations probably require wider gangways,
  – such racks must be placed under the air extraction ducts (which run “vertically”), and
  – standard 19” racks are not 4m tall.
 PC density could be increased by moving to 1U systems or special solutions, but
  – standard 19” racks are not 4m tall,
  – special solutions limit potential suppliers and flexibility, and
  – what are the cost implications?
    » 10,000 PCs * 100 CHF/PC = 1 MCHF
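The cost estimate and the density-doubling claim above, spelled out as straight arithmetic on the slide's figures (the 100 CHF is the per-PC premium quoted on the slide, whatever it covers in practice):

    pcs = 10_000
    extra_cost_per_pc_chf = 100          # per-PC premium from the slide
    print(pcs * extra_cost_per_pc_chf)   # 1_000_000 CHF = 1 MCHF

    rack_height_today_m = 2
    taller_rack_m = 4
    print(taller_rack_m / rack_height_today_m)   # x2 packing density, in principle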

17 Tony Cass Anything else?
 Anything we’ve forgotten?
 When will we know exactly which bits of LHC computing equipment will be installed where?
 Will other people want to install equipment in the Computer Centre? “Other people” means both
  – other CERN Divisions and
  – experiments.

18 Tony Cass Conclusions
 Either the Barn or the Vault must be converted for use as a machine room area.
 Additional chilled water capacity is required, at a cost of at least 350KCHF.
 UPS:
  – Existing batteries must be replaced very soon.
  – A solution providing 10 minutes autonomy for 2MW is required by
 Fire detection should be improved.
  – Automatic power off on localised smoke detection is probably desirable.

19 Tony Cass How to get there from here?
 Either the Vault or the Barn must be equipped as a machine room area during
  – Converting the barn would be cheaper, but requires moving the PC, Mac and Printer workshops, offices for WMData staff and EP labs.
 Space has to be managed.
  – We need carefully thought out plans for service evolution and equipment installation and removal schedules.
    » Precision at box level is not possible, but what is the planning for, e.g., the SL/LEP machine databases? And LHC?

20 Tony Cass Questions to be answered
 What are your space needs? How will they change?
 Can we afford to pay for flexibility?
  – Converting the vault now (and using the barn later) would cost ~1MCHF but would provide much greater flexibility than just using the machine room and barn.
 Can we manage without flexibility?
  – Equipment has been moved around to clear space in the past. Do we want to do this in future?
    » Can we? There are no Christmas Shutdowns any more.