A New Vision for the Computer Centre — Tony Cass

2 Tony Cass The Team
- Tony Cass, Dave Underhill, Mario Vergari, Dick Minchin and Tim Whibley.
- Anne Funken—ST Project Manager and electrical planning.
- Jukka Lindroos—Air conditioning planning.
- Roland Bachelard—Historical information about B513.
- Nigel Baddams—Civil engineering consultancy.

3 Tony Cass Objectives
[Figure: floor plan of the current machine room, with areas labelled ATLAS & HPPLUS, Network, Oracle, AFS & CSF, NICE, VXCERN and Mail Services, PC Farms, SHIFT, STK Silos, PC Farms, Access, AS Web/ASIS, AFS and CORE Networking, DELPHI, DXPLUS and RSPLUS/BATCH, Operations.]
Turn this into a computer centre for LHC-era computing. Estimate of LHC infrastructure needs:
- a raised floor area of at least 2000 m²
- equipment with a total power consumption of 2MW.
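Taken together, these two figures imply an average design power density of roughly (a figure derived here for orientation, not quoted on the slide):

\[
\frac{2\,\mathrm{MW}}{2000\,\mathrm{m^2}} = 1\,\mathrm{kW/m^2}
\]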

4 Tony Cass Requirements (Logical Order)
- Additional space (at least 1000 m²)
- Additional (and more reliable) power
- Additional cooling
- Improved fire precautions
- Improved isolation/separation of services
- Improved manageability

5 Tony Cass Requirements (Practical Order)
- Additional space (at least 1000 m²)
- Additional cooling
- Additional (and more reliable) power
- Improved fire precautions
- Improved isolation/separation of services
- Improved manageability

6 Tony Cass Space, the Final Frontier
- Where can we find an extra 1000 m²?

7 Tony Cass Space, the Final Frontier
- Where can we find an extra 1000 m²?
  - Why bother? Use the full machine room height. But, as discussed last year, extra-height racking has to be positioned carefully with respect to air conditioning equipment and more clearance is needed. Overall, the space gain is small—400-500 m².

8 Tony Cass Space, the Final Frontier
- Where can we find an extra 1000 m²?
  - Why bother? Use the full machine room height.
  - Use the Barn. This is an attractive solution. The Barn was always imagined as overflow space and the 800 m² available is OK at a pinch. However, it would probably be difficult to move the current vault occupants in the short term. This is also not a cost-free option—equipment in the Barn competes with offices for air conditioning capacity. The Barn is a good backup solution if we need more space in the future—if we have underestimated needs or for other purposes (e.g. CIXP)—but is not the best solution now.

9 Tony Cass Space, the Final Frontier
- Where can we find an extra 1000 m²?
  - Why bother? Use the full machine room height.
  - Use the Barn.
  - Use the Vault. This is a very attractive solution. About 1100 m² of space is available today and more could be recovered if we remove obsolete installations—such as the MG room air conditioning. However, clearance is limited. The height available is only 3470mm and some 700mm is taken up by cable trays and air conditioning ducts. A false floor of even 500mm reduces clearance to just 2270mm—less than is needed for STK silos.
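The quoted clearance follows directly from the figures above:

\[
3470\,\mathrm{mm} - 700\,\mathrm{mm}\ (\text{trays and ducts}) - 500\,\mathrm{mm}\ (\text{false floor}) = 2270\,\mathrm{mm}
\]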

10 Tony Cass Space, the Final Frontier
- Where can we find an extra 1000 m²?
  - Why bother? Use the full machine room height.
  - Use the Barn.
  - Use the Vault.
  - Install a Mezzanine. Another attractive option—and one that has been explored before. However, a mezzanine floor above the machine room or the Barn creates air conditioning problems for the space below, and money is needed to remedy them. Additionally, mezzanine installation costs must be added to the costs of equipping the equipment area.

11 Tony Cass Space, the Final Frontier
- Where can we find an extra 1000 m²?
  - Why bother? Use the full machine room height.
  - Use the Barn.
  - Use the Vault.
  - Install a Mezzanine.
- A Computer Room consultancy firm favoured the Mezzanine option over adapting the vault.

12 Tony Cass Space, the Final Frontier
- Where can we find an extra 1000 m²?
  - Why bother? Use the full machine room height.
  - Use the Barn.
  - Use the Vault.
  - Install a Mezzanine.
- A Computer Room consultancy firm favoured the Mezzanine option over adapting the vault.
  - But only on the grounds of limited clearance in the vault. Can we increase the clearance?

13 Tony Cass Vault Air Conditioning Options
- Initially, we considered an air conditioning system with plant in the old MG room and ducts to distribute cold air and recover hot air.
- However, all the computer centres we visited use "in-room" air conditioning units.
- This solution has been investigated for the vault.
  - In-room units do not need ducts, thus increasing clearance by at least 200mm.
  - In-room units require less space.
  - In-room units are cheaper.

Proposal
The Vault should be used to provide the additional space required for LHC Computing. If we have underestimated space requirements then the Barn is available for overflow.

15 Tony Cass Power Supply to and within B513
- Current arrangements
  - Two 2MVA transformers in B513 are fed by a spur from the sitewide 18kV loop.
  - We have a 1.2MW UPS with battery capacity for 10 minutes at rated load.
  - Low voltage distribution is via Normabars.
- Current disadvantages
  - We are vulnerable to problems in the substation feeding the spur.
  - UPS coverage is insufficient for anything but the shortest interruption.
  - ST have to intervene to install Normabar connections.
  - There is no clear division of services across power connections—one Normabar supplies many services.

16 Tony Cass Upgrading the High Voltage Supply
- ST propose to include B513 in the 18kV loop.
  - Requires extra equipment—such as 18kV switchgear.
- Supporting an active load of 2MW requires four 2MVA transformers.
  - A 2MW active load corresponds to more than 2MVA of apparent power given the nature of the load, so two transformers are needed, plus an additional transformer for redundancy and a fourth for critical loads (including air conditioning!).
- The transformers, high voltage and low voltage switchgear must be in close proximity and will occupy some 180 m².
  - Of this, some 40 m² for the transformers should normally be external to B513.
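The slide does not quote a power factor, but as an illustration of why a 2MW active load exceeds a single 2MVA transformer, assume a power factor of about 0.8 for this kind of computing load:

\[
S \;=\; \frac{P}{\cos\varphi} \;\approx\; \frac{2\,\mathrm{MW}}{0.8} \;=\; 2.5\,\mathrm{MVA} \;>\; 2\,\mathrm{MVA}
\]

Hence two 2MVA transformers are needed for the load itself, before adding one for redundancy and one for critical loads.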

17 Tony Cass UPS considerations
- What are the likely needs for a UPS?
  - To cover microcuts and smooth the supply.
    - This coverage is achieved merely by installing a UPS.
  - To cover problems in the Swiss/French supply to CERN until the French/Swiss supply takes over.
    - 10 minutes of autonomy is largely sufficient.
      - The 7-minute delay in the auto-transfer on June 2nd is exceptional. The system is supposed to function within 2 minutes.
  - To maintain services in the event of a serious failure.
    - Although the main diesels are being refurbished, they will never be able to support a 2MW load in B513.
    - A serious failure can last up to 2 hours. No static (battery-based) UPS can support a 2MW load for 2 hours.
    - Which services do we need to maintain? ST estimate that serious failures will occur only once every 5 years.

18 Tony Cass UPS Solutions
- "Infinite" protection for 2MW requires rotary (diesel-based) UPS solutions.
  - Unfortunately, these are expensive (~8MCHF), as is a combination of a static UPS with diesel generators.
- The pragmatic solution is a dual system:
  - a 2MW UPS for 10 minutes, plus
  - a "200"kW UPS for 2 hours (or less in combination with site diesels).
- The impact?
  - The physics load will die abruptly in case of a serious failure.
    - We can't shut down in 10 minutes—and even if we could, we would have to start immediately on failure and look silly if power were restored.
  - The basic computing infrastructure (network, mail, home directories, …) will be maintained even across serious failures.
    - How large is this basic load?
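A rough back-of-the-envelope view of the stored energy behind the two tiers (illustrative figures only; the slide gives just the power ratings and autonomy times):

\[
E_{\text{physics}} = 2\,\mathrm{MW}\times\tfrac{10}{60}\,\mathrm{h}\approx 333\,\mathrm{kWh},
\qquad
E_{\text{essential}} = 200\,\mathrm{kW}\times 2\,\mathrm{h} = 400\,\mathrm{kWh}
\]

By contrast, keeping the full 2MW up for 2 hours would need about 4 MWh of stored energy, which is why only rotary or diesel-backed solutions can offer "infinite" protection at that load.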

Proposal
Install a two-tier UPS arrangement. Accept a sudden loss of the physics load. If the level of power failures is found to be unacceptable then a private diesel backup can be added later. We should rely on site diesels for the basic computing infrastructure if this is feasible.

20 Tony Cass Low Voltage Distribution
- Maintain the Normabar system.
  - This is a tidy and flexible underfloor solution.
- Clearly demarcate power zones.
  - Racks and equipment are installed along, not across, the Normabar direction.
  - A special zone in the vault (and later the machine room) for the basic computing infrastructure.
- Pre-equip Normabars with 1- and 3-phase sockets.
  - No ST intervention is required when adding equipment,
  - although separate zones for 1- and 3-phase racks reduce flexibility somewhat.

21 Tony Cass Fire Protection - I
- Computer equipment is not halogen-free.
  - Even a small fire releases acrid smoke which causes widespread damage.
- TIS recommend the use of smoke curtains to separate the room into 3-4 zones.
  - Smoke curtains are small (~50cm) boards hanging from the ceiling which contain the lateral flow of smoke.
  - TIS would prefer compartmentalisation of the raised floor area, but this is infeasible.
- A smoke extraction duct runs at right angles to the smoke curtains with an inlet per zone.
  - Maximise extraction capacity where it is needed.
  - Ducts for replacement fresh air run under the false floor.
    - Fresh air could be preheated if an inrush of air at –12°C is likely to cause equipment damage.

22 Tony Cass Fire Protection — II
- We will install a VESDA system with at least one detector per zone, if not one per line of racks.
- The PDUs will allow each Normabar to be powered down individually.
  - Smoke detection will lead to Normabars being powered off either individually or by zone.
- No automatic extinguishing system is planned.
  - Technically feasible in the vault, but impractical.
  - The Fire Brigade is in close proximity.
  - However, we will
    - ensure adequate provision of CO₂ extinguishers,
    - develop an intervention plan with the fire brigade, and
    - investigate mist-based sprinkler systems ("Hi-Fog").

23 Tony Cass Machine Room Safety and Isolation
- Industry safety codes make it clear that machine room safety is compromised by
  - unnecessary access and transit, and
  - the presence of unnecessary equipment and, especially, waste material.
- Our situation today is not satisfactory.
  - Desktop PCs are delivered to the archive store and then wheeled through the machine room to the Barn.
  - New equipment, obsolete equipment and waste are all regularly left in the machine room and the vault for lack of clear and accessible storage and removal zones.
- Any reworking of B513 should facilitate improved working practices.

36 Tony Cass The Plan
- Configure the vault as a second Machine Room
  - Physical remodelling once the silos are moved to B613
  - Install air conditioning, smoke curtains & extraction, and fire detection
  - Provide "Physics" and "essential" power using Normabars pre-equipped with 1- and 3-phase sockets
- Install new computing equipment in the vault
- Rework the 18kV supply to B513
- Renew the UPS configuration
- Renew Machine Room power distribution, air conditioning and fire protection once existing equipment is removed through "natural wastage".

37 Tony Cass Timing and Costs
- Real work can only start once the silos are removed
  - unlikely to be before end October for operational reasons
    - B613 is expected to be ready at the end of September.
- Still, the Vault could be usable by end-May.
- Costs
  - Vault Conversion (space provision)
    - Space Reconfiguration: 50-100K
    - False floor & fittings: 160K
    - Air Conditioning: 350K
    - Smoke Extraction: 80K
    - Network infrastructure: 120K
    - LV Distribution: 525K
    - Total: ~1,300K
  - Future costs
    - Energy Centre Building: 600-1,000K
    - New 18kV supply: 2,000K
    - UPS: 780K
    - Chilled water capacity: 350K
    - Machine room LV: 1,500K
    - Machine room HVAC: ??
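As a quick consistency check on the vault conversion total (taking the upper end of the reconfiguration range):

\[
100 + 160 + 350 + 80 + 120 + 525 = 1335\,\mathrm{K} \approx 1{,}300\,\mathrm{K}
\]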

Proposal
We give ST the go-ahead to start implementing this plan. Timing and costs are to be monitored by TS and Divisional Management.

39 Tony Cass Machine Room Operations
- Initial discussions between TS, CS, PDP & DB. Common agreement that greater rigour is required.
- An installation service for hardware will be provided.
  - Only TS have access to the machine room! Well, almost ;->
- Anonymous PCs for ease of installation and management
  - addresses allocated by DHCP
    - but with a consistent MAC → IP address mapping
- Power management will be an issue.
  - Normabars are remotely addressable, but inrush current is likely to cause problems if all racks on a Normabar start at once (see the sketch below).
  - Addressable power distribution on racks?
- We need to prototype management and operations procedures, not just hardware!
  - Tim Whibley & Fabio Trevisani are looking into these issues.
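Below is a minimal sketch of the kind of staggered power-restoration logic this implies, assuming a hypothetical pdu_power_on() control call and illustrative Normabar/rack names (none of these interfaces are specified in the slides):

```python
import time

# Hypothetical mapping of each Normabar to the racks it feeds;
# the names and pdu_power_on() call are illustrative only.
NORMABARS = {
    "vault-normabar-01": ["rack-01", "rack-02", "rack-03"],
    "vault-normabar-02": ["rack-04", "rack-05"],
}

def pdu_power_on(rack: str) -> None:
    """Placeholder for whatever addressable power distribution
    (per-rack PDU or remotely switched Normabar outlet) is installed."""
    print(f"powering on {rack}")

def staggered_power_on(delay_s: float = 30.0) -> None:
    """Bring racks up one at a time per Normabar so that the combined
    inrush current never hits a single Normabar all at once."""
    for normabar, racks in NORMABARS.items():
        print(f"restoring {normabar}")
        for rack in racks:
            pdu_power_on(rack)
            time.sleep(delay_s)  # let the inrush subside before the next rack

if __name__ == "__main__":
    staggered_power_on()
```

The same per-Normabar loop could equally be driven from the remotely addressable Normabars themselves if per-rack switching turns out not to be available.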