Computer Centre Upgrade Status & Plans, Post-C5, October 11th 2002, CERN.ch


Computer Centre Upgrade Status & Plans Post-C5, October 11th 2002 CERN.ch

CERN.ch 2 Agenda
 Some History
 Vault
 – Conversion
 – Migration
 Electrical Supply Upgrade
 – Requirements
 – Backup supply arrangements
 – Substation planning

CERN.ch 3 Some History

CERN.ch 4 Computer Centre Upgrade — History
 LHC computing requires additional space, power and air conditioning capacity in B513.
 Following studies in 2000/2001, we developed this plan:
 – Convert the tape vault to a machine room area in 2002.
 – Use this space from 2003, both for new equipment and to empty part of the existing machine room.
 – Upgrade the electrical distribution in the machine room, using the vault space as a buffer.
 – Create a dedicated substation to meet power needs.
 For air conditioning reasons, the vault should be used for bulky equipment with low heat dissipation.
 – e.g. tape robotics.

CERN.ch 5 Vault Conversion

CERN.ch 6 Vault Conversion Status
 Almost complete. We just need to
 – add a few more perforated false floor tiles,
 – test the air conditioning units,
 – test fire detection and smoke extraction.
 Hand over to IT ~21st October.
 – But no access control as yet. Doors at either end will be locked.
 » Access control will be strict. The vault is not a corridor!
   - A badge will be required to go in and out.


CERN.ch 12 Using the Vault
 Now we have this space, we can use it.
 Remember, the aim is to free space in the machine room to enable the upgrade there.
 Migrate equipment from the “right hand side” of the machine room.
 – PCs first, robots to move from January.
 – Big SHIFT systems to be removed.

CERN.ch 13 Practical Details
 Migration is being coordinated by FIO/OPT.
 – Regular item in CCSR meetings since July.
 » First discussions with FIO for batch servers, DS for disk servers and CS for network infrastructure.
 » The external networking area will need to be handled carefully.
 The aim is more control over equipment: we should know where it is, what it is and why it is there!
 Network infrastructure is “decentralised”, with switches next to the equipment.
 – All equipment in a rack is on the same network service.
 CS plans for most network services in the vault to be on the “unrouted” network (172.17).
 – This means no outgoing access (see the sketch below). Is the impact for, e.g., Grid Services understood?
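A minimal sketch of the addressing point above, assuming the unrouted network is the full 172.17.0.0/16 range (the slide only quotes the 172.17 prefix) and using hypothetical host addresses for illustration:

```python
import ipaddress

# Assumption: the "unrouted" network quoted as 172.17 is the full /16 prefix.
UNROUTED = ipaddress.ip_network("172.17.0.0/16")

def is_unrouted(host_ip: str) -> bool:
    """True if the host sits on the unrouted network (i.e. no outgoing access)."""
    return ipaddress.ip_address(host_ip) in UNROUTED

# Hypothetical addresses, for illustration only.
for ip in ["172.17.12.34", "137.138.1.1"]:
    print(ip, "->", "unrouted" if is_unrouted(ip) else "routed")
```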

CERN.ch 14 And then?
 Upgrade the cleared area of the machine room.
 – Clean out under the false floor.
 – Upgrade the electrical distribution.
 – Upgrade the network infrastructure.
 Move equipment from the left half.
 – If moved to the machine room, aligned with the normabarres!
 Upgrade the left half of the machine room.
 An air conditioning upgrade is probably needed as well.
 All of this work is being planned now. More details in Spring 2003.

CERN.ch 15 Questions?

CERN.ch 16 New Substation

CERN.ch 17 What are the demands?
 The LHC requirements in the Hoffmann Review are for a CPU capacity of 2.3MSI95 in 2007, plus disk storage, tape drives and robotics.
 – And don't forget the general infrastructure.
 The prediction in 2000 was 15,000±5,000 boxes for physics.
 – A 2MW equipment load, constant (or even falling) over the LHC lifetime.
 With the LHC delayed, the box count has gone down, but we now think PC power is constant at ~1W/SI95.
 – That implies ~2.4MW for physics in 2008, rising over the LHC lifetime (see the check below).
 Fortunately, the “natural” substation arrangement allows for an active load of up to 2.5MW.
 – We plan to support an active load of up to 2.5MW.
 » We are being careful not to exclude an expansion to 4MW. However, cooling is a major problem and a lot of space is needed for all the electrical equipment.
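A back-of-the-envelope check of the figures above; the 2.3MSI95 capacity and the ~1W/SI95 figure come from the slide, the rest is rounding:

```python
# Figures from the slide: 2.3 MSI95 of CPU capacity in 2007, ~1 W per SI95.
cpu_capacity_si95 = 2.3e6   # SI95
watts_per_si95 = 1.0        # W/SI95, assumed constant per the slide

cpu_load_mw = cpu_capacity_si95 * watts_per_si95 / 1e6
print(f"CPU load alone: ~{cpu_load_mw:.1f} MW")
# Disk, tape and general infrastructure, plus growth into 2008, push the
# total towards the ~2.4 MW quoted on the slide; the exact split is not given.
```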

CERN.ch 18 Power Supply Reliability
 Power distribution is via 18kV “loops”. These are powered by EdF/EOS with an autotransfer mechanism to cover failure of the primary supply.
 – In operation since 2000 (but not in winter for …).
 – Switches between supplies in <30s.
 – We need a UPS to cover any such break as well as any micro cuts. A battery life of 5-10 minutes is sufficient (see the sizing sketch below).
 Today, B513 is on a spur from one of these 18kV loops.
 The new substation puts B513 directly in an 18kV loop, which improves reliability.
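A minimal battery-sizing sketch for the point above, assuming a hypothetical 500kW load on the UPS (the slide does not give the UPS load); the autonomy values compare the <30s autotransfer break with the proposed 5-10 minute battery life:

```python
# Battery energy needed for a given autonomy at a given load (E = P * t).
# The load value is an assumption for illustration; the autonomy values come
# from the slide (<30 s autotransfer break vs. 5-10 minute battery life).
load_kw = 500.0                       # assumed load on the UPS
for autonomy_min in (0.5, 5, 10):
    energy_kwh = load_kw * autonomy_min / 60.0
    print(f"{autonomy_min:>4} min autonomy -> ~{energy_kwh:.0f} kWh of battery")
```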

CERN.ch 19 What if the Switchover Fails?
 If the switchover mechanism fails (maybe once every 5-10 years), the sitewide diesels take over within 2 minutes.
 – Yes, the diesels have a poor reputation in IT, but
 » the control system is being refurbished in 2003, and
 » annual real-load tests have restarted.
 A maximum of 1.4MVA is available for IT, covering
 – safety systems, air conditioning, … and
 – up to 250kW of computing load.
 A dedicated UPS is required for “critical computing equipment” to be powered by the diesels.
 – This has been imagined with a battery lifetime of 2 hours, but we propose a 5-10 minute battery lifetime.
 » A 2 hour UPS would be a 3rd-level backup with significant costs,
   - in terms of both space and batteries (which need replacing periodically).

CERN.ch 20 What is “Critical Computing Equipment”?
 Is:
 – Network infrastructure
 – Databases for accelerator operation
 – Informatics infrastructure (home directory, web, mail, …)
 – “Master” servers such as sure/remedy/iss/zephyr (but maybe only one of a primary/backup pair).
 Isn't:
 – Anything that scales with physics computing demand
 » CPU servers, disk servers, tape servers, …
 Maybe (depends on total load):
 – Things that “scale slowly” with physics demand
 » Does “scale slowly” mean “one per experiment”?
 » Castor name servers? POOL infrastructure?

CERN.ch 21 Questions?

CERN.ch 22 Substation Equipment and Space Needs
 18kV switchgear (we are now in the loop).
 – ~30m²
 18kV → 400V transformers
 – Enough to cover machine load, air conditioning and general services.
 – ~150m²
 400V switchgear (for machines, HVAC, …)
 – ~180m²
 Critical UPS and batteries (10 minutes)
 – ~30m²
 Physics UPS and batteries
 – ~220m²
 Overall, ~450m² (the transformers can go outside); see the tally below.
 – c.f. ~220m² today. And we have to keep services running while we build the new substation.
 – More space needed! Where?
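A quick tally of the floor-space figures above (all values from the slide); the transformers are excluded from the indoor total since they can be installed outside:

```python
# Floor-space figures from the slide, in square metres.
areas_m2 = {
    "18kV switchgear": 30,
    "18kV -> 400V transformers (can go outside)": 150,
    "400V switchgear": 180,
    "critical UPS and batteries": 30,
    "physics UPS and batteries": 220,
}

indoor = sum(a for name, a in areas_m2.items() if "outside" not in name)
print(f"Indoor total: ~{indoor} m2 vs ~450 m2 quoted; ~220 m2 available today")
```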

CERN.ch 23 Where to put a Substation?
 Various locations were considered. Overall, the best solution is to
 – create more space near the existing substation,
 – install new equipment, and
 – remodel and reuse existing areas.
 “More space near the existing substation” means building a “bunker” under the car park between B513 and Restaurant 2.
 – Many complications (air ducts, sloping ground, the need to maintain access to the barn doors), but possible.


CERN.ch 27 When to put a Substation?
 The critical factor is the diesel backup. Today, this can support ~500kW.
 – Why not 250kW? There is not yet as much HVAC equipment.
 What level of secure power do we need?
 – The Computer Centre load today is ~450kW.
 – Experiments request 600 CPU servers plus ~100 disk servers for …; more will be requested in ….
 – Some machines are being removed, but shift3 consumes just 1kW.
 By end-2004 at the latest, we will not be covered by the diesels (see the rough projection below).
 – Unacceptable level of risk for critical services.
 – A new UPS (which allows the dedicated diesel connection) is required for critical equipment by mid-2004.
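A rough projection of why the diesel ceiling is reached, as suggested above. The ~450kW current load, ~500kW ceiling and the 600 + ~100 server request come from the slide; the per-box power figure is purely an assumption for illustration:

```python
# Projection of machine-room load against the ~500 kW diesel ceiling.
current_load_kw = 450.0       # from the slide
diesel_ceiling_kw = 500.0     # from the slide
new_boxes = 600 + 100         # CPU + disk servers requested (from the slide)
watts_per_box = 150.0         # ASSUMPTION: typical rack-mounted PC of the era

projected_kw = current_load_kw + new_boxes * watts_per_box / 1000.0
print(f"Projected load: ~{projected_kw:.0f} kW "
      f"(diesel ceiling ~{diesel_ceiling_kw:.0f} kW)")
```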

CERN.ch 28 Substation Construction Timeline
 03/03  Start bunker construction
 09/03  Install 18kV & 400V switchgear plus UPS for critical equipment
 02/04  New substation enters production; physics on current UPS (640kW)
 04/05  Remodel existing electrical supply areas in B513
 1H/06  Install new 800kW UPS system
 1H/07  Add 800kW UPS capacity (1.6MW total)
 1H/08  Add 800kW UPS capacity (2.4MW total)

CERN.ch 29 How does this affect my services?
 No power cut, but all systems connected to the existing electrical infrastructure will be without UPS cover for 2 days when the new substation is commissioned.
 Systems in the critical equipment area will have UPS cover (if at least one power supply is connected to the right normabarre).
 Once the new substation is in service, only equipment in the critical computing area will be connected to the diesel backup supply.

CERN.ch 30 Substation cost

CERN.ch 31 What if the load is really 4MW?
 The substation size doubles.
 – No problem: extend the bunker up to the bike shed and out under the car park (it would be fully underground there). Another ~360m² can easily be found.
 – But… more diesel power would be needed for the air conditioning during a power cut, leaving none for the critical computing equipment, so new arrangements would be required here.
 » Very difficult (and expensive!) to plan for this now.
 It is not clear that the air conditioning can cope.
 – Major changes would be needed to the chilled water piping and the air conditioning stations (the noisy bits facing the Salève, not the things on the roof).
 – It would certainly require the barn as machine room space.

CERN.ch 32 Questions?