Nikhef/(SARA) tier-1 data center infrastructure

Presentation transcript:

Nikhef/(SARA) tier-1 data center infrastructure
- Tier-1 facts
- Expanding the Nikhef center
Wim Heubers / Nikhef Amsterdam NL - HEPiX, May 2008

LCG Tier-1 at Amsterdam Science Park

Nikhef - National institute for subatomic physics
- LHC (ATLAS, LHCb, ALICE), astroparticle physics
- data center: 500 m2, 800 kW including cooling
- grid services (disk storage, clusters)
- internet exchange AMS-IX

SARA - Computing and Networking Services
- colo services, consulting
- data center: 1500 m2, 2 MW including cooling; national supercomputer, national cluster, NetherLight, etc.
- grid services (tape and disk storage, clusters)
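For a rough sense of scale, the figures above translate into floor power densities as follows (a minimal back-of-the-envelope sketch; note that both numbers include cooling overhead, so they are facility totals rather than pure IT load):

```python
# Rough power-density figures from the slide above (facility totals,
# i.e. including cooling overhead, not pure IT load).
sites = {
    "Nikhef": {"area_m2": 500,  "power_kw": 800},
    "SARA":   {"area_m2": 1500, "power_kw": 2000},
}

for name, s in sites.items():
    density = s["power_kw"] / s["area_m2"]   # kW per m2 of floor space
    print(f"{name}: {density:.1f} kW/m2")    # Nikhef ~1.6, SARA ~1.3
```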

LCG Tier-1 at Amsterdam Science Park - more infrastructure:

SURFnet - national research network
- provides connectivity to the LCG OPN

BiG Grid - the Dutch e-science grid
- provides resources for the LCG Tier-1 and other domains, 2008-2011

Amsterdam Internet Exchange (AMS-IX)
- major and neutral internet exchange
- six housing locations, including SARA and Nikhef

Nikhef-SARA LCG Tier-1

Nikhef and SARA share …
- campus, building, on-site security, restaurant
- LCG OPN connections
- Tier-1 operations (!)

Nikhef and SARA do NOT share …
- power and cooling infrastructure
- sysadmin
- Tier-1 resources (grid services, clusters, storage)

SARA does, and Nikhef does not …
- provide hierarchical storage (tape, dCache)
- generic grid services

Nikhef does, and SARA does not …
- middleware development (VOMS, LCAS, etc.)
- scaling and validation test beds
- Tier-3 services

Computing

Disk Storage

Tape Storage

LCG HEP resources - Tier-1 installations:

                 SARA    Nikhef
  Computing       60%      40%
  Disk storage    60%      40%
  Tape storage   100%        -

Note: 'BiG Grid' budget runs until 2011.

Expanding the Nikhef data center

Data center layout (diagram): Nikhef Amsterdam - grid, internet exchange, colo

Amsterdam Internet Exchange (AMS-IX)
- neutral and independent
- started 15 years ago at Science Park
- now: distributed housing at 6 locations in Amsterdam
- large exchange: 300 connected parties
- Nikhef housing: 200 racks, 100 customers
- Nikhef provides: UPS power, cooling, security, access assistance during office hours

Amsterdam Internet Exchange AMS-IX: zero downtime

Nikhef - power demands (chart): controlled linear increase
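The chart on this slide is not reproduced in the transcript; as an illustration of the capacity planning behind a 'controlled linear increase', the sketch below fits a straight line to hypothetical monthly power readings and projects when a given facility limit would be reached (all numbers are made up for illustration):

```python
# Minimal capacity-planning sketch: fit a straight line to monthly power
# readings and estimate when a facility limit would be hit.
# All figures below are hypothetical, not Nikhef's actual data.

readings_kw = [420, 440, 455, 470, 490, 505, 520, 540]  # one value per month
limit_kw = 800                                           # facility capacity

n = len(readings_kw)
xs = range(n)
x_mean = sum(xs) / n
y_mean = sum(readings_kw) / n
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, readings_kw)) \
        / sum((x - x_mean) ** 2 for x in xs)             # kW per month
intercept = y_mean - slope * x_mean

months_to_limit = (limit_kw - intercept) / slope
print(f"growth ~{slope:.1f} kW/month, limit reached after ~{months_to_limit:.0f} months")
```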

Expanding the data center

We need more …
- floor space, power, cooling
- security, fire suppression, alarm procedures
- monitoring of critical infrastructure

… but it has to be …
- realized within the existing (institute) building
- without affecting AMS-IX operations (zero downtime)

What happened …
- many discussions with management: reliable infrastructure is very expensive!
- gained experience: visits to commercial data centers and to conferences like 'DataCenter Dynamics'
- hired external technical expertise and project management
- an incident due to an overloaded circuit breaker: monitoring and capacity planning are essential
- put effort into temporary measures

Temporary measures (1): backup generators

Temporary measures (2): this week, add extra cooling for 50 kW of grid resources, just in time for the CCRC08 May run

Planning … (finished April 2009, I hope)
- install new cooling equipment (on the roof)
- integrate a 2nd UPS and generator into the infrastructure (remember 'zero downtime')
- install new fire suppression and climate handling systems
- convert the library into a new data room on the 2nd floor
- move grid clusters and storage from the 1st to the 2nd floor
- extend AMS-IX housing on the 1st floor
- make the grid resources visible …

From library to grid …

Monitoring power to the racks

Main power distribution: connected to the facility control system (alarm -> standby service)
- current (amps) per phase in the power distribution units
- power drop per phase on the distribution rails

Power usage in the racks: connected to 'our' IT control system
- current (amps) and power usage (kWh) per phase in the racks
- needed for capacity planning and for billing energy costs to users

Note: monitoring of the grid clusters and storage is done separately (Ganglia).
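To illustrate what the per-rack metering above feeds into, here is a minimal sketch that turns per-phase current readings into the power and energy figures used for capacity planning and billing; the voltage, power factor, tariff, interval and readings are illustrative assumptions, not values from the talk:

```python
# Minimal sketch: convert per-phase current readings for one rack into
# power and energy figures for capacity planning and billing.
# Voltage, power factor, tariff, interval and readings are assumptions.

PHASE_VOLTAGE_V = 230          # nominal phase-to-neutral voltage in NL
POWER_FACTOR = 0.95            # assumed power factor of the IT load
TARIFF_EUR_PER_KWH = 0.12      # hypothetical energy price
INTERVAL_H = 0.25              # one reading every 15 minutes

# current in amps per phase (L1, L2, L3) for a sequence of intervals
readings_a = [
    (10.2, 9.8, 10.5),
    (10.4, 9.9, 10.7),
    (10.1, 9.7, 10.4),
]

energy_kwh = 0.0
for l1, l2, l3 in readings_a:
    power_kw = (l1 + l2 + l3) * PHASE_VOLTAGE_V * POWER_FACTOR / 1000.0
    energy_kwh += power_kw * INTERVAL_H

print(f"energy: {energy_kwh:.2f} kWh, cost: {energy_kwh * TARIFF_EUR_PER_KWH:.2f} EUR")
```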

Power monitoring: amps and kWh

Cooling

AMS-IX housing (can't change too much):
- designed 10 years ago for 1.8 kW average per rack
- yes, this is still the average today! [telco equipment]
- but we have annoying 'hot spots' on the floor
- too many obstacles under the raised floor and above the ceiling

Grid housing (new floor!):
- maximum 50 racks and 300 kW total power
- raised floor, but limited space above the racks
- proposed solution: cold corridor principle

Save energy …
- free cooling, optimize cold air flow
- increase room temperature and cold water temperature (10-16 C)
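A rough feel for the numbers behind the new grid floor: 300 kW across 50 racks is 6 kW per rack on average, and the cold-corridor air flow needed to remove that heat follows from Q = P / (rho * cp * dT). The sketch below works this out with textbook air properties and an assumed 12 K air temperature rise (the dT is an assumption, not a figure from the talk):

```python
# Rough sizing sketch for the new grid floor: average rack power and the
# cold-aisle air flow needed to carry the heat away.
# The 12 K air temperature rise is an assumed design value, not from the talk.

TOTAL_POWER_KW = 300.0
RACKS = 50
AIR_DENSITY = 1.2          # kg/m3 at room conditions
AIR_CP = 1005.0            # J/(kg*K), specific heat of air
DELTA_T = 12.0             # K, assumed inlet-to-outlet temperature rise

avg_rack_kw = TOTAL_POWER_KW / RACKS
airflow_m3_s = (TOTAL_POWER_KW * 1000.0) / (AIR_DENSITY * AIR_CP * DELTA_T)

print(f"average load: {avg_rack_kw:.1f} kW per rack")
print(f"required air flow: {airflow_m3_s:.1f} m3/s "
      f"(~{airflow_m3_s * 3600:.0f} m3/h) for the whole room")
```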

Fire suppression

Now: only smoke detection. Choice between:
- leave it as it is
- suppression with inert gas (argon)
- suppression with chemical gas (Novec-1230)

Suggestions?

Extending an existing facility
- it is expensive, in time and money
- you don't get what you really want
- piping and fitting through concrete floors
- zero downtime: stressful

Remarks and conclusions
- from idea to realization: it takes two years
- you have to position yourself between IT and infrastructure
- sustainability: can a data center be green? (cooling)
- grid: how to guarantee optimal usage of the resources?
- if you can start all over again: do it!

Questions?

Wim Heubers / Nikhef - HEPiX, May 2008