
The new (remote) Tier 0: what is it, and how will it be used?
Ian Bird, CERN
WLCG Workshop, 19th May 2012
Accelerating Science and Innovation

Agenda
– Why a new computer centre?
– Initiatives for CC extensions
– Call for tender
– Comments on the selected site
– How will the remote CC be used?

History
In ~2005 we foresaw a problem with the limited electrical power available in the CERN Computer Centre:
– Limited to 2.5 MW
– No additional electrical power available on the Meyrin site
– Predictions for evolving needs anticipated ~10 MW by 2020
In addition, we uncovered some other concerns:
– Overload of the existing UPS systems
– Significant lack of critical power (with diesel backup)
– No real facility for business continuity in case of a major catastrophic failure in the CC
In ~2006 we proposed to build a new CC at CERN on the Prévessin site:
– The idea was a modular design that could expand with need

History - 2
Studies for a new CC on the Prévessin site:
– Four conceptual designs (2008/2009)
– Lack of on-site experience
– Expensive!
– Uncertainty in the requirements
Bought 100 kW of critical power in a hosting facility in Geneva:
– Addressed some concerns about critical power and business continuity
– In use since mid-2010
Planned consolidation of the existing CC:
– Minor upgrades brought the power available from 2.5 to 2.9 MW
– Upgrade of the UPS and critical power facilities: an additional 600 kW of critical power, bringing the total IT power available to 3.5 MW
Interest from Norway to provide a remote hosting facility:
– Initial proposal not deemed suitable
– Formal offer not convincing
– Interest from other member states

New CC history - 1
Call for interest at the Finance Committee (FC), June 2010:
– How much computing capacity for 4 MCHF/year?
– Is such an approach technically feasible?
– Is such an approach financially interesting?
– Deadline: end of November 2010
Response:
– Surprising level of interest: 23+ proposals
– Wide variation of solutions and capacity offered
– Many offering > 2 MW (one even > 5 MW)
– Assumptions and offers not always clearly understood
– Wide variation in electricity tariffs (a factor of 8!)
Many visits and discussions in 2010/2011
Official decision to go ahead taken in spring 2011, and all potential bidders informed
Several new consortia expressed interest

New CC History - 2
Call for tender:
– Sent out on 12th September 2011
– Specification with as few constraints as possible
– Draft SLA included
– A number of questions for clarification were received and answered (did people actually read the documents?)
– Replies were due by 7th November 2011

Tender Specification - I
Contract length
Reliable hosting of CERN equipment in a separated area with controlled access:
– Including all infrastructure support and maintenance
Provision of fully configured racks, including intelligent PDUs
Essentially all services which cannot be done remotely:
– Reception, unpacking and physical installation of servers
– All network cabling according to CERN specifications
– Smart ‘hands and eyes’
– Repair operations and stock management
– Retirement operations

Evaluation Process
The financial offers were reviewed and in some cases corrected
The technical compliance of a number of offers was reviewed (those which were within a similar price range)
Meetings were held with 5 consortia to ensure that:
– we understood correctly what was being offered
– they had correctly understood what we were asking for
– any errors in their understanding were discovered
Site selected and approved at the FC on 14th March 2012

Wigner Data Centre, Budapest

Technical Overview
New facility due to be ready at the end of 2012
… m² (725 m²) in an existing building, but with new infrastructure
2 independent HV lines to the site
Full UPS and diesel coverage for the entire IT load (and cooling):
– 3 UPS systems per block (A+B for IT and one for cooling and infrastructure systems)
Maximum 2.7 MW
– In-row cooling units with N+1 redundancy per row (N+2 per block)
– N+1 chillers with free-cooling technology (used when the outside temperature is below 18°C)
Well-defined team structure for support
Fire detection and suppression for IT and electrical rooms
Multi-level access control: site, DC area, building, room
Estimated PUE of 1.5
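Power Usage Effectiveness (PUE) is the ratio of total facility power to IT equipment power, so the quoted estimate can be turned into an implied facility draw. A minimal sketch, assuming the full 2.7 MW maximum IT load quoted above (the overhead split is derived from the PUE, not quoted on the slide):

```python
# Minimal sketch: what an estimated PUE of 1.5 implies for facility power.
# PUE = total facility power / IT equipment power.

def facility_power(it_load_mw: float, pue: float) -> float:
    """Total facility draw implied by a given IT load and PUE."""
    return it_load_mw * pue

it_load = 2.7   # MW, maximum IT load quoted for the Wigner blocks
pue = 1.5       # estimated PUE from the slide

total = facility_power(it_load, pue)
overhead = total - it_load  # cooling, UPS losses, lighting, etc.

print(f"Total facility power: {total:.2f} MW")    # 4.05 MW
print(f"Non-IT overhead:      {overhead:.2f} MW") # 1.35 MW
```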

Business Continuity
With a 2nd DC it makes sense to implement a comprehensive BC approach
First classification of services against three BC options (sketched below):
1. Backup
2. Load balancing
3. Re-installation
An internal study has been conducted to see what would be required to implement BC from a networking point of view
Requires a network hub to be set up independently of the CC:
– Requires an air-conditioned room of about … m² for ~10 racks, with … kW of UPS power and good external fibre connectivity
– Several locations are being studied
All IT server deliveries are now being managed as if at a remote location
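A minimal sketch of what such a first-pass classification might look like in practice; the service names and their assignments are hypothetical examples for illustration, not the actual CERN classification:

```python
from enum import Enum

class BCOption(Enum):
    """The three business-continuity options named on the slide."""
    BACKUP = 1          # restore from backup after a failure
    LOAD_BALANCING = 2  # run active instances at both sites
    REINSTALLATION = 3  # re-deploy from configuration at the other site

# Hypothetical service-to-option mapping, for illustration only.
bc_classification = {
    "mail": BCOption.BACKUP,
    "web-frontend": BCOption.LOAD_BALANCING,
    "batch-worker": BCOption.REINSTALLATION,
}

for service, option in bc_classification.items():
    print(f"{service}: {option.name}")
```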

Use of Remote Hosting
Logical extension of physics data processing:
– Batch and disk storage split across the two sites
– The aim is to make the use as invisible as possible to the user
Business continuity:
– Benefit from the remote hosting site to implement a more complete business continuity strategy for IT services

Possible Network Topology
[Network topology diagram]

Major Challenges
Large-scale remote hosting is new:
– Doing more with the same resources
Networking is complex
Localization to reduce network traffic?
For BC: a full classification of services
Latency issues?
– Expect 20-30 ms latency to the remote site
– A priori no known issues, but a number of services will need to be tested
Bandwidth limitations:
– Possible use of QoS
Disaster recovery and backup strategy

Status and Future Plans
The contract was signed on 8th May 2012
During 2012:
– Tender for network connectivity
– Small test installation
– Define and agree working procedures and reporting
– Define and agree the SLA
– Integrate with the CERN monitoring/ticketing system
– Define what equipment we wish to install and how it should be operated
2013:
– 1Q 2013: install initial capacity (100 kW plus networking) and begin larger-scale testing
– 4Q 2013: install a further 500 kW
Goal for 1Q 2014: be in production as IaaS, with the first BC services

IO rates from the last 12 months
Castor IO contains:
1. Tape IO, including repack
2. LHCOPN IO
3. Batch IO
4. Part of the EOS IO
5. Experiment IO (from the pit)
Batch IO contains:
1. Castor IO
2. EOS IO
3. Part of the LHCONE IO
4. Part of the Internet IO
Take max. Castor + 50% of max. EOS as the average reference figure:
– Average 12 GB/s, peak 19 GB/s
[Chart: traffic today]
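A minimal sketch of the reference-figure arithmetic; the per-system maxima below are hypothetical values chosen only to illustrate the rule, since the slide quotes only the resulting 12 GB/s average and 19 GB/s peak:

```python
# Reference-figure rule from the slide:
#   average reference = max(Castor IO) + 0.5 * max(EOS IO)
# The component maxima below are hypothetical, for illustration only.

castor_max_gbps = 8.0   # hypothetical max Castor IO over the last 12 months
eos_max_gbps = 8.0      # hypothetical max EOS IO over the same period

average_reference = castor_max_gbps + 0.5 * eos_max_gbps
print(f"Average reference figure: {average_reference:.0f} GB/s")  # 12 GB/s with these inputs
```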

Possible usage model
Split the batch and EOS resources between the two centres, but keep them as complete logical entities
Network available in 2013: 24 GB/s
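A quick sanity check, using the traffic figures from the previous slide, that the planned inter-site capacity leaves headroom even at peak; a sketch under steady-state assumptions only, ignoring bursts and locality effects:

```python
# Headroom check for the 2013 inter-site network against measured traffic.
link_capacity_gbps = 24.0   # network available in 2013, from the slide
average_gbps = 12.0         # average reference figure
peak_gbps = 19.0            # observed peak

print(f"Average utilisation: {average_gbps / link_capacity_gbps:.0%}")  # 50%
print(f"Peak utilisation:    {peak_gbps / link_capacity_gbps:.0%}")     # 79%
```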

Possible strategies
If necessary to reduce network traffic:
– Possibly introduce “locality” for physics data access
– E.g. adapt EOS replication/allocation depending on the client IP address (sketched below)
– Geographical mapping of batch queues and data sets
– More sophisticated strategies later, if needed
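A minimal sketch of IP-based locality, assuming (hypothetically) one subnet per site; the address ranges are invented for illustration and are not CERN's actual allocations:

```python
import ipaddress

# Hypothetical subnets for illustration; not CERN's real allocations.
SITE_SUBNETS = {
    "meyrin": ipaddress.ip_network("10.1.0.0/16"),
    "wigner": ipaddress.ip_network("10.2.0.0/16"),
}

def preferred_site(client_ip: str) -> str:
    """Map a client IP to the site whose data replicas it should read first."""
    addr = ipaddress.ip_address(client_ip)
    for site, subnet in SITE_SUBNETS.items():
        if addr in subnet:
            return site
    return "meyrin"  # default to the main centre for unknown clients

print(preferred_site("10.2.34.7"))  # -> wigner
```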

Latency & QoS?
Latency:
– Inside the CERN centre: ~0.3 ms
– CC to Safehost or the experiment pits: ~0.6 ms
– CERN to Wigner: ~35 ms today; eventually … ms
– Possible side effects on services: response times, limits on the number of control statements, random I/O
– Testing now with a delay box, using lxbatch and EOS
Quality of Service:
– Try some simple ways to implement a first-order ‘QoS’ to avoid network problems in case of peak network loads, e.g. splitting TCP and UDP traffic
– Needs to be tested
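A minimal sketch of the kind of latency probe such tests might use: time a TCP connect to a service endpoint and compare against the RTT bands above. The host and port are placeholders, not real CERN endpoints:

```python
import socket
import time

def tcp_connect_rtt(host: str, port: int, samples: int = 5) -> float:
    """Median TCP connect time in milliseconds, a rough RTT proxy."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2.0):
            pass
        times.append((time.perf_counter() - start) * 1000.0)
    return sorted(times)[len(times) // 2]

# Placeholder endpoint; substitute the service under test.
print(f"RTT ~ {tcp_connect_rtt('example.org', 80):.1f} ms")
```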

Summary
The CERN CC will be expanded with a remote centre in Budapest
The intention is to implement a “transparent” usage model
Network bandwidth is not a problem:
– Dependence on latency needs to be tested
– More complex solutions involving location dependency are to be tested, but hopefully not necessary, at least initially