Ian Bird, WLCG Networking workshop, CERN, 10 February 2014

High level view from WLCG
- Networking is working very well
- There is no perceived problem
- Indeed, the intention is to make more and better use of the networks to manage data and storage resources more effectively

LHCOPN
- The LHCOPN guarantees the raw-data export traffic from the Tier 0 to the Tier 1s
- Necessary to fulfil the requirements of the MoU for the Tier 1s and the data export
- No desire or reason to change this
- New Tier 1s should also fulfil this requirement and join the LHCOPN
- (Aside: the MoU requirement)
  o 99% availability averaged over a year to accept raw data
  o This is essentially 3.5 days/year of allowed downtime, and it is achieved by all Tier 1s
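
A minimal sketch of the arithmetic behind the 3.5 days/year figure quoted above: 1% of a year is about 3.65 days, which is the downtime allowed at 99% availability. The function name is purely illustrative.

HOURS_PER_YEAR = 365 * 24  # 8760 hours

def allowed_downtime_days(availability: float) -> float:
    """Days per year a service may be down at the given availability."""
    return (1.0 - availability) * HOURS_PER_YEAR / 24

if __name__ == "__main__":
    # 99% availability -> ~3.65 days/year, roughly the 3.5 days quoted above
    print(f"{allowed_downtime_days(0.99):.2f} days/year")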

Inter-Tier traffic
- Originally LHCONE was proposed as a way to address a perceived problem
- Today many countries have more than adequate bandwidth internally, so LHCONE is not needed there
- Often, using LHCONE may incur additional costs
- Some countries find it a useful concept
  o May be a political need: it helps to get funding and better bandwidth
  o Some NRENs like to segregate LHC traffic from other science traffic
- Therefore: this is essentially a national (NREN) decision, driven by national needs and the funding scenario
- From the WLCG point of view: keep the LHCONE structure in place for those countries that find it useful
  o Address the operational models

perfSONAR deployment
- WLCG agreed on perfSONAR as the core toolkit for network monitoring in the infrastructure
  o A strong push came from the experiments
- Deployment of perfSONAR has been (and still is) sometimes problematic
  o Some sites refuse to install it at all
  o Some sites still run very old versions
- perfSONAR needs to be treated as any other service in WLCG
  o Including the level of commitment in installing, configuring, and operating it
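
As an illustration of treating perfSONAR like any other monitored service, here is a minimal site-side reachability check. The hostnames are hypothetical placeholders, and the check only verifies that the toolkit's web interface answers on TCP port 443; it does not use any perfSONAR-specific API.

import socket

# Hypothetical perfSONAR nodes at a site; replace with real hostnames.
PERFSONAR_HOSTS = [
    "ps-latency.example-site.org",
    "ps-bandwidth.example-site.org",
]

def is_reachable(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host in PERFSONAR_HOSTS:
        print(f"{host}: {'up' if is_reachable(host) else 'unreachable'}")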

Evolution of requirements
[Figure: estimated evolution of requirements vs. actual deployed capacity. Line: extrapolation of actual resources. Curves: expected potential growth of technology with a constant budget (see next), assuming 20% yearly growth for CPU and 15% yearly growth for disk.]
- Higher trigger (data) rates driven by physics needs
  o Based on the understanding of likely LHC parameters
  o Foreseen technology evolution (CPU, disk, tape)
- Experiments work hard to fit within the constant-budget scenario
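
A minimal sketch of the constant-budget extrapolation referred to above: a flat yearly budget buys 20% more CPU and 15% more disk each year. The starting capacities are arbitrary relative units, not WLCG pledges.

def constant_budget_growth(start: float, yearly_growth: float, years: int) -> float:
    """Capacity after `years` if a flat budget buys `yearly_growth` more each year."""
    return start * (1.0 + yearly_growth) ** years

if __name__ == "__main__":
    # Relative capacities after 10 years of flat-budget purchasing
    print(f"CPU:  x{constant_budget_growth(1.0, 0.20, 10):.1f}")  # ~x6.2
    print(f"Disk: x{constant_budget_growth(1.0, 0.15, 10):.1f}")  # ~x4.0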

A lot more to come …

LHCb & Run 3
[Figure: planned Run-3 trigger/DAQ data flows. LHCb: 40 MHz readout with a 5-40 MHz software trigger stage, writing 20 kHz (0.1 MB/event), i.e. 2 GB/s, to storage. A reconstruction + compression chain runs at 50 kHz (1.5 MB/event), with a peak output of 75 GB/s to storage.]

ATLAS & CMS in Run 4
[Figure: planned trigger/DAQ data flows, each Level 1 → HLT → Storage. One chain writes 5-10 kHz (2 MB/event) to storage; the other writes 10 kHz (4 MB/event), i.e. 40 GB/s peak output.]
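
A minimal worked version of the peak-output arithmetic behind the two figures above: the rate to storage is simply the trigger rate multiplied by the event size. The numbers are those quoted on the slides; the function name is purely illustrative.

def peak_output_gb_per_s(rate_khz: float, event_size_mb: float) -> float:
    """Peak output to storage in GB/s, for a trigger rate in kHz and event size in MB."""
    return rate_khz * 1e3 * event_size_mb * 1e6 / 1e9  # events/s * bytes/event -> GB/s

if __name__ == "__main__":
    print(peak_output_gb_per_s(20, 0.1))  #  2.0 GB/s
    print(peak_output_gb_per_s(50, 1.5))  # 75.0 GB/s
    print(peak_output_gb_per_s(10, 4.0))  # 40.0 GB/s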

Data: Outlook for HL-LHC
- Very rough estimate of new RAW data per year of running, using a simple extrapolation of the current data volume scaled by the output rates
- To be added: derived data (ESD, AOD), simulation, user data…
[Figure: estimated RAW data per year, in PB.]
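
A minimal sketch of the kind of extrapolation described above: scale a current yearly RAW volume by the ratio of future to current output rates. The input numbers are placeholders, not official estimates.

def scaled_raw_volume_pb(current_raw_pb: float,
                         current_rate_khz: float,
                         future_rate_khz: float) -> float:
    """Scale today's yearly RAW volume by the ratio of trigger output rates."""
    return current_raw_pb * (future_rate_khz / current_rate_khz)

if __name__ == "__main__":
    # Placeholder example: 10 PB/year of RAW at 1 kHz today, moving to a 10 kHz output rate
    print(f"{scaled_raw_volume_pb(10.0, 1.0, 10.0):.0f} PB/year")  # 100 PB/year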

CPU: Online + Offline
- Very rough estimate of new CPU requirements for online and offline processing per year of data taking, using a simple extrapolation of current requirements scaled by the number of events
- Little headroom is left; we must work on improving the performance
[Figure: estimated CPU needs (MHS06), split into ONLINE and GRID, compared with the Moore's-law limit and the historical growth of 25%/year; the gap indicates the room for improvement.]
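
A minimal sketch of the comparison behind the figure: what a flat budget buys if capacity keeps growing at the quoted historical 25%/year, versus a requirement scaled by the increase in the number of events. The starting value and the event-rate factor are placeholders, not official figures.

def flat_budget_capacity(start_mhs06: float, years: int, growth: float = 0.25) -> float:
    """Capacity (MHS06) after `years` of growth at `growth` per year for a flat budget."""
    return start_mhs06 * (1.0 + growth) ** years

def scaled_requirement(start_mhs06: float, event_factor: float) -> float:
    """Requirement scaled linearly with the increase in the number of events."""
    return start_mhs06 * event_factor

if __name__ == "__main__":
    have = flat_budget_capacity(3.0, years=10)          # ~28 MHS06 (placeholder start)
    need = scaled_requirement(3.0, event_factor=20.0)   # ~60 MHS06 (placeholder factor)
    print(f"capacity ~{have:.0f} MHS06 vs requirement ~{need:.0f} MHS06")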

Conclusions
- Networking has been shown to be a very stable and functional service for WLCG
  o It has enabled us to significantly evolve the computing models
- Networking is key for the future evolution of WLCG
- The bandwidths needed will fit within the expected evolution of technology (given the 25-year history), even on the HL-LHC timescale
- No reason to change the current way of using the LHCOPN or the general Tier-to-Tier connectivity
- The real problem to be addressed is connectivity to Eastern Europe, Asia, Africa, etc.