Computing at the HL-LHC

Presentation transcript:

Computing at the HL-LHC
Predrag Buncic, on behalf of the Trigger/DAQ/Offline/Computing Preparatory Group
ALICE: Pierre Vande Vyvre, Thorsten Kollegger, Predrag Buncic; ATLAS: David Rousseau, Benedetto Gorini, Nikos Konstantinidis; CMS: Wesley Smith, Christoph Schwick, Ian Fisk, Peter Elmer; LHCb: Renaud Legac, Niko Neufeld

Computing @ HL-LHC
Timeline: Run 1 → Run 2 (2015-2017) → Run 3 (2018-2022, upgraded ALICE + LHCb) → Run 4 (2024-2027, upgraded ATLAS and CMS).
CPU needs (per event) will grow with track multiplicity (pileup); storage needs are proportional to the accumulated luminosity.

LHCb & ALICE @ Run 3 (peak output to storage)
LHCb: 40 MHz bunch-crossing rate, 5-40 MHz detector readout → reconstruction + compression → storage at 20 kHz (0.1 MB/event) = 2 GB/s.
ALICE: 50 kHz Pb-Pb interaction rate → reconstruction + compression → storage at 50 kHz (1.5 MB/event) = 75 GB/s.

ATLAS & CMS @ Run 4 (peak output to storage)
CMS: Level 1 trigger → HLT → storage at 10 kHz (4 MB/event) = 40 GB/s.
ATLAS: Level 1 trigger → HLT → storage at 5-10 kHz (2 MB/event) = 10-20 GB/s.
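
The peak-output figures on the two slides above are just event rate × event size. A minimal sketch of that arithmetic in Python (the rates and event sizes are taken from the slides; everything else is ours for illustration):

    # Peak output bandwidth = event rate to storage x event size
    experiments = {
        # name: (rate to storage [Hz], event size [MB])
        "LHCb  @ Run 3": (20e3, 0.1),  # 20 kHz x 0.1 MB/event
        "ALICE @ Run 3": (50e3, 1.5),  # 50 kHz x 1.5 MB/event
        "ATLAS @ Run 4": (10e3, 2.0),  # upper end of 5-10 kHz x 2 MB/event
        "CMS   @ Run 4": (10e3, 4.0),  # 10 kHz x 4 MB/event
    }
    for name, (rate_hz, size_mb) in experiments.items():
        print(f"{name}: {rate_hz * size_mb / 1e3:5.1f} GB/s peak")
    # -> LHCb 2.0, ALICE 75.0, ATLAS 20.0, CMS 40.0 GB/s, as on the slides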

How do we score today?
[Chart: RAW + derived data volumes accumulated in Run 1, in PB]

Data: Outlook for HL-LHC
[Chart: projected RAW data per year at the HL-LHC, in PB]
Very rough estimate of the new RAW data per year of running, using a simple extrapolation of the current data volume scaled by the output rates. Still to be added: derived data (ESD, AOD), simulation, user data…
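
To make the extrapolation concrete, here is an illustrative version of it. The live times per year (~1e7 s for pp, ~1e6 s for the ALICE heavy-ion programme) are rule-of-thumb assumptions, not numbers from the slide, and using the peak rate throughout makes these upper bounds:

    # RAW data per year ~ peak output rate x effective live seconds per year.
    # Live times are rule-of-thumb assumptions (pp ~1e7 s, Pb-Pb ~1e6 s).
    peak_gb_s = {"LHCb": 2, "ATLAS": 20, "CMS": 40, "ALICE": 75}
    live_s    = {"LHCb": 1e7, "ATLAS": 1e7, "CMS": 1e7, "ALICE": 1e6}
    for exp, rate in peak_gb_s.items():
        pb = rate * live_s[exp] / 1e6  # GB -> PB
        print(f"{exp}: ~{pb:.0f} PB RAW/year (upper bound)")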

CPU: Online requirements
ALICE & LHCb @ Run 3/4:
- HLT farms for online reconstruction/compression or event rejection, with the aim of reducing the data volume to manageable levels.
- Event rate ×100, data volume ×10.
- ALICE estimate: 200k of today's core equivalents (2M HEP-SPEC-06).
- LHCb estimate: 880k of today's core equivalents (8M HEP-SPEC-06).
- Looking into alternative platforms to optimize performance and minimize cost: GPUs (1 GPU = 3 CPUs for the ALICE online tracking use case), ARM, Atom…
ATLAS & CMS @ Run 4:
- Trigger rate ×10; CMS 4 MB/event, ATLAS 2 MB/event.
- CMS: assuming linear scaling with pileup, a total factor-of-50 increase in HLT power (10M HEP-SPEC-06) would be needed w.r.t. today's farm.
A back-of-the-envelope check of these numbers follows below.
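
A quick consistency sketch of the farm sizes quoted above. The 10 HS06-per-core conversion is implied by the slide's own numbers (200k cores ≈ 2M HEP-SPEC-06); the 50% GPU-offload fraction is purely our assumption for illustration:

    # Cores -> HEP-SPEC-06, using the conversion implied by the slide
    HS06_PER_CORE = 2e6 / 200e3   # = 10 HS06 per today's core
    print(f"ALICE HLT: {200e3 * HS06_PER_CORE / 1e6:.0f}M HS06")  # ~2M
    print(f"LHCb  HLT: {880e3 * HS06_PER_CORE / 1e6:.1f}M HS06")  # ~8.8M (slide: 8M)

    # If a fraction of the ALICE load (tracking; 50% is our assumption)
    # moves to GPUs at the slide's 1 GPU = 3 CPUs:
    gpus = 200e3 * 0.5 / 3
    print(f"ALICE with GPUs: ~{gpus / 1e3:.0f}k GPUs + 100k CPU cores")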

CPU: Online + Offline
[Chart: projected CPU needs (MHS06) per year, showing the online component, the grid's historical growth of 25%/year, the room for improvement, and the Moore's-law limit]
Very rough estimate of the new CPU requirements for online and offline processing per year of data taking, using a simple extrapolation of current requirements scaled by the number of events. Little headroom is left; we must work on improving performance. A compound-growth illustration follows below.
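
For scale, a hedged compound-growth comparison. The 2013-to-2026 window (this talk to roughly the middle of the Run 4 period on the timeline slide) and the reuse of the ×50 CMS HLT factor are our illustrative choices:

    # Capacity from 25%/year growth vs. a x50 growth in requirements
    years = 2026 - 2013               # this talk (2013) to ~Run 4
    capacity = 1.25 ** years          # ~x18
    required = 50                     # e.g. the CMS HLT factor above
    print(f"25%/yr for {years} y -> x{capacity:.0f}; "
          f"needed x{required}; gap x{required / capacity:.1f}")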

Conclusions
- Resources needed for computing at the HL-LHC are going to be large, but not unprecedented.
- Data volume is going to grow dramatically.
- Projected CPU needs are reaching the Moore's-law ceiling; grid growth of 25% per year is by far not sufficient.
- Speeding up the simulation is very important.
- Parallelization at all levels (application, I/O) is needed to improve performance; this requires re-engineering of the experiment software.
- New technologies such as clouds and virtualization may help to reduce complexity and to fully utilize the available resources.