The latest developments in preparations of the LHC community for the computing challenges of the High Luminosity LHC
Dagmar Adamova (NPI AS CR Prague/Rez)


The latest developments in preparations of the LHC community for the computing challenges of the High Luminosity LHC
Dagmar Adamova (NPI AS CR Prague/Rez) and Maarten Litmaath (CERN)

The LHC performance in Run 2 has exceeded all expectations, delivering integrated luminosities larger than originally planned. Run 3 and especially Run 4 will bring even higher luminosities and pileup. This will translate into much more data collected by the experiments and more complex events, which in turn will require more sophisticated physics generators, massive simulation campaigns, and more complicated, longer-running reconstruction. There are concerns about the scale of computing and storage resources required to realize the physics program of Run 3 and Run 4. Considering the costs, can the LHC community afford it?

2017: integrated luminosity 50 fb-1, pileup ~30 to 60
2026: integrated luminosity >300 fb-1, pileup ~130 to 200
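A back-of-the-envelope comparison of the 2017 and 2026 figures quoted above, a minimal illustrative sketch rather than an official projection, shows the size of the jump:

    # Quick arithmetic on the figures quoted above (illustrative only; the
    # 2026 luminosity is a lower bound and the pileup values are ranges).
    lumi_2017, lumi_2026 = 50.0, 300.0      # integrated luminosity [fb^-1]
    pileup_2017 = (30 + 60) / 2             # mid-range pileup, 2017
    pileup_2026 = (130 + 200) / 2           # mid-range pileup expected for 2026

    print(f"Integrated luminosity: >= {lumi_2026 / lumi_2017:.0f}x")    # >= 6x
    print(f"Average pileup:        ~{pileup_2026 / pileup_2017:.1f}x")  # ~3.7x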

Implications for Run 3 and Run 4 based on the Run 2 experience

In 2017, in the middle of Run 2 and already in the Exabyte era, the prospect of ever-growing data volumes, especially those anticipated for the HL-LHC, raises concerns about the cost of providing the required resource levels. The current projection of the computing needs into the HL-LHC era, which assumes the standard growth of ~20% in CPU and ~15% in storage per year together with the anticipated developments in technology, predicts shortfalls of roughly 4x in CPU and 7x in disk.

[Figures: the ultimate luminosity profile beyond Run 4; anticipated growth in T0 + T1 + T2 CPU resources (kHS06) towards Run 4, annotated x 3.7; anticipated growth in T0 + T1 + T2 disk capacity (PB) towards Run 4, annotated x 7.1; increase of the time needed to reconstruct one event as a function of pileup (ATLAS experiment).]
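Shortfall numbers of this kind follow from a simple compound-growth calculation. The sketch below is illustrative only: the annual growth rates are those quoted on this slide, while the assumed HL-LHC resource needs relative to 2017 are hypothetical values chosen purely so that the example lands near the quoted ~4x and ~7x gaps.

    # Minimal sketch of a flat-budget projection (not an official WLCG estimate).
    # Growth rates are from the slide; the "needed" factors are hypothetical,
    # chosen only to show how gaps of ~4x (CPU) and ~7x (disk) can arise.
    YEARS = 2026 - 2017          # projection horizon
    CPU_GROWTH = 1.20            # ~20% more CPU per year at flat cost
    DISK_GROWTH = 1.15           # ~15% more disk per year at flat cost

    CPU_NEEDED = 20.0            # assumed HL-LHC CPU need, relative to 2017
    DISK_NEEDED = 24.0           # assumed HL-LHC disk need, relative to 2017

    cpu_affordable = CPU_GROWTH ** YEARS    # ~5.2x the 2017 capacity
    disk_affordable = DISK_GROWTH ** YEARS  # ~3.5x the 2017 capacity

    print(f"CPU shortfall:  ~{CPU_NEEDED / cpu_affordable:.1f}x")    # ~3.9x
    print(f"Disk shortfall: ~{DISK_NEEDED / disk_affordable:.1f}x")  # ~6.8x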

The strategy documents: giving roadmaps and timelines for work to provide the capacities needed for the HL-LHC physics program

Since the end of the very successful Run 1, the LHC experiments have faced constraints in the availability of CPU and storage. A number of activities were launched to help bridge the gap, including the restructuring of computing models, the adaptation of software to fast and/or cheap CPU architectures, and the speeding up of simulation and reconstruction. These endeavours were mostly carried out by the experiments individually, except for the developments in ROOT and Geant4/GeantV. In 2017 the WLCG management called for blueprint documents defining roadmaps and timelines not only common to the LHC experiments but also relevant to other big HEP projects, to help prepare for the demands of the HL-LHC and related endeavours. The Community White Paper was delivered by the HEP Software Foundation at the end of 2017, and a WLCG Strategy document is to appear soon. Various working groups were established; the topics to pursue include:
- Improving software performance and efficiency
- Reducing data volume
- Reducing processing costs
- Reducing infrastructure and operation costs
- Ensuring sustainability

"The amount of data that experiments can collect and process in the future will be limited by affordable software and computing, not by physics"
Taken from the Community White Paper of the HEP Software Foundation, http://hepsoftwarefoundation.org/activities/cwp.html

Worldwide LHC Computing Grid in 2017

The distributed computing and storage infrastructure for the LHC experiments, the WLCG, has been providing a reliable framework for the processing of LHC data and the delivery of physics results ever since the start of LHC operations. It will be a challenge to keep its capacity sufficient through Run 3, Run 4 and beyond!