New strategies of the LHC experiments to meet the computing requirements of the HL-LHC era

Presentation transcript:

New strategies of the LHC experiments to meet the computing requirements of the HL-LHC era
Dagmar Adamova (NPI AS CR Prague/Rez) and Maarten Litmaath (CERN)

Worldwide LHC Computing Grid (WLCG) in October 2016:
- 63 MoU signatories, 167 sites, 42 countries
- Up to ~600k CPU cores, but equivalent to only ~350k of today's fastest cores
- Disk storage: ~400 PB
- Tape storage: ~390 PB

LHC in 2016:
- Reached a luminosity 40% above the design value.
- The integrated luminosity collected was 40 fb⁻¹ (the goal for 2016 was 25 fb⁻¹).
- Run 2 has set new records in all areas of performance and capability, and the WLCG and the experiments have managed the challenges successfully.

Continuous efforts in many areas make it possible to adapt the system to the constantly evolving performance demands: improvements in experiment software performance, changes to the computing models, adoption of new data management strategies, upgrades of network capacity, and the use of external resources such as high-performance computing (HPC) centers and computational clouds.

Outlook: HL-LHC might/will create a computational problem

- Beam energy: 6.5 → 7 TeV; peak luminosity: 1.6-1.7 × 10³⁴ cm⁻²s⁻¹ → 2-3 × 10³⁴ cm⁻²s⁻¹
- Upgrade of ALICE + LHCb for Run 3: 75 GByte/s for 6 weeks/year
- Upgrade of CMS + ATLAS for Run 4: 10-40 GByte/s data rates in p-p runs

Luminosity upgrades → higher pile-up, new trigger strategies, higher event complexity.
Detector upgrades → more channels, more complex simulations.
→ More data (see the illustrative sketch below).

Estimates still vary a lot: 65-200 times larger needs than for LHC Run 2.
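The readout rates quoted above translate directly into annual raw-data volumes. The Python sketch below is a back-of-the-envelope illustration only: the assumption that the quoted rates are sustained over the whole running period, and the ~30 weeks of p-p running per year used for the Run 4 case, are assumptions for illustration, not figures from the presentation.

```python
# Back-of-the-envelope conversion of sustained readout rates into annual
# raw-data volumes (illustrative only; running times are assumptions).

def annual_volume_pb(rate_gb_per_s: float, weeks_per_year: float) -> float:
    """Raw data volume in PB for a given sustained rate and running time."""
    seconds = weeks_per_year * 7 * 24 * 3600
    return rate_gb_per_s * seconds / 1e6  # 1 PB = 1e6 GB

if __name__ == "__main__":
    # ALICE + LHCb in Run 3: 75 GB/s for 6 weeks/year (figures from the slide)
    print(f"75 GB/s for 6 weeks/year   -> ~{annual_volume_pb(75, 6):.0f} PB/year")
    # CMS + ATLAS in Run 4: 10-40 GB/s in p-p runs; the ~30 weeks/year of
    # running is an assumed value, not a number from the presentation
    for rate in (10, 40):
        print(f"{rate:>2} GB/s for 30 weeks/year -> ~{annual_volume_pb(rate, 30):.0f} PB/year")
```

Even with rough running-time assumptions, the upgraded readout rates land in the hundreds of petabytes of raw data per year, which is the origin of the "more data" conclusion on the slide.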

Estimates of data volumes and resource needs for HL-LHC and the computing challenge

Data growth estimate:
- Raw: 2016: ~50 PB → 2027: 600 PB
- Derived (1 copy): 2016: ~80 PB → 2027: 900 PB
This is a very rough estimate of the RAW data produced per year of running, obtained by a simple extrapolation of the current data volume scaled by the output rates (in PB, following up on 2015). Still to be added: derived data (ESD, AOD), simulation, user data…

CPU: a 60-fold increase with respect to 2016 is needed. These estimates are based on today's computing models, but with the expected HL-LHC operating parameters. They are very preliminary and may change significantly over the coming years.

The estimates are at least 10 times above what is realistic to expect from technology at reasonably constant cost (see the sketch below). Without continuous efforts to optimize the system, computing risks becoming a bottleneck for LHC science. Both CERN IT and the experiments are increasingly held accountable by the funding agencies for the science throughput per investment.

A new Computing TDR is under preparation for 2020. It will be a checkpoint on a roadmap: the current working system will always continue to evolve.

In addition to the technological challenges, there are also sociological aspects: unless computing and software activities are recognized as being equally important for physics careers as analysis work, the amount of effort available for computing may become a problem for LHC science.
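To make the gap between the projected needs and flat-budget technology growth concrete, the sketch below compounds a few assumed annual price/performance improvement rates over the 2016-2027 period and compares the result with the 60-fold CPU upscale quoted above. The improvement rates are assumptions chosen for illustration, not figures from the presentation.

```python
# Illustrative comparison of flat-budget technology growth with the projected
# 60x CPU need for HL-LHC (improvement rates are assumptions, not slide figures).

def flat_budget_capacity(annual_gain: float, years: int) -> float:
    """Capacity multiplier after compound improvement at constant cost."""
    return (1.0 + annual_gain) ** years

if __name__ == "__main__":
    years = 2027 - 2016          # extrapolation period used on the slide
    needed_cpu = 60.0            # 60x CPU upscale from 2016, quoted on the slide
    for gain in (0.10, 0.15, 0.20):
        capacity = flat_budget_capacity(gain, years)
        print(f"{gain:.0%}/year at flat cost -> ~{capacity:.1f}x capacity, "
              f"gap vs. {needed_cpu:.0f}x need: ~{needed_cpu / capacity:.0f}x")
```

For plausible improvement rates the flat-budget gain stays roughly an order of magnitude below the required factor of 60, consistent with the "at least 10 times above" gap stated on the slide.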