Computing for the LHC: operations during Run 2 and getting ready for Run 3
Dagmar Adamova (NPI AS CR Prague/Rez) and Maarten Litmaath (CERN)

The Worldwide LHC Computing Grid (WLCG)

WLCG is a global computing infrastructure that provides the resources to store, distribute and analyze the data generated by the LHC experiments. It gives all participating institutions access to the data regardless of their physical location. WLCG is the world's largest computing grid.

WLCG resources in 2015:
- nearly 170 sites in 41 countries
- ~350 000 CPU cores
- ~500 PB of storage
- > 2 million jobs/day
- 10–100 Gbit/s network links

Performance of WLCG during Run 1 and Run 2

LHC computing performed well during the first year of Run 2. The raw LHC data recorded in 2015 amounted to almost 40 PB (compared with 15, 23 and 27 PB in 2010, 2011 and 2012, respectively). During the heavy-ion run, the data rates to CASTOR peaked at up to 10.5 GB/s. A number of improvements, upgrades and optimizations of LHC computing for Run 2 made this good performance possible in spite of budget limitations.

[Figure: Data transfers to CASTOR tape storage at the CERN Tier-0 during Run 1 and Run 2, in TB/month, showing a ramp-up at the end of 2015.]
[Figure: Delivery of CPU resources during Run 1 and Run 2, in billion HEP-SPEC06, with a ramp-up of almost one third in 2015.]
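To put these numbers in perspective, the short back-of-the-envelope sketch below compares the heavy-ion peak rate with the average rate implied by spreading 40 PB over a full calendar year; the 40 PB and 10.5 GB/s figures come from the text above, and spreading the volume uniformly over the year is a deliberate simplification.

```python
# Rough scale check of the 2015 figures quoted above (illustrative sketch).
# 40 PB recorded in 2015 and the 10.5 GB/s heavy-ion peak are from the text;
# assuming a uniform rate over the calendar year is a simplification.

RAW_2015_PB = 40                 # raw LHC data recorded in 2015
PEAK_RATE_GB_S = 10.5            # peak rate to CASTOR during the heavy-ion run
SECONDS_PER_YEAR = 365 * 24 * 3600

avg_rate_gb_s = RAW_2015_PB * 1e6 / SECONDS_PER_YEAR   # 1 PB = 1e6 GB

print(f"Calendar-year average rate: ~{avg_rate_gb_s:.1f} GB/s")
print(f"Heavy-ion peak rate:        ~{PEAK_RATE_GB_S} GB/s "
      f"(~{PEAK_RATE_GB_S / avg_rate_gb_s:.0f}x the yearly average)")
```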

Improvements, upgrades and optimizations in Run 2

- Revision of the experiments' computing models to be more flexible.
- Opportunistic use of the experiment HLT farms.
- Use of clouds and other opportunistic and commercial resources.
- Further simplification and consolidation of middleware services and their clients.
- Growth at new and existing WLCG sites.
- Enhanced experiment software performance:
  -- Simulation faster by up to a factor of 2.
  -- Reconstruction up to 4 times faster.
  -- Multicore jobs: more efficient memory usage.
- More use of network access to data (see the sketch below).
- Other optimizations:
  -- Reduced number of (re-)processing passes.
  -- New optimized analysis data formats.
  -- More organized analysis processing workflows ("analysis trains").
  -- Faster calibration and alignment (ALICE), online calibration (LHCb).

[Figure: Use of the ATLAS HLT farm in 2015, with over 33 thousand running jobs at peak.]
CMS: scale-testing campaign to validate the "Any Data, Anytime, Anywhere" (AAA) project.
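The shift towards network access to data means that jobs open files remotely through an XRootD federation (the pattern behind CMS AAA) instead of requiring a local replica. Below is a minimal access sketch using PyROOT; the redirector host, file path and tree name are hypothetical placeholders, and a ROOT build with XRootD support is assumed.

```python
# Minimal sketch of WAN data access through an XRootD federation.
# Requires PyROOT built with XRootD support; the URL and tree name below
# are hypothetical placeholders, not real dataset locations.
import ROOT

url = "root://xrootd-redirector.example.org//store/data/example_events.root"

f = ROOT.TFile.Open(url)          # the file is read over the network, no local copy
if f and not f.IsZombie():
    tree = f.Get("Events")        # assumed tree name, for illustration only
    n = tree.GetEntries() if tree else 0
    print(f"Opened {url} remotely, {n} entries in 'Events'")
    f.Close()
else:
    print("Could not open the remote file (redirector or replica unavailable)")
```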

Longer term prospects, beyond Run 2

The basic question: what physics can be done with the computing we can afford? The 5-year cost of WLCG hardware and operations is almost the same as the construction cost of the ATLAS or CMS detector. The costs of facilities and power motivate the idea that commercially provisioned computing may soon be more cost-effective for HEP.

Predictions for LHC raw data taking beyond 2016 (a rough cumulative estimate is sketched below):
- Run 3 (2020-2022): ~150 PB/year
- Run 4 (2023-2029): ~600 PB/year

[Figure: Predictions of the total data volumes generated beyond 2016.]

Other important developments to meet the Run 3 and Run 4 challenges:
- Continued optimization of the reconstruction and physics code to get more performance out of evolving traditional CPU architectures (x86_64).
- Reimplementation of such code to run on new, less expensive architectures (e.g. ARM, GP-GPU).

The challenge for the coming years: to evolve the system within constrained budgets without compromising the physics output.

[Figure: CERN commercial cloud Grid Report monitoring, load report for the period 1.11.2015 – 19.12.2015.]
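As a rough illustration of what these rates imply, the sketch below accumulates the quoted per-year volumes over the stated run periods. Counting every calendar year in each range as a full data-taking year is a deliberate simplification (it ignores shutdowns and ramp-up), so the totals are order-of-magnitude estimates only.

```python
# Cumulative raw-data estimate from the per-year rates quoted above.
# Simplification: every calendar year of each run is counted as a full
# data-taking year at the quoted rate, so these are upper-end estimates.

runs = {
    "Run 3 (2020-2022)": {"years": 3, "pb_per_year": 150},
    "Run 4 (2023-2029)": {"years": 7, "pb_per_year": 600},
}

total_pb = 0
for name, r in runs.items():
    run_total = r["years"] * r["pb_per_year"]
    total_pb += run_total
    print(f"{name}: ~{run_total} PB of raw data")

print(f"Combined estimate: ~{total_pb} PB (~{total_pb / 1000:.1f} EB)")
```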