Predrag Buncic, CERN – ALICE Status Report, LHCC Referee Meeting, 01/12/2015


Grid usage
[Pie chart of Grid usage shares: 70%, 16%, 8%, 6%]

Data processing in 2015
– MC: 96 cycles, the usual number of productions; 1,952,334,979 events (p-p, p-Pb, Pb-p, some Pb-Pb, some G4)
– RAW: reprocessed Run 1 data

Run 2 progress
– Registered data volume in 2015 – equal to the sum of … PB RAW

pp at √s = 13 TeV
– At the highest interaction rates, events show pileup comparable to Pb-Pb collisions
– The TPC tracking code was modified to correct for the large distortions caused by space-charge build-up (see the sketch below)
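To illustrate the idea behind this modification (not the actual AliRoot code; all names and numbers below are hypothetical), a minimal C++ sketch of applying an estimated space-charge distortion to TPC cluster positions before the track fit:

    // Illustrative sketch only: correct cluster positions with an estimated,
    // time-dependent space-charge distortion before fitting tracks.
    #include <array>
    #include <vector>

    // Hypothetical correction map: returns the (dx, dy, dz) shift caused by
    // space-charge distortions at a given position and time.
    struct SpaceChargeMap {
        std::array<double, 3> distortion(const std::array<double, 3>& pos,
                                         double timestamp) const {
            // Placeholder: a real map would interpolate a measured 3D field.
            (void)timestamp;
            return { -1e-3 * pos[0], -1e-3 * pos[1], 0.0 };
        }
    };

    struct Cluster { double x, y, z; };

    // Correct clusters in place: subtract the estimated distortion so that
    // the track fit sees (approximately) undistorted space points.
    void correctClusters(std::vector<Cluster>& clusters,
                         const SpaceChargeMap& map, double timestamp) {
        for (auto& c : clusters) {
            const auto d = map.distortion({c.x, c.y, c.z}, timestamp);
            c.x -= d[0];
            c.y -= d[1];
            c.z -= d[2];
        }
    }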

New calibration step
– A new calibration step is required to take these (time-dependent) distortions into account (see the sketch below)
– Comparison of standard reconstruction with reconstruction using the new correction, tested at 400 kHz interaction rate:
  – ITS matching rate for triggered BCs increases by ~60%
  – Number of TPC clusters per track also increases
– Note: ITS matching was affected not only by the TPC track loss due to the distortions, but also by wrong extrapolation from the TPC to the ITS
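A minimal sketch of what a time-dependent calibration lookup can look like, assuming a store keyed by validity start times; the class and method names are illustrative and are not the actual ALICE calibration (OCDB) interface:

    // Illustrative sketch: pick the correction map whose validity interval
    // contains the event timestamp, so time-dependent distortions are
    // handled by the calibration step.
    #include <iterator>
    #include <map>
    #include <stdexcept>

    struct CorrectionMap { /* distortion field for one validity interval */ };

    class TimeDependentCalib {
    public:
        // Register a map valid from 'startTime' (seconds) onwards.
        void add(double startTime, CorrectionMap map) {
            maps_[startTime] = map;
        }
        // Return the map whose validity interval contains 'eventTime'.
        const CorrectionMap& forTime(double eventTime) const {
            auto it = maps_.upper_bound(eventTime);
            if (it == maps_.begin())
                throw std::runtime_error("no calibration for this time");
            return std::prev(it)->second;
        }
    private:
        std::map<double, CorrectionMap> maps_;  // start time -> map
    };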

Pb-Pb at √s_NN = 5 TeV
– Initial track seeding in the presence of distortions required opening the cuts in the track search, leading to increased memory usage (see the sketch below)
– => Request for temporary large-memory queues at T1 sites
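A back-of-envelope sketch, with entirely assumed numbers, of why opening the seeding road increases the memory footprint: the number of candidate seeds kept in memory grows roughly with the road width times the local cluster density:

    // Back-of-envelope estimate (not ALICE code): memory needed for seed
    // candidates as a function of the seeding road width.
    #include <cstdio>
    #include <initializer_list>

    int main() {
        const double clusterDensity = 50.0;   // clusters per cm along the road (assumed)
        const double bytesPerSeed   = 200.0;  // bookkeeping per candidate (assumed)
        const long   nStartClusters = 500000; // seeding clusters per event (assumed)

        for (double roadWidthCm : {0.5, 1.0, 2.0}) {
            const double candidatesPerStart = clusterDensity * roadWidthCm;
            const double seedMemoryGB =
                nStartClusters * candidatesPerStart * bytesPerSeed / 1e9;
            std::printf("road %.1f cm -> ~%.1f GB of seed candidates\n",
                        roadWidthCm, seedMemoryGB);
        }
        return 0;
    }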

Memory consumption
– Tests on the worst LHC11h data chunk
– Latest results obtained with the modified code ("now we are here" on the plot)
– We remain under the 2 GB/job limit for reconstruction
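For reference, a small self-contained helper (a sketch, not part of the ALICE software) that reads the resident set size of the current process on Linux and compares it to the 2 GB/job limit quoted above:

    // Read VmRSS from /proc/self/status (Linux) and compare it to 2 GB.
    #include <fstream>
    #include <iostream>
    #include <string>

    long residentKiB() {
        std::ifstream status("/proc/self/status");
        std::string key;
        long value = 0;
        while (status >> key) {
            if (key == "VmRSS:") { status >> value; return value; }  // kB
            std::getline(status, key);  // skip the rest of the line
        }
        return -1;  // field not found (e.g. non-Linux system)
    }

    int main() {
        const long limitKiB = 2L * 1024 * 1024;  // 2 GB job limit
        const long rss = residentKiB();
        std::cout << "VmRSS = " << rss << " kB, limit = " << limitKiB << " kB: "
                  << (rss >= 0 && rss < limitKiB ? "OK" : "over/unknown") << "\n";
        return 0;
    }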

CPU performance
– Tests on the same LHC11h data chunk
– Gain of ~20% as a by-product of the memory optimization and the optimization of the TPC code
– Unfortunately these gains were quickly lost due to the increased complexity of events in Run 2

Status of Pb-Pb processing
Data taking
– Smooth operation
– 40% of the data is replicated to the T1s
Processing
– Fast cycle for muon analysis and calorimeter calibration
– Full offline calibration cycle for global processing
– Special selection of runs for first physics
– … TB of RAW in one week

Expected disk space evolution (T0+T1s+T2s+O2)
[Plot: expected disk space evolution in PB; CRSG-scrutinized request, to be discussed with the CRSG; growth of +…%/year (T0, T1) and +5%/year (T2); see the projection sketch below]

Follow-up to the O2 TDR
– 6 Oct: meeting convened by the DRC with the 4 experiments to understand the global requirements
– 16 Oct: meeting of ALICE with IT and the WLCG
Conclusions
– Clarified the projected evolution of resources at CERN
– The O2 architecture is compatible with the WLCG
– The bandwidth from P2 to the Tier 0 will need to be increased; precise costing is being established
– Bandwidth needs to the Grid will also increase; the requests to the Tier 1s will be made in the near future
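A worked example of the capacity projection behind the plot above, assuming a fixed yearly growth rate: the +5%/year figure is the T2 rate quoted on the slide, while the 2015 starting capacity used here is purely hypothetical:

    // Project disk capacity under a constant yearly growth rate.
    #include <cmath>
    #include <cstdio>

    int main() {
        const double t2Disk2015PB  = 30.0;  // assumed T2 disk in 2015 (PB)
        const double growthPerYear = 0.05;  // +5%/year, as quoted for T2

        for (int year = 2015; year <= 2020; ++year) {
            const double capacity =
                t2Disk2015PB * std::pow(1.0 + growthPerYear, year - 2015);
            std::printf("%d: %.1f PB\n", year, capacity);
        }
        return 0;
    }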

First O2 milestones

Dynamic Deployment System (DDS), 18/09/15
– Current stable release: DDS v1.0
– Home site and User's Manual: …
– DDS now provides a pluggable mechanism for external schedulers (see the sketch below)
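To illustrate the design idea of a pluggable scheduler mechanism (this is not the real DDS plugin API; the interface below is a hypothetical minimal example), a sketch of an abstract plugin interface with one stub implementation:

    // Illustrative plugin pattern: one abstract interface, one plugin per
    // resource management system, selected and loaded at run time.
    #include <iostream>
    #include <memory>
    #include <string>
    #include <vector>

    // Interface every resource-management-system plugin implements.
    class SchedulerPlugin {
    public:
        virtual ~SchedulerPlugin() = default;
        virtual std::string name() const = 0;
        // Ask the underlying batch system for 'nAgents' worker slots.
        virtual bool submitAgents(int nAgents) = 0;
    };

    // Example plugin for an SSH-based deployment (behaviour is stubbed out).
    class SshPlugin : public SchedulerPlugin {
    public:
        std::string name() const override { return "ssh"; }
        bool submitAgents(int nAgents) override {
            std::cout << "ssh plugin: would start " << nAgents << " agents\n";
            return true;
        }
    };

    int main() {
        std::vector<std::unique_ptr<SchedulerPlugin>> plugins;
        plugins.push_back(std::make_unique<SshPlugin>());
        for (const auto& p : plugins) p->submitAgents(4);
        return 0;
    }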

Software Lifecycle
– Agile lifecycle: from code to published builds quickly, with automated tests
– Philosophy: automate as much as possible, trivialize human intervention
– Objective for O²: refresh the aging infrastructure and paradigm
Status
– Set up the infrastructure for running builds and tests: done
– Basic tools to build and publish the current software and O²: done
– Visualization interface for the tests: in progress
– Automatic testing and flagging of pull requests for O²: in progress
– Basic infrastructure is ready well ahead of time: as of Sep 9, the old system is retired and the new one is already used in production for the current software
[Pipeline diagram: Develop (Git-based source code) -> Build and test (integration builds, test results) -> Deploy and publish (CVMFS, RPM, Ubuntu tarballs)]

Software Tools and Procedures
– Common license agreed: GPL v3 for ALICE O2 and LGPL v3 for ALFA

Summary
Data processing in 2015 follows the usual pattern
– All requests have been fulfilled
– The number of tasks in the pipeline is manageable
– MC is, as usual, the main resource user (70%), followed by user tasks (22%) and RAW (8%)
Resources and infrastructure are able to cope with the production load
– Computing resources are stable and growing
Unexpectedly large distortions in the TPC at high interaction rate required an extra calibration step and additional software development
– Now under control, including memory per job
Work on O2 has started, following the plan presented in the TDR