Marco Cattaneo - LHCb computing status for the LHCC referees meeting, 14th June 2011


Data taking status
- Constant luminosity: ~10 pb-1/day
- HLT rate as foreseen: 3 kHz of "physics"

Tier 1 resource usage and shares
Pledges (Tier 0/1, % of total):
  DE    15.4%
  CERN  29.3%
  NL    20.9%
  IT    11.2%
  UK    16.9%
  ES     6.3%
  FR    27.6%

Tier 1 usage
[plot: Tier 1 usage compared to pledge]

Reprocessing during data taking
[plot: running jobs, reprocessing vs. live data taking]

Reprocessing during data taking
- Reminder: the end-of-year reprocessing will take ~3 months, so it must start before the end of the run
- All 2011 data taken before the May technical stop have been reprocessed
  - Started during the technical stop and continued in parallel with data taking
  - Saturated the Tier 1s, building up large queues of waiting jobs
- The number of running jobs at several Tier 1s is below the pledges
  - Reasons under investigation

Tier 2 usage (simulation)
[plot: Tier 2 usage compared to pledge]

Reprocessing prospects
- Commissioning of Tier 2 sites for reprocessing
  - Pilot sites selected
  - Production tests imminent
- Use of the online farm for reprocessing
  - Being investigated within the online project
- In both cases we hope to use some of these resources for the 2011 reprocessing
  - Largely dependent on the availability of manpower for commissioning, which is the same manpower as for operations

Memory usage issue
- A logarithmic increase in the memory usage of jobs has been observed
  - Proportional to the volume of I/O
  - Due to the way in which LHCb uses ROOT via POOL
- A new persistency based directly on ROOT has been implemented (see the sketch below)
  - To be deployed before the end of the year
  - Allows more efficient (longer) stripping jobs
[plot: memory usage, POOL vs. ROOT persistency]
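
To make the last point more concrete, below is a minimal C++ sketch of what "persistency directly in ROOT" means in practice: event data are written straight to a ROOT TTree, with no intermediate POOL layer. This is an illustration only, not LHCb code; the file name, tree name, branch name and payload are assumptions.

    // Minimal illustrative sketch (not LHCb production code) of persisting
    // event data directly with ROOT I/O, i.e. writing to a TTree without an
    // intermediate POOL layer. File, tree and branch names are assumptions.
    #include <vector>
    #include "TFile.h"
    #include "TTree.h"

    int main() {
      // Output file; compression and buffering are handled by ROOT itself.
      TFile file("events.root", "RECREATE");
      TTree tree("Event", "Events persisted directly through ROOT");

      // Simple stand-in payload for reconstructed event content.
      std::vector<float> trackPt;
      tree.Branch("trackPt", &trackPt);

      // Fill a few dummy events. The same buffer is reused for every event,
      // so the job's memory footprint stays flat instead of growing with the
      // volume of I/O.
      for (int i = 0; i < 1000; ++i) {
        trackPt.assign(10, 0.5f * i);
        tree.Fill();
      }

      tree.Write();
      file.Close();
      return 0;
    }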