Computing Operations Report, 29 Jan – 7 June 2015
Stefan Roiser, NCB, 8 June 2015

Resource Usage
The Online farm will mostly not be available during Run 2
86 % MCSimulation, 12 % User jobs
Our 9 T0/1 sites are among the first 12 sites, plus:
– 1st: Online farm
– 7th: Yandex
– 10th: Santiago de Compostela
Simulation dominates – will change for the next report ;-)
Decent utilization of pledges
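For orientation, the activity shares and pledge utilization quoted above reduce to simple ratios. A minimal Python sketch, with hypothetical job counts and pledge figures chosen only to illustrate the calculation (none of these numbers appear in the report):

    # Hypothetical illustration of activity shares and pledge utilization.
    # None of these counts come from the report itself.
    running_jobs = {"MCSimulation": 43_000, "User": 6_000, "Other": 1_000}

    total = sum(running_jobs.values())
    for activity, n in running_jobs.items():
        print(f"{activity}: {100.0 * n / total:.0f} % of running jobs")

    # Pledge utilization: average usage over the period vs pledged capacity.
    pledged_hs06 = 200_000   # hypothetical pledged CPU power (HS06)
    used_hs06 = 170_000      # hypothetical average consumption (HS06)
    print(f"Pledge utilization: {100.0 * used_hs06 / pledged_hs06:.0f} %")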

Efficiency
Job success rate: Done + Completed = 95.4 % successful jobs
CPU usage: Simulation 87 %, User jobs 11 %, Run 2 workflow validation (MCStripping) 0.76 ‰ of wall time
[Plots: Job Success Rate; CPU Efficiency]
NB: a Completed job has processed successfully; only the upload of its output file to the destination is delayed
Organized workflows run with very high CPU efficiency (except MCStripping, which consumes very little time)
* User jobs: ~ >80 %
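The two headline numbers combine as follows; a minimal Python sketch in which the job counts and CPU/wall times are hypothetical, chosen only so the success rate reproduces the quoted 95.4 % (this is not the DIRAC accounting code):

    def success_rate(done, completed, failed):
        # Done and Completed both count as successful: a Completed job has
        # processed its data; only the output upload is still pending.
        good = done + completed
        return 100.0 * good / (good + failed)

    def cpu_efficiency(cpu_seconds, wall_seconds):
        # CPU efficiency = CPU time consumed / wall-clock time occupied.
        return 100.0 * cpu_seconds / wall_seconds

    # Hypothetical counts, chosen to reproduce the quoted 95.4 %.
    print(f"Success rate: {success_rate(900_000, 54_000, 46_000):.1f} %")
    # Hypothetical times for an organized (e.g. simulation) workflow.
    print(f"CPU efficiency: {cpu_efficiency(9.5e9, 1.0e10):.1f} %")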

Run 2 Offline Workflow Validation
First files (3 runs) for offline processing available last Friday
– Managed to test all data processing workflows: 11 different productions in 4 streams out of the pit
– No major issues, a few small ones:
  LHCbDIRAC issues fixed on the spot
  Application problems identified and patches rolled out
– Will continue validation with new data after tonight's polarity switch
Very good collaboration of all involved parties!!!

TIER2D STATUS
Slides from Andrew McNab

Current sites
• Now 10 (9) official T2-Ds
  − dCache: CSCS.ch, IHEP.su, RAL-HEP.uk, UKI-LT2-IC-HEP.uk
  − DPM: CPPM.fr, LAL.fr, LPNHE.fr, Manchester.uk, NCBJ.pl, NIPNE-07.ro
• LAL.fr and LPNHE.fr share some infrastructure and offered 300 TB together
• Sites agreed to provide 300 TB for the start of Run 2
• Difficult to enforce this as we're not yet filling them up
• More candidates:
  − CBPF: not currently an official T2-D, still in test
  − DESY-HH, Syracuse, and Yandex are also in various stages of discussion
  − Two other UK sites in discussion, dependent on GridPP5

Capacity at Run 2 startup
• Already 2427 TB online and accessible to analysis jobs today
  − which surpasses the 2015 request of 1.9 PB
• With 9 offers of 300 TB, at least 2700 TB should be available
  − and likely more (e.g. RAL-HEP is already providing 460 TB)
• If three more sites join, making 12 offers of 300 TB, at least 3600 TB will be available
  − which is not far from the 2016 request of 4 PB
• Any of these scenarios (including today's status) means the original goal of providing storage comparable to another Tier-1 has been achieved
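The capacity scenarios above are straightforward lower-bound arithmetic; a small Python sketch (the 300 TB per-site offer and the request figures are from the slide, the helper itself is illustrative):

    # Lower-bound T2-D capacity per scenario: n_sites * 300 TB.
    OFFER_PER_SITE_TB = 300                    # agreed offer per T2-D (slide)
    REQUEST_TB = {2015: 1_900, 2016: 4_000}    # LHCb storage requests (slide)

    def scenario(n_sites):
        return n_sites * OFFER_PER_SITE_TB

    for n in (9, 12):
        tb = scenario(n)
        print(f"{n} sites x {OFFER_PER_SITE_TB} TB -> at least {tb} TB "
              f"(2016 request: {REQUEST_TB[2016]} TB)")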

Storage used at T2-Ds
[Plot: storage used per T2-D site; Stripping 21]

User jobs at T2-Ds (not all using data!)
[Plot]

User jobs data usage at T2-Ds
[Plot]