LHCbDirac and Core Software

Running Gaudi Applications on the Grid (Core Software workshop, PhC, slide 2)

- Application deployment
  - CVMFS on all Tier1s (except GridKa) and a few Tier2s
  - Pushing for it (GDB)
  - Other sites: mostly NFS or GPFS
    - Installation done with install_project in dedicated jobs
  - If the software is not available, local installation in the job (see the sketch after this slide)
    - Support for dual installation (shared / local) is mandatory
  - Other use case: running ROOT in LHCbDirac
    - Need to set up the environment, either by
      1. telling the user to run "SetupProject LHCbDirac ROOT", or
      2. setting it up internally: currently broken by the dual-installation changes
  - LHCbDirac itself is also deployed on the Grid and on CVMFS
    - Not used (yet) by jobs, but by users
- Application support
  - Can we be more generic, i.e. support any Gaudi application?
  - Support an arbitrary application?
  - Support for just ROOT: SetupProject ROOT?
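A minimal sketch of the deployment fallback described above: use the shared-area installation (CVMFS, NFS or GPFS) when present, otherwise install locally inside the job so that the dual (shared / local) scheme keeps working. The shared-area paths, directory layout and install_project invocation are assumptions for illustration, not the actual LHCbDirac code.

    import os
    import subprocess

    # Candidate shared areas; the actual list and layout are assumptions.
    SHARED_AREAS = ["/cvmfs/lhcb.cern.ch/lib", os.environ.get("VO_LHCB_SW_DIR", "")]

    def find_shared_install(project, version):
        """Return the shared area that already contains <project> <version>, or None."""
        for area in filter(None, SHARED_AREAS):
            candidate = os.path.join(area, "lhcb", project.upper(),
                                     "%s_%s" % (project.upper(), version))
            if os.path.isdir(candidate):
                return area
        return None

    def ensure_application(project, version, local_area):
        """Prefer the shared installation; fall back to a local install in the job."""
        area = find_shared_install(project, version)
        if area:
            return area                      # shared (CVMFS / NFS / GPFS) installation
        if not os.path.isdir(local_area):
            os.makedirs(local_area)
        # Hypothetical invocation: the real install_project options may differ.
        subprocess.check_call(["python", "install_project.py", project, version],
                              cwd=local_area)
        return local_area                    # dual installation: local copy for this job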

Extra packages, AppConfig (Core Software workshop, PhC, slide 3)

- At first sight, all OK now
  - Versions fixed in the step manager
- Dynamic SQLite installation (see yesterday's talk from MCl)
  - Transparent?
- Additional options (set up by LHCbDirac)
  - Input dataset
  - Output dataset
  - Special options
    - Setting time stamps, defining the MsgSvc format
  - Should LHCbDirac know about them?
  - Should AppConfig contain templates with just placeholders? (see the substitution sketch after this slide)
    - E.g. a file-name placeholder replaced by the actual name by Dirac (e.g. replaced with …_0012.bhadron.dst)
    - A mechanical operation rather than "knowledge"
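A sketch of the "template with placeholders" idea: AppConfig would ship an options template and LHCbDirac would mechanically substitute the concrete values when the job is created. The placeholder name, the option being set and the output file name below are all hypothetical.

    import string

    # Hypothetical options template as it could live in AppConfig; only the
    # $output_file placeholder would be filled in by LHCbDirac.
    OPTIONS_TEMPLATE = string.Template(
        "OutputStream(\"DstWriter\").Output = "
        "\"DATAFILE='PFN:$output_file' TYP='POOL_ROOTTREE' OPT='RECREATE'\"\n"
    )

    def render_options(output_file):
        """Mechanically replace the placeholder with the name chosen by Dirac."""
        return OPTIONS_TEMPLATE.substitute(output_file=output_file)

    # File name purely illustrative; real production names differ.
    print(render_options("SomeProduction_0012.bhadron.dst"))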

Job finalization (Core Software workshop, PhC, slide 4)

- LHCbDirac needs information from the application
  - Success / failure!
  - Bookkeeping report
    - Nb of events processed
    - Nb of events per output stream, GUID (also in the catalog)
    - Memory used, CPU time?
  - Production system
    - Files successfully processed
    - Failed files
      - Event number of the crash?
- Most (all) of this information is now in the XML summary reports (see the parsing sketch after this slide)
  - XML summary browsing implemented (Mario)
    - Needs thorough testing in jobs (already tested with many cases)
  - Get rid (!) of AnalyseLogFile…
- Any specific requirements for MC simulation?
  - Should the info be added to the BK?
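A minimal sketch of the kind of XML summary browsing mentioned above, extracting success, events read and events per output stream with ElementTree. The element and attribute names are assumptions about the summary layout, not the actual LHCb XMLSummary schema.

    import xml.etree.ElementTree as ET

    def read_summary(path):
        """Collect the job-finalization information LHCbDirac needs from one summary file."""
        root = ET.parse(path).getroot()
        summary = {
            "success": root.findtext("success", default="False").strip() == "True",
            "events_read": 0,
            "events_per_output": {},
        }
        # Assumed layout: <input>/<output> blocks containing <file name="...">N</file>
        for entry in root.findall("./input/file"):
            summary["events_read"] += int(entry.text or 0)
        for entry in root.findall("./output/file"):
            summary["events_per_output"][entry.get("name")] = int(entry.text or 0)
        return summary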

Bookkeeping (Core Software workshop, PhC, slide 5)

- Is there enough information?
- Can it be used for producing reports (e.g. performance benchmarks)?
- Accessing metadata from within a job (a hedged sketch follows this slide):
  - Use case: determine the DDDB to be used for MC
  - Would the BK sustain the load?
  - Where to query: jobs, BK GUI, ganga?
    - What if there is more than one in the dataset?
- User job bookkeeping?
  - Is it worth investing in?
  - Definition of requirements
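A hedged sketch of what "accessing metadata from within a job" could look like, e.g. to decide the DDDB to use for MC from the metadata of an input file. The client import path, method name and the exact structure of the returned payload are assumptions; check the LHCbDirac Bookkeeping client interface before relying on this.

    # Assumed import path and method name.
    from LHCbDIRAC.BookkeepingSystem.Client.BookkeepingClient import BookkeepingClient

    def file_metadata(lfns):
        """Query the BK for the metadata of the job's input files."""
        result = BookkeepingClient().getFileMetadata(lfns)
        if not result["OK"]:
            raise RuntimeError("BK query failed: %s" % result["Message"])
        return result["Value"]       # exact payload structure may differ

    # Illustrative LFN only; inspect the returned dictionary to see which
    # keys (if any) carry the conditions / DDDB information.
    print(file_metadata(["/lhcb/MC/2012/ALLSTREAMS.DST/.../some_file.dst"]))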

Step manager and production requests (Core Software workshop, PhC, slide 6)

- Is the interface adequate?
  - Reco / Stripping: mostly the production manager
  - MC: WG representatives, Gloria
- Is it a limitation that the Condition must already be in the BK to create a request?
  - E.g. prepare a reconstruction production before data is available?
  - Can this be avoided?
- Is the request progress monitoring OK?
- Currently, when enough events are in the BK, one kills the jobs and deletes the files
  - Is this useful? Should we just let it go and flush?
    - Extra disk usage vs wasted CPU

Tools for users (Core Software workshop, PhC, slide 7)

- Not directly for Core Software, but… education!
- How to get a proper PFN? (see the URL sketch after this slide)
  - Too many users of PFN:/castor/cern.ch/…
    - Triggers a disk-to-disk copy to a smallish pool (overload)
    - Should be root://castorlhcb.cern.ch/
  - Currently available:
    - PFNs from the BK GUI
    - CLI tools: to be improved to get the PFN at a site directly
      - Resurrect genXMLCatalog and include it in LHCbDirac
      - Documentation and education!
- How to replicate datasets?
  - To local disk
  - To a shared SE
  - Tools exist but are neither streamlined nor documented
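A small sketch of the PFN point above: rewrite a raw /castor/cern.ch/ path into the xrootd form the slide recommends (root://castorlhcb.cern.ch/) instead of a PFN:/castor/... path that triggers a disk-to-disk copy. The exact URL syntax (and any service-class options) is an assumption, and the example path is purely illustrative.

    def castor_to_xroot(pfn):
        """Rewrite a Castor PFN into a root://castorlhcb.cern.ch/ URL."""
        path = pfn[len("PFN:"):] if pfn.startswith("PFN:") else pfn
        if not path.startswith("/castor/cern.ch/"):
            raise ValueError("not a CERN Castor path: %s" % path)
        return "root://castorlhcb.cern.ch/" + path

    # Illustrative path only.
    print(castor_to_xroot("PFN:/castor/cern.ch/grid/lhcb/data/some_file.dst"))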

Software tools (Core Software workshop, PhC, slide 8)

- Synergy between LHCbDirac and Core Software
  - Eclipse
  - SVN vs git
    - Why are branches / merging a nightmare for LHCbDirac and not for Core SW?
  - Savannah vs JIRA
  - Benefit from the Core Software and Applications' experience
    - Is LHCbDirac that different?
    - Is packaging a bonus or a nuisance? Monolithic vs packages
    - Is getpack useful? How to couple it with Eclipse?
    - Set up the environment from the Eclipse working area
- Can one use SetupProject to get LHCbDirac on WNs?
  - LHCb private pilot jobs
  - Any particular requirement?
- For LHCbDirac services and agents:
  - How to get a controlled Grid environment without doing GridEnv?