LHCb Computing: Manpower requirements

Disclaimer

- In the absence of a manpower planning officer, all FTE figures in the following slides are approximate
- In particular, there may be omissions (at the level of FTEs) in the tables describing currently available effort

Long term computing project responsibilities and needs

- Project management (2 FTE)
  - Coordination, planning (resources, activities, development), liaison with outside bodies (WLCG, RRB, LHC experiments)
- Software engineering support (4 FTE)
  - Code and release management, nightly builds, software performance infrastructure, user environment, tutorials, documentation
- Central infrastructure support (1 FTE)
  - VO management, CERN-IT liaison, web services, Vidyo
- Applications coordination, maintenance, integration (6 FTE)
  - Framework maintenance (Gaudi, Persistency, event model, etc.)
  - Conditions database development, coordination, deployment
  - Physics applications release planning, integration, performance and regression testing, validation
    - Gauss, Boole, Brunel, DaVinci, Moore, event display, etc.
- Computing operations (8 FTE)
  - Production planning, production management, data management, grid operations, user support
- Distributed computing software maintenance (8 FTE)
  - DIRAC and Ganga coordination and integration, bookkeeping, databases, production tools, monitoring, accounting

Total: 29 FTE needed

Manpower currently committed to core activities

Country          FTE
Brazil           0.4
France           0.5
Germany          0.6
Italy            3.5
Russia           1.1
Spain            1.5
CERN             8.0
Switzerland      0.5
Netherlands      1.0
United Kingdom   5.0
United States    0.5
TOTAL            22.6  (c.f. 29 needed)

Current manpower

- Current manpower is insufficient to cover core activities
  - Estimate 29 FTE needed, 22.6 FTE available
  - Some activities not covered (see next slide)
- Very little manpower is available for non-core activities
  - ~4 FTE at CERN in principle working on Gaudi and DIRAC software development
    - In practice making up some of the missing manpower above
  - Small pockets of effort in various countries, for example:
    - Spain (DIRAC development)
    - Italy, UK, CERN (data preservation and outreach)
    - Italy, Netherlands (multicore R&D)
- Barely sufficient to keep our software and computing abreast of evolving technology

New activities

- Core activities not covered by existing manpower
  - e.g. documentation, tutorials, event display, software validation, performance and regression testing
- Software improvement activities for upgrade conditions
  - Application software development
    - e.g. coordination of GPU activities, frameworks for multicore, adoption of ROOT 6
  - Software optimisation
    - e.g. vectorisation, architecture-dependent compilation, C++11 (see the sketch after this slide)
  - Data management
    - e.g. use of data federations, data popularity, remote access to data, event indices, optimisation of ROOT I/O
  - Distributed computing
    - e.g. virtualisation, interfaces to clouds, multicore queues, DIRAC scalability
- Data preservation and open access
- Preliminary estimate: a further 10 FTE needed
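As an illustration of the vectorisation work listed above, a minimal C++11 sketch of a structure-of-arrays layout that compilers can auto-vectorise; the TrackBlock type and pt2 function are hypothetical, not taken from LHCb code.

#include <cstdio>
#include <vector>
#include <cstddef>

// Structure-of-arrays layout: one contiguous array per field, so the
// compiler can issue SIMD loads/stores for the loop below.
struct TrackBlock {
    std::vector<double> px, py, pz;
};

// Compute pt^2 for every track. With -O2/-O3 this loop can
// auto-vectorise: iterations are independent and data is contiguous.
void pt2(const TrackBlock& t, std::vector<double>& out) {
    const std::size_t n = t.px.size();
    out.resize(n);
    for (std::size_t i = 0; i < n; ++i)
        out[i] = t.px[i] * t.px[i] + t.py[i] * t.py[i];
}

int main() {
    TrackBlock t{{1.0, 2.0}, {3.0, 4.0}, {5.0, 6.0}};
    std::vector<double> out;
    pt2(t, out);                               // out = {10.0, 20.0}
    std::printf("%f %f\n", out[0], out[1]);
    return 0;
}

An array-of-structs layout (one Track object per element) would scatter px values through memory and typically defeat auto-vectorisation, which is why this kind of restructuring needs dedicated effort.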

What do other experiments do?

- ATLAS, CMS and ALICE all have some core activities covered by M&O A (either cash or in-kind manpower contributions):
  - Software engineering support
  - Central productions and operation
  - Central infrastructure support
  - ATLAS and CMS: ~2 MCHF (or ~20 FTE, largely "in kind")
  - ALICE: 0.5 MCHF
  - (LHCb: 170 kCHF, for subsistence)
- In addition:
  - ATLAS itemises all computing contributions under M&O B
    - 171 FTE in 2013
  - CMS finances additional core computing manpower at CERN through M&O B
    - 8 FTE
- All have formal agreements on where manpower comes from

Observations

- Manpower currently devoted to operations is incompressible
  - Compares very favourably with the situation in the GPDs
    - BUT many tasks do not scale with collaboration size
    - AND data handling for the LHCb upgrade is comparable to the GPDs in Run 1
- Funding for computing resources is (at best) following a constant budget
  - Growth per CHF follows Moore's law only if the software is optimised for new architectures
  - Growth of LHCb requirements is steeper than Moore's law (see the sketch after this slide)
- Major evolution of the computing model and software is required
  - Requires a significant injection of new manpower
    - Initially for coordination and R&D
    - Subsequently for deployment and operations
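A back-of-the-envelope sketch of the budget argument above; the numbers (flat budget normalised to 1, a two-year price/performance halving time, 60% yearly growth in requirements) are illustrative assumptions, not LHCb planning figures.

#include <cstdio>
#include <cmath>

int main() {
    const double halving_years = 2.0;  // assumed price/performance halving time
    const double req_growth    = 1.6;  // assumed yearly growth of requirements
    double affordable = 1.0;           // capacity a flat budget buys in year 0
    double required   = 1.0;           // requirements in year 0 (same units)
    for (int year = 0; year <= 6; ++year) {
        std::printf("year %d: affordable %.2f, required %.2f\n",
                    year, affordable, required);
        // A flat budget buys 2^(1/halving_years) ~ 1.41x more each year,
        // and only if the software is optimised for the new architectures.
        affordable *= std::pow(2.0, 1.0 / halving_years);
        required   *= req_growth;
    }
    return 0;
}

With these assumed numbers the affordable capacity reaches 8x after six years while requirements reach ~17x, i.e. the shortfall roughly doubles over the period; this is the quantitative core of the case for new manpower.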

Possible scenario

- Divide the computing project into a number of work packages
  - Each including organisational, development and support components
  - Work in progress
- Ask individual groups (or countries) to volunteer responsibility for one or more work packages
  - Each contribution: a team of several people
    - Size will depend on the work package, but 1-2 FTE is a minimum viable contribution
  - Similar to the sub-detector organisation
    - Resulting in a document describing the sharing of responsibilities
  - Precise sharing of responsibilities is essential
    - Best effort, as now, is not enough for the upgrade
- Add up the contributions. If there is a large shortfall, we may need to introduce charging for computing
  - e.g. funding of core software services through M&O A contributions, either in-kind or through a 'tax'
- Principle to be discussed in the CB this week

Other news

- Ricardo Graciani's mandate as computing resources coordinator is over
  - Concezio Bozzi (Ferrara) has agreed to take on this responsibility