LHCb/DIRAC week A.Tsaregorodtsev, CPPM 7 April 2011

Agenda
 Agenda page
 DIRAC and LHCb relations
 DIRAC status
 Subsystem reports and discussions
  DataManagement
  ResourceManagement
  ProductionManagement
 Release procedure
 Certification procedure
 Documentation

DIRAC Project
 DIRAC was born and developed as a project inside the LHCb Collaboration
  MC Production
  Data Reconstruction
  User Analysis
 DIRAC is successfully meeting the basic LHCb computing requirements
 The project has absorbed many years of practical experience in building and running a distributed grid computing system
 No surprise that other user communities are willing to benefit from this experience

DIRAC Project
 Several communities are now actively using DIRAC as an important element of their computing systems
  ILC, CERN
   Data Production and User Analysis
  Belle, KEK
   MC Production using grid and cloud resources
  CREATIS, Lyon – VIP (Virtual Imaging Platform), biomed
   Using DIRAC as a backend for their "Moteur" workflow engine
  CTA Collaboration
   Starting to evaluate DIRAC for their data production
  GISELA – Latin American Grid
   Using DIRAC as part of the service infrastructure
  French NGI, CC, Lyon
   User training
   Support for multiple small user communities

DIRAC Project
 Other projects are likely to start evaluating DIRAC
  Collaborations
  Regional
 A series of successful tutorials took place
  In total, more than 40 users from various scientific domains
 Several non-LHCb developers have started to contribute to DIRAC; among these developments are:
  Cloud support
  MPI job support
  DIRAC File Catalog

DIRAC Project
 The support for the new user communities dictates a new structure for the DIRAC project
  A non-LHCb project
  A project where non-LHCb developers can easily contribute
 LHCb remains the main consumer of the DIRAC software
  The main DIRAC developers are still members of the LHCb Collaboration

DIRAC and LHCb
 The DIRAC project is being formed now
 We started regular DIRAC developer meetings
  Monthly for the moment
 We plan to have regular meetings of DIRAC developers and users
  Every 3-6 months
  The next one will take place in Barcelona in May
 The project will get its own infrastructure
  Web site
  Project management site
  Code repository

DIRAC Project
 Separation of the DIRAC core and LHCb extensions
 Updated release management
 Upgraded DISET framework
 WMS updates
 DMS updates
 v5r13 release
 v6r0 release

DIRAC and LHCbDIRAC separation
 The code was already designed with a clear separation of experiment-specific extensions
  This is now finalized
 The LHCbDIRAC code repository will go to the LHCb general software repository or to a dedicated one
  A DIRAC SVN repository will be maintained to allow the LHCb CMT-based tools to work
 The DIRAC code repository will be migrated to a new (Git) repository
  Keeping a Git-SVN gateway for LHCb

New DISET Framework
 The DIRAC client/service framework has been completely refurbished
  Based on the large accumulated experience
 Increased reliability of connections
 Increased efficiency of connections
 Better treatment of errors
 The client/service interface is preserved
  Transparent for the services and clients (see the sketch below)
 See Adria's talk
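
As an illustration that the client interface stays the same, here is a minimal sketch of a DISET RPC call. It assumes an installed and configured DIRAC client with a valid proxy; the target service name is just an example, and ping() is the generic health-check call exported by DISET services.

    # Minimal sketch of a DISET RPC call (assumes a configured DIRAC client
    # environment and a valid proxy; the service name is illustrative).
    from DIRAC.Core.Base import Script
    Script.parseCommandLine()          # initialise the DIRAC configuration

    from DIRAC.Core.DISET.RPCClient import RPCClient

    monitoring = RPCClient("WorkloadManagement/JobMonitoring")
    result = monitoring.ping()         # generic health-check call
    if result["OK"]:
        print("Service is alive: %s" % result["Value"])
    else:
        print("Call failed: %s" % result["Message"])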

WMS update
 Several new features are not yet exploited by LHCb
 Parametric jobs
  Possibility to submit hundreds of similar jobs in 1-2 seconds
  Supported by putting special parameters in the job JDL
   Specifying a vector of parameters, each of which results in a separate job
  Support in the DIRAC API is also provided (S. Poss, ILC); see the sketch below
 Parametric jobs can be convenient to
  Submit bunches of Production jobs
  Submit bunches of subjobs in Ganga
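
A minimal sketch of a parametric submission through the DIRAC API is shown below. It assumes a configured DIRAC client with a valid proxy; the setParameterSequence/submitJob method names are those of recent DIRAC releases (the 2011-era API exposed the same feature through JDL parameters), so treat them as an assumption rather than the exact v5r13 interface.

    # Sketch: one parametric submission expands into one job per parameter value.
    from DIRAC.Core.Base import Script
    Script.parseCommandLine()

    from DIRAC.Interfaces.API.Dirac import Dirac
    from DIRAC.Interfaces.API.Job import Job

    job = Job()
    job.setName("parametric_demo")
    job.setParameterSequence("args", ["run1", "run2", "run3"])
    job.setExecutable("echo", arguments="%(args)s")   # each job echoes its own value
    result = Dirac().submitJob(job)                   # one job ID per parameter value
    print(result)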

WMS update, cont'd
 Direct pilot submission to CREAM CEs
  Available since v5r11
  Tested with a few sites (CNAF, NIKHEF)
 Pilot accounting by GridType
  gLite, CREAM, ...
  No job accounting by grid type yet - do we need it?
 Uniform site description for both gLite and direct submission
  A SubmissionMode flag in the CE section determines the pilot submission mode (see the sketch below)
 A single SiteDirector can serve multiple sites
  E.g. per country
 Pilot output is kept on the CE and downloaded on demand
 LHCb policies for direct pilot submission are to be defined
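
For illustration, the SubmissionMode flag can be read back from the Configuration Service as sketched below; the CS path and CE name are purely illustrative, and the fallback value is just a guess.

    # Sketch: reading the per-CE pilot submission mode from the CS.
    from DIRAC.Core.Base import Script
    Script.parseCommandLine()

    from DIRAC import gConfig

    # Hypothetical CE path; real paths depend on the registered site/CE names.
    cePath = "/Resources/Sites/LCG/LCG.CNAF.it/CEs/ce.example.org"
    mode = gConfig.getValue(cePath + "/SubmissionMode", "gLite")
    print("Pilot submission mode: %s" % mode)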

DMS update
 DIRAC File Catalog
  Functionality similar to the LFC, and more
  Based on the practical LHCb experience
  Developed for ILC and other communities
  Intensively used by ILC for almost a year
 Can be used in LHCb as well for various purposes
  As an additional catalog in parallel with the LFC and other LHCb-specific catalogs
  To track storage usage
   No need for a special system to collect storage usage summaries
   Immediate storage usage snapshots
  To provide users with a metadata catalog (see the sketch below)
   Per directory, per file
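
A minimal sketch of using the DIRAC File Catalog as a metadata catalog is given below. FileCatalogClient is the standard DFC client class; the directory path and the metadata fields are purely illustrative.

    # Sketch: per-directory metadata in the DIRAC File Catalog.
    from DIRAC.Core.Base import Script
    Script.parseCommandLine()

    from DIRAC.Resources.Catalog.FileCatalogClient import FileCatalogClient

    fc = FileCatalogClient()

    # Attach metadata to a directory (illustrative path and fields).
    fc.setMetadata("/lhcb/user/a/auser/analysis2011", {"DataType": "MC", "Year": 2011})

    # Query files by metadata.
    result = fc.findFilesByMetadata({"DataType": "MC", "Year": 2011})
    if result["OK"]:
        print("Matching files: %d" % len(result["Value"]))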

Dynamic data placement
 Needed to optimize the usage of the storage resources
 Prerequisite - how to measure the popularity of datasets
  Metrics (a toy sketch follows below)
  Info collection
 The ATLAS "Popularity" framework
  Data usage traces collected
   Up to 4M per day
  Non-SQL database
  Dataset popularity evaluation
  Automatic cleaning of "no longer popular" data
  No automatic replication yet
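
The toy sketch below only illustrates the kind of popularity metric meant here (accesses per dataset over a time window, with unaccessed datasets flagged for cleaning); it is not the ATLAS framework, and the trace data are invented.

    # Toy popularity count over usage traces (dataset name, access time).
    from collections import Counter
    from datetime import datetime, timedelta

    traces = [
        ("/lhcb/MC/2010/DST/0001", datetime(2011, 4, 1)),
        ("/lhcb/MC/2010/DST/0001", datetime(2011, 4, 3)),
        ("/lhcb/data/2009/RAW/0007", datetime(2010, 11, 20)),
    ]

    cutoff = datetime(2011, 4, 7) - timedelta(days=90)
    popularity = Counter(ds for ds, t in traces if t >= cutoff)

    for ds in {d for d, _ in traces}:
        if popularity[ds] == 0:
            print("cleaning candidate: %s" % ds)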

Other DMS issues
 Historical usage of storage
 Consistency checks
  SE vs LFC, Bookkeeping
  Detection, correction, feedback (see the sketch below)
 FTS, staging
  Improve traceability, error handling
 Data removal
  Asynchronous handling of bulk operations
 Replication
  Prepare for dynamic placement
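
A possible SE-vs-catalogue consistency check is sketched below for a handful of LFNs; the LFN is illustrative, and the exact return structures of getReplicas() and StorageElement.exists() should be verified against the installed DIRAC version.

    # Sketch: check that every catalogued replica really exists on its SE.
    from DIRAC.Core.Base import Script
    Script.parseCommandLine()

    from DIRAC.Resources.Catalog.FileCatalogClient import FileCatalogClient
    from DIRAC.Resources.Storage.StorageElement import StorageElement

    lfns = ["/lhcb/user/a/auser/file1.dst"]            # illustrative LFN
    replicas = FileCatalogClient().getReplicas(lfns)
    if replicas["OK"]:
        for lfn, seDict in replicas["Value"]["Successful"].items():
            for seName in seDict:
                onSE = StorageElement(seName).exists(lfn)
                if onSE["OK"] and not onSE["Value"]["Successful"].get(lfn, False):
                    print("Missing on %s: %s" % (seName, lfn))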

Resource Status System
 Resources are described in the CS
  Properties, access points, status
  Mixing static and dynamic information (see the sketch below)
 The Resource Status System is a framework for dynamic resource status evaluation
  Should become the main source of resource status information
  Will provide a dedicated service and database backend for this purpose
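
As a reminder of what the static CS description looks like today, the sketch below reads the read/write access flags of a storage element from the CS; the SE name is illustrative, and the RSS is meant to serve such status information dynamically instead.

    # Sketch: static SE status flags as currently kept in the CS.
    from DIRAC.Core.Base import Script
    Script.parseCommandLine()

    from DIRAC import gConfig

    seName = "CERN-USER"                               # illustrative SE name
    base = "/Resources/StorageElements/%s" % seName
    read = gConfig.getValue(base + "/ReadAccess", "Unknown")
    write = gConfig.getValue(base + "/WriteAccess", "Unknown")
    print("%s: ReadAccess=%s WriteAccess=%s" % (seName, read, write))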

CM and Release procedure
 The proposed SVN-based procedure with branches
  See Ricardo's talk at the previous meeting
  Turned out to be hard to manage in real life, with many SVN limitations
 The more adequate solution for code management is to use a DVCS
  Git, Mercurial
 After a long discussion it was decided that
  DIRAC code management should be based on a DVCS
  Git will be evaluated first
  It will be adopted as soon as the technical details are ironed out

Certification, Documentation
 LHCb is putting in place a Certification procedure
  Not yet finalized
  Special certification installation
  Should include a base testing framework
   unittest based? (see the sketch below)
  Krzysztof to work on the proposal
 Documentation
  Sphinx-based static documentation - Guides
  Epydoc-based dynamic code documentation
  The same choice as for the DIRAC project proper
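
As a minimal sketch of what a unittest-based certification test could look like, the example below tests a stand-in for DIRAC's S_OK return convention; it is plain Python unittest, not an existing DIRAC test.

    # Minimal unittest sketch for a certification test suite.
    import unittest


    def s_ok(value):
        """Stand-in for DIRAC's S_OK return convention."""
        return {"OK": True, "Value": value}


    class ReturnConventionTest(unittest.TestCase):
        def test_ok_structure(self):
            result = s_ok(42)
            self.assertTrue(result["OK"])
            self.assertEqual(result["Value"], 42)


    if __name__ == "__main__":
        unittest.main()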