T3 Analysis Facility
V. Bucard, F. Furano, A. Maier, R. Santana, R. Santinelli

Presentation transcript:

The LHCb Computing Model divides the collaboration's affiliated computing centres into three main categories:

- Tier0 (CERN): part of the CERN infrastructure, targeted at real data acquisition.
- Tier1 (6 centres + CERN): used for data reconstruction, processing and reprocessing, and for collaboration-wide analysis.
- Tier2 centres: Monte Carlo generation and, in some particular cases, distributed analysis.

The computing model of LHCb can be found at:

The T3 Analysis Facility project aims to exploit Tier 3 (non-grid) computing and storage resources at Tier 2 sites to run local analysis (see Figure 1). The T2 grid services and facilities are used to download or replicate the data that local physics groups need from the Grid. Once available on the site, the data is used by the institute's scientific community to perform local analysis. The basic required services are therefore the Computing Element, the Storage Element, the User Interface and the Worker Nodes. The Computing Element controls the batch system (PBS, LSF, Condor, etc.), so the locally accessible computing resources (the Worker Nodes) are reached through the batch queues already put in place by the site's middleware installation.

Ganga allows the user to change transparently where a job is submitted, either the local cluster or the Grid, without changing the job description. Ganga must be installed on the User Interface in order to submit jobs to the batch system and to analyse the data.

(Poster panels: an XML slice produced by the T3 facility tool, the standard XML slice, a UML diagram, and the download schema.)

Download schema: the tool is designed as five basic modules that express the services involved in copying a grid file to non-grid storage. These modules are named download, data_manager, check_disk, proxy and catalogue. The proxy module manages proxies, i.e. it creates and destroys them. The catalogue module acts as an interface between the tool and the generic file catalogue; its main task is the management of data replicas. The data_manager module retrieves file information from the Storage Element and ensures consistency between the source and the destination file. The check_disk module checks that enough disk space is available to store the files. The download module integrates all of the services above and offers a user-friendly interface for copying files, identified by their PFN (Physical File Name) on the grid Storage Element, to local storage.

Workflow: a physicist wants to access a dataset of files for analysis. The dataset normally consists of a list of files specified as LFNs (Logical File Names). The LFC (LCG File Catalog) is used to retrieve the list of physical replicas of these files and their locations. Once the LFC has returned the replicas available on the Grid, each file is either copied down to the local storage element or replicated*. When the download finishes successfully the files are cached in the local NFS area. With the data now available on the local site, jobs can be scheduled to use it either locally or through the Grid; this is best achieved using the Ganga job management and submission tool (ref: Ganga). Using the local NFS area as a cache is simply an option to bypass issues related to grid proxy management on non-grid resources. Normally, Ganga produces an XML slice that records each file's GUID, PFN and LFN.
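As an illustration of the transparent backend switch described above, the following is a minimal Ganga session sketch, assuming the standard Ganga/GangaLHCb plugin names; the options file and LFNs are hypothetical placeholders and are not taken from the poster, and the exact class names may differ on a given installation.

    # Minimal Ganga session sketch (run at the Ganga prompt on the User Interface).
    # The options file and LFNs below are hypothetical placeholders.

    j = Job(name='t3af-analysis')
    j.application = DaVinci()                      # LHCb analysis application
    j.application.optsfile = 'MyDaVinciOpts.py'    # hypothetical job options
    j.inputdata = LHCbDataset(['LFN:/lhcb/production/...'])  # illustrative LFNs

    # Submit to the batch system behind the site's Computing Element:
    j.backend = PBS()          # or LSF()/Condor(), matching the local batch system
    j.submit()

    # The identical job description can instead be sent to the Grid by
    # changing only the backend:
    jg = j.copy()
    jg.backend = Dirac()       # grid submission (LCG() is another option)
    jg.submit()

The job definition (application, options, input dataset) is written once; only the backend object decides whether it runs on the local cluster or on the Grid.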
Through the PFN, grid files are normally analysed using the access protocol of the storage element that holds them. The project additionally allows the PFNs of the non-grid (locally cached) files to be used, pointing directly at the POSIX path of the directories where they are stored. Since the XML slice is generated at submission time, it had to be changed: routines such as "RTHUtils.py", which are responsible for generating the XML file, have been overloaded for this purpose (a sketch of such a slice writer is given at the end of this page).

* "Replicated" means that the file is also registered in the file catalogue and thus potentially made available to the whole community.

Figure 1: Overview of the T3 Analysis Facility.
Figure 2: The T3 Analysis Facility targets existing T2 sites for local analysis; the LHCb computing model is used as the example.
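Since the poster does not show the slice format or the RTHUtils.py interface, the following is only a sketch of what an overloaded slice writer could emit: a POOL-style XML file catalogue in which each file's pfn entry points at its POSIX path in the local NFS cache rather than at a grid storage URL. The function name, arguments and sample values are hypothetical.

    # Sketch of a slice writer for locally cached files (hypothetical interface).
    # Each entry keeps the original GUID and LFN, but the PFN is the plain POSIX
    # path of the file in the local NFS cache instead of a grid protocol URL.

    def write_local_xml_slice(entries, output='pool_xml_catalog.xml'):
        """entries: iterable of (guid, lfn, local_path) tuples."""
        lines = ['<?xml version="1.0" encoding="UTF-8"?>',
                 '<!DOCTYPE POOLFILECATALOG SYSTEM "InMemory">',
                 '<POOLFILECATALOG>']
        for guid, lfn, local_path in entries:
            lines += ['  <File ID="%s">' % guid,
                      '    <physical>',
                      # grid protocol replaced by the POSIX path on the NFS cache
                      '      <pfn filetype="ROOT_All" name="%s"/>' % local_path,
                      '    </physical>',
                      '    <logical>',
                      '      <lfn name="%s"/>' % lfn,
                      '    </logical>',
                      '  </File>']
        lines.append('</POOLFILECATALOG>')
        with open(output, 'w') as f:
            f.write('\n'.join(lines) + '\n')

    # Example with hypothetical GUID, LFN and cache path:
    write_local_xml_slice([
        ('AA0E4E2A-0000-0000-0000-000000000000',
         '/lhcb/production/somefile.dst',
         '/nfs/lhcb/cache/somefile.dst'),
    ])

The only change with respect to a standard slice is the content of the pfn name attribute; the GUID and LFN are carried over unchanged, so a job sees the same logical dataset whether it runs on the local cluster or on the Grid.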