
The ATLAS Cloud Model
Simone Campana

LCG sites and ATLAS sites
LCG counts almost 200 sites.
–Almost all of them support the ATLAS VO.
–The ATLAS production software is installed at roughly 80 sites.
Not all of those sites are ATLAS T1s and T2s.
–Being an ATLAS T1 or T2 means providing a certain amount of resources (storage + CPU), defined in the Memorandum of Understanding.
ATLAS counts 10 T1s:
–CNAF, PIC, SARA, RAL, BNL, TRIUMF, SINICA, LYON, FZK, NorduGrid
–And of course there is the CERN T0.
Each T1 should support between 3 and 6 T2s.
Other tiers are “Opportunistic Resources”.

The Cloud Model
ATLAS sites have been divided into CLOUDS.
–At the moment we consider only ATLAS T1s and T2s; opportunistic resources will be introduced later on.
Currently there are therefore 9 clouds:
–T0 (CERN), IT (Italy), ES (Spain), UK (United Kingdom), FR (France), CA (Canada), TW (Taiwan), DE (Germany), NL (Netherlands)
Every TASK in the ATLAS Production System is assigned to a specific cloud (a sketch of the mapping follows):
–Jobs run on one of the cloud CEs.
–Input data are fetched from one of the cloud SEs.
–Output data are stored on one of the cloud SEs.
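Below is a minimal Python sketch of the cloud-to-sites mapping just described. It uses the T1s listed on the previous slide and the IT T2s shown on the next one; the T2 lists for the other clouds are left empty as placeholders, so this is purely illustrative and not the actual ATLAS topology database.

# Illustrative cloud topology: the 9 clouds above, each with its T1.
# Only the IT T2s (taken from the next slide) are filled in.
CLOUDS = {
    "T0": {"tier1": "CERN",   "tier2s": []},
    "IT": {"tier1": "CNAF",   "tier2s": ["MILANO", "ROMA1", "NAPOLI", "LNF"]},
    "ES": {"tier1": "PIC",    "tier2s": []},
    "UK": {"tier1": "RAL",    "tier2s": []},
    "FR": {"tier1": "LYON",   "tier2s": []},
    "CA": {"tier1": "TRIUMF", "tier2s": []},
    "TW": {"tier1": "SINICA", "tier2s": []},
    "DE": {"tier1": "FZK",    "tier2s": []},
    "NL": {"tier1": "SARA",   "tier2s": []},
}

def sites_of(cloud):
    """All sites of a cloud: its T1 plus its T2s."""
    entry = CLOUDS[cloud]
    return [entry["tier1"]] + entry["tier2s"]

print(sites_of("IT"))  # ['CNAF', 'MILANO', 'ROMA1', 'NAPOLI', 'LNF']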

IT Cloud
[Diagram: CNAF (T1) hosting a CE, an SE, the local LFC, the FTS and the VOBOX; the T2s MILANO, ROMA1, NAPOLI and LNF, each with a CE and an SE.]
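The same diagram rendered as a plain Python structure, to spell out which services live where in the IT cloud (a sketch; real host names are omitted):

# Services per site in the IT cloud, as in the diagram above.
IT_CLOUD = {
    "CNAF":   {"tier": "T1", "services": ["CE", "SE", "local LFC", "FTS", "VOBOX"]},
    "MILANO": {"tier": "T2", "services": ["CE", "SE"]},
    "ROMA1":  {"tier": "T2", "services": ["CE", "SE"]},
    "NAPOLI": {"tier": "T2", "services": ["CE", "SE"]},
    "LNF":    {"tier": "T2", "services": ["CE", "SE"]},
}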

Job Submission
ATLAS production jobs can be of different types:
–Mostly EVGEN, SIMUL+DIGIT, RECO.
Each part of the chain produces inputs for the subsequent one:
–Outputs of the EVGEN are inputs of the SIMUL, etc.
–EVGENs generally have no input.
Each task is assigned to a cloud according to the location of its input (see the sketch below):
–If the EVGEN ran in the UK cloud, the corresponding SIMUL will most likely run in the same cloud.
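A minimal sketch of this assignment rule; choose_cloud, the task dictionary and the dataset names are hypothetical stand-ins, not the Production System API:

# Assign a task to a cloud based on where its input already is.
def choose_cloud(task, dataset_location, free_clouds):
    if task["type"] == "EVGEN" or not task.get("input_dataset"):
        # EVGENs have no input, so any cloud with spare capacity will do.
        return free_clouds[0]
    # Otherwise run where the input lives: a SIMUL consuming a UK EVGEN
    # output will most likely stay in the UK cloud.
    return dataset_location[task["input_dataset"]]

locations = {"evgen.000123": "UK"}                         # illustrative dataset name
simul = {"type": "SIMUL", "input_dataset": "evgen.000123"}
print(choose_cloud(simul, locations, free_clouds=["IT", "FR"]))  # -> UK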

Data Management at runtime
Every file of the cloud is registered in the File Catalog (LFC) of the T1 (the four steps are sketched below):
–The job locates the input file using the LFC at the T1.
–The job copies the input file locally onto the WN from an SE of the cloud.
–The job stores the output on one SE of the cloud.
–The job registers the output in the LFC of the cloud.
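The four runtime steps sketched in Python; the helper functions are illustrative stand-ins for the real LFC and storage client tools, not an actual ATLAS job wrapper, and the host names are made up:

# Stand-ins for the grid clients used by the job (names are illustrative).
def lfc_lookup(lfc_host, lfn):
    print("querying %s for %s" % (lfc_host, lfn))
    return ["srm://se.cnaf.example/%s" % lfn]        # SURLs of the replicas

def grid_copy(src, dst):
    print("copying %s -> %s" % (src, dst))

def lfc_register(lfc_host, lfn, surl):
    print("registering %s -> %s in %s" % (lfn, surl, lfc_host))

def run_job(cloud, input_lfn, output_lfn):
    t1_lfc = cloud["lfc"]                            # one LFC per cloud, at its T1
    surl = lfc_lookup(t1_lfc, input_lfn)[0]          # 1. locate the input via the T1 LFC
    grid_copy(surl, "./input.data")                  # 2. stage the input onto the WN
    # ... run the payload here ...
    out_surl = "srm://%s/%s" % (cloud["ses"][0], output_lfn)
    grid_copy(output_lfn, out_surl)                  # 3. store the output on a cloud SE
    lfc_register(t1_lfc, output_lfn, out_surl)       # 4. register it in the cloud LFC

run_job({"lfc": "lfc.cnaf.example", "ses": ["se.cnaf.example"]},
        "evgen.000123.pool.root", "simul.000123.pool.root")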

Asynchronous Data Management
DQ2 is installed on the VOBOX of every T1. DQ2 makes it possible to (the rule is sketched below):
–Move files from a T2 of the cloud to the T1 of the cloud (and vice versa), using the T1 FTS.
–Move files to the T1 from another T1, using the T1 FTS.
–Move files to the T1 from any T2 (in other clouds), using the T1 FTS.
–Move files to a T2 of the cloud from any tier of the cloud, using the T1 FTS.
–Move files from the T0 to a T1, using the T0 FTS.
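A sketch of the rule behind this list: transfers are driven by the FTS at the T1 of the destination cloud (assumed here also for T1-to-T1 transfers), except exports from the T0, which use the T0 FTS. The endpoint names are made up:

# Which FTS drives a given transfer (illustrative endpoints).
T1_FTS = {"CNAF": "fts.cnaf.example", "RAL": "fts.ral.example"}
T0_FTS = "fts.cern.example"

def fts_for_transfer(src_site, dst_cloud_t1):
    if src_site == "T0":
        return T0_FTS                    # T0 -> T1 exports use the T0 FTS
    return T1_FTS[dst_cloud_t1]          # everything else uses the destination cloud's T1 FTS

print(fts_for_transfer("MILANO", "CNAF"))  # intra-cloud T2 -> T1: CNAF FTS
print(fts_for_transfer("T0", "CNAF"))      # T0 export: CERN (T0) FTS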

FTS channels at CNAF
–CNAF-MILANO, MILANO-CNAF
–CNAF-ROMA, ROMA-CNAF
–CNAF-LNF, LNF-CNAF
–CNAF-NAPOLI, NAPOLI-CNAF
–LYON-CNAF, RAL-CNAF, SINICA-ROMA, (other T1)-CNAF
–*-CNAF, *-MILANO, *-ROMA, *-NAPOLI, *-LNF

Usage of DQ2
Before assigning a job to the cloud:
–The input file might need to be replicated.
–The cloud containing the input file might be too busy.
–This is NOT an operator issue.
When a job is assigned to the cloud:
–The output dataset is subscribed to the T1.
–Files are migrated to the T1 as soon as they appear; they will end up in the DISK-only area of the T1 SE.
–Currently, if this subscription fails, a manual operation is needed (unsubscribe and re-subscribe, sketched below).
–This is an issue for the Data Management operator.
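A sketch of the manual recovery mentioned above for a failed subscription; dq2_unsubscribe and dq2_subscribe are hypothetical wrappers, not the real DQ2 client calls, and the dataset name is made up:

# Hypothetical stand-ins for the DQ2 subscription calls.
def dq2_unsubscribe(dataset, site):
    print("removing subscription of %s to %s" % (dataset, site))

def dq2_subscribe(dataset, site):
    print("subscribing %s to %s" % (dataset, site))

def resubscribe(dataset, t1_site):
    """Operator fix-up for a stuck subscription: unsubscribe, then re-subscribe."""
    dq2_unsubscribe(dataset, t1_site)
    dq2_subscribe(dataset, t1_site)       # files then migrate to the T1 disk-only
                                          # area as they appear

resubscribe("some.output.dataset", "CNAF")  # illustrative dataset name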

TiersOfAtlas
The definition of the ATLAS clouds comes from the TiersOfAtlas.py file:
–It lives under $DQ2DIR/common/.
TiersOfAtlas.py contains the API to retrieve cloud information:
–Name of the LFC, names of the SEs, …
The actual topology (the database) is in TiersOfAtlasCache.py:
–Also under $DQ2DIR/common.
–The API checks automatically for updates of the cache and downloads the latest version.

TiersOfAtlas wrapper
ToA is not a client tool:
–If you want to know the IT LFC, you can grep for it in the cache or write a small Python script (see the sketch below).
–A more user-friendly client is being written and will be available very soon.
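Along the lines of the "grep for it in the cache" suggestion, here is a small Python sketch that scans TiersOfAtlasCache.py under $DQ2DIR/common for LFC entries. It only assumes that LFC hosts appear on lines containing the string 'lfc'; check the real cache layout before relying on it.

import os

# Location of the topology cache shipped with DQ2 (see the previous slide).
cache = os.path.join(os.environ["DQ2DIR"], "common", "TiersOfAtlasCache.py")

with open(cache) as f:
    for line in f:
        if "lfc" in line.lower():        # crude filter: assumes 'lfc' appears
            print(line.rstrip())         # on the lines that name the catalogs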