An introduction to the ATLAS Computing Model Alessandro De Salvo

Presentation transcript:

An introduction to the ATLAS Computing Model
Alessandro De Salvo <Alessandro.DeSalvo@roma1.infn.it>
20-02-2008

Outline
- The ATLAS Tier and Cloud model
- The Tier centers
- Data types
- Data availability in the Tiers

The ATLAS Tier and Cloud Model
- Multi-Tier hierarchical model
- Cloud model
  - Each Tier-1 center defines a Cloud
  - 3 or 4 Tier-2 centers are associated to each Cloud (Tier-1), often on the basis of geographical criteria

The ATLAS Tier-1 centers and their associated Tier-2 sites
- ASGC (Taiwan): AU-ATLAS, TW-FTT, AU-UNIMELB
- BNL (USA): AGLT2, BU, MWT2, OU, SLAC, UTA, WISC
- CNAF (Italy): LNF, MILANO, NAPOLI, ROMA1
- FZK (Germany): CSCS, CYF, DESY-HH, DESY-ZN, FZU, LRZ, FREIBURG, WUP
- NDGF (Nordic Countries)
- LYON (France): BEIJING, CPPM, LAL, LAPP, LPC, LPNHE, NIPNE_02, NIPNE_07, SACLAY, TOKYO
- PIC (Spain): IFAE, IFIC, UAM, LIP
- RAL (UK): GLASGOW, LANCS, MANC, QMUL, DUR, EDINBURGH, OXF, CAM, LIV, BRUN, RHUL
- SARA (Netherlands): IHEP, ITEP, JINR, PNPI, SINP
- TRIUMF (Canada): ALBERTA, MONTREAL, SFU, TORONTO, UVIC
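The cloud association is essentially a static mapping from each Tier-1 to its Tier-2 sites. Below is a minimal Python sketch of that mapping, with only a few clouds included from the list above; the dictionary layout and the cloud_of helper are illustrative, not part of any ATLAS tool.

# Minimal sketch of the ATLAS cloud model: each Tier-1 defines a cloud that
# serves a list of associated Tier-2 sites. Only a subset of the clouds is
# listed here; the dictionary and helper below are illustrative, not an ATLAS API.
CLOUDS = {
    "ASGC":   ["AU-ATLAS", "TW-FTT", "AU-UNIMELB"],
    "CNAF":   ["LNF", "MILANO", "NAPOLI", "ROMA1"],
    "PIC":    ["IFAE", "IFIC", "UAM", "LIP"],
    "TRIUMF": ["ALBERTA", "MONTREAL", "SFU", "TORONTO", "UVIC"],
}

def cloud_of(tier2_site: str) -> str:
    """Return the Tier-1 whose cloud contains the given Tier-2 site."""
    for tier1, tier2_sites in CLOUDS.items():
        if tier2_site in tier2_sites:
            return tier1
    raise KeyError(f"unknown Tier-2 site: {tier2_site}")

print(cloud_of("ROMA1"))  # -> CNAF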

Tasks of the Tier centers

Tier-0
- Raw data archival
- Data distribution to the Tier-1 centers
- Prompt reconstruction of the raw data within 48 hours
- First-pass calibration within 24 hours
- Distribution of the reconstruction output (ESD, AOD and TAG) to the Tier-1 centers

Tier-1
- Long-term access and storage of a subset of the raw data
- Backup copy of part of the raw data of another Tier-1
- Reprocessing of the raw data stored at each Tier-1 with the final calibration and alignment parameters, about 2 months after data taking
- AOD distribution to the Tier-2 centers
- Archival of the MC data produced in the Tier-2 centers
- Group analysis: centralized analysis for the physics groups

Tier-2
- Monte Carlo simulation
- User analysis
- Off-site detector calibration (selected Tier-2s only)

Data types and sizes

RAW data
- Byte-stream data coming from the trigger
- Size: 1.6 MB/event

ESD (Event Summary Data)
- Output of the reconstruction: tracks, hits, calorimeter clusters, combined reconstruction objects, …
- Used for calibration, alignment and refitting
- Generally used when the information in the AOD is not sufficient to complete the analysis
- Size: 500 kB/event target, currently 750/900 kB

AOD (Analysis Object Data)
- Reduced format created for analysis
- Reconstructed physics quantities: electrons, muons, taus, …
- Size: 100 kB/event target, currently 250/290 kB

DPD (Derived Physics Data)
- Reduced data for direct use in ROOT
- Size: about 10% of the AOD
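To get a feeling for what these per-event sizes imply, here is a rough back-of-the-envelope estimate of the yearly data volume per format. The 200 Hz trigger rate is taken from the Tier-0 throughput slide below; the 1e7 live seconds of data taking per year is an assumed planning figure, not a number from this talk.

# Rough yearly data volume per format from the nominal (target) event sizes
# above; the DPD size is taken as 10% of the AOD target.
# Assumptions: 200 Hz trigger rate (see the Tier-0 throughput slide) and
# 1e7 live seconds of data taking per year (an assumed planning figure).
EVENT_SIZE_MB = {"RAW": 1.6, "ESD": 0.5, "AOD": 0.1, "DPD": 0.01}
TRIGGER_RATE_HZ = 200
LIVE_SECONDS_PER_YEAR = 1e7

events_per_year = TRIGGER_RATE_HZ * LIVE_SECONDS_PER_YEAR   # 2e9 events
for fmt, size_mb in EVENT_SIZE_MB.items():
    volume_pb = events_per_year * size_mb / 1e9              # MB -> PB
    print(f"{fmt}: ~{volume_pb:.2f} PB/year")                # RAW: ~3.20 PB/year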

Event size

Data distribution

Tier-0 throughput schema
- TDAQ output rate = 200 Hz (trigger rate) × 1.6 MB (event size) = 320 MB/s
- Dedicated optical-fiber connections Tier-0 ↔ Tier-1s at 10 Gbps
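A quick sanity check of these numbers, comparing the TDAQ output rate against the nominal capacity of a single 10 Gbps Tier-0 to Tier-1 link quoted above:

# 200 Hz * 1.6 MB/event = 320 MB/s of TDAQ output, compared with the nominal
# capacity of one dedicated 10 Gbps Tier-0 <-> Tier-1 optical link.
trigger_rate_hz = 200
event_size_mb = 1.6
tdaq_output_mb_s = trigger_rate_hz * event_size_mb        # 320 MB/s
link_capacity_mb_s = 10e9 / 8 / 1e6                       # 10 Gbps = 1250 MB/s
print(f"TDAQ output: {tdaq_output_mb_s:.0f} MB/s "
      f"= {tdaq_output_mb_s / link_capacity_mb_s:.0%} of a single 10 Gbps link")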

The Tier-0 center
- The full set of RAW data and the primary ESD and AOD are stored at Tier-0

The Tier-1 centers

Tier-1 data and computing activities
- A full copy of the RAW data distributed over the Tier-1 centers (10% of it on disk)
- Two copies of the ESD data distributed over the Tier-1 centers
- A full copy of the AOD and TAG in each Tier-1
- Each Tier-1 reprocesses its own RAW data and replicates the output
- A full copy of the physics-group DPD per Tier-1
- Computing activities (2008): 25% reprocessing, 25% simulation, 50% group analysis
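As an illustration of what the RAW policy above means for a single site, the sketch below estimates one Tier-1's share of the RAW data and how it splits between disk and tape. The 3.2 PB/year total and the 5% cloud share are assumed example figures, not numbers from this talk.

# One Tier-1's share of the RAW data under the policy above: the Tier-1s
# collectively hold one full copy of the RAW data, with 10% of it on disk.
# The yearly volume and the 5% site share are assumed example figures.
raw_volume_pb_per_year = 3.2   # assumption: ~200 Hz * 1.6 MB over 1e7 s
tier1_share = 0.05             # assumption: this cloud stores 5% of the full copy
disk_fraction = 0.10           # from the slide: 10% of the RAW copy kept on disk

raw_at_site_pb = raw_volume_pb_per_year * tier1_share
print(f"RAW at this Tier-1: {raw_at_site_pb:.2f} PB "
      f"({raw_at_site_pb * disk_fraction:.3f} PB on disk, "
      f"{raw_at_site_pb * (1 - disk_fraction):.3f} PB on tape)")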

The Tier-2 centers

Tier-2 data and computing activities
- A full copy of the AOD and TAG data in each Tier-2 cloud
- Physics-group DPD and user DPD
- RAW data in all the Tier-2 centers: 30% in 2008, 10% in 2009
- ESD data in all the Tier-2 centers: 50% in 2008, 30% in 2009
- Computing activities (2008): 15% reconstruction, 37% simulation, 48% user analysis
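The activity shares above can be turned into an indicative slot allocation for a single site. A minimal sketch, assuming a hypothetical Tier-2 with 1000 job slots (the slot count is made up; the fractions come from the slide):

# Split of a hypothetical Tier-2's job slots by the 2008 activity shares above.
# The 1000-slot capacity is a made-up example figure; the shares are from the slide.
TIER2_SHARES_2008 = {"reconstruction": 0.15, "simulation": 0.37, "user analysis": 0.48}
total_slots = 1000  # hypothetical site capacity

for activity, share in TIER2_SHARES_2008.items():
    print(f"{activity}: {share * total_slots:.0f} slots")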