Main parameters of Russian Tier2 for ATLAS (RuTier-2 model), Russia-CERN JWGC meeting, 08.03.05, A.Minaenko, IHEP (Protvino)

Main Tier-2 tasks
- Tier-0 and Tier-1 centres: storage of all data types (Raw, ESD, AOD, SIM), event reconstruction, ESD and AOD production, the main bulk of calibration
- The main Tier-2 task is to provide facilities for physics analysis, using mainly AOD, DPD and user-derived data formats
- It also includes development of reconstruction algorithms using subsets of ESD and Raw data
- All data used for analysis should be stored on disk; some unique data (user and group DPD) should also be stored on tape

Main Tier-2 tasks
- The second important task is the production and storage of MC simulated data
- The ATLAS computing model implies that simulated data are stored mainly at Tier-1 centres, but the Russian Tier-2 proposes to take responsibility for the locally produced MC data
- This implies that proper modes of access to these data must be granted and that the data should be backed up on tape
- Participation in calibration/alignment activity can be considered if there is interest from Russian groups

Data taking conditions and data sizes
- Luminosity: 0.5*10^33 cm^-2 s^-1 in 2007; 2.0*10^33 cm^-2 s^-1 in 2008, 2009; 10^34 cm^-2 s^-1 in 2010 and after
- Event rate: 200 events/sec
- Event numbers: 10^9 in 2007, 2*10^9 in each following year
- Raw: 1.6 MB/event
- ESD: 0.5 MB/event
- AOD: 0.1 MB/event
- RawSim: 2.0 MB/event
- Reconstruction: 15 kSI2k*sec/event
- Simulation: 100 kSI2k*sec/event
- Analysis: 0.5 kSI2k*sec/event
- The luminosity increase in 2010 leads to a 50% increase of the event size and a 75% increase of the reconstruction time
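These per-event figures fix the total ATLAS data volumes used throughout the model. A minimal arithmetic sketch in ROOT-macro style (the names are illustrative and not part of the actual RuTier-2 macro; only the numbers quoted above enter the calculation):

```cpp
// Sketch: total ATLAS yearly data volumes implied by the per-event figures above.
#include <cstdio>

void dataVolumes() {
  const double nEvYear = 2.0e9;                            // events per year from 2008 on
  const double rawMB = 1.6, esdMB = 0.5, aodMB = 0.1;      // event sizes, MB/event

  printf("Raw: %.1f PB/year\n", nEvYear * rawMB / 1.0e9);  // MB -> PB
  printf("ESD: %.1f PB/year\n", nEvYear * esdMB / 1.0e9);
  printf("AOD: %.0f TB/year\n", nEvYear * aodMB / 1.0e6);  // MB -> TB
}
```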

Evolution of Raw, ESD, AOD sizes in ATLAS

Cost table
Assumed unit costs per year of purchase: Disk (CHF/GB), Tape (CHF/GB), CPU (CHF/SI2k)
Used lifetimes for the different types of resources:
- CPU: 3 years
- Disk: 4 years
- Tape: 5 years
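The yearly-cost slides that follow multiply the capacity added each year by these unit costs. A minimal sketch of that bookkeeping, assuming (as the quoted lifetimes suggest) that hardware eventually has to be re-purchased; the prices used below are placeholders, not the values from the original cost table:

```cpp
// Sketch of the cost bookkeeping behind the "yearly costs of added resources" plots:
// cost of a year = capacity added that year times the unit cost.
// (The quoted lifetimes would, in addition, trigger replacement purchases
//  3/4/5 years after the original ones.)  Prices here are PLACEHOLDERS.
#include <cstdio>

double yearlyCostKCHF(double addedGB, double chfPerGB) {
  return addedGB * chfPerGB / 1000.0;                         // CHF -> kCHF
}

void costExample() {
  // Hypothetical: 100 TB of disk added at a placeholder price of 5 CHF/GB
  printf("disk: %.0f kCHF\n", yearlyCostKCHF(100.0e3, 5.0));  // -> 500 kCHF
}
```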

Resources for simulation
- Number of simulated events: 20% of the real data
- Contribution of RuTier-2 to the total statistics of all MC data: 10%
- Efficiency of CPU usage: 85%
- All types of simulated data to be saved on automated tapes
- All ESD, AOD and 20% of Raw simulated data to be kept on disk permanently
- It is necessary to take into account the needs for re-simulation and the availability of several models for a single physics process (not done here)
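A minimal sketch of how these assumptions translate into the simulation CPU and storage estimates, written in the same ROOT-macro style as the RuTier-2 model itself (the variable names are illustrative, not those of the actual macro; the per-event sizes and times are the ones from the data-taking slide):

```cpp
// Sketch: RuTier-2 simulation resources for one year of data taking.
#include <cstdio>

void simResources() {
  const double nEvReal      = 2.0e9;   // real events per year (2008 and after)
  const double simFraction  = 0.20;    // simulated statistics = 20% of real data
  const double ruShare      = 0.10;    // RuTier-2 share of all ATLAS MC
  const double cpuEff       = 0.85;    // CPU usage efficiency
  const double tSimKSI2kSec = 100.0;   // simulation time, kSI2k*sec/event
  const double rawSimMB     = 2.0;     // simulated Raw event size, MB
  const double esdMB = 0.5, aodMB = 0.1;
  const double secPerYear   = 3.15e7;

  const double nEvSim   = nEvReal * simFraction * ruShare;   // events simulated at RuTier-2
  const double cpuKSI2k = nEvSim * tSimKSI2kSec / (secPerYear * cpuEff);
  // Disk keeps all simulated ESD, AOD and 20% of simulated Raw; tape keeps everything
  const double diskTB = nEvSim * (esdMB + aodMB + 0.2 * rawSimMB) / 1.0e6;
  const double tapeTB = nEvSim * (rawSimMB + esdMB + aodMB) / 1.0e6;

  printf("Sim at RuTier-2: %.0f kSI2k CPU, %.0f TB disk, %.0f TB tape per year\n",
         cpuKSI2k, diskTB, tapeTB);
}
```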

Tables with simulation resources
Simulation resources evolution (per year): RAW (TB), ESD (TB), AOD (TB), Disk (TB), Tape (TB), CPU (kSI2k)
Simulation resources yearly increase: Disk (TB), Tape (TB), CPU (kSI2k), Disk (kCHF), Tape (kCHF), CPU (kCHF), Total (kCHF)

Evolution of disk, tape and CPU (kSI2k) simulation resources at RuTier-2

Yearly costs of added disk, tape and CPU simulation resources at RuTier-2

Resources for physics analysis
- Number of active users: 50
- 5% of the total Raw data and 10% of the ESD are permanently kept on disk, to be used for algorithm development and analysis
- The volume of group DPD data is equal to 50% of the AOD data
- Volume of user data: 1 TB per user for the 2008 data alone, scaling proportionally to the number of events
- CPU power needed by one user to analyze the 2008 data alone: 15 kSI2k
- The total CPU power is proportional to the number of accumulated events and to the number of users: float CPU = Nuser*(Nev_year[i]/Nev_year[2008])*CPU_user_2008;
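A short sketch of this analysis-resource scaling in the same ROOT-macro style (illustrative names, not the actual macro; it assumes Nev_year holds the accumulated statistics, as the "accumulated events" wording above suggests):

```cpp
// Sketch: analysis CPU and user disk at RuTier-2, scaling with accumulated statistics.
#include <cstdio>

void anaResources() {
  const int    nUser          = 50;     // active users
  const double cpuUser2008    = 15.0;   // kSI2k per user for the 2008 sample
  const double userDisk2008TB = 1.0;    // TB of user data per user, 2008 sample
  // Accumulated real events: 10^9 in 2007, +2*10^9 in each following year
  const double nEvAcc[]   = {1.0e9, 3.0e9, 5.0e9, 7.0e9};   // 2007..2010
  const double nEvAcc2008 = 3.0e9;

  for (int i = 0; i < 4; ++i) {
    const double scale  = nEvAcc[i] / nEvAcc2008;
    const double cpu    = nUser * scale * cpuUser2008;      // kSI2k
    const double userTB = nUser * scale * userDisk2008TB;   // TB
    printf("year %d: analysis CPU %.0f kSI2k, user disk %.0f TB\n",
           2007 + i, cpu, userTB);
  }
}
```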

Tables with analysis resources
Analysis resources evolution (per year): RAW (TB), ESD (TB), AOD+DPD (TB), User (TB), Disk (TB), Tape (TB), CPU (kSI2k)
Analysis resources yearly increase: Disk (TB), Tape (TB), CPU (kSI2k), Disk (kCHF), Tape (kCHF), CPU (kCHF), Total (kCHF)

Evolution of disk, tape and CPU (kSI2k) analysis resources at RuTier-2

Yearly costs of added disk, tape and CPU analysis resources at RuTier-2

Evolution of disk, tape and CPU (kSI2k) total resources at RuTier-2

Yearly costs of added disk, tape and CPU total resources at RuTier-2

Summary
- RuTier-2 for ATLAS is a distributed computing centre shared with the other LHC experiments
- Expected parameters for 2008:
  - Disk: 975 TB
  - Tape: 230 TB
  - CPU: 1300 kSI2k
- It provides facilities for physics analysis of the data by 50 active users and for MC simulation of 10% of the ATLAS simulated data
- The RuTier-2 model is implemented as a ROOT macro and has a number of adjustable parameters which have to be tuned for model optimization
- Important feature: practically all resources rise linearly, proportionally to the accumulated statistics
- Commissioning: resources to be shifted partially to