LHCb computing in Russia

LHCb computing in Russia Ivan Korolko (ITEP Moscow) Russia-CERN JWGC, March 2005

Russian participation in LHCb
Institutes: IHEP (Protvino), INP (Novosibirsk), INR (Troitsk), ITEP (Moscow), PNPI (St. Petersburg)
Detector contributions: RICH mirrors, HCAL, SPD and Preshower, ECAL, Muon System

History of LHCb DCs in Russia
2002: 130K events, 1% contribution, only one centre (ITEP)
2003: 1.3M events, 3% contribution, all 4 of our centres (IHEP, ITEP, JINR, MSU)
2004: 9.0M events, 5% contribution, started to use LCG
2005: … PNPI and INR are joining …

2004 DC Phase 1 Statistics

What to “compute” in Russia? There are 2 main tasks:
1. Provide facilities for LHC data analysis in all participating Russian institutes.
2. Satisfy collaboration needs.

LHCb computing model
HLT output: 2000 Hz
For details see LHCb 2004-119

LHCb computing model
1. RAW data → 2x10^10 events/year
   25 kB/event, 500 TB/year
   1 copy stored at CERN and 1 copy in Tier1 centres
2. Reconstruction → 2.4 kSI2k.s/event
   2 times per year → 7 months and 2 months
   CPU needs: 12 MSI2k
   storage: 500 TB (rDST)
   1 copy stored across CERN and Tier1 centres
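
A back-of-envelope check of these numbers may be useful. The short Python sketch below is illustrative only (the variable names are ours, and no CPU-efficiency or scheduling factors are applied), so it only approximates the 12 MSI2k quoted above for reconstruction.

# Back-of-envelope check of the RAW-data and reconstruction figures above.
# Illustrative sketch only: no efficiency or scheduling factors are applied.

events_per_year = 2e10        # RAW events/year
raw_event_size_kb = 25        # kB/event
reco_cost_ksi2k_s = 2.4       # kSI2k.s/event

# RAW data volume: 2e10 * 25 kB = 5e11 kB = 500 TB/year
raw_volume_tb = events_per_year * raw_event_size_kb / 1e9
print(f"RAW volume: {raw_volume_tb:.0f} TB/year")

# CPU power to reprocess one full year of data in the 2-month pass
two_months_s = 2 * 30 * 86400
reco_power_msi2k = events_per_year * reco_cost_ksi2k_s / two_months_s / 1e3
print(f"2-month reprocessing: ~{reco_power_msi2k:.0f} MSI2k")   # ~9 MSI2k before efficiency factors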

LHCb computing model
3. Stripping → 2.1x10^9 events/year
   50-100 kB/event, 139 TB/year (DST)
   4 times per year (1 month)
   stored at CERN, Tier1 and some Tier2 centres
4. Analysis → 0.3 kSI2k.s/event
   140 physicists, 2 jobs/week, 3x10^6 events
   CPU needs: 0.8 + 0.8x(n-1) MSI2k
   storage: 200 TB
   run at CERN, Tier1 and some Tier2 centres
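
The per-physicist numbers translate into a sustained CPU load as in the sketch below (again illustrative, with names we have chosen). It gives about 0.4 MSI2k, roughly half the 0.8 MSI2k quoted above for the first year; the difference is presumably peak-load and efficiency factors that are not modelled here.

# Rough sustained analysis load from the per-physicist numbers above.
# Illustration only; peak-load and efficiency factors are not included.

physicists = 140
jobs_per_week = 2
events_per_job = 3e6
analysis_cost_ksi2k_s = 0.3   # kSI2k.s/event

week_s = 7 * 86400
sustained_msi2k = (physicists * jobs_per_week * events_per_job
                   * analysis_cost_ksi2k_s) / week_s / 1e3
print(f"sustained analysis load: ~{sustained_msi2k:.1f} MSI2k")   # ~0.4 MSI2k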

LHCb computing model
5. MC production → 4x10^9 events/year
   ~50 kSI2k.s/event (8 MSI2k.years)
   400 kB/event, 160 TB to store only triggered events (4x10^8)
   produced at Tier2 centres
   stored at CERN and Tier1 centres
LHCb trigger differs significantly from ATLAS and CMS
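
A quick check (illustrative sketch) shows how the 160 TB figure follows from the event size and the number of triggered events, and how the per-event simulation cost translates into MSI2k.years; the ~6 MSI2k.years obtained here is consistent with the quoted 8 MSI2k.years once a CPU-efficiency factor of order 80% is assumed.

# Quick check of the MC-production numbers above (illustrative only).

mc_events_per_year = 4e9
triggered_events = 4e8        # only triggered events are stored
mc_event_size_kb = 400
mc_cost_ksi2k_s = 50          # ~kSI2k.s/event

# Storage: 4e8 * 400 kB = 1.6e11 kB = 160 TB
mc_storage_tb = triggered_events * mc_event_size_kb / 1e9
print(f"MC storage: {mc_storage_tb:.0f} TB")

# CPU: 4e9 events * 50 kSI2k.s spread over one calendar year
year_s = 365 * 86400
mc_cpu_msi2k_years = mc_events_per_year * mc_cost_ksi2k_s / year_s / 1e3
print(f"MC CPU: ~{mc_cpu_msi2k_years:.1f} MSI2k.years")   # ~6.3, ~8 with efficiency factors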

Computing in Russia
1. Store stripped data (DST)
   2 most recent copies for the current year
   1 copy for all previous years
   storage: 280 + 140x(n-1) TB
2. Run analysis jobs (~15% of LHCb)
   CPU → 0.1 + 0.1x(n-1) MSI2k.years
   storage: 30 + 30x(n-1) TB
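
These figures follow from the collaboration-wide numbers on the previous slides. The sketch below (illustrative, with n taken to be the year of LHC running, which is our reading of the notation) derives the storage formula from the 139 TB/year DST volume and scales the LHCb-wide analysis load by the ~15% Russian share.

# Derivation of the Russian storage/CPU formulas from the LHCb-wide numbers
# (illustrative sketch; n = year of LHC running is an assumption).

DST_PER_YEAR_TB = 139         # stripped DST volume per year (see above)
RUSSIAN_SHARE = 0.15          # ~15% of the LHCb analysis load

def stripped_storage_tb(n):
    # 2 copies of the current year + 1 copy of each previous year
    return 2 * DST_PER_YEAR_TB + DST_PER_YEAR_TB * (n - 1)

def analysis_cpu_msi2k(n):
    # 15% of the LHCb-wide analysis need, 0.8 + 0.8x(n-1) MSI2k
    return RUSSIAN_SHARE * (0.8 + 0.8 * (n - 1))

print(stripped_storage_tb(1), stripped_storage_tb(2))   # 278, 417 ≈ 280 + 140x(n-1)
print(round(analysis_cpu_msi2k(1), 2))                  # 0.12 ≈ 0.1 MSI2k.years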

Computing in Russia
3. MC production (~10% of LHCb)
   CPU → 0.8 MSI2k.years
   storage: 3 TB
4. Calibration of detectors and reconstruction algorithms
   CPU → 0.1 MSI2k.years
   storage: 30 TB
   (hard to estimate now; taken as double the analysis needs)

Computing in Russia
Cluster of Tier2 centres in Russia
   partial Tier1 functionality (DST, analysis)
LHCb requirements:
   CPU → 1.0 + 0.1x(n-1) MSI2k.years
   storage → 350 + 140x(n-1) TB
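
The two formulas above scale with n because stripped data and analysis output accumulate year by year. A minimal sketch (names are ours) tabulates the request for the first few years of running.

# Russian Tier2-cluster request as a function of n = year of LHC running
# (minimal sketch of the two scaling formulas above).

def cpu_msi2k_years(n):
    return 1.0 + 0.1 * (n - 1)

def storage_tb(n):
    return 350 + 140 * (n - 1)

for n in range(1, 5):
    print(f"year {n}: CPU {cpu_msi2k_years(n):.1f} MSI2k.years, "
          f"storage {storage_tb(n):.0f} TB")
# year 1: 1.0 MSI2k.years, 350 TB; year 2: 1.1 MSI2k.years, 490 TB; ...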

Contingency
Unlike, for example, CMS, LHCb has no contingency beyond a common experiment efficiency factor.