RDMS CMS computing activities to satisfy LHC data processing and analysis scenario

V. Gavrilov 1, I. Golutvin 2, V. Ilyin 3, O. Kodolova 3, V. Korenkov 2, E. Tikhonenko 2, S. Shmatov 2, V. Zhiltsov 2
1 - Institute of Theoretical and Experimental Physics, Moscow, Russia
2 - Joint Institute for Nuclear Research, Dubna, Russia
3 - Skobeltsyn Institute of Nuclear Physics, Moscow, Russia

NEC'2009, Varna, Bulgaria, September 7-14, 2009
Composition of the RDMS CMS Collaboration

The RDMS CMS Collaboration was founded in Dubna in September 1994. RDMS stands for Russia and Dubna Member States CMS Collaboration.

Russian Federation:
Institute for High Energy Physics, Protvino
Institute for Theoretical and Experimental Physics, Moscow
Institute for Nuclear Research, RAS, Moscow
Moscow State University, Institute for Nuclear Physics, Moscow
Petersburg Nuclear Physics Institute, RAS, St. Petersburg
P.N. Lebedev Physical Institute, Moscow

Associated members:
High Temperature Technology Center of Research & Development Institute of Power Engineering, Moscow
Russian Federal Nuclear Centre - Scientific Research Institute for Technical Physics, Snezhinsk
Myasishchev Design Bureau, Zhukovsky
Electron, National Research Institute, St. Petersburg

Georgia:
High Energy Physics Institute, Tbilisi State University, Tbilisi
Institute of Physics, Academy of Science, Tbilisi

Ukraine:
Institute of Single Crystals of National Academy of Science, Kharkov
National Scientific Center, Kharkov Institute of Physics and Technology, Kharkov
Kharkov State University, Kharkov

Uzbekistan:
Institute for Nuclear Physics, UAS, Tashkent

Dubna Member States:
Armenia: Yerevan Physics Institute, Yerevan
Belarus: Byelorussian State University, Minsk; Research Institute for Nuclear Problems, Minsk; National Centre for Particle and High Energy Physics, Minsk; Research Institute for Applied Physical Problems, Minsk
Bulgaria: Institute for Nuclear Research and Nuclear Energy, BAS, Sofia; University of Sofia, Sofia

JINR: Joint Institute for Nuclear Research, Dubna
RDMS Participation in CMS Construction (detector map): full RDMS responsibility for ME1/1 and HE; RDMS participation in SE, ME, EE, FS and HF.
RDMS Participation in CMS Project

Full responsibility, including management, design, construction, installation, commissioning, maintenance and operation, for:
- Endcap Hadron Calorimeter (HE)
- 1st Forward Muon Station (ME1/1)

Participation in:
- Forward Hadron Calorimeter (HF)
- Endcap ECAL (EE)
- Endcap Preshower (SE)
- Endcap Muon System (ME)
- Forward Shielding (FS)
RDMS activities in CMS

- Design, production and installation
- Calibration and alignment
- Reconstruction algorithms
- Data processing and analysis
- Monte Carlo simulation

(Illustration: simulated H (150 GeV) → Z0 Z0 → 4ℓ event)
LHC Computing Model

Tier-0 (CERN):
- filter raw data
- reconstruct summary data (ESD)
- record raw data and ESD
- distribute raw data and ESD to Tier-1s

Tier-1 (RAL, IN2P3, BNL, FZK, CNAF, PIC, ICEPP, FNAL, TRIUMF, ...):
- permanent storage and management of raw, ESD, calibration data, meta-data, analysis data and databases
- grid-enabled data service
- data-heavy analysis
- re-processing raw → ESD
- ESD → AOD selection
- national and regional support

Tier-2 (PNPI, NIKHEF, Minsk, Kharkov, Rome, IHEP, CSCS, Legnaro, ITEP, JINR, IC, MSU, Prague, Budapest, Cambridge, Santiago, Weizmann, ...):
- simulation
- digitization and calibration of simulated data
- end-user analysis

Below Tier-2: small centres, desktops and portables.
Tier-0 - Tier-1 - Tier-2

Tier-0 (CERN): data recording; initial data reconstruction; data distribution
Tier-1 (11 centres): permanent storage; re-processing; analysis
Tier-2 (>200 centres): simulation; end-user analysis
RDMS CMS computing structure (RDIG sites)
RDMS CMS T2 association (current and future interest)

Analysis groups:
- Exotica: T2_RU_JINR
- Exotica: T2_RU_INR
- HI: T2_RU_SINP
- QCD: T2_RU_PNPI
- Top: T2_RU_SINP
- FWD: T2_RU_IHEP

Object/performance groups:
- Muon: T2_RU_JINR
- e-gamma/ECAL: T2_RU_INR
- JetMET/HCAL: T2_RU_ITEP
CMS T2 requirements

Basic requirements for CMS VO T2 sites hosting a physics group:
a) information on contact persons responsible for site operation
b) site visibility (BDII)
c) current CMSSW version available
d) regular file-transfer test "OK"
e) certified links with CMS T1s: 2 up and 4 down
f) CMS Job Robot test "OK"
g) disk space of ~150-200 TB:
   - central space (~30 TB)
   - analysis space (~60-90 TB)
   - MC space (~20 TB)
   - local space (~30-60 TB)
   - local CMS user space (~1 TB per user)
h) CPU resources: ~3 kSI2k per 1 TB of disk space, 2 GB memory per job
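The disk budget in item g) can be sanity-checked with a short sketch; the user count below is a hypothetical example (the slide only gives ~1 TB per user), and the CPU line applies the rule of thumb from item h).

```python
# Minimal sketch of the per-site T2 disk/CPU budget from the
# requirements above. n_users is a hypothetical assumption.
central, mc = 30, 20   # TB: central space, MC space
analysis = (60, 90)    # TB: analysis space range
local = (30, 60)       # TB: local space range
n_users = 10           # hypothetical: ~1 TB per local CMS user

disk_min = central + mc + analysis[0] + local[0] + n_users
disk_max = central + mc + analysis[1] + local[1] + n_users
print(f"disk: {disk_min}-{disk_max} TB")            # 150-210 TB

# Rule of thumb from item h): ~3 kSI2k of CPU per TB of disk
print(f"cpu: {3 * disk_min}-{3 * disk_max} kSI2k")  # 450-630 kSI2k
```

With these assumed numbers the total lands in the ~150-200 TB range the slide quotes.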
T2 readiness requirements

- Site visibility and CMS VO support
- Availability of disk and CPU resources
- Daily SAM availability > 80%
- Daily JR-MM efficiency > 80%
- Commissioned links TO Tier-1 sites ≥ 2
- Commissioned links FROM Tier-1 sites ≥ 4
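The measurable criteria above can be expressed as a single predicate; the `Site` record and the sample values below are hypothetical, not part of the official CMS tooling.

```python
# Sketch: the T2 readiness criteria above as one predicate.
# The Site record and sample values are hypothetical.
from dataclasses import dataclass

@dataclass
class Site:
    sam_availability: float  # daily SAM availability (fraction)
    jr_mm_efficiency: float  # daily JR-MM efficiency (fraction)
    links_to_t1: int         # commissioned links TO Tier-1 sites
    links_from_t1: int       # commissioned links FROM Tier-1 sites

def is_ready(s: Site) -> bool:
    return (s.sam_availability > 0.80
            and s.jr_mm_efficiency > 0.80
            and s.links_to_t1 >= 2
            and s.links_from_t1 >= 4)

print(is_ready(Site(0.92, 0.88, 2, 5)))  # True
print(is_ready(Site(0.92, 0.88, 1, 5)))  # False: only one up link
```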
RDMS CMS connectivity

Moscow (ITEP, SINP MSU, LPI): 1 Gbps
IHEP (Protvino): 1 Gbps
JINR (Dubna): 20 Gbps
INR RAS (Troitsk): 1 Gbps
PNPI (Gatchina): 1 Gbps
KIPT (Kharkov): 1 Gbps
SCPPHE (Minsk): 34 Mbps
ErPI (Erevan): 34 Mbps
IHEP (Tbilisi): 34 Mbps
INRNE (Sofia): 100 Mbps
CMS T1 - RU T2 link status

RU T2     Up links  Down links
IHEP      2         3
INR       1         3
ITEP      2         5
JINR      2         5
PNPI      0         0
RRC KI    2         5
SINP      2         8
KIPT      2         7
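Checking each site's link counts against the commissioning thresholds from the readiness criteria (≥ 2 links to Tier-1s, ≥ 4 from Tier-1s) can be sketched as follows, with the counts taken from the status table above:

```python
# Sketch: which RU/UA T2 sites meet the link-commissioning
# thresholds (>= 2 up links, >= 4 down links)?
REQ_UP, REQ_DOWN = 2, 4

links = {  # site: (up links, down links), from the table above
    "IHEP": (2, 3), "INR": (1, 3), "ITEP": (2, 5), "JINR": (2, 5),
    "PNPI": (0, 0), "RRC KI": (2, 5), "SINP": (2, 8), "KIPT": (2, 7),
}

ok = [s for s, (u, d) in links.items() if u >= REQ_UP and d >= REQ_DOWN]
print(ok)  # ['ITEP', 'JINR', 'RRC KI', 'SINP', 'KIPT']
```

This matches the later slides: ITEP, JINR, SINP and KIPT are declared ready, RRC KI has certified links but operational issues, while IHEP, INR and PNPI still need link certification.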
Available resources

RU T2     Disk (TB)  Used (TB)  Job slots
IHEP      8          7          36
INR       46         8          75
ITEP      80         26         99
JINR      197        40         270
PNPI      10         4          40
RRC KI    161        76         174
SINP      124        59         103
KIPT      50         10         68
RDMS CMS T2 readiness

T2_RU_ITEP: Ready
T2_RU_SINP: Ready
T2_RU_JINR: Ready
T2_UA_KIPT: Ready
CMS computing in 2009

- Computing scale test (together with ATLAS): May-June 2009
- Cosmic-run data processing and analysis: July-September 2009
- Large MC sample production: starting in July 2009
- LHC data processing and analysis: starting in October 2009
STEP'09 results

- Test of data transfers from CMS T1s to T2s
- RU_SINP, RU_JINR and RU_ITEP participated
- High transfer rate and quality were achieved (SINP maximum: 101 MB/s)
(Transfer plot: CMS T1-CH-CERN)
Request for RDMS CMS T2s upgrade

CMS request to upgrade by January 2010:
- Total disk space: up to 1300 TB
- Total CPU: up to 4500 kSI2k (~1800 job slots)

First-priority tasks:
- complete T1-T2 link certification for INR, IHEP, PNPI
- improve stability of operation ("availability" and "readiness")
- full test of MC production and analysis jobs running in parallel
- increase disk space at each T2 up to 150 TB
- increase the number of CMS job slots at each T2 up to 200
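A rough sanity check, assuming all eight RDMS T2 sites reach the per-site goals stated above (150 TB of disk and 200 job slots each), shows the per-site targets are consistent with the collaboration-wide totals:

```python
# Sketch: compare per-site upgrade goals with the requested totals.
n_sites = 8                      # RDMS T2 sites in the tables above
total_disk = n_sites * 150       # TB, if every site reaches 150 TB
total_slots = n_sites * 200      # if every site reaches 200 slots
print(total_disk, total_slots)   # 1200 1600

# The collaboration-wide request implies ~2.5 kSI2k per job slot:
print(4500 / 1800)               # 2.5
```

The computed 1200 TB and 1600 slots sit just below the requested totals of 1300 TB and ~1800 slots, leaving some headroom at the larger sites.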
Summary

- ITEP, JINR, SINP and UA_KIPT are in a stable state.
- RRC_KI: all required software is installed and the links are certified, but the site is not in a stable state.
- INR: not all required links are certified; to be accomplished within a month or sooner.
- PNPI: a 1 Gbps external channel is now installed; link certification is in progress.
- IHEP: a 1 Gbps external channel is now installed; link certification is in progress.
- ITEP, JINR and SINP host group space for MUON, JetMET/HCAL, HI and Exotica, so the main effort went into certifying links to and from these institutes.