Tier 1 in Dubna for CMS: plans and prospects Korenkov Vladimir LIT, JINR AIS-GRID School 2013, April 25.

Presentation transcript:

Tier 1 in Dubna for CMS: plans and prospects Korenkov Vladimir LIT, JINR AIS-GRID School 2013, April 25

Tier 0 at CERN: acquisition, first-pass reconstruction, storage & distribution. 1.25 GB/sec (ions).
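As a quick back-of-the-envelope illustration (a derived figure, not stated on the slide), the quoted heavy-ion rate can be turned into a daily data volume; the sketch below assumes sustained recording at 1.25 GB/s.

```python
# Rough data-volume estimate from the quoted Tier-0 heavy-ion rate.
# Assumption: recording is sustained at 1.25 GB/s for a full day.
ION_RATE_GB_PER_S = 1.25
SECONDS_PER_DAY = 24 * 60 * 60

daily_volume_tb = ION_RATE_GB_PER_S * SECONDS_PER_DAY / 1000.0  # GB -> TB (decimal)
print(f"~{daily_volume_tb:.0f} TB per day at a sustained {ION_RATE_GB_PER_S} GB/s")
# prints: ~108 TB per day at a sustained 1.25 GB/s
```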

Tier structure of grid distributed computing: Tier-0 / Tier-1 / Tier-2
Tier-0 (CERN):
- accepts data from the CMS Online Data Acquisition and Trigger System
- archives RAW data
- performs the first pass of reconstruction and prompt calibration
- distributes data to the Tier-1 centres
Tier-1 (11 centres):
- receives data from the Tier-0
- data processing (re-reconstruction, skimming, calibration, etc.)
- distributes data and MC to the other Tier-1 and Tier-2 centres
- secure storage and redistribution of data and MC
Tier-2 (>200 centres):
- simulation
- user physics analysis
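As a compact reading aid (an illustration only, not an actual WLCG or CMS configuration), the tier roles listed above can be written down as a small data structure:

```python
# Illustrative summary of the CMS tier roles listed on the slide
# (a reading aid only, not a real WLCG configuration).
TIER_ROLES = {
    "Tier-0 (CERN)": [
        "accept data from the CMS online DAQ and trigger system",
        "archive RAW data",
        "first-pass reconstruction and prompt calibration",
        "distribute data to Tier-1 centres",
    ],
    "Tier-1 (11 centres)": [
        "receive data from Tier-0",
        "re-reconstruction, skimming, calibration",
        "distribute data and MC to other Tier-1 and Tier-2 centres",
        "secure storage and redistribution of data and MC",
    ],
    "Tier-2 (>200 centres)": [
        "simulation",
        "user physics analysis",
    ],
}

for tier, tasks in TIER_ROLES.items():
    print(f"{tier}: " + "; ".join(tasks))
```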

Wigner Data Centre, Budapest (slide from I. Bird's (CERN, WLCG) presentation at GRID2012 in Dubna)
- New facility, due to be ready at the end of 2012
- 725 m² in an existing building, but new infrastructure
- 2 independent HV lines
- Full UPS and diesel coverage for all IT load (and cooling)
- Maximum 2.7 MW

WLCG grid sites (Tier 0 / Tier 1 / Tier 2) today: >150 sites, >300k CPU cores, >250 PB disk.

Russian Data Intensive Grid infrastructure (RDIG)
The Russian consortium RDIG (Russian Data Intensive Grid) was set up in September 2003 as a national federation in the EGEE project. The RDIG infrastructure now comprises 17 Resource Centres with > kSI2K CPU and > 4500 TB of disk storage.
RDIG Resource Centres:
- ITEP
- JINR-LCG2 (Dubna)
- RRC-KI
- RU-Moscow-KIAM
- RU-Phys-SPbSU
- RU-Protvino-IHEP
- RU-SPbSU
- Ru-Troitsk-INR
- ru-IMPB-LCG2
- ru-Moscow-FIAN
- ru-Moscow-MEPHI
- ru-PNPI-LCG2 (Gatchina)
- ru-Moscow-SINP
- Kharkov-KIPT (UA)
- BY-NCPHEP (Minsk)
- UA-KNU

Country normalized CPU time and jobs:
- Normalized CPU time: all countries 19,416,532,244; Russia 410,317,672 (2.12%)
- Jobs: all countries 726,441,731; Russia 23,541,182 (3.24%)

Country normalized CPU time per VO (chart).

Russia normalized CPU time per site and VO:
- All VOs: Russia 409,249,900; JINR 183,008,044
- CMS: Russia 112,025,416; JINR 67,938,700 (61%)
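The quoted percentages can be reproduced from the accounting totals shown on the two slides above; a minimal check, using only those numbers, is sketched below.

```python
# Reproduce the quoted shares from the accounting totals on the slides above.
all_countries_cpu = 19_416_532_244   # normalized CPU time, all countries
russia_cpu        = 410_317_672      # normalized CPU time, Russia
cms_russia_cpu    = 112_025_416      # CMS normalized CPU time, Russia
cms_jinr_cpu      = 67_938_700       # CMS normalized CPU time, JINR

print(f"Russia / all countries: {100 * russia_cpu / all_countries_cpu:.2f}%")  # ~2.1%
print(f"JINR / Russia for CMS:  {100 * cms_jinr_cpu / cms_russia_cpu:.0f}%")   # ~61%
```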

Frames for Grid cooperation with CERN
- 2001: EU DataGrid
- Worldwide LHC Computing Grid (WLCG)
- 2004: Enabling Grids for E-sciencE (EGEE)
- EGI-InSPIRE
- CERN-RFBR project "Grid Monitoring from VO perspective": collaboration in the area of WLCG monitoring
WLCG today includes more than 170 computing centres, where more than 2 million jobs are executed daily and petabytes of data are transferred between sites. Monitoring of the LHC computing activities and of the health and performance of the distributed sites and services is vital to the success of LHC data processing.
Monitoring activities include the WLCG Transfer Dashboard, monitoring of the XRootD federations, the WLCG Google Earth Dashboard, and the Tier-3 monitoring toolkit.

JINR-LCG2 Tier-2 site
- Provides the largest share of the Russian Data Intensive Grid (RDIG) contribution to the global WLCG/EGEE/EGI grid infrastructure: JINR secured 46% of the overall RDIG computing time contributed to LHC tasks.
- During 2012, CICC ran more than 7.4 million jobs, with the overall CPU time exceeding 152 million hours (in HEPSpec06 units).
- Presently, the CICC computing cluster comprises 2582 64-bit cores and a data storage system of 1800 TB total capacity.
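For orientation (a derived estimate, not a number quoted on the slide), the 2012 totals above imply an average of roughly 20 HEPSpec06-hours of CPU per job:

```python
# Average CPU time per job, derived from the 2012 totals quoted above.
jobs_2012 = 7_400_000          # jobs run by CICC during 2012 (more than)
cpu_hours_2012 = 152_000_000   # total CPU time in HEPSpec06 hours (exceeding)

print(f"~{cpu_hours_2012 / jobs_2012:.1f} HEPSpec06-hours per job on average")  # ~20.5
```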

WLCG Tier-1 centre in Russia
- Proposal to create an LCG Tier-1 centre in Russia (an official letter from the Minister of Science and Education of Russia, A. Fursenko, was sent to CERN DG R. Heuer in March 2011).
- A corresponding point to be included in the agenda of the next 5x5 Russia-CERN meeting (October 2011):
  - for all four experiments: ALICE, ATLAS, CMS and LHCb
  - ~10% of the total Tier-1 resources (excluding CERN)
  - increase by 30% each year
  - draft planning (proposal under discussion): a prototype by the end of 2012 and full resources in 2014, to meet the start of the next working LHC session
- Discussion of a distributed Tier-1 in Russia for LHC and FAIR

Joint NRC "Kurchatov Institute" – JINR Tier-1 Computing Centre
Project: «Creation of the automated system of data processing for experiments at the Large Hadron Collider (LHC) of Tier-1 level and maintenance of Grid services for a distributed analysis of these data»
- Terms:
- Type of project: R&D
- Cost: federal budget million rubles (~8.5 MCHF); extrabudgetary sources: 50% of the total cost
- Leading executor: RRC KI «Kurchatov Institute», for ALICE, ATLAS and LHCb
- Co-executor: LIT JINR (Dubna), for the CMS experiment
Project goal: creation in Russia of a computer-based system for processing experimental data received at the LHC and provision of Grid services for a subsequent analysis of these data at the distributed centres of the LHC global Grid system.
Core of the proposal: development and creation of a working prototype of the first-level (Tier-1) centre for data processing within the LHC experiments, with a resource volume of not less than 15% of the required one and a full set of Grid services for a subsequent distributed analysis of these data.

The Core of LHC Networking: LHCOPN and Partners

JINR CMS Tier-1 progress
- Disk & server installation and tests: done
- Tape system installation: done
- Organization of network infrastructure and connectivity to CERN via GEANT: done
- Registration in GOC DB and APEL: done
- Tests of WLCG services via Nagios: done
2012 resources (done): CPU (HEPSpec06), number of cores, disk (terabytes), tape (terabytes).
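The service tests listed above were run with Nagios; purely as an illustration of the idea, the sketch below probes whether a set of endpoints accepts TCP connections. The host names and ports are placeholders, not actual JINR Tier-1 endpoints, and this is not the Nagios configuration that was used.

```python
# Minimal illustration of a service connectivity probe (the real tests used Nagios).
# Host names and ports are placeholders, not actual JINR Tier-1 endpoints.
import socket

ENDPOINTS = {
    "Computing Element": ("ce.example.org", 8443),
    "Storage Element":   ("se.example.org", 8444),
    "Site BDII":         ("bdii.example.org", 2170),
}

def is_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, (host, port) in ENDPOINTS.items():
    status = "OK" if is_reachable(host, port) else "UNREACHABLE"
    print(f"{name:17s} {host}:{port} {status}")
```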

CMS-specific activity
Currently commissioning the Tier-1 resource for CMS:
- local tests of CMS VO services and CMS software
- the PhEDEx LoadTest (tests of data transfer links)
- Job Robot tests (or tests via HammerCloud)
- long-running CPU-intensive jobs
- long-running I/O-intensive jobs
PhEDEx transferred RAW input data to our storage element with a transfer efficiency of around 90%. Services and data storage were prepared for the TeV data reprocessing.
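The ~90% figure above is, in essence, the ratio of successful to attempted transfers; the minimal bookkeeping below illustrates this with invented numbers, not actual PhEDEx LoadTest output.

```python
# Transfer-efficiency bookkeeping in miniature
# (invented numbers, not actual PhEDEx LoadTest output).
attempted_transfers = 1000
failed_transfers = 100

efficiency = (attempted_transfers - failed_transfers) / attempted_transfers
print(f"transfer efficiency: {efficiency:.0%}")  # -> transfer efficiency: 90%
```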

Services
- Security (GSI)
- Computing Element (CE)
- Storage Element (SE)
- Monitoring and accounting
- Virtual Organizations (VOMS)
- Workload Management (WMS)
- Information Service (BDII)
- File Transfer Service (FTS + PhEDEx)
- SQUID server
- CMS user services (reconstruction services, analysis services, etc.)

Milestones of the JINR CMS Tier-1 deployment and commissioning (objective: target date)
- Presentation of the Execution Plan to the WLCG OB: Sep 2012
- Prototype disk & server installation and tests: Oct 2012
- Tape system installation: Nov 2012
- Organization of network infrastructure and connectivity to CERN via GEANT (2 Gb): Nov 2012
- WLCG OPN integration (2 Gb) and JINR-T1 registration in GOCDB, including integration with the APEL accounting system: Dec 2012
- M1: Dec 2012
- LHC OPN functional tests (2 Gb): May 2013
- Test of WLCG and CMS services (2 Gb LHCOPN): May 2013
- Test of tape system at JINR: data transfers from CERN to JINR (using 2 Gb LHC OPN): May 2013
- Test of publishing accounting data: May 2013
- Definition of Tier-2 sites support: May 2013
- Connectivity to CERN at 10 Gb: Jul 2013
- M2: Jul 2013
- LHC OPN functional tests (10 Gb): Aug 2013
- Test of tape system at JINR: data transfers from CERN to JINR (using 10 Gb LHC OPN): Aug 2013
- Upgrade of tape, disk and CPU capacity at JINR: Nov 2013
- M3: Nov 2013 (100% of the job capacity running for at least 2 months; storage availability > 98% (functional tests) for at least 2 months; running with > 98% availabilities & reliabilities for at least 30 days)
- WLCG MoU as an associate Tier-1 centre: Feb 2014
- Disk & tape & server upgrade: Oct 2014
- M4: Dec 2014
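The M3 availability and reliability criteria follow the usual WLCG convention, where availability is uptime over total time and reliability discounts scheduled downtime; the sketch below shows that computation with invented monthly figures.

```python
# Availability / reliability in the usual WLCG convention (invented monthly figures):
#   availability = uptime / total time
#   reliability  = uptime / (total time - scheduled downtime)
hours_in_month = 30 * 24
uptime_hours = 712.0
scheduled_downtime_hours = 6.0

availability = uptime_hours / hours_in_month
reliability = uptime_hours / (hours_in_month - scheduled_downtime_hours)

print(f"availability: {availability:.1%}")  # ~98.9%
print(f"reliability:  {reliability:.1%}")   # ~99.7%
```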

WLCG Tier-1 sites (map, 26 June 2009): Lyon/CC-IN2P3, Barcelona/PIC, DE-FZK, US-FNAL, CA-TRIUMF, NDGF, CERN, US-BNL, UK-RAL, Taipei/ASGC, Amsterdam/NIKHEF-SARA, Bologna/CNAF; plus Russia: NRC KI and JINR.

Staffing (role: FTE)
- Administrative: 1.5
- Network support: 2
- Engineering infrastructure: 2.5
- Hardware support: 3
- Core software and WLCG middleware: 4.5
- CMS services: 3.5
- Total: 17
Korenkov V., Dolbilov A., Shmatov S., Trofimov V., Mitsyn V.