The Tier-1 center for the CMS experiment at LIT JINR. N. S. Astakhov, A. S. Baginyan, S. D. Belov, A. G. Dolbilov, A. O. Golunov, I. N. Gorbunov, N. I. Gromova, I. Kadochnikov, I. A. Kashunin, V. V. Korenkov, V. V. Mitsyn, S. V. Shmatov, T. A. Strizh, E. A. Tikhonenko, V. V. Trofimov, N. N. Voitishin, V. E. Zhiltsov. Joint Institute for Nuclear Research. NEC2015, 2 Oct. 2015, Becici, Montenegro.

Grid technologies - a way to success. At the celebration dedicated to the Nobel Prize awarded for the discovery of the Higgs boson, CERN Director-General Professor Rolf-Dieter Heuer called grid technologies one of the three pillars of this success (alongside the LHC accelerator and the experimental installations). Without the grid infrastructure deployed for the LHC it would be impossible to process and store the enormous volume of data coming from the collider, and therefore to make discoveries. Nowadays, any large-scale project of this kind would fail without a distributed infrastructure for data processing.

The project of launching Tier-1 at JINR. In November 2011, at the suggestion of A. A. Fursenko, a session of the Committee on Russia-CERN cooperation adopted a decision on the creation in Russia of a Tier-1 center for the LHC experiments on the basis of NRC "Kurchatov Institute" and JINR. On September 28, 2012, a session of the Supervisory Council of the WLCG project approved the work schedule for the creation of Tier-1 in Russia. First stage (December 2012): creation of a Tier-1 prototype at NRC KI and at JINR. Second stage (November 2013): installation of the equipment for the base Tier-1 center, its testing and tuning up to the required functional characteristics. Third stage (March 2015): completion of the complex and commissioning of a full-scale Tier-1 center for CMS at JINR. Subsequently, the computing and storage capacity will be increased systematically in accordance with the experiment requirements.

LHC Computing Model. Tier-0 (CERN): data recording, initial data reconstruction, data distribution. Tier-1 (11 → 14 centres, among them Dubna, JINR): permanent storage, re-processing, analysis, simulation. Tier-2 (>200 centres): simulation, end-user analysis.
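Purely as an illustrative sketch (the role lists come from the slide; the data structure and script are my own and are not part of any WLCG software), the tier responsibilities can be captured as a small Python mapping:

    # Illustrative sketch only: WLCG tier responsibilities as listed on the slide.
    TIER_ROLES = {
        "Tier-0 (CERN)": ["data recording", "initial data reconstruction", "data distribution"],
        "Tier-1 (11 -> 14 centres)": ["permanent storage", "re-processing", "analysis", "simulation"],
        "Tier-2 (>200 centres)": ["simulation", "end-user analysis"],
    }

    if __name__ == "__main__":
        for tier, roles in TIER_ROLES.items():
            print(f"{tier}: {', '.join(roles)}")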

JINR Computing Centre Status (overview diagram): Tier-1; Tier-2 + local computing; HybriLIT; cloud; GridEdu; AIS 1C; IBM tape robot; network equipment; UPS.

Creation of CMS Tier-1 in JINR:
- engineering infrastructure (a system of uninterrupted power supply, climate control);
- high-speed, reliable network infrastructure with a dedicated redundant data link to CERN (LHCOPN);
- computing system and storage system based on disk arrays and high-capacity tape libraries;
- 100% reliability and availability (how these metrics are usually defined is noted below).
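For reference, the availability and reliability figures quoted for WLCG sites are normally derived from monitored uptime over a reporting period; a common formulation (my summary of the usual WLCG convention, not taken from the slides) is:

    Availability = T_up / T_total
    Reliability  = T_up / (T_total - T_scheduled_downtime)

where T_up is the time the site passes the functional tests and T_total is the whole reporting period, so scheduled downtime is excluded only from the reliability denominator.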

Tier-1 Components, March 2015:
- dedicated LHCOPN link;
- computing elements: 2400 cores (~30 kHS06);
- tape robot: 5 PB of tape (IBM TS3500);
- disk storage: 2.4 PB;
- cooling system: close-coupled, chilled-water InRow cooling with a hot and cold air containment system;
- uninterrupted power supply system: MGE Galaxy 7000, 2 x 300 kW, an energy-efficient three-phase power-protection solution with high adaptability.
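As a back-of-envelope cross-check (my own arithmetic, not stated on the slide), the quoted capacity corresponds to

    30 000 HS06 / 2400 cores ≈ 12.5 HS06 per core,

which is in line with typical per-core HEP-SPEC06 values for server CPUs of that period.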

JINR Tier-1 monitoring system. The JINR Tier-1 monitoring system provides real-time information about:
- worker nodes;
- disk servers;
- network equipment;
- uninterruptible power supply elements;
- the cooling system.
It can also be used for creating network maps and network-equipment load maps, and for drawing state tables and various plots (an illustrative sketch of such a state table follows below).
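Purely as an illustrative sketch of how per-component health checks could be aggregated into a state table (all component names, metrics and thresholds here are hypothetical; the slide does not describe how the actual JINR monitoring system is implemented):

    # Hypothetical sketch of a per-component health-check aggregator.
    # Names, metrics and thresholds are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class Check:
        component: str   # e.g. "worker node wn-042"
        metric: str      # e.g. "load average"
        value: float
        warn: float      # value at or above which the state is WARNING
        crit: float      # value at or above which the state is CRITICAL

        def state(self) -> str:
            if self.value >= self.crit:
                return "CRITICAL"
            if self.value >= self.warn:
                return "WARNING"
            return "OK"

    def state_table(checks):
        """Render a simple plain-text state table, one line per check."""
        return "\n".join(
            f"{c.component:<24} {c.metric:<18} {c.value:>8.1f}  {c.state()}"
            for c in checks
        )

    if __name__ == "__main__":
        demo = [
            Check("worker node wn-042", "load average", 3.2, warn=8.0, crit=12.0),
            Check("disk server ds-07", "inlet temperature", 41.0, warn=38.0, crit=45.0),
            Check("UPS module ups-1", "output load %", 35.0, warn=80.0, crit=95.0),
            Check("cooling unit ac-3", "water temperature", 14.0, warn=18.0, crit=22.0),
        ]
        print(state_table(demo))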

Development perspectives of the JINR Tier-1 monitoring system (figure).

JINR Tier-1 Connectivity Scheme (figure).

Jobs over the last month (plot).

Pending and running jobs over the last month (plot).

Efficiency: good vs. all jobs (plot).

Events processed (plot).

Wall-clock efficiency based on successful/all accomplished jobs (plot).

Efficiency based on successful/all accomplished jobs (plot).
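My reading of the two efficiency metrics named above (an interpretation of the plot titles, not a definition given on the slides):

    Efficiency            = N_successful / N_accomplished
    Wall-clock efficiency = T_wall(successful jobs) / T_wall(all accomplished jobs)

where N counts accomplished jobs and T_wall sums their wall-clock time.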

Resource utilization (plot).

Transfers (plot).

JINR-T1 farm usage history (plot).

Tier-1 jobs (history, July-September 2015) (plot).

Tier-1 usage and jobs (real-time view) (plot).

Job processing by activities (plot).

WLCG Tier-1 sites (map): Lyon/CCIN2P3, Barcelona/PIC, DE-FZK, US-FNAL, CA-TRIUMF, NDGF, CERN, US-BNL, UK-RAL, Taipei/ASGC, Amsterdam/NIKHEF-SARA, Bologna/CNAF, and in Russia: NRC KI and JINR.

NICA Accelerator Complex. For the NICA project the data stream has the following parameters: a high event rate (up to 6 kHz); about 1000 charged particles generated in a central Au-Au collision at NICA energies; a predicted total of 19 billion events; the total amount of raw data can be estimated at 30 PB annually, or 8.4 PB after processing (a rough per-event estimate follows below). Simulation of the distributed computing infrastructure: a model for studying these processes has been created, comprising a tape robot, a disk array and a CPU cluster.
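A back-of-envelope consistency check (my own arithmetic from the numbers above, not stated on the slide; it assumes the 19 billion events correspond to the quoted 30 PB of raw data):

    30 x 10^15 bytes / 19 x 10^9 events ≈ 1.6 MB per event (raw)
    8.4 x 10^15 bytes / 19 x 10^9 events ≈ 0.44 MB per event (after processing)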

Importance of the Tier-1 center at JINR:
- creation of conditions for physicists from JINR, the JINR Member States and the RDMS CMS collaboration to participate fully in the processing and analysis of data from the CMS experiment at the Large Hadron Collider;
- the invaluable experience of launching the Tier-1 center will be used to create the storage and data-processing system for the NICA megaproject and other large-scale projects of the JINR participating countries;
- studies in the field of Big Data analytics are becoming important for the development of promising directions of science and the economy, as well as for the analysis and forecasting of processes in various fields.

Thank you for your attention!