Grid technologies for large-scale projects
N. S. Astakhov, A. S. Baginyan, S. D. Belov, A. G. Dolbilov, A. O. Golunov, I. N. Gorbunov, N. I. Gromova, I. A. Kashunin, V. V. Korenkov, V. V. Mitsyn, V. V. Palichik, S. V. Shmatov, T. A. Strizh, E. A. Tikhonenko, V. V. Trofimov, N. N. Voitishin, V. E. Zhiltsov
Laboratory of Information Technologies, JINR
Alushta, Crimea, 2016

Grid technologies - a way to success
At the celebration of the Nobel Prize awarded for the discovery of the Higgs boson, CERN Director General Professor Rolf-Dieter Heuer called grid technologies one of the three pillars of that success, alongside the LHC accelerator and the physics detectors. Without the grid infrastructure built for the LHC it would have been impossible to process and store the enormous volumes of data coming from the collider, and therefore to make the discoveries. Today no large-scale project of this kind can succeed without a distributed infrastructure for data processing.

LHC Computing Model
Tier-0 (CERN): data recording, initial data reconstruction, data distribution.
Tier-1 (11 → 14 centres): permanent storage, re-processing, analysis, simulation.
Tier-2 (>200 centres): simulation, end-user analysis.
JINR (Dubna) is one of the Tier-1 centres.

Creation of the CMS Tier-1 at JINR required:
- engineering infrastructure (uninterruptible power supply, climate control);
- a high-speed, reliable network infrastructure with a dedicated redundant data link to CERN (LHCOPN);
- computing and storage systems based on high-capacity disk arrays and tape libraries;
- 100% reliability and availability.

JINR Tier-1 Connectivity Scheme

Components of the Tier-1: tape robot, cooling system, uninterruptible power supply, computing elements.

Current configuration and plans
Current resources: 3600 cores (~45 kHS06), 5 PB of tape (IBM TS3500), 2.4 PB of disk.
Planned annual increase: 11.5 kHS06, 1.6 PB of tape, 0.9 PB of disk.
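As a rough illustration of how these figures evolve, the sketch below projects the capacity forward a few years, assuming the annual increase quoted on this slide simply repeats each year (an assumption for illustration, not an official resource plan).

```python
# Projection of the Tier-1 capacity from the figures on this slide, assuming
# the quoted annual increase repeats every year (illustration only).
current = {"CPU (kHS06)": 45.0, "tape (PB)": 5.0, "disk (PB)": 2.4}
annual_increase = {"CPU (kHS06)": 11.5, "tape (PB)": 1.6, "disk (PB)": 0.9}

for years in range(4):  # now, +1, +2, +3 years
    projected = {k: current[k] + years * annual_increase[k] for k in current}
    summary = ", ".join(f"{k} = {v:.1f}" for k, v in projected.items())
    print(f"after {years} year(s): {summary}")
```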

JINR Tier-1 monitoring system
The JINR Tier-1 monitoring system provides real-time information about:
- worker nodes;
- disk servers;
- network equipment;
- uninterruptible power supply elements;
- the cooling system.
It can also be used to build network maps and network-equipment load maps, and to produce state tables and various plots.
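The slide does not describe how the checks themselves are implemented. Purely as an illustration of the idea, a minimal polling loop could look like the sketch below; the host names, roles and the ping-based check are invented for the example and are not those of the actual JINR system.

```python
# Minimal illustrative monitoring poll (hypothetical hosts and checks;
# the real JINR Tier-1 monitoring system is far more elaborate).
import subprocess
import time

HOSTS = {
    "wn001.example.jinr.ru": "worker node",      # hypothetical host names
    "ds01.example.jinr.ru": "disk server",
    "sw01.example.jinr.ru": "network equipment",
}

def is_reachable(host: str) -> bool:
    """A single ICMP ping as the simplest possible availability check."""
    result = subprocess.run(["ping", "-c", "1", "-W", "2", host],
                            capture_output=True)
    return result.returncode == 0

def poll_once() -> None:
    for host, role in HOSTS.items():
        status = "OK" if is_reachable(host) else "DOWN"
        print(f"{time.strftime('%H:%M:%S')}  {role:18s} {host:25s} {status}")

if __name__ == "__main__":
    poll_once()  # a real system loops, stores history and raises alarms
```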

JINR Tier’s Dashboard

Usage of Tier-1 centres by the CMS experiment over the last month: plot of CMS jobs at the JINR Tier-1.

JINR LIT CMS group: Cathode Strip Chambers (CSC)

JINR LIT CMS group: development of a new CSC segment-building algorithm
The new CSC segment-building algorithm takes the interaction point into account.
Plots: reconstruction efficiency vs pseudorapidity; difference in angles between the reconstructed and simulated segments (strip_unit ~ 3 mrad).
The studies used 1 TeV MC muons, ~5000 CRAB jobs and ~5 TB of skimmed datasets.
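The slide only names the algorithm, so the following is a purely conceptual sketch of the underlying idea rather than the CMS code: fit a straight-line segment to the hits of a chamber and penalize candidates whose direction does not extrapolate back towards the interaction point. All hit coordinates and the scoring are invented for illustration.

```python
# Conceptual sketch only: fit a straight-line segment to chamber hits and
# score how well its direction points back towards the interaction point.
# Coordinates and numbers are invented; this is NOT the CMS CSC code.
import numpy as np

def fit_segment(z, x):
    """Least-squares straight line x = a*z + b through the hits of one chamber."""
    a, b = np.polyfit(z, x, 1)
    return a, b

def ip_pointing_penalty(a, b, z_ref):
    """Angle between the fitted segment and the line from the interaction
    point (taken as the origin) to the segment position at z_ref."""
    segment_angle = np.arctan(a)
    from_ip_angle = np.arctan((a * z_ref + b) / z_ref)
    return abs(segment_angle - from_ip_angle)

# Toy example: six hits lying roughly on a line (global coordinates, cm).
z_hits = np.array([700.0, 702.5, 705.0, 707.5, 710.0, 712.5])
x_hits = np.array([140.1, 140.6, 141.0, 141.6, 142.0, 142.4])

a, b = fit_segment(z_hits, x_hits)
penalty = ip_pointing_penalty(a, b, z_ref=float(z_hits.mean()))
print(f"slope = {a:.4f}, IP-pointing penalty = {penalty * 1e3:.1f} mrad")
```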

Baryonic Matter at Nuclotron (BM@N)
Fixed-target experiment with proton and gold beams at 1-6 GeV per nucleon.
Plots: DCH detector resolution (~184 μm) and DCH detector efficiency; momentum measurement 8.68 GeV/c ± 2.6% (4.3%).

NICA Project
Estimation of the NICA data volume:
- high rate of registered events (up to 6 kHz);
- in Au-Au collisions at the expected energies, more than 1000 charged particles will be produced per event;
- overall number of events: about 19 billion;
- overall amount of data produced per year: 30 PB, plus a further 8.4 PB after analysis.
A model of the distributed computing infrastructure, comprising a tape robot, disk array and CPU cluster, is used for detailed investigation and estimation.
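A quick cross-check of these figures, using only the numbers quoted above (the derived values are rough estimates, not official NICA parameters):

```python
# Rough cross-check of the NICA data-volume figures quoted above;
# only the slide's numbers are used, the derived quantities are estimates.
events_per_year = 19e9       # ~19 billion registered events
raw_data_per_year = 30e15    # 30 PB of raw data per year, in bytes
event_rate = 6e3             # up to 6 kHz of registered events

avg_event_size = raw_data_per_year / events_per_year    # bytes per event
beam_seconds = events_per_year / event_rate             # seconds at full rate
peak_throughput = event_rate * avg_event_size           # bytes per second

print(f"average event size  ~ {avg_event_size / 1e6:.1f} MB")
print(f"data-taking time    ~ {beam_seconds / 86400:.0f} days at 6 kHz")
print(f"peak raw throughput ~ {peak_throughput / 1e9:.1f} GB/s")
```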

Conclusions  The concept of grid perfectly fits the LHC project, making possible tremendous amounts of data transfers and job processing.  The JINR Tier-1 site along with the Russia Tier-2 sites gives the possibility for physicists from JINR, member states to fully participate in processing and analysis of the LHC experimental data.  An enormous experience in building and maintaining big data processing and storing center was obtained that can be very useful for the development of large scale projects at JINR and other member states.

Thank you for your attention!

Back-up slides

Stepping into the Big Data era
A comparison of data volumes processed worldwide shows that the data coming from the LHC fall under the term Big Data. The amount of data from the LHC is expected to grow by a factor of about 2.5 in Run 2, which will require both an increase in resources and optimization of their use.