RECENT DEVELOPMENTS IN THE CONTRIBUTION OF DFCTI/IFIN-HH TO THE WLCG COLLABORATION
Department of Computational Physics and Information Technologies (DFCTI)

Presentation transcript:

1 RECENT DEVELOPMENTS IN THE CONTRIBUTION OF DFCTI/IFIN-HH TO THE WLCG COLLABORATION
Department of Computational Physics and Information Technologies (DFCTI), IFIN-HH, Magurele, Romania
Mihai Ciubancan, Teodor Ivanoaica, Mihnea Dulea
Grid 2014, LIT/JINR, 01.07

2 IFIN-HH
[Image captions: First Romanian computer (CIFA-1, 1956); VVR-S nuclear reactor (1957); New computing center (2015); ELI-NP (2018); Group I; Group II]

3 DFCTI
DFCTI coordinates the Romanian Tier-2 Federation (RO-LCG), which contributes to the computational support of three of the LHC experiments within the WLCG collaboration.
The technical expertise of the staff includes HPC and grid technologies, algorithm programming and optimization, distributed data access and management, advanced networking, cluster architectures and optimization, parallel computing, symbolic computing, and molecular dynamics. The staff also conducts inter- and multidisciplinary research in areas such as strongly correlated systems, nuclear and subnuclear phenomena, condensed matter, and biomolecular systems.
DFCTI hosts, develops and administers one of the most important e-infrastructures in the country, which includes HPC and Grid computing facilities dedicated to the support of the research community within IFIN-HH and of large-scale international collaborations.

4 RO-LCG
Resource centres:
- IFIN-HH: RO-07-NIPNE (alice, atlas, lhcb), NIHAM (alice), RO-02-NIPNE (atlas), RO-11-NIPNE (lhcb)
- ISS (Institute of Space Science, Magurele): RO-13-ISS (alice)
- ITIM (Natl. Inst. for R&D in Isotopic & Molecular Technologies, Cluj-Napoca): RO-14-ITIM (atlas)
- UAIC ('Alexandru Ioan Cuza' University of Iasi): RO-16-UAIC (atlas)
RO-LCG provides disk storage and computing power for the simulations and data analysis required by the ALICE, ATLAS and LHCb experiments at the LHC.
Network: the centres are connected through 10 Gbps links to the 100 Gbps backbone of the RoEduNet NREN. RoEduNet currently provides a 10 Gbps connection to GEANT.

5 RO-LCG
RO-LCG resources dedicated to WLCG:
- 2.6 PB storage capacity
- 6800 processing cores (a rough capacity sketch follows below)
The resources allocated by RO-LCG to the VOs represent 2-10% of the experiments' total requests from the Tier-2s.
Standard software configuration of the sites:
- Scientific Linux
- EMI 3 middleware; the two ALICE sites (NIHAM & RO-13-ISS) also use AliEn
- CEs with the CREAM job management service
- PBS/TORQUE distributed resource manager
- Maui job scheduler
- Disk Pool Manager (DPM) for disk storage
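As a rough orientation (not a figure from the presentation), the pledged core count can be converted into an annual CPU capacity in the HEPSpec06-hours unit used on the next slide; the HS06-per-core value and the utilisation factor below are assumptions chosen only for this sketch:

    # Rough capacity estimate for the RO-LCG WLCG resources (illustrative sketch).
    # Assumptions (not from the presentation): ~10 HS06 per core, 85% average utilisation.

    CORES = 6800               # processing cores pledged to WLCG (from the slide)
    HS06_PER_CORE = 10.0       # assumed benchmark score per core (hypothetical)
    HOURS_PER_YEAR = 365 * 24
    UTILISATION = 0.85         # assumed average usage efficiency (hypothetical)

    capacity_hs06 = CORES * HS06_PER_CORE
    annual_hs06_hours = capacity_hs06 * HOURS_PER_YEAR * UTILISATION

    print(f"Installed capacity : {capacity_hs06 / 1e3:.1f} kHS06")
    print(f"Annual CPU capacity: {annual_hs06_hours / 1e6:.0f} M HS06-hours/year")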

6 RO-LCG PRODUCTION
Total Grid production for the LHC VOs since the start of operation (2006):
- Total number of jobs run: 44.8 million
- Total CPU time: 489 mega HEPSpec06-hours (an average per-job figure is derived below)
Annual production (2013): RO-LCG ranked 12th among the 36 Tier-2 national centres in cumulated ALICE + ATLAS + LHCb CPU hours, with a share of 2.1% of the total.
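For orientation only, the two cumulative totals above can be combined into an average CPU cost per job; this derived number is not stated on the slide:

    # Average CPU time per Grid job, derived from the cumulative totals on this slide.

    total_jobs = 44.8e6             # jobs run since 2006
    total_cpu_hs06_hours = 489e6    # 489 mega HEPSpec06-hours, expressed in HS06-hours

    avg_per_job = total_cpu_hs06_hours / total_jobs
    print(f"Average CPU per job: {avg_per_job:.1f} HS06-hours")   # ~10.9 HS06-hours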

7 RUN-2
The LHC Long Shutdown (LS1) offered the opportunity to prepare the computational support for Run 2. The higher luminosity and the increased trigger rates of the experiments will generate significantly larger amounts of data to be processed and stored within WLCG.
The solutions for handling the extra data required changes in the computing models and an overall upgrade of the Grid resources, including those of the Tier-2 centres and federations. The transition of LCG from the hierarchical to the mesh topology allows direct connections of the Tier-2s with multiple Tier-1s and other Tier-2s within LHCONE. The performance of the Tier-2 sites will therefore strongly depend on their network capabilities.
DFCTI is preparing the upgrade of its network capacity and the computational support of two initiatives launched by the ATLAS and LHCb experiments, respectively:
a) the planned migration to multi-core queues, to be performed by ATLAS in parallel with its Data Challenge 14 (a slot-packing sketch follows below);
b) LHCb's intention of supporting user analysis on a set of 'Tier-2 with Data' (T2D) sites endowed with sufficient disk capacity.
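To illustrate why multi-core payloads need a dedicated queue rather than being mixed freely with single-core jobs, here is a minimal Python sketch of slot packing on 8-core worker nodes; the node and job counts are hypothetical, only the 8 cores/node figure is taken from slide 10:

    # Sketch: packing 8-core AthenaMP jobs onto 8-core worker nodes.
    # Without a dedicated multicore queue, partially filled nodes cannot start
    # any 8-core job, so whole nodes must be drained of single-core work first.

    CORES_PER_NODE = 8      # matches the older HPC cluster mentioned on slide 10
    NODES = 50              # hypothetical cluster size
    SINGLE_CORE_JOBS = 130  # hypothetical single-core backlog

    full_nodes, remainder = divmod(SINGLE_CORE_JOBS, CORES_PER_NODE)
    blocked_nodes = full_nodes + (1 if remainder else 0)
    free_nodes = NODES - blocked_nodes

    print(f"Nodes free for 8-core jobs  : {free_nodes}")
    print(f"Cores idle on the mixed node: {CORES_PER_NODE - remainder if remainder else 0}")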

8 NETWORK INFRASTRUCTURE UPGRADE
DFCTI provides access to RoEduNet for 9 institutions and 5 grid sites located in Magurele. These grid sites are connected by DFCTI to the NREN's NOC through a 10 Gbps link with backup links. Since 2013 DFCTI has hosted the Magurele PoP (point of presence) of RoEduNet.
Due to the massive transfer of WLCG data, the in/out traffic occasionally reaches values close to the bandwidth limit. DFCTI recently installed a Cisco ASR 9006 router, which allows the bandwidth to be increased beyond 10 Gbps. A 40 Gbps upgrade of the external link is currently being performed and will later be followed by an upgrade to 100 Gbps.
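A minimal Python sketch of what the link upgrade means for bulk data movement; the 10/40/100 Gbps capacities are those quoted above, while the dataset size and the usable-bandwidth fraction are assumptions made only for the illustration:

    # Transfer time for a bulk dataset at the current and planned link capacities.

    DATASET_TB = 100          # hypothetical dataset size
    EFFICIENCY = 0.8          # assumed fraction of nominal bandwidth usable for WLCG traffic

    dataset_bits = DATASET_TB * 1e12 * 8
    for gbps in (10, 40, 100):                      # link capacities mentioned on the slide
        seconds = dataset_bits / (gbps * 1e9 * EFFICIENCY)
        print(f"{gbps:>3} Gbps link: {seconds / 3600:6.1f} hours")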

9 RO-07-NIPNE, RO-11-NIPNE, IFIN GRID
[Network diagram. Legend: DD = DPM disk server; mCore = multicore cluster; thick lines: 10 Gbps; thin lines: 1 Gbps; encircled in red: the last 12 months' upgrades.]

10 RO-07-NIPNE
RO-07 is a distributed site with a scalable architecture. [Diagram labels: 'Computing centre', 'Data centre']
LHC VOs currently supported: ALICE (production), ATLAS, LHCb. Other VOs supported: gridifin (general purpose), ifops (monitoring). A new VO dedicated to ELI-NP will be supported soon.
Resources provided to WLCG:
- 2000 cores
- 964 TB storage capacity
Problems solved:
ATLAS analysis requires large bandwidth between the DPM disk servers and the worker nodes, which increases with the number of concurrent jobs. A number of WNs that share enough bandwidth with the DDs were dedicated to analysis; the analysis queue is managed by a separate CE, tbit03. The concurrent running of 500 analysis jobs generates a maximum traffic of 11 Gbps from the SE to the WNs (see the estimate below).
ATLAS multicore queue: an older HPC cluster, with 8 cores/node, was adapted for running AthenaMP (MC production jobs).
LHCb T2D: see slide 12.
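From the figures above (500 concurrent analysis jobs generating at most 11 Gbps of SE-to-WN traffic), a rough per-job bandwidth demand can be derived; the extrapolation to larger job counts assumes linear scaling, which is only an approximation:

    # Per-job SE -> WN bandwidth estimate for ATLAS analysis at RO-07-NIPNE.

    PEAK_TRAFFIC_GBPS = 11.0      # maximum observed traffic (from the slide)
    CONCURRENT_JOBS = 500         # concurrent analysis jobs (from the slide)

    per_job_mbps = PEAK_TRAFFIC_GBPS * 1000 / CONCURRENT_JOBS
    print(f"Peak demand per analysis job: ~{per_job_mbps:.0f} Mbps")   # ~22 Mbps

    # Hypothetical extrapolation to a larger analysis queue:
    for jobs in (750, 1000):
        print(f"{jobs} jobs -> ~{jobs * per_job_mbps / 1000:.1f} Gbps (assuming linear scaling)")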

11 JOB MANAGEMENT
[Monitoring plots: ATLAS production regime on tbit03; LHCb -> ATLAS transition; ATLAS analysis on tbit07; ALICE production on tbit01.]

12 LHCb PRODUCTION

13 T2D CANDIDATE
The LHCb experiment recently modified its computing model (2013).
T2-D proposal: some T2 sites are requested to add LHCb storage and to pass qualification tests in order to become 'T2-D' sites. The storage is to be used for data, so that analysis jobs can run at the T2-Ds, but it is also available for use in production campaigns.

14 JOINING THE DIRAC 4 EGI COMMUNITY
RO-07-NIPNE has recently had its SE registered (see Dr. Andrei Tsaregorodtsev's presentation).
Goal: to integrate the distributed LCG, IFIN GRID and HPC clusters.

15 SERVICE AVAILABILITY/RELIABILITY MONITORING
DFCTI provides its own monitoring services for RO-LCG. The OPS availability tests were replaced by CERN with LHC-VO tests, but EGI is still using them.
SAM is provided by the IFIN GRID infrastructure through the ifops VO, which is dedicated to the monitoring of the sites of the National Grid for Physics and Related Fields (GriNFiC), including RO-LCG. It was upgraded to provide more detailed information.
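As a reference for how SAM-style availability and reliability figures are typically computed, a minimal Python sketch follows; the slide does not give the exact formulas, so the definitions below (modelled on the commonly used WLCG/EGI conventions, with 'unknown' time excluded from both figures and scheduled downtime additionally excluded from reliability) should be read as an assumption:

    # Sketch of SAM-style service availability and reliability over a reporting period.
    # Assumed conventions: 'unknown' time is excluded from both figures, and scheduled
    # downtime is additionally excluded from reliability.

    def availability(ok_h, total_h, unknown_h=0.0):
        return ok_h / (total_h - unknown_h)

    def reliability(ok_h, total_h, scheduled_down_h=0.0, unknown_h=0.0):
        return ok_h / (total_h - scheduled_down_h - unknown_h)

    # Hypothetical month (720 h) with 12 h of scheduled downtime and 6 h of failed tests:
    total, sched, failed = 720.0, 12.0, 6.0
    ok = total - sched - failed
    print(f"Availability: {availability(ok, total):.3f}")        # ~0.975
    print(f"Reliability : {reliability(ok, total, sched):.3f}")  # ~0.992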

16 RO-LCG 2014 CONFERENCE
You are invited to the RO-LCG 2014 Conference, IFIN-HH.

17 FUNDING/ACKNOWLEDGEMENTS
- National contribution to the development of the LCG computing grid for elementary particle physics, project funded by the Ministry of National Education - RDI, under contract 8EU/2012
- CEA - IFA Partnership, R&D project: Efficient Handling and Processing Petabyte Scale Data for the Computing Centres within the French Cloud, contract C1-06/2010, co-funded by the National Authority for Scientific Research (ANCS)
- Collaboration with LIT-JINR / Dubna in the framework of the Hulubei-Meshcheryakov program, project: Optimization Investigations of the Grid and Parallel Computing Facilities at LIT-JINR and Magurele Campus
- Development of Grid and HPC infrastructure for complex systems physics, contract PN, funded by the Ministry of National Education - RDI
Most of the funding was provided through national projects.