1 This work is supported by projects Research infrastructure CERN (CERN-CZ, LM2015058) and OP RDE CERN Computing (CZ.02.1.01/0.0/0.0/16013/0001404) from EU funds and MEYS.

2 Extending WLCG Tier-2 Resources
Jiří Chudoba, Michal Svatoš, Institute of Physics of the Czech Academy of Sciences (FZU)

3 Motivation
LHC experiment requirements:
- Ian Bird, C-RRB, 24 Oct 2017 – usage of opportunistic resources
- Torre Wenaus, Thorsten Wengler, CERN LHCC Report to the LHC RRB, 24 April 2017

4 CZ Tier-2 Center
“Standard” Tier-2 center supporting the ALICE and ATLAS experiments
Supported projects: LHC (ALICE, ATLAS), NOvA, CTA, Auger
Interfaces:
- CEs: CREAM -> ARC, HTCondor for OSG; batch system Torque/Maui -> HTCondor (7000 cores) – see the submission sketch below
- SEs: DPM (2.5 PB -> 4 PB later this year), xrootd (1.6 PB)
Good external connectivity (2x10 Gbps to LHCONE, 10 Gbps generic)
Vacant system administrator positions
WLCG pledges delivered, but below experiment requirements (ALICE)
Heterogeneous cluster; Torque -> HTCondor migration; DPM and xrootd storage servers
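
As an illustration of the batch-system side of the Torque/Maui -> HTCondor migration, here is a minimal sketch using the HTCondor Python bindings on a submit host; the executable, arguments and resource requests are hypothetical placeholders, not values from the talk.

```python
# Minimal sketch: submit a test job to an HTCondor pool via the Python bindings.
# The payload and resource requests below are hypothetical placeholders.
import htcondor

submit_description = htcondor.Submit({
    "executable": "/usr/bin/stress-ng",   # hypothetical test payload
    "arguments": "--cpu 1 --timeout 600",
    "request_cpus": "1",
    "request_memory": "2GB",
    "output": "job.$(ClusterId).out",
    "error": "job.$(ClusterId).err",
    "log": "job.$(ClusterId).log",
})

schedd = htcondor.Schedd()                       # talk to the local schedd
result = schedd.submit(submit_description)       # queue one job
print("submitted cluster", result.cluster())
```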

5 e-Infrastructures in the Czech Republic: CESNET
Network (NREN), distributed computing, storage, NGI role in EGI

6 CzechLight Network for HEP
Connects HEP institutions in Prague and near Prague
Enables xrootd storage servers located at NPI Řež
Tests with remote WNs at CUNI

7 CESNET grid computing
Distributed infrastructure with a central PBS server: 17000 cores, Debian OS, Singularity (see the container sketch below)
EGI cluster: 800 cores (small for ATLAS if we expect a 10-20% share) – credit 13M in 3 months
Cloud resources: OpenNebula -> OpenStack transition this year; at CHEP:
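
A minimal sketch of how an experiment payload can be wrapped in a Singularity container on such Debian worker nodes, so the job sees the OS environment the experiment software expects; the image path and bind mounts are assumptions, not configuration quoted from the talk.

```python
# Minimal sketch: run a payload inside a Singularity container on a Debian WN.
# Image path, bind mounts and command are assumed, not taken from the slides.
import subprocess

image = "/cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos7"  # assumed image
cmd = [
    "singularity", "exec",
    "--bind", "/cvmfs",          # expose the CVMFS software tree inside the container
    "--bind", "/scratch",        # expose the local scratch area
    image,
    "/bin/bash", "-c", "echo payload would run here",
]
subprocess.run(cmd, check=True)
```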

8 External Storage
CESNET Storage department: 21 PB total in 3 locations
100 TB via dCache for ATLAS users: ATLASPPSLOCALGROUPDISK and ATLASLOCALGROUPTAPE
Backup tool for “local” users
Transfer rates > 1 TB/hour to disks (a transfer sketch follows below); distance Prague - Pilsen: 100 km
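
A minimal sketch of writing a file into such a dCache area with the standard gfal-copy tool; the endpoint URL and destination path are hypothetical placeholders, and a valid grid proxy is assumed.

```python
# Minimal sketch: copy a local file to a dCache storage area with gfal-copy.
# The endpoint and destination path are hypothetical placeholders.
import subprocess

src = "file:///home/user/analysis/output.root"                 # hypothetical local file
dst = ("davs://dcache.example.cesnet.cz:2880"                  # assumed WebDAV endpoint
       "/atlas/ATLASPPSLOCALGROUPDISK/user/output.root")

subprocess.run(["gfal-copy", src, dst], check=True)
```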

9 e-Infrastructures in the Czech Republic: IT4I
IT4I – IT4Innovations, the Czech national supercomputing center, located in Ostrava (300 km from Prague)
Founded in 2011, first cluster in 2013
Initial funds mostly from the EU Operational Programme Research and Development for Innovations: 1.8 billion CZK (80 MCHF)
Mission: to deliver scientifically excellent and industry-relevant research in the fields of high performance computing and embedded systems

10 Cluster Anselm
Delivered in 2013, 94 TFLOPs, 209 compute nodes (180 nodes without accelerators)
16 cores per node (2x Intel Xeon E5-2665), 64 GB RAM
bullx Linux Server release 6.3, PBSPro
Lustre FS for shared HOME and SCRATCH
InfiniBand QDR and Gigabit Ethernet
Access via login nodes

11 Cluster Salomon (2015)
2 PFLOPs peak performance – no. 87 in 11/2017
1008 compute nodes: 576 without accelerators, 432 with Intel Xeon Phi (MIC)
24 cores per node (2x Intel Xeon E5-2680v3), 128 GB RAM (or more)
CentOS 6.9, PBSPro 13 (a submission sketch follows below)
Lustre FS for shared HOME and SCRATCH
InfiniBand (56 Gbps)
Access via login nodes
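
A minimal sketch of submitting a single-node test job to PBSPro from a Salomon login node; the job name, project account and walltime are hypothetical placeholders, while the qfree queue is the one mentioned later in the talk.

```python
# Minimal sketch: submit a single-node test job to PBSPro on Salomon.
# Project account and walltime are hypothetical; qfree is the queue named in the talk.
import os
import subprocess
import tempfile
import textwrap

job_script = textwrap.dedent("""\
    #!/bin/bash
    #PBS -N atlas-test
    #PBS -q qfree
    #PBS -A OPEN-XX-XX          # hypothetical project/account ID
    #PBS -l select=1:ncpus=24
    #PBS -l walltime=12:00:00
    cd "$PBS_O_WORKDIR"
    echo "payload would run here on $(hostname)"
""")

with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
    f.write(job_script)
    script_path = f.name

subprocess.run(["qsub", script_path], check=True)   # hand the script to PBSPro
os.unlink(script_path)
```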

12 ATLAS SW Installation
Diagram: the ATLAS software is served from a CVMFS stratum 1 over HTTP through the Tier-2 Prague squid (squid.farm.particle.cz) to the ARC CE (arc-it4i.farm.particle.cz), which synchronizes it via rsync over sshfs to the Lustre servers (shared FS) at IT4I Ostrava, where the login nodes and compute nodes read it.
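
A minimal sketch of that synchronization step, assuming the ARC CE sees /cvmfs locally and pushes selected release directories to the shared filesystem over ssh; the destination path is a hypothetical placeholder and the login-node alias is the one shown on the next slide.

```python
# Minimal sketch: mirror part of the ATLAS CVMFS software tree onto IT4I's
# shared Lustre filesystem over ssh. Destination path is a hypothetical placeholder.
import subprocess

src = "/cvmfs/atlas.cern.ch/repo/sw/"                          # ATLAS software tree via CVMFS
dst = "salomon.it4i.cz:/scratch/atlas/cvmfs-mirror/sw/"        # assumed Lustre target path

subprocess.run([
    "rsync", "-a", "--delete",   # keep the remote copy an exact mirror
    "-e", "ssh",                 # transfer over ssh, as in the diagram
    src, dst,
], check=True)
```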

13 ATLAS jobs on Salomon
Diagram: PanDA jobs are routed through aCT at CERN to the ARC CE (arc-it4i.farm.particle.cz) at the Tier-2 in Prague; the ARC CE submits them with qsub via ssh to the Salomon login nodes (salomon.it4i.cz), job I/O goes over sshfs to the shared /scratch, the PBS server dispatches the jobs to compute nodes at IT4I Ostrava, and input/output data is staged through the Tier-2 storage elements (SE).
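
A minimal sketch of this remote-submission path, assuming the ARC CE first mounts Salomon's shared scratch over sshfs and then submits with qsub over ssh; the mount point, scratch path and session directory are hypothetical placeholders, while salomon.it4i.cz is the login-node alias from the slide.

```python
# Minimal sketch: mount Salomon's shared /scratch over sshfs for job I/O,
# then submit a staged job script with qsub over ssh to a login node.
# Mount point, scratch path and session directory are hypothetical placeholders.
import subprocess

LOGIN = "salomon.it4i.cz"                  # login-node alias from the slide
REMOTE_SCRATCH = "/scratch/atlas"          # assumed shared scratch area
LOCAL_MOUNT = "/var/spool/arc/salomon"     # assumed sshfs mount point on the ARC CE

# 1) Make the remote scratch visible locally, so input files can be staged in
#    and output files picked up as if they were local.
subprocess.run(["sshfs", f"{LOGIN}:{REMOTE_SCRATCH}", LOCAL_MOUNT], check=True)

# 2) Submit the job script that was staged into the shared scratch.
subprocess.run(["ssh", LOGIN, "qsub", f"{REMOTE_SCRATCH}/session-0001/job.pbs"],
               check=True)
```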

14 Jobs at Salomon
Plot of jobs at Salomon; limit of 100 from the qfree queue

15 CZ-Tier2 vs Salomon: Running jobs

16 CZ-Tier2 vs Salomon: CPU consumption
IT4I share: 10%

17 CZ-Tier2 vs Salomon: CPU/walltime efficiency
IT4I: 85%
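
For reference, a minimal sketch of how such a CPU/walltime efficiency can be computed from per-job accounting records; the record format is a hypothetical simplification, not the actual monitoring source behind this plot.

```python
# Minimal sketch: CPU/walltime efficiency from per-job accounting records.
# Each record is (cpu_seconds, wall_seconds, allocated_cores) -- a hypothetical
# simplification of what a batch system or dashboard would provide.
def cpu_walltime_efficiency(jobs):
    """Return total CPU time divided by total allocated core-walltime."""
    total_cpu = sum(cpu for cpu, wall, cores in jobs)
    total_wall_cores = sum(wall * cores for cpu, wall, cores in jobs)
    return total_cpu / total_wall_cores if total_wall_cores else 0.0

# Example with made-up numbers: a well-packed 24-core job and an idle single-core job.
example = [(24 * 3600 * 0.9, 3600, 24), (3600 * 0.5, 3600, 1)]
print(f"efficiency: {cpu_walltime_efficiency(example):.1%}")
```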

18 CZ-Tier2 vs Salomon: IO sizes
Input: IT4I – 2.4 TB (0.15 %)
Output: IT4I – 5 TB (4.4 %)

19 Conclusion
LHC experiment requirements cannot be covered by Tier-2 resources alone (a flat budget is expected over the next 4 years)
External resources can significantly contribute to the CZ Tier-2 computing capacity: HPC, cloud, HTCondor
We greatly appreciate the possibility to use CESNET and IT4I resources.

