This work is supported by the projects Research infrastructure CERN (CERN-CZ, LM2015058) and OP RDE CERN Computing (CZ.02.1.01/0.0/0.0/16_013/0001404), funded from EU funds and by MEYS.

Extending WLCG Tier-2 Resources
Jiří Chudoba, Michal Svatoš, Institute of Physics of the Czech Academy of Sciences (FZU)
22. 3. 2018

Motivation
- LHC experiment requirements
- Sources of the quoted figures: Ian Bird, C-RRB, 24 Oct 2017 (usage of opportunistic resources); Torre Wenaus, 24 Oct 2017; Thorsten Wengler, CERN LHCC Report to the LHC RRB, 24 April 2017

CZ Tier-2 Center
- "Standard" Tier-2 center
- Supported projects: LHC (ALICE, ATLAS), NOvA, CTA, Auger
- Interfaces:
  - CEs: CREAM -> ARC, HTCondor-CE for OSG; batch system Torque/Maui -> HTCondor (7000 cores, see the sketch below)
  - SEs: DPM (2.5 PB -> 4 PB later this year), xrootd (1.6 PB)
- Good external connectivity (2x10 Gbps to LHCONE, 10 Gbps generic)
- Vacant system administrator positions
- WLCG pledges delivered, but below experiment requirements (ALICE)
- Summary: standard Tier-2 center supporting the ALICE and ATLAS experiments; heterogeneous cluster, Torque -> HTCondor, DPM and xrootd storage servers
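The slide mentions the Torque/Maui -> HTCondor migration; the following is a minimal sketch of job submission through the HTCondor Python bindings, assuming the bindings (9.x-style API) are available on the submit host. The payload, resource requests and file names are illustrative only, not the site's actual configuration.

    # Minimal sketch: submit a test job via the HTCondor Python bindings.
    # Assumes the htcondor bindings (>= 9.0 API) are installed; payload and
    # resource requests are illustrative placeholders.
    import htcondor

    job = htcondor.Submit({
        "executable": "/usr/bin/env",          # illustrative payload
        "arguments": "uname -a",
        "output": "test.$(ClusterId).out",
        "error":  "test.$(ClusterId).err",
        "log":    "test.$(ClusterId).log",
        "request_cpus": "1",
        "request_memory": "2GB",
    })

    schedd = htcondor.Schedd()                 # local schedd on the submit host
    result = schedd.submit(job, count=1)       # returns a SubmitResult
    print("submitted cluster", result.cluster())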

e-Infrastructures in the Czech Republic: CESNET
- Network (NREN)
- Distributed computing
- Storage
- NGI role in EGI

CzechLight Network for HEP
- Connects HEP institutions in and around Prague
- Enables xrootd storage servers located at NPI Řež
- Tests with remote worker nodes at CUNI (Charles University)

CESNET grid computing
- Distributed infrastructure with a central PBS server: 17000 cores, Debian OS, Singularity (see the sketch below)
- EGI cluster: 800 cores (small for ATLAS if we expect a 10-20% share)
- ATLAS@Home: 13M credits in 3 months
- Cloud resources: OpenNebula -> OpenStack transition this year
- ATLAS@Home at CHEP: https://indico.cern.ch/event/505613/contributions/2230707/attachments/1346591/2030567/Oral-153.pdf https://indico.cern.ch/event/505613/contributions/2230707/
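As a purely illustrative sketch of the Singularity setup mentioned above, the following shows how a payload could be wrapped in a container on a worker node. The image path is an assumption (an unpacked ATLAS image distributed via CVMFS), not the cluster's verified configuration.

    # Illustrative only: wrap a payload in a Singularity container.
    # The image path below is an assumed, unverified placeholder.
    import subprocess

    IMAGE = "/cvmfs/atlas.cern.ch/repo/containers/fs/singularity/x86_64-centos7"
    CMD = ["/bin/sh", "-c", "echo payload running on $(hostname)"]

    subprocess.run(["singularity", "exec", IMAGE] + CMD, check=True)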

External Storage
- CESNET Storage department: 21 PB total in 3 locations
- 100 TB via dCache for ATLAS users: ATLASPPSLOCALGROUPDISK and ATLASLOCALGROUPTAPE
- Backup tool for "local" users
- Transfer rates > 1 TB/hour to disks (see the estimate below); distance Prague - Pilsen: 100 km
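A quick back-of-the-envelope conversion of the quoted rate, assuming decimal units (1 TB = 10^12 bytes):

    # > 1 TB/hour sustained corresponds to roughly 2.2 Gbit/s on the link.
    tb_per_hour = 1.0
    gbit_per_s = tb_per_hour * 1e12 * 8 / 3600 / 1e9
    print(f"{tb_per_hour} TB/hour ~ {gbit_per_s:.1f} Gbit/s")   # ~2.2 Gbit/s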

e-Infrastructures in the Czech Republic: IT4I
- IT4I (IT4Innovations): Czech national supercomputing center, located in Ostrava (300 km from Prague)
- Founded in 2011, first cluster in 2013
- Initial funds mostly from the EU Operational Programme Research and Development for Innovations: 1.8 billion CZK (80 MCHF)
- Mission: to deliver scientifically excellent and industry-relevant research in the fields of high performance computing and embedded systems

Cluster Anselm
- Delivered in 2013
- 94 TFLOPS
- 209 compute nodes (180 nodes without accelerators)
- 16 cores per node (2x Intel Xeon E5-2665)
- 64 GB RAM per node
- bullx Linux Server release 6.3
- PBSPro
- Lustre FS for shared HOME and SCRATCH
- InfiniBand QDR and Gigabit Ethernet
- Access via login nodes

Cluster Salomon (2015)
- 2 PFLOPS peak performance; ranked no. 87 (Top500, 11/2017)
- 1008 compute nodes: 576 without accelerators, 432 with Intel Xeon Phi (MIC)
- 24 cores per node (2x Intel Xeon E5-2680v3)
- 128 GB RAM per node (or more)
- CentOS 6.9
- PBSPro 13
- Lustre FS for shared HOME and SCRATCH
- InfiniBand (56 Gbps)
- Access via login nodes
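Cross-checking the CPU core counts implied by the node counts above (accelerator cores not included):

    # Cores implied by the slides: nodes x CPU cores per node.
    anselm_cores  = 209 * 16    # 2x Intel Xeon E5-2665 per node  -> 3344 cores
    salomon_cores = 1008 * 24   # 2x Intel Xeon E5-2680v3 per node -> 24192 cores
    print(anselm_cores, salomon_cores)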

ATLAS SW Installation (diagram)
- Tier-2 Prague: CVMFS stratum 1 -> squid (squid.farm.particle.cz) -> ARC CE (arc-it4i.farm.particle.cz) over http
- The ATLAS software is synchronized via rsync over an sshfs mount to the Lustre servers (shared FS) at IT4I Ostrava, visible from the login nodes and compute nodes
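A sketch of the synchronization step shown in the diagram, assuming the ATLAS software tree is mirrored from CVMFS onto the IT4I Lustre file system through an sshfs mount; the account, mount point and target paths are hypothetical placeholders, not the production configuration.

    # Sketch only: mirror the CVMFS software tree to Lustre via sshfs + rsync.
    import subprocess

    REMOTE = "atlasusr@salomon.it4i.cz:/scratch/work/atlas/cvmfs"   # hypothetical Lustre target
    MOUNT  = "/mnt/it4i-cvmfs"                                      # hypothetical local mount point
    SOURCE = "/cvmfs/atlas.cern.ch/repo/sw/"                        # ATLAS software tree on CVMFS

    subprocess.run(["sshfs", REMOTE, MOUNT], check=True)            # expose remote Lustre locally
    subprocess.run(["rsync", "-a", "--delete", SOURCE, MOUNT + "/repo/sw/"], check=True)
    subprocess.run(["fusermount", "-u", MOUNT], check=True)         # unmount when done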

ATLAS jobs on Salomon (diagram)
- PanDA and aCT at CERN send jobs to the ARC CE (arc-it4i.farm.particle.cz) at Tier-2 Prague
- The ARC CE runs qsub via ssh on the Salomon login nodes (salomon.it4i.cz); the PBS server schedules the jobs onto the compute nodes
- Job I/O goes over sshfs to the shared /scratch; data are staged from/to the SEs at the Tier-2
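A minimal sketch of the submission path in the diagram: qsub executed over ssh on a Salomon login node, as the ARC CE does. The account, job-script path and the use of the qfree queue (mentioned on the next slide) are assumptions for illustration, not the real ARC CE configuration.

    # Sketch only: submit a PBS job on Salomon over ssh.
    import subprocess

    LOGIN = "atlasusr@salomon.it4i.cz"             # hypothetical account on a login node
    JOB   = "/scratch/work/atlas/jobs/pilot.pbs"   # hypothetical job script on shared /scratch

    result = subprocess.run(
        ["ssh", LOGIN, "qsub", "-q", "qfree", JOB],
        capture_output=True, text=True, check=True,
    )
    print("PBS job id:", result.stdout.strip())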

Jobs at Salomon (plot): limited to 100 concurrent jobs from the qfree queue

CZ-Tier2 vs Salomon: running jobs (plot)

CZ-Tier2 vs Salomon: CPU consumption (plot); IT4I delivered about 10% of the total

CZ-Tier2 vs Salomon: CPU/walltime efficiency (plot); about 85% at IT4I

CZ-Tier2 vs Salomon: I/O sizes (plot)
- Input at IT4I: 2.4 TB (0.15%)
- Output at IT4I: 5 TB (4.4%)
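Totals implied purely by the percentages on the slide, assuming they are fractions of the combined CZ-Tier2 + Salomon traffic:

    # Derived only from the quoted percentages.
    total_input_tb  = 2.4 / 0.0015   # ~1600 TB of input in total
    total_output_tb = 5.0 / 0.044    # ~114 TB of output in total
    print(f"total input ~ {total_input_tb:.0f} TB, total output ~ {total_output_tb:.0f} TB")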

Conclusion
- LHC experiment requirements cannot be covered by Tier-2 resources alone (a flat budget is expected for the next 4 years)
- External resources can significantly contribute to the CZ Tier-2 computing capacity: HPC, Cloud, HTCondor
- We greatly appreciate the possibility to use CESNET and IT4I resources.