This work is supported by the projects Research Infrastructure CERN (CERN-CZ, LM2015058) and OP RDE CERN Computing (CZ.02.1.01/0.0/0.0/16_013/0001404), funded from EU funds and by MEYS.

ATLAS Computing at Czech HPC Center IT4I
Jiří Chudoba, Michal Svatoš
Institute of Physics (FZU) of the Czech Academy of Sciences
2. 2. 2018

IT4I
IT4I – IT4Innovations, the Czech national supercomputing center, located in Ostrava (300 km from Prague)
Founded in 2011, first cluster in 2013
Initial funds mostly from the EU Operational Programme Research and Development for Innovations: 1.8 billion CZK (80 MCHF)
Mission: to deliver scientifically excellent and industry-relevant research in the fields of high performance computing and embedded systems

Cluster Anselm
Delivered in 2013
94 TFLOPs
209 compute nodes, 180 nodes without accelerators
16 cores per node (2x Intel Xeon E5-2665)
64 GB RAM per node
bullx Linux Server release 6.3
PBSPro
Lustre FS for shared HOME and SCRATCH
Infiniband QDR and Gigabit Ethernet
Access via login nodes
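
For illustration, a minimal sketch of building and submitting a single-node job to PBSPro on a cluster like Anselm. The queue name, project ID and walltime below are placeholders, not values taken from the slides.

```python
#!/usr/bin/env python3
"""Minimal sketch: build and submit a single-node PBSPro job.

The queue name, project ID and walltime are illustrative placeholders.
"""
import subprocess
import tempfile

JOB_SCRIPT = """#!/bin/bash
#PBS -N atlas-test
# queue, project and walltime are placeholders, not site values
#PBS -q qfree
#PBS -A OPEN-00-00
#PBS -l select=1:ncpus=16
#PBS -l walltime=24:00:00

cd "$PBS_O_WORKDIR"
echo "Running on $(hostname) with $(nproc) cores"
# the actual payload (e.g. an ATLAS job wrapper) would start here
"""

def submit(script_text: str) -> str:
    """Write the job script to a temporary file, hand it to qsub, return the job ID."""
    with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
        f.write(script_text)
        path = f.name
    result = subprocess.run(["qsub", path], check=True,
                            capture_output=True, text=True)
    return result.stdout.strip()

if __name__ == "__main__":
    print("Submitted job", submit(JOB_SCRIPT))
```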

Cluster Salomon - 2015
2 PFLOPs peak performance - no. 87 on the November 2017 TOP500 list
1008 compute nodes: 576 without accelerators, 432 with Intel Xeon Phi MIC
24 cores per node (2x Intel Xeon E5-2680v3)
128 GB RAM per node (or more)
CentOS 6.9
PBSPro 13
Lustre FS for shared HOME and SCRATCH
Infiniband (56 Gbps)
Access via login nodes, port forwarding allowed
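
Since port forwarding through the login nodes is allowed, an external service can be reached through an SSH tunnel. A hedged sketch follows; the login node name, account, hosts and ports are placeholders, not real Salomon endpoints.

```python
#!/usr/bin/env python3
"""Sketch: open an SSH port-forwarding tunnel through a login node.

The login node name, account, hosts and ports are placeholders.
"""
import subprocess

LOGIN_NODE = "login1.salomon.example"  # placeholder, not the real hostname
USER = "atlasusr"                      # placeholder account

def open_tunnel(local_port: int, remote_host: str, remote_port: int) -> subprocess.Popen:
    """Forward local_port to remote_host:remote_port through the login node."""
    cmd = [
        "ssh", "-N",  # -N: no remote command, tunnel only
        "-L", f"{local_port}:{remote_host}:{remote_port}",
        f"{USER}@{LOGIN_NODE}",
    ]
    return subprocess.Popen(cmd)

if __name__ == "__main__":
    # Example: expose a hypothetical remote service locally on port 8080.
    tunnel = open_tunnel(8080, "service.internal.example", 80)
    print("Tunnel running with PID", tunnel.pid)
```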

ATLAS jobs on Anselm
Solution similar to TITAN
Needs some changes for a different environment
Work in progress

ATLAS jobs on Salomon
Software installed by rsync from the site CVMFS
A special PanDA queue on praguelcg2 (the CZ Tier-2 site)
ARC CE (arc-it4i):
- accepts jobs from PanDA
- downloads input files to the sshfs-mounted SCRATCH on Salomon
- submits jobs via a login node
- uploads log and output files from SCRATCH
The solution based on ARC CE was first introduced to ATLAS for SuperMUC and CSC.
Many thanks to Rod Walker, Gianfranco Sciacca, Jaroslava Schovancová (test jobs), David Cameron, Petr Vokáč, Emmanouil Vamvakopoulos
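
A hedged sketch of the staging pattern described above, seen from the ARC CE side: mount SCRATCH over sshfs, sync the ATLAS software tree from the site's CVMFS, and submit through the login node. All hostnames, paths and the account name are placeholders, not the production configuration.

```python
#!/usr/bin/env python3
"""Sketch of the ARC CE-side staging pattern: sshfs-mounted SCRATCH,
software sync from the site CVMFS, job submission via the login node.

Hostnames, paths and the account are placeholders.
"""
import subprocess

LOGIN = "atlasusr@login1.salomon.example"       # placeholder login node
MOUNTPOINT = "/arc/session/salomon-scratch"     # placeholder local mount point
REMOTE_SCRATCH = "/scratch/work/user/atlasusr"  # placeholder remote path
CVMFS_SW = "/cvmfs/atlas.cern.ch/repo/sw/"      # ATLAS software tree on the site CVMFS client

def run(cmd):
    """Run a command, echoing it first; fail loudly on errors."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def mount_scratch():
    """Mount the remote SCRATCH so staging looks like local I/O to the ARC CE."""
    run(["sshfs", f"{LOGIN}:{REMOTE_SCRATCH}", MOUNTPOINT])

def sync_software():
    """Copy the software tree from CVMFS to SCRATCH (the compute nodes have no CVMFS)."""
    run(["rsync", "-a", "--delete", CVMFS_SW, f"{MOUNTPOINT}/sw/"])

def submit(remote_script_path: str):
    """Submit the prepared job script to PBSPro through the login node."""
    run(["ssh", LOGIN, "qsub", remote_script_path])

if __name__ == "__main__":
    mount_scratch()
    sync_software()
    submit(f"{REMOTE_SCRATCH}/jobs/panda_job.pbs")  # placeholder script name
```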

Jobs at Salomon
Limit of 100 from the qfree queue

CZ-Tier2 vs Salomon: Running jobs

CZ-Tier2 vs Salomon: Running job slots

CZ-Tier2 vs Salomon: Completed jobs
Completed = successful + failed

CZ-Tier2 vs Salomon: Njobs
Job failures at Salomon on 12.1.-14.1. were caused by jobs from a release that was incomplete on the scratch disk.
Other failures: boost::filesystem::status: Permission denied: "/var/spool/PBS/mom_priv/hooks/resourcedef"; the reason why some jobs need this file is under investigation.

CZ-Tier2 vs Salomon: Wallclock usage

CZ-Tier2 vs Salomon: Efficiency

CZ-Tier2 vs Salomon: Processed events

CZ-Tier2 vs Salomon: Input Size

CZ-Tier2 vs Salomon: Output Size

Conclusion
HPC resources can significantly contribute to the CZ Tier-2 computing capacity.
We greatly appreciate the possibility to use IT4I resources and the very good support from the IT4I team.