H2020 EU PROJECT | Topic SC1-DTH-07-2018 | GA: 826494
T2.3 (M1-M48): HPC Resources for PRIMAGE Simulations
Marek Kasztelnik, Tomasz Gubała, Marian Bubak | Academic Computer Centre Cyfronet AGH, Kraków, Poland
12/12/2018 | http://dice.cyfronet.pl/

PLGrid Infrastructure
- >7,000 users; all five Polish academic HPC centres integrated
- Synergy between domain-specific researchers and IT experts
- Computing resources: 5+ PFLOPS, 130,000+ cores
- Scientific software: 500+ applications, tools and libraries (http://apps.plgrid.pl)
- Storage: 60+ PB, with archives, backups, distributed access and a fast scratch filesystem
- Tools for collaboration: project tracking (JIRA), version control (Git), teleconferencing (Adobe Connect)
- Computational Cloud (PaaS based on OpenStack)

Primary HPC Resources for PRIMAGE
Prometheus cluster (fair-shared with other users), available through PL-Grid (the Polish Grid Infrastructure):
- 2.4 PFLOPS of computing power, 10 PB of disk, 282 TB of memory, InfiniBand FDR 56 Gb/s
- 2160 regular nodes (24 Xeon 2.5 GHz cores, 128 GB RAM)
- 72 GPGPU nodes (24 Xeon 2.5 GHz cores, 128 GB RAM, 2 NVIDIA K40 XL cards)
- 3 BigMem nodes (12 Xeon 3.4 GHz cores, 768 or 1536 GB RAM)
Many software modules and licences (e.g. Ansys, Matlab):
- Module/package browser available at https://apps.plgrid.pl/ (see the sketch after this slide)
- A dedicated licence server can be set up if PRIMAGE acquires additional, dedicated licences
- PLGrid-provided licences are available on a fair-share basis
Support for MPI, Singularity and RSM (Ansys)
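As an illustration of how the pre-installed software is typically picked up on such a cluster, a minimal sketch of an interactive session on a login node follows; the module names plgrid/tools/openmpi and plgrid/apps/matlab are assumptions, the actual catalogue is browsable at https://apps.plgrid.pl/:

    # After logging in over SSH: browse and load environment modules
    module avail                         # list software modules installed on the cluster
    module load plgrid/tools/openmpi     # hypothetical module name: an MPI implementation
    module load plgrid/apps/matlab       # hypothetical module name: a licensed application
    module list                          # confirm which modules are currently loaded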

HPC Resources: Procedure
Procedure and requirements:
- PRIMAGE users who want to use HPC need to register in the PLGrid Infrastructure (https://portal.plgrid.pl/)
  - User manual: https://docs.cyfronet.pl/display/PLGDoc/User+Manual
  - Exact instructions on how to affiliate oneself within PLGrid will be provided by Cyfronet
- The PRIMAGE project needs to negotiate a computing grant (number of CPU hours, storage, etc.)
  - Cyfronet will handle this, but first we need to know:
    - your approximate CPU/disk consumption requirements in the first PRIMAGE year
    - any special requirements (GPGPUs, BigMem jobs, or jobs expected to run longer than 72 h)
  - Grants are negotiated annually and can be renewed and/or extended if we run out of resources
  - All published PRIMAGE research supported by such a computational grant should be co-authored by Cyfronet
- Basic mode of use: SSH and Slurm (queuing system) for submitting jobs (a minimal example follows this slide)
  - Manual: https://kdm.cyfronet.pl/portal/Prometheus:Podstawy (in Polish only, apologies)
  - Advanced modes, UIs and APIs: covered in the following presentation (T2.5)
- All on-cluster data are accessible only to members of the PRIMAGE project
  - A dedicated PLGrid group (plggprimage) has been set up, and we control who belongs to it
  - Group storage separation is enforced at the OS/POSIX file access rights level
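For orientation only, a Slurm batch script of the kind submitted over SSH might look as follows; this is a sketch under stated assumptions: the partition name, the grant account plgprimage, the module name and the executable are placeholders, and the linked manual documents the actual values:

    #!/bin/bash
    #SBATCH --job-name=primage-test       # job name shown in the queue
    #SBATCH --nodes=1                     # one regular node (24 cores, 128 GB RAM)
    #SBATCH --ntasks-per-node=24          # one MPI rank per core
    #SBATCH --time=01:00:00               # wall-time limit; jobs over 72 h need special arrangement
    #SBATCH --account=plgprimage          # hypothetical grant account name
    #SBATCH --partition=plgrid            # hypothetical partition name

    module load plgrid/tools/openmpi      # hypothetical module providing an MPI implementation
    mpiexec ./my_simulation input.dat     # placeholder executable and input file

    # Submit from a login node with:   sbatch job.sh
    # Inspect the queue with:          squeue -u $USER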

http://dice.cyfronet.pl/ | Marek Kasztelnik | m.kasztelnik@cyfronet.pl