NIIF HPC services for research and education

Presentation transcript:

NIIF HPC services for research and education
GPU Day 2015, Wigner Research Centre, 2015.05.20-21.
Dr. Tamás Máray, NIIF Institute

Content
- The NIIF Institute
- HPC infrastructure
- HPC services
- Access to the resources
- PRACE

The NIIF Institute
- NIIF stands for the National Information Infrastructure Development Institute of Hungary
- The NIIF Institute is a not-for-profit public organization
- It is the Hungarian NREN, running the Hungarnet network
- NIIF was founded in 1986, making it one of the oldest NRENs in Europe
- It is a founding member of DANTE and TERENA (now the GEANT Association)
- The user community of NIIF:
  - All universities and higher education institutions
  - All the academic research institutes
  - Nearly all the public collections (libraries, museums, archives)
  - All schools in Hungary
- The number of client organizations is more than 5000

The NIIF Institute
- NIIF designed, developed, and operates a dark-fiber-based DWDM backbone network covering the whole country
- 67 DWDM nodes, 80 lambdas per connection, up to 100 Gbps per lambda
- NIIF put into operation the very first 100 Gbps NREN service in Europe (for the CERN-Wigner datacenter)
- NIIF has 3 datacenters and 6 supercomputers
- NIIF provides 27 advanced services for the user organizations, among them cloud, storage, HPC, multimedia, AAI, etc.
- NIIF is a partner in many European R+D projects, including Geant, PRACE, EGI, Byte, etc.
- The number of employees is about 100

The NIIF network

History of HPC at NIIF
- 2001: Sun E10k
  - 60 Gflop/s, SMP architecture
  - 96 UltraSparc processors, 48 GB memory
  - Listed in the TOP500 (rank 428)
- Upgraded in several steps (the last in 2009) to a Sun F15k
  - ~900 Gflop/s
  - 216 processor cores, 400 GB memory

Today
- Several supercomputers
- Different architectures
- Distributed setup (4 locations)
- CPUs + coprocessors
- Several hundred Tflop/s total capacity
- Storage in the Pbyte range

Locations
- NIIF Institute, Budapest
- NIIF @ University of Debrecen
- NIIF @ University of Pécs
- NIIF @ University of Szeged

NIIF supercomputing services
- High utilization
- ~200 user projects
- Application areas: chemistry, physics, biology, astrophysics, geology, information technology, mathematics, geophysics, engineering, hydrology, medical research, life sciences, meteorology, agro sciences, economics, etc.

Budapest 1. HP CP4000SL
- Fat-node cluster architecture
- AMD Opteron Magny-Cours processors
- 5 Tflop/s
- 768 cores (2.2 GHz), 24 cores/node
- Redundant Infiniband QDR interconnect
- 2 TB of memory
- 50 TB of disk (Ibrix parallel FS)
- Linux (RHEL)
- Water-cooled racks
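
A rough cross-check of such figures (not part of the original slides): the theoretical double-precision peak of a CPU cluster is cores × clock × flops per cycle. Assuming the 4 DP flops per cycle per core typical of the Magny-Cours generation:

\[ R_{\mathrm{peak}} = N_{\mathrm{cores}} \cdot f \cdot \mathrm{FLOPs/cycle} = 768 \cdot 2.2\,\mathrm{GHz} \cdot 4 \approx 6.8\,\mathrm{Tflop/s} \]

The quoted 5 Tflop/s is of the same order; whether it denotes the theoretical peak or a sustained (Linpack-style) figure is not stated in the slides. The same formula applies to the other CPU-based systems below.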

Budapest 2. HP SL250s
- Cluster architecture
- Intel Xeon E5-2680 v2 @ 2.80 GHz
- 32 Tflop/s
- 14 nodes, 2 CPUs + 2 GPUs per node, 20 cores/node
- 28 Nvidia K20X GPUs
- Infiniband FDR interconnect
- 900 GB of memory
- 300 TB of disk (Lustre FS)
- Linux (RHEL)
- Nvidia Quadro K5000 based visualization
- Water-cooled racks
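
On GPU nodes like these (two K20X cards each), a common first step is simply to ask the CUDA runtime what it can see. The following minimal sketch uses only standard CUDA runtime API calls; it is an illustration, not NIIF-provided code, and the compiler/module environment on the machines is not described in these slides.

    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void)
    {
        /* Count the devices visible to the CUDA runtime (expect 2 per node here). */
        int count = 0;
        cudaError_t err = cudaGetDeviceCount(&count);
        if (err != cudaSuccess) {
            fprintf(stderr, "cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
            return 1;
        }
        printf("Visible CUDA devices: %d\n", count);

        /* Print basic properties of each device. */
        for (int i = 0; i < count; ++i) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            printf("  device %d: %s, %d multiprocessors, %.1f GB global memory\n",
                   i, prop.name, prop.multiProcessorCount,
                   prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
        }
        return 0;
    }

Compiled with nvcc (e.g. nvcc devquery.cu -o devquery), this should report the two Tesla K20X devices on a Budapest 2 or Debrecen 2 node.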

Pécs SGI UltraViolet 1000 (UV)
- ccNUMA ("SMP") architecture
- Intel Xeon Nehalem-EX processors
- 10.5 Tflop/s
- 1152 cores (2.66 GHz)
- Numalink 5 interconnect
- 6 TB of memory
- 500 TB of disk
- Linux (SLES 11)
- Water-cooled racks
- Nvidia Quadro FX5800 based visualization

Szeged HP CP4000BL
- Fat-node cluster architecture
- AMD Opteron Magny-Cours processors
- 15 + 8 Tflop/s
- 2112 CPU cores (2.2 GHz), 48 cores/node (SMP nodes)
- 12 Nvidia M2070 GPU boards
- Redundant Infiniband QDR interconnect (mesh)
- 5 TB of memory
- 240 TB of disk (Ibrix parallel FS)
- Linux (RHEL)
- Nvidia Quadro FX5800 based visualization

Debrecen 1. SGI Altix ICE 8400
- Cluster architecture
- Intel Westmere-EP processors
- 18 Tflop/s
- 1536 CPU cores (3.33 GHz)
- Redundant Infiniband QDR interconnect
- 6 TB of memory
- 500 TB of disk
- Linux (SLES 11)
- Water-cooled racks
- Nvidia Quadro FX5800 based visualization

Debrecen 2. HP SL250s
- Cluster architecture
- Intel Xeon E5-2650 v2 @ 2.60 GHz
- 202 Tflop/s (listed on the TOP500)
- 84 nodes, 2 CPUs + 2 GPUs per node
- 1344 CPU cores
- 168 Nvidia K20X GPUs
- Infiniband FDR interconnect
- 10 TB of memory
- 300 TB of disk (Lustre FS)
- Linux (SLES 11)
- Water-cooled racks
- Nvidia Quadro K6000 based visualization
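
Applying the same back-of-the-envelope estimate as above (again, not from the slides), and assuming the commonly quoted ~1.31 Tflop/s double-precision peak per K20X and 8 DP flops per cycle per Ivy Bridge core:

\[ R_{\mathrm{GPU}} \approx 168 \cdot 1.31\,\mathrm{Tflop/s} \approx 220\,\mathrm{Tflop/s}, \qquad R_{\mathrm{CPU}} \approx 1344 \cdot 2.6\,\mathrm{GHz} \cdot 8 \approx 28\,\mathrm{Tflop/s} \]

The GPUs thus account for almost 90% of a roughly 248 Tflop/s theoretical peak; the 202 Tflop/s headline figure is of the same order, whether it refers to peak or to a measured Linpack run.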

NIIF HPC services
- Aggregated computing capacity: ~290 Tflop/s
- Dedicated N×10 Gbps optical interconnection between the locations
- NIIF storage service: 7 PBytes
- Commercial application licenses (Matlab, Gaussian, Maple, Intel compilers, etc.)
- User support
- PRACE integration

Access to the resources
- Open only to NIIF community members (5000 organizations!)
- Entirely dedicated to research and education
- Only for non-commercial usage
- Free of charge
- Preliminary review of subscriptions (project proposals)
- Users must report periodically

PRACE
- Strategic program supported by the European Commission
- Part of the European eInfrastructure

PRACE

PRACE
- European HPC cooperation
- A whole ecosystem of HPC resources and services, including education and training
- PRACE research projects (1IP, 2IP, 3IP, 4IP)
- Hierarchical infrastructure
- World-class resources
- 6 Tier-0 centres and 23 Tier-1 centres

PRACE hierarchy

Thank you!
Dr. Tamás Máray, NIIF Institute