Ernst Haunschmid, TU WIEN EOSC, 30th October 2018


Vienna Scientific Cluster
A Joint High Performance Computing Facility of Austrian Universities

Besides theory and experiment, numerical simulation has become a third pillar of science. The VSC, a joint project of Austrian universities, satisfies the high-performance computing needs of scientists from all member universities.

The First 10 Years
- Collaboration established 2008
- VSC-1 installed 2009: ranked #156 in the Top500, 32 Tflop/s
- VSC-2 installed 2011: ranked #56 in the Top500, 150 Tflop/s
- New partners in 2013
- VSC-3 installed 2014: ranked #85 in the Top500 and #86 in the Green500, 600 Tflop/s
- More partners in 2016
- VSC-4 awarded end of 2018: about 2,000 Tflop/s

Organization
- Governance: the Steering Committee advises the rectorates of the member universities, decides on project applications, grants resources, and coordinates between the partners. The VSC is not a legal entity.
- Finance: funded by the Federal Ministry of Science, Research and Economy via the budgets of the member universities; additional funding via HRSM projects; pay-per-use for other research-oriented partners (e.g. EODC).
- Operation: TU Wien has been assigned to operate the VSC systems; the VSC team supports the user community.

VSC-2
- Installed summer 2011
- 1,220 AMD Opteron nodes
- Mellanox QDR InfiniBand
- Air cooled, with water-cooled rear doors (20 °C water input temperature, 18 °C in the outer loop)
- Hybrid cooling system: free cooling (up to 15 °C) and chiller-based cooling

VSC-2 power consumption (chart)

VSC-3
- Optimized for performance and efficiency
- Installed summer 2014
- 2,020 Supermicro-based compute nodes with Intel Xeon E5-2650 v2 (8 cores, 2.6 GHz) and 64 GB main memory
- Two Intel QDR InfiniBand HCAs per node; Intel dual-rail QDR InfiniBand
- Fabric segmented into 8 islands of about 4,500 cores each
- Immersion cooling
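As a rough consistency check against the 600 Tflop/s figure in the timeline, the theoretical peak can be estimated from the node specs above. A minimal sketch, assuming dual-socket nodes and 8 double-precision FLOPs per cycle per core (AVX on Ivy Bridge); neither assumption is stated on the slide:

```python
# Back-of-envelope peak-performance estimate for VSC-3.
nodes = 2020
sockets_per_node = 2      # assumption: dual-socket nodes
cores_per_socket = 8      # Xeon E5-2650 v2
clock_ghz = 2.6
flops_per_cycle = 8       # assumption: AVX, double precision

cores = nodes * sockets_per_node * cores_per_socket
peak_tflops = cores * clock_ghz * flops_per_cycle / 1000

print(cores)        # 32320 cores in total
print(peak_tflops)  # ~672 Tflop/s theoretical peak
```

Under these assumptions the theoretical peak comes out around 672 Tflop/s, in the right ballpark for the quoted 600 Tflop/s, and 32,320 cores across 8 islands gives roughly 4,000 cores per island, close to the "about 4,500" on the slide.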

VSC-3 Cooling
- Oil-based immersion cooling, provided by Green Revolution Cooling (GRC), based in Texas
- More than 32,000 liters of white mineral oil
- Pump modules (PM) under the raised floor transfer the heat into a water loop
- No room (air) cooling; room temperature up to 45 °C
- InfiniBand core rack is air cooled
- Free cooling all year round
- Cooling overhead less than 3%: state-of-the-art energy efficiency
- Storage is located in a nearby room
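The "cooling overhead less than 3%" claim can be related to a PUE-style ratio. A sketch under the assumption that the overhead means cooling energy as a fraction of IT energy, ignoring other facility losses (the IT load value is purely illustrative):

```python
# Relating a 3% cooling overhead to a PUE-like ratio.
it_power_kw = 100.0       # hypothetical IT load, for illustration only
cooling_overhead = 0.03   # upper bound from the slide

total_power_kw = it_power_kw * (1 + cooling_overhead)
pue_like_ratio = total_power_kw / it_power_kw
print(pue_like_ratio)     # 1.03 -> at most ~1.03 from cooling alone
```

A full PUE figure would also include power distribution losses and other facility overheads, so this is a lower-bound-style estimate, not the facility's actual PUE.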

Science at VSC
- The VSC is primarily devoted to research
- Projects are peer reviewed, either by a funding agency (preferred) or by reviewers appointed by the Steering Committee
- Publications and projects are publicly visible on our web site
- More than 900 publications in scientific journals
- 300 active projects with 1,200 users; 900 projects since 2009

Challenges
- New topics: big data, data analytics, real-time data; machine learning, artificial intelligence
- Empowering scientists: computational resources and data storage
- Supporting scientific projects with new tools and methods
- Collaboration and partnership
- Interoperability

Thank You for Your Attention