HPC need and potential of ANSYS CFD and Mechanical products at CERN. A. Rakai, EN-CV-PJ, 5/4/2016

Outline
- «New» CFD projects
- ANSYS HPC strategy
- Relevant HPC at CERN

«New» CFD project 1: accidental release modelling
- Gaussian plume type: uniform flow field
- Computational fluid dynamics: resolving the flow field
(Figures: Gaussian plume vs. computational fluid dynamics, from COST Action ES1006)

Demonstrative calculations on a smaller domain: only a few buildings around Isolde
(Figure: pollutant source next to a building; far-field wind direction)

Devil is in the details: close to source – iso-surface view
(Figure: iso-surface view near the source; wind direction indicated)

«New» CFD project 2: ATLAS cavern ventilation
- New technical student: Eugenia Rocco

CERN HPC needs in general
Initiative from G. Arduini (BE-ABP) to collect similar needs and present them to F. Hemmer:
- P. Collier – BE
- R. Jones – BE-BI
- E. Jensen – BE-RF
- S. Gilardini – EN-STI
- M. Battistin – EN-CV
- F. Bertinelli, F. Carra – EN-MME
- P. Chiggiato – TE-VSC
- L. Bottura, Duarte Ramos – TE-MSC

Outline
- «New» CFD projects
- ANSYS HPC strategy
- Relevant HPC at CERN

ANSYS HPC strategy – scaling
- Scales well up to ~ cores
- Good scaling with 5500 cells/core: interest for even small CFD models (1 million cells) to run on ~200 cores

ANSYS HPC strategy – scaling
- Good scaling with 5500 cells/core: interest even for small models (1 million cells)
- Also for mechanical applications up to ~100 cores
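
As a rough illustration of the cells-per-core rule of thumb quoted on these slides, a minimal Python sketch (the 5500 cells/core target comes from the slides; the rounding and the example mesh sizes are illustrative):

```python
# Rough sizing sketch: how many cores keep a mesh near a target cells-per-core
# ratio. The 5500 cells/core default is the figure quoted on the slides.

def suggested_cores(n_cells: int, cells_per_core: int = 5500) -> int:
    """Core count that keeps roughly `cells_per_core` cells on each core."""
    return max(1, round(n_cells / cells_per_core))

if __name__ == "__main__":
    for cells in (1_000_000, 10_000_000, 100_000_000):
        print(f"{cells:>11,} cells -> ~{suggested_cores(cells)} cores")
    # 1,000,000 cells -> ~182 cores, consistent with the ~200 cores on the slide
```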

ANSYS HPC strategy – interconnection

ANSYS HPC strategy – cloud

ANSYS HPC strategy – benchmarking
- Automatic toolkit with a Perl script, to be run from the installation directory; ~60 GB of disk space is required

Outline
- «New» CFD projects
- ANSYS HPC strategy
- Relevant HPC at CERN

History of IT CFD meetings
- Single machine, several CPUs
- No hyperthreading
- Low-latency interconnection
- Linux performs better than Windows

Benchmarking 2013:

CERN HPC – EngPara (Linux)
- Low-latency Ethernet cards (InfiniBand?)
- Batch runs with MPI
- 12 cores/node, 64 GB of memory
- lxbatch/eng, only for parallel applications
- For two user groups: CFD and BE users
- Long experience within the CFD group (logging in to nodes, checking availability), but people have left
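
As a sketch of what a batch submission on such an LSF-based cluster could look like, assuming a parallel queue named `eng` and ANSYS Fluent's usual batch flags (`-t` process count, `-g` no GUI, `-i` journal file); the queue name, core count and file names are illustrative assumptions, not the cluster's actual configuration:

```python
# Hypothetical LSF submission of a parallel Fluent run on an EngPara-style cluster.
# Queue name, core count and file paths are illustrative assumptions.
import subprocess

def submit_fluent_job(journal: str, cores: int = 24, queue: str = "eng") -> None:
    """Build and submit a bsub command that launches Fluent in batch mode."""
    cmd = [
        "bsub",
        "-q", queue,            # assumed name of the parallel queue
        "-n", str(cores),       # total MPI ranks requested from LSF
        "-o", "fluent_%J.out",  # LSF job output file (%J = job ID)
        "fluent", "3ddp",       # 3D double-precision solver
        f"-t{cores}",           # Fluent parallel process count
        "-g",                   # no GUI / batch mode
        "-i", journal,          # journal file driving the run
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    submit_fluent_job("hvac_case.jou", cores=24)
```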

CERN HPC – HpcEng (Windows)
- HpcEngCluster: Windows Server 2012 R2 and Windows HPC Pack
- 1024 cores in total
- Low-latency interconnect (InfiniBand?)
- Only 1 year of experience (no memory-usage or availability check), but more accessible for new users

CERN HPC – HpcEng (Windows)
- HpcEngCluster: Windows Server 2012 R2 and Windows HPC Pack
- 16 × 32-core nodes, 512 GB RAM
- 32 × 16-core nodes, 128 GB RAM
- Only 1 year of experience, but more accessible for new users
- How to know which nodes are free, and how much memory the busy ones use?
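
A comparable sketch for the Windows side, using Microsoft HPC Pack's `job submit` command-line scheduler; the option names follow HPC Pack, but the job name, core count, solver command and file names are illustrative placeholders, not the cluster's actual setup:

```python
# Hypothetical submission to a Windows HPC Pack cluster via "job submit".
# The solver command line and all names below are illustrative placeholders.
import subprocess

def submit_hpc_pack_job(solver_cmd: list[str], cores: int = 32,
                        name: str = "cfd-run") -> None:
    """Ask the HPC Pack scheduler for `cores` cores and run the given command."""
    cmd = [
        "job", "submit",
        f"/jobname:{name}",
        f"/numcores:{cores}",  # total cores requested from the scheduler
        "/stdout:run.log",     # capture the solver's standard output
        *solver_cmd,           # command executed on the allocated nodes
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    # Placeholder solver invocation; the real Fluent/Mechanical command line
    # depends on the site installation.
    submit_hpc_pack_job(["mpiexec", "my_solver.exe", "input.dat"], cores=32)
```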

Benchmarking 2015:
- Speed-up comparison of the LSF and Windows clusters in 2015: HVAC case, R15.0
- Ideal speed-up is linear: a calculation runs n times faster on n cores
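
To turn measured wall-clock times into the speed-up and parallel-efficiency numbers such a comparison reports, a small Python helper (the timings in the example are placeholders, not the 2015 measurements):

```python
# Speed-up and parallel efficiency from wall-clock times.
# t1 is the reference (e.g. single-core) time, tn the time on n cores.

def speedup(t1: float, tn: float) -> float:
    return t1 / tn

def efficiency(t1: float, tn: float, n: int) -> float:
    """1.0 means ideal linear scaling: n cores -> n times faster."""
    return speedup(t1, tn) / n

if __name__ == "__main__":
    # Placeholder timings (seconds) for illustration only.
    runs = {1: 1000.0, 8: 140.0, 16: 80.0, 32: 48.0}
    t_ref = runs[1]
    for n, t in runs.items():
        print(f"{n:>3} cores: speed-up {speedup(t_ref, t):5.1f}, "
              f"efficiency {efficiency(t_ref, t, n):4.0%}")
```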

Benchmarking 2016 – winHPC:
- HVAC case, R15.0
- HVAC case, R16.2
- Outdoor case, R16.2
- Not able to repeat last year's performance

Benchmarking 2016 – PC:
- HP Z800 workstation: 24 GB RAM, 8-core Intel Xeon CPU
- Far from ideal performance

CERN ANSYS license monitoring
- Maximum cores usable in theory for one job with the existing licenses: 16 (aa_r_cfd) + the available aa_r_hpc licenses = 400
- HPC is apparently not widely used (could not find statistics)
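
A tiny sketch of the core budget this arithmetic implies, assuming (as the "16 + aa_r_hpc = 400" figure suggests) that one aa_r_cfd solver licence covers 16 cores and each aa_r_hpc licence enables one additional core; the licence counts are parameters for illustration, not a statement of CERN's actual inventory:

```python
# Hypothetical licence-to-cores arithmetic, assuming a 16-core solver licence
# plus one extra core per aa_r_hpc licence. Counts below are illustrative.

def max_cores_per_job(solver_base_cores: int, hpc_licenses: int) -> int:
    return solver_base_cores + hpc_licenses

if __name__ == "__main__":
    # 16 cores from aa_r_cfd plus 384 aa_r_hpc licences would give the
    # 400-core theoretical maximum quoted on the slide.
    print(max_cores_per_job(solver_base_cores=16, hpc_licenses=384))  # 400
```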

The only license usage statistics I found, from 2015: EN-CV, with one person using ANSYS, uses a lot.

Conclusion 1
- CFD and other (BE department) codes have more potential (theoretically thousands of cores) than the CERN HPC capacity can currently provide, and this need is growing
- Windows- or Linux-based cluster: both have advantages and disadvantages
- Investigating cloud possibilities can be interesting for optimizing usage

Conclusion 2
CFD calculations can benefit from this growing parallel capacity:
- Faster results
- More (numerically) accurate results
- More scenarios investigated

ANSYS HPC strategy – best practices
1. Don't Move the Data (More Than You Have to)
2. Remote Graphics & Graphics User Interface for End-to-End Simulation
3. Secure Network Communications & Data Storage
4. Effective End-User Access for Job & Data Management
5. Re-Use On-Premise Licenses (or Not)
6. Consider a Mix of Business Models
7. Match Your Cloud to Your HPC Workload
8. Start Small, Grow Organically… but Think Big

ANSYS HPC strategy – tests
- "Technical Computing Power When You Need It": UberCloud is the online community and marketplace where engineers and scientists discover, try and buy Computing as a Service
- UberCloud Experiment: a free experiment with 1000 free CPU hours, professional providers (ANSYS) and experts; possibility to bring your own licenses