Evolution at CERN. E. Da Riva, CFD team supports CERN development, 19 May 2011.

Presentation transcript:

Evolution at CERN. E. Da Riva, 19 May 2011.

SUMMARY
1. High performance computing
2. Open source CFD software

Opportunities …
- PC speed and parallel calculation have reduced numerical solution time (and cost)
- Meshing time (and cost) has decreased dramatically
- CFD is more and more integrated into the design tools: automatic meshing, model/surface import, subroutine facilities, CAD integration
- Interfaces are more and more user friendly
- CFD is cheaper and cheaper… but: the tool is easy to access and always gives a result…

…specific knowledge is required
- Efficiency in building the right model
- Selection of the right numerical solver
- Sound judgement in interpreting the results
Training, experience, knowledge, problem sharing: the CV group has had a CFD team since 1993.

RESOURCES

ENGINEERING BATCH CLUSTER
- 20 Viglen CPU servers
- 8 cores per server (Intel "Nehalem" L5520 chips)
- 48 GB of memory
- Low-latency "NetEffect" 10 Gb Ethernet
- Optimized for MPI applications under LSF
- Dedicated to the CFD Team and BE users

CFD SOFTWARE (80 licenses)
- STAR-CCM+
- STAR-CD
- OpenFOAM

PERFORMANCE EVALUATION 1/3
Parallel computing speeds up CFD. The computational domain is split by domain decomposition and each subdomain is assigned to a different core (see the sketch below).
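To make the idea concrete, here is a minimal sketch (not from the presentation) of how a domain-decomposed solver behaves, using 1-D heat diffusion with MPI as a stand-in for a real CFD code. Each rank owns a slice of the mesh and exchanges one layer of halo cells with its neighbours at every iteration; that per-step communication is exactly why the interconnect matters, as the next slide shows. Sizes and iteration counts are illustrative, and the global cell count is assumed divisible by the number of ranks.

    // domain_decomposition.cpp -- illustrative sketch, not CERN's benchmark code
    #include <mpi.h>
    #include <vector>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        const int N = 1000000;        // global number of cells (illustrative)
        const int n = N / nprocs;     // cells owned by this rank
        // local slice with one ghost (halo) cell at each end
        std::vector<double> u(n + 2, 0.0), unew(n + 2, 0.0);
        if (rank == 0) u[1] = 1.0;    // an initial hot spot

        const int left  = (rank > 0)          ? rank - 1 : MPI_PROC_NULL;
        const int right = (rank < nprocs - 1) ? rank + 1 : MPI_PROC_NULL;

        for (int step = 0; step < 100; ++step) {
            // halo exchange: every iteration each rank trades its edge
            // cells with its neighbours -- latency-bound communication
            MPI_Sendrecv(&u[1],     1, MPI_DOUBLE, left,  0,
                         &u[n + 1], 1, MPI_DOUBLE, right, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Sendrecv(&u[n],     1, MPI_DOUBLE, right, 1,
                         &u[0],     1, MPI_DOUBLE, left,  1,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            // explicit diffusion update on the interior cells only
            for (int i = 1; i <= n; ++i)
                unew[i] = u[i] + 0.25 * (u[i - 1] - 2.0 * u[i] + u[i + 1]);
            u.swap(unew);
        }
        if (rank == 0) std::printf("rank 0 first cell after 100 steps: %g\n", u[1]);
        MPI_Finalize();
        return 0;
    }

Built with mpicxx and launched with, for example, mpirun -np 8, the compute loop shrinks with the number of ranks while the halo exchange adds a per-iteration communication cost, which is what makes a low-latency interconnect essential.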

PERFORMANCE EVALUATION 2/3
Fast interconnection is essential. The comparison runs the same case with the domain decomposed across different nodes of the cluster and with the domain decomposed within one single node.

PERFORMANCE EVALUATION 3/3
The extra computing power can be spent in three ways:
- Reducing computational time
- Solving more complex problems/geometries
- Solving different versions of the same case

OpenFOAM (Open Field Operation And Manipulation)
- Open source toolbox
- Meshing, solving and post-processing all work in parallel (see the sketch below)
- No license constraints
- Conversion to/from the major commercial code formats
- Higher flexibility
- Better exploitation of the computational resources
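As a concrete illustration of the workflow (not shown in the talk), a parallel OpenFOAM run is driven by a small dictionary that tells the decomposer how to split the mesh. A hypothetical system/decomposeParDict for eight subdomains using the simple geometric method might look like this; all values are illustrative:

    // system/decomposeParDict -- illustrative values only
    FoamFile
    {
        version     2.0;
        format      ascii;
        class       dictionary;
        object      decomposeParDict;
    }

    numberOfSubdomains  8;

    method              simple;     // plain geometric split

    simpleCoeffs
    {
        n               (2 2 2);    // 2 x 2 x 2 = 8 subdomains
        delta           0.001;      // cell skew factor
    }

The case is then split with decomposePar, solved with mpirun -np 8 <solver> -parallel, and reassembled with reconstructPar; the mesh and data converters advertised on the slide plug into the same case structure.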

VALIDATION 1/2: VELOCITY FIELD
[Figure: side-by-side velocity fields computed with STAR-CCM+ and OpenFOAM for a radio-frequency cavity, mass flow rate 0.3 kg/s]

VALIDATION 2/2: PRESSURE DROP
OpenFOAM and STAR-CCM+ predicted the same pressure drop values.

CONCLUSIONS
1. CFD is very useful at CERN (design speed-up, insight analysis, …)
2. The large computing resources available at CERN allow fast and complex CFD analyses to be performed
3. The CFD Team of EN/CV has the specific knowledge required
4. The CFD Team is enlarging the set of CFD tools available in order to make better use of the resources and increase flexibility

THANK YOU FOR YOUR ATTENTION