1
Enabling HPC Simulation Workflows for Complex Industrial Flow Problems
C.W. Smith, S. Tran, O. Sahni, and M.S. Shephard, Rensselaer Polytechnic Institute
S. Singh, Indiana University
Outline
- Industry requires complete HPC workflows
- RPI efforts on HPC for industry
- Components for parallel adaptive simulation
- Science Gateway
- Application to complex industrial flow problems
2
HPC for Industry
Increasingly, industry requires parallel analysis to meet its simulation needs, with key drivers being:
- Higher spatial and temporal resolution
- More complex physics, with many multiphysics problems
- Increased use of validation and movement toward uncertainty quantification
Reasonable progress is being made on the analysis engines:
- Research codes that scale to nearly 1,000,000 cores on unstructured meshes
- Commercial codes improving scaling to thousands of cores for flow problems
- More reasonable software pricing models
However, the application of HPC in industry is growing slowly, even though the economics of the product design cycle indicate it should be growing quickly.
3
HPC for Industry
Why is the use of industrial HPC growing slowly? The analysis codes are available. What is missing?
To obtain the potential cost benefits, the entire simulation workflow must be integrated into the HPC environment:
- The workflow must include tools industry has spent years integrating and validating in their processes
- Need to use multiple CAD and CAE tools
- Effective industrial use of large-scale parallel computations will demand simulation reliability
- Must have a very high degree of automation – a human in the loop kills scalability and performance
- Need easy access to cost-effective parallel computers
- Must be able to do proprietary work
- Must have easy-to-use parallel simulation management
4
HPC for Industry
Approach being taken:
- A component-based approach to integrate from design through results quantification
- Link to industry design data (e.g., CAD geometry)
- Manage the model construction directly on massively parallel computers
- Support the use of multiple analysis engines
- Support simulation automation
- Support in-memory integration of components as much as possible to avoid I/O bottlenecks
- Provide a web-based portal for execution of massively parallel simulation workflows
This presentation will focus on components developed for parallel adaptive unstructured mesh simulations.
5
Rensselaer’s Efforts to Bring HPC to Industry
Scientific Computation Research Center (SCOREC)
- Parallel methods for unstructured meshes and adaptive simulation control
- Component-based methods for developing parallel simulation workflows
Center for Computational Innovations
- Petaflop IBM Blue Gene/Q and clusters
- Industry can gain guaranteed access to run proprietary applications (for a price less than cloud computing)
Programs for HPC for Industry
- HPC2 – New York State HPC consortium
- NSF Partnership for Innovation
- NSF XSEDE Industrial Challenge Program
[Figure: partitioned mesh showing on-node and off-node part boundaries between nodes i and j]
6
SCOREC’s Research Builds on Broad Partnerships
Interdisciplinary research program supported by:
- Government – NSF, DOE, DoD, NASA, NIH, NY State
- Strong industrial support – 46 different companies have supported SCOREC
Multiple pieces of software have been commercialized; the center has generated a software vendor
Multi-way partnerships are common:
- Large industry, software vendor, SCOREC
- SBIR from government agencies to software vendor and SCOREC
- Government laboratory, software vendor(s), SCOREC
- University, SCOREC, etc.
7
Center for Computational Innovations
IBM Blue Gene/Q petaflop computer
- 5,120 compute nodes (5 racks @ 1,024 nodes each)
- Each node has 16 A2 processing cores (a 17th core handles OS functions)
- 16 GB of RAM per node; 80 TB of RAM system-wide
- 56 Gb/s InfiniBand external network
- 160 nodes for data I/O
- 1.2 PB parallel file system
8
High Performance Computation Consortium (HPC2)
HPC2 is supported by the NYSTAR Division of the Empire State Development Agency
Its goal is to provide NY State industry support in the application of high performance computing technologies in:
- Research and discovery
- Product development
- Improved engineering and manufacturing processes
HPC2 works with the NY State Centers for Advanced Technology, which serve as focal points for technology transfer to industry
HPC2 is a distributed activity – key participants:
- Rensselaer
- Stony Brook/Brookhaven
- SUNY Buffalo
- NYSERNET
[Figure: time-averaged experimental vs. CFD results for an active flow control device, Cb = 1.2 (Sahni et al.)]
9
NSF Sponsored Activities on HPC for Industry
Partnership for Interoperable Components for Parallel Engineering Simulations
- Technologies to make construction of HPC workflows more efficient
- Component-based methods supporting combinations of open source and commercial software
- Mechanisms to help industry effectively apply HPC
NSF XSEDE Industrial Challenge Program
- Install components for parallel adaptive simulations on XSEDE machines
- Develop HPC workflows for industry on XSEDE machines
- Investigate use of Phi co-processors on the Stampede system for parallel adaptive unstructured mesh simulations
10
Recent Industrial Partners
ACUSIM (now Altair), Ames-Goldsmith, Blasch Ceramics, Boeing, Calabazas Creek Research, Corning, Crystal IS, GE, HyPerComp, IBM, ITT Goulds Pumps, Kitware, Northrop Grumman, Pliant, Procter & Gamble, Sikorsky, Simmetrix, Xerox
11
Component-Based Unstructured Mesh Infrastructure
[Diagram: parallel data & services – domain topology, mesh topology/shape, simulation fields, partition control, dynamic load balancing]
Parallel data and services are the core:
- Abstraction of geometric model topology (GMI or GeomSim)
- Mesh also based on topology – it must be distributed (PUMI or MeshSim); growing need for distributed geometry (GeomSim)
- Simulation fields are tensors with distributions over geometric model and mesh entities (APF or FieldSim)
- Partition control must coordinate communication and partition updates
- Dynamic load balancing is required at multiple steps in the workflow to account for mesh changes and application needs (Zoltan and ParMA)
PUMI, GMI, APF, and ParMA are SCOREC research codes; GeomSim, MeshSim, and FieldSim are component-based tools from Simmetrix; Zoltan is from Sandia National Labs.
[Figure: partitioned mesh showing on-node and off-node part boundaries between nodes i and j]
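As a rough illustration of how these components compose, the following is a minimal sketch of loading a distributed mesh and attaching a simulation field through the PUMI/GMI/APF interfaces. The headers and call signatures follow the open-source SCOREC core tools as I understand them and should be treated as assumptions that may differ by version; the model and mesh file names are placeholders.

```cpp
// Minimal sketch: load a geometric model and a distributed mesh (one part per
// MPI process) and attach a vertex field, using PUMI/GMI/APF.
#include <PCU.h>        // PUMI's communication layer
#include <gmi_mesh.h>   // GMI: geometric model interface
#include <apfMDS.h>     // PUMI's MDS mesh database
#include <apfMesh2.h>
#include <apf.h>        // APF: fields over the mesh
#include <mpi.h>

int main(int argc, char** argv) {
  MPI_Init(&argc, &argv);
  PCU_Comm_Init();
  gmi_register_mesh();  // register the native geometric model reader

  // Load the model and the partitioned mesh.
  apf::Mesh2* m = apf::loadMdsMesh("model.dmg", "mesh.smb");

  // A simulation field: a scalar with one value per mesh vertex.
  apf::Field* p = apf::createLagrangeField(m, "pressure", apf::SCALAR, 1);
  apf::zeroField(p);

  // ... an analysis component would read/write the field through the APF API ...

  apf::writeVtkFiles("out", m);  // visualization output

  apf::destroyField(p);
  m->destroyNative();
  apf::destroyMesh(m);
  PCU_Comm_Free();
  MPI_Finalize();
  return 0;
}
```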
12
Distributed Mesh and Partition Control
Distributed mesh requirements:
- On-part operations proceed without communication
- Communication goes through the partition model
Services:
- Mesh migration – moving mesh entities between parts
- Ghosting – read-only copies to reduce communication
- Changing the number of parts
[Figure: geometric model, partition model, and distributed mesh with inter-process and intra-process part boundaries]
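A minimal sketch of the migration service described above, assuming PUMI's apf::Migration interface (names and signatures as I understand the SCOREC core API); chooseDestination() is a hypothetical application-specific rule, not part of the library.

```cpp
// Build a migration plan that assigns selected elements to destination parts,
// then let the mesh database move them and rebuild the part boundaries.
#include <PCU.h>
#include <apf.h>
#include <apfMesh2.h>

int chooseDestination(apf::MeshEntity* e);  // hypothetical application-specific rule

void migrateElements(apf::Mesh2* m) {
  apf::Migration* plan = new apf::Migration(m);
  apf::MeshIterator* it = m->begin(m->getDimension());
  while (apf::MeshEntity* e = m->iterate(it)) {
    int dest = chooseDestination(e);
    if (dest != PCU_Comm_Self())
      plan->send(e, dest);  // element e will move to part "dest"
  }
  m->end(it);
  // Moves the elements and their closure, transfers attached data, and rebuilds
  // remote copies along the new part boundaries; the plan is consumed by the call.
  m->migrate(plan);
}
```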
13
Dynamic Load Balancing
Goal: equal “work” with minimum communication
Tools:
- Graph-based (ParMETIS, Zoltan)
- Geometry-based (Zoltan, Zoltan2)
- Mesh-based (ParMA)
- Local and global methods
Load balancing throughout the simulation:
- Need fast methods – they cannot dominate the run time
- Need predictive load balancing to account for mesh adaptation
- Need to account for the needs of specific workflow components
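A minimal sketch of invoking a balancer through the mesh API, here backed by Zoltan graph partitioning. The factory and enum names follow the SCOREC core apfZoltan header as I understand it and should be treated as assumptions; a ParMA-based balancer could be substituted through the same apf::Balancer interface.

```cpp
// Rebalance the distributed mesh, weighting each element equally. A predictive
// balance before adaptation would instead weight elements by expected refinement.
#include <apf.h>
#include <apfMesh2.h>
#include <apfZoltan.h>

void rebalance(apf::Mesh2* m) {
  // Attach a unit weight to every element.
  apf::MeshTag* weights = m->createDoubleTag("weight", 1);
  double w = 1.0;
  apf::MeshIterator* it = m->begin(m->getDimension());
  while (apf::MeshEntity* e = m->iterate(it))
    m->setDoubleTag(e, weights, &w);
  m->end(it);

  // REPARTITION starts from the current distribution so only a fraction of the
  // mesh migrates; PARTITION would repartition from scratch.
  apf::Balancer* balancer = apf::makeZoltanBalancer(m, apf::GRAPH, apf::REPARTITION);
  balancer->balance(weights, 1.05);  // allow ~5% imbalance
  delete balancer;

  m->destroyTag(weights);
}
```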
14
Gateway Execution
There is a high barrier to running HPC workflows: it requires knowledge of the filesystem, scheduler, scripting, runtime environment, compilers, … for each HPC system.
The XSEDE science gateway for PHASTA lowers that barrier:
- The user specifies the problem definition, simulation parameters, and required compute resources through the experiment creation web page
- Workflow steps are executed on the HPC system, the user is emailed, and output is prepared for download, with the option to delete or archive
- Scales to multiple users and systems
15
Gateway Creation and Maintenance
- A system and user software expert maintains builds and execution scripts: optimized builds and runtime parameters
- Web interface for defining the workflow
- XSEDE gateway developers quickly accommodate user requests through the SciGaP and Airavata APIs:
  - Output log monitoring – monitor job output from the web interface
  - Email notifications – completion, failure, application-specific milestones, …
  - Data persistence – an industrial user wants data deleted after the run
  - Configuring HPC system access – adding RPI Blue Gene/Q support
[Figures: twin-screw extruder axial velocity, (left) two threads of the screw and (right) cross-section across the extruder; PHASTA gateway experiment summary]
16
Component-Based Unstructured Mesh Infrastructure
17
File transfer is a serious bottleneck in parallel simulation workflows
- All core parallel data and services are accessed through APIs
- File-based workflows require no change to components; first implementations are often done via files, but using the APIs
- In-memory integration approaches use the APIs directly
- This supports an effective migration from file-based to in-memory operation for “file-based” codes – replace the I/O routines with routines that use the APIs to transfer data between data structures
- For more component-based codes, the in-memory integration was easier to implement than the file-based one
- In-memory integration has far superior parallel performance
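A minimal sketch of the file-to-in-memory migration pattern described above. All type and function names here (FieldWriter, FileWriter, ApiWriter, runStep) are hypothetical illustrations, not part of PHASTA, Chef, or APF; the point is that hiding the transfer step behind an interface lets the same analysis code stream data directly into the next component instead of going through the parallel file system.

```cpp
#include <string>
#include <vector>

// The analysis code originally dumped its solution to disk between workflow
// steps; this interface abstracts that hand-off.
struct FieldWriter {
  virtual void write(const std::string& name, const std::vector<double>& data) = 0;
  virtual ~FieldWriter() {}
};

// Original behavior: serialize to a file that the next component re-reads.
struct FileWriter : FieldWriter {
  void write(const std::string& name, const std::vector<double>& data) override {
    // e.g. fwrite(...) to <name>.dat on the parallel file system
  }
};

// In-memory behavior: hand the data to the next component through its API
// while both live in the same process, avoiding the file system entirely.
struct ApiWriter : FieldWriter {
  void write(const std::string& name, const std::vector<double>& data) override {
    // e.g. copy into a field on the shared mesh, or into the
    // pre-processor's native arrays
  }
};

// The analysis code only sees the interface, so switching between file-based
// and in-memory coupling is a one-line change at setup time.
void runStep(FieldWriter& out) {
  std::vector<double> solution(1000, 0.0);  // placeholder for the solved field
  // ... solve ...
  out.write("pressure", solution);
}
```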
18
Adaptive Loop Applications
Adaptive loops have been used to date for:
- Modeling of nuclear accidents and various flow problems with the University of Colorado’s PHASTA code
- Solid mechanics applications with Sandia’s Albany code
- Modeling fusion MHD with PPPL’s M3D-C1 code
- Accelerator modeling problems with SLAC’s ACE3P code
- Aerodynamics problems with NASA’s FUN3D code
- Waterway flow problems with ERDC’s Proteus code
- High-order fluids simulations with Nektar++
[Figures: modeling a dam break; plastic deformation of a mechanical part]
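A minimal sketch of the solve/estimate/adapt loop these applications share, using the SCOREC MeshAdapt (ma) interface. runSolver() and estimateError() are hypothetical stand-ins for the analysis code (PHASTA, Albany, M3D-C1, …) and its error estimator; the ma:: and apf:: calls follow the SCOREC core API as I understand it and may differ between versions.

```cpp
#include <apf.h>
#include <apfMesh2.h>
#include <ma.h>

void runSolver(apf::Mesh2* m);             // hypothetical analysis step
apf::Field* estimateError(apf::Mesh2* m);  // hypothetical: returns a vertex size field

void adaptiveLoop(apf::Mesh2* m, int maxIterations) {
  for (int i = 0; i < maxIterations; ++i) {
    runSolver(m);                           // analysis on the current mesh
    apf::Field* sizes = estimateError(m);   // desired edge length at each vertex

    ma::Input* in = ma::configure(m, sizes);  // configure MeshAdapt with the size field
    in->shouldRunPreZoltan = true;            // predictive balance before refinement
    in->shouldRunPostParma = true;            // ParMA cleanup after adaptation
    ma::adapt(in);                            // refine/coarsen; solution fields are transferred

    apf::destroyField(sizes);
    apf::verify(m);                           // consistency check of the distributed mesh
  }
}
```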
19
Complex Flow Simulations
20
Active Flow Control on Vertical Tail
- Active flow control (AFC) on the vertical tail improves its effectiveness, resulting in a reduction of the drag/size of the tail
- Massively parallel, petascale simulations provide tremendous physical insight
- Integrated experimental and numerical investigation at the University of Colorado Boulder and RPI
21
Adaptive Two-Phase Flow
- Two-phase modeling using level sets coupled to structural activation
- Adaptive mesh control reduces the mesh required from 20 million elements to 1 million
- New ARO project using explicit interface tracking to track reacting particles
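To illustrate the kind of adaptive mesh control that yields such savings, here is a minimal sketch of a size field that concentrates resolution near the two-phase interface: fine elements where the level-set distance is near zero, coarse elements away from it. The IsotropicFunction interface follows the SCOREC MeshAdapt (ma) API as I understand it; the numeric sizes and the levelSet field accessor are illustrative assumptions, not values from the work described above.

```cpp
#include <apf.h>
#include <apfMesh2.h>
#include <ma.h>
#include <algorithm>
#include <cmath>

// Desired edge length at each vertex, driven by a signed-distance level set.
class InterfaceSizeField : public ma::IsotropicFunction {
 public:
  InterfaceSizeField(apf::Field* levelSet, double fine, double coarse, double band)
      : levelSet_(levelSet), fine_(fine), coarse_(coarse), band_(band) {}
  double getValue(ma::Entity* vert) {
    double d = std::fabs(apf::getScalar(levelSet_, vert, 0));  // distance to interface
    double t = std::min(d / band_, 1.0);
    return fine_ + t * (coarse_ - fine_);  // grade linearly from fine to coarse
  }
 private:
  apf::Field* levelSet_;
  double fine_, coarse_, band_;
};

void adaptToInterface(apf::Mesh2* m, apf::Field* levelSet) {
  InterfaceSizeField sf(levelSet, /*fine=*/0.002, /*coarse=*/0.05, /*band=*/0.01);
  ma::Input* in = ma::configure(m, &sf);
  ma::adapt(in);
}
```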
22
Modeling Ceramic Extrusion
Objective:
- Develop an end-to-end workflow for modeling ceramic extrusion
Tools:
- SimModeler – mesh generation and problem definition
- PHASTA – massively parallel CFD
- Chef – pre-processing, solution transfer, and mesh adaptation driver
- Kitware ParaView – visualization
Status and Plans:
- Added a non-linear material model and partial-slip boundary condition to PHASTA
- Extended the pre-processor to support the partial-slip boundary condition
- Created an XSEDE web-based gateway for automated execution of the workflow
- Planning gateway support for CCI
[Figures: velocity and pressure fields; twin-screw extruder]
23
Aerodynamics Simulations
24
NASA Trap Wing
[Figures: zoom of the leading edge of the main wing for the initial (LEV0) and adapted (LEV2) meshes; Cp plots near the tip]
25
Summary
- The technologies and tools needed to create effective HPC workflows for industry are available
- However, it is not a “Field of Dreams” – just building the tools will not get industry to come use them; we need to work with industry to create effective simulation workflows that address their needs
- Progress is being made on developing the needed tools and mechanisms – more progress is needed: the workflows still require too much expertise and take too much time and effort
- Even with additional improvements, expect that this will remain a “contact sport” requiring interaction between computational scientists and the engineers who will use the simulations