Interoperable Mesh Tools for Petascale Applications
Lori Diachin, LLNL, representing the ITAPS Team

2 ITAPS focuses on interoperable meshing and geometry services for SciDAC
ITAPS goals:
– Improve SciDAC applications' ability to take advantage of state-of-the-art meshing and geometry tools
– Develop the next generation of meshing and geometry tools for petascale computing
Technology focus areas:
– Complex geometry
– High-quality meshes and adaptivity
– Coupled phenomena
– Dynamic mesh calculations
– Tera/petascale computing

3 Accomplishing the ITAPS interoperability goal requires a strong team with diverse expertise
Team: Lori Diachin (LLNL), Ed d'Azevedo (ORNL), Jim Glimm (BNL/SUNY SB), Ahmed Khamayseh (ORNL), Bill Henshaw (LLNL), Pat Knupp (SNL), Xiaolin Li (SUNY SB), Roman Samulyak (BNL), Ken Jansen (RPI), Mark Shephard (RPI), Harold Trease (PNNL), Tim Tautges (ANL), Carl Ollivier-Gooch (UBC), Karen Devine (SNL)
Our senior personnel are experts in complex geometry tools, mesh generation, mesh quality improvement, front tracking, partitioning, mesh refinement, PDE solvers, and working with application scientists.

4 ITAPS will enable use of interoperable and interchangeable mesh and geometry tools
– Build on successes with SciDAC-1 applications and explore new opportunities with SciDAC-2 application teams
– Develop and deploy key mesh, geometry, and field manipulation component services needed for petascale computing applications
– Develop advanced-functionality integrated services to support SciDAC application needs: combine component services together, and unify tools with common interfaces to enable interoperability
[Figure: petascale integrated tools build on component tools, which are unified by common interfaces]

5 We have a suite of ITAPS services that are built on the ITAPS common interfaces
– Common interfaces: Mesh, Geometry, Relations, Field
– Component tools: Mesh Improve, front tracking, Mesh Adapt, interpolation kernels, swapping, dynamic services, geometry/mesh services
– Petascale integrated tools: AMR front tracking, shape optimization, solution adaptive loop, solution transfer, petascale mesh generation
[Figure: table of serial (N) and parallel (P) status for each service; the individual status marks did not survive the transcript]

6 ITAPS technologies impact SciDAC applications in three ways
1. Direct use of ITAPS technology in applications
– Geometry tools, mesh generation, and optimization for accelerators and fusion
– Mesh adaptivity for accelerators and fusion
– Front tracking for astrophysics and groundwater
– Partitioning techniques for accelerators and fusion
2. Technology advancement through demonstration and insertion of key new technology areas
– Design optimization loop for accelerators (with TOPS)
– Petascale mesh generation for accelerators
3. Enabling future applications with ITAPS services and interfaces
– Parallel mesh-to-mesh transfer for multi-scale, multi-physics applications
– Dynamic mesh services for adaptive computations
Current ITAPS/TOPS/SLAC shape optimization activities will improve the ILC design process.

7 Work with the COMPASS project spans many different ITAPS service areas
– Mesh control: correct curvilinear meshes that properly satisfy the geometric approximation requirements
– Adaptive mesh refinement
– Parallel mesh generation
– Embedded boundary discretization
– Partitioning techniques
– Geometry search and sort
[Figure: ILC cryomodule consisting of 8 TDR cavities, with beamline absorber]

8 Mesh correction for curved meshes required new technology developments
– Bezier higher-order mesh shape: analytical validity determination, and identification of the key mesh entity causing invalidity (the sketch below illustrates the validity test)
– Curved mesh modifications for invalid elements: reshape, split, collapse, swap, and refinement
[Figure: curved edge split to fix model tangency; curved region split; reshape]
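To make the validity test concrete, here is a minimal self-contained sketch, assuming a 2D quadratic Bezier triangle; it is not the ITAPS code. It samples the Jacobian determinant over the element, which gives a necessary inversion check, whereas the analytical determination named above instead bounds det(J) through its Bezier coefficients.

// Validity check for a quadratic Bezier triangle by sampling det(J).
// A positive determinant everywhere means the curved element is uninverted.
#include <cstdio>

struct Pt { double x, y; };
static Pt sub(Pt a, Pt b) { return {a.x - b.x, a.y - b.y}; }
static Pt lin(double u, Pt a, double v, Pt b, double w, Pt c) {
  return {u*a.x + v*b.x + w*c.x, u*a.y + v*b.y + w*c.y};
}
static double cross(Pt a, Pt b) { return a.x*b.y - a.y*b.x; }

// Control net of a quadratic Bezier triangle: corners b200, b020, b002 and
// mid-edge control points b110, b011, b101.
struct BezierTri { Pt b200, b020, b002, b110, b011, b101; };

// det(J) at barycentric (u,v,w): derivatives of the degree-2 map are
// degree-1 Bernstein combinations of control-point differences.
double detJ(const BezierTri& t, double u, double v) {
  double w = 1.0 - u - v;
  Pt xu = lin(u, sub(t.b200,t.b101), v, sub(t.b110,t.b011), w, sub(t.b101,t.b002));
  Pt xv = lin(u, sub(t.b110,t.b101), v, sub(t.b020,t.b011), w, sub(t.b011,t.b002));
  return 4.0 * cross(xu, xv);   // each derivative carries a factor of 2
}

bool sampledValid(const BezierTri& t, int n = 20) {
  for (int i = 0; i <= n; ++i)
    for (int j = 0; j <= n - i; ++j)
      if (detJ(t, double(i)/n, double(j)/n) <= 0.0) return false;
  return true;
}

int main() {
  // A mid-edge control point pulled far enough inward inverts the element.
  BezierTri ok  = {{0,0},{1,0},{0,1},{0.5,0.1},{0.5,0.5},{0,0.5}};
  BezierTri bad = {{0,0},{1,0},{0,1},{0.5,0.9},{0.5,0.5},{0,0.5}};
  std::printf("ok valid: %d, bad valid: %d\n", sampledValid(ok), sampledValid(bad));
}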

9 Results allowed simulations to run faster and further than before
Initial mesh:
– 108k mesh regions, 250 of them invalid
– Solution blows up unless the negative contribution is removed
Corrected mesh:
– No invalid regions
– Solution process is 37.8% faster (CG iterations per time step reduced)
[Figure: valid curved mesh of 2.97M regions, obtained by correcting 1,357 invalid curved regions]

10 Goals for curved meshes in thin sections
– Thermal/mechanical multiphysics simulations
– Curved anisotropic meshes in thin sections for computational efficiency

11 Technology developments
– Automatically identify thin sections in complex geometry (one common detection idea is sketched below)
– Construct curved anisotropic layer elements of proper order
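One common way to detect thin sections, sketched here under the assumption of a faceted (triangle-soup) model with outward-oriented facets, is to shoot a ray from each facet along its inward normal and measure the distance to the first opposing facet. The names and the threshold below are illustrative, not the ITAPS implementation.

#include <cmath>
#include <cstdio>
#include <vector>

struct V3 { double x, y, z; };
static V3 sub(V3 a, V3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static V3 cross(V3 a, V3 b) {
  return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
static double dot(V3 a, V3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

struct Tri { V3 a, b, c; };

// Moller-Trumbore ray/triangle intersection; returns hit distance or -1.
double rayHit(V3 o, V3 d, const Tri& t) {
  V3 e1 = sub(t.b, t.a), e2 = sub(t.c, t.a), p = cross(d, e2);
  double det = dot(e1, p);
  if (std::fabs(det) < 1e-12) return -1;
  V3 s = sub(o, t.a);
  double u = dot(s, p) / det;
  V3 q = cross(s, e1);
  double v = dot(d, q) / det;
  if (u < 0 || v < 0 || u + v > 1) return -1;
  double dist = dot(e2, q) / det;
  return dist > 1e-9 ? dist : -1;   // ignore self/coplanar hits
}

// "Thickness" at facet i: distance to the nearest facet along the inward normal.
double thickness(const std::vector<Tri>& m, size_t i) {
  const Tri& t = m[i];
  V3 c = {(t.a.x+t.b.x+t.c.x)/3, (t.a.y+t.b.y+t.c.y)/3, (t.a.z+t.b.z+t.c.z)/3};
  V3 n = cross(sub(t.b, t.a), sub(t.c, t.a));   // outward for CCW winding
  double len = std::sqrt(dot(n, n));
  V3 inward = {-n.x/len, -n.y/len, -n.z/len};
  double best = 1e30;
  for (size_t j = 0; j < m.size(); ++j)
    if (j != i) {
      double d = rayHit(c, inward, m[j]);
      if (d > 0 && d < best) best = d;
    }
  return best;
}

int main() {
  double h = 0.05;   // plate separation: a thin slab
  std::vector<Tri> m = {
    {{0,0,0},{1,1,0},{1,0,0}}, {{0,0,0},{0,1,0},{1,1,0}},   // bottom, outward -z
    {{0,0,h},{1,0,h},{1,1,h}}, {{0,0,h},{1,1,h},{0,1,h}},   // top, outward +z
  };
  double thinTol = 0.1;   // assumed thinness threshold
  for (size_t i = 0; i < m.size(); ++i) {
    double th = thickness(m, i);
    std::printf("facet %zu: thickness %.3f, thin = %d\n", i, th, th < thinTol);
  }
}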

12 Initial results are promising; working with SLAC on use in a simulation setting
[Figures: straight-sided and curved anisotropic meshes for the cell model (2,664 tetrahedral regions); close-up of the three curved layer elements on top of the model faces]
To do: this is a technically complex problem and may need fine-tuning in the application setting.

13 New area: use moving mesh refinement regions to increase computational efficiency
SciDAC codes Pic3P/Pic2P:
– Isotropic refinement within the defined particle domains
– Coarsest possible mesh in the remaining domain
– The pre-defined particle domain is time dependent
– Achieves computational efficiency and accuracy
Technical development:
– Built on the mesh modification tool, which does all the mesh-level work
– Define a moving mesh size field with blending (see the sketch below)
[Figure: PIC in long structures]
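A minimal sketch of such a time-dependent size field, with made-up values for the window size, speed, and target edge lengths: fine isotropic size inside the moving particle window, coarse size far away, and a smooth blend across a transition band. The mesh modification tool would then adapt edges toward h(z, t) at each step.

#include <algorithm>
#include <cmath>
#include <cstdio>

struct SizeField {
  double hFine = 0.01, hCoarse = 0.2;   // target edge lengths
  double halfWidth = 0.5;               // half-length of the refined window
  double blend = 0.3;                   // width of the transition band
  double speed = 1.0;                   // bunch speed along the axis

  // Target mesh size at axial coordinate z and time t.
  double operator()(double z, double t) const {
    double zc = speed * t;                      // moving window center
    double d = std::fabs(z - zc) - halfWidth;   // distance past the window
    if (d <= 0) return hFine;
    double s = std::min(d / blend, 1.0);        // 0..1 across the band
    s = s * s * (3 - 2 * s);                    // smoothstep blending
    return hFine + s * (hCoarse - hFine);
  }
};

int main() {
  SizeField h;
  for (double t : {0.0, 1.0, 2.0})
    std::printf("t=%.0f: h(0)=%.3f h(1)=%.3f h(2)=%.3f\n",
                t, h(0, t), h(1, t), h(2, t));
}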

14 Early results show significant savings
– New procedure using an adaptively controlled mesh: 1,030,121 regions, compared to 4,880,593 regions for a uniform mesh
To do:
– Iterate to determine best use
– Correct the small number of poor-quality elements

15 Parallel adaptive loop for SLAC accelerator design
– Geometry defined in a CAD modeler
– Omega3P code from SLAC
– High-level modeling accuracy needed; adaptive mesh control provides it
– Adaptive loop runs in serial and parallel (a skeleton of the loop is sketched below)
[Figures: initial mesh (1,595 tets); adapted mesh (23,082,517 tets)]
To do:
– Initial results used file transfer; need to coordinate parallel I/O
– Ensure results are satisfactory
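The control flow of such a loop, in a hedged skeleton: Mesh, solve, and adapt below are placeholders standing in for Omega3P, the error estimator, and the mesh modification tools, which in the real workflow sit behind the ITAPS interfaces described on the next slide.

#include <cstdio>

struct Mesh { long nTets; };

// Placeholder stages for the real components.
double solve(const Mesh& m) { return 1.0 / double(m.nTets); }  // fake error ~ 1/n
Mesh   adapt(const Mesh& m) { return Mesh{m.nTets * 8}; }      // fake refinement

int main() {
  Mesh mesh{1595};                 // initial coarse mesh (size from the slide)
  const double tol = 1e-6;
  for (int iter = 0; iter < 10; ++iter) {
    double err = solve(mesh);      // run the (unaltered) analysis code
    std::printf("iter %d: %ld tets, estimated error %.2e\n", iter, mesh.nTets, err);
    if (err < tol) break;          // accuracy target met
    mesh = adapt(mesh);            // local mesh modification driven by the error field
  }
}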

16 The parallel adaptive loop for SLAC uses many different software components
– Using ITAPS geometry operators means alternate solid modelers can be inserted
– Using ITAPS mesh operators means alternate mesh generators and mesh adaptation procedures can be inserted
– Using ITAPS field operators allows easy construction of alternative error estimators
– A projection-based error estimator constructs the new mesh size field given to mesh modification (illustrated below)
– Mesh adaptation is based on local modification linked directly to CAD
– The SLAC code itself is unaltered; error estimators come from RPI and SLAC
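A simplified 1D illustration of a recovery (projection) based estimator of the Zienkiewicz-Zhu type, which is one common realization of such an estimator; whether it matches the RPI/SLAC estimators in detail is not stated on the slide. It recovers smoothed nodal gradients, takes the mismatch with the raw element gradients as the indicator, and converts that into a new target size.

#include <cmath>
#include <cstdio>
#include <vector>

int main() {
  // Piecewise-linear interpolant of u(x) = x^3 on a uniform mesh of [0,1].
  int n = 10;
  std::vector<double> x(n + 1), u(n + 1);
  for (int i = 0; i <= n; ++i) { x[i] = double(i) / n; u[i] = x[i]*x[i]*x[i]; }

  // Raw element gradients of the finite element solution.
  std::vector<double> g(n);
  for (int e = 0; e < n; ++e) g[e] = (u[e+1] - u[e]) / (x[e+1] - x[e]);

  // Recovered (projected) nodal gradients: average of adjacent elements.
  std::vector<double> gr(n + 1);
  gr[0] = g[0]; gr[n] = g[n-1];
  for (int i = 1; i < n; ++i) gr[i] = 0.5 * (g[i-1] + g[i]);

  // Element indicator = recovered/raw mismatch; size from h_new = h*(tol/eta)^(1/p).
  double tol = 1e-3, p = 1.0;
  for (int e = 0; e < n; ++e) {
    double h = x[e+1] - x[e];
    double mismatch = 0.5 * (std::fabs(gr[e] - g[e]) + std::fabs(gr[e+1] - g[e]));
    double eta = mismatch * std::sqrt(h);   // crude L2 scaling
    double hNew = h * std::pow(tol / (eta + 1e-30), 1.0 / p);
    std::printf("elem %d: eta=%.2e -> h: %.3f -> %.3f\n", e, eta, h, hNew);
  }
}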

17 Optimizing a cavity design is still mostly a manual process
– Future accelerators employ complex cavity shapes that require optimization to improve performance
– Geometry and meshing support is needed
– Keeping the mesh topology fixed aids convergence, avoids re-meshing, and lets factorizations be re-used
[Figure: shape optimization loop for accelerator cavity design (Omega3P → sensitivity optimization → ITAPS geometry and mesh update services → Omega3P), with cavity shape parameters bl, al, a1, a2, b1, ar, br, b, ra1, ra2, zcl, zcr, zcb, zcc, zcll]

18 ITAPS geometry and mesh update services
Notation:
– d: design parameter vector
– d' = d + δd: new design parameters
– G(d) ↔ x_r(d): geometry-mesh classification
– G(d'): geometry for parameters d'
– x_r(d'), x_Ω(d'): mesh (surface and volume coordinates) for the new iteration
[Figure: update pipeline; Omega3P sensitivity supplies δd and d'; the geometry step produces G(d'); projection maps x_r(d') onto G(d'); the mesh update produces x_r(d') and x_Ω(d'); design velocities ∂x_r/∂G, ∂n/∂x_r, ∂G/∂d and surface normals n(x_r(d')) are returned to Omega3P]
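To illustrate what a design velocity is, the sketch below computes dx/dd by finite differences on a stand-in parametric "geometry" (a circle parameterized by its radius, invented for this example): perturb a parameter, rebuild the boundary, and difference corresponding point positions. For the circle the velocity should come out as the unit outward normal.

#include <cmath>
#include <cstdio>
#include <vector>

struct P2 { double x, y; };
const double kPi = 3.141592653589793;

// Placeholder parametric "geometry": n boundary points of a circle of radius r.
std::vector<P2> boundary(double r, int n) {
  std::vector<P2> pts(n);
  for (int i = 0; i < n; ++i) {
    double a = 2.0 * kPi * i / n;
    pts[i] = {r * std::cos(a), r * std::sin(a)};
  }
  return pts;
}

int main() {
  int n = 4;
  double r = 1.0, dr = 1e-6;   // design parameter and its perturbation
  std::vector<P2> base = boundary(r, n), pert = boundary(r + dr, n);
  // Design velocity at each boundary point; here dx/dr = (cos a, sin a).
  for (int i = 0; i < n; ++i)
    std::printf("point %d: v = (%.4f, %.4f)\n", i,
                (pert[i].x - base[i].x) / dr, (pert[i].y - base[i].y) / dr);
}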

19 Services provided by the ITAPS DDRIV tool
– Parameterized geometric model construction: you write a function that constructs the model using iGeom; DDRIV acts as the driver and handles I/O (a hypothetical callback pattern is sketched below)
– Coordination of mesh smoothing on the geometric model
– Re-classification of the "old" mesh on the "new" model
– Target-matrix-based smoothing of the re-classified mesh
– Computation of design velocities, embedded on the mesh using iMesh tags
[Figure: new geometry with old mesh; projection to CAD produces inverted elements; smooth curves, then surfaces, then the volume]
To do:
– Incorporate with TOPS/SLAC optimization tools
– Parallelize all aspects
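A hypothetical sketch of the callback pattern this implies; ModelBuilder, runDesignStep, and the lambda body are invented for illustration. The real user function would issue iGeom construction calls and be registered through DDRIV's actual API, which is not shown here.

#include <cstdio>
#include <functional>
#include <vector>

using ModelBuilder = std::function<void(const std::vector<double>&)>;

// Stand-in for the driver: rebuild the geometry, then (in the real tool)
// re-classify the old mesh on the new model and smooth it.
void runDesignStep(const ModelBuilder& build, const std::vector<double>& d) {
  build(d);   // user-supplied construction
  std::printf("  reclassify + smooth mesh on updated model\n");
}

int main() {
  // User-written builder; a real one would construct the model via iGeom.
  ModelBuilder buildCavity = [](const std::vector<double>& d) {
    std::printf("building cavity from %zu shape parameters\n", d.size());
  };
  std::vector<double> d = {1.0, 0.5, 0.2};   // made-up design parameters
  for (int iter = 0; iter < 2; ++iter) {
    runDesignStep(buildCavity, d);
    d[0] += 0.01;                            // an optimizer would update d here
  }
}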

20 Parallel mesh generation needed for petascale simulations
– Modeling long-range electromagnetic effects in the ILC requires meshes 4x to 8x the size of current meshes
Status:
– A prototype CAD-based parallel mesh generation capability exists
– Demonstrated good scaling for the SLAC NLC accelerator
To do: upgrade to a production capability
– Adaptive size control
– Link to mesh partitioning/load balancing

21 Dynamic load balancing and partitioning via the Zoltan toolkit
Reduce total execution time by:
– Distributing work evenly among processors
– Reducing applications' interprocessor communication
– Keeping data movement costs low
Important in many SciDAC technologies:
– Adaptive mesh refinement
– Particle-in-cell methods
– Parallel remeshing
– Linear solvers and preconditioners
[Figures: adaptive mesh refinement; particle simulations]
A toy sketch of the geometric partitioning idea follows.
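This is a toy illustration of recursive coordinate bisection, one of the geometric methods Zoltan provides; it is not Zoltan's API. The idea: recursively split the objects at the median of their widest coordinate so each final part holds an equal share of the work (Zoltan's version also handles object weights).

#include <algorithm>
#include <cstdio>
#include <vector>

struct Pt { double x, y; int part = 0; };

void rcb(std::vector<Pt*>& pts, int firstPart, int nParts) {
  if (nParts <= 1 || pts.empty()) {
    for (Pt* p : pts) p->part = firstPart;
    return;
  }
  // Pick the coordinate with the larger extent.
  auto [xmin, xmax] = std::minmax_element(pts.begin(), pts.end(),
      [](Pt* a, Pt* b){ return a->x < b->x; });
  auto [ymin, ymax] = std::minmax_element(pts.begin(), pts.end(),
      [](Pt* a, Pt* b){ return a->y < b->y; });
  bool useX = ((*xmax)->x - (*xmin)->x) >= ((*ymax)->y - (*ymin)->y);
  // Split at the median so work is divided proportionally to part counts.
  int nLeft = nParts / 2;
  size_t cut = pts.size() * nLeft / nParts;
  std::nth_element(pts.begin(), pts.begin() + cut, pts.end(),
      [useX](Pt* a, Pt* b){ return useX ? a->x < b->x : a->y < b->y; });
  std::vector<Pt*> left(pts.begin(), pts.begin() + cut);
  std::vector<Pt*> right(pts.begin() + cut, pts.end());
  rcb(left, firstPart, nLeft);
  rcb(right, firstPart + nLeft, nParts - nLeft);
}

int main() {
  std::vector<Pt> mesh;
  for (int i = 0; i < 16; ++i) mesh.push_back({i % 4 + 0.5, i / 4 + 0.5});
  std::vector<Pt*> ptrs;
  for (Pt& p : mesh) ptrs.push_back(&p);
  rcb(ptrs, 0, 4);   // partition the 16 "elements" into 4 parts
  for (const Pt& p : mesh) std::printf("(%g,%g) -> part %d\n", p.x, p.y, p.part);
}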

22 Contact information
– ITAPS web site:
– Lori Diachin:
We actively seek and welcome your input!

23 LLNL Disclaimer and Auspices
This document was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor the University of California nor any of their employees makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial products, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or the University of California. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or the University of California, and shall not be used for advertising or product endorsement purposes. This work was performed under the auspices of the U.S. Department of Energy by University of California Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48.