Terascale Simulation Tools and Technologies Center
Jim Glimm (BNL/SB), Center Director
Co-PIs: David Brown (LLNL), Ed D’Azevedo (ORNL), Joe Flaherty (RPI), Lori Freitag (ANL), Patrick Knupp (SNL), Mark Shephard (RPI), Harold Trease (PNNL)

TSTT-2: The TSTT Center will bring terascale simulation technology to application scientists
- Observation: Terascale computing will enable high-fidelity calculations based on multiple coupled physical processes and multiple physical scales, using
  - adaptive methods
  - composite or hybrid solution strategies
  - high-order discretization strategies
- Barrier: The lack of easy-to-use, interoperable meshing, discretization, and adaptive tools demands too much software expertise from application scientists.
- The TSTT Center recognizes this gap and will address the technical and human barriers preventing the use of adaptive, composite, and hybrid methods.

TSTT-3: TSTT will develop interoperable meshing and discretization technology components
- Meshing and discretization research and development:
  - high-quality, hybrid mesh generation for complex domains
  - front tracking and other adaptive approaches
  - high-order discretization techniques
  - algorithms for terascale computing
- Software interoperability is a pervasive theme:
  - initial design will account for interoperability at all levels
  - encapsulate research into software components
  - define interfaces for plug-and-play experimentation
- Application deployment and testing is paramount:
  - SciDAC collaborations in accelerator design, fusion, climate, and chemically reacting flows
  - existing DOE application collaborations in biology, fluid mixing, and many more

TSTT-4: Existing Tools for Mesh Generation
A wide variety of tools exist for the generation of...
- ... structured meshes:
  - Overture (LLNL): high-quality, predominantly structured meshes on complex CAD geometries
  - variational and elliptic grid generators (ORNL, SNL)
- ... unstructured meshes:
  - MEGA (RPI): primarily tetrahedral meshes, boundary-layer mesh generation, curved elements, AMR
  - CUBIT (SNL): primarily hexahedral meshes, automatic decomposition tools, common geometry module
  - NWGrid (PNNL): hybrid meshes using combined Delaunay, AMR, and block-structured algorithms
These tools all meet particular needs, but they do not interoperate to form hybrid, composite meshes.
[Figures: MEGA boundary-layer mesh (RPI); Overture diesel engine mesh (LLNL)]

TSTT-5: Geometric Hierarchy
Required to:
- provide a common frame of reference for all tools
- facilitate multilevel solvers
- facilitate transfer of information in discretizations
Levels:
- Level 0: original problem specification via a high-level geometric description
- Levels 1/2: decomposition into subdomains and mesh components that refer back to Level 0
- Level 3: partitioning
[Figure: given geometry specification (Level 0), domain decomposition (Level 1), mesh components (Level 2), parallel decomposition across processors P0-Pf (Level 3)]

TSTT-6: Mesh Data Hierarchy
- Level A: geometric description of the domain
  - accessed via tools such as CGM (SNL) or functional interfaces to solid modeling kernels (RPI)
- Level B: full-geometry hybrid meshes
  - mesh components
  - communication mechanisms that link them (a key new research area)
  - allows structured and unstructured meshes to be combined in a single computation
- Level C: mesh components
[Figure: geometry information (Level A), full-geometry meshes (Level B), mesh components (Level C)]

TSTT-7: Access to the Mesh Data Hierarchy...
- ... as a single object (high-level common interfaces):
  - TSTT will develop functions that provide, e.g.,
    - PDE discretization operators
    - adaptive mesh refinement
    - multilevel data transfer
  - a prototype is provided by the Overture framework
  - enables rapid development of new mesh-based applications
- ... through the mesh components (low-level common interfaces):
  - TSTT will provide, e.g.,
    - element-by-element access to mesh components
    - Fortran-callable routines that return interpolation coefficients at a single point (or an array of points)
  - facilitates incorporation into existing applications
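The low-level access pattern described on this slide can be illustrated with a small sketch. Everything here is hypothetical (the class name, method names, and the linear-basis assumption); it is not the TSTT interface itself, only the shape of it: element-by-element traversal plus a routine that returns interpolation coefficients at a point.

```python
import bisect

class SimpleMesh1D:
    """Illustrative stand-in for a Level C mesh component: a 1D mesh
    whose elements are the intervals between sorted node coordinates."""

    def __init__(self, nodes):
        self.nodes = sorted(nodes)

    def num_elements(self):
        return len(self.nodes) - 1

    def element(self, i):
        """Element-by-element access: return the node ids of element i."""
        return (i, i + 1)

    def interpolation_coefficients(self, x):
        """Return (node_ids, coefficients) so that a field f sampled at the
        nodes is interpolated at x as sum(c * f[node]) over the pairs."""
        i = min(bisect.bisect_right(self.nodes, x) - 1, self.num_elements() - 1)
        i = max(i, 0)
        x0, x1 = self.nodes[i], self.nodes[i + 1]
        t = (x - x0) / (x1 - x0)
        return (i, i + 1), (1.0 - t, t)

mesh = SimpleMesh1D([0.0, 0.5, 1.0])
ids, coeffs = mesh.interpolation_coefficients(0.25)
# For linear interpolation the coefficients always sum to 1.
```

A real interface would also expose topology queries and array ("agglomerated") variants of these calls, but the caller-facing contract is the same.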

TSTT-8: Common Interface Specification
- Initially focus on low-level access to static mesh components (Level C)
  - data: mesh geometry, topology, field data
- Efficiency through:
  - access patterns appropriate for each mesh type
  - caching strategies and agglomerated access
- Appropriateness through working with:
  - application scientists
  - the TOPS and CCA SciDAC ISICs
- Application scientists program to the common interface and can then use any conforming tool without changing their code
- High-level interfaces:
  - to the entire grid hierarchy, which allows interoperable meshing by creating a common view of the geometry
  - mesh adaptation, including error estimators and curved elements
- All TSTT tools will be interface compliant
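The "program to the common interface" idea can be sketched as follows. `MeshInterface`, `ToolA`, and `ToolB` are invented stand-ins for conforming mesh tools; the point is that the application routine is written once against the interface and runs unchanged on any conforming implementation.

```python
from abc import ABC, abstractmethod

class MeshInterface(ABC):
    """Hypothetical common interface: every conforming mesh tool exposes
    geometry and topology through the same calls."""

    @abstractmethod
    def vertex_coordinates(self):
        """Agglomerated access: all vertex (x, y) pairs at once."""

    @abstractmethod
    def element_connectivity(self):
        """Triangles as tuples of vertex indices."""

class ToolA(MeshInterface):
    def vertex_coordinates(self):
        return [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
    def element_connectivity(self):
        return [(0, 1, 2)]

class ToolB(MeshInterface):
    def vertex_coordinates(self):
        return [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)]
    def element_connectivity(self):
        return [(0, 1, 2)]

def total_area(mesh: MeshInterface):
    """Application code written once against the interface: sum of
    triangle areas via the shoelace formula."""
    xy = mesh.vertex_coordinates()
    area = 0.0
    for a, b, c in mesh.element_connectivity():
        (x0, y0), (x1, y1), (x2, y2) = xy[a], xy[b], xy[c]
        area += abs((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)) / 2.0
    return area
```

Swapping `ToolA` for `ToolB` requires no change to `total_area`, which is exactly the interoperability property the specification aims for.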

TSTT-9: Mesh Data Hierarchy Construction
- Level 0 to Level 1 geometry:
  - leverage existing TSTT tools that provide graphical interfaces to decompose the initial geometry into subdomains
  - CGM (SNL), Overture (LLNL)
- Level 1 mesh components:
  - leverage existing mesh generation tools
- Level C to Level B hybrid meshes:
  - stitching algorithms
  - overlapping meshes
[Figures: Overture stitching algorithm (LLNL): start with a set of component meshes, cut holes, then stitch together to form a hybrid mesh; CUBIT geometry decomposition (SNL)]

TSTT-10: Enhancing Mesh Generation Capabilities
- Will leverage most existing TSTT technology "as is"
- Provisions for:
  - creating interface-compliant tools
  - improving mesh generation capabilities on complex geometries for high-order elements
    - curvilinear elements
    - geometry approximations
  - interoperability of appropriate tools
    - e.g., ORNL elliptic and variational mesh generators with Overture
  - mesh quality control for hybrid meshes
[Figure: linear coarse elements versus high-order, curvilinear p-elements in MEGA (RPI)]

TSTT-11: Mesh Quality Control
- Unstructured mesh quality research and development is provided by MESQUITE (SNL, ANL):
  - optimization-based smoothing
  - reconnection schemes
  - development of quality metrics for high-order methods
  - a posteriori quality control using error estimators
- PDE-solution-based mesh optimization will be investigated for overlapping and hybrid meshes
[Figure: improved mesh, showing an 8x error reduction by selecting optimal mesh generation parameters]
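As a rough illustration of what smoothing-based quality improvement accomplishes (Mesquite's actual algorithms are optimization-based and far more sophisticated), here is a minimal Laplacian-smoothing step with an invented quality metric; all names and the example geometry are assumptions for the sketch.

```python
def smooth_vertex(vertex, neighbors):
    """One Laplacian smoothing step: move a vertex to the centroid of its
    neighbors (a simple stand-in for optimization-based smoothing)."""
    n = len(neighbors)
    return (sum(x for x, _ in neighbors) / n,
            sum(y for _, y in neighbors) / n)

def min_triangle_area(center, ring):
    """Illustrative quality metric: smallest signed area among the
    triangles formed by the center vertex and consecutive ring vertices."""
    areas = []
    for i in range(len(ring)):
        (x1, y1), (x2, y2) = ring[i], ring[(i + 1) % len(ring)]
        (x0, y0) = center
        areas.append(((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)) / 2.0)
    return min(areas)

# A vertex pushed off-center creates slivers; smoothing recenters it.
ring = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
bad = (0.9, 0.0)
better = smooth_vertex(bad, ring)
```

Running the metric before and after shows the worst element improving, which is the quantity a real smoother would optimize directly.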

TSTT-12: Dynamic Mesh Evolution
- Geometry evolves due to:
  - adaptive mesh refinement
  - internally tracked interfaces (e.g., shocks)
  - motion of the domain boundary
[Figures: MEGA Rayleigh-Taylor simulation (RPI); Overture simulation of Hele-Shaw flow]

TSTT-13: TSTT Research in Mesh Evolution
- Requires evolution of both the hierarchy and the individual mesh components
- TSTT will provide interfaces that allow:
  - the mesh tools to access the changing geometry
  - the application programmer to access the changing mesh
  - local or global modifications
- New techniques will address:
  - curvilinear geometries, to preserve the convergence rates of high-order discretizations
  - abstraction of adaptive techniques to provide "plug and play"
  - adaptive techniques that use multiple criteria to extend applicability
  - automatic selection and application of optimal strategies
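The multiple-criteria idea can be sketched in one dimension: an element is refined when any of a set of pluggable criteria flags it. The criteria, thresholds, and mesh here are illustrative assumptions, not TSTT code.

```python
def flag_elements(nodes, values, criteria):
    """Flag element i for refinement if ANY supplied criterion fires;
    combining criteria is the 'extend applicability' idea above."""
    flagged = set()
    for i in range(len(nodes) - 1):
        for crit in criteria:
            if crit(nodes, values, i):
                flagged.add(i)
                break
    return flagged

def refine(nodes, flagged):
    """Bisect every flagged interval (a minimal stand-in for AMR)."""
    out = []
    for i in range(len(nodes) - 1):
        out.append(nodes[i])
        if i in flagged:
            out.append(0.5 * (nodes[i] + nodes[i + 1]))
    out.append(nodes[-1])
    return out

# Criterion 1: large jump in the solution across an element.
big_jump = lambda n, v, i: abs(v[i + 1] - v[i]) > 0.5
# Criterion 2 (hypothetical geometric rule): element wider than 0.4.
too_wide = lambda n, v, i: n[i + 1] - n[i] > 0.4

nodes = [0.0, 0.5, 1.0]
values = [0.0, 0.1, 1.0]   # steep feature in the second element
flagged = flag_elements(nodes, values, [big_jump, too_wide])
finer = refine(nodes, flagged)
```

Because the criteria are plain callables, swapping in a different estimator (or an automatic selector over several) changes nothing else in the loop, which is the "plug and play" abstraction the slide describes.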

TSTT-14: Combining TSTT technologies will improve front tracking techniques
- Improve conservation properties and accuracy at the front by inserting a surface determined by front tracking into a volume mesh
- Results in a front-adaptive space-time discretization
[Figure: FronTier interface representation]

TSTT-15: TSTT will ease the use of high-order discretization methods
- Observation: the complexity of using high-order methods on adaptively evolving grids has hampered their widespread use:
  - tedious low-level dependence on the grid infrastructure
  - a source of subtle bugs during development
  - a bottleneck to interoperability of applications with different discretization strategies
  - difficult to implement in a general way while maintaining optimal performance
- The result has been the use of sub-optimal strategies or lengthy implementation periods
- TSTT goal: eliminate these barriers by developing a Discretization Library

TSTT-16: The Discretization Library will...
- ... contain numerous mathematical operators:
  - start with +, -, *, /, interpolation, prolongation
  - move to div, grad, curl, etc.
  - both strong and weak (variational) forms of operators, when applicable
- ... contain numerous discretization strategies:
  - finite difference, finite volume, finite element, discontinuous Galerkin, spectral element, partition of unity
  - emphasize high-order and variable-order methods
  - various boundary condition operators
- ... be independent of the underlying mesh infrastructure:
  - utilizes the common low-level mesh interfaces
  - all TSTT mesh tools will be available
- ... be extensible, to allow user-defined operators and boundary conditions
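A minimal sketch of the mesh-independence point: a discrete gradient written only against the generic view (node coordinates plus field values) that a common low-level mesh interface would expose, so the same operator code runs on uniform and graded meshes alike. The one-sided/centered stencil choice is an illustrative assumption.

```python
def grad(nodes, values):
    """Discrete gradient at the nodes of any 1D mesh: centered differences
    in the interior, one-sided at the ends. Knows nothing about how the
    mesh was generated, only coordinates and field values."""
    n = len(nodes)
    g = []
    for i in range(n):
        lo, hi = max(i - 1, 0), min(i + 1, n - 1)
        g.append((values[hi] - values[lo]) / (nodes[hi] - nodes[lo]))
    return g

# The same operator runs unchanged on a uniform and a graded mesh:
uniform = [0.0, 0.5, 1.0]
graded = [0.0, 0.1, 1.0]
field_u = [2 * x for x in uniform]   # f(x) = 2x, so grad f = 2 everywhere
field_g = [2 * x for x in graded]
g_uniform = grad(uniform, field_u)
g_graded = grad(graded, field_g)
```

A library of such operators, keyed by name and discretization strategy, is the kind of catalog the slide describes; only the mesh-interface calls would differ in a real implementation.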

TSTT-17: Additional Functionality
- Support for temporal discretization:
  - method-of-lines formulation (time steps and temporal methods are spatially independent)
  - local refinement methods (time steps and methods vary in space)
  - space-time techniques (unstructured meshes are used in both space and time)
- Support for adaptive methods:
  - error estimators:
    - Richardson extrapolation (meshes of different resolution)
    - p-refinement estimators
    - solution gradient and vorticity metrics
  - optimal strategies for mesh enrichment (combinations of p- and h-adaptivity)
  - combined with work on mesh quality improvement
- Support for interpolation:
  - between meshes and operators
  - local conservation when mapping between meshes
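The Richardson extrapolation estimator mentioned above admits a compact sketch: solve on meshes of spacing h and 2h and infer the fine-mesh error from their difference. The "solver" below is a hypothetical stand-in with a pure second-order error term.

```python
def richardson_error_estimate(u_fine, u_coarse, order):
    """Richardson extrapolation error estimator from solutions on meshes
    of spacing h and 2h. If u_h = u + C*h**p, then
    u_2h - u_h = C*h**p * (2**p - 1), so the fine-mesh error C*h**p is
    approximately (u_coarse - u_fine) / (2**p - 1)."""
    return (u_coarse - u_fine) / (2 ** order - 1)

# Toy check against a quantity with known h**2 error behavior:
exact = 1.0
solve = lambda h: exact + 3.0 * h ** 2   # hypothetical 2nd-order solver
u_2h, u_h = solve(0.2), solve(0.1)
estimate = richardson_error_estimate(u_h, u_2h, order=2)
true_error = u_h - exact
```

For a solver with a pure leading error term the estimate is essentially exact; in practice higher-order terms make it an estimate, which is all an adaptive driver needs.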

TSTT-18: Performance of the Discretization Library
- Kernel operations imply that good performance is critical
- Single-processor performance:
  - compile-time optimization of user-defined high-level abstractions via ROSE (LLNL)
  - consider hierarchical memory performance and cache usage
- Terascale computing:
  - scalability of local operations requires good partitioning strategies
  - efficiency is determined by the size of the partition boundary relative to the partition volume
- Will leverage the experience of:
  - LLNL's Overture project, which supports structured mesh topologies
  - RPI's Trellis project for variational discretization
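The boundary-to-volume point can be made concrete with a one-line model (an illustrative simplification, ignoring corners and heterogeneity): for a cubic subdomain of n^3 cells, communication scales with the 6n^2 boundary faces while work scales with the n^3 cells.

```python
def surface_to_volume(n):
    """Communication-to-computation proxy for an n x n x n cell subdomain:
    boundary faces (messages) divided by interior cells (work) = 6/n."""
    return 6 * n ** 2 / n ** 3
```

Doubling the subdomain edge halves the relative communication cost, which is why partitioners aim to minimize partition boundary size for a given partition volume.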

TSTT-19: Benefits of the Discretization Library
- Lowers the time, cost, and effort to effectively deploy modern discretization tools:
  - high-level access for new application development on TSTT Level B meshes
  - mid- and low-level access for insertion into existing technology
- Increases the reliability of application codes by eliminating a common source of coding errors
- Enhances software reuse
- Permits easy experimentation with various combinations of discretization strategies and mesh technologies for a given application

TSTT-20: Issues in Terascale Computing
- Observation: many tools exist that utilize hierarchical design principles to achieve good performance at the terascale
  - e.g., multilevel partitioners, multigrid solvers, multiresolution visualization tools
- Barrier: their union is not optimized
  - it is often difficult to take advantage of the multiresolution representations from one solution stage to the next
- TSTT goal: design our hierarchy and tools so that downstream tools can take advantage of the multiresolution information
  - actively consider trade-offs across the entire simulation
  - allow preservation of information as desired
  - e.g., subdomain decompositions used in creating a hybrid mesh may be similarly useful in preconditioning iterative solvers

TSTT-21: Parallel Mesh Generation
- Primarily leverage existing TSTT tools for parallel mesh generation
- Current techniques:
  - generate a coarse mesh on the geometry and distribute it for further refinement
  - distribute complete Level 1 geometry information to each processor
- New development focuses on the partitioning and distribution of the Level 1 geometry description
- Provides a start-to-finish scalable solution for mesh generation
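The coarse-mesh-then-refine technique can be sketched as follows; the round-robin distribution and 1D interval mesh are illustrative stand-ins for a real partitioner and mesh generator.

```python
def distribute(coarse_elements, nprocs):
    """Deal the coarse elements out to processors round-robin
    (a stand-in for a real partitioner)."""
    return [coarse_elements[p::nprocs] for p in range(nprocs)]

def refine_locally(elements, factor):
    """Each processor splits its own coarse intervals 'factor' ways,
    with no further communication; this is the scalable step."""
    fine = []
    for a, b in elements:
        w = (b - a) / factor
        fine += [(a + i * w, a + (i + 1) * w) for i in range(factor)]
    return fine

coarse = [(0.0, 1.0), (1.0, 2.0), (2.0, 3.0), (3.0, 4.0)]
per_proc = distribute(coarse, 2)
fine0 = refine_locally(per_proc[0], 2)   # processor 0's local refinement
```

Only the small coarse mesh crosses the network; the bulk of the fine mesh is created in place, which is what makes the approach scale.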

TSTT-22: Load Balancing
- Use existing tools for partitioning:
  - Chaco and METIS for static partitioning
  - the Zoltan library (SNL) for dynamic partitioning
  - develop and provide interfaces from TSTT software to Zoltan to ensure seamless operation
- Augment Zoltan:
  - research methods to accommodate hierarchical machine models and heterogeneous parallel computers
    - different processor speeds, memory capacities, cache structures, networking speeds
  - RPM (RPI) and PADRE (LLNL) serve as prototypes
- Load balancing strategies for adaptive, structured, overlapping grids:
  - MLB (LLNL) serves as a prototype
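One way to see what "accommodating different processor speeds" means (a sketch, not Zoltan's algorithm): greedily assign work so that estimated completion time, assigned work divided by processor speed, stays balanced.

```python
def weighted_partition(work_items, speeds):
    """Greedy balancing for heterogeneous processors: give each item to
    the processor whose normalized (time) load would stay lowest."""
    loads = [0.0] * len(speeds)
    parts = [[] for _ in speeds]
    for w in sorted(work_items, reverse=True):
        p = min(range(len(speeds)), key=lambda i: (loads[i] + w) / speeds[i])
        loads[p] += w
        parts[p].append(w)
    return parts, loads

# A processor twice as fast ends up with twice the work:
parts, loads = weighted_partition([1.0] * 30, speeds=[2.0, 1.0])
```

Real dynamic partitioners also weigh migration cost and data locality, but the speed-normalized load is the core extension over homogeneous balancing.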

TSTT-23: SciDAC ISIC Collaborations
- CCA (PI: Armstrong):
  - co-develop common interfaces for mesh and field data
  - create CCA-compliant mesh components and provide them in the CCA component repository
  - explore the role of the component model in the composition of numerous discrete operators
    - performance-critical operations
  - extend the ROSE project to explore component models
- TOPS (PI: Keyes):
  - provide mesh representations for multilevel techniques
  - co-develop well-defined interfaces to ensure that the meshes and discretization strategies will be interoperable with solution software
- Performance (PI: Bailey):
  - use the ROSE preprocessor to develop highly tuned discretization libraries
  - TSTT will provide benchmarks and a testing environment for developments in the Performance ISIC

TSTT-24: SciDAC Applications: Accelerator Design
- Particle forces and EM field calculations (Ko). TSTT will provide:
  - advanced mesh generation capabilities for complex geometries
  - hybrid meshes that conformally match orthogonal structured grids to unstructured mixed-element grids
  - mesh quality assessment and improvement to accelerate solver convergence
  - TSTT points of contact: D. Brown, P. Knupp
- Particle tracking (Luccio). TSTT will provide:
  - parallel decomposition tools to cluster particles into spatially coherent, load-balanced domains
  - assistance in developing codes for the adaptive solution of Poisson's equation with realistic boundary conditions, for rapid solution of the space charge
  - TSTT point of contact: J. Glimm

TSTT-25: SciDAC Applications: Fusion
- Magnetohydrodynamics modeling with parM3D (Jardin). TSTT will provide:
  - higher-order finite element schemes for poloidal discretization
  - exploration of TSTT mesh generation techniques for automating the construction of flux-aligned unstructured meshes in the poloidal directions
  - incorporation of adaptivity, mixed-element meshes, and dynamic load balancing tools in the long term, for resonant instability studies
- Will also work with the TOPS ISIC on the development of mesh abstractions for multilevel solvers
- TSTT point of contact: J. Flaherty

TSTT-26: SciDAC Applications: Chemically Reacting Flows
- Computational Facility for Reacting Flow Science (Najm). TSTT will provide:
  - high-order spectral elements deployed in the current toolkit
  - collaborative development of CCA-compliant interfaces for block-structured mesh adaptation using GrACE
  - deployment of the discretization library in GrACE (fourth-order schemes are desired)
  - TSTT points of contact: P. Fischer and L. Freitag
- Modeling of jet breakup and spray formation (non-SciDAC). TSTT will provide:
  - FronTier interface tracking capabilities for a more accurate model
  - TSTT point of contact: J. Glimm

TSTT-27: SciDAC Applications: Climate
- Community Climate System Model (J. Drake). TSTT will provide:
  - collaboration with Model Coupling Toolkit (MCT) developers to define locally conservative interpolation schemes between different mesh types
  - work with MCT developers to include dynamic load balancing techniques for the case in which component models reside on dynamically changing sets of processors
  - TSTT point of contact: L. Freitag
- Global transport models:
  - TSTT will provide adaptive capabilities for local, regional, and global transport of atmospheric species and aerosols
  - TSTT point of contact: J. Glimm

TSTT-28: Other DOE Applications
- Biosimulation modeling:
  - cardiac electrophysiology (BNL, PNNL)
  - biofluids (ANL, RPI, LLNL)
  - computational cell and organ physiology (PNNL, ORNL, LLNL)
- Fluid instabilities in ICF applications (BNL, SB, RPI, PNNL, LLNL)
- Jet breakup and spray modeling (BNL, SB, ANL, PNNL)
- Free-surface flow modeling for target design of a muon collider accelerator and liquid metal cooling in a tokamak (BNL)
- Flow in porous media (SB, BNL)
- Accelerator tracking design (BNL)

TSTT-29: TSTT Institutional Roles
- ANL: co-leads mesh quality and optimization; contributes to the discretization library, interoperable meshing, and terascale computing; liaison with CCA and the climate, reacting flow, and biology applications
- BNL: leads the application effort and is liaison for climate and accelerator design; leads efforts to create interoperability between FronTier and TSTT mesh generators; contributes to the discretization library
- LLNL: co-leads the design and implementation of the mesh hierarchy and component design; contributes performance optimization tools to the discretization library; liaison to the accelerator design application
- ORNL: contributes to mesh quality optimization, enhancement, and interoperability; contributes to the climate and chemically reacting flow applications

TSTT-30: TSTT Institutional Roles (continued)
- SNL: co-leads efforts on mesh quality optimization; contributes to interoperable meshing, domain decomposition, and load balancing; liaison with the accelerator application
- RPI: co-leads the development of meshing and discretization technologies for the mesh hierarchy and discretization libraries; contributes to the load balancing work; liaison to the fusion application
- PNNL: contributes to the interoperable meshing and terascale computing areas; liaison for the biology applications
- SUNY SB: leads the interoperability of FronTier with meshing technologies and the development of high-order versions; liaison for the spray simulation and oil reservoir applications

TSTT-31: Contact Information
- Jim Glimm, Center Director, Brookhaven National Lab and SUNY Stony Brook
- David Brown, Lawrence Livermore National Laboratory
- Patrick Knupp, Sandia National Laboratories
- Lori Freitag, Argonne National Laboratory