
On Integral Curves in AMR Simulations
E. Wes Bethel (LBNL), Chris Johnson (Utah), Ken Joy (UC Davis), Sean Ahern (ORNL), Valerio Pascucci (LLNL), Jonathan Cohen (LLNL), Mark Duchaineau (LLNL), Bernd Hamann (UC Davis), Charles Hansen (Utah), Dan Laney (LLNL), Peter Lindstrom (LLNL), Jeremy Meredith (ORNL), George Ostrouchov (ORNL), Steven Parker (Utah), Claudio Silva (Utah), Xavier Tricoche (Utah), Allen Sanderson (Utah), Hank Childs (LLNL)
www.vacet.org

Introduction
– Simulation domains span vast spatial scales
– Not possible to adapt the mesh to the finest region using a rectilinear grid
– Unstructured grids impose considerable memory overhead and require a lookup structure for cell location

Adaptive Mesh Refinement
– AMR combines the adaptivity of unstructured grids with the implicit connectivity of regular grids
– Customer: Applied Partial Differential Equation Center (APDEC) at LBNL
  – Simulation of complex flow fields using the AMR technique

AMR data sets
– Multiple refinement levels
– Several domains represented as rectilinear grids
– Data in finer levels replaces data in coarser levels
– Cell-centered data
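As a rough illustration of this kind of dataset, a block-structured AMR hierarchy can be described by a small amount of metadata per rectilinear patch. This is a hypothetical sketch, not VisIt's actual data model:

```cpp
#include <array>
#include <vector>

// Hypothetical description of one rectilinear patch in a block-structured
// AMR hierarchy (a sketch, not VisIt's actual data model).
struct AMRPatch {
    int level;                       // refinement level, 0 = coarsest
    std::array<double, 3> origin;    // lower corner in physical space
    std::array<double, 3> spacing;   // cell size at this level
    std::array<int, 3> dims;         // number of cells per axis
    std::vector<double> data;        // cell-centered values; 3 per cell for a vector field
};

// The hierarchy is a list of patches per level; values in finer patches
// replace the coarse cells they cover.
using AMRHierarchy = std::vector<std::vector<AMRPatch>>;
```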

Integral curves
– Integral curves for visualization of streamlines, streaklines, pathlines, etc.
– An essential visualization tool providing an easy understanding of flow data

Integral curves
– Numerical integration involves:
  1. Selecting an initial point
  2. Locating the cell containing the point
  3. Interpolating the vector field and calculating the next point (see the sketch below)
[Figure: successive integration points p_k, p_{k+1}, p_{k+2}, p_{k+3}, p_{k+4} along the curve]
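As a sketch of step 3, a single fourth-order Runge-Kutta advance of the curve might look like this; the `sampleField` callback is a hypothetical stand-in for the cell-location and interpolation machinery:

```cpp
#include <array>
#include <functional>

using Vec3 = std::array<double, 3>;

// Returns a*x + y componentwise.
static Vec3 axpy(double a, const Vec3& x, const Vec3& y) {
    return { a * x[0] + y[0], a * x[1] + y[1], a * x[2] + y[2] };
}

// One RK4 step of the integral curve x' = v(x).  `sampleField` locates the
// cell containing a point and interpolates the vector field there.
Vec3 rk4Step(const Vec3& p, double h,
             const std::function<Vec3(const Vec3&)>& sampleField) {
    Vec3 k1 = sampleField(p);
    Vec3 k2 = sampleField(axpy(h / 2, k1, p));
    Vec3 k3 = sampleField(axpy(h / 2, k2, p));
    Vec3 k4 = sampleField(axpy(h, k3, p));
    Vec3 next = p;
    for (int i = 0; i < 3; ++i)
        next[i] += h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]);
    return next;
}
```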

Goal
– Integral curve computation that respects the AMR hierarchy
– Process the curve integration in each domain separately (parallel processing)
[Figure: field lines computed disregarding vs. considering the level hierarchy]

Proper handling of cell-centered data
– Use a dual-mesh representation to interpolate the vector field

Proper handling of cell-centered data
– Use additional "ghost" cells (resulting from the simulation or computed using stored information)
[Figure: "gaps" between domains vs. the dual grid using "ghost" cells]
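To make the dual-mesh idea concrete, here is a minimal sketch of trilinear interpolation on the dual of a single patch, assuming one layer of ghost cells so the dual grid covers the full patch extent; the interface handling between refinement levels is omitted, and all names are hypothetical:

```cpp
#include <array>
#include <cmath>
#include <vector>

using Vec3 = std::array<double, 3>;

// Minimal sketch: trilinear interpolation of a cell-centered field on the
// dual mesh of one patch.  Dual nodes sit at cell centers; with one ghost
// layer the dual grid spans the whole patch, so no face extrapolation is needed.
double sampleDual(const std::vector<double>& cells,   // (nx+2)*(ny+2)*(nz+2) values, ghosts included
                  int nx, int ny, int nz,             // interior cell counts
                  const Vec3& origin, const Vec3& dx, // patch geometry
                  const Vec3& p)                      // point assumed inside the patch
{
    auto idx = [&](int i, int j, int k) {             // +1 offsets skip the ghost layer
        return ((k + 1) * (ny + 2) + (j + 1)) * (nx + 2) + (i + 1);
    };
    int c[3]; double t[3];
    for (int a = 0; a < 3; ++a) {
        // Dual coordinates: the center of interior cell i lies at origin + (i + 0.5) * dx.
        double f = (p[a] - origin[a]) / dx[a] - 0.5;
        c[a] = static_cast<int>(std::floor(f));       // ranges over [-1, n-1], covered by ghosts
        t[a] = f - c[a];
    }
    double result = 0.0;
    for (int dk = 0; dk < 2; ++dk)
        for (int dj = 0; dj < 2; ++dj)
            for (int di = 0; di < 2; ++di)
                result += (di ? t[0] : 1 - t[0]) * (dj ? t[1] : 1 - t[1]) *
                          (dk ? t[2] : 1 - t[2]) *
                          cells[idx(c[0] + di, c[1] + dj, c[2] + dk)];
    return result;
}
```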

Algorithm (see the sketch below)
– Start in the domain at the finest possible level
– Build the dual mesh
– Advance an integration step
– If the step lands inside a nested domain (finer level):
  – Intersect with the bounding box of the finer domain
  – Restart the algorithm inside the finer domain
– If the step leaves the domain:
  – Intersect with the domain bounding box
  – Restart in the next domain
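Building on the Vec3, rk4Step, and AMRPatch sketches above, the outer loop might look roughly like this. The helpers finestPatchContaining, childPatchContaining, contains, clipToBox, and sampleVector are hypothetical stand-ins for queries on the nesting and neighbor metadata, not VisIt functions:

```cpp
#include <vector>

// Hypothetical helper queries on the AMR metadata (declarations only):
const AMRPatch* finestPatchContaining(const AMRHierarchy& amr, const Vec3& p);
const AMRPatch* childPatchContaining(const AMRHierarchy& amr, const AMRPatch& d, const Vec3& p);
bool contains(const AMRPatch& d, const Vec3& p);
Vec3 clipToBox(const Vec3& from, const Vec3& to, const AMRPatch& d);  // segment/box intersection
Vec3 sampleVector(const AMRPatch& d, const Vec3& p);                  // dual-mesh interpolation

std::vector<Vec3> integrateCurve(Vec3 p, double h, int maxSteps,
                                 const AMRHierarchy& amr) {
    std::vector<Vec3> curve{p};
    const AMRPatch* patch = finestPatchContaining(amr, p);   // start as fine as possible
    while (patch && static_cast<int>(curve.size()) < maxSteps) {
        Vec3 next = rk4Step(p, h, [&](const Vec3& x) { return sampleVector(*patch, x); });
        if (const AMRPatch* child = childPatchContaining(amr, *patch, next)) {
            next = clipToBox(p, next, *child);   // entered a finer patch: clip and restart there
            patch = child;
        } else if (!contains(*patch, next)) {
            next = clipToBox(p, next, *patch);   // left the patch: clip, continue in the neighbor
            patch = finestPatchContaining(amr, next);
        }
        p = next;
        curve.push_back(p);
    }
    return curve;  // a real implementation also needs step-size control and termination tests
}
```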

Implementation
– Implementation in VisIt
  – AMR as a first-class data type
– AMR data organization (sketched below):
  – Nesting structure: information about finer-level domains nested in the coarse domain
  – Neighbor structure: information about the neighbor domains in the same refinement level
  – Ghost information: additional "outer" cells around a domain; refined cells
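A hypothetical per-domain record mirroring the three structures above might look like this; these are illustrative types, not VisIt's actual ones:

```cpp
#include <array>
#include <vector>

// Hypothetical per-domain metadata (a sketch, not VisIt's actual types).
struct DomainMetadata {
    int level;                        // refinement level of this domain
    std::vector<int> nestedChildren;  // nesting structure: finer domains inside this one
    std::vector<int> neighbors;       // neighbor structure: same-level adjacent domains
    std::array<int, 3> ghostLayers;   // ghost cells per axis, from the simulation or recomputed
};
```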

Solar system simulation
– Interaction of the solar wind with the interstellar medium
– Computational region about 1000 AU; some structures 0.01 AU
– Too fine to be modeled without AMR
– AMR mesh:
  – five refinement levels
  – 20037 domains

Interplanetary magnetic field lines
[Figures: Parker spiral; field lines past the termination shock]

Interstellar magnetic field lines
[Figure: field lines wrapping the heliopause]

More examples
[Figures: argon bubble; vortex cores]

Mapped Grids
– Locally rectangular computational grid
– Mapped to physical space via a mapping function applied to each grid node

Mapped Grids
– Data: fusion simulation example
  – Magnetic/velocity field
  – Mapping field
– Visualization (see the sketch below):
  – Compute field lines in the block-structured domain
  – Map the result to physical space
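The second visualization step is a per-vertex transform of the curve computed in computational space. A minimal sketch, where `mapToPhysical` is a hypothetical stand-in for interpolating the stored mapping field:

```cpp
#include <array>
#include <vector>

using Vec3 = std::array<double, 3>;

// Hypothetical: evaluates the mapping field at a computational-space point,
// e.g. a toroidal mapping in the fusion example (declaration only).
Vec3 mapToPhysical(const Vec3& computational);

// Integrate in the rectangular computational grid, then map every curve
// vertex to physical space for rendering.
std::vector<Vec3> mapCurveToPhysical(const std::vector<Vec3>& curve) {
    std::vector<Vec3> physical;
    physical.reserve(curve.size());
    for (const Vec3& q : curve)
        physical.push_back(mapToPhysical(q));
    return physical;
}
```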

Current / Future Work
– Embedded boundary / material interfaces
  – represented as a level set
  – represented as a volume fraction
  – MIR (material interface reconstruction) challenges:
    – Preserve volume fraction
    – Continuous representation
    – Global vs. local methods

Thank you for your attention!