Implementing Hypre-AMG in NIMROD via PETSc
S. Vadlamani, Tech-X; S. Kruger, Tech-X; T. Manteuffel, CU APPM; S. McCormick, CU APPM. Funding: DE-FG02-07ER84730.


Goals
SBIR funding for "improving existing multigrid linear solver libraries applied to the extended MHD system to work efficiently on petascale computers".
HYPRE chosen because:
–multigrid has been shown to scale
–CU development, callable through the PETSc interface
–development of the library (i.e. the AMG method) will benefit all CEMM efforts
Phase I:
–explore HYPRE's solvers applied to the positive-definite matrices in NIMROD
–start a validation process for petascale scalings
Phase II:
–push development for the non-symmetric operators of the extended MHD system on high-order finite-element grids

Equations of interest
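The equations themselves did not survive the transcript; as a reference, the extended MHD system that NIMROD advances is conventionally written as follows (a standard statement of the model, not recovered from the slide):

```latex
\begin{aligned}
&\frac{\partial n}{\partial t} + \nabla\cdot(n\mathbf{V}) = 0, \\
&\rho\left(\frac{\partial \mathbf{V}}{\partial t} + \mathbf{V}\cdot\nabla\mathbf{V}\right)
  = \mathbf{J}\times\mathbf{B} - \nabla p - \nabla\cdot\boldsymbol{\Pi}, \\
&\frac{n}{\gamma-1}\left(\frac{\partial T}{\partial t} + \mathbf{V}\cdot\nabla T\right)
  = -p\,\nabla\cdot\mathbf{V} - \nabla\cdot\mathbf{q},
\qquad
\mathbf{q} = -n\left[\kappa_\parallel \hat{\mathbf{b}}\hat{\mathbf{b}}
  + \kappa_\perp\left(\mathbf{I}-\hat{\mathbf{b}}\hat{\mathbf{b}}\right)\right]\cdot\nabla T, \\
&\frac{\partial \mathbf{B}}{\partial t} = -\nabla\times\mathbf{E},
\qquad
\mathbf{E} = -\mathbf{V}\times\mathbf{B} + \eta\mathbf{J}
  + \frac{1}{ne}\left(\mathbf{J}\times\mathbf{B} - \nabla p_e\right),
\qquad
\mu_0\mathbf{J} = \nabla\times\mathbf{B}.
\end{aligned}
```

The Hall and electron-pressure terms in Ohm's law are the "extended" contributions; dropping them recovers resistive MHD, and the anisotropic heat flux \(\mathbf{q}\) is the source of the temperature-advance difficulty discussed next.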

Major Difficulties for the MHD System
The MHD equations yield matrices that are difficult to invert, for three reasons:
–The velocity advance has a 3D matrix which stabilizes the MHD waves to high accuracy.
–The magnetic field advance has a 3D matrix due to the temperature-dependent resistivity, which varies by 5 orders of magnitude across the simulation domain.
–The temperature advance has a three-dimensional, anisotropic operator whose parallel diffusion coefficient is 5 to 10 orders of magnitude larger than the perpendicular coefficient.
All of these matrices, while ill-conditioned, are Hermitian. Inclusion of the extended MHD terms not only increases the condition number (by making the largest eigenvalue larger), but is fundamentally non-symmetric in nature. These are very hard linear problems; we must use solvers that scale.
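The effect of the stiff parallel diffusion coefficient on the implicitly advanced operator can be seen in a small sketch (a toy 2D finite-difference model, not NIMROD's operator; `kpar` and `kperp` stand in for the parallel and perpendicular conductivities, with x as the "parallel" direction):

```python
import numpy as np

def laplacian_1d(n):
    # standard second-difference matrix with Dirichlet boundaries
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def implicit_temperature_matrix(n, dt, kpar, kperp):
    # toy implicit temperature advance: I + dt*(kpar*Dxx + kperp*Dyy),
    # built as a Kronecker sum on an n x n grid
    I = np.eye(n)
    L = laplacian_1d(n)
    D = kpar * np.kron(L, I) + kperp * np.kron(I, L)
    return np.eye(n * n) + dt * D

# isotropic vs. strongly anisotropic diffusion (kpar/kperp = 1e8)
iso = implicit_temperature_matrix(12, 1.0, 1.0, 1.0)
aniso = implicit_temperature_matrix(12, 1.0, 1e8, 1.0)
print(np.linalg.cond(iso), np.linalg.cond(aniso))
```

Even on this tiny grid the anisotropic implicit matrix is markedly worse conditioned than the isotropic one, and the gap widens with resolution and with the coefficient ratio.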

Reminder: Algebraic Multigrid
The smoothing process (also known as relaxation) is an application of a linear solver (usually iterative) that results in a smooth error. The coarse-grid correction is made up of three subprocesses:
–(1) restriction: transfer of information to a coarse grid,
–(2) coarse-grid solve: solving the linear system on the chosen coarse grid,
–(3) prolongation: transfer or interpolation of information back to the finer grid.
The effectiveness of the algorithm relies on carefully choosing the restriction and the coarse-grid solve, which depend on attributes of the system of equations being solved.
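The subprocesses above fit together as a two-grid cycle, sketched here for a 1D Poisson model problem (a sketch, not HYPRE's implementation: the transfer operators are geometric, full-weighting restriction and linear interpolation, whereas AMG builds them algebraically from the matrix):

```python
import numpy as np

def poisson_matrix(n):
    # 1D Poisson model problem, -u'' = f, Dirichlet boundaries
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def jacobi_smooth(A, b, x, iters=3, omega=2.0 / 3.0):
    # weighted Jacobi relaxation: damps high-frequency error components
    D = np.diag(A)
    for _ in range(iters):
        x = x + omega * (b - A @ x) / D
    return x

def two_grid_cycle(A, b, x):
    n = A.shape[0]
    nc = (n - 1) // 2
    # linear-interpolation prolongation P and full-weighting restriction R
    P = np.zeros((n, nc))
    for j in range(nc):
        i = 2 * j + 1
        P[i - 1, j] = 0.5
        P[i, j] = 1.0
        P[i + 1, j] = 0.5
    R = 0.5 * P.T
    x = jacobi_smooth(A, b, x)            # pre-smooth
    r = b - A @ x                         # fine-grid residual
    Ac = R @ A @ P                        # Galerkin coarse-grid operator
    ec = np.linalg.solve(Ac, R @ r)       # coarse-grid solve
    x = x + P @ ec                        # prolongate the correction
    return jacobi_smooth(A, b, x)         # post-smooth

n = 31
A = poisson_matrix(n)
b = np.random.default_rng(0).standard_normal(n)
x = np.zeros(n)
for _ in range(20):
    x = two_grid_cycle(A, b, x)
```

Recursing on the coarse-grid solve instead of solving it exactly gives the usual V-cycle; AMG replaces the hand-built P and R with transfers inferred from the strength of connections in A.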

Typical AMG process

Using PETSc
Level of PETSc compliance:
–for developer support
–PETSc programs usually initialize and kill their own MPI communicators; need to match those patterns
Calling from Fortran (77 mentality):
–#include "include/finclude/includefile.h", and use *.F so the preprocessor runs
–be careful to "include" only once in each encapsulated subroutine
–must access arrays via an integer index name internal to PETSc
–zero indexing *IMPORTANT*

Sample code to set elements of an array

#define xx_a(ib) xx_v(xx_i + (ib))

      double precision xx_v(1)
      PetscOffset xx_i
      PetscErrorCode ierr
      integer i, n
      Vec x

      call VecGetArray(x,xx_v,xx_i,ierr)
      call VecGetLocalSize(x,n,ierr)
      do i=1,n
        xx_a(i) = 3*i + 1
      enddo
      call VecRestoreArray(x,xx_v,xx_i,ierr)

Hypre calls in PETSc
No change to matrix and vector creation. The preconditioner type is set to one of the HYPRE types within a linear system solver:
–KSP package
–needed for the smoothing process
–PCHYPRESetType()
–PetscOptionsSetValue()
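The same selection can be made at run time through PETSc's options database instead of in code; a sketch (option names from PETSc's Hypre interface; the threshold value is purely illustrative):

```
-ksp_type gmres
-pc_type hypre
-pc_hypre_type boomeramg
-pc_hypre_boomeramg_strong_threshold 0.5
```

Driving the choice through options keeps the NIMROD source unchanged while different Hypre preconditioners are compared.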

Work Plan for Phase I SBIR
Implement HYPRE solvers in NIMROD via PETSc:
–understand the full NIMROD data structure
–maintain backward compatibility with the current solvers
–run comparative simulations with benchmark cases
Establish metrics for solver efficiency.
Initial analysis of MG capability for extended MHD.

This Week
Revisit work done with the SuperLU interface:
–implementing the distributed interface will give better insight into the NIMROD data structure and its communication patterns
Obtain troublesome matrices in triplet format:
–send to Sherry Li for further analysis and SuperLU development
–possibility of visualization (matplotlib, etc.)

Summary
Beginning to implement PETSc in NIMROD. Will explore the HYPRE solvers, using the derived metrics to establish their effectiveness. Will explore mathematical properties of the extended MHD system to understand whether AMG can still scale while solving these particular non-symmetric matrices. [Way in the future]: may need to use BoxMG (LANL) for the anisotropic temperature advance.