Innovative Multigrid Methods


Innovative Multigrid Methods (jh180022-NAHI)
Kengo Nakajima (Information Technology Center, The University of Tokyo, Japan)

T. Iwashita (1), K. Nakajima (Leading PI) (2,4), T. Hanawa (2), A. Ida (2), T. Hoshino (2), N. Nomura (2), A. Fujii (3), R. Yoda (3), M. Kawai (4), S. Ohshima (5), M. Bolten (Co-PI) (6), L. Claus (6), G. Wellein (7), O. Marques (8)
1: Hokkaido U., 2: U. Tokyo, 3: Kogakuin U., 4: RIKEN R-CCS, 5: Kyushu U., 6: U. Wuppertal, Germany, 7: FAU, Germany, 8: LBNL, USA

Background: Multigrid (MG)
- A scalable multilevel method for solving linear systems, in geometric (GMG) and algebraic (AMG) variants; well suited to large-scale problems.
- The number of iterations to convergence typically stays constant as the problem size grows, so the parallel multigrid method is expected to be one of the most powerful tools on exascale systems.
- Classically applied to rather well-conditioned problems (e.g. Poisson's equation); many sophisticated techniques have been developed to make MG robust for the ill-conditioned problems that arise in real-world scientific and engineering applications.
- Goal: robust and efficient algorithms for GMG, AMG, and PinST towards the Exascale/Post-Moore era.

Background: Parallel-in-Space/Time (PinST)
- Domain decomposition parallelizes only in the space direction; PinST adds parallel computation in the time direction.
- MGRIT (Multigrid-Reduction-in-Time) (R. D. Falgout et al., 2014).
- PinST/TSC: MGRIT has been very successful in various types of applications, including nonlinear ones, but it suffers from instability on the coarse-level grid in time. TSC (Time Segment Correction) (Kaneko, Fujii, Iwashita et al., 2017) requires no coarse time step at coarser levels in the time domain.
- PinST/Exp: PinST has mainly been applied to implicit problems because of time-step constraints, yet it is most needed in explicit methods, where many time steps are required.
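As background, the spatial V-cycle that GMG builds on (and that the time-direction methods above mimic) can be sketched in a few lines. This is an illustrative toy for 1D Poisson, not the project's library; all function names and parameters here are our own.

```python
# Minimal two-grid/multigrid V-cycle for -u'' = f on [0,1] with u(0)=u(1)=0,
# uniform grid, Gauss-Seidel smoothing, full-weighting restriction and
# linear interpolation. Illustrative sketch only.

def smooth(u, f, h, sweeps):
    """Gauss-Seidel sweeps on the central-difference discretization."""
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

def residual(u, f, h):
    r = [0.0] * len(u)
    for i in range(1, len(u) - 1):
        r[i] = f[i] - (2.0 * u[i] - u[i - 1] - u[i + 1]) / (h * h)
    return r

def restrict(r):
    """Full weighting: fine grid (n = 2m+1 points) -> coarse grid."""
    nc = (len(r) - 1) // 2 + 1
    rc = [0.0] * nc
    for i in range(1, nc - 1):
        rc[i] = 0.25 * r[2 * i - 1] + 0.5 * r[2 * i] + 0.25 * r[2 * i + 1]
    return rc

def prolong(ec, n):
    """Linear interpolation: coarse grid -> fine grid of n points."""
    e = [0.0] * n
    for i in range(len(ec)):
        e[2 * i] = ec[i]
    for i in range(1, n - 1, 2):
        e[i] = 0.5 * (e[i - 1] + e[i + 1])
    return e

def vcycle(u, f, h, levels):
    if levels == 1 or len(u) <= 3:
        return smooth(u, f, h, 50)          # approximate coarsest-grid solve
    u = smooth(u, f, h, 3)                  # pre-smoothing
    rc = restrict(residual(u, f, h))        # restrict residual
    ec = vcycle([0.0] * len(rc), rc, 2 * h, levels - 1)
    e = prolong(ec, len(u))                 # coarse-grid correction
    u = [ui + ei for ui, ei in zip(u, e)]
    return smooth(u, f, h, 3)               # post-smoothing

n = 65
h = 1.0 / (n - 1)
f = [1.0] * n                               # -u'' = 1; exact u(x) = x(1-x)/2
u = [0.0] * n
for _ in range(10):
    u = vcycle(u, f, h, levels=4)
```

Because the coarse-grid correction eliminates the smooth error components that Gauss-Seidel alone reduces slowly, the iteration count is essentially independent of n, which is the property the poster's Background section refers to.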
- New method for PinST with explicit time marching (PinST-Exp): apply a coarser mesh in space at the coarser levels in time.

Development: Two-Year Project (FY2018 & 2019)

Serial optimization for GMG & AMG
- Robust and efficient smoother: MS-BMC-GS (multiplicative-Schwarz-type block multicolor Gauss-Seidel) (Kawai, Ida, Iwashita, Bolten, Claus)
- Matrix storage format SELL-C-s (Nakajima, Hoshino, Wellein) [Kreutzer, Wellein et al. 2014]
- Extraction of near-kernel vectors (Fujii, Nomura, Bolten, Marques)

Parallel optimization for GMG & AMG
- Parallel reordering/aggregation (Kawai, Wellein)
- AM-hCGA: adaptive multilevel hierarchical coarse grid aggregation (hCGA) (Nakajima, Nomura, Fujii, Marques)
- Fast parallel mesh generation with IME/burst buffer (Hanawa, Nakajima)

Parallel-in-Space/Time (PinST)
- PinST-TSC (Time Segment Correction) for nonlinear problems (Fujii, Iwashita, Yoda)
- PinST-Explicit: explicit time marching (Nakajima, Bolten, Yoda)

Applications
- GMG: 3D groundwater flow
- AMG: 3D solid mechanics
- PinST-TSC: nonlinear heat transfer
- PinST-Explicit: heat transfer; incompressible Navier-Stokes with pressure correction

Target systems
- Oakforest-PACS (U. Tokyo): Intel Xeon Phi
- ITO-A (Kyushu U.): Intel Xeon (Skylake)
- New system (Hokkaido U.): Intel Xeon (Skylake)

Outcome: an open-source library for various types of applications; one of the first practical MG libraries to include PinST, especially PinST-Exp.

Selected results
- MGRIT vs. MGRIT+TSC, efficiency on a 1D problem: MGRIT+TSC is the best [Kaneko, Fujii, Iwashita et al. 2017].
- Parallel reordering: hierarchical reordering for a graphene application (source: "The electronic properties of graphene") with 500M DOF on 4,800 nodes of the Fujitsu FX10 (Oakleaf-FX), using blocked/shifted ICCG, gives constant iteration counts over a wide range of node counts [Kawai et al. 2017].
- AM-hCGA (adaptive multilevel hierarchical coarse grid aggregation): weak and strong scaling with ELL, sliced ELL, and SELL-C-s; MGCG for 3D groundwater flow on the Fujitsu FX10 (Oakleaf-FX), up to 4,096 nodes and 17,179,869,184 DOF, showing the effects of hCGA [Nakajima 2014].
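The SELL-C-s storage format mentioned above packs rows into fixed-height chunks, sorted by row length within a scope of s rows, so that SIMD lanes process C rows in lockstep with little padding. A hedged toy construction (plain Python, row-major lists instead of the real column-major SIMD layout; the matrix and parameter values are ours):

```python
# Toy SELL-C-sigma construction and sparse matrix-vector product.
# C = chunk height (rows processed together); SIGMA = sorting scope.
# Real implementations store chunks column-major for vectorization.

C = 2
SIGMA = 4

# matrix given as per-row lists of (column, value) pairs (a 4x4 tridiagonal)
rows = [
    [(0, 4.0), (1, -1.0)],
    [(0, -1.0), (1, 4.0), (2, -1.0)],
    [(1, -1.0), (2, 4.0), (3, -1.0)],
    [(2, -1.0), (3, 4.0)],
]

def build_sell(rows, C, SIGMA):
    n = len(rows)
    # sort rows by descending nonzero count within each SIGMA-window,
    # keeping the permutation so results map back to original row ids
    perm = []
    for start in range(0, n, SIGMA):
        window = list(range(start, min(start + SIGMA, n)))
        window.sort(key=lambda i: -len(rows[i]))
        perm.extend(window)
    chunks = []
    for start in range(0, n, C):
        ids = perm[start:start + C]
        width = max(len(rows[i]) for i in ids)   # pad chunk to longest row
        cols = [[rows[i][k][0] if k < len(rows[i]) else 0
                 for k in range(width)] for i in ids]
        vals = [[rows[i][k][1] if k < len(rows[i]) else 0.0
                 for k in range(width)] for i in ids]
        chunks.append((ids, width, cols, vals))
    return chunks

def sell_spmv(chunks, x, n):
    y = [0.0] * n
    for ids, width, cols, vals in chunks:
        for k in range(width):              # walk chunk column by column
            for j, i in enumerate(ids):     # all C rows advance in lockstep
                y[i] += vals[j][k] * x[cols[j][k]]
    return y

chunks = build_sell(rows, C, SIGMA)
x = [1.0, 2.0, 3.0, 4.0]
y = sell_spmv(chunks, x, len(rows))
```

Sorting within a window of SIGMA rows (rather than globally) keeps rows of similar length in the same chunk, which minimizes zero padding while limiting how far rows are permuted from their original positions.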