Sparse linear solvers applied to parallel simulations of underground flow in porous and fractured media
A. Beaudoin (1), J.-R. de Dreuzy (2), J. Erhel (1) and H. Mustapha (1)
(1) IRISA / INRIA, Rennes, France
(2) Department of Geosciences, University of Rennes, France
Matrix Computations and Scientific Computing Seminar, Berkeley, 26 October 2005
2D heterogeneous porous medium
Heterogeneous permeability field Y = ln(K), with a prescribed correlation function
3D fracture network with an impervious matrix
The fracture length distribution has a great impact: power law n(l) = l^(-a)
Three types of networks, based on which moments of the length distribution are finite:
- 2 < a < 3: finite mean
- 3 < a < 4: finite mean and variance
- a > 4: finite mean, variance and third moment
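The three regimes follow directly from when the moments of the power-law density converge; a short check (assuming a lower cutoff l_min and a > 1):

```latex
% k-th moment of the length density n(l) = l^{-a} on [l_min, +infinity):
\langle l^k \rangle
  = \int_{l_{\min}}^{\infty} l^{k}\, l^{-a}\, dl
  = \left[ \frac{l^{\,k-a+1}}{k-a+1} \right]_{l_{\min}}^{\infty},
% finite if and only if k - a + 1 < 0, i.e. a > k + 1:
% mean (k = 1) finite for a > 2, variance (k = 2) for a > 3,
% third moment (k = 3) for a > 4.
```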
Flow model
Equations: Darcy's law Q = -K grad(h) and mass conservation div(Q) = 0, so the head h satisfies div(K grad(h)) = 0.
Boundary conditions, for both the 2D porous medium and the 3D fracture network: fixed head on two opposite sides, null flux on the other sides.
Numerical method for the 2D heterogeneous porous medium
Finite volume method on a regular mesh, leading to a large sparse structured matrix with 5 nonzeros per row
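A minimal sketch of such an assembly (not the authors' code): a 5-point finite volume matrix for div(K grad h) = 0 on an n x n grid, with fixed head on the left and right edges and null flux on the top and bottom ones, matching the flow model above. The harmonic averaging of K across faces is an assumption; the talk does not specify it.

```python
import numpy as np
import scipy.sparse as sp

def fv_matrix(K):
    """5-point finite volume matrix for -div(K grad h) = 0 on an n x n grid."""
    n = K.shape[0]
    idx = lambda i, j: i * n + j
    rows, cols, vals = [], [], []
    for i in range(n):
        for j in range(n):
            diag = 0.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < n and 0 <= jj < n:
                    # harmonic mean of K across the shared face (unit spacing)
                    t = 2.0 * K[i, j] * K[ii, jj] / (K[i, j] + K[ii, jj])
                    rows.append(idx(i, j)); cols.append(idx(ii, jj)); vals.append(-t)
                    diag += t
                elif dj != 0:
                    # fixed-head (Dirichlet) boundary on the left/right edges,
                    # ghost cell at half spacing; top/bottom edges keep null flux
                    diag += 2.0 * K[i, j]
            rows.append(idx(i, j)); cols.append(idx(i, j)); vals.append(diag)
    return sp.csr_matrix((vals, (rows, cols)), shape=(n * n, n * n))

rng = np.random.default_rng(0)
K = np.exp(rng.normal(size=(32, 32)))   # uncorrelated stand-in for Y = ln(K)
A = fv_matrix(K)
print(A.nnz / A.shape[0])               # about 5 nonzeros per row
```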
Sparse matrix for the 2D heterogeneous porous medium
[Figure: sparsity pattern of the matrix for n = 32, with a zoom]
Numerical method for the 3D fracture network
Mixed hybrid finite element method on an unstructured, conforming triangular mesh, leading to a large sparse unstructured matrix with about 5 nonzeros per row
Sparse matrix for the 3D fracture network
[Figure: sparsity pattern of the matrix, N = 8181, for a network of 7 fractures and their intersections, with a zoom]
Complexity analysis with PSPASES
Memory requirements for the matrix A and its Cholesky factor L
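PSPASES computes a sparse Cholesky factorization in parallel; as a serial stand-in, scipy's sparse LU can illustrate how the memory for L compares with that of A (reusing the matrix A from the sketch above):

```python
import scipy.sparse.linalg as spla

# LU factorization with a fill-reducing (COLAMD) ordering, serial stand-in
# for the parallel Cholesky factorization computed by PSPASES
lu = spla.splu(A.tocsc())
print("nnz(A) =", A.nnz)
print("nnz(L) =", lu.L.nnz)   # memory for the factor vs. the original matrix
```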
Complexity analysis with PSPASES
CPU time of matrix generation, linear solving and flow computation, obtained with two processors
2D porous medium: memory size and CPU time with PSPASES
Theory: NZ(L) = O(N log N); measured slope about 1
Theory: Time = O(N^1.5); measured slope about 1.5
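The slopes are read off a log-log plot; a sketch of that fit, with hypothetical sizes and measurements in place of the talk's data:

```python
import numpy as np

N    = np.array([1e4, 4e4, 1.6e5, 6.4e5])       # problem sizes (hypothetical)
nzL  = np.array([2.1e5, 9.3e5, 4.1e6, 1.8e7])   # nnz(L) per size (hypothetical)
time = np.array([0.4, 3.1, 26.0, 204.0])        # CPU times in s (hypothetical)

# least-squares slopes in log-log space
slope_mem  = np.polyfit(np.log(N), np.log(nzL), 1)[0]   # expect about 1 (N log N)
slope_time = np.polyfit(np.log(N), np.log(time), 1)[0]  # expect about 1.5 (N^1.5)
print(slope_mem, slope_time)
```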
3D fracture network: memory size and CPU time with PSPASES
Conjectured from the measurements: NZ(L) = O(N)? Time = O(N)? Theory still to be done
2D porous medium: condition number estimated by MUMPS
To be checked: whether to apply scaling or not
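MUMPS can report such an estimate directly; as a stand-in, a 1-norm condition estimate in the Hager/Higham style can be assembled from scipy's norm estimator and the factorization above:

```python
import scipy.sparse.linalg as spla

# ||A||_1 and ||A^{-1}||_1 estimated without forming A^{-1}: the inverse is
# applied through the triangular solves of the factorization lu
normA = spla.onenormest(A)
Ainv = spla.LinearOperator(A.shape, matvec=lu.solve,
                           rmatvec=lambda b: lu.solve(b, trans='T'),
                           dtype=A.dtype)
print("cond_1(A) approx", normA * spla.onenormest(Ainv))
```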
2D porous medium: residuals with PSPASES
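The quantity plotted is presumably the relative residual; a minimal check of it, reusing A and its factorization from the sketches above:

```python
import numpy as np

# a direct solver is accurate when ||b - A x|| / ||b|| is near machine precision
b = np.random.default_rng(1).normal(size=A.shape[0])
x = lu.solve(b)
print("relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```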
Parallel architecture
Distributed memory: 2 nodes of 32 bi-processors (AMD Opteron processors at 2 GHz with 2 GB of RAM)
Scalability analysis with PSPASES: speed-up
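Speed-up and efficiency are computed from wall-clock times; a sketch with hypothetical timings (not the talk's measurements):

```python
import numpy as np

procs = np.array([1, 2, 4, 8, 16, 32])
T     = np.array([120.0, 64.0, 34.0, 19.0, 11.5, 7.8])  # hypothetical times (s)

speedup    = T[0] / T            # S(p) = T(1) / T(p)
efficiency = speedup / procs     # E(p) = S(p) / p
print(np.c_[procs, speedup, efficiency])
```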
Scalability analysis with PSPASES: isoefficiency
Isoefficiency measures how fast the problem size N must grow with the number of processors P for the parallel efficiency to stay constant.
[Table: isoefficiency relations between N, P and the time T(p), for the 2D medium and the 3D fracture network; no value could be obtained in one case]
2D porous medium: number of V-cycles with HYPRE/SMG
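HYPRE/SMG is a parallel semicoarsening geometric multigrid solver; as a serial stand-in, pyamg's algebraic multigrid shows how the cycle count is read off the residual history (reusing A from the sketches above):

```python
import numpy as np
import pyamg

ml = pyamg.ruge_stuben_solver(A)          # classical AMG hierarchy
b = np.random.default_rng(2).normal(size=A.shape[0])
res = []
x = ml.solve(b, tol=1e-8, residuals=res)  # default cycle is a V-cycle
# res holds one residual norm per cycle, plus the initial one
print("V-cycles:", len(res) - 1)
```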
Comparison between PSPASES and HYPRE/SMG: CPU time
[Figure: CPU times of PSPASES and HYPRE]
Comparison between PSPASES and HYPRE/SMG: speed-up
[Figure: speed-ups of HYPRE and PSPASES]
Perspectives
- Porous medium: large sigma, up to 9, and large N, up to 10^8
- Porous medium: 3D problems, N up to …
- Porous medium: scaling, iterative refinement, multigrid adapted to the heterogeneous permeability field
- 3D fracture networks: large N, up to 10^9
- Model for complexity and scalability issues
- 2-level nested dissection, subdomain method
- Parallel architectures: up to 128 processors
- Monte Carlo simulations: grid computing, with clusters for each random simulation
- Parallel advection-diffusion numerical models