A comparison of direct and multigrid sparse linear solvers for highly heterogeneous flux computations
A. Beaudoin, J.-R. De Dreuzy and J. Erhel
ECCOMAS CFD 06, Egmond aan Zee, The Netherlands, September 2006
2D heterogeneous permeability field
Stochastic model: Y = ln(K), a Gaussian random field with a prescribed correlation function
Physical flow model: Q = -K grad(h), div(Q) = 0
Boundary conditions: fixed head and null flux
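The slides only name the stochastic model, so here is a minimal sketch of one common way to sample such a field: a Gaussian log-permeability Y = ln(K) drawn by circulant embedding with the FFT. The function name, grid size, and the choice of an exponential covariance are assumptions for illustration, not the authors' generator.

```python
import numpy as np

def lognormal_permeability(n, sigma, corr_len, seed=0):
    """Sample K = exp(Y) on an n x n grid, where Y is a stationary
    Gaussian field with standard deviation sigma and exponential
    covariance C(r) = sigma^2 exp(-r / corr_len), via circulant
    embedding (Dietrich-Newsam FFT method)."""
    rng = np.random.default_rng(seed)
    m = 2 * n  # doubled, periodic grid to embed the covariance
    idx = np.minimum(np.arange(m), m - np.arange(m))  # wrap-around lags
    r = np.hypot(idx[:, None], idx[None, :])
    cov = sigma**2 * np.exp(-r / corr_len)
    lam = np.fft.fft2(cov).real        # eigenvalues of the circulant matrix
    lam = np.maximum(lam, 0.0)         # clip tiny negative embedding values
    noise = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
    field = np.fft.fft2(np.sqrt(lam) * noise) / m
    Y = field.real[:n, :n]             # Gaussian field, mean 0, std ~ sigma
    return np.exp(Y)                   # log-normal permeability K

K = lognormal_permeability(n=256, sigma=3.0, corr_len=10.0)
```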
Examples of simulations for σ = 0.5 and σ = 3
Numerical method for the 2D heterogeneous porous medium
Finite volume method on a regular mesh
Yields a large sparse structured matrix of order N with 5 nonzero entries per row (5-point stencil)
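The slides do not show the discretization, but a two-point flux finite volume scheme on a regular grid produces exactly this 5-point structure. A minimal serial sketch, assuming unit grid spacing and harmonic-mean inter-cell transmissibilities (fixed-head boundary rows are omitted for brevity):

```python
import scipy.sparse as sp

def assemble_fv_matrix(K):
    """Assemble the 5-point finite-volume matrix for div(K grad h) = 0
    on an n x n cell-centered grid with unit spacing. Inter-cell
    transmissibilities use the harmonic mean of the two adjacent cell
    permeabilities. Fixed-head boundaries would replace the
    corresponding rows; they are left out of this sketch."""
    n = K.shape[0]
    N = n * n
    A = sp.lil_matrix((N, N))
    for i in range(n):
        for j in range(n):
            p = i * n + j
            # Visit the four neighbors; skip across the domain boundary
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < n and 0 <= jj < n:
                    t = 2.0 * K[i, j] * K[ii, jj] / (K[i, j] + K[ii, jj])
                    A[p, ii * n + jj] = -t
                    A[p, p] += t
    return A.tocsr()
```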
Sparse direct solver: memory size and CPU time with PSPASES
Theory: NZ(L) = O(N log N); Time = O(N^1.5)
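PSPASES itself is a parallel MPI sparse Cholesky solver; as a serial stand-in to illustrate the fill-in bound, one can factor the same matrix with SciPy's SuperLU wrapper and count the nonzeros of the factors. The diagonal shift and right-hand side below are sketch artifacts (they substitute for the real fixed-head boundary rows), not from the paper:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

n = K.shape[0]                          # K from the sketch above
# Tiny shift keeps this boundary-free sketch matrix nonsingular
A = (assemble_fv_matrix(K) + 1e-8 * sp.eye(n * n)).tocsc()
b = np.zeros(n * n); b[0] = 1.0         # arbitrary unit source term
lu = splu(A)                            # sparse LU factorization
print("fill-in NZ(L) + NZ(U):", lu.L.nnz + lu.U.nnz)  # compare O(N log N)
h = lu.solve(b)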
Multigrid sparse solver: convergence and CPU time with HYPRE/SMG
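HYPRE's SMG is a parallel geometric (structured) multigrid; a rough serial stand-in is an algebraic multigrid from the PyAMG package, which also exposes per-cycle residuals for convergence studies. Note this swaps in classical Ruge-Stuben AMG, not SMG itself:

```python
import pyamg

ml = pyamg.ruge_stuben_solver(A.tocsr())       # AMG hierarchy for the FV matrix
residuals = []
h = ml.solve(b, tol=1e-8, residuals=residuals)  # V-cycles until tolerance
cycles = len(residuals) - 1
factor = (residuals[-1] / residuals[0]) ** (1.0 / max(cycles, 1))
print(f"{cycles} cycles, average convergence factor {factor:.3f}")
```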
Parallel architecture: distributed memory, 2 clusters of 32 dual-processor nodes (AMD Opteron 2 GHz processors with 2 GB of RAM)
Direct and multigrid solvers: parallel CPU times for various problem sizes
Direct and multigrid solvers: speed-ups for various problem sizes
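The quantities plotted here follow the usual definitions; a tiny helper (name hypothetical) computing them from measured wall-clock times:

```python
def speedup_and_efficiency(times_by_p):
    """times_by_p maps processor count p to measured wall-clock time T(p).
    Returns {p: (S(p), E(p))} with speed-up S(p) = T(1)/T(p) and
    parallel efficiency E(p) = S(p)/p."""
    t1 = times_by_p[1]
    return {p: (t1 / t, t1 / (t * p)) for p, t in times_by_p.items()}
```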
Direct solver: scalability analysis with PSPASES (isoefficiency)
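The isoefficiency notion used here follows the standard definition (the slide's own formula did not survive extraction):

```latex
% Parallel efficiency on P processors for problem size N:
E(N,P) = \frac{T_1(N)}{P \, T_P(N)}
% The isoefficiency function N = f(P) gives how fast N must grow
% with P to hold E(N,P) constant; slower growth of f means the
% solver scales better.
```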
Multigrid solver: impact of permeability standard deviation and system size on convergence and CPU time
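Using the stand-in pieces sketched earlier, such a sensitivity study could be scripted as below; the grid size and σ values are illustrative, not the slide's actual experiment:

```python
import time
import numpy as np
import scipy.sparse as sp
import pyamg

for sigma in (0.5, 1.0, 2.0, 3.0):                 # illustrative variances
    K = lognormal_permeability(n=128, sigma=sigma, corr_len=10.0)
    A = (assemble_fv_matrix(K) + 1e-8 * sp.eye(128 * 128)).tocsr()
    b = np.zeros(128 * 128); b[0] = 1.0
    res = []
    t0 = time.perf_counter()
    pyamg.ruge_stuben_solver(A).solve(b, tol=1e-8, residuals=res)
    print(f"sigma={sigma}: {len(res) - 1} cycles, "
          f"{time.perf_counter() - t0:.2f} s")
```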
Direct and multigrid solvers: impact of permeability standard deviation
Direct and multigrid solvers: summary
PSPASES is more efficient for small matrices
PSPASES is scalable and more efficient with many processors
HYPRE requires less memory
HYPRE is more efficient for large matrices
HYPRE is very sensitive to the permeability variance
Open question: another method for large matrices and large variance?