Parallel Iterative Solvers with the Selective Blocking Preconditioning for Simulations of Fault-Zone Contact
Kengo Nakajima, GeoFEM/RIST, Japan
3rd ACES Workshop, May 5-10, 2002, Maui, Hawaii, USA

Slide 2: Solving Ax=b
Solving large-scale linear equations Ax=b is the most important and most expensive part of many kinds of scientific computing, for both linear and nonlinear applications. A variety of methods has been proposed and developed, for dense and for sparse matrices, classified into direct and iterative methods.
- Dense matrices: globally coupled problems (BEM, spectral methods, MO/MD for gases and liquids).
- Sparse matrices: locally defined problems (FEM, FDM, DEM, MD for solids, BEM w/FMP).
I am usually working on solving Ax=b !!!

Slide 3: Direct Methods
Gaussian elimination / LU factorization: computes the action of A^-1 directly.
Pros:
- Robust for a wide range of applications.
- Good for both dense and sparse matrices.
Cons:
- More expensive than iterative methods (memory, CPU).
- Not suitable for parallel and vector computation, due to its global operations.

Slide 4: Iterative Methods
Stationary methods (SOR, Gauss-Seidel, etc.) and nonstationary methods (CG, GMRES, BiCGSTAB, etc.).
Pros:
- Less expensive than direct methods, especially in memory.
- Suitable for parallel and vector computing.
Cons:
- Convergence strongly depends on the problem and boundary conditions (condition number, etc.).
- Preconditioning is required.
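To make the nonstationary class concrete, here is a minimal conjugate-gradient loop for a symmetric positive-definite system, written in Fortran like the GeoFEM kernels shown later. It is an illustrative sketch with a dense matmul, not GeoFEM code: GeoFEM stores A in sparse block format and runs in parallel.

subroutine cg_solve(n, A, b, x, eps, maxit, iters)
! Minimal unpreconditioned CG sketch for a dense SPD matrix A
! (illustrative only; names and argument list are assumptions).
  implicit none
  integer, intent(in)    :: n, maxit
  real(8), intent(in)    :: A(n,n), b(n), eps
  real(8), intent(inout) :: x(n)
  integer, intent(out)   :: iters
  real(8) :: r(n), p(n), q(n)
  real(8) :: rho, rho_old, alpha, beta, bnrm
  integer :: iter

  r= b - matmul(A, x)                ! initial residual r0 = b - A x0
  p= r
  rho= dot_product(r, r)
  bnrm= sqrt(dot_product(b, b))
  iters= maxit
  do iter= 1, maxit
    q= matmul(A, p)                  ! the one matrix-vector product per iteration
    alpha= rho / dot_product(p, q)   ! step length
    x= x + alpha*p
    r= r - alpha*q
    rho_old= rho
    rho= dot_product(r, r)
    if (sqrt(rho) <= eps*bnrm) then  ! converged: ||r||/||b|| <= eps
      iters= iter
      exit
    endif
    beta= rho / rho_old              ! new search direction
    p= r + beta*p
  enddo
end subroutine cg_solve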

Slide 5: Preconditioning for Iterative Methods
The convergence rate of iterative solvers strongly depends on the spectral properties (eigenvalue distribution) of the coefficient matrix A. A preconditioner M transforms the linear system into one with more favorable spectral properties.
- In "ill-conditioned" problems the condition number (the ratio of the maximum to the minimum eigenvalue, if A is symmetric) is large.
- M transforms the original equation Ax=b into A'x=b', where A' = M^-1 A and b' = M^-1 b.
- ILU (incomplete LU factorization) and IC (incomplete Cholesky factorization) are well-known preconditioners.
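For reference, the standard CG convergence bound that makes "favorable spectral properties" precise (a well-known result for symmetric positive-definite systems, not specific to this talk): with condition number kappa = lambda_max / lambda_min, the A-norm error satisfies

  ||x_k - x*||_A <= 2 * ( (sqrt(kappa) - 1) / (sqrt(kappa) + 1) )^k * ||x_0 - x*||_A

so a preconditioner that reduces kappa(M^-1 A) directly cuts the iteration count.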

Slide 6: Strategy in GeoFEM
- 3D linear elastic problem on a simple cubic geometry, on a Hitachi SR8000/MPP with 128 SMP nodes (1024 PEs) (not ES40, unfortunately), with a block ICCG solver. The largest problem size so far is 805,306,368 DOF.
- An iterative method is the ONLY choice for large-scale parallel computing.
- Problem-specific preconditioning is the most important issue, although traditional ILU(0)/IC(0) cover a wide range of applications.
Performance: 16 SMP nodes, 100,663,296 DOF, 42.4 GFLOPS; 128 SMP nodes, 805,306,368 DOF.

Slide 7: Contact Problems in Simulations of the Earthquake Generation Cycle by GeoFEM
- Nonlinear.
- Ill-conditioned, due to the penalty constraint of the ALM (Augmented Lagrangian Method).
- Assumptions: infinitesimal deformation and a static contact relationship; the locations of the nodes in each "contact pair" are identical; no friction, so the coefficient matrix is symmetric.
Topics in this presentation:
- Special preconditioning, selective blocking, which provides robust and smooth convergence in 3D solid-mechanics simulations for geophysics with contact.
- Examples on a Hitachi SR2201 parallel computer with 128 processing elements.

Slide 8: Overview
- Background
- General remedy for ill-conditioned problems: deep fill-in, blocking
- Special method for fault-contact problems: selective blocking, special repartitioning
- Examples: large-scale computation on Hitachi SR2201 w/128 PEs
- Summary

Slide 9: Geophysics Application w/Contact
- Augmented Lagrangian Method with a penalty constraint condition for contact.
- 6,156 elements, 7,220 nodes, 21,660 DOF.
- 840 km x 1020 km x 600 km region.
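For orientation, a generic augmented-Lagrangian setup for a contact gap constraint g(u)=0 looks like the following textbook form (the exact functional used in GeoFEM may differ in details):

  L(u, mu) = Pi(u) + mu^T g(u) + (lambda/2) g(u)^T g(u)

Stationarity then puts K + lambda B^T B, with B = dg/du, in front of the displacement increment, so the penalty parameter lambda enters the coefficient matrix directly; this is the source of the ill-conditioning discussed on the next slide.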

Slide 10: Augmented Lagrangian Method - Penalty vs. Iterations for Contact Problems (Newton-Raphson / Iterative Solver)
[Figure legend: ● Newton-Raphson iterations; ▲ solver iterations over the entire Newton-Raphson loop; ■ solver iterations for ONE Newton-Raphson step.]
A large penalty provides good N-R convergence but a large condition number; there is an optimum choice in between.
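The tradeoff can be stated roughly as follows (a heuristic estimate, not a result quoted from the talk): with the penalized stiffness K + lambda B^T B, the largest eigenvalues grow in proportion to lambda while the smallest stay fixed, so

  kappa(K + lambda B^T B) = O(lambda)  as lambda -> infinity

which is why the solver iteration count climbs with lambda even as Newton-Raphson converges in fewer steps.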

Slide 11: Results in the Benchmark
Block-type preconditioning seems to work well for ill-conditioned cases. 7,220 nodes, 21,660 DOF, ε=10^-8, GeoFEM's CG solver (scalar version), single-PE case.
λ=10^10:
- IC(0): 89 iters, 8.9 sec.
- DIAG: 340 iters, 19.1 sec.
- Block LU scaling: 165 iters, 11.9 sec.
λ=10^16:
- IC(0): >10,000 iters, >1,300.0 sec.
- DIAG: no convergence.
- Block LU scaling: 3,727 iters.

Slide 12: Outline
- Background
- General remedy for ill-conditioned problems: deep fill-in, blocking (this part)
- Special method for fault-contact problems: selective blocking, special repartitioning
- Examples: large-scale computation on Hitachi SR2201 w/128 PEs
- Summary

Slide 13: Ill-Conditioned Problems
This has been the world that direct solvers govern. But iterative methods are the only choice for large-scale massively parallel computation. We need robust preconditioning!!
Remedy: preconditioning that behaves, basically, like a direct solver:
- Deep fill-in
- Blocking and ordering

Slide 14: Deep Fill-in - LU and ILU(0)/IC(0)
Even if A is sparse, A^-1 is not necessarily sparse, because of fill-in.

Gaussian elimination (full fill-in):

do i= 2, n
  do k= 1, i-1
    a(i,k)= a(i,k) / a(k,k)
    do j= k+1, n
      a(i,j)= a(i,j) - a(i,k)*a(k,j)
    enddo
  enddo
enddo

ILU(0) keeps the non-zero pattern of the original coefficient matrix:

do i= 2, n
  do k= 1, i-1
    if ((i,k) ∈ NonZero(A)) then
      a(i,k)= a(i,k) / a(k,k)
    endif
    do j= k+1, n
      if ((i,j) ∈ NonZero(A)) then
        a(i,j)= a(i,j) - a(i,k)*a(k,j)
      endif
    enddo
  enddo
enddo

Slide 15: Deep Fill-in - ILU(p)/IC(p)
Initial levels of fill: LEV(i,j)= 0 if (i,j) ∈ NonZero(A), otherwise LEV(i,j)= p+1. Entries whose level of fill exceeds p are dropped:

do i= 2, n
  do k= 1, i-1
    if (LEV(i,k) <= p) then
      a(i,k)= a(i,k) / a(k,k)
    endif
    do j= k+1, n
      LEV(i,j)= min(LEV(i,j), 1 + LEV(i,k) + LEV(k,j))
      if (LEV(i,j) <= p) then
        a(i,j)= a(i,j) - a(i,k)*a(k,j)
      endif
    enddo
  enddo
enddo

Slide 16: Deep Fill-in - General Issues
- The preconditioner gets close to a direct solver with DEEPER fill-in.
- But it requires additional memory and computation: roughly x2 going from ILU(0) to ILU(1).

Slide 17: Blocking - Forward/Backward Substitution for the ILU/IC Process
The preconditioner is M = (L+D) D^-1 (D+U), applied by:
- Forward substitution: (L+D) p = q, i.e. p = D^-1 (q - L p).
- Backward substitution: (I + D^-1 U) p_new = p_old, i.e. p = p - D^-1 U p.
Apply complete/full LU factorization within a block of a certain size to evaluate D^-1; in the scalar case this is just division by the diagonal component. For 3D solid mechanics the natural choice is a 3x3 block: the three components (u-v-w) on one node are tightly coupled.
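As a concrete picture of the setup step behind this, here is a minimal sketch under assumptions, not the actual GeoFEM source: each 3x3 diagonal block is fully factorized once and, matching how the ALU array is used in the substitution code on the next slide, the diagonal of U is stored inverted so the solve can multiply instead of divide. The routine name and argument list are illustrative.

subroutine factor_blocks(N, D, ALU)
! Full LU factorization (Doolittle, unit lower triangle, no pivoting)
! of each 3x3 diagonal block D(:,:,i); factors stored in ALU(:,:,i).
! Assumes every block is non-singular, as in Block ILU(0)/IC(0).
  implicit none
  integer, intent(in)  :: N
  real(8), intent(in)  :: D(3,3,N)
  real(8), intent(out) :: ALU(3,3,N)
  integer :: i, k, ii, jj

  do i= 1, N
    ALU(:,:,i)= D(:,:,i)
    do k= 1, 2                              ! eliminate below the diagonal
      do ii= k+1, 3
        ALU(ii,k,i)= ALU(ii,k,i) / ALU(k,k,i)
        do jj= k+1, 3
          ALU(ii,jj,i)= ALU(ii,jj,i) - ALU(ii,k,i)*ALU(k,jj,i)
        enddo
      enddo
    enddo
    do k= 1, 3                              ! store 1/u_kk for the solves
      ALU(k,k,i)= 1.d0 / ALU(k,k,i)
    enddo
  enddo
end subroutine factor_blocks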

Slide 18: 3x3 Block ILU(0) Preconditioning - Forward Substitution
Full LU factorization for the 3x3 block D^-1. (AL holds the lower off-diagonal 3x3 blocks in compressed-row form, IAL their column indices, INL the row pointers; ALU holds the LU factors of the diagonal blocks, with the U-diagonal stored inverted.)

do i= 1, N
  SW1= WW(3*i-2,ZP)                ! gather the 3 components of node i
  SW2= WW(3*i-1,ZP)
  SW3= WW(3*i  ,ZP)
  isL= INL(i-1)+1
  ieL= INL(i)
  do j= isL, ieL                   ! subtract lower-block contributions
    k= IAL(j)
    X1= WW(3*k-2,ZP)
    X2= WW(3*k-1,ZP)
    X3= WW(3*k  ,ZP)
    SW1= SW1 - AL(1,1,j)*X1 - AL(1,2,j)*X2 - AL(1,3,j)*X3
    SW2= SW2 - AL(2,1,j)*X1 - AL(2,2,j)*X2 - AL(2,3,j)*X3
    SW3= SW3 - AL(3,1,j)*X1 - AL(3,2,j)*X2 - AL(3,3,j)*X3
  enddo
  X1= SW1                          ! solve the 3x3 diagonal block:
  X2= SW2                          ! forward sweep with unit-lower L,
  X3= SW3                          ! then backward sweep with U
  X2= X2 - ALU(2,1,i)*X1
  X3= X3 - ALU(3,1,i)*X1 - ALU(3,2,i)*X2
  X3= ALU(3,3,i)*  X3
  X2= ALU(2,2,i)*( X2 - ALU(2,3,i)*X3 )
  X1= ALU(1,1,i)*( X1 - ALU(1,3,i)*X3 - ALU(1,2,i)*X2 )
  WW(3*i-2,ZP)= X1                 ! scatter the result back
  WW(3*i-1,ZP)= X2
  WW(3*i  ,ZP)= X3
enddo
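The slide shows only the forward sweep. For completeness, here is a sketch of the matching backward substitution p := p - D^-1 (U p) from slide 17, reconstructed in the same style; the upper-triangle arrays AU/IAU/INU, mirroring AL/IAL/INL, are assumed names, not quoted from GeoFEM.

do i= N, 1, -1                     ! reverse order: nodes k > i already updated
  isU= INU(i-1)+1
  ieU= INU(i)
  SW1= 0.d0
  SW2= 0.d0
  SW3= 0.d0
  do j= isU, ieU                   ! accumulate (U p) over upper blocks
    k= IAU(j)
    X1= WW(3*k-2,ZP)
    X2= WW(3*k-1,ZP)
    X3= WW(3*k  ,ZP)
    SW1= SW1 + AU(1,1,j)*X1 + AU(1,2,j)*X2 + AU(1,3,j)*X3
    SW2= SW2 + AU(2,1,j)*X1 + AU(2,2,j)*X2 + AU(2,3,j)*X3
    SW3= SW3 + AU(3,1,j)*X1 + AU(3,2,j)*X2 + AU(3,3,j)*X3
  enddo
  X1= SW1                          ! solve D x = (U p) with stored factors
  X2= SW2
  X3= SW3
  X2= X2 - ALU(2,1,i)*X1
  X3= X3 - ALU(3,1,i)*X1 - ALU(3,2,i)*X2
  X3= ALU(3,3,i)*  X3
  X2= ALU(2,2,i)*( X2 - ALU(2,3,i)*X3 )
  X1= ALU(1,1,i)*( X1 - ALU(1,3,i)*X3 - ALU(1,2,i)*X2 )
  WW(3*i-2,ZP)= WW(3*i-2,ZP) - X1  ! p := p - D^-1 (U p)
  WW(3*i-1,ZP)= WW(3*i-1,ZP) - X2
  WW(3*i  ,ZP)= WW(3*i  ,ZP) - X3
enddo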

Slide 19: Benchmark - Effect of Fill-in/Blocking
Iteration counts and computation times decrease dramatically with fill-in and blocking. 7,220 nodes, 21,660 DOF, ε=10^-8, CG solver, single-PE case, λ=10^16:
- IC(0): >10,000 iters, >1,300.0 sec.
- Block LU scaling: 3,727 iters.
- Block IC(0): 1,102 iters.
- Block IC(1): 94 iters, 21.1 sec.
- Block IC(2): 33 iters, 15.4 sec.

Slide 20: Outline
- Background
- General remedy for ill-conditioned problems: deep fill-in, blocking
- Special method for fault-contact problems: selective blocking (this part), special repartitioning
- Examples: large-scale computation on Hitachi SR2201 w/128 PEs
- Summary

Slide 21: Selective Blocking - A Special Method for Contact Problems
Strongly coupled nodes are put into the same diagonal block.

Slide 22: Selective Blocking - A Special Method for Contact Problems (cont.)
Strongly coupled nodes are put into the same diagonal block.
[Figure: in the initial coefficient matrix, find the strongly coupled contact groups (each small square is a 3x3 node block); in the reordered/blocked matrix, each large diagonal block corresponds to one contact group of several nodes.]
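A minimal sketch of the reordering this implies, under the assumption that every node belongs to exactly one group (contact groups of several nodes, plus single-node groups for everything else); all array names here are illustrative, not GeoFEM's:

subroutine build_selective_blocks(N, NGRP, GRPindex, GRPitem, NEWtoOLD, BLKindex)
! Nodes of the same contact group receive consecutive new numbers, so
! each group becomes one variable-size diagonal block (supernode).
  implicit none
  integer, intent(in)  :: N, NGRP
  integer, intent(in)  :: GRPindex(0:NGRP)  ! pointer into GRPitem, group by group
  integer, intent(in)  :: GRPitem(N)        ! old node IDs; each node in exactly one group
  integer, intent(out) :: NEWtoOLD(N)       ! new node number -> old node ID
  integer, intent(out) :: BLKindex(0:NGRP)  ! block ig = new nodes BLKindex(ig-1)+1 .. BLKindex(ig)
  integer :: ig, j, icou

  icou= 0
  BLKindex(0)= 0
  do ig= 1, NGRP
    do j= GRPindex(ig-1)+1, GRPindex(ig)
      icou= icou + 1
      NEWtoOLD(icou)= GRPitem(j)
    enddo
    BLKindex(ig)= icou
  enddo
end subroutine build_selective_blocks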

Slide 23: Block ILU/IC with Selective Blocking (Supernodes)
Procedure: forward substitution in the lower-triangular part proceeds as before, but the size of each diagonal block now depends on the size of its contact group. Full LU factorization of each variable-size diagonal block is applied to compute D^-1.
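A sketch of how the forward sweep of slide 18 generalizes to variable block sizes. Assumptions: 3 DOF per node; the dense diagonal blocks were factorized in a setup phase with LAPACK's DGETRF; off-block lower entries reference only earlier blocks, because in-block couplings live inside the dense diagonal block. All names are illustrative, not the GeoFEM source.

subroutine sb_forward(NBLK, BLKindex, INL, IAL, AL, FACT, IPIV, LDmax, W)
! Forward substitution with variable-size diagonal blocks (supernodes).
  implicit none
  integer, intent(in)    :: NBLK, LDmax
  integer, intent(in)    :: BLKindex(0:NBLK), INL(0:*), IAL(*)
  integer, intent(in)    :: IPIV(LDmax,NBLK)
  real(8), intent(in)    :: AL(3,3,*), FACT(LDmax,LDmax,NBLK)
  real(8), intent(inout) :: W(*)            ! 3 DOF per node, new numbering
  real(8) :: SW(LDmax)
  integer :: ib, i, j, k, m, ioff, ierr

  do ib= 1, NBLK
    m   = 3*(BLKindex(ib) - BLKindex(ib-1)) ! dimension of this block
    ioff= 3*BLKindex(ib-1)
    SW(1:m)= W(ioff+1:ioff+m)               ! gather the block's entries
    do i= BLKindex(ib-1)+1, BLKindex(ib)    ! lower off-block contributions
      do j= INL(i-1)+1, INL(i)
        k= IAL(j)                           ! node in an earlier, already-solved block
        SW(3*i-2-ioff)= SW(3*i-2-ioff) - AL(1,1,j)*W(3*k-2) - AL(1,2,j)*W(3*k-1) - AL(1,3,j)*W(3*k)
        SW(3*i-1-ioff)= SW(3*i-1-ioff) - AL(2,1,j)*W(3*k-2) - AL(2,2,j)*W(3*k-1) - AL(2,3,j)*W(3*k)
        SW(3*i  -ioff)= SW(3*i  -ioff) - AL(3,1,j)*W(3*k-2) - AL(3,2,j)*W(3*k-1) - AL(3,3,j)*W(3*k)
      enddo
    enddo
    ! dense solve with the block's LU factors (computed by DGETRF in setup)
    call DGETRS('N', m, 1, FACT(1,1,ib), LDmax, IPIV(1,ib), SW, LDmax, ierr)
    W(ioff+1:ioff+m)= SW(1:m)               ! scatter back
  enddo
end subroutine sb_forward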

Slide 24: Benchmark - SB-BIC(0): Selective Blocking + Block IC(0)
7,220 nodes, 21,660 DOF, ε=10^-8, CG solver, single-PE case, λ=10^16:
- IC(0): >10,000 iters, >1,300.0 sec.
- Block LU scaling: 3,727 iters.
- Block IC(0): 1,102 iters.
- Block IC(1): 94 iters, 21.1 sec.
- Block IC(2): 33 iters, 15.4 sec.
- SB-Block IC(0): 82 iters, 11.2 sec.

Slide 25: Benchmark - Selective Blocking
Selective blocking converges even if λ=10^20.
[Figure: iterations vs. λ for BIC(1), BIC(2), and SB-BIC(0).]

Slide 26: Benchmark - 4-PE Cases
7,220 nodes, 21,660 DOF, ε=10^-8, λ=10^16, CG solver.
Single PE:
- Block IC(0): 1,102 iters.
- Block IC(1): 94 iters, 21.1 sec.
- Block IC(2): 33 iters, 15.4 sec.
- SB-BIC(0): 82 iters, 11.2 sec.
4 PEs:
- Block IC(0): 2,104 iters, 68.4 sec.
- Block IC(1): 1,724 iters, 85.8 sec.
- Block IC(2): 962 iters, 69.9 sec.
- SB-BIC(0): 1,740 iters, 70.0 sec.
In the 4-PE case, nodes in tightly connected groups lie on different partitions and are decoupled.

Slide 27: Summary (so far)
Deep fill-in, blocking, and selective blocking dramatically improve the convergence rate for ill-conditioned problems such as solid mechanics with contact. But performance is poor in parallel cases with localized preconditioning, when nodes in tightly connected pairs lie on different partitions and are decoupled. A special repartitioning method is needed!!

Slide 28: Outline
- Background
- General remedy for ill-conditioned problems: deep fill-in, blocking
- Special method for fault-contact problems: selective blocking, special repartitioning (this part)
- Examples: large-scale computation on Hitachi SR2201 w/128 PEs
- Summary

Slide 29: Outline of the Repartitioning
Convergence is slow if the nodes of a contact group sit on different partitions. Repartitioning so that the nodes of each contact pair end up in the same partition, as INTERIOR nodes, is effective.
- BEFORE repartitioning: nodes in contact pairs are on separate partitions.
- AFTER repartitioning: nodes in contact pairs are on the same partition, but without load balancing.
- AFTER load balancing: nodes in contact pairs are on the same partition, and the load is balanced.

Slide 30: Special Repartitioning - Benchmark, 4-PE Cases
[Figure: the 4-PE benchmark before and after repartitioning.]

Slide 31: Outline
- Background
- General remedy for ill-conditioned problems: deep fill-in, blocking
- Special method for fault-contact problems: selective blocking, special repartitioning
- Examples: large-scale computation on Hitachi SR2201 w/128 PEs (this part)
- Summary

Slide 32: Large-Scale Computation - Model Description
[Figure: model geometry with x-y-z axes; the x direction is split into two zones, x=0..NX1 and x=NX1+1..NX1+NX2+1, and the z direction into z=0..NZ1 and z=NZ1+1..NZ1+NZ2+1 (NZ1+NZ2 layers in total).]

Slide 33: Problem Setting & Boundary Conditions
- MPC (multipoint constraint) conditions at the inter-zone boundaries.
- Symmetry conditions at the x=0 and y=0 surfaces.
- Dirichlet fixed condition at the z=0 surface.
- Uniform distributed load at the z=Zmax surface.

Slide 34: Sample Mesh
[Figure: sample mesh with 99 nodes and 80 elements; contact groups highlighted.]

Slide 35: Results on Hitachi SR2201 (128 PEs)
NX1=NX2=70, NY=40, NZ1=NZ2=70, repartitioned; 2,471,439 DOF, 784,000 elements. Iterations/CPU time until convergence (ε=10^-8).
[Table: iterations and CPU time for BIC(0) (>8,300 iters), BIC(1), BIC(2), and SB-BIC(0).]

Slide 36: Required Memory
NX1=NX2=20, NY=20, NZ1=15, NZ2=16; 83,649 DOF, 24,000 elements.
- BIC(0): 105 MB
- BIC(1): 284 MB
- BIC(2): 484 MB
- SB-BIC(0): 128 MB

Slide 37: Concluding Remarks
Robust preconditioning methods for contact problems:
- General: deep fill-in, blocking.
- Problem-specific: selective blocking using supernodes.
- Large-scale problems using 128 PEs of a Hitachi SR2201.
Selective blocking (SB-BIC(0)) provides robust convergence:
- More efficient and robust than BIC(0), BIC(1), and BIC(2).
- The iteration count until convergence remains constant while λ increases.

Slide 38: Further Study
- Optimization for the Earth Simulator.
- Dynamic update of contact information: large slip / large deformation.
- More flexible and robust preconditioners under development, such as SPAI (sparse approximate inverse).