Preconditioning – Basic Principles
Basic idea: use a Krylov subspace method (CG, GMRES, MINRES, ...) on a modified system such as $M^{-1}Ax = M^{-1}b$. The matrix $M^{-1}A$ need not be formed explicitly; we only need to solve with $M$ whenever such a product is required. The requirement on $M$ is that it should be easy to solve $Mz = r$ for an arbitrary vector $r$.
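As a minimal sketch of how this looks in practice (assuming SciPy is available; the Poisson matrix and the diagonal preconditioner are illustrative choices, not from the slides), the preconditioner enters the Krylov solver only through a solve with M, so $M^{-1}A$ is never formed:

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # Model SPD system: 1D Poisson matrix (an illustrative choice).
    n = 100
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)

    # The preconditioner M = diag(A) enters only through a solve with M,
    # exposed as a LinearOperator: M^{-1}A is never formed explicitly.
    d = A.diagonal()
    M = spla.LinearOperator((n, n), matvec=lambda r: r / d)

    x, info = spla.cg(A, b, M=M)   # SciPy's M plays the role of an approximation to A^{-1}
    print(info, np.linalg.norm(b - A @ x))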

Left, Right, and Symmetric Preconditioners
Left preconditioning: $M^{-1}Ax = M^{-1}b$.
Right preconditioning: $AM^{-1}u = b$, with $x = M^{-1}u$.
Symmetric preconditioning (with $M = CC^{T}$): $(C^{-1}AC^{-T})\,y = C^{-1}b$, with $x = C^{-T}y$.

Left, Right, and Symmetric Preconditioners
Symmetric preconditioning: the matrix in brackets, $C^{-1}AC^{-T}$, is symmetric positive definite (so we can use CG), and there is no loss of symmetry. It is also similar to $M^{-1}A$, so it is enough to examine the eigenvalues of the nonsymmetric matrix $M^{-1}A$ to investigate convergence.
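The similarity claim can be checked in one line (a standard identity, written out here since the slide's formula is an image):

    % With M = C C^T, conjugating M^{-1}A by C^T yields the symmetrically
    % preconditioned matrix, so the two matrices have the same eigenvalues:
    \[
      C^{T}\,(M^{-1}A)\,C^{-T}
      = C^{T}\,(C^{-T}C^{-1}A)\,C^{-T}
      = C^{-1}A\,C^{-T}.
    \]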

Example
CG converges slowly on this system without a preconditioner, though it is still better than a direct method. [The matrix and the convergence plot are shown as images on the slide.]

Example
With a preconditioner: without preconditioning, CG achieves about a 5-digit residual reduction after 40 iterations; PCG achieves 15 digits after 30 iterations. [The matrix and the preconditioner are shown as images on the slide.]
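Since the slide's matrix is not recoverable, here is an illustrative stand-in with the same flavor: a badly scaled SPD system on which Jacobi-preconditioned CG gains many digits over plain CG in the same iteration budget.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # Illustrative stand-in: a badly scaled SPD matrix built from the 2D
    # 5-point Laplacian (the slide's own example is an image).
    m = 32
    T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m))
    rng = np.random.default_rng(0)
    D = sp.diags(10.0 ** rng.uniform(-2, 2, m * m))
    A = (D @ sp.kronsum(T, T) @ D).tocsr()   # SPD, with a wildly varying diagonal
    b = np.ones(m * m)

    def run(label, M=None):
        res = [np.linalg.norm(b)]
        spla.cg(A, b, M=M, maxiter=40,
                callback=lambda xk: res.append(np.linalg.norm(b - A @ xk)))
        print(f"{label}: relative residual {res[-1] / res[0]:.1e} after {len(res) - 1} iterations")

    run("plain CG")
    d = A.diagonal()
    run("Jacobi PCG", M=spla.LinearOperator(A.shape, matvec=lambda r: r / d))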

Practical Implementations
Why not form $M^{-1}A$ explicitly and run CG on it? No: that would destroy sparsity, require an explicit inverse, and cost a matrix-matrix multiplication. Instead, the preconditioner is folded into the CG iteration itself, as the next slides show.

Practical Implementations
CG vs. PCG, version 1. [The two algorithm boxes are shown side by side as images on the slide.]

Practical Implementations
PCG version 1 vs. PCG version 2. [Algorithm boxes shown as images on the slide.]

Practical Implementations
PCG version 2 vs. the final PCG iteration. [Algorithm boxes shown as images on the slide.]
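The algorithm boxes on these slides are images; the iteration they arrive at is the standard preconditioned conjugate gradient method, and a minimal Python sketch of it (assuming A is SPD and solve_M applies $M^{-1}$ to a vector; both names are illustrative) looks like this:

    import numpy as np

    def pcg(A, b, solve_M, x0=None, tol=1e-12, maxiter=1000):
        """Standard preconditioned conjugate gradient iteration.

        A       : SPD matrix (anything supporting A @ v)
        solve_M : callable applying M^{-1} to a vector (the preconditioner solve)
        """
        x = np.zeros_like(b) if x0 is None else x0.copy()
        r = b - A @ x                    # initial residual
        z = solve_M(r)                   # preconditioned residual, z = M^{-1} r
        p = z.copy()                     # first search direction
        rz = r @ z
        for k in range(1, maxiter + 1):
            Ap = A @ p
            alpha = rz / (p @ Ap)        # step length
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) <= tol * np.linalg.norm(b):
                return x, k
            z = solve_M(r)               # one preconditioner solve per iteration
            rz_new = r @ z
            p = z + (rz_new / rz) * p    # standard PCG direction update
            rz = rz_new
        return x, maxiter

With solve_M = lambda r: r / A.diagonal() this is Jacobi-preconditioned CG; with solve_M = lambda r: r it reduces to plain CG.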

Good Preconditioner
A good preconditioner M must satisfy two competing demands: 1) it should yield a small number of iterations (fast convergence); 2) the system $Mz = r$ should be cheap to solve. Textbook: "The preconditioners used in practice are sometimes as simple as this one (diagonal), but they are often far more complicated."

Five Items
Item 1: minimal polynomial with small degree. DEF: $p(x)$ is the minimal polynomial of the $n \times n$ matrix $A$ if $p(x)$ is the monic polynomial of least degree such that $p(A) = 0$. [The slide contrasts the characteristic polynomial and the minimal polynomial of a concrete matrix $A$, shown as an image.]

Five Items
Item 1 (continued): why does a minimal polynomial of small degree help? In exact arithmetic, a Krylov method builds its approximations from powers of $A$ applied to the initial residual, so it terminates in at most $\deg p$ iterations, where $p$ is the minimal polynomial of $A$.
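The slide's matrix is an image; as an illustrative substitute, here is a 3-by-3 example where the minimal polynomial has a smaller degree than the characteristic polynomial:

    import numpy as np

    # Illustrative substitute for the slide's matrix: the characteristic
    # polynomial is (x - 2)^3, but the minimal polynomial is only (x - 2)^2.
    A = np.array([[2., 1., 0.],
                  [0., 2., 0.],
                  [0., 0., 2.]])
    I = np.eye(3)
    print(np.allclose(A - 2 * I, 0))                   # False: (x - 2) does not annihilate A
    print(np.allclose((A - 2 * I) @ (A - 2 * I), 0))   # True: (x - 2)^2 does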

Five Items
Item 2: A with few distinct eigenvalues. Note why: in exact arithmetic, CG and GMRES converge in at most as many iterations as $A$ has distinct eigenvalues, since the minimal polynomial then has small degree.
Item 3: small condition number. Note: this goes only one way; a small condition number guarantees fast CG convergence, but a large condition number does not necessarily imply slow convergence.
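A quick numerical check of Item 2 (an illustrative construction, not from the slides): an SPD matrix with only three distinct eigenvalues, on which CG converges in essentially three iterations.

    import numpy as np
    import scipy.sparse.linalg as spla

    # Illustrative construction: an SPD matrix with only 3 distinct eigenvalues.
    rng = np.random.default_rng(1)
    n = 200
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthogonal matrix
    evals = np.repeat([1.0, 5.0, 20.0], n // 3 + 1)[:n]
    A = Q @ np.diag(evals) @ Q.T                       # SPD by construction
    b = rng.standard_normal(n)

    count = []
    spla.cg(A, b, callback=lambda xk: count.append(1))
    print("CG iterations:", len(count))                # about 3, one per distinct eigenvalue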

Five Items
Item 4: residual and error. The inequality
$\frac{\|x - \tilde{x}\|}{\|x\|} \le \kappa(A)\,\frac{\|r\|}{\|b\|}$
implies that the condition number indicates the connection between the residual vector $r = b - A\tilde{x}$ and the accuracy of the approximation $\tilde{x}$: in general, the relative error is bounded by the product of the condition number and the relative residual.
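A direct numerical check of this bound (the system and perturbation are illustrative):

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.standard_normal((50, 50)) + 10 * np.eye(50)   # an illustrative well-posed system
    x = rng.standard_normal(50)
    b = A @ x
    x_approx = x + 1e-6 * rng.standard_normal(50)         # a perturbed "computed" solution
    r = b - A @ x_approx

    lhs = np.linalg.norm(x - x_approx) / np.linalg.norm(x)
    rhs = np.linalg.cond(A) * np.linalg.norm(r) / np.linalg.norm(b)
    print(lhs <= rhs)   # True: relative error <= cond(A) * relative residual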

Five Items
Item 5: A with a good clustering of the eigenvalues. Textbook: a preconditioner M is good if $M^{-1}A$ is not too far from normal and its eigenvalues are clustered.

Five Items
Item 6: M is close enough to A. Textbook: if the eigenvalues of $M^{-1}A$ are close to 1 and $\|M^{-1}A - I\|$ is small, then any of the iterations we have discussed can be expected to converge quickly.

Survey of Preconditioners
Textbook: "The preconditioners used in practice are sometimes as simple as this one (diag(A)), but they are often far more complicated."
1) Diagonal scaling: choose the preconditioner M = diag(c), where c is a suitable vector with nonzero entries. Problem: [the slide's example problem is an image].

Survey of Preconditioners
2) Incomplete Cholesky factorization (IC): for a sparse SPD matrix, compute a sparse approximate Cholesky factorization and use it as M with CG; this is the incomplete Cholesky conjugate gradient (ICCG) method.
3) Incomplete LU factorization (ILU): the nonsymmetric analogue; compute a sparse approximate LU factorization and use GMRES with M as preconditioner.
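A sketch of the ILU case using SciPy's built-in incomplete LU (spilu); the matrix here is an illustrative nonsymmetric operator with a convection-diffusion flavor:

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # Illustrative nonsymmetric sparse matrix.
    n = 1000
    A = sp.diags([-1.3, 2.0, -0.7], [-1, 0, 1], shape=(n, n), format="csc")
    b = np.ones(n)

    ilu = spla.spilu(A, drop_tol=1e-4)                # incomplete LU factors of A
    M = spla.LinearOperator((n, n), matvec=ilu.solve)  # apply M^{-1} via the ILU solve

    x, info = spla.gmres(A, b, M=M)
    print(info, np.linalg.norm(b - A @ x))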

Survey of Preconditioners
4) Local approximation: the matrix A represents coupling between elements both near and far from one another. It may be worth considering an M analogous to A but with the longer-range interactions omitted, a short-range approximation to A. In the simplest cases of this kind, M may consist simply of a few of the diagonals of A near the main diagonal, making this a generalization of the idea of a diagonal preconditioner; a sketch follows below.
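A sketch of the simplest such local approximation (function name illustrative): keep only the three central diagonals of A and factorize that banded M exactly.

    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def tridiagonal_preconditioner(A):
        """Build M from the three central diagonals of A; return a solve with M."""
        M = sp.diags([A.diagonal(-1), A.diagonal(0), A.diagonal(1)],
                     [-1, 0, 1], format="csc")
        lu = spla.splu(M)                 # exact sparse factorization of the banded M
        return spla.LinearOperator(A.shape, matvec=lu.solve)

The result can be passed as M to the cg or gmres calls shown earlier.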

Survey of Preconditioners
5) Block preconditioners: another kind of local approximation, in which local effects within certain components are captured while connections to other components are ignored. See the block-Jacobi sketch below.
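A block-Jacobi sketch under the same assumptions (the block size is illustrative): each diagonal block of A is factorized and inverted independently, and all cross-block coupling is ignored.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def block_jacobi_preconditioner(A, block=50):
        """Solve with each diagonal block of A independently, ignoring cross-block coupling."""
        n = A.shape[0]
        solvers = [spla.splu(sp.csc_matrix(A[i:i + block, i:i + block]))
                   for i in range(0, n, block)]
        def solve(r):
            z = np.empty_like(r)
            for k, i in enumerate(range(0, n, block)):
                z[i:i + block] = solvers[k].solve(r[i:i + block])
            return z
        return spla.LinearOperator(A.shape, matvec=solve)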

Survey of Preconditioners
6) Constant-coefficient approximation: for a PDE with variable coefficients, use the discretized matrix of the corresponding constant-coefficient problem as a preconditioner for the original problem.
7) Symmetric approximation: if a differential equation is not self-adjoint but is close in some sense to a self-adjoint equation that can be solved more easily, then the latter may sometimes serve as a preconditioner.

Survey of Preconditioners
8) Domain decomposition: solvers for certain subdomains of a problem are composed in flexible ways to form preconditioners for the global problem. This method combines mathematical power with natural parallelizability: the subdomain solves can be carried out in parallel.

Survey of Preconditioners
9) Low-order discretization: often a differential or integral equation is discretized by a higher-order method, bringing a gain in accuracy but making the discretization stencils bigger and the matrix less sparse. A lower-order approximation of the same problem, with its sparser matrix, may be an effective preconditioner. Example: precondition the 9-point formula for the Laplacian with the 5-point formula.
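For reference, the two stencils mentioned (standard forms, supplied here since the slide shows them as images):

    % 5-point and 9-point approximations of the Laplacian on a grid with spacing h:
    \[
    \nabla^2 u \approx \frac{1}{h^2}
    \begin{bmatrix} & 1 & \\ 1 & -4 & 1 \\ & 1 & \end{bmatrix} u,
    \qquad
    \nabla^2 u \approx \frac{1}{6h^2}
    \begin{bmatrix} 1 & 4 & 1 \\ 4 & -20 & 4 \\ 1 & 4 & 1 \end{bmatrix} u.
    \]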

Survey of Preconditioners
10) Saddle-point systems, of the form $\begin{pmatrix} A & B^T \\ B & 0 \end{pmatrix}$.
11) Generalized saddle-point systems, of the form $\begin{pmatrix} A & B^T \\ B & -C \end{pmatrix}$.
[The block preconditioners the slide shows for these systems are images.]

Survey of Preconditioners
12) Polynomial preconditioners: take $M^{-1} = p(A)$ for some polynomial $p$, for example a truncated Neumann series $p(A) = I + (I - A) + (I - A)^2 + \cdots$; then $M^{-1}$ is applied using only matrix-vector products with $A$.
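A sketch of the truncated-Neumann-series variant (the degree is illustrative, and this assumes A has been scaled so that the series is well behaved, i.e. $\|I - A\| < 1$):

    import scipy.sparse.linalg as spla

    def neumann_preconditioner(A, degree=3):
        """Apply M^{-1} = I + (I - A) + ... + (I - A)^degree via matvecs with A."""
        def matvec(r):
            z = r.copy()                 # running sum of the series applied to r
            t = r.copy()                 # current term, (I - A)^k r
            for _ in range(degree):
                t = t - A @ t            # t <- (I - A) t
                z = z + t
            return z
        return spla.LinearOperator(A.shape, matvec=matvec)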

Survey of Preconditioners
13) Splitting: many applications involve combinations of physical effects, such as the diffusion and convection that combine to make up the Navier-Stokes equations of fluid mechanics. Example: the Laplacian in two or three dimensions is composed of analogous operators in each of the dimensions separately. Solving with one term of such a splitting may form the basis of a preconditioner.