Modern iterative methods
Basic iterative methods converge only linearly; modern iterative methods converge faster.
– Krylov subspace methods:
  Steepest descent method
  Conjugate gradient (CG) method --- the most popular
  Preconditioned CG (PCG) method
  GMRES for nonsymmetric matrices
– Other methods (read yourself):
  Chebyshev iterative method
  Lanczos methods
  Conjugate gradient normal residual (CGNR) method

Modern iterative methods
Ideas:
– Minimize the residual
– Project onto a Krylov subspace
Thm: If A is an n-by-n real symmetric positive definite matrix, then the linear system Ax = b and the minimization problem min_x φ(x) = (1/2) x^T A x − b^T x have the same solution.
Proof: see details in class.
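The equivalence rests on a short gradient computation with the quadratic functional φ above; this derivation is supplied here for completeness, not recovered from the slide:
\[
\nabla \varphi(x) = Ax - b, \qquad \nabla^2 \varphi(x) = A \succ 0 .
\]
Since A is positive definite, φ is strictly convex, so its unique minimizer is the stationary point with ∇φ(x) = 0, i.e. the solution of Ax = b.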

Steepest descent method
Suppose we have an approximation x_k. Choose the search direction as the negative gradient of φ, d_k = −∇φ(x_k) = b − A x_k = r_k (the residual).
– If r_k = 0, stop: x_k solves Ax = b.
– Else, choose the step length α_k to minimize φ(x_k + α r_k) over α.

Steepest descent method
Computation: the exact line search gives α_k = (r_k^T r_k) / (r_k^T A r_k); choose the new iterate as x_{k+1} = x_k + α_k r_k.

Algorithm – Steepest descent method
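The algorithm box on this slide was an image and did not survive into the transcript. A minimal Python sketch of the steepest descent iteration described on the previous slides; the function name, tolerance, and iteration cap are illustrative assumptions:

```python
import numpy as np

def steepest_descent(A, b, x0, tol=1e-10, max_iter=1000):
    """Minimal steepest-descent sketch for SPD A (illustrative, not the slide's exact box)."""
    x = x0.copy()
    r = b - A @ x                      # residual = negative gradient of phi at x
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        Ar = A @ r
        alpha = (r @ r) / (r @ Ar)     # exact line search along the residual direction
        x = x + alpha * r
        r = r - alpha * Ar             # update residual without an extra mat-vec
    return x
```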

Theory (steepest descent method)
Suppose A is symmetric positive definite. Define the A-inner product (x, y)_A = x^T A y and the A-norm ||x||_A = (x^T A x)^{1/2}.

Theory
Thm: For the steepest descent method, we have an error bound of the form sketched below.
Proof: Exercise
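The formula in the theorem was lost in extraction; the standard one-step steepest-descent bound, which is presumably what is stated here (supplied as an assumption, with κ₂(A) the spectral condition number):
\[
\|x_{k+1} - x_*\|_A \;\le\; \frac{\kappa_2(A) - 1}{\kappa_2(A) + 1}\, \|x_k - x_*\|_A ,
\qquad x_* = A^{-1} b .
\]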

Theory
Rewrite the steepest descent method in terms of the errors e_k = x_k − x_*, where x_* = A^{-1} b.
Lemma: For the method, the errors e_k satisfy a recursion (derived in class).

Theory
Thm: For the steepest descent method, iterating the one-step bound gives a global convergence estimate.
Proof: See details in class (or as an exercise)

Steepest descent method
Performance:
– Converges globally, for any initial guess.
– If κ₂(A) = O(1), it converges very fast.
– If κ₂(A) ≫ 1, it converges very slowly!
Geometric interpretation:
– The contour plots of φ are flat (elongated) ellipses when κ₂(A) is large.
– The locally best direction (steepest descent direction) is not necessarily a globally good direction.
– Computational experience shows that the method suffers a decreasing convergence rate after a few iteration steps because the search directions become nearly linearly dependent.

Conjugate gradient (CG) method
Since A is symmetric positive definite, ||x||_A = (x^T A x)^{1/2} defines a norm (the A-norm).
In the CG method, the direction vectors are chosen to be A-orthogonal (and are called conjugate vectors), i.e. p_i^T A p_j = 0 for i ≠ j.

CG method
In addition, we take the new direction vector as a linear combination of the old direction vector and the descent direction (the residual): p_{k+1} = r_{k+1} + β_k p_k.
By the A-orthogonality assumption p_{k+1}^T A p_k = 0, we get β_k = −(r_{k+1}^T A p_k)/(p_k^T A p_k).

Algorithm – CG Method
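As with the previous algorithm slide, the box itself was not captured. A minimal Python sketch of the CG iteration built from the update formulas above; it uses the equivalent expression β_k = (r_{k+1}^T r_{k+1}) / (r_k^T r_k), and the names and defaults are illustrative:

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=None):
    """Minimal CG sketch for SPD A (illustrative; one mat-vec per iteration)."""
    if max_iter is None:
        max_iter = len(b)               # exact solution in at most n steps (exact arithmetic)
    x = x0.copy()
    r = b - A @ x                       # initial residual
    p = r.copy()                        # first search direction = residual
    rs_old = r @ r
    for _ in range(max_iter):
        if np.sqrt(rs_old) < tol:
            break
        Ap = A @ p
        alpha = rs_old / (p @ Ap)       # step length from exact line search
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs_old) * p   # new A-orthogonal search direction
        rs_old = rs_new
    return x
```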

An example Initial guess The approximate solutions

CG method
In the CG method, the direction vectors p_0, p_1, …, p_m are A-orthogonal.
Define the Krylov subspace K_m(A, r_0) = span{ r_0, A r_0, …, A^{m−1} r_0 }.
Lemma: In the CG method, for m = 0, 1, …, we have span{ p_0, …, p_m } = span{ r_0, …, r_m } = K_{m+1}(A, r_0).
– Proof: See details in class or as an exercise

CG method
In the CG method, the error e_m is A-orthogonal to K_m(A, r_0) (equivalently, the residual r_m is orthogonal to K_m(A, r_0)).
Lemma: In the CG method, x_m minimizes the A-norm of the error over x_0 + K_m(A, r_0).
– Proof: See details in class or as an exercise
Thm: Error estimate for the CG method (see the bound sketched below).
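The error estimate itself was lost in extraction; the standard CG bound in the A-norm, supplied here as an assumption about what the slide states:
\[
\|x_m - x_*\|_A \;\le\; 2\left(\frac{\sqrt{\kappa_2(A)} - 1}{\sqrt{\kappa_2(A)} + 1}\right)^{m} \|x_0 - x_*\|_A .
\]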

CG method
Computational cost:
– At each iteration, 2 matrix-vector multiplications; this can be further reduced to 1 matrix-vector multiplication.
– In exact arithmetic, at most n steps give the exact solution.
Convergence rate depends on the condition number:
– κ₂(A) = O(1): converges very fast.
– κ₂(A) ≫ 1: converges slowly, but can be accelerated by preconditioning.

Preconditioning
Idea: replace the system Ax = b by a preconditioned system involving C^{-1}A, where the preconditioner C satisfies:
– C is symmetric positive definite
– C^{-1}A is well-conditioned, i.e. κ₂(C^{-1}A) ≈ 1
– systems of the form Cz = r can be easily solved
Conditions for choosing the preconditioning matrix:
– κ₂(C^{-1}A) as small as possible
– Cz = r is easy to solve
– Trade-off between these two requirements

Algorithm – PCG Method
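Again the algorithm box was not captured; a minimal Python sketch of PCG under the assumptions above, where solve_C applies C^{-1} to a vector (the interface and the Jacobi example are illustrative assumptions, not the slide's exact box):

```python
import numpy as np

def preconditioned_cg(A, b, solve_C, x0, tol=1e-10, max_iter=None):
    """Minimal PCG sketch; solve_C(r) returns C^{-1} r (illustrative interface)."""
    if max_iter is None:
        max_iter = len(b)
    x = x0.copy()
    r = b - A @ x
    z = solve_C(r)                      # preconditioned residual
    p = z.copy()
    rz_old = r @ z
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        Ap = A @ p
        alpha = rz_old / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        z = solve_C(r)
        rz_new = r @ z
        p = z + (rz_new / rz_old) * p   # preconditioned search direction update
        rz_old = rz_new
    return x

# Example usage with a Jacobi (diagonal) preconditioner:
# solve_C = lambda r: r / np.diag(A)
```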

Preconditioning
Ways to choose the matrix C (read yourself):
– Diagonal part of A (Jacobi preconditioner)
– Tridiagonal part of A
– m-step Jacobi preconditioner
– Symmetric Gauss-Seidel preconditioner
– SSOR preconditioner
– Incomplete Cholesky decomposition
– Incomplete block preconditioning
– Preconditioning based on domain decomposition
– …

Extension of the CG method to nonsymmetric matrices
Biconjugate gradient (BiCG) method:
– Solves the system together with a dual system involving A^T simultaneously.
– Works well when A is positive definite but not symmetric.
– If A is symmetric, BiCG reduces to CG.
Conjugate gradient squared (CGS) method:
– Useful when there is a special formula for computing Ax but not for A^T x,
– i.e. multiplication by A is efficient but multiplication by its transpose is not.

Krylov subspace methods
Problem I. Linear system
Problem II. Variational formulation
Problem III. Minimization problem
– Thm 1: Problem I is equivalent to Problem II.
– Thm 2: If A is symmetric positive definite, all three problems are equivalent.
(The three formulations are written out below.)
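The formulas for the three problems were lost in extraction; the standard formulations they refer to, supplied here as an assumption:
\[
\text{I: } Ax = b; \qquad
\text{II: find } x \text{ such that } y^T A x = y^T b \;\; \forall\, y \in \mathbb{R}^n; \qquad
\text{III: } \min_{x \in \mathbb{R}^n} \varphi(x) = \tfrac12 x^T A x - b^T x .
\]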

Krylov subspace methods
To reduce the problem size, we replace R^n by a subspace V_m ⊂ R^n.
Subspace minimization:
– Find x_m ∈ x_0 + V_m
– Such that φ(x_m) ≤ φ(x) for all x ∈ x_0 + V_m.
Subspace projection: find x_m ∈ x_0 + V_m such that the residual b − A x_m is orthogonal to V_m.

Krylov subspace methods
To determine the coefficients of x_m in a basis of V_m, we obtain the normal equations: a linear system of size m (see the sketch below).
m = 1: line minimization (line search, or 1D projection).
By converting this formula into an iteration, we reduce the original problem to a sequence of line minimizations (successive line minimization).
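The normal equations themselves were not captured; with V_m = span{v_1, …, v_m} and x_m = x_0 + Σ_j y_j v_j, the subspace minimization leads to the small system below (a standard derivation, supplied as an assumption):
\[
V^T A V\, y = V^T r_0, \qquad V = [\,v_1, \dots, v_m\,], \quad r_0 = b - A x_0 ,
\]
an m-by-m linear system for the coefficient vector y.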

For symmetric matrices
Positive definite:
– Steepest descent method
– CG method
– Preconditioned CG method
Not positive definite (indefinite):
– MINRES (minimum residual method)

For nonsymmetric matrices
Normal equations method (CGNR method)
GMRES (generalized minimum residual method):
– Saad & Schultz, 1986
– Idea: in the m-th step, minimize the residual over the set x_0 + K_m(A, r_0).
– Uses Arnoldi (fully orthogonalized) vectors instead of Lanczos vectors.
– If A is symmetric, it reduces to the conjugate residual method.

Algorithm – GMRES
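The GMRES pseudocode on the slide was not captured. A minimal Python sketch (no restarts) of the Arnoldi-plus-least-squares formulation described on the previous slide; production implementations update the small least-squares problem with Givens rotations instead of solving it from scratch, and all names and defaults here are illustrative:

```python
import numpy as np

def gmres(A, b, x0, m=50, tol=1e-10):
    """Minimal GMRES sketch: m Arnoldi steps, then one small least-squares solve."""
    n = len(b)
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    if beta < tol:
        return x0
    Q = np.zeros((n, m + 1))            # orthonormal Krylov basis (Arnoldi vectors)
    H = np.zeros((m + 1, m))            # upper Hessenberg matrix
    Q[:, 0] = r0 / beta
    for j in range(m):
        w = A @ Q[:, j]                 # expand the Krylov subspace
        for i in range(j + 1):          # modified Gram-Schmidt orthogonalization
            H[i, j] = Q[:, i] @ w
            w = w - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:         # "happy breakdown": exact solution reached
            m = j + 1
            break
        Q[:, j + 1] = w / H[j + 1, j]
    # Minimize ||beta*e1 - H y|| over y, then set x = x0 + Q_m y
    e1 = np.zeros(m + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:m + 1, :m], e1, rcond=None)
    return x0 + Q[:, :m] @ y
```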

More topics on matrix computations
Eigenvalue and eigenvector computations:
– If A is symmetric: power method
– If A is a general matrix: Householder matrices (transforms), QR method
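A minimal Python sketch of the power method for the dominant eigenpair, in the same illustrative style as the earlier sketches (it assumes a dominant eigenvalue exists, e.g. in the symmetric case mentioned above; the names and stopping rule are assumptions):

```python
import numpy as np

def power_method(A, x0, tol=1e-10, max_iter=1000):
    """Minimal power-method sketch for the dominant eigenvalue/eigenvector."""
    x = x0 / np.linalg.norm(x0)
    lam = 0.0
    for _ in range(max_iter):
        y = A @ x
        lam_new = x @ y                 # Rayleigh-quotient estimate of the eigenvalue
        x = y / np.linalg.norm(y)       # normalize to avoid overflow/underflow
        if abs(lam_new - lam) < tol * max(1.0, abs(lam_new)):
            lam = lam_new
            break
        lam = lam_new
    return lam, x
```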

More topics on matrix computations
Singular value decomposition (SVD)
Thm: Let A be an m-by-n real matrix. Then there exist orthogonal matrices U (m-by-m) and V (n-by-n) such that A = U Σ V^T, where Σ is an m-by-n diagonal matrix with nonnegative diagonal entries σ₁ ≥ σ₂ ≥ … ≥ 0 (the singular values).
Proof: Exercise