MA5233 Lecture 6: Krylov Subspaces and Conjugate Gradients
Wayne M. Lawton
Department of Mathematics, National University of Singapore
2 Science Drive 2, Singapore
Tel (65)

EUCLIDEAN SPACES

Definition. A Euclidean structure on a vector space $V$ is a function $\langle \cdot, \cdot \rangle : V \times V \to \mathbb{R}$ that satisfies, for all $u, v, w \in V$ and $a, b \in \mathbb{R}$:

Bilinear: $\langle au + bv, w \rangle = a \langle u, w \rangle + b \langle v, w \rangle$ and $\langle w, au + bv \rangle = a \langle w, u \rangle + b \langle w, v \rangle$;
Symmetric: $\langle u, v \rangle = \langle v, u \rangle$;
Positive definite: $\langle v, v \rangle > 0$ whenever $v \neq 0$.

Definition. The norm is $\|v\| = \sqrt{\langle v, v \rangle}$.

EXAMPLES

Example 1 (Standard). $V = \mathbb{R}^n$ with $\langle u, v \rangle = u^T v$.

Example 2. $V = \mathbb{R}^n$ with $\langle u, v \rangle = u^T A v$, where $A$ is positive definite and symmetric.

Example 3. $V = C([a, b])$ with $\langle f, g \rangle = \int_a^b f(x)\, g(x)\, p(x)\, dx$, where $p$ is positive except at possibly a finite number of points; hence $p$ is nonnegative.

Example 4. Let $V$ be the completion of the Euclidean space in Example 3, whose elements are obtained by taking limits of Cauchy sequences. Then $V$ is a real Hilbert space (Hilbert space = complete real Euclidean space).
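A minimal numerical sketch of Example 2, assuming $A$ is symmetric positive definite; the helper name ip and the particular matrix are illustrative, not from the slides:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])       # symmetric positive definite

def ip(u, v):
    """Euclidean structure <u, v> = u^T A v from Example 2."""
    return u @ A @ v

u, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])
assert np.isclose(ip(u, v), ip(v, u))    # symmetry
assert ip(u + v, u + v) > 0              # positive on a nonzero vector
print("||u|| =", np.sqrt(ip(u, u)))      # induced norm
```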

ORTHONORMAL BASES

Definition. $u_1, \dots, u_d$ is an orthonormal basis for $V$ if $\langle u_i, u_j \rangle = \delta_{ij}$.

Example. For the standard Euclidean space $V = \mathbb{R}^n$, $u_1, \dots, u_n$ is an orthonormal basis for $V$ iff the matrix $U = [u_1 \cdots u_n]$ satisfies $U^T U = I$, where $U^T$ is the transpose matrix defined by $(U^T)_{ij} = U_{ji}$ and $I$ is the identity matrix defined by $I_{ij} = \delta_{ij}$. Such a matrix is called orthogonal and satisfies $U U^T = I$ and $U^{-1} = U^T$.
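A quick numerical illustration of the characterization $U^T U = I$; building $U$ from a QR factorization of a random matrix is my own choice:

```python
import numpy as np

rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((4, 4)))  # columns form an orthonormal basis

print(np.allclose(U.T @ U, np.eye(4)))   # True: U^T U = I
print(np.allclose(U @ U.T, np.eye(4)))   # True for square U: U U^T = I as well
```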

GRAM-SCHMIDT PROCESS

Theorem. Given a basis $u_1, \dots, u_d$ for a Euclidean space $V$, there exists a unique upper triangular matrix $R$ with positive numbers on its diagonal such that the vectors $q_1, \dots, q_d$ determined by $u_j = \sum_{i \leq j} q_i R_{ij}$ are orthonormal (and therefore are a basis for $V$).

Proof. We apply the Gram-Schmidt process:

$q_1 = u_1 / \|u_1\|$
For $j$ = 2 to $d$:
    $w_j = u_j - \sum_{i < j} \langle u_j, q_i \rangle q_i$
    $q_j = w_j / \|w_j\|$
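A sketch of this loop in NumPy, operating on the columns of a matrix; the function and variable names are illustrative. Classical Gram-Schmidt as written here can lose orthogonality in floating point, so modified Gram-Schmidt or Householder QR is preferred in practice.

```python
import numpy as np

def gram_schmidt(U):
    """Return Q with orthonormal columns and upper-triangular R with U = Q R."""
    U = np.asarray(U, dtype=float)
    d = U.shape[1]
    Q = np.zeros_like(U)
    R = np.zeros((d, d))
    for j in range(d):
        w = U[:, j].copy()
        for i in range(j):                 # subtract projections onto q_1, ..., q_{j-1}
            R[i, j] = Q[:, i] @ U[:, j]
            w -= R[i, j] * Q[:, i]
        R[j, j] = np.linalg.norm(w)        # positive diagonal entry
        Q[:, j] = w / R[j, j]              # fails if the u_j are not independent
    return Q, R
```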

QR FACTORIZATION

Given a basis $u_1, \dots, u_d$ for $\mathbb{R}^d$, Gram-Schmidt yields an upper triangular matrix $R$ with positive numbers on its diagonal and orthonormal vectors $q_1, \dots, q_d$ such that $u_j = \sum_{i \leq j} q_i R_{ij}$; therefore, since $R$ is upper triangular, $U = QR$, where $U = [u_1 \cdots u_d]$ and $Q = [q_1 \cdots q_d]$. This is a factorization that has important applications to least-squares problems (Section 5.3) and to computing eigenvalues and eigenvectors (Section 5.5).
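As an illustration of the least-squares application, the sketch below solves $\min_x \|Ux - b\|$ via the factorization: since $Q$ has orthonormal columns, the problem reduces to the triangular system $Rx = Q^T b$. It reuses the gram_schmidt sketch above, and the data are made up.

```python
import numpy as np

U = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])                # tall matrix: overdetermined system
b = np.array([1.0, 2.0, 2.0])

Q, R = gram_schmidt(U)                    # U = Q R with Q^T Q = I
x = np.linalg.solve(R, Q.T @ b)           # R x = Q^T b

print(x)
print(np.allclose(x, np.linalg.lstsq(U, b, rcond=None)[0]))  # agrees with lstsq
```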

PARTIAL HESSENBERG FACTORIZATION

Definition. A (not necessarily square) matrix $H$ is upper Hessenberg if $H_{ij} = 0$ whenever $i > j + 1$.

We consider a matrix $A \in \mathbb{R}^{m \times m}$, orthonormal vectors $q_1, \dots, q_{n+1}$, and an integer $n < m$ such that $A Q_n = Q_{n+1} \tilde{H}_n$, where $Q_k = [q_1 \cdots q_k]$ and $\tilde{H}_n$ is an $(n+1) \times n$ upper Hessenberg matrix, or, equivalently, $A q_j = \sum_{i=1}^{j+1} q_i (\tilde{H}_n)_{ij}$ for $j = 1, \dots, n$.

KRYLOV SPACES AND ARNOLDI ITERATION

If the Krylov space $\mathcal{K}_n(A, b) = \operatorname{span}\{b, Ab, \dots, A^{n-1} b\}$ has dimension $n$, then an orthonormal basis $q_1, \dots, q_n$ for it can be computed by GS using the Arnoldi iteration, based on the equation $A Q_n = Q_{n+1} \tilde{H}_n$:

$q_1 = b / \|b\|$
For $j$ = 2 to $n$:
    $w = A q_{j-1} - \sum_{i \leq j-1} \langle A q_{j-1}, q_i \rangle q_i$
    $q_j = w / \|w\|$

(Recall that $\operatorname{span}\{q_1, \dots, q_j\} = \operatorname{span}\{b, Ab, \dots, A^{j-1} b\}$ for each $j \leq n$.)
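A sketch of the Arnoldi iteration that also records the Hessenberg entries $H_{ij} = \langle q_i, A q_j \rangle$; the names and the breakdown handling are my own.

```python
import numpy as np

def arnoldi(A, b, n):
    """Return Q (m x (n+1), orthonormal columns) and H ((n+1) x n, upper
    Hessenberg) satisfying A Q[:, :n] = Q H."""
    b = np.asarray(b, dtype=float)
    m = len(b)
    Q = np.zeros((m, n + 1))
    H = np.zeros((n + 1, n))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(n):
        w = A @ Q[:, j]                    # next Krylov direction
        for i in range(j + 1):             # orthogonalize against q_1, ..., q_{j+1}
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] == 0:               # breakdown: Krylov space has dimension j+1
            return Q[:, :j + 1], H[:j + 2, :j + 1]
        Q[:, j + 1] = w / H[j + 1, j]
    return Q, H
```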

COMPLETE HESSENBERG FACTORIZATION

Possibly using more than one Krylov subspace, we can construct an orthonormal basis $q_1, \dots, q_m$ for $\mathbb{R}^m$ such that $A = Q H Q^T$, where $Q = [q_1 \cdots q_m]$ is orthogonal and $H$ is upper Hessenberg. We observe that the number of Krylov subspaces used equals 1 + the number of zeros on the diagonal beneath the main diagonal of $H$.
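For reference, the complete factorization of a square matrix is available directly in SciPy; a short sketch:

```python
import numpy as np
from scipy.linalg import hessenberg

A = np.random.default_rng(1).standard_normal((5, 5))
H, Q = hessenberg(A, calc_q=True)      # A = Q H Q^T, H upper Hessenberg, Q orthogonal

print(np.allclose(Q @ H @ Q.T, A))     # True
print(np.allclose(np.tril(H, -2), 0))  # True: entries below the subdiagonal vanish
```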

TRI-DIAGONAL MATRIX

Theorem. $H = H^T$ iff $A = A^T$.
Proof. $H = Q^T A Q$, therefore $H^T = Q^T A^T Q$, and $Q$ is invertible.
Corollary. If $A = A^T$ then $H$ is tridiagonal (a symmetric upper Hessenberg matrix has $H_{ij} = 0$ whenever $|i - j| > 1$).

LANCZOS ITERATION

Theorem. If $A = A^T$ and $\mathcal{K}_n(A, b)$ has dimension $n$, then an orthonormal basis $q_1, \dots, q_n$ for it can be computed by GS using the Lanczos iteration, in which the Arnoldi recurrence collapses to three terms:

$q_1 = b / \|b\|$, $\beta_0 = 0$, $q_0 = 0$
For $j$ = 1 to $n-1$:
    $v = A q_j$
    $\alpha_j = \langle q_j, v \rangle$
    $v = v - \beta_{j-1} q_{j-1} - \alpha_j q_j$
    $\beta_j = \|v\|$
    $q_{j+1} = v / \beta_j$
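A sketch of the loop above; it returns the diagonals $\alpha, \beta$ of the tridiagonal matrix $T = Q^T A Q$. In floating point the computed $q_j$ gradually lose orthogonality, so practical implementations reorthogonalize.

```python
import numpy as np

def lanczos(A, b, n):
    """Return Q (orthonormal columns) and the diagonals (alpha, beta) of T = Q^T A Q."""
    b = np.asarray(b, dtype=float)
    m = len(b)
    Q = np.zeros((m, n))
    alpha = np.zeros(n)
    beta = np.zeros(n - 1)
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(n):
        v = A @ Q[:, j]
        alpha[j] = Q[:, j] @ v
        v -= alpha[j] * Q[:, j]
        if j > 0:
            v -= beta[j - 1] * Q[:, j - 1]   # three-term recurrence
        if j < n - 1:
            beta[j] = np.linalg.norm(v)
            Q[:, j + 1] = v / beta[j]        # breaks down if beta[j] = 0
    return Q, alpha, beta
```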

CONJUGATE GRADIENT ITERATION

The iteration that Hestenes and Stiefel made famous solves $Ax = b$ under the assumption that $A$ is symmetric and positive definite:

$x_0 = 0$, $r_0 = b$, $p_0 = r_0$
For $j$ = 1 to $n-1$:
    $\alpha_j = \langle r_{j-1}, r_{j-1} \rangle / \langle p_{j-1}, A p_{j-1} \rangle$ (step length)
    $x_j = x_{j-1} + \alpha_j p_{j-1}$ (approximate solution)
    $r_j = r_{j-1} - \alpha_j A p_{j-1}$ (residual)
    $\beta_j = \langle r_j, r_j \rangle / \langle r_{j-1}, r_{j-1} \rangle$
    $p_j = r_j + \beta_j p_{j-1}$ (search direction)
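A direct transcription of the iteration into NumPy, with an early exit once the residual is small; the tolerance and names are my own.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10):
    """Solve A x = b for symmetric positive definite A by CG."""
    b = np.asarray(b, dtype=float)
    x = np.zeros_like(b)
    r = b - A @ x                     # residual r_0 = b
    p = r.copy()                      # search direction p_0 = r_0
    rs = r @ r
    for _ in range(len(b)):
        Ap = A @ p
        alpha = rs / (p @ Ap)         # step length
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:     # converged
            break
        p = r + (rs_new / rs) * p     # beta = rs_new / rs
        rs = rs_new
    return x
```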

CONJUGATE GRADIENT ITERATION

Theorem 1. The following sets are all equal:
$\operatorname{span}\{x_1, \dots, x_j\} = \operatorname{span}\{p_0, \dots, p_{j-1}\} = \operatorname{span}\{r_0, \dots, r_{j-1}\} = \operatorname{span}\{b, Ab, \dots, A^{j-1} b\} = \mathcal{K}_j$,
and $\langle r_j, r_i \rangle = 0$ and $\langle p_j, A p_i \rangle = 0$ for $i < j$.
Proof. By induction: if $j < n-1$ then $r_j = r_{j-1} - \alpha_j A p_{j-1} \in \mathcal{K}_{j+1}$ since $p_{j-1} \in \mathcal{K}_j$, and $p_j = r_j + \beta_j p_{j-1} \in \mathcal{K}_{j+1}$; the orthogonality relations follow since $\alpha_j$ and $\beta_j$ are chosen precisely so that $\langle r_j, r_{j-1} \rangle = 0$ and $\langle p_j, A p_{j-1} \rangle = 0$.
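A numerical spot-check of the orthogonality relations in Theorem 1 on a small made-up SPD system, recording the residuals $r_j$ and directions $p_j$ as the iteration runs:

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)           # symmetric positive definite test matrix
b = rng.standard_normal(5)

x = np.zeros(5); r = b - A @ x; p = r.copy()
R, P = [r.copy()], [p.copy()]
for _ in range(4):
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)
    x += alpha * p
    r_new = r - alpha * Ap
    p = r_new + ((r_new @ r_new) / (r @ r)) * p
    r = r_new
    R.append(r.copy()); P.append(p.copy())

print(max(abs(R[i] @ R[j]) for j in range(5) for i in range(j)))        # ~0: r_i orthogonal to r_j
print(max(abs(P[i] @ (A @ P[j])) for j in range(5) for i in range(j)))  # ~0: p_i A-conjugate to p_j
```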

CONJUGATE GRADIENT ITERATION

Theorem 2. If $A$ is symmetric and positive definite and the CG algorithm to solve $Ax = b$ has not converged, that is $r_{j-1} \neq 0$, then $x_j$ is the unique point in $\mathcal{K}_j$ that minimizes the $A$-norm of the error $\|e_j\|_A$, where $e_j = x - x_j$, and convergence is monotonic: $\|e_j\|_A \leq \|e_{j-1}\|_A$; therefore $e_j = 0$ for some $j \leq n$.
Proof. If $y \in \mathcal{K}_j$ then $\|x - y\|_A^2 = \|e_j\|_A^2 + \|x_j - y\|_A^2$, since $A e_j = r_j$ is orthogonal to $\mathcal{K}_j$ by Theorem 1; therefore the minimum over $\mathcal{K}_j$ is attained uniquely at $y = x_j$, and monotonicity follows from $\mathcal{K}_{j-1} \subseteq \mathcal{K}_j$.

Theorem 3. If $\kappa$ is the condition number of $A$ subordinate to the 2-norm, then $\|e_j\|_A / \|e_0\|_A \leq 2 \left( (\sqrt{\kappa} - 1)/(\sqrt{\kappa} + 1) \right)^j$.
Proof. See the handouts.