Scientific Computing Matrix Norms, Convergence, and Matrix Condition Numbers.

Vector Norms
A vector norm is a quantity that measures how large a vector is (the magnitude of the vector). For a number x, we have |x| as a measurement of the magnitude of x. For a vector x, it is not clear what the "best" measurement of size should be. Note: we will use bold-face type (x) to denote a vector.

Vector Norms
Example: x = (4, -1). ||x||_2 = sqrt(4^2 + (-1)^2) = sqrt(17) ≈ 4.123 is the standard Pythagorean length of x. This is one possible measurement of the size of x.

Vector Norms
Example: x = (4, -1). ||x||_1 = |4| + |-1| = 5 is the "Taxicab" length of x. This is another possible measurement of the size of x.

Vector Norms
Example: x = (4, -1). ||x||_∞ = max(|4|, |-1|) = 4 is yet another possible measurement of the size of x.

Vector Norms
Definition: A vector norm is a function that takes a vector and returns a non-negative real number. We denote the norm of a vector x by ||x||. The norm must satisfy:
- Triangle Inequality: ||x + y|| ≤ ||x|| + ||y||
- Scalar: ||αx|| = |α| ||x||
- Positive: ||x|| ≥ 0, and ||x|| = 0 only when x is the zero vector.

Vector Norms
Our previous examples, for vectors in R^n:
||x||_1 = |x_1| + |x_2| + ... + |x_n|
||x||_2 = sqrt(x_1^2 + x_2^2 + ... + x_n^2)
||x||_∞ = max(|x_1|, |x_2|, ..., |x_n|)
All of these satisfy the three properties for a norm.

Vector Norms
Example: For x = (4, -1): ||x||_1 = 5, ||x||_2 = sqrt(17) ≈ 4.123, and ||x||_∞ = 4.

Vector Norms
Definition: The L_p norm generalizes these three norms. For p ≥ 1, it is defined on R^n by:
||x||_p = (|x_1|^p + |x_2|^p + ... + |x_n|^p)^(1/p)
p = 1 gives the L_1 norm, p = 2 gives the L_2 norm, and the limit p → ∞ gives the L_∞ norm.
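As a quick illustrative sketch (in Python/NumPy rather than the Matlab used elsewhere in these slides), the three L_p norms of the running example x = (4, -1) can be computed directly:

```python
import numpy as np

x = np.array([4.0, -1.0])  # the running example vector

l1 = np.linalg.norm(x, 1)         # |4| + |-1| = 5
l2 = np.linalg.norm(x, 2)         # sqrt(4^2 + (-1)^2) = sqrt(17)
linf = np.linalg.norm(x, np.inf)  # max(|4|, |-1|) = 4

print(l1, l2, linf)
```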

Distance
The distance between two vectors x and y can be measured by the norm of their difference: d(x, y) = ||x - y||. Each choice of vector norm gives a corresponding notion of distance.

Which norm is best?
The answer depends on the application. The 1-norm and ∞-norm are good whenever one is analyzing sensitivity of solutions. The 2-norm is good for comparing distances of vectors. There is no one best vector norm!

Matlab Vector Norms
In Matlab, the norm function computes the L_p norms of vectors. Syntax: norm(x, p)
>> x = [ ];
>> n = norm(x,2)
n =
>> n = norm(x,1)
n = 8
>> n = norm(x, inf)
n = 4

Matrix Norms
Definition: Given a vector norm ||x||, the matrix norm defined by the vector norm is given by:
||A|| = max_{x ≠ 0} ||Ax|| / ||x||
What does a matrix norm represent? It represents the maximum "stretching" that A does to a vector x -> (Ax).

Matrix Norm "Stretch"
Note that, since ||x|| is a scalar, we have
||Ax|| / ||x|| = ||A(x/||x||)|| = ||Az||, where z = x/||x||.
Since z is a unit vector, we see that the matrix norm is the maximum value of ||Az|| where z is on the unit ball in R^n. Thus, ||A|| represents the maximum "stretching" possible done by the action Ax.

Matrix 1-Norm
Theorem A: The matrix norm corresponding to the 1-norm is the maximum absolute column sum:
||A||_1 = max_j Σ_i |a_ij|
Proof: From the previous slide, we have
||A||_1 = max_{||x||_1 = 1} ||Ax||_1
Also, Ax = x_1 A_1 + x_2 A_2 + ... + x_n A_n, where A_j is the j-th column of A.

Matrix 1-Norm
Proof (continued): Then,
||Ax||_1 = ||Σ_j x_j A_j||_1 ≤ Σ_j |x_j| ||A_j||_1 ≤ (max_j ||A_j||_1) Σ_j |x_j| = max_j ||A_j||_1
Let x be a vector with all zeroes, except a 1 in the spot where ||A_j||_1 is a max. Then, we get equality above. □

Matrix Norms
Theorem B: The matrix norm corresponding to the ∞-norm is the maximum absolute row sum:
||A||_∞ = max_i Σ_j |a_ij|
Proof: similar to Theorem A.
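Theorems A and B can be checked numerically. A small sketch with an arbitrary made-up matrix (not one from the slides):

```python
import numpy as np

# Arbitrary example matrix (made up for illustration)
A = np.array([[1.0, -7.0,  4.0],
              [2.0,  3.0, -5.0]])

col_sums = np.abs(A).sum(axis=0)  # absolute column sums: 3, 10, 9
row_sums = np.abs(A).sum(axis=1)  # absolute row sums: 12, 10

norm1 = np.linalg.norm(A, 1)         # induced 1-norm: max column sum
norminf = np.linalg.norm(A, np.inf)  # induced inf-norm: max row sum
```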

Matrix Norm Properties
- ||A|| > 0 if A ≠ O
- ||A|| = 0 iff A = O
- ||cA|| = |c| ||A||
- ||A + B|| ≤ ||A|| + ||B||
- ||AB|| ≤ ||A|| ||B||
- ||Ax|| ≤ ||A|| ||x||

Eigenvalues-Eigenvectors
The eigenvectors of a matrix A are nonzero vectors x that satisfy Ax = λx, or (A - λI)x = 0. So, λ is an eigenvalue iff det(A - λI) = 0.

Spectral Radius
The spectral radius of a matrix A is defined as ρ(A) = max |λ|, where λ ranges over the eigenvalues of A. In our previous example, the eigenvalue of largest magnitude was 1, so the spectral radius is 1.
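A sketch of computing the spectral radius numerically (the matrix here is a made-up example whose eigenvalues are 1 and 0.5, not the example from the slides):

```python
import numpy as np

def spectral_radius(A):
    """rho(A): the largest magnitude among the eigenvalues of A."""
    return max(abs(np.linalg.eigvals(A)))

A = np.array([[1.0, 0.0],
              [0.0, 0.5]])  # eigenvalues 1 and 0.5
rho = spectral_radius(A)
```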

Convergence
Theorem 1: If ρ(A) < 1, then A^n -> O (the zero matrix) as n -> ∞.
Proof: Suppose we can find a basis for R^n of unit eigenvectors of A, say {e_1, e_2, ..., e_n} (this holds, for example, when A is diagonalizable). For any unit vector x, we have x = a_1 e_1 + a_2 e_2 + ... + a_n e_n. Then,
A^n x = a_1 A^n e_1 + a_2 A^n e_2 + ... + a_n A^n e_n = a_1 λ_1^n e_1 + a_2 λ_2^n e_2 + ... + a_n λ_n^n e_n
Since ρ(A) < 1, each |λ_i|^n -> 0, so A^n x -> 0 for every x, and the result must hold. □
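Theorem 1 can be observed numerically; a sketch with a made-up matrix whose spectral radius is 0.6:

```python
import numpy as np

B = np.array([[0.5, 0.2],
              [0.1, 0.4]])  # eigenvalues 0.6 and 0.3, so rho(B) = 0.6 < 1

rho = max(abs(np.linalg.eigvals(B)))
P50 = np.linalg.matrix_power(B, 50)  # B^50 should be very close to the zero matrix
```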

Convergent Matrix Series
Theorem 2: If ρ(B) < 1, then (I-B)^(-1) exists and (I-B)^(-1) = I + B + B^2 + · · ·
Proof: Since Bx = λx exactly when (I-B)x = (1-λ)x, λ is an eigenvalue of B iff (1-λ) is an eigenvalue of (I-B). Now, we know that |λ| < 1, so 0 cannot be an eigenvalue of (I-B). Thus, (I-B) is invertible (why?). Let S_p = I + B + B^2 + · · · + B^p. Then,
(I-B) S_p = (I + B + B^2 + · · · + B^p) - (B + B^2 + · · · + B^(p+1)) = I - B^(p+1)
Since ρ(B) < 1, by Theorem 1 the term B^(p+1) goes to the zero matrix as p goes to infinity, so (I-B) S_p -> I and S_p -> (I-B)^(-1). □
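The geometric series of Theorem 2 can be checked with partial sums (using the same made-up B with rho(B) = 0.6):

```python
import numpy as np

B = np.array([[0.5, 0.2],
              [0.1, 0.4]])  # rho(B) = 0.6 < 1

# Partial sums S_p = I + B + B^2 + ... + B^p
S = np.zeros((2, 2))
term = np.eye(2)
for _ in range(100):
    S = S + term
    term = term @ B

exact = np.linalg.inv(np.eye(2) - B)  # (I - B)^{-1}
```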

Convergence of Iterative Solution to Ax=b
Recall: Our general iterative formula to find x was
Q x^(k+1) = ωb + (Q - ωA) x^(k)
where Q and ω were variable parameters. We can re-write this as
x^(k+1) = Q^(-1)(Q - ωA) x^(k) + Q^(-1) ωb
Let B = Q^(-1)(Q - ωA) and c = Q^(-1) ωb. Then, our iteration formula has the general form:
x^(k+1) = B x^(k) + c

Convergence of Iterative Solution to Ax=b
Theorem 3: For any x^(0) in R^n, the iteration formula given by x^(k+1) = B x^(k) + c will converge to the unique solution of x = Bx + c (i.e., the fixed point) iff ρ(B) < 1.
Proof: Unrolling the iteration gives x^(k+1) = B^(k+1) x^(0) + (I + B + · · · + B^k) c. If ρ(B) < 1, the term B^(k+1) x^(0) will vanish by Theorem 1, and the remaining sum will converge to (I-B)^(-1) c by Theorem 2. Thus, {x^(k+1)} converges to z = (I-B)^(-1) c, i.e., z - Bz = c, or z = Bz + c. The converse proof can be found in Burden and Faires, Numerical Analysis. □
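A sketch of Theorem 3 in action: iterating x^(k+1) = B x^(k) + c with rho(B) < 1 converges to the fixed point (I - B)^(-1) c (B and c here are made up):

```python
import numpy as np

B = np.array([[0.5, 0.2],
              [0.1, 0.4]])  # rho(B) = 0.6 < 1
c = np.array([1.0, 2.0])

x = np.zeros(2)  # any starting vector works
for _ in range(200):
    x = B @ x + c

# The fixed point z = (I - B)^{-1} c, computed directly
fixed_point = np.linalg.solve(np.eye(2) - B, c)
```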

Diagonally Dominant Matrices
Def: A matrix A is called diagonally dominant if, in every row, the magnitude of the diagonal element is strictly larger than the sum of the absolute values of the other elements in that row.
Example:
A = [ 4  1 -1
      2 -5  2
      1  1  3 ]
is diagonally dominant, since |4| > |1| + |-1|, |-5| > |2| + |2|, and |3| > |1| + |1|.
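A minimal helper sketching the definition (strict row dominance; the matrices in the test are made up):

```python
import numpy as np

def is_diagonally_dominant(A):
    """True if |a_ii| > sum of |a_ij| over j != i, for every row i."""
    A = np.abs(np.asarray(A, dtype=float))
    diag = A.diagonal()
    off_diag = A.sum(axis=1) - diag  # sum of off-diagonal magnitudes per row
    return bool(np.all(diag > off_diag))
```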

Jacobi Method
Recall: Jacobi Method
x^(k+1) = D^(-1)(b + (D-A) x^(k)) = D^(-1)(D-A) x^(k) + D^(-1) b
Theorem 4: If A is diagonally dominant, then the Jacobi method converges to the solution of Ax=b.
Proof: Let B = D^(-1)(D-A) and c = D^(-1) b. Then, we have x^(k+1) = B x^(k) + c. Consider the L_∞ norm of B, which is equal to the maximum absolute row sum of B.

Jacobi Method
Proof (continued): The entries of B = I - D^(-1)A are 0 on the diagonal and -a_ij / a_ii off the diagonal. Then,
||B||_∞ = max_i (1/|a_ii|) Σ_{j ≠ i} |a_ij|
If A is diagonally dominant, then the terms we are taking a max over are all less than 1. So, the L_∞ norm of B is < 1. We will now show that this implies that the spectral radius is < 1.

Jacobi Method
Lemma: ρ(A) ≤ ||A|| for any induced matrix norm.
Proof: Let λ be an eigenvalue of A with unit eigenvector x. Then |λ| = |λ| ||x|| = ||λx|| = ||Ax|| ≤ ||A|| ||x|| = ||A||. □
Proof of Theorem 4 (cont): Since we have shown that ||B||_∞ < 1, by the Lemma we have that ρ(B) ≤ ||B||_∞ < 1. By Theorem 3, the iteration method converges. □
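The Jacobi update from the slides can be sketched directly; the test system below is made up, chosen to be diagonally dominant so that Theorem 4 guarantees convergence:

```python
import numpy as np

def jacobi(A, b, iters=100):
    """Jacobi iteration: x^(k+1) = D^{-1} (b + (D - A) x^(k))."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    D = np.diag(A)  # diagonal entries of A
    x = np.zeros_like(b)
    for _ in range(iters):
        x = (b + D * x - A @ x) / D  # elementwise form of D^{-1}(b + (D - A) x)
    return x

# Made-up diagonally dominant system
A = [[4.0, 1.0, -1.0],
     [2.0, -5.0, 2.0],
     [1.0, 1.0, 3.0]]
b = [5.0, -8.0, 6.0]
x = jacobi(A, b)
```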

Gauss-Seidel Method
Through similar means we can show (no proof):
Theorem 5: If A is diagonally dominant, then the Gauss-Seidel method converges to the solution of Ax=b.
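For comparison, a Gauss-Seidel sketch: it differs from Jacobi only in that each freshly updated component is used immediately within the sweep (the test system is made up and diagonally dominant):

```python
import numpy as np

def gauss_seidel(A, b, iters=50):
    """Gauss-Seidel: like Jacobi, but uses updated entries immediately."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            s = A[i] @ x - A[i, i] * x[i]  # off-diagonal contribution to row i
            x[i] = (b[i] - s) / A[i, i]
    return x

# Made-up diagonally dominant system
A = [[4.0, 1.0, -1.0],
     [2.0, -5.0, 2.0],
     [1.0, 1.0, 3.0]]
b = [5.0, -8.0, 6.0]
x = gauss_seidel(A, b)
```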