Eigenvalue Problems Solving linear systems Ax = b is one part of numerical linear algebra, and involves manipulating the rows of a matrix. The second main part of numerical linear algebra is about transformations of a matrix that leave its eigenvalues unchanged: Ax = λx, where λ is an eigenvalue of A and the non-zero vector x is the corresponding eigenvector.

What are Eigenvalues? Eigenvalues are important in physical, biological, and financial problems (and others) that can be represented by ordinary differential equations. Eigenvalues often represent quantities such as natural frequencies of vibration, energy levels in quantum mechanical systems, and stability parameters.

What are Eigenvectors? Mathematically speaking, the eigenvectors of a matrix A are those vectors that, when multiplied by A, remain parallel to themselves. Finding the eigenvalues and eigenvectors is equivalent to transforming the underlying system of equations into a special set of coordinate axes in which the matrix is diagonal. The eigenvalues are the entries of the diagonal matrix. The eigenvectors are the new set of coordinate axes.

Determinants Suppose we eliminate the components of x from Ax = 0 using Gaussian elimination without pivoting. We do the kth forward-elimination step by subtracting ajk times row k from akk times row j, for j = k+1, …, n. Then at the end of forward elimination we have Ux = 0, and Unn is, up to nonzero factors introduced by the cross-multiplication, the determinant of A, det(A). For nonzero solutions to Ax = 0 we must have det(A) = 0. Determinants are defined only for square matrices.

Determinant Example Suppose we have a 3×3 matrix. Then Ax = 0 is the same as:
a11x1 + a12x2 + a13x3 = 0
a21x1 + a22x2 + a23x3 = 0
a31x1 + a32x2 + a33x3 = 0

Determinant Example (continued) Step k=1: subtract a21 times equation 1 from a11 times equation 2, and subtract a31 times equation 1 from a11 times equation 3. This gives:
(a11a22 - a21a12)x2 + (a11a23 - a21a13)x3 = 0
(a11a32 - a31a12)x2 + (a11a33 - a31a13)x3 = 0

Determinant Example (continued) Step k=2: subtract (a11a32 - a31a12) times equation 2 from (a11a22 - a21a12) times equation 3. This gives:
[(a11a22 - a21a12)(a11a33 - a31a13) - (a11a32 - a31a12)(a11a23 - a21a13)]x3 = 0
which, after dividing out a common factor of a11, becomes:
[a11(a22a33 - a23a32) - a12(a21a33 - a23a31) + a13(a21a32 - a22a31)]x3 = 0
and so:
det(A) = a11(a22a33 - a23a32) - a12(a21a33 - a23a31) + a13(a21a32 - a22a31)
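The 3×3 formula just derived can be checked numerically. Below is an illustrative sketch (the matrix and the helper name det3 are invented for this example):

```python
# Evaluate the 3x3 determinant formula derived above:
# det(A) = a11(a22a33 - a23a32) - a12(a21a33 - a23a31) + a13(a21a32 - a22a31)
def det3(A):
    a11, a12, a13 = A[0]
    a21, a22, a23 = A[1]
    a31, a32, a33 = A[2]
    return (a11 * (a22 * a33 - a23 * a32)
            - a12 * (a21 * a33 - a23 * a31)
            + a13 * (a21 * a32 - a22 * a31))

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
print(det3(A))  # -3
```

Replacing the 10 by 9 makes the rows linearly dependent, and the formula then returns 0, consistent with det(A) = 0 for singular matrices.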

Definitions The minor Mij of a matrix A is the determinant of the matrix obtained by removing row i and column j from A. The cofactor is Cij = (-1)^(i+j) Mij. If A is a 1×1 matrix then det(A) = a11. In general, det(A) = ai1Ci1 + ai2Ci2 + … + ainCin, where i can be any value i = 1, …, n.
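The cofactor expansion gives a direct (if slow) recursive definition of the determinant. A minimal sketch, expanding along row 1 (index 0 in Python):

```python
# Determinant by cofactor expansion along the first row:
# det(A) = sum_j a[0][j] * C0j, where C0j = (-1)^j * M0j
def det(A):
    n = len(A)
    if n == 1:                      # 1x1 base case: det(A) = a11
        return A[0][0]
    total = 0
    for j in range(n):
        # Minor M0j: delete row 0 and column j
        minor = [row[:j] + row[j+1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

print(det([[4, 3], [1, 2]]))  # 5
```

This costs O(n!) operations, which is one reason determinants are computed via elimination in practice.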

A Few Important Properties det(AB) = det(A)det(B). If T is a triangular matrix, det(T) = t11t22…tnn. det(A^T) = det(A). If A is singular then det(A) = 0. If A is invertible then det(A) ≠ 0. If the pivots from Gaussian elimination are d1, d2, …, dn then det(A) = ±d1d2…dn, where the plus or minus sign depends on whether the number of row exchanges is even or odd.
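A few of these properties can be spot-checked on 2×2 matrices. An illustrative sketch (det2 and matmul2 are hypothetical helpers; the matrices are chosen for this example):

```python
# Spot-check determinant properties on 2x2 matrices.
def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def matmul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, -12], [1, -5]]
B = [[4, 3], [1, 2]]
AB = matmul2(A, B)
print(det2(AB) == det2(A) * det2(B))        # det(AB) = det(A)det(B)

AT = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]
print(det2(AT) == det2(A))                  # det(A^T) = det(A)

T = [[3, 7], [0, 5]]
print(det2(T) == 3 * 5)                     # triangular: product of diagonal
```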

Characteristic Equation Ax = λx can be written as (A - λI)x = 0, which must hold for some x ≠ 0, so (A - λI) is singular and det(A - λI) = 0. det(A - λI) is called the characteristic polynomial, and det(A - λI) = 0 the characteristic equation. If A is n×n the polynomial is of degree n, and so A has n eigenvalues, counting multiplicities.
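For a 2×2 matrix the characteristic polynomial is λ^2 - trace(A)λ + det(A), so the eigenvalues come straight from the quadratic formula. A minimal sketch, applied to the matrix A = [2 -12; 1 -5] used in the power-method example later in these slides (the helper name eig2 is invented here, and real eigenvalues are assumed):

```python
import math

# Eigenvalues of a 2x2 matrix from its characteristic equation:
# det(A - lambda*I) = lambda^2 - trace*lambda + det = 0
def eig2(A):
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    disc = math.sqrt(tr * tr - 4 * det)     # assumes real eigenvalues
    return (tr + disc) / 2, (tr - disc) / 2

A = [[2, -12], [1, -5]]
print(eig2(A))   # (-1.0, -2.0)
```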

Example Let A = [4 3; 1 2]. Then det(A - λI) = (4 - λ)(2 - λ) - 3 = λ^2 - 6λ + 5 = (λ - 1)(λ - 5). Hence the two eigenvalues are 1 and 5.

Example (continued) Once we have the eigenvalues, the eigenvectors can be obtained by substituting back into (A - λI)x = 0. This gives eigenvectors (1 -1)^T and (1 1/3)^T. Note that we can scale the eigenvectors any way we want. Determinants are not used for finding the eigenvalues of large matrices.

Positive Definite Matrices A complex matrix A is positive definite if for every nonzero complex vector x the quadratic form x^H A x is real and x^H A x > 0, where x^H denotes the conjugate transpose of x (i.e., change the sign of the imaginary part of each component of x and then transpose).

Eigenvalues of Positive Definite Matrices If A is positive definite and λ and x are an eigenvalue/eigenvector pair, then: Ax = λx ⇒ x^H A x = λ x^H x. Since x^H A x and x^H x are both real and positive, it follows that λ is real and positive.

Properties of Positive Definite Matrices If A is a positive definite matrix then: A is nonsingular. The inverse of A is positive definite. Gaussian elimination can be performed on A without pivoting. The eigenvalues of A are positive.
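These properties can be illustrated on a small symmetric positive definite matrix. A sketch under stated assumptions (the matrix is chosen for this example, and eig2 computes 2×2 eigenvalues via the quadratic formula):

```python
import math

# Check two of the listed properties for a small SPD matrix.
A = [[2.0, 1.0], [1.0, 2.0]]

def eig2(M):  # eigenvalues of a 2x2 matrix with real eigenvalues
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    d = math.sqrt(tr * tr - 4 * det)
    return (tr + d) / 2, (tr - d) / 2

print(eig2(A))            # (3.0, 1.0): all eigenvalues positive
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
print(det)                # 3.0: nonzero, so A is nonsingular
Ainv = [[A[1][1] / det, -A[0][1] / det],
        [-A[1][0] / det, A[0][0] / det]]
print(eig2(Ainv))         # eigenvalues 1 and 1/3: the inverse is also positive definite
```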

Hermitian Matrices A square matrix for which A = A^H is said to be a Hermitian matrix. If A is real and Hermitian it is said to be symmetric, and A = A^T. A Hermitian matrix is positive definite if and only if all of its eigenvalues are positive. Every eigenvalue of a Hermitian matrix is real. Eigenvectors of a Hermitian matrix corresponding to distinct eigenvalues are orthogonal to each other, i.e., their scalar product is zero.
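The real-eigenvalue property can be seen on a small complex example. A sketch with an illustrative 2×2 Hermitian matrix (chosen for this example), using the 2×2 characteristic equation:

```python
import cmath

# A 2x2 Hermitian matrix: equal to its conjugate transpose.
A = [[2 + 0j, 1 - 1j],
     [1 + 1j, 3 + 0j]]

# Solve det(A - lambda*I) = 0 via the quadratic formula.
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = cmath.sqrt(tr * tr - 4 * det)
lam1 = (tr + disc) / 2
lam2 = (tr - disc) / 2
print(lam1, lam2)   # both eigenvalues have zero imaginary part
```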

Eigen Decomposition Let λ1, λ2, …, λn be the eigenvalues of the n×n matrix A and x1, x2, …, xn the corresponding eigenvectors. Let Λ be the diagonal matrix with λ1, λ2, …, λn on the main diagonal. Let X be the n×n matrix whose jth column is xj. Then AX = XΛ, and so we have the eigen decomposition of A: A = XΛX^-1. This requires X to be invertible, thus the eigenvectors of A must be linearly independent.
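The decomposition can be verified numerically for a small matrix. The sketch below uses A = [2 -12; 1 -5] (the matrix from the power-method example later in these slides), whose eigenvalues -1 and -2 and eigenvectors (4 1)' and (3 1)' were worked out by hand for this illustration:

```python
# Verify A = X * Lambda * X^-1 for A = [2 -12; 1 -5].
def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

X = [[4, 3], [1, 1]]           # columns are the eigenvectors
Lam = [[-1, 0], [0, -2]]       # eigenvalues on the diagonal
d = X[0][0] * X[1][1] - X[0][1] * X[1][0]   # det(X) = 1
Xinv = [[X[1][1] / d, -X[0][1] / d],
        [-X[1][0] / d, X[0][0] / d]]

A = matmul(matmul(X, Lam), Xinv)
print(A)   # [[2.0, -12.0], [1.0, -5.0]]
```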

Powers of Matrices If A = XΛX^-1 then: A^2 = (XΛX^-1)(XΛX^-1) = XΛ(X^-1X)ΛX^-1 = XΛ^2X^-1. Hence we have: A^p = XΛ^pX^-1. Thus A^p has the same eigenvectors as A, and its eigenvalues are λ1^p, λ2^p, …, λn^p. We can use these results as the basis of an iterative algorithm for finding the eigenvalues of a matrix.

The Power Method Label the eigenvalues in order of decreasing absolute value, so |λ1| > |λ2| ≥ … ≥ |λn|. Consider the iteration formula: yk+1 = Ayk, where we start with some initial y0, so that yk = A^k y0. Then the direction of yk converges to that of the eigenvector x1 corresponding to the dominant eigenvalue λ1.

Proof We know that A^k = XΛ^kX^-1, so: yk = A^k y0 = XΛ^kX^-1 y0 = λ1^k X (Λ/λ1)^k X^-1 y0. The entries on the diagonal of (Λ/λ1)^k are (λj/λ1)^k, and all but the first get smaller in absolute value as k increases, since λ1 is the dominant eigenvalue.

Proof (continued) So we have yk ≈ λ1^k c1 x1, where c1 is the first component of X^-1 y0. Since λ1^k c1 x1 is just a constant times x1, we have the required result (provided c1 ≠ 0).

Example Let A = [2 -12; 1 -5] and y0 = [1 1]'. Then y1 = Ay0 = [-10 -4]' = -4[2.50 1.00]'. The iteration converges to a scalar multiple of [3 1]', which is the correct dominant eigenvector.
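The unscaled iteration can be run directly. A minimal sketch for the example above; the component ratio shows y aligning with the dominant eigenvector [3 1]':

```python
# Unscaled power iteration on A = [2 -12; 1 -5], starting from y0 = [1 1]'.
A = [[2, -12], [1, -5]]
y = [1, 1]
for _ in range(30):
    y = [A[0][0] * y[0] + A[0][1] * y[1],   # y <- A*y
         A[1][0] * y[0] + A[1][1] * y[1]]
print(y[0] / y[1])   # close to 3, i.e. y is a multiple of [3 1]'
```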

Rayleigh Quotient Note that once we have the eigenvector, the corresponding eigenvalue can be obtained from the Rayleigh quotient: dot(Ax,x)/dot(x,x), where dot(a,b) is the scalar product of vectors a and b defined by: dot(a,b) = a1b1 + a2b2 + … + anbn. So for our example, λ1 = -2.
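Checking this for the example eigenvector x = [3 1]' of A = [2 -12; 1 -5], a short sketch:

```python
# Rayleigh quotient dot(Ax,x)/dot(x,x) recovers the eigenvalue.
def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

A = [[2, -12], [1, -5]]
x = [3, 1]
Ax = [dot(A[0], x), dot(A[1], x)]
print(dot(Ax, x) / dot(x, x))   # -2.0
```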

Scaling The 1k can cause problems as it may become very large as the iteration progresses. To avoid this problem we scale the iteration formula: yk+1 = A(yk /k+1) where k+1 is the component of Ayk with largest absolute value.

Example with Scaling Let A = [2 -12; 1 -5] and y0 = [1 1]'.
Ay0 = [-10 -4]', so α1 = -10 and y1 = [1.00 0.40]'.
Ay1 = [-2.8 -1.0]', so α2 = -2.8 and y2 = [1.0 0.3571]'.
Ay2 = [-2.2857 -0.7857]', so α3 = -2.2857 and y3 = [1.0 0.3437]'.
Ay3 = [-2.1250 -0.7187]', so α4 = -2.1250 and y4 = [1.0 0.3382]'.
Ay4 = [-2.0588 -0.6912]', so α5 = -2.0588 and y5 = [1.0 0.3357]'.
Ay5 = [-2.0286 -0.6786]', so α6 = -2.0286 and y6 = [1.0 0.3345]'.
α is converging to the dominant eigenvalue -2.

Scaling Factor At step k+1, the scaling factor αk+1 is the component of Ayk with largest absolute value. When k is sufficiently large, Ayk ≈ λ1yk. The component with largest absolute value in λ1yk is λ1 (since yk was scaled in the previous step to have largest component 1). Hence αk+1 → λ1 as k → ∞.

MATLAB Code

function [lambda,y] = powerMethod(A,y,n)
for i = 1:n
    y = A*y;
    [c,j] = max(abs(y));
    lambda = y(j);
    y = y/lambda;
end
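The same algorithm can be sketched in Python without any libraries (an illustrative port of the MATLAB function above, with an invented name power_method):

```python
# Scaled power iteration: n steps, returning the eigenvalue estimate
# and the scaled eigenvector estimate.
def power_method(A, y, n):
    for _ in range(n):
        y = [sum(aij * yj for aij, yj in zip(row, y)) for row in A]  # y = A*y
        j = max(range(len(y)), key=lambda i: abs(y[i]))  # index of largest |y_i|
        lam = y[j]                                       # scaling factor alpha
        y = [yi / lam for yi in y]                       # largest component -> 1
    return lam, y

lam, y = power_method([[2, -12], [1, -5]], [1, 1], 50)
print(lam)   # close to -2
```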

Convergence The Power Method relies on being able to ignore terms of the form (λj/λ1)^k when k is large enough. Thus the convergence of the Power Method depends on |λ2|/|λ1|. If |λ2|/|λ1| = 1 the method will not converge. If |λ2|/|λ1| is close to 1 the method will converge slowly.

Orthonormal Vectors A set S of nonzero vectors is orthonormal if, for every pair of distinct vectors x and y in S, we have dot(x,y) = 0 (orthogonality), and for every x in S we have ||x||2 = 1 (each vector has unit length).

Similarity Transforms If T is invertible, then the matrices A and B = T^-1AT are said to be similar. Similar matrices have the same eigenvalues. If x is an eigenvector of A corresponding to eigenvalue λ, then the eigenvector of B corresponding to eigenvalue λ is T^-1x: Ax = λx ⇒ TBT^-1x = λx ⇒ B(T^-1x) = λ(T^-1x).
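For 2×2 matrices the eigenvalues are determined by the trace and determinant, so similarity can be checked by comparing those two invariants. An illustrative sketch with an arbitrary invertible T (chosen for this example):

```python
# Similar matrices B = T^-1 * A * T share trace and determinant,
# hence the same characteristic polynomial and eigenvalues.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, -12], [1, -5]]
T = [[1, 2], [1, 3]]           # det(T) = 1, so T is invertible
Tinv = [[3, -2], [-1, 1]]
B = matmul(matmul(Tinv, A), T)

print(A[0][0] + A[1][1] == B[0][0] + B[1][1])          # traces agree
print(A[0][0] * A[1][1] - A[0][1] * A[1][0]
      == B[0][0] * B[1][1] - B[0][1] * B[1][0])        # determinants agree
```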

The QR Algorithm The QR algorithm for finding eigenvalues is based on the QR factorisation, which represents a matrix A as: A = QR, where Q is a matrix whose columns are orthonormal and R is an upper triangular matrix. Note that Q^HQ = I and Q^-1 = Q^H; Q is termed a unitary matrix.

QR Algorithm without Shifts

A0 = A
for k = 0,1,2,…
    QkRk = Ak        (QR factorisation of Ak)
    Ak+1 = RkQk
end

Since Ak+1 = RkQk = Qk^-1AkQk, Ak and Ak+1 are similar and so have the same eigenvalues. Ak+1 tends to an upper triangular matrix with the same eigenvalues as A, and these eigenvalues lie along the main diagonal of Ak+1.
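A minimal sketch of the unshifted iteration for a 2×2 matrix, using Gram-Schmidt to build the QR factorisation (helper name qr2 is invented; the example matrix is A = [2 -12; 1 -5], whose eigenvalues are -2 and -1):

```python
import math

# One QR factorisation of a 2x2 matrix via Gram-Schmidt on its columns.
def qr2(A):
    c1 = [A[0][0], A[1][0]]
    c2 = [A[0][1], A[1][1]]
    r11 = math.hypot(c1[0], c1[1])
    q1 = [c1[0] / r11, c1[1] / r11]
    r12 = q1[0] * c2[0] + q1[1] * c2[1]
    v = [c2[0] - r12 * q1[0], c2[1] - r12 * q1[1]]   # remove q1 component
    r22 = math.hypot(v[0], v[1])
    q2 = [v[0] / r22, v[1] / r22]
    Q = [[q1[0], q2[0]], [q1[1], q2[1]]]
    R = [[r11, r12], [0.0, r22]]
    return Q, R

A = [[2.0, -12.0], [1.0, -5.0]]
for _ in range(60):
    Q, R = qr2(A)
    A = [[sum(R[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]                            # A <- R*Q
print(A[0][0], A[1][1])   # diagonal approaches the eigenvalues -2 and -1
```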

QR Algorithm with Shifts

A0 = A
for k = 0,1,2,…
    s = Ak(n,n)
    QkRk = Ak - sI      (QR factorisation)
    Ak+1 = RkQk + sI
end

Since Ak+1 = RkQk + sI = Qk^-1(Ak - sI)Qk + sI = Qk^-1AkQk, once again Ak and Ak+1 are similar and so have the same eigenvalues. The shift subtracts s from each eigenvalue before the factorisation (and adds it back afterwards), which speeds up convergence.

MATLAB Code for QR Algorithm Let A be an n×n matrix.

n = size(A,1);
I = eye(n,n);
s = A(n,n);
[Q,R] = qr(A-s*I);
A = R*Q+s*I

Use the up arrow key in MATLAB to repeat the iteration, or put a loop round the last three lines.

Deflation The eigenvalue at A(n,n) will converge first. Then we set s=A(n-1,n-1) and continue the iteration until the eigenvalue at A(n-1,n-1) converges. Then set s=A(n-2,n-2) and continue the iteration until the eigenvalue at A(n-2,n-2) converges, and so on. This process is called deflation.