
Eigenvalues The eigenvalue problem is to determine the nontrivial solutions of the equation Ax = λx, where A is an n-by-n matrix, x is a length-n column vector, and λ is a scalar. The n values of λ that satisfy the equation are the eigenvalues, and the corresponding values of x are the right eigenvectors. In MATLAB, the function eig solves for the eigenvalues λ and, optionally, the eigenvectors x.
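To make the defining equation concrete, here is a minimal check in Python (the slides otherwise use MATLAB); the matrix and eigenpair below are illustrative choices, not taken from the slides.

```python
# Minimal check of the defining equation A x = lambda x.
# The diagonal matrix and eigenpair are illustrative (an assumption),
# not an example from the slides.
A = [[2, 0],
     [0, 5]]
lam = 2
x = [1, 0]  # eigenvector of A for eigenvalue 2

# Compute A x by hand and compare with lambda x.
Ax = [A[0][0] * x[0] + A[0][1] * x[1],
      A[1][0] * x[0] + A[1][1] * x[1]]
print(Ax == [lam * x[0], lam * x[1]])  # True
```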

Eigenvalues If A is a square matrix of order (n x n) and I is the unit (identity) matrix of the same order, then the matrix B = A - λI is called the characteristic matrix of A, λ being a parameter. For example:

Eigenvalues A = [matrix shown on the slide; its entries are not preserved in this transcript]

Eigenvalues The equation |B| = |A - λI| = 0 is called the characteristic equation of A and is in general an equation of the nth degree in λ. The n roots of this equation are called the characteristic roots (or eigenvalues) of A. For example, the characteristic equation of the matrix A is obtained by setting the determinant of B equal to zero.

Eigenvalues That gives λ = 2, 2, 2, which stand as the eigenvalues of the matrix A.

Eigenvalue Decomposition With the eigenvalues on the diagonal of a diagonal matrix Λ and the corresponding eigenvectors forming the columns of a matrix V, we have A*V = V*Λ. If V is nonsingular, this becomes the eigenvalue decomposition A = V*Λ*inv(V).
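As a sketch (in Python rather than MATLAB), the decomposition can be checked on the 2-by-2 example worked later in these slides: eigenvalues 3 and 14 with eigenvectors (0.5, -1) and (1, 0.2). The product V*Λ*inv(V) then reproduces a matrix with the stated trace 17 and determinant 42.

```python
# Sketch of A = V * L * inv(V) for a 2x2 case, using the eigenpairs quoted
# later in these slides: eigenvalues 3 and 14, eigenvectors (0.5, -1), (1, 0.2).
V = [[0.5, 1.0],
     [-1.0, 0.2]]
L = [[3.0, 0.0],     # eigenvalues on the diagonal
     [0.0, 14.0]]

def matmul2(X, Y):
    # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

d = V[0][0] * V[1][1] - V[0][1] * V[1][0]   # det(V) = 1.1, so V is nonsingular
Vinv = [[ V[1][1] / d, -V[0][1] / d],
        [-V[1][0] / d,  V[0][0] / d]]

A = matmul2(matmul2(V, L), Vinv)
print(A)  # within roundoff of [[13, 5], [2, 4]] (trace 17, determinant 42)
```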

Eigenvalue Decomposition A good example is provided by the coefficient matrix of the ordinary differential equation in the previous section: A = [0 -6 -1; 6 2 -16; -5 20 -10]

Eigenvalue Decomposition The statement lambda = eig(A) produces a column vector containing the eigenvalues. For this matrix, the eigenvalues are complex.

lambda =
  -3.0710
  -2.4645+17.6008i
  -2.4645-17.6008i
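As a cross-check (a Python sketch rather than the MATLAB shown here), the reported eigenvalues can be substituted back into the characteristic polynomial. For a 3-by-3 matrix, the eigenvalues are the roots of p(x) = x^3 - tr(A) x^2 + m2 x - det(A), where m2 is the sum of the principal 2-by-2 minors; each reported value should give a residual near zero.

```python
# Cross-check: the eigenvalues reported by eig(A) must be (approximate)
# roots of the characteristic polynomial
#   p(x) = x^3 - tr(A) x^2 + m2 x - det(A),
# where m2 is the sum of the principal 2x2 minors of A.
A = [[0, -6, -1],
     [6, 2, -16],
     [-5, 20, -10]]

def minor2(M, i, j):
    # determinant of the 2x2 submatrix keeping rows/cols i and j
    return M[i][i] * M[j][j] - M[i][j] * M[j][i]

tr = A[0][0] + A[1][1] + A[2][2]
m2 = minor2(A, 0, 1) + minor2(A, 0, 2) + minor2(A, 1, 2)
det = (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
       - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
       + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))

def p(x):
    return x**3 - tr * x**2 + m2 * x - det

eigs = [-3.0710, complex(-2.4645, 17.6008), complex(-2.4645, -17.6008)]
# Residuals are small but not exactly zero: the values above are rounded
# to four decimal places.
print([abs(p(lam)) for lam in eigs])
```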

Eigenvalue Decomposition The real part of each of the eigenvalues is negative, so e^(λt) approaches zero as t increases. The nonzero imaginary part of two of the eigenvalues, ±17.6008, contributes the oscillatory component, sin(17.6008 t), to the solution of the differential equation.

Eigenvalue Decomposition With two output arguments, eig computes the eigenvectors and stores the eigenvalues in a diagonal matrix.

[V,D] = eig(A)

V =
  -0.8326   0.2003 - 0.1394i   0.2003 + 0.1394i
  -0.3553  -0.2110 - 0.6447i  -0.2110 + 0.6447i
  -0.4248  -0.6930            -0.6930

D =
  -3.0710            0                  0
        0   -2.4645+17.6008i            0
        0            0   -2.4645-17.6008i

Eigenvalue Decomposition The first eigenvector is real and the other two vectors are complex conjugates of each other. All three vectors are normalized to have Euclidean length, norm(v,2), equal to one. The matrix V*D*inv(V), which can be written more succinctly as V*D/V, is within roundoff error of A. And, inv(V)*A*V, or V\A*V, is within roundoff error of D.

Defective Matrices Some matrices do not have an eigenvector decomposition. These matrices are defective, or not diagonalizable. For example, A = [ 6 12 19; -9 -20 -33; 4 9 15 ]

Defective Matrices For this matrix, [V,D] = eig(A) produces

V =
  -0.4741  -0.4082  -0.4082
   0.8127   0.8165   0.8165
  -0.3386  -0.4082  -0.4082

D =
  -1.0000        0        0
        0   1.0000        0
        0        0   1.0000

There is a double eigenvalue at λ = 1. The second and third columns of V are the same. For this matrix, a full set of linearly independent eigenvectors does not exist.
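A sketch (in Python) of why this matrix is defective: for the double eigenvalue λ = 1, the matrix A - I has rank 2, so its null space, which is the eigenspace, is only one-dimensional even though the algebraic multiplicity is two.

```python
# Confirm the double eigenvalue lambda = 1 is defective: A - 1*I has rank 2,
# so the eigenspace has dimension 3 - 2 = 1 < 2 (the algebraic multiplicity).
A = [[6, 12, 19],
     [-9, -20, -33],
     [4, 9, 15]]

def rank(M, tol=1e-9):
    # Rank via Gaussian elimination with partial pivoting.
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        pivot = max(range(r, rows), key=lambda i: abs(M[i][c]), default=None)
        if pivot is None or abs(M[pivot][c]) < tol:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(r + 1, rows):
            f = M[i][c] / M[r][c]
            for j in range(c, cols):
                M[i][j] -= f * M[r][j]
        r += 1
    return r

B = [[A[i][j] - (1 if i == j else 0) for j in range(3)] for i in range(3)]
print(rank(B))  # 2, so the eigenspace for lambda = 1 is one-dimensional
```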

http://www.miislita.com/information-retrieval-tutorial/matrix-tutorial-3-eigenvalues-eigenvectors.html#d

The Eigenvalue Problem Consider a scalar matrix Z, obtained by multiplying an identity matrix by a scalar; i.e., Z = c*I. Subtracting this from a regular matrix A gives a new matrix A - Z = A - c*I. If the determinant of this new matrix is zero, |A - c*I| = 0, then A has been transformed into a singular matrix. The problem of finding the values of c that transform a regular matrix into a singular matrix in this way is referred to as the eigenvalue problem.

Calculating Eigenvalues However, subtracting c*I from A is equivalent to subtracting the scalar c from each entry of the main diagonal of A. The determinant of the new matrix vanishes only for specific values of c, and the trace of A equals the sum of those values. For which values of c does |A - c*I| = 0?

Calculating Eigenvalues [worked example shown on the slide: expanding the determinant |A - c*I| = 0 yields the characteristic polynomial in c, whose roots are c1 = 3 and c2 = 14]

Calculating Eigenvalues The polynomial expression we just obtained is called the characteristic equation, and the c values are termed the latent roots or eigenvalues of matrix A. Thus, subtracting either c1 = 3 or c2 = 14 from the principal diagonal of A results in a matrix whose determinant vanishes (|A - c*I| = 0).
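The slide's worked matrix is not preserved in this transcript; as an illustration, the Python sketch below uses A = [[13, 5], [2, 4]], a reconstruction consistent with the stated trace (17), determinant (42), and roots c1 = 3, c2 = 14. For a 2-by-2 matrix, the characteristic equation is the quadratic c^2 - trace*c + det = 0.

```python
import math

# The slide's own matrix is not preserved in this transcript; this A is a
# reconstruction (assumption) matching the stated trace 17, determinant 42,
# and eigenvalues 3 and 14.
A = [[13, 5],
     [2, 4]]

trace = A[0][0] + A[1][1]                      # 17
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]    # 42

# Characteristic equation: c^2 - trace*c + det = 0, solved by the
# quadratic formula.
disc = math.sqrt(trace**2 - 4 * det)
c1, c2 = (trace - disc) / 2, (trace + disc) / 2
print(c1, c2)  # 3.0 14.0
```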

Calculating Eigenvalues In terms of the trace of A we can write: c1/trace = 3/17 = 0.176 or 17.6% c2/trace = 14/17 = 0.824 or 82.4% Thus, c2 = 14 is the largest eigenvalue, accounting for more than 82% of the trace. The largest eigenvalue of a matrix is also called the principal eigenvalue.

Calculating Eigenvalues Now that the eigenvalues are known, these are used to compute the latent vectors of matrix A. These are the so-called eigenvectors.

Eigenvectors Multiplying A - ci*I by a column vector Xi with the same number of rows as A and setting the result to zero leads to: (A - ci*I)*Xi = 0. Thus, for every eigenvalue ci this equation constitutes a system of n simultaneous homogeneous equations, and every such system has an infinite number of solutions. Corresponding to every eigenvalue ci is a set of eigenvectors Xi, the number of eigenvectors in the set being infinite. Furthermore, eigenvectors that correspond to different eigenvalues are linearly independent from one another.
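For a 2-by-2 matrix the homogeneous system (A - ci*I)*Xi = 0 can be solved directly: since A - ci*I is singular, either row fixes the direction of Xi. The sketch below again uses the reconstructed matrix A = [[13, 5], [2, 4]] (an assumption consistent with the slides' stated values); the directions it finds are proportional to the normalized eigenvectors (0.5, -1) and (1, 0.2) quoted later.

```python
# Solve (A - c*I) x = 0 for each eigenvalue of a 2x2 matrix. A is a
# reconstruction (assumption) consistent with the slides' trace 17,
# determinant 42, and eigenvalues 3 and 14.
A = [[13, 5],
     [2, 4]]

def eigen_direction(A, c):
    # One row of the singular matrix A - c*I fixes the direction:
    # (a11 - c)*x + a12*y = 0  =>  (x, y) proportional to (-a12, a11 - c).
    return (-A[0][1], A[0][0] - c)

v1 = eigen_direction(A, 3)    # (-5, 10), proportional to (0.5, -1)
v2 = eigen_direction(A, 14)   # (-5, -1), proportional to (1, 0.2)
print(v1, v2)
```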

Properties of Eigenvalues and Eigenvectors
- The absolute value of the determinant (|det A|) is the product of the absolute values of the eigenvalues of matrix A.
- c = 0 is an eigenvalue of A if A is a singular (noninvertible) matrix.
- If A is an n x n triangular matrix (upper triangular or lower triangular) or a diagonal matrix, the eigenvalues of A are the diagonal entries of A.
- A and its transpose matrix have the same eigenvalues.
- The eigenvalues of a symmetric matrix are all real.

Properties of Eigenvalues and Eigenvectors…..cont.
- Eigenvectors of a symmetric matrix are orthogonal, but only those corresponding to distinct eigenvalues.
- The dominant or principal eigenvector of a matrix is an eigenvector corresponding to the eigenvalue of largest magnitude (for real numbers, largest absolute value) of that matrix.
- For a transition matrix, the dominant eigenvalue is always 1.
- The smallest eigenvalue of matrix A is the reciprocal of the largest eigenvalue of inv(A), the inverse of A.
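The last property can be checked numerically (a Python sketch using a sample 2-by-2 matrix chosen for illustration, not taken from the slides): the smallest eigenvalue of A equals the reciprocal of the largest eigenvalue of inv(A).

```python
import math

# Check: smallest eigenvalue of A == 1 / (largest eigenvalue of inv(A)).
# The matrix is a sample chosen for illustration (assumption).
A = [[13, 5],
     [2, 4]]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
Ainv = [[ A[1][1] / det, -A[0][1] / det],
        [-A[1][0] / det,  A[0][0] / det]]

def eigs_2x2(M):
    # Eigenvalues of a 2x2 matrix from its characteristic quadratic.
    tr = M[0][0] + M[1][1]
    d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    s = math.sqrt(tr * tr - 4 * d)
    return sorted(((tr - s) / 2, (tr + s) / 2))

lo_A = eigs_2x2(A)[0]        # smallest eigenvalue of A
hi_inv = eigs_2x2(Ainv)[1]   # largest eigenvalue of inv(A)
print(lo_A, 1 / hi_inv)      # both approximately 3.0
```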

Computing Eigenvectors from Eigenvalues [the worked derivation shown on these slides is not preserved in this transcript]

In addition, it is confirmed that |c1|*|c2| = |3|*|14| = |42| = |det A|.

As shown earlier, plotting these vectors confirms that eigenvectors that correspond to different eigenvalues are linearly independent of one another. Note that each eigenvalue produces an infinite set of eigenvectors, all being multiples of a normalized vector. So, instead of plotting candidate eigenvectors for a given eigenvalue, one can simply represent an entire set by its normalized eigenvector. This is done by rescaling coordinates; in this case, by taking coordinate ratios. In our example, the coordinates of these normalized eigenvectors are: (0.5, -1) for c1 = 3 and (1, 0.2) for c2 = 14.


Concluding Remarks Eigenvectors corresponding to two distinct eigenvalues (of a symmetric matrix) are orthogonal. The real part of an eigenvalue sets the time constant of the associated principal mode of the system in the time domain (for a stable mode, the time constant is the negative reciprocal of the real part). The imaginary part of an eigenvalue gives the angular frequency of the oscillation.