Section 4.1 Eigenvalues and Eigenvectors


Recall that a matrix transformation is a function f from Rn to Rn determined by an n × n matrix A, so that f(p) = Ap for p in Rn. The length of an input vector p need not equal the length of its image Ap; that is, ||p|| need not equal ||Ap||. Since a vector p in Rn has two basic properties, length and direction, it is natural to ask whether p and Ap are ever parallel for a given matrix A. (Recall that parallel means one vector is a scalar multiple of the other: Ap = (scalar) p.) Since A0 = 0, we need consider only nonzero vectors p. Vectors p for which Ap is parallel to p arise in a surprising number of places, including Markov processes, population models, differential equations, vibrations, aerodynamics, elasticity, nuclear physics, mechanics, chemical engineering, biology, and the digital transmission of images.

Here we consider only square matrices. We begin with 2 × 2 and 3 × 3 matrices, but our techniques also apply to larger square matrices. Example: Show that Ap is parallel to p and that Aq is parallel to q. Strategy: Compute Ap, set it equal to a scalar λ times p, and use equality of vectors to determine λ and the components of the vector p.

For the matrix A with A(x, y) = (x, -y) we have Ap = λp giving x = λx and -y = λy. Setting corresponding entries equal, we have two cases to investigate. Case 1: x = λx. For x ≠ 0 we have λ = 1; then -y = λy gives -y = y, so that y = 0. Case 2: -y = λy. For y ≠ 0 we have λ = -1; then x = λx gives x = -x, so that x = 0.

For this matrix, the nonzero vectors p with Ap = λp are the vectors in R2 of the form (x, 0) for λ = 1 and (0, y) for λ = -1. Note that only particular values of the scalar λ can occur. Definition: Let A be an n × n matrix. The scalar λ is called an eigenvalue of matrix A if there exists an n × 1 vector x, x ≠ 0, such that Ax = λx. (1) Every nonzero vector x satisfying (1) is called an eigenvector of A associated with the eigenvalue λ. The equation Ax = λx is commonly called the eigen equation, and its solution is a goal of this chapter. Sometimes we use the term eigenpair to mean an eigenvalue together with an associated eigenvector; this emphasizes that for a given matrix A a solution of (1) requires both a scalar, an eigenvalue λ, and a nonzero vector, an eigenvector x. In the eigen equation Ax = λx there is no restriction on the entries of matrix A, the type of scalar λ, or the entries of vector x, other than that x cannot be the zero vector. It is possible for the eigenvalues and eigenvectors of a matrix with real entries to involve complex values.
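The definition can be checked numerically. The following sketch uses NumPy and the 2 × 2 matrix implied by the case analysis above, A = diag(1, -1); the particular vectors p and q are illustrative choices:

```python
import numpy as np

# The 2 x 2 matrix from the case analysis above: A(x, y) = (x, -y).
A = np.array([[1.0, 0.0],
              [0.0, -1.0]])

p = np.array([3.0, 0.0])   # candidate eigenvector for lambda = 1
q = np.array([0.0, 5.0])   # candidate eigenvector for lambda = -1

# A p should equal 1 * p, and A q should equal (-1) * q.
print(np.allclose(A @ p, 1.0 * p))    # True
print(np.allclose(A @ q, -1.0 * q))   # True
```

Any nonzero choice of the free entry in p or q would pass the same check, which previews the fact that eigenvectors are never unique.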

Example: For the matrix above we found the eigenpairs (1, (x, 0)) and (-1, (0, y)). Note that there are many eigenvectors associated with a particular eigenvalue. Further examples ask the reader to verify that given pairs (λ, x) are eigenpairs of given matrices.

Properties: If x is an eigenvector corresponding to eigenvalue λ of A, then so is kx for any scalar k ≠ 0. (A nonzero scalar multiple of an eigenvector is another eigenvector corresponding to the same eigenvalue.) Proof: If (λ, p) is an eigenpair of A, then Ap = λp. We know that p ≠ 0 and k ≠ 0, so kp ≠ 0. To show that kp is an eigenvector we compute A(kp) = k(Ap) = k(λp) = λ(kp). This shows that (λ, kp) is an eigenpair of A. If p and q are eigenvectors corresponding to eigenvalue λ of A, then so is p + q (assuming p + q ≠ 0). Proof: If (λ, p) and (λ, q) are eigenpairs of A, then Ap = λp and Aq = λq. To show that p + q is an eigenvector we compute A(p + q) = Ap + Aq = λp + λq = λ(p + q). This shows that (λ, p + q) is an eigenpair of A.
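Both closure properties can be verified numerically. The matrix below is a sample chosen for illustration (it is not from the slides); it has eigenvalue 3 with eigenvectors along (1, 1):

```python
import numpy as np

# Sample symmetric matrix with eigenvalue 3 and eigenvectors along (1, 1).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam = 3.0
p = np.array([1.0, 1.0])
q = np.array([2.0, 2.0])   # another eigenvector for the same eigenvalue

# A nonzero scalar multiple of an eigenvector is again an eigenvector...
k = -4.0
print(np.allclose(A @ (k * p), lam * (k * p)))   # True

# ...and so is a sum of eigenvectors for the same eigenvalue (when nonzero).
print(np.allclose(A @ (p + q), lam * (p + q)))   # True
```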

Properties (continued): If (λ, p) is an eigenpair of A, then for any positive integer r, (λ^r, p) is an eigenpair of A^r. Proof: Since (λ, p) is an eigenpair of A, Ap = λp. Thus A^2 p = A(Ap) = A(λp) = λ(Ap) = λ(λp) = λ^2 p, A^3 p = A(A^2 p) = A(λ^2 p) = λ^2 (Ap) = λ^2 (λp) = λ^3 p, and in general A^r p = A(A^(r-1) p) = A(λ^(r-1) p) = λ^(r-1) (Ap) = λ^(r-1) (λp) = λ^r p. If (λ, p) and (μ, q) are eigenpairs of A with λ ≠ μ, then p and q are linearly independent. Proof: Here we use material from Section 2.4, namely that if vectors p and q are not scalar multiples of one another, then they are linearly independent. Since (λ, p) is an eigenpair of A, Ap = λp; similarly, since (μ, q) is an eigenpair of A, Aq = μq. Let s and t be scalars so that tp + sq = 0, or equivalently tp = -sq. Multiplying this linear combination by A we get A(tp + sq) = t(Ap) + s(Aq) = λ(tp) + μ(sq) = 0. Substituting for tp we get λ(-sq) + μ(sq) = 0, or equivalently s(μ - λ)q = 0. Since q ≠ 0 and λ ≠ μ, it must be that s = 0; but then tp = -0q = 0, and since p ≠ 0 it follows that t = 0. Hence the only way tp + sq = 0 is when t = s = 0, so p and q are linearly independent. (This property implies that eigenvectors corresponding to distinct eigenvalues are linearly independent.)
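The power property is easy to spot-check numerically. A sketch, using a sample matrix chosen for illustration (not from the slides):

```python
import numpy as np

# Sample matrix with eigenpair (3, (1, 1)).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam = 3.0
p = np.array([1.0, 1.0])

# Check the power property A^r p = lambda^r p for r = 4.
r = 4
Ar = np.linalg.matrix_power(A, r)
print(np.allclose(Ar @ p, lam**r * p))   # True
```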

Let A be an n × n matrix with eigenpair (λ, p). By definition, an eigenvector cannot be the zero vector. Let S be the set of all eigenvectors of A corresponding to eigenvalue λ, together with the zero vector. Show that S is a subspace of Rn. (S is called the eigenspace corresponding to eigenvalue λ.)

Application: Recall Markov processes from Section 1.3. The basic idea is a matrix transformation f(x) = Ax, where A is a square probability matrix; that is, each entry of A is in the interval [0, 1] and the entries in each column sum to 1. (Matrix A is called the transition matrix.) In a Markov process the matrix transformation is applied repeatedly to form successive states: s1 = As0, s2 = As1, and so on. If the sequence of vectors s0, s1, s2, ... converges to a limiting vector p, then we call p the steady state of the Markov process. If the process has a steady state p, then f(p) = p if and only if Ap = p. That is, the steady state is an eigenvector corresponding to the eigenvalue λ = 1 of the transition matrix A.

Example: Let A be the transition matrix of a Markov process. Show that the Markov process defined by A has a steady state, and determine a probability vector that represents the steady-state distribution. We seek a vector p so that Ap = p. Using matrix algebra, the equation Ap = p is equivalent to the homogeneous linear system (A - I)p = 0. Solving this system gives an eigenvector p corresponding to the eigenvalue λ = 1, with a free variable x ≠ 0. We can choose a nonzero value for x so that p is a probability vector; that is, each entry of p is in the interval [0, 1] and the entries sum to 1. Here the value x = 3/8 works, and the resulting vector gives the steady-state distribution.
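The steady-state computation can be sketched numerically. The 2 × 2 column-stochastic matrix below is a hypothetical stand-in, since the transition matrix of the example is not reproduced here:

```python
import numpy as np

# Hypothetical column-stochastic transition matrix (each column sums to 1).
A = np.array([[0.7, 0.4],
              [0.3, 0.6]])

# The steady state is an eigenvector for eigenvalue 1; take the eigenvector
# of A whose eigenvalue is (numerically) closest to 1.
vals, vecs = np.linalg.eig(A)
i = np.argmin(np.abs(vals - 1.0))
p = np.real(vecs[:, i])

# Rescale so the entries sum to 1, making p a probability vector.
p = p / p.sum()
print(p)   # steady-state distribution; here (4/7, 3/7)
```

Scaling by the sum of the entries plays the same role as the choice x = 3/8 in the example: it picks, from the infinitely many eigenvectors, the one that is a probability vector.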

Computing Eigen Information for Small Matrices. The eigen equation can be rearranged as follows: Ax = λx ⇒ Ax = λI_n x ⇒ Ax - λI_n x = 0 ⇒ (A - λI_n)x = 0. (1) The matrix equation (A - λI_n)x = 0 is a homogeneous linear system with coefficient matrix A - λI_n. Since an eigenvector x cannot be the zero vector, we seek a nontrivial solution of this system. Thus ns(A - λI_n) ≠ {0}, or equivalently rref(A - λI_n) must contain a zero row. It follows that the matrix A - λI_n must be singular, so from Chapter 3, det(A - λI_n) = 0 (or det(λI_n - A) = 0). (2) Equation (2) is called the characteristic equation of matrix A, and solving it for λ gives the eigenvalues of A. Because the determinant is a linear combination of particular products of entries of the matrix, the characteristic equation is a polynomial equation in λ of degree n. We call c(λ) = det(A - λI_n) (3) the characteristic polynomial of matrix A. The eigenvalues are the solutions of (2), or equivalently the roots of the characteristic polynomial (3). Once we have the n eigenvalues λ_1, λ_2, ..., λ_n of A, the corresponding eigenvectors are nontrivial solutions of the homogeneous linear systems (A - λ_i I_n)x = 0 for i = 1, 2, ..., n. (4)
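This procedure can be mirrored in NumPy: build the characteristic polynomial, find its roots, and check that each root pairs with a nontrivial solution of (A - λ_i I_n)x = 0. The 2 × 2 matrix below is a sample chosen for illustration:

```python
import numpy as np

# A sample 2 x 2 matrix (chosen for illustration).
A = np.array([[4.0, 2.0],
              [1.0, 3.0]])

# Coefficients of the characteristic polynomial det(lambda*I - A),
# highest degree first: here lambda^2 - 7*lambda + 10.
coeffs = np.poly(A)
print(coeffs)   # [ 1. -7. 10.]

# The eigenvalues are the roots of the characteristic polynomial.
eigenvalues = np.roots(coeffs)
print(np.sort(eigenvalues))   # approximately [2. 5.]

# For each eigenvalue, an eigenvector is a nontrivial solution of
# (A - lambda*I)x = 0; np.linalg.eig returns eigenvalues and eigenvectors.
vals, vecs = np.linalg.eig(A)
for lam, v in zip(vals, vecs.T):
    assert np.allclose(A @ v, lam * v)
```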

We summarize the computational approach for determining eigenpairs (λ, x) as a two-step procedure. Example: Find the eigenpairs of the given matrix. Step I. Find the eigenvalues by solving the characteristic equation det(A - λI_n) = 0.

The eigenvalues are the roots found in Step I. Step II. To find corresponding eigenvectors we solve (A - λ_i I_n)x = 0 for i = 1, 2.

Given that  is an eigenvalue of A, then we know that matrix A - In is singular and hence rref(A - In) will have at least one zero row. A homogeneous linear system whose coefficient matrix has rref with at least one zero row will have a solution set with at least one free variable. The free variables can be chosen to have any value as long as the resulting solution is not the zero vector. In the example for eigenvalue 1 = 3, the general solution was The free variable in this case could be any nonzero value. We chose x2 = 2 to avoid fractions, but this is not required. If we chose x2 = 1/7, then is a valid eigenvector. Since there will be at least one free variable when determining an eigenvector to correspond to an eigenvalue, there are infinitely many ways to express the entries of an eigenvector.

Property: If x is an eigenvector corresponding to eigenvalue λ of A, then so is kx for any scalar k ≠ 0. Example: Find the eigenvalues and corresponding eigenvectors of the given 3 × 3 matrix. The characteristic polynomial factors as (λ - 1)(λ - 2)(λ - 3), so the eigenvalues are 1, 2, and 3; corresponding eigenvectors are found by solving (A - λ_i I_3)x = 0 for each eigenvalue.
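A worked example of this kind can be reproduced numerically. The upper-triangular matrix below is a hypothetical stand-in with the same eigenvalues 1, 2, and 3 (the matrix from the slide is not reproduced here); for a triangular matrix the eigenvalues appear on the diagonal:

```python
import numpy as np

# Hypothetical upper-triangular matrix with eigenvalues 1, 2, 3 on its diagonal.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 3.0]])

# Characteristic polynomial (lambda - 1)(lambda - 2)(lambda - 3)
# = lambda^3 - 6*lambda^2 + 11*lambda - 6.
print(np.poly(A))   # approximately [ 1. -6. 11. -6.]

vals = np.sort(np.linalg.eig(A)[0].real)
print(vals)   # [1. 2. 3.]
```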