Numerical Analysis Lecture 16
Chapter 4
Eigenvalue Problems
Let [A] be an n × n square matrix. Suppose there exists a scalar λ and a non-zero vector X such that

A X = λ X
Then λ is an eigenvalue and X is the corresponding eigenvector of the matrix [A]. We can also write this as

(A − λI) X = 0
This represents a set of n homogeneous equations possessing a non-trivial solution, provided

det(A − λI) = 0
This determinant, on expansion, gives an n-th degree polynomial called the characteristic polynomial of [A], which has n roots. Corresponding to each root, we can in principle solve these equations and determine a vector called an eigenvector.
Finding the roots of the characteristic equation is laborious. Hence, we look for better methods suitable from the point of view of computation. Depending upon the type of matrix [A] and on what one is looking for, various numerical methods are available.
Power Method
Jacobi's Method
Note: we shall consider only real and real-symmetric matrices, and discuss the power and Jacobi methods.
Power Method
To compute the largest eigenvalue and the corresponding eigenvector of the system

A X = λ X,

where [A] is a real matrix (symmetric or unsymmetric), the power method is widely used in practice.
Procedure
Step 1: Choose an initial vector v^(0) such that its largest element is unity.
Step 2: The normalized vector is pre-multiplied by the matrix [A].
Step 3: The resultant vector is again normalized.
Step 4: This process of iteration is continued, the new normalized vector being repeatedly pre-multiplied by the matrix [A], until the required accuracy is obtained.
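The four steps above can be sketched in Python with NumPy. The function name, tolerances, and the 2 × 2 test matrix are illustrative choices, not part of the lecture:

```python
import numpy as np

def power_method(A, x0, tol=1e-10, max_iter=500):
    """Dominant eigenvalue and eigenvector by repeated normalized
    multiplication, following Steps 1-4 above."""
    x = np.asarray(x0, dtype=float)
    x = x / np.max(np.abs(x))              # Step 1: largest element is unity
    lam = 0.0
    for _ in range(max_iter):
        y = A @ x                          # Step 2: pre-multiply by [A]
        lam_new = y[np.argmax(np.abs(y))]  # normalizing factor -> eigenvalue estimate
        x = y / lam_new                    # Step 3: normalize the resultant vector
        if abs(lam_new - lam) < tol:       # Step 4: iterate until required accuracy
            break
        lam = lam_new
    return lam_new, x

# Illustrative matrix with eigenvalues 5 and 2
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
lam, v = power_method(A, [1.0, 1.0])
print(lam)  # dominant eigenvalue: 5
```

The normalizing factor itself converges to the dominant eigenvalue, so no separate ratio needs to be formed in code.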
Let λ₁, λ₂, …, λₙ be the distinct eigenvalues of an n × n matrix [A], such that |λ₁| > |λ₂| > … > |λₙ|, and suppose v₁, v₂, …, vₙ are the corresponding eigenvectors.
The power method is applicable if the above eigenvalues are real and distinct, and hence the corresponding eigenvectors are linearly independent.
Then, any vector v in the space spanned by the eigenvectors can be written as their linear combination

v = c₁v₁ + c₂v₂ + … + cₙvₙ
Pre-multiplying by A and substituting Av₁ = λ₁v₁, Av₂ = λ₂v₂, …, Avₙ = λₙvₙ, we get

A v = λ₁ [ c₁v₁ + c₂(λ₂/λ₁)v₂ + … + cₙ(λₙ/λ₁)vₙ ]
Again pre-multiplying by A and simplifying, we obtain

A²v = λ₁² [ c₁v₁ + c₂(λ₂/λ₁)²v₂ + … + cₙ(λₙ/λ₁)²vₙ ]

Similarly, we have

Aʳv = λ₁ʳ [ c₁v₁ + c₂(λ₂/λ₁)ʳv₂ + … + cₙ(λₙ/λ₁)ʳvₙ ]

and

Aʳ⁺¹v = λ₁ʳ⁺¹ [ c₁v₁ + c₂(λ₂/λ₁)ʳ⁺¹v₂ + … + cₙ(λₙ/λ₁)ʳ⁺¹vₙ ]

Since |λᵢ/λ₁| < 1 for i ≥ 2, all terms except the first die out as r grows.
Now, the eigenvalue λ₁ can be computed as the limit of the ratio of the corresponding components of Aʳ⁺¹v and Aʳv. That is,

λ₁ = lim_(r→∞) (Aʳ⁺¹v)ₚ / (Aʳv)ₚ

Here, the index p stands for the p-th component of the corresponding vector.
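The ratio limit can be observed numerically. The matrix and starting vector below are illustrative (the same 2 × 2 example is not from the lecture); after a few multiplications the component-wise ratio settles near the dominant eigenvalue:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])   # dominant eigenvalue is 5
v = np.array([1.0, 0.0])     # arbitrary starting vector with c1 != 0
for r in range(8):
    w = A @ v                # w = A^(r+1) v, v = A^r v
    ratio = w[0] / v[0]      # ratio of p-th components (p = 0 here)
    v = w
print(ratio)                 # approaches the dominant eigenvalue 5
```

In practice the vector is re-normalized at every step, as in the procedure above, to avoid overflow for large r.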
Sometimes, we may be interested in finding the least eigenvalue and the corresponding eigenvector. In that case, we proceed as follows. We note that

A X = λ X

Pre-multiplying by A⁻¹, we get

A⁻¹ A X = λ A⁻¹ X
which can be rewritten as

A⁻¹ X = (1/λ) X,

which shows that the inverse matrix has a set of eigenvalues that are the reciprocals of the eigenvalues of [A].
Thus, to find the eigenvalue of least magnitude of the matrix [A], we apply the power method to the inverse of [A].
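This inverse power idea can be sketched as follows; the helper name and test matrix are illustrative, and forming the explicit inverse is done only for clarity (a production code would solve A y = x at each step instead):

```python
import numpy as np

def smallest_eigenvalue(A, x0, tol=1e-10, max_iter=500):
    """Least-magnitude eigenvalue of A: apply the power method to
    A^{-1}, then take the reciprocal of its dominant eigenvalue."""
    A_inv = np.linalg.inv(A)   # illustration only; prefer np.linalg.solve per step
    x = np.asarray(x0, dtype=float)
    x = x / np.max(np.abs(x))
    mu = 0.0
    for _ in range(max_iter):
        y = A_inv @ x
        mu_new = y[np.argmax(np.abs(y))]   # dominant eigenvalue of A^{-1}
        x = y / mu_new
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return 1.0 / mu_new, x                 # reciprocal gives eigenvalue of A

# Illustrative matrix with eigenvalues 5 and 2
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
lam, v = smallest_eigenvalue(A, [1.0, 0.0])
print(lam)  # least-magnitude eigenvalue: 2
```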
Jacobi’s Method
Definition: An n × n matrix [A] is said to be orthogonal if

AᵀA = I, that is, Aᵀ = A⁻¹
Jacobi's method is highly recommended for computing all the eigenvalues and the corresponding eigenvectors of a real symmetric matrix. It is based on an important property from matrix theory, which states
that, if [A] is an n × n real symmetric matrix, its eigenvalues are real, and there exists an orthogonal matrix [S] such that

D = S⁻¹ A S

is a diagonal matrix.
This diagonalization can be carried out by applying a series of orthogonal transformations S₁, S₂, S₃, …
Let A be an n × n real symmetric matrix. Suppose a_ij is numerically the largest among the off-diagonal elements of A. We construct an orthogonal matrix S₁ in which

s_ii = cos θ, s_ij = −sin θ, s_ji = sin θ, s_jj = cos θ
While each of the remaining off-diagonal elements is zero, the remaining diagonal elements are taken to be unity. Thus, we construct S₁ as follows:
where cos θ, −sin θ, sin θ and cos θ are inserted in positions (i, i), (i, j), (j, i) and (j, j) respectively, and elsewhere it is identical with the unit matrix. Now, we compute

D₁ = S₁⁻¹ A S₁
Since S₁ is an orthogonal matrix, S₁⁻¹ = S₁ᵀ, so that

D₁ = S₁ᵀ A S₁

After the transformation, the elements at positions (i, j) and (j, i) are annihilated, that is, d_ij and d_ji reduce to zero, which is seen as follows:

d_ij = (a_jj − a_ii) sin θ cos θ + a_ij (cos²θ − sin²θ)
Therefore, d_ij = 0 only if

(a_jj − a_ii) sin θ cos θ + a_ij (cos²θ − sin²θ) = 0,

that is, if

tan 2θ = 2 a_ij / (a_ii − a_jj)
Thus, we choose θ such that the above equation is satisfied, whereby the pair of off-diagonal elements d_ij and d_ji reduces to zero. However, though each rotation creates a new pair of zeros, it also introduces non-zero contributions at formerly zero positions.
Also, the above equation gives four values of θ, but to get the least possible rotation, we choose

−π/4 ≤ θ ≤ π/4
As a next step, the numerically largest off-diagonal element in the newly obtained rotated matrix D₁ is identified and the above procedure is repeated using another orthogonal matrix S₂ to get D₂. That is, we obtain

D₂ = S₂⁻¹ D₁ S₂ = S₂⁻¹ S₁⁻¹ A S₁ S₂
Similarly, we perform a series of such two-dimensional rotations or orthogonal transformations. After making r transformations, we obtain

Dᵣ = Sᵣ⁻¹ … S₂⁻¹ S₁⁻¹ A S₁ S₂ … Sᵣ
Now, as r → ∞, Dᵣ approaches a diagonal matrix with the eigenvalues on the main diagonal. The corresponding eigenvectors are the columns of S = S₁ S₂ … Sᵣ. It is estimated that the minimum number of rotations required to transform the given n × n real symmetric matrix [A] into a diagonal form is n(n − 1)/2.
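The rotate-and-repeat procedure can be sketched as below. The function name, tolerance, and the 2 × 2 test matrix are illustrative; `arctan2` is used so the rotation angle automatically satisfies |θ| ≤ π/4 even when a_ii = a_jj:

```python
import numpy as np

def jacobi_eigen(A, tol=1e-12, max_rotations=10000):
    """All eigenvalues/eigenvectors of a real symmetric A by Jacobi
    rotations: repeatedly annihilate the largest off-diagonal element."""
    D = np.array(A, dtype=float)
    n = D.shape[0]
    S = np.eye(n)                              # accumulates S1 S2 ... Sr
    for _ in range(max_rotations):
        off = np.abs(D - np.diag(np.diag(D)))  # off-diagonal magnitudes
        i, j = np.unravel_index(np.argmax(off), off.shape)
        if off[i, j] < tol:
            break                              # effectively diagonal
        # angle from tan 2θ = 2 a_ij / (a_ii − a_jj)
        theta = 0.5 * np.arctan2(2.0 * D[i, j], D[i, i] - D[j, j])
        S1 = np.eye(n)                         # plane rotation in the (i, j) plane
        S1[i, i] = S1[j, j] = np.cos(theta)
        S1[i, j] = -np.sin(theta)
        S1[j, i] = np.sin(theta)
        D = S1.T @ D @ S1                      # S1 orthogonal: S1^{-1} = S1^T
        S = S @ S1
    return np.diag(D), S                       # eigenvalues; eigenvectors as columns

# Illustrative symmetric matrix with eigenvalues 1 and 3
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
vals, vecs = jacobi_eigen(A)
print(np.sort(vals))   # eigenvalues 1 and 3
```

For this 2 × 2 example a single rotation diagonalizes A exactly, consistent with the n(n − 1)/2 estimate above.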