Numerical Analysis Lecture 17.

Chapter 4

Eigen Value Problems

Let [A] be an n x n square matrix. Suppose there exists a scalar λ and a non-zero vector x such that

[A]x = λx.

Then λ is called an eigen value of [A] and x a corresponding eigenvector.

Power Method
Jacobi’s Method

Power Method

To compute the largest eigen value and the corresponding eigenvector of the system [A]x = λx, where [A] is a real matrix, symmetric or unsymmetric, the power method is widely used in practice.

Procedure

Step 1: Choose the initial vector such that its largest element is unity.
Step 2: The normalized vector is pre-multiplied by the matrix [A].
Step 3: The resultant vector is again normalized.
Step 4: This process of iteration is continued: the new normalized vector is repeatedly pre-multiplied by the matrix [A] until the required accuracy is obtained.

We calculate the sequence

v^(k+1) = [A] v^(k),  k = 0, 1, 2, …

normalizing v^(k+1) after each multiplication so that its largest element is unity.

Now, the eigen value λ can be computed as the limit of the ratio of the corresponding components of [A]v^(k) and v^(k). That is,

λ = lim(k→∞) ([A]v^(k))_p / (v^(k))_p.

Here, the index p stands for the p-th component in the corresponding vector.
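The four steps above can be sketched in a few lines of Python. The 2 x 2 matrix below is a made-up example for illustration, not one from the lecture:

```python
# Power method sketch following the four steps above
# (the 2 x 2 test matrix is a made-up example).

def power_method(A, x0, tol=1e-10, max_iter=500):
    """Return (dominant eigen value, eigenvector normalized so its largest element is 1)."""
    n = len(A)
    x = x0[:]
    lam = 0.0
    for _ in range(max_iter):
        # Step 2: pre-multiply the normalized vector by [A].
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        # Step 3: normalize so that the largest element (in magnitude) is unity;
        # this scaling factor approaches the dominant eigen value.
        new_lam = max(y, key=abs)
        y = [yi / new_lam for yi in y]
        # Step 4: iterate until the required accuracy is obtained.
        if abs(new_lam - lam) < tol:
            return new_lam, y
        lam, x = new_lam, y
    return lam, x

A = [[4.0, 1.0],
     [2.0, 3.0]]          # eigen values 5 and 2
# Step 1: initial vector whose largest element is unity.
lam, v = power_method(A, [1.0, 1.0])
print(lam)                # → 5.0
```

Note that the scaling factor itself converges to the eigen value, so no separate component ratio needs to be formed once each iterate is normalized by its largest element.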

Sometimes, we may be interested in finding the least eigen value and the corresponding eigenvector. In that case, we proceed as follows. We note that [A]x = λx. Pre-multiplying by [A]^(-1), we get

[A]^(-1) x = (1/λ) x.

The inverse matrix [A]^(-1) has a set of eigen values which are the reciprocals of the eigen values of [A].

Thus, for finding the eigen value of the least magnitude of the matrix [A], we have to apply power method to the inverse of [A].
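A minimal sketch of this idea, again with a made-up example matrix: rather than forming [A]^(-1) explicitly, each iteration solves the linear system [A]y = x, which is equivalent to y = [A]^(-1)x but numerically cheaper and more stable:

```python
# Inverse power method sketch (illustrative matrix). The iteration converges
# to the dominant eigen value of [A]^(-1), whose reciprocal is the eigen
# value of [A] of least magnitude.
import numpy as np

def inverse_power_method(A, x0, tol=1e-12, max_iter=500):
    x = np.asarray(x0, dtype=float)
    mu = 0.0
    for _ in range(max_iter):
        y = np.linalg.solve(A, x)          # y = A^(-1) x, without inverting A
        new_mu = y[np.argmax(np.abs(y))]   # largest element (in magnitude)
        y = y / new_mu                     # normalize: largest element is 1
        if abs(new_mu - mu) < tol:
            break
        mu, x = new_mu, y
    return 1.0 / new_mu, y                 # least-magnitude eigen value of A

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])                 # eigen values 5 and 2
lam_min, v = inverse_power_method(A, [1.0, 0.0])
print(round(lam_min, 6))                   # → 2.0
```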

Jacobi’s Method

Definition: An n x n matrix [A] is said to be orthogonal if

[A][A]^T = [A]^T[A] = I, that is, [A]^T = [A]^(-1).

If [A] is an n x n real symmetric matrix, its eigen values are real, and there exists an orthogonal matrix [S] such that

D = [S]^(-1)[A][S] = [S]^T[A][S]

is a diagonal matrix whose diagonal elements are the eigen values of [A].
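This statement can be checked numerically. The snippet below uses a made-up 2 x 2 symmetric matrix and NumPy's eigh routine (which returns an orthogonal eigenvector matrix for symmetric input) to verify that S^T A S is diagonal and that S is orthogonal:

```python
# Numerical check: a real symmetric matrix is diagonalized by an
# orthogonal matrix of its eigenvectors (made-up example matrix).
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])               # real symmetric; eigen values 1 and 3

eigvals, S = np.linalg.eigh(A)           # columns of S: orthonormal eigenvectors
D = S.T @ A @ S                          # orthogonal similarity transformation

print(np.allclose(D, np.diag(eigvals)))  # → True  (D is diagonal)
print(np.allclose(S.T @ S, np.eye(2)))   # → True  (S is orthogonal)
```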

This diagonalization can be carried out by applying a series of orthogonal transformations

Let A be an n x n real symmetric matrix. Suppose a_ij is numerically the largest element amongst the off-diagonal elements of A. We construct an orthogonal matrix S1 defined by

(S1)_ii = cos θ, (S1)_ij = −sin θ, (S1)_ji = sin θ, (S1)_jj = cos θ,

where these entries are inserted in positions (i, i), (i, j), (j, i), (j, j) respectively, and elsewhere it is identical with a unit matrix. Now, we compute

D1 = S1^(-1) A S1 = S1^T A S1.

Therefore, the off-diagonal element (D1)_ij vanishes only if

a_ij (cos²θ − sin²θ) + (a_jj − a_ii) sin θ cos θ = 0.

That is, if

tan 2θ = 2 a_ij / (a_ii − a_jj).

Thus, we choose θ such that the above equation is satisfied, so that the pair of off-diagonal elements d_ij and d_ji reduces to zero. However, though each rotation creates a new pair of zeros, it also introduces non-zero contributions at formerly zero positions.

Also, the above equation gives four values of θ, but to get the least possible rotation, we choose

−π/4 ≤ θ ≤ π/4.

As a next step, the numerically largest off-diagonal element in the newly obtained rotated matrix D1 is identified and the above procedure is repeated using another orthogonal matrix S2 to get D2. That is, we obtain

D2 = S2^(-1) D1 S2 = S2^T S1^T A S1 S2.

Similarly, we perform a series of such two-dimensional rotations or orthogonal transformations. After making r transformations, we obtain

Dr = Sr^(-1) ⋯ S2^(-1) S1^(-1) A S1 S2 ⋯ Sr = (S1 S2 ⋯ Sr)^(-1) A (S1 S2 ⋯ Sr).

As r increases, Dr approaches a diagonal matrix whose diagonal elements are the eigen values of A, and the columns of S1 S2 ⋯ Sr approach the corresponding eigenvectors.
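The whole procedure can be sketched as follows. This is an illustrative implementation with a made-up 2 x 2 test matrix; accumulating the product S1 S2 ⋯ Sr to recover the eigenvectors is omitted for brevity:

```python
# Jacobi's method sketch for a real symmetric matrix
# (illustrative implementation; the test matrix is a made-up example).
import math

def jacobi_eigenvalues(A, tol=1e-12, max_rotations=100):
    """Return the eigen values of the real symmetric matrix A (list of lists)."""
    n = len(A)
    D = [row[:] for row in A]  # work on a copy
    for _ in range(max_rotations):
        # Identify the numerically largest off-diagonal element d_ij.
        i, j = max(((p, q) for p in range(n) for q in range(n) if p != q),
                   key=lambda pq: abs(D[pq[0]][pq[1]]))
        if abs(D[i][j]) < tol:
            break  # D is (numerically) diagonal
        # Rotation angle from tan 2θ = 2 d_ij / (d_ii − d_jj);
        # atan2 also handles the case d_ii = d_jj (θ = ±π/4).
        theta = 0.5 * math.atan2(2.0 * D[i][j], D[i][i] - D[j][j])
        c, s = math.cos(theta), math.sin(theta)
        # Apply the orthogonal transformation D ← S^T D S, where S is the
        # unit matrix with S_ii = c, S_ij = −s, S_ji = s, S_jj = c.
        for k in range(n):  # rows i and j
            dik, djk = D[i][k], D[j][k]
            D[i][k] = c * dik + s * djk
            D[j][k] = -s * dik + c * djk
        for k in range(n):  # columns i and j
            dki, dkj = D[k][i], D[k][j]
            D[k][i] = c * dki + s * dkj
            D[k][j] = -s * dki + c * dkj
    return sorted(D[k][k] for k in range(n))

A = [[2.0, 1.0],
     [1.0, 2.0]]              # eigen values 1 and 3
print(jacobi_eigenvalues(A))  # ≈ [1.0, 3.0]
```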

Example: Find all the eigen values and the corresponding eigenvectors of the matrix by Jacobi’s method.

Solution: The given matrix is real and symmetric. The largest off-diagonal element is found to be a13. Now, we compute θ from tan 2θ = 2 a13 / (a11 − a33).

This gives θ; thus, we construct an orthogonal matrix S1 as

The first rotation gives D1 = S1^(-1) A S1.

We observe that the elements d13 and d31 got annihilated. To make sure that the calculations are correct up to this step, we check that the sum of the diagonal elements of D1 is the same as the sum of the diagonal elements of the original matrix A (the trace is invariant under orthogonal transformations).

As a second step, the largest off-diagonal element of D1 is identified, and we compute the corresponding rotation angle.

This again gives θ; thus, we construct the second rotation matrix S2 as

At the end of the second rotation, we get D2 = S2^(-1) D1 S2.

This turned out to be a diagonal matrix, so we stop the computation. From here, we notice that the eigen values of the given matrix are 5, 1 and −1. The eigenvectors are the column vectors of S = S1 S2.

Therefore

Example: Find all the eigen values of the matrix by Jacobi’s method.

Solution: Here all the off-diagonal elements are of the same order of magnitude. Therefore, we can choose any one of them. Suppose we choose a12 as the largest element and compute θ from tan 2θ = 2 a12 / (a11 − a22).

This gives θ; then we construct an orthogonal matrix S1 such that

The first rotation gives

Now, we choose the numerically largest off-diagonal element of D1 and compute the corresponding rotation angle.

Now we construct another orthogonal matrix S2, such that

At the end of the second rotation, we obtain D2. Now, the numerically largest off-diagonal element of D2 is identified, and we compute the corresponding rotation angle.

Thus, the third orthogonal matrix S3 is

At the end of the third rotation, we get D3. To reduce D3 to a diagonal form, some more rotations are required. However, we may take 0.634, 3.386 and 1.979 as the eigen values of the given matrix.
