Chapter 5 MATRIX ALGEBRA: DETERMINANT, INVERSE, EIGENVALUES

5.1 The Determinant of a Matrix

The determinant of a matrix is a fundamental concept of linear algebra that provides existence and uniqueness results for linear systems of equations; det A = |A|.
LU factorization: if A = LU, the determinant is
|A| = |L||U|. (5.1)
Doolittle method: L is a lower triangular matrix with l_ii = 1, so |L| = 1 and
|A| = |U| = u_11 u_22 … u_nn.
Pivoting: each row interchange changes the sign of the determinant, so after m interchanges
|A| = (-1)^m u_11 u_22 … u_nn.
Gauss forward elimination with pivoting therefore yields the determinant as a by-product of the elimination.

Procedure for finding the determinant following the elimination method
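The elimination procedure can be sketched in Python (a minimal illustration of the idea, not the slide's exact flowchart): perform forward elimination with partial pivoting, count the row interchanges m, and return (-1)^m times the product of the pivots.

```python
def determinant(A):
    """Determinant via Gauss forward elimination with partial pivoting.

    Implements |A| = (-1)^m * u_11 * u_22 * ... * u_nn, where m is
    the number of row interchanges performed during elimination.
    """
    n = len(A)
    U = [row[:] for row in A]       # work on a copy
    sign = 1.0
    for k in range(n):
        # Partial pivoting: pick the row with the largest |entry| in column k.
        p = max(range(k, n), key=lambda i: abs(U[i][k]))
        if U[p][k] == 0.0:
            return 0.0              # singular matrix
        if p != k:
            U[k], U[p] = U[p], U[k]
            sign = -sign            # each interchange flips the sign
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    det = sign
    for k in range(n):
        det *= U[k][k]              # product of the pivots u_kk
    return det
```

For example, determinant([[2.0, 1.0], [1.0, 3.0]]) gives 5.0.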

The properties of the determinant:
- If A = BC, then |A| = |B||C| (A, B, C square matrices).
- |A^T| = |A|.
- If two rows (or columns) are proportional, |A| = 0.
- The determinant of a triangular matrix equals the product of its diagonal elements.
- A common factor of any row (or column) can be taken out in front of the determinant.
- Interchanging two rows (or columns) changes the sign of the determinant.

5.2 Inverse of a Matrix

Definition: A^-1 is the inverse of the square matrix A if
A A^-1 = A^-1 A = I, (5.5)
where I is the identity matrix. Denoting A^-1 = X, this reads
AX = I. (5.6)
With the LU factorization (Doolittle method) A = LU, this splits into two triangular systems:
LY = I, UX = Y. (5.7)
Here Y = {y_ij} is obtained by forward substitution, since L is lower triangular with unit diagonal elements; then each column x_j of X = {x_ij}, j = 1, 2, …, n, is obtained from UX = Y by back substitution.

Procedure for finding the inverse matrix following the LU factorization method
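The procedure can be sketched as follows (a minimal illustration: the Doolittle factorization is recomputed inline without pivoting, so it assumes a well-conditioned matrix with nonzero pivots):

```python
def lu_doolittle(A):
    """Doolittle factorization A = LU with l_ii = 1 (no pivoting)."""
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def inverse(A):
    """Inverse via LY = I, UX = Y (Eq. 5.7), column by column."""
    n = len(A)
    L, U = lu_doolittle(A)
    X = [[0.0] * n for _ in range(n)]
    for j in range(n):
        # Forward substitution: L y = e_j (j-th column of I).
        y = [0.0] * n
        for i in range(n):
            y[i] = (1.0 if i == j else 0.0) - sum(L[i][k] * y[k] for k in range(i))
        # Back substitution: U x = y gives the j-th column of X = A^-1.
        for i in range(n - 1, -1, -1):
            X[i][j] = (y[i] - sum(U[i][k] * X[k][j] for k in range(i + 1, n))) / U[i][i]
    return X
```

The key point is that A is factored once and only triangular systems are solved afterwards.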

If Ax = b, then x = A^-1 b (A an n×n matrix; b, x n-dimensional vectors).
If AX = B, then X = A^-1 B (A an n×n matrix; B, X n×m matrices) – the more common case. We can write this system as Ax_i = b_i (i = 1, 2, …, m), where the vectors x_i, b_i are the i-th columns of the matrices X, B, respectively.

Procedure for calculating X = A^-1 B by the LU factorization method
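In practice the same idea is available off the shelf: a sketch with NumPy, whose solver accepts a matrix right-hand side, factors A once (LU with partial pivoting) and reuses the factors for every column of B, avoiding the explicit formation of A^-1 (the sample matrices are arbitrary illustrations):

```python
import numpy as np

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])
B = np.array([[1.0, 0.0, 7.0],
              [0.0, 1.0, 9.0]])    # three right-hand sides, as columns

# Solves A x_i = b_i for every column b_i of B in one call.
X = np.linalg.solve(A, B)
```

Each column of X then solves the system with the matching column of B, i.e. A @ X reproduces B.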

5.3 Eigenvalues and Eigenvectors

Definition: let A be an n×n matrix. For some nonzero column vector x it may happen that, for some scalar λ,
Ax = λx. (5.9)
Then λ is an eigenvalue of A and x is an eigenvector of A associated with the eigenvalue λ; the problem of finding eigenvalues and eigenvectors is called the eigenproblem.
Eq. (5.9) can be written in the form Ax = λIx, or
(A − λI)x = 0. (5.10)
If (A − λI) is a nonsingular matrix, its inverse exists and x = (A − λI)^-1 0 = 0. Hence, to obtain a nonzero solution x ≠ 0, (A − λI) must be singular, i.e.
det(A − λI) = 0. (5.11)

Eq. (5.11) is an n-th order algebraic (characteristic) equation in λ, with n roots (real or complex). Applying a numerical root-finding method we can find the λ's, but when n is large, expanding the determinant in Eq. (5.11) is not a practical way to solve the problem, so this approach is rarely used. Here we consider the Jacobi and QR methods for finding eigenvalues.
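For a small matrix the characteristic-equation route is still workable; a sketch (the 2×2 symmetric matrix is an arbitrary illustration):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Coefficients of det(A - lambda*I) = 0 as a polynomial in lambda;
# for this A the polynomial is lambda^2 - 4*lambda + 3.
coeffs = np.poly(A)
# Its roots are the eigenvalues of A ...
lams = np.sort(np.roots(coeffs).real)
# ... and they agree with a dedicated eigenvalue routine.
ref = np.sort(np.linalg.eigvals(A).real)
```

For large n this is exactly what the text warns against: the polynomial coefficients become ill-conditioned, which is why the Jacobi and QR methods are preferred.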

Jacobi method

The Jacobi method is an iterative method for finding all eigenvalues and eigenvectors in the case when A is a symmetric matrix. Notation: diag(a_11, a_22, …, a_nn) is a diagonal matrix; e_i is the unit vector with i-th element equal to 1 and all others 0.
The properties of eigenvalues and eigenvectors:
1. If x is an eigenvector of A, then ax is also an eigenvector (a = nonzero constant).
2. If A is a diagonal matrix, then the a_ii are its eigenvalues and e_i the corresponding eigenvectors.
3. If R, Q are orthogonal matrices, then RQ is orthogonal.
4. If λ is an eigenvalue of A with eigenvector x, and R is an orthogonal matrix, then λ is an eigenvalue of R^T A R with eigenvector R^T x. The map A → R^T A R is called a similarity transformation.
5. The eigenvalues of a symmetric matrix are real numbers.

The Jacobi method uses the properties mentioned above. A is the given symmetric matrix, each R_i is an orthogonal matrix, and R_i^T A R_i is a similarity transformation. Applying a sequence of such transformations,
R_n^T … R_2^T R_1^T A R_1 R_2 … R_n ≈ D, (5.12)
where D is (approximately) diagonal. Following the 2nd property, the eigenvectors of D are the e_i and its diagonal elements are the eigenvalues. Following the 3rd property, R_1 R_2 … R_n is an orthogonal matrix; by the 4th property, D has the same eigenvalues as A. The eigenvectors of A are then x_i = (R_1 R_2 … R_n) e_i, so the matrix consisting of the x_i as columns is X = R_1 R_2 … R_n. Having found X, we have the eigenvectors x_i.

Let's consider a 2-dimensional matrix example. The orthogonal (rotation) matrix is
R = [ C  -S ]
    [ S   C ]
where C, S are notation for cos θ, sin θ, respectively. We need to choose θ so that R^T A R becomes diagonal:
if a_11 ≠ a_22, then tan 2θ = 2a_12 / (a_11 − a_22), i.e. θ = (1/2) arctan(2a_12 / (a_11 − a_22));
if a_11 = a_22, then θ = π/4.
With this θ, R^T A R is a diagonal matrix, its diagonal elements are the eigenvalues of A, and the columns of R are the eigenvectors.
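The 2×2 case can be checked directly; a small sketch (the sample matrix used below is an arbitrary symmetric one):

```python
import math

def jacobi_2x2(a11, a12, a22):
    """Diagonalize the symmetric matrix [[a11, a12], [a12, a22]]
    with a single Jacobi rotation; returns ((lam1, lam2), theta)."""
    if a11 != a22:
        theta = 0.5 * math.atan2(2.0 * a12, a11 - a22)
    else:
        theta = math.pi / 4.0
    c, s = math.cos(theta), math.sin(theta)
    # Diagonal elements of R^T A R; the off-diagonal element vanishes
    # for this choice of theta.
    lam1 = c * c * a11 + 2.0 * s * c * a12 + s * s * a22
    lam2 = s * s * a11 - 2.0 * s * c * a12 + c * c * a22
    return (lam1, lam2), theta
```

For a_11 = 2, a_12 = 1, a_22 = 2 this gives the eigenvalues 3 and 1, matching the characteristic-equation result for the same matrix.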

If A is an n×n symmetric matrix and a_ij is the off-diagonal element with the largest absolute value, then the orthogonal matrix R_k is the identity matrix except for four elements:
r_ii = r_jj = cos θ,  r_ij = −sin θ,  r_ji = sin θ, (5.15)
with θ chosen as in the 2×2 case:
tan 2θ = 2a_ij / (a_ii − a_jj)  (θ = π/4 if a_ii = a_jj). (5.16)

Then, if we calculate R_k^T A R_k, the transformed elements a*_ij satisfy a*_ij = a*_ji = 0, and only the elements in rows and columns i and j change. (5.17)
Then the process is repeated: again select the off-diagonal element with the largest absolute value and reduce it to zero. Convergence condition: the off-diagonal elements are negligibly small, e.g.
max_{i≠j} |a_ij| < ε, (5.18)
for a prescribed tolerance ε.

Jacobi method calculation procedure
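The whole procedure can be sketched as follows (a minimal illustration for a real symmetric matrix; the tolerance and iteration cap are arbitrary choices):

```python
import math

def jacobi_eigen(A, eps=1e-10, max_rot=10000):
    """Jacobi eigenvalue iteration for a real symmetric matrix A
    (list of lists). Returns (eigenvalues, X), where the columns of
    X = R_1 R_2 ... are the corresponding eigenvectors."""
    n = len(A)
    A = [row[:] for row in A]                     # work on a copy
    X = [[float(i == j) for j in range(n)] for i in range(n)]
    for _ in range(max_rot):
        # Select the off-diagonal element with the largest |value|.
        i, j = max(((p, q) for p in range(n) for q in range(n) if p != q),
                   key=lambda pq: abs(A[pq[0]][pq[1]]))
        if abs(A[i][j]) < eps:                    # convergence test (5.18)
            break
        # Rotation angle, as in the 2x2 case (5.16).
        if A[i][i] != A[j][j]:
            theta = 0.5 * math.atan2(2.0 * A[i][j], A[i][i] - A[j][j])
        else:
            theta = math.pi / 4.0
        c, s = math.cos(theta), math.sin(theta)
        # Similarity transformation A <- R^T A R, applied in place.
        for k in range(n):                        # rows i, j
            aik, ajk = A[i][k], A[j][k]
            A[i][k] = c * aik + s * ajk
            A[j][k] = -s * aik + c * ajk
        for k in range(n):                        # columns i, j
            aki, akj = A[k][i], A[k][j]
            A[k][i] = c * aki + s * akj
            A[k][j] = -s * aki + c * akj
            xki, xkj = X[k][i], X[k][j]           # accumulate X = R_1 R_2 ...
            X[k][i] = c * xki + s * xkj
            X[k][j] = -s * xki + c * xkj
    return [A[k][k] for k in range(n)], X
```

On convergence the diagonal of the transformed A holds the eigenvalues, in the order matching the columns of X.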

Jacobi method model (1)

Jacobi method model (2)

QR method

To find the eigenvalues and eigenvectors of a real matrix A, three methods are combined:
- Preprocessing: Householder transformation (reduction to tridiagonal form)
- Calculation of eigenvalues: QR method
- Calculation of eigenvectors: inverse power method
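The eigenvalue stage can be sketched with an unshifted QR iteration (a simplified illustration of the idea for a symmetric matrix; practical codes add shifts and deflation, and run it on the tridiagonal form produced by the preprocessing step):

```python
import numpy as np

def qr_eigenvalues(A, iters=200):
    """Unshifted QR iteration: factor A_k = Q_k R_k, then set
    A_{k+1} = R_k Q_k = Q_k^T A_k Q_k. Each step is a similarity
    transformation, so the eigenvalues are preserved; for a symmetric
    matrix the iterates approach a diagonal matrix."""
    Ak = np.array(A, dtype=float)
    for _ in range(iters):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q                 # similar to A_k, same eigenvalues
    return np.diag(Ak)
```

The diagonal of the final iterate approximates the eigenvalues of A.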

(1) Householder transformation

A series of orthogonal transformations A_{k+1} = P_k^T A_k P_k, for k = 1, …, n−2, starting from the initial matrix A = A_1 and applying the similarity transformation until we get a tridiagonal matrix A_{n−1}, i.e. a matrix whose nonzero elements lie only on the main diagonal and the two adjacent diagonals. Each matrix P_k is built from an n-dimensional vector u_k.

The matrix P_k is built from an n-dimensional vector u_k with u_k^T u_k = 1:
P_k = I − 2 u_k u_k^T.
P_k is a symmetric matrix and P_k^2 = I, and therefore it is also orthogonal. An orthogonal matrix of this form satisfies the following statement: if we have two column vectors x, y with x ≠ y and ||x|| = ||y||, and if we assign
u = (x − y) / ||x − y||,  P = I − 2uu^T,
then Px = y.

In the case k = 1, the vector u_1 is built from the first column of A: b_1 collects the elements of that column that are to be annihilated, and s_1 = ||b_1||. Since ||b_1|| = ||s_1 e_1||, the statement above can be applied with x = b_1 and y = s_1 e_1, giving a P_1 that maps b_1 onto s_1 e_1 and thus zeroes all the elements of the first column below the subdiagonal.
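A sketch of the full reduction (a simplified version that forms each P_k explicitly; production codes apply the reflectors implicitly, and the sign choice below is a standard trick to avoid cancellation):

```python
import numpy as np

def householder_tridiagonal(A):
    """Reduce a real symmetric matrix to tridiagonal form via the
    similarity transformations A_{k+1} = P_k^T A_k P_k."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for k in range(n - 2):
        # b: the part of column k below the diagonal, to be mapped
        # onto s * e_1 (all zeros except its first element).
        b = A[k + 1:, k].copy()
        s = np.linalg.norm(b)
        if s == 0.0:
            continue                            # nothing to annihilate
        y = np.zeros_like(b)
        y[0] = -s if b[0] >= 0 else s           # ||y|| = ||b||
        u = b - y
        u /= np.linalg.norm(u)
        P = np.eye(n)                           # P = I - 2 u u^T on the
        P[k + 1:, k + 1:] -= 2.0 * np.outer(u, u)  # trailing block
        A = P.T @ A @ P                         # P symmetric orthogonal
    return A
```

The result has the same eigenvalues as the input (each step is a similarity transformation) and is tridiagonal, ready for the QR stage.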
