MA2213 Lecture 8 Eigenvectors

Application of Eigenvectors (Vufoil 18, Lecture 7): The Fibonacci sequence satisfies f(1) = f(2) = 1 and f(n+1) = f(n) + f(n-1), which can be written in matrix form as [f(n+1); f(n)] = [1 1; 1 0] [f(n); f(n-1)].
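As a quick illustration (not from the original slides; the matrix F and the iteration count are chosen here for demonstration), the recurrence can be run in MATLAB and the eigenvalues of F recovered:

F = [1 1; 1 0];
v = [1; 1];                  % [f(2); f(1)]
for n = 2:9
    v = F*v;                 % now v = [f(n+1); f(n)]
end
disp(v(1))                   % f(10) = 55
disp(eig(F))                 % (1+sqrt(5))/2 and (1-sqrt(5))/2

The dominant eigenvalue (1+sqrt(5))/2 is the golden ratio, which is why the ratios f(n+1)/f(n) converge to it.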

Fibonacci Ratio Sequence

Another Biomathematics Application: Leonardo da Pisa, better known as Fibonacci, invented his famous sequence to compute the reproductive success of rabbits*. Similar sequences describe the frequencies, in males and females, of a sex-linked gene, for genes (2 alleles) carried on the X chromosome**. The solution has the form c1 λ1^n + c2 λ2^n, where λ1 and λ2 are the eigenvalues of the associated matrix. (*page i, **pages , in The Theory of Evolution and Dynamical Systems, J. Hofbauer and K. Sigmund)

Eigenvector Problem (pages ): Recall that if A is a square matrix, then a nonzero vector v is an eigenvector corresponding to the eigenvalue λ if Av = λv. Eigenvectors and eigenvalues arise in biomathematics, where they describe growth and population genetics. They arise in physical problems, especially those that involve vibrations, in which eigenvalues are related to vibration frequencies. They arise in the numerical solution of linear equations because they determine convergence properties.

Example (pages ): For the matrix in this Example, the eigenvalue-eigenvector pairs are (λ1, x1), ..., (λn, xn). We observe that every (column) vector v can be written as a linear combination v = c1 x1 + ... + cn xn, where the ci are scalars determined by v.

Example (pages ), continued: Therefore, since x ↦ Ax is a linear transformation and since the xi are eigenvectors, Av = c1 λ1 x1 + ... + cn λn xn. We can repeat this process to obtain A^k v = c1 λ1^k x1 + ... + cn λn^k xn. Question: What happens as k → ∞?

Example (pages ). General Principle: If a vector v can be expressed as a linear combination of eigenvectors of a matrix A, then it is very easy to compute Av. It is possible to express every vector as a linear combination of eigenvectors of an n by n matrix A iff either of the following equivalent conditions is satisfied: (i) there exists a basis consisting of eigenvectors of A; (ii) the sum of the dimensions of the eigenspaces of A equals n. Question: Does this condition hold for the matrix above? Question: What special form does this matrix have?
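The following MATLAB sketch shows the principle (the matrix A and vector v are illustrative, not from the slides): expand v in the eigenvector basis, then apply powers of the eigenvalues.

A = [2 1; 1 2];              % illustrative symmetric matrix
[X,D] = eig(A);              % columns of X are eigenvectors
v = [3; 1];
c = X \ v;                   % coefficients of v in the eigenvector basis
k = 5;
w = X * (diag(D).^k .* c);   % A^k v via the eigen-expansion
disp(norm(w - A^k*v))        % ~0: agrees with direct computation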

Example (pages ): The characteristic polynomial of the matrix is (λ − 5)^2, so 5 is the (only) eigenvalue and it has algebraic multiplicity 2. The eigenspace for eigenvalue 5 has dimension 1, so the eigenvalue 5 is said to have geometric multiplicity 1. Question: What are the algebraic and geometric multiplicities in the Example above?
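A numerical illustration (this matrix is a guess at the kind of example the slide used, not the slide's own): J = [5 1; 0 5] has characteristic polynomial (λ − 5)^2, so the algebraic multiplicity of 5 is 2, while its eigenspace is 1-dimensional.

J = [5 1; 0 5];
disp(eig(J))                 % 5, 5: algebraic multiplicity 2
disp(2 - rank(J - 5*eye(2))) % eigenspace dimension = geometric multiplicity = 1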

Characteristic Polynomials (pp. ). Example 7.22 (p. 335): The eigenvalue-eigenvector pairs of the matrix in the Example are its eigenvalues λi with corresponding eigenvectors xi. Question: What is the equation for the characteristic polynomial?

Eigenvalues of Symmetric Matrices: The real symmetric matrices that we studied have real eigenvalues, and eigenvectors corresponding to distinct eigenvalues are orthogonal. Question: What are the eigenvalues of these matrices? Question: What are the corresponding eigenvectors? Question: Compute their scalar products.

Eigenvalues of Symmetric Matrices. Theorem 1: All eigenvalues of real symmetric matrices are real valued. Proof: For a matrix B with complex (or real) entries, let conj(B) denote the matrix whose entries are the complex conjugates of the entries of B. Question: Prove that B is real (all entries are real) iff conj(B) = B. Question: Prove that conj(AB) = conj(A) conj(B). Assume that Av = λv with v ≠ 0 and A real symmetric, and observe that conj(v)^T A v = λ conj(v)^T v, while also conj(v)^T A v = (A conj(v))^T v = conj(Av)^T v = conj(λ) conj(v)^T v; since conj(v)^T v > 0, therefore λ = conj(λ) and λ is real.

Eigenvalues of Symmetric Matrices. Theorem 2: Eigenvectors of a real symmetric matrix that correspond to distinct eigenvalues are orthogonal. Proof: Assume that Au = λu and Av = μv with λ ≠ μ. Then compute λ u^T v = (Au)^T v = u^T A v = u^T (μv) = μ u^T v, and observe that (λ − μ) u^T v = 0, so u^T v = 0.

Orthogonal Matrices. Definition: A matrix Q is orthogonal if Q^T Q = I. If Q is orthogonal then 1 = det(I) = det(Q^T Q) = det(Q)^2, therefore either det(Q) = 1 or det(Q) = −1, so Q is nonsingular and has an inverse, hence Q^{-1} = Q^T, so Q Q^T = I. Examples: the identity matrix, rotations, and reflections.

Permutation Matrices. Definition: A matrix P is called a permutation matrix if there exists a function σ : {1, ..., n} → {1, ..., n} (called a permutation) that is 1-to-1 (and therefore onto) such that P_ij = 1 if i = σ(j) and P_ij = 0 otherwise. Examples: the identity matrix and any matrix obtained by permuting its columns. Question: Why is every permutation matrix orthogonal?
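A small MATLAB check (the permutation σ = [2 3 1] is chosen here for illustration): since each column of a permutation matrix is a distinct standard basis vector, P^T P = I.

sigma = [2 3 1];
P = eye(3);
P = P(:, sigma);             % column j of P is e_sigma(j)
disp(P)
disp(P'*P)                   % identity, so P is orthogonal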

Eigenvalues of Symmetric Matrices. Theorem (pages ): If A is an n by n symmetric matrix then there exists a set of n eigenvalue-eigenvector pairs (λ1, v1), ..., (λn, vn). Proof: Uses Theorems 1 and 2 and a little linear algebra. Choose eigenvectors so that v_i^T v_j = 1 if i = j and 0 otherwise, construct the matrices Q = [v1 ... vn] and D = diag(λ1, ..., λn), and observe that AQ = QD, hence A = Q D Q^T with Q orthogonal and D diagonal.

MATLAB EIG Command

>> help eig
 EIG    Eigenvalues and eigenvectors.
    E = EIG(X) is a vector containing the eigenvalues of a square
    matrix X.

    [V,D] = EIG(X) produces a diagonal matrix D of eigenvalues and a
    full matrix V whose columns are the corresponding eigenvectors so
    that X*V = V*D.

    [V,D] = EIG(X,'nobalance') performs the computation with balancing
    disabled, which sometimes gives more accurate results for certain
    problems with unusual scaling. If X is symmetric,
    EIG(X,'nobalance') is ignored since X is already balanced.

    E = EIG(A,B) is a vector containing the generalized eigenvalues of
    square matrices A and B.

    [V,D] = EIG(A,B) produces a diagonal matrix D of generalized
    eigenvalues and a full matrix V whose columns are the
    corresponding eigenvectors so that A*V = B*V*D.

    EIG(A,B,'chol') is the same as EIG(A,B) for symmetric A and
    symmetric positive definite B. It computes the generalized
    eigenvalues of A and B using the Cholesky factorization of B.

    EIG(A,B,'qz') ignores the symmetry of A and B and uses the QZ
    algorithm. In general, the two algorithms return the same result,
    however using the QZ algorithm may be more stable for certain
    problems. The flag is ignored when A and B are not symmetric.

    See also CONDEIG, EIGS.

MATLAB EIG Command (Example , page 336)

>> A = [ ; ; ]
A =
>> [U,D] = eig(A);
>> U
U =
>> D
D =
>> A*U
ans =
>> U*D
ans =
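An analogous session with a small illustrative symmetric matrix (not the matrix from the Example on page 336), demonstrating the identity X*V = V*D from the help text:

>> A = [2 1 0; 1 2 1; 0 1 2];
>> [U,D] = eig(A);
>> norm(A*U - U*D)      % ~0 up to rounding: A*U = U*D
>> norm(U'*U - eye(3))  % ~0: for symmetric A the eigenvector matrix is orthogonal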

Positive Definite Symmetric Matrices. Theorem 4: A symmetric matrix A [lec4, slide 24] is (semi) positive definite iff all of its eigenvalues are positive (nonnegative). Proof: Let Q, D be the orthogonal and diagonal matrices on the previous page that satisfy A = Q D Q^T. Then for every x, x^T A x = y^T D y, where y = Q^T x. Since Q is nonsingular, A is (semi) positive definite iff y^T D y > 0 (≥ 0) for every y ≠ 0. Clearly this condition holds iff every eigenvalue λi > 0 (≥ 0).

Singular Value Decomposition. Theorem 3: If A is an m by n matrix, then there exist orthogonal matrices U (m by m) and V (n by n) such that A = U Σ V^T, where Σ is an m by n diagonal matrix whose diagonal entries are the singular values of A (Singular Values = sqrt of the eigenvalues of A^T A). Proof outline: Choose an orthogonal V so that V^T (A^T A) V is diagonal, then show that the columns of AV can be normalized to give an orthogonal U that satisfies AV = UΣ; try to finish.

MATLAB SVD Command

>> help svd
 SVD    Singular value decomposition.
    [U,S,V] = SVD(X) produces a diagonal matrix S, of the same
    dimension as X and with nonnegative diagonal elements in
    decreasing order, and unitary matrices U and V so that
    X = U*S*V'.

    S = SVD(X) returns a vector containing the singular values.

    [U,S,V] = SVD(X,0) produces the "economy size" decomposition.
    If X is m-by-n with m > n, then only the first n columns of U
    are computed and S is n-by-n.

    See also SVDS, GSVD.

MATLAB SVD Command

>> M = [ 0 1; ]
M =
>> [U,S,V] = svd(M)
U =
S =
V =
>> U*S*V'
ans =
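An analogous session with a small illustrative matrix (not the slide's M), which also checks the relation singular values = sqrt(eig(M'*M)) from Theorem 3:

>> M = [3 0; 4 5];
>> [U,S,V] = svd(M);
>> norm(M - U*S*V')                   % ~0: M = U*S*V'
>> diag(S)'                           % singular values, in decreasing order
>> sort(sqrt(eig(M'*M)), 'descend')'  % the same values, via eig(M'*M)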

SVD Algebra

SVD Geometry

Square Roots. Theorem 5: A symmetric positive definite matrix A has a symmetric positive definite 'square root'. Proof: Let Q, D be the orthogonal and diagonal matrices above that satisfy A = Q D Q^T. Then construct the matrices D^(1/2) = diag(sqrt(λ1), ..., sqrt(λn)) and B = Q D^(1/2) Q^T, and observe that B is symmetric positive definite and satisfies B^2 = Q D^(1/2) Q^T Q D^(1/2) Q^T = Q D Q^T = A.
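A minimal sketch of this construction (the matrix A is illustrative):

A = [2 1; 1 2];              % symmetric positive definite
[Q,D] = eig(A);
B = Q*sqrt(D)*Q';            % sqrt(D) takes square roots of the eigenvalues
disp(norm(B*B - A))          % ~0: B is a symmetric positive definite square root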

Polar Decomposition. Theorem 6: Every nonsingular matrix A can be factored as A = QP, where Q is orthogonal and P is symmetric and positive definite. Proof: A^T A is symmetric and positive definite. Let P be the symmetric positive definite matrix that satisfies P^2 = A^T A (Theorem 5), and construct Q = A P^{-1}. Then Q^T Q = P^{-1} A^T A P^{-1} = P^{-1} P^2 P^{-1} = I, so Q is orthogonal, and clearly A = QP.
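A sketch of the construction in Theorem 6, using the square root from Theorem 5 (the matrix is chosen for illustration):

A = [1 2; 0 3];              % nonsingular
[V,D] = eig(A'*A);           % A'*A is symmetric positive definite
P = V*sqrt(D)*V';            % P = (A'*A)^(1/2), symmetric positive definite
Q = A/P;                     % Q = A*inv(P)
disp(norm(Q'*Q - eye(2)))    % ~0: Q is orthogonal
disp(norm(A - Q*P))          % ~0: A = Q*P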

Löwdin Orthonormalization. (1) Per-Olov Löwdin, On the Non-Orthogonality Problem Connected with the Use of Atomic Wave Functions in the Theory of Molecules and Crystals, J. Chem. Phys. 18 (1950). Proof: Start with vectors v1, ..., vn in an inner product space (assumed to be linearly independent) and compute the Gram matrix G with entries G_ij = <v_i, v_j>. Since G is symmetric and positive definite, Theorem 5 gives (and provides a method to compute) a matrix G^(-1/2) that is symmetric and positive definite and satisfies (G^(-1/2))^2 = G^(-1). Then the vectors u_i = Σ_j (G^(-1/2))_ij v_j are orthonormal.
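A sketch of Löwdin orthonormalization in R^m with the standard inner product (the starting vectors are illustrative):

W = [1 1; 1 0; 0 1];                    % columns: linearly independent vectors in R^3
G = W'*W;                               % Gram matrix, symmetric positive definite
[V,D] = eig(G);
Ginvhalf = V*diag(1./sqrt(diag(D)))*V'; % G^(-1/2) via Theorem 5
U = W*Ginvhalf;                         % Löwdin-orthonormalized vectors
disp(norm(U'*U - eye(2)))               % ~0: columns of U are orthonormal

Unlike Gram-Schmidt, this construction treats the starting vectors symmetrically: no vector is privileged as the 'first' one.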

The Power Method (pages ): Finds the eigenvalue with largest absolute value of a matrix A whose eigenvalues satisfy |λ1| > |λ2| ≥ ... ≥ |λn|.
Step 1: Compute a vector x with random entries.
Step 2: Compute y = Ax.
Step 3: Compute x = y / ||y|| (recall that ||y|| = sqrt(y^T y)).
Step 4: Compute the estimate λ = x^T A x and repeat Steps 2-4.
Then x converges to an eigenvector corresponding to λ1, and the estimates converge to λ1.
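A minimal power-method sketch following the steps above (the matrix and the iteration count are illustrative):

A = [2 1; 1 3];
x = rand(2,1);               % Step 1: random starting vector
for k = 1:50
    y = A*x;                 % Step 2
    x = y / norm(y);         % Step 3: normalize
    lambda = x'*A*x;         % Step 4: eigenvalue estimate
end
disp(lambda)                 % approximates the dominant eigenvalue (5+sqrt(5))/2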

The Inverse Power Method. Result: If v is an eigenvector of a nonsingular matrix A corresponding to eigenvalue λ, then v is an eigenvector of A^{-1} corresponding to eigenvalue λ^{-1}. Furthermore, if μ is not an eigenvalue of A, then v is an eigenvector of (A − μI)^{-1} corresponding to eigenvalue (λ − μ)^{-1}. Definition: The inverse power method is the power method applied to the matrix A^{-1}. It can find the eigenvalue-eigenvector pair if there is one eigenvalue that has smallest absolute value.

Inverse Power Method With Shifts: Computes the eigenvalue of A closest to a number μ, and a corresponding eigenvector.
Step 1: Apply 1 or more iterations of the power method using the matrix (A − μI)^{-1} to estimate an eigenvalue-eigenvector pair.
Step 2: Compute a better estimate μ' of the eigenvalue, and iterate.
Step 3: Apply 1 or more iterations of the power method using the matrix (A − μ'I)^{-1} to estimate an eigenvalue-eigenvector pair, with cubic rate of convergence!
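A sketch of the shifted iteration (the matrix and the initial shift are chosen for illustration); updating the shift from the latest eigenvalue estimate is what produces the cubic convergence noted above:

A = [2 1; 1 3];
mu = 3.5;                    % initial estimate near the sought eigenvalue
x = rand(2,1);
for k = 1:5
    x = (A - mu*eye(2)) \ x; % one power step with (A - mu*I)^(-1)
    x = x / norm(x);
    mu = x'*A*x;             % better eigenvalue estimate, used as the new shift
end
disp(mu)                     % eigenvalue of A closest to 3.5, here (5+sqrt(5))/2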

Unitary and Hermitian Matrices. Definition: The adjoint of a matrix A is the matrix A* = conj(A)^T (the conjugate transpose). Definition: A matrix U is unitary if U*U = I. Definition: A matrix A is hermitian (or self-adjoint) if A* = A. Definition: A matrix A is (semi) positive definite if x*Ax > 0 (≥ 0) for every vector x ≠ 0. Super Theorem: All previous theorems remain true for complex matrices if orthogonal is replaced by unitary, symmetric by hermitian, and the old definition of (semi) positive definite by the new one.

Homework Due Tutorial 5 (Week 11, 29 Oct – 2 Nov)
1. Do Problem 1 on page .
2. Read Convergence of the Power Method (pages ) and do Problem 16 on page .
3. Do Problem 19 on pages .
4. Estimate eigenvalue-eigenvector pairs of the matrix M using the power and inverse power methods; use 4 iterations and compute errors.
5. Compute the eigenvalue-eigenvector pairs of the orthogonal matrix O.
6. Prove that the vectors defined at the bottom of slide 29 are orthonormal by computing their inner products.

Extra Fun and Adventure: We have discussed several matrix decompositions: LU, Eigenvector (diagonalization), Polar, and Singular Value. Find out about other matrix decompositions. How are they derived / computed? What are their applications?