Finding Eigenvalues and Eigenvectors: What is really important?

Slide 2: Approaches

Find the characteristic polynomial
– Leverrier's Method
Find the largest or smallest eigenvalue
– Power Method
– Inverse Power Method
Find all the eigenvalues
– Jacobi's Method
– Householder's Method
– QR Method
– Danilevsky's Method

Slide 3: Finding the Characteristic Polynomial

Reduces to finding the coefficients of the polynomial for the matrix A. Recall
|λI - A| = λ^n + a_n λ^(n-1) + a_(n-1) λ^(n-2) + ... + a_2 λ + a_1
Leverrier's Method:
– Set B_n = A and a_n = -trace(B_n)
– For k = (n-1) down to 1, compute
  B_k = A (B_(k+1) + a_(k+1) I)
  a_k = -trace(B_k) / (n - k + 1)
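A minimal Python sketch of this recurrence (my own transcription, using NumPy and the coefficient indexing above):

```python
import numpy as np

def leverrier(A):
    """Coefficients [a_1, ..., a_n] of |lambda I - A| = lambda^n + a_n lambda^(n-1) + ... + a_1,
    via the recurrence on this slide."""
    n = A.shape[0]
    a = np.zeros(n + 1)                        # a[k] holds a_k; a[0] is unused
    B = A.copy()                               # B_n = A
    a[n] = -np.trace(B)                        # a_n = -trace(B_n)
    for k in range(n - 1, 0, -1):
        B = A @ (B + a[k + 1] * np.eye(n))     # B_k = A (B_{k+1} + a_{k+1} I)
        a[k] = -np.trace(B) / (n - k + 1)      # a_k = -trace(B_k) / (n - k + 1)
    return a[1:]

# For A = [[2, 1], [1, 2]], |lambda I - A| = lambda^2 - 4 lambda + 3, so this prints [ 3. -4.]
print(leverrier(np.array([[2.0, 1.0], [1.0, 2.0]])))
```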

Slide 4: Vectors that Span a Space and Linear Combinations of Vectors

Given a set of vectors v_1, v_2, ..., v_n, the vectors are said to span a space V if, for any vector x ∈ V, there exist constants c_1, c_2, ..., c_n so that c_1 v_1 + c_2 v_2 + ... + c_n v_n = x; x is then called a linear combination of the v_i.
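Concretely, writing x as a linear combination of the v_i means solving the linear system V c = x, where the v_i are the columns of V. A small NumPy sketch (the specific vectors are my own illustration):

```python
import numpy as np

# Express x as a linear combination of v1, v2, v3 by solving V c = x.
v1 = np.array([1.0, 1.0, 0.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = np.array([1.0, 0.0, 1.0])
x = np.array([2.0, 3.0, 1.0])

V = np.column_stack([v1, v2, v3])
c = np.linalg.solve(V, x)                                  # the coefficients c_1, c_2, c_3
print(np.allclose(c[0] * v1 + c[1] * v2 + c[2] * v3, x))   # True
```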

Slide 5: Linear Independence and a Basis

Given a set of vectors v_1, v_2, ..., v_n and constants c_1, c_2, ..., c_n, the vectors are linearly independent if the only solution to c_1 v_1 + c_2 v_2 + ... + c_n v_n = 0 (the zero vector) is c_1 = c_2 = ... = c_n = 0. A linearly independent, spanning set is called a basis.
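Linear independence is easy to test numerically: the v_i are independent exactly when the matrix with them as columns has full column rank. A sketch (illustrative, not from the slides):

```python
import numpy as np

def linearly_independent(vectors):
    """True if c1*v1 + ... + cn*vn = 0 has only the trivial solution."""
    V = np.column_stack(vectors)
    return np.linalg.matrix_rank(V) == V.shape[1]

e1, e2, e3 = np.eye(3)                           # the standard basis of R^3
print(linearly_independent([e1, e2, e3]))        # True
print(linearly_independent([e1, e2, e1 + e2]))   # False: the third is a combination of the first two
```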

Slide 6: Example 1: The Standard Basis

Consider the vectors v_1 = (1, 0, 0)^T, v_2 = (0, 1, 0)^T, and v_3 = (0, 0, 1)^T. Clearly, c_1 v_1 + c_2 v_2 + c_3 v_3 = 0 implies c_1 = c_2 = c_3 = 0. Any vector (x, y, z)^T can be written as a linear combination of v_1, v_2, and v_3 as (x, y, z)^T = x v_1 + y v_2 + z v_3. The collection {v_1, v_2, v_3} is a basis for R^3; indeed, it is the standard basis and is usually denoted with the vector names i, j, and k, respectively.

Slide 7: Another Definition and Some Notation

Assume that the eigenvalues of an n x n matrix A can be ordered such that
|λ_1| > |λ_2| ≥ |λ_3| ≥ ... ≥ |λ_(n-2)| ≥ |λ_(n-1)| > |λ_n|
Then λ_1 is the dominant eigenvalue and |λ_1| is the spectral radius of A, denoted ρ(A). The i-th eigenvector will be denoted using superscripts as x^i, subscripts being reserved for the components of x.

Slide 8: Power Methods: The Direct Method

Assume an n x n matrix A has n linearly independent eigenvectors e^1, e^2, ..., e^n, ordered by decreasing eigenvalue magnitude:
|λ_1| > |λ_2| ≥ |λ_3| ≥ ... ≥ |λ_(n-2)| ≥ |λ_(n-1)| > |λ_n|
Given any vector y^0 ≠ 0, there exist constants c_i, i = 1, ..., n, such that
y^0 = c_1 e^1 + c_2 e^2 + ... + c_n e^n

Slide 9: The Direct Method (continued)

If y^0 is not orthogonal to e^1, i.e., (y^0)^T e^1 ≠ 0:
y^1 = A y^0 = A(c_1 e^1 + c_2 e^2 + ... + c_n e^n)
    = A c_1 e^1 + A c_2 e^2 + ... + A c_n e^n
    = c_1 A e^1 + c_2 A e^2 + ... + c_n A e^n
Can you simplify the previous line?

Slide 10: The Direct Method (continued)

If y^0 is not orthogonal to e^1, i.e., (y^0)^T e^1 ≠ 0:
y^1 = A y^0 = A(c_1 e^1 + c_2 e^2 + ... + c_n e^n)
    = A c_1 e^1 + A c_2 e^2 + ... + A c_n e^n
    = c_1 A e^1 + c_2 A e^2 + ... + c_n A e^n
y^1 = c_1 λ_1 e^1 + c_2 λ_2 e^2 + ... + c_n λ_n e^n
What is y^2 = A y^1?

Slide 11: The Direct Method (continued)

y^2 = A y^1 = c_1 λ_1^2 e^1 + c_2 λ_2^2 e^2 + ... + c_n λ_n^2 e^n
and, continuing in this way, after k applications of A:
y^k = A y^(k-1) = c_1 λ_1^k e^1 + c_2 λ_2^k e^2 + ... + c_n λ_n^k e^n

Slide 12: The Direct Method (continued)

Factoring out λ_1^k:
y^k = λ_1^k [ c_1 e^1 + c_2 (λ_2/λ_1)^k e^2 + ... + c_n (λ_n/λ_1)^k e^n ]
Since |λ_i / λ_1| < 1 for i = 2, ..., n, every term but the first vanishes as k grows.

Slide 13: The Direct Method (continued)

Hence y^k ≈ λ_1^k c_1 e^1 for large k: the iterates align with the dominant eigenvector e^1, and λ_1 can be read off from successive iterates.

Slide 14: The Direct Method (continued)

Note: any nonzero multiple of an eigenvector is also an eigenvector. Why? Suppose e is an eigenvector of A, i.e., Ae = λe, and c ≠ 0 is a scalar such that x = ce. Then
Ax = A(ce) = c(Ae) = c(λe) = λ(ce) = λx

Slide 15: The Direct Method (continued)
(equations not preserved in the transcript)

Slide 16: Direct Method (continued)

Given an eigenvector e of the matrix A, we have Ae = λe and e ≠ 0, so e^T e ≠ 0 (a scalar). Thus,
e^T A e = e^T λe = λ e^T e
and, since e^T e ≠ 0,
λ = (e^T A e) / (e^T e)
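This quotient gives the eigenvalue directly from an eigenvector. A quick numerical check (example values mine):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
e = np.array([1.0, 1.0])        # eigenvector of A with eigenvalue 3
lam = (e @ A @ e) / (e @ e)     # lambda = (e^T A e) / (e^T e)
print(lam)                      # 3.0
```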

Slide 17: Direct Method (completed)
(equations not preserved in the transcript)

Slide 18: Direct Method Algorithm
(The slide's pseudocode is not preserved in the transcript; a sketch follows.)
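A standard power-iteration sketch consistent with slides 8-16: repeatedly apply A, normalize to keep the iterates bounded, and read off the eigenvalue with the Rayleigh quotient from slide 16 (the function name, tolerance, and normalization choice are mine, not the slide's):

```python
import numpy as np

def power_method(A, y0, tol=1e-10, max_iter=1000):
    """Dominant eigenpair of A by the direct (power) method: y_{k+1} = A y_k."""
    y = y0 / np.linalg.norm(y0)
    lam_old = np.inf
    for _ in range(max_iter):
        z = A @ y
        lam = y @ z                    # Rayleigh quotient y^T A y (y has unit norm)
        z_norm = np.linalg.norm(z)
        if z_norm == 0.0:
            raise ValueError("iterate vanished; choose a different y0")
        y = z / z_norm                 # normalize so the iterates do not over/underflow
        if abs(lam - lam_old) < tol:
            break
        lam_old = lam
    return lam, y

A = np.array([[4.0, 1.0], [2.0, 3.0]])            # eigenvalues 5 and 2
lam, e = power_method(A, np.array([1.0, 0.0]))
print(lam)                                        # approximately 5.0
```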

Slide 19: Jacobi's Method

Requires a symmetric matrix. May take numerous iterations to converge, and also requires repeated evaluation of the arctan function. Isn't there a better way? Yes, but we need to build some tools.

Slide 20: What Householder's Method Does

Preprocesses a matrix A to produce an upper-Hessenberg form B. B is produced by a similarity transformation, so it has the same eigenvalues as A, and the eigenvectors of A are recovered from those of B by an orthogonal transformation (slides 33-34). Typically, the eigenvalues are easier to obtain from B because its nearly triangular structure simplifies the computation.

Slide 21: Definition: Upper-Hessenberg Form

A matrix B is said to be in upper-Hessenberg form if all entries below the first sub-diagonal are zero, i.e., B_(i,j) = 0 whenever i > j + 1.

Slide 22: A Useful Matrix Construction

Assume an n x 1 vector u ≠ 0. Consider the matrix P(u) defined by
P(u) = I - 2 (u u^T) / (u^T u)
where
– I is the n x n identity matrix
– u u^T is an n x n matrix, the outer product of u with its transpose
– u^T u is a 1 x 1 matrix treated as a scalar: the inner (dot) product of u with itself

Slide 23: Properties of P(u)

P^2(u) = I
– here P^2(u) means P(u) P(u)
– can you show that P^2(u) = I?
P^(-1)(u) = P(u)
– P(u) is its own inverse
P^T(u) = P(u)
– P(u) is its own transpose (why?)
P(u) is an orthogonal matrix
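These properties are easy to check numerically. A small sketch (names mine) that builds P(u) as defined on slide 22 and verifies P^2 = I and P^T = P:

```python
import numpy as np

def householder_reflector(u):
    """P(u) = I - 2 (u u^T) / (u^T u) for a nonzero vector u."""
    u = np.asarray(u, dtype=float)
    return np.eye(len(u)) - 2.0 * np.outer(u, u) / (u @ u)

u = np.array([1.0, -2.0, 0.5])
P = householder_reflector(u)
print(np.allclose(P @ P, np.eye(3)))   # P^2 = I, so P is its own inverse
print(np.allclose(P.T, P))             # P is its own transpose
```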

Slide 24: Householder's Algorithm

Set Q = I, where I is an n x n identity matrix
For k = 1 to n-2:
– α = sgn(A_(k+1,k)) sqrt((A_(k+1,k))^2 + (A_(k+2,k))^2 + ... + (A_(n,k))^2)
– u^T = [0, 0, ..., 0, A_(k+1,k) + α, A_(k+2,k), ..., A_(n,k)]
– P = I - 2 (u u^T) / (u^T u)
– Q = Q P
– A = P A P
Set B = A
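A direct transcription of this algorithm into Python (a sketch; it inlines the reflector formula from slide 22 and follows the slide's sgn convention, where sgn(0) = +1):

```python
import numpy as np

def householder_hessenberg(A):
    """Reduce A to upper-Hessenberg form B via similarity transforms,
    returning (B, Q) with B = Q^T A Q, following the slide's algorithm."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    Q = np.eye(n)
    for k in range(n - 2):                       # the slide's k = 1 .. n-2, zero-based here
        s = 1.0 if A[k + 1, k] >= 0 else -1.0    # slide's sgn: +1 if x >= 0, else -1
        alpha = s * np.linalg.norm(A[k + 1:, k])
        if alpha == 0.0:                         # column already zero below the sub-diagonal
            continue
        u = np.zeros(n)
        u[k + 1] = A[k + 1, k] + alpha           # u = [0,...,0, A_{k+1,k}+alpha, A_{k+2,k},...,A_{n,k}]
        u[k + 2:] = A[k + 2:, k]
        P = np.eye(n) - 2.0 * np.outer(u, u) / (u @ u)
        Q = Q @ P                                # accumulate the orthogonal factor
        A = P @ A @ P                            # similarity transform: eigenvalues unchanged
    return A, Q
```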

Slides 25-31: Example
(A worked numerical example of Householder's algorithm; the matrices on these slides are not preserved in the transcript.)

Slide 32: How Does It Work?

Householder's algorithm uses a sequence of similarity transformations
B = P(u^k) A P(u^k)
to create zeros below the first sub-diagonal, where
u^k = [0, 0, ..., 0, A_(k+1,k) + α, A_(k+2,k), ..., A_(n,k)]^T
α = sgn(A_(k+1,k)) sqrt((A_(k+1,k))^2 + (A_(k+2,k))^2 + ... + (A_(n,k))^2)
By definition,
– sgn(x) = 1, if x ≥ 0
– sgn(x) = -1, if x < 0

Slide 33: How Does It Work? (continued)

The matrix Q is orthogonal:
– each matrix P is orthogonal
– Q is a product of the matrices P
– the product of orthogonal matrices is an orthogonal matrix
B = Q^T A Q, hence Q B = Q Q^T A Q = A Q, since Q Q^T = I by the orthogonality of Q.

Slide 34: How Does It Work? (continued)

If e^k is an eigenvector of B with eigenvalue λ_k, then B e^k = λ_k e^k. Since Q B = A Q,
A (Q e^k) = Q (B e^k) = Q (λ_k e^k) = λ_k (Q e^k)
Note from this:
– λ_k is an eigenvalue of A
– Q e^k is the corresponding eigenvector of A
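In code, this means eigenvectors of A are recovered by multiplying eigenvectors of B by Q. A sketch reusing the householder_hessenberg function from slide 24's example (NumPy's eig stands in for the QR method here):

```python
import numpy as np

A = np.random.default_rng(0).normal(size=(5, 5))
A = A + A.T                                  # symmetric, for well-behaved real eigenpairs

B, Q = householder_hessenberg(A)             # B = Q^T A Q
lams, E = np.linalg.eig(B)                   # eigenpairs of B
for k in range(5):
    v = Q @ E[:, k]                          # Q e^k is an eigenvector of A
    print(np.allclose(A @ v, lams[k] * v))   # True for each k
```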

Slide 35: The QR Method: Start-up

Given a matrix A, apply Householder's Algorithm to obtain a matrix B in upper-Hessenberg form. Select ε > 0 and m > 0:
– ε is an acceptable proximity to zero for sub-diagonal elements
– m is an iteration limit

Slide 36: The QR Method: Main Loop
(The slide's pseudocode is not preserved in the transcript; a sketch follows.)
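The standard (unshifted) form of the loop, consistent with the start-up slide: factor B = QR, form B := RQ, and stop once every sub-diagonal entry is below ε or m iterations pass. A sketch under those assumptions:

```python
import numpy as np

def qr_method(B, eps=1e-10, m=1000):
    """Unshifted QR iteration: factor B = QR, form B := RQ, repeat.
    Each step is a similarity transform (RQ = Q^T B Q), so the eigenvalues
    are preserved while the sub-diagonal entries decay toward zero."""
    B = np.array(B, dtype=float)
    for _ in range(m):
        Q, R = np.linalg.qr(B)
        B = R @ Q
        if np.all(np.abs(np.diag(B, k=-1)) < eps):   # all sub-diagonal entries ~ 0
            break
    return B    # real eigenvalues appear on the diagonal; complex pairs remain in 2x2 blocks
```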

Slide 37: The QR Method: Finding The λ's

Slides 38-39: Details Of The Eigenvalue Formulae
(The formulae on these slides are not preserved in the transcript.)

Slide 40: Finding Roots of Polynomials

Every n x n matrix has a characteristic polynomial, and every monic polynomial of degree n is the characteristic polynomial of some n x n matrix. Thus, finding the roots of a polynomial is equivalent to finding the eigenvalues of a matrix.

Slide 41: Example Please!?!?!?

Consider the monic polynomial of degree n
f(x) = a_1 + a_2 x + a_3 x^2 + ... + a_n x^(n-1) + x^n
and its companion matrix: the n x n matrix with ones on the first sub-diagonal, the entries -a_1, -a_2, ..., -a_n down the last column, and zeros elsewhere.

Slides 42-43: Find The Eigenvalues of the Companion Matrix
(The worked computation is not preserved in the transcript; by slides 40-41, those eigenvalues are exactly the roots of f.)
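A sketch of the whole idea, using the companion-matrix layout from slide 41 (NumPy's eigvals stands in for the QR machinery above):

```python
import numpy as np

def companion_matrix(a):
    """Companion matrix of f(x) = a[0] + a[1] x + ... + a[n-1] x^(n-1) + x^n:
    ones on the first sub-diagonal, -a down the last column, zeros elsewhere.
    Its characteristic polynomial is f, so its eigenvalues are the roots of f."""
    a = np.asarray(a, dtype=float)
    n = len(a)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)
    C[:, -1] = -a
    return C

# f(x) = 2 - 3x + x^2 = (x - 1)(x - 2); its roots are the eigenvalues of the companion matrix.
print(np.linalg.eigvals(companion_matrix([2.0, -3.0])))   # approximately [2., 1.]
```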