Finding Eigenvalues and Eigenvectors What is really important?


1 Finding Eigenvalues and Eigenvectors What is really important?

2 7/2/2015 DRAFT Copyright, Gene A Tagliarini, PhD
Approaches
Find the characteristic polynomial – Leverrier's Method
Find the largest or smallest eigenvalue – Power Method – Inverse Power Method
Find all the eigenvalues – Jacobi's Method – Householder's Method – QR Method – Danilevsky's Method

3 Finding the Characteristic Polynomial
Reduces to finding the coefficients of the polynomial for the matrix A
Recall |λI − A| = λ^n + a_n λ^(n−1) + a_(n−1) λ^(n−2) + … + a_2 λ + a_1
Leverrier's Method
– Set B_n = A and a_n = −trace(B_n)
– For k = (n−1) down to 1 compute
  B_k = A(B_(k+1) + a_(k+1) I)
  a_k = −trace(B_k)/(n − k + 1)
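The recurrence above translates almost line for line into Python. A minimal sketch (function and helper names are mine, not from the slides):

```python
def leverrier(A):
    """Coefficients [a_n, ..., a_1] of |lambda I - A| =
    lambda^n + a_n lambda^(n-1) + ... + a_2 lambda + a_1."""
    n = len(A)

    def matmul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(n))
                 for j in range(n)] for i in range(n)]

    def trace(X):
        return sum(X[i][i] for i in range(n))

    # B_n = A and a_n = -trace(B_n)
    B = [row[:] for row in A]
    a = {n: -trace(B)}
    # For k = n-1 down to 1: B_k = A(B_(k+1) + a_(k+1) I), a_k = -trace(B_k)/(n-k+1)
    for k in range(n - 1, 0, -1):
        shifted = [[B[i][j] + (a[k + 1] if i == j else 0)
                    for j in range(n)] for i in range(n)]
        B = matmul(A, shifted)
        a[k] = -trace(B) / (n - k + 1)
    return [a[k] for k in range(n, 0, -1)]
```

For A = [[2, 0], [0, 3]], whose characteristic polynomial is λ² − 5λ + 6, this returns the coefficients [−5, 6].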

4 Vectors that Span a Space and Linear Combinations of Vectors
Given a set of vectors v_1, v_2, …, v_n
The vectors are said to span a space V if, given any vector x ∈ V, there exist constants c_1, c_2, …, c_n so that c_1 v_1 + c_2 v_2 + … + c_n v_n = x, and x is called a linear combination of the v_i

5 Linear Independence and a Basis
Given a set of vectors v_1, v_2, …, v_n and constants c_1, c_2, …, c_n
The vectors are linearly independent if the only solution to c_1 v_1 + c_2 v_2 + … + c_n v_n = 0 (the zero vector) is c_1 = c_2 = … = c_n = 0
A linearly independent, spanning set is called a basis

6 Example 1: The Standard Basis
Consider the vectors v_1 = <1, 0, 0>, v_2 = <0, 1, 0>, and v_3 = <0, 0, 1>
Clearly, c_1 v_1 + c_2 v_2 + c_3 v_3 = 0 ⇒ c_1 = c_2 = c_3 = 0
Any vector <x, y, z> can be written as a linear combination of v_1, v_2, and v_3 as <x, y, z> = x v_1 + y v_2 + z v_3
The collection {v_1, v_2, v_3} is a basis for R^3; indeed, it is the standard basis and is usually denoted with the vector names i, j, and k, respectively.

7 Another Definition and Some Notation
Assume that the eigenvalues for an n x n matrix A can be ordered such that |λ_1| > |λ_2| ≥ |λ_3| ≥ … ≥ |λ_(n−2)| ≥ |λ_(n−1)| > |λ_n|
Then λ_1 is the dominant eigenvalue and |λ_1| is the spectral radius of A, denoted ρ(A)
The i-th eigenvector will be denoted using superscripts as x^i, subscripts being reserved for the components of x

8 Power Methods: The Direct Method
Assume an n x n matrix A has n linearly independent eigenvectors e^1, e^2, …, e^n ordered by decreasing eigenvalue magnitude |λ_1| > |λ_2| ≥ |λ_3| ≥ … ≥ |λ_(n−2)| ≥ |λ_(n−1)| > |λ_n|
Given any vector y^0 ≠ 0, there exist constants c_i, i = 1, …, n, such that y^0 = c_1 e^1 + c_2 e^2 + … + c_n e^n

9 The Direct Method (continued)
If y^0 is not orthogonal to e^1, i.e., (y^0)^T e^1 ≠ 0,
y^1 = Ay^0 = A(c_1 e^1 + c_2 e^2 + … + c_n e^n)
= Ac_1 e^1 + Ac_2 e^2 + … + Ac_n e^n
= c_1 Ae^1 + c_2 Ae^2 + … + c_n Ae^n
Can you simplify the previous line?

10 The Direct Method (continued)
If y^0 is not orthogonal to e^1, i.e., (y^0)^T e^1 ≠ 0,
y^1 = Ay^0 = A(c_1 e^1 + c_2 e^2 + … + c_n e^n)
= Ac_1 e^1 + Ac_2 e^2 + … + Ac_n e^n
= c_1 Ae^1 + c_2 Ae^2 + … + c_n Ae^n
y^1 = c_1 λ_1 e^1 + c_2 λ_2 e^2 + … + c_n λ_n e^n
What is y^2 = Ay^1?

11 The Direct Method (continued)

12 The Direct Method (continued)

13 The Direct Method (continued)

14 The Direct Method (continued)
Note: any nonzero multiple of an eigenvector is also an eigenvector
Why? Suppose e is an eigenvector of A, i.e., Ae = λe, and c ≠ 0 is a scalar such that x = ce
Ax = A(ce) = c(Ae) = c(λe) = λ(ce) = λx

15 The Direct Method (continued)

16 Direct Method (continued)
Given an eigenvector e for the matrix A
We have Ae = λe and e ≠ 0, so e^T e ≠ 0 (a scalar)
Thus, e^T Ae = e^T (λe) = λ (e^T e)
So λ = (e^T Ae) / (e^T e)
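This quotient is easy to compute directly. A small sketch (the function name is mine):

```python
def rayleigh_quotient(A, e):
    """lambda = (e^T A e) / (e^T e) for an eigenvector e of A."""
    n = len(A)
    Ae = [sum(A[i][j] * e[j] for j in range(n)) for i in range(n)]
    return (sum(e[i] * Ae[i] for i in range(n)) /
            sum(e[i] * e[i] for i in range(n)))
```

For example, [1, 1] is an eigenvector of [[2, 1], [1, 2]] with eigenvalue 3, and the quotient recovers it exactly.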

17 Direct Method (completed)

18 Direct Method Algorithm
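The algorithm slide itself was an image and did not survive transcription. The following is a hedged reconstruction of the direct method as developed on slides 8 through 16: repeatedly apply A, rescale, and estimate the dominant eigenvalue with the quotient λ = (e^T Ae)/(e^T e). Function and parameter names are mine:

```python
def power_method(A, y, iters=100):
    """Direct (power) method: estimate the dominant eigenvalue of A
    and a corresponding eigenvector, starting from a vector y != 0
    that is not orthogonal to the dominant eigenvector."""
    n = len(A)
    for _ in range(iters):
        # y <- A y, rescaled so the iterates neither overflow nor vanish
        z = [sum(A[i][j] * y[j] for j in range(n)) for i in range(n)]
        m = max(abs(v) for v in z)
        y = [v / m for v in z]
    # eigenvalue estimate lambda = (e^T A e) / (e^T e), as on slide 16
    Ay = [sum(A[i][j] * y[j] for j in range(n)) for i in range(n)]
    lam = (sum(y[i] * Ay[i] for i in range(n)) /
           sum(y[i] * y[i] for i in range(n)))
    return lam, y
```

For A = [[2, 0], [0, 5]] and starting vector [1, 1], the iterates converge toward the eigenvector [0, 1] and the estimate converges to the dominant eigenvalue 5.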

19 Jacobi's Method
Requires a symmetric matrix
May take numerous iterations to converge
Also requires repeated evaluation of the arctan function
Isn't there a better way? Yes, but we need to build some tools.

20 What Householder's Method Does
Preprocesses a matrix A to produce an upper-Hessenberg form B
B is similar to A, so it has the same eigenvalues; the eigenvectors of A are recovered from those of B by an orthogonal transformation (slide 34)
Typically, the eigenvalues of B are easier to obtain because the transformation simplifies computation

21 Definition: Upper-Hessenberg Form
A matrix B is said to be in upper-Hessenberg form if all of its entries below the first sub-diagonal are zero, i.e., B_(i,j) = 0 whenever i > j + 1

22 A Useful Matrix Construction
Assume an n x 1 vector u ≠ 0
Consider the matrix P(u) defined by P(u) = I – 2(uu^T)/(u^T u)
Where
– I is the n x n identity matrix
– (uu^T) is an n x n matrix, the outer product of u with its transpose
– (u^T u) is a scalar, the inner or dot product of u with itself

23 Properties of P(u)
P^2(u) = I
– The notation here: P^2(u) = P(u) * P(u)
– Can you show that P^2(u) = I?
P^(−1)(u) = P(u)
– P(u) is its own inverse
P^T(u) = P(u)
– P(u) is its own transpose
– Why?
P(u) is an orthogonal matrix
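These properties are easy to check numerically. A small sketch (names mine) builds P(u) and verifies that P² = I and Pᵀ = P:

```python
def reflector(u):
    """P(u) = I - 2 (u u^T) / (u^T u) for a nonzero vector u."""
    n = len(u)
    utu = sum(x * x for x in u)
    return [[(1.0 if i == j else 0.0) - 2.0 * u[i] * u[j] / utu
             for j in range(n)] for i in range(n)]

P = reflector([1.0, 2.0, 2.0])
n = len(P)
P2 = [[sum(P[i][k] * P[k][j] for k in range(n)) for j in range(n)]
      for i in range(n)]
# P^2 = I (own inverse) and P^T = P (symmetric), hence P is orthogonal
assert all(abs(P2[i][j] - (1.0 if i == j else 0.0)) < 1e-12
           for i in range(n) for j in range(n))
assert all(abs(P[i][j] - P[j][i]) < 1e-12
           for i in range(n) for j in range(n))
```

Algebraically, P² = I − 4(uuᵀ)/(uᵀu) + 4 u(uᵀu)uᵀ/(uᵀu)² = I, which is the answer to the "Can you show…" prompt above.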

24 Householder's Algorithm
Set Q = I, where I is an n x n identity matrix
For k = 1 to n−2
– α = sgn(A_(k+1,k)) sqrt((A_(k+1,k))^2 + (A_(k+2,k))^2 + … + (A_(n,k))^2)
– u^T = [0, 0, …, A_(k+1,k) + α, A_(k+2,k), …, A_(n,k)]
– P = I – 2(uu^T)/(u^T u)
– Q = QP
– A = PAP
Set B = A
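The steps above translate directly into Python (a sketch using 0-based indexing instead of the slide's 1-based indexing; helper names are mine):

```python
import math

def householder(A):
    """Reduce A to upper-Hessenberg B with B = Q^T A Q (slide 24)."""
    n = len(A)
    A = [row[:] for row in A]
    Q = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

    def matmul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(n))
                 for j in range(n)] for i in range(n)]

    for k in range(n - 2):
        s = math.sqrt(sum(A[i][k] ** 2 for i in range(k + 1, n)))
        if s == 0.0:
            continue  # column already zero below the sub-diagonal
        alpha = s if A[k + 1][k] >= 0 else -s  # sgn(A_(k+1,k)) * sqrt(...)
        u = [0.0] * n
        u[k + 1] = A[k + 1][k] + alpha
        for i in range(k + 2, n):
            u[i] = A[i][k]
        utu = sum(x * x for x in u)
        P = [[(1.0 if i == j else 0.0) - 2.0 * u[i] * u[j] / utu
              for j in range(n)] for i in range(n)]
        Q = matmul(Q, P)           # Q = QP
        A = matmul(P, matmul(A, P))  # A = PAP
    return A, Q  # B and the accumulated orthogonal Q
```

Afterward every entry of B below the first sub-diagonal is zero up to rounding, and since the P's are similarity transformations, B has the same eigenvalues (and trace) as A.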

25 Example

26 Example

27 Example

28 Example

29 Example

30 Example

31 Example

32 How Does It Work?
Householder's algorithm uses a sequence of similarity transformations B = P(u^k) A P(u^k) to create zeros below the first sub-diagonal
u^k = [0, 0, …, A_(k+1,k) + α, A_(k+2,k), …, A_(n,k)]^T
α = sgn(A_(k+1,k)) sqrt((A_(k+1,k))^2 + (A_(k+2,k))^2 + … + (A_(n,k))^2)
By definition,
– sgn(x) = 1, if x ≥ 0 and
– sgn(x) = −1, if x < 0

33 How Does It Work? (continued)
The matrix Q is orthogonal
– the matrices P are orthogonal
– Q is a product of the matrices P
– the product of orthogonal matrices is an orthogonal matrix
B = Q^T A Q, hence Q B = Q Q^T A Q = A Q
– Q Q^T = I (by the orthogonality of Q)

34 How Does It Work? (continued)
If e^k is an eigenvector of B with eigenvalue λ_k, then B e^k = λ_k e^k
Since Q B = A Q,
A (Q e^k) = Q (B e^k) = Q (λ_k e^k) = λ_k (Q e^k)
Note from this:
– λ_k is an eigenvalue of A
– Q e^k is the corresponding eigenvector of A

35 The QR Method: Start-up
Given a matrix A
Apply Householder's Algorithm to obtain a matrix B in upper-Hessenberg form
Select ε > 0 and m > 0
– ε is an acceptable proximity to zero for sub-diagonal elements
– m is an iteration limit

36 The QR Method: Main Loop

37 The QR Method: Finding The λ's

38 Details Of The Eigenvalue Formulae

39 Details Of The Eigenvalue Formulae
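The main-loop and eigenvalue-formula slides above were images and did not survive transcription. As a labeled reconstruction (the basic unshifted iteration, not necessarily the slides' exact variant): factor B = QR, form B ← RQ, and repeat until the sub-diagonal entries fall below ε or the iteration limit m is reached; for a symmetric B the diagonal then holds the eigenvalues. All names below are mine:

```python
import math

def qr_factor(B):
    """B = QR by modified Gram-Schmidt (B square, nonsingular)."""
    n = len(B)
    cols = [[B[i][j] for i in range(n)] for j in range(n)]
    q = []                         # orthonormal columns of Q
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = cols[j][:]
        for i in range(j):
            R[i][j] = sum(q[i][t] * v[t] for t in range(n))
            v = [v[t] - R[i][j] * q[i][t] for t in range(n)]
        R[j][j] = math.sqrt(sum(x * x for x in v))
        q.append([x / R[j][j] for x in v])
    Q = [[q[j][i] for j in range(n)] for i in range(n)]
    return Q, R

def qr_iterate(B, eps=1e-10, m=500):
    """Repeat B <- RQ until sub-diagonal entries are below eps (or m sweeps)."""
    n = len(B)
    for _ in range(m):
        if all(abs(B[i + 1][i]) < eps for i in range(n - 1)):
            break
        Q, R = qr_factor(B)
        B = [[sum(R[i][k] * Q[k][j] for k in range(n))
              for j in range(n)] for i in range(n)]
    return B
```

Each sweep is a similarity transformation (RQ = Qᵀ(QR)Q), so the eigenvalues are preserved. For the symmetric matrix [[2, 1], [1, 2]], the iteration drives the off-diagonal entries to zero, leaving the eigenvalues 3 and 1 on the diagonal.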

40 Finding Roots of Polynomials
Every n x n matrix has a characteristic polynomial
Every monic polynomial of degree n is the characteristic polynomial of some n x n matrix (its companion matrix)
Thus, polynomial root finding is equivalent to finding eigenvalues

41 Example Please!?!?!?
Consider the monic polynomial of degree n
f(x) = a_1 + a_2 x + a_3 x^2 + … + a_n x^(n−1) + x^n
and the companion matrix
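The companion matrix on this slide was an image. One standard form (an assumption matching the coefficient ordering above; the slide may show its transpose) has 1's on the super-diagonal and the negated coefficients in the last row. Each root r of f is then an eigenvalue of C, with eigenvector [1, r, r², …, r^(n−1)]:

```python
def companion(a):
    """Companion matrix of f(x) = a[0] + a[1] x + ... + a[n-1] x^(n-1) + x^n
    (one standard form; names and layout are my assumption)."""
    n = len(a)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n - 1):
        C[i][i + 1] = 1.0       # shift: row i picks out component i+1
    for j in range(n):
        C[n - 1][j] = -a[j]     # last row carries the negated coefficients
    return C

# f(x) = 6 - 5x + x^2 = (x - 2)(x - 3); its roots are eigenvalues of C
C = companion([6.0, -5.0])
r = 2.0
v = [1.0, r]                    # [1, r, r^2, ...] for the root r
Cv = [sum(C[i][j] * v[j] for j in range(2)) for i in range(2)]
assert Cv == [r * x for x in v]  # C v = r v, so r is an eigenvalue
```

The check works because row i of Cv gives r^(i+1) = r · r^i, and the last row gives −Σ a_j r^j, which equals r^n = r · r^(n−1) whenever f(r) = 0.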

42 Find The Eigenvalues of the Companion Matrix

43 Find The Eigenvalues of the Companion Matrix

