Dominant Eigenvalues & The Power Method


Eigenvalues & Eigenvectors

In linear algebra we learned that a scalar λ is an eigenvalue of a square n×n matrix A if there is a non-zero vector w such that Aw = λw; we call the vector w an eigenvector of A. For an eigenvector w, multiplication by the matrix acts like multiplication by the scalar λ instead of matrix multiplication. Eigenvalues are important for many applications in mathematics, physics, engineering and other disciplines.

The Dominant Eigenvalue

An n×n matrix A has n eigenvalues (some may be repeated). By the dominant eigenvalue we mean the one that is largest in absolute value, where any complex eigenvalues are included in the comparison. For example, an upper triangular matrix A with diagonal entries 1, -4 and 2 has exactly those entries as its eigenvalues, since the eigenvalues of an upper triangular matrix are its diagonal entries; the set of eigenvalues is {1, -4, 2}. In absolute value this set is {|1|, |-4|, |2|} = {1, 4, 2}. Since -4 is the largest in absolute value, -4 is the dominant eigenvalue. The problem we want to solve: given a matrix A, can we estimate its dominant eigenvalue?
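A quick numerical check of this example (a sketch, not from the slides). The matrix below is an assumed upper triangular matrix whose diagonal 1, -4, 2 matches the text; the off-diagonal entries are arbitrary illustrations, since the slide's exact matrix is shown only as an image.

```python
import numpy as np

# Assumed upper triangular matrix: the diagonal 1, -4, 2 matches the
# example above; the off-diagonal entries are arbitrary.
A = np.array([[1.0,  2.0, 3.0],
              [0.0, -4.0, 5.0],
              [0.0,  0.0, 2.0]])

eigs = np.linalg.eigvals(A)                # eigenvalues of an upper
print(eigs)                                # triangular matrix are its
                                           # diagonal: [ 1. -4.  2.]
dominant = eigs[np.argmax(np.abs(eigs))]   # largest in absolute value,
print(dominant)                            # keeping its sign: -4.0
```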

The Power Method

The Power Method works for matrices somewhat like the fixed point method works for functions, though the iteration step is a bit different. To explain how it works we need to introduce a bit of terminology. The dominant term of a vector v is the entry that has the greatest absolute value (careful: it is the entry itself, not the absolute value of the entry). If two entries have the same absolute value, either one can be chosen for our purposes. (The slide illustrates this with four example vectors whose dominant terms are 7, -8, -10 and 6.5.)

The algorithm consists of the following steps. Start with an initial vector w0, and let the initial approximation for the dominant eigenvalue be z0, the dominant term of w0. Then iterate:

z_{k+1} = dominant term of A w_k
w_{k+1} = (1/z_{k+1}) A w_k
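A minimal Python sketch of this iteration step, assuming NumPy (the function names dominant_term and power_step are my own, not from the slides):

```python
import numpy as np

def dominant_term(v):
    """Entry of v with the greatest absolute value, sign included."""
    return v[np.argmax(np.abs(v))]

def power_step(A, w):
    """One power-method step: returns (z_{k+1}, w_{k+1})."""
    Aw = A @ w
    z = dominant_term(Aw)   # next eigenvalue estimate
    return z, Aw / z        # next (rescaled) eigenvector estimate

# The dominant term keeps its sign:
print(dominant_term(np.array([2.0, -8.0, 5.0])))   # -8.0
```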

In other words: to get the next approximation to the dominant eigenvalue, multiply the previous eigenvector estimate by the matrix A and take that product's dominant term; to get the next approximation to the eigenvector, divide the product by its dominant term. The slide shows a standard choice of initial vector w0 (a vector of all ones is a common choice). Example: apply the power method for 3 iterations to find z3 and w3 for the slide's matrix A; a worked version with an assumed matrix follows below.
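Here is that example worked in code, assuming the illustrative matrix from earlier and an all-ones w0 (the slide's actual A and w0 are shown as images and may differ):

```python
import numpy as np

A = np.array([[1.0,  2.0, 3.0],     # assumed matrix; the slide's A
              [0.0, -4.0, 5.0],     # is not reproduced in the text
              [0.0,  0.0, 2.0]])
w = np.ones(3)                      # w0: a common initial vector

for k in range(1, 4):
    Aw = A @ w
    z = Aw[np.argmax(np.abs(Aw))]   # z_k: dominant term of A w_{k-1}
    w = Aw / z                      # w_k
    print(f"z{k} = {z:.4f}, w{k} = {np.round(w, 4)}")
# For this assumed A: z1 = 6.0000, z2 = 2.3333, z3 = 2.7143; the
# estimates keep improving toward the dominant eigenvalue -4 as the
# iteration continues.
```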

Power Method Convergence

The Power Method will not converge for every matrix: for a real matrix A, the power method converges to the dominant eigenvalue provided the dominant eigenvalue is a real number. If the dominant eigenvalue is a complex number, an initial vector with complex entries would need to be used. If the dominant eigenvalue is repeated, the method will still find it.

To see why the method converges, let A be an n×n matrix with n eigenvalues λ1, λ2, λ3, ..., λn satisfying |λ1| > |λ2| > |λ3| > ... > |λn| (i.e. λ1 is the dominant eigenvalue), and take a corresponding basis of eigenvectors w1, w2, w3, ..., wn. Write the initial vector w0 as a linear combination of these basis vectors:

w0 = a1 w1 + a2 w2 + a3 w3 + ... + an wn

Applying A to each term and replacing A wi with λi wi:

A w0 = a1 A w1 + a2 A w2 + a3 A w3 + ... + an A wn
     = a1 λ1 w1 + a2 λ2 w2 + a3 λ3 w3 + ... + an λn wn

Repeating for powers of A:

A^k w0 = a1 (λ1)^k w1 + a2 (λ2)^k w2 + ... + an (λn)^k wn

Dividing by (λ1)^(k-1):

A^k w0 / (λ1)^(k-1) = a1 (λ1)^k/(λ1)^(k-1) w1 + a2 (λ2)^k/(λ1)^(k-1) w2 + ... + an (λn)^k/(λ1)^(k-1) wn
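This expansion can be checked numerically (a sketch using the same assumed matrix as before; none of this is from the slides): decompose w0 in the eigenvector basis and compare A^k w0 against the term-by-term sum.

```python
import numpy as np

A = np.array([[1.0,  2.0, 3.0],
              [0.0, -4.0, 5.0],
              [0.0,  0.0, 2.0]])
lams, V = np.linalg.eig(A)          # columns of V: eigenvectors w_i
w0 = np.ones(3)
a = np.linalg.solve(V, w0)          # coefficients: w0 = sum a_i w_i

k = 20
lhs = np.linalg.matrix_power(A, k) @ w0
rhs = V @ (a * lams**k)             # sum_i a_i (lambda_i)^k w_i
print(np.allclose(lhs, rhs))        # True: the expansion holds

lam1 = lams[np.argmax(np.abs(lams))]
print(lhs / lam1**(k - 1))          # approximately a_1 lambda_1 w_1,
                                    # the other terms having died out
```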

For large values of k (i.e. as k goes to infinity), every term after the first goes to zero, since |λi/λ1| < 1 for i ≥ 2. This leaves

A^k w0 / (λ1)^(k-1) ≈ a1 λ1 w1.

At each stage of the process we divide by the dominant term of the vector, so if we scale w1 so that its own dominant term is 1, the iterates w_k are approximately equal to w1. Comparing two consecutive estimates: A w_k ≈ λ1 w1, whose dominant term is approximately λ1, so z_{k+1} ≈ λ1; and dividing A w_k by that dominant term gives a vector whose dominant term is again approximately 1, namely w_{k+1} ≈ w1. The eigenvalue estimates z_k therefore converge to the dominant eigenvalue λ1, and the vectors w_k converge to a corresponding eigenvector.
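Running the iteration longer on the assumed matrix shows this limit in practice:

```python
import numpy as np

A = np.array([[1.0,  2.0, 3.0],
              [0.0, -4.0, 5.0],
              [0.0,  0.0, 2.0]])
w = np.ones(3)
for k in range(40):                 # plenty of steps for this example
    Aw = A @ w
    z = Aw[np.argmax(np.abs(Aw))]
    w = Aw / z

print(z)                # approximately -4, the dominant eigenvalue
print(np.round(w, 6))   # approximately [-0.4, 1, 0]: an eigenvector
                        # for -4, scaled so its dominant term is 1
```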