The Power Method for Finding the Dominant Eigenvalue

Scientific Computing: The Power Method for Finding the Dominant Eigenvalue

Eigenvalues and Eigenvectors
The eigenvectors of a matrix A are the nonzero vectors x that satisfy Ax = λx, or equivalently (A − λI)x = 0. So λ is an eigenvalue iff det(A − λI) = 0.
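The worked example on this slide did not survive the transcript. As a stand-in, here is a quick numerical check of the definition in MATLAB, using the 3-by-3 matrix that appears later in the deck; the variable names are illustrative:

    % Verify Ax = lambda*x and det(A - lambda*I) = 0 for one eigenpair
    A = [2 8 10; 8 3 4; 10 4 7];     % symmetric matrix used in the later example
    [V, D] = eig(A);                 % columns of V are eigenvectors, D is diagonal
    lambda = D(1,1);  x = V(:,1);    % first eigenpair
    norm(A*x - lambda*x)             % ~0, so x satisfies Ax = lambda*x
    det(A - lambda*eye(3))           % ~0, so lambda is a root of det(A - lambda*I)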

Eigenvalues and Eigenvectors
Eigenvalues are used in the solution of engineering problems involving vibrations, elasticity, oscillating systems, etc. Eigenvalues are also important for the analysis of Markov chains in statistics. The next set of slides is from the course “Computer Applications in Engineering and Construction” at Texas A&M (Fall 2008).

Mass-Spring System
[Figure: two masses coupled by springs, shown at their equilibrium positions]

Mass-Spring System
Newton's second law for each mass, with xᵢ = Xᵢ sin(ωt), gives the homogeneous system
(2k/m₁ − ω²) X₁ − (k/m₁) X₂ = 0
−(k/m₂) X₁ + (2k/m₂ − ω²) X₂ = 0
Find the eigenvalues ω² from det[2k/m₁ − ω², −k/m₁; −k/m₂, 2k/m₂ − ω²] = 0

Polynomial Method
m₁ = m₂ = 40 kg, k = 200 N/m
Characteristic equation: det[10 − ω², −5; −5, 10 − ω²] = 0, i.e. ω⁴ − 20ω² + 75 = 0
Two eigenvalues: ω = 3.873 s⁻¹ or 2.236 s⁻¹
Period Tp = 2π/ω = 1.62 s or 2.81 s
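These numbers can be checked in MATLAB by forming the 2-by-2 system matrix directly (entries 2k/m = 10 s⁻² and k/m = 5 s⁻², as above); this snippet is a sketch, not part of the original deck:

    % Mass-spring system: m1 = m2 = 40 kg, k = 200 N/m
    k = 200;  m = 40;
    K = [2*k/m, -k/m; -k/m, 2*k/m];   % system matrix (units of s^-2)
    [V, D] = eig(K);
    omega = sqrt(diag(D))             % natural frequencies: 2.236 and 3.873 s^-1
    Tp = 2*pi ./ omega                % periods: 2.81 and 1.62 s
    V                                 % mode shapes: multiples of [1; 1] and [1; -1]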

Principal Modes of Vibration
Tp = 1.62 s: X₁ = −X₂ (masses move out of phase)
Tp = 2.81 s: X₁ = X₂ (masses move in phase)

Power Method
Power method for finding the dominant eigenvalue:
Start with an initial guess for x
Calculate w = Ax
The largest value (in magnitude) in w is the estimate of the eigenvalue
Get the next x by rescaling w (this avoids the very large entries that computing Aⁿx directly would produce)
Continue until converged
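The steps above translate almost line for line into MATLAB. A minimal sketch, using the 3-by-3 matrix from the example that follows and a fixed iteration count (no convergence test yet):

    % Bare power-method loop
    A = [2 8 10; 8 3 4; 10 4 7];
    x = ones(3, 1);                % initial guess
    for it = 1:20
        w = A * x;                 % w = Ax
        [~, k] = max(abs(w));      % locate the largest-magnitude entry
        lambda = w(k);             % eigenvalue estimate
        x = w / lambda;            % rescale so the dominant entry is 1
    end
    lambda                         % -> 19.0327, the dominant eigenvalue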

Power Method
Start with initial guess z = x₀, then repeat:
w = Az
λ = the entry of w largest in magnitude (used for rescaling)
z = w/λ
The successive estimates λₖ converge, and λₖ is the dominant eigenvalue.

Power Method
For a large number of iterations, λ should converge to the largest eigenvalue. The normalization makes the right-hand side converge to λ₁, rather than to λ₁ⁿ (whose magnitude would grow or shrink geometrically).

Example: Power Method
Consider A = [2 8 10; 8 3 4; 10 4 7]
Start with x₀ = [1 1 1]ᵀ: assume all directions are equally important, since we don’t know which one is dominant.
w = Ax₀ = [20 15 21]ᵀ
Eigenvalue estimate: 21 (the largest-magnitude entry of w)
Eigenvector estimate: x = w/21 = [0.9524 0.7143 1.0000]ᵀ

Example
Current estimate for the largest eigenvalue is 21.
Rescale w by the eigenvalue to get the new x = [0.9524 0.7143 1.0000]ᵀ.
Check convergence: is the norm of the change < tol? Not yet, so continue.

Example
Update the estimated eigenvector and repeat: w = Ax = [17.619 13.762 19.381]ᵀ.
New estimate for the largest eigenvalue is 19.381.
Rescale w by the eigenvalue to get the new x = [0.9091 0.7101 1.0000]ᵀ.
Norm of the change is still above tol, so continue.

Example
One more iteration gives eigenvalue estimate 18.9312 and x = [0.9243 0.7080 1.0000]ᵀ.
Convergence criterion: norm (or relative error) < tol.

Example: Power Method
[Figure]

Script file: Power_eig.m
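The script itself appears in the deck only as an image. A sketch consistent with the call [z,m] = Power_eig(A,100,0.001) and with the printed output on the next slide might look like the following; the exact stopping test and print format of the course file are assumptions, so the iteration count may not match the slide exactly:

    function [z, m] = Power_eig(A, maxit, tol)
    % POWER_EIG  Dominant eigenvalue m and eigenvector z of A by the power method.
    z = ones(size(A, 1), 1);                % initial guess: all components equal
    m = 0;
    for it = 1:maxit
        w = A * z;                          % multiply by A
        [~, k] = max(abs(w));               % largest-magnitude entry
        m_new = w(k);                       % eigenvalue estimate
        z = w / m_new;                      % rescaled eigenvector estimate
        fprintf('%10.4f', [it; m_new; z]);  fprintf('\n');
        err = abs(m_new - m);               % assumed convergence measure
        m = m_new;
        if err < tol
            fprintf('error = %g\n', err);
            return
        end
    end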

MATLAB Example: Power Method
» A = [2 8 10; 8 3 4; 10 4 7];
» [z,m] = Power_eig(A,100,0.001);
      it         m      z(1)    z(2)    z(3)    z(4)    z(5)
  1.0000   21.0000   0.9524  0.7143  1.0000
  2.0000   19.3810   0.9091  0.7101  1.0000
  3.0000   18.9312   0.9243  0.7080  1.0000
  4.0000   19.0753   0.9181  0.7087  1.0000
  5.0000   19.0155   0.9206  0.7084  1.0000
  6.0000   19.0396   0.9196  0.7085  1.0000
  7.0000   19.0299   0.9200  0.7085  1.0000
  8.0000   19.0338   0.9198  0.7085  1.0000
  9.0000   19.0322   0.9199  0.7085  1.0000
error = 8.3175e-004
» z                  % eigenvector
z = 0.9199  0.7085  1.0000
» m                  % eigenvalue
m = 19.0322
» x = eig(A)         % check with MATLAB's built-in function
x = -7.7013  0.6686  19.0327

MATLAB’s Methods
e = eig(A) gives the eigenvalues of A
[V, D] = eig(A): eigenvectors in V(:,k), eigenvalues Dᵢᵢ on the diagonal of the diagonal matrix D, so that AV = VD
[V, D] = eig(A, B): the more general eigenvalue problem Ax = λBx, so that AV = BVD
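For the example matrix above, the two-output form can be used to cross-check the power-method run (this snippet is illustrative, not from the deck):

    A = [2 8 10; 8 3 4; 10 4 7];
    [V, D] = eig(A);
    lambda = diag(D)       % eigenvalues: -7.7013, 0.6686, 19.0327
    v1 = V(:, 3);          % eigenvector for the dominant eigenvalue 19.0327
    [~, k] = max(abs(v1));
    v1 / v1(k)             % rescaled as in the power method: 0.9199 0.7085 1.0000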

Theorem: If A has a complete set of eigenvectors, then the power method converges to the dominant eigenvalue of the matrix A.
Proof: A has n eigenvalues λ1, λ2, λ3, …, λn with |λ1| > |λ2| > |λ3| > … > |λn|, with a corresponding basis of eigenvectors w1, w2, w3, …, wn. Let the initial vector w0 be a linear combination of the vectors w1, w2, w3, …, wn (with a1 ≠ 0):
w0 = a1w1 + a2w2 + a3w3 + … + anwn
Aw0 = A(a1w1 + a2w2 + a3w3 + … + anwn) = a1Aw1 + a2Aw2 + a3Aw3 + … + anAwn = a1λ1w1 + a2λ2w2 + a3λ3w3 + … + anλnwn
Applying A k times:
A^k w0 = a1(λ1)^k w1 + a2(λ2)^k w2 + … + an(λn)^k wn
Dividing by (λ1)^(k−1):
A^k w0 / (λ1)^(k−1) = a1(λ1)^k/(λ1)^(k−1) w1 + … + an(λn)^k/(λ1)^(k−1) wn = a1λ1w1 + a2λ2(λ2/λ1)^(k−1) w2 + … + anλn(λn/λ1)^(k−1) wn

For large values of k (as k goes to infinity), each factor (λi/λ1)^(k−1) with i ≥ 2 goes to 0, so A^k w0 / (λ1)^(k−1) ≈ a1λ1w1: the iterates line up with the dominant eigenvector w1. At each stage of the process we divide by the dominant (largest-magnitude) entry of the vector, so the stored vector always has dominant entry 1. Comparing two consecutive estimates: the vector is multiplied by A and then divided by its dominant entry, and since Az ≈ λ1z at this point, the scale factor removed is approximately λ1; dividing one normalized iterate by the next therefore gives something that is approximately 1.
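The proof also predicts the convergence speed: the terms that die out shrink by a factor of |λ2/λ1| per step. A quick check for the example matrix (a sketch using the eigenvalues eig reported above):

    lam = eig([2 8 10; 8 3 4; 10 4 7]);  % -7.7013, 0.6686, 19.0327
    [~, i] = sort(abs(lam), 'descend');
    lam = lam(i);                        % ordered by magnitude
    abs(lam(2) / lam(1))                 % ~0.405: error shrinks ~60% per iteration

This is consistent with the slow settling of the estimates in the iteration table (21 → 19.381 → 18.931 → …), where the error in the eigenvalue estimate shrinks by roughly this factor each iteration.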