Chapter 9 Approximating Eigenvalues

§9.2 The Power Method

Goal: compute the dominant eigenvalue of a matrix, and the corresponding eigenvector.

-- Why on earth would I want to know that? Wait a second, what does "dominant eigenvalue" mean?
-- That is the eigenvalue with the largest magnitude. Don't you have to compute the spectral radius from time to time?

The Original Method

Assumptions: $A$ is an $n \times n$ matrix with eigenvalues satisfying $|\lambda_1| > |\lambda_2| \ge \cdots \ge |\lambda_n| \ge 0$. The eigenvalues are associated with $n$ linearly independent eigenvectors $\vec{v}_1, \dots, \vec{v}_n$.

Idea: Start from any $\vec{x}_0$ with $(\vec{x}_0, \vec{v}_1) \ne 0$, that is, $\vec{x}_0 = \sum_{j=1}^{n} \beta_j \vec{v}_j$ with $\beta_1 \ne 0$, and iterate $\vec{x}_k = A\vec{x}_{k-1}$. Then

$$\vec{x}_k = A^k \vec{x}_0 = \sum_{j=1}^{n} \beta_j \lambda_j^k \vec{v}_j = \lambda_1^k \left[ \beta_1 \vec{v}_1 + \sum_{j=2}^{n} \beta_j \left( \frac{\lambda_j}{\lambda_1} \right)^{k} \vec{v}_j \right].$$

For sufficiently large $k$, we have $\vec{x}_k \approx \lambda_1^k \beta_1 \vec{v}_1$. This is the approximation of the eigenvector of $A$ associated with $\lambda_1$, and $\lambda_1 \approx (\vec{x}_{k+1})_i / (\vec{x}_k)_i$ for any component $i$ with $(\vec{x}_k)_i \ne 0$.
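A quick worked example (added here for illustration, not from the original slides): take

$$A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}, \qquad \lambda_1 = 3,\ \vec{v}_1 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad \lambda_2 = 1,\ \vec{v}_2 = \begin{pmatrix} 1 \\ -1 \end{pmatrix}.$$

Starting from $\vec{x}_0 = (1, 0)^T = \frac{1}{2}\vec{v}_1 + \frac{1}{2}\vec{v}_2$, we get

$$\vec{x}_k = A^k \vec{x}_0 = \frac{3^k}{2}\vec{v}_1 + \frac{1}{2}\vec{v}_2,$$

so the direction of $\vec{x}_k$ approaches that of $\vec{v}_1$ with relative error shrinking like $(\lambda_2/\lambda_1)^k = 3^{-k}$, and the component-wise ratio $(\vec{x}_{k+1})_i / (\vec{x}_k)_i \to 3 = \lambda_1$.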

Normalization

Make sure that $\|\vec{u}_k\|_\infty = 1$ at each step to guarantee numerical stability. Let $\vec{u}_{k-1} = \vec{x}_{k-1} / \|\vec{x}_{k-1}\|_\infty$ and $\vec{x}_k = A\vec{u}_{k-1}$. Then $\vec{u}_k \to \vec{v}_1 / \|\vec{v}_1\|_\infty$, and the entry of $\vec{x}_k$ at the position where $\vec{u}_{k-1}$ has its unit entry tends to $\lambda_1$.

Algorithm: Power Method
To approximate the dominant eigenvalue and an associated eigenvector of the n×n matrix A, given a nonzero initial vector.
Input: dimension n; matrix a[ ][ ]; initial vector x0[ ]; tolerance TOL; maximum number of iterations Nmax.
Output: approximate eigenvalue λ and approximate eigenvector (normalized), or a message of failure.

Algorithm: Power Method (continued)

Step 1   Set k = 1;
Step 2   Find index such that | x0[ index ] | = || x0 ||∞ ;
Step 3   Set x0[ ] = x0[ ] / x0[ index ];  /* normalize x0 */
Step 4   While ( k <= Nmax ) do Steps 5-11
Step 5     x[ ] = A x0[ ];  /* compute x_k from u_{k-1} */
Step 6     λ = x[ index ];
Step 7     Find index such that | x[ index ] | = || x ||∞ ;
Step 8     If x[ index ] == 0 then Output ( "A has the eigenvalue 0"; x0[ ] ); STOP.
           /* the matrix is singular and the user should try a new x0 */
Step 9     err = || x0 − x / x[ index ] ||∞ ;  x0[ ] = x[ ] / x[ index ];  /* compute u_k */
Step 10    If ( err < TOL ) then Output ( λ; x0[ ] ); STOP.  /* successful */
Step 11    Set k = k + 1;
Step 12  Output ( "Maximum number of iterations exceeded" ); STOP.  /* unsuccessful */
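The pseudocode above translates almost line for line into C. Here is a minimal sketch (added for illustration; the fixed dimension N, the sample matrix, and the tolerance in main are my own choices, not from the slides):

#include <stdio.h>
#include <math.h>

#define N 3  /* dimension (assumed for this sketch) */

/* Power method. Returns 0 on success, -1 if A has the eigenvalue 0,
   1 if the iteration limit is exceeded. On success, *lambda holds the
   approximate dominant eigenvalue and x0 the normalized eigenvector. */
int power_method(const double a[N][N], double x0[N], double *lambda,
                 double tol, int nmax)
{
    double x[N];
    int index = 0;

    /* Steps 2-3: locate the largest entry and normalize x0 */
    for (int i = 1; i < N; i++)
        if (fabs(x0[i]) > fabs(x0[index])) index = i;
    for (int i = 0; i < N; i++) x0[i] /= x0[index];

    for (int k = 1; k <= nmax; k++) {
        /* Step 5: x = A * x0 */
        for (int i = 0; i < N; i++) {
            x[i] = 0.0;
            for (int j = 0; j < N; j++) x[i] += a[i][j] * x0[j];
        }
        /* Step 6: eigenvalue estimate at the previously-unit component */
        *lambda = x[index];
        /* Step 7: new index of the largest entry */
        index = 0;
        for (int i = 1; i < N; i++)
            if (fabs(x[i]) > fabs(x[index])) index = i;
        /* Step 8: singular matrix */
        if (x[index] == 0.0) return -1;
        /* Step 9: error estimate, then normalize into x0 (= u_k) */
        double err = 0.0;
        for (int i = 0; i < N; i++) {
            double d = fabs(x0[i] - x[i] / x[index]);
            if (d > err) err = d;
            x0[i] = x[i] / x[index];
        }
        /* Step 10: convergence test */
        if (err < tol) return 0;
    }
    return 1;  /* Step 12: maximum number of iterations exceeded */
}

int main(void)
{
    double a[N][N] = {{4, 1, 1}, {1, 3, -1}, {1, -1, 2}};  /* sample matrix */
    double x0[N] = {1, 1, 1}, lambda;

    if (power_method(a, x0, &lambda, 1e-10, 1000) == 0)
        printf("lambda ~ %f, eigenvector ~ (%f, %f, %f)\n",
               lambda, x0[0], x0[1], x0[2]);
    return 0;
}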

Note:

- The method works for multiple eigenvalues $\lambda_1 = \lambda_2 = \cdots = \lambda_r$, since $\beta_1\vec{v}_1 + \cdots + \beta_r\vec{v}_r$ is still an eigenvector associated with $\lambda_1$.
- The method fails to converge if $\lambda_1 = -\lambda_2$.
- Since we cannot guarantee $\beta_1 \ne 0$ for an arbitrary initial approximation vector $\vec{x}_0$, the result of the iteration might not be $\vec{v}_1$, but the first $\vec{v}_m$ to satisfy $\beta_m \ne 0$. The associated eigenvalue will then be $\lambda_m$.
- Aitken's $\Delta^2$ procedure can be used to speed up the convergence (pp. 563-564).

Rate of Convergence

The ratio $|\lambda_2 / \lambda_1|$ determines the rate of convergence, so make $|\lambda_2 / \lambda_1|$ as small as possible. Assume $\lambda_1 > \lambda_2 \ge \cdots \ge \lambda_n$ and $|\lambda_2| > |\lambda_n|$.

[Figure: the eigenvalues $\lambda_n, \dots, \lambda_2, \lambda_1$ on the real axis, with the shift $p = (\lambda_2 + \lambda_n)/2$ centered between $\lambda_2$ and $\lambda_n$.]

Idea: Let $B = A - pI$. Then $|\lambda I - A| = |\lambda I - (B + pI)| = |(\lambda - p)I - B|$, so $\lambda_A - p = \lambda_B$: the eigenvalues of $B$ are those of $A$ shifted by $p$. Since $\left|\frac{\lambda_2 - p}{\lambda_1 - p}\right| < \left|\frac{\lambda_2}{\lambda_1}\right|$, the iteration for finding the dominant eigenvalue of $B$ converges much faster than that of $A$.

-- How are we supposed to know where p is?

As far as the laws of mathematics refer to reality, they are not certain, and as far as they are certain, they do not refer to reality.
-- Albert Einstein (1879-1955)
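A quick numeric illustration of the shift (an added example, not from the slides): if $\lambda_1 = 10$, $\lambda_2 = 8$ and $\lambda_n = 2$, the unshifted ratio is $|\lambda_2/\lambda_1| = 0.8$. With $p = (\lambda_2 + \lambda_n)/2 = 5$, the matrix $B = A - 5I$ has eigenvalues $5, 3, -3$, so the governing ratio drops to $3/5 = 0.6$ and the power iteration on $B$ converges noticeably faster; adding $p$ back recovers $\lambda_1 = 5 + 5 = 10$.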

Inverse Power Method

If $A$ has eigenvalues $|\lambda_1| \ge |\lambda_2| \ge \cdots > |\lambda_n|$, then $A^{-1}$ has eigenvalues $\left|\frac{1}{\lambda_n}\right| > \left|\frac{1}{\lambda_{n-1}}\right| \ge \cdots \ge \left|\frac{1}{\lambda_1}\right|$, and they correspond to the same set of eigenvectors. Hence the dominant eigenvalue of $A^{-1}$ corresponds to the eigenvalue of $A$ with the smallest magnitude.

Q: How must we compute $\vec{x}_k = A^{-1}\vec{u}_{k-1}$ in every step?
A: Solve the linear system $A\vec{x}_k = \vec{u}_{k-1}$, with $A$ factorized once in advance.

Idea: If we know that an eigenvalue $\lambda_i$ of $A$ is closest to a specified number $p$, then for any $j \ne i$ we have $|\lambda_i - p| \ll |\lambda_j - p|$. Moreover, if $(A - pI)^{-1}$ exists, then the inverse power method can be used to find the dominant eigenvalue $\frac{1}{\lambda_i - p}$ of $(A - pI)^{-1}$, with faster convergence.

HW: Self-study the deflation techniques on pp. 570-574.
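A sketch of the shifted inverse power method in C (illustrative only; it uses a no-pivoting Doolittle LU factorization for brevity, whereas a robust code would pivot, and the fixed step count, sample matrix, and shift in main are my own assumptions):

#include <stdio.h>
#include <math.h>

#define N 3  /* dimension (assumed for this sketch) */

/* In-place Doolittle LU factorization, no pivoting. */
static void lu_factor(double a[N][N])
{
    for (int k = 0; k < N; k++)
        for (int i = k + 1; i < N; i++) {
            a[i][k] /= a[k][k];  /* store multiplier L[i][k] */
            for (int j = k + 1; j < N; j++)
                a[i][j] -= a[i][k] * a[k][j];
        }
}

/* Solve LUx = b using the packed factors produced by lu_factor. */
static void lu_solve(double a[N][N], const double b[N], double x[N])
{
    for (int i = 0; i < N; i++) {       /* forward: Ly = b (unit diagonal) */
        x[i] = b[i];
        for (int j = 0; j < i; j++) x[i] -= a[i][j] * x[j];
    }
    for (int i = N - 1; i >= 0; i--) {  /* backward: Ux = y */
        for (int j = i + 1; j < N; j++) x[i] -= a[i][j] * x[j];
        x[i] /= a[i][i];
    }
}

/* Shifted inverse power method: approximates the eigenvalue of A nearest
   the shift p. A - pI is factorized once; each step solves one system.
   Note: a is overwritten with the factors. */
double inverse_power(double a[N][N], double x0[N], double p, int steps)
{
    double x[N], mu = 0.0;
    int index = 0;

    for (int i = 0; i < N; i++) a[i][i] -= p;  /* form A - pI in place */
    lu_factor(a);                              /* factorize once */

    for (int i = 1; i < N; i++)
        if (fabs(x0[i]) > fabs(x0[index])) index = i;
    for (int i = 0; i < N; i++) x0[i] /= x0[index];

    for (int k = 0; k < steps; k++) {
        lu_solve(a, x0, x);   /* x = (A - pI)^{-1} u_{k-1} */
        mu = x[index];        /* estimate of 1/(lambda_i - p) */
        index = 0;
        for (int i = 1; i < N; i++)
            if (fabs(x[i]) > fabs(x[index])) index = i;
        for (int i = 0; i < N; i++) x0[i] = x[i] / x[index];
    }
    return p + 1.0 / mu;      /* eigenvalue of A nearest p */
}

int main(void)
{
    double a[N][N] = {{4, 1, 1}, {1, 3, -1}, {1, -1, 2}};  /* sample matrix */
    double x0[N] = {1, 1, 1};
    printf("eigenvalue nearest 1: %f\n", inverse_power(a, x0, 1.0, 50));
    return 0;
}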

Lab 05. Approximating Eigenvalues
Time Limit: 1 second; Points: 4

Approximate an eigenvalue and an associated eigenvector of a given n×n matrix A near a given value p, starting from a given nonzero vector $\vec{x}_0$.