An introduction to iterative projection methods (Eigenvalue problems). Luiza Bondar, 23rd of November 2005, 4th seminar.

Presentation transcript:

An introduction to iterative projection methods (Eigenvalue problems). Luiza Bondar, 23rd of November 2005, 4th seminar.

Introduction (Erwin); Perturbation analysis (Nico); Direct (global) methods (Peter); Introduction to projection methods, theoretical background (Luiza); Krylov subspace methods 1 (Mark); Krylov subspace methods 2 (Willem).

Outline: Introduction; The power method; Projection methods; Subspace iteration; Summary.

Introduction. Direct methods (Schur decomposition, QR iteration, the Jacobi method, the method of Sturm sequences) compute all the eigenvalues and the corresponding eigenvectors. What if we DON'T need all the eigenvalues? Example: compute the PageRank of the documents on the World Wide Web.

Introduction. The web as a graph: pages are nodes, links are edges.

Introduction. Web graph: 1.4 billion nodes (pages), 6.6 billion edges (links). The PageRank of page i is the probability that a surfer will visit page i; the PageRank is therefore a vector of dimension N = 1.4 billion. This vector is a dominant eigenvector of a sparse 1.4 billion x 1.4 billion matrix, so it makes little sense to compute all the eigenvectors.

The power method computes the dominant eigenvalue and an associated eigenvector. Some background: consider that A has p distinct eigenvalues λ_1, ..., λ_p. An eigenvalue λ_i is called semi-simple if its algebraic multiplicity m_i equals its geometric multiplicity; P_i denotes the spectral projection onto the invariant subspace associated with λ_i.

The power method: assume that the dominant eigenvalue λ_1 is unique and semi-simple, and choose an initial vector v_0 such that P_1 v_0 ≠ 0. At each step compute w = A v_k and take v_{k+1} = w / ||w||; if the iterates have converged (YES), stop and return the current approximation, otherwise (NO) continue iterating.
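Below is a minimal Python/NumPy sketch of the iteration on this slide; the test matrix, the tolerance, and the maximum iteration count are illustrative choices, not part of the original slides.

```python
import numpy as np

def power_method(A, v0, tol=1e-10, max_iter=1000):
    """Power iteration: approximate the dominant eigenvalue of A
    and an associated (normalised) eigenvector."""
    v = v0 / np.linalg.norm(v0)
    lam = 0.0
    for _ in range(max_iter):
        w = A @ v                        # multiply by A
        v = w / np.linalg.norm(w)        # normalise the new iterate
        lam_new = v @ (A @ v)            # Rayleigh quotient estimate
        if abs(lam_new - lam) < tol:     # simple convergence test
            break
        lam = lam_new
    return lam_new, v

# Illustrative use on a small symmetric matrix (dominant eigenvalue 3 + sqrt(2)).
A = np.array([[4.0, 1.0], [1.0, 2.0]])
lam, v = power_method(A, np.array([1.0, 1.0]))
print(lam, v)
```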

The power method: decompose the initial vector as v_0 = P_1 v_0 + ... + P_p v_0; then A^k v_0 = λ_1^k ( P_1 v_0 + Σ_{i≥2} (λ_i / λ_1)^k P_i v_0 ), and each term (λ_i / λ_1)^k P_i v_0 tends to 0, with the convergence of each term governed by |λ_i / λ_1|. The power method is used by Google to compute the PageRank.

The power method: the convergence of the method is determined by the ratio |λ_2 / λ_1|, so convergence might be very slow if λ_1 and λ_2 are close to one another. If the dominant eigenvalue is multiple but semi-simple, the algorithm provides only one eigenvalue and one corresponding eigenvector. The method does not converge if the dominant eigenvalue is complex and the original matrix is real (two eigenvalues with the same modulus). IMPROVEMENT: the shifted power method. LED TO: projection methods.

The power method: the shifted power method. Example: let λ be the dominant eigenvalue of a matrix A that also has the eigenvalue -λ; the power method does not converge when applied to A, but it does converge when applied to the shifted matrix A - σI for a suitable shift σ. Other variants of the power method: the inverse power method (which iterates with A^{-1}) finds the smallest eigenvalue, and the inverse power method with shift finds the eigenvalue closest to the shift.

The power method: the inverse power method iterates with A^{-1}; the computed value then converges to the smallest eigenvalue (in modulus) and the iterates converge to an associated eigenvector. The inverse power method with shift σ iterates with (A - σI)^{-1}; the computed value then converges to the eigenvalue of A closest to σ and the iterates converge to an eigenvector associated with it.
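A hedged sketch of the shift-and-invert variant described above: instead of forming (A - σI)^{-1} explicitly it factorises the shifted matrix once and solves a linear system per step. The shift value, test matrix, and stopping test are assumptions made for the example.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def inverse_power_method(A, sigma, v0, tol=1e-10, max_iter=1000):
    """Shift-and-invert iteration: approximate the eigenvalue of A
    closest to the shift sigma and an associated eigenvector."""
    n = A.shape[0]
    lu, piv = lu_factor(A - sigma * np.eye(n))   # factorise A - sigma*I once
    v = v0 / np.linalg.norm(v0)
    lam = sigma
    for _ in range(max_iter):
        w = lu_solve((lu, piv), v)    # w = (A - sigma*I)^{-1} v via the LU factors
        v = w / np.linalg.norm(w)
        lam_new = v @ (A @ v)         # Rayleigh quotient with the original A
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam_new, v

# Illustrative use: the eigenvalue of A closest to the shift 1.5.
A = np.array([[4.0, 1.0], [1.0, 2.0]])
lam, v = inverse_power_method(A, 1.5, np.array([1.0, 0.0]))
```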

From the power method to projection methods: the power method does not converge if the dominant eigenvalue is complex and the original matrix is real (two eigenvalues with the same modulus). But after a certain number of iterations k, the subspace spanned by successive power method iterates contains approximations to the complex pair of eigenvectors. IDEA: extract these vectors by performing a projection onto that subspace.

Projection methods (introduction): find an approximate eigenvalue θ and an approximate eigenvector u such that A u ≈ θ u, with u restricted to a subspace. This restriction introduces degrees of freedom that must be matched by additional constraints; one choice is to impose orthogonality conditions on the residual A u - θ u (Galerkin), and this choice defines a projection method.

Projection methods (introduction), generalization: let K and L be subspaces with dim K = dim L = m. A projection technique seeks an approximate eigenpair (θ, u) with u in K such that the residual A u - θ u is orthogonal to L: an orthogonal projection when L = K, an oblique projection when L ≠ K. K is the right subspace and L the left subspace. A way to construct K is as a Krylov subspace, span{v, Av, A^2 v, ...} (inspired by the power method).

Projection methods (orthogonal): consider an orthonormal basis V = [v_1, ..., v_m] of K; the approximation can be written as u = V y. The Galerkin condition leads to the m x m problem B_m y = θ y with B_m = V^H A V: if θ is an eigenvalue of B_m with eigenvector y, then θ is an approximate eigenvalue of A and u = V y an approximate eigenvector of A. Arnoldi's method and the Hermitian Lanczos algorithm are orthogonal projection methods.
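As an illustration of this Rayleigh-Ritz procedure, the sketch below builds an orthonormal basis V of a Krylov subspace and solves the small projected problem V^H A V y = θ y; the choice of a Krylov subspace for K, the matrix sizes, and the random data are assumptions made for the example.

```python
import numpy as np

def rayleigh_ritz(A, V):
    """Orthogonal projection: V has orthonormal columns spanning K.
    Returns the Ritz values and Ritz vectors of A with respect to K."""
    B = V.conj().T @ A @ V              # small projected matrix V^H A V
    ritz_vals, Y = np.linalg.eig(B)     # eigenpairs of the projected problem
    return ritz_vals, V @ Y             # Ritz vectors: u = V y

# Illustrative construction of K as the Krylov subspace span{v, Av, ..., A^{m-1} v},
# orthonormalised with a QR factorisation.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
v = rng.standard_normal(50)
m = 5
cols = [v]
for _ in range(m - 1):
    cols.append(A @ cols[-1])
V, _ = np.linalg.qr(np.column_stack(cols))
ritz_vals, ritz_vecs = rayleigh_ritz(A, V)
```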

Projection methods (oblique): search for θ and u in K such that A u - θ u is orthogonal to L. The approximation can be written as u = V y, where V is a basis of K and W is a basis of L chosen such that W^H V = I (biorthogonal bases). The condition then leads to the approximate eigenvalue problem W^H A V y = θ y. The non-Hermitian Lanczos algorithm is an oblique projection method.
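A corresponding sketch of the oblique projection: with a right basis V for K and a left basis W for L, the Petrov-Galerkin condition yields the small generalized problem (W^H A V) y = θ (W^H V) y, which reduces to W^H A V y = θ y when the bases are biorthogonal. The function below accepts arbitrary bases; how V and W are constructed is left open and is not prescribed by the slides.

```python
import numpy as np
from scipy.linalg import eig

def oblique_projection(A, V, W):
    """Oblique (Petrov-Galerkin) projection with right basis V (for K)
    and left basis W (for L): solve (W^H A V) y = theta (W^H V) y."""
    vals, Y = eig(W.conj().T @ A @ V, W.conj().T @ V)
    return vals, V @ Y    # approximate eigenvalues and eigenvectors u = V y
```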

Projection methods (orthogonal): how accurate can an orthogonal projection method be? Let (λ, u) be an exact eigenpair of A and let P be the orthogonal projection onto K; the attainable accuracy is governed by ||(I - P) u||, the distance from the exact eigenvector u to the subspace K.

Projection methods (orthogonal): the Hermitian case.

Subspace iteration, a generalization of the power method: start with an initial system of m vectors X_0 = [x_1, ..., x_m] instead of only one vector (as in the power method) and compute the matrix X_k = A^k X_0. If each of the m vectors is normalised in the same way as in the power method, then each of these vectors converges to the SAME eigenvector, the one associated with the dominant eigenvalue (provided its component in that direction is nonzero). Note that X_k loses its (numerical) linear independence. IDEA: restore the linear independence by performing a QR factorisation.

Subspace iteration, the algorithm: start with an initial block X_0 with orthonormal columns; at each step compute Z = A X_k, QR factorize Z = Q R, and take X_{k+1} = Q; then test for convergence. If not converged (NO), iterate again; if converged (YES), recover the first m eigenvalues and the corresponding eigenvectors of A from the final X_k (e.g., via the projected matrix X_k^H A X_k).
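A sketch of this loop in Python; the convergence test on the change of the projected eigenvalues and the random starting block are assumptions, since the slide does not specify them.

```python
import numpy as np

def subspace_iteration(A, m, tol=1e-8, max_iter=500):
    """Subspace iteration: approximate the m dominant eigenvalues of A
    and an orthonormal basis of the associated dominant subspace."""
    n = A.shape[0]
    X, _ = np.linalg.qr(np.random.default_rng(0).standard_normal((n, m)))
    prev = np.zeros(m, dtype=complex)
    for _ in range(max_iter):
        Z = A @ X                                      # multiply the whole block by A
        X, _ = np.linalg.qr(Z)                         # QR restores linear independence
        vals = np.linalg.eigvals(X.conj().T @ A @ X)   # projected m x m problem
        vals = vals[np.argsort(-np.abs(vals))]         # sort by decreasing modulus
        if np.max(np.abs(vals - prev)) < tol:          # simple convergence test
            break
        prev = vals
    return vals, X

# Illustrative use on a random 100 x 100 matrix, asking for 4 dominant eigenvalues.
A = np.random.default_rng(1).standard_normal((100, 100))
vals, X = subspace_iteration(A, 4)
```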

Subspace iteration: the i-th column of X_k converges to a Schur vector associated with the eigenvalue λ_i, and the convergence of that column is governed by a ratio of neighbouring eigenvalue moduli, so the speed of convergence for an eigenvalue depends on how close it is to the next one. Variants of the subspace iteration method: take the dimension m of the subspace larger than n_ev, the number of eigenvalues wanted; perform "locking", i.e., as soon as an eigenvalue has converged, stop multiplying the corresponding vector by A in the subsequent iterations.

Subspace iteration, a theoretical result on the residual norm: let Q be the projection onto the subspace spanned by the eigenvectors associated with the first m eigenvalues of A, and assume that the projected starting vectors Q x_1, ..., Q x_m are linearly independent. Then for each of the first m eigenvalues of A there is a unique approximate eigenvector in the iterated subspace, and its residual norm tends to 0 as the iteration proceeds.

Summary. The power method can be used to compute the dominant (real) eigenvalue and a corresponding eigenvector. Variants of the power method can compute the smallest eigenvalue or the eigenvalue closest to a given number (shift). General projection methods approximate the eigenvectors of a matrix by vectors belonging to a subspace of approximants whose dimension is smaller than the dimension of the matrix. The subspace iteration method is a generalization of the power method that computes a given number of dominant eigenvalues and their corresponding eigenvectors.

Last-minute questions answered by Tycho van Noorden and Sorin Pop.