Eigen Decomposition and Singular Value Decomposition


Eigen Decomposition and Singular Value Decomposition
Based on the slides by Mani Thomas, modified and extended by Longin Jan Latecki

Introduction
Eigenvalue decomposition
Spectral decomposition theorem
Physical interpretation of eigenvalues/eigenvectors
Singular Value Decomposition
Importance of SVD: matrix inversion, solution of a linear system of equations, solution of a homogeneous system of equations
SVD applications

A(x) = (Ax) = (x) = (x) What are eigenvalues? Given a matrix, A, x is the eigenvector and  is the corresponding eigenvalue if Ax = x A must be square and the determinant of A -  I must be equal to zero Ax - x = 0 ! (A - I) x = 0 Trivial solution is if x = 0 The non trivial solution occurs when det(A - I) = 0 Are eigenvectors are unique? If x is an eigenvector, then x is also an eigenvector and  is an eigenvalue A(x) = (Ax) = (x) = (x)

Calculating the Eigenvectors/values
Expand det(A − λI) = 0 for a 2 × 2 matrix A = [a b; c d]: (a − λ)(d − λ) − bc = λ² − (a + d)λ + (ad − bc) = 0.
For a 2 × 2 matrix this is a simple quadratic equation with two solutions (possibly complex).
This "characteristic equation" is solved for λ; substituting each λ back into (A − λI)x = 0 then gives the corresponding eigenvector x.

Eigenvalue example
Consider a 2 × 2 matrix A whose eigenvalues are λ = 0 and λ = 5. The corresponding eigenvectors can be computed from (A − λI)x = 0:
For λ = 0, one possible solution is x = (2, −1).
For λ = 5, one possible solution is x = (1, 2).
For more information: demos in Linear Algebra by G. Strang, http://web.mit.edu/18.06/www/
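The matrix itself appeared only as an image in the original slide; a symmetric matrix consistent with the stated eigenpairs is A = [[1, 2], [2, 4]], used below purely as an assumed reconstruction:

import numpy as np

# Assumed matrix consistent with the eigenpairs quoted above
# (eigenvalue 0 with eigenvector (2, -1), eigenvalue 5 with eigenvector (1, 2)).
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

eigvals, eigvecs = np.linalg.eig(A)
print(np.round(eigvals, 6))      # eigenvalues 0 and 5 (order may vary)
print(np.round(eigvecs, 3))      # columns are normalized eigenvectors

# Verify the specific solutions quoted on the slide.
assert np.allclose(A @ np.array([2.0, -1.0]), 0.0 * np.array([2.0, -1.0]))
assert np.allclose(A @ np.array([1.0, 2.0]), 5.0 * np.array([1.0, 2.0]))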

Physical interpretation
Consider a covariance matrix A, i.e., A = (1/n) S Sᵀ for some data matrix S.
Its eigenvectors define the error ellipse: the major axis lies along the eigenvector with the larger eigenvalue, and the minor axis along the eigenvector with the smaller eigenvalue.

Physical interpretation
[Figure: data scattered over original variables A and B, with the orthogonal axes PC 1 and PC 2 overlaid.]
The principal components are the orthogonal directions of greatest variance in the data; projections along PC 1 (the first principal component) discriminate the data most along any single axis.

Physical interpretation
The first principal component is the direction of greatest variability (variance) in the data.
The second is the next orthogonal (uncorrelated) direction of greatest variability: first remove all the variability along the first component, then find the next direction of greatest variability, and so on.
Thus the eigenvectors give the directions of data variance, in decreasing order of their eigenvalues.
For more information: see Gram-Schmidt orthogonalization in G. Strang's lectures.
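A short numpy sketch of this idea on synthetic data (the data and its shape are assumptions made only for illustration):

import numpy as np

rng = np.random.default_rng(0)
# Synthetic 2-D data, deliberately elongated along one direction.
X = rng.normal(size=(500, 2)) @ np.array([[3.0, 0.0],
                                          [1.5, 0.5]])
X = X - X.mean(axis=0)                  # center the data

C = (X.T @ X) / len(X)                  # covariance matrix A = (1/n) S S^T
eigvals, eigvecs = np.linalg.eigh(C)    # eigh: symmetric input, ascending eigenvalues

# Sort into decreasing order: PC 1 = greatest variance, PC 2 next, ...
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

print("variance along each PC:", np.round(eigvals, 3))
print("PC directions (columns):")
print(np.round(eigvecs, 3))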

Multivariate Gaussian

Bivariate Gaussian

Spherical, diagonal, full covariance

Eigen/diagonal Decomposition
Let S be a square m × m matrix with m linearly independent eigenvectors (a "non-defective" matrix).
Theorem: there exists an eigen decomposition S = UΛU⁻¹, with Λ diagonal (cf. the matrix diagonalization theorem).
The columns of U are eigenvectors of S, and the diagonal elements of Λ are the eigenvalues of S.
The decomposition is unique for distinct eigenvalues.

Diagonal decomposition: why/how
Let U have the eigenvectors as columns: U = [v₁ … vₘ].
Then SU can be written SU = S[v₁ … vₘ] = [λ₁v₁ … λₘvₘ] = UΛ.
Thus SU = UΛ, or U⁻¹SU = Λ, and S = UΛU⁻¹.
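A quick numpy check of this factorization (the matrix is an arbitrary non-defective example, not one from the slides):

import numpy as np

S = np.array([[6.0, -1.0],
              [2.0, 3.0]])        # arbitrary square matrix with two independent eigenvectors

eigvals, U = np.linalg.eig(S)     # columns of U are the eigenvectors
Lam = np.diag(eigvals)            # Lambda: eigenvalues on the diagonal

# S U = U Lambda, hence S = U Lambda U^{-1}
assert np.allclose(S @ U, U @ Lam)
assert np.allclose(S, U @ Lam @ np.linalg.inv(U))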

Diagonal decomposition - example
Recall the example matrix S and its two eigenvectors, which form the columns of U.
Recall that UU⁻¹ = I; inverting U gives U⁻¹.
Then S = UΛU⁻¹.

Example continued
Let's divide U (and multiply U⁻¹) by the normalizing constant, so that its columns have unit length.
Then S = QΛQᵀ, where Q is orthogonal (Q⁻¹ = Qᵀ).
Why? Stay tuned …
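The same idea in numpy, using an arbitrary symmetric matrix (for symmetric input, np.linalg.eigh already returns orthonormal eigenvectors, i.e. a Q with Q⁻¹ = Qᵀ):

import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 2.0]])        # an arbitrary symmetric example

eigvals, Q = np.linalg.eigh(S)    # eigh normalizes the eigenvectors for us
Lam = np.diag(eigvals)

assert np.allclose(Q.T @ Q, np.eye(2))   # Q is orthogonal: Q^{-1} = Q^T
assert np.allclose(S, Q @ Lam @ Q.T)     # S = Q Lambda Q^T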

Symmetric Eigen Decomposition
If S is a symmetric matrix:
Theorem: there exists a (unique) eigen decomposition S = QΛQᵀ, where Q is orthogonal: Q⁻¹ = Qᵀ.
The columns of Q are normalized eigenvectors, and the columns are mutually orthogonal. (Everything is real.)

Spectral Decomposition theorem
If A is a symmetric and positive definite k × k matrix (xᵀAx > 0) with λᵢ (λᵢ > 0) and eᵢ, i = 1 … k, being the k eigenvalue–eigenvector pairs, then
A = λ₁e₁e₁ᵀ + λ₂e₂e₂ᵀ + … + λₖeₖeₖᵀ.
This is also called the eigen decomposition theorem: any symmetric matrix can be reconstructed from its eigenvalues and eigenvectors.
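A small numpy check of this rank-1 reconstruction (the matrix is an arbitrary symmetric positive definite example):

import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])        # arbitrary symmetric positive definite example

eigvals, eigvecs = np.linalg.eigh(A)

# Reconstruct A as a sum of rank-1 pieces: A = sum_i lambda_i * e_i e_i^T
A_rebuilt = sum(lam * np.outer(e, e) for lam, e in zip(eigvals, eigvecs.T))

assert np.allclose(A, A_rebuilt)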

Example for spectral decomposition
Let A be a symmetric, positive definite matrix. Its eigenvalues λᵢ and the corresponding eigenvectors eᵢ are computed, and consequently A = λ₁e₁e₁ᵀ + λ₂e₂e₂ᵀ + ….

Singular Value Decomposition
If A is a rectangular m × k matrix of real numbers, then there exist an m × m orthogonal matrix U and a k × k orthogonal matrix V such that A = UΣVᵀ.
Σ is an m × k matrix whose (i, i)th entries are σᵢ ≥ 0, i = 1 … min(m, k), and whose other entries are all zero. The positive constants σᵢ are the singular values of A.
If A has rank r, then there exist r positive constants σ₁, σ₂, …, σᵣ, r orthogonal m × 1 unit vectors u₁, u₂, …, uᵣ, and r orthogonal k × 1 unit vectors v₁, v₂, …, vᵣ such that
A = σ₁u₁v₁ᵀ + σ₂u₂v₂ᵀ + … + σᵣuᵣvᵣᵀ,
similar to the spectral decomposition theorem.
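A minimal numpy illustration of A = UΣVᵀ (the matrix is an arbitrary 3 × 2 example, not one from the slides):

import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])        # arbitrary rectangular example, m = 3, k = 2

U, s, Vt = np.linalg.svd(A)       # full U is 3x3, Vt is 2x2, s holds the singular values

# Rebuild the m x k matrix Sigma with the singular values on its diagonal.
Sigma = np.zeros(A.shape)
Sigma[:len(s), :len(s)] = np.diag(s)

assert np.allclose(A, U @ Sigma @ Vt)     # A = U Sigma V^T
assert np.allclose(U.T @ U, np.eye(3))    # U is orthogonal
assert np.allclose(Vt @ Vt.T, np.eye(2))  # V is orthogonal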

Singular Value Decomposition (contd.)
If A is symmetric and positive definite, then the SVD coincides with the eigen decomposition, and the eigenvalues equal the singular values (λᵢ = σᵢ).
In general, AAᵀ has the eigenvalue–eigenvector pairs (σᵢ², uᵢ); alternatively, the vᵢ are the eigenvectors of AᵀA with the same nonzero eigenvalues σᵢ².
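A quick numpy check of this relationship (the 2 × 3 matrix is an arbitrary assumed example):

import numpy as np

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])   # arbitrary 2x3 example

U, s, Vt = np.linalg.svd(A)

# The eigenvalues of A A^T are the squared singular values sigma_i^2 ...
w_AAt, _ = np.linalg.eigh(A @ A.T)
assert np.allclose(np.sort(w_AAt)[::-1], s**2)

# ... and the nonzero eigenvalues of A^T A are the same sigma_i^2.
w_AtA, _ = np.linalg.eigh(A.T @ A)
assert np.allclose(np.sort(w_AtA)[::-1][:len(s)], s**2)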

Example for SVD
Let A be a rectangular matrix (the numerical example was given as an image on the original slide).
U can be computed from the eigenvectors of AAᵀ, and V can be computed from the eigenvectors of AᵀA.

Example for SVD
Taking σ₁² = 12 and σ₂² = 10, the singular value decomposition of A is A = UΣVᵀ with σ₁ = √12 and σ₂ = √10.
Thus U, V and Σ are computed by performing eigen decompositions of AAᵀ and AᵀA.
Any matrix has a singular value decomposition, but only square, non-defective matrices have an eigen decomposition (for symmetric positive definite matrices the two coincide).
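The matrix A itself was shown only as an image; a matrix consistent with σ₁² = 12 and σ₂² = 10 is A = [[3, 1, 1], [-1, 3, 1]], assumed below purely to make the computation concrete:

import numpy as np

# Assumed reconstruction of the example matrix (chosen so that the
# squared singular values come out to 12 and 10).
A = np.array([[3.0, 1.0, 1.0],
              [-1.0, 3.0, 1.0]])

# sigma_i^2 from the eigen decomposition of A A^T ...
w, _ = np.linalg.eigh(A @ A.T)
print(np.sort(w)[::-1])           # [12. 10.]

# ... and directly from the SVD.
U, s, Vt = np.linalg.svd(A)
print(np.round(s**2, 6))          # [12. 10.]

# Reconstruct A = sigma_1 u_1 v_1^T + sigma_2 u_2 v_2^T
A_rebuilt = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(len(s)))
assert np.allclose(A, A_rebuilt)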

Applications of SVD in Linear Algebra
Inverse of an n × n square matrix A:
If A is non-singular, then A⁻¹ = (UΣVᵀ)⁻¹ = VΣ⁻¹Uᵀ, where Σ⁻¹ = diag(1/σ₁, 1/σ₂, …, 1/σₙ).
If A is singular, then A⁻¹ = (UΣVᵀ)⁻¹ ≈ VΣ₀⁻¹Uᵀ, where Σ₀⁻¹ = diag(1/σ₁, 1/σ₂, …, 1/σᵢ, 0, 0, …, 0).
Least-squares solution of an m × n system Ax = b (A is m × n, m ≥ n):
(AᵀA)x = Aᵀb ⟹ x = (AᵀA)⁻¹Aᵀb = A⁺b.
If AᵀA is singular, x = A⁺b ≈ (VΣ₀⁻¹Uᵀ)b, where Σ₀⁻¹ = diag(1/σ₁, 1/σ₂, …, 1/σᵢ, 0, 0, …, 0).
Condition of a matrix:
The condition number measures the degree of singularity of A; the larger the value of σ₁/σₙ, the closer A is to being singular.
http://www.cse.unr.edu/~bebis/MathMethods/SVD/lecture.pdf
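A numpy sketch of the SVD-based pseudo-inverse for least squares (the random over-determined system is an assumption made only for illustration):

import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 3))       # over-determined system, m = 6 > n = 3
b = rng.normal(size=6)

# Least-squares solution via the SVD pseudo-inverse, x = A+ b = V Sigma^{-1} U^T b ...
U, s, Vt = np.linalg.svd(A, full_matrices=False)
x_svd = Vt.T @ np.diag(1.0 / s) @ U.T @ b

# ... agrees with the normal equations and with numpy's built-in solvers.
x_normal = np.linalg.solve(A.T @ A, A.T @ b)
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(x_svd, x_normal) and np.allclose(x_svd, x_lstsq)
assert np.allclose(x_svd, np.linalg.pinv(A) @ b)

# Condition number sigma_1 / sigma_n: large values mean A is close to singular.
print(s[0] / s[-1], np.linalg.cond(A))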

Applications of SVD in Linear Algebra
Homogeneous equations, Ax = 0:
The minimum-norm solution is x = 0 (the trivial solution), so we impose the constraint ‖x‖ = 1 and obtain a "constrained" optimization problem: minimize ‖Ax‖ subject to ‖x‖ = 1.
Special case: if rank(A) = n − 1 (m ≥ n − 1, σₙ = 0), then x = λvₙ (λ is a constant).
General case: if rank(A) = n − k (m ≥ n − k, σₙ₋ₖ₊₁ = … = σₙ = 0), then x = λ₁vₙ₋ₖ₊₁ + … + λₖvₙ with λ₁² + … + λₖ² = 1.
This has appeared before: the homogeneous solution of a linear system of equations, computation of a homography using DLT, and estimation of the fundamental matrix.
For a proof, see Johnson and Wichern, "Applied Multivariate Statistical Analysis", p. 79.
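A small numpy sketch (the rank-deficient matrix is an assumed example): the minimizer of ‖Ax‖ subject to ‖x‖ = 1 is the right singular vector belonging to the smallest singular value.

import numpy as np

# Assumed example: a 4x3 matrix of rank 2, so Ax = 0 has a nontrivial solution.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])

U, s, Vt = np.linalg.svd(A)
print(np.round(s, 6))             # the last singular value is ~0

x = Vt[-1, :]                     # v_n: right singular vector for the smallest sigma
assert np.allclose(np.linalg.norm(x), 1.0)
assert np.allclose(A @ x, 0.0, atol=1e-10)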

What is the use of SVD?
SVD can be used to compute optimal low-rank approximations of arbitrary matrices.
Face recognition: represent face images as eigenfaces and compute the distance between the query face image and the database images in the principal component space.
Data mining: Latent Semantic Indexing for document retrieval.
Image compression: the Karhunen–Loève (KL) transform gives the best image compression; in MPEG, the Discrete Cosine Transform (DCT) is the closest approximation to the KL transform in terms of PSNR.

Singular Value Decomposition Illustration of SVD dimensions and sparseness

SVD example
Let A be the 3 × 2 matrix shown on the slide; thus m = 3, n = 2, and its SVD is A = UΣVᵀ.
Typically, the singular values are arranged in decreasing order.

Low-rank Approximation
SVD can be used to compute optimal low-rank approximations.
Approximation problem: find the matrix Aₖ of rank k that minimizes the Frobenius-norm error ‖A − X‖_F over all m × n matrices X of rank k.
Typically we want k ≪ r.

Low-rank Approximation
Solution via SVD: set the smallest r − k singular values to zero,
Aₖ = U diag(σ₁, …, σₖ, 0, …, 0) Vᵀ.
In column notation this is a sum of k rank-1 matrices: Aₖ = σ₁u₁v₁ᵀ + … + σₖuₖvₖᵀ.
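A compact numpy sketch of truncated SVD (the test matrix is random, assumed only for illustration):

import numpy as np

def low_rank(A, k):
    # Best rank-k approximation of A in the Frobenius norm, via truncated SVD.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # keep only the k largest singular values

rng = np.random.default_rng(2)
A = rng.normal(size=(8, 5))       # arbitrary test matrix
A2 = low_rank(A, 2)

print(np.linalg.matrix_rank(A2))          # 2
print(np.linalg.norm(A - A2, 'fro'))      # approximation error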

Approximation error
How good (or bad) is this approximation? It is the best possible, as measured by the Frobenius norm of the error:
min over X with rank(X) = k of ‖A − X‖_F = ‖A − Aₖ‖_F = (σₖ₊₁² + … + σᵣ²)^½,
where the σᵢ are ordered such that σᵢ ≥ σᵢ₊₁.
This suggests why the Frobenius error drops as k is increased.
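A quick numpy verification of this error formula (random matrix assumed only for illustration):

import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(8, 5))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

for k in range(1, len(s)):
    Ak = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
    err = np.linalg.norm(A - Ak, 'fro')
    # The error equals the root-sum-of-squares of the dropped singular values,
    # so it shrinks as k grows.
    assert np.isclose(err, np.sqrt(np.sum(s[k:] ** 2)))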