Recitation: SVD and dimensionality reduction


Recitation: SVD and dimensionality reduction. Zhenzhen Kou, Thursday, April 21, 2005

SVD Intuition: find the axis that shows the greatest variation, and project all points onto this axis. [Figure: 2D point cloud plotted against the original axes f1 and f2, with the fitted projection axes e1 and e2.]

SVD: Mathematical Background. The reconstructed matrix Xk = Uk Sk Vk^T is the closest rank-k matrix to the original matrix X. In full, X (m x n) = U (m x r) S (r x r) V^T (r x n); truncating to the top k singular values gives Uk (m x k), Sk (k x k), and Vk^T (k x n).
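
As a quick numerical sanity check of this best-rank-k property (the Eckart-Young theorem), here is a minimal sketch assuming NumPy; the random matrix and k = 2 are illustrative choices, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 5))                     # toy m x n data matrix

U, s, Vt = np.linalg.svd(X, full_matrices=False)    # X = U @ diag(s) @ Vt

k = 2
Xk = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]          # rank-k reconstruction Uk Sk Vk^T

# The Frobenius error equals the energy in the discarded singular values.
print(np.linalg.norm(X - Xk, 'fro'))
print(np.sqrt(np.sum(s[k:] ** 2)))                  # the two printed values agree
```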

SVD: The mathematical formulation. Let X be the M x N matrix of M N-dimensional points. SVD decomposition: X = U S V^T.
- U (M x M) is orthogonal: U^T U = I. Its columns are the orthonormal eigenvectors of X X^T, called the left singular vectors of X.
- V (N x N) is orthogonal: V^T V = I. Its columns are the orthonormal eigenvectors of X^T X, called the right singular vectors of X.
- S (M x N) is a diagonal matrix with r non-zero values in descending order, called the singular values: the square roots of the eigenvalues of X X^T (or X^T X). Here r is the rank of X (and of the symmetric matrices X X^T and X^T X).
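
A minimal sketch of these facts, assuming NumPy and a small random matrix (both illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 6, 4
X = rng.standard_normal((M, N))                     # M points in N dimensions

U, s, Vt = np.linalg.svd(X)                         # full SVD: U is M x M, Vt is N x N
V = Vt.T

print(np.allclose(U.T @ U, np.eye(M)))              # U is orthogonal
print(np.allclose(V.T @ V, np.eye(N)))              # V is orthogonal

# Squared singular values are the eigenvalues of X^T X (equivalently of X X^T).
eigvals = np.sort(np.linalg.eigvalsh(X.T @ X))[::-1]
print(np.allclose(s ** 2, eigvals))
```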

SVD - Interpretation

SVD - Interpretation. X = U S V^T - example: [Figure: a numeric matrix X written out as the product U S V^T, with the first right singular vector v1 marked.]

SVD - Interpretation. X = U S V^T - example: the first singular value gives the variance ('spread') on the v1 axis. [Figure: the same product U S V^T with the leading entry of S highlighted.]

SVD - Interpretation. X = U S V^T - example: U L (i.e., U S, the left singular vectors scaled by the singular values) gives the coordinates of the points along the projection axes. [Figure: the product U S V^T with the factor U S marked.]
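
A small sketch of this reading, assuming NumPy: the coordinates U S coincide with X V, i.e., the data re-expressed in the new v1, v2, ... axes (the random matrix is only for illustration).

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((10, 3))

U, s, Vt = np.linalg.svd(X, full_matrices=False)

coords_from_U = U @ np.diag(s)      # "U L": left singular vectors scaled by the singular values
coords_from_V = X @ Vt.T            # the same points expressed in the v1, v2, ... axes
print(np.allclose(coords_from_U, coords_from_V))
```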

Dimensionality reduction: set the smallest singular values to zero. [Figures: X ≈ U S V^T reconstructed with the smallest singular values zeroed out; the corresponding columns of U and rows of V^T can then be dropped.]
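
A short sketch of this truncation step, assuming NumPy (k = 2 and the random matrix are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((8, 5))
U, s, Vt = np.linalg.svd(X, full_matrices=False)

k = 2
s_trunc = s.copy()
s_trunc[k:] = 0.0                                   # zero out the smallest singular values
X_approx = U @ np.diag(s_trunc) @ Vt                # rank drops to (at most) k
print(np.linalg.matrix_rank(X_approx))
```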

Dimensionality reduction. Equivalent: 'spectral decomposition' of the matrix: X = U S V^T written with U = [u1 u2 ...], S = diag(l1, l2, ...), and V = [v1 v2 ...]. [Figure: the factors U, S, V^T with the columns u1, u2, the singular values l1, l2, and the rows v1^T, v2^T labeled.]

Dimensionality reduction. 'Spectral decomposition' of the m x n matrix as a sum of r rank-one terms: X = u1 l1 v1^T + u2 l2 v2^T + ..., where each u_i is an m x 1 column vector and each v_i^T is a 1 x n row vector.
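
A minimal sketch of this rank-one-sum view, assuming NumPy (the small random matrix is illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((5, 4))
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Rebuild X as a sum of r rank-one terms u_i * l_i * v_i^T.
X_sum = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(len(s)))
print(np.allclose(X, X_sum))
```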

Dimensionality reduction. Approximation / dim. reduction: keep only the first few terms (Q: how many?): X ≈ u1 l1 v1^T + u2 l2 v2^T + ..., assuming l1 >= l2 >= ...

Dimensionality reduction. A heuristic: keep enough terms to retain 80-90% of the 'energy' (= sum of squares of the l_i's): X ≈ u1 l1 v1^T + u2 l2 v2^T + ..., assuming l1 >= l2 >= ...
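
A sketch of this energy heuristic, assuming NumPy; the 0.9 threshold, the helper name choose_k, and the toy matrix with a decaying spectrum are illustrative choices, not from the slides:

```python
import numpy as np

def choose_k(s, energy_threshold=0.9):
    """Smallest k whose leading singular values carry the requested share of energy."""
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(energy, energy_threshold) + 1)

rng = np.random.default_rng(5)
X = rng.standard_normal((50, 10)) @ np.diag(np.linspace(5.0, 0.1, 10))  # decaying spectrum
_, s, _ = np.linalg.svd(X, full_matrices=False)
print(choose_k(s))                  # number of terms needed to keep ~90% of the energy
```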

Another example: Eigenfaces (the PCA problem in HW5). For face data X, the eigenvectors associated with the first few largest eigenvalues of X X^T look like faces when displayed as images ('eigenfaces').

Dimensionality reduction. Matrix V in the SVD decomposition (X = U S V^T) is used to transform the data: X V (= U S) defines the transformed dataset, and for a new data element x, x V defines the transformed data. Keeping the first k (k < n) dimensions amounts to keeping only the first k columns of V.
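
A minimal sketch of this transformation, assuming NumPy (k = 2, the random dataset, and the new point are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.standard_normal((20, 5))
U, s, Vt = np.linalg.svd(X, full_matrices=False)
V = Vt.T

k = 2
X_transformed = X @ V[:, :k]        # equals the first k columns of U @ diag(s)
x_new = rng.standard_normal(5)      # a new data element
x_transformed = x_new @ V[:, :k]    # its coordinates in the reduced space
print(X_transformed.shape, x_transformed.shape)
```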

Principal Components Analysis (PCA). Center the dataset by subtracting the means; let matrix X be the result. Compute the matrix X^T X, which is the covariance matrix up to a constant factor. Project the dataset along a subset of the eigenvectors of X^T X. Matrix V in the SVD decomposition (X = U S V^T) contains exactly these eigenvectors of X^T X. Also known as the Karhunen-Loeve (K-L) transform.
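
A minimal sketch of this PCA-via-SVD pipeline, assuming NumPy (the correlated toy data and k = 2 are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
data = rng.standard_normal((100, 4)) @ rng.standard_normal((4, 4))  # correlated toy data

X = data - data.mean(axis=0)            # 1. center by subtracting the column means
U, s, Vt = np.linalg.svd(X, full_matrices=False)

k = 2
scores = X @ Vt.T[:, :k]                # 2. project onto the top-k eigenvectors of X^T X
print(scores.shape)

# The leading eigenvector of X^T X matches the first column of V (up to sign).
eigvals, eigvecs = np.linalg.eigh(X.T @ X)
print(np.allclose(np.abs(eigvecs[:, -1]), np.abs(Vt[0])))
```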