Linear Subspace Transforms: PCA, Karhunen-Loève, Hotelling (csc610, 2001).

Similar presentations
Face Recognition Sumitha Balasuriya.

Eigen Decomposition and Singular Value Decomposition
EigenFaces and EigenPatches Useful model of variation in a region –Region must be fixed shape (eg rectangle) Developed for face recognition Generalised.
3D Geometry for Computer Graphics
Chapter 28 – Part II Matrix Operations. Gaussian elimination Gaussian elimination LU factorization LU factorization Gaussian elimination with partial.
Tensors and Component Analysis Musawir Ali. Tensor: Generalization of an n-dimensional array Vector: order-1 tensor Matrix: order-2 tensor Order-3 tensor.
Machine Learning Lecture 8 Data Processing and Representation
1st Red ProTIC School, Tandil, April 2006: Principal component analysis (PCA) is a technique that is useful for the compression and classification.
Section 5.1 Eigenvectors and Eigenvalues. Eigenvectors and Eigenvalues Useful throughout pure and applied mathematics. Used to study difference equations.
Object Orie’d Data Analysis, Last Time Finished NCI 60 Data Started detailed look at PCA Reviewed linear algebra Today: More linear algebra Multivariate.
Principal Component Analysis CMPUT 466/551 Nilanjan Ray.
© 2003 by Davi GeigerComputer Vision September 2003 L1.1 Face Recognition Recognized Person Face Recognition.
Computer Graphics Recitation 5.
Spike-triggering stimulus features: stimulus X(t), multidimensional decision function (x1, x2, x3; f1, f2, f3), spike output Y(t). Functional models of.
Slide 1 EE3J2 Data Mining EE3J2 Data Mining Lecture 9(b) Principal Components Analysis Martin Russell.
Face Recognition Jeremy Wyatt.
The Terms that You Have to Know! Basis, Linear independent, Orthogonal Column space, Row space, Rank Linear combination Linear transformation Inner product.
Correlation. The sample covariance matrix: where.
Matrices CS485/685 Computer Vision Dr. George Bebis.
SVD(Singular Value Decomposition) and Its Applications
Empirical Modeling Dongsup Kim Department of Biosystems, KAIST Fall, 2004.
Summarized by Soo-Jin Kim
Dimensionality Reduction: Principal Components Analysis Optional Reading: Smith, A Tutorial on Principal Components Analysis (linked to class webpage)
Chapter 2 Dimensionality Reduction. Linear Methods
+ Review of Linear Algebra Optimization 1/14/10 Recitation Sivaraman Balakrishnan.
The Multiple Correlation Coefficient. has (p +1)-variate Normal distribution with mean vector and Covariance matrix We are interested if the variable.
Feature extraction 1.Introduction 2.T-test 3.Signal Noise Ratio (SNR) 4.Linear Correlation Coefficient (LCC) 5.Principle component analysis (PCA) 6.Linear.
CS654: Digital Image Analysis Lecture 15: Image Transforms with Real Basis Functions.
Mathematical Preliminaries. Matrix Theory. Vectors: nth element of vector u : u(n). Matrix: mth row and nth column of A : a(m,n); column vector.
Principal Component Analysis Bamshad Mobasher DePaul University Bamshad Mobasher DePaul University.
N-variate Gaussian. Some important characteristics: 1) The pdf of n jointly Gaussian R.V.'s is completely described by means, variances and covariances.
Unsupervised Learning Motivation: Given a set of training examples with no teacher or critic, why do we learn? Feature extraction Data compression Signal.
Last update Heejune Ahn, SeoulTech
Linear Subspace Transforms: PCA, Karhunen-Loève, Hotelling (C306, 2000).
MACHINE LEARNING 7. Dimensionality Reduction. Dimensionality of input Based on E Alpaydın 2004 Introduction to Machine Learning © The MIT Press (V1.1)
Feature Extraction. Speaker: 虞台文. Content: Principal Component Analysis (PCA), PCA Calculation for the Fewer-Sample Case, Factor Analysis, Fisher's Linear Discriminant.
Affine Registration in R m 5. The matching function allows to define tentative correspondences and a RANSAC-like algorithm can be used to estimate the.
Feature Extraction. Speaker: 虞台文.
Chapter 13 Discrete Image Transforms
Presented by: Muhammad Wasif Laeeq (BSIT07-1) Muhammad Aatif Aneeq (BSIT07-15) Shah Rukh (BSIT07-22) Mudasir Abbas (BSIT07-34) Ahmad Mushtaq (BSIT07-45)
Matrices, Chapter 8.9 ~ 8.10. Contents: 8.9 Power of Matrices; 8.10 Orthogonal Matrices.
1 Objective To provide background material in support of topics in Digital Image Processing that are based on matrices and/or vectors. Review Matrices.
Image Processing Architecture, © Oleh Tretiak. ECEC 453 Image Processing Architecture, Lecture 5, 1/22/2004: Rate-Distortion Theory,
Tutorial 6. Eigenvalues & Eigenvectors. Reminder: a vector x that is invariant, up to a scaling by λ, under multiplication by matrix A is called.
Image Transformation Spatial domain (SD) and Frequency domain (FD)
CSSE463: Image Recognition Day 27
LECTURE 10: DISCRIMINANT ANALYSIS
9.3 Filtered delay embeddings
Lecture 8:Eigenfaces and Shared Features
The regression model in matrix form
4. DIGITAL IMAGE TRANSFORMS 4.1. Introduction
CS485/685 Computer Vision Dr. George Bebis
Principal Component Analysis
PCA is “an orthogonal linear transformation that transfers the data to a new coordinate system such that the greatest variance by any projection of the.
Digital Image Processing: The Karhunen-Loève Transform (KLT) in Image Processing. Dr Tania Stathaki, Reader (Associate Professor) in Signal Processing, Imperial.
Recitation: SVD and dimensionality reduction
Matrix Algebra and Random Vectors
Further Matrix Algebra
CSSE463: Image Recognition Day 25
Outline Singular Value Decomposition Example of PCA: Eigenfaces.
Principal Components What matters most?.
LECTURE 09: DISCRIMINANT ANALYSIS
Lecture 13: Singular Value Decomposition (SVD)
Principal Component Analysis
Eigenvalues and Eigenvectors
Digital Image Processing
Presentation transcript:

Linear Subspace Transforms: PCA, Karhunen-Loève, Hotelling (csc610, 2001)

Next: Biological vision. Please look at the eye book and pathways readings on the www page.

Image-dependent basis. Other transforms (Fourier and Cosine) use a fixed basis. Better compression can be achieved by tailoring the basis to the image (pixel) statistics.

Sample data. Let x = [x_1, x_2, …, x_w] be a sequence of random variables (vectors). Example 1: x = column flattening of an image, so x is a sample over several images. Example 2: x = column flattening of an image block; x could be sampled from only one image. Example 3: x = optic flow field.
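
A minimal Octave/MATLAB sketch of Example 2 (the file name 'image.png' and the 8x8 block size are assumptions made for illustration): every block of one grayscale image is flattened into a column of a data matrix X, one column per sample.

% Flatten all 8x8 blocks of one grayscale image into the columns of X.
I = double(imread('image.png'));     % hypothetical input image
b = 8;                               % block size (assumption)
[r, c] = size(I);
X = [];
for i = 1:b:r-b+1
  for j = 1:b:c-b+1
    blk = I(i:i+b-1, j:j+b-1);
    X = [X, blk(:)];                 % blk(:) is the column flattening, (b*b) x 1
  end
end
% X is now (b*b) x w, where w is the number of sampled blocks.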

Natural images = stochastic variables. The samplings x = [x_1, x_2, …, x_w] of natural image intensities or optic flows are far from uniform! Drawing x_1, x_2, …, x_w gives localized clouds of points, particularly localized for e.g.: groups of faces or similar objects; lighting; small motions represented as intensity changes or optic flow (note the coupling through the optic flow equation).

Dimensionality reduction. Find an orthonormal (ON) basis B which "compacts" x. We can write x = y_1 B_1 + y_2 B_2 + … + y_w B_w, but we can drop some terms and use only a subset, e.g. x ≈ y_1 B_1 + y_2 B_2. Result: fewer y values encode almost all the data in x.

Estimated covariance matrix: C = (1/w) Σ_{i=1..w} (x_i - m)(x_i - m)', with sample mean m = (1/w) Σ_{i=1..w} x_i. Note 1: with the mean-subtracted samples stacked as the rows of X, this can be written more compactly as C = X'*X (up to the 1/w factor) and m = sum(x)/w. Note 2: the covariance matrix is both symmetric and real, therefore it can be diagonalized and its eigenvalues are real.
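
As a sketch in the same Octave/MATLAB style (assuming, as in the practical slide near the end, that the w samples are the columns of X; the slide's C = X'*X corresponds to stacking the samples as rows instead):

% X: one sample per column, e.g. the block matrix built above.
w  = size(X, 2);
mx = mean(X, 2);                     % sample mean vector
Xc = X - repmat(mx, 1, w);           % subtract the mean from every sample
C  = (Xc * Xc') / w;                 % estimated covariance matrix: real and symmetric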

Coordinate transform. Diagonalize C, i.e. find B s.t. B'CB = D = diag(λ_1, …, λ_w). Note: 1. C symmetric => B orthogonal. 2. The columns of B are the eigenvectors of C.
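
In Octave/MATLAB the diagonalization step is a single call to eig; since eig does not guarantee any ordering, the eigenvalues are sorted explicitly (a sketch continuing from the covariance snippet above):

[B, D] = eig(C);                            % columns of B are eigenvectors, so B'*C*B = D
[lambda, idx] = sort(diag(D), 'descend');   % sort by decreasing eigenvalue
B = B(:, idx);
D = diag(lambda);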

Definition: KL, PCA, Hotelling transform. Forward coordinate transform: y = B'*(x - m_x). Inverse transform: x = B*y + m_x. Note: m_y = E[y] = 0 (zero mean), and C_y = E[yy'] = D (diagonal covariance matrix).
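
Reusing B and mx from the sketches above, the transform pair for a single sample is:

x  = X(:, 1);            % any one sample (column) from the data matrix
y  = B' * (x - mx);      % forward KL/PCA transform: y has zero mean and covariance D
xr = B * y + mx;         % inverse transform: exact reconstruction when all components are kept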

Study the eigenvalues. Typically they decrease quickly. Write x = y_1 B_1 + y_2 B_2 + … + y_m B_m + … + y_w B_w; we can drop the trailing terms and use only a subset, i.e. x_m = y_1 B_1 + y_2 B_2 + … + y_m B_m. Result: the few values y_1, …, y_m encode almost all the data in x.
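
The truncation in Octave/MATLAB terms (m = 10 is an arbitrary choice for the sketch; B, mx, X and w are taken from the snippets above):

m  = 10;                                % number of retained components (assumption)
Bm = B(:, 1:m);                         % the m leading eigenvectors
Y  = Bm' * (X - repmat(mx, 1, w));      % m coefficients per sample
Xm = Bm * Y + repmat(mx, 1, w);         % rank-m approximation of all samples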

Remaining error. The mean square error of the approximation is the sum of the discarded eigenvalues, E[||x - x_m||^2] = λ_{m+1} + … + λ_w (see the note below). Proof: (on blackboard).
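
For reference, the statement in LaTeX together with a one-line version of the blackboard argument, using the orthonormality of the B_i and E[y_i y_j] = \lambda_i \delta_{ij}:

E\big[\lVert x - x_m \rVert^2\big]
  = E\Big[\Big\lVert \sum_{i=m+1}^{w} y_i B_i \Big\rVert^2\Big]
  = \sum_{i=m+1}^{w} E[y_i^2]
  = \sum_{i=m+1}^{w} \lambda_i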

Example eigen-images 1: Lighting variation

Example eigen-images 2: Motions. Motions with my arm sampled densely.

Human Arm Animation: training sequence / simulated animation.

Modulating Basis Vectors: vectors 5 and 6; 1-6.

Practically: C = XX' is quite large, so eig(C) is impractical. Instead: 1. Form the small Gram matrix G = X'X, take [V,d] = eig(G); the eigenvectors of C are then U = XV (up to normalization). 2. Or compute [U,s,V] = svd(X), since C = XX' = U s^2 U'.
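
A sketch of both routes in Octave/MATLAB, reusing the mean-subtracted data Xc from above (the 1e-10 threshold for discarding numerically zero directions is an arbitrary choice):

% Route 1: snapshot trick via the small Gram matrix X'X.
G = Xc' * Xc;                                % w x w instead of the large outer product
[V, D] = eig(G);
[lam, idx] = sort(diag(D), 'descend');
keep = lam > 1e-10;                          % drop numerically zero directions
V = V(:, idx(keep));
U = Xc * V;                                  % eigenvectors of Xc*Xc', unnormalized
U = U ./ repmat(sqrt(sum(U.^2, 1)), size(U, 1), 1);   % scale columns to unit length

% Route 2: the SVD of the data matrix. Xc = U*S*V', so Xc*Xc' = U*S^2*U'.
[U2, S, V2] = svd(Xc, 'econ');               % 'econ' keeps only the first w singular vectors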

Summary: Hotelling, PCA, KLT. Advantages: 1. Optimal coding in the least-squares sense. 2. Theoretically well founded. 3. Image-dependent basis: we can interpret the variation in our particular set of images. Disadvantages: 1. Computationally intensive. 2. Image-dependent basis (we have to compute it).