
n-variate Gaussian

Some important characteristics:
1) The pdf of n jointly Gaussian R.V.'s is completely described by the means, variances and covariances.
2) Linear transformations of jointly Gaussian random variables give jointly Gaussian random variables.
3) Any vector of n jointly Gaussian R.V.'s can be linearly transformed into a vector of n independent Gaussian R.V.'s: find A and Λ such that AKA^T = Λ, where K is the covariance matrix of X and Λ is diagonal (see the numerical sketch below).
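
As a concrete illustration of point 3, here is a minimal NumPy sketch (the 2×2 covariance matrix K below is made up for the example; NumPy is an assumed tool, not part of the original slides): taking A = P^T, with P the orthonormal eigenvector matrix of K, makes AKA^T diagonal.

```python
# A minimal sketch of point 3 (the 2x2 covariance K is made up for illustration):
# diagonalize K with its orthonormal eigenvectors and take A = P^T.
import numpy as np

K = np.array([[2.0, 0.8],
              [0.8, 1.0]])           # covariance matrix of the jointly Gaussian X

eigvals, P = np.linalg.eigh(K)       # K = P diag(eigvals) P^T, P orthonormal
A = P.T                              # the required linear transform

Lambda = A @ K @ A.T                 # numerically diagonal
print(np.round(Lambda, 12))          # diagonal entries are the eigenvalues of K
```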

Use (PQ)^T = Q^T P^T. Use P^-1 Q^-1 R^-1 = (RQP)^-1.
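
These two identities can be checked numerically; a quick sketch with random (almost surely invertible) matrices, again assuming NumPy:

```python
# Quick numerical check of the two identities (random matrices, almost surely invertible).
import numpy as np

rng = np.random.default_rng(0)
P, Q, R = (rng.standard_normal((3, 3)) for _ in range(3))

assert np.allclose((P @ Q).T, Q.T @ P.T)
assert np.allclose(np.linalg.inv(P) @ np.linalg.inv(Q) @ np.linalg.inv(R),
                   np.linalg.inv(R @ Q @ P))
print("both identities hold")
```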

In this case the Y_i are uncorrelated (which, for jointly Gaussian R.V.'s, means independent) Gaussian R.V.'s with mean μ_i and variance λ_i. When A is chosen to be P^T, where P is the matrix whose columns are the orthonormal eigenvectors of K, the covariance of Y is diagonal and A is called the Karhunen-Loève Transform (KLT). This process is called Principal Component Analysis (PCA), and the Y_i are called the principal components of X. X Gaussian ⇒ the Y_i are independent ⇒ any multivariate Gaussian can be linearly transformed into independent (also Gaussian) R.V.'s.
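
A short sketch of the KLT applied to sampled data (the covariance K, the sample size and the random seed are arbitrary choices for illustration): the components of Y = P^T X have an approximately diagonal sample covariance.

```python
# Sketch of the KLT on sampled data; the covariance K, sample size and seed are
# arbitrary choices for illustration.
import numpy as np

rng = np.random.default_rng(1)
K = np.array([[3.0, 1.2],
              [1.2, 2.0]])
X = rng.multivariate_normal(mean=[0.0, 0.0], cov=K, size=100_000).T   # shape (2, N)

eigvals, P = np.linalg.eigh(np.cov(X))   # eigen-decomposition of the sample covariance
Y = P.T @ X                              # principal components

print(np.round(np.cov(Y), 3))            # off-diagonal terms ~ 0, diagonal ~ eigenvalues
```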

Conceptually, PCA transforms X into a new coordinate system corresponding to the principal axes of the Gaussian pdf [see Example 4.49 in the textbook]. [Figure: the (x1, x2) axes and the rotated principal axes (y1, y2).] In the n-variate case, if we simply re-order the Y_i such that λ_1 > λ_2 > ... > λ_n, then Y_1 lies along the direction of maximum variance of X, Y_2 lies along the direction of maximum variance orthogonal to Y_1, and so on. ⇒ The maximum amount of the variance of X is captured in the minimum number of components. This is useful for dimensionality reduction and other applications in pattern recognition and coding.
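
The re-ordering step might look as follows on a made-up 4×4 covariance: after sorting the eigenvalues in decreasing order, the cumulative sum shows how quickly the leading components capture the total variance.

```python
# Sketch of the re-ordering step with a made-up 4x4 covariance: sort the eigenvalues
# in decreasing order and look at the cumulative fraction of variance captured.
import numpy as np

K = np.diag([5.0, 2.0, 1.0, 0.5]) + 0.3          # toy covariance (positive definite)
eigvals, P = np.linalg.eigh(K)

order = np.argsort(eigvals)[::-1]                # lambda_1 > lambda_2 > ... > lambda_n
eigvals, P = eigvals[order], P[:, order]

explained = np.cumsum(eigvals) / eigvals.sum()
print(np.round(explained, 3))                    # leading components capture most variance
```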

Principal Components for Dimensionality Reduction

Y = P^T X ⇔ X = P Y. Suppose we keep only the first m < n components: define R = the n × m matrix formed by the first m columns of P. Then Y = R^T X is m-dimensional, and if we try to get X back from Y we incur some error: X̂ = R Y. Let J^2 = E(|X - X̂|^2). It can be shown that J^2 equals the sum of the discarded eigenvalues, J^2 = λ_(m+1) + ... + λ_n, so by choosing the eigenvectors corresponding to the m largest eigenvalues we minimize the error and maximize retention of information.
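
A numerical check of this claim on a made-up 3-variate covariance: keeping m of n components and reconstructing X̂ = R R^T X gives a mean squared error close to the sum of the discarded eigenvalues.

```python
# Numerical check of the error formula with a made-up 3-variate covariance:
# J^2 = E|X - X_hat|^2 should be close to the sum of the discarded eigenvalues.
import numpy as np

rng = np.random.default_rng(2)
K = np.array([[4.0, 1.0, 0.5],
              [1.0, 2.0, 0.3],
              [0.5, 0.3, 1.0]])
X = rng.multivariate_normal(np.zeros(3), K, size=200_000).T

eigvals, P = np.linalg.eigh(np.cov(X))
order = np.argsort(eigvals)[::-1]
eigvals, P = eigvals[order], P[:, order]

m = 2
R = P[:, :m]                  # n x m matrix of the first m eigenvectors
Y = R.T @ X                   # reduced representation
X_hat = R @ Y                 # reconstruction

J2 = np.mean(np.sum((X - X_hat) ** 2, axis=0))
print(J2, eigvals[m:].sum())  # the two numbers should be close
```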

Generating Correlated Gaussian R.V.'s

The reverse of the KLT/PCA can be used to produce correlated Gaussian R.V.'s from uncorrelated ones. Let X ~ n-variate Gaussian with mean 0 and covariance I, i.e. the X_i are uncorrelated with unit variance. We want Y ~ n-variate Gaussian with mean 0 and covariance C. This is needed, for example, when we build a model and need a Gaussian signal with a certain mean and covariance.

Since C is symmetric (like all covariance matrices), we can write C = P Λ P^T, where P = matrix whose columns are the orthonormal eigenvectors of C and Λ = diagonal matrix of the eigenvalues λ_i of C. Taking Y = P Λ^(1/2) X then gives a covariance of (P Λ^(1/2)) I (P Λ^(1/2))^T = P Λ P^T = C, as required. (Note: we could have obtained this directly from the earlier result C = AKA^T, since K = I here.)
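
A minimal sketch of this reverse transform (the target covariance C below is made up, and the construction Y = P Λ^(1/2) X is the standard one implied by the decomposition above): the sample covariance of Y comes out close to C.

```python
# Sketch of the reverse transform, assuming the construction Y = P Lambda^(1/2) X;
# the target covariance C is made up for illustration.
import numpy as np

rng = np.random.default_rng(3)
C = np.array([[2.0, 0.9],
              [0.9, 1.5]])                       # desired covariance

eigvals, P = np.linalg.eigh(C)                   # C = P diag(eigvals) P^T
A = P @ np.diag(np.sqrt(eigvals))                # A A^T = C

X = rng.standard_normal((2, 100_000))            # uncorrelated, unit-variance Gaussians
Y = A @ X                                        # correlated Gaussian samples

print(np.round(np.cov(Y), 3))                    # close to C
```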