Published by Kelly Gallagher; modified over 9 years ago.
n-variate Gaussian
Some important characteristics:
1) The pdf of n jointly Gaussian R.V.'s is completely described by their means, variances, and covariances.
2) Linear transformations of jointly Gaussian random variables give jointly Gaussian random variables.
3) Any vector of n jointly Gaussian R.V.'s can be linearly transformed into a vector of n independent Gaussian R.V.'s: find A and Λ such that AKAᵀ = Λ, where Λ is diagonal.
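Property 3 can be sketched numerically: for a symmetric covariance K, an eigendecomposition gives an orthonormal P, and choosing A = Pᵀ diagonalizes K. The matrix K below is a made-up example; `np.linalg.eigh` is numpy's eigensolver for symmetric matrices.

```python
import numpy as np

# Hypothetical 3x3 covariance matrix K (an assumption for illustration).
K = np.array([[4.0, 2.0, 0.5],
              [2.0, 3.0, 1.0],
              [0.5, 1.0, 2.0]])

# For symmetric K, eigh returns eigenvalues and orthonormal eigenvectors.
# With A = P^T we get A K A^T = Lambda (diagonal).
eigvals, P = np.linalg.eigh(K)
A = P.T
Lambda = A @ K @ A.T

# Off-diagonal entries vanish (up to floating-point error).
print(np.round(Lambda, 6))
```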
Use (PQ)ᵀ = QᵀPᵀ and P⁻¹Q⁻¹R⁻¹ = (RQP)⁻¹.
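Both identities are easy to check numerically on arbitrary (here randomly generated, almost surely invertible) matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
# Three random 3x3 matrices (invertible with probability 1).
P, Q, R = rng.normal(size=(3, 3, 3))

lhs_t = (P @ Q).T                 # (PQ)^T
rhs_t = Q.T @ P.T                 # Q^T P^T

lhs_inv = np.linalg.inv(P) @ np.linalg.inv(Q) @ np.linalg.inv(R)
rhs_inv = np.linalg.inv(R @ Q @ P)

print(np.allclose(lhs_t, rhs_t), np.allclose(lhs_inv, rhs_inv))  # True True
```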
In this case the Yᵢ are uncorrelated Gaussian R.V.'s with mean μᵢ and variance λᵢ; since they are jointly Gaussian, uncorrelated implies independent. When A is chosen to be Pᵀ so that the covariance of Y is diagonal, A is called the Karhunen–Loève Transform (KLT). This process is called Principal Component Analysis (PCA), and the Yᵢ are called the principal components of X. X Gaussian ⇒ the Yᵢ are independent: any multivariate Gaussian can be linearly transformed into independent R.V.'s (also Gaussian).
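A quick empirical sketch of the KLT, using an assumed 2×2 covariance C: after transforming samples by Pᵀ, the sample covariance of Y is approximately diag(λᵢ), i.e. the components are uncorrelated (and, being jointly Gaussian, independent).

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical 2x2 covariance matrix C (an assumption for illustration).
C = np.array([[3.0, 1.2],
              [1.2, 2.0]])
X = rng.multivariate_normal(mean=[0.0, 0.0], cov=C, size=100_000)

eigvals, P = np.linalg.eigh(C)   # columns of P: orthonormal eigenvectors of C
Y = X @ P                        # each row is y = P^T x

# Sample covariance of Y is (approximately) diag(lambda_i).
print(np.round(np.cov(Y, rowvar=False), 3))
```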
Conceptually, PCA transforms X into a new coordinate system corresponding to the principal axes of the Gaussian pdf [see Example 4.49 in the textbook].
[Figure: the principal axes y₁, y₂ rotated relative to the original axes x₁, x₂.]
In the n-variate case, if we re-order the Yᵢ so that λ₁ > λ₂ > ... > λₙ, then Y₁ lies along the direction of maximum variance of X, Y₂ lies along the direction of maximum variance orthogonal to Y₁, and so on. The maximum amount of variance of X is thus captured in the minimum number of components. This is useful for dimensionality reduction and other applications in pattern recognition and coding.
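The re-ordering step can be sketched as follows. The 4×4 covariance below is a hypothetical example constructed to have eigenvalues 9, 3, 0.5, 0.1, so that the first two components dominate; note `np.linalg.eigh` returns eigenvalues in ascending order, so we sort them descending.

```python
import numpy as np

rng = np.random.default_rng(2)
# Build a covariance with assumed eigenvalues 9, 3, 0.5, 0.1
# (a hypothetical spectrum chosen so the first components dominate).
Qm, _ = np.linalg.qr(rng.normal(size=(4, 4)))   # random orthonormal basis
C = Qm @ np.diag([9.0, 3.0, 0.5, 0.1]) @ Qm.T

eigvals, P = np.linalg.eigh(C)            # eigh returns ascending order
order = np.argsort(eigvals)[::-1]         # re-order: lambda_1 >= lambda_2 >= ...
eigvals, P = eigvals[order], P[:, order]

# Fraction of total variance captured by the first m components:
captured = np.cumsum(eigvals) / eigvals.sum()
print(np.round(captured, 3))   # the first two components capture over 95%
```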
Principal Components for Dimensionality Reduction
Y = PᵀX, X = PY. Suppose we keep only the first m < n components of P. Define R = the n×m matrix of the first m columns of P. Then Y = RᵀX is an m-vector, and if we try to get X back from Y we incur some error: X̂ = RY, with mean-square error J = E[‖X − X̂‖²]. It can be shown that J = λ_{m+1} + ... + λₙ, the sum of the discarded eigenvalues. So by choosing the eigenvectors corresponding to the m largest eigenvalues, we maximize retention of information.
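A numerical sketch of this result, reusing a hypothetical 4×4 covariance with eigenvalues 9, 3, 0.5, 0.1 and keeping m = 2 components: the empirical mean-square reconstruction error comes out close to the sum of the discarded eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical covariance with assumed eigenvalues 9, 3, 0.5, 0.1.
Qm, _ = np.linalg.qr(rng.normal(size=(4, 4)))
C = Qm @ np.diag([9.0, 3.0, 0.5, 0.1]) @ Qm.T

eigvals, P = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]          # sort descending
eigvals, P = eigvals[order], P[:, order]

m = 2
R = P[:, :m]                               # n x m: first m columns of P
X = rng.multivariate_normal(np.zeros(4), C, size=200_000)
Y = X @ R                                  # y = R^T x, the m retained components
X_hat = Y @ R.T                            # reconstruction x_hat = R y

J = np.mean(np.sum((X - X_hat) ** 2, axis=1))
print(J, eigvals[m:].sum())   # J is close to 0.5 + 0.1 = 0.6
```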
Generating Correlated Gaussian R.V.'s
The reverse of the KLT/PCA can be used to produce correlated Gaussian R.V.'s from uncorrelated ones. Let X ~ n-variate Gaussian with mean 0 and covariance I, i.e. the Xᵢ are uncorrelated with unit variance. We want Y ~ n-variate Gaussian with mean 0 and covariance C. This is needed, for example, when we build a model and need a Gaussian signal with a specified mean and covariance.
(Note: we could have obtained this directly from the earlier result C = AKAᵀ, since K = I here.) Since C is symmetric (like all covariance matrices), we can write C = PΛPᵀ, where P is the matrix whose columns are orthonormal eigenvectors of C, and Λ is the diagonal matrix of eigenvalues λᵢ of C.
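This construction can be sketched with A = PΛ^(1/2), so that AAᵀ = PΛPᵀ = C, and Y = AX then has covariance AIAᵀ = C. The target covariance below is a made-up example:

```python
import numpy as np

rng = np.random.default_rng(4)
# Desired covariance C (a hypothetical target for illustration):
C = np.array([[3.0, 1.2],
              [1.2, 2.0]])

eigvals, P = np.linalg.eigh(C)
A = P @ np.diag(np.sqrt(eigvals))   # A A^T = P Lambda P^T = C

# X: uncorrelated unit-variance Gaussian samples, one column per sample.
X = rng.normal(size=(2, 100_000))
Y = A @ X                           # Y = A X has covariance A I A^T = C

print(np.round(np.cov(Y), 2))       # close to the target C
```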