III. Multi-Dimensional Random Variables and Application in Vector Quantization.

© Tallal Elshabrawy

7 Karhunen-Loeve Decomposition
X = [X1, X2] is a sample vector from a two-dimensional random variable defined over the v1, v2 (unit-vector) coordinate system. Let u1, u2 (unit vectors) be a new coordinate system such that u1 and u2 are the eigenvectors of the covariance matrix R_X:

R_X u1 = λ1 u1,  R_X u2 = λ2 u2.

Because R_X is symmetric, eigenvectors belonging to distinct eigenvalues are orthogonal, so u1 and u2 are orthogonal to each other.

8 Karhunen-Loeve Decomposition
Representation of X over u1, u2:

X = (X · u1) u1 + (X · u2) u2,

i.e., the projection of X onto u1 times the unit vector u1, plus the projection of X onto u2 times the unit vector u2.
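
This expansion is easy to verify numerically. Below is a minimal sketch, assuming an illustrative covariance matrix and sample vector (the values are not from the slides):

```python
# Minimal sketch: expand a sample vector X in the eigenvector basis u1, u2.
# R_X and X below are illustrative assumptions, not values from the lecture.
import numpy as np

R_X = np.array([[2.0, 0.8],
                [0.8, 1.0]])         # assumed symmetric covariance matrix
eigvals, U = np.linalg.eigh(R_X)     # columns of U are the eigenvectors u1, u2

X = np.array([1.5, -0.5])            # assumed sample vector
coeffs = U.T @ X                     # projections of X onto u1 and u2
X_rec = coeffs[0] * U[:, 0] + coeffs[1] * U[:, 1]  # (X.u1) u1 + (X.u2) u2

print(np.allclose(X, X_rec))         # True: the expansion recovers X
```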

9-11 Karhunen-Loeve Decomposition
Diagonalization of R_X: collect the eigenvectors as the columns of U = [u1 u2]. Since R_X u_i = λ_i u_i for i = 1, 2, and the eigenvectors are orthonormal (U^T U = I),

U^T R_X U = Λ = diag(λ1, λ2),

so R_X is diagonal in the u1, u2 coordinate system.
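
The diagonalization can be checked in a few lines. A minimal sketch, again with an assumed covariance matrix:

```python
# Minimal sketch: verify U^T R_X U = diag(lambda1, lambda2).
# The covariance matrix is an illustrative assumption.
import numpy as np

R_X = np.array([[2.0, 0.8],
                [0.8, 1.0]])          # assumed symmetric covariance matrix
eigvals, U = np.linalg.eigh(R_X)      # U = [u1 u2] with orthonormal columns

Lambda = U.T @ R_X @ U                # diagonal in the eigenvector basis
print(np.round(Lambda, 12))           # off-diagonal entries are (numerically) zero
print(eigvals)                        # equals the diagonal of Lambda
```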

12 K-L Transformation of 2-D Random Variable
X = [X1, X2] is a sample vector from a two-dimensional random variable defined over the v1, v2 (unit-vector) coordinate system. Let Y = [Y1, Y2] be the transformation of the random variable X onto the coordinate system u1, u2:

Y1 = u1 · X,  Y2 = u2 · X,  i.e.  Y = U^T X, where U = [u1 u2].

13 K-L Transformation of 2-D Random Variable
Define the mean vectors

m_X = [E[X1], E[X2]]^T,  m_Y = [E[Y1], E[Y2]]^T.

Since Y = U^T X, the means are related by m_Y = U^T m_X.

14 K-L Transformation of 2-D Random Variable
Covariance matrix R_X:

R_X = E[(X - m_X)(X - m_X)^T].

15-16 K-L Transformation of 2-D Random Variable
Covariance matrix R_Y:

R_Y = E[(Y - m_Y)(Y - m_Y)^T] = E[U^T (X - m_X)(X - m_X)^T U] = U^T R_X U = diag(λ1, λ2).

17 K-L Transformation of 2-D Random Variable
Because R_Y = diag(λ1, λ2), the off-diagonal entry E[(Y1 - E[Y1])(Y2 - E[Y2])] = 0, so Y1 and Y2 are UNCORRELATED.

18 K-L Transformation of 2-D Random Variable
Therefore, using principal component analysis, it is possible to transform a random variable X, whose components X1 and X2 are correlated, into another random variable Y whose components Y1 and Y2 are uncorrelated. Here u1, u2 are the eigenvectors of R_X and λ1, λ2 are the eigenvalues of R_X.
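
As a numerical illustration, the sketch below draws correlated samples, applies the K-L transform Y = U^T X, and checks that the transformed components are uncorrelated; the covariance values and sample size are assumptions, not from the lecture.

```python
# Minimal sketch: decorrelate samples with the K-L transform Y = U^T X.
# The covariance matrix and sample size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
R_X = np.array([[2.0, 0.8],
                [0.8, 1.0]])          # assumed covariance of X
X = rng.multivariate_normal([0.0, 0.0], R_X, size=100_000)  # rows are samples

_, U = np.linalg.eigh(np.cov(X, rowvar=False))  # eigenvectors of sample covariance
Y = X @ U                             # each row is y = U^T x

print(np.round(np.cov(Y, rowvar=False), 3))     # near-diagonal: Y1, Y2 uncorrelated
```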

19 Two-Dimensional Gaussian Distribution
In the previous slides we discussed the transformed random variable Y, whose components are uncorrelated and which has mean m_Y and covariance matrix R_Y. What is the distribution of Y? That depends on the distribution of X: in general, neither X nor Y need be Gaussian. However, if X has a two-dimensional Gaussian distribution, then Y is also two-dimensional Gaussian.

20-22 Two-Dimensional Gaussian Distribution
Suppose we have a two-dimensional random variable Y whose components Y1 and Y2 are independent and Gaussian, with means m1, m2 and variances σ1^2, σ2^2. Independence means the joint pdf is the product of the marginals:

f_Y(y1, y2) = f(y1) f(y2) = [1 / (2π σ1 σ2)] exp( -(y1 - m1)^2 / (2σ1^2) - (y2 - m2)^2 / (2σ2^2) ),

which can be written compactly in vector form as

f_Y(y) = [1 / (2π |R_Y|^(1/2))] exp( -(1/2) (y - m_Y)^T R_Y^(-1) (y - m_Y) ).

23 Two-Dimensional Gaussian Distribution
The vector form is valid for any multi-dimensional Gaussian random variable, whether its components are correlated or not:

f_X(x) = [1 / ((2π)^(n/2) |R_X|^(1/2))] exp( -(1/2) (x - m_X)^T R_X^(-1) (x - m_X) ).
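
The sketch below evaluates this density and, for the independent case, checks that it factors into the product of the two marginals; the means, variances, and test point are illustrative assumptions.

```python
# Minimal sketch: the n-dimensional Gaussian pdf in vector form.
# Means, variances, and the test point are illustrative assumptions.
import numpy as np

def gaussian_pdf(y, m, R):
    """f(y) = exp(-(1/2)(y-m)^T R^-1 (y-m)) / ((2 pi)^(n/2) |R|^(1/2))."""
    n = len(m)
    d = y - m
    norm = (2 * np.pi) ** (n / 2) * np.sqrt(np.linalg.det(R))
    return np.exp(-0.5 * d @ np.linalg.solve(R, d)) / norm

m = np.array([0.0, 0.0])
R = np.diag([1.0, 4.0])               # independent components: sigma1^2=1, sigma2^2=4
y = np.array([0.5, -1.0])

f_joint = gaussian_pdf(y, m, R)
f1 = np.exp(-(y[0] - m[0])**2 / 2.0) / np.sqrt(2 * np.pi * 1.0)
f2 = np.exp(-(y[1] - m[1])**2 / 8.0) / np.sqrt(2 * np.pi * 4.0)
print(np.isclose(f_joint, f1 * f2))   # True: joint pdf = product of marginals
```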

24-34 Two-Dimensional Gaussian Distribution
[Slides 24-34: plots of the two-dimensional Gaussian pdf.]

35 Two-Dimensional Gaussian Distribution
[Figure: a two-dimensional Gaussian with Y1 and Y2 correlated; labels: 'More Energy', 'Less Energy'.]

36 Two-Dimensional Gaussian Distribution
[Figure label: 'Equal Energy'.] By rotating the axes, the resulting transformed random variables are uncorrelated.