Chapter 28 Canonical Correlation Regression Analysis Used for Temperature Retrieval

This method attempts to generate an optimal statistical relationship between a number of input variables (one or more of which are generally the parameters of interest) and a number of output variables (usually the observed spectral vectors).

Lacking probability distributions, we can best estimate $Y$ given $X$ by minimizing the squared error between a linear estimate of $Y$, expressed as $\hat{Y} = X\hat{B}$, and $Y$ itself:

$$\hat{B} = \arg\min_{B} \left\| Y - XB \right\|^{2} \tag{1}$$

Ordinary least squares (OLS) regression theory tells us that the optimal linear weighting of $X$ is given by

$$\hat{B} = (X^{T}X)^{-1}X^{T}Y \tag{2}$$

More fundamentally, the least-squares prediction of $Y$ is given by

$$\hat{Y} = X\hat{B} \tag{3}$$

or

$$\hat{Y} = X(X^{T}X)^{-1}X^{T}Y \tag{4}$$
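
A minimal NumPy sketch of Equations 2-4 (the function name `ols_fit` and the use of `lstsq` in place of an explicit inverse are illustrative choices, not the authors' implementation):

```python
import numpy as np

def ols_fit(X, Y):
    """Least-squares solution of Y ~ X B (Equations 2-4).

    lstsq solves min_B ||Y - X B||^2 directly, which is numerically
    safer than forming (X^T X)^{-1} X^T Y when X is ill-conditioned.
    """
    B_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return B_hat

# Usage (Equation 3): Y_hat = X @ ols_fit(X, Y)
```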

Note that if we mean-center $X$ (i.e., subtract the column mean vector $\mu$) and scale by the square root of the number of observations $n$, then $X^{T}X$ is the covariance matrix; i.e., $x$ is redefined as

$$x_{ij} \leftarrow \frac{x_{ij} - \mu_{j}}{\sqrt{n-1}} \tag{5}$$

so that

$$\Sigma_{XX} = X^{T}X \tag{6}$$

We can decompose the covariance matrix into its principal components (i.e., eigenvectors). The eigenvector decomposition can be expressed as

$$\Sigma_{XX} = E \Lambda E^{T} \tag{7}$$

where the columns of $E$ are the eigenvectors and $\Lambda$ is the diagonal matrix of eigenvalues. The $X$ data are projected (transformed) onto the principal component space according to:

$$Z = XE \tag{8}$$
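
A compact sketch of Equations 5-8, assuming the rows of `X` are observations (names such as `pca_project` are illustrative):

```python
import numpy as np

def pca_project(X):
    """Mean-center and scale X (Eq. 5) so X^T X is the covariance (Eq. 6),
    then decompose and project onto the principal components (Eq. 7-8)."""
    n = X.shape[0]
    Xc = (X - X.mean(axis=0)) / np.sqrt(n - 1)   # Eq. 5
    cov = Xc.T @ Xc                              # Eq. 6: covariance matrix
    lam, E = np.linalg.eigh(cov)                 # Eq. 7: cov = E diag(lam) E^T
    order = np.argsort(lam)[::-1]                # sort by descending variance
    E, lam = E[:, order], lam[order]
    Z = Xc @ E                                   # Eq. 8: projection onto PC space
    return Z, E, lam
```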

Canonical Correlation Analysis (CCA)

Principal components regression (PCR) assumes the principal components of $X$ will yield good predictions of $Y$, based only on the covariance of $X$. In canonical correlation analysis (CCA), the joint covariance of $X$ and $Y$ is considered. In this process, an optimal orthogonal space is produced in which the projections of both $X$ and $Y$ are maximally correlated.

The squared canonical correlations are the eigenvalues of the paired eigenproblems

$$\Sigma_{XX}^{-1}\Sigma_{XY}\Sigma_{YY}^{-1}\Sigma_{YX}\,A = A\Psi \tag{9}$$

$$\Sigma_{YY}^{-1}\Sigma_{YX}\Sigma_{XX}^{-1}\Sigma_{XY}\,B = B\Psi \tag{10}$$

where $\Psi$ is the $k \times k$ diagonal matrix of squared canonical correlations, $k = \min(p, q)$, $A$ is the matrix of column eigenvectors used to transform $X$, and $B$ is the matrix of column eigenvectors used to transform $Y$, i.e.,

$$U = XA \tag{11}$$

$$V = YB \tag{12}$$
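
A sketch of the eigenproblems in Equations 9-12, assuming mean-centered data matrices and non-singular covariances (the singular case is treated below); the function and variable names are illustrative:

```python
import numpy as np

def cca(X, Y):
    """Canonical correlation analysis via Equations 9-12.

    X (n x p) and Y (n x q) are assumed mean-centered.
    Returns transforms A, B and the squared canonical correlations psi.
    """
    n = X.shape[0]
    Sxx = X.T @ X / (n - 1)
    Syy = Y.T @ Y / (n - 1)
    Sxy = X.T @ Y / (n - 1)
    # Eq. 9: eigenproblem for A; the eigenvalues are the squared correlations
    M = np.linalg.solve(Sxx, Sxy) @ np.linalg.solve(Syy, Sxy.T)
    psi, A = np.linalg.eig(M)
    order = np.argsort(psi.real)[::-1]           # strongest correlations first
    psi, A = psi.real[order], A.real[:, order]
    # Eq. 10 yields B; equivalently B is proportional to Syy^-1 Syx A
    B = np.linalg.solve(Syy, Sxy.T @ A)
    return A, B, psi
```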

Hernandez-Baquero and Schott point out three useful properties:

1. The canonical variables are orthogonal.
2. The canonical correlations are the maximum linear correlations between the data sets.
3. $U^{T}U = I$ and $U^{T}V = \Psi^{1/2}$.

To find the estimates of $Y$ (i.e., $\hat{Y}$) from the predicted values, we regress the canonical variables using Equation 2, obtaining

$$\hat{V} = U(U^{T}U)^{-1}U^{T}V \tag{13}$$

Using property 3 above, this reduces to

$$\hat{V} = U\Psi^{1/2} \tag{14}$$

Letting $\hat{Y} = \hat{V}B^{-1}$ (inverting Equation 12), we obtain

$$\hat{Y} = XA\Psi^{1/2}B^{-1} \tag{15}$$
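
Putting the pieces together, a hedged sketch of the Equation 15 prediction, where `k` is the number of retained canonical variables and `pinv` stands in for $B^{-1}$ when $B$ is not square:

```python
import numpy as np

def ccr_predict(X, A, B, psi, k):
    """CCR prediction Y_hat = X A Psi^(1/2) B^-1 (Equation 15),
    truncated to the k leading canonical variables."""
    U_hat = X @ A[:, :k]                         # Eq. 11: canonical variables of X
    # Eq. 14: scale by the canonical correlations (sqrt of the squared values)
    V_hat = U_hat @ np.diag(np.sqrt(np.clip(psi[:k], 0.0, None)))
    # Eq. 15: map back to Y space; pinv generalizes B^-1 to non-square B
    return V_hat @ np.linalg.pinv(B[:, :k])
```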

[Figure: Canonical correlation analysis schematic — weights map the variables $x_1,\dots,x_p$ to canonical variables $u_1,\dots,u_r$ and $y_1,\dots,y_q$ to $v_1,\dots,v_r$; loadings map the canonical variables back to the original variables.]

From a practical standpoint, CCR still requires inverting the covariance matrices in Equations 9 and 10. In general, these matrices may be singular (i.e., of reduced rank), in which case their inverses do not exist. In these cases, we can use the singular value decomposition (SVD) to determine the rank and reconstruct the matrices using the SVD approach discussed elsewhere in these notes. The reconstructed matrices are more amenable to inversion.
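
A sketch of one such rank-truncated inversion (the relative tolerance `rtol` is an illustrative choice):

```python
import numpy as np

def svd_pinv(S, rtol=1e-10):
    """Rank-truncated pseudo-inverse of a (possibly singular) covariance
    matrix S. Singular values below rtol * s_max are treated as zero,
    handling the reduced-rank case described above."""
    U, s, Vt = np.linalg.svd(S)
    keep = s > rtol * s[0]
    return Vt[keep].T @ np.diag(1.0 / s[keep]) @ U[:, keep].T
```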

Examples

In the first case, the $Y$ ensemble was made up of $n$ temperature and water vapor profiles at $q$ altitudes, forming an $n \times 2q$ matrix (i.e., each row was the temperatures at $q$ altitudes followed by the $q$ corresponding water vapor values). The $X$ data were generated to simulate a nighttime MODIS Airborne Simulator (MAS) collection that took place over Death Valley at 21 km altitude.

The $X$ ensemble for this first test was made up of spectral radiance vectors with $p = 9$ elements (i.e., 9 spectral bands), forming an $n \times p$ matrix.

[Figure: Locations of radiosonde data]

[Table 1 from Hernandez-Baquero (IGARSS)]

[Figure: CCA implementation, combining radiosonde profiles and MODTRAN simulations as inputs to the CCA]

[Table 2 from Hernandez-Baquero (IGARSS)]

CCR was used to predict the $\tau(\lambda)$, $L_{u\lambda}$, and $L_{p\lambda}$ vectors and to solve for the surface-leaving radiance according to

$$L_{s\lambda} = \frac{L_{m\lambda} - L_{u\lambda}}{\tau(\lambda)} \tag{16}$$

where $L_{m\lambda}$ is the sensor-reaching (measured) radiance.
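
As a per-band illustration of Equation 16 (the array names `L_m`, `L_u`, and `tau` are assumptions, not the authors' variable names):

```python
def surface_leaving_radiance(L_m, L_u, tau):
    """Invert the per-band LWIR model L_m = tau * L_s + L_u (Equation 16)
    for the surface-leaving radiance; inputs are per-band NumPy arrays."""
    return (L_m - L_u) / tau
```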

A second experiment was conducted with 10 bands of LWIR data from the MASTER sensor, flown over two water targets and one land target on three different missions, as shown in Figure 3 of Hernandez-Baquero and Schott (SPIE).

[Figure 3 from Hernandez-Baquero and Schott (SPIE AeroSense)]

[Table 2 from Hernandez-Baquero and Schott (SPIE AeroSense)]

Brightness temperatures were used because they are more nearly linear with temperature than radiances, and CCR is a linear process.
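
For reference, a sketch of the radiance-to-brightness-temperature conversion via the inverse Planck function (SI units assumed; an illustrative implementation, not the authors' code):

```python
import numpy as np

H = 6.62607015e-34    # Planck constant [J s]
C = 2.99792458e8      # speed of light [m/s]
KB = 1.380649e-23     # Boltzmann constant [J/K]

def brightness_temperature(L, wavelength):
    """Equivalent blackbody temperature [K] for spectral radiance
    L [W m^-2 sr^-1 m^-1] at `wavelength` [m] (inverse Planck function)."""
    c1 = 2.0 * H * C**2
    c2 = H * C / KB
    return c2 / (wavelength * np.log1p(c1 / (wavelength**5 * L)))
```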

[Table 3 from Hernandez-Baquero and Schott (SPIE AeroSense)]