EMPIRICAL ORTHOGONAL FUNCTIONS

Principal Component Analysis or Empirical Orthogonal Functions

[Figure: subtidal flow at Chesapeake Bay entrance (cm/s)]

EOF analysis expresses the data as a linear combination of spatial predictors, or modes, that are normal (orthogonal) to each other. It is equivalent to "factor analysis," a data-reduction method used in the social sciences, and gives a compact representation of the temporal and spatial variability of several (or many) time series in terms of orthogonal functions (statistical modes).

Drum head (circular membrane) vibrating modes: https://en.wikipedia.org/wiki/Vibrations_of_a_circular_membrane

Write the data series U_m(t) = U(z_m, t) as:

U_m(t) = \sum_{i=1}^{M} a_i(t) \, f_{im}

where:
f_{im} are orthogonal spatial functions, also known as eigenvectors or EOFs
m indexes each of the time series (a function of depth or horizontal distance)
a_i(t) are the amplitudes or weights of the spatial functions as they change in time
\lambda_i are the eigenvalues of the problem (they represent the variance explained by each mode i)
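To make the decomposition concrete, here is a minimal synthetic sketch (hypothetical amplitudes and spatial functions, not the class data) that builds a data matrix from two orthogonal modes:

t  = (0:999)';                                % time axis
f1 = [1; 2; 3; 2; 1];   f1 = f1/norm(f1);     % mode-1 spatial function f_1m
f2 = [2; 1; 0; -1; -2]; f2 = f2/norm(f2);     % mode-2 spatial function f_2m (f1'*f2 = 0)
a1 = 3*sin(2*pi*t/100);                       % mode-1 amplitude a_1(t)
a2 = cos(2*pi*t/25);                          % mode-2 amplitude a_2(t)
U  = a1*f1' + a2*f2';                         % data matrix [1000 x 5], a sum of modes

Running an EOF analysis on U should recover f1 and f2 (up to sign) and amplitude variances proportional to the eigenvalues.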

[Figure: subtidal flow at Chesapeake Bay entrance (cm/s)]

[Figure: eigenvectors (spatial functions, EOFs) f_1m, f_2m, f_3m and amplitude time series a_1, a_2; the modes explain 85%, 13%, and 1% of the variability]

[Figure: measured series compared with reconstructions from modes 1+2 and modes 1+2+3]

Goal: write the data series U at any location m as the sum of M orthogonal spatial functions f_{im}:

U_m(t) = \sum_{i=1}^{M} a_i(t) \, f_{im}

where a_i(t) is the amplitude of the i-th orthogonal mode at any time t.

For the f_{im} to be orthogonal, we require that:

\sum_m f_{im} f_{jm} = \delta_{ij}

(two functions are orthogonal when the sum, or integral, of their product over a space or time is zero).

The orthogonality condition means that the time-averaged covariance of the amplitudes satisfies:

\overline{a_i(t) \, a_j(t)} = \delta_{ij} \lambda_i

where the overbar denotes a time average and \lambda_i is the variance of each orthogonal mode.
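A minimal numerical check of these orthogonality conditions, using hypothetical synthetic data (none of these variable names come from the class records):

rng(0);
U = randn(1000,5) * randn(5);         % hypothetical correlated data [time x location]
C = cov(U);                           % covariance matrix
[f, L] = eig(C);                      % columns of f are the spatial functions f_im
max(max(abs(f'*f - eye(5))))          % ~0: the f_im are orthonormal
a = U*f;                              % modal amplitudes a_i(t)
max(max(abs(cov(a) - L)))             % ~0: amplitude covariance is diagonal, equal to the eigenvalues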

If we form the covariance matrix of the data:

C_{mk} = \overline{U_m(t) \, U_k(t)}

(this is the mean-product matrix; it is the covariance matrix if the means of U_m(t) are removed), we can substitute the modal expansion U_m(t) = \sum_i a_i(t) f_{im}. Multiplying both sides by f_{ik}, summing over all k, and using the orthogonality condition gives the canonical form of the eigenvalue problem:

\sum_k C_{mk} \, f_{ik} = \lambda_i \, f_{im}

where the f_{im} are the eigenvectors (EOFs) and the \lambda_i are the eigenvalues.

This eigenvalue problem corresponds to a linear system of equations:

(C - \lambda I) f = 0

where C_{mk} is the covariance matrix, I is the unit matrix, and the f are the EOFs.

For a non-trivial solution (f ≠ 0):

\det(C - \lambda I) = 0

The time-dependent amplitudes of the i-th mode are then obtained by projecting the data onto the eigenvectors:

a_i(t) = \sum_m U_m(t) \, f_{im}

and the sum of the variances in the data equals the sum of the eigenvalues:

\sum_m \overline{U_m^2} = \sum_i \lambda_i
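A tiny worked example of the eigenvalue problem (a hypothetical 2x2 covariance matrix, not the class data):

C = [2 1; 1 2];                 % hypothetical symmetric covariance matrix
[f, L] = eig(C);                % eigenvalues are 1 and 3
det(C - L(1,1)*eye(2))          % ~0: det(C - lambda*I) = 0 at each eigenvalue
trace(C) - sum(diag(L))         % = 0: sum of variances equals sum of eigenvalues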

Data matrix: [6637, 18]; rows > columns (6637 time steps by 18 depth levels)

Matrix ul = [6637, 18]

>> uc = cov(ul);                                      % covariance matrix of the data
>> u1 = ul(:,1);                                      % first time series
>> sum((u1 - mean(u1)).^2) / (length(u1) - 1)         % variance of u1
ans = 9.6143
>> u2 = ul(:,2);                                      % second time series
>> sum((u1 - mean(u1)).*(u2 - mean(u2))) / (length(u1) - 1)   % covariance of u1 and u2
ans = 10.1154
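As a cross-check (assuming ul is still in the workspace), the two manual results above should match the corresponding entries of the covariance matrix:

>> uc(1,1)    % variance of column 1: 9.6143
>> uc(1,2)    % covariance of columns 1 and 2: 10.1154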

[Figure: covariance matrix; maximum covariance at the surface]

>> uc = cov(ul);
>> [v,d] = eig(uc);                  % v: eigenvectors (EOFs); d: diagonal matrix of eigenvalues (lambda)
>> lambda = diag(d)/sum(diag(d));    % eigenvalues as fractions of the total variance

>> uc = cov(ul);
>> [v,d] = eig(uc);
>> v = fliplr(v);    % flips the matrix left to right so the leading mode is in column 1
                     % (eig returns the eigenvalues in ascending order)
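One caveat worth adding (not on the original slides): if the eigenvector matrix is flipped, the eigenvalues should be reordered the same way so each mode stays paired with its variance. A minimal sketch:

>> lambda = flipud(diag(d))/sum(diag(d));   % variance fractions, largest first, matching fliplr(v)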

[Figure: EOF spatial structures; Mode 1 explains 85.3% and Mode 2 explains 13.2% of the variance]

>> ts = ul*v;    % modal amplitude time series; ts = [6637, 18]

[Figure: amplitude time series; Mode 1 (85.3%), Mode 2 (13.2%)]
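A quick consistency check (a sketch, assuming ul, v, and d are in the workspace): the variance of each amplitude time series should equal its eigenvalue:

>> lam = flipud(diag(d));            % eigenvalues ordered to match the flipped v
>> max(abs(var(ts)' - lam))          % ~0 up to round-off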

>> for k = 1:nz
       vt(k,:,:) = ts(:,k)*v(:,k)';   % contribution of mode k to the data
   end
% vt = [18, 6637, 18]: mode #, evolution in time, time series #

>> v1 = squeeze(vt(1,:,:))';   % mode-1 contribution, depth vs. time
>> v2 = squeeze(vt(2,:,:))';   % mode-2 contribution, depth vs. time
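Because v is orthonormal, summing all the modal contributions should recover the original data; a quick check (a sketch, assuming the variables above):

>> recon = squeeze(sum(vt,1));       % sum over modes -> [6637, 18]
>> max(abs(recon(:) - ul(:)))        % ~0 up to round-off, since ul*v*v' = ul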

[Figure: mode-1 contribution; depth (m) vs. time]

[Figure: mode-2 contribution; depth (m) vs. time]

Suggestions for the Final Project:
- Calculate complex EOFs of the separate records (raw and filtered); a minimal sketch follows this list
- Calculate complex EOFs of all records at the same time (raw and filtered)
- Describe and understand the spatial variability of the EOF modes
- Describe and understand the temporal variability of the EOF coefficients (amplitudes)
- Perform wavelet analysis (with coherence and cross-wavelet) of the EOF coefficients, which vary in time, and of possible parameters (e.g., wind) linked to their temporal variability
- You could also calculate the squared coherence between the EOF coefficients and the parameters possibly causing the variability
- Write up your story
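For the complex-EOF items, a minimal sketch of one common approach (via the Hilbert transform; this is an assumption about the method, not the course's prescribed recipe; hilbert requires the Signal Processing Toolbox, and ul stands for a [time x locations] record):

ulm = ul - repmat(mean(ul), size(ul,1), 1);    % remove the column means
ua  = hilbert(ulm);                            % analytic signal of each column
Cc  = (ua'*ua) / (size(ua,1) - 1);             % complex (Hermitian) covariance matrix
[vc, dc]   = eig(Cc);                          % complex EOFs and eigenvalues
[lam, idx] = sort(real(diag(dc)), 'descend');  % order modes by explained variance
vc  = vc(:, idx);                              % complex spatial modes
cts = ua*vc;                                   % complex amplitude time series
% spatial amplitude/phase: abs(vc), angle(vc); temporal: abs(cts), angle(cts)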