Introducing Error Co-variances in the ARM Variational Analysis. Minghua Zhang (Stony Brook University/SUNY) and Shaocheng Xie (Lawrence Livermore National Laboratory).

1. Introduction

Any optimization algorithm involves the minimization of a cost function. For a multi-variable analysis, such as the analysis of ARM data with several stations, many levels, and many time steps, the cost function contains an error covariance matrix. The elements of this matrix determine how observations are weighted to produce the analysis. In the NWP community, it is well known that the error covariance has a major impact on the quality of the optimal analysis. For example, if several measurements are highly correlated, each individual data entry should be given a small weight relative to an independent data entry. Because of the complexity of deriving the error covariance matrix from actual data, the current ARM variational analysis assumes that errors are independent. Since most of the errors are due to sampling rather than random instrument error, this assumption needs to be improved.

2. The Problem

For a field experiment such as TWP-ICE, the atmospheric state variables of winds $(u, v)$, temperature $(T)$, and specific humidity $(q)$ at $S$ stations, $K$ levels, and $N$ time steps are written as vertical profile vectors, e.g.

$$\mathbf{u}_{s,n} = (u_{s,1,n}, \ldots, u_{s,K,n})^{\mathrm T}$$

for the u wind at station $s$ and time step $n$, and similarly for $v$, $T$, and $q$. Denoting the truth by $\mathbf{x}^t$ and the observations by $\mathbf{x}^o$, we write the errors as

$$\boldsymbol{\varepsilon} = \mathbf{x}^o - \mathbf{x}^t.$$

Maximum likelihood, or minimum variance, leads to the cost function

$$I = (\mathbf{x} - \mathbf{x}^o)^{\mathrm T}\, \mathbf{C}^{-1}\, (\mathbf{x} - \mathbf{x}^o),$$

where $\mathbf{C} = \langle \boldsymbol{\varepsilon}\, \boldsymbol{\varepsilon}^{\mathrm T} \rangle$ is populated by the covariances among all stations, levels, variables, and time steps. The dimension of this matrix is $(4 \times K \times S \times N)^2$; in TWP-ICE, this is $(4 \times 45 \times 6 \times 201)^2 = 217{,}080^2 \approx 4.7 \times 10^{10}$ elements. Not only is this matrix too large to invert, but the covariances also cannot be easily obtained from data.

3. A New Method

Since the constraints are vertically integrated, we first assume errors to be correlated only in the vertical direction. This reduces the cost function to

$$I = \sum_{n=1}^{N} \sum_{s=1}^{S} (\mathbf{T}_{s,n} - \mathbf{T}^o_{s,n})^{\mathrm T}\, \mathbf{C}_T^{-1}\, (\mathbf{T}_{s,n} - \mathbf{T}^o_{s,n}) \;+\; \text{analogous terms in } u, v, \text{ and } q,$$

where $\mathbf{C}_u$, $\mathbf{C}_v$, and $\mathbf{C}_q$ are similarly defined. These matrices are $K \times K$ in dimension. The minimization of $I$ is subject to the five constraints of column-integrated conservation (mass, heat, moisture, and the two momentum components) at each time step.

To obtain each covariance matrix, an AR(1) model is used such that

$$\varepsilon_{k+1} = a\, \varepsilon_k + \eta_{k+1},$$

where $\eta$ is uncorrelated noise, and so on for the other variables. It can be shown from the AR(1) model that

$$\mathrm{cov}(\varepsilon_k, \varepsilon_{k'}) = \sigma_k\, \sigma_{k'}\, a^{|k - k'|},$$

where the lag-one autocorrelation $a$ and the error standard deviations $\sigma_k$ are calculated from the data. The covariance between two levels for the other variables can be calculated in the same way. It can also be shown that the vertical correlation matrix takes the form

$$\mathbf{R} = \begin{pmatrix} 1 & a & a^2 & \cdots & a^{K-1} \\ a & 1 & a & \cdots & a^{K-2} \\ \vdots & & \ddots & & \vdots \\ a^{K-1} & a^{K-2} & \cdots & a & 1 \end{pmatrix},$$

so that $\mathbf{C} = \mathbf{D}\mathbf{R}\mathbf{D}$ with $\mathbf{D} = \mathrm{diag}(\sigma_1, \ldots, \sigma_K)$ can be similarly obtained. This is a symmetric polynomial matrix that can be inverted using the Cholesky decomposition; when the correlation length scale is short, it is narrowly banded about the diagonal. The analysis is then calculated by minimizing $I$ with these $K \times K$ covariance matrices, subject to the column-integrated constraints above. The merit of this matrix structure is that it yields an explicit solution for the cost-function term in the Euler-Lagrange (E-L) equations.

4. Error Structures and Correlation Matrices

Analysis increments (errors in the observations relative to the first iteration of the variational analysis) for the TWP-ICE temperature and u wind are shown in Figure 1. The correlation matrices in the vertical direction for the two variables are shown in Figure 2. The matrices derived from the AR(1) model are shown in Figure 3. The AR(1) model captures the general features of the correlations.

[Figure 1: Analysis increments of temperature and u wind for TWP-ICE.]
[Figure 2: Vertical correlation matrices of the temperature and u-wind errors derived from the data.]
[Figure 3: The corresponding correlation matrices derived from the AR(1) model.]

5. Summary

An AR(1) model is used to characterize the error covariance in the vertical direction in the ARM variational analysis; it allows inversion of the covariance matrix for the minimization of the cost function. The model captures the de-correlation lengths and the different matrix structures of the different variables. The numerical algorithm is being tested and implemented in the variational analysis of the TWP-ICE data.
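One property worth making explicit (a standard result for AR(1) processes; the poster itself only notes that the matrix is narrowly banded for short correlation lengths): for constant $a$, the correlation matrix $\mathbf{R}$ of Section 3 has an exactly tridiagonal inverse,

$$\mathbf{R}^{-1} = \frac{1}{1 - a^2}
\begin{pmatrix}
1 & -a & & & \\
-a & 1 + a^2 & -a & & \\
 & \ddots & \ddots & \ddots & \\
 & & -a & 1 + a^2 & -a \\
 & & & -a & 1
\end{pmatrix},$$

so each level is coupled only to its immediate neighbors in the E-L equations.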
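As a minimal numerical sketch of Section 3 (Python/NumPy with SciPy; the function names, the synthetic increments, and the pooled estimator of $a$ are illustrative assumptions, not the authors' implementation), the following builds $\mathbf{C}$ from estimated $\sigma_k$ and $a$, applies $\mathbf{C}^{-1}$ through a Cholesky factorization, and verifies the banded structure of the inverse:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def ar1_covariance(sigma, a):
    """AR(1) vertical error covariance: C[k, k'] = sigma_k * sigma_k' * a**|k - k'|."""
    K = len(sigma)
    lags = np.abs(np.subtract.outer(np.arange(K), np.arange(K)))
    return np.outer(sigma, sigma) * a**lags

def fit_ar1(eps):
    """Estimate sigma_k and the lag-one autocorrelation a from increment profiles.

    eps has shape (n_samples, K): each row is one vertical profile of
    analysis increments (one station/time sample).
    """
    sigma = eps.std(axis=0)
    z = eps / sigma                       # standardize each level
    a = np.mean(z[:, :-1] * z[:, 1:])     # pooled lag-one correlation in the vertical
    return sigma, a

# Synthetic AR(1) increment profiles (stand-ins for the TWP-ICE increments).
rng = np.random.default_rng(0)
K, n_samples, a_true = 45, 500, 0.6
eps = np.empty((n_samples, K))
eps[:, 0] = rng.standard_normal(n_samples)
for k in range(1, K):
    eps[:, k] = a_true * eps[:, k - 1] + np.sqrt(1 - a_true**2) * rng.standard_normal(n_samples)

sigma, a = fit_ar1(eps)                   # a recovers roughly 0.6
C = ar1_covariance(sigma, a)

# Apply C^{-1} to a vector via Cholesky, as in the cost-function minimization;
# the full inverse is never formed explicitly.
cf = cho_factor(C)
r = rng.standard_normal(K)
w = cho_solve(cf, r)                      # w = C^{-1} r

# The inverse of an AR(1) covariance is tridiagonal (see the matrix above):
# everything beyond the first off-diagonal vanishes to machine precision.
print(np.max(np.abs(np.triu(np.linalg.inv(C), k=2))))
```

The estimation step is the practical point: $a$ and $\sigma_k$ are the only quantities that must be computed from data, after which the full $K \times K$ covariance follows from the closed AR(1) form.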
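The explicit solution mentioned at the end of Section 3 can be illustrated for linear constraints. Minimizing $I = (\mathbf{x} - \mathbf{x}^o)^{\mathrm T}\mathbf{C}^{-1}(\mathbf{x} - \mathbf{x}^o)$ subject to $\mathbf{A}\mathbf{x} = \mathbf{b}$ gives, from the E-L (Lagrange-multiplier) conditions, $\mathbf{x} = \mathbf{x}^o + \mathbf{C}\mathbf{A}^{\mathrm T}(\mathbf{A}\mathbf{C}\mathbf{A}^{\mathrm T})^{-1}(\mathbf{b} - \mathbf{A}\mathbf{x}^o)$. The sketch below uses a single hypothetical column-integral constraint; the actual TWP-ICE budget constraints are more involved and are applied at every time step, so this shows only the linearized step:

```python
import numpy as np

def constrained_analysis(x_obs, C, A, b):
    """Minimize (x - x_obs)^T C^{-1} (x - x_obs) subject to A x = b.

    Explicit Lagrange-multiplier (Euler-Lagrange) solution:
        x = x_obs + C A^T (A C A^T)^{-1} (b - A x_obs).
    Only the small (n_constraints x n_constraints) system A C A^T is solved.
    """
    lam = np.linalg.solve(A @ C @ A.T, b - A @ x_obs)
    return x_obs + C @ (A.T @ lam)

# Toy setup: one K-level temperature profile whose weighted vertical integral
# is nudged to a hypothetical budget value. Weights and numbers are illustrative.
K = 45
rng = np.random.default_rng(1)
x_obs = 300.0 + rng.standard_normal(K)    # "observed" profile
w = np.full(K, 1.0 / K)                   # column-integration weights
A = w[None, :]                            # one linear constraint: integral of x
b = A @ x_obs + 0.5                       # require the integral to rise by 0.5

# AR(1) covariance from Section 3 (constant sigma = 1, a = 0.6 for illustration)
lags = np.abs(np.subtract.outer(np.arange(K), np.arange(K)))
C = 0.6**lags

x = constrained_analysis(x_obs, C, A, b)
print((A @ x - b).item())                 # ~0: constraint satisfied exactly
print(np.round(x - x_obs, 3)[:5])         # smooth, vertically correlated increment
```

Because the increment is proportional to $\mathbf{C}\mathbf{A}^{\mathrm T}$, the AR(1) covariance spreads the budget adjustment smoothly across correlated levels instead of weighting every level independently, which is the behavior that Section 1 argues for.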