Data assimilation into a coupled atmosphere-ocean model with the ensemble Kalman filter: On-line estimation of observation error covariance for ensemble-based filters. Genta Ueno, The Institute of Statistical Mathematics


1 Data assimilation into a coupled atmosphere-ocean model with the ensemble Kalman filter: On-line estimation of observation error covariance for ensemble-based filters. Genta Ueno, The Institute of Statistical Mathematics

2 Covariance matrices in DA: the state space model and the cost function.
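The slide's equations are lost in the transcript. A standard form of the state space model and cost function, consistent with the parameters B, Q_t, R_t defined later on slide 22 (the exact slide content may differ), is:

```latex
x_t = f_t(x_{t-1}) + v_t, \quad v_t \sim N(0, Q_t) \qquad \text{(system model)}
\\
y_t = H_t x_t + w_t, \quad w_t \sim N(0, R_t) \qquad \text{(observation model)}
\\
J = \tfrac{1}{2}\,(x_0 - \bar{x}_0)^{\top} B^{-1} (x_0 - \bar{x}_0)
  + \tfrac{1}{2}\sum_{t} v_t^{\top} Q_t^{-1} v_t
  + \tfrac{1}{2}\sum_{t} w_t^{\top} R_t^{-1} w_t
```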

3 Filtered estimates with different θ. [Figure: filtered estimates with large Q (large system noise) and with large R (large observation noise).] Which one should be chosen?

4 Ensemble approximation of distributions. A Gaussian distribution is represented exactly by the Kalman filter (KF). A non-Gaussian distribution requires an ensemble (particle) approximation, as used by the ensemble Kalman filter (EnKF) and the particle filter (PF).

5 Kalman gain. Simulation carries the filtered distribution at t-1 to the predicted distribution at t; applying the Kalman gain to the observation yields the filtered distribution at t.

6 EnKF and PF. The EnKF approximates the Kalman gain of the KF; the PF instead uses resampling.
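As a concrete sketch of the "approximate Kalman gain" in the EnKF, here is a generic stochastic (perturbed-observation) analysis step; this is textbook EnKF, not code from the talk, and all names are illustrative:

```python
import numpy as np

def enkf_analysis(X, y, H, R, rng):
    """One stochastic EnKF analysis step with perturbed observations.

    X : (N, k) forecast ensemble, y : (p,) observation,
    H : (p, k) linear observation operator, R : (p, p) observation error covariance.
    """
    N = X.shape[0]
    A = X - X.mean(axis=0)                        # ensemble anomalies
    P = A.T @ A / (N - 1)                         # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # approximate Kalman gain
    # perturb the observation so the analysis ensemble keeps the correct spread
    Y = y + rng.multivariate_normal(np.zeros(len(y)), R, size=N)
    return X + (Y - X @ H.T) @ K.T                # analysis ensemble
```

The PF, by contrast, weights each member by its likelihood and resamples, avoiding the Gaussian update altogether.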

7 Likelihood. Which is the most likely distribution to produce the observation y_obs? Likelihood: L(θ) = p(y_obs | θ). In this example, θ_3 is most likely.

8 Likelihood of time series. Find θ that maximizes L(θ). In practice, the log-likelihood ℓ(θ) = log L(θ) = Σ_{t=1}^{T} log p(y_t | y_{1:t-1}, θ) is easier to handle.

9 Likelihood of time series. The likelihood is computed from the observation model and the predicted distribution. The predicted distribution is non-Gaussian due to the nonlinear model; if it were Gaussian, the likelihood would be available in closed form.
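The Gaussian case alluded to above can be written out; this is a standard reconstruction (the slide's equation image is lost), assuming a linear observation operator H_t, predicted mean \bar{x}_{t|t-1}, and predicted covariance V_{t|t-1}:

```latex
p(y_t \mid y_{1:t-1}, \theta)
 = N\!\left(y_t;\; H_t \bar{x}_{t|t-1},\; H_t V_{t|t-1} H_t^{\top} + R_t\right),
\qquad
\ell(\theta) = -\frac{1}{2} \sum_{t=1}^{T}
 \left[ \log\left|2\pi\!\left(H_t V_{t|t-1} H_t^{\top} + R_t\right)\right|
 + d_t^{\top} \left(H_t V_{t|t-1} H_t^{\top} + R_t\right)^{-1} d_t \right],
```

with the innovation d_t = y_t - H_t \bar{x}_{t|t-1}.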

10 Estimation of covariance matrices
1. With the assumption of a Gaussian state distribution (minimizing the innovation, i.e. the predicted error):
- Naive: ensemble mean and covariance of the state
- Bayes estimation: adjustment according to a cost function
- Covariance matching: matching with the innovation covariance
2. Without the assumption of a Gaussian state distribution:
- Maximum likelihood: ensemble mean of the likelihood; this study (Ueno et al., Q. J. R. Meteorol. Soc., 2010)

11 Ensemble approximation of the likelihood. Find θ that maximizes the ensemble-approximated log-likelihood, where p(y_t | y_{1:t-1}, θ) is approximated by the ensemble mean, over predicted members x_{t|t-1}^{(n)}, of each member's likelihood under the observation model.
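A minimal sketch of this ensemble-mean likelihood, assuming a linear observation operator H and Gaussian observation noise N(0, R); the function name and array shapes are illustrative, not from the talk:

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.special import logsumexp

def ensemble_log_likelihood(y, X_pred, H, R):
    """Ensemble-approximated log-likelihood of a time series.

    y      : (T, p) observations
    X_pred : (T, N, k) predicted ensemble members x_{t|t-1}^{(n)}
    H      : (p, k) linear observation operator
    R      : (p, p) observation error covariance (the parameter to tune)
    """
    T, N, _ = X_pred.shape
    ll = 0.0
    for t in range(T):
        # log p(y_t | x^{(n)}) for each member, under y_t = H x_t + w_t, w_t ~ N(0, R)
        log_pdfs = [multivariate_normal.logpdf(y[t], mean=H @ X_pred[t, n], cov=R)
                    for n in range(N)]
        # ensemble mean of the likelihood: log (1/N) sum_n p(y_t | x^{(n)})
        ll += logsumexp(log_pdfs) - np.log(N)
    return ll
```

Maximizing this quantity over the parameters of R and Q (e.g. magnitude and correlation lengths, as on slide 20) gives the MLE used in the first half.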

12 Regularization of R_t. The sample covariance is singular because the ensemble size n is far smaller than the observation dimension p (n << p). Regularize with a Gaussian graphical model (12-neighborhood).
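A quick numerical illustration of why regularization is needed (the dimensions here are arbitrary, not those of the experiment): the sample covariance of n vectors in p dimensions has rank at most n - 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 8, 50                          # ensemble size n << observation dimension p
Y = rng.standard_normal((n, p))       # n samples of a p-dimensional vector
S = np.cov(Y, rowvar=False)           # p x p sample covariance
rank = np.linalg.matrix_rank(S)       # at most n - 1, so S is singular (rank < p)
```

Because S cannot be inverted in the likelihood, the slide imposes a Gaussian graphical model (a sparse inverse covariance defined on a spatial neighborhood) on R_t.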

13 Maximum likelihood

14 Data and model. [Figure: longitude-time section; the color shows SSH anomalies.]

15 Filtered estimates with different θ. [Figure: filtered estimates with large Q (large system noise) and with large R (large observation noise).] Which one should be chosen?

16 System noise: magnitude

17 System noise: zonal correlation length

18 System noise: meridional correlation length

19 Observation noise: magnitude

20 Estimates with MLE: magnitude = (5.95 cm)^2, correlation lengths = (2.38, 2.52 deg). [Figure: filtered and smoothed estimates, longitude vs. year.]

21 Summary for the first half
- Maximum likelihood estimation can be carried out even for a non-Gaussian state distribution, using the ensemble approximation.
- Applicable to ensemble-based filters such as the EnKF and the PF.
- Estimated parameters: … Tractable for just four parameters?
- Ueno et al., Q. J. R. Meteorol. Soc. (2010)

22 Motivation for the second half
The output of DA (the "analysis") varies with the prescribed parameter θ = (B, Q_1:T, R_1:T):
- B: covariance matrix of the initial state (i.e. V_0|0)
- Q_t: covariance matrix of the system noise
- R_t: covariance matrix of the observation noise
My interest is how to construct an optimal θ for a fixed dynamic model. Only four parameters were estimated so far; we now allow more degrees of freedom in R_1:T, up to (dim y_t)^2 / 2 elements at maximum.

23 Likelihood of R_t. [Slide equations: the current assumption on R_t and the resulting log-likelihood ℓ_t(R_1:t).]
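A plausible reconstruction of the lost log-likelihood, following the ensemble-approximated likelihood of the first half (slide 11) and assuming a linear observation operator H:

```latex
\ell_t(R_{1:t}) = \sum_{s=1}^{t} \log \frac{1}{N} \sum_{n=1}^{N}
N\!\left(y_s;\; H x_{s|s-1}^{(n)},\; R_s\right)
```

The predicted members x_{s|s-1}^{(n)} depend on R_{1:s-1} through the filtering recursion, which is why ℓ_t is a function of the whole sequence R_{1:t}.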

24 Estimation design. Use ℓ_t(R_1:t) for estimating R_t only. R_1:t-1 are of course also parameters of ℓ_t(R_1:t), but they are assumed to have been estimated with the earlier log-likelihoods ℓ_1(R_1), …, ℓ_{t-1}(R_1:t-1), and are held fixed at the current time step t. R_t is then estimated at each time step t. Bad news: the estimated R_t may vary significantly between time steps, so a time-constant R cannot be estimated within the present framework.
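The transcript omits the closed-form optimality condition mentioned on slide 31. As a hedged sketch, setting the gradient of the single-step ensemble-approximated likelihood to zero gives R as a likelihood-weighted mean of innovation outer products, suggesting the fixed-point iteration below (assuming a full, unstructured R_t and a linear observation operator H; names are illustrative, and the update actually used in the talk may differ):

```python
import numpy as np
from scipy.stats import multivariate_normal

def estimate_Rt(y_t, X_pred, H, R_init, n_iter=5):
    """Fixed-point iteration for R_t at one time step.

    Maximizes (1/N) * sum_n N(y_t; H x^{(n)}, R) over a full covariance R.
    The stationarity condition is
        R = sum_n w_n d_n d_n^T,   w_n ∝ N(d_n; 0, R),
    with member innovations d_n = y_t - H x^{(n)}.
    """
    N = X_pred.shape[0]
    D = y_t - X_pred @ H.T                      # innovations, shape (N, p)
    R = R_init.copy()
    for _ in range(n_iter):
        logw = np.array([multivariate_normal.logpdf(D[n], cov=R) for n in range(N)])
        w = np.exp(logw - logw.max())
        w /= w.sum()                            # normalized member weights
        R = sum(w[n] * np.outer(D[n], D[n]) for n in range(N))
    return R
```

The small iteration counts reported on slide 30 (2-4) are consistent with such a fixed-point scheme converging quickly.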

25 Experiment. [Slide equations: the assumed structures of R_t. The compared cases are a scalar multiple of a fixed matrix Σ, a scalar multiple of the identity I, and a diagonal matrix with independent elements (cf. slide 31).]

26 Data and model. [Figure: longitude-time section; the color shows SSH anomalies.]

27 Estimate of R_t (temporal mean of variances and covariances)
- Σ-scaling case: output similar to Σ.
- Diagonal case: large variance near the equator, small variance off the equator.
- I-scaling case: uniform variance with an intermediate value.

28 Estimate of R_t (spatial mean of the variance, as a time series)
- Σ-scaling case: small variance in the first half of the period, large in the second half.
- Diagonal case: large variance around 1998.
- I-scaling case: similar to the diagonal case.

29 Filtered estimates
- Σ-scaling case: false positive anomalies in the east.
- I-scaling case: negative anomalies in the east, but the equatorial Kelvin waves are unclear.
- Diagonal case: both the negative anomalies and the equatorial Kelvin waves are reproduced.

30 Number of iterations. Only 2-4 iterations are needed; a smaller number of parameters requires a larger number of iterations.

31 Summary of the second half
- An on-line, iterative algorithm for estimating the observation error covariance matrix R_t.
- The optimality condition for R_t leads to a condition on R_t in closed form.
- Applied to a coupled atmosphere-ocean model; only 4-5 iterations are necessary.
- A diagonal matrix with independent elements produces more likely estimates than scalar multiples of fixed matrices (Σ or I).