Chapter 27: Linear Filtering - Part I: Kalman Filter (standard Kalman filtering – linear dynamics)

Kalman Filter Model
Dynamics – discrete time:
  x_{k+1} = M(x_k) + w_{k+1}, or M x_k + w_{k+1} in the linear case
  x_k: true state, w_k: model error
  x_k ∈ R^n, M: R^n → R^n, M ∈ R^{n×n}
Observation:
  z_k = h(x_k) + v_k, or H x_k + v_k in the linear case
  h: R^n → R^m, H ∈ R^{m×n}, v_k ∈ R^m
  E(v_k) = 0, COV(v_k) = R_k
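
A minimal sketch in Python (NumPy) of simulating this linear state-space model; the matrices and noise levels below are illustrative assumptions, not values from the chapter:

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative 2-state, 1-observation linear system (all values assumed).
    M = np.array([[1.0, 0.1],
                  [0.0, 1.0]])            # dynamics matrix, n x n
    H = np.array([[1.0, 0.0]])            # observation matrix, m x n
    Q = 0.01 * np.eye(2)                  # model-error covariance COV(w_k)
    R = np.array([[0.25]])                # observation-error covariance COV(v_k)

    x = np.array([0.0, 1.0])              # true initial state
    states, obs = [], []
    for k in range(50):
        x = M @ x + rng.multivariate_normal(np.zeros(2), Q)   # x_{k+1} = M x_k + w_{k+1}
        z = H @ x + rng.multivariate_normal(np.zeros(1), R)   # z_k = H x_k + v_k
        states.append(x)
        obs.append(z)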

Filtering, Smoothing, Prediction
Given F_N = {z_i | 1 ≤ i ≤ N} [Wiener, 1942] [Kolmogorov, 1942], the problem of estimating x_k is called:
  smoothing if k < N
  filtering if k = N
  prediction if k > N
(See page 464 for this classification.)

Statement of Problem – Linear case
  x_0 ~ N(m_0, P_0)
  x_{k+1} = M_k x_k + w_{k+1}, w_k ~ N(0, Q_k)
  z_k = H_k x_k + v_k, v_k ~ N(0, R_k)
Given F_k = {z_j | 1 ≤ j ≤ k}, find the best estimate \hat{x}_k of x_k that minimizes the mean squared error
  E[(x_k - \hat{x}_k)^T (x_k - \hat{x}_k)] = tr[E((x_k - \hat{x}_k)(x_k - \hat{x}_k)^T)] = tr[\hat{P}_k].
If \hat{x}_k is also unbiased, it is the minimum-variance estimate.

Model Forecast Step
At time k = 0 the initial information F_0 is given: \hat{x}_0 = m_0, \hat{P}_0 = P_0.
Given \hat{x}_0, the predictable part of x_1 is the forecast x_1^f = M_0 \hat{x}_0.
Error in the prediction: e_1^f = x_1 - x_1^f = M_0 (x_0 - \hat{x}_0) + w_1 = M_0 e_0 + w_1.

Forecast covariance
  P_1^f = E[e_1^f (e_1^f)^T] = M_0 P_0 M_0^T + Q_1
Predicted observation:
  E[z_1 | x_1 = x_1^f] = E[H_1 x_1 + v_1 | x_1 = x_1^f] = H_1 x_1^f
  COV(z_1 | x_1^f) = E{[z_1 - E(z_1 | x_1^f)][z_1 - E(z_1 | x_1^f)]^T} = E(v_1 v_1^T) = R_1

Basic idea
At time k = 1 the forecast x_1^f with covariance P_1^f and the observation z_1 with covariance R_1 are available. Fast-forwarding from time k-1 to k, the same picture holds with x_k^f, P_k^f and z_k, R_k.

Forecast from k-1 to k
  ∴ x_k^f = M_{k-1} \hat{x}_{k-1}
  ∴ P_k^f = M_{k-1} \hat{P}_{k-1} M_{k-1}^T + Q_k
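
A minimal sketch of this forecast step in Python; the function and variable names (`forecast`, `x_hat`, `P_hat`) are assumptions for illustration:

    import numpy as np

    def forecast(x_hat, P_hat, M, Q):
        """Forecast step: propagate the analysis at k-1 to time k."""
        x_f = M @ x_hat                  # x_k^f = M_{k-1} x_hat_{k-1}
        P_f = M @ P_hat @ M.T + Q        # P_k^f = M_{k-1} P_hat_{k-1} M_{k-1}^T + Q_k
        return x_f, P_f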

Observations at time k
  Actual observation: z_k = H_k x_k + v_k
  Model-predicted observation: z_k^f = H_k x_k^f

Data Assimilation Step
  \hat{x}_k = x_k^f + K_k [z_k - H_k x_k^f]
  x_k^f: the prior (forecast), K_k: the Kalman gain, z_k - H_k x_k^f: the innovation

Posterior estimate – also known as the analysis
  \hat{x}_k = x_k^f + K_k [z_k - H_k x_k^f] = (I - K_k H_k) x_k^f + K_k z_k
Substitute z_k = H_k x_k + v_k and simplify to get the analysis error:
  \hat{e}_k = x_k - \hat{x}_k = (I - K_k H_k) e_k^f - K_k v_k

Covariance of the analysis
  \hat{P}_k = E[\hat{e}_k \hat{e}_k^T] = (I - K_k H_k) P_k^f (I - K_k H_k)^T + K_k R_k K_k^T,
  where D_k = H_k P_k^f H_k^T + R_k denotes the innovation covariance.

Conditions on the Kalman gain – minimization of the total variance tr(\hat{P}_k):
  K_k = P_k^f H_k^T D_k^{-1} = P_k^f H_k^T [H_k P_k^f H_k^T + R_k]^{-1}
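
A minimal sketch of the data assimilation (analysis) step using this gain; the function and variable names are assumptions for illustration:

    import numpy as np

    def analysis(x_f, P_f, z, H, R):
        """Data assimilation step: update the forecast with observation z."""
        D = H @ P_f @ H.T + R                          # innovation covariance D_k
        K = P_f @ H.T @ np.linalg.inv(D)               # Kalman gain K_k = P_f H^T D^{-1}
        x_hat = x_f + K @ (z - H @ x_f)                # analysis estimate
        P_hat = (np.eye(P_f.shape[0]) - K @ H) @ P_f   # analysis covariance
        return x_hat, P_hat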

Comments on the Kalman gain
3. An interpretation of K_k when n = m and H_k = I:
  Let P_k^f = Diag[P_11^f, P_22^f, ..., P_nn^f] and R_k = Diag[R_11, R_22, ..., R_nn].
  K_k = P_k^f H_k^T [H_k P_k^f H_k^T + R_k]^{-1} = P_k^f [P_k^f + R_k]^{-1}
      = Diag[P_11^f/(P_11^f + R_11), P_22^f/(P_22^f + R_22), ..., P_nn^f/(P_nn^f + R_nn)]
  \hat{x}_k = x_k^f + K_k [z_k - H_k x_k^f] = x_k^f + K_k [z_k - x_k^f] = (I - K_k) x_k^f + K_k z_k
  ∴ If P_ii^f is large, the observation component z_{i,k} receives a larger weight.
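
A small numeric illustration of these componentwise weights (the variances below are assumed values):

    import numpy as np

    P_f = np.diag([4.0, 0.1])            # forecast error variances, one large and one small
    R = np.diag([1.0, 1.0])              # observation error variances
    K = P_f @ np.linalg.inv(P_f + R)     # K_k = P_f [P_f + R]^{-1} when H = I
    print(np.diag(K))                    # [0.8, 0.0909...]: the larger P_ii^f, the larger the weight on z_i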

Comments – special cases
4. \hat{P}_k, P_k^f and K_k are independent of the observations, so they can be computed off-line.
5. No observations:
  \hat{x}_k = x_k^f and \hat{P}_k = P_k^f for all k ≥ 0
  x_k^f = M_{k-1} x_{k-1}^f = M_{k-1} M_{k-2} M_{k-3} ... M_1 M_0 x_0^f
  P_k^f = M_{k-1} P_{k-1}^f M_{k-1}^T + Q_k = M(k-1:0) P_0 M^T(k-1:0) + Σ_j M(k-1:j+1) Q_{j+1} M^T(k-1:j+1),
  where M(i:j) = M_i M_{i-1} M_{i-2} ... M_j.
  If Q_j ≡ 0, then P_k^f = M(k-1:0) P_0 M^T(k-1:0).

Special cases – continued
6. No dynamics (static case):
  M_k = I, w_k ≡ 0, Q_k ≡ 0
  x_{k+1} = x_k = x
  z_k = H_k x + v_k
  x_k^f = \hat{x}_{k-1} with \hat{x}_0 = E(x_0) = m_0
  P_k^f = \hat{P}_{k-1} with \hat{P}_0 = P_0
  => same as ( ) – ( ): the static case.

Special cases
7. When observations are perfect: R_k ≡ 0
  => K_k = P_k^f H_k^T [H_k P_k^f H_k^T + R_k]^{-1} = P_k^f H_k^T [H_k P_k^f H_k^T]^{-1}
  H_k: m × n, P_k^f: n × n, H_k^T: n × m => [H_k P_k^f H_k^T]^{-1}: m × m
  Recall: \hat{P}_k = (I - K_k H_k) P_k^f
  From ( ): (K_k H_k)^2 = K_k H_k, i.e. K_k H_k is idempotent.

Special cases – continued
  ∴ (I - K_k H_k) = (I - K_k H_k)^2, i.e. idempotent.
  Fact: an idempotent matrix other than I is singular.
  => Rank(I - K_k H_k) ≤ n - 1
  ∴ Rank(\hat{P}_k) ≤ min{Rank(I - K_k H_k), Rank(P_k^f)} ≤ n - 1, so \hat{P}_k is singular.
  ∴ When R_k is small, \hat{P}_k is nearly singular, which causes computational instability.
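
A quick numeric demonstration of this rank deficiency in the perfect-observation case (the forecast covariance below is an assumed value):

    import numpy as np

    P_f = np.array([[2.0, 0.5],
                    [0.5, 1.0]])                   # forecast covariance, n = 2 (assumed)
    H = np.array([[1.0, 0.0]])                     # m = 1 observation operator
    K = P_f @ H.T @ np.linalg.inv(H @ P_f @ H.T)   # gain with R = 0
    A = np.eye(2) - K @ H
    print(np.allclose(A @ A, A))                   # True: I - K H is idempotent
    print(np.linalg.matrix_rank(A @ P_f))          # 1 = n - 1: the analysis covariance is singular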

Special cases
8. Residual checking
  r_k = z_k - H_k x_k^f is the innovation, and \hat{x}_k = x_k^f + K_k r_k.
  r_k = z_k - H_k x_k^f = H_k x_k + v_k - H_k x_k^f = H_k (x_k - x_k^f) + v_k = H_k e_k^f + v_k
  ∴ COV(r_k) = H_k P_k^f H_k^T + R_k
  ∴ By computing r_k and comparing it against this covariance, we can check whether the filter is working properly.
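
A minimal sketch of such a residual check; the function name and the use of the normalized innovation squared as the diagnostic are assumptions for illustration:

    import numpy as np

    def innovation_check(z, x_f, P_f, H, R):
        """Compute the innovation r_k and its normalized squared magnitude."""
        r = z - H @ x_f                          # r_k = z_k - H_k x_k^f
        S = H @ P_f @ H.T + R                    # COV(r_k) = H_k P_k^f H_k^T + R_k
        nis = float(r.T @ np.linalg.inv(S) @ r)  # should average about m (= dim z) for a healthy filter
        return r, nis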

10. Computational Cost

Example: Scalar Dynamics with No Observation
  x_k = a x_{k-1} + w_k, a > 0, w_k ~ N(0, q), x_0 ~ N(m_0, P_0)
  E(x_k) = a^k E(x_0) = a^k m_0
  P_k = Var(x_k) = Var(a x_{k-1} + w_k) = a^2 P_{k-1} + q
  ∴ P_k = a^{2k} P_0 + q (a^{2k} - 1)/(a^2 - 1)
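
A quick check of this recursion against the closed form (the parameter values are assumptions):

    a, q, P0 = 0.9, 0.5, 2.0            # illustrative values
    P = P0
    for k in range(20):
        P = a**2 * P + q                # P_k = a^2 P_{k-1} + q
    closed = a**40 * P0 + q * (a**40 - 1) / (a**2 - 1)   # P_20 from the closed form
    print(P, closed)                    # the two agree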

Scalar dynamics
Note: for given m_0, P_0, q, the behavior of the moments depends on a:
1. 0 < a < 1: lim E(x_k) = 0, lim P_k = q/(1 - a^2)
2. 1 < a < ∞: E(x_k) = a^k m_0 diverges (unless m_0 = 0), lim P_k = ∞
3. a = 1: x_k = x_0 + Σ_{j=1}^{k} w_j, so E(x_k) = m_0 and P_k = P_0 + kq

Example: Kalman Filtering (scalar case)
  x_{k+1} = a x_k + w_{k+1}, w_{k+1} ~ N(0, q)
  z_k = h x_k + v_k, v_k ~ N(0, r)
Forecast: x_{k+1}^f = a \hat{x}_k, P_{k+1}^f = a^2 \hat{P}_k + q
Analysis: \hat{x}_k = x_k^f + K_k [z_k - h x_k^f]
  K_k = P_k^f h [h^2 P_k^f + r]^{-1} = \hat{P}_k h r^{-1}
  \hat{P}_k = P_k^f - (P_k^f)^2 h^2 [h^2 P_k^f + r]^{-1} = P_k^f r [h^2 P_k^f + r]^{-1}
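
A minimal runnable sketch of this scalar filter; the parameter values and the simulated data are assumptions for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    a, h, q, r = 0.95, 1.0, 0.1, 0.5        # illustrative parameters
    m0, P0 = 0.0, 1.0

    # Simulate a truth trajectory and noisy observations.
    x_true, zs = m0, []
    for _ in range(100):
        x_true = a * x_true + rng.normal(0.0, np.sqrt(q))
        zs.append(h * x_true + rng.normal(0.0, np.sqrt(r)))

    # Scalar Kalman filter.
    x_hat, P_hat = m0, P0
    for z in zs:
        x_f = a * x_hat                      # x_{k+1}^f = a x_hat_k
        P_f = a * a * P_hat + q              # P_{k+1}^f = a^2 P_hat_k + q
        K = P_f * h / (h * h * P_f + r)      # Kalman gain
        x_hat = x_f + K * (z - h * x_f)      # analysis estimate
        P_hat = P_f * r / (h * h * P_f + r)  # analysis variance
    print(x_hat, x_true, P_hat)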

Recurrences: Analysis of Stability (HW 1)
  x_{k+1}^f = a(1 - K_k h) x_k^f + a K_k z_k
  e_{k+1}^f = a(1 - K_k h) e_k^f + a K_k v_k + w_{k+1}
  P_{k+1}^f = a^2 \hat{P}_k + q, with \hat{P}_k = P_k^f r (h^2 P_k^f + r)^{-1}

Example continued
  P_{k+1}^f = a^2 P_k^f r / (h^2 P_k^f + r) + q
Divide by r:
  P_{k+1}^f / r = a^2 (P_k^f / r) / [h^2 (P_k^f / r) + 1] + q/r
With p_k = P_k^f / r and α = q/r (the noise-variance ratio):
  p_{k+1} = a^2 p_k / (h^2 p_k + 1) + α
This is a Riccati equation (first-order, scalar, nonlinear).

Asymptotic Properties
Let h = 1:
  p_{k+1} = a^2 p_k / (p_k + 1) + α
Let δ_k = p_{k+1} - p_k = a^2 p_k/(p_k + 1) - p_k + α = [-p_k^2 + p_k(a^2 + α - 1) + α] / (p_k + 1)
∴ δ_k = g(p_k)/(p_k + 1), where g(p_k) = -p_k^2 + p_k(a^2 + α - 1) + α

Example continued
When p_{k+1} = p_k we have δ_k = 0, i.e. an equilibrium; δ_k = 0 iff g(p_k) = 0:
  -p_k^2 + p_k(a^2 + α - 1) + α = 0, i.e. -p_k^2 + β p_k + α = 0 with β = a^2 + α - 1.
This quadratic has one positive root p* and one negative root p_* (the product of the roots is -α < 0).
Evaluate the derivative of g at p* and p_*: g'(p) = -2p + β, so g'(p*) < 0 and g'(p_*) > 0.

Example continued
∴ p* is an attractor – stable; p_* is a repellor – unstable.
Thus p_k → p*, the fixed point satisfying p* = a^2 p*/(p* + 1) + α.
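
A short sketch that iterates this Riccati recursion and compares the limit with the positive root p* of g (parameter values are assumptions):

    import math

    a, alpha = 0.9, 0.5                     # illustrative values of a and alpha = q/r
    beta = a**2 + alpha - 1.0
    p_star = (beta + math.sqrt(beta**2 + 4*alpha)) / 2.0   # positive root of -p^2 + beta*p + alpha = 0

    p = 10.0                                # arbitrary starting value
    for _ in range(100):
        p = a**2 * p / (p + 1.0) + alpha    # p_{k+1} = a^2 p_k / (p_k + 1) + alpha
    print(p, p_star)                        # the iteration settles at p_star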

Rate of Convergence
Let y_k = p_k - p*, with p_{k+1} = a^2 p_k/(p_k + 1) + α:
  y_{k+1} = p_{k+1} - p*
          = [a^2 p_k/(p_k + 1) + α] - [a^2 p*/(p* + 1) + α]
          = a^2 p_k/(p_k + 1) - a^2 p*/(p* + 1)
          = a^2 (p_k - p*)/[(1 + p_k)(1 + p*)]
          = a^2 y_k /[(1 + p_k)(1 + p*)]
∴ 1/y_{k+1} = (1 + p_k)(1 + p*)/(a^2 y_k) = (1 + y_k + p*)(1 + p*)/(a^2 y_k)
            = [(1 + p*)/a]^2 / y_k + (1 + p*)/a^2

Rate of convergence – continued
Let z_k = 1/y_k. Then z_{k+1} = c z_k + b, where c = [(1 + p*)/a]^2 and b = (1 + p*)/a^2.
Iterating: z_k = c^k z_0 + b(c^k - 1)/(c - 1), which grows without bound when c > 1, i.e. when [(1 + p*)/a]^2 > 1.
When this holds, y_k → 0 at an exponential rate.

Rate of convergence – continued
From ( ), with h = 1: \hat{P}_k = P_k^f r/(P_k^f + r) = r p_k/(p_k + 1) → r p*/(p* + 1), so the analysis covariance converges as well.

Stability of the Filter
With h = 1, the homogeneous part of the error recurrence is e_{k+1}^f = a(1 - K_k h) e_k^f (+ a K_k v_k + w_{k+1}).
  K_k = P_k^f/(P_k^f + r) = p_k/(p_k + 1)
  1 - K_k = 1/(p_k + 1)
∴ a(1 - K_k) = a/(p_k + 1) → a/(1 + p*), so the homogeneous error dynamics are asymptotically stable when |a|/(1 + p*) < 1.