1
The Kalman Filter ECE 7251: Spring 2004 Lecture 17 2/16/04
Prof. Aaron D. Lanterman, School of Electrical & Computer Engineering, Georgia Institute of Technology
2
The Setup for Kalman Filtering
State equation: x(k+1) = A x(k) + w(k)
Measurement equation: y(k) = C x(k) + v(k)
Process "noise" covariance: Q = cov{w(k)}; the noises w(k) and v(k) are uncorrelated with each other and across different k
Measurement noise covariance: R = cov{v(k)}
Initial guess x̂(0|0), made before taking any data
Covariance P(0|0), indicating confidence in the initial guess
3
Constant Velocity Model
For small sample intervals T, the following model is commonly used (Blackman & Popoli, Sec ):
State: x(k) = [position; velocity]
A = [1 T; 0 1],  Q = q [T^3/3 T^2/2; T^2/2 T]
Bar-Shalom & Li's rule of thumb for selecting q: the velocity changes by an amount of order sqrt(q T) over one sampling interval, so pick q so that sqrt(q T) is comparable to the largest velocity change expected over T.
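The constant-velocity matrices above can be sketched in a few lines of numpy (the function name and use of numpy are ours, not the slides'):

```python
import numpy as np

def constant_velocity_model(T, q):
    """White-noise-acceleration ("nearly constant velocity") model.
    State is [position, velocity]; q is the process-noise intensity."""
    A = np.array([[1.0, T],
                  [0.0, 1.0]])
    Q = q * np.array([[T**3 / 3.0, T**2 / 2.0],
                      [T**2 / 2.0, T]])
    return A, Q
```

Note that Q is symmetric and scales linearly with q, which is what makes the rule-of-thumb tuning of a single scalar workable.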
4
Constant Acceleration Model
For small sample intervals T, another common model is (Blackman & Popoli, p. 202):
State: x(k) = [position; velocity; acceleration]
A = [1 T T^2/2; 0 1 T; 0 0 1],  Q = q [T^5/20 T^4/8 T^3/6; T^4/8 T^3/3 T^2/2; T^3/6 T^2/2 T]
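The same kind of numpy sketch works for the constant-acceleration model (again, the function name is ours):

```python
import numpy as np

def constant_acceleration_model(T, q):
    """Wiener-process-acceleration ("nearly constant acceleration") model.
    State is [position, velocity, acceleration]."""
    A = np.array([[1.0, T,   T**2 / 2.0],
                  [0.0, 1.0, T],
                  [0.0, 0.0, 1.0]])
    Q = q * np.array([[T**5 / 20.0, T**4 / 8.0, T**3 / 6.0],
                      [T**4 / 8.0,  T**3 / 3.0, T**2 / 2.0],
                      [T**3 / 6.0,  T**2 / 2.0, T]])
    return A, Q
```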
5
Singer Maneuver Model
It is often a good idea to encourage the acceleration process to tend back towards zero (Blackman & Popoli, Sec ):
a(k+1) = ρ a(k) + w(k), where ρ = e^(−T/τ) and τ is the maneuver time constant.
R.A. Singer, "Estimating Optimal Tracking Filter Performance for Manned Maneuvering Targets," IEEE Trans. on Aerospace and Electronic Systems, vol. 5, pp , July 1970.
6
Extending to Higher Dimensions
Easily extended to two or more dimensions by making the processes in the different coordinates independent.
This is just a convenient mathematical model; of course, for real aircraft, the motion in the different coordinates is not independent!
7
Goal of Kalman Filtering
Goal: find the MMSE (conditional mean) estimates
Filtered estimate: x̂(k|k) = E[x(k) | y(1), …, y(k)]
Prediction: x̂(k+1|k) = E[x(k+1) | y(1), …, y(k)]
Define
Filter error covariance: P(k|k) = E[(x(k) − x̂(k|k)) (x(k) − x̂(k|k))^T | y(1), …, y(k)]
Prediction covariance: P(k+1|k) = E[(x(k+1) − x̂(k+1|k)) (x(k+1) − x̂(k+1|k))^T | y(1), …, y(k)]
8
A Tale of Two Systems
[Block diagram: the true system and the estimator drawn side by side; each loops its state through a delay and the matrices A and C, with the estimator correcting itself by feeding the measurement residual back through a gain.]
(based on a lecture by J.A. O’Sullivan)
9
Step 1: Prediction
Recall x(k+1) = A x(k) + w(k).
Predicted state: x̂(k+1|k) = A x̂(k|k)
Predicted covariance: P(k+1|k) = A P(k|k) A^T + Q
Putting a Gaussian random vector through a linear transformation yields another Gaussian random vector: if x ~ N(μ, Σ), then A x ~ N(A μ, A Σ A^T).
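This fact can be checked numerically; the sketch below (all numbers are illustrative, not from the slides) pushes samples of x ~ N(mu, Sigma) through the linear map z = A x and compares the sample statistics against A mu and A Sigma A^T:

```python
import numpy as np

# Illustrative numbers only: a 2-D Gaussian pushed through a linear map.
rng = np.random.default_rng(42)
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

x = rng.multivariate_normal(mu, Sigma, size=200_000)
z = x @ A.T

emp_mean = z.mean(axis=0)   # should approach A @ mu
emp_cov = np.cov(z.T)       # should approach A @ Sigma @ A.T
```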
10
Step 2A: State Update
Recall y(k) = C x(k) + v(k).
Consider the predicted data ŷ(k+1|k) = C x̂(k+1|k).
Conditioned on all of the data up to time k+1:
x̂(k+1|k+1) = x̂(k+1|k) + K(k+1) [y(k+1) − C x̂(k+1|k)]
Kalman gain: K(k+1) = P(k+1|k) C^T [C P(k+1|k) C^T + R]^(−1)
11
Step 2B: Covariance Update
P(k+1|k+1) = [I − K(k+1) C] P(k+1|k)
The covariance matrix tells us how much confidence we should have in our state estimates.
12
Putting it All Together
The Kalman filter state and covariance updates (dropping the k|k notation):
x̂(k+1) = A x̂(k) + K(k+1) [y(k+1) − C A x̂(k)]
where y(k+1) − C A x̂(k) is the innovation.
Note that the Kalman gains don't involve the data, and can hence be computed offline ahead of time:
P(k+1|k) = A P(k|k) A^T + Q
K(k+1) = P(k+1|k) C^T [C P(k+1|k) C^T + R]^(−1)
P(k+1|k+1) = [I − K(k+1) C] P(k+1|k)
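The full predict/update cycle can be sketched in a few lines of numpy (the function name and calling convention are ours; this is a didactic sketch, not a production filter):

```python
import numpy as np

def kalman_step(x, P, y, A, C, Q, R):
    """One predict/update cycle of the discrete Kalman filter.
    Returns the updated state, covariance, and the innovation."""
    # Step 1: prediction
    x_pred = A @ x                        # predicted state
    P_pred = A @ P @ A.T + Q              # predicted covariance
    # Step 2A: state update
    S = C @ P_pred @ C.T + R              # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)   # Kalman gain
    nu = y - C @ x_pred                   # the innovation
    x_new = x_pred + K @ nu
    # Step 2B: covariance update
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new, nu
```

In practice one would prefer the Joseph-form covariance update or a square-root filter for numerical robustness; the plain form above mirrors the slide equations.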
13
The Innovation Sequence
Let the prediction of the data given the previous data be given by ŷ(k+1|k) = C x̂(k+1|k).
The innovations are defined as ν(k+1) = y(k+1) − ŷ(k+1|k).
The innovations are orthogonal to the past data: E[ν(k+1) y(j)^T] = 0 for j ≤ k.
The innovations process is white: E[ν(k) ν(j)^T] = 0 for k ≠ j.
When trying the filter on real data, testing the innovations for "whiteness" tells you how accurate your models are.
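One simple version of the whiteness check (the function name is ours) looks at the normalized sample autocorrelation of a scalar innovation sequence at nonzero lags:

```python
import numpy as np

def innovation_whiteness(nu, max_lag=5):
    """Normalized sample autocorrelation of a scalar innovation sequence
    at lags 1..max_lag; for a well-modeled filter these should be near 0."""
    nu = np.asarray(nu, dtype=float)
    nu = nu - nu.mean()
    c0 = np.dot(nu, nu)
    return np.array([np.dot(nu[:-k], nu[k:]) / c0
                     for k in range(1, max_lag + 1)])
```

For N white samples the sample autocorrelations have standard deviation about 1/sqrt(N), so values well outside roughly ±2/sqrt(N) flag a modeling mismatch.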
14
Assorted Tidbits
For non-Gaussian statistics (process and measurement noise), the Kalman filter is the best linear MMSE estimator.
Combining the covariance update steps yields the discrete Riccati equation:
P(k+1|k) = A P(k|k−1) A^T + Q − A P(k|k−1) C^T [C P(k|k−1) C^T + R]^(−1) C P(k|k−1) A^T
Under some conditions, the DRE has a fixed point, P(k+1|k) → P and K(k) → K; in this case, the Kalman filter acts like a Wiener filter for large k.
P tells us our confidence in our state estimates. If P is small, then K is small, and the filter is saturated; we pay little attention to new measurements.
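The fixed point can be found by simply iterating the covariance/gain recursion; a small sketch (the function name and the constant-velocity test matrices in the usage are ours):

```python
import numpy as np

def steady_state_gain(A, C, Q, R, iters=300):
    """Iterate the discrete Riccati recursion until the prediction
    covariance, and hence the Kalman gain, settles to a fixed point."""
    n = A.shape[0]
    P = np.eye(n)                         # prediction covariance P(k|k-1)
    for _ in range(iters):
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)
        P_filt = (np.eye(n) - K @ C) @ P  # filtered covariance
        P = A @ P_filt @ A.T + Q          # next prediction covariance
    return K, P
```

For a production system one would instead solve the discrete algebraic Riccati equation directly (e.g. `scipy.linalg.solve_discrete_are`), but the brute-force iteration makes the fixed-point behavior visible.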
15
Pseudomeasurement Approach
Problem with the Kalman filter applied to radar: the data is rarely a linear function of the parameters.
Canonical example: radar measuring range and angle.
Here we consider a 2-D scenario with just the azimuth angle; this could be extended to include an elevation angle.
One possible solution: convert the raw polar data into Cartesian-coordinate "pseudomeasurements," then process them using the Kalman filter.
Need a covariance on the new data; use CRB-type transformation formulas.
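A sketch of the standard linearized conversion, assuming independent range and azimuth errors (the function name is ours, and this is the plain biased conversion, not a debiased one):

```python
import numpy as np

def polar_to_pseudomeasurement(r, theta, var_r, var_theta):
    """Convert one (range, azimuth) radar measurement to a Cartesian
    pseudomeasurement, propagating the covariance to first order
    through the Jacobian of the polar-to-Cartesian map."""
    z = np.array([r * np.cos(theta), r * np.sin(theta)])
    J = np.array([[np.cos(theta), -r * np.sin(theta)],
                  [np.sin(theta),  r * np.cos(theta)]])
    R_polar = np.diag([var_r, var_theta])
    R_cart = J @ R_polar @ J.T
    return z, R_cart
```

Note that the cross-range variance scales like r^2 · var_theta, so it grows with range; this is exactly the regime where the conversion's bias problems show up.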
16
Trouble with Pseudomeasurements
Accuracy of the covariance transformation depends on the accuracy of the state estimate.
Can have problems with significant bias for large cross-range errors (Bar-Shalom & Li, Sec ).
Bar-Shalom & Li offer an improved approach with a "debiased conversion."
The original measurement errors may be Gaussian, but the converted measurement errors are not; the Kalman filter is only truly optimal for Gaussian measurement errors.
In more general problems, we may not always have a nice invertible mapping from the original parameter space to the sensor space!