
1 Graham Hesketh, Information Engineering Group, Rolls-Royce Strategic Research Centre. NCAF Manchester 2000, 6 July 2000.

2 Basic Kalman Filter Equations: prediction, then updating (correction).
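For reference, the standard discrete-time forms of these equations (in the A, C, Q, R notation defined on the slides that follow, with measurement z_k) are:

```latex
\begin{aligned}
\text{Prediction:}\quad
  &\hat{x}_{k|k-1} = A\,\hat{x}_{k-1|k-1},
  &&P_{k|k-1} = A\,P_{k-1|k-1}\,A^{\top} + Q, \\
\text{Updating:}\quad
  &e_k = z_k - C\,\hat{x}_{k|k-1},
  &&K_k = P_{k|k-1}\,C^{\top}\!\left(C\,P_{k|k-1}\,C^{\top} + R\right)^{-1}, \\
  &\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k\,e_k,
  &&P_{k|k} = \left(I - K_k C\right) P_{k|k-1}.
\end{aligned}
```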

3 Definitions. System state: mean estimate x, a vector of size q. E.g. for a single-channel, constant-acceleration model: x_1 = position, x_2 = velocity, x_3 = acceleration.
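Written out for this three-state (q = 3) example:

```latex
x = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}
  = \begin{pmatrix} \text{position} \\ \text{velocity} \\ \text{acceleration} \end{pmatrix}.
```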

4 Definitions. System state: covariance (uncertainty) P, a matrix of size q × q. E.g. P_11 = uncertainty (variance) in position, P_22 = uncertainty in velocity, P_12 = covariance between position and velocity, ...
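For the same q = 3 example, P is the symmetric matrix:

```latex
P = \begin{pmatrix}
      P_{11} & P_{12} & P_{13} \\
      P_{12} & P_{22} & P_{23} \\
      P_{13} & P_{23} & P_{33}
    \end{pmatrix},
\qquad P_{ij} = \operatorname{cov}(x_i, x_j).
```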

5 Definitions. State dynamics: state transition matrix A, a matrix of size q × q, e.g. for the constant-acceleration example (shown below).
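With sample interval Δt, the usual constant-acceleration transition matrix is:

```latex
A = \begin{pmatrix}
      1 & \Delta t & \tfrac{1}{2}\Delta t^{2} \\
      0 & 1        & \Delta t \\
      0 & 0        & 1
    \end{pmatrix}.
```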

6 Definitions. State dynamics: process noise (uncertainty) Q, a matrix of size q × q (modelling acceleration as constant over a long duration; the third derivative is random and instantaneous).
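One common choice, consistent with treating the third derivative (jerk) as white noise with spectral density q_c — the exact matrix used in the talk is an assumption here — is:

```latex
Q = q_c \begin{pmatrix}
      \Delta t^{5}/20 & \Delta t^{4}/8 & \Delta t^{3}/6 \\
      \Delta t^{4}/8  & \Delta t^{3}/3 & \Delta t^{2}/2 \\
      \Delta t^{3}/6  & \Delta t^{2}/2 & \Delta t
    \end{pmatrix}.
```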

7 Definitions. Measurement space: observation matrix C, a matrix of size r × q. E.g. for the simple single-measurement constant-acceleration model: C = [1 0 0].

8 Definitions. Measurement space: innovation sequence e, the error between observation and prediction, e = z − C x̂ (predicted). The innovation sequence should be zero-mean, random white noise; if not, there is a problem with either » the model, or » a sensor.

9 Definitions. Measurement space: measurement noise matrix R, a matrix of size r × r. E.g. for the single-measurement case, R = σ². R is the covariance matrix of the measurement noise. It is usually diagonal (i.e. uncorrelated sensor noise).

10 Definitions. Correction phase: Kalman gain matrix K, a matrix of size q × r. K is the optimal correction factor to obtain the Minimum Mean Squared Error estimate of the system state mean and covariance.

11 Definitions. Confidence: relative contributions to uncertainty. Predicted measurement: uncertainty due to the model. Actual measurement: uncertainty due to sensor noise. Together these set the relative weighting for the innovation error.

12 Definitions. Correction phase: system stiffness, i.e. the relative weighting (gain) applied to the innovation error. Model uncertainty >> measurement noise ⇒ believe the sensor (gain is high): low system stiffness, low lag. Model uncertainty << measurement noise ⇒ believe the model (gain is low): high system stiffness, high lag.
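The scalar single-measurement case makes this trade-off explicit; with C = 1 the gain reduces to:

```latex
K = \frac{P}{P + R} \;\longrightarrow\;
\begin{cases}
  1, & P \gg R \quad (\text{believe the sensor}), \\
  0, & P \ll R \quad (\text{believe the model}).
\end{cases}
```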

13 Definitions. Correction phase: updating the state estimates. These are the “best” estimates of the system state mean and covariance at time k, conditioned on all measurements up to and including time k.
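Putting the definitions together, here is a minimal NumPy sketch of one predict/correct cycle for the single-channel constant-acceleration model; the sample interval, noise levels and initial values are illustrative assumptions, not values from the talk:

```python
import numpy as np

dt = 0.1  # sample interval (illustrative)

# Constant-acceleration model: state x = [position, velocity, acceleration]
A = np.array([[1.0, dt,  0.5 * dt**2],
              [0.0, 1.0, dt],
              [0.0, 0.0, 1.0]])
C = np.array([[1.0, 0.0, 0.0]])   # observe position only (r = 1)
q_c = 0.01                        # jerk noise density -- the "tuning" parameter
Q = q_c * np.array([[dt**5 / 20, dt**4 / 8, dt**3 / 6],
                    [dt**4 / 8,  dt**3 / 3, dt**2 / 2],
                    [dt**3 / 6,  dt**2 / 2, dt]])
R = np.array([[0.5**2]])          # measurement noise covariance, sigma^2

def kalman_step(x, P, z):
    """One prediction + correction cycle; returns the updated (x, P)."""
    # Prediction
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Correction
    e = z - C @ x_pred                    # innovation
    S = C @ P_pred @ C.T + R              # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)   # Kalman gain (q x r)
    x_new = x_pred + K @ e
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new

# A large initial P reflects the uncertainty of the initialisation phase
x, P = np.zeros((3, 1)), 1e3 * np.eye(3)
for z in (0.11, 0.24, 0.35):              # made-up position readings
    x, P = kalman_step(x, P, np.array([[z]]))
```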

14 Comparison with Fixed-Coefficient Filters. If the A, C, Q and R matrices do not vary with k, then the single-channel CA Kalman filter settles down to steady-state Kalman gains which are equivalent to a standard α-β-γ filter, but with the optimum coefficients selected automatically. This approach is known as Wiener filtering. It is computationally much simpler than a Kalman filter, but the steady-state assumptions may not be valid in many practical cases.

15 Advantages of Kalman Filters
- Copes with variable A, C, Q and R matrices
- Copes with the large uncertainty of the initialisation phase
- Copes with missing data
- Provides a convenient measure of estimation accuracy (via the covariance matrix P)
- Fuses information from multiple sensors

16 Disadvantages of Kalman Filters
- Computationally complex (especially for large numbers of sensor channels r)
- Requires conditional independence of the measurement errors, sample-to-sample
- Requires linear models for state dynamics and observation processes (A and C)
- Getting a suitable value for Q (a.k.a. tuning) can be something of a black art

17 Dealing with the disadvantages [1]. Computationally complex (especially for large numbers of sensor channels r):
- Computers are getting faster all the time
- New algorithms for fast matrix inversion

18 Dealing with the disadvantages [2]. Requires conditional independence of the measurement errors, sample-to-sample:
- The independence criterion can be removed by using Covariance Intersection updating
[Diagram: network of nodes A, B and C exchanging estimates a, b and c]

19 Covariance Intersection vs Kalman updates: the standard Kalman filter update, the Covariance Intersection update, and the Kalman filter covariance in an alternative form (see below).
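The usual textbook forms of the two updates, written here for fusing two estimates (a, P_a) and (b, P_b) into (c, P_c), are:

```latex
\begin{aligned}
\text{Kalman filter:}\quad
  & P_c^{-1} = P_a^{-1} + P_b^{-1},
  && P_c^{-1}\,c = P_a^{-1}\,a + P_b^{-1}\,b, \\
\text{Covariance Intersection:}\quad
  & P_c^{-1} = \omega\,P_a^{-1} + (1-\omega)\,P_b^{-1},
  && P_c^{-1}\,c = \omega\,P_a^{-1}\,a + (1-\omega)\,P_b^{-1}\,b,
\end{aligned}
\qquad \omega \in [0, 1],
```

where ω is typically chosen to minimise the trace or determinant of P_c. The “alternative form” of the Kalman covariance is presumably the information form, P_{k|k}^{-1} = P_{k|k-1}^{-1} + C^T R^{-1} C, which makes the comparison with Covariance Intersection direct.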

20 Dealing with the disadvantages [3]. Requires linear models for state dynamics and observation processes (A and C):
- This can be overcome by using the EKF (Extended Kalman Filter), but only at the expense of your sanity and all your free time
- A superior alternative to the EKF is the Unscented Filter

21 Unscented Filter: General Problem. Given an n-dimensional vector random variable x with mean x̄ and covariance P_xx, we want to predict the distribution of an m-dimensional vector random variable y, where y is related to x by the non-linear transformation y = f(x). In filtering there are two such transformations: » the state transition » the observation

22 Unscented Filter: Solution. Compute the set X of 2n points from the rows or columns of the matrices ±√(n P_xx); this set is zero-mean with covariance P_xx. Translate the points to obtain a set with the same covariance but with mean x̄. Transform each point as Y_i = f(X_i). Compute ȳ and P_yy by computing the mean and covariance of the 2n points in the set Y. This is computationally much simpler and significantly more accurate (lower MSE) than the corresponding EKF.
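A minimal NumPy sketch of this basic 2n-point unscented transform (the Cholesky factor is used as one valid matrix square root; the example function and numbers are illustrative assumptions):

```python
import numpy as np

def unscented_transform(f, x_mean, P):
    """Propagate (x_mean, P) through a non-linear f using 2n sigma points."""
    n = x_mean.size
    S = np.linalg.cholesky(n * P)     # a matrix square root of n*P
    # Sigma points: the mean plus/minus each column of sqrt(n*P)
    X = np.hstack([x_mean[:, None] + S, x_mean[:, None] - S])   # n x 2n
    Y = np.array([f(X[:, i]) for i in range(2 * n)])            # 2n x m
    y_mean = Y.mean(axis=0)
    D = Y - y_mean
    P_yy = D.T @ D / (2 * n)          # covariance of the transformed points
    return y_mean, P_yy

# Usage: polar-to-Cartesian conversion, a classic non-linear observation model
f = lambda x: np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])
y_mean, P_yy = unscented_transform(f, np.array([1.0, 0.5]),
                                   np.diag([0.01, 0.04]))
```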

23 Dealing with the disadvantages [4]. Getting a suitable value for Q (a.k.a. tuning) can be something of a black art:
- There are many heuristics for dealing with process noise (mainly to keep the filter stable); one method is to switch between alternative models
- We have Matlab toolboxes for learning Q (and all the other matrices) from empirical data, using a technique called Expectation Maximisation (EM)

24 Summary
- The Kalman Filter is the way to fuse multiple data sources, provided that their errors are conditionally independent
- Covariance Intersection is the conservative way to deal with inter-dependence in distributed systems
- The Unscented Filter is superior to the EKF in all cases involving non-linear transformations
- Many heuristics exist to extend and improve the basic Kalman filter for tracking and sensor bias detection

25 End of Talk. Unless there are any questions – make ‘em brief.

