Slide 1: STATISTICAL ORBIT DETERMINATION - Kalman (Sequential) Filter
ASEN 5070, Lecture 18, 10/09/09
Slide 2: Kalman (Sequential) Filter
Kalman solved the problem: given $\hat{x}_{k-1}$, $P_{k-1}$, and the observation $y_k$, find $\hat{x}_k$, using the assumption that the observation error $\epsilon_k$ is white noise, i.e., it is not correlated with any other random variable (e.g., $x_k$) and not correlated with past values of $\epsilon$.
Slide 3: Alternate derivation of the Kalman (Sequential) Filter algorithm
Begin with Eq (4.4.29):

$\hat{x}_k = \left(\bar{P}_k^{-1} + H_k^T R_k^{-1} H_k\right)^{-1}\left(\bar{P}_k^{-1}\bar{x}_k + H_k^T R_k^{-1} y_k\right)$

Given $\bar{x}_k$ and $\bar{P}_k$, where $\bar{x}_k = \Phi(t_k, t_{k-1})\,\hat{x}_{k-1}$ and $\bar{P}_k = \Phi(t_k, t_{k-1})\,P_{k-1}\,\Phi^T(t_k, t_{k-1})$.

Note that this requires an $n \times n$ matrix inversion, where $n$ is the dimension of the state deviation vector $x$. We can reformulate Eq (4.4.29) into a form requiring the inversion of a $p \times p$ matrix, where $p$ is the dimension of the observation deviation vector $y$. Note that in general $p \ll n$.

Using the Schur identity (Theorem 4 of Appendix B),

$\left(A^{-1} + B^T C^{-1} B\right)^{-1} = A - A B^T \left(B A B^T + C\right)^{-1} B A$

let $A = \bar{P}_k$, $B = H_k$, $C = R_k$.
Slide 4: Alternate derivation of the Kalman (Sequential) Filter algorithm
Then the 1st term in Eq (4.4.29), $\left(\bar{P}_k^{-1} + H_k^T R_k^{-1} H_k\right)^{-1}$, becomes

$P_k = \bar{P}_k - \bar{P}_k H_k^T \left(H_k \bar{P}_k H_k^T + R_k\right)^{-1} H_k \bar{P}_k$    (1)

Using the fact that both terms on the right share the common factor $\bar{P}_k$, Eq (1) can be written as

$P_k = \left[I - \bar{P}_k H_k^T \left(H_k \bar{P}_k H_k^T + R_k\right)^{-1} H_k\right]\bar{P}_k$    (2)

Note that this involves a $p \times p$ matrix inversion. If the observation at each time is a scalar, such as range, this will be a scalar inversion. In any case, we can process the observations one at a time so that this need only be a scalar inversion.
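As a quick numerical sanity check on this identity (not part of the lecture), the NumPy sketch below builds an arbitrary made-up example with n = 3 and p = 2 and confirms that the $n \times n$ inversion form and the $p \times p$ inversion form of Eq (1) give the same covariance.

```python
import numpy as np

# Illustrative sizes and matrices (made up, not from the text): n = 3 states, p = 2 observations
rng = np.random.default_rng(0)
n, p = 3, 2
A = rng.standard_normal((n, n))
P_bar = A @ A.T + n * np.eye(n)               # a priori covariance, symmetric positive definite
H = rng.standard_normal((p, n))               # observation-state mapping matrix
R = np.diag([0.5, 2.0])                       # observation noise covariance

# n x n inversion form: P = (P_bar^-1 + H^T R^-1 H)^-1
P_n = np.linalg.inv(np.linalg.inv(P_bar) + H.T @ np.linalg.inv(R) @ H)

# p x p inversion form (Eq (1)): P = P_bar - P_bar H^T (H P_bar H^T + R)^-1 H P_bar
W = np.linalg.inv(H @ P_bar @ H.T + R)        # only a p x p inverse is needed
P_p = P_bar - P_bar @ H.T @ W @ H @ P_bar

print(np.allclose(P_n, P_p))                  # True: both forms give the same P_k
```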
Slide 5: Alternate derivation of the Kalman (Sequential) Filter algorithm
We may simplify this expression further. In Eq (2), define a quantity called the Kalman or optimal gain as

$K_k = \bar{P}_k H_k^T \left(H_k \bar{P}_k H_k^T + R_k\right)^{-1}$    (3)

Hence, from Eq (2),

$P_k = \left(I - K_k H_k\right)\bar{P}_k$    (4)

Substituting Eq (4) into Eq (4.4.29) yields

$\hat{x}_k = \left(I - K_k H_k\right)\bar{P}_k\left(\bar{P}_k^{-1}\bar{x}_k + H_k^T R_k^{-1} y_k\right)$    (5)
Slide 6: Alternate derivation of the Kalman (Sequential) Filter algorithm
But

$\left(I - K_k H_k\right)\bar{P}_k H_k^T R_k^{-1} = \bar{P}_k H_k^T R_k^{-1} - K_k H_k \bar{P}_k H_k^T R_k^{-1}$

(Substitute for $K_k$ from Eq (3))

$= \bar{P}_k H_k^T R_k^{-1} - \bar{P}_k H_k^T \left(H_k \bar{P}_k H_k^T + R_k\right)^{-1} H_k \bar{P}_k H_k^T R_k^{-1}$

Factor out $\bar{P}_k H_k^T \left(H_k \bar{P}_k H_k^T + R_k\right)^{-1}$ on the left:

$= \bar{P}_k H_k^T \left(H_k \bar{P}_k H_k^T + R_k\right)^{-1}\left[\left(H_k \bar{P}_k H_k^T + R_k\right) R_k^{-1} - H_k \bar{P}_k H_k^T R_k^{-1}\right] = \bar{P}_k H_k^T \left(H_k \bar{P}_k H_k^T + R_k\right)^{-1} = K_k$

We have shown that $\left(I - K_k H_k\right)\bar{P}_k H_k^T R_k^{-1} = K_k$, i.e., $P_k H_k^T R_k^{-1} = K_k$.
Slide 7: Alternate derivation of the Kalman (Sequential) Filter algorithm
Hence, Eq (5) becomes

$\hat{x}_k = \bar{x}_k + K_k\left(y_k - H_k \bar{x}_k\right)$

The sequential processing algorithm is illustrated in a figure of the text. The Kalman or Sequential Filter is generally divided into a time update and a measurement (or observation) update, i.e.,

Time Update:
$\bar{x}_k = \Phi(t_k, t_{k-1})\,\hat{x}_{k-1}$
$\bar{P}_k = \Phi(t_k, t_{k-1})\,P_{k-1}\,\Phi^T(t_k, t_{k-1})$

Measurement Update:
$K_k = \bar{P}_k H_k^T \left(H_k \bar{P}_k H_k^T + R_k\right)^{-1}$
$\hat{x}_k = \bar{x}_k + K_k\left(y_k - H_k \bar{x}_k\right)$
$P_k = \left(I - K_k H_k\right)\bar{P}_k$
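A minimal NumPy sketch of one time update followed by one measurement update, using the equations above; the function name kalman_step and its interface are my own choices, not from the text, and a full orbit determination filter would also recompute $H_k$ and the reference trajectory at each step.

```python
import numpy as np

def kalman_step(x_hat, P, Phi, H, R, y):
    """One cycle of the sequential filter: time update, then measurement update.

    x_hat, P : estimate and covariance at t_{k-1}
    Phi      : state transition matrix Phi(t_k, t_{k-1})
    H, R, y  : observation-state matrix, observation noise covariance, and observation at t_k
    """
    # Time update: map estimate and covariance from t_{k-1} to t_k
    x_bar = Phi @ x_hat
    P_bar = Phi @ P @ Phi.T

    # Measurement update
    K = P_bar @ H.T @ np.linalg.inv(H @ P_bar @ H.T + R)   # Kalman gain, Eq (3)
    x_new = x_bar + K @ (y - H @ x_bar)                     # state update
    P_new = (np.eye(len(x_bar)) - K @ H) @ P_bar            # covariance update, Eq (4)
    return x_new, P_new
```

Repeating this step observation time after observation time is exactly the "set k = k + 1 and return to (1)" loop summarized on the time-update slide below.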
Slide 8: Matrix Dimensions
The Kalman (sequential) filter differs from the batch filter as follows:
- It uses the a priori information at the current time, $\bar{x}_k$, in place of the epoch a priori $\bar{x}_0$.
- It uses $\bar{P}_k$ in place of $\bar{P}_0$.
This means that the a priori information is reinitialized at each observation time. If we do not reinitialize, the normal equations must be accumulated as in the batch processor, i.e., we must invert an $n \times n$ matrix at each stage.
Slide 9: The Time Update
(1) Time update: map the estimate and covariance forward from $t_{k-1}$ to $t_k$: $\bar{x}_k = \Phi(t_k, t_{k-1})\,\hat{x}_{k-1}$, $\bar{P}_k = \Phi(t_k, t_{k-1})\,P_{k-1}\,\Phi^T(t_k, t_{k-1})$.
(2) Measurement update: process the observation $y_k$ to obtain $\hat{x}_k$ and $P_k$.
(3) Set $k = k + 1$ and return to (1).
Slide 10: What is the Role of the Kalman Gain?
Let's examine a simple case where $x$ and $y$ are both scalars and we observe $x$ directly, i.e., $y_k = x_k + \epsilon_k$, so $H_k = 1$. Furthermore, assume that $x$ is a constant so that $\Phi(t_k, t_{k-1}) = 1$. Then at time $k$,

$\bar{x}_k = \hat{x}_{k-1}$,  $\bar{P}_k = P_{k-1}$
Slide 11: What is the Role of the Kalman Gain?
The Kalman gain is

$K_k = \dfrac{\bar{P}_k}{\bar{P}_k + R_k}$

The measurement update becomes

$\hat{x}_k = \bar{x}_k + K_k\left(y_k - \bar{x}_k\right) = \left(1 - K_k\right)\bar{x}_k + K_k\,y_k = \dfrac{R_k}{\bar{P}_k + R_k}\,\bar{x}_k + \dfrac{\bar{P}_k}{\bar{P}_k + R_k}\,y_k$
Slide 12: What is the Role of the Kalman Gain?
Hence, the best estimate of $x$ is a weighted average of the predicted estimate and the measurement. If the measurement is very noisy (inaccurate) relative to the predicted state ($R_k \gg \bar{P}_k$), then $K_k \to 0$ and $\hat{x}_k \approx \bar{x}_k$; hence, the measurement has little effect. Conversely, if $R_k \ll \bar{P}_k$, then $K_k \to 1$ and $\hat{x}_k \approx y_k$, and the predicted estimate has little effect. In general, the Kalman gain provides a relative weighting between the a priori information and the tracking data.
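A small numeric illustration of this weighting for the scalar case (the numbers are made up): with $\bar{P}_k = 1$, the gain moves from nearly 0 to nearly 1 as the measurement noise $R_k$ shrinks.

```python
# Scalar Kalman gain weighting (illustrative numbers only)
x_bar, P_bar, y = 10.0, 1.0, 12.0        # predicted state, its variance, and a measurement

for R in (100.0, 1.0, 0.01):             # very noisy, comparable, very accurate measurement
    K = P_bar / (P_bar + R)              # scalar Kalman gain
    x_hat = (1 - K) * x_bar + K * y      # weighted average of prediction and measurement
    print(f"R = {R:6.2f}   K = {K:.3f}   x_hat = {x_hat:.3f}")
# R = 100.00 -> K = 0.010, x_hat ~ 10.02 (measurement has little effect)
# R =   0.01 -> K = 0.990, x_hat ~ 11.98 (a priori has little effect)
```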
Slide 13: Problem 43c
Given: a priori information $\bar{x}_0$ and $\bar{P}_0$.
Slide 14: Problem 43c
The given observation equation holds for all values of $i$.
Find: $\hat{x}$ using the Kalman filter. Use the batch processor to find $\hat{x}$ as a check.
1. Time update (note that there are no observations at this time).
Slide 15: Problem 43c
Measurement update: compute the Kalman gain $K_k = \bar{P}_k H_k^T \left(H_k \bar{P}_k H_k^T + R_k\right)^{-1}$.
Slide 16: Problem 43c
Compute the measurement update of the state estimate and covariance, $\hat{x}_k = \bar{x}_k + K_k\left(y_k - H_k\bar{x}_k\right)$ and $P_k = \left(I - K_k H_k\right)\bar{P}_k$.
Slide 17: Problem 43c
Slide 18: Problem 43c
Using the batch processor to get $\hat{x}$:
Slide 19: Problem 43c
Hence, the batch processor result agrees with the result from the Kalman filter.
Slide 20: Problem 43c
Slide 21: Problem 43c
Map the estimation error covariance matrix from the epoch time $t_0$ forward in one-time-unit steps and plot the envelope of the one-standard-deviation error in $x_1$ and $x_2$ and the correlation coefficient between $x_1$ and $x_2$. Recall that

$P(t_i) = \Phi(t_i, t_0)\,P_0\,\Phi^T(t_i, t_0)$,  $\sigma_1 = \sqrt{P_{11}}$,  $\sigma_2 = \sqrt{P_{22}}$,  $\rho_{12} = \dfrac{P_{12}}{\sigma_1\,\sigma_2}$
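A sketch of this mapping in NumPy; the epoch covariance and one-step transition matrix below are placeholders (the actual Problem 43c values are not reproduced here), but the propagation and the extraction of $\sigma_1$, $\sigma_2$, and $\rho_{12}$ follow the relations above.

```python
import numpy as np

# Placeholder epoch covariance and constant one-step transition matrix (not the Problem 43c values)
P0 = np.array([[4.0, 1.0],
               [1.0, 9.0]])
Phi_step = np.array([[1.0, 1.0],
                     [0.0, 1.0]])              # Phi(t_{i+1}, t_i) for a one-time-unit step

P = P0.copy()
for i in range(1, 11):                         # map forward ten one-time-unit steps
    P = Phi_step @ P @ Phi_step.T              # P(t_i) = Phi P(t_{i-1}) Phi^T
    sigma1, sigma2 = np.sqrt(P[0, 0]), np.sqrt(P[1, 1])
    rho12 = P[0, 1] / (sigma1 * sigma2)        # correlation coefficient between x1 and x2
    print(f"t = {i}: sigma1 = {sigma1:.3f}, sigma2 = {sigma2:.3f}, rho12 = {rho12:.3f}")
```

These per-step values are what would be plotted as the one-sigma envelopes and $\rho_{12}$ versus time.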
Slide 22: Problem 43c
Slide 23: Processing Observations One at a Time
Given a vector of $p$ observations at $t_k$, we may process these as scalar observations by performing the measurement update $p$ times and skipping the time update between them.

Algorithm:
1. Do the time update at $t_k$.
2. Measurement update: we must deal with the matrices $y_k$ ($p \times 1$), $H_k$ ($p \times n$), and $R_k$ ($p \times p$, assumed diagonal).
Slide 24: Processing Observations One at a Time
2a. Process the 1st element of $y_k$. Compute

$K_1 = \bar{P}_k H_1^T \left(H_1 \bar{P}_k H_1^T + R_{11}\right)^{-1}$
$\hat{x}_1 = \bar{x}_k + K_1\left(y_1 - H_1 \bar{x}_k\right)$
$P_1 = \left(I - K_1 H_1\right)\bar{P}_k$

where $y_1$ is the 1st element of $y_k$, $H_1$ is the 1st row of $H_k$, and $R_{11}$ is the variance of the 1st observation error.
Slide 25: Processing Observations One at a Time
2b. Do not do a time update, but do a measurement update by processing the 2nd element of $y_k$ (i.e., $\hat{x}_1$ and $P_1$ are not mapped to a new time). Compute

$K_2 = P_1 H_2^T \left(H_2 P_1 H_2^T + R_{22}\right)^{-1}$
$\hat{x}_2 = \hat{x}_1 + K_2\left(y_2 - H_2 \hat{x}_1\right)$
$P_2 = \left(I - K_2 H_2\right)P_1$

etc. Process the $p$th element of $y_k$ and compute $\hat{x}_p$ and $P_p$.
Slide 26: Processing Observations One at a Time
Let $\hat{x}_k = \hat{x}_p$ and $P_k = P_p$. Time update to $t_{k+1}$ and repeat this procedure.

Note: $R_k$ must be a diagonal matrix. If not, we would apply a whitening transformation as described in the next slides.
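A sketch of this scalar, one-element-at-a-time measurement update (the function name and interface are mine, not the text's); only scalar divisions are needed, and $R_k$ is assumed to already be diagonal.

```python
import numpy as np

def measurement_update_scalar(x_bar, P_bar, H, R_diag, y):
    """Process a p-vector of observations one element at a time, with no time update in between.

    H      : p x n observation-state matrix
    R_diag : length-p array of observation error variances (R_k assumed diagonal)
    y      : length-p observation vector
    """
    x, P = x_bar.copy(), P_bar.copy()
    n = len(x)
    for i in range(len(y)):
        Hi = H[i]                                   # i-th row of H_k
        Ki = P @ Hi / (Hi @ P @ Hi + R_diag[i])     # gain: the "inverse" is a scalar division
        x = x + Ki * (y[i] - Hi @ x)                # update with the i-th scalar observation
        P = (np.eye(n) - np.outer(Ki, Hi)) @ P      # covariance after the i-th observation
    return x, P
```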
Slide 27: Whitening Transformation
Start from the observation equation

$y = Hx + \epsilon$, with $E[\epsilon] = 0$ and $E[\epsilon\epsilon^T] = R$    (1)

Factor $R = V V^T$, where $V$ is the square root of $R$ and is chosen to be upper triangular. Multiply Eq. (1) by $V^{-1}$:

$V^{-1} y = V^{-1} H x + V^{-1}\epsilon$

Let $\tilde{y} = V^{-1} y$, $\tilde{H} = V^{-1} H$, and $\tilde{\epsilon} = V^{-1}\epsilon$, so that $\tilde{y} = \tilde{H} x + \tilde{\epsilon}$.
Slide 28: Whitening Transformation
Now

$E[\tilde{\epsilon}] = V^{-1} E[\epsilon] = 0$
$E[\tilde{\epsilon}\tilde{\epsilon}^T] = V^{-1} E[\epsilon\epsilon^T] V^{-T} = V^{-1} V V^T V^{-T} = I$

Hence, the new observation $\tilde{y}$ has an error with zero mean and unit variance. We would now process the new observations $\tilde{y}$ and use $\tilde{H}$ in place of $H$, with $\tilde{R} = I$.
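A sketch of the whitening transformation using NumPy's Cholesky factorization. Note that np.linalg.cholesky returns a lower-triangular square root rather than the upper-triangular $V$ described above, but whitening with its inverse has the same effect on the observation error statistics; the example $R$, $H$, and $y$ are made up.

```python
import numpy as np

# Made-up non-diagonal observation noise covariance and observation model
R = np.array([[2.0, 0.6],
              [0.6, 1.0]])
H = np.array([[1.0, 0.0],
              [1.0, 1.0]])
y = np.array([3.1, 5.2])

L = np.linalg.cholesky(R)        # R = L L^T, with L lower triangular
L_inv = np.linalg.inv(L)

# Whitened observation equation: y_tilde = H_tilde x + eps_tilde, with E[eps_tilde eps_tilde^T] = I
y_tilde = L_inv @ y
H_tilde = L_inv @ H

print(np.allclose(L_inv @ R @ L_inv.T, np.eye(2)))   # True: whitened errors have unit covariance
```

The whitened observations can then be processed one element at a time as described on the previous slides, with $\tilde{R} = I$.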