Slide 1: University of Colorado Boulder, ASEN 5070 Statistical Orbit Determination I, Fall 2012. Professor Jeffrey S. Parker, Professor George H. Born. Lecture 18: CKF, Numerical Compensations.

Slide 2: Announcements
 Homework 6 and 7 will be graded as soon as possible.
 Homework 8 is announced today.
 Exam 2 is on the horizon. Actually, it is scheduled for 11/8, a week from Thursday.
 We still have quite a bit of material to cover.
 Exam 2 will cover:
◦ Batch vs. CKF vs. EKF
◦ Probability and statistics (good to keep this up!)
◦ Observability
◦ Numerical compensation techniques, such as the Joseph and Potter formulations
◦ No calculators should be necessary
◦ Open book, open notes

Slide 6: Only 46% of the class got this right.

Slide 10: 0% of the class got this right!

Slide 11: Observable quantities:
 Semimajor axis of both orbits
 Eccentricity of both orbits
 True anomaly of both satellites
 Inclination difference between the orbits
 Node difference between the orbits
 Argument of periapsis difference between the orbits

Slide 13: 100% of the class got this right!

Slide 21: Probably the hardest question on this quiz, and 78% got it right.

Slide 22: Homework 8 is due in 9 days.

Slide 23: The Kalman (sequential) filter differs from the batch filter as follows:
1. It uses $\hat{x}_{k-1}$ in place of $\hat{x}_0$.
2. It uses $\bar{P}_k = \Phi(t_k, t_{k-1}) P_{k-1} \Phi^T(t_k, t_{k-1})$ in place of $\bar{P}_k = \Phi(t_k, t_0) \bar{P}_0 \Phi^T(t_k, t_0)$.
Or, if we don't reinitialize, then $P_k = \left(\bar{P}_k^{-1} + \tilde{H}_k^T R_k^{-1} \tilde{H}_k\right)^{-1}$, i.e., we must invert $\bar{P}_k$ at each stage. This means that the estimate and covariance are reinitialized at each observation time.

Slide 24: Given a vector of $p$ observations $\mathbf{y}$ at $t_k$, we may process these as scalar observations by performing the measurement update $p$ times and skipping the time update in between.
Algorithm:
1. Do the time update at $t_k$: $\bar{x}_k = \Phi(t_k, t_{k-1})\hat{x}_{k-1}$, $\bar{P}_k = \Phi(t_k, t_{k-1}) P_{k-1} \Phi^T(t_k, t_{k-1})$.
2. Measurement update:

Slide 25: 2a. Process the 1st element of $\mathbf{y}$. Compute
$K_1 = \bar{P}_k \tilde{H}_1^T \left(\tilde{H}_1 \bar{P}_k \tilde{H}_1^T + R_1\right)^{-1}$
$\hat{x}_1 = \bar{x}_k + K_1\left(y_1 - \tilde{H}_1 \bar{x}_k\right)$
$P_1 = \left(I - K_1 \tilde{H}_1\right)\bar{P}_k$
where $\tilde{H}_1$ is the first row of $\tilde{H}_k$, $y_1$ is the first element of $\mathbf{y}$, and $R_1$ is the (1,1) element of $R_k$. The quantity in parentheses is a scalar, so no matrix inversion is required.

Slide 26: 2b. Do not do a time update, but do a measurement update by processing the 2nd element of $\mathbf{y}$ (i.e., $\hat{x}_1$ and $P_1$ are not mapped to $t_{k+1}$). Compute
$K_2 = P_1 \tilde{H}_2^T \left(\tilde{H}_2 P_1 \tilde{H}_2^T + R_2\right)^{-1}, \quad \hat{x}_2 = \hat{x}_1 + K_2\left(y_2 - \tilde{H}_2 \hat{x}_1\right), \quad P_2 = \left(I - K_2 \tilde{H}_2\right) P_1,$
etc. Process the $p$th element of $\mathbf{y}$. Compute
$K_p = P_{p-1} \tilde{H}_p^T \left(\tilde{H}_p P_{p-1} \tilde{H}_p^T + R_p\right)^{-1}, \quad \hat{x}_p = \hat{x}_{p-1} + K_p\left(y_p - \tilde{H}_p \hat{x}_{p-1}\right), \quad P_p = \left(I - K_p \tilde{H}_p\right) P_{p-1}.$

Slide 27: Let $\hat{x}_k = \hat{x}_p$ and $P_k = P_p$. Time update to $t_{k+1}$ and repeat this procedure.
Note: $R_k$ must be a diagonal matrix. Why? Scalar processing treats the elements of $\mathbf{y}$ as independent. If $R_k$ is not diagonal, we would first apply a "whitening transformation," as described on the next slides.
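To make the bookkeeping concrete, here is a minimal MATLAB sketch of the scalar-processing loop described on Slides 24-27 (the function and variable names are illustrative, not from the lecture; R is assumed diagonal as required):

```matlab
% Sketch of Slides 24-27: process a p-vector of observations one
% element at a time (names are illustrative, not from the lecture).
% xbar, Pbar: state and covariance after the time update at t_k
% y (p x 1), Htilde (p x n), R (p x p, must be diagonal)
function [xhat, P] = scalar_update(xbar, Pbar, y, Htilde, R)
    xhat = xbar;  P = Pbar;
    n = length(xbar);
    for i = 1:length(y)
        Hi = Htilde(i,:);                  % i-th observation row
        Ki = P*Hi' / (Hi*P*Hi' + R(i,i));  % scalar divide, no matrix inverse
        xhat = xhat + Ki*(y(i) - Hi*xhat); % measurement update
        P = (eye(n) - Ki*Hi) * P;          % no time update between elements
    end
end
```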

Slides 28-30: Whitening transformation. Suppose the observations are
$\mathbf{y} = \tilde{H}\mathbf{x} + \boldsymbol{\epsilon}, \qquad E[\boldsymbol{\epsilon}] = 0, \qquad E\left[\boldsymbol{\epsilon}\boldsymbol{\epsilon}^T\right] = R \qquad (1)$
where $R$ is any sort of observation error covariance (in general, not diagonal).
Factor $R = SS^T$, where $S$ is the square root of $R$; $S$ is chosen to be upper triangular. Multiply Eq. (1) by $S^{-1}$:
$S^{-1}\mathbf{y} = S^{-1}\tilde{H}\mathbf{x} + S^{-1}\boldsymbol{\epsilon}$
and let
$\mathbf{y}' = S^{-1}\mathbf{y}, \qquad \tilde{H}' = S^{-1}\tilde{H}, \qquad \boldsymbol{\epsilon}' = S^{-1}\boldsymbol{\epsilon}.$

Slide 31: Now
$E[\boldsymbol{\epsilon}'] = S^{-1}E[\boldsymbol{\epsilon}] = 0, \qquad E\left[\boldsymbol{\epsilon}'\boldsymbol{\epsilon}'^T\right] = S^{-1}E\left[\boldsymbol{\epsilon}\boldsymbol{\epsilon}^T\right]S^{-T} = S^{-1}SS^TS^{-T} = I.$
Hence, the new observation has an error with zero mean and unit variance. We would now process the new observation $\mathbf{y}'$ and use $\tilde{H}'$ with $R' = I$.
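A minimal MATLAB sketch of the whitening transformation (the matrices are illustrative stand-ins; chol's lower-triangular factor is used here, but any triangular square root of R serves the same purpose):

```matlab
% Sketch of the whitening transformation (illustrative numbers).
R = [4 2; 2 3];          % non-diagonal observation error covariance
H = [1 0; 1 1];          % observation-state mapping (illustrative)
y = [1.2; 0.7];          % observation vector (illustrative)
S  = chol(R, 'lower');   % R = S*S'; lower-triangular square root of R
yw = S \ y;              % y' = S^{-1} y
Hw = S \ H;              % H' = S^{-1} H
Rw = S \ R / S';         % = S^{-1} R S^{-T} = I up to round-off
% yw can now be processed one element at a time with R' = I.
```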

Slide 32:
 Notice the decomposition $R = SS^T$ in the whitening algorithm.
◦ S is a "square root" of R.
 There are several ways of computing S.
 A good way is the Cholesky decomposition (Section 5.2).
 Major benefit of S vs. R: the condition number of S is the square root of that of R. If the condition number of R is large, say $10^{16}$, the condition number of S has half the exponent, $10^8$.
 It is much more accurate to invert S than R.

Slide 33: Cholesky decomposition.
 Let M be a symmetric positive definite matrix, and let R be an upper triangular matrix computed such that $M = R^TR$.
 By inspection,
$r_{11} = \sqrt{m_{11}}, \qquad r_{1j} = m_{1j}/r_{11} \quad (j = 2, \ldots, n),$
$r_{22} = \sqrt{m_{22} - r_{12}^2}, \qquad r_{2j} = \left(m_{2j} - r_{12}r_{1j}\right)/r_{22},$
etc. See Section 5.2.1.

Slide 34: Example: given a symmetric positive definite M, use Matlab's chol function to compute R with $M = R^TR$.
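The matrix from the original slide did not survive the transcript; this snippet uses a stand-in symmetric positive definite M to show the call and the conditioning benefit noted on Slide 32:

```matlab
M = [4 2 0;             % a stand-in symmetric positive definite matrix
     2 5 1;             % (the matrix on the original slide was not
     0 1 3];            %  preserved in this transcript)
Rc = chol(M);           % upper triangular, M = Rc'*Rc
norm(M - Rc'*Rc)        % ~1e-15: round-off level
[cond(M), cond(Rc)^2]   % cond(Rc) = sqrt(cond(M))
```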

Slide 35:
 Consider the case where we have the least squares condition (the normal equations)
$M\hat{x} = N, \qquad M = H^TWH, \qquad N = H^TWy.$
 If M is poorly conditioned, then inverting M will not produce good results.
 If we decompose M first, then the accuracy of the solution will improve.
 Let $M = R^TR$ and $z = R\hat{x}$.
 Then $R^Tz = N$.
 It is easy to forward-solve $R^Tz = N$ for z, and then we can backward-solve $R\hat{x} = z$ for $\hat{x}$.
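A minimal MATLAB sketch of this forward/backward substitution (the data are illustrative; in practice M and N come from the batch accumulation):

```matlab
% Solving the normal equations M*xhat = N via Cholesky (illustrative data).
H = [1 1; 1 2; 1 3];  W = eye(3);  y = [2; 3; 5];
M = H'*W*H;   N = H'*W*y;  % normal equations from the batch processor
Rc = chol(M);              % M = Rc'*Rc, Rc upper triangular
z    = Rc' \ N;            % forward-solve:  Rc'*z = N
xhat = Rc  \ z;            % backward-solve: Rc*xhat = z
% Same answer as M\N, but M itself is never inverted.
```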

Slide 36:
 Notice that the Cholesky decomposition includes square roots. These are very expensive operations!
 A square-root-free algorithm is given in Section 5.2.2.
 Idea: factor $M = UDU^T$, where U is unit upper triangular (ones on the diagonal) and D is diagonal, so that no square roots are required. A sketch follows.
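The lecture defers the algorithm to Section 5.2.2; the following is a sketch of one standard square-root-free factorization of this form (a hand-rolled illustration, not the textbook's exact algorithm):

```matlab
function [U, D] = udu_factor(M)
% Square-root-free factorization M = U*D*U', with U unit upper triangular
% and D diagonal (a sketch of the idea; see Section 5.2.2 for the
% textbook algorithm). M must be symmetric positive definite.
    n = size(M, 1);
    U = eye(n);
    D = zeros(n);
    for j = n:-1:1
        D(j,j) = M(j,j);
        U(1:j-1, j) = M(1:j-1, j) / D(j,j);
        % remove column j's contribution from the leading submatrix
        M(1:j-1, 1:j-1) = M(1:j-1, 1:j-1) ...
            - U(1:j-1, j) * D(j,j) * U(1:j-1, j)';
    end
end
% Usage: [U, D] = udu_factor(M); norm(M - U*D*U')   % ~ round-off level
```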

Slide 37:
 Processing an observation vector one element at a time
 Whitening
 Cholesky
 Quick break
 Next topic: Joseph, Potter, ...

Slide 38: A brief history:
 Least squares estimation began with Gauss.
 1960: Kalman's sequential approach
◦ Introduced minimum variance
◦ Introduced process noise
◦ Permitted covariance analyses without data
 Schmidt proposed a linearization method that would work for OD problems
◦ Supposed that linearizing around the best-estimate trajectory is better than linearizing around the nominal trajectory
 1970: Extended Kalman Filter
 Gradually, researchers identified problems:
◦ (a) Divergence due to the use of incorrect a priori statistics and unmodeled parameters
◦ (b) Divergence due to the presence of nonlinearities
◦ (c) Divergence due to the effects of computer round-off

Slide 39:
 Numerical issues can cause the covariance matrix to lose its symmetry and its positive definiteness.
 Possible corrections (a sketch of (b) and (c) follows this list):
◦ (a) Compute only the upper (or lower) triangular entries and force symmetry
◦ (b) Compute the entire matrix and then average the upper and lower fields
◦ (c) Periodically test and reset the matrix
◦ (d) Replace the optimal Kalman measurement update with other expressions (Joseph, Potter, etc.)
◦ (e) Use larger process noise and measurement noise covariances
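Minimal MATLAB sketches of corrections (b) and (c); the numbers, tolerance, and reset strategy are illustrative choices, not from the lecture:

```matlab
% Minimal sketches of corrections (b) and (c); numbers are illustrative.
P = [ 2.0       -1.0000001;   % a covariance corrupted by round-off:
     -0.9999999  0.5      ];  % slightly asymmetric, borderline definite
P = (P + P')/2;               % (b) average the upper and lower triangles
[~, notPD] = chol(P);         % (c) test positive definiteness
if notPD                      %     reset: floor the eigenvalues
    [V, E] = eig(P);
    P = V * diag(max(diag(E), 1e-12)) * V';
end
```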

Slide 40:
 Potter is credited with introducing square root factorization.
◦ It worked for the Apollo missions!
 1968: Andrews extended Potter's algorithms to include process noise and correlated measurements.
 1965-1969: Development of the Householder transformation
◦ It worked for Mariner 9 in 1971!
 1969: The Dyer-McReynolds filter added additional process noise effects.
◦ It worked for Mariner 10 in 1973 for Venus and Mercury!

Slide 41:
 Batch:
◦ One large matrix inversion with a potentially poorly conditioned matrix.
 Sequential:
◦ Lots and lots of matrix inversions with potentially poorly conditioned matrices.
 EKF:
◦ Divergence: if the observation data noise is too large.
◦ Saturation: if the covariance gets too small, new data stops influencing the solution.

Slide 42: The Joseph formulation.
 Replace
$P_k = \left(I - K_k\tilde{H}_k\right)\bar{P}_k$
with
$P_k = \left(I - K_k\tilde{H}_k\right)\bar{P}_k\left(I - K_k\tilde{H}_k\right)^T + K_kR_kK_k^T.$
 This formulation will always retain a symmetric matrix, but it may still lose positive definiteness.
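A minimal MATLAB comparison of the two updates (illustrative numbers). In exact arithmetic they agree; under round-off the Joseph form stays symmetric:

```matlab
% Conventional vs. Joseph measurement update (illustrative numbers).
Pbar = [2 0; 0 2];  H = [1 1];  R = 1;  n = 2;
K = Pbar*H' / (H*Pbar*H' + R);   % Kalman gain
P_conv = (eye(n) - K*H) * Pbar;  % conventional update
A      = eye(n) - K*H;
P_jos  = A*Pbar*A' + K*R*K';     % Joseph update: symmetric by construction
norm(P_conv - P_jos)             % zero in exact arithmetic
```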

Slide 43: Need to show that
$\left(I - K_k\tilde{H}_k\right)\bar{P}_k\left(I - K_k\tilde{H}_k\right)^T + K_kR_kK_k^T = \left(I - K_k\tilde{H}_k\right)\bar{P}_k.$
First, derive the Joseph formulation. Start from
$\hat{x}_k = \bar{x}_k + K_k\left(y_k - \tilde{H}_k\bar{x}_k\right).$
Substitute $y_k = \tilde{H}_kx_k + \epsilon_k$, subtract $x_k$ from both sides, and rearrange:
$\hat{x}_k - x_k = \left(I - K_k\tilde{H}_k\right)\left(\bar{x}_k - x_k\right) + K_k\epsilon_k.$

Slide 44: Form
$P_k = E\left[\left(\hat{x}_k - x_k\right)\left(\hat{x}_k - x_k\right)^T\right] = \left(I - K_k\tilde{H}_k\right)\bar{P}_k\left(I - K_k\tilde{H}_k\right)^T + K_kR_kK_k^T,$
where the cross terms vanish, $E\left[\left(\bar{x}_k - x_k\right)\epsilon_k^T\right] = 0$, since $\epsilon_k$ is independent of $\left(\bar{x}_k - x_k\right)$.

Slides 45-46: Next, show that this is the same as the conventional Kalman update. Expand:
$P_k = \bar{P}_k - K_k\tilde{H}_k\bar{P}_k - \bar{P}_k\tilde{H}_k^TK_k^T + K_k\left(\tilde{H}_k\bar{P}_k\tilde{H}_k^T + R_k\right)K_k^T.$
Substitute for $K_k$ in the last term, using $K_k = \bar{P}_k\tilde{H}_k^T\left(\tilde{H}_k\bar{P}_k\tilde{H}_k^T + R_k\right)^{-1}$:
$K_k\left(\tilde{H}_k\bar{P}_k\tilde{H}_k^T + R_k\right)K_k^T = \bar{P}_k\tilde{H}_k^TK_k^T.$
Hence
$P_k = \bar{P}_k - K_k\tilde{H}_k\bar{P}_k = \left(I - K_k\tilde{H}_k\right)\bar{P}_k.$
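This identity can also be checked symbolically; a sketch using MATLAB's Symbolic Math Toolbox (if available), for a generic 2x2 case:

```matlab
% Symbolic check that the Joseph form reduces to (I - K*H)*Pbar when K is
% the optimal gain (requires the Symbolic Math Toolbox).
syms p11 p12 p22 h1 h2 r real
Pbar = [p11 p12; p12 p22];
H    = [h1 h2];
K    = Pbar*H' / (H*Pbar*H' + r);
Pjos = (eye(2) - K*H)*Pbar*(eye(2) - K*H)' + K*r*K';
simplify(Pjos - (eye(2) - K*H)*Pbar)   % -> [0 0; 0 0]
```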

Slide 47: Questions on Joseph? Next: Demo!

Slide 48: The following example problem from Bierman (1977) illustrates the numerical characteristics of several algorithms we have studied to date. Consider the problem of estimating $x_1$ and $x_2$ from the scalar measurements $y_1$ and $y_2$:
$y_1 = x_1 + \epsilon x_2 + \nu_1$
$y_2 = x_1 + x_2 + \nu_2$
where $\nu_1$ and $\nu_2$ are uncorrelated zero-mean random variables with unit variance. If they did not have the above traits, we could perform a whitening transformation so that they do. In matrix form,
$\mathbf{y} = H\mathbf{x} + \boldsymbol{\nu}, \qquad H = \begin{bmatrix} 1 & \epsilon \\ 1 & 1 \end{bmatrix}.$

Slide 49: The a priori covariance associated with our knowledge of $\mathbf{x}$ is assumed to be
$\bar{P} = \sigma^2 I,$
where $\sigma = 1/\epsilon$ and $\epsilon \ll 1$. The quantity $\epsilon$ is assumed to be small enough that computer round-off produces
$fl\left(1 + \epsilon^2\right) = 1, \qquad fl\left(1 + \epsilon\right) \neq 1.$
This estimation problem is well posed. The observation $y_1$ provides an estimate of $x_1$ which, when coupled with $y_2$, should accurately determine $x_2$. However, when the various data processing algorithms are applied, several diverse results are obtained.

Slide 50: Let the gain and estimation error covariance associated with $y_1$ be denoted as $K_1$ and $P_1$, respectively. Similarly, the measurement $y_2$ is associated with $K_2$ and $P_2$, respectively. Note that this is a linear parameter (constant) estimation problem. Hence,
$\Phi(t_i, t_j) = I$
and the time update leaves the estimate and covariance unchanged. We will process the observations one at a time. Hence,
$K_i = P_{i-1}H_i^T\left(H_iP_{i-1}H_i^T + 1\right)^{-1}, \qquad P_i = \left(I - K_iH_i\right)P_{i-1}, \qquad P_0 = \bar{P},$
where $H_i$ is the $i$th row of $H$.

Slide 51: Note: since we will process the observations one at a time, the matrix inversion in the previous equation is a scalar inversion. The exact solution for the first gain is
$K_1 = \frac{\sigma^2}{\sigma^2 + 2}\begin{bmatrix} 1 \\ \epsilon \end{bmatrix},$
where $\sigma^2\epsilon^2 = 1$ has been used to simplify $\sigma^2\left(1 + \epsilon^2\right) + 1 = \sigma^2 + 2$.

Slide 52: The estimation error covariance associated with processing the first data point is
$P_1 = \left(I - K_1H_1\right)\bar{P} = \frac{\sigma^2}{\sigma^2 + 2}\begin{bmatrix} 2 & -\sigma \\ -\sigma & \sigma^2 + 1 \end{bmatrix}.$

Slide 53: The exact solution for $P_2$ is given by processing the second observation:
$K_2 = P_1H_2^T\left(H_2P_1H_2^T + 1\right)^{-1} = \begin{bmatrix} -\epsilon \\ 1 + \epsilon \end{bmatrix} + O\left(\epsilon^2\right), \qquad P_2 = \left(I - K_2H_2\right)P_1,$
where $H_2 = \begin{bmatrix} 1 & 1 \end{bmatrix}$.

Slide 54: The conventional Kalman filter yields (using $fl\left(1 + \epsilon^2\right) = 1$, which also forces $fl\left(\sigma^2 + 1\right) = \sigma^2$)
$K_1 = \begin{bmatrix} 1 \\ \epsilon \end{bmatrix}, \qquad P_1 = \left(I - K_1H_1\right)\bar{P} = \begin{bmatrix} 0 & -\sigma \\ -\sigma & \sigma^2 \end{bmatrix}.$
Note that $P_1$ is no longer positive definite: the diagonals of a positive definite matrix must be positive. However, $P_1^{-1}$ does exist, since $\left|P_1\right| = -\sigma^2 \neq 0$.

Slide 55: Compute $P_2$ using the conventional Kalman filter:
$K_2 = \begin{bmatrix} -\epsilon \\ 1 + \epsilon \end{bmatrix} + O\left(\epsilon^2\right), \qquad P_2 = \left(I - K_2H_2\right)P_1 = \begin{bmatrix} -1 & 1 \\ 1 & -1 \end{bmatrix} + O\left(\epsilon\right).$
Now $P_2$ is not positive definite, nor does it have positive terms on the diagonal. In fact, the conventional Kalman filter has failed for these observations.

Slide 56: The Joseph formulation yields
$P_1 = \left(I - K_1H_1\right)\bar{P}\left(I - K_1H_1\right)^T + K_1K_1^T = \begin{bmatrix} 2 & -\sigma \\ -\sigma & \sigma^2 \end{bmatrix},$
$P_2 = \begin{bmatrix} 1 + 2\epsilon & -(1 + 3\epsilon) \\ -(1 + 3\epsilon) & 2 + 4\epsilon \end{bmatrix} + O\left(\epsilon^2\right),$
which agrees with the exact solution to order $\epsilon$.

Slides 57-58: The batch processor yields
$P_2 = \left(\bar{P}^{-1} + H^TH\right)^{-1} = \frac{1}{1 - 2\epsilon}\begin{bmatrix} 1 & -(1 + \epsilon) \\ -(1 + \epsilon) & 2 \end{bmatrix}$
after round-off. To order $\epsilon$, the exact solution for $P_2$ is
$P_2 = \begin{bmatrix} 1 + 2\epsilon & -(1 + 3\epsilon) \\ -(1 + 3\epsilon) & 2 + 4\epsilon \end{bmatrix}.$

Slide 59: Summary of results for $P_2$:
 Exact, to order $\epsilon$: $\begin{bmatrix} 1 + 2\epsilon & -(1 + 3\epsilon) \\ -(1 + 3\epsilon) & 2 + 4\epsilon \end{bmatrix}$
 Conventional Kalman: $\begin{bmatrix} -1 & 1 \\ 1 & -1 \end{bmatrix}$ (failed: indefinite, with negative diagonals)
 Joseph: $\begin{bmatrix} 1 + 2\epsilon & -(1 + 3\epsilon) \\ -(1 + 3\epsilon) & 2 + 4\epsilon \end{bmatrix}$ (agrees with the exact solution to order $\epsilon$)
 Batch: $\frac{1}{1 - 2\epsilon}\begin{bmatrix} 1 & -(1 + \epsilon) \\ -(1 + \epsilon) & 2 \end{bmatrix}$ (agrees with the exact solution to order $\epsilon$)
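Because e = 1e-8 satisfies fl(1 + e^2) = 1 in IEEE double precision, the example can be reproduced numerically. A MATLAB sketch, assuming the measurement setup reconstructed above:

```matlab
% Reproducing the Bierman example in IEEE double precision, where
% e = 1e-8 gives fl(1 + e^2) = 1 (a sketch, assuming the measurement
% setup reconstructed above).
e     = 1e-8;   sigma = 1/e;
Pbar  = sigma^2 * eye(2);
H     = [1 e;            % y1 = x1 + e*x2 + nu1
         1 1];           % y2 = x1 +   x2 + nu2
R     = 1;               % unit-variance, uncorrelated errors

P_conv = Pbar;  P_jos = Pbar;
for i = 1:2
    Hi = H(i,:);
    K      = P_conv*Hi' / (Hi*P_conv*Hi' + R);   % conventional Kalman
    P_conv = (eye(2) - K*Hi) * P_conv;
    K      = P_jos*Hi' / (Hi*P_jos*Hi' + R);     % Joseph
    A      = eye(2) - K*Hi;
    P_jos  = A*P_jos*A' + K*R*K';
end
P_batch = inv(inv(Pbar) + H'*H);                 % batch normal equations

disp(P_conv)   % ~[-1 1; 1 -1]: negative diagonals, the filter has failed
disp(P_jos)    % ~[ 1 -1; -1 2]: agrees with the exact solution to order e
disp(P_batch)  % ~[ 1 -1; -1 2]: also agrees to order e
```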

Slide 60: Announcements (repeated from Slide 2).

