ASEN 5070: Statistical Orbit Determination I Fall 2015


1 ASEN 5070: Statistical Orbit Determination I Fall 2015
Professor Brandon A. Jones Lecture 35: Fixed-Interval Smoothing

2 Announcements
Homework 10 due on Friday.
Lecture quiz due by 5 pm on Friday; already posted to D2L.

3 Homework 10 Expected Accuracy

4 Homework 10 Expected Results
The following slides outline the expected agreement between your filter and the online solutions. We do not expect identical plots, but this illustrates roughly what you should get.

5 CKF Relative Difference at 18,340 s

6 CKF Relative Difference with Batch at Epoch

7 Lecture Quiz 7

8 Question 1 Percent Correct: 76%
When analyzing the performance of a filter, one output to consider when characterizing that performance is the residuals. When assessing accuracy based on post-fit residual ratios, we expect random values that are mostly between -3 and 3.
True
False

9 Question 2 Percent Correct: 63%
Assume we have a scenario where we want to estimate the position and velocity of a satellite and the height of a table in the Engineering Center, for a total of n = 7 state parameters. We are given m >> 7 independent observations of the range and range-rate of the satellite taken from a station in Hawaii (p = 2). Hence, in this case, the H-tilde matrix is 2x7 and there are many observation times. We use the batch processor and accumulate the sum of H^T*H for each observation time. There is no a priori information available. What conclusion can we draw from this scenario?
The system is observable and the accumulated information matrix is rank n (17%)
The system is unobservable and the accumulated information matrix is rank n (12%)
The system is unobservable and the accumulated information matrix is rank p-1 (7%)
The system is unobservable and the accumulated information matrix is rank n-1 (63%)

10 Question 3 Percent Correct: 56%
Consider the scenario where:
There are two spacecraft in Earth orbit.
The two vehicles are in a two-body gravity field.
There is no modeling error.
We have an infinite-precision computer.
Measurement error is negligible, with a small variance and zero mean.
Range observations are gathered relative to a ground station with a fixed location in inertial space.
We make no other assumptions about the scenario.
Which of the following statements apply to this scenario?
The position and velocity states of both spacecraft are observable (27%)
The position and velocity states of neither spacecraft are observable (56%)
We can estimate the state of one spacecraft at a time (15%)
If we add a third spacecraft to the scenario, then we can estimate the states of all three (2%)

11 Question 4 Percent Correct: 92%
When characterizing the performance of a filter by inspecting the covariance matrix, we should take a look at:
The variances on the diagonal of the matrix
The correlation coefficients using the off-diagonal terms
Both of the above
Neither of the above

12 Question 5 Percent Correct: 76%
In the method of orthogonal transformations, we generate the matrix Q such that QH = [ R; 0 ] and the resulting R matrix is orthogonal.
True
False

13 Probability Ellipsoids (Revisit)

14 The Probability Ellipsoid

15 The Probability Ellipsoid
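The equations on these slides appear only as images in the transcript. For orientation only, a standard definition of the probability ellipsoid about the estimate (not necessarily the exact form on the slides) is the locus

\[ (\mathbf{x} - \hat{\mathbf{x}})^T P^{-1} (\mathbf{x} - \hat{\mathbf{x}}) = \ell^2 \]

where P is the state error covariance and \ell = 1, 2, 3 gives the 1-, 2-, and 3-sigma surfaces.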

16 2D Example

17 2D Example
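The 2D example itself is a figure not captured in the transcript. As an illustrative sketch only, with made-up numbers (the covariance P below is hypothetical, not taken from the slides), the axes of a 2D probability ellipse follow from the eigendecomposition of the 2x2 covariance:

import numpy as np

# Hypothetical 2x2 position covariance (illustrative values only)
P = np.array([[4.0, 1.5],
              [1.5, 1.0]])

# Eigendecomposition: eigenvectors give the principal directions of the ellipse,
# square roots of the eigenvalues give the 1-sigma semi-axis lengths.
vals, vecs = np.linalg.eigh(P)          # eigenvalues returned in ascending order
semi_axes = np.sqrt(vals)               # 1-sigma semi-axis lengths
major_dir = vecs[:, -1]                 # eigenvector of the largest eigenvalue
angle_deg = np.degrees(np.arctan2(major_dir[1], major_dir[0]))

print("1-sigma semi-axes:", semi_axes)
print("major-axis orientation (deg):", angle_deg)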

18 Fixed Interval Smoothing

19 Motivation
The batch processor provides an estimate based on the full span of data. When including process noise, we lose the equivalence between the batch and the sequential processors. Is there some way to update the estimated state using information gained from future observations?

20 Forward-Backward Smoothing
Forward-backward smoothing is a method by which a state estimate (and optionally, the covariance) may be constructed using observations taken both before and after the epoch.
Step 1: Process all observations forward in time using a CKF with process noise (SNC, DMC, etc.).
Step 2: Starting with the last observation processed, smooth backward through the observations.

21 Our Notation for Smoothing
As presented in the book, the most common source of confusion for smoothing is the notation. For a smoothed value/vector/matrix, the subscript is the time of the current estimate and the superscript indicates which observations it is based on; for example, \hat{x}_k^\ell is the estimate at time t_k based on observations up to and including t_\ell. Indexing errors are the main source of software bugs!

22 Smoothing visualization
Process observations forward in time: If you were to process them backward in time (given everything needed to do that):

23 Smoothing visualization
Process observations forward in time: If you were to process them backward in time (given everything needed to do that):

24 Smoothing visualization
Smoothing does not actually combine the forward and backward passes, but thinking of it that way helps conceptualize what smoothing does. Smoothing results in a much more consistent solution over time, and it yields an optimal estimate that uses all of the observations.

25 Smoothing
Caveats:
If you use process noise or some other way to increase the covariance, the result is that the optimal estimate at any time really only pays attention to nearby observations. While this is good, it also means smoothing doesn't always have a big effect.
Smoothing shouldn't remove the white noise found on the signals. It's not a "cleaning" function; it's a "use all the data for your estimate" function.

26 Smoothing of State Estimate
First, we use the state smoothing relation; if Q = 0, it reduces to the CKF mapping relationship (see the sketch below).
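The relations themselves appear only as images on the original slide. As a sketch in the notation above (the standard fixed-interval smoother form, which may differ in detail from the slide), the smoothed state satisfies

\[ \hat{x}_k^\ell = \hat{x}_k + S_k \left( \hat{x}_{k+1}^\ell - \bar{x}_{k+1} \right), \qquad S_k = P_k \, \Phi^T(t_{k+1}, t_k) \, \bar{P}_{k+1}^{-1} \]

where \hat{x}_k, P_k are the filtered (CKF) estimate and covariance at t_k, and \bar{x}_{k+1}, \bar{P}_{k+1} are the time-updated (predicted) quantities at t_{k+1}.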

27 Smoothing of State Estimate
Hence, during the forward pass of the CKF we store, at each observation time, the quantities needed by the backward recursion: typically the filtered state and covariance, the state transition matrix, and the time-updated covariance.

28 Smoothing of Covariance
Optionally, we may smooth the state error covariance matrix
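The covariance relation on the slide is likewise an image; the standard form consistent with the state relation sketched above (again, possibly differing in detail from the slide) is

\[ P_k^\ell = P_k + S_k \left( P_{k+1}^\ell - \bar{P}_{k+1} \right) S_k^T \]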

29 Smoothing Algorithm

30 Smoothing Algorithm
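The algorithm on these two slides is given as equation images that are not reproduced here. Purely as an illustrative sketch, the backward pass of a standard fixed-interval smoother consistent with the relations above could be written as follows; the function name and input layout are placeholders, not from the slides.

import numpy as np

def fixed_interval_smoother(x_hat, P, Phi, P_bar):
    """Backward pass of a standard fixed-interval smoother.

    x_hat : list of filtered state estimates, x_hat[k] at time t_k
    P     : list of filtered covariances P[k]
    Phi   : list of state transition matrices, Phi[k] = Phi(t_{k+1}, t_k)
    P_bar : list of time-updated covariances, P_bar[k] = Phi[k] P[k] Phi[k]^T + Gamma Q Gamma^T
    Returns the smoothed states and covariances based on all observations.
    """
    N = len(x_hat)
    x_s = [None] * N
    P_s = [None] * N
    x_s[-1] = x_hat[-1]               # at the final time, smoothed = filtered
    P_s[-1] = P[-1]
    for k in range(N - 2, -1, -1):    # sweep backward through the observations
        S_k = P[k] @ Phi[k].T @ np.linalg.inv(P_bar[k])
        x_bar_next = Phi[k] @ x_hat[k]
        x_s[k] = x_hat[k] + S_k @ (x_s[k + 1] - x_bar_next)
        P_s[k] = P[k] + S_k @ (P_s[k + 1] - P_bar[k]) @ S_k.T
    return x_s, P_s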

31 Smoothing If we suppose that there is no process noise (Q=0), then the smoothing algorithm reduces to the CKF mapping relationships:
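The mapping relationships themselves are not reproduced in the transcript; in the sketched form above, Q = 0 gives \bar{P}_{k+1} = \Phi P_k \Phi^T, hence S_k = \Phi^{-1}(t_{k+1}, t_k) = \Phi(t_k, t_{k+1}), and the smoother collapses to

\[ \hat{x}_k^\ell = \Phi(t_k, t_{k+1}) \, \hat{x}_{k+1}^\ell, \qquad P_k^\ell = \Phi(t_k, t_{k+1}) \, P_{k+1}^\ell \, \Phi^T(t_k, t_{k+1}) \]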

32 An example: 4-41 and 4-42 Book p. 283

33 An example: 4-41 and 4-42 Book p. 284

34 Smoothing Say there are 100 observations
We want to construct new estimates using all data, i.e.,
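The expression following "i.e." is an equation image not captured here; in the notation above it would presumably be the full set of smoothed estimates

\[ \hat{x}_k^{100}, \qquad k = 1, 2, \ldots, 100 \]

that is, the estimate at every observation time based on all 100 observations.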

35 Smoothing Say there are 100 observations

36 Smoothing Say there are 100 observations

37 Smoothing Say there are 100 observations

