ASEN 5070: Statistical Orbit Determination I Fall 2014 Professor Brandon A. Jones Lecture 25: Potter Algorithm and Decomposition Methods
Announcements/Reminders Homework 8 due Friday (10/31); lecture quizzes due by 5pm today, with the next one due by 5pm 10/31; Exam 2 – Friday, November 7
Xkcd Comic (http://xkcd.com/1132/)
Potter Algorithm (continued)
Covariance Condition Number The condition number of P may be described by the ratio of its largest to smallest eigenvalue, κ(P) = λmax/λmin. With p significant digits of arithmetic, there are estimation difficulties as κ(P) approaches 10^p. If we can’t change the condition number, is there something else we can do?
Square-Root Formulation Let W be a square root of the covariance, P = W W^T. For W, the condition number is κ(W) = √κ(P), so only half as many digits are at risk. Is there something we can do to instead operate on W ?
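A small MATLAB illustration of this conditioning advantage (the numbers below are assumed example values, not from the lecture):
P = diag([1e8, 1]);        % poorly conditioned covariance, cond(P) = 1e8
W = chol(P, 'lower');      % lower-triangular square-root factor, P = W*W'
cond(P)                    % ~1e8
cond(W)                    % ~1e4 = sqrt(cond(P))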
Time Update for W (one method)
Derivation so far…
Potter Algorithm Assumptions We must process the observations one at a time If we have multiple observations at a single time, this requires that R be diagonal. What can we do if the observations at a single time have a non-zero correlation?
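One common remedy, sketched below in MATLAB, is to "whiten" the observations with a square-root factor of R so that the transformed observation errors are uncorrelated with unit variance (a general technique; the variable names and the specific factorization are illustrative assumptions):
% Given y = H*x + eps with cov(eps) = R (non-diagonal, positive definite)
V  = chol(R, 'lower');   % factor R = V*V'
yw = V \ y;              % whitened observations
Hw = V \ H;              % whitened observation-state mapping matrix
% cov(V\eps) = inv(V)*R*inv(V)' = I, so the whitened observations can be
% processed one at a time with unit variance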
Potter Square-Root Filter Derivation
Potter Measurement Update Process the observations one at a time. Repeat if multiple observations are available at a single time. More computationally expensive than the Kalman filter, but more accurate. W after the measurement update is not triangular! (Important for some algorithms.) This motivates the derivation of the triangular square-root method (pp. 335-340).
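A minimal MATLAB sketch of one scalar-observation Potter update, following the standard mechanization in the textbook (variable names are illustrative; xbar and Wbar are the a priori state and square-root covariance, and y, Htilde, and R are the scalar observation, its 1×n mapping matrix, and its variance):
Ftilde = Wbar' * Htilde';                 % n-by-1
alpha  = 1 / (Ftilde' * Ftilde + R);      % inverse of the innovation variance
K      = alpha * (Wbar * Ftilde);         % gain, n-by-1
xhat   = xbar + K * (y - Htilde * xbar);  % state update
gamma  = 1 / (1 + sqrt(R * alpha));
W      = Wbar - gamma * (K * Ftilde');    % updated square root (not triangular)
For multiple observations at one epoch, loop over the rows of Htilde and the elements of a diagonal R, using the updated xhat and W as the a priori values for the next observation.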
Square-Root Methods Provide improved numeric stability. The methods are defined by the square-root factorization of the covariance, P = W W^T. The Potter algorithm assumed the processing of one measurement at a time.
How do we get W ? If we are given P as a priori information, how do we get W ? If P is diagonal, this is easy: take the square root of each diagonal element, W_ii = √P_ii. Great, but what if it isn’t diagonal? Cholesky decomposition
Cholesky Decomposition Cholesky decomposition of a p.d. matrix: P = W W^T with W lower triangular (equivalently P = R^T R with R upper triangular). MATLAB: chol
Solution Algorithm Algorithm found in book Eq. 5.2.6. Implementations are readily available in most high-level languages. MATLAB: chol. Be sure to check the documentation for default behavior (lower or upper).
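A small MATLAB sanity check of the two conventions (example matrix assumed):
P  = [4 2; 2 3];             % symmetric positive definite
Rc = chol(P);                % default: upper triangular, P = Rc'*Rc
W  = chol(P, 'lower');       % lower triangular, P = W*W'
norm(P - Rc'*Rc)             % ~0
norm(P - W*W')               % ~0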
Cholesky-Based Least Squares
Weighted LS w/ A Priori Recall the weighted least squares with a priori information: xhat = (H^T R^-1 H + Pbar^-1)^-1 (H^T R^-1 y + Pbar^-1 xbar). Instead, we will write it as M xhat = N, with M = H^T R^-1 H + Pbar^-1 and N = H^T R^-1 y + Pbar^-1 xbar. M is the information matrix.
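A minimal MATLAB sketch of forming M and N (names are illustrative; Robs is the observation error covariance, Pbar and xbar the a priori covariance and state):
M = H' * (Robs \ H) + inv(Pbar);     % information matrix
N = H' * (Robs \ y) + Pbar \ xbar;   % normal-equation right-hand side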
Solution via Inversion Usually, we solve via matrix inversion: xhat = M^-1 N. If the number of estimated parameters is large, then this is expensive and possibly inaccurate. Example: estimating a gravity field of degree 360 gives n ≈ 129,600.
Solution via Cholesky Decomposition Instead, let’s write the equations in terms of the Cholesky decomposition M = R^T R, where R is upper triangular. R here is not the obs. error covariance matrix!
Solve for z Using Forward Substitution Defining z = R xhat, the normal equations become R^T z = N, which is solved for z by forward substitution (Eq. 5.2.7 in the book).
Solve for x Using Backward Substitution Then R xhat = z is solved for xhat by backward substitution (Eq. 5.2.8 in the book).
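A minimal MATLAB sketch of the two triangular solves, assuming M and N have already been formed (note that R here is the Cholesky factor, not the observation error covariance):
R    = chol(M);     % upper triangular, M = R'*R
z    = R' \ N;      % forward substitution:  R'*z = N    (Eq. 5.2.7)
xhat = R \ z;       % backward substitution: R*xhat = z  (Eq. 5.2.8)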
Covariance Matrix Solution We may also solve for the covariance matrix using the Cholesky decomposition: P = M^-1 = (R^T R)^-1 = S S^T, where S = R^-1.
Covariance Matrix Solution Using this directly still requires an n×n matrix inversion! Eq. 5.2.9 provides a simple algorithm to get S by leveraging the fact that S = R^-1 is also upper triangular and satisfies R S = I, so it can be computed by back substitution (Eq. 5.2.9 in the book gives the explicit element-by-element form).
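A sketch of the covariance computation from the same factor R; here a triangular solve stands in for the explicit element-by-element algorithm of Eq. 5.2.9 (both compute S = R^-1):
S = R \ eye(size(R));   % upper triangular, solves R*S = I
P = S * S';             % P = inv(M) = inv(R'*R) = S*S'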
SVD-Based Least Squares (not in book)
Singular Value Decomposition (SVD) The SVD of any real m×n matrix H is H = U S V^T, where U (m×m) and V (n×n) are orthogonal and S (m×n) is diagonal with the singular values of H on its diagonal.
Pseudoinverse via SVD
Pseudoinverse via SVD It turns out that we can solve the linear system y = H x using the pseudoinverse given by the SVD: H^+ = V S^+ U^T, where S^+ inverts the nonzero singular values of S.
LS Solution via SVD For the linear system y = H x + ε, the solution xhat = H^+ y = V S^+ U^T y minimizes the least squares cost function J(x) = ||y − H x||^2.
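A minimal MATLAB sketch of the SVD-based solution (H and y assumed given; H assumed to have full column rank):
[U, S, V] = svd(H, 'econ');    % H = U*S*V', economy size so S is n-by-n
xhat = V * (S \ (U' * y));     % xhat = V * inv(S) * U' * y = pinv(H)*y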
Improved Conditioning with SVD Recall that for the normal-equations solution we must form H^T H, and κ(H^T H) = κ(H)^2. This squares the condition number of H ! Instead, the SVD operates on H directly, thereby improving solution accuracy.
State Estimate Covariance via SVD The covariance matrix P with R the identity matrix is P = (H^T H)^-1 = V (S^T S)^-1 V^T. Home Practice Exercise: Derive the equation for P above.
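Continuing the sketch above, the covariance with R = I follows from the same SVD factors:
P = V * diag(1 ./ diag(S).^2) * V';   % P = inv(H'*H) = V * inv(S'*S) * V'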
Advantages/Disadvantages of SVD Solving the LS problem via SVD provides one of the most (if not the most) numerically stable solutions. It is also a square-root method (it does not square the condition number of H). However, generating the SVD is more computationally intensive than most other methods.