Slide 1
Geology 5670/6670 Inverse Theory, 4 Feb 2015 © A.R. Lowry 2015
Read for Fri 6 Feb: Menke Ch 4 (69–88)

Last time: The Generalized Inverse

The Generalized Inverse uses Singular Value Decomposition to recast the problem in terms of the p non-zero eigenvalues & eigenvectors of the system of equations. The resulting singular value decomposition of G is

G = U_p Λ_p V_p^T ,   p ≤ min(N, M)

This yields pseudoinverses:

G^(-g) = (G^T G)^(-1) G^T   for OLS (p = M)
G^(-g) = G^T (G G^T)^(-1)   for over-parameterized (p = N)
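The SVD form of the generalized inverse can be sketched in a few lines of NumPy. This is a minimal illustration, not from the slides: the matrix G and data d below are hypothetical, and for a full-column-rank G (p = M) the generalized inverse must reproduce the ordinary least-squares solution.

```python
import numpy as np

# Hypothetical over-determined system: N = 4 data, M = 2 parameters.
G = np.array([[1.0, 2.0],
              [2.0, 1.0],
              [1.0, 1.0],
              [3.0, 1.0]])
d = np.array([1.0, 2.0, 1.5, 3.0])

# Compact SVD: G = U_p @ diag(s) @ Vt, singular values sorted descending.
U, s, Vt = np.linalg.svd(G, full_matrices=False)

# Keep only the p non-zero singular values.
p = int(np.sum(s > 1e-12 * s[0]))

# Generalized inverse: G^(-g) = V_p Λ_p^(-1) U_p^T
Ginv = Vt[:p].T @ np.diag(1.0 / s[:p]) @ U[:, :p].T
m = Ginv @ d

# For full-column-rank G (p = M) this matches OLS: (G^T G)^(-1) G^T d
m_ols = np.linalg.solve(G.T @ G, G.T @ d)
print(np.allclose(m, m_ols))  # True
```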
Slide 2
Aside on solution length: Suppose we have a simple over-parameterized gravity problem that can be solved exactly by two different models. The solution with the shorter length is smoother and varies less wildly — like what one expects in the real Earth, where most fields are "self-similar" and vary according to fractal statistics: properties that are spatially or temporally nearer together tend to be more similar.

[Figure: two exact-fitting solutions over model cells 1–4; the shorter-length solution is the smoother one.]
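The preference for short solutions can be seen in a toy under-determined system (the 2×4 "gravity" matrix here is hypothetical): both models below fit the data exactly, but the minimum-length solution from the generalized inverse is the smoother of the two.

```python
import numpy as np

# Hypothetical under-determined problem: 2 data, 4 model cells.
G = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0]])
d = np.array([2.0, 2.0])

# One exact solution: concentrate everything in two cells (rough).
m_rough = np.array([2.0, 0.0, 2.0, 0.0])

# Minimum-length exact solution via the generalized inverse (smooth).
m_smooth = np.linalg.pinv(G) @ d   # -> [1, 1, 1, 1]

# Both fit the data exactly, but m_smooth has the smaller m^T m.
assert np.allclose(G @ m_rough, d) and np.allclose(G @ m_smooth, d)
print(np.linalg.norm(m_rough), np.linalg.norm(m_smooth))
```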
Slide 3
In the case of the over-parameterized (minimum-structure) SVD, the model resolution matrix

R = V_p V_p^T

is not the M-rank identity matrix (because V_p is semi-orthogonal!), reflecting the non-uniqueness of the solution. (The difference from I is a measure of how non-unique!) The data resolution matrix in this case,

N = U_p U_p^T = I

is the identity matrix because the data are fit exactly…
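Both resolution matrices are easy to form from the compact SVD. A small sketch with a hypothetical 2×3 system (N = 2 data, M = 3 parameters, so p = N): the model resolution matrix falls short of the identity while the data resolution matrix hits it exactly.

```python
import numpy as np

# Hypothetical over-parameterized system: N = 2, M = 3, p = N = 2.
G = np.array([[1.0, 0.5, 0.0],
              [0.0, 0.5, 1.0]])

U, s, Vt = np.linalg.svd(G, full_matrices=False)
p = int(np.sum(s > 1e-12 * s[0]))
Vp = Vt[:p].T          # M x p, semi-orthogonal: Vp^T Vp = I but Vp Vp^T != I
Up = U[:, :p]          # N x p

R = Vp @ Vp.T          # model resolution matrix (M x M)
Nd = Up @ Up.T         # data resolution matrix (N x N)

print(np.allclose(R, np.eye(3)))   # False: solution is non-unique
print(np.allclose(Nd, np.eye(2)))  # True: data are fit exactly
```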
Slide 4
The Generalized Inverse

G^(-g) = V_p Λ_p^(-1) U_p^T ,   p ≤ min(N, M)

has the dual properties that it minimizes both the residual length e^T e and the solution length m^T m.
Slide 5
A common application of GI-SVD, in the presence of noisy data, is to limit the eigenvalue & eigenvector matrices to the first p singular values for which λ_p exceeds some noise-determined cutoff (& zero the rest, assuming that they are "in the noise"). Recall the parameter covariance matrix (for constant data variance σ²):

C_m = σ² V_p Λ_p^(-2) V_p^T

The trace of C_m gives a measure of solution variance:

tr(C_m) = σ² Σ_(i=1..p) 1/λ_i²

Recall for OLS, C_m = σ² (G^T G)^(-1), so this does reduce solution variance (& by a lot if the smallest λ_i are very small!)
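The variance reduction from truncation can be checked numerically. A minimal sketch, assuming a hypothetical near-singular G (one tiny singular value) and unit data variance: dropping the small singular value removes its enormous 1/λ² contribution to tr(C_m).

```python
import numpy as np

# Hypothetical ill-conditioned G: second column nearly equals the first.
G = np.array([[1.0, 1.0],
              [1.0, 1.001],
              [1.0, 0.999]])
U, s, Vt = np.linalg.svd(G, full_matrices=False)
sigma2 = 1.0                # assumed constant data variance

# Solution variance tr(C_m) = sigma^2 * sum(1/lambda_i^2).
var_full = sigma2 * np.sum(1.0 / s**2)

# Truncate to p = 1, assuming the small singular value is "in the noise".
p = 1
var_trunc = sigma2 * np.sum(1.0 / s[:p]**2)

print(var_trunc < var_full)  # True: truncation slashes solution variance
```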
Slide 6
BUT this comes at a cost. The model resolution matrix

R = V_p V_p^T

relates the model estimate to the true model via:

m̂ = R m

It provides a measure of how well individual model parameters can be resolved.

[Figure: example resolution matrices — a near-diagonal R is desirable; a smeared-out R, not so much.]
Slide 7
The model resolution matrix R = V_p V_p^T relates the model estimate to the true model via m̂ = R m. In the absence of noise, each element of m̂ is a weighted average of the true parameters. The spread of R gives a measure of model resolution:

spread(R) = ||R − I||² = Σ_i Σ_j (R_ij − δ_ij)²

And this measure grows as p is reduced.
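The growth of spread(R) with decreasing p can be demonstrated directly. A sketch with a hypothetical random full-rank 3×3 system: at p = M the spread is zero (R = I), and each dropped singular vector adds one unit of spread.

```python
import numpy as np

def spread(R):
    """Squared Frobenius distance of R from the identity: sum (R_ij - delta_ij)^2."""
    return float(np.sum((R - np.eye(R.shape[0]))**2))

# Hypothetical full-rank 3x3 system (seeded random for reproducibility).
G = np.random.default_rng(0).normal(size=(3, 3))
U, s, Vt = np.linalg.svd(G)

spreads = []
for p in (3, 2, 1):            # progressively harsher truncation
    Vp = Vt[:p].T
    spreads.append(spread(Vp @ Vp.T))

print(spreads)                 # grows as p is reduced
```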
Slide 8
So we have a trade-off between resolution and variance: as p decreases, solution variance tr(C_m) decreases, but model irresolution spread(R) increases.

[Figure: trade-off curve of solution variance versus model irresolution for decreasing p.]

This trade-off (degraded model resolution is required to get reduced solution variance) is an inherent limitation of all inverse problems…
Slide 9
Damped Least Squares (Menke §3.8–3.9)

Suppose we have an over-determined problem that is ill-conditioned (i.e., λ_M << 1), so the determinant of G^T G is close to zero. Can we reduce solution variance without throwing away parameters?

Idea: Combine a minimization of e^T e and m^T m for the over-determined (least-squares) problem! Define a new objective function that combines residual length & solution length:

E = e^T e + β² m^T m

To minimize, set ∂E/∂m = 0.
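The combined objective is easy to minimize in practice. A minimal sketch, assuming a hypothetical nearly-collinear design matrix and an arbitrary damping value β² = 10⁻³: minimizing E = e^T e + β² m^T m leads to the normal equations (G^T G + β² I) m = G^T d, and the damped solution can never be longer than the undamped least-squares one.

```python
import numpy as np

# Hypothetical ill-conditioned over-determined problem:
# second column is tiny, so the columns are nearly collinear.
rng = np.random.default_rng(1)
G = np.column_stack([np.ones(10), np.linspace(0.0, 1e-4, 10)])
d = G @ np.array([1.0, 2.0]) + 0.01 * rng.normal(size=10)

beta2 = 1e-3   # assumed damping parameter beta^2

# Minimize E = e^T e + beta^2 m^T m  =>  (G^T G + beta^2 I) m = G^T d
m_dls = np.linalg.solve(G.T @ G + beta2 * np.eye(2), G.T @ d)

# Undamped least-squares solution for comparison.
m_ls = np.linalg.lstsq(G, d, rcond=None)[0]
print(np.linalg.norm(m_dls) <= np.linalg.norm(m_ls))  # True: damping shrinks m^T m
```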
Slide 10
Recall e = d − Gm, so:

E = (d − Gm)^T (d − Gm) + β² m^T m

or, setting ∂E/∂m = 0:

(G^T G + β² I) m = G^T d

Thus, the pseudoinverse for damped least squares (DLS) is:

G^(-g) = (G^T G + β² I)^(-1) G^T

The condition number for OLS is λ₁²/λ_M². Identity: if the eigenvalues of A are λ_i, the eigenvalues of A + kI are λ_i + k. So the condition number for DLS is

(λ₁² + β²) / (λ_M² + β²)
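The eigenvalue-shift identity makes the conditioning improvement concrete. A quick numerical check with hypothetical squared singular values (λ₁² = 4, λ_M² = 10⁻⁶) and damping β² = 10⁻²:

```python
import numpy as np

# Hypothetical squared singular values of G (eigenvalues of G^T G):
lam2 = np.array([4.0, 1e-6])   # very ill-conditioned
beta2 = 1e-2                   # assumed damping parameter beta^2

# OLS inverts G^T G: condition number lambda_1^2 / lambda_M^2.
cond_ols = lam2.max() / lam2.min()

# DLS inverts G^T G + beta^2 I: each eigenvalue is shifted by beta^2.
cond_dls = (lam2.max() + beta2) / (lam2.min() + beta2)

print(cond_ols, cond_dls)      # damping drops the condition number dramatically
```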