1
Geology 5670/6670 Inverse Theory, 27 Feb 2015 © A.R. Lowry 2015
Read for Wed 25 Feb: Menke Ch 9 (163-188)

Last time: The Sensitivity Matrix (Revisited)

The sensitivity matrix, or kernel matrix, $\mathbf{G}$ can be thought of as the matrix of partial derivatives

$G_{ij} = \dfrac{\partial F_i(\mathbf{m})}{\partial m_j}$,

and this holds true regardless of the nature of the model equation/operator $F$!

If $F$ is linear, the derivatives are independent of the model parameters $\mathbf{m}$ and the second derivatives are zero, so the minimum-error solution is found in a single step.

If $F$ is nonlinear but the derivatives can be evaluated analytically, the derivatives depend on $\mathbf{m}$ and gradient-search methods iteratively search for the minimum error.

If analytical derivatives can't be found, the derivatives can be evaluated numerically using a forward difference:

$\dfrac{\partial F_i}{\partial m_j} \approx \dfrac{F_i(\mathbf{m} + \Delta m_j\,\hat{\mathbf{e}}_j) - F_i(\mathbf{m})}{\Delta m_j}$
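As a concrete illustration (not part of the original slide), here is a minimal Python sketch of the forward-difference evaluation of $\mathbf{G}$; the helper name `numerical_jacobian`, the step size `dm`, and the toy operator `F` are all illustrative assumptions.

```python
import numpy as np

def numerical_jacobian(F, m, dm=1e-6):
    """Forward-difference estimate of the sensitivity matrix
    G_ij = dF_i/dm_j for a forward operator F: R^M -> R^N."""
    F0 = np.asarray(F(m), dtype=float)
    G = np.zeros((F0.size, m.size))
    for j in range(m.size):
        m_pert = np.array(m, dtype=float)
        m_pert[j] += dm                      # perturb one parameter at a time
        G[:, j] = (np.asarray(F(m_pert), dtype=float) - F0) / dm
    return G

# Toy nonlinear operator F: R^2 -> R^2 (illustrative only)
F = lambda m: np.array([m[0] * np.exp(m[1]), m[0] + m[1] ** 2])
print(numerical_jacobian(F, np.array([2.0, 0.5])))
```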
2
For inequality constraints on an $L_2$ problem, use quadratic programming (QP). The general statement of the QP problem is:

Minimize $\frac{1}{2}\mathbf{m}^T \mathbf{H}\,\mathbf{m} + \mathbf{f}^T \mathbf{m}$ subject to $\mathbf{A}\mathbf{m} \le \mathbf{b}$.

So we want to express our problem in this form. Note that:

$\|\mathbf{d} - \mathbf{G}\mathbf{m}\|_2^2 = \mathbf{m}^T \mathbf{G}^T \mathbf{G}\,\mathbf{m} - 2\,\mathbf{d}^T \mathbf{G}\,\mathbf{m} + \mathbf{d}^T \mathbf{d}$
3
So our objective function to minimize is

$\frac{1}{2}\mathbf{m}^T \mathbf{H}\,\mathbf{m} + \mathbf{f}^T \mathbf{m}$ (i.e., let $\mathbf{H} = 2\,\mathbf{G}^T\mathbf{G}$; $\mathbf{f} = -2\,\mathbf{G}^T\mathbf{d}$),

since the constant term $\mathbf{d}^T\mathbf{d}$ does not affect the minimizer. Our constraints must be cast in the form $\mathbf{A}\mathbf{m} \le \mathbf{b}$. As with linear programming, we can treat quadratic programming as a black box and find a suitable algorithm (e.g., quadprog in Matlab) to solve it; a sketch follows below.
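Here is a minimal Python sketch (added for illustration, not from the slides) of recasting an inequality-constrained $L_2$ problem as a QP; the matrices `G`, `d`, `A`, `b` are made-up toy values, and SciPy's SLSQP solver stands in for a dedicated QP routine such as Matlab's quadprog.

```python
import numpy as np
from scipy.optimize import minimize

# Toy forward problem: d = G m + noise (values are illustrative only)
rng = np.random.default_rng(0)
G = rng.standard_normal((20, 3))
m_true = np.array([1.0, -0.5, 2.0])
d = G @ m_true + 0.05 * rng.standard_normal(20)

# QP form: minimize (1/2) m^T H m + f^T m, with H = 2 G^T G, f = -2 G^T d
H = 2.0 * G.T @ G
f = -2.0 * G.T @ d

def objective(m):
    return 0.5 * m @ H @ m + f @ m

# Inequality constraints A m <= b (here: each parameter <= 1.5, illustrative)
A = np.eye(3)
b = np.full(3, 1.5)

# SLSQP expects inequality constraints written as c(m) >= 0, i.e. b - A m >= 0
cons = {"type": "ineq", "fun": lambda m: b - A @ m}

res = minimize(objective, x0=np.zeros(3), method="SLSQP", constraints=cons)
print("constrained solution:      ", res.x)
print("unconstrained least squares:", np.linalg.solve(G.T @ G, G.T @ d))
```

Because the third true parameter (2.0) exceeds the bound of 1.5, the bound should be active there and the two printed solutions should differ.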
4
Stochastic Inversion: Suppose we know (or expect) something about the model parameters $\mathbf{m}$ before we begin the inversion, i.e., we know the expected value $\langle\mathbf{m}\rangle$ and an a priori covariance matrix $\mathbf{C}_m$. We can express this problem as

$\mathbf{d} = \mathbf{G}\left(\langle\mathbf{m}\rangle + \delta\mathbf{m}\right) + \mathbf{n}$

(where $\mathbf{n}$ is noise with covariance $\mathbf{C}_n$), in which the remainder part of the model has $\langle\delta\mathbf{m}\rangle = \mathbf{0}$. We seek to find a generalized inverse $\mathbf{G}^+$ that minimizes the mean square error:

$E\!\left[\left\|\delta\mathbf{m} - \widehat{\delta\mathbf{m}}\right\|_2^2\right]$

of the estimate:

$\widehat{\delta\mathbf{m}} = \mathbf{G}^+\left(\mathbf{d} - \mathbf{G}\langle\mathbf{m}\rangle\right)$
5
Expanding and re-writing using a few of our math tricks (with $\delta\mathbf{d} = \mathbf{d} - \mathbf{G}\langle\mathbf{m}\rangle = \mathbf{G}\,\delta\mathbf{m} + \mathbf{n}$), minimizing this is equivalent to minimizing:

$E\!\left[\mathrm{tr}\!\left(\left(\delta\mathbf{m} - \mathbf{G}^+\delta\mathbf{d}\right)\left(\delta\mathbf{m} - \mathbf{G}^+\delta\mathbf{d}\right)^T\right)\right]$

You can convince yourself (if so inclined) that the solution to this minimization problem is:

$\mathbf{G}^+ = \mathbf{C}_m \mathbf{G}^T\left(\mathbf{G}\,\mathbf{C}_m \mathbf{G}^T + \mathbf{C}_n\right)^{-1}$

or:

$\mathbf{G}^+ = \left(\mathbf{G}^T \mathbf{C}_n^{-1}\mathbf{G} + \mathbf{C}_m^{-1}\right)^{-1}\mathbf{G}^T \mathbf{C}_n^{-1}$

(Does this remind you of anything we've seen before?)
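A brief numerical check (added for illustration; the matrix sizes and covariances below are assumptions, not from the slides) that the two forms of $\mathbf{G}^+$ agree, and of the resulting estimate $\hat{\mathbf{m}} = \langle\mathbf{m}\rangle + \mathbf{G}^+\left(\mathbf{d} - \mathbf{G}\langle\mathbf{m}\rangle\right)$. The second form has the shape of a weighted damped least squares solution, which is one answer to the closing question.

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.standard_normal((15, 4))   # forward operator (toy values)
Cm = 0.5 * np.eye(4)               # a priori model covariance
Cn = 0.01 * np.eye(15)             # noise covariance

# Form 1: G+ = Cm G^T (G Cm G^T + Cn)^{-1}
Gplus1 = Cm @ G.T @ np.linalg.inv(G @ Cm @ G.T + Cn)

# Form 2: G+ = (G^T Cn^{-1} G + Cm^{-1})^{-1} G^T Cn^{-1}
Cn_inv = np.linalg.inv(Cn)
Gplus2 = np.linalg.inv(G.T @ Cn_inv @ G + np.linalg.inv(Cm)) @ G.T @ Cn_inv

print("forms agree:", np.allclose(Gplus1, Gplus2))

# Synthetic model drawn from the prior, then estimated from noisy data
m_prior = np.zeros(4)
m_true = m_prior + np.linalg.cholesky(Cm) @ rng.standard_normal(4)
d = G @ m_true + np.linalg.cholesky(Cn) @ rng.standard_normal(15)
m_hat = m_prior + Gplus1 @ (d - G @ m_prior)
print("m_true:", m_true)
print("m_hat :", m_hat)
```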