Lecture 7: Backus-Gilbert Generalized Inverse and the Trade-Off of Resolution and Variance



Syllabus
Lecture 01: Describing Inverse Problems
Lecture 02: Probability and Measurement Error, Part 1
Lecture 03: Probability and Measurement Error, Part 2
Lecture 04: The L2 Norm and Simple Least Squares
Lecture 05: A Priori Information and Weighted Least Squares
Lecture 06: Resolution and Generalized Inverses
Lecture 07: Backus-Gilbert Inverse and the Trade-Off of Resolution and Variance
Lecture 08: The Principle of Maximum Likelihood
Lecture 09: Inexact Theories
Lecture 10: Nonuniqueness and Localized Averages
Lecture 11: Vector Spaces and Singular Value Decomposition
Lecture 12: Equality and Inequality Constraints
Lecture 13: L1, L∞ Norm Problems and Linear Programming
Lecture 14: Nonlinear Problems: Grid and Monte Carlo Searches
Lecture 15: Nonlinear Problems: Newton's Method
Lecture 16: Nonlinear Problems: Simulated Annealing and Bootstrap Confidence Intervals
Lecture 17: Factor Analysis
Lecture 18: Varimax Factors, Empirical Orthogonal Functions
Lecture 19: Backus-Gilbert Theory for Continuous Problems; Radon's Problem
Lecture 20: Linear Operators and Their Adjoints
Lecture 21: Fréchet Derivatives
Lecture 22: Exemplary Inverse Problems, incl. Filter Design
Lecture 23: Exemplary Inverse Problems, incl. Earthquake Location
Lecture 24: Exemplary Inverse Problems, incl. Vibrational Problems

Purpose of the Lecture Introduce a new way to quantify the spread of resolution Find the Generalized Inverse that minimizes this spread Include minimization of the size of variance Discuss how resolution and variance trade off

Part 1 A new way to quantify the spread of resolution

A criticism of Dirichlet spread() functions, when m represents m(x), is that they don't capture the sense of "being localized" very well.

These two rows of the model resolution matrix R_ij (plotted against column index j) have the same Dirichlet spread, but the left case is better "localized".

old way (Dirichlet spread):
spread(R) = Σ_i Σ_j [R_ij − δ_ij]²

new way, adding a weight function w(i,j):
spread(R) = Σ_i Σ_j w(i,j) [R_ij − δ_ij]²

the Backus-Gilbert Spread Function

the weight w(i,j) grows with the physical distance between model parameters i and j, and is zero when i = j

if w(i,i) = 0, the diagonal term drops out and
spread(R) = Σ_i Σ_j w(i,j) R_ij²

for one spatial dimension, m is the discretized version of m(x):
m = [m(Δx), m(2Δx), …, m(MΔx)]^T
w(i,j) = (i − j)² would work fine

for two spatial dimensions, m is the discretized version of m(x,y) on a K⨉L grid:
m = [m(x_1, y_1), m(x_1, y_2), …, m(x_K, y_L)]^T
w(i,j) = (x_i − x_j)² + (y_i − y_j)² would work fine
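As a concrete check of these definitions, the sketch below builds the 1-D weight w(i,j) = (i − j)² and evaluates the Backus-Gilbert spread for a perfectly localized and a completely unlocalized resolution matrix. (Python/NumPy is used here rather than the lecture's MatLab, and the 5-parameter examples are assumptions for illustration, not values from the lecture.)

```python
import numpy as np

def bg_spread(R, w):
    """Backus-Gilbert spread: sum over i,j of w[i,j] * R[i,j]**2.
    The diagonal terms vanish because w[i,i] = 0."""
    return np.sum(w * R**2)

M = 5
i, j = np.meshgrid(np.arange(M), np.arange(M), indexing="ij")
w = (i - j)**2                       # grows with distance, zero when i = j

R_local = np.eye(M)                  # delta-like rows: perfectly localized
R_broad = np.full((M, M), 1.0 / M)   # uniform rows: completely unlocalized

print(bg_spread(R_local, w))   # 0.0 -- delta-like rows have zero spread
print(bg_spread(R_broad, w))   # ~4.0 -- broad rows are penalized
```

Note that both matrices satisfy the row-sum constraint Σ_j R_ij = 1, yet only the broad one is penalized; the Dirichlet spread would also distinguish them, but the w-weighting additionally penalizes sidelobes far from the diagonal more than near ones.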

Part 2 Constructing a Generalized Inverse

under-determined problem: find the G^-g that minimizes spread(R)

with the constraint
Σ_j R_ij = 1
(needed since R_ii is not constrained by the spread function, because w(i,i) = 0)

once again, solve for each row of G^-g separately. The spread of the kth row of the resolution matrix R is
J_k = Σ_j w(k,j) R_kj² = Σ_l Σ_m g^(k)_l [S^(k)]_lm g^(k)_m
with [S^(k)]_lm = Σ_j w(k,j) G_lj G_mj, symmetric in l and m

for the constraint: [1]_k = Σ_j R_kj = 1, which in terms of g^(k) is
u^T g^(k) = 1 with u_l = Σ_j G_lj

Lagrange multiplier equation:
Φ = Σ_l Σ_m g^(k)_l S_lm g^(k)_m + λ (1 − Σ_l u_l g^(k)_l)
now set ∂Φ/∂g^(k)_p = 0:

S g^(k) − λu = 0 (absorbing the factor of 2 into λ)
with g^(k)T the kth row of G^-g; solve simultaneously with u^T g^(k) = 1

putting the two equations together as one block matrix equation:
[ S    u ] [ g^(k) ]   [ 0 ]
[ u^T  0 ] [ −λ    ] = [ 1 ]

construct the inverse, with blocks A, b, c unknown:
[ S    u ]^-1   [ A    b ]
[ u^T  0 ]    = [ b^T  c ]

so g^(k) = b = S^-1 u / (u^T S^-1 u), and the kth row of G^-g is
g^(k)T = u^T S^-1 / (u^T S^-1 u)

Backus-Gilbert Generalized Inverse (analogous to the Minimum Length Generalized Inverse), written in terms of components:
[G^-g]_kl = Σ_i u_i [S^-1]_il / (Σ_i Σ_j u_i [S^-1]_ij u_j)
with u_i = Σ_j G_ij and S = S^(k), [S^(k)]_lm = Σ_j w(k,j) G_lj G_mj
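To make the row-by-row construction concrete, here is a minimal Python/NumPy sketch (Python in place of the lecture's MatLab; the random 3×8 kernel is an assumed toy problem, not from the lecture) that computes one row g^(k) = S^-1 u / (u^T S^-1 u) and checks the constraint Σ_j R_kj = 1:

```python
import numpy as np

def bg_row(G, k):
    """Row k of the Backus-Gilbert generalized inverse:
    g(k)T = uT S^-1 / (uT S^-1 u), with S_lm = sum_j w(k,j) G_lj G_mj."""
    N, M = G.shape
    u = G @ np.ones(M)               # u_l = sum over j of G_lj
    w = (np.arange(M) - k)**2        # w(k,j) = (k - j)^2
    S = G @ np.diag(w) @ G.T         # N x N, symmetric
    uSinv = np.linalg.solve(S, u)    # S^-1 u  (S assumed invertible)
    return uSinv / (u @ uSinv)       # g(k), length N

# toy under-determined problem: 3 data, 8 model parameters (values assumed)
rng = np.random.default_rng(0)
G = rng.standard_normal((3, 8))
g = bg_row(G, k=4)

R_k = g @ G                          # kth row of R = G^-g G
print(np.isclose(R_k.sum(), 1.0))    # True: the constraint holds
```

The constraint is satisfied by construction, since Σ_j R_kj = u^T g^(k) and the denominator u^T S^-1 u normalizes exactly that quantity.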

(Figure: panels (A)–(D) showing rows of the resolution matrix, plotted against indices i and j.)

Part 3 Include minimization of the size of variance

minimize a weighted combination of the spread of resolution and the size of variance:
J_k = α Σ_j w(k,j) R_kj² + (1 − α) [cov m]_kk / σ_d²

new version of J_k:
J_k = Σ_l Σ_m g^(k)_l S'_lm g^(k)_m

with
S'_lm = α Σ_j w(k,j) G_lj G_mj + (1 − α) δ_lm

the constraint u^T g^(k) = 1 stays the same

so adding the size of variance is just a small modification to S

Backus-Gilbert Generalized Inverse (analogous to the Damped Minimum Length inverse), written in terms of components:
[G^-g]_kl = Σ_i u_i [S'^-1]_il / (Σ_i Σ_j u_i [S'^-1]_ij u_j)
with u_i = Σ_j G_ij and S'_lm = α Σ_j w(k,j) G_lj G_mj + (1 − α) δ_lm

In MatLab:

GMG = zeros(M,N);                       % generalized inverse, built row by row
u = G*ones(M,1);                        % u(l) = sum over j of G(l,j)
for k = [1:M]
    S = G * diag(([1:M]-k).^2) * G';    % S for row k, with w(k,j) = (k-j)^2
    Sp = alpha*S + (1-alpha)*eye(N,N);  % damped S'
    uSpinv = u'/Sp;                     % u' * inv(Sp)
    GMG(k,:) = uSpinv / (uSpinv*u);     % kth row of the BG inverse
end

$20 Reward! to the first person who sends me MatLab code that computes the BG generalized inverse without a for loop (but no creation of huge 3-indexed quantities, please. Memory requirements need to be similar to my code)

The Dirichlet analog of the Backus-Gilbert Generalized Inverse is the Damped Minimum Length Generalized Inverse.

special case: damped minimum length
G^-g [GG^T + ε²I] = G^T, so G^-g = G^T [GG^T + ε²I]^-1
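This identity is easy to verify numerically. A short Python/NumPy sketch (toy kernel and damping value assumed) checks it, along with the standard equivalence to the damped least squares form [G^T G + ε²I]^-1 G^T:

```python
import numpy as np

rng = np.random.default_rng(2)
G = rng.standard_normal((4, 10))   # toy under-determined kernel (assumed)
eps2 = 0.1                         # epsilon^2, assumed damping value

# damped minimum length generalized inverse
Gmg = G.T @ np.linalg.inv(G @ G.T + eps2 * np.eye(4))

# it satisfies G^-g [G G^T + eps^2 I] = G^T ...
print(np.allclose(Gmg @ (G @ G.T + eps2 * np.eye(4)), G.T))  # True

# ... and equals the damped least squares form [G^T G + eps^2 I]^-1 G^T
Gmg2 = np.linalg.solve(G.T @ G + eps2 * np.eye(10), G.T)
print(np.allclose(Gmg, Gmg2))  # True
```

The second check uses the push-through identity (G^T G + ε²I) G^T = G^T (G G^T + ε²I); working with the smaller N×N system is the usual reason to prefer the minimum-length form when M > N.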

Part 4 the trade-off of resolution and variance

The value of a localized average with small spread is controlled by few data and so has large variance. The value of a localized average with large spread is controlled by many data and so has small variance.

(Figure: two example localized averages — panel (A) small spread, large variance; panel (B) large spread, small variance.)

α near 1: small spread, large variance. α near 0: large spread, small variance.
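The trade-off can be traced numerically by sweeping α: build the damped Backus-Gilbert inverse for each α and evaluate both the spread of R and the size of cov m. The Python/NumPy sketch below mirrors the MatLab loop shown earlier (toy kernel assumed; unit data covariance assumed):

```python
import numpy as np

def bg_inverse(G, alpha):
    """Damped Backus-Gilbert generalized inverse, row by row:
    row k uses S' = alpha * G diag(w_k) G^T + (1 - alpha) * I."""
    N, M = G.shape
    u = G @ np.ones(M)
    GMG = np.zeros((M, N))
    for k in range(M):
        w = (np.arange(M) - k)**2
        Sp = alpha * (G @ np.diag(w) @ G.T) + (1 - alpha) * np.eye(N)
        uSpinv = np.linalg.solve(Sp, u)
        GMG[k] = uSpinv / (uSpinv @ u)
    return GMG

rng = np.random.default_rng(1)
G = rng.standard_normal((4, 10))   # toy kernel: 4 data, 10 parameters
i, j = np.meshgrid(np.arange(10), np.arange(10), indexing="ij")
w = (i - j)**2

for alpha in (0.99, 0.5, 0.01):
    GMG = bg_inverse(G, alpha)
    R = GMG @ G
    spread = np.sum(w * R**2)        # BG spread of resolution
    size = np.trace(GMG @ GMG.T)     # size of cov m (unit data variance)
    print(alpha, spread, size)
```

As α decreases, the printed spread grows and the covariance size shrinks, tracing out one trade-off curve.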

(Figure: Trade-Off Curves. Two panels, (A) and (B), plotting log10 size of covariance against the spread of model resolution (log10 in one panel, linear in the other), with the points α = 0, α = ½, and α = 1 marked along each curve.)