Lecture 5 A Priori Information and Weighted Least Squares.


Syllabus
Lecture 01  Describing Inverse Problems
Lecture 02  Probability and Measurement Error, Part 1
Lecture 03  Probability and Measurement Error, Part 2
Lecture 04  The L2 Norm and Simple Least Squares
Lecture 05  A Priori Information and Weighted Least Squares
Lecture 06  Resolution and Generalized Inverses
Lecture 07  Backus-Gilbert Inverse and the Trade Off of Resolution and Variance
Lecture 08  The Principle of Maximum Likelihood
Lecture 09  Inexact Theories
Lecture 10  Nonuniqueness and Localized Averages
Lecture 11  Vector Spaces and Singular Value Decomposition
Lecture 12  Equality and Inequality Constraints
Lecture 13  L1, L∞ Norm Problems and Linear Programming
Lecture 14  Nonlinear Problems: Grid and Monte Carlo Searches
Lecture 15  Nonlinear Problems: Newton's Method
Lecture 16  Nonlinear Problems: Simulated Annealing and Bootstrap Confidence Intervals
Lecture 17  Factor Analysis
Lecture 18  Varimax Factors, Empirical Orthogonal Functions
Lecture 19  Backus-Gilbert Theory for Continuous Problems; Radon's Problem
Lecture 20  Linear Operators and Their Adjoints
Lecture 21  Fréchet Derivatives
Lecture 22  Exemplary Inverse Problems, incl. Filter Design
Lecture 23  Exemplary Inverse Problems, incl. Earthquake Location
Lecture 24  Exemplary Inverse Problems, incl. Vibrational Problems

Purpose of the Lecture
Classify inverse problems as overdetermined, underdetermined, and mixed-determined
Further develop the notion of a priori information
Apply a priori information to solving inverse problems

Part 1 Classification of Inverse Problems on the Basis of Information Content

“Even Determined”: exactly enough data is available to determine the model parameters; E = 0 and the solution is unique (a rare case)

“Over Determined”: more than enough data is available to determine the model parameters; E > 0 and the solution is unique

“Under Determined”: insufficient data is available to determine all the model parameters; E = 0 and the solution is non-unique

This configuration is also underdetermined since only the mean is determined

“Mixed Determined”: more than enough data is available to constrain some of the model parameters, but insufficient data to constrain others; E > 0 and the solution is non-unique

This configuration is also mixed-determined: the average of the two blocks is over-determined, while the difference between the two blocks is under-determined.

mixed-determined: some linear combinations of the model parameters are not determined by the data (very common)

What to do? Add a priori information that supplements the observations.

Part 2 a priori information

a priori information: preconceptions about the behavior of the model parameters

examples of a priori information: the model parameters
are small
are near a given value
have a known average value
vary smoothly with position
satisfy a known differential equation
are positive
etc.

Dangerous? Perhaps … but we have a lot of experience about the world in general, so why not put that experience to work?

one approach to solving a mixed-determined problem: “of all the solutions that minimize E = ||e||², choose the one with minimum L = ||m||²”

“of all the solutions that minimize E, choose the one with minimum L” turns out to be hard to do, since you have to know how to divide the model parameters into two groups, one over-determined and one under-determined

next best thing: instead of “of all the solutions that minimize E, choose the one with minimum L”, choose the solution that minimizes E + ε²L

minimize E + ε²L: when ε² is chosen to be small, E will be approximately minimized and the solution will still be kept small

minimize E + ε²L = eᵀe + ε²mᵀm: the damped least-squares solution is mᵉˢᵗ = [GᵀG + ε²I]⁻¹ Gᵀd

damped least-squares solution: very similar to simple least squares; just add ε² to the diagonal of GᵀG
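
A minimal MATLAB sketch of the damped least-squares estimate, assuming G and d are already defined; the value of the damping parameter epsilon2 (= ε²) is only illustrative:

epsilon2 = 1e-2;                            % illustrative damping parameter, epsilon^2
M = size(G,2);                              % number of model parameters
mest = (G'*G + epsilon2*eye(M)) \ (G'*d);   % add epsilon^2 to the diagonal of G'G and solve

Here the backslash operator solves the damped normal equations directly; for large problems an iterative solver, as discussed at the end of this lecture, is preferable.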

Part 3 Using Prior Information to Solve Inverse Problems

m is small: minimize L = mᵀm

m is close to a given value ⟨m⟩: minimize L = (m - ⟨m⟩)ᵀ(m - ⟨m⟩)

m varies slowly with position (m is flat): characterize steepness with a first-difference approximation for dm/dx

m varies smoothly with position (m is smooth): characterize roughness with a second-difference approximation for d²m/dx², implemented with a matrix D whose interior rows contain [1 -2 1]

m varies slowly/smoothly with position: minimize L = mᵀWₘm with Wₘ = DᵀD
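
A minimal MATLAB sketch of the flatness (first-difference) and roughness (second-difference) matrices for M model parameters, assuming unit spacing between samples; the loops are written for clarity rather than speed:

M = 10;                                  % illustrative number of model parameters
Dflat = spalloc(M-1, M, 2*(M-1));        % first-difference (flatness) matrix, rows [-1 1]
for i = 1:M-1
    Dflat(i,i) = -1;  Dflat(i,i+1) = 1;
end
Drough = spalloc(M-2, M, 3*(M-2));       % second-difference (roughness) matrix, rows [1 -2 1]
for i = 1:M-2
    Drough(i,i) = 1;  Drough(i,i+1) = -2;  Drough(i,i+2) = 1;
end
Wm = Drough'*Drough;                     % Wm = D'D for the smoothness measure

Either matrix can play the role of D; Wₘ = DᵀD then penalizes steepness or roughness, respectively.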

Suppose that some data are more accurately determined than others: minimize E = eᵀWₑe

example: when d₃ is more accurately measured than the other data, give it a larger weight in Wₑ
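
A minimal MATLAB sketch of such a weight matrix, assuming N data; the factor of 10 is only an illustrative choice for how much more d₃ is trusted:

N = 5;                 % illustrative number of data
We = eye(N);           % start with equal weights
We(3,3) = 10;          % give the more accurately measured datum d3 a larger weight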

weighted damped least squares: minimize E + ε²L with E = eᵀWₑe and L = mᵀWₘm

weighted damped least squares solution: mᵉˢᵗ = [GᵀWₑG + ε²Wₘ]⁻¹ GᵀWₑd
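
A minimal MATLAB sketch of this estimate, assuming G, d, the weight matrices We and Wm, and the damping parameter epsilon2 (= ε²) are already defined:

mest = (G'*We*G + epsilon2*Wm) \ (G'*We*d);   % weighted damped least-squares estimate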

a bit complicated, but …

equivalent to solving F m = f by simple least squares

mᵉˢᵗ = [FᵀF]⁻¹ Fᵀf, with F = [ Wₑ^(1/2) G ; εD ] and f = [ Wₑ^(1/2) d ; εh ], where Dm = h is the a priori equation (h = 0 for the smallness, flatness, and smoothness measures above)

top rows: the data equation Gm = d, weighted by Wₑ^(1/2)

bottom rows: the a priori equation Dm = h, weighted by ε

you can even use this equation to implement constraints just by making ε very large
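
A minimal MATLAB sketch of the F m = f construction, assuming G, d, a diagonal data weight matrix We, an a priori matrix D with right-hand side h (h = 0 for the smallness, flatness, and smoothness measures above), and a weight epsilon are already defined; all variable names are illustrative:

sqrtWe = diag(sqrt(diag(We)));   % square root of We (assumed diagonal here)
F = [sqrtWe*G; epsilon*D];       % top rows: weighted data equation
f = [sqrtWe*d; epsilon*h];       % bottom rows: weighted a priori equation
mest = (F'*F) \ (F'*f);          % simple least squares solution of F m = f

Making epsilon very large forces the a priori rows to be satisfied almost exactly, which is how the constraint trick above works in practice.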

example: fill in the missing data of a discretized version of m(z). (figure: observed data dⱼ plotted along the discretized curve m(zᵢ) at positions zᵢ, with gaps where data are missing)

set up
M = 100 model parameters
N < M data
data, when available, give the values of model parameters: dᵢ = mⱼ
d = Gm associates each datum with the corresponding model parameter; each row of G has M-1 zeros and a single one
a priori information: smoothness at interior points, flatness at the ends

F m = f

F m = f: the top rows associate each datum with its corresponding model parameter

F m = f roughness in interior

F m = f steepness at left

F m = f steepness at right
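
A minimal MATLAB sketch of assembling F and f for this gap-filling example, assuming dobs holds the N observed values and rowindex(i) gives the index of the model parameter measured by the i-th datum; these names, and the value of epsilon, are illustrative (the sparse allocation anticipates the efficiency discussion below):

M = 100;  N = length(dobs);  epsilon = 1.0;     % illustrative sizes and a priori weight
Nrows = N + M;                    % N data rows + (M-2) roughness rows + 2 steepness rows
F = spalloc(Nrows, M, 3*Nrows);
f = zeros(Nrows,1);
for i = 1:N                       % data rows: each datum equals one model parameter
    F(i, rowindex(i)) = 1;
    f(i) = dobs(i);
end
for j = 2:M-1                     % roughness (second difference) = 0 in the interior
    k = N + j - 1;
    F(k, j-1) = epsilon;  F(k, j) = -2*epsilon;  F(k, j+1) = epsilon;
end
F(Nrows-1, 1) = -epsilon;  F(Nrows-1, 2) = epsilon;   % steepness (first difference) = 0 at the left end
F(Nrows, M-1) = -epsilon;  F(Nrows, M)   = epsilon;   % steepness = 0 at the right end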

computational efficiency:
1. use sparse matrices
2. use a solver that does not require forming FᵀF

1. Sparse matrices

F = spalloc(N, M, 3*N);   % preallocate an N-by-M all-zero sparse matrix with room for 3*N nonzero elements

2A. Use the Biconjugate Gradient Algorithm (an iterative method to solve FᵀF m = Fᵀf)

clear F; global F;                % make F visible to the multiply function via a global variable
tol = 1e-6;                       % stop iterations when the solution is good enough
maxit = 3*M;
mest = bicg(@weightedleastsquaresfcn, F'*f, tol, maxit);

2A. Use the Biconjugate Gradient Algorithm (an iterative method to solve FᵀF m = Fᵀf)

clear F; global F;
tol = 1e-6;
maxit = 3*M;                      % stop iterating once the maximum number of iterations is reached, regardless of whether the solution is good enough
mest = bicg(@weightedleastsquaresfcn, F'*f, tol, maxit);

2A. Use the Biconjugate Gradient Algorithm (an iterative method to solve FᵀF m = Fᵀf)

clear F; global F;
tol = 1e-6;
maxit = 3*M;
mest = bicg(@weightedleastsquaresfcn, F'*f, tol, maxit);   % second argument F'*f is the right-hand side Fᵀf

2A. Use the Biconjugate Gradient Algorithm (an iterative method to solve FᵀF m = Fᵀf)

clear F; global F;
tol = 1e-6;
maxit = 3*M;
mest = bicg(@weightedleastsquaresfcn, F'*f, tol, maxit);   % first argument is a handle to the function that calculates FᵀF times a vector v

2B. Function to multiply a vector by FᵀF, computed as y = Fᵀ(Fv) so that FᵀF is never formed explicitly

function y = weightedleastsquaresfcn(v,transp_flag)
global F;
temp = F*v;    % first apply F
y = F'*temp;   % then apply F transpose; since FᵀF is symmetric, transp_flag can be ignored
return

(figure: the estimated model m(z) plotted as a function of position z)