Lecture 22 Exemplary Inverse Problems including Filter Design

Syllabus
Lecture 01: Describing Inverse Problems
Lecture 02: Probability and Measurement Error, Part 1
Lecture 03: Probability and Measurement Error, Part 2
Lecture 04: The L2 Norm and Simple Least Squares
Lecture 05: A Priori Information and Weighted Least Squares
Lecture 06: Resolution and Generalized Inverses
Lecture 07: Backus-Gilbert Inverse and the Trade-Off of Resolution and Variance
Lecture 08: The Principle of Maximum Likelihood
Lecture 09: Inexact Theories
Lecture 10: Nonuniqueness and Localized Averages
Lecture 11: Vector Spaces and Singular Value Decomposition
Lecture 12: Equality and Inequality Constraints
Lecture 13: L1, L∞ Norm Problems and Linear Programming
Lecture 14: Nonlinear Problems: Grid and Monte Carlo Searches
Lecture 15: Nonlinear Problems: Newton's Method
Lecture 16: Nonlinear Problems: Simulated Annealing and Bootstrap Confidence Intervals
Lecture 17: Factor Analysis
Lecture 18: Varimax Factors, Empirical Orthogonal Functions
Lecture 19: Backus-Gilbert Theory for Continuous Problems; Radon's Problem
Lecture 20: Linear Operators and Their Adjoints
Lecture 21: Fréchet Derivatives
Lecture 22: Exemplary Inverse Problems, incl. Filter Design
Lecture 23: Exemplary Inverse Problems, incl. Earthquake Location
Lecture 24: Exemplary Inverse Problems, incl. Vibrational Problems

Purpose of the Lecture
solve a few exemplary inverse problems:
image deblurring
deconvolution filters
minimization of cross-over errors

Part 1 image deblurring

three point blur (applied to each row of pixels)
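To make the blur concrete, here is a minimal MATLAB sketch of a three-point blur applied to one row of pixels; the equal 1/3 weights and the row length are assumptions for illustration, not necessarily the lecture's values.

```matlab
% minimal sketch: three-point blur of one row of N pixels,
% assuming equal weights of 1/3 (not necessarily the lecture's weights)
N = 256;                                  % number of pixels in the row (assumed)
G = spdiags((1/3)*ones(N,3), -1:1, N, N); % sparse tridiagonal blur operator
mtrue = zeros(N,1); mtrue(100:140) = 1;   % a simple test row of pixels
d = G*mtrue;                              % the blurred row (edge rows not renormalized)
```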

null vectors are highly oscillatory

solve with minimum length

note that GG^T can be deduced analytically and is Toeplitz, which might lead to a computational advantage

Solution Possibilities
1. Use a sparse matrix for G together with mest=G'*((G*G')\d) (maybe damp a little, too)
2. Use the analytic version of GG^T together with mest=G'*(GGT\d) (maybe damp a little, too)
3. Use a sparse matrix for G together with bicg() to solve GG^T λ = d (maybe with a little damping, too) and then use m_est = G^T λ

we used the simplest (possibility 1), which worked fine
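For reference, a hedged sketch of possibility 1 (a sparse G with a lightly damped minimum-length solution); the blur weights, noise level, and damping e2 are assumed values.

```matlab
% sketch of possibility 1: lightly damped minimum-length solution with a sparse G
N = 256;
G = spdiags((1/3)*ones(N,3), -1:1, N, N);   % three-point blur (assumed weights)
dobs = G*ones(N,1) + 0.01*randn(N,1);       % synthetic blurred, noisy row
e2 = 1e-4;                                  % small damping (assumed value)
mest = G' * ((G*G' + e2*speye(N)) \ dobs);  % m_est = G'(GG' + e2 I)^-1 d
```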

image blurred due to camera motion (100 point blur)

[figure: panels (A)–(F), plotted against pixel number]

[figure: (A) row 728 of the generalized inverse G^-g and (B) row 728 of the resolution matrix R, plotted against row number; note the sidelobes]

Part 2 deconvolution filter

Convolution
a general relationship for linear systems with translational invariance: the model m(t) and the data d(t) are related by a linear operator, and only relative time matters

underlying principle: linear superposition

[figure: m(t) = δ(t) and d(t) = g(t), each plotted against time t after the impulse]
if the input is a spike, m(t) = δ(t), then the output is the impulse response, d(t) = g(t)

[figure: a spike in m(t) of amplitude m(t_0) at time t_0 produces the output m(t_0) g(t − t_0)]
then a general input m(t) causes the general output d(t) = m(t)*g(t)

convolution: d = m*g, that is, d(t) = ∫ m(τ) g(t − τ) dτ

discrete convolution d = m*g in standard matrix form d = Gm, with d_i = Σ_j g_(i−j+1) m_j, i.e. G is a Toeplitz matrix built from g
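A small illustrative sketch (not from the slides) of this matrix form: G is a lower-triangular Toeplitz matrix built from g, and G*m reproduces the first N samples of conv(g,m); the signals here are made up.

```matlab
% sketch: discrete convolution d = m*g written as d = G*m, G Toeplitz in g
N = 50;
g = exp(-0.2*(0:N-1)') .* sin(0.5*(0:N-1)');   % a made-up impulse response
m = zeros(N,1); m([5 20 35]) = [1; -0.5; 0.8]; % a made-up model time series
G = toeplitz(g, [g(1); zeros(N-1,1)]);         % G(i,j) = g(i-j+1) for i >= j, else 0
d1 = G*m;                                      % matrix form
d2 = conv(g, m); d2 = d2(1:N);                 % first N samples of the convolution
disp(max(abs(d1-d2)))                          % agreement to round-off
```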

seismic reflection sounding

we want the airgun pulse to be as spiky as possible:
p(t) = g(t) * r(t)
(pressure = airgun pulse * sea-floor response)
so that pulses in the sea-floor response can be detected, i.e. so that p(t) ≈ r(t)

the actual airgun pulse g(t) is ringy [figure: g(t) plotted against time t]

so construct a deconvolution filter m(t) such that
g(t)*m(t) = δ(t)
and apply it to the data p(t) = g(t)*r(t):
p(t)*m(t) = g(t)*m(t)*r(t) = r(t)

the equation we need to solve is g(t)*m(t) = δ(t)

g(t)*m(t) = δ(t) becomes Gm = d, where G is the discrete approximation of convolution by g, m is the discrete filter, and d is a discrete approximation of the delta function

solve with damped least squares:
m_est = [G^T G + ε² I]^(-1) G^T d
with d = [1, 0, 0, ..., 0]^T (or something similar)
the matrices G^T G and G^T d can be calculated analytically
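A minimal sketch of this damped least-squares filter design with an explicit G; the "ringy" pulse g, the filter length, and ε² are stand-ins, not the lecture's actual values.

```matlab
% sketch: deconvolution filter by damped least squares with an explicit G
N = 200;  t = (0:N-1)';
g = exp(-0.05*t).*cos(0.4*t);            % stand-in "ringy" pulse (assumed)
G = toeplitz(g, [g(1); zeros(N-1,1)]);   % convolution by g as a matrix
d = [1; zeros(N-1,1)];                   % discrete approximation of delta(t)
e2 = 1e-3;                               % damping epsilon^2 (assumed value)
mest = (G'*G + e2*eye(N)) \ (G'*d);      % m_est = [G'G + e2 I]^-1 G'd
gm = conv(g, mest); gm = gm(1:N);        % g*m, which should approximate the delta
```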

G^T G is approximately Toeplitz, with elements given by the autocorrelation of g:
[G^T G]_(ij) ≈ Σ_k g_k g_(k+|i−j|)
and G^T d is given by the cross-correlation of g and d:
[G^T d]_i = Σ_k g_(k−i+1) d_k
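A hedged sketch of building G^T G from the autocorrelation and G^T d from the cross-correlation (possibility 2 in the list that follows); the pulse and damping are the same stand-ins as above.

```matlab
% sketch: build G'G from the autocorrelation of g and G'd from the
% cross-correlation of g and d, then solve
N = 200;  t = (0:N-1)';
g = exp(-0.05*t).*cos(0.4*t);              % stand-in pulse (assumed)
d = [1; zeros(N-1,1)];                     % approximate delta function
a = conv(g, flipud(g));  a = a(N:end);     % autocorrelation of g at lags 0..N-1
c = conv(flipud(g), d);  c = c(N:end);     % cross-correlation: the elements of G'd
e2 = 1e-3;                                 % damping (assumed)
mest = (toeplitz(a) + e2*eye(N)) \ c;      % approximate [G'G + e2 I]^-1 G'd
```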

Solution Possibilities
1. Use a sparse matrix for G together with mest=(G'*G)\(G'*d) (maybe damp a little, too)
2. Use analytic versions of G^T G and G^T d together with mest=GTG\GTd (maybe damp a little, too)
3. Never form G; just work with its column g: use bicg() to solve G^T G m = G^T d, but use conv() to compute G^T(Gv)
4. Same as 3, but add a priori information of smoothness

we used this complicated but very fast bicg()/conv() approach
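A hedged sketch of the matrix-free approach (possibility 3): G is never formed; conv() supplies Gv and G^T w inside a function handle passed to bicg(). The helper seg(), the stand-in pulse, and the damping value are assumptions, not the lecture's actual code.

```matlab
% sketch: matrix-free damped least squares, (G'G + e2 I)m = G'd, via bicg() and conv()
N = 200;  t = (0:N-1)';
g = exp(-0.05*t).*cos(0.4*t);                  % stand-in pulse (assumed)
d = [1; zeros(N-1,1)];                         % approximate delta function
e2 = 1e-3;                                     % damping (assumed)
seg  = @(x,i1,i2) x(i1:i2);                    % helper for indexing inside anonymous fcns
Gv   = @(v) seg(conv(g, v), 1, N);             % G*v  = first N samples of g*v
Gtw  = @(w) seg(conv(flipud(g), w), N, 2*N-1); % G'*w = correlation of w with g
afun = @(v, flag) Gtw(Gv(v)) + e2*v;           % (G'G + e2 I)*v; symmetric, flag ignored
mest = bicg(afun, Gtw(d), 1e-8, 500);          % iterative solution for the filter m
```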

[figure: (A) the airgun pulse g(t), (B) the deconvolution filter m(t), and (C) m(t)*g(t), each plotted against time t]

[figure: (A) the original data d(t); (B) after deconvolution, d(t)*m(t)]

Part 3 minimization of cross-over errors

[figure: true and estimated gravity anomaly (mgal), latitude vs. longitude; note the streaks in the estimate]

general idea: the data s are measured along tracks, and the data along each track are off by an additive constant
theory: s_j^obs(track i) = s_j^true(track i) + m(track i)
goal: estimate the constants m by minimizing the error at the track intersections

cross-over points

the i-th intersection has ascending track A_i and descending track D_i:
s_Ai^obs = s_Ai^true + m_Ai
s_Di^obs = s_Di^true + m_Di
since the true value is the same at the intersection, subtracting gives
s_Ai^obs − s_Di^obs = m_Ai − m_Di
which has the form d = Gm

the matrix G is very sparse: every row is all zeros except for a single +1 and a single −1
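A minimal sketch of assembling this sparse G, one +1/−1 row per intersection; the track indices and cross-over differences are synthetic, made-up data.

```matlab
% sketch: sparse G for cross-over differences, one +1/-1 row per intersection
Ntracks = 40;  Nint = 300;                    % made-up problem sizes
iA = randi(Ntracks, Nint, 1);                 % ascending-track index at each intersection
iD = randi(Ntracks, Nint, 1);                 % descending-track index (real data: iA ~= iD)
dd = randn(Nint, 1);                          % synthetic cross-over differences sA - sD
rows = [(1:Nint)'; (1:Nint)'];
cols = [iA; iD];
vals = [ones(Nint,1); -ones(Nint,1)];
G = sparse(rows, cols, vals, Nint, Ntracks);  % row i: +1 in column A_i, -1 in column D_i
```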

note that this problem has an inherent non-uniqueness: m is determined only up to an overall additive constant
one possibility is to use damped least squares to choose the smallest m (you can always add a constant later)

the matrices G^T G and G^T d can be calculated semi-analytically

recipe: starting with zeroed G^T G and G^T d, loop over the intersections; for intersection i (tracks A_i and D_i), add 1 to the (A_i, A_i) and (D_i, D_i) elements of G^T G, subtract 1 from the (A_i, D_i) and (D_i, A_i) elements, add d_i to element A_i of G^T d, and subtract d_i from element D_i
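A sketch of that recipe in MATLAB, using the same kind of synthetic intersection table as the sketch above (iA, iD, dd, and the problem sizes are made up).

```matlab
% sketch: accumulate G'G and G'd directly from the intersection table
Ntracks = 40;  Nint = 300;                     % made-up sizes (as in the sketch above)
iA = randi(Ntracks, Nint, 1);  iD = randi(Ntracks, Nint, 1);
dd = randn(Nint, 1);                           % synthetic cross-over differences
GTG = zeros(Ntracks);  GTd = zeros(Ntracks,1); % start from zeroed arrays
for i = 1:Nint
    a = iA(i);  b = iD(i);                     % the two tracks meeting at intersection i
    GTG(a,a) = GTG(a,a) + 1;    GTG(b,b) = GTG(b,b) + 1;
    GTG(a,b) = GTG(a,b) - 1;    GTG(b,a) = GTG(b,a) - 1;
    GTd(a)   = GTd(a) + dd(i);  GTd(b)   = GTd(b) - dd(i);
end
```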

Solution Possibilities
1. Use a sparse matrix for G together with damped least squares: mest=(G'*G+e2*speye(M,M))\(G'*d)
2. Use analytic versions of G^T G and G^T d, add the damping directly to the diagonal of G^T G, then use mest=GTGpe2I\GTd
3. Use a sparse matrix for G together with a bicg() version of damped least squares
4. Methods 1 or 2, but use a hard constraint instead of damping to implement Σ_i m_i = 0

our choice: method 4, using a hard constraint instead of damping
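A hedged sketch of the hard-constraint variant, enforcing Σ_i m_i = 0 with a Lagrange multiplier (bordered normal equations); it assumes GTG and GTd have been accumulated as in the recipe sketch above, and that the track network is connected.

```matlab
% sketch: enforce the hard constraint sum(m) = 0 with a Lagrange multiplier;
% GTG and GTd are assumed to come from the recipe sketch above, and the
% track network is assumed connected (otherwise the bordered matrix is singular)
M = size(GTG, 1);                    % number of tracks
A = [GTG, ones(M,1); ones(1,M), 0];  % bordered normal equations
b = [GTd; 0];
x = A \ b;
mest = x(1:M);                       % track constants; x(M+1) is the multiplier
```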

[figure: panels (A)–(D): gravity anomaly (mgal) maps, latitude vs. longitude]