Slide 1

$G(m) = d$ is the mathematical model: $d$ is the data, $m$ the model, and $G$ the operator.

$$d = G(m_{true}) + \epsilon = d_{true} + \epsilon$$

Forward problem: find $d$ given $m$.
Inverse problem (discrete parameter estimation): find $m$ given $d$.
Discrete linear inverse problem: $Gm = d$.
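As a concrete illustration of the forward/inverse distinction, here is a minimal numpy sketch; the operator $G$, the true model, and the noise level are all invented for the example:

```python
# Forward problem: given a model m_true, predict data d = G(m_true) + epsilon.
# G, m_true, and the noise level are illustrative assumptions, not from the slides.
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((20, 3))          # linear operator: 20 data, 3 parameters
m_true = np.array([1.0, -2.0, 0.5])       # "true" model
epsilon = 0.05 * rng.standard_normal(20)  # measurement noise

d_true = G @ m_true                       # forward problem: d given m
d = d_true + epsilon                      # observed data
# Inverse problem: recover m from d -- the subject of the following slides.
```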
Slide 2

$G(m) = d$ is the mathematical model; the discrete linear inverse problem is $Gm = d$.

Method of Least Squares: minimize
$$E = \sum_i e_i^2 = \sum_i (d_i^{obs} - d_i^{pre})^2$$

[Figure: observed data $d_i^{obs}$ plotted against $z_i$, with the predicted values $d_i^{pre}$ and the errors $e_i$ between them]
Slide 3

$$E = e^T e = (d - Gm)^T (d - Gm) = \sum_i \Big[d_i - \sum_j G_{ij} m_j\Big]\Big[d_i - \sum_k G_{ik} m_k\Big]$$
$$= \sum_j \sum_k m_j m_k \sum_i G_{ij} G_{ik} - 2 \sum_j m_j \sum_i G_{ij} d_i + \sum_i d_i d_i$$

Differentiating each term with respect to $m_q$:
$$\frac{\partial}{\partial m_q}\Big[\sum_j \sum_k m_j m_k \sum_i G_{ij} G_{ik}\Big] = \sum_j \sum_k \big[\delta_{jq} m_k + m_j \delta_{kq}\big] \sum_i G_{ij} G_{ik} = 2 \sum_k m_k \sum_i G_{iq} G_{ik}$$
$$\frac{\partial}{\partial m_q}\Big[-2 \sum_j m_j \sum_i G_{ij} d_i\Big] = -2 \sum_j \delta_{jq} \sum_i G_{ij} d_i = -2 \sum_i G_{iq} d_i$$
$$\frac{\partial}{\partial m_q}\Big[\sum_i d_i d_i\Big] = 0$$
Slide 4

Setting the derivative to zero:
$$\frac{\partial E}{\partial m_q} = 0 = 2 \sum_k m_k \sum_i G_{iq} G_{ik} - 2 \sum_i G_{iq} d_i$$

In matrix notation: $G^T G m - G^T d = 0$, so
$$m^{est} = [G^T G]^{-1} G^T d,$$
assuming $[G^T G]^{-1}$ exists. This is the least squares solution to $Gm = d$.
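A minimal numpy sketch of this solution on synthetic data; forming the normal equations mirrors the formula above, while `np.linalg.lstsq` computes the same solution more stably in practice:

```python
# Least squares via the normal equations, m_est = (G^T G)^{-1} G^T d.
# G and d are synthetic stand-ins for an overdetermined problem.
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((20, 3))                 # 20 equations, 3 unknowns
d = G @ np.array([1.0, -2.0, 0.5]) + 0.05 * rng.standard_normal(20)

m_est = np.linalg.solve(G.T @ G, G.T @ d)        # normal equations
m_ref, *_ = np.linalg.lstsq(G, d, rcond=None)    # same answer, better conditioned
assert np.allclose(m_est, m_ref)
```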
Slide 5

Example of fitting a straight line $d_i = m_1 + m_2 x_i$ to $N$ data points:
$$m^{est} = [G^T G]^{-1} G^T d, \quad \text{assuming } [G^T G]^{-1} \text{ exists}$$

$$G^T G = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ x_1 & x_2 & \cdots & x_N \end{bmatrix} \begin{bmatrix} 1 & x_1 \\ 1 & x_2 \\ \vdots & \vdots \\ 1 & x_N \end{bmatrix} = \begin{bmatrix} N & \sum x_i \\ \sum x_i & \sum x_i^2 \end{bmatrix}$$

$$[G^T G]^{-1} = \begin{bmatrix} N & \sum x_i \\ \sum x_i & \sum x_i^2 \end{bmatrix}^{-1}$$
Slide 6

Example of fitting a straight line (continued):

$$G^T d = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ x_1 & x_2 & \cdots & x_N \end{bmatrix} \begin{bmatrix} d_1 \\ d_2 \\ \vdots \\ d_N \end{bmatrix} = \begin{bmatrix} \sum d_i \\ \sum x_i d_i \end{bmatrix}$$

$$m^{est} = [G^T G]^{-1} G^T d = \begin{bmatrix} N & \sum x_i \\ \sum x_i & \sum x_i^2 \end{bmatrix}^{-1} \begin{bmatrix} \sum d_i \\ \sum x_i d_i \end{bmatrix}$$
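The same $2\times 2$ system written out with the explicit sums, on made-up data lying near the line $d = 2x$:

```python
# Straight-line fit d_i = m1 + m2*x_i via the explicit normal-equation sums.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
d = np.array([0.1, 1.9, 4.1, 5.9, 8.1])   # invented data, roughly d = 2x

N = len(x)
GtG = np.array([[N,       x.sum()],
                [x.sum(), (x**2).sum()]])
Gtd = np.array([d.sum(), (x * d).sum()])
m1, m2 = np.linalg.solve(GtG, Gtd)         # intercept and slope
print(m1, m2)                              # close to 0 and 2
```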
Slide 7

The existence of the least squares solution: $m^{est} = [G^T G]^{-1} G^T d$, assuming $[G^T G]^{-1}$ exists.

Consider the straight line problem with only one data point:

[Figure: a single data point, through which infinitely many lines can be drawn]

$$[G^T G]^{-1} = \begin{bmatrix} 1 & x_1 \\ x_1 & x_1^2 \end{bmatrix}^{-1}$$

The inverse of a matrix is proportional to the reciprocal of its determinant, i.e. $[G^T G]^{-1} \propto 1/(x_1^2 - x_1^2) = 1/0$. The matrix is singular, and the least squares formula fails.
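A quick numerical check of that determinant (the value of $x_1$ is arbitrary):

```python
# With one observation, G has a single row and G^T G is singular:
# det = 1*x1**2 - x1*x1 = 0, so the normal equations cannot be inverted.
import numpy as np

x1 = 3.0
G = np.array([[1.0, x1]])
print(np.linalg.det(G.T @ G))   # 0.0 -- singular
```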
Slide 8

Classification of inverse problems:
- Over-determined
- Under-determined
- Mixed-determined
- Even-determined
Slide 9

Over-determined problems: $Gm = d$ contains too much information to possess an exact solution. Least squares gives a 'best' approximate solution.
Slide 10

Even-determined problems: exactly enough information to determine the model parameters. There is only one solution, and it has zero prediction error.
Slide 11

Under-determined problems:
- Mixed-determined problems: non-zero prediction error
- Purely underdetermined problems: zero prediction error
Slide 12

Purely underdetermined problems: the number of parameters exceeds the number of equations, so it is possible to find more than one solution with zero prediction error (in fact, infinitely many).

To obtain a unique solution, we must add information not contained in $Gm = d$: a priori information. Example: when fitting a straight line through a single data point, we may require that the line pass through the origin.

A common a priori assumption is that the simplest model is best. A measure of simplicity is the Euclidean length $L = m^T m = \sum_i m_i^2$.
Slide 13

Purely underdetermined problems: find the $m^{est}$ that minimizes $L = m^T m = \sum_i m_i^2$ subject to the constraint $e = d - Gm = 0$. Using Lagrange multipliers $\lambda_i$:

$$\Phi(m) = L + \sum_i \lambda_i e_i = \sum_i m_i^2 + \sum_i \lambda_i \Big[d_i - \sum_j G_{ij} m_j\Big]$$
$$\frac{\partial \Phi(m)}{\partial m_q} = 2 \sum_i m_i \frac{\partial m_i}{\partial m_q} - \sum_i \lambda_i \sum_j G_{ij} \frac{\partial m_j}{\partial m_q} = 2 m_q - \sum_i \lambda_i G_{iq} = 0$$

In matrix notation: $2m = G^T \lambda$ (1), along with $Gm = d$ (2).

Inserting (1) into (2) gives $d = Gm = G[G^T \lambda / 2]$, so $\lambda = 2[G G^T]^{-1} d$; inserting this back into (1):
$$m^{est} = G^T [G G^T]^{-1} d$$
The solution exists when the problem is purely underdetermined, so that $[G G^T]^{-1}$ exists.
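A numpy sketch of the minimum-length solution on an invented underdetermined system:

```python
# Minimum-norm solution m_est = G^T (G G^T)^{-1} d for a purely
# underdetermined problem (3 equations, 8 unknowns; data made up).
import numpy as np

rng = np.random.default_rng(1)
G = rng.standard_normal((3, 8))
d = rng.standard_normal(3)

m_est = G.T @ np.linalg.solve(G @ G.T, d)
print(np.allclose(G @ m_est, d))   # True: zero prediction error
```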
Slide 14

Mixed-determined problems

[Figure: sketches of the over-determined, under-determined, and mixed-determined cases]

1) Partition the problem into an overdetermined part and an underdetermined part; solve them by least squares and minimum norm, respectively, using the SVD (covered later).
2) Minimize some combination of the prediction error and the solution length for the unpartitioned model:
$$\Phi(m) = E + \varepsilon^2 L = e^T e + \varepsilon^2 m^T m$$
$$m^{est} = [G^T G + \varepsilon^2 I]^{-1} G^T d \quad \text{(damped least squares)}$$
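A short sketch of damped least squares; the problem sizes and the value of $\varepsilon$ are arbitrary:

```python
# Damped least squares: m_est = (G^T G + eps^2 I)^{-1} G^T d.
# The damping eps trades prediction error against solution length.
import numpy as np

def damped_least_squares(G, d, eps):
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + eps**2 * np.eye(n), G.T @ d)

rng = np.random.default_rng(2)
G = rng.standard_normal((10, 6))
d = rng.standard_normal(10)
m_est = damped_least_squares(G, d, eps=0.1)
```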
Slide 15

Mixed-determined problems (continued):
$$\Phi(m) = E + \varepsilon^2 L = e^T e + \varepsilon^2 m^T m, \qquad m^{est} = [G^T G + \varepsilon^2 I]^{-1} G^T d$$

This is damped least squares; $\varepsilon$ is the regularization parameter, and the scheme is 0th-order Tikhonov regularization. Equivalently, one may minimize $\|m\|_2^2$ subject to $\|Gm - d\|_2^2$ below a tolerance, or minimize $\|Gm - d\|_2^2$ subject to $\|m\|_2^2$ below a tolerance.

[Figure: 'L-curve' -- solution norm $\|m\|$ plotted against misfit $\|Gm - d\|$ as $\varepsilon$ varies]
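One way to trace such an L-curve numerically, sweeping $\varepsilon$ over an illustrative grid:

```python
# L-curve: solution norm versus misfit as the regularization parameter varies.
# G, d, and the eps grid are invented for illustration.
import numpy as np

rng = np.random.default_rng(3)
G = rng.standard_normal((15, 10))
d = rng.standard_normal(15)

for eps in np.logspace(-3, 1, 9):
    m = np.linalg.solve(G.T @ G + eps**2 * np.eye(10), G.T @ d)
    print(f"eps={eps:8.3g}  misfit={np.linalg.norm(G @ m - d):6.3f}  "
          f"norm={np.linalg.norm(m):6.3f}")
```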
Slide 16

Other a priori info: weighted least squares. Data weighting (a weighted measure of prediction error):
$$E = e^T W_e e$$
$W_e$ is a weighting matrix defining the relative contribution of each individual error to the total prediction error (usually diagonal). For example, with 5 observations, the 3rd may be determined twice as accurately as the others:
$$\mathrm{diag}(W_e) = [1, 1, 2, 1, 1]^T$$
Completely overdetermined problem:
$$m^{est} = [G^T W_e G]^{-1} G^T W_e d$$
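A sketch with the slide's example weights ($G$ and $d$ are invented):

```python
# Weighted least squares m_est = (G^T W_e G)^{-1} G^T W_e d,
# with the 3rd of 5 observations weighted double.
import numpy as np

rng = np.random.default_rng(4)
G = rng.standard_normal((5, 2))
d = rng.standard_normal(5)
We = np.diag([1.0, 1.0, 2.0, 1.0, 1.0])   # data weighting matrix

m_est = np.linalg.solve(G.T @ We @ G, G.T @ We @ d)
```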
Slide 17

Other a priori info: constrained regression. Fit $d_i = m_1 + m_2 x_i$ subject to the constraint that the line pass through $(x', d')$, i.e. $d' = m_1 + m_2 x'$:
$$Fm = \begin{bmatrix} 1 & x' \end{bmatrix} \begin{bmatrix} m_1 \\ m_2 \end{bmatrix} = \begin{bmatrix} d' \end{bmatrix}$$

[Figure: data points and a fitted line forced through the point $(x', d')$]

Recall the unconstrained solution:
$$[G^T G]^{-1} G^T d = \begin{bmatrix} N & \sum x_i \\ \sum x_i & \sum x_i^2 \end{bmatrix}^{-1} \begin{bmatrix} \sum d_i \\ \sum x_i d_i \end{bmatrix}$$

Similar to the unconstrained solution (2.5), the constrained problem is solved by the bordered system
$$\begin{bmatrix} m_1^{est} \\ m_2^{est} \\ \lambda \end{bmatrix} = \begin{bmatrix} N & \sum x_i & 1 \\ \sum x_i & \sum x_i^2 & x' \\ 1 & x' & 0 \end{bmatrix}^{-1} \begin{bmatrix} \sum d_i \\ \sum x_i d_i \\ d' \end{bmatrix},$$
where $\lambda$ is the Lagrange multiplier for the constraint.
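A numpy sketch of the bordered system; the data are made up, and the constraint point is taken to be the origin:

```python
# Constrained line fit via the bordered (Lagrange multiplier) system above.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
d = np.array([0.5, 1.4, 2.6, 3.4])
xc, dc = 0.0, 0.0                       # constraint point (x', d'): the origin

N = len(x)
A = np.array([[N,       x.sum(),      1.0],
              [x.sum(), (x**2).sum(), xc ],
              [1.0,     xc,           0.0]])
rhs = np.array([d.sum(), (x * d).sum(), dc])
m1, m2, lam = np.linalg.solve(A, rhs)   # intercept, slope, multiplier
print(np.isclose(m1 + m2 * xc, dc))     # True: constraint satisfied exactly
```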
Slide 18

Other a priori info: weighting model parameters. Instead of using minimum length as the measure of solution simplicity, one may impose smoothness on the model:

$$l = Dm = \begin{bmatrix} -1 & 1 & & & \\ & -1 & 1 & & \\ & & \ddots & \ddots & \\ & & & -1 & 1 \end{bmatrix} \begin{bmatrix} m_1 \\ m_2 \\ \vdots \\ m_N \end{bmatrix}$$

$D$ is the flatness matrix, and
$$L = l^T l = [Dm]^T [Dm] = m^T D^T D m = m^T W_m m, \qquad W_m = D^T D$$

First-order Tikhonov regularization: $\min \|Gm - d\|_2^2 + \|Lm\|_2^2$ (here the regularization operator $L$ is the flatness matrix $D$).
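One way to solve this, following the damped least squares pattern of slide 14 with $W_m = D^T D$ in place of $I$ (the data and the value of $\varepsilon$ are invented):

```python
# First-difference ("flatness") matrix D and a first-order Tikhonov solve:
# m_est = (G^T G + eps^2 D^T D)^{-1} G^T d.
import numpy as np

n = 8
D = -np.eye(n - 1, n) + np.eye(n - 1, n, k=1)   # rows of [-1, 1]

rng = np.random.default_rng(5)
G = rng.standard_normal((5, n))                 # underdetermined without regularization
d = rng.standard_normal(5)
eps = 0.5
m_est = np.linalg.solve(G.T @ G + eps**2 * (D.T @ D), G.T @ d)
```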
Slide 19

Other a priori info: weighting model parameters (continued). Smoothness may instead be imposed through second differences:

$$l = Dm = \begin{bmatrix} 1 & -2 & 1 & & & \\ & 1 & -2 & 1 & & \\ & & \ddots & \ddots & \ddots & \\ & & & 1 & -2 & 1 \end{bmatrix} \begin{bmatrix} m_1 \\ m_2 \\ \vdots \\ m_N \end{bmatrix}$$

$D$ is the roughness matrix, and
$$L = l^T l = [Dm]^T [Dm] = m^T D^T D m = m^T W_m m, \qquad W_m = D^T D$$

Second-order Tikhonov regularization: $\min \|Gm - d\|_2^2 + \|Lm\|_2^2$.
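Constructing the roughness matrix; it drops into the same Tikhonov solve as the first-order sketch above:

```python
# Second-difference ("roughness") matrix: each row is [1, -2, 1].
import numpy as np

n = 8
D2 = (np.eye(n - 2, n)
      - 2 * np.eye(n - 2, n, k=1)
      + np.eye(n - 2, n, k=2))
print(D2[0])   # [ 1. -2.  1.  0.  0.  0.  0.  0.]
```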