adjustment theory / least squares adjustment
Tutorial at IWAA2010 / examples
Markus Schlösser, Hamburg
Page 2 | random numbers
> Computer-generated random numbers are only pseudo-random numbers. Mostly only uniformly distributed prn are available (C, Pascal, Excel, …); some packages (octave, matlab, etc.) have normally distributed prn („randn“).
> Normally distributed prn can be obtained by the Box-Muller method, or as the sum of 12 U(0,1) samples (an example of the central limit theorem), …
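The two generation methods named above fit in a few lines; the following is a rough sketch in Python/NumPy (my choice of language, the slides mention octave/matlab), with arbitrary seed and sample size:

```python
import numpy as np

rng = np.random.default_rng(42)

def box_muller(n):
    """Turn pairs of U(0,1) samples into N(0,1) samples (Box-Muller method)."""
    u1 = 1.0 - rng.random(n)          # shift to (0,1] so that log(u1) stays finite
    u2 = rng.random(n)
    return np.sqrt(-2.0 * np.log(u1)) * np.cos(2.0 * np.pi * u2)

def sum_of_twelve(n):
    """Sum of 12 U(0,1) samples minus 6: approximately N(0,1) (central limit theorem)."""
    return rng.random((n, 12)).sum(axis=1) - 6.0

for gen in (box_muller, sum_of_twelve):
    x = gen(100_000)
    print(f"{gen.__name__:14s} mean = {x.mean():+.4f}  std = {x.std():.4f}")
```

Both should print a mean close to 0 and a standard deviation close to 1; the sum-of-twelve variant has slightly truncated tails, since it can never produce values beyond ±6.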
Page 3 | random numbers / distributions
Page 4 | random numbers / distributions
Page 5 | random variables / repeated measurements
Random variable: observation = „real“ value (normally unknown) + normally distributed error.
„Real“ value 3.1534 (normally not known), sigma 0.0010 (theoretical standard deviation).
From 10 measurements: mean, median, s_single (empirical standard deviation of a single measurement), s_mean (empirical standard deviation of the mean value), t(0.975; 9) (quantile of Student's t-distribution, 5 % error probability, 9 = 10 − 1 degrees of freedom), PV with P(mean − PV <= „real“ value <= mean + PV) = 0.95 (confidence interval for the mean value).
Page 6 | random variables / repeated measurements
„Real“ value 3.1534, sigma 0.0010, as before.
From 100 measurements: mean, median, s_single (empirical standard deviation of a single measurement), s_mean (empirical standard deviation of the mean value), t(0.975; 99) (quantile of Student's t-distribution, 5 % error probability, 99 = 100 − 1 degrees of freedom), PV with P(mean − PV <= „real“ value <= mean + PV) = 0.95 (confidence interval for the mean value).
Page 7 | random variables / repeated measurements
„Real“ value 3.1534, sigma 0.0010, as before.
From 1000 measurements: mean, median, s_single (empirical standard deviation of a single measurement), s_mean (empirical standard deviation of the mean value), t(0.975; 999) (quantile of Student's t-distribution, 5 % error probability, 999 = 1000 − 1 degrees of freedom), PV with P(mean − PV <= „real“ value <= mean + PV) = 0.95 (confidence interval for the mean value).
Page 8 | random variables / repeated measurements
„Real“ value 3.1534, sigma 0.0010, as before.
From 10 measurements, one of which contains a blunder: mean, median, s_single, s_mean, t(0.975; 9) (5 % error probability, 9 = 10 − 1 degrees of freedom), PV (confidence interval for the mean value).
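A small simulation of these four slides (Python/NumPy/SciPy): the true value and sigma are the ones quoted above, everything else is generated. It computes the listed quantities for 10, 100 and 1000 observations and then repeats the 10-observation case with one blunder, which pulls the mean and the standard deviation away while the median stays robust:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_value, sigma = 3.1534, 0.0010          # values quoted on the slides

def evaluate(obs, alpha=0.05):
    n = len(obs)
    mean, median = obs.mean(), np.median(obs)
    s_single = obs.std(ddof=1)               # empirical std of a single measurement
    s_mean = s_single / np.sqrt(n)           # empirical std of the mean value
    t = stats.t.ppf(1.0 - alpha / 2.0, df=n - 1)   # e.g. t(0.975; n-1)
    pv = t * s_mean                          # P(mean - PV <= true value <= mean + PV) = 0.95
    print(f"n={n:5d}  mean={mean:.5f}  median={median:.5f}  "
          f"s_single={s_single:.5f}  s_mean={s_mean:.5f}  PV={pv:.5f}")

for n in (10, 100, 1000):
    evaluate(rng.normal(true_value, sigma, n))

# the same situation with 10 observations, one of which contains a blunder of +0.01:
# mean and standard deviation are pulled away, the median stays robust
obs = rng.normal(true_value, sigma, 10)
obs[0] += 0.01
evaluate(obs)
```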
Page 9 | error propagation
> Assume we have an instrument stand S and a fixed point F, both with known (error-free) coordinates; we measure the horizontal angle to F and P and the distance from S to P; the instrument accuracy is well known from other experiments.
> We are looking for the coordinates of P and the confidence ellipse of P.
Page 10 | error propagation
Parameters: coordinates X [m], Y [m] of the stand S and the fixed point F.
Observations: directions r_SF [gon] and r_SP [gon], distance d_SP [m], with their standard deviations (variance/covariance matrix of the observations).
Unknowns: coordinates X_P [m], Y_P [m] of the target point P.
Page 11 | error propagation
Σ_ZZ = F · Σ_LL · Fᵀ, where Σ_LL is the variance/covariance matrix of the observations.
F contains the partial derivatives of the functional model with respect to the observations; it can be built numerically as a difference quotient, with f_ij ≈ [φ_i(…, L_j + ΔL_j, …) − φ_i(L)] / ΔL_j.
Page 12 | error propagation
Σ_ZZ is the covariance matrix of the unknowns; the variances of the coordinates are on its main diagonal. BUT this information is incomplete and could even be misleading; better use Helmert's error ellipse:
Page 13 | error propagation
Or, even better, use a confidence ellipse: with a chosen probability P the target point lies inside this confidence ellipse.
P = 0.99 (= 99 %), quantile of the χ²-distribution with 2 degrees of freedom:
A_0.99 = 0.61 mm, B_0.99 = 0.21 mm, θ = 50 gon
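The whole chain of pages 9-13 can be sketched numerically as follows (Python/NumPy/SciPy). The geometry, observation values and standard deviations below are invented for illustration and are not the slides' actual numbers; only the structure (functional model, numerically built Jacobian F, scaling of the ellipse with the χ² quantile) follows the slides:

```python
import numpy as np
from scipy import stats

GON = np.pi / 200.0                            # 1 gon in radians

# assumed geometry and accuracies (illustrative values only)
S = np.array([0.0, 0.0])                       # instrument stand (X, Y) [m]
F_pt = np.array([100.0, 0.0])                  # fixed point      (X, Y) [m]
L = np.array([0.0, 50.0, 20.000])              # r_SF [gon], r_SP [gon], d_SP [m]
sigma_L = np.array([0.5e-3, 0.5e-3, 0.2e-3])   # 0.5 mgon, 0.5 mgon, 0.2 mm
Sigma_LL = np.diag(sigma_L ** 2)

def phi(L):
    """Functional model: coordinates of P from the observations."""
    r_sf, r_sp, d_sp = L
    t_sf = np.arctan2(F_pt[1] - S[1], F_pt[0] - S[0])   # bearing S -> F from coordinates
    t_sp = t_sf + (r_sp - r_sf) * GON                   # bearing S -> P
    return S + d_sp * np.array([np.cos(t_sp), np.sin(t_sp)])

def jacobian(f, L, h=1e-6):
    """F_ij = d phi_i / d L_j, built numerically as a difference quotient (page 11)."""
    f0 = f(L)
    J = np.zeros((f0.size, L.size))
    for j in range(L.size):
        Lj = L.copy(); Lj[j] += h
        J[:, j] = (f(Lj) - f0) / h
    return J

Fmat = jacobian(phi, L)
Sigma_ZZ = Fmat @ Sigma_LL @ Fmat.T            # error propagation: Sigma_ZZ = F Sigma_LL F^T

# Helmert ellipse from the eigen-decomposition of the 2x2 covariance matrix;
# confidence ellipse by scaling with the chi-square quantile (P = 0.99, 2 dof, page 13)
eigval, eigvec = np.linalg.eigh(Sigma_ZZ)
q = stats.chi2.ppf(0.99, df=2)
A99, B99 = np.sqrt(q * eigval[::-1])           # semi-major / semi-minor axis
theta = np.arctan2(eigvec[1, -1], eigvec[0, -1]) / GON   # orientation [gon]
print(f"A_0.99 = {A99*1e3:.2f} mm   B_0.99 = {B99*1e3:.2f} mm   theta = {theta:.1f} gon")
```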
Page 14 | network adjustment
Example: adjustment of a 2D network with angular and distance measurements
Page 15 | adjustment theory
f = redundancy (degrees of freedom) of the adjustment problem:
> f = 0: no adjustment, but error propagation is possible; no control of the measurement
> f > 0: adjustment is possible; the measurement is controlled by itself; f > 100 is typical for large networks
> f < 0: scratch your head
Page 16 | network adjustment
Small and regular 2D network (for an easier solution and smaller matrices): 3 instrument stands (S1, S2, S3) and 8 target points (N1 … N8). All points are unknown (no fixed points). Initial coordinates are arbitrary; they just have to represent the geometry of the network.
Page 17 | network adjustment – input
vector of unknowns, vector of observations, vector of coarse (initial) coordinates, vector of standard deviations
Page 18 | network adjustment
Page 19 | network adjustment – design matrix
Page 20 | network adjustment
The A-matrix has lots of zero elements (figure: block structure of A, labelled with the network points, instrument stands and orientation unknowns).
Page 21 | network adjustment
P is a diagonal matrix, because we assume that observations are uncorrelated
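In code the stochastic model is just a diagonal matrix built from the a-priori standard deviations; a tiny sketch with invented values, using the common convention p_i = σ0²/σ_i²:

```python
import numpy as np

sigma_0 = 1.0                                   # a-priori reference standard deviation
# assumed standard deviations of the observations (e.g. directions [gon], distances [m])
sigma_l = np.array([0.3e-3, 0.3e-3, 0.3e-3, 0.2e-3, 0.2e-3])
P = np.diag((sigma_0 / sigma_l) ** 2)           # diagonal, since the observations are uncorrelated
```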
Page 22 | network adjustment
The normal matrix shows the dependencies between the elements. When adjusting networks without fixed points the normal matrix is singular, so an easy inversion of N is not possible: the network datum has to be defined by adding rows and columns that make the matrix regular.
Page 23 | network adjustment
Datum deficiency for a 2D network with distances: 2 translations + 1 rotation. Minimizing the total trace of the matrix means putting the network datum on all point coordinates. The additional rows and columns (G) express the constraints: no shift of the network in x, no shift of the network in y, no rotation of the network around z.
Page 24 | network adjustment
After the addition of G the normal matrix is regular and thus invertible; N⁻¹ is in general fully occupied.
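The steps of pages 18-24 can be condensed into a small free-network sketch (Python/NumPy). Everything below is invented for illustration and is deliberately simpler than the slides' network: a 4-point 2D network observed only with distances, so the datum defect is 3, and the datum is defined by bordering the singular normal matrix with G as described above:

```python
import numpy as np

rng = np.random.default_rng(0)

# invented toy network: 4 points in 2D, observations are the 6 point-to-point
# distances, so the datum defect is 3 (2 translations + 1 rotation)
X0 = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0], [0.0, 10.0]])   # approx. coordinates
pairs = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (1, 3)]

def distances(X):
    return np.array([np.linalg.norm(X[j] - X[i]) for i, j in pairs])

L = distances(X0) + rng.normal(0.0, 1e-4, len(pairs))     # simulated observations (0.1 mm noise)
l = L - distances(X0)                                      # reduced observations
P = np.eye(len(L)) / (1e-4) ** 2                           # uncorrelated, equal weights

def design_matrix(X0, h=1e-6):
    """A_ij = d(distance_i)/d(coordinate_j), built numerically as difference quotients."""
    x0 = X0.ravel()
    f0 = distances(X0)
    A = np.zeros((len(f0), x0.size))
    for j in range(x0.size):
        xj = x0.copy(); xj[j] += h
        A[:, j] = (distances(xj.reshape(-1, 2)) - f0) / h
    return A

A = design_matrix(X0)
N = A.T @ P @ A                    # singular: rank defect 3, no fixed points

# datum matrix G: no shift in x, no shift in y, no rotation about z
G = np.zeros((X0.size, 3))
G[0::2, 0] = 1.0                   # translation in x
G[1::2, 1] = 1.0                   # translation in y
G[0::2, 2] = -X0[:, 1]             # rotation about z: dx_i = -y_i * dphi
G[1::2, 2] = X0[:, 0]              #                   dy_i =  x_i * dphi

# bordered normal equations  [N G; G^T 0] [x; k] = [A^T P l; 0]
M = np.block([[N, G], [G.T, np.zeros((3, 3))]])
rhs = np.concatenate([A.T @ P @ l, np.zeros(3)])
sol = np.linalg.solve(M, rhs)
x_hat = sol[:X0.size]                        # coordinate corrections
Qxx = np.linalg.inv(M)[:X0.size, :X0.size]   # cofactor matrix of the coordinates
X_adj = X0 + x_hat.reshape(-1, 2)            # adjusted coordinates
```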
Page 25 | network adjustment
Page 26 | network adjustment
Adjusted coordinates and orientation unknowns; information about the error ellipses.
Page 27 | network adjustment
Page 28 | network adjustment
Building the covariance matrix of the unknowns (with the empirical s0²): Σ_XX = s0² · Q_XX. For the 2D network the quantile is taken with the appropriate degrees of freedom and error probability 1 − α.
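With the quantities of an adjustment at hand (for instance the A, P, l, x_hat and Qxx of the sketch on page 24) this step is short; a hedged helper, assuming a free 2D network with datum defect 3:

```python
import numpy as np

def covariance_of_unknowns(A, P, l, x_hat, Qxx):
    """Covariance matrix of the unknowns with the empirical variance factor s0^2.

    A, P, l, x_hat, Qxx are the design matrix, weight matrix, reduced observations,
    estimated coordinate corrections and cofactor matrix of the unknowns from a
    network adjustment (e.g. the free-network sketch on page 24).
    """
    v = A @ x_hat - l                       # corrections to the observations
    f = A.shape[0] - A.shape[1] + 3         # degrees of freedom (free 2D network, defect 3)
    s0_sq = float(v @ P @ v) / f            # empirical variance factor s0^2
    return s0_sq, s0_sq * Qxx               # Sigma_XX = s0^2 * Q_XX
```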
Page 29 | network adjustment
Error ellipses with P = 0.01 error probability (i.e. 99 % confidence) for all network points.
Page 30 | network adjustment
Confidence ellipses for all network points; relative confidence ellipses between some network points.
Page 31 | network adjustment
Relative confidence ellipses are most useful in accelerator science, because most of the time you are only interested in the relative accuracy between components. For the relative ellipse between N2 and N4 the ellipse parameters are calculated from the relative covariance matrix Σ_rel,N2N4 = Σ_N2N2 + Σ_N4N4 − Σ_N2N4 − Σ_N4N2.
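That formula translates directly into code; a small helper (Python/NumPy/SciPy, my own naming, assuming the adjusted coordinates are ordered x_1, y_1, x_2, y_2, …):

```python
import numpy as np
from scipy import stats

def relative_ellipse(Sigma_XX, i, j, prob=0.99):
    """Relative confidence ellipse between network points i and j (0-based indices).

    Sigma_XX is the full covariance matrix of the adjusted coordinates, assumed to
    be ordered (x_1, y_1, x_2, y_2, ...).  Returns semi-major axis, semi-minor axis
    and orientation [rad] of the ellipse of the coordinate differences.
    """
    si, sj = slice(2 * i, 2 * i + 2), slice(2 * j, 2 * j + 2)
    Sigma_rel = (Sigma_XX[si, si] + Sigma_XX[sj, sj]
                 - Sigma_XX[si, sj] - Sigma_XX[sj, si])
    eigval, eigvec = np.linalg.eigh(Sigma_rel)
    q = stats.chi2.ppf(prob, df=2)                    # quantile for the chosen probability
    A, B = np.sqrt(q * eigval[::-1])                  # semi-major / semi-minor axis
    theta = np.arctan2(eigvec[1, -1], eigvec[0, -1])  # orientation of the major axis
    return A, B, theta
```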
Page 32 | network adjustment
The estimation of s0² from the corrections v is used as a statistical test to check that the model parameters are right. Here: the a priori variances are ok, with P = 0.99.
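A sketch of such a global test (Python/SciPy); whether the test is formulated one- or two-sided is not stated on the slide, so the two-sided variant below is just one common choice:

```python
from scipy import stats

def global_test(vPv, f, sigma0_sq=1.0, alpha=0.01):
    """Global model test of the a-priori variances (here two-sided).

    vPv is v^T P v from the adjustment, f the degrees of freedom.  Under the null
    hypothesis s0^2 = sigma0^2 the statistic f * s0^2 / sigma0^2 follows a
    chi-square distribution with f degrees of freedom.
    """
    s0_sq = vPv / f
    T = f * s0_sq / sigma0_sq
    lo = stats.chi2.ppf(alpha / 2.0, df=f)
    hi = stats.chi2.ppf(1.0 - alpha / 2.0, df=f)
    return lo <= T <= hi, s0_sq
```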
Page 33 | adjustment
Example: 2D ellipsoid fit – deviation of position and rotation of an ellipsoidal flange.
Page 34 | flange adjustment
Known parameters (e.g. from the workshop drawing), unknowns with initial values, observations, constraints.
Page 35 | flange adjustment
Since it is not (easily) possible to separate unknowns and observations in the constraints, we use the general adjustment model:
B contains the derivatives of the constraints with respect to L
A contains the derivatives of the constraints with respect to X
k are the Lagrange multipliers (“Korrelaten”)
x is the vector of unknowns
w is the misclosure vector, evaluated at (L, X0)
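One way this general model can be set up for the flange example is sketched below (Python/NumPy). Everything here is invented for illustration: the semi-axes, the noise level and the simplified iteration scheme (linearized at the raw observations) are assumptions, not the procedure actually used on the slides; the sketch only shows how B, A, k, x and w fit together:

```python
import numpy as np

# assumed: semi-axes a, b of the flange ellipse from the drawing; unknowns are the
# shift (tx, ty) and the rotation phi; observations are measured 2D points.
a, b = 0.300, 0.200                                  # [m], invented values

def psi(pts, X):
    """Constraints psi(L, X) = 0: every observed point lies on the shifted/rotated ellipse."""
    tx, ty, phi = X
    c, s = np.cos(phi), np.sin(phi)
    ue =  c * (pts[:, 0] - tx) + s * (pts[:, 1] - ty)   # point in the ellipse frame
    ve = -s * (pts[:, 0] - tx) + c * (pts[:, 1] - ty)
    return (ue / a) ** 2 + (ve / b) ** 2 - 1.0

def numdiff(f, x, h=1e-7):
    """Numerical difference quotients, as in the plain error-propagation example."""
    f0 = f(x)
    J = np.zeros((f0.size, x.size))
    for j in range(x.size):
        xj = x.copy(); xj[j] += h
        J[:, j] = (f(xj) - f0) / h
    return J

def adjust_flange(pts, X0, Qll, n_iter=5):
    """Simplified Gauss-Helmert iteration, linearized at the raw observations."""
    X = np.asarray(X0, dtype=float)
    L = pts.ravel()
    for _ in range(n_iter):
        w = psi(pts, X)                                          # misclosure vector
        A = numdiff(lambda Xi: psi(pts, Xi), X)                  # d psi / d X
        B = numdiff(lambda Li: psi(Li.reshape(-1, 2), X), L)     # d psi / d L
        Minv = np.linalg.inv(B @ Qll @ B.T)
        x = -np.linalg.solve(A.T @ Minv @ A, A.T @ Minv @ w)     # corrections to the unknowns
        k = -Minv @ (A @ x + w)                                  # Lagrange multipliers ("Korrelaten")
        v = Qll @ B.T @ k                                        # corrections to the observations
        X = X + x
    return X, v.reshape(-1, 2)

# usage with simulated observations: points on a shifted and rotated ellipse plus noise
rng = np.random.default_rng(7)
t = np.linspace(0.0, 2.0 * np.pi, 12, endpoint=False)
true_X = np.array([0.010, -0.005, 0.02])                          # tx, ty [m], phi [rad]
c, s = np.cos(true_X[2]), np.sin(true_X[2])
ring = np.column_stack([a * np.cos(t), b * np.sin(t)])
pts = ring @ np.array([[c, -s], [s, c]]).T + true_X[:2] + rng.normal(0, 5e-5, (12, 2))
X_hat, v = adjust_flange(pts, [0.0, 0.0, 0.0], np.eye(24) * (5e-5) ** 2)
print("tx, ty [m], phi [rad]:", X_hat)
```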
Page 36 | flange adjustment
Page 37 | flange adjustment – Result
Page 38 | the end (for now)
may your [vv] always be minimal …