adjustment theory / least squares adjustment
Tutorial at IWAA2010 / examples
Markus Schlösser, Hamburg, 15.09.2010

Markus Schlösser | adjustment theory | Page 2

random numbers

> Computer-generated random numbers
  - are only pseudo-random numbers (prn)
  - mostly only uniformly distributed prn are available (C, Pascal, Excel, …)
  - some packages (Octave, MATLAB, etc.) have normally distributed prn ("randn")
> Normally distributed prn can be obtained by
  - the Box-Muller method
  - the sum of 12 U(0,1) variates (an example of the central limit theorem)
  - …
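The two generation methods above can be sketched in a few lines (Python is used here purely for illustration; the function names are my own):

```python
import math
import random

def box_muller(u1: float, u2: float) -> float:
    """Box-Muller method: two U(0,1) variates -> one N(0,1) variate."""
    return math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)

def clt_sum12() -> float:
    """Sum of 12 U(0,1) variates minus 6: approximately N(0,1),
    an example of the central limit theorem."""
    return sum(random.random() for _ in range(12)) - 6.0

random.seed(42)
# 1.0 - random() avoids log(0), since random() may return exactly 0.0
samples = [box_muller(1.0 - random.random(), random.random())
           for _ in range(10_000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / (len(samples) - 1)
print(f"mean ~ {mean:.3f}, variance ~ {var:.3f}")  # close to 0 and 1
```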

random numbers / distributions (figures)

random variables / repeated measurements

Random variable: observation = "real" value (normally unknown) + normally distributed error.

"Real" value: 3.1534 (normally not known)
Sigma: 0.0010 (theoretical standard deviation)

From 10 measurements:
- Mean, Median
- s_single (empirical standard deviation for a single measurement)
- s_mean (empirical standard deviation for the mean value)
- t(0.975; 9): quantile of Student's t-distribution, 5% error probability, 9 (10-1) degrees of freedom
- PV: P(mean - PV <= µ <= mean + PV) = 0.95, the confidence interval for the mean value
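The quantities on this slide can be reproduced with a short script (a sketch: only the "real" value 3.1534, sigma 0.0010, and the tabulated quantile t(0.975; 9) ≈ 2.262 come from the slide; the simulated observations are illustrative):

```python
import math
import random
import statistics

# simulate 10 measurements: "real" value 3.1534, sigma 0.0010 (slide values)
random.seed(1)
obs = [random.gauss(3.1534, 0.0010) for _ in range(10)]

mean = statistics.mean(obs)
median = statistics.median(obs)
s_single = statistics.stdev(obs)         # empirical std. dev., single measurement
s_mean = s_single / math.sqrt(len(obs))  # empirical std. dev. of the mean
t = 2.262                                # t(0.975; 9), Student's t-distribution
pv = t * s_mean                          # half-width of the 95% confidence interval
print(f"mean = {mean:.4f} +/- {pv:.4f} (95% confidence)")
```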

random variables / repeated measurements

Random variable: observation = "real" value (normally unknown) + normally distributed error.

"Real" value: 3.1534 (normally not known)
Sigma: 0.0010 (theoretical standard deviation)

From 100 measurements:
- Mean, Median
- s_single (empirical standard deviation for a single measurement)
- s_mean (empirical standard deviation for the mean value)
- t(0.975; 99): quantile of Student's t-distribution, 5% error probability, 99 (100-1) degrees of freedom
- PV: P(mean - PV <= µ <= mean + PV) = 0.95, the confidence interval for the mean value

random variables / repeated measurements

Random variable: observation = "real" value (normally unknown) + normally distributed error.

"Real" value: 3.1534 (normally not known)
Sigma: 0.0010 (theoretical standard deviation)

From 1000 measurements:
- Mean, Median
- s_single (empirical standard deviation for a single measurement)
- s_mean (empirical standard deviation for the mean value)
- t(0.975; 999): quantile of Student's t-distribution, 5% error probability, 999 (1000-1) degrees of freedom
- PV: P(mean - PV <= µ <= mean + PV) = 0.95, the confidence interval for the mean value

random variables / repeated measurements

Random variable: observation = "real" value (normally unknown) + normally distributed error.

"Real" value: 3.1534 (normally not known)
Sigma: 0.0010 (theoretical standard deviation)

From 10 measurements (this sample contains a blunder, i.e. a gross error):
- Mean, Median
- s_single (empirical standard deviation for a single measurement)
- s_mean (empirical standard deviation for the mean value)
- t(0.975; 9): quantile of Student's t-distribution, 5% error probability, 9 (10-1) degrees of freedom
- PV: P(mean - PV <= µ <= mean + PV) = 0.95, the confidence interval for the mean value

error propagation

> assume we have
  - instrument stand S
  - fixed point F
  - S and F both with known (error-free) coordinates
  - horizontal angles to F and P, distance from S to P
  - instrument accuracy well known from other experiments
> looking for
  - coordinates of P
  - confidence ellipse of P

error propagation

Parameters: coordinates Y [m], X [m] of the points S and F.
Observations L: directions r_SF [gon], r_SP [gon] and distance d_SP [m]; their standard deviations form the variance/covariance matrix Σ_LL.
Unknowns Z: coordinates Y_P [m], X_P [m] of the target point P.

error propagation

Z = φ(L); the matrix F contains the partial derivatives of φ with respect to the observations. Build the difference quotient (numerically):

∂φ/∂l_i ≈ [φ(…, l_i + Δl_i, …) − φ(…, l_i, …)] / Δl_i

with a small increment Δl_i.
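Numerically, the difference quotient and the propagation Σ_ZZ = F Σ_LL Fᵀ can be sketched as follows (a toy setup: the coordinates of S and F, the observation values, and the standard deviations are illustrative, not the slide's):

```python
import math

GON = math.pi / 200.0  # radians per gon

# illustrative coordinates of stand S and fixed point F (error-free)
YS, XS = 0.0, 0.0
YF, XF = 0.0, 100.0

def phi(L):
    """Z = phi(L): observations [r_SF, r_SP (gon), d_SP (m)] -> [Y_P, X_P]."""
    r_sf, r_sp, d_sp = L
    t_sf = math.atan2(YF - YS, XF - XS)   # bearing S -> F
    o = t_sf - r_sf * GON                 # orientation of the horizontal circle
    t_sp = o + r_sp * GON                 # bearing S -> P
    return [YS + d_sp * math.sin(t_sp), XS + d_sp * math.cos(t_sp)]

def jacobian(f, L, dL):
    """Difference quotient: F[i][j] = (f(L + dL_j*e_j)_i - f(L)_i) / dL_j."""
    f0 = f(L)
    F = [[0.0] * len(L) for _ in f0]
    for j in range(len(L)):
        Lj = list(L)
        Lj[j] += dL[j]
        fj = f(Lj)
        for i in range(len(f0)):
            F[i][j] = (fj[i] - f0[i]) / dL[j]
    return F

L = [0.0, 50.0, 25.0]          # r_SF, r_SP [gon], d_SP [m] (illustrative)
s = [0.0003, 0.0003, 0.0005]   # std. deviations: 0.3 mgon, 0.3 mgon, 0.5 mm
F = jacobian(phi, L, [1e-6, 1e-6, 1e-6])
# Sigma_LL is diagonal here, so Sigma_ZZ = F Sigma_LL F^T simplifies to:
SZZ = [[sum(F[a][j] * F[b][j] * s[j] ** 2 for j in range(3))
        for b in range(2)] for a in range(2)]
```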

error propagation

Σ_ZZ = F Σ_LL Fᵀ is the covariance matrix of the unknowns; the variances of the coordinates are on the main diagonal. BUT, this information is incomplete and could even be misleading; better use Helmert's error ellipse.

error propagation

Or, even better, use a confidence ellipse: with a chosen probability P the target point lies inside this confidence ellipse. For P = 0.99 (= 99%), using the quantile of the χ²-distribution with 2 degrees of freedom: A_0.99 = 0.61 mm, B_0.99 = 0.21 mm, θ = 50 gon.
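A sketch of the ellipse computation from a 2×2 covariance matrix (the numeric input is illustrative, not the slide's values; for 2 degrees of freedom the χ² quantile has the closed form −2 ln(1 − P)):

```python
import math

def confidence_ellipse(sxx, syy, sxy, P=0.99):
    """Semi-axes A, B and orientation (gon) of the confidence ellipse for the
    2x2 covariance matrix [[sxx, sxy], [sxy, syy]] and probability P."""
    k = math.sqrt(-2.0 * math.log(1.0 - P))  # sqrt of chi^2 quantile, 2 dof
    w = math.sqrt((sxx - syy) ** 2 + 4.0 * sxy ** 2)
    a = k * math.sqrt(0.5 * (sxx + syy + w))  # semi-major axis
    b = k * math.sqrt(0.5 * (sxx + syy - w))  # semi-minor axis
    theta = 0.5 * math.atan2(2.0 * sxy, sxx - syy) / (math.pi / 200.0)  # gon
    return a, b, theta

# illustrative covariance in mm^2 (not the slide's values)
a, b, theta = confidence_ellipse(0.04, 0.01, 0.015)
print(round(a, 3), round(b, 3), round(theta, 1))
```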

network adjustment

Example: adjustment of a 2D network with angular and distance measurements.

adjustment theory

With redundancy f (number of observations minus number of unknowns):

> f = 0
  - no adjustment, but error propagation possible
  - no control of the measurement
> f > 0
  - adjustment possible
  - the measurement is controlled by itself
  - f > 100 is typical for large networks
> f < 0
  - scratch your head

network adjustment

(table of approximate point coordinates: name, Y [m], X [m] for S1…S3 and N1…N8)

- small + regular network
- 2D for easier solution and smaller matrices
- 3 instrument stands (S1, S2, S3)
- 8 target points (N1 … N8)
- all points are unknown (no fixed points)
- initial coordinates are arbitrary, they just have to represent the geometry of the network

network adjustment - input

- vector of unknowns
- vector of observations
- vector of coarse coordinates
- vector of standard deviations

network adjustment

network adjustment – design matrix

network adjustment

The A-matrix has lots of zero elements; its columns correspond to the network points, the instrument stands, and the orientation unknowns.

network adjustment

P is a diagonal matrix, because we assume that the observations are uncorrelated.

network adjustment

The normal matrix shows the dependencies between the elements. The normal matrix is singular when adjusting networks without fixed points: an easy inversion of N is not possible, the network datum has to be defined, and rows and columns are added to make the matrix regular.

network adjustment

Datum deficiency for a 2D network with distances: 2 translations + 1 rotation. Minimizing the total matrix trace means putting the datum on all point coordinates. The additional rows and columns express the constraints:
- no shift of the network in x
- no shift of the network in y
- no rotation of the network around z

network adjustment

After the addition of G, the normal matrix is regular and thus invertible. N⁻¹ is in general fully populated.
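The datum-constrained solution can be illustrated on a toy 1D analogue (a free levelling net with three heights; the network, the values, and the unit weights are my own example, not the slide's 2D network). The normal matrix is singular until the constraint rows/columns G are appended:

```python
def solve(M, b):
    """Gauss-Jordan elimination with partial pivoting (enough for this sketch)."""
    n = len(M)
    A = [row[:] + [b[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(n):
            if r != c and A[r][c] != 0.0:
                f = A[r][c] / A[c][c]
                A[r] = [x - f * y for x, y in zip(A[r], A[c])]
    return [A[i][n] / A[i][i] for i in range(n)]

# free 1D "network": 3 heights, observed height differences between all pairs;
# datum deficiency = 1 (translation), so N = A^T P A alone is singular
obs = [(0, 1, 1.02), (1, 2, 0.49), (0, 2, 1.50)]  # (from, to, dh) - illustrative
x0 = [0.0, 1.0, 1.5]                              # approximate heights
A, l = [], []
for i, j, dh in obs:
    row = [0.0, 0.0, 0.0]
    row[i], row[j] = -1.0, 1.0
    A.append(row)
    l.append(dh - (x0[j] - x0[i]))                # reduced observation
# N = A^T A (unit weights); border it with the datum constraint G = [1 1 1]^T
N = [[sum(A[k][i] * A[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
n_vec = [sum(A[k][i] * l[k] for k in range(3)) for i in range(3)]
G = [1.0, 1.0, 1.0]
M = [N[i] + [G[i]] for i in range(3)] + [G + [0.0]]
sol = solve(M, n_vec + [0.0])
dx = sol[:3]          # coordinate corrections; sum(dx) = 0 enforces the datum
print([round(v, 4) for v in dx])
```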

network adjustment

network adjustment

Result: the adjusted coordinates and orientation unknowns, plus the information needed for the error ellipses.

network adjustment

network adjustment

Building the covariance matrix of the unknowns (with the empirical s0²). For the 2D network: degrees of freedom f, error probability 1 − α.

network adjustment

error ellipses with P = 0.01 error probability for all network points

network adjustment

confidence ellipses for all network points; relative confidence ellipses between some network points

network adjustment

Relative confidence ellipses are most useful in accelerator science, because most of the time you are only interested in the relative accuracy between components. For the relative ellipse between N2 and N4, the ellipse parameters are calculated from Σ_rel(N2, N4).
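A common way to form Σ_rel from the 2×2 blocks of the full covariance matrix is Σ_rel = Σ_N2 + Σ_N4 − Σ_N2,N4 − Σ_N4,N2; a sketch with made-up blocks (not the slide's numbers):

```python
def relative_covariance(Sii, Sjj, Sij):
    """Covariance of the coordinate difference between points i and j:
    S_rel = S_ii + S_jj - S_ij - S_ji, with S_ij the 2x2 cross block."""
    Sji = [[Sij[0][0], Sij[1][0]], [Sij[0][1], Sij[1][1]]]  # transpose of S_ij
    return [[Sii[a][b] + Sjj[a][b] - Sij[a][b] - Sji[a][b] for b in range(2)]
            for a in range(2)]

# illustrative 2x2 blocks of the full covariance matrix (mm^2)
S_N2 = [[0.04, 0.01], [0.01, 0.03]]
S_N4 = [[0.05, 0.00], [0.00, 0.02]]
S_24 = [[0.02, 0.00], [0.00, 0.01]]
S_rel = relative_covariance(S_N2, S_N4, S_24)
print(S_rel)
```

The resulting 2×2 matrix can be fed into the same ellipse formulas as any point covariance.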

network adjustment

The estimation of s0² from the corrections v is used as a statistical test, to prove that the model parameters are right: the a priori variances are OK, with P = 0.99.
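A sketch of this global test (the corrections, weights, and f are a made-up example; the χ² bounds are the tabulated two-sided 99% values for f = 3):

```python
def global_test(v, p, f, chi2_lo, chi2_hi, sigma0=1.0):
    """Global model test: estimate s0^2 = v'Pv / f and check the statistic
    f * s0^2 / sigma0^2 against tabulated chi-square bounds."""
    vtpv = sum(pi * vi * vi for vi, pi in zip(v, p))
    s02 = vtpv / f
    T = f * s02 / sigma0 ** 2
    return s02, chi2_lo <= T <= chi2_hi

# illustrative corrections and unit weights; bounds for f = 3, P = 0.99:
# chi2(0.005; 3) ~= 0.072 and chi2(0.995; 3) ~= 12.84 (table values)
s02, ok = global_test([0.8, -1.1, 0.5], [1.0, 1.0, 1.0], 3, 0.072, 12.84)
print(round(s02, 3), ok)
```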

adjustment

Example: 2D ellipsoid fit; deviation of position and rotation of an ellipsoidal flange.

flange adjustment

- known parameters (e.g. from the workshop drawing)
- unknowns with initial values
- observations
- constraints

flange adjustment

Since it is not (easily) possible to separate unknowns and observations in the constraints, we use the general adjustment model (Gauss-Helmert model):
- B contains the derivatives of φ with respect to L
- A contains the derivatives of φ with respect to X
- k are the Lagrange multipliers ("Korrelaten")
- x is the vector of unknowns
- w is the vector of misclosures φ(L, X0)
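Setting up the Lagrange function with these quantities leads to the standard normal-equation system of this model (a sketch consistent with the notation above, with Q = P⁻¹):

```latex
% Gauss-Helmert model: minimize v^T P v subject to the linearized
% constraints  B v + A x + w = 0,  with Q = P^{-1}
\begin{pmatrix} B\,Q\,B^{\mathsf T} & A \\ A^{\mathsf T} & 0 \end{pmatrix}
\begin{pmatrix} k \\ x \end{pmatrix}
=
\begin{pmatrix} -\,w \\ 0 \end{pmatrix},
\qquad v = Q\,B^{\mathsf T} k .
```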

flange adjustment

flange adjustment

Result:

Markus Schlösser | adjustment theory | | Page 38 the end for now may your [vv] always be minimal …