The adjustment of the observations


Surveying II. The adjustment of the observations of a single quantity

A short revision

Total error: the difference between the true value and the observation: $\varepsilon_i = X - L_i$. The total error ($\varepsilon_i$) can be subdivided into two parts, the systematic error ($\delta_i$) and the random error ($\xi_i$):

$$\varepsilon_i = \delta_i + \xi_i$$

A short revision

The probability density function. The PDF of a normally distributed probabilistic variable:

$$f(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{(x-a)^2}{2\sigma^2}\right)$$

where $a$ is the expected value and $\sigma$ is the standard deviation.
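As a quick numerical companion to the formula above, here is a minimal Python sketch of the normal PDF (the function name and signature are my own, not from the slides):

```python
import math

def normal_pdf(x, a, sigma):
    """PDF of a normal distribution with expected value a and standard deviation sigma."""
    return math.exp(-((x - a) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# The curve is symmetric about the expected value a:
# normal_pdf(a + d, a, sigma) == normal_pdf(a - d, a, sigma)
```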

A short revision

The "3-sigma" rule:

$$P(a - 3\sigma < x < a + 3\sigma) \approx 99.7\%$$

where: $a$ is the expected value of the probabilistic variable $x$.

A short revision

The mean error of the variable can be computed from the total errors (Gauss):

$$m = \pm\sqrt{\frac{\sum_{i=1}^{n} \varepsilon_i^2}{n}}.$$

In order to compute $\varepsilon_i$, the true value should be known. Thus the mean error must be estimated from the observations:

$$m = \pm\sqrt{\frac{\sum_{i=1}^{n} v_i^2}{n-1}},$$

where $n-1$ is the number of redundant observations (degrees of freedom).

A short revision

The relationship between the weight and the mean error:

$$p_i = \frac{\mu^2}{m_i^2},$$

where $\mu$ is the mean error of an observation of unit weight.
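The weight formula can be sketched in Python (the helper name is mine; $\mu$ and $m_i$ are assumed to be given in the same unit):

```python
def weight(mu, m_i):
    """Weight of an observation with mean error m_i, for unit-weight mean error mu."""
    return (mu / m_i) ** 2

# An observation whose mean error equals mu gets weight 1;
# halving the mean error quadruples the weight.
```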

Accuracy vs. precision

Accuracy: the value is close to the true value (the total error is small).
Precision: the method gives similar results under changed conditions, too (the mean error computed from the observations is small).
A result can therefore be accurate but not precise, or precise but not accurate.

Adjusting the observations of a single quantity

"A single observation is not an observation…" When more observations are made, discrepancies are experienced: $L_1 \neq L_2 \neq \dots \neq L_n$. Let's assume that:
- the observations are statistically independent;
- the observations are free of systematic error;
- the mean errors of the individual observations ($m_1, m_2, \dots, m_n$) are known.

Adjusting the observations of a single quantity

Task: to remove the discrepancies from the observations and to compute the most likely value of the quantity. A correction is applied to each observation to obtain the adjusted value (free of contradiction): $\hat{L} = L_i + v_i$. Question: how should the correction values be determined? (There is an infinite number of possible correction sets.) Let's minimize the corrections!

Adjusting the observations of a single quantity

Usually the weighted square sum of the corrections is minimized ("least squares adjustment"):

$$\sum_{i=1}^{n} p_i v_i^2 = \min.$$

Let's combine the two equations:

$$\sum_{i=1}^{n} p_i (\hat{L} - L_i)^2 = \min.$$

Adjusting the observations of a single quantity

Let's find the minimum of the function:

$$\frac{d}{d\hat{L}} \sum_{i=1}^{n} p_i (\hat{L} - L_i)^2 = 2 \sum_{i=1}^{n} p_i (\hat{L} - L_i) = 0,$$

thus:

$$\hat{L} \sum_{i=1}^{n} p_i = \sum_{i=1}^{n} p_i L_i.$$

Adjusting the observations of a single quantity

By reordering this equation:

$$\hat{L} = \frac{\sum_{i=1}^{n} p_i L_i}{\sum_{i=1}^{n} p_i}.$$

Note that this formula is the formula of the weighted mean, which is:
- an unbiased estimator;
- the most efficient one.
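The weighted-mean formula translates directly into code; a minimal Python sketch (the function name is mine):

```python
def weighted_mean(observations, weights):
    """Adjusted value of a single quantity: sum(p_i * L_i) / sum(p_i)."""
    return sum(p * L for p, L in zip(weights, observations)) / sum(weights)
```

With equal weights this reduces to the ordinary arithmetic mean.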

Adjusting the observations of a single quantity

Let's check the computations. The corrections are computed for each observation: $v_i = \hat{L} - L_i$. Let's compute the following sum:

$$\sum_{i=1}^{n} p_i v_i = \hat{L} \sum_{i=1}^{n} p_i - \sum_{i=1}^{n} p_i L_i = 0. \quad \text{Check!}$$

Adjusting the observations of a single quantity

Task: the computation of the mean error of the adjusted value. The law of error propagation can be applied to $\hat{L} = \sum p_i L_i / \sum p_i$. Thus the mean error of the adjusted value is:

$$M^2 = \sum_{i=1}^{n} \left(\frac{p_i}{\sum_j p_j}\right)^2 m_i^2.$$

Adjusting the observations of a single quantity

The mean error of the adjusted value, introducing the relationship between the weight and the mean error ($m_i^2 = \mu^2 / p_i$):

$$M^2 = \sum_{i=1}^{n} \frac{p_i^2}{\left(\sum_j p_j\right)^2} \cdot \frac{\mu^2}{p_i} = \frac{\mu^2 \sum_i p_i}{\left(\sum_j p_j\right)^2} = \frac{\mu^2}{\sum_i p_i}.$$

Adjusting the observations of a single quantity

Thus the mean error of the adjusted value can be computed by:

$$M = \frac{\mu}{\sqrt{\sum p_i}}.$$

Please note: it is necessary to know the mean error of unit weight before the computation of the mean error (the 'a priori' mean error of unit weight). In this case the computed mean error values are based on quantities which are known before the adjustment process. These are the 'a priori' mean error values. They can be used for planning the observations.

Adjusting the observations of a single quantity

After the adjustment, the mean error of unit weight can be estimated using the following equation:

$$\hat{\mu} = \sqrt{\frac{\sum_{i=1}^{n} p_i v_i^2}{f}},$$

where $f$ is the number of redundant observations (degrees of freedom). In case of $n$ observations of a single quantity, $f = n - 1$:

$$\hat{\mu} = \sqrt{\frac{\sum_{i=1}^{n} p_i v_i^2}{n-1}}.$$

This equation uses quantities available after the adjustment process: it is the 'a posteriori' mean error of unit weight.
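The estimate above can be sketched in Python (the function name is mine; corrections and weights are assumed to be given per observation):

```python
import math

def posterior_unit_weight_error(corrections, weights):
    """'A posteriori' mean error of unit weight: sqrt(sum(p_i * v_i^2) / (n - 1))."""
    f = len(corrections) - 1  # redundant observations for a single quantity
    return math.sqrt(sum(p * v * v for p, v in zip(weights, corrections)) / f)
```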

Adjusting the observations of a single quantity

Using the 'a posteriori' (after the adjustment) mean error of unit weight, the mean error of the observations as well as of the adjusted value can be computed:

$$\hat{m}_i = \frac{\hat{\mu}}{\sqrt{p_i}}, \qquad \hat{M} = \frac{\hat{\mu}}{\sqrt{\sum p_i}}.$$

Adjusting the observations of a single quantity

Question: what is the weight of the adjusted value? Since the mean error of unit weight is known, using the relationship between the weight and the mean error, the weight of the adjusted value can be computed:

$$P = \frac{\mu^2}{M^2} = \sum_{i=1}^{n} p_i.$$

Thus the weight of the adjusted value equals the sum of the weights of the observations.

The 'a priori' and the 'a posteriori' mean error

- the 'a priori' mean error reflects our knowledge before the observations (instrument specifications, prior experiences);
- the 'a posteriori' mean error can be computed after the adjustment of the observations (the experienced mean error);
- in both cases the mean error of unit weight can be computed;
- when the two values are close to each other, then:
  - the 'a priori' mean error values are realistic (our observations are accurate enough);
  - our observations are not affected by blunders.

The process of adjustment

Given the observations and their mean errors: $L_1, L_2, L_3, \dots, L_n$ and $m_1, m_2, m_3, \dots, m_n$.
- Choose an 'a priori' mean error of unit weight: $\mu$.
- Define the weights of the observations: $p_i = \mu^2 / m_i^2$.
- Compute the adjusted value: $\hat{L} = \sum p_i L_i / \sum p_i$.

The process of adjustment

- Compute the corrections: $v_i = \hat{L} - L_i$.
- Check the adjustment: $\sum p_i v_i = 0$.
- Compute the 'a posteriori' mean error of unit weight: $\hat{\mu} = \sqrt{\sum p_i v_i^2 / (n-1)}$.
- Compute the weight of the adjusted value: $P = \sum p_i$.
- Compute the 'a posteriori' mean error of the observations and the adjusted value: $\hat{m}_i = \hat{\mu} / \sqrt{p_i}$, $\hat{M} = \hat{\mu} / \sqrt{P}$.
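The whole process can be collected into one short Python function; this is a sketch following the steps on the slides, with variable names of my own choosing:

```python
import math

def adjust_single_quantity(obs, mean_errors, mu_prior):
    """Least-squares adjustment of repeated observations of a single quantity."""
    p = [(mu_prior / m) ** 2 for m in mean_errors]            # weights p_i = mu^2 / m_i^2
    L_hat = sum(pi * Li for pi, Li in zip(p, obs)) / sum(p)   # weighted mean
    v = [L_hat - Li for Li in obs]                            # corrections
    assert abs(sum(pi * vi for pi, vi in zip(p, v))) < 1e-9   # check: sum(p_i v_i) = 0
    mu_post = math.sqrt(sum(pi * vi ** 2 for pi, vi in zip(p, v)) / (len(obs) - 1))
    P = sum(p)                                                # weight of adjusted value
    M_post = mu_post / math.sqrt(P)                           # mean error of adjusted value
    return L_hat, v, mu_post, P, M_post
```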

When the 'a priori' mean errors of the observations are equal

In this case $m_1 = m_2 = \dots = m_n = \mu$, thus $p_i = 1$: the observations have unit weight! The adjusted value is the simple arithmetic mean:

$$\hat{L} = \frac{\sum_{i=1}^{n} L_i}{n}.$$

The corrections: $v_i = \hat{L} - L_i$.

When the 'a priori' mean errors of the observations are equal

The 'a posteriori' mean error of unit weight (which now equals the mean error of one observation):

$$\hat{\mu} = \sqrt{\frac{\sum_{i=1}^{n} v_i^2}{n-1}}.$$

The weight of the adjusted value: $P = n$. The 'a posteriori' mean error of the adjusted value:

$$\hat{M} = \frac{\hat{\mu}}{\sqrt{n}}.$$
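The equal-weight special case simplifies accordingly; a minimal Python sketch (names are mine):

```python
import math

def adjust_equal_weight(obs):
    """Equal-weight case: arithmetic mean and its mean error."""
    n = len(obs)
    L_hat = sum(obs) / n                                     # arithmetic mean
    v = [L_hat - Li for Li in obs]                           # corrections
    mu_post = math.sqrt(sum(vi ** 2 for vi in v) / (n - 1))  # a posteriori unit-weight error
    M_post = mu_post / math.sqrt(n)                          # mean error of adjusted value
    return L_hat, mu_post, M_post
```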

A simple example

The results of the distance observations between two points are given with their mean error values:
L1 = 121.115 m ± 10 mm
L2 = 121.119 m ± 5 mm
L3 = 121.121 m ± 5 mm
L4 = 121.118 m ± 10 mm
L5 = 121.116 m ± 10 mm

A simple example

With the reference value $L_0 = 121.110$ m, the reduced observations are $l_i = L_i - L_0$ (in mm). Let's choose an 'a priori' mean error of unit weight of $\mu = 10$ mm, so that $p_i = \mu^2 / m_i^2$:

 l_i [mm] | m_i [mm] | m_i^2 | p_i | p_i*l_i | v_i [mm] | p_i*v_i | v_i^2 | p_i*v_i^2
    +5    |    10    |  100  |  1  |    +5   |    +4    |    +4   |   16  |    16
    +9    |     5    |   25  |  4  |   +36   |     0    |     0   |    0  |     0
   +11    |     5    |   25  |  4  |   +44   |    -2    |    -8   |    4  |    16
    +8    |    10    |  100  |  1  |    +8   |    +1    |    +1   |    1  |     1
    +6    |    10    |  100  |  1  |    +6   |    +3    |    +3   |    9  |     9
    Σ     |          |       | 11  |   +99   |          |     0   |       |    42

The adjusted value: $\hat{L} = L_0 + \sum p_i l_i / \sum p_i = 121.110\ \text{m} + 99/11\ \text{mm} = 121.119$ m.
The 'a posteriori' mean error of unit weight: $\hat{\mu} = \sqrt{42/4} \approx 3.2$ mm.
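The table can be double-checked with a short Python script (a sketch; variable names are mine, input values from the slide):

```python
import math

L0 = 121.110                                         # reference value [m]
obs = [121.115, 121.119, 121.121, 121.118, 121.116]  # observations [m]
m = [10, 5, 5, 10, 10]                               # mean errors [mm]
mu = 10                                              # a priori mean error of unit weight [mm]

p = [(mu / mi) ** 2 for mi in m]                     # weights: 1, 4, 4, 1, 1
l = [round((Li - L0) * 1000) for Li in obs]          # reduced observations [mm]
l_hat = sum(pi * li for pi, li in zip(p, l)) / sum(p)    # 99 / 11 = 9 mm
L_hat = L0 + l_hat / 1000                            # adjusted value [m]
v = [l_hat - li for li in l]                         # corrections [mm]
mu_post = math.sqrt(sum(pi * vi ** 2 for pi, vi in zip(p, v)) / (len(l) - 1))

print(round(L_hat, 3), round(mu_post, 1))            # 121.119 3.2
```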