Weights of Observations

Introduction Weights can be assigned to observations according to their relative quality. Example: the interior angles of a traverse are measured, half of them by an inexperienced operator and the other half by the best instrument operator. The two sets should not be treated equally, so relative weights are applied. Weight is inversely proportional to variance.

Relation to Covariance Matrix With correlated observations, weights are related to the inverse of the covariance matrix, Σ. For convenience, we introduce the concept of a cofactor. A cofactor is related to its associated covariance element by a scale factor, the inverse of the reference variance: qij = σij / σ0².

Recall, Covariance Matrix For independent observations, the off-diagonal terms are all zero.

Cofactor Matrix We can also define a cofactor matrix, Q = Σ / σ0², which is related to the covariance matrix by the reference variance. The weight matrix is then its inverse: W = Q⁻¹ = σ0² Σ⁻¹.
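The cofactor and weight matrices can be sketched numerically. The covariance values below are hypothetical, chosen only to illustrate the relationships W = Q⁻¹ and Q = Σ / σ0²:

```python
import numpy as np

# Hypothetical 2x2 covariance matrix for two correlated observations (m^2),
# with an a priori reference variance of 1.
sigma = np.array([[0.0004, 0.0001],
                  [0.0001, 0.0009]])
ref_var = 1.0

Q = sigma / ref_var        # cofactor matrix: Q = Sigma / sigma_0^2
W = np.linalg.inv(Q)       # weight matrix:   W = Q^-1 = sigma_0^2 * Sigma^-1

print(np.allclose(W @ Q, np.eye(2)))  # True
```

Note that the off-diagonal covariance makes the weight matrix non-diagonal as well; only for independent observations does W reduce to a diagonal matrix.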

Weight Matrix for Independent Observations The covariance matrix is diagonal. Its inverse is also diagonal, where each diagonal term is the reciprocal of the corresponding variance element. Therefore, the weight for observation i is wi = σ0² / σi². If the weight wi = 1, then σi² = σ0², the variance of an observation of unit weight (the reference variance).

Reference Variance The reference variance σ0² is an arbitrary (a priori) scale factor. A convenient value is 1 (one); in that case the weight of an independent observation is simply the reciprocal of its variance.

Simple Weighted Mean Example A distance is measured three times, giving values of 151.9, 152.5, and 152.5. The ordinary mean is (151.9 + 152.5 + 152.5)/3 = 152.3. The weighted mean gives the same answer: the value 152.5 appears twice, so it can be given a relative weight of 2, and (1 × 151.9 + 2 × 152.5)/3 = 152.3.
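The equivalence of the two computations can be verified directly:

```python
# Ordinary mean of the three observations from the example
obs = [151.9, 152.5, 152.5]
mean = sum(obs) / len(obs)

# Weighted mean: 152.5 occurs twice, so give it a relative weight of 2
values = [151.9, 152.5]
weights = [1, 2]
wmean = sum(w * v for w, v in zip(weights, values)) / sum(weights)

print(round(mean, 1), round(wmean, 1))  # 152.3 152.3
```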

Weighted Mean Formula The weighted mean is M = Σ(wi zi) / Σwi, the sum of the weighted observations divided by the sum of the weights.

Weighted Mean – Example 2 A line was measured twice, using two different total stations. The distance observations are listed below, along with standard deviations computed from the instrument specifications. Compute the weighted mean. D1 = 1097.253 m, σ1 = 0.010 m; D2 = 1097.241 m, σ2 = 0.005 m. Solution: First, compute the weights: w1 = 1/σ1² = 1/(0.010)² = 10,000 and w2 = 1/σ2² = 1/(0.005)² = 40,000.

Example - Continued Now, compute the weighted mean: M = (10,000 × 1097.253 + 40,000 × 1097.241)/50,000 = 1097.243 m. Notice that the value is much closer to the more precise observation.
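The same computation, using the data given in the example:

```python
d = [1097.253, 1097.241]   # observed distances (m)
s = [0.010, 0.005]         # standard deviations (m)

w = [1 / si**2 for si in s]                           # weights: 10000, 40000
wmean = sum(wi * di for wi, di in zip(w, d)) / sum(w)
print(round(wmean, 3))  # 1097.243
```

The result falls four times closer to D2 than to D1, in proportion to the 4:1 weight ratio.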

Standard Deviations – Weighted Case When computing a weighted mean, you also want an indication of the standard deviation of the observations. Since the observations have different weights, they have different standard deviations. A single representative value is the standard deviation of an observation of unit weight. We can also compute the standard deviation of a particular observation, and the standard deviation of the weighted mean.

Standard Deviation Formulas Standard deviation of unit weight: S0 = √(Σwi vi² / (n − 1)), where vi are the residuals. Standard deviation of observation i: Si = S0 / √wi. Standard deviation of the weighted mean: SM = S0 / √(Σwi).
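Applying these formulas to the two-distance example above (weights 10,000 and 40,000):

```python
import math

# Data from the two-distance weighted mean example
d = [1097.253, 1097.241]
w = [10000.0, 40000.0]

wmean = sum(wi * di for wi, di in zip(w, d)) / sum(w)
v = [wmean - di for di in d]                     # residuals
n = len(d)

s0 = math.sqrt(sum(wi * vi**2 for wi, vi in zip(w, v)) / (n - 1))  # unit weight
s_obs = [s0 / math.sqrt(wi) for wi in w]         # std dev of each observation
s_mean = s0 / math.sqrt(sum(w))                  # std dev of the weighted mean

print(round(s_mean, 4))  # 0.0048
```

The more heavily weighted observation gets the smaller standard deviation, as expected.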

Weights for Angles and Leveling If all other conditions are equal, angle weights are directly proportional to the number of turns. For differential leveling, it is conventional to consider entire lines of levels rather than individual setups. Leveling weights are: inversely proportional to line length; inversely proportional to the number of setups.

Angle Example 9.2 This example asks for an “adjustment” and uses the concept of a correction factor, which has not been introduced at this point. We will skip this type of problem until we reach the topic of least squares adjustment.

Differential Leveling Example Four different routes were taken to determine the elevation difference between two benchmarks (see table). Compute the weighted mean elevation difference.

Example - Continued Weights: (note that the weights are multiplied by 12 to produce integers, but this is not necessary). Compute the weighted mean. What about significant figures?
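The original table of routes is not reproduced in this transcript, so the numbers below are hypothetical stand-ins; the route lengths (2, 3, 4, 6) are chosen so that scaling the weights by 12 yields integers, matching the note on the slide. The sketch shows the leveling workflow: weights inversely proportional to line length, and the fact that rescaling all weights leaves the mean unchanged.

```python
# Hypothetical data standing in for the missing table: elevation differences (m)
# and route lengths (km) for four leveling routes between the same benchmarks.
dh = [25.35, 25.41, 25.38, 25.30]
length = [2.0, 3.0, 4.0, 6.0]

w = [1.0 / L for L in length]        # weight inversely proportional to length
w12 = [12 * wi for wi in w]          # scaled by 12 -> integer weights 6, 4, 3, 2

wm = sum(wi * di for wi, di in zip(w, dh)) / sum(w)
wm12 = sum(wi * di for wi, di in zip(w12, dh)) / sum(w12)
print(round(wm, 3), abs(wm - wm12) < 1e-9)  # 25.365 True
```

The residuals, the standard deviation of unit weight, and the standard deviation of the mean would then follow from the formulas on the previous slides.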

Example - Continued Compute residuals Compute standard deviation of unit weight Compute standard deviation of the mean

Example - Continued Standard deviations of weighted observations:

Summary Weighting allows us to account for the different precisions of individual observations. So far, the examples have used simple means; soon we will look at least squares adjustment with weights. In adjustments involving observations of different types (e.g., angles and distances), it is essential to use weights.