Principles of Least Squares

Introduction

In surveying, we often have geometric constraints on our measurements:
- Differential leveling loop closure = 0
- Sum of interior angles of a polygon = (n − 2) × 180°
- Closed traverse: Σlats = Σdeps = 0

Because of measurement errors, these constraints are generally not met exactly, so an adjustment should be performed.

Random Error Adjustment

We assume (hope?) that all systematic errors have been removed, so only random error remains. Random error conforms to the laws of probability, so the measurements should be adjusted accordingly. Why?

Definition of a Residual

If M represents the most probable value of a measured quantity, and z_i represents the i-th measurement, then the i-th residual v_i is:

v_i = M − z_i

Fundamental Principle of Least Squares

In order to obtain most probable values (MPVs), the sum of squares of the residuals must be minimized. (See the book for the derivation.) In the weighted case, the weighted sum of squares of the residuals must be minimized. Technically, the weighted form assumes that the measurements are independent, but the general case involving covariance can also be handled.
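As a compact statement in standard notation (these are the conventional forms, not taken from the slide's own equations; Σ_LL denotes the covariance matrix of the observations):

    \min \sum_i v_i^2                    \text{(unweighted)}
    \min \sum_i w_i v_i^2                \text{(weighted, independent observations)}
    \min \; V^T \Sigma_{LL}^{-1} V       \text{(general case with covariance)}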

Stochastic Model

The covariances (including variances), and hence the weights as well, form the stochastic model. Even an "unweighted" adjustment assumes that all observations have equal weight, which is itself a stochastic model. The stochastic model is distinct from the mathematical model. Stochastic models may be determined through sample statistics and error propagation, but are often a priori estimates.

Mathematical Model

The mathematical model is a set of one or more equations that define an adjustment condition. Examples are the constraints mentioned earlier. Models also include the collinearity equations in photogrammetry and the equation of a line in linear regression. It is important that the model properly represent reality. For example, the angles of a plane triangle should total 180°, but if the triangle is large, spherical excess causes a systematic error, so a more elaborate model is needed.

Types of Models: Conditional and Parametric

A conditional model enforces geometric conditions on the measurements and their residuals. A parametric model expresses equations in terms of unknowns that were not directly measured but relate to the measurements (e.g., a distance expressed by a coordinate inverse). Parametric models are more commonly used because it can be difficult to express all of the conditions in a complicated measurement network.

Observation Equations

Observation equations are written for the parametric model. One equation is written for each observation. The equation is generally expressed as a function of unknown variables (such as coordinates) that equals a measurement plus a residual. We want more measurements than unknowns, which gives a redundant adjustment.
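In general parametric form (standard notation, assumed here rather than copied from the slide), with m observations and n unknowns:

    f_i(x_1, \dots, x_n) = \ell_i + v_i, \qquad i = 1, \dots, m, \quad m > n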

Elementary Example

Consider the following three equations involving two unknowns:

(1) x + y = 3.0
(2) 2x − y = 1.5
(3) x − y = 0.2

If Equations (1) and (2) are solved, x = 1.5 and y = 1.5. However, if Equations (2) and (3) are solved, x = 1.3 and y = 1.1; and if Equations (1) and (3) are solved, x = 1.6 and y = 1.4. If we consider the right-side terms to be measurements, they have errors, and residual terms must be included for consistency.

Example - Continued

x + y − 3.0 = v1
2x − y − 1.5 = v2
x − y − 0.2 = v3

To find the MPVs for x and y, we use a least squares solution by minimizing the sum of squares of the residuals.

Example - Continued

To minimize, we take partial derivatives with respect to each of the variables and set them equal to zero:

∂(Σv²)/∂x = 2(x + y − 3.0) + 2(2)(2x − y − 1.5) + 2(x − y − 0.2) = 0
∂(Σv²)/∂y = 2(x + y − 3.0) − 2(2x − y − 1.5) − 2(x − y − 0.2) = 0

These equations simplify to the following normal equations:

6x − 2y = 6.2
−2x + 3y = 1.3

Example - Continued

Solving by matrix methods gives x = 1.514 and y = 1.443. We should also compute the residuals:

v1 = 1.514 + 1.443 − 3.0 = −0.043
v2 = 2(1.514) − 1.443 − 1.5 = 0.086
v3 = 1.514 − 1.443 − 0.2 = −0.129

Systematic Formation of Normal Equations

Resultant Equations

Following the derivation in the book, for observation equations of the form a_i x + b_i y = l_i + v_i, the normal equations are:

(Σa²)x + (Σab)y = Σal
(Σab)x + (Σb²)y = Σbl

Example – Systematic Approach

Now let's try the systematic approach to the example:

(1) x + y = 3.0 + v1
(2) 2x − y = 1.5 + v2
(3) x − y = 0.2 + v3

Create a table:

 a    b     l     a²    ab    b²    al     bl
 1    1    3.0    1     1     1     3.0    3.0
 2   -1    1.5    4    -2     1     3.0   -1.5
 1   -1    0.2    1    -1     1     0.2   -0.2
                 Σ=6   Σ=-2   Σ=3   Σ=6.2  Σ=1.3

Note that this yields the same normal equations.

Matrix Method

Matrix form for linear observation equations:

AX = L + V

where A is the m×n matrix of coefficients, X is the n×1 vector of unknowns, L is the m×1 vector of observations, and V is the m×1 vector of residuals. Note: m is the number of observations and n is the number of unknowns. For a redundant solution, m > n.

Least Squares Solution

Applying the condition of minimizing the sum of squared residuals gives the normal equations:

A^T A X = A^T L, or NX = A^T L

The solution is:

X = (A^T A)^-1 A^T L = N^-1 A^T L

and the residuals are computed from:

V = AX − L

Example – Matrix Approach

For the elementary example, the matrices follow directly from the three observation equations:

A = | 1   1 |     X = | x |     L = | 3.0 |
    | 2  -1 |         | y |         | 1.5 |
    | 1  -1 |                       | 0.2 |
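A minimal NumPy sketch of this matrix approach (NumPy is an assumption of this sketch, not something the slides use; the values reproduce the results above):

    import numpy as np

    # Coefficient matrix: one row per observation equation
    A = np.array([[1.0,  1.0],
                  [2.0, -1.0],
                  [1.0, -1.0]])
    # Observations (right-hand sides)
    L = np.array([3.0, 1.5, 0.2])

    # Normal equations: N X = A^T L, with N = A^T A
    N = A.T @ A                       # [[6, -2], [-2, 3]]
    X = np.linalg.solve(N, A.T @ L)   # [1.5143, 1.4429]

    # Residuals: V = A X - L
    V = A @ X - L                     # [-0.043, 0.086, -0.129]
    print(X, V)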

Matrix Form With Weights

Weighted linear observation equations:

WAX = WL + WV

Normal equations:

A^T W A X = NX = A^T W L
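Continuing the sketch above, the weighted case only changes the normal equations; the weight matrix here is purely hypothetical:

    # Hypothetical weights: suppose the second observation is trusted twice as much
    W = np.diag([1.0, 2.0, 1.0])

    # Weighted normal equations: (A^T W A) X = A^T W L
    Xw = np.linalg.solve(A.T @ W @ A, A.T @ W @ L)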

Matrix Form – Nonlinear System

We use a Taylor series approximation, which requires the Jacobian matrix and a set of initial approximations. The observation equations are:

JX = K + V

where J is the Jacobian matrix (partial derivatives), X contains corrections for the approximations, K has observed-minus-computed values, and V has the residuals. The least squares solution is:

X = (J^T J)^-1 J^T K = N^-1 J^T K

Weighted Form – Nonlinear System

The observation equations are:

WJX = WK + WV

The least squares solution is:

X = (J^T W J)^-1 J^T W K = N^-1 J^T W K

Example 10.2

Determine the least squares solution for the following:

F(x,y) = x + y − 2y² = −4
G(x,y) = x² + y² = 8
H(x,y) = 3x² − y² = 7.7

Use x0 = 2 and y0 = 2 as initial approximations.

Example - Continued

Take partial derivatives and form the Jacobian matrix:

J = | ∂F/∂x  ∂F/∂y |   | 1    1 − 4y |
    | ∂G/∂x  ∂G/∂y | = | 2x   2y     |
    | ∂H/∂x  ∂H/∂y |   | 6x  −2y     |

Example - Continued

Form the K matrix (observed minus computed, evaluated at the approximations) and set up the least squares solution. At x0 = 2, y0 = 2:

K = | −4 − F(2,2)  |   |  0   |
    |  8 − G(2,2)  | = |  0   |
    | 7.7 − H(2,2) |   | −0.3 |

Then solve X = (J^T J)^-1 J^T K.

Example - Continued

Add the corrections to get new approximations and repeat:

x0 = 2.00 − 0.02125 = 1.97875
y0 = 2.00 + 0.00458 = 2.00458

Add the new corrections to get better approximations:

x0 = 1.97875 + 0.00168 = 1.98043
y0 = 2.00458 + 0.01004 = 2.01462

Further iterations give negligible corrections, so the final solution is:

x = 1.98
y = 2.01
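A minimal NumPy sketch of this iterative (Gauss–Newton style) procedure; it reproduces the corrections shown above and converges to the same solution:

    import numpy as np

    def F(x, y): return x + y - 2*y**2
    def G(x, y): return x**2 + y**2
    def H(x, y): return 3*x**2 - y**2

    obs = np.array([-4.0, 8.0, 7.7])   # observed values
    x, y = 2.0, 2.0                    # initial approximations

    for _ in range(10):
        # Jacobian of (F, G, H) with respect to (x, y)
        J = np.array([[1.0,  1.0 - 4*y],
                      [2*x,  2*y      ],
                      [6*x, -2*y      ]])
        # K = observed minus computed
        K = obs - np.array([F(x, y), G(x, y), H(x, y)])
        # Corrections from the normal equations: (J^T J) dX = J^T K
        dX = np.linalg.solve(J.T @ J, J.T @ K)
        x, y = x + dX[0], y + dX[1]
        if np.all(np.abs(dX) < 1e-6):
            break

    print(round(x, 2), round(y, 2))    # ~1.98, ~2.01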

Linear Regression

Fitting x,y data points to a straight line:

y = mx + b

Observation Equations

One observation equation is written for each data point:

m·x_i + b = y_i + v_i

In matrix form: AX = L + V

Example 10.3

Fit a straight line to the points in the table. Compute m and b by least squares.

point    x      y
A       3.00   4.50
B       4.25    …
C       5.50    …
D       8.00    …

Example - Continued
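A sketch of the straight-line fit in NumPy. The y-values for points B, C, and D are missing from the table above, so the values used here are hypothetical placeholders; only the x-column and A's y-value come from the example:

    import numpy as np

    # x from the table; y-values for B, C, D are hypothetical placeholders
    x = np.array([3.00, 4.25, 5.50, 8.00])
    y = np.array([4.50, 4.25, 5.50, 5.50])   # only 4.50 comes from the table

    # Observation equations: m*x_i + b = y_i + v_i  ->  A X = L + V
    A = np.column_stack([x, np.ones_like(x)])
    X, *_ = np.linalg.lstsq(A, y, rcond=None)
    m, b = X
    V = A @ X - y                             # residuals
    print(m, b, V)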

Standard Deviation of Unit Weight

S0 = √( Σv² / (m − n) ) = √( V^T V / (m − n) )

where m is the number of observations and n is the number of unknowns (m − n is the redundancy). Question: What about the x-values? Are they observations? (In simple linear regression they are conventionally treated as error-free constants, so only the y-values carry residuals.)

Fitting a Parabola to a Set of Points

Equation: Ax² + Bx + C = y

This is still a linear problem in terms of the unknowns A, B, and C. More than 3 points are needed for a redundant solution.

Example - Parabola

Parabola Fit Solution - 1

Set up the matrices for the observation equations.

Parabola Fit Solution - 2

Solve by the unweighted least squares solution and compute the residuals.
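A sketch of the parabola fit in NumPy, with hypothetical data points (the slide's actual values were not recoverable), including the standard deviation of unit weight from the earlier slide:

    import numpy as np

    # Hypothetical data points, assumed for illustration only
    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([1.1, 2.9, 9.2, 19.1, 32.8])

    # Observation equations: A*x_i^2 + B*x_i + C = y_i + v_i
    M = np.column_stack([x**2, x, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(M, y, rcond=None)

    V = M @ coeffs - y                        # residuals
    m_obs, n_unk = M.shape                    # 5 observations, 3 unknowns
    S0 = np.sqrt(V @ V / (m_obs - n_unk))     # standard deviation of unit weight
    print(coeffs, S0)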

Condition Equations

Establish all independent, redundant conditions. The residual terms are treated as unknowns in the problem. The method is suitable for "simple" problems where there is only one condition (e.g., the interior angles of a polygon, or horizon closure).

Condition Equation Example

Condition Example - Continued

Condition Example - Continued

Condition Example - Continued

Note that the angle with the smallest standard deviation has the smallest residual, and the angle with the largest standard deviation has the largest residual.

Example Using Observation Equations

Observation Example - Continued

Observation Example - Continued

Note that the answer is the same as that obtained with condition equations.

Simple Method for Angular Closure

Given a set of angles with associated variances and a misclosure C, residuals can be computed by distributing the misclosure in proportion to the variances:

v_i = −C · σ_i² / Σσ²

(with the sign chosen so that the adjusted angles remove the misclosure).

Angular Closure – Simple Method
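A sketch of the simple method in NumPy, using hypothetical angles and standard deviations; it also illustrates the earlier note that the angle with the smallest standard deviation receives the smallest residual:

    import numpy as np

    # Hypothetical: three measured angles of a plane triangle (degrees)
    angles = np.array([62.0011, 58.0008, 60.0005])
    sigmas = np.array([2.0, 3.0, 1.0])        # assumed standard deviations

    C = angles.sum() - 180.0                  # misclosure
    # Residuals proportional to the variances, removing the misclosure
    v = -C * sigmas**2 / np.sum(sigmas**2)
    adjusted = angles + v
    print(adjusted, adjusted.sum())           # sums to exactly 180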