GG 313 Geological Data Analysis #18
Orthogonal Regression: Major Axis and RMA Regression
On the Kilo Moana at sea, October 25, 2005

Homework is due. Any problems? Is the fit meaningful?

Curve fitting: The recipe for deriving each of the curve-fitting algorithms is the same (a worked sketch follows this list):
1) Decide what constitutes an "error".
2) Decide what kind of curve, and which of its parameters, will be fit.
3) Take the first derivative of the error function with respect to each unknown.
4) Solve the resulting simultaneous equations for the unknowns where the derivatives (slopes) equal zero.
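To make the recipe concrete, here is a minimal Python sketch (the function name `ols_fit` and the use of NumPy are illustrative choices, not part of the lecture) that applies the four steps to ordinary least squares, where the "error" is the vertical misfit:

```python
import numpy as np

def ols_fit(X, Y):
    """Ordinary least squares for y = a + b*x, following the recipe above.

    Step 1: the error is E = sum_i (Y_i - a - b*X_i)^2 (vertical misfit).
    Step 2: the curve is a straight line with unknowns a and b.
    Steps 3-4: setting dE/da = 0 and dE/db = 0 gives two linear
    "normal equations", solved here for a and b.
    """
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    n = len(X)
    # Normal equations: [[n, Sx], [Sx, Sxx]] [a, b]^T = [Sy, Sxy]^T
    A = np.array([[n, X.sum()], [X.sum(), (X * X).sum()]])
    rhs = np.array([Y.sum(), (X * Y).sum()])
    a, b = np.linalg.solve(A, rhs)
    return a, b
```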

Major Axis: In this case of orthogonal regression, the uncertainty is assumed to be equal in both x and y, and the best-fit line minimizes the perpendicular distance of each observation from the line:

We will minimize the sum of the squares of the perpendicular distances from each point to the line, i.e. the distance from each observation $(X_i, Y_i)$ to the nearest point $(x_i, y_i)$ on the line:

$$E = \sum_{i=1}^{n} \left[ (X_i - x_i)^2 + (Y_i - y_i)^2 \right] \qquad (4.13)$$

The upper-case letters are the observations and the lower-case letters are the coordinates of points on our best-fit line. The line is defined by

$$y_i = a + b x_i \qquad (4.14)$$

or

$$a + b x_i - y_i = 0. \qquad (4.15)$$

The problem is to minimize the error E, but the minimization must respect the constraint that each point $(x_i, y_i)$ lies on the line, so we need help. Such help is provided by the method of Lagrange multipliers, in which we form a new function F by adding to the original error function (4.13) the constraint equations (4.15), each scaled by an unknown constant $\lambda_i$:

$$F = \sum_{i=1}^{n} \left[ (X_i - x_i)^2 + (Y_i - y_i)^2 \right] + \sum_{i=1}^{n} \lambda_i (a + b x_i - y_i) \qquad (4.16)$$

We now have a function we can differentiate freely, and we require all of its partial derivatives to vanish:

$$\frac{\partial F}{\partial x_i} = \frac{\partial F}{\partial y_i} = \frac{\partial F}{\partial a} = \frac{\partial F}{\partial b} = 0 \qquad (4.17)$$

Looking at each partial derivative in turn:

$$\frac{\partial F}{\partial x_i} = -2(X_i - x_i) + \lambda_i b = 0 \qquad (4.18)$$
$$\frac{\partial F}{\partial y_i} = -2(Y_i - y_i) - \lambda_i = 0 \qquad (4.19)$$
$$\frac{\partial F}{\partial a} = \sum_{i=1}^{n} \lambda_i = 0 \qquad (4.20)$$
$$\frac{\partial F}{\partial b} = \sum_{i=1}^{n} \lambda_i x_i = 0 \qquad (4.21)$$

Each value of i in (4.18) and (4.19) represents a different equation; solving them for $x_i$ and $y_i$ gives

$$x_i = X_i - \frac{\lambda_i b}{2} \qquad (4.22)$$
$$y_i = Y_i + \frac{\lambda_i}{2} \qquad (4.23)$$
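The differentiation in (4.18) and (4.19) is easy to double-check symbolically; this short SymPy snippet (an illustration, not part of the lecture) differentiates one term of F from (4.16):

```python
import sympy as sp

# One data point: observation (X, Y), line point (x, y),
# line parameters a, b, and the Lagrange multiplier lam.
X, Y, x, y, a, b, lam = sp.symbols('X Y x y a b lambda')

# One term of the Lagrange function F, eq. (4.16).
F = (X - x)**2 + (Y - y)**2 + lam * (a + b * x - y)

# Matches -2*(X - x) + b*lambda, eq. (4.18):
print(sp.expand(sp.diff(F, x)))
# Matches -2*(Y - y) - lambda, eq. (4.19):
print(sp.expand(sp.diff(F, y)))
```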

Now we go back to the line equation (4.14), plug in $x_i$ and $y_i$ from (4.22) and (4.23), and solve for $\lambda_i$:

$$a + b\left(X_i - \frac{\lambda_i b}{2}\right) = Y_i + \frac{\lambda_i}{2} \quad\Rightarrow\quad \lambda_i = \frac{2(a + b X_i - Y_i)}{1 + b^2} \qquad (4.25)$$

This gives us n + 2 equations in n + 2 unknowns (the n $\lambda_i$'s plus a and b). Substituting $\lambda_i$ from (4.25) into (4.20) gives, after the common factor $2/(1 + b^2)$ cancels,

$$\sum_{i=1}^{n} (a + b X_i - Y_i) = 0 \qquad (4.26)$$

while (4.21), with $x_i$ from (4.22), becomes

$$\sum_{i=1}^{n} \lambda_i X_i - \frac{b}{2} \sum_{i=1}^{n} \lambda_i^2 = 0 \qquad (4.27)$$

which, with $\lambda_i$ from (4.25), gives

$$\sum_{i=1}^{n} (a + b X_i - Y_i) X_i - \frac{b}{1 + b^2} \sum_{i=1}^{n} (a + b X_i - Y_i)^2 = 0 \qquad (4.28)$$

From (4.26) we see that

$$n a + b \sum X_i - \sum Y_i = 0 \qquad (4.29)$$

so that

$$a = \bar{Y} - b \bar{X}, \qquad (4.30)$$

i.e. the best-fit line passes through the mean point $(\bar{X}, \bar{Y})$. After some algebra, letting $u_i = X_i - \bar{X}$ and $v_i = Y_i - \bar{Y}$, and noting that $\sum u_i = \sum v_i = 0$, we finally solve for the slope:

$$b = \frac{\left(\sum v_i^2 - \sum u_i^2\right) \pm \sqrt{\left(\sum v_i^2 - \sum u_i^2\right)^2 + 4\left(\sum u_i v_i\right)^2}}{2 \sum u_i v_i} \qquad (4.32)$$
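Here is a minimal numerical sketch of (4.29)-(4.32) in Python (the function name, the NumPy usage, and the root-selection step are illustrative assumptions, not from the lecture). Equation (4.32) yields two roots; the sketch keeps the one that actually minimizes the perpendicular misfit, as discussed next:

```python
import numpy as np

def major_axis_fit(X, Y):
    """Major-axis (orthogonal) regression following eqs. (4.29)-(4.32).

    Returns (a, b) for the line y = a + b*x minimizing the sum of
    squared perpendicular distances. Assumes sum(u*v) != 0.
    """
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    u, v = X - X.mean(), Y - Y.mean()                  # deviations from the means
    Suu, Svv, Suv = (u * u).sum(), (v * v).sum(), (u * v).sum()
    disc = np.sqrt((Svv - Suu) ** 2 + 4.0 * Suv ** 2)  # discriminant in eq. (4.32)
    roots = [((Svv - Suu) + disc) / (2.0 * Suv),
             ((Svv - Suu) - disc) / (2.0 * Suv)]
    # Perpendicular misfit of a line with slope s through the means:
    # E(s) = sum((v - s*u)^2) / (1 + s^2); keep the root minimizing it.
    b = min(roots, key=lambda s: np.sum((v - s * u) ** 2) / (1.0 + s * s))
    a = Y.mean() - b * X.mean()                        # eq. (4.30)
    return a, b
```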

This gives us two solutions for b that are orthogonal to each other: one is the slope of the major axis (the best-fit line, with the smaller E) and the other is the minor axis, perpendicular to it.
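That the two roots really are perpendicular follows in one line (a quick check, not from the lecture slides): rewriting (4.32) as a quadratic in b, the product of its roots is, by Vieta's formulas,

$$\left(\sum u_i v_i\right) b^2 + \left(\sum u_i^2 - \sum v_i^2\right) b - \sum u_i v_i = 0 \quad\Rightarrow\quad b_1 b_2 = \frac{-\sum u_i v_i}{\sum u_i v_i} = -1,$$

and two lines whose slopes multiply to -1 are perpendicular.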

Reduced Major Axis (RMA) Regression:

In RMA regression we minimize the sum of the areas of the rectangles formed by the data points and the nearest points on the regression line:

$$E = \sum_{i=1}^{n} (X_i - x_i)(Y_i - y_i) \qquad (4.33)$$

The constraint is still a straight line, $y_i = a + b x_i$, and the Lagrange-multiplier method leads to equations analogous to (4.16) and (4.17), giving:

$$\frac{\partial F}{\partial x_i} = -(Y_i - y_i) + \lambda_i b = 0 \qquad (4.34)$$
$$\frac{\partial F}{\partial y_i} = -(X_i - x_i) - \lambda_i = 0 \qquad (4.35)$$
$$\frac{\partial F}{\partial a} = \sum_{i=1}^{n} \lambda_i = 0 \qquad (4.36)$$
$$\frac{\partial F}{\partial b} = \sum_{i=1}^{n} \lambda_i x_i = 0 \qquad (4.37)$$

From (4.34) and (4.35) we get

$$y_i = Y_i - \lambda_i b \qquad (4.38)$$
$$x_i = X_i + \lambda_i \qquad (4.39)$$

Substituting these into the equation for the line, $y_i = a + b x_i$,

$$a + b (X_i + \lambda_i) = Y_i - \lambda_i b \qquad (4.40)$$

so that

$$\lambda_i = \frac{Y_i - a - b X_i}{2b} \qquad (4.41)$$

Since $\sum \lambda_i = 0$ by (4.36), we again get (4.30):

$$a = \bar{Y} - b \bar{X} \qquad (4.42)$$

After plugging into (4.37), and more algebra, we get

$$b = \pm \sqrt{\frac{\sum v_i^2}{\sum u_i^2}} = \pm \frac{s_Y}{s_X} \qquad (4.43)$$

where the sign of b is taken from the sign of the correlation (the sign of $\sum u_i v_i$). Thus the Reduced Major Axis regression line is particularly simple to calculate, requiring only the means and standard deviations of the $X_i$ and $Y_i$.
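As a companion to the major-axis sketch above, here is a minimal Python version of (4.43) (the function name and the sign handling are illustrative assumptions, not from the lecture):

```python
import numpy as np

def rma_fit(X, Y):
    """Reduced Major Axis regression, following eq. (4.43).

    The slope is the ratio of the standard deviations of Y and X,
    with its sign taken from the correlation between X and Y.
    """
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    u, v = X - X.mean(), Y - Y.mean()
    b = np.sign((u * v).sum()) * np.sqrt((v * v).sum() / (u * u).sum())
    a = Y.mean() - b * X.mean()   # the line passes through the means, eq. (4.42)
    return a, b

# Example: fit noisy synthetic data.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 10.0, 50)
    y = 2.0 + 0.5 * x + rng.normal(0.0, 0.5, x.size)
    print("RMA:", rma_fit(x, y))
```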