Least squares method
Let the adjustable parameters for the structure refinement be $u_j$. Then if

$$R = \sum_{hkl} w(hkl)\,\bigl(|F_{\mathrm{obs}}| - |F_{\mathrm{calc}}|\bigr)^2 = \sum_{hkl} w\,\Delta^2 ,$$

we must have $\partial R/\partial u_i = 0$, one equation per parameter. This gives

$$\sum_{hkl} w\,\Delta\,\frac{\partial |F_{\mathrm{calc}}|}{\partial u_i} = 0 .$$
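Spelled out (this intermediate step is implied but not shown on the slide), differentiating $R$ and using $\partial \Delta/\partial u_i = -\,\partial |F_{\mathrm{calc}}|/\partial u_i$:

$$\frac{\partial R}{\partial u_i} = \sum_{hkl} 2\,w\,\Delta\,\frac{\partial \Delta}{\partial u_i} = -2 \sum_{hkl} w\,\Delta\,\frac{\partial |F_{\mathrm{calc}}|}{\partial u_i} = 0 .$$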

Least squares
Simple example, again. To solve the simultaneous linear equations

$$a_{11}x_1 + a_{12}x_2 + \dots = y_1$$
$$a_{21}x_1 + a_{22}x_2 + \dots = y_2$$
$$\dots$$

collect the coefficients $a_{ij}$ into a matrix $A$, the unknowns $x_j$ into a column vector $x$, and the right-hand sides $y_i$ into a column vector $y$. Then the simultaneous equations are given by $A\,x = y$.
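As a concrete illustration (the numbers are invented for this sketch), a square, non-singular system $A\,x = y$ can be solved directly:

```python
import numpy as np

# Example coefficients, made up for illustration: two equations, two unknowns.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
y = np.array([5.0, 10.0])

# Exact solution of A x = y (possible because A is square and non-singular).
x = np.linalg.solve(A, y)
print(x)  # [1. 3.]
```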

Least squares
Now suppose the equations hold only approximately:

$$a_{11}x_1 + a_{12}x_2 + \dots \approx y_1$$
$$a_{21}x_1 + a_{22}x_2 + \dots \approx y_2$$

so that

$$a_{11}x_1 + a_{12}x_2 + \dots - y_1 = e_1$$
$$a_{21}x_1 + a_{22}x_2 + \dots - y_2 = e_2 .$$

There is no exact solution as before, but we can get the best solution by minimizing $\sum_i e_i^2$. Note also that the number of observations must exceed the number of variable parameters ($n > m$).

Least squares
Minimize:

$$\sum_{i=1}^{n} e_i^2 = \sum_{i=1}^{n} \Bigl(\sum_{j=1}^{m} a_{ij} x_j - y_i\Bigr)^{2} .$$
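Equivalently, in the matrix notation introduced above (this restatement is added here for reference):

$$\sum_{i} e_i^2 = (A\,x - y)^{\mathsf T} (A\,x - y) = \lVert A\,x - y \rVert^2 .$$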

Least squares
To illustrate the calculation, let $n = m = 2$:

$$(a_{11}x_1 + a_{12}x_2 - y_1)^2 = e_1^2$$
$$(a_{21}x_1 + a_{22}x_2 - y_2)^2 = e_2^2$$

Take the partial derivative of $e_1^2 + e_2^2$ with respect to $x_1$ and set it to zero. Dropping the common factor of 2, the two terms in the derivative are

$$(a_{11}x_1 + a_{12}x_2 - y_1)\,a_{11} \qquad \text{and} \qquad (a_{21}x_1 + a_{22}x_2 - y_2)\,a_{21} ,$$

and their sum must vanish. Collecting terms:

$$(a_{11}a_{11} + a_{21}a_{21})\,x_1 + (a_{11}a_{12} + a_{21}a_{22})\,x_2 = a_{11}y_1 + a_{21}y_2 .$$

Least squares
In summation form (the derivative with respect to $x_2$ gives a second equation of the same type):

$$x_1 \sum_{i=1}^{2} a_{i1}^2 + x_2 \sum_{i=1}^{2} a_{i1} a_{i2} = \sum_{i=1}^{2} a_{i1} y_i .$$

Now consider the product $A^{\mathsf T} A$:

$$A^{\mathsf T} A = \begin{pmatrix} \sum_i a_{i1}^2 & \sum_i a_{i1} a_{i2} \\ \sum_i a_{i2} a_{i1} & \sum_i a_{i2}^2 \end{pmatrix} ,$$

so the normal equations together can be written

$$(A^{\mathsf T} A)\,x = A^{\mathsf T} y .$$

Least squares
In general, for $n$ observational equations in $m$ unknowns ($n > m$), the same argument gives the normal equations

$$(A^{\mathsf T} A)\,x = A^{\mathsf T} y ,$$

and therefore

$$x = (A^{\mathsf T} A)^{-1} A^{\mathsf T} y .$$
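A short numerical check of the normal-equation solution (the line-fit data below are invented for this sketch):

```python
import numpy as np

# Fit y = x1 + x2*t to a handful of observations (n > m).
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# Design matrix A: one row per observation, one column per unknown.
A = np.column_stack([np.ones_like(t), t])

# Normal equations: (A^T A) x = A^T y
x_normal = np.linalg.solve(A.T @ A, A.T @ y)

# Cross-check against the library least-squares routine.
x_lstsq, *_ = np.linalg.lstsq(A, y, rcond=None)

print(x_normal)  # intercept and slope
print(x_lstsq)   # agrees to rounding
```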

Least squares
Again, suppose now that the observational equations are nonlinear, i.e. the functions $f_i$ are not linear in the $x_j$:

$$f_i(x_1, x_2, \dots, x_m) \approx y_i , \qquad i = 1, \dots, n .$$

Expand the $f_i$ in a Taylor series about a set of trial values $x_j^0$, keeping only the first-order terms.
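Written out (the expansion is implied by the slides but not reproduced in the transcript), the first-order Taylor expansion about the trial values $x^0$ is

$$f_i(x) \approx f_i(x^0) + \sum_{j=1}^{m} \left.\frac{\partial f_i}{\partial x_j}\right|_{x^0} \Delta x_j , \qquad \Delta x_j = x_j - x_j^0 ,$$

so the observational equations become linear in the shifts:

$$\sum_{j=1}^{m} a_{ij}\,\Delta x_j \approx y_i - f_i(x^0) , \qquad a_{ij} = \left.\frac{\partial f_i}{\partial x_j}\right|_{x^0} .$$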

Least squares
Solve, as before: the linearized equations $A\,\Delta x \approx \Delta y$, with $\Delta y_i = y_i - f_i(x^0)$ and $a_{ij} = \partial f_i/\partial x_j$ evaluated at $x^0$, are handled exactly like the linear case, giving

$$(A^{\mathsf T} A)\,\Delta x = A^{\mathsf T}\,\Delta y$$

for the shifts $\Delta x_j$.

Least squares
Weighting factors: collect the weights into a diagonal matrix $W$ with $W_{ii} = w_i$, so that the quantity minimized is $\sum_i w_i e_i^2$ and the normal equations become

$$(A^{\mathsf T} W A)\,\Delta x = A^{\mathsf T} W\,\Delta y .$$
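A minimal weighted-fit sketch (reusing the invented line-fit data from above; the weights are also made up, e.g. $1/\sigma_i^2$):

```python
import numpy as np

# Same made-up observations as in the unweighted example.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])
w = np.array([1.0, 4.0, 1.0, 0.25, 1.0])  # individual weights, e.g. 1/sigma^2

A = np.column_stack([np.ones_like(t), t])
W = np.diag(w)

# Weighted normal equations: (A^T W A) x = A^T W y
x = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
print(x)
```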

Least squares
So: a set of initial parameters $x_j^0$ is needed, and the solution gives the shifts $\Delta x_j$, not the $x_j$ themselves.
The linearized equations are not exact, so the refinement takes a number of cycles to complete: add the shifts $\Delta x_j$ to the $x_j^0$ and repeat for each new refinement cycle.
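A compact sketch of such a refinement loop (the model function, data, and convergence test here are invented for illustration; a real structure refinement would use $|F_{\mathrm{calc}}|$ and its analytic derivatives):

```python
import numpy as np

def model(x, t):
    # Hypothetical nonlinear model f_i(x) = x1 * exp(-x2 * t_i), for illustration only.
    return x[0] * np.exp(-x[1] * t)

def jacobian(x, t):
    # Analytic derivatives of the model with respect to each parameter.
    df_dx1 = np.exp(-x[1] * t)
    df_dx2 = -x[0] * t * np.exp(-x[1] * t)
    return np.column_stack([df_dx1, df_dx2])

# Made-up observations and unit weights.
t = np.array([0.0, 0.5, 1.0, 2.0, 3.0])
y_obs = np.array([2.00, 1.56, 1.21, 0.74, 0.45])
W = np.eye(len(t))

x = np.array([1.0, 1.0])                # initial parameters x_j^0
for cycle in range(20):                 # refinement cycles
    dy = y_obs - model(x, t)            # residuals (Delta y)
    A = jacobian(x, t)                  # derivatives evaluated at the current parameters
    shifts = np.linalg.solve(A.T @ W @ A, A.T @ W @ dy)
    x = x + shifts                      # apply the shifts, then start the next cycle
    if np.max(np.abs(shifts)) < 1e-10:  # stop when the shifts become negligible
        break

print(x)  # refined parameters, roughly [2.0, 0.5] for these data
```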

Least squares
How good are the final parameters? Use the usual procedure to calculate the standard deviations $s(x_j)$; the estimate involves the number of observations $n$ and the number of parameters $m$.
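The slide's formula is not reproduced in the transcript; the standard estimate used in this context takes the diagonal of the inverse normal matrix scaled by the goodness of fit:

$$s^2(x_j) = \bigl[(A^{\mathsf T} W A)^{-1}\bigr]_{jj}\,\frac{\sum_{hkl} w\,\Delta^2}{n - m} ,$$

with $n$ observations and $m$ refined parameters.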

Least squares
Warning: frequently, all parameters cannot be "let go" at the same time.
How can we tell which parameters can be refined simultaneously? Use the correlation matrix:
Calculate the correlation matrix for each refinement cycle.
Look for strong interactions (roughly, $r_{ij} > +0.5$ or $r_{ij} < -0.5$).
If two parameters interact strongly, hold one of them constant.
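A sketch of how such a correlation matrix can be computed (assuming, as above, that the parameter covariance matrix is proportional to $(A^{\mathsf T} W A)^{-1}$; the proportionality factor cancels in $r_{ij}$):

```python
import numpy as np

def correlation_matrix(A, W):
    # r_ij = C_ij / sqrt(C_ii * C_jj), with C proportional to (A^T W A)^{-1}.
    C = np.linalg.inv(A.T @ W @ A)
    d = np.sqrt(np.diag(C))
    return C / np.outer(d, d)

# Reusing the design matrix from the line-fit sketches above, with unit weights.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
A = np.column_stack([np.ones_like(t), t])
W = np.eye(len(t))

r = correlation_matrix(A, W)
print(r)

# Flag strongly interacting parameter pairs (|r_ij| > 0.5 off the diagonal).
off_diag = ~np.eye(r.shape[0], dtype=bool)
print(np.argwhere((np.abs(r) > 0.5) & off_diag))
```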