6.1 Introduction to Chi-Square Space


- the basic idea of searching chi-square space
- an example with two parameters and no error, used to demonstrate salient features of chi-square space
- some comments about Mathcad contour graphs
- a look at two-parameter chi-square space in the presence of random error
- changes in chi-square space as random errors increase

The Basic Idea

What happens if you have a non-linear equation that cannot be modified into, or approximated by, a linear form? As long as y has normally distributed error, a least-squares solution is still possible by empirically finding the equation coefficients that yield a minimum chi-square. The empirical procedures use repeated guesses that home in on the least-squares solution. The guessing procedure becomes logistically difficult as the number of parameters (the dimensionality of the problem) increases. Graphs of chi-square as a function of the coefficients are called "chi-square space."
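As a minimal sketch (in Python, not part of the original slides), the quantity being minimized can be written as a function of the trial coefficients for an arbitrary model y = f(x; a), assuming normally distributed errors in y with standard deviation sigma:

```python
import numpy as np

def chi_square(params, model, x, y, sigma):
    """Sum of squared, sigma-weighted residuals for one set of trial coefficients."""
    residuals = (y - model(x, *params)) / sigma
    return np.sum(residuals ** 2)
```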

Two Parameters, No Error (1)

This example finds the least-squares coefficients that minimize chi-square for a non-linear function describing a peak. The true coefficient values were a0 = 50.69167 and a1 = 2.13494, and with no noise, sy = 0. The following data were collected across the peak:

(45, 0.005) (46, 0.017) (47, 0.042) (48, 0.084) (49, 0.137) (50, 0.177)
(51, 0.185) (52, 0.155) (53, 0.104) (54, 0.056) (55, 0.024)
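The peak equation itself appears only as a graphic in the original slides. The tabulated values are consistent with a normalized Gaussian with center a0 and width a1, so that form is assumed in this sketch and the ones that follow:

```python
import numpy as np

def peak(x, a0, a1):
    """Assumed peak shape: normalized Gaussian with center a0 and width a1."""
    return np.exp(-(x - a0) ** 2 / (2.0 * a1 ** 2)) / (a1 * np.sqrt(2.0 * np.pi))

x = np.arange(45, 56)                       # 45, 46, ..., 55
y = np.array([0.005, 0.017, 0.042, 0.084, 0.137, 0.177,
              0.185, 0.155, 0.104, 0.056, 0.024])

# Reproduces the tabulated values to three decimals at the true coefficients.
print(np.round(peak(x, 50.69167, 2.13494), 3))
```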

Two Parameters, No Error (2)

A chi-square map is a matrix of chi-square values plotted as a contour graph. The coefficients are varied over a range of values that is expected to include the minimum. In this case:

50.00 ≤ a0 ≤ 52.00, Δa0 = 0.02
1.00 ≤ a1 ≤ 3.00, Δa1 = 0.02
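A sketch of building this map as a matrix over the two coefficient ranges, using the assumed Gaussian peak and the data from the previous slide (because sy = 0 here, the unweighted sum of squared residuals is used):

```python
import numpy as np

def peak(x, a0, a1):
    return np.exp(-(x - a0) ** 2 / (2.0 * a1 ** 2)) / (a1 * np.sqrt(2.0 * np.pi))

x = np.arange(45, 56)
y = np.array([0.005, 0.017, 0.042, 0.084, 0.137, 0.177,
              0.185, 0.155, 0.104, 0.056, 0.024])

a0_grid = np.linspace(50.00, 52.00, 101)    # trial centers, step 0.02
a1_grid = np.linspace(1.00, 3.00, 101)      # trial widths,  step 0.02

chisq = np.zeros((a0_grid.size, a1_grid.size))
for i, a0 in enumerate(a0_grid):
    for j, a1 in enumerate(a1_grid):
        chisq[i, j] = np.sum((y - peak(x, a0, a1)) ** 2)   # sy = 0, so unweighted

i_min, j_min = np.unravel_index(np.argmin(chisq), chisq.shape)
print(a0_grid[i_min], a1_grid[j_min])       # grid point with the smallest chi-square
```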

Mathcad Contour Graphs

The data matrix and its contour graph have their rows and columns rotated by 90°. The default minimum color is violet, which often makes the minimum difficult to locate, so I plot the negative of the matrix to make red the minimum. A direct plot of the matrix has very little resolution near the minimum, so I plot the natural log of the values to enhance small χ² differences.
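A sketch of the same plotting ideas using matplotlib in place of Mathcad: the natural log enhances small chi-square differences near the minimum, and a reversed colormap ("jet_r") plays the role of plotting the negative matrix, putting red at the minimum. The Gaussian form of the peak is assumed as before.

```python
import numpy as np
import matplotlib.pyplot as plt

def peak(x, a0, a1):
    return np.exp(-(x - a0) ** 2 / (2.0 * a1 ** 2)) / (a1 * np.sqrt(2.0 * np.pi))

x = np.arange(45, 56)
y = np.array([0.005, 0.017, 0.042, 0.084, 0.137, 0.177,
              0.185, 0.155, 0.104, 0.056, 0.024])
a0_grid = np.linspace(50.00, 52.00, 101)
a1_grid = np.linspace(1.00, 3.00, 101)
A0, A1 = np.meshgrid(a0_grid, a1_grid, indexing="ij")
chisq = ((y[:, None, None] - peak(x[:, None, None], A0, A1)) ** 2).sum(axis=0)

# Plot ln(chi-square); the reversed colormap places the warm colors at the minimum.
cs = plt.contour(a1_grid, a0_grid, np.log(chisq), levels=30, cmap="jet_r")
plt.xlabel("a1 (width)")
plt.ylabel("a0 (center)")
plt.colorbar(cs, label="ln(chi-square)")
plt.show()
```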

Two Parameters with Error (1)

Random noise with σ = 0.01 was added to the data on slide 3. Trial #1 had the following values:

(45, 0.001) (46, 0.010) (47, 0.037) (48, 0.075) (49, 0.120) (50, 0.178)
(51, 0.184) (52, 0.160) (53, 0.126) (54, 0.064) (55, 0.034)

Three sets of measurements were made. The graphs show the three sets of data along with fitted curves computed from the coefficients that gave a minimum in chi-square space.
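A sketch of one such noisy trial under the assumed Gaussian model: normal noise with σ = 0.01 is added to the noiseless values and the grid minimum is located again (the seed is arbitrary, so the exact noisy values from the slide are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(1)              # arbitrary seed, for reproducibility

def peak(x, a0, a1):
    return np.exp(-(x - a0) ** 2 / (2.0 * a1 ** 2)) / (a1 * np.sqrt(2.0 * np.pi))

x = np.arange(45, 56)
sigma = 0.01
y_noisy = peak(x, 50.69167, 2.13494) + rng.normal(0.0, sigma, x.size)

a0_grid = np.linspace(50.00, 52.00, 101)
a1_grid = np.linspace(1.00, 3.00, 101)
A0, A1 = np.meshgrid(a0_grid, a1_grid, indexing="ij")
chisq = (((y_noisy[:, None, None] - peak(x[:, None, None], A0, A1)) / sigma) ** 2).sum(axis=0)

i, j = np.unravel_index(np.argmin(chisq), chisq.shape)
print(a0_grid[i], a1_grid[j])               # the minimum shifts away from the true values
```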

Two Parameters with Error (2)

The contour graphs show how the "details" of chi-square space change when error randomly moves the minimum:

- As with linear least-squares, the minimum does not correspond to the true value (the true value is marked +).
- The contours are elliptically shaped, indicating that a change in a0 affects chi-square more than an equal-sized change in a1 (a0 will be estimated more precisely).
- The non-concentric ellipses indicate that a0 − Δa0 affects the fit more than a0 + Δa0.
- Elliptical contours oriented at an angle to the parameter axes indicate that the two parameters have a non-zero covariance.

Two Parameters with Error (3)

As the noise level increases, the graph flattens and the minimum can be found farther from the true value. In the contour plot at the right, the increased area of each contour is indicative of flattening. The graph below it shows why chi-square space flattens near the minimum: the black line at the left is the value of chi-square without error; to obtain the dashed red lines (chi-square with error), the horizontal blue lines were added to the black lines (before taking the log).
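A sketch of this wandering of the minimum under the assumed Gaussian model: for several noise levels, repeated noisy trials are generated and the grid minimum located; the spread of the fitted center grows with σ, reflecting the flattening of chi-square space.

```python
import numpy as np

rng = np.random.default_rng(2)              # arbitrary seed

def peak(x, a0, a1):
    return np.exp(-(x - a0) ** 2 / (2.0 * a1 ** 2)) / (a1 * np.sqrt(2.0 * np.pi))

x = np.arange(45, 56)
y_true = peak(x, 50.69167, 2.13494)
a0_grid = np.linspace(50.00, 52.00, 101)
a1_grid = np.linspace(1.00, 3.00, 101)
A0, A1 = np.meshgrid(a0_grid, a1_grid, indexing="ij")
model = peak(x[:, None, None], A0, A1)      # model values at every grid point

for sigma in (0.005, 0.01, 0.02):
    centers = []
    for _ in range(200):
        y = y_true + rng.normal(0.0, sigma, x.size)
        chisq = ((y[:, None, None] - model) ** 2).sum(axis=0)
        i, j = np.unravel_index(np.argmin(chisq), chisq.shape)
        centers.append(a0_grid[i])
    print(sigma, np.std(centers))           # spread of the fitted center grows with sigma
```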