Introduction to Error Analysis

Atomic Lab: Introduction to Error Analysis

Significant Figures
For any quantity x, the best measurement is written xbest ± δx. In an introductory lab, δx is rounded to 1 significant figure. Example: δx = 0.0235 → δx = 0.02, so g = 9.82 ± 0.02 m/s².
Right and wrong:
Wrong: speed of sound = 332.8 ± 10 m/s
Right: speed of sound = 330 ± 10 m/s
Always keep extra significant figures throughout a calculation and round only at the end; otherwise rounding errors are introduced.

Statistically the Same
Student A = 30 ± 2
Student B = 34 ± 5
Since the uncertainty ranges for A (28 to 32) and B (29 to 39) overlap, these numbers are statistically the same.

Precision: Mathematical Definition
Precision = δx / xbest
Precision of speed of sound = 10/330 ≈ 0.03, or 3%
So often we write: speed of sound = 330 ± 3%

Propagation of Uncertainties: Sums & Differences
Suppose that x, …, w are independent measurements with uncertainties δx, …, δw and you need to calculate q = x + … + z − (u + … + w). If the uncertainties are independent (i.e. δw is not some function of δx, etc.), then
δq = sqrt(δx² + … + δz² + δu² + … + δw²)
Note: δq ≤ δx + … + δz + δu + … + δw
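The quadrature rule above can be sketched in a few lines of Python (quad_sum is a hypothetical helper name, not from the slides):

```python
import math

def quad_sum(*deltas):
    # dq = sqrt(dx**2 + ... + dz**2 + du**2 + ... + dw**2)
    return math.sqrt(sum(d * d for d in deltas))

# Two independent uncertainties of 0.3 and 0.4 combine to 0.5,
# which is less than the simple sum 0.7 -- as the note says.
dq = quad_sum(0.3, 0.4)
```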

Propagation of Uncertainties: Products and Quotients
Suppose that x, …, w are independent measurements with uncertainties δx, …, δw and you need to calculate q = (x × … × z) / (u × … × w). If the uncertainties are independent (i.e. δw is not some function of δx, etc.), then
δq/|q| = sqrt((δx/x)² + … + (δz/z)² + (δu/u)² + … + (δw/w)²)
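The product/quotient rule can be sketched the same way (rel_quad is a hypothetical name; it returns the fractional uncertainty δq/|q|):

```python
import math

def rel_quad(*pairs):
    # Each pair is (value, uncertainty); combine fractional
    # uncertainties in quadrature: dq/|q| = sqrt(sum((dx/x)**2))
    return math.sqrt(sum((d / x) ** 2 for x, d in pairs))

# q = x / u with x = 10.0 +/- 0.1 and u = 5.0 +/- 0.1
frac = rel_quad((10.0, 0.1), (5.0, 0.1))
```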

Functions of 1 Variable
Suppose θ = 20 ± 3 deg and we want to find cos θ. First, 3 deg ≈ 0.05 rad.
|d(cos θ)/dθ| = |−sin θ| = sin θ
δ(cos θ) = sin θ · δθ = sin(20°) × 0.05 ≈ 0.02
and cos 20° = 0.94
So cos θ = 0.94 ± 0.02 (note the uncertainty of cos θ is dimensionless, not in radians)
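The cos θ example above can be checked numerically (a minimal sketch, not from the slides):

```python
import math

theta = math.radians(20.0)    # 20 degrees
dtheta = math.radians(3.0)    # 3 degrees, about 0.05 rad

# |d(cos t)/dt| = sin t, so d(cos t) = sin(t) * dt
best = math.cos(theta)
dcos = math.sin(theta) * dtheta
```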

Power Law
Suppose q = xⁿ and x = xbest ± δx. Then
δq/|q| = |n| · δx/|x|
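A one-line sketch of the power-law rule (power_frac is a hypothetical name):

```python
def power_frac(n, x, dx):
    # For q = x**n, the fractional uncertainty is |n| * dx / |x|
    return abs(n) * dx / abs(x)

# x = 2.0 +/- 0.1 and q = x**3: a 5% uncertainty in x
# becomes a 15% uncertainty in q
frac = power_frac(3, 2.0, 0.1)
```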

Types of Errors
Measure the period of a revolution of a wheel. As we repeat the measurement, some readings will be larger and some smaller. These are called "random errors". In this case, they are caused by reaction time.

What if the clock is slow? We would never know our clock is slow unless we compared it to another clock. This is a "systematic error". In some cases, there is not a clear difference between random and systematic errors. Consider parallax: move your head around and it is a random error; keep your head in one place and it is a systematic error.

Mean (or average): x̄ = (x₁ + x₂ + … + x_N)/N = (1/N) Σᵢ xᵢ

Deviation
Define the deviation of each measurement from the mean: dᵢ = xᵢ − x̄. We need an average or "standard" deviation, but the deviations can sum to zero, so we square dᵢ:
s = sqrt( Σᵢ dᵢ² / (N − 1) )
When you divide by N − 1, this is called the sample standard deviation; if you divide by N, it is the population standard deviation.
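Python's statistics module implements both conventions, which makes the N − 1 versus N distinction easy to check (the data values are made up for illustration):

```python
import statistics

data = [9.8, 9.9, 10.1, 10.2]
mean = statistics.fmean(data)
s_sample = statistics.stdev(data)    # divides by N - 1
s_pop = statistics.pstdev(data)      # divides by N
```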

Standard Deviation of the Mean
The uncertainty in the best measurement is given by the standard deviation of the mean (SDOM):
s_mean = s / √N
If xbest = the mean, then s_best = s_mean.
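Continuing the same made-up data set, the SDOM is just the sample standard deviation divided by √N:

```python
import math
import statistics

data = [9.8, 9.9, 10.1, 10.2]
s = statistics.stdev(data)
sdom = s / math.sqrt(len(data))   # uncertainty in the mean
```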

Histograms
A histogram plots the number of times each value occurred (frequency, vertical axis) against the value itself (horizontal axis).

Distribution of a Craps Game
The distribution of dice sums approaches a bell curve, or normal distribution.

Bell Curve
The peak of the bell curve is the centroid, or mean, x̄. 68% of the population lies between x̄ − s and x̄ + s; between x̄ − 2s and x̄ + 2s lies 95% of the population. 2s is usually defined as the error.

Gaussian
G(x) = 1/(σx √(2π)) · exp(−(x − x₀)² / (2σx²))
In the Gaussian, x₀ is the mean and σx is the standard deviation. They are mathematically equivalent to the formulae shown earlier.
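The 68%/95% figures quoted on the bell-curve slide follow from integrating the Gaussian; with the error function this is a two-line check:

```python
import math

def frac_within(k):
    # Fraction of a normal population within k standard deviations
    # of the mean: erf(k / sqrt(2))
    return math.erf(k / math.sqrt(2.0))

one_sigma = frac_within(1)   # about 68%
two_sigma = frac_within(2)   # about 95%
```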

Error and Uncertainty
While definitions vary between scientists, most would agree to the following: the uncertainty of a measurement is the value of one standard deviation (1s); the error of the measurement is the value of two standard deviations (2s).

Full Width at Half Maximum
A special quantity is the full width at half maximum (FWHM). Take half of the maximum value of the distribution (usually at the centroid); the FWHM is the width of the distribution measured between the points to the left and right of the centroid where the frequency falls to this half value. Mathematically, the FWHM is related to the standard deviation by FWHM = 2√(2 ln 2) · σx ≈ 2.355 σx.
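The factor 2.355 comes from solving exp(−w²/(2σ²)) = 1/2 for the half-width; a quick numerical check:

```python
import math

# FWHM = 2 * sqrt(2 * ln 2) * sigma for a Gaussian
factor = 2.0 * math.sqrt(2.0 * math.log(2.0))
```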

Weighted Average
Suppose each measurement has its own uncertainty:
x₁ ± σ₁
x₂ ± σ₂
…
x_N ± σ_N
What is the best value of x?

We need to construct statistical weights. We want measurements with small errors to have the largest influence and those with the largest errors to have very little influence, so let wᵢ = weight = 1/σᵢ². Then
xbest = Σᵢ wᵢxᵢ / Σᵢ wᵢ
This formula can also be used to determine the centroid of a Gaussian, where the weights are the frequency values for each measurement.
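A minimal sketch of the weighted average (weighted_mean is a hypothetical name; the numbers reuse the Student A/B example from earlier):

```python
def weighted_mean(values, sigmas):
    # Weights w_i = 1 / sigma_i**2, best value = sum(w*x) / sum(w)
    weights = [1.0 / s ** 2 for s in sigmas]
    return sum(w * x for w, x in zip(weights, values)) / sum(weights)

# Student A = 30 +/- 2, Student B = 34 +/- 5: the more precise
# measurement (A) dominates the combined result
best = weighted_mean([30.0, 34.0], [2.0, 5.0])
```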

Least Squares Fitting
What if you want to fit a straight line through your data? In other words, yᵢ = A·xᵢ + B. First, you need to calculate residuals:
Residual = Data − Fit, or rᵢ = yᵢ − (A·xᵢ + B)
As the fit approaches the data, the residuals should become very small (or zero).

Big Problem
Some residuals are > 0 and some are < 0. If there is no bias, then rⱼ = −rₖ and rⱼ + rₖ = 0, so residuals can cancel even for a poor fit. The way to correct this is to square rⱼ and rₖ; the sum of squares is then always positive.

Chi-square, χ²
χ² = Σᵢ [yᵢ − (A·xᵢ + B)]² / σᵢ²
We need to minimize this function with respect to A and B, so we take the partial derivatives of χ² with respect to these variables and set the resulting derivatives equal to 0:
∂χ²/∂A = 0 and ∂χ²/∂B = 0

Chi-square, χ²
Setting the derivatives to zero gives the normal equations:
A Σᵢxᵢ² + B Σᵢxᵢ = Σᵢxᵢyᵢ
A Σᵢxᵢ + B·N = Σᵢyᵢ

Using Determinants
Solving the normal equations with determinants (Cramer's rule):
Δ = N Σxᵢ² − (Σxᵢ)²
A = (N Σxᵢyᵢ − Σxᵢ Σyᵢ) / Δ
B = (Σxᵢ² Σyᵢ − Σxᵢ Σxᵢyᵢ) / Δ

A Pseudocode
Dim x(100), y(100)
N = 100
xsum = 0 : ysum = 0 : xysum = 0 : x2sum = 0
For i = 1 To N
  xsum = xsum + x(i)
  ysum = ysum + y(i)
  xysum = xysum + x(i)*y(i)
  x2sum = x2sum + x(i)*x(i)
Next i
Delta = N*x2sum - xsum*xsum
A = (N*xysum - xsum*ysum)/Delta
B = (x2sum*ysum - xsum*xysum)/Delta
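The pseudocode translates directly into runnable Python; this sketch uses a small hard-coded data set lying exactly on y = 2x + 1:

```python
def fit_line(xs, ys):
    # Least-squares straight line y = A*x + B, using the same
    # determinant formulas as the pseudocode
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    delta = n * sxx - sx * sx
    a = (n * sxy - sx * sy) / delta
    b = (sxx * sy - sx * sxy) / delta
    return a, b

a, b = fit_line([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```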

χ² Values
If the uncertainties are estimated properly, the reduced χ² (χ² per degree of freedom) starts at large values and approaches 1 as the fit improves. This is because each residual should approach the size of the corresponding uncertainty. Your best fit is given by the values of A and B that yield the lowest χ². What if χ² is less than 1?! Then the fit is over-determined: the uncertainties are probably overestimated, or the model has too many free parameters for the data. In that case, vary A and B and check that the χ² doesn't change much near the minimum.

Without Proof
The uncertainties in the fitted parameters are:
σ_A² = N σ_y² / Δ
σ_B² = σ_y² Σxᵢ² / Δ
where σ_y² = (1/(N − 2)) Σᵢ [yᵢ − (A·xᵢ + B)]²

Extending the Method
Obviously, this can be expanded to larger polynomials, i.e. y = A + Bx + Cx² + …; it becomes a matrix-inversion problem. Exponential functions, y = C·e^(Bx), can be linearized by taking the logarithm, ln y = ln C + Bx, and solved as a straight line.
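A sketch of the linearize-then-fit trick for an exponential, using synthetic data generated from known parameters (C = 2, B = 0.5 are made up for illustration):

```python
import math

# Fit y = C * exp(B*x) by fitting the straight line ln y = ln C + B*x
xs = [0.0, 1.0, 2.0, 3.0]
ys = [2.0 * math.exp(0.5 * x) for x in xs]
lys = [math.log(y) for y in ys]

n = len(xs)
sx, sy = sum(xs), sum(lys)
sxy = sum(x * y for x, y in zip(xs, lys))
sxx = sum(x * x for x in xs)
delta = n * sxx - sx * sx
B = (n * sxy - sx * sy) / delta
C = math.exp((sxx * sy - sx * sxy) / delta)
```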

Extending the Method
Power law: y = A·x^B linearizes as log y = log A + B·log x. A multivariate multiplicative function, q = A·x^B·y^C, linearizes the same way: log q = log A + B·log x + C·log y.

Uglier Functions
For q = f(x, y, z), use a gradient search method. The gradient ∇f is a vector that points in the direction of steepest ascent, so ∇f gives a direction: follow −∇f (steepest descent) until it reaches a minimum.
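A minimal steepest-descent sketch (the function, starting point, and step size are made up for illustration):

```python
def grad_descent(grad, x, y, rate=0.1, steps=200):
    # Follow -grad(f) (steepest descent) toward a minimum of f(x, y)
    for _ in range(steps):
        gx, gy = grad(x, y)
        x -= rate * gx
        y -= rate * gy
    return x, y

# f(x, y) = (x - 1)**2 + (y + 2)**2 has its minimum at (1, -2)
xmin, ymin = grad_descent(lambda x, y: (2 * (x - 1), 2 * (y + 2)), 0.0, 0.0)
```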

Correlation Coefficient, r²
r² starts at 0 and approaches 1 as the fit gets better. r² measures the correlation of x and y, i.e. is y = f(x)? As a rule of thumb, if r² < 0.5 there is little or no correlation.
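For a straight-line fit, r² can be computed as 1 − SS_res/SS_tot (a common definition; r_squared is a hypothetical helper name):

```python
def r_squared(xs, ys, a, b):
    # r^2 = 1 - SS_res / SS_tot for the fit y = a*x + b
    ybar = sum(ys) / len(ys)
    ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - ybar) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

# Data lying exactly on y = 2x + 1 gives a perfect r^2 of 1
r2 = r_squared([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0], 2.0, 1.0)
```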