16722 We: measurements & distributions (presentation transcript)

mean (average)

mean value of all measurements = Σ (each value observed × number of times it is observed) / (total number of observations):

    ⟨k⟩ = Σ_{k=0..m} n(k)·k / Σ_{k=0..m} n(k)

where k is the count observed in one measurement, n(k) is the number of times k is observed, and m is the largest count in all the observations

width – standard deviation

the most common measure of a distribution's width is its "standard deviation": the RMS value (σ) of the deviation of each measurement from the mean of all the measurements:

    σ² = Σ_{k=0..m} n(k)·(k − ⟨k⟩)² / Σ_{k=0..m} n(k)

(for a counting measurement in which each outcome is an integer; for continuous outcomes you need to replace the sums with integrals)

as we discussed, it is observed and provable that the standard deviation (σ) of a counting experiment whose mean is ⟨k⟩ is ⟨k⟩^(1/2)

hmm... should I subtract 1 from the denominator?
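these two formulas translate directly into a few lines of code; here is a minimal sketch in Python (assuming NumPy; the histogram values are invented for illustration) that computes the mean and standard deviation from a count histogram n(k) and checks the σ ≈ ⟨k⟩^(1/2) rule of thumb:

    import numpy as np

    # hypothetical data: n[k] = number of times count k was observed
    n = np.array([2, 8, 15, 21, 19, 12, 5, 2])   # k = 0 .. 7
    k = np.arange(len(n))

    total = n.sum()                              # total number of observations
    mean = (n * k).sum() / total                 # <k>
    var = (n * (k - mean) ** 2).sum() / total    # population form (denominator N)
    # Bessel-corrected alternative: divide by (total - 1) instead
    sigma = np.sqrt(var)

    print(f"mean = {mean:.3f}, sigma = {sigma:.3f}, sqrt(mean) = {np.sqrt(mean):.3f}")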

shape: the Poisson distribution

if the average number of counts in repeated samples is ⟨N⟩ (also written N-overbar) then the probability of a particular experiment giving N counts is

    P(N) = ⟨N⟩^N e^(−⟨N⟩) / N!

remember that any sample N must be an integer, but if ⟨N⟩ happens to be an integer it is just a numerical accident... you should expect ⟨N⟩ to be a "floating point" number
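a minimal sketch in Python of this pmf (standard library only; the mean of 2.1 is an arbitrary illustrative choice, deliberately not an integer):

    from math import exp, factorial

    def poisson_pmf(N, mean):
        """probability of observing exactly N counts when the average is `mean`"""
        return mean ** N * exp(-mean) / factorial(N)

    mean = 2.1   # need not be an integer, and usually isn't
    for N in range(6):
        print(f"P({N}) = {poisson_pmf(N, mean):.4f}")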

Poisson distribution – origin

the probabilities of all possible outcomes must sum to one:

    1 = p(0) + p(1) + p(2) + … + p(i) + …

the key point to remember

it is a pervasive rule-of-thumb that in any measurement that can be traced to a counting process, the uncertainty in any particular count is the square root of the average number of counts observed over all the experiments

for a single counting experiment, your best guess for the uncertainty of the result is the square root of the result

assignment

(21) A set of counting measurements averages ⟨N⟩ counts per sample; what are the probabilities of the next sample yielding 0, 1, 2, 3, 4, and 5 counts?

(22) Think through and describe intuitively the distribution of counting measurement values whose mean is very small, say 0.01; show that your intuition is consistent with what you calculate using the Poisson distribution formula.

distribution of measurements whose outcome can take on any value

there is no such thing... any real measurement is limited by the resolution of the instrument to a finite number of discrete outcomes

    how finely are you willing to read the scale?
    how many digits are there on the display?
    what is the resolution of your computer's ADC?

but we understand that certain measurements in principle take on a continuum of values; our experiments yield average values of that continuum over the resolution bandwidth

Gaussian ("normal") distribution

in practice measured values are constrained to certain discrete values that the instrument (or the person reading it) can assign

here the raw measurements are limited to exact steps and multiples of 0.1 mm (the red squares)

in a set of replicated measurements, each of these "quantized" values appears an integral number of times

in the actual finite-sized set of replicated measurements, statistical fluctuations are observed (the blue diamonds)

Gaussian measurement model

in words: the probability of a measurement being between x − Δx/2 and x + Δx/2 is

    p(x)·Δx = Δx · exp(−(x − x₀)²/2σ²) / (σ·(2π)^(1/2))

note: σ has the same units as x, i.e., millimeters

the distribution has its mean and peak at x = x₀; in this figure x₀ = 1000 mm, σ = 0.5 mm

the Gaussian distribution is often a good model, particularly when the measurement scatter is "natural" vs. "technical"
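a minimal sketch in Python of this model (standard library only; x₀ and σ are the figure's values, while the bin center and width are invented for illustration):

    from math import exp, pi, sqrt

    def gaussian_pdf(x, x0, sigma):
        return exp(-(x - x0) ** 2 / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

    x0, sigma = 1000.0, 0.5   # mm, as in the figure
    x, dx = 1000.3, 0.1       # hypothetical measurement bin center and width, mm
    print(f"probability ≈ {gaussian_pdf(x, x0, sigma) * dx:.4f}")   # pdf × Δx, for small Δx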

"estimating the parameters of the model"

the actual data (the blue diamonds), plus the model (the Gaussian function of mean x₀ and standard deviation σ), plus an algorithm for (in some stated sense) minimizing the difference between them, together allow you to estimate x₀ (which you state as the outcome of your set of measurements) and σ (which you state as your estimate of the uncertainty of your outcome)
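one such algorithm is ordinary least squares; a minimal sketch in Python (assuming NumPy and SciPy, with simulated 0.1 mm-quantized data standing in for the slide's blue diamonds):

    import numpy as np
    from scipy.optimize import curve_fit

    def model(x, x0, sigma, amp):
        return amp * np.exp(-(x - x0) ** 2 / (2 * sigma ** 2))

    # hypothetical replicated measurements, quantized to 0.1 mm steps
    rng = np.random.default_rng(0)
    samples = np.round(rng.normal(1000.0, 0.5, size=2000), 1)
    values, counts = np.unique(samples, return_counts=True)

    (x0, sigma, amp), _ = curve_fit(model, values, counts, p0=[999.0, 1.0, 100.0])
    print(f"x0 ≈ {x0:.3f} mm, sigma ≈ {sigma:.3f} mm")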

compare and contrast Poisson (counting) and Gaussian ("normal") distributions

measurement whose outcome is an integer count (n)

    mean value of outcomes = 12.5 (your choice)
    standard deviation of outcomes = 12.5^(1/2) ≈ 3.54 (you have no choice)
    defined only for integer values of n

[figure: relative probability vs. measurement n]

measurement x whose outcome is a continuum value

    mean value of the continuum = 12.5 (your choice)
    standard deviation of outcomes = 12.5^(1/2) (your choice)

it looks like there is a dimensional error, but these are dimensionless counts, so it is okay

[figure: relative probability vs. measurement x, mean x₀ = 12.5]

compare & contrast

visual examination shows that for ⟨N⟩ greater than about 10 the Poisson distribution is practically indistinguishable from the Gaussian distribution with mean ⟨N⟩ and standard deviation ⟨N⟩^(1/2)

but be careful...

    the Gaussian distribution has two independent parameters, the mean and the standard deviation (width)
    the Poisson distribution has one independent parameter, the mean; its standard deviation (width) is completely determined by its mean
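a minimal sketch in Python that makes the comparison numerically for the slides' mean of 12.5 (standard library only):

    from math import exp, factorial, pi, sqrt

    mean = 12.5
    sigma = sqrt(mean)   # the Poisson width is fixed by the mean
    for n in range(5, 21):
        poisson = mean ** n * exp(-mean) / factorial(n)
        gauss = exp(-(n - mean) ** 2 / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))
        print(f"n = {n:2d}   Poisson = {poisson:.4f}   Gaussian = {gauss:.4f}")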

measurements with multiple sources of uncertainty vs. multiple measurements of the same quantity made by different instruments with different uncertainties

if you take away just one...

... quantitative detail from this course, it should be that you know, without stopping to think about it:

    ... how to estimate the error in a measurement composed of measurements of several different quantities, for each of which you have an error estimate
    ... how to combine separate measurements of the same quantity and estimate the error of their "fusion"

combining multiple error sources

in complex measurements there are usually multiple sources of error or uncertainty

measure or estimate each contribution Δx_i to the combined measurement X(x₁, x₂, x₃, ...)

in the worst possible instance all contributions have the same sign; in the best instance the signs conspire to make the net error zero; in the statistically-typical case they combine with "random phase", i.e., in quadrature:

    ΔX = (ΔX₁² + ΔX₂² + ΔX₃² + …)^(1/2)  where  ΔX_i = (∂X(x₁, x₂, x₃, ...)/∂x_i)·Δx_i

example: measure the volume of a box

    V = (L ± ΔL) (W ± ΔW) (H ± ΔH)
    ∂V/∂L = W H    ∂V/∂W = L H    ∂V/∂H = L W
    ΔV = ((W H ΔL)² + (L H ΔW)² + (L W ΔH)²)^(1/2)
    ΔV/V = ((ΔL/L)² + (ΔW/W)² + (ΔH/H)²)^(1/2)

note: this ignores the possibility that the box's angles are not precisely 90°; the formula for V should properly include the angles and the consequences of errors in their measurements
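a minimal sketch in Python of this quadrature sum (the dimensions and their uncertainties are invented for illustration):

    from math import sqrt

    L, dL = 100.0, 0.5   # mm
    W, dW = 50.0, 0.5    # mm
    H, dH = 20.0, 0.5    # mm

    V = L * W * H
    dV = sqrt((W * H * dL) ** 2 + (L * H * dW) ** 2 + (L * W * dH) ** 2)
    print(f"V = {V:.0f} ± {dV:.0f} mm³   (ΔV/V = {dV / V:.3%})")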

what if the components have different dimensionality?

say you are given base area & height:

    V = (A ± ΔA) (H ± ΔH)
    ∂V/∂A = H    ∂V/∂H = A
    ΔV = ((H ΔA)² + (A ΔH)²)^(1/2)
    ΔV/V = ((ΔA/A)² + (ΔH/H)²)^(1/2)

and if A = L W then ΔA = ((L ΔW)² + (W ΔL)²)^(1/2), so

    ΔV = ((H L ΔW)² + (H W ΔL)² + (A ΔH)²)^(1/2)

contrasting error propagation and sensor fusion

error propagation: when a single measurement has multiple error sources, the overall error estimate is

    (Σ component_error_sources²)^(1/2)

the overall error estimate is (as you should expect) bigger than the biggest single error source

in practice the overall error estimate is often dominated by the biggest single error source

when multiple measurements are made of the same measurand using different instruments, their individual contributions are weighted by the reciprocals of the squares of their individual error estimates

the overall error estimate is smaller than the smallest individual error source (as you should naturally expect!)

in practice the overall error estimate is often dominated by the smallest single error source

this is the most basic kind of "sensor fusion"

example: multiple temperature measurements

three thermometers give measurements ±1σ:

    T₁ = 19.0 ± 2.0 C
    T₂ = … ± 2.5 C
    T₃ = … ± 1.0 C

what is your best estimate of the actual temperature, and what is your estimate of the error in your estimate of the actual temperature?

weight each measurement by its "goodness", i.e., the reciprocal of the square of its error estimate:

    T = (Σ T_i/σ_i²) / (Σ 1/σ_i²)
      = (19.0/2.0² + …/2.5² + …/1.0²) / (1/2.0² + 1/2.5² + 1/1.0²)
      = … C
    σ = (Σ σ_i⁻²)^(−1/2) = 0.84 C

as expected, the error estimated for the combined measurement is smaller than any of the individual errors

as expected, the combined measurement estimate is close to the best individual measurement
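a minimal sketch in Python of this inverse-variance weighting (the σ's are the slide's; the T₂ and T₃ values lost from the slide are replaced by invented ones):

    from math import sqrt

    measurements = [(19.0, 2.0), (20.0, 2.5), (20.5, 1.0)]   # (T_i, sigma_i); T2, T3 invented

    den = sum(1 / s ** 2 for _, s in measurements)
    T = sum(T_i / s ** 2 for T_i, s in measurements) / den
    sigma = 1 / sqrt(den)
    print(f"T = {T:.2f} ± {sigma:.2f} C")   # sigma = 0.84 C, as on the slide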

reminder

make sure this is so clear to you that you never have to think about it twice:

    when a single measurement is composed of multiple sub-measurements, the overall error estimate is the sum-in-quadrature of the component error contributions
    when multiple measurements of the same measurand are available, they are fused by averaging, with each weighted by the reciprocal of the square of its own error estimate

assignment

(23) You make five range measurements using an ultrasonic sensor; the sensor's digital display reports ranges {9.9, 10.1, 10.0, 9.9, 10.1} m. The sensor was calibrated at 20 C. The speed of sound scales with the square root of the absolute temperature. Using a digital thermometer, you make three measurements: {25.0, 24.9, 25.1} C. Using a mercury-in-glass thermometer you make five measurements: {23.0, 23.5, 26.0, 24.0, 23.5} C. What is your best estimate of the range and its error? (Use one standard deviation for all your error estimates.)

some people sometimes use different weights, e.g., σ^(−β) with 1 < β < 2, because they feel it is conservative to give more weight to the less-good data, but β = 2 is usually strictly correct

here is a "seat of the pants" derivation, if you are willing to accept some assumptions on faith:

the output measurement is a linear combination of the input measurements; consider the case of just two:

    m = a·m₁ + (1 − a)·m₂

our aim is to determine the optimal weights a and (1 − a)

the output error depends on the same weights applied to the component standard deviations (not their squares):

    σ² = (a·σ₁)² + ((1 − a)·σ₂)²

take the correct value for a to be the one that minimizes σ: setting dσ/da = 0 gives

    a = σ₂²/(σ₁² + σ₂²)
    (1 − a) = σ₁²/(σ₁² + σ₂²)

from which:

    m = (m₁/σ₁² + m₂/σ₂²) / (1/σ₁² + 1/σ₂²)
    σ = (1/σ₁² + 1/σ₂²)^(−1/2)

extend to more than two measurements by applying this result recursively
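a minimal sketch (assuming SymPy) that checks the minimization symbolically:

    import sympy as sp

    a, s1, s2 = sp.symbols("a sigma1 sigma2", positive=True)
    var = (a * s1) ** 2 + ((1 - a) * s2) ** 2   # variance of m = a*m1 + (1-a)*m2

    a_opt = sp.solve(sp.diff(var, a), a)[0]
    print(a_opt)                                # sigma2**2/(sigma1**2 + sigma2**2)
    print(sp.simplify(var.subs(a, a_opt)))      # sigma1**2*sigma2**2/(sigma1**2 + sigma2**2)

the simplified optimal variance equals (1/σ₁² + 1/σ₂²)^(−1), matching the fused σ quoted above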

next few topics coming up...

    data acquisition
    light sensors

anytime sensor fusion updates

what if you already have an observation of a quantity's mean ⟨x⟩ and standard deviation σ ...
... and you make a new measurement x' with standard deviation σ' ...
... and you want to update ⟨x⟩ and σ with your new knowledge of x' and σ'?

it is the same as the problem we just did:

    ⟨x⟩_new = (⟨x⟩/σ² + x'/σ'²) / (1/σ² + 1/σ'²)
    σ_new = (1/σ² + 1/σ'²)^(−1/2)
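a minimal sketch in Python of this update rule (the prior and the new measurement are invented; applying the function repeatedly is the recursive extension mentioned earlier):

    from math import sqrt

    def update(mean, sigma, x_new, sigma_new):
        """inverse-variance fusion of a prior estimate with one new measurement"""
        w, w_new = 1 / sigma ** 2, 1 / sigma_new ** 2
        return (mean * w + x_new * w_new) / (w + w_new), 1 / sqrt(w + w_new)

    mean, sigma = 19.0, 2.0                        # invented prior
    mean, sigma = update(mean, sigma, 20.5, 1.0)   # invented new measurement
    print(f"⟨x⟩ = {mean:.2f} ± {sigma:.2f}")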

anytime sensor fusion pop quiz

what do you call an algorithm that implements the above idea, i.e., that the best way to update prior information with new information is to weight each by the reciprocal of the square of its standard deviation?

(just kidding about the quiz :-) )