Keith D. McCroan US EPA National Air and Radiation Environmental Laboratory.

Keith D. McCroan US EPA National Air and Radiation Environmental Laboratory Radiobioassay and Radiochemical Measurements Conference October 29, 2009.
Issue  Most rad-chemists learn early to estimate “counting uncertainty” by the square root of the count C.  They are likely to learn that this works because C has a “Poisson” distribution.  They may not learn why that statement is true, but they become comfortable with it.

“The standard deviation of C equals its square root. Got it.”

The Poisson distribution  What’s special about a Poisson distribution?  What is really unique is that its mean equals its variance: μ = σ².  This is why we can estimate the standard deviation σ by the square root of the observed value – very convenient.  What other well-known distributions have this property? None that I can name.

The Poisson distribution in Nature  How does Nature produce a Poisson distribution?  The Poisson distribution is just an approximation – like a normal distribution.  It can be a very good approximation of another distribution called a binomial distribution.

Binomial distribution  You get a binomial distribution when you perform a series of N independent trials of an experiment, each having two possible outcomes (success and failure).  The probability of success p is the same for each trial (e.g., flipping a coin, p = 0.5).  If X is number of successes, it has the “binomial distribution with parameters N and p.” X ~ Bin(N, p)
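This setup is easy to simulate. A minimal sketch (the seed, replicate count, and function name are my own illustrative choices, not from the talk):

```python
import random

rng = random.Random(42)  # fixed seed so the sketch is reproducible

def binomial_trial(n, p):
    """One binomial experiment: count successes in n independent
    trials, each succeeding with probability p."""
    return sum(1 for _ in range(n) if rng.random() < p)

# 10,000 replicates of 20 fair-coin flips: X ~ Bin(N=20, p=0.5)
samples = [binomial_trial(20, 0.5) for _ in range(10_000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
# Theory: E(X) = Np = 10, V(X) = Np(1-p) = 5
```

Note that for p = 0.5 the mean (10) and variance (5) are far apart – the near-equality that makes Poisson statistics work only appears when p is tiny.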

Poisson approximation  The mean of X is Np and the variance is Np(1 − p).  When p is tiny, the mean and variance are almost equal, because (1 − p) ≈ 1.  Example: N is number of atoms of a radionuclide in a source, p is probability of decay and counting of a particular atom during the counting period (assuming half- life isn’t short), and C is number of counts.
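A quick numerical check of the approximation, with made-up values of N and p (the source sizes here are hypothetical, chosen only so that Np = 10):

```python
from math import comb, exp, factorial

def binom_pmf(k, n, p):
    """Exact binomial probability of k successes in n trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, mu):
    """Poisson probability of k events when the mean is mu."""
    return exp(-mu) * mu**k / factorial(k)

# Hypothetical source: N = 10,000 atoms, each with p = 0.001 chance
# of producing a count during the counting period, so Np = 10.
n, p = 10_000, 0.001
mu = n * p
diffs = [abs(binom_pmf(k, n, p) - poisson_pmf(k, mu)) for k in range(25)]
# Because p is tiny, the two pmfs agree to several decimal places.
```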

Poisson counting  In this case the mean of C is Np and the variance is also approximately Np.  We can treat C as Poisson: C ~ Poi(Np)

Poisson – Summary  In a nutshell, the Poisson distribution describes occurrences of very rare events (e.g., decay and counting of an unstable atom),  where significant numbers are observed only because the event has so many chances to occur (e.g., a very large number of these atoms in the source).

Violating the assumptions  Imagine measuring ²²²Rn and progeny by scintillation counting – Lucas cell or LSC.  Assumptions for the binomial/Poisson distribution are violated. How?  First, the count time may not be short enough compared to the half-life of ²²²Rn.  The binomial probability p may not be small.  If you were counting just the radon, you might need the binomial distribution and not the Poisson approximation.

More importantly...  We actually count radon + progeny.  We may start with N atoms of ²²²Rn in the source, but we don’t get a simple “success” or “failure” to record for each one.  Each atom might produce one or more counts as it decays.  C isn’t just the number of “successes.”

Lucas 1964  In 1964 Henry Lucas published an analysis of the counting statistics for ²²²Rn and progeny in a Lucas cell.  Apparently many rad-chemists either never heard of it or didn’t fully appreciate its significance.  You still see counting uncertainty for these measurements being calculated as √C.

Radon decay  Slightly simplified decay chain:  A radon atom emits three α-particles and two β-particles on its way to becoming ²¹⁰Pb (not stable but relatively long-lived).  In a Lucas cell we count just the alphas – 3 of them in this chain.

Thought experiment  Let’s pretend that for every 222 Rn atom that decays during the counting period, we get exactly 3 counts (for the 3 α-particles that will be emitted).  What happens to the counting statistics?

Non-Poisson counting  C is always a multiple of 3 (e.g., 0, 3, 6, 9, 12,...).  That’s not Poisson – a Poisson variable can assume any nonnegative integer value.  More important question for us: What is the relationship between the mean and the variance of C?

Index of dispersion, J  The ratio of the variance V(C) to the mean E(C) is called the index of dispersion.  It is often denoted by D, but Lucas used J – that’s why this factor is sometimes called a “J factor.”  For a Poisson distribution, J = 1.  What happens to J when you get 3 counts per decaying atom?

Mean and variance  Say D is the number of radon atoms that decay during the counting period and C is the number of counts produced.  Assume D is Poisson, so V(D) = E(D). C = 3 × D So, E(C) = 3 × E(D) V(C) = 9 × V(D) J = V(C) / E(C) = 3 × V(D) / E(D) = 3
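The derivation above can be checked by simulation (the mean decay count of 20, the seed, and the sample size are arbitrary illustrative choices of mine):

```python
import random
from math import exp

rng = random.Random(1)

def poisson_sample(mu):
    """Draw one Poisson variate by Knuth's product-of-uniforms method."""
    L, k, prod = exp(-mu), 0, rng.random()
    while prod > L:
        k += 1
        prod *= rng.random()
    return k

# D = number of decays during the count time (Poisson with mean 20);
# the thought experiment gives exactly C = 3D counts.
C = [3 * poisson_sample(20.0) for _ in range(20_000)]
mean_C = sum(C) / len(C)
var_C = sum((c - mean_C) ** 2 for c in C) / len(C)
J = var_C / mean_C  # index of dispersion; comes out near 3, not 1
```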

Index of dispersion  So, the index of dispersion for C is 3, not 1 which we’re accustomed to seeing.  This thought experiment isn’t realistic.  You don’t really get exactly 3 counts for each atom of analyte that decays.  It’s much trickier to calculate J.

Technique  Fortunately you really only have to consider a typical atom of the analyte (e.g., ²²²Rn) at the start of the analysis.  What is the index of dispersion J for the number of counts C that will be produced by this hypothetical atom as it decays?  Easiest approach involves a statistical technique called conditioning.

Conditioning  Consider the possible ways the atom can decay, and group them into mutually exclusive alternative cases (“events”) that together cover all the possibilities.  It is convenient to define the events in terms of the states the atom is in at the beginning and end of the counting period.  Calculate the probability of each event.

Conditioning - Continued  For each event, calculate the conditional expected values of C and C² given the event (i.e., assuming the event occurs).  Next calculate the overall expected values E(C) and E(C²) as probability-weighted averages of the conditional values.  Calculate V(C) = E(C²) − E(C)².  Finally, J = V(C) / E(C).  Details left to the reader.
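As a sanity check on this kind of calculation, here is a deliberately simplified model (my own toy model, not Lucas’s analysis): assume every atom that decays during the count emits all 3 alphas within the counting period, each detected independently with efficiency eps, so the counts Y per decaying atom are Bin(3, eps). For a Poisson number of decaying atoms, the compound-Poisson identity J = V(C)/E(C) = E(Y²)/E(Y) then gives J in closed form:

```python
def J_cascade(n_alphas, eps):
    """Index of dispersion for C = Y1 + ... + YD, where D is Poisson
    and Y ~ Bin(n_alphas, eps) is the counts per decaying atom.
    For a compound Poisson sum, J = V(C)/E(C) = E(Y^2)/E(Y)."""
    ey = n_alphas * eps                       # E(Y)
    ey2 = n_alphas * eps * (1 - eps) + ey**2  # E(Y^2) = V(Y) + E(Y)^2
    return ey2 / ey                           # = 1 + (n_alphas - 1)*eps

# Perfect detection (eps = 1) reproduces the thought experiment, J = 3;
# as eps -> 0, J falls back toward the Poisson value of 1.
```

In this toy model J = 1 + 2·eps for the 3-alpha chain, which matches the intuition that high detection efficiency is what drives J above 1.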

Radium-226  Sometimes you measure radon to quantify the parent ²²⁶Ra.  Let J be the index of dispersion for the number of counts produced by a typical atom of the analyte ²²⁶Ra – not radon.  Technique for finding J (conditioning) is the same, but the details are different.  Value of J is always > 1 in this case.  Bounds: 1 < J ≤ 1 + CF × 2 / 3

Thorium-234  If you beta-count a sample containing ²³⁴Th, you’re counting both ²³⁴Th and the short-lived decay product ²³⁴ᵐPa.  With ~50 % beta detection efficiency, you have non-Poisson statistics here too.  The counts often come in pairs.  The value of J doesn’t tend to be as large as when counting radon in a Lucas cell or LSC (less than 1.5).
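The pairing effect can be quantified with a toy model of my own (not from the talk): treat each ²³⁴Th decay as eventually yielding two betas (one from ²³⁴Th, one from ²³⁴ᵐPa) within the count time, each detected independently with efficiency eps, so Y ~ Bin(2, eps) counts per decay. For a Poisson number of decays, J = E(Y²)/E(Y) = 1 + eps:

```python
def J_pairs(eps):
    """Index of dispersion when each decay yields Y ~ Bin(2, eps)
    counts and the number of decays is Poisson: J = E(Y^2)/E(Y)."""
    ey = 2 * eps                      # E(Y)
    ey2 = 2 * eps * (1 - eps) + ey**2 # E(Y^2) = V(Y) + E(Y)^2
    return ey2 / ey                   # = 1 + eps

# At ~50% efficiency this gives J = 1.5, consistent with the remark
# that J stays below 1.5 for this kind of counting.
```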

Gross alpha/beta?  If you don’t know what you’re counting, how can you estimate J?  You really can’t.  Probably most methods implicitly assume J = 1.  But who really knows?

Testing for J > 1  You can test for J > 1 with a χ² test, but you may need a lot of measurements.
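One common form of that test is a sketch like the following: compare the dispersion statistic of replicate counts against a χ² critical value. Everything here is illustrative – the replicate counts are made up, and the 5 % critical value uses the Wilson–Hilferty approximation rather than an exact table:

```python
from math import sqrt

def dispersion_statistic(counts):
    """T = sum (x_i - xbar)^2 / xbar, approximately chi-square with
    n-1 degrees of freedom when the counts are Poisson (J = 1)."""
    n = len(counts)
    xbar = sum(counts) / n
    return sum((x - xbar) ** 2 for x in counts) / xbar

def chi2_upper_crit(df, z=1.645):
    """Approximate upper 5% chi-square critical value via the
    Wilson-Hilferty cube approximation (reasonable for df >= 10)."""
    return df * (1 - 2 / (9 * df) + z * sqrt(2 / (9 * df))) ** 3

# Hypothetical replicate counts of one source; conclude J > 1 when
# the statistic exceeds the critical value.
counts = [112, 95, 130, 88, 141, 103, 99, 127, 150, 84, 121]
T = dispersion_statistic(counts)
overdispersed = T > chi2_upper_crit(len(counts) - 1)
```

The warning about needing a lot of measurements shows up here too: with few replicates the critical value is far above n − 1, so only strong overdispersion is detectable.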

Important points  Suspect non-Poisson counting if:  One atom can produce more than one count as it decays through a series of short-lived states  Detection efficiency is high  Together these effects tend to give you on average more than one count per decaying atom.

Questions