Keith D. McCroan US EPA National Air and Radiation Environmental Laboratory Radiobioassay and Radiochemical Measurements Conference October 29, 2009

Counting uncertainty Most rad-chemists learn early to estimate counting uncertainty by the square root of the count C. They are likely to learn that this works because C has a Poisson distribution. They may not learn why that statement is true, but they become comfortable with it. 2

The standard deviation of C equals its square root. Got it. 3

The Poisson distribution What's special about a Poisson distribution? What is really unique is the fact that its mean equals its variance: μ = σ². This is why we can estimate the standard deviation σ by the square root of the observed value – very convenient. What other well-known distributions have this property? None that I can name. 4
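A quick numerical illustration of the mean-equals-variance property (a minimal sketch; the rate μ = 40 and the sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)

mu = 40.0                                # arbitrary mean count rate
counts = rng.poisson(mu, size=100_000)   # many simulated Poisson counts

print(counts.mean())                     # ~40: the mean is mu
print(counts.var())                      # ~40: the variance is also mu
print(np.sqrt(counts.mean()))            # ~6.3: the square root of a typical count estimates sigma
```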

The Poisson distribution in Nature How does Nature produce a Poisson distribution? The Poisson distribution is just an approximation – like a normal distribution. It can be a very good approximation of another distribution called a binomial distribution. 5

Binomial distribution You get a binomial distribution when you perform a series of N independent trials of an experiment, each having two possible outcomes (success and failure). The probability of success p is the same for each trial (e.g., flipping a coin, p = 0.5). If X is the number of successes, it has the binomial distribution with parameters N and p: X ~ Bin(N, p). 6

Poisson approximation The mean of X is Np and the variance is Np(1 − p). When p is tiny, the mean and variance are almost equal, because (1 − p) ≈ 1. Example: N is the number of atoms of a radionuclide in a source, p is the probability of decay and counting of a particular atom during the counting period (assuming the half-life isn't short), and C is the number of counts. 7
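A small numerical sketch of the approximation (the values of N and p are illustrative only, chosen so that Np = 20):

```python
import numpy as np
from scipy import stats

# Illustrative values: many atoms, each with a tiny chance of being counted.
N, p = 10_000_000, 2e-6
print(N * p, N * p * (1 - p))            # mean and variance are essentially equal

# The binomial probabilities are nearly indistinguishable from Poisson(Np).
k = np.arange(0, 41)
binom_pmf = stats.binom.pmf(k, N, p)
poisson_pmf = stats.poisson.pmf(k, N * p)
print(np.max(np.abs(binom_pmf - poisson_pmf)))   # a very small difference
```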

Poisson counting In this case the mean of C is Np and the variance is also approximately Np. We can consider C to be Poisson: C ~ Poi(μ) where μ = Np 8

Poisson – Summary In a nutshell, the Poisson distribution describes occurrences of relatively rare (very rare) events (e.g., decay and counting of an unstable atom), where significant numbers are observed only because the event has so many chances to occur (e.g., a very large number of these atoms in the source). 9

Violating the assumptions Imagine measuring ²²²Rn and progeny by scintillation counting – Lucas cell or LSC. Assumptions for the binomial/Poisson distribution are violated. How? First, the count time may not be short enough compared to the half-life of ²²²Rn. The binomial probability p may not be small. If you were counting just the radon, you might need the binomial distribution and not the Poisson approximation. 10

More importantly... We actually count radon + progeny. We may start with N atoms of ²²²Rn in the source, but we don't get a simple success or failure to record for each one. Each atom might produce one or more counts as it decays. C isn't just the number of successes. 11

Lucas 1964 In 1964 Henry Lucas published an analysis of the counting statistics for ²²²Rn and progeny in a Lucas cell. Apparently many rad-chemists either never heard of it or didn't fully appreciate its significance. You still see counting uncertainty for these measurements being calculated as √C. 12

Radon decay Slightly simplified decay chain: ²²²Rn → ²¹⁸Po → ²¹⁴Pb → ²¹⁴Bi → ²¹⁴Po → ²¹⁰Pb. A radon atom emits three α-particles and two β-particles on its way to becoming ²¹⁰Pb (not stable but relatively long-lived). In a Lucas cell we count just the alphas – 3 of them in this chain. 13

Thought experiment Let's pretend that for every ²²²Rn atom that decays during the counting period, we get exactly 3 counts (for the 3 α-particles that will be emitted). What happens to the counting statistics? 14

Non-Poisson counting C is always a multiple of 3 (e.g., 0, 3, 6, 9, 12,...). That's not Poisson – a Poisson variable can assume any nonnegative integer value. The more important question for us: what is the relationship between the mean and the variance of C? 15

Index of dispersion, J The ratio of the variance V(C) to the mean E(C) is called the index of dispersion. It is often denoted by D, but Lucas used J – that's why this factor is sometimes called a "J factor." For a Poisson distribution, J = 1. What happens to J when you get 3 counts per decaying atom? 16

Mean and variance Say D is the number of radon atoms that decay during the counting period and C is the number of counts produced. Assume D is Poisson, so V(D) = E(D). By assumption, C = 3 × D. So E(C) = 3 × E(D) and V(C) = 9 × V(D), which gives J = V(C) / E(C) = 3 × V(D) / E(D) = 3. 17
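A simulation of this thought experiment (a minimal sketch; the mean number of decays per counting period is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(2)

mean_decays = 50.0                            # arbitrary illustrative value
D = rng.poisson(mean_decays, size=200_000)    # decays in each counting period
C = 3 * D                                     # exactly 3 counts per decaying atom

print(C.var() / C.mean())                     # index of dispersion J, ~3 instead of 1
```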

Index of dispersion So, the index of dispersion for C is 3, not the 1 we're accustomed to seeing. This thought experiment isn't realistic. You don't really get exactly 3 counts for each atom of analyte that decays. It's much trickier to calculate J correctly. 18

Technique Fortunately you really only have to consider a typical atom of the analyte (e.g., ²²²Rn) at the start of the analysis. What is the index of dispersion J for the number of counts C that will be produced by this hypothetical atom as it decays? The easiest approach involves a statistical technique called conditioning. 19

Conditioning Consider all the possible histories for the atom – i.e., all the different ways the atom can decay. It is convenient to define the histories in terms of the states the atom is in at the beginning and end of the counting period. Calculate the probability of each history, typically using the Bateman equations. 20

Conditioning - Continued For each history, calculate the conditional expected values of C and C² given that history (i.e., assuming it occurs). Next calculate the overall expected values E(C) and E(C²) as probability-weighted averages of the conditional values. Calculate V(C) = E(C²) − E(C)². Finally, J = V(C) / E(C). Details left to the reader. 21
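A sketch of the bookkeeping only – the history probabilities and conditional moments below are made-up placeholders, not a real nuclide; in practice the probabilities would come from the Bateman equations:

```python
# Hypothetical histories for one atom of analyte during the counting period:
# (probability of history, E[C | history], E[C^2 | history]).
histories = [
    (0.90, 0.00, 0.00),   # atom does not decay during the count
    (0.06, 0.80, 1.40),   # decays partway through the chain
    (0.04, 2.10, 5.30),   # decays all the way to the long-lived end state
]

EC  = sum(p * m1 for p, m1, _ in histories)   # E(C), probability-weighted average
EC2 = sum(p * m2 for p, _, m2 in histories)   # E(C^2)
VC  = EC2 - EC**2                             # V(C) = E(C^2) - E(C)^2
J   = VC / EC                                 # index of dispersion

print(EC, VC, J)
```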

Radium-226 Sometimes you measure radon to quantify the parent ²²⁶Ra. Let J be the index of dispersion for the number of counts produced by a typical atom of the analyte ²²⁶Ra – not radon. The technique for finding J (conditioning) is the same, but the details are different. The value of J is always > 1 in this case. 22

Thorium-234 If you beta-count a sample containing ²³⁴Th, you're counting both ²³⁴Th and the short-lived decay product ²³⁴ᵐPa. With ~50% beta detection efficiency, you have non-Poisson statistics here too. The counts often come in pairs. The value of J doesn't tend to be as large as when counting radon in a Lucas cell or LSC (less than 1.5). 23
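A simulation of the paired-count effect under simplified assumptions: each decaying ²³⁴Th atom is taken to emit two betas (its own plus the ²³⁴ᵐPa beta almost immediately afterward), each detected independently with efficiency ε; the efficiency and decay rate below are illustrative values:

```python
import numpy as np

rng = np.random.default_rng(3)

eps = 0.5                                     # assumed beta detection efficiency
mean_decays = 30.0                            # arbitrary mean number of decays per count period

D = rng.poisson(mean_decays, size=200_000)    # decaying 234Th atoms in each period
C = rng.binomial(2 * D, eps)                  # 2 betas per decay, each detected with probability eps

print(C.var() / C.mean())                     # J is ~1 + eps = 1.5 in this idealized case;
                                              # real values fall somewhat below that
```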

Gross alpha/beta? If you don't know what you're counting, how can you estimate J? You really can't. Probably most methods implicitly assume J = 1. But who really knows? 24

Simplification Assume every radiation of the decaying atom has detection efficiency ε or 0. Then J = 1 + ε × (m₂ − m₁ − m₁²) / m₁, where m₁ is the expected number of detectable radiations from an atom of analyte during the counting interval and m₂ is the expected square of this number. 25
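A minimal Monte Carlo check of this formula, assuming each detectable radiation is detected independently with efficiency ε and using an arbitrary, purely illustrative distribution for the number of detectable radiations per atom:

```python
import numpy as np

rng = np.random.default_rng(4)

eps = 0.3                                     # assumed detection efficiency
n_atoms = 1_000_000

# M = number of detectable radiations an atom emits during the counting
# interval (here at most 3); the probabilities are illustrative only.
M = rng.choice([0, 1, 2, 3], size=n_atoms, p=[0.96, 0.01, 0.01, 0.02])
C = rng.binomial(M, eps)                      # each radiation detected with probability eps

m1, m2 = M.mean(), (M**2).mean()
J_formula = 1 + eps * (m2 - m1 - m1**2) / m1
J_simulated = C.var() / C.mean()

print(J_formula, J_simulated)                 # the two agree closely
```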

Bounds for J m₁ ≤ m₂ ≤ N × m₁, where N is the maximum number of counts per atom. So, 1 − ε × m₁ ≤ J ≤ 1 + ε × (N − m₁ − 1). In many situations m₁ is very small. Then, approximately, 1 ≤ J ≤ 1 + ε × (N − 1). E.g., for ²²⁶Ra measured by ²²²Rn in a Lucas cell, N = 3. So, 1 ≤ J ≤ 1 + 2ε. 26

Remember Suspect non-Poisson counting if: (1) one atom can produce more than one count (N > 1) as it decays through a series of short-lived states, and (2) the detection efficiency (ε) is high. Together these effects tend to give you, on average, more than one count per decaying atom. In many cases, 1 ≤ J ≤ 1 + ε × (N − 1). 27

Questions? 28

Reference Lucas, H.F., Jr., and D.A. Woodward (1964). Journal of Applied Physics 35.

Testing for J > 1 You can test J > 1 with a χ² test, but you may need a lot of measurements. 30
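A sketch of one common way to do this – the classical dispersion test, where under the Poisson hypothesis (n − 1) × s² / mean is approximately χ² with n − 1 degrees of freedom; the replicate counts below are simulated (deliberately over-dispersed, with counts in triples) just to illustrate the mechanics:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

n = 30                                        # number of replicate measurements
counts = 3 * rng.poisson(25.0, size=n)        # simulated counts with J = 3 (counts come in triples)

chi2 = (n - 1) * counts.var(ddof=1) / counts.mean()
p_value = stats.chi2.sf(chi2, df=n - 1)       # one-sided: a large chi2 suggests J > 1

print(chi2, p_value)                          # a small p-value is evidence that J > 1
```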