Sampling distributions for counts and proportions. IPS chapter 5.1. © 2006 W. H. Freeman and Company.


Objectives (IPS chapter 5.1): Sampling distributions for counts and proportions
- Binomial distributions
- Sampling distribution of a count
- Sampling distribution of a proportion
- Normal approximation
- Binomial formulas

Reminder: the two types of data
- Quantitative: something that can be counted or measured and then averaged across individuals in the population (e.g., your height, your age, your IQ score).
- Categorical: something that falls into one of several categories. What can be counted is the proportion of items in each category (e.g., success/failure, tree species, your blood type: A, B, AB, O).
How do you figure it out? Ask:
- What are the n individuals/units in the sample (of size n)?
- What is being recorded about those n individuals/units?
- Is that a number (quantitative) or a statement (categorical)?

Binomial distributions
Binomial distributions are probability models for some categorical variables, often the number of successes in a series of n trials. The observations must meet these requirements:
- The total number of observations n is fixed in advance.
- Each observation falls into exactly one of two categories.
- The outcomes of all observations are independent.
- All observations have the same probability of "success," p.
Example: We record the next 50 births at a local hospital. Each newborn is either a boy or a girl; each baby is either born on a Sunday or not.

We express the binomial distribution for the number X of successes among n observations as a function of the parameters n and p.
- The parameter n is the total number of observations.
- The parameter p is the probability of success on each observation.
- The number of successes X can be any integer between 0 and n.
Example: A coin is flipped 10 times, and each outcome is either a head or a tail. Let X be the number of heads among those 10 flips, our count of "successes." On each flip the probability of success, "heads," is 0.5, and we assume the outcomes are independent. The number X of heads among 10 flips then has the binomial distribution B(n = 10, p = 0.5).
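
As an added illustration (not part of the original slides), here is a minimal Python sketch that tabulates the probabilities of B(n = 10, p = 0.5) for the coin-flip example, using only the standard library:

```python
from math import comb

def binomial_pmf(k: int, n: int, p: float) -> float:
    """P(X = k) for a binomial count X ~ B(n, p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 10, 0.5  # ten coin flips; "heads" counts as a success
for k in range(n + 1):
    print(f"P(X = {k:2d}) = {binomial_pmf(k, n, p):.4f}")

# The probabilities over all possible counts 0..n sum to 1.
print("total =", round(sum(binomial_pmf(k, n, p) for k in range(n + 1)), 6))
```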

Applications for binomial distributions
Binomial distributions are used to assign probabilities to the number of times that a particular event will occur in a sequence of observations.
- In a clinical trial, a patient's condition may improve or not. We study the number of patients who improved (improved/not improved).
- In mark-and-recapture experiments, animals are tagged. In subsequent years we examine newly captured animals to see whether they were tagged in previous seasons (tagged/not tagged).
- In quality control, we assess the number of defective items in a lot of goods. Each item either meets specifications or it does not.

Imagine that coins are spread out so that half of them are heads up and half tails up. Close your eyes and pick one. The probability that this coin is heads up is 0.5. However, if you don't put the coin back in the pile, the probability that the next coin you pick up is also heads up is now less than 0.5. Therefore, the successive observations are not independent. Likewise, choosing a simple random sample from any population is not quite a binomial setting. However, when the population is large, removing a few items has a very small effect on the composition of the remaining population, and we often "play like" the observations are independent.

Sampling distribution of a count
A population contains a proportion p of successes. If the population is much larger than the sample, the count X of successes in an SRS of size n has approximately the binomial distribution B(n, p). The n observations will be nearly independent when the size of the population is much larger than the size of the sample. As a rule of thumb, the binomial sampling distribution for counts can be used when the population is at least 20 times as large as the sample.
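
To see why this rule of thumb is reasonable, here is an added simulation sketch (the population size, sample size, and number of repetitions are arbitrary choices for illustration): it draws many SRSs without replacement from a finite population with 8% successes and compares the mean and standard deviation of the simulated counts with the binomial values np and √(np(1 − p)).

```python
import random
from math import sqrt

random.seed(1)

pop_size, p, n = 10_000, 0.08, 20  # population is 500 times the sample size
population = [1] * int(pop_size * p) + [0] * (pop_size - int(pop_size * p))

# Draw many SRSs without replacement and record the count of successes in each.
counts = [sum(random.sample(population, n)) for _ in range(20_000)]

mean = sum(counts) / len(counts)
sd = sqrt(sum((x - mean) ** 2 for x in counts) / len(counts))

print(f"simulated mean {mean:.3f}  vs  binomial np = {n * p:.3f}")
print(f"simulated sd   {sd:.3f}  vs  binomial sqrt(np(1-p)) = {sqrt(n * p * (1 - p)):.3f}")
```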

Reminder: Sampling variability
Each time we take a random sample from a population, we are likely to get a different set of individuals and, thus, our calculation will come out differently. This is called sampling variability. The sampling distribution of a statistic is a probability distribution that describes all the possible values of the statistic and the probabilities associated with those values.

Binomial mean and standard deviation
The center and spread of the binomial distribution for a count X are given by the mean and standard deviation:
µ = np and σ = √(np(1 − p))
[Figure: the effect of changing p when n is fixed; a) n = 10, p = 0.25; b) n = 10, p = 0.5; c) n = 10, p = 0.75.] For small samples, binomial distributions are skewed when p is different from 0.5.

Color blindness
The frequency of color blindness (dyschromatopsia) in the Caucasian American male population is estimated to be about 8%. We take a random sample of size 20 from this population. The population is certainly much larger than 20 times the sample size, so we can approximate the sampling distribution of the count by B(n = 20, p = 0.08).
- What is the probability that two or fewer men in the sample are color blind?
  Table C => P(X = 0) + P(X = 1) + P(X = 2) = .1887 + .3282 + .2711 = .7880
- What is the probability that more than two will be color blind?
  P(X > 2) = 1 − P(X ≤ 2) = 1 − .7880 = .2120
- What is the probability that exactly five will be color blind?
  P(X = 5) = .0145
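
The Table C values quoted above can be checked directly; a small added sketch (standard library only):

```python
from math import comb

def pmf(k: int, n: int = 20, p: float = 0.08) -> float:
    """P(X = k) for the color blindness count X ~ B(20, 0.08)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

p_at_most_2 = sum(pmf(k) for k in range(3))
print(f"P(X <= 2) = {p_at_most_2:.4f}")      # about 0.7880
print(f"P(X > 2)  = {1 - p_at_most_2:.4f}")  # about 0.2120
print(f"P(X = 5)  = {pmf(5):.4f}")           # about 0.0145
```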

What are the mean and standard deviation of the count of color blind individuals in the SRS of 20 Caucasian American males?
µ = np = 20 × 0.08 = 1.6
σ = √(np(1 − p)) = √(20 × 0.08 × 0.92) = 1.21
What if we take an SRS of size 10? Of size 75?
n = 10, p = 0.08: µ = 10 × 0.08 = 0.8 and σ = √(10 × 0.08 × 0.92) = 0.86
n = 75, p = 0.08: µ = 75 × 0.08 = 6 and σ = √(75 × 0.08 × 0.92) = 2.35
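
An added sketch that reproduces these calculations for the three sample sizes:

```python
from math import sqrt

p = 0.08  # proportion of color blind men in the population
for n in (20, 10, 75):
    mu = n * p
    sigma = sqrt(n * p * (1 - p))
    print(f"n = {n:3d}: mean = {mu:.2f}, standard deviation = {sigma:.2f}")
```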

Sampling distribution of a proportion
We are often more interested in the proportion of successes than in the count. In statistical sampling, the sample proportion of successes, p-hat, is used to estimate the proportion p of successes in the population. For any SRS of size n, the sample proportion of successes is p-hat = X/n, the count of successes divided by the sample size.
- In an SRS of 50 students in an undergraduate class, 10 are Hispanic: p-hat = 10/50 = 0.2 (proportion of Hispanics in the sample).
- The 30 subjects in an SRS are asked to taste an unmarked brand of coffee and rate it "would buy" or "would not buy." Eighteen subjects rated the coffee "would buy": p-hat = 18/30 = 0.6 (proportion of "would buy").

If the sample size is much smaller than the population size, and p is the proportion of successes in the whole population, then the mean and standard deviation of p-hat are:
mean µ = p and standard deviation σ = √(p(1 − p)/n)
- Because the mean of p-hat is p, we say that the sample proportion p-hat in an SRS is an unbiased estimator of the population proportion p.
- The variability of p-hat about its mean decreases as the sample size increases, so large samples usually give close estimates of the population proportion p.
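
An added simulation sketch (the values of p, n, and the number of repetitions are arbitrary illustrations) that checks these two facts: the simulated sample proportions average out to p, and their spread is close to √(p(1 − p)/n).

```python
import random
from math import sqrt

random.seed(2)
p, n, reps = 0.6, 30, 50_000  # e.g. 60% "would buy", samples of 30 subjects

# Each repetition: draw n independent successes/failures and compute p-hat = X/n.
p_hats = [sum(random.random() < p for _ in range(n)) / n for _ in range(reps)]

mean = sum(p_hats) / reps
sd = sqrt(sum((x - mean) ** 2 for x in p_hats) / reps)

print(f"mean of p-hat: {mean:.4f}  (theory: {p})")
print(f"sd of p-hat:   {sd:.4f}  (theory: {sqrt(p * (1 - p) / n):.4f})")
```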

Normal approximation
If n is large, and p is not too close to 0 or 1, then the binomial distribution can be usefully approximated by the normal distribution N(µ = np, σ² = np(1 − p)). Practically speaking, the normal approximation can be used when both np ≥ 10 and n(1 − p) ≥ 10. If X is the number of successes in the sample and p-hat = X/n is the sample proportion, then for large n their sampling distributions are approximately:
- X is approximately N(µ = np, σ² = np(1 − p))
- p-hat is approximately N(µ = p, σ² = p(1 − p)/n)
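
An added sketch comparing the exact binomial probability P(X ≤ x) with its normal approximation, using only the standard library (the values of n, p, and x are arbitrary illustrations that satisfy the np ≥ 10 and n(1 − p) ≥ 10 rule):

```python
from math import comb, sqrt
from statistics import NormalDist

def binom_cdf(x: int, n: int, p: float) -> float:
    """Exact P(X <= x) for X ~ B(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(x + 1))

n, p, x = 100, 0.3, 25  # np = 30 and n(1 - p) = 70, both at least 10
approx = NormalDist(mu=n * p, sigma=sqrt(n * p * (1 - p)))

print(f"exact binomial P(X <= {x}) = {binom_cdf(x, n, p):.4f}")
print(f"normal approximation       = {approx.cdf(x):.4f}")
```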

Sampling distribution of p-hat
The sampling distribution of p-hat is never exactly normal. But as the sample size increases, the sampling distribution of p-hat becomes approximately normal. The normal approximation is most accurate for any fixed n when p is close to 0.5, and least accurate when p is near 0 or near 1.

Color blindness
Recall that the frequency of color blindness in the Caucasian American male population is about 8%. We take a random sample of size 125 from this population. What is the probability that six individuals or fewer in the sample are color blind?
- Sampling distribution of the count X: B(n = 125, p = 0.08). Exact calculations lead to P(X ≤ 6) = .1198, or about 12%.
- Normal approximation for X: µ = np = 125 × 0.08 = 10 and σ = √(np(1 − p)) = √9.2 = 3.03. For X = 6, Z = (6 − 10)/3.03 = −1.32, so P(X ≤ 6) ≈ P(Z < −1.32) = .0934.
The normal approximation is reasonable, though not perfect: here p = 0.08 is far from 0.5, where the normal approximation is at its best, and 125 is the smallest sample size that allows use of the normal approximation (np = 10 and n(1 − p) = 115).

[Figure: sampling distributions of the count for the color blindness example, with n = 50, n = 125, and n = 1000.] The larger the sample size, the better the normal approximation suits the binomial distribution. Avoid sample sizes too small for np or n(1 − p) to reach at least 10 (e.g., n = 50).

Normal approximation: continuity correction
The normal distribution is a better approximation of the binomial distribution if we perform a continuity correction, where x' = x + 0.5 is substituted for x, so that P(X ≤ x) is computed as P(X ≤ x + 0.5) under the normal curve. Why? A binomial random variable is a discrete variable that can only take whole-number values. In contrast, a normal random variable is a continuous variable that can take any numerical value. So P(X ≤ 10) for a binomial variable is computed as P(X ≤ 10.5) using the normal approximation. P(X < 10) for a binomial variable excludes the outcome X = 10, so we exclude the entire interval from 9.5 to 10.5 and calculate P(X ≤ 9.5) when using the normal approximation.

Color blindness
As before, the frequency of color blindness in the Caucasian American male population is about 8%, and we take a random sample of size 125 from this population. Sampling distribution of the count X: B(n = 125, p = 0.08). The continuity correction provides a more accurate estimate: all we have to do is use 6.5 in the formula instead of 6.
- Binomial: P(X ≤ 6) = .1198 (this is the exact probability).
- Normal approximation: P(X ≤ 6) = .0934, while the continuity-corrected P(X ≤ 6.5) ≈ .124 estimates the exact .1198 much more accurately.
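
An added sketch reproducing the three numbers in this example with the standard library:

```python
from math import comb, sqrt
from statistics import NormalDist

n, p, x = 125, 0.08, 6
exact = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(x + 1))
normal = NormalDist(mu=n * p, sigma=sqrt(n * p * (1 - p)))

print(f"exact binomial P(X <= 6)           = {exact:.4f}")                # about 0.1198
print(f"normal approximation at 6          = {normal.cdf(x):.4f}")        # about 0.093
print(f"with continuity correction, at 6.5 = {normal.cdf(x + 0.5):.4f}")  # about 0.124
```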

Binomial formulas
The number of ways of arranging k successes in a series of n observations (with constant probability p of success) is the number of possible combinations (unordered sequences). This count is given by the binomial coefficient:
C(n, k) = n! / (k!(n − k)!), where k = 0, 1, 2, ..., or n.
The binomial coefficient ("n choose k") uses the factorial notation "!". The factorial n! for any positive integer n is n! = n × (n − 1) × (n − 2) × ... × 3 × 2 × 1, and by convention 0! = 1.
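
For reference, an added check that Python's math.comb agrees with the factorial formula:

```python
from math import comb, factorial

n, k = 20, 5  # e.g. arranging 5 successes among 20 observations
by_formula = factorial(n) // (factorial(k) * factorial(n - k))
print(by_formula, comb(n, k))  # both print 15504
```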

Calculations for binomial probabilities
The binomial coefficient counts the number of ways in which k successes can be arranged among n observations. The binomial probability P(X = k) is this count multiplied by the probability of any specific arrangement of the k successes: P(X = k) = C(n, k) p^k q^(n−k), where q = 1 − p.

X       P(X)
0       C(n, 0) p^0 q^n = q^n
1       C(n, 1) p^1 q^(n−1)
2       C(n, 2) p^2 q^(n−2)
...     ...
k       C(n, k) p^k q^(n−k)
...     ...
n       C(n, n) p^n q^0 = p^n
Total   1

The probability that a binomial random variable takes any range of values is the sum of the probabilities of getting exactly that many successes in n observations. For example, P(X ≤ 2) = P(X = 0) + P(X = 1) + P(X = 2).

Color blindness
Referring again to the color blindness example: what is the probability that exactly five individuals in the sample of 20 are color blind?
P(X = 5) = [n! / (k!(n − k)!)] p^k (1 − p)^(n−k) = [20! / (5! 15!)] (0.08)^5 (0.92)^15
P(X = 5) = [(16 × 17 × 18 × 19 × 20) / (1 × 2 × 3 × 4 × 5)] (0.08)^5 (0.92)^15
P(X = 5) = 15,504 × (0.08)^5 × (0.92)^15 ≈ 0.0145
Obviously, these kinds of calculations can be tedious.