Chapter 23: Inferences About Means
Copyright © 2009 Pearson Education, Inc.
Getting Started
Now that we know how to create confidence intervals and test hypotheses about proportions, it would be nice to be able to do the same for means.
Just as we did before, we will base both our confidence interval and our hypothesis test on the sampling distribution model.
The Central Limit Theorem told us that the sampling distribution model for means is Normal with mean μ and standard deviation SD(ȳ) = σ/√n.
Getting Started (cont.)
All we need is a random sample of quantitative data and the true population standard deviation, σ. Well, that's a problem…
Getting Started (cont.)
Proportions have a link between the proportion value and the standard deviation of the sample proportion. This is not the case with means—knowing the sample mean ȳ tells us nothing about SD(ȳ).
We'll do the best we can: estimate the population parameter σ with the sample statistic s. Our resulting standard error is SE(ȳ) = s/√n.
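As a quick illustration, here is a minimal sketch of that calculation in Python (the speed values are hypothetical, not data from the text):

```python
import numpy as np

speeds = np.array([29.8, 30.4, 31.2, 28.9, 32.6, 30.1, 31.7, 29.5])  # hypothetical sample

n = len(speeds)
s = speeds.std(ddof=1)      # sample standard deviation (divides by n - 1)
se = s / np.sqrt(n)         # standard error of the sample mean

print(f"ybar = {speeds.mean():.2f}, s = {s:.2f}, SE = {se:.2f}")
```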
Getting Started (cont.)
We now have extra variation in our standard error from s, the sample standard deviation.
We need to allow for the extra variation so that it does not mess up the margin of error and P-value, especially for a small sample.
And the shape of the sampling model changes—the model is no longer Normal. So, what is the sampling model?
Gosset's t
William S. Gosset, an employee of the Guinness Brewery in Dublin, Ireland, worked long and hard to find out what the sampling model was.
The sampling model that Gosset found has come to be known as Student's t.
The Student's t-models form a whole family of related distributions that depend on a parameter known as degrees of freedom. We often denote degrees of freedom as df and the model as t_df.
A Confidence Interval for Means?
A practical sampling distribution model for means: when the conditions are met, the standardized sample mean
t = (ȳ − μ) / SE(ȳ)
follows a Student's t-model with n − 1 degrees of freedom. We estimate the standard error with SE(ȳ) = s/√n.
A Confidence Interval for Means? (cont.)
When Gosset corrected the model for the extra uncertainty, the margin of error got bigger. Your confidence intervals will be just a bit wider and your P-values just a bit larger than they were with the Normal model.
By using the t-model, you've compensated for the extra variability in precisely the right way.
A Confidence Interval for Means? (cont.)
One-sample t-interval for the mean: when the conditions are met, we are ready to find the confidence interval for the population mean, μ. The confidence interval is
ȳ ± t* × SE(ȳ),
where the standard error of the mean is SE(ȳ) = s/√n.
The critical value t* depends on the particular confidence level, C, that you specify and on the number of degrees of freedom, n − 1, which we get from the sample size.
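A sketch of this computation (same hypothetical speeds as above; scipy's `t.interval` looks up t* and builds the interval in one call):

```python
import numpy as np
from scipy import stats

speeds = np.array([29.8, 30.4, 31.2, 28.9, 32.6, 30.1, 31.7, 29.5])  # hypothetical data

n = len(speeds)
ybar = speeds.mean()
se = speeds.std(ddof=1) / np.sqrt(n)

# 90% one-sample t-interval with n - 1 degrees of freedom
lo, hi = stats.t.interval(0.90, df=n - 1, loc=ybar, scale=se)
print(f"90% CI for mu: ({lo:.2f}, {hi:.2f})")
```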
A Confidence Interval for Means? (cont.)
Student's t-models are unimodal, symmetric, and bell shaped, just like the Normal. But t-models with only a few degrees of freedom have much fatter tails than the Normal.
A Confidence Interval for Means? (cont.)
As the degrees of freedom increase, the t-models look more and more like the Normal. In fact, the t-model with infinite degrees of freedom is exactly Normal.
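One way to see this numerically is with a small sketch (scipy assumed available; not part of the original slides) that tracks the 97.5th-percentile critical value as df grows:

```python
from scipy import stats

for df in (2, 5, 15, 40, 1000):
    print(f"df = {df:4d}: t* = {stats.t.ppf(0.975, df):.3f}")

print(f"Normal:    z* = {stats.norm.ppf(0.975):.3f}")  # the limit as df -> infinity
```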
Assumptions and Conditions
Gosset found the t-model by simulation. Years later, when Sir Ronald A. Fisher showed mathematically that Gosset was right, he needed to make some assumptions to make the proof work. We will use these assumptions when working with Student's t-model.
Assumptions and Conditions (cont.)
Independence Assumption: The data values should be independent.
Randomization Condition: The data arise from a random sample or suitably randomized experiment. Randomly sampled data (particularly from an SRS) are ideal.
10% Condition: When a sample is drawn without replacement, the sample should be no more than 10% of the population.
Assumptions and Conditions (cont.)
Normal Population Assumption: We can never be certain that the data are from a population that follows a Normal model, but we can check the Nearly Normal Condition.
Nearly Normal Condition: The data come from a distribution that is unimodal and symmetric. Check this condition by making a histogram or Normal probability plot.
Assumptions and Conditions (cont.)
Nearly Normal Condition:
The smaller the sample size (n < 15 or so), the more closely the data should follow a Normal model.
For moderate sample sizes (n between 15 and 40 or so), the t works well as long as the data are unimodal and reasonably symmetric.
For sample sizes larger than 40 or 50, the t methods are safe to use unless the data are extremely skewed.
More Cautions About Interpreting Confidence Intervals
Remember that interpretation of your confidence interval is key.
What NOT to say:
"90% of all the vehicles on Triphammer Road drive at a speed between 29.5 and 32.5 mph." The confidence interval is about the mean, not the individual values.
"We are 90% confident that a randomly selected vehicle will have a speed between 29.5 and 32.5 mph." Again, the confidence interval is about the mean, not the individual values.
More Cautions About Interpreting Confidence Intervals (cont.)
What NOT to say:
"The mean speed of the vehicles is 31.0 mph 90% of the time." The true mean does not vary—it's the confidence interval that would be different had we gotten a different sample.
"90% of all samples will have mean speeds between 29.5 and 32.5 mph." The interval we calculate does not set a standard for every other interval—it is no more (or less) likely to be correct than any other interval.
Make a Picture, Make a Picture, Make a Picture
Pictures tell us far more about our data set than a list of the data ever could.
The only reasonable way to check the Nearly Normal Condition is with graphs of the data.
Make a histogram of the data and verify that its distribution is unimodal and symmetric with no outliers. You may also want to make a Normal probability plot to see that it's reasonably straight.
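A minimal sketch of those two checks (matplotlib and scipy assumed available; `speeds` is the same hypothetical sample as above):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

speeds = np.array([29.8, 30.4, 31.2, 28.9, 32.6, 30.1, 31.7, 29.5])  # hypothetical data

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))

# Histogram: look for a unimodal, symmetric shape with no outliers
ax1.hist(speeds, bins=6, edgecolor="black")
ax1.set_title("Histogram")

# Normal probability plot: points should fall close to a straight line
stats.probplot(speeds, dist="norm", plot=ax2)
ax2.set_title("Normal probability plot")

plt.tight_layout()
plt.show()
```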
A Test for the Mean
One-sample t-test for the mean: the assumptions and conditions for the one-sample t-test for the mean are the same as for the one-sample t-interval.
We test the hypothesis H₀: μ = μ₀ using the statistic
t = (ȳ − μ₀) / SE(ȳ),
where the standard error of the sample mean is SE(ȳ) = s/√n.
When the conditions are met and the null hypothesis is true, this statistic follows a Student's t-model with n − 1 df. We use that model to obtain a P-value.
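A sketch of the test with the hypothetical speeds and a made-up null value; scipy's `ttest_1samp` reproduces the by-hand statistic and two-sided P-value:

```python
import numpy as np
from scipy import stats

speeds = np.array([29.8, 30.4, 31.2, 28.9, 32.6, 30.1, 31.7, 29.5])  # hypothetical data
mu0 = 30  # hypothesized mean, chosen only for illustration

# By hand: t = (ybar - mu0) / SE(ybar)
n = len(speeds)
se = speeds.std(ddof=1) / np.sqrt(n)
t_stat = (speeds.mean() - mu0) / se
p_value = 2 * stats.t.sf(abs(t_stat), df=n - 1)   # two-sided P-value
print(f"by hand: t = {t_stat:.3f}, P = {p_value:.3f}")

# The same test in one call
print(stats.ttest_1samp(speeds, popmean=mu0))
```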
Finding t-Values By Hand
The Student's t-model is different for each value of degrees of freedom. Because of this, Statistics books usually have one table of t-model critical values for selected confidence levels.
Finding t-Values By Hand (cont.)
Alternatively, we could use technology to find t critical values for any number of degrees of freedom and any confidence level we need. What technology could we use?
The Appendix of ActivStats on the CD
Any graphing calculator or statistics program
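For example, a statistics package can produce the critical value directly (a sketch using scipy; for a 95% interval we need the 97.5th percentile):

```python
from scipy import stats

confidence = 0.95
df = 24                                    # e.g., a sample of n = 25 (assumed)
t_star = stats.t.ppf((1 + confidence) / 2, df)
print(f"t* for {confidence:.0%} confidence and {df} df: {t_star:.3f}")
```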
Significance and Importance
Remember that "statistically significant" does not mean "actually important" or "meaningful."
Because of this, it's always a good idea when we test a hypothesis to check the confidence interval and think about likely values for the mean.
Intervals and Tests
Confidence intervals and hypothesis tests are built from the same calculations. In fact, they are complementary ways of looking at the same question.
The confidence interval contains all the null hypothesis values we can't reject with these data.
Intervals and Tests (cont.)
More precisely, a level C confidence interval contains all of the possible null hypothesis values that would not be rejected by a two-sided hypothesis test at alpha level 1 − C. So a 95% confidence interval matches a 0.05 level test for these data.
Confidence intervals are naturally two-sided, so they match exactly with two-sided hypothesis tests. When the hypothesis is one-sided, the corresponding alpha level is (1 − C)/2.
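A small numerical illustration of this duality (a sketch with the hypothetical speeds): null values inside the 95% interval give two-sided P-values above 0.05, and values outside give P-values below it.

```python
import numpy as np
from scipy import stats

speeds = np.array([29.8, 30.4, 31.2, 28.9, 32.6, 30.1, 31.7, 29.5])  # hypothetical data
n = len(speeds)
se = speeds.std(ddof=1) / np.sqrt(n)

lo, hi = stats.t.interval(0.95, df=n - 1, loc=speeds.mean(), scale=se)
print(f"95% CI: ({lo:.2f}, {hi:.2f})")

for mu0 in (lo - 0.5, (lo + hi) / 2, hi + 0.5):          # outside, inside, outside
    p = stats.ttest_1samp(speeds, popmean=mu0).pvalue
    print(f"H0: mu = {mu0:.2f}  ->  two-sided P = {p:.3f}")
```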
Sample Size
To find the sample size needed for a particular confidence level with a particular margin of error (ME), solve this equation for n:
ME = t* × s/√n   (t* from the t-model with n − 1 degrees of freedom)
The problem with using the equation above is that we don't know most of the values. We can overcome this:
We can use s from a small pilot study.
We can use z* in place of the necessary t value.
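A sketch of that calculation with made-up inputs (a pilot value of s and a target margin of error): start with z*, then refine with t* once a value of n is available.

```python
import math
from scipy import stats

s_pilot = 1.2    # standard deviation from a small pilot study (assumed)
ME = 0.5         # desired margin of error (assumed)
confidence = 0.95

# First pass: use z* in place of t*
z_star = stats.norm.ppf((1 + confidence) / 2)
n = math.ceil((z_star * s_pilot / ME) ** 2)

# Optional refinement: recompute with t* and the current n until it stabilizes
for _ in range(5):
    t_star = stats.t.ppf((1 + confidence) / 2, df=n - 1)
    n_new = math.ceil((t_star * s_pilot / ME) ** 2)
    if n_new == n:
        break
    n = n_new

print(f"required sample size: n = {n}")
```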
Sample Size (cont.)
Sample size calculations are never exact. The margin of error you find after collecting the data won't match exactly the one you used to find n.
The sample size formula depends on quantities you won't have until you collect the data, but using it is an important first step. Before you collect data, it's always a good idea to know whether the sample size is large enough to give you a good chance of learning what you want to know.
Degrees of Freedom
If only we knew the true population mean, μ, we would find the sample standard deviation as
s = √( Σ(y − μ)² / n ).
But we use ȳ instead of μ, and that causes a problem: the sum Σ(y − ȳ)² is always a bit smaller than Σ(y − μ)² would be, so the standard deviation estimate would be too small.
The amazing mathematical fact is that we can compensate for the smaller sum exactly by dividing by n − 1, which we call the degrees of freedom:
s = √( Σ(y − ȳ)² / (n − 1) ).
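A quick simulation sketch of this fact (standard Normal data, so the true variance is 1; not from the original slides): dividing the sum by n underestimates the variance on average, while dividing by n − 1 does not.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 5, 200_000

samples = rng.standard_normal((reps, n))        # true variance is 1
dev2 = (samples - samples.mean(axis=1, keepdims=True)) ** 2

print("average of sum/n    :", dev2.sum(axis=1).mean() / n)        # about 0.8 -> too small
print("average of sum/(n-1):", dev2.sum(axis=1).mean() / (n - 1))  # about 1.0
```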
What Can Go Wrong?
Don't confuse proportions and means.
Ways to Not Be Normal:
Beware multimodality. The Nearly Normal Condition clearly fails if a histogram of the data has two or more modes.
Beware skewed data. If the data are very skewed, try re-expressing the variable.
Set outliers aside—but remember to report on these outliers individually.
What Can Go Wrong? (cont.)
…And of Course:
Watch out for bias—we can never overcome the problems of a biased sample.
Make sure data are independent. Check for random sampling and the 10% Condition.
Make sure that data are from an appropriately randomized sample.
Interpret your confidence interval correctly. Many statements that sound tempting are, in fact, misinterpretations of a confidence interval for a mean. A confidence interval is about the mean of the population, not about the means of samples, individuals in samples, or individuals in the population.
What have we learned?
Statistical inference for means relies on the same concepts as for proportions—only the mechanics and the model have changed.
The reasoning of inference, the need to verify that the appropriate assumptions are met, and the proper interpretation of confidence intervals and P-values all remain the same regardless of whether we are investigating means or proportions.
Chapter 24: Comparing Means
Copyright © 2009 Pearson Education, Inc.
Plot the Data
The natural display for comparing two groups is boxplots of the data for the two groups, placed side-by-side.
Comparing Two Means
Once we have examined the side-by-side boxplots, we can turn to the comparison of two means.
Comparing two means is not very different from comparing two proportions. This time the parameter of interest is the difference between the two means, μ₁ − μ₂.
Comparing Two Means (cont.)
Remember that, for independent random quantities, variances add. So, the standard deviation of the difference between two sample means is
SD(ȳ₁ − ȳ₂) = √(σ₁²/n₁ + σ₂²/n₂).
We still don't know the true standard deviations of the two groups, so we need to estimate them and use the standard error
SE(ȳ₁ − ȳ₂) = √(s₁²/n₁ + s₂²/n₂).
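A sketch of that standard error calculation for two hypothetical, independent samples:

```python
import numpy as np

group1 = np.array([31.2, 29.8, 32.5, 30.6, 33.1, 30.9])          # hypothetical data
group2 = np.array([28.4, 29.9, 27.8, 30.2, 28.9, 29.5, 27.6])    # hypothetical data

s1, s2 = group1.std(ddof=1), group2.std(ddof=1)
n1, n2 = len(group1), len(group2)

se_diff = np.sqrt(s1**2 / n1 + s2**2 / n2)   # SE of (ybar1 - ybar2): variances add
print(f"difference = {group1.mean() - group2.mean():.2f}, SE = {se_diff:.2f}")
```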
Comparing Two Means (cont.)
Because we are working with means and estimating the standard error of their difference using the data, we shouldn't be surprised that the sampling model is a Student's t.
The confidence interval we build is called a two-sample t-interval (for the difference in means). The corresponding hypothesis test is called a two-sample t-test.
Sampling Distribution for the Difference Between Two Means
When the conditions are met, the standardized sample difference between the means of two independent groups,
t = ((ȳ₁ − ȳ₂) − (μ₁ − μ₂)) / SE(ȳ₁ − ȳ₂),
can be modeled by a Student's t-model with a number of degrees of freedom found with a special formula. We estimate the standard error with
SE(ȳ₁ − ȳ₂) = √(s₁²/n₁ + s₂²/n₂).
Assumptions and Conditions
Independence Assumption (each condition needs to be checked for both groups):
Randomization Condition: Were the data collected with suitable randomization (representative random samples or a randomized experiment)?
10% Condition: We don't usually check this condition for differences of means. We will check it for means only if we have a very small population or an extremely large sample.
Assumptions and Conditions (cont.)
Normal Population Assumption:
Nearly Normal Condition: This must be checked for both groups. A violation by either one violates the condition.
Independent Groups Assumption: The two groups we are comparing must be independent of each other. (See Chapter 25 if the groups are not independent of one another.)
Two-Sample t-Interval
When the conditions are met, we are ready to find the confidence interval for the difference between means of two independent groups. The confidence interval is
(ȳ₁ − ȳ₂) ± t* × SE(ȳ₁ − ȳ₂),
where the standard error of the difference of the means is
SE(ȳ₁ − ȳ₂) = √(s₁²/n₁ + s₂²/n₂).
The critical value t* depends on the particular confidence level, C, that you specify and on the number of degrees of freedom, which we get from the sample sizes and a special formula.
Degrees of Freedom
The special formula for the degrees of freedom for our t critical value is a bear:
df = (s₁²/n₁ + s₂²/n₂)² / [ (1/(n₁ − 1))·(s₁²/n₁)² + (1/(n₂ − 1))·(s₂²/n₂)² ]
Because of this, we will let technology calculate degrees of freedom for us!
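Letting technology do it might look like this (a sketch that simply codes the formula above, using the two hypothetical groups from before):

```python
import numpy as np

group1 = np.array([31.2, 29.8, 32.5, 30.6, 33.1, 30.9])          # hypothetical data
group2 = np.array([28.4, 29.9, 27.8, 30.2, 28.9, 29.5, 27.6])

def special_df(a, b):
    """Degrees of freedom from the 'special formula' above."""
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)   # s^2/n for each group
    return (va + vb) ** 2 / (va**2 / (len(a) - 1) + vb**2 / (len(b) - 1))

print(f"df = {special_df(group1, group2):.2f}")   # typically not a whole number
```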
Testing the Difference Between Two Means
The hypothesis test we use is the two-sample t-test for the difference between means.
The conditions for the two-sample t-test for the difference between the means of two independent groups are the same as for the two-sample t-interval.
A Test for the Difference Between Two Means
We test the hypothesis H₀: μ₁ − μ₂ = Δ₀, where the hypothesized difference Δ₀ is almost always 0, using the statistic
t = ((ȳ₁ − ȳ₂) − Δ₀) / SE(ȳ₁ − ȳ₂).
The standard error is
SE(ȳ₁ − ȳ₂) = √(s₁²/n₁ + s₂²/n₂).
When the conditions are met and the null hypothesis is true, this statistic can be closely modeled by a Student's t-model with a number of degrees of freedom given by a special formula. We use that model to obtain a P-value.
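A sketch of the test with the same hypothetical groups; scipy bundles the statistic, the special degrees of freedom, and the P-value into one call when `equal_var=False`:

```python
import numpy as np
from scipy import stats

group1 = np.array([31.2, 29.8, 32.5, 30.6, 33.1, 30.9])          # hypothetical data
group2 = np.array([28.4, 29.9, 27.8, 30.2, 28.9, 29.5, 27.6])

# Two-sample t-test without assuming equal variances
result = stats.ttest_ind(group1, group2, equal_var=False)
print(f"t = {result.statistic:.3f}, P = {result.pvalue:.4f}")
```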
Back Into the Pool
Remember that when we know a proportion, we know its standard deviation. Thus, when testing the null hypothesis that two proportions were equal, we could assume their variances were equal as well.
This led us to pool our data for the hypothesis test.
Back Into the Pool (cont.)
For means, there is also a pooled t-test. Like the two-proportions z-test, this test assumes that the variances in the two groups are equal.
But be careful: there is no link between a mean and its standard deviation…
Back Into the Pool (cont.)
If we are willing to assume that the variances of the two groups are equal, we can pool the data from the two groups to estimate the common variance and make the degrees of freedom formula much simpler.
We are still estimating the pooled standard deviation from the data, so we use Student's t-model, and the test is called a pooled t-test (for the difference between means).
*The Pooled t-Test
If we assume that the variances are equal, we can estimate the common variance from the numbers we already have:
s²_pooled = ((n₁ − 1)s₁² + (n₂ − 1)s₂²) / (n₁ + n₂ − 2).
Substituting into our standard error formula, we get:
SE_pooled(ȳ₁ − ȳ₂) = s_pooled × √(1/n₁ + 1/n₂).
Our degrees of freedom are now df = n₁ + n₂ − 2.
*The Pooled t-Test and Confidence Interval for Means
The conditions for the pooled t-test and corresponding confidence interval are the same as for our earlier two-sample t procedures, with the added assumption that the variances of the two groups are the same.
For the hypothesis test, our test statistic is
t = ((ȳ₁ − ȳ₂) − Δ₀) / SE_pooled(ȳ₁ − ȳ₂),
which has df = n₁ + n₂ − 2.
Our confidence interval is
(ȳ₁ − ȳ₂) ± t* × SE_pooled(ȳ₁ − ȳ₂).
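In scipy this corresponds to `equal_var=True`; a sketch comparing it with the by-hand pooled calculation, hypothetical data again:

```python
import numpy as np
from scipy import stats

group1 = np.array([31.2, 29.8, 32.5, 30.6, 33.1, 30.9])          # hypothetical data
group2 = np.array([28.4, 29.9, 27.8, 30.2, 28.9, 29.5, 27.6])
n1, n2 = len(group1), len(group2)

# Pooled variance and pooled standard error, as in the formulas above
sp2 = ((n1 - 1) * group1.var(ddof=1) + (n2 - 1) * group2.var(ddof=1)) / (n1 + n2 - 2)
se_pooled = np.sqrt(sp2) * np.sqrt(1 / n1 + 1 / n2)
t_stat = (group1.mean() - group2.mean()) / se_pooled
p = 2 * stats.t.sf(abs(t_stat), df=n1 + n2 - 2)
print(f"by hand: t = {t_stat:.3f}, P = {p:.4f}")

# Same test via scipy (equal_var=True pools the variances)
print(stats.ttest_ind(group1, group2, equal_var=True))
```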
Is the Pool All Wet?
So, when should you use pooled-t methods rather than two-sample t methods? Never. (Well, hardly ever.)
Because the advantages of pooling are small, and you are allowed to pool only rarely (when the equal variance assumption is met), don't. It's never wrong not to pool.
Why Not Test the Assumption That the Variances Are Equal?
There is a hypothesis test that would do this. But it is very sensitive to failures of the assumptions and works poorly for small sample sizes—just the situation in which we might care about a difference in the methods.
So the test does not work when we would need it to.
Is There Ever a Time When Assuming Equal Variances Makes Sense?
Yes. In a randomized comparative experiment, we start by assigning our experimental units to treatments at random. Each treatment group therefore begins with the same population variance.
In this case assuming the variances are equal is still an assumption, and there are conditions that need to be checked, but at least it's a plausible assumption.
What Can Go Wrong?
Watch out for paired data. The Independent Groups Assumption deserves special attention. If the samples are not independent, you can't use two-sample methods.
Look at the plots. Check for outliers and non-normal distributions by making and examining boxplots.
What have we learned?
We've learned to use statistical inference to compare the means of two independent groups.
We use t-models for the methods in this chapter. It is still important to check conditions to see if our assumptions are reasonable.
The standard error for the difference in sample means depends on believing that our data come from independent groups, but pooling is not the best choice here.
The reasoning of statistical inference remains the same; only the mechanics change.