Slide 21-2: More About Tests (Chapter 21). Created by Jackie Miller, The Ohio State University.

Slide 21-3: Zero In on the Null. To perform a hypothesis test, H0 must be a statement about the value of a parameter from a model. We then use this value to find how likely it is that we would observe something as extreme as, or more extreme than, what we did, given that H0 is true.

Slide 21-4: Zero In on the Null (cont.). How do we choose H0? The appropriate null arises directly from the context of the problem; it is not dictated by the data, but by the situation. A good way to identify the null and the alternative is to think about the Why of the situation. To write H0, you can't just choose any parameter value you like. The null must relate to the question at hand; it is context dependent.

Slide 21-5: How to Think About P-Values. A P-value is a conditional probability: the probability of getting a statistic at least as extreme as the one observed, given that the null hypothesis is true. The P-value is NOT the probability that H0 is true. (It's not even the probability that H0 is true given the data.) Be careful to interpret the P-value correctly.
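As a concrete illustration of "the probability of a statistic at least as extreme as the one observed, given that H0 is true," here is a minimal Python sketch of the P-value calculation for a one-proportion z-test. The numbers (p0 = 0.20, n = 400, x = 100) are hypothetical and chosen only for illustration; they are not from the slides.

from scipy.stats import norm

p0 = 0.20          # null hypothesis value of the proportion (hypothetical)
n, x = 400, 100    # hypothetical sample size and number of successes
p_hat = x / n      # observed sample proportion

sd0 = (p0 * (1 - p0) / n) ** 0.5   # SD of p_hat computed assuming H0 is true
z = (p_hat - p0) / sd0             # observed test statistic

p_value = 2 * norm.sf(abs(z))      # two-sided P-value: P(|Z| >= |z|) given H0
print(f"z = {z:.2f}, P-value = {p_value:.4f}")   # z = 2.50, P-value = 0.0124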

Slide 21-6: Alpha Levels. Sometimes we need to make a firm decision about whether or not to reject the null hypothesis. When the P-value is small, it tells us that our data are rare given H0. How rare is "rare"?

Slide 21-7: Alpha Levels. We can define "rare event" arbitrarily by setting a threshold (an alpha level, denoted by α) for our P-value. If our P-value falls below that point, we'll reject H0. We call such results statistically significant.

Slide 21-8: Alpha Levels (cont.). Common alpha levels are .10, .05, and .01. The alpha level is also called the significance level. (When we reject the null hypothesis, we say that the test is "significant at that level.") You need to consider your alpha level carefully and choose an appropriate one for the situation.

Slide 21-9: Alpha Levels (cont.). What can you say if the P-value does not fall below α? You should say that "the data have failed to provide sufficient evidence to reject the null hypothesis." Don't say that you "accept the null hypothesis."
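A minimal sketch of the decision rule described on the last few slides, reusing the hypothetical P-value from the earlier example; the alpha level of 0.05 is just one possible choice, not a recommendation.

alpha = 0.05        # chosen significance level (context dependent)
p_value = 0.0124    # hypothetical P-value from the earlier sketch

if p_value < alpha:
    print("Reject H0: the result is statistically significant at this level.")
else:
    print("Fail to reject H0: the data do not provide sufficient evidence.")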

Slide 21-10: Alpha Levels (cont.). Recall that, in a jury trial, if we do not find the defendant guilty, we say the defendant is "not guilty"; we don't say that the defendant is "innocent."

Slide 21-11: Alpha Levels (cont.). The P-value gives the reader far more information than just stating that you reject or fail to reject the null. In fact, by providing a P-value, you allow the reader to make his or her own decision about the test. What you consider statistically significant might not be the same as what someone else considers statistically significant. There is more than one alpha level that can be used, but each test gives only one P-value.

Slide 21-12: Critical Values Again. When making a confidence interval, we've found a critical value to correspond to our selected confidence level. Prior to the use of technology, P-values were difficult to find, and it was easier to select a few common alpha levels and learn the corresponding critical values for the Normal model.

Slide 21-13: Critical Values Again (cont.). Rather than looking up your z-score in the table, you could just check it directly against these critical values. Any z-score larger in magnitude than a particular critical value leads us to reject H0; any z-score smaller in magnitude than that critical value leads us to fail to reject H0.

Slide 21-14: Critical Values Again (cont.). Here are the traditional critical values from the Normal model:

α        1-sided    2-sided
.05      1.645      1.96
.01      2.33       2.576
.001     3.09       3.29
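These values can be reproduced, to rounding, from the inverse CDF of the standard Normal model; a minimal sketch in Python using scipy:

from scipy.stats import norm

for alpha in (0.05, 0.01, 0.001):
    z_one = norm.ppf(1 - alpha)        # all of alpha in one tail
    z_two = norm.ppf(1 - alpha / 2)    # alpha split equally between two tails
    print(f"alpha = {alpha}: 1-sided z* = {z_one:.3f}, 2-sided z* = {z_two:.3f}")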

Slide 21-15: Critical Values Again (cont.). When the alternative is one-sided, the critical value puts all of α on one side. When the alternative is two-sided, the critical value splits α equally into two tails.

Slide 21-16: Making Errors. Here's some shocking news for you: nobody's perfect. Even with lots of evidence we can still make the wrong decision. When we perform a hypothesis test, we can make mistakes in two ways: (I) the null hypothesis is true, but we mistakenly reject it (a Type I error); (II) the null hypothesis is false, but we fail to reject it (a Type II error).

Slide 21-17: Making Errors (cont.). Which type of error is more serious depends on the situation at hand; in other words, the gravity of the error is context dependent. Here's an illustration of the four situations in a hypothesis test:
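In table form, the four situations are:

                        H0 is true            H0 is false
Reject H0               Type I error          correct decision
Fail to reject H0       correct decision      Type II error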

Slide 21-18: Making Errors (cont.). Since a Type I error is rejecting a true null hypothesis, the probability of a Type I error is our α level. Determining the size of a Type II error is more difficult. (Note: the probability of a Type II error is assigned the letter β.) The difficulty in determining β is that we don't know how wrong H0 is.

Slide 21-19: Making Errors (cont.). There is a tension between Type I and Type II errors: if we increase α, we decrease β, and vice versa. The only way to reduce both types of error is to collect more evidence (i.e., collect more data).

Slide 21-20: Confidence Intervals and Hypothesis Tests. Confidence intervals and hypothesis tests are built from the same calculations, and they have the same assumptions and conditions. You can approximate a hypothesis test by examining a confidence interval: just ask whether the value in H0 is consistent with a confidence interval for the parameter at the corresponding confidence level.

Slide 21-21: Confidence Intervals and Hypothesis Tests (cont.). Because confidence intervals are two-sided, they correspond to two-sided tests. In general, a confidence interval with a confidence level of C% corresponds to a two-sided hypothesis test with an α level of (100 – C)%.
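A minimal sketch of this correspondence, reusing the hypothetical sample from earlier (n = 400, p_hat = 0.25, p0 = 0.20). Note that the agreement is approximate, because the interval uses the standard error based on p_hat while the test uses the standard deviation based on p0.

from scipy.stats import norm

n, p_hat, p0 = 400, 0.25, 0.20         # hypothetical sample and null value
conf_level = 0.95                      # corresponds to a two-sided alpha of 0.05
z_star = norm.ppf(1 - (1 - conf_level) / 2)

se = (p_hat * (1 - p_hat) / n) ** 0.5  # standard error based on p_hat
lower, upper = p_hat - z_star * se, p_hat + z_star * se
print(f"{conf_level:.0%} CI: ({lower:.3f}, {upper:.3f})")
if lower <= p0 <= upper:
    print("p0 is inside the interval: fail to reject H0 at alpha = 0.05.")
else:
    print("p0 is outside the interval: reject H0 at alpha = 0.05.")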

Slide 21-22: Power. The power of a test is the probability that it correctly rejects a false null hypothesis. Since β is the probability that a test fails to reject a false null hypothesis, the power of a test can be expressed as 1 – β. When we calculate power, we have to imagine that H0 is false.

Slide 21-23: Power (cont.). The value of the power depends on how far the truth lies from the null hypothesis value. We call the distance between the null hypothesis value, p0, and the truth, p, the effect size. The power depends directly on the effect size: it's easier to see larger effects, so the farther p0 is from p, the greater the power.
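A minimal sketch of how power grows with effect size for a two-sided one-proportion z-test, continuing the hypothetical setup (p0 = 0.20, n = 400, alpha = 0.05); the candidate true proportions are also hypothetical.

from scipy.stats import norm

p0, n, alpha = 0.20, 400, 0.05
sd0 = (p0 * (1 - p0) / n) ** 0.5      # SD of p_hat if H0 is true
z_star = norm.ppf(1 - alpha / 2)      # two-sided critical value
lower, upper = p0 - z_star * sd0, p0 + z_star * sd0   # rejection boundaries for p_hat

for p in (0.22, 0.25, 0.30):          # hypothetical true proportions
    sd = (p * (1 - p) / n) ** 0.5     # SD of p_hat under the true value p
    # Power = probability (computed under the truth p) that p_hat lands in the
    # rejection region; beta, the Type II error probability, is 1 minus this.
    power = norm.sf((upper - p) / sd) + norm.cdf((lower - p) / sd)
    print(f"effect size {p - p0:+.2f}: power = {power:.2f}")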

Slide 21-24: What Can Go Wrong? Don't change your H0 after you look at the data. Don't make what you want to show into your null hypothesis; remember, while you can "reject" the null, you can never "accept" or "prove" the null.

Slide 21-25: What Can Go Wrong? (cont.). Don't interpret the P-value as the probability that H0 is true; the P-value is about the data, not the hypothesis. Don't believe too strongly in arbitrary alpha levels; it's better to report your P-value and a confidence interval so that the reader can make her or his own decision.

Slide 21-26: What Can Go Wrong? (cont.). Don't confuse practical and statistical significance; just because a test is statistically significant doesn't mean that it is significant in practice. In spite of all your care, you might make a wrong decision. Always check the conditions; no amount of care in calculating a test can recover from a biased sample.

Slide 21-27: Key Concepts. No matter how much care we take with hypothesis testing, there are ways that we can still make mistakes. Alpha levels are arbitrary, but the P-value provides a lot of information to the reader.

