Decision Errors and Power
When we perform a statistical test we hope that our decision will be correct, but sometimes it will be wrong. There are two types of incorrect decision, and to help distinguish them we give them specific names. The error made by rejecting the null hypothesis H0 (accepting Ha) when in fact H0 is true is called a Type I error. The probability of making a Type I error is denoted by α. The error made by accepting the null hypothesis H0 (rejecting Ha) when in fact H0 is false is called a Type II error. The probability of making a Type II error is denoted by β. The probability that a fixed level α significance test will reject H0 when a particular alternative value of the parameter is true is called the power of the test to detect that alternative.
Significance and type I error
The significance level α of any fixed level test is the probability of a Type I error. That is, α is the probability that the test will reject the null hypothesis H0 when H0 is in fact true.
Power and Type II error
The power of a fixed level test against a particular alternative is
Power = 1 − β = 1 − P(accept H0 when H0 is false) = P(reject H0 when H0 is false).
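The relationship between α, β, and power can be checked by simulation. The following sketch is not from the slides; it uses an illustrative one-sided z-test (H0: μ = 0 vs Ha: μ > 0) with hypothetical values σ = 1, n = 25, α = 0.05 and alternative μ = 0.5, and estimates the Type I error rate, power, and Type II error rate as rejection frequencies.

```python
# A minimal simulation sketch (illustrative, not from the slides): estimate the
# Type I error rate, the power, and the Type II error rate of a one-sided z-test
# of H0: mu = 0 vs Ha: mu > 0.  The values mu_a = 0.5, sigma = 1, n = 25 and
# alpha = 0.05 are hypothetical choices.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu0, mu_a, sigma, n, alpha = 0.0, 0.5, 1.0, 25, 0.05
z_crit = norm.ppf(1 - alpha)              # reject H0 when z >= z_crit

def rejection_rate(true_mu, reps=100_000):
    """Fraction of simulated samples for which the z-test rejects H0."""
    xbar = rng.normal(true_mu, sigma / np.sqrt(n), size=reps)
    z = (xbar - mu0) / (sigma / np.sqrt(n))
    return np.mean(z >= z_crit)

type_I_rate = rejection_rate(mu0)         # close to alpha = 0.05
power = rejection_rate(mu_a)              # P(reject H0 when H0 is false), near 0.80 here
type_II_rate = 1 - power                  # beta
print(type_I_rate, power, type_II_rate)
```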
Ways to increase power
Increase α. When we increase α, the strength of evidence required for rejection is less.
Consider a particular Ha that is farther away from μ0. Values of μ that are in Ha but lie close to μ0 are harder to detect (lower power) than values of μ that are far from μ0.
Increase the sample size. More data provide more information about the population, so we have a better chance of distinguishing values of μ.
Decrease σ. This has the same effect as increasing the sample size: more information about μ. Improving the measurement process and restricting attention to a subpopulation are two common ways to decrease σ.
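Each of these claims can be verified numerically. The sketch below is illustrative only: it uses the standard closed-form power of a left-tailed z-test, and the particular numbers (which echo the cola example that follows) are chosen simply to show the direction of each effect.

```python
# Illustrative sketch: closed-form power of the left-tailed z-test
# H0: mu = mu0 vs Ha: mu < mu0, used to check that raising alpha, raising n,
# or lowering sigma all raise the power.  The numbers below are arbitrary.
from math import sqrt
from scipy.stats import norm

def power_left_tailed(mu0, mu_a, sigma, n, alpha):
    """P(reject H0 when the true mean is mu_a) for the left-tailed z-test."""
    cutoff = mu0 - norm.ppf(1 - alpha) * sigma / sqrt(n)   # reject when xbar <= cutoff
    return norm.cdf((cutoff - mu_a) / (sigma / sqrt(n)))

print(power_left_tailed(300, 298, 3, 6, 0.05))    # baseline
print(power_left_tailed(300, 298, 3, 6, 0.10))    # larger alpha   -> more power
print(power_left_tailed(300, 298, 3, 12, 0.05))   # larger n       -> more power
print(power_left_tailed(300, 298, 1.5, 6, 0.05))  # smaller sigma  -> more power
print(power_left_tailed(300, 299, 3, 6, 0.05))    # mu_a closer to mu0 -> less power
```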
Example
We are interested in learning about the mean contents of cola bottles and want to test the following hypotheses: H0: μ = 300 versus Ha: μ < 300. The sample size is n = 6, and the population is assumed to have a normal distribution with σ = 3. A 5% significance test rejects H0 if z ≤ −z0.05 = −1.645, where the test statistic is z = (x̄ − 300) / (3/√6). Power calculations help us see how large a shortfall in the bottle contents the test can be expected to detect.
(a) Find the power of this test against the alternative μ = 299.
(b) Find the power against the alternative μ = 295.
(c) Is the power against μ = 290 higher or lower than the value you found in (b)? Explain why this result makes sense.
Solution
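The worked solution from this slide is not reproduced in the text. As a sketch only, the power values can be computed from the stated setup (μ0 = 300, σ = 3, n = 6, α = 0.05, left-tailed rejection region):

```python
# Sketch of the power calculations, using the setup stated on the previous
# slide (mu0 = 300, sigma = 3, n = 6, alpha = 0.05, left-tailed rejection
# region).  The slide's own worked solution is not reproduced in the text,
# so treat these values as a check rather than the official answer key.
from math import sqrt
from scipy.stats import norm

mu0, sigma, n, alpha = 300, 3, 6, 0.05
cutoff = mu0 - norm.ppf(1 - alpha) * sigma / sqrt(n)   # reject H0 when xbar <= cutoff

for mu_a in (299, 295, 290):
    power = norm.cdf((cutoff - mu_a) / (sigma / sqrt(n)))
    print(f"power against mu = {mu_a}: {power:.4f}")
# Roughly 0.20 against 299 and 0.99 against 295; against 290 the power is
# essentially 1, which is higher than (b) because 290 is farther below 300
# and therefore easier to detect.
```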
Exercise
You have an SRS of size n = 9 from a normal distribution with σ = 1. You wish to test H0: μ = 0 versus Ha: μ > 0. You decide to reject H0 if x̄ > 0 and to accept H0 otherwise.
(a) Find the probability of a Type I error, that is, the probability that your test rejects H0 when in fact μ = 0.
(b) Find the probability of a Type II error when μ = 0.3. This is the probability that your test accepts H0 when in fact μ = 0.3.
(c) Find the probability of a Type II error when μ = 1.
Answer:
(a) P(x̄ > 0 when μ = 0) = P(Z > 0) = 0.50.
(b) P(x̄ ≤ 0 when μ = 0.3) = P(Z ≤ (0 − 0.3)/(1/3)) = P(Z ≤ −0.9) = 0.1841.
(c) P(x̄ ≤ 0 when μ = 1) = P(Z ≤ (0 − 1)/(1/3)) = P(Z ≤ −3) = 0.0013.
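As a quick numerical check of the three answers, the sketch below evaluates the same normal probabilities directly; it assumes the rejection rule x̄ > 0 implied by the answer to part (a).

```python
# Quick numerical check of parts (a)-(c): n = 9, sigma = 1, and the rejection
# rule "reject H0 when xbar > 0" implied by the given answer to (a).
from math import sqrt
from scipy.stats import norm

n, sigma = 9, 1
se = sigma / sqrt(n)                       # standard error of xbar = 1/3

print(1 - norm.cdf(0, loc=0, scale=se))    # (a) Type I error: P(xbar > 0 | mu = 0)     = 0.50
print(norm.cdf(0, loc=0.3, scale=se))      # (b) Type II error: P(xbar <= 0 | mu = 0.3) ≈ 0.1841
print(norm.cdf(0, loc=1, scale=se))        # (c) Type II error: P(xbar <= 0 | mu = 1)   ≈ 0.0013
```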
Question 17, Final Exam, Dec 2000
When testing H0: μ = 5 vs Ha: μ ≠ 5 at α = 0.01 with n = 40, suppose that the probability of a Type II error (β) is equal to 0.02 when μ = 2. Which of the following statements are true?
(a) β > 0.02 when μ = 3.
(b) β > 0.02 if the sample size was 50 (at μ = 2).
(c) β > 0.02 if σ had been twice as large (at μ = 2).
(d) The power of the test at μ = 2 is 0.99.
Answer: a and c.
Exercise
A study was carried out to investigate the effectiveness of a treatment. 1000 subjects participated in the study, with 500 being randomly assigned to the "treatment group" and the other 500 to the "control (or placebo) group". A statistically significant difference was reported between the responses of the two groups (P < 0.005). State whether the following statements are true or false.
a) There is a large difference between the effects of the treatment and the placebo.
b) There is strong evidence that the treatment is very effective.
c) There is strong evidence that there is some difference in effect between the treatment and the placebo.
d) There is little evidence that the treatment has some effect.
e) The probability that the null hypothesis is true is less than 0.005.
Use and abuse of tests
The spirit of a test of significance is to give a clear statement of the degree of evidence provided by the sample against the null hypothesis. The P-value does this. There is no sharp boundary between "significant" and "not significant", only increasingly strong evidence as the P-value decreases.
When large samples are available, even tiny deviations from the null hypothesis will be significant (small P-value). A statistically significant effect need not be practically important. Always plot the data and examine them carefully; beware of outliers.
On the other hand, lack of significance does not imply that H0 is true, especially when the test has low power. When planning a study, verify that the test you plan to use has a high probability of detecting an effect of the size you hope to find.
Significance tests are not always valid
Significance tests are not always valid. Faulty data collection, outliers in the data, and testing a hypothesis on the same data that first suggested that hypothesis can invalidate a test. Many tests run at once will probably produce some significant results by chance alone, even if all the null hypotheses are true. The reasoning behind statistical significance works well if you decide what effect you are seeking, design an experiment or sample to search for it, and use a test of significance to weigh the evidence you get.
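The point about many tests run at once can be illustrated by simulation. The sketch below is not from the slides: it generates 100 data sets for which every null hypothesis is true and counts how many one-sample t tests reject at the 5% level purely by chance.

```python
# Illustrative simulation (not from the slides): run 100 one-sample t tests on
# data for which every null hypothesis is true and count how many reject at the
# 5% level purely by chance.  We expect around 5 "significant" results.
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(1)
n_tests, n_obs, alpha = 100, 30, 0.05

false_positives = 0
for _ in range(n_tests):
    sample = rng.normal(loc=0, scale=1, size=n_obs)   # H0: mu = 0 is true
    _, p_value = ttest_1samp(sample, popmean=0)
    if p_value < alpha:
        false_positives += 1

print(false_positives)
```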
Example
Suppose that the population of scores of the high school seniors who took the SAT-Verbal test this year follows a normal distribution with μ = 48 and σ = 90. A report claims that 10,000 students who took part in the national program for improving SAT-Verbal scores had a significantly better score (at the 5% level of significance) than the population as a whole. In order to determine whether the improvement is of practical significance, one should:
Find out the actual mean score for the 10,000 students.
Find out the actual P-value.
Example 6.23 on page 396 in IPS
Suppose that we are testing the hypothesis of no correlation between two variables. With 400 observations, an observed correlation of only r = 0.1 is significant evidence at the α = 0.05 level that the correlation in the population is not zero. Statistical significance here does not mean there is a strong association, only that there is some evidence of some association. This is an example where the test results are statistically significant but not practically significant.
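As a rough check of this claim (not part of the original example), the usual t statistic for testing zero correlation, t = r√(n − 2)/√(1 − r²), gives a two-sided P-value just under 0.05 for r = 0.1 and n = 400:

```python
# Rough check of the claim (not part of the original example): the standard
# t test for a correlation, t = r * sqrt(n - 2) / sqrt(1 - r^2), applied to
# r = 0.1 with n = 400 observations.
from math import sqrt
from scipy.stats import t

n, r = 400, 0.1
t_stat = r * sqrt(n - 2) / sqrt(1 - r**2)   # about 2.0
p_value = 2 * t.sf(t_stat, df=n - 2)        # two-sided P-value, just under 0.05
print(t_stat, p_value)
```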