
Type I and Type II Errors

For Type I and Type II errors, we must know the null and alternate hypotheses. For the example to follow:

H0: µ = 40 (the mean of the population is 40)
Ha: µ ≠ 40 (the mean of the population is not 40)

We define P(Type I error) = α and P(Type II error) = β.

A Type I error occurs when the null hypothesis is true but our sample mean is extreme, leading us to believe that the null hypothesis is false. We establish our planned amount of Type I error when we choose α. Of course, when the null hypothesis is true, we expect to correctly fail to reject it with probability 1 − α.

We set α when we choose a level of significance. Our sampling distribution has a mean of 40 and a standard deviation of 10, based on a population standard deviation of 80 and a sample size of 64 (80/√64 = 10). The red regions represent regions of Type I error and have total probability α; in this example α is 5%. We make a Type I error when the distribution really is centered at 40, but our sample mean happens to be larger than 59.6 or smaller than 20.4.
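A quick way to check these cutoffs is to compute them directly. The sketch below is a minimal illustration, assuming SciPy is available (any normal-quantile routine would do); it reproduces the 20.4 and 59.6 critical values from the null mean, the standard error, and α.

```python
# A minimal sketch of the rejection-region calculation described above:
# null mean 40, population standard deviation 80, sample size 64.
from scipy.stats import norm

mu0 = 40                      # hypothesized population mean
sigma = 80                    # population standard deviation
n = 64                        # sample size
se = sigma / n ** 0.5         # standard error of the mean = 10

alpha = 0.05
z_crit = norm.ppf(1 - alpha / 2)            # about 1.96 for a two-sided test
lower = mu0 - z_crit * se
upper = mu0 + z_crit * se
print(round(lower, 1), round(upper, 1))     # about 20.4 and 59.6
```

Changing alpha to 0.10 in the same sketch gives critical values of about 23.6 and 56.4, the narrower "fail to reject" range that goes with the 10% significance level discussed next.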

If we lower the confidence level to 90%, we change α: here α is 10%. As you can see, we have increased the probability of Type I error.

Type II error occurs only when the null hypothesis is false; by definition, it cannot occur when the null hypothesis is true. So whenever we speak of Type II error, we are assuming that the null hypothesis is not true.

Let’s start with our hypothesized sampling distribution, N(40, 10). Now we add an alternate distribution: if the true mean is 68, our sample mean will come from the sampling distribution N(68, 10) instead of the hypothesized one.

The region shaded pink is our probability of Type I error, here 5%. The region shaded blue is the probability of Type II error. Notice that it lies under the alternate (blue) distribution.

We make a Type II error whenever the null hypothesis is false but we get a sample mean that falls into the range that causes us to fail to reject it. Let’s take a closer look: sample means between 20 and 60 (roughly the critical values of 20.4 and 59.6) will “look good” to us, so we will not reject the null hypothesis.

Now we check the alternate distribution. Are there times when sample means from this distribution will give us values between 20 and 60? In fact, there are.

To find the probability of Type II error, we find the area under the alternate distribution between 20 and 60. So the probability of Type II error is about 21%.
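That area can be checked numerically. The sketch below (again assuming SciPy) integrates the alternate sampling distribution N(68, 10) over the 20-to-60 "fail to reject" range quoted above and reproduces the roughly 21% figure.

```python
# Type II error: probability that a sample mean drawn from the alternate
# sampling distribution N(68, 10) lands in the 20-to-60 "fail to reject" range.
from scipy.stats import norm

mu_alt, se = 68, 10
beta = norm.cdf(60, loc=mu_alt, scale=se) - norm.cdf(20, loc=mu_alt, scale=se)
print(round(beta, 2))   # about 0.21, i.e. 21%
```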

That is, when the true mean is 68, there is a 21% probability that we will fail to reject the null hypothesis. How can we reduce the probability of Type II error? Examine the following figures: can you see that β is smaller now, but α is greater?

Here, with the larger α, the probability of Type II error is 12.4%.
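A hedged check of this number, assuming the figure uses α = 10%: the exact two-sided critical values become about 23.6 and 56.4, and the area of the alternate distribution N(68, 10) between them works out to about 12.4%.

```python
# Same alternate distribution N(68, 10), but alpha raised to 0.10,
# which narrows the "fail to reject" range to roughly 23.6 .. 56.4.
from scipy.stats import norm

mu0, mu_alt, se, alpha = 40, 68, 10, 0.10
z = norm.ppf(1 - alpha / 2)
lower, upper = mu0 - z * se, mu0 + z * se
beta = norm.cdf(upper, loc=mu_alt, scale=se) - norm.cdf(lower, loc=mu_alt, scale=se)
print(round(beta, 3))   # about 0.124, i.e. 12.4%
```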

Increasing α does result in a decrease in β, but this does not necessarily get you very far ahead. Suppose instead we compare against a different alternate distribution, one with a larger mean, perhaps 72 instead of 68. Would this change β?

Now we have a new alternate distribution, N(72, 10), and so a new probability: the Type II error is now 11.5%. While moving the alternate distribution farther away reduces Type II error, we cannot always do this. Another approach is to decrease the standard deviation of the sampling distribution; anything that accomplishes this has the same effect, and usually the practical way is to increase the sample size.
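The 11.5% figure can be verified the same way, keeping the 20-to-60 "fail to reject" range from before and moving the alternate mean to 72.

```python
# Same 20-to-60 "fail to reject" range, alternate mean moved from 68 to 72.
from scipy.stats import norm

beta = norm.cdf(60, loc=72, scale=10) - norm.cdf(20, loc=72, scale=10)
print(round(beta, 3))   # about 0.115, i.e. 11.5%
```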

If our sampling distributions are now N(40, 8) and N(68, 8), we can find the effect on the probability of Type II error. This also shows a reduction in Type II error; increasing the sample size will be our most effective way to minimize it.

With the decrease in the standard deviation, we see the probability of Type II error fall to 6%. Decreasing the standard deviation reduces the overlap between the two distributions, thus reducing the Type II error.
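As an aside, if the population standard deviation is still 80, a standard error of 8 corresponds to a sample size of 100 (80/√100 = 8); the slides do not state the sample size, so that is an inference. The sketch below recomputes the critical values and the Type II error for the N(40, 8) versus N(68, 8) case and reproduces the roughly 6% figure.

```python
# Standard error reduced from 10 to 8 (for example, sigma = 80 with n = 100,
# since 80 / sqrt(100) = 8); alpha back at 5%, alternate mean back at 68.
from scipy.stats import norm

mu0, mu_alt, se, alpha = 40, 68, 8, 0.05
z = norm.ppf(1 - alpha / 2)
lower, upper = mu0 - z * se, mu0 + z * se            # about 24.3 and 55.7
beta = norm.cdf(upper, loc=mu_alt, scale=se) - norm.cdf(lower, loc=mu_alt, scale=se)
print(round(beta, 2))   # about 0.06, i.e. 6%
```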

We have seen the difference between Type I and Type II errors. We set the probability of Type I error when we choose a level of significance. The probability of Type II error can be reduced by increasing α, by reducing the standard deviation (for example, by increasing the sample size), or by increasing the distance between the hypothesized and alternate means.
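As a recap, the small helper below (a sketch, not part of the original slides) computes β for each scenario in this section using exact two-sided critical values around the null mean of 40; the numbers differ slightly from the slides wherever the slides used the rounded 20-to-60 range.

```python
# Type II error for each scenario in this section, using exact two-sided
# critical values around the null mean of 40 (so the baseline comes out
# near 20% rather than the 21% quoted with the rounded 20-to-60 range).
from scipy.stats import norm

def type2_error(mu_alt, se, alpha=0.05, mu0=40):
    """P(sample mean falls in the fail-to-reject region | true mean is mu_alt)."""
    z = norm.ppf(1 - alpha / 2)
    lower, upper = mu0 - z * se, mu0 + z * se
    return norm.cdf(upper, loc=mu_alt, scale=se) - norm.cdf(lower, loc=mu_alt, scale=se)

print(round(type2_error(68, 10), 2))               # baseline: about 0.20
print(round(type2_error(68, 10, alpha=0.10), 2))   # larger alpha: about 0.12
print(round(type2_error(72, 10), 2))               # more distant alternate mean: about 0.11
print(round(type2_error(68, 8), 2))                # smaller standard error: about 0.06
```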

THE END