
Type I and Type II Errors

An analogy may help us to understand two types of errors we can make with inference. Consider the judicial system in the US. There is an obvious goal of convicting guilty people and acquitting innocent ones. When the system works that’s what happens. Sometimes, however, the system fails; the guilty go free or the innocent go to jail.

Consider this square:

                     Truth
  Action         Innocent        Guilty
  Convict        Type I error    Correct
  Acquit         Correct         Type II error

One side represents the truth, the other represents the action we take. In truth, a person is innocent or guilty. In action, we may convict or acquit. When we convict the guilty or acquit the innocent, the system works. When we convict the innocent or acquit the guilty, it fails, and we call these failures Type I and Type II errors.

We do not consider the two types of errors to be equivalent. We especially try to avoid convicting innocent people and decide on procedures and rules to prevent that. As times change we may change the rules and change the probabilities of these two types of errors. As one type goes down, the other goes up. More rarely, an improvement in technique, such as with DNA technology, results in a decrease in both types of errors.

Now back to statistics:

                         Truth
  Action               H₀ is true      H₀ is false
  Reject H₀            Type I error    Correct
  Fail to reject H₀    Correct         Type II error

One side of our square represents truth, the other action. In truth, a null hypothesis is true or false. In action, we reject or fail to reject H₀. When we reject a false H₀ or fail to reject a true one, our procedure works. When we reject a true H₀ or fail to reject a false one, it fails, and we again call these failures Type I and Type II errors.

For Type I and Type II errors to exist, we must have null and alternative hypotheses. For the example to follow:

H₀: µ = 40 (the mean of the distribution is 40)
Hₐ: µ ≠ 40 (the mean of the distribution is not 40)

We can then define:

P(Type I error) = α
P(Type II error) = β

When we choose a fixed level of significance we set α. Here the null sampling distribution has a mean of 40 and a standard deviation of 10. The shaded regions represent regions of Type I error and together have probability α. In this example α is 5% (a 95% confidence level). We make a Type I error when the distribution really is centered at 40, but our sample mean happens to be larger than 60 or smaller than 20.
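The cutoffs and tail probability above can be checked numerically. Here is a minimal sketch using Python's standard-library `statistics.NormalDist`; note the exact two-sided 5% cutoffs are about 20.4 and 59.6, which the slides round to 20 and 60:

```python
from statistics import NormalDist

# Null sampling distribution assumed throughout the slides: N(40, 10)
null = NormalDist(mu=40, sigma=10)

# Two-sided test at alpha = 5%: put 2.5% in each tail
lower = null.inv_cdf(0.025)   # about 20.4 (the slides round to 20)
upper = null.inv_cdf(0.975)   # about 59.6 (the slides round to 60)

# When H0 is true, the chance of landing in either tail is alpha
alpha = null.cdf(lower) + (1 - null.cdf(upper))
print(lower, upper, alpha)
```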

If we lower the confidence level to 90%, we change α. Here α is 10%. As you can see, we have increased the probability of Type I error.

A Type II error can occur only when the null hypothesis is false; by definition it cannot occur when the null hypothesis is true. Whenever we speak of Type II error, then, we are assuming the null hypothesis is false.

Let's start with our hypothetical (null) distribution, N(40, 10). Now consider an alternate distribution: suppose our samples actually come from N(68, 10) instead of the hypothetical distribution. The slide shows both distributions together.

The region shaded pink is our probability of Type I error, here 5%. The region shaded blue is the probability of Type II error. Notice that it lies under the alternate (blue) distribution.

We make a Type II error whenever the null hypothesis is false, but we get a sample mean that falls into a range that causes us to fail to reject the null hypothesis. Let's take a closer look: sample means between 20 and 60 will "look good" to us, so we will not reject the null hypothesis.

Now we check the alternate distribution. Are there times when sample means from this distribution will give us values between 20 and 60? In fact, there are.

To find the probability of Type II error we find the area under the alternate curve between 20 and 60. Here that area is 21%, so the probability of Type II error is 21%.
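The 21% figure can be reproduced with the same standard-library tool, keeping the slides' rounded cutoffs of 20 and 60:

```python
from statistics import NormalDist

# Alternate distribution from the slides: N(68, 10)
alt = NormalDist(mu=68, sigma=10)

# Fail-to-reject region (slides' rounded cutoffs): sample means in [20, 60]
beta = alt.cdf(60) - alt.cdf(20)
print(round(beta, 2))   # about 0.21, i.e. 21%
```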

That is, when the true mean is 68, there is a 21% probability that we will fail to reject the null hypothesis. How can we reduce the probability of Type II error? Examine the following figures: can you see that β is smaller now, but α is larger?

Here the probability of Type II error is 12.4%.
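The 12.4% figure comes from the cutoffs moving inward when α rises to 10%. A sketch using exact cutoffs from `statistics.NormalDist`:

```python
from statistics import NormalDist

null = NormalDist(mu=40, sigma=10)
alt = NormalDist(mu=68, sigma=10)

# Two-sided alpha = 10%: the upper cutoff moves in from ~59.6 to ~56.4
lower, upper = null.inv_cdf(0.05), null.inv_cdf(0.95)
beta = alt.cdf(upper) - alt.cdf(lower)
print(round(beta, 3))   # about 0.124, i.e. 12.4%
```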

Increasing  does result in a decrease in . This does not necessarily get you very far ahead. Suppose we could have a different alternate distribution. Suppose we could make it have a larger mean, perhaps 72 instead of 68. Would this change  ?

Now we have a new alternate distribution, N(72, 10), and so a new probability: the Type II error is now 11.5%. While moving the alternate distribution further away reduces Type II error, usually we cannot do this for practical reasons. Another approach is to decrease the standard deviation of the sampling distribution; any way we can accomplish this has the same effect, and in practice the usual way is to change the sample size.
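The 11.5% figure for the shifted alternative can be verified the same way, again keeping the slides' rounded cutoffs of 20 and 60:

```python
from statistics import NormalDist

# Hypothetical shifted alternate distribution from the slides: N(72, 10)
alt = NormalDist(mu=72, sigma=10)
beta = alt.cdf(60) - alt.cdf(20)
print(round(beta, 3))   # about 0.115, i.e. 11.5%
```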

If our sampling distributions are now N(40, 8) and N(68, 8), we can find the effect on the probability of Type II error. This also shows a reduction in Type II error; increasing sample size will be our most effective way to minimize it.

With a decrease in the standard deviation we see the probability of Type II error decrease to 6%. Decreasing the standard deviation reduces the amount of overlap between the two distributions, thus reducing the Type II error.
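The 6% figure quoted above can be checked by recomputing the cutoffs under the tighter sampling distributions:

```python
from statistics import NormalDist

null = NormalDist(mu=40, sigma=8)
alt = NormalDist(mu=68, sigma=8)

# Two-sided alpha = 5% with the smaller sigma: cutoffs tighten to ~24.3 and ~55.7
lower, upper = null.inv_cdf(0.025), null.inv_cdf(0.975)
beta = alt.cdf(upper) - alt.cdf(lower)
print(round(beta, 2))   # about 0.06, i.e. 6%
```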

We have seen the difference between Type I and Type II errors. We set the probability of Type I error when we choose a level of significance. The probability of Type II error can be reduced by increasing α, by reducing the standard deviation (perhaps by increasing sample size), or by increasing the distance between the hypothetical and alternate means.
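The three levers in this summary can be collected into one small helper. This is a sketch for a two-sided z-test with known σ; the function name and defaults are illustrative, not from the slides. The power of the test against a specific alternative is then 1 − β.

```python
from statistics import NormalDist

def type_ii_error(mu0, mu_alt, sigma, alpha=0.05):
    """Beta for a two-sided z-test of H0: mu = mu0 against a specific alternative mean."""
    null = NormalDist(mu0, sigma)
    alt = NormalDist(mu_alt, sigma)
    lower, upper = null.inv_cdf(alpha / 2), null.inv_cdf(1 - alpha / 2)
    return alt.cdf(upper) - alt.cdf(lower)

base = type_ii_error(40, 68, 10)
print(type_ii_error(40, 68, 10, alpha=0.10) < base)  # increasing alpha lowers beta
print(type_ii_error(40, 72, 10) < base)              # a more distant alternative lowers beta
print(type_ii_error(40, 68, 8) < base)               # a smaller sigma lowers beta
```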

THE END