Hypothesis Testing

Where Am I?
- Wake up after a rough night in unfamiliar surroundings. Still in Boulder?
- Expected if in Boulder (large likelihood)
- Couldn't happen IF in Boulder (likelihood near zero) → can't be in Boulder
- Surprising but not impossible (moderate likelihood)

Steps of Hypothesis Testing
1. State clearly the two hypotheses.
2. Determine which is the null hypothesis (H0) and which is the alternative hypothesis (H1).
3. Compute a relevant test statistic from the sample.
4. Find the likelihood function of the test statistic according to the null hypothesis.
5. Choose the alpha level (α): how willing you are to abandon the null (usually .05).
6. Find the critical value: the cutoff with probability α of being exceeded under H0.
7. Compare the actual result to the critical value:
   - Less than the critical value → retain the null hypothesis.
   - Greater than the critical value → reject the null hypothesis; accept the alternative hypothesis.
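
A minimal sketch of these steps in Python; the sample values, the hypothesized mean of 100, and α = .05 are hypothetical choices for illustration, not from the slides:

    import numpy as np
    from scipy import stats

    # Hypothetical sample (assumed for illustration) and null value
    sample = np.array([104, 98, 111, 102, 95, 107, 100, 109])
    mu_0 = 100            # H0: mu = 100;  H1: mu != 100
    alpha = 0.05          # chosen Type I error rate

    # Compute the test statistic (one-sample t)
    n = len(sample)
    t_stat = (sample.mean() - mu_0) / (sample.std(ddof=1) / np.sqrt(n))

    # Find the critical value from the t distribution under H0 (two-tailed cutoff)
    t_crit = stats.t.ppf(1 - alpha / 2, n - 1)

    # Compare the result to the critical value
    if abs(t_stat) > t_crit:
        print(f"|t| = {abs(t_stat):.2f} > {t_crit:.2f}: reject H0")
    else:
        print(f"|t| = {abs(t_stat):.2f} <= {t_crit:.2f}: retain H0")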

Specifying Hypotheses
- Both hypotheses are statements about population parameters.
- Null Hypothesis (H0)
  - Always more specific, e.g. 50% chance, mean of 100
  - Usually the less interesting, "default" explanation
- Alternative Hypothesis (H1)
  - More interesting – the researcher's goal is usually to support the alternative hypothesis
  - Less precise, e.g. > 50% chance, μ > 100

Test Statistic
- A statistic computed from the sample to decide between the hypotheses.
- Relevant to the hypotheses being tested:
  - Based on the mean if the hypotheses are about means
  - Based on the number correct (frequency) if the hypotheses are about probability correct
- Its sampling distribution according to the null hypothesis must be fully determined:
  - It can only depend on the data and on values assumed by H0
- Often a complex formula with little intuitive meaning.
- An inferential statistic: only used in testing reliability.
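
A brief sketch of matching the statistic to the hypotheses; all of the numbers below are hypothetical:

    import numpy as np

    # Hypotheses about a mean: base the statistic on the sample mean.
    # Hypothetical values: M = 104, mu_0 = 100, sigma = 15, n = 25.
    M, mu_0, sigma, n = 104.0, 100.0, 15.0, 25
    z = (M - mu_0) / (sigma / np.sqrt(n))   # standard normal under H0, so fully determined

    # Hypotheses about probability correct: base the statistic on the frequency.
    # Hypothetical values: 17 correct out of 20 trials; H0 says p = 0.5.
    num_correct = 17                        # binomial(20, 0.5) under H0
    print(z, num_correct)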

Likelihood Function
- The probability distribution of a statistic according to a hypothesis.
- Gives the probability of obtaining any possible result.
- Usually interested in the distribution of the test statistic according to the null hypothesis.
- Same as the sampling distribution, assuming the population is accurately described by the hypothesis.
- The test statistic is chosen because we know its likelihood function:
  - Binomial test: binomial distribution
  - t-test: t distribution
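
For example, the two likelihood functions named above can be evaluated directly; the 20 trials and 19 degrees of freedom here are assumptions for illustration:

    from scipy import stats

    # Binomial test: likelihood function under H0 (p = 0.5, 20 hypothetical trials)
    p_17_correct = stats.binom.pmf(17, 20, 0.5)   # probability of exactly 17 correct if guessing

    # t-test: the likelihood function of the test statistic under H0 is a t distribution
    density_at_2 = stats.t.pdf(2.0, 19)           # height of the t(19) density at t = 2.0
    print(p_17_correct, density_at_2)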

Critical Value
- The cutoff for the test statistic between retaining and rejecting the null hypothesis:
  - If the test statistic is beyond the critical value, the null will be rejected.
  - Otherwise, the null will be retained.
- Decided before collecting data: what strength of evidence will you require to reject the null?
  - How many correct outcomes?
  - How big a difference between M and μ0, relative to σM?
- Critical region: the range of values that will lead to rejecting the null hypothesis (all values beyond the critical value).
[Figure: probability distributions of the frequency statistic and of t, with the critical region marked]
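
A short sketch of finding a critical value before the data are collected; the one-tailed test, α = .05, and df = 19 are assumptions for illustration:

    from scipy import stats

    alpha = 0.05   # strength of evidence required, chosen before collecting data
    df = 19        # hypothetical: one-sample t with n = 20

    # Cutoff exceeded with probability alpha under H0 (one tail)
    t_crit = stats.t.ppf(1 - alpha, df)           # ~1.73

    # Critical region: all values beyond the critical value
    t_observed = 2.4                              # hypothetical result
    print(t_crit, t_observed > t_crit)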

Types of Errors
- Goal: reject the null hypothesis when it's false; retain it when it's true.
- Two ways to be wrong:
  - Type I Error: the null is correct but you reject it.
  - Type II Error: the null is false but you retain it.
- Type I Error rate
  - IF H0 is true, the probability of mistakenly rejecting H0.
  - The proportion of false theories we conclude are true, e.g. the proportion of useless treatments that are deemed effective.
- The logic of hypothesis testing is founded on controlling the Type I Error rate: set the critical value to give the desired Type I Error rate.
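
One way to see that the critical value controls the Type I error rate is a small simulation. This sketch assumes a hypothetical population with μ0 = 100 and σ = 15 and a two-tailed one-sample t-test with n = 25:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    alpha, n, n_sims = 0.05, 25, 10_000
    t_crit = stats.t.ppf(1 - alpha / 2, n - 1)

    def rejection_rate(true_mean):
        """Fraction of simulated samples whose |t| exceeds the critical value."""
        rejections = 0
        for _ in range(n_sims):
            sample = rng.normal(loc=true_mean, scale=15, size=n)   # hypothetical sigma = 15
            t = (sample.mean() - 100) / (sample.std(ddof=1) / np.sqrt(n))
            rejections += abs(t) > t_crit
        return rejections / n_sims

    print("Type I error rate (H0 true, mu = 100): ", rejection_rate(100))
    print("Type II error rate (H0 false, mu = 110):", 1 - rejection_rate(110))

With H0 true, the rejection rate should land near .05; with the hypothetical true mean of 110 the test rejects far more often, and the remaining fraction is the Type II error rate.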

Alpha Level
- The choice of acceptable Type I Error rate; usually .05 in psychology.
  - Higher α → more willing to abandon the null hypothesis.
  - Lower α → stronger evidence required before abandoning the null hypothesis.
- Determines the critical value: under the sampling distribution of the test statistic according to the null hypothesis, the probability of a result beyond the critical value is α.
[Figure: sampling distribution of the test statistic from H0, with the critical value cutting off an upper-tail area of α]
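
A quick illustration of how the alpha level determines the critical value; df = 19 is an assumed example:

    from scipy import stats

    # Lower alpha -> stronger evidence required -> the critical value moves outward
    for alpha in (0.10, 0.05, 0.01):
        print(alpha, stats.t.ppf(1 - alpha, 19))   # one-tailed cutoffs: ~1.33, ~1.73, ~2.54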

Doping Analogy
- Measure athletes' blood for signs of doping: cheaters have high RBCs, but even honest people vary.
- What rule to use? Must set some cutoff and punish anyone above it, which will inevitably punish some innocent people.
- The H0 likelihood function is like the distribution of innocent athletes' RBCs.
- The cutoff determines the fraction of innocent people who get unfairly punished; this fraction is alpha.
[Figure: distribution of innocent athletes' RBC, with the cutoff dividing "don't punish" from "punish"]

Power
- Type II Error rate
  - IF H0 is false, the probability of failing to reject it.
  - E.g., the fraction of cheaters that don't get caught.
- Power
  - IF H0 is false, the probability of correctly rejecting it; equal to one minus the Type II Error rate.
  - E.g., the fraction of cheaters that get caught.
- Power depends on sample size: choose a sample size that gives adequate power.
- Researchers must make a guess at the effect size to compute power.
[Figure: overlapping H0 and H1 distributions showing the Type I error rate (α), the Type II error rate, and power]
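
A sketch of a power calculation for a one-sample z-test; the guessed effect size of 0.5 standard deviations and the candidate sample sizes are hypothetical planning values:

    from scipy import stats

    effect_size = 0.5      # guessed difference in standard-deviation units
    alpha = 0.05

    for n in (10, 25, 50, 100):
        z_crit = stats.norm.ppf(1 - alpha / 2)        # two-tailed cutoff under H0
        shift = effect_size * (n ** 0.5)              # where the test statistic is centered under H1
        power = stats.norm.sf(z_crit - shift) + stats.norm.cdf(-z_crit - shift)
        print(f"n = {n:3d}  power = {power:.2f}")

With these assumed values, power rises from roughly .35 at n = 10 to about .70 at n = 25 and above .9 by n = 50, which is why sample size is chosen with a target power in mind.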

Two-Tailed Tests
- Sometimes we want to detect effects in either direction, e.g. drugs that help or drugs that hurt.
- Formalized in the alternative hypothesis: μ < μ0 or μ > μ0.
- Two critical values, one in each tail.
- The Type I error rate is the sum from both critical regions, so the errors must be divided between the tails: each gets α/2 (2.5%).
[Figure: t distribution with rejection regions of area α/2 beyond -tcrit and +tcrit]
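
A short sketch of the two-tailed decision rule; α = .05, df = 19, and the observed t are hypothetical:

    from scipy import stats

    alpha, df = 0.05, 19

    # Each tail gets alpha/2, so the cutoffs are -t_crit and +t_crit
    t_crit = stats.t.ppf(1 - alpha / 2, df)        # ~2.09 for df = 19

    t_observed = -2.3                              # hypothetical result: effect in the negative direction
    print("reject H0:", abs(t_observed) > t_crit)  # detected even though the effect is "backwards"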

One-Tailed vs. Two-Tailed Tests
[Figure: a one-tailed test places all of α beyond a single tcrit; a two-tailed test places α/2 beyond each of -tcrit and +tcrit]

An Alternative View: p-values
- A reversed approach to hypothesis testing: after you collect the sample and compute the test statistic, how big would α have to be to reject H0?
- p-value
  - A measure of how consistent the data are with H0: the probability of a value equal to or more extreme than what you actually got.
  - Large p-value → H0 is a good explanation of the data; small p-value → H0 is a poor explanation of the data.
  - p > α: retain the null hypothesis.
  - p < α: reject the null hypothesis; accept the alternative hypothesis.
- Researchers generally report p-values, because then the reader can choose their own alpha level.
  - E.g., t = 2.15 gives "p = .03": if willing to allow a 5% error rate, accept the result as reliable; if more stringent, say 1% (α = .01), remain skeptical.
[Figure: t distribution with critical values for α = .05, .03, and .01 and the observed t = 2.15]
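
A sketch of computing and reporting a p-value. The degrees of freedom are not given on the slide, so a large hypothetical value is assumed here; it reproduces roughly p = .03 for t = 2.15:

    from scipy import stats

    t_observed = 2.15
    df = 200                 # hypothetical degrees of freedom (not given on the slide)

    p_value = 2 * stats.t.sf(abs(t_observed), df)   # two-tailed: area beyond +/- t_observed
    print(f"p = {p_value:.2f}")                     # ~0.03

    for alpha in (0.05, 0.01):                      # readers apply their own alpha level
        print(f"alpha = {alpha}: {'reject' if p_value < alpha else 'retain'} H0")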