REVIEW OF TERMINOLOGY
- Statistics
- Parameters
- Critical region
- "Obtained" test statistic
- "Critical" test statistic
- Alpha / confidence level


SIGNIFICANCE TESTING
Old way:
- Find the "critical value" of the test statistic: the point at which the odds of obtaining that statistic under the null are less than alpha.
- Compare your obtained test statistic with the critical test statistic.
- If your obtained value is greater than your critical value, reject the null: the odds of finding that "obtained" value are less than alpha (5%, 1%) if the null is true.
SPSS:
- Look at the "sig" value (a.k.a. the "p" value): assuming the null is true, there is an X percent chance of obtaining this test statistic.
- If it is less than alpha, reject the null.
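The two decision rules above can be sketched in Python with scipy. The data here are made up purely for illustration; `ttest_ind` supplies the obtained t and its exact two-tailed p ("sig") value:

```python
# Two equivalent ways to make the significance-testing decision,
# shown with a two-sample t-test on invented data.
from scipy import stats

group_a = [3.1, 2.8, 3.5, 3.9, 2.6, 3.3, 3.0, 3.7]
group_b = [2.2, 2.5, 1.9, 2.8, 2.1, 2.4, 2.6, 2.0]
alpha = 0.05

t_obtained, p_value = stats.ttest_ind(group_a, group_b)

# "Old way": compare the obtained t to the critical t for this alpha and df.
df = len(group_a) + len(group_b) - 2
t_critical = stats.t.ppf(1 - alpha / 2, df)      # two-tailed critical value
reject_old_way = abs(t_obtained) > t_critical

# "SPSS way": compare the exact p ("sig") value to alpha.
reject_spss_way = p_value < alpha

# The two rules always agree.
assert reject_old_way == reject_spss_way
```

Either route leads to the same decision; SPSS simply saves you the table lookup by reporting the exact probability.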

T-TESTS
1-sample t-test (univariate t-test):
- Compares the sample mean of a single I/R variable to a known population mean.
- Assumes knowledge of the population mean (rare).
2-sample t-test (bivariate t-test):
- Compares two sample means (very common).
- Dummy IV and I/R dependent variable: the difference between means across categories of the IV.
- Example: Do males and females differ in the number of hours spent watching TV?
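A minimal sketch of both flavors with scipy; the TV-hours numbers and the population mean of 4.0 are invented for illustration:

```python
# 1-sample vs. 2-sample t-tests on made-up TV-hours data.
from scipy import stats

# 1-sample: compare a sample mean to a known population mean (here mu = 4.0).
tv_hours = [3.5, 4.2, 2.8, 5.1, 3.9, 4.4, 3.2, 4.8]
t1, p1 = stats.ttest_1samp(tv_hours, popmean=4.0)

# 2-sample: compare means across the two categories of a dummy IV
# (hours of TV for males vs. females).
males   = [3.5, 4.2, 2.8, 5.1, 3.9, 4.4]
females = [2.9, 3.1, 2.4, 3.8, 2.6, 3.3]
t2, p2 = stats.ttest_ind(males, females)
```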

THE T DISTRIBUTION
- Unlike z, the t distribution changes with sample size (technically, with degrees of freedom).
- As sample size increases, the t distribution becomes more and more "normal."
- At df = 120, t critical values are almost exactly the same as z critical values.
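This convergence is easy to verify numerically with scipy's t and normal quantile functions:

```python
# As df grows, t's critical values shrink toward z's (two-tailed, alpha = .05).
from scipy import stats

alpha = 0.05
z_crit = stats.norm.ppf(1 - alpha / 2)                 # about 1.96
t_crits = {df: stats.t.ppf(1 - alpha / 2, df) for df in (5, 30, 120)}
# t_crits[5] is about 2.57; by df = 120 the value is about 1.98, nearly z.
```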

T AS A "TEST STATISTIC"
- All test statistics indicate how different our finding is from what is expected under the null.
  - The mean difference under the null hypothesis = 0, so t indicates how different our finding is from zero.
- There is an exact "sig" or "p" value associated with every value of t.
  - SPSS generates the exact probability associated with your obtained t.

THE T-SCORE IS "MEANINGFUL"
- The numerator (top half) of the equation measures the difference between means.
- The denominator converts (standardizes) that difference into "standard errors" rather than the original metric.
  - Imagine mean differences in "yearly income" versus "# of cars owned in a lifetime": the metrics are so different that a raw difference of "2" would mean very different things, so the differences cannot be compared directly.
- t = the number of standard errors that separates the means.
  - One sample: x̄ versus µ
  - Two sample: x̄ (males) versus x̄ (females)
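The "number of standard errors" idea can be checked by computing a two-sample t by hand (pooled-variance form) and comparing it to scipy's result; the scores below are invented for illustration:

```python
# t is the mean difference expressed in standard errors: a hand computation
# matching scipy's equal-variance two-sample t.
import math
from scipy import stats

x1 = [52, 48, 55, 50, 47, 53]   # e.g., one group (hypothetical scores)
x2 = [44, 46, 41, 45, 43, 47]   # e.g., the other group

n1, n2 = len(x1), len(x2)
mean1 = sum(x1) / n1
mean2 = sum(x2) / n2

# Pooled variance, then the standard error of the difference between means.
ss1 = sum((x - mean1) ** 2 for x in x1)
ss2 = sum((x - mean2) ** 2 for x in x2)
pooled_var = (ss1 + ss2) / (n1 + n2 - 2)
se_diff = math.sqrt(pooled_var * (1 / n1 + 1 / n2))

t_by_hand = (mean1 - mean2) / se_diff    # how many SEs apart the means sit
t_scipy, _ = stats.ttest_ind(x1, x2)
assert abs(t_by_hand - t_scipy) < 1e-9
```

Because the difference is restated in standard errors, a t of the same size means the same thing whether the raw units were dollars or cars.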

T-TESTING IN SPSS
Analyze → Compare Means → Independent-Samples T Test
- You must define the categories of the IV (the dummy variable): how were the categories numerically coded?
Output:
- Group Statistics = the mean values.
- Levene's test: not very important here; if it is significant, use the t value and sig value from the "equal variances not assumed" row.
- t = "t obtained." There is no need to find "t critical," because SPSS gives you "sig," the exact probability of obtaining that t under the null.

SPSS T-TEST EXAMPLE
Independent-Samples T Test output: testing the H0 that there is no difference in GPA between white and nonwhite UMD students. Is there a difference in the sample?
[Group Statistics table: GPA by Race (nonwhite, white) — columns for N, Mean, Std. Deviation, Std. Error Mean]

INTERPRETING SPSS OUTPUT
Difference in GPA across race?
- Obtained t value?
- Degrees of freedom?
- Obtained p value?
- Specific meaning of the p value?
- Reject the null?
[Independent Samples Test table: Levene's Test for Equality of Variances (F, Sig.), then t, df, Sig. (2-tailed), and Mean Difference for GPA, with "equal variances assumed" and "equal variances not assumed" rows]

SPSS AND 1-TAIL / 2-TAIL
- SPSS only reports "2-tailed" significance tests.
- To obtain a 1-tailed test, simply divide the "sig value" in half:
  - Sig. (2-tailed) = .10 → Sig. (1-tailed) = .05
  - Sig. (2-tailed) = .03 → Sig. (1-tailed) = .015
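The same halving can be seen by computing both tail probabilities directly from a t value (the t and df below are arbitrary):

```python
# Deriving the 1-tailed sig value from the 2-tailed one.
from scipy import stats

t_obtained, df = 1.70, 60
sig_two_tailed = 2 * stats.t.sf(abs(t_obtained), df)   # what SPSS reports
sig_one_tailed = stats.t.sf(abs(t_obtained), df)       # exactly half of it
```

Note the halving is only appropriate when the observed difference lies in the direction your one-tailed hypothesis predicted.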

FACTORS IN THE PROBABILITY OF REJECTING H0 FOR T-TESTS
1. The size of the observed difference (a larger difference produces a larger obtained t).
2. The alpha level (a smaller alpha requires a larger obtained t to reject the null).
3. The use of one- or two-tailed tests (two-tailed tests make it harder to reject the null).
4. The size of the sample (a larger N produces larger t values).

ANALYSIS OF VARIANCE
- What happens if you have more than two means to compare?
- IV (grouping variable) = more than two categories. Examples:
  - Risk level (low, medium, high)
  - Race (white, black, Native American, other)
- DV → still I/R (a mean).

ANOVA = F-TEST
- The purpose is very similar to the t-test's; HOWEVER, it computes the test statistic "F" instead of "t."
- It does this using different logic, because you cannot calculate a single distance between three or more means.

ANOVA
- Why not use multiple t-tests? Error compounds at every stage → the probability of making an error gets too large.
- The F-test is therefore EXPLORATORY.
- The independent variable can be any level of measurement. This is technically true, but ANOVA is most useful when the number of categories is limited (e.g., 3-5).

HYPOTHESIS TESTING WITH ANOVA
A different route to calculating the test statistic. Two key concepts for understanding ANOVA:
- SSB: between-group variation (sum of squares)
- SSW: within-group variation (sum of squares)
ANOVA compares these two types of variance: the greater the SSB relative to the SSW, the more likely that the null hypothesis (of no difference among the sample means) can be rejected.

TERMINOLOGY CHECK
- Sum of squares = sum of squared deviations from the mean = Σ(Xᵢ − X̄)²
- Variance = sum of squares divided by sample size = Σ(Xᵢ − X̄)² / N = "mean square"
- Standard deviation = the square root of the variance = s
ALL THREE INDICATE LEVEL OF "DISPERSION."
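A quick numeric check of the three definitions, on made-up scores, using the population form of the variance (divide by N, as in the formula above):

```python
# Sum of squares, variance ("mean square"), and standard deviation.
import math

x = [4, 8, 6, 5, 3, 7]
n = len(x)
mean = sum(x) / n                                   # 5.5

sum_of_squares = sum((xi - mean) ** 2 for xi in x)  # 17.5
variance = sum_of_squares / n                       # the "mean square"
std_dev = math.sqrt(variance)                       # s
```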

THE F RATIO
Indicates the variance between the groups relative to the variance within the groups:

    F = mean square (variance) between / mean square (variance) within

- Between-group variance tells us how different the groups are from each other.
- Within-group variance tells us how different or alike the cases are as a whole sample.
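The F ratio can be computed by hand from SSB and SSW and checked against scipy's one-way ANOVA; the three small groups below are invented for illustration:

```python
# F by hand: between-group mean square over within-group mean square,
# verified against scipy.stats.f_oneway.
from scipy import stats

groups = [[3, 4, 5, 4], [1, 2, 2, 3], [5, 6, 4, 5]]
scores = [x for g in groups for x in g]
grand_mean = sum(scores) / len(scores)
k, n = len(groups), len(scores)

group_means = [sum(g) / len(g) for g in groups]

# SSB: each group mean's squared distance from the grand mean, weighted by group size.
ssb = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, group_means))
# SSW: each score's squared distance from its own group mean.
ssw = sum((x - m) ** 2 for g, m in zip(groups, group_means) for x in g)

msb = ssb / (k - 1)          # between-group mean square
msw = ssw / (n - k)          # within-group mean square
f_by_hand = msb / msw

f_scipy, p = stats.f_oneway(*groups)
assert abs(f_by_hand - f_scipy) < 1e-9
```

Note the mean squares divide each sum of squares by its degrees of freedom (k − 1 between, N − k within), which is what makes the two variance estimates comparable.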

ANOVA EXAMPLE
Recidivism, measured as the mean number of crimes committed in the year following release from custody. 90 individuals randomly receive one of the following sentences:
- Prison (mean = 3.4)
- Split sentence: prison & probation (mean = 2.5)
- Probation only (mean = 2.9)
These groups have different means, but ANOVA tells you whether the differences are statistically significant: bigger than they would be due to chance alone.
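In practice the whole test is a single call. The counts below are hypothetical, chosen only to roughly echo the slide's group means; they are not the actual 90-person data:

```python
# One-way ANOVA on three invented sentence groups.
from scipy import stats

prison    = [3, 4, 3, 4, 3]   # mean 3.4
split     = [2, 3, 2, 3, 2]   # mean 2.4 (roughly the slide's 2.5)
probation = [3, 3, 2, 4, 3]   # mean 3.0 (roughly the slide's 2.9)

f_stat, p_value = stats.f_oneway(prison, split, probation)
reject_null = p_value < 0.05   # reject only if differences exceed chance
```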

# OF NEW OFFENSES: DEMO OF BETWEEN- & WITHIN-GROUP VARIANCE
[Histograms, built up across three slides: GREEN = probation (mean = 2.9); then BLUE = split sentence (mean = 2.5); then RED = prison (mean = 3.4)]

# OF NEW OFFENSES: WHAT WOULD LESS "WITHIN-GROUP VARIATION" LOOK LIKE?
[Histogram: GREEN = probation (mean = 2.9); BLUE = split sentence (mean = 2.5); RED = prison (mean = 3.4), with narrower spread within each group]

ANOVA EXAMPLE, CONTINUED
- Differences (variance) between groups are also called "explained variance" (explained by the different sentences the groups received).
- Differences within groups (how much individuals within the same group vary) are referred to as "unexplained variance": differences among individuals in the same group can't be explained by the different "treatment" (e.g., type of sentence).

THE F STATISTIC
- When there is more within-group variance than between-group variance, we are essentially saying that there is more unexplained than explained variance.
- In this situation, we always fail to reject the null hypothesis.
- This is why the F(critical) table (Healey Appendix D) has no values < 1.

SPSS EXAMPLE
- Example: 1994 county-level data (N = 295).
- Sentencing outcomes (prison versus other [jail or noncustodial sanction]) for convicted felons.
- Breakdown of counties by region: [table not reproduced in the transcript]

SPSS EXAMPLE
- Question: Is there a regional difference in the percentage of felons receiving a prison sentence?
- Null hypothesis (H0): There is no difference across regions in the mean percentage of felons receiving a prison sentence.
- Mean percentages by region: [table not reproduced in the transcript]

SPSS EXAMPLE
- These results show that we can reject the null hypothesis that there is no regional difference among the 4 sample means: the differences between the samples are large enough to reject H0.
- The F statistic tells you there is almost 20 times more between-group variance than within-group variance.
- The number under "Sig." is the exact probability of obtaining this F by chance.
- (Recall that "mean square," as labeled in the output, is a.k.a. "variance.")

ANOVA: POST HOC TESTS
- The ANOVA test is exploratory: it ONLY tells you that there are significant differences between means, not WHICH means differ.
- Post hoc ("after the fact") tests:
  - Use them when the F statistic is significant.
  - Run them in SPSS to determine which of the (3+) means are significantly different.
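An LSD-style post hoc amounts to ordinary pairwise t-tests with no multiplicity correction. A sketch with invented groups (not the deck's actual output):

```python
# Pairwise follow-up tests after a significant F: which pairs of means differ?
from itertools import combinations
from scipy import stats

groups = {
    "prison":    [5, 6, 5, 7, 6],   # hypothetical scores
    "split":     [3, 4, 3, 4, 4],
    "probation": [3, 3, 4, 3, 4],
}

results = {}
for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
    t, p = stats.ttest_ind(a, b)    # one t-test per pair of groups
    results[(name_a, name_b)] = p

significant_pairs = [pair for pair, p in results.items() if p < 0.05]
```

With these numbers, prison differs from both other groups while split and probation do not differ from each other, which is exactly the kind of pattern the SPSS post hoc table is read for.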

OUTPUT: POST HOC TEST
- This post hoc test shows that 5 of the 6 mean differences are statistically significant (at the alpha = .05 level).
- (In the table, numbers with the same colors highlight duplicate comparisons.)
- The p value (the information in the "Sig." column) tells us whether the difference between a given pair of means is statistically significant.

ANOVA IN SPSS
Steps to get the correct output:
1. ANALYZE → COMPARE MEANS → ONE-WAY ANOVA.
2. Insert the independent variable in the box labeled "Factor:" and the dependent variable in the box labeled "Dependent List:".
3. Click on "Post Hoc" and choose "LSD."
4. Click on "Options" and choose "Descriptive."
You can ignore the last table (headed "Homogeneous Subsets") that this procedure will give you.