Advanced Research Methods in Psychology - lecture - Matthew Rockloff: Scheffé Post-Hoc Comparisons


1 Advanced Research Methods in Psychology - lecture - Matthew Rockloff Scheffé Post-Hoc Comparisons

2 When to use a Scheffé Posthoc comparison … 1 Whenever an ANOVA model is used to examine the differences among more than 2 groups, a posthoc procedure can be used to compare the differences between all pairs of means. Posthoc comparisons are very similar to t-tests. However, posthoc comparisons are more appropriate for multiple tests, because they help control Type I error.

3 When to use a Scheffé Posthoc comparison … 2 A Type I error is the chance of wrongly declaring a difference between means significant when no real difference exists. In a t-test, this chance is controlled to be at most 5%. In other words, we only accept 2 means as significantly different if the p-value from the t-test is less than “.05.”

4 When to use a Scheffé Posthoc comparison … 3 In a study that has several groups, we could do several t-tests to compare all the differences between means. However, each t-test has a separate 5% chance of reaching a wrong conclusion by falsely declaring 2 means significantly different. When we do several tests, the chance of making at least one wrong conclusion grows dramatically.

5 When to use a Scheffé Posthoc comparison … 4 To understand why, consider a simple toss of a fair coin. Each toss has a 50% chance of yielding “heads.” What if we tossed the coin 100 times? What is the chance of having at least one “heads”? Pretty darn likely!
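(A worked version of this intuition, not on the original slide: if each of c tests is carried out independently with a Type I error rate of α, then

\[
P(\text{at least one Type I error}) = 1 - (1 - \alpha)^{c}.
\]

For example, three independent tests at α = .05 give 1 − (.95)³ ≈ .14, and 100 coin tosses give 1 − (.5)¹⁰⁰, which is essentially 1. The pairwise tests in a real study are not fully independent, so this is only an approximation, but it captures why the error rate inflates.)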

6 When to use a Scheffé Posthoc comparison … 5 In the same way, the more t-tests we perform, the more likely we’ll get at least one conclusion wrong. Fortunately, posthoc comparisons have been constructed to adjust for this problem. They are more conservative than t-tests and control for Type I error.

7 Example 8.1 We’ll return once again to the diet example. However, this time there are 3 different diets: pizza, beer and cream. The outcome is weight gain. The research question follows: “Is there any difference in weight gain between the 3 diets?”

8 Example 8.1 (cont.) The first step is to create an ANOVA table similar to our previous example. See last week’s example for a detailed explanation of how to calculate a Oneway ANOVA. However, a brief hand-worked solution is provided on the next slide 

9 Example 8.1 (cont.) [Data table: individual weight gains for the Pizza, Beer and Cream conditions, with the sample variance s²xj for each group; n = 5 per group, N = 15 in total. Calculations continued on the next slide.]

10 Example 8.1 (cont.) ANOVA table (SV = source of variance, SS = sum of squares, df = degrees of freedom, MS = mean square, F = F-ratio, CV = critical value):

SV     SS    df    MS      F       CV     Reject?
BTW    40     2    20.00   30.00   3.89   Yes
WITH    8    12     0.67
TOT    48    14
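For reference, the entries in this table follow directly from the group means reported later in the lecture (Pizza = 2.00, Beer = 4.00, Cream = 6.00), n = 5 per group, and the total sum of squares of 48:

\[
SS_{\text{BTW}} = \sum_j n_j (\bar{X}_j - \bar{X})^2 = 5\,[(2-4)^2 + (4-4)^2 + (6-4)^2] = 40
\]
\[
SS_{\text{WITH}} = SS_{\text{TOT}} - SS_{\text{BTW}} = 48 - 40 = 8, \qquad
MS_{\text{BTW}} = \tfrac{40}{2} = 20, \qquad
MS_{\text{WITH}} = \tfrac{8}{12} \approx 0.67
\]
\[
F = \frac{MS_{\text{BTW}}}{MS_{\text{WITH}}} = \frac{20}{0.67} \approx 30.00,
\quad \text{compared with the critical value } F_{.05}(2, 12) \approx 3.89.
\]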

11 Example 8.1 (cont.) Before we can do posthoc comparisons, we must interpret the ANOVA table. Since we now have 3 conditions, the conclusion we make is slightly different: There was model significance for the ANOVA, F(2,12) = 30.00, p <.05, indicating at least one significant difference among the means.

12 Example 8.1 (cont.) The ANOVA table provides us with what is called the “omnibus F-test” or likewise “model significance.” In the Oneway ANOVA, this test shows that there is at least one significant difference between a pair of means. Of course, there are 3 unique pairs of means in our example. The omnibus F-test does not indicate which pairs are significantly different. Therefore, we need a posthoc comparison to examine the difference between all pairs.
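(Not on the original slide, but useful bookkeeping: with k groups there are k(k − 1)/2 unique pairs of means, so for k = 3:)

\[
\binom{3}{2} = \frac{3 \times 2}{2} = 3 \quad \text{(Pizza-Beer, Pizza-Cream, Beer-Cream).}
\]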

13 The Scheffé posthoc calculation by hand There are several types of posthoc comparisons, most of which are named after dead statisticians (e.g., Tukey’s HSD, Fisher’s LSD and Duncan’s Multiple Range Test). Scheffé is the most conservative of the bunch. It controls the experiment-wide error rate to 5%; in other words, there is at most a 5% chance of making any Type I errors using the procedure. While this is desirable, it also makes the Scheffé procedure less sensitive.

14 The Scheffé posthoc calculation by hand (cont.) Using Scheffé, we are more exposed to making the Type II error of not identifying significant differences when they do exist. Other procedures make a different tradeoff between exposure to Type I and Type II error. As such, no procedure is “better” than another, only different in its exposure to this natural tradeoff.

15 The Scheffé posthoc calculation by hand (cont.) The first step in using the Scheffé procedure involves calculating the differences between all pairs of means: L1 = M(Pizza) − M(Beer), L2 = M(Pizza) − M(Cream), L3 = M(Beer) − M(Cream). In our example: L1 = 2 − 4 = −2, L2 = 2 − 6 = −4, L3 = 4 − 6 = −2

16 The Scheffé posthoc calculation by hand (cont.) Next, we need to compare these differences to Scheffé’s critical “S” value: S = √[(k − 1) × Fα × MSwithin × (1/ni + 1/nj)], where k is the number of groups, ni and nj are the sizes of the two groups being compared, and Fα is the critical value from the ANOVA table (see previous calculation).

17 The Scheffé posthoc calculation by hand (cont.) In our example, the critical “S” is: S = √[(3 − 1) × 3.89 × 0.67 × (1/5 + 1/5)] ≈ 1.44. Since the absolute value of each difference (L1, L2 and L3) exceeds this critical value, we can declare a significant difference between all pairs of means. Summarizing our findings from the ANOVA and the posthoc comparisons, we conclude: (see next slide)

18 We conclude … There was model significance for the ANOVA, F(2,12) = 30.00, p <.05, indicating at least one significant difference among the means. Scheffé posthoc comparisons showed that weight gain was higher in the Cream condition (M = 6.00) than the Beer condition (M = 4.00), p <.05 (two-tailed). In turn, weight gain was lower in the Pizza condition (M = 2.00) than either the Beer condition, p <.05 (two-tailed), or the Cream condition, p <.05 (two-tailed).

19 Alternative conclusion … The conclusion may still be somewhat hard to read. Therefore, an alternative conclusion can refer the reader to a figure that illustrates the differences: See next two slides

20 Alternative conclusion: pt 1 … There was model significance for the ANOVA, F(2,12) = 30.00, p <.05, indicating at least one significant difference among the means. In addition, a Scheffé posthoc comparison showed that all means were significantly different, p <.05 (two-tailed). Weight gain was highest in the Cream condition, followed by the Beer condition and the Pizza condition (see figure 8.1). (See next slide.)

21 Alternative conclusion: pt 2 … [Figure 8.1: bar graph of mean weight gain by diet condition]

22 Example 8.1 Using SPSS As before, we need to set up variables in SPSS: Independent variable = diet (1 = Pizza, 2 = Beer, 3 = Cream); Dependent variable = wtgain (weight gain)

23 Example 8.1 Using SPSS (cont.) The data is entered into the SPSS data view as demonstrated opposite 
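(The data view screenshot is not reproduced in this transcript. As a rough sketch, the equivalent setup in SPSS syntax would look something like the following; the raw weight-gain values are not listed in the slides, so only the variable definitions are shown, and the labels are assumptions based on the coding above.)

* Label the variables and the diet codes (assumed from the previous slide).
VARIABLE LABELS diet 'Diet condition' wtgain 'Weight gain'.
VALUE LABELS diet 1 'Pizza' 2 'Beer' 3 'Cream'.
EXECUTE.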

24 Example 8.1 Using SPSS (cont.) The SPSS syntax includes commands for both the Oneway ANOVA, and an additional command to graph the results:
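(The syntax window itself is not shown in this transcript. A minimal sketch of commands that would reproduce the analysis described here, a Oneway ANOVA on wtgain by diet with Scheffé posthoc tests plus a bar chart of the group means, is:)

* Oneway ANOVA with Scheffé posthoc comparisons.
ONEWAY wtgain BY diet
  /STATISTICS DESCRIPTIVES
  /POSTHOC=SCHEFFE ALPHA(0.05).

* Bar chart of mean weight gain by diet condition.
GRAPH
  /BAR(SIMPLE)=MEAN(wtgain) BY diet.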

25 Example 8.1 Using SPSS (cont.) The results appear in the SPSS output viewer: This ANOVA table simply reproduces the Omnibus test illustrated previously.

26 Example 8.1 Using SPSS (cont.) The table above compares all possible mean pairs of the three conditions. As indicated above, there are only three unique pairs, so this table is repetitive in sections (i.e., Pizza-Beer p-value = Beer-Pizza p-value).

27 Example 8.1 Using SPSS (cont.) Condition means which are significantly different (p <.05) appear in separate columns. In our example, the means for each condition appear in separate columns. The table opposite is a simple representation of which “means” are significantly different from one another.

28 Example 8.1 Using SPSS (cont.) The graph opposite is simply a reproduction of the figure illustrated earlier. It can be modified within SPSS to improve its appearance and make it conform to APA style.

29 The conclusion (again) Our conclusions can be restated given the exact probabilities provided in SPSS: There was model significance for the ANOVA, F(2,12) = 30.00, p <.05, indicating at least one significant difference among the means. Scheffé posthoc comparisons showed that weight gain was higher in the Cream condition (M = 6.00) than the Beer condition (M = 4.00), p =.01 (two-tailed). In turn, weight gain was lower in the Pizza condition (M = 2.00) than either the Beer condition, p =.01 (two-tailed), or the Cream condition, p <.01 (two-tailed).

30 Advanced Research Methods in Psychology - lecture - Matthew Rockloff Scheffé Post-Hoc Comparisons Thus concludes 