1
Week 2 – PART III POST-HOC TESTS
2
POST HOC TESTS When we get a significant F test result in an ANOVA for a main effect of a factor with more than two levels, this tells us we can reject H0, i.e. the samples are not all from populations with the same mean. We can then use post hoc tests to tell us which groups differ from which others.
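As a minimal illustration (not part of the original slides; groups and scores are invented), the overall F test could be run in Python with SciPy. A significant result only licenses rejecting H0, it does not say which groups differ:

```python
# Minimal sketch with made-up data: a one-way ANOVA across three groups.
# A significant F says only that the means are not all equal; it does not
# say which particular groups differ -- that is the job of post hoc tests.
from scipy import stats

group_a = [4.1, 5.0, 4.6, 5.3, 4.8]   # hypothetical scores
group_b = [5.9, 6.4, 6.1, 5.7, 6.3]
group_c = [4.3, 4.9, 4.5, 5.1, 4.7]

f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")   # p < .05 -> reject H0
```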
3
POST HOC TESTS There are a number of tests which can be used. SPSS has them in the ONEWAY and General Linear Model procedures. SPSS does post hoc tests on repeated measures factors within the Options menu.
4
Sample data
5
Post Hoc test button
6
Select desired test
7
ANOVA Table
8
Post Hoc Tests
9
Choice of post-hoc test There are many different post hoc tests, making different assumptions about equality of variance, group sizes etc. The simplest is the Bonferroni procedure
10
Bonferroni Test –first decide which pairwise comparisons you will wish to test (with reasonable justification) –get SPSS to calculate t-tests for each comparison –set your significance criterion alpha to be .05 divided by the total number of tests made
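A hedged sketch of the same procedure outside SPSS (group names and scores are invented): run the planned pairwise t-tests and compare each p-value against .05 divided by the number of tests made.

```python
# Bonferroni procedure, independent groups (hypothetical data).
from itertools import combinations
from scipy import stats

groups = {"A": [4.1, 5.0, 4.6, 5.3, 4.8],
          "B": [5.9, 6.4, 6.1, 5.7, 6.3],
          "C": [4.3, 4.9, 4.5, 5.1, 4.7]}

pairs = list(combinations(groups, 2))   # the comparisons decided on in advance
alpha = 0.05 / len(pairs)               # Bonferroni-adjusted criterion

for g1, g2 in pairs:
    t, p = stats.ttest_ind(groups[g1], groups[g2])
    print(f"{g1} vs {g2}: t = {t:.2f}, p = {p:.4f}, "
          f"significant at adjusted alpha: {p < alpha}")
```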
11
Bonferroni test –repeated measures factors are best handled this way –ask SPSS to do related t-tests between all possible pairs of means –only accept results that are significant below .05/k as being reliable (where k is the number of comparisons made)
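The repeated measures version differs only in using related (paired) t-tests; a sketch with invented condition names and five hypothetical subjects:

```python
# Bonferroni procedure, repeated measures factor (hypothetical data):
# related t-tests between all pairs of conditions, judged against .05/k.
from itertools import combinations
from scipy import stats

conditions = {"time1": [4.1, 5.0, 4.6, 5.3, 4.8],   # the same five subjects
              "time2": [4.9, 5.6, 5.1, 5.8, 5.2],   # measured under each
              "time3": [5.4, 6.1, 5.7, 6.4, 5.9]}   # condition

pairs = list(combinations(conditions, 2))
k = len(pairs)                                      # number of comparisons

for c1, c2 in pairs:
    t, p = stats.ttest_rel(conditions[c1], conditions[c2])
    print(f"{c1} vs {c2}: t = {t:.2f}, p = {p:.4f}, reliable: {p < 0.05 / k}")
```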
12
PLANNED COMPARISONS/ CONTRASTS It may happen that there are specific hypotheses which you plan to test in advance, beyond the general rejection of the set of null hypotheses
13
PLANNED COMPARISONS For example: –a) you may wish to compare each of three patient groups with a control group –b) you may have a specific hypothesis about some subgroup of your design –c) you may predict that the means of the four groups of your design will be in a particular order
14
PLANNED COMPARISONS Each of these can be tested by specifying them beforehand - hence planned comparisons. The hypotheses should be orthogonal - that is independent of each other
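As a quick check (contrast coefficients invented for illustration), two contrasts over the same groups are orthogonal when the products of their coefficients sum to zero, assuming equal group sizes:

```python
# Orthogonality check for two planned contrasts over four groups.
c1 = [1, 1, -1, -1]    # e.g. groups 1+2 vs groups 3+4
c2 = [1, -1, 0, 0]     # e.g. group 1 vs group 2
print(sum(a * b for a, b in zip(c1, c2)))   # 0 -> the contrasts are orthogonal
```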
15
PLANNED COMPARISONS To compute the comparisons, calculate a t-test: take the difference in means and divide by the standard error, estimated from MS within in the ANOVA table
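A sketch of that calculation with invented means and an invented MS within (taken off the ANOVA table), for a simple group-1-versus-group-2 contrast:

```python
# Planned comparison computed by hand (all numbers hypothetical).
import math
from scipy import stats

means = [4.76, 6.08, 4.70]    # group means
n = [5, 5, 5]                 # group sizes
ms_within = 0.11              # MS within from the ANOVA table
df_within = 12                # degrees of freedom of MS within
c = [1, -1, 0]                # contrast: group 1 vs group 2

contrast = sum(ci * mi for ci, mi in zip(c, means))
se = math.sqrt(ms_within * sum(ci**2 / ni for ci, ni in zip(c, n)))
t = contrast / se
p = 2 * stats.t.sf(abs(t), df_within)    # two-tailed p-value
print(f"t({df_within}) = {t:.2f}, p = {p:.4f}")
```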
16
TEST OF LINEAR TREND – planned contrast For more than 2 levels, we might predict a constantly increasing change across levels of a factor. In this case we can try fitting a model to the data with the constraint that the means of each condition are in a particular rank order, and that they are equally spaced.
17
TEST OF LINEAR TREND The Between Group Sum of Squares is then partitioned into two components: –the best fitting straight line model through the group means –the deviation of the observed group means from this model
18
TEST OF LINEAR TREND The linear trend component will have one degree of freedom, corresponding to the slope of the line. Deviation from linearity will have (k-2) df, where k is the number of levels. Each of these components can be tested, using the Within SS, to see whether it is significant.
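A rough sketch of the partition with invented data for a four-level factor with equal group sizes; the linear contrast coefficients below are the standard ones for k = 4 and are not taken from the slides:

```python
# Partition the Between Groups SS into a linear trend component (1 df)
# and deviation from linearity (k-2 df), each tested against MS within.
import numpy as np
from scipy import stats

groups = [np.array([2.1, 2.5, 1.9, 2.4]),    # level 1 (hypothetical scores)
          np.array([3.0, 3.4, 2.8, 3.2]),    # level 2
          np.array([3.9, 4.3, 3.7, 4.1]),    # level 3
          np.array([5.2, 4.8, 5.1, 4.9])]    # level 4
k, n = len(groups), len(groups[0])           # equal group sizes assumed

means = np.array([g.mean() for g in groups])
grand_mean = np.concatenate(groups).mean()

ss_between = n * np.sum((means - grand_mean) ** 2)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
df_within = k * (n - 1)
ms_within = ss_within / df_within

c = np.array([-3, -1, 1, 3])                 # linear contrast, k = 4
ss_linear = n * (c @ means) ** 2 / (c @ c)   # 1 df
ss_deviation = ss_between - ss_linear        # k - 2 df

f_linear = ss_linear / ms_within
f_deviation = (ss_deviation / (k - 2)) / ms_within
print(f"linear:    F(1,{df_within}) = {f_linear:.2f}, "
      f"p = {stats.f.sf(f_linear, 1, df_within):.4f}")
print(f"deviation: F({k-2},{df_within}) = {f_deviation:.2f}, "
      f"p = {stats.f.sf(f_deviation, k - 2, df_within):.4f}")
```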
19
TEST OF LINEAR TREND If there is a significant linear trend and a non-significant deviation from linearity, then the linear model is a good one. For k > 3, the same process can be done for a quadratic trend: a parabola is fitted to the means. For example, you may be testing a hypothesis that as dosage level increases, the measure initially rises and then falls (or vice versa).
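For the quadratic case only the contrast coefficients change; a small self-contained sketch with invented group means, group size and MS within, using the standard quadratic coefficients for k = 4:

```python
# Quadratic trend sketch (hypothetical numbers): the quadratic contrast is
# applied to the group means and tested against MS within, exactly as the
# linear component was.
import numpy as np

means = np.array([2.2, 3.1, 4.0, 5.0])   # hypothetical group means
n, ms_within = 4, 0.05                    # group size and MS within (invented)

c_quad = np.array([1, -1, -1, 1])         # quadratic contrast, k = 4
ss_quad = n * (c_quad @ means) ** 2 / (c_quad @ c_quad)
f_quad = ss_quad / ms_within              # 1 df, tested against MS within
print(f"quadratic F = {f_quad:.2f}")
```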
20
TEST OF LINEAR TREND