First example
Solution Any treatment effect will appear in patterns of MEANS in the table, not VARIANCES. Main effects show themselves in differences among the MARGINAL MEANS. We already have a respectable candidate, so we choose C.
Second example
Drawing the graph
Multiple line graph The Category Axis is the horizontal scale.
This will produce a Right-handedness line and a Left-handedness line.
The handedness profiles
An interaction No main effect of either factor is evident.
But the effect of Handedness reverses its direction from Task 1 to Task 3. There is a striking interaction pattern. So C is the right answer.
Adding a main effect Create a new variable ScoreTwo by adding 30 points to each of the scores of the left-handed group. This will add a main effect of the Handedness factor to the data; the interaction between Task and Handedness will remain.
Interaction plus main effect
Separation of the lines
The lines no longer cross. Their non-parallelism indicates an interaction. Interactions and main effects are quite separate and either can occur without the other.
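The separability of interactions and main effects can be checked numerically. A minimal sketch (the cell means are hypothetical, chosen to mimic the crossing Handedness × Task profiles): adding a constant to one row of the table of means adds a main effect but leaves the interaction deviations exactly as they were.

```python
import numpy as np

def interaction_effects(m):
    """Deviations of the cell means from the additive, main-effects-only model."""
    grand = m.mean()
    row = m.mean(axis=1, keepdims=True)   # row (e.g. Handedness) effects
    col = m.mean(axis=0, keepdims=True)   # column (e.g. Task) effects
    return m - row - col + grand

# Hypothetical cell means: rows = Right/Left handedness, cols = Tasks 1-3
cells = np.array([[10.0, 20.0, 30.0],
                  [30.0, 20.0, 10.0]])

shifted = cells.copy()
shifted[1] += 30  # add 30 to the left-handed row: a Handedness main effect

# The interaction pattern is identical before and after the shift
print(np.allclose(interaction_effects(cells), interaction_effects(shifted)))  # True
```

A constant shift to one row is absorbed entirely by the row (main-effect) term, which is why the non-parallelism of the profiles is untouched.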
A three-factor experiment
When another factor (in this case Gender) is added to an experimental design, all THREE factors may interact.
The results
Simple two-way interactions
A two-way interaction at one level of a third factor is known as a SIMPLE TWO-WAY INTERACTION. In the Male scores, the profiles are not parallel, indicating the presence of a simple Drug×Alertness interaction. In the Female data, the profiles are parallel: there is no simple Drug×Alertness interaction.
Three-way interaction
A THREE-WAY INTERACTION is said to occur when the simple interactions between two of the factors are not homogeneous across all levels of a third factor. As with main effects and two-way interactions, a three-way interaction can occur irrespective of whether other effects are present.
Summary table for the 3-way ANOVA
Some observations There are now three kinds of effect: MAIN EFFECTS, TWO-WAY INTERACTIONS and the THREE-WAY INTERACTION. The same error term, MSwithin, is used for all the F-tests; it is the mean of the cell variances, as in the two-way ANOVA.
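The statement that MSwithin is the mean of the cell variances can be verified directly; it holds exactly when every cell contains the same number of scores. A sketch with hypothetical equal-n cells:

```python
from statistics import mean, variance

# Hypothetical scores: four equal-n cells of a factorial design
cells = [
    [10, 12, 11, 13],
    [14, 15, 13, 14],
    [20, 19, 21, 22],
    [16, 18, 17, 15],
]

ms_within = mean(variance(c) for c in cells)  # mean of the cell variances

# The same value via the pooled route: SSwithin / dfwithin
ss_within = sum(sum((x - mean(c)) ** 2 for x in c) for c in cells)
df_within = sum(len(c) - 1 for c in cells)

print(abs(ms_within - ss_within / df_within) < 1e-9)  # True
```

With unequal cell sizes the pooled SSwithin/dfwithin form is the one to use; the simple mean of variances no longer agrees with it.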
Warning You can have as many factors as you like in your experimental design. But I suggest you have no more than three. A three-way interaction can be difficult enough to explain; a four-way interaction is often impossible to explain. KEEP YOUR DESIGN SIMPLE!
Lecture 4 WITHIN SUBJECTS EXPERIMENTS
Between subjects experiments
ALL factors are between subjects. We have considered three types of between subjects experiment: the one-factor between subjects experiment; the two-factor between subjects factorial experiment; and the three-factor between subjects factorial experiment. Each participant (subject) is tested only once, either (as in the one-factor experiment) under one CONDITION or (as in factorial experiments) under one COMBINATION of conditions.
Within subjects experiments
In a WITHIN SUBJECTS experiment, each participant is tested at ALL levels of EVERY treatment factor. In a within subjects experiment, ALL factors are within subjects. Such an experiment is also said to have REPEATED MEASURES on all of its treatment factors.
Stroop interference You are asked not to READ the words, but to NAME the COLOUR of the print each time. It’s difficult to do, because the natural tendency is to read the word. The more words there are, the more difficult it gets.
A within subjects experiment
Stroop (1935) tested the SAME PARTICIPANTS under all three conditions. So each person was serving as his or her own control.
The results Participants took longer to name colours than to read colour words printed in black and white (harder than reading, which was the easiest task). But they took MUCH longer to name the print colours of the colour words in the 'conflicting' condition, the hardest task.
The dependent and independent variables
The dependent variable is the time taken by the participant (subject) to go through all the items on a card, either naming the print colour of each item or (in the reading condition) reading the words. The independent variable (or factor) is the condition under which the task is attempted. This Stroop factor has THREE levels.
The experimental design
Since each subject (participant) performs under all three conditions, i.e., at all 'levels' of the Stroop factor, this is a ONE-FACTOR WITHIN SUBJECTS experiment.
Comparing designs In the within subjects experiment, you can obtain each participant’s mean score across all conditions. You can’t do that in the between subjects experiment.
Subjects (participants) as a factor in the experiment
Since you can obtain each participant’s mean score, you can have ‘subjects’ as a source in the ANOVA summary table. This will leave a smaller estimate of error variance known as the RESIDUAL.
Sums of squares and degrees of freedom
Recall that, for many purposes, what is true of sums of squares SS is also true of their degrees of freedom df. For example, in the one-way ANOVA, the PARTITION OF THE TOTAL SUM OF SQUARES is SStotal = SSbetween + SSwithin We can also write dftotal = dfbetween + dfwithin
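The partition can be confirmed numerically. A sketch with hypothetical one-way data (three groups of four scores):

```python
from statistics import mean

# Hypothetical one-way data: three groups of four scores
groups = [
    [10, 12, 11, 13],
    [14, 15, 13, 14],
    [20, 19, 21, 22],
]

scores = [x for g in groups for x in g]
grand = mean(scores)

ss_total = sum((x - grand) ** 2 for x in scores)
ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
ss_within = sum(sum((x - mean(g)) ** 2 for x in g) for g in groups)

df_total, df_between = len(scores) - 1, len(groups) - 1
df_within = sum(len(g) - 1 for g in groups)

print(abs(ss_total - (ss_between + ss_within)) < 1e-9)  # True
print(df_total == df_between + df_within)               # True
```

Here df_total = 11 splits into df_between = 2 and df_within = 9, mirroring the split of the sums of squares.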
Suppose … Suppose the experimental design were between subjects, rather than within subjects. Suppose we had three different groups of four people (12 participants in all), each group being tested under one condition. A one-way ANOVA would be appropriate.
A comparison of the ANOVAs
Comparing the within subjects ANOVA table with the one-way ANOVA table: in the within subjects analysis, the Within degrees of freedom (9) have been split into Subjects and Residual components. What is true of the df is also true of the SS.
Partition of SStotal: N = nk, where N is the total number of scores, n is the number of subjects per group (one-way ANOVA) or the number of subjects (within subjects ANOVA), and k is the number of groups or treatments.
Implications For a given number of observations, the error term in the within subjects ANOVA will be smaller than it would have been for the between subjects (one-way) ANOVA. In the within subjects ANOVA, the main effect of individual differences in average ability is factored out.
Advantages of within subjects experiments
The within subjects experiment has several advantages over the between subjects experiment. You cannot argue that people of different abilities were tested under different conditions. You gather as much data from fewer participants. You make maximum use of the participants while they are available.
Order effects On the other hand, the performance of a task may be affected by the order in which the tasks are performed. Perhaps it’s easier to name the colours of non-word objects after you have tried the conflicting colour-word task than after you have tried the reading task? If all participants were to perform the three tasks in the same order, therefore, the results might be confounded with ORDER EFFECTS.
Counterbalancing We vary the order of presentation, so that each task is presented equally often in First, Second and Third positions. This procedure is known as COUNTERBALANCING.
Cyclic permutation Counterbalancing can be achieved by CYCLIC PERMUTATION. A, B, C and D represent the different conditions in the experiment. The condition on the right is continually moved to the leftmost position each time, so that all the other conditions ‘slide’ to the right. Each condition occurs in the same serial position with approximately equal frequency.
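Generating the cyclically permuted orders is mechanical. A sketch (condition labels A–D as on the slide):

```python
def cyclic_orders(conditions):
    """Generate k presentation orders by repeatedly moving the rightmost
    condition to the front, so the other conditions 'slide' to the right."""
    order = list(conditions)
    orders = []
    for _ in range(len(order)):
        orders.append(order[:])
        order = [order[-1]] + order[:-1]
    return orders

for o in cyclic_orders(["A", "B", "C", "D"]):
    print(o)
# ['A', 'B', 'C', 'D']
# ['D', 'A', 'B', 'C']
# ['C', 'D', 'A', 'B']
# ['B', 'C', 'D', 'A']
```

Across one full cycle, each condition occupies each serial position exactly once; frequencies become approximately equal when the number of participants is not a multiple of the number of conditions.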
Homogeneity of covariance
The ANOVA model for within subjects experiments assumes that the data have a property known as SPHERICITY or HOMOGENEITY OF COVARIANCE. This property is tested for by the MAUCHLY SPHERICITY TEST. (There are other tests too.)
Mauchly test… In the Mauchly test, the null hypothesis is that the data have 'sphericity'. So you hope that the null hypothesis will be retained: the p-value should be high, certainly greater than .05. If the Mauchly test shows significance, you have to adjust the degrees of freedom for your F-test downwards to make the test more stringent. SPSS runs a Mauchly test on the data.
Replicating the Stroop experiment
Suppose a researcher decides to replicate Stroop’s experiment. (It’s been done many times. The effect is very robust.) Suppose the results are as follows.
The results They look promising – we see some comparatively long times in the Conflict condition. But we need to get to know our data by calculating some statistics and drawing a graph.
In Variable View Assign clear names and labels to the variables.
Dispense with the decimals. Note that we haven’t yet told SPSS about the Stroop factor. We can’t do that until we get into the ANOVA dialog boxes.
Data View Note how the display in Data View benefits from your earlier good work in Variable View. The numbers are easier to read. The variables are clearly labelled. Notice that there is NO GROUPING VARIABLE, because the participants have not been grouped in any way. In this data set, we only have THREE SEPARATE VARIABLES.
Ordering a line graph Choose a simple line graph.
There are no groups, so choose Summaries of separate variables. Click the Define button.
The Simple Line dialog Since you chose Summaries of Separate Variables, you can now transfer all three variables in your data set to the Line Represents panel on the right. We are happy that the points represent mean scores. Click OK.
Profile of means (horizontal axis: Experimental condition)
The profile plot As expected, the Conflict condition proved to be the most difficult. Naming the colours of the zeros was more difficult than reading the words.
The within subjects ANOVA
Defining the treatment factor
Enter the name of the DV; the Add buttons come alive. Click the Add buttons to transfer the names to the appropriate panels.
Assign the variables SPSS knows only that there is a within subjects factor, ‘Stroop’, with three levels. Now you must assign the three variables in your data set to the three different levels.
The assignment is made
The Options button Click the Options button to order descriptive statistics; otherwise, you will just get the ANOVA summary table. We have ordered the Bonferroni test for multiple pairwise comparisons later on. The Bonferroni is an equivalent of the Tukey test in the one-way ANOVA.
Profile plots You can order profile plots; but watch out!
The scale on the vertical axis will be microscopic, making the graph appear as if there were huge differences among the means. Double-click on the image and adjust the vertical scale in the manner I described in Lecture 2, Slide 53.
The output The output is extensive and rather daunting.
An extensive pruning operation is required.
Pruning the output To remove this item (left), click on the image.
A rectangular frame around the item appears. Press Delete to remove the item.
Remove the following
Retain: the Descriptive Statistics table; the Mauchly Test; the Within Subjects Effects table; the pairwise Comparisons table.
Summary of the results As expected, the Conflict condition was the toughest. There is a substantial difference between the mean times for the Conflict and Naming conditions. As expected, the Reading condition was much the easiest. You must request this Descriptive Statistics table when you are completing the dialogs for the ANOVA routine.
The basic ANOVA summary table
Notice the additional Subjects factor. As expected, the Stroop factor has had a significant effect. Write this result as 'F(2, 6) = 55.35; p < .01'.
The SPSS ANOVA summary table
The Mauchly test The p-value is high (.900).
There is therefore no evidence to question the assumption of sphericity (homogeneity of covariance). The ordinary ANOVA F-test can be made. Just keep the first rows of the Stroop and Error(Stroop) sections of the table.
The SPSS summary table
The F-ratio In the within subjects ANOVA, the variance attributable to individual differences is removed from the analysis. A separate error term is calculated: MSresidual.
If you divide the Stroop MS by the Error MS, you will obtain the value of F.
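The whole calculation can be sketched in a few lines. The scores below are hypothetical (four subjects, three Stroop conditions, not the slide's data); the point is the partition: the Subjects SS is removed from what would have been the one-way 'within' term, and F is the treatment MS divided by the residual MS.

```python
import numpy as np

# Hypothetical times (s): rows = 4 subjects, columns = 3 conditions
scores = np.array([[ 4.0,  5.0, 12.0],
                   [ 5.0,  6.0, 14.0],
                   [ 4.0,  7.0, 13.0],
                   [ 6.0,  6.0, 15.0]])
n, k = scores.shape
grand = scores.mean()

ss_treat = n * ((scores.mean(axis=0) - grand) ** 2).sum()  # conditions
ss_subj  = k * ((scores.mean(axis=1) - grand) ** 2).sum()  # subjects
ss_total = ((scores - grand) ** 2).sum()
ss_resid = ss_total - ss_treat - ss_subj   # error term for the F-test

ms_treat = ss_treat / (k - 1)
ms_resid = ss_resid / ((n - 1) * (k - 1))
F = ms_treat / ms_resid                    # refer to F(k-1, (n-1)*(k-1))
print(round(F, 2))  # 153.57 for these hypothetical data
```

Had the same twelve scores come from twelve different people, ss_subj could not have been separated out and would have inflated the error term.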
The table of pairwise comparisons
EDITED TABLE Naming isn’t significantly different from Reading.
But Naming is significantly different from Conflicting. And Reading is also significantly different from Conflicting.
Rationale of the Bonferroni test
With unplanned multiple comparisons, there is a heightened risk of at least one test showing significance, even when the null hypothesis is true. The risk increases dramatically with the number of conditions. Suppose we have 8 treatment means. If the null hypothesis is TRUE, the probability of AT LEAST ONE comparison showing significance is .76.
Bonferroni rationale…
The Bonferroni method of unplanned or 'post hoc' comparisons works by multiplying the p-value of each t-test by the number of paired comparisons possible with an array of means of that particular size. We have three means, so three pairs can be chosen: (1&2), (1&3), (2&3). So you multiply each p-value by three and compare the product with .05. This procedure makes the Bonferroni test very conservative with large arrays of means.
Compare with related t tests
If you ask for related-samples t-tests on the same data, you will see that the p-values given are a third of the corresponding values in the Bonferroni table. TRY THIS YOURSELF!
Suppose the Mauchly had shown significance
The Greenhouse-Geisser test is a more conservative F test, in which the degrees of freedom are adjusted downwards. With our data, this test gives the same result as the ordinary F test because the data are 'spherical'.
In our ANOVA table … Instead of reading the rows labelled ‘Sphericity assumed’, read the rows labelled ‘Greenhouse-Geisser’. The table now reads: ‘F(1.818, 5.455) = 55.35, p < .01’. Although the values for the degrees of freedom have been adjusted, the value of F remains unchanged.
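The adjustment itself is just multiplication of the sphericity-assumed df by the Greenhouse-Geisser epsilon, which SPSS reports alongside the Mauchly test. A sketch using an epsilon back-computed from the slide's adjusted df, so the value here is illustrative:

```python
df_treat, df_error = 2, 6   # sphericity-assumed df from the summary table
eps = 1.818 / df_treat      # epsilon implied by the slide's adjusted df (about .909)

adj_treat = df_treat * eps  # 1.818
adj_error = df_error * eps  # 5.454 (the slide's 5.455 reflects rounding of epsilon)
print(adj_treat, adj_error)
```

The F statistic itself is unchanged; only the reference distribution, via the smaller df, becomes more conservative.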
When do conservative F-tests make a difference?
Only in situations where one has a marginally significant result (just beyond the .05 level). A conservative test will not overturn an ordinary test showing a really small p-value of, say, .01.
Example
Third question