1
Analysis of Variance (ANOVA)
2
Figure 13.2 (p. 389): A typical situation in which ANOVA would be used. Three separate samples are obtained to evaluate the mean differences among three populations (or treatments) with unknown means.
3
Why are pairwise comparisons not wise?
Pairwise comparisons are unwise for two reasons: (1) they often require many tests, and (2) they increase the risk of a Type I error, i.e., rejecting H0 even when H0 is true, because every additional test adds another chance of a false rejection.
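A quick illustration of the second point, as a minimal sketch assuming each test is run at α = .05 and the comparisons are treated as roughly independent (a simplifying assumption):

```python
# Sketch: how the familywise Type I error risk grows with the number of
# pairwise comparisons, assuming each test uses alpha = .05 and the tests
# are treated as approximately independent.
from math import comb

alpha = 0.05
for k in (3, 4, 5, 6):                 # number of treatment groups
    c = comb(k, 2)                     # number of pairwise comparisons
    familywise = 1 - (1 - alpha) ** c  # P(at least one false rejection)
    print(f"{k} groups -> {c} tests -> familywise alpha ~ {familywise:.3f}")
```

With 5 groups there are already 10 pairwise tests, and the chance of at least one false rejection is roughly 40%.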
4
Analysis of Variance (ANOVA)
A hypothesis test that evaluates the significance of mean differences between two or more treatments. The goal is to determine whether the mean differences found in sample data are greater than can be reasonably explained by chance alone.
5
The size of the mean differences is measured by computing a variance between treatments (MSbetween)
Large mean differences produce a large variance; small mean differences produce a small variance.
The amount of variance expected simply by chance (without any treatment effect) is measured by computing the variance within treatments (MSwithin). The logic behind this second variance is that all of the individuals inside a treatment condition are treated exactly the same, so any differences (variance) that exist cannot be caused by the treatment and must be due to chance alone.
The two variances are compared in an F-ratio to determine whether the mean differences (MSbetween) are significantly bigger than chance (MSwithin).
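A minimal numeric sketch of this logic, using made-up scores for three treatments (any small dataset would do) and SciPy's f_oneway as a cross-check:

```python
# Sketch of the F-ratio logic with three illustrative treatment samples
# (the data are made up, not from the source slides).
import numpy as np
from scipy import stats

groups = [np.array([4, 5, 6, 5]),      # treatment 1
          np.array([7, 8, 6, 7]),      # treatment 2
          np.array([10, 9, 11, 10])]   # treatment 3

k = len(groups)                         # number of treatments
n_total = sum(len(g) for g in groups)   # total sample size
grand_mean = np.concatenate(groups).mean()

# Between-treatments variance: driven by differences among treatment means.
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ms_between = ss_between / (k - 1)

# Within-treatments variance: differences among scores treated alike (chance).
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
ms_within = ss_within / (n_total - k)

F = ms_between / ms_within
print(F, stats.f_oneway(*groups).statistic)   # the two values should match
```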
6
ANOVA
7
One-Way ANOVA Partitions Total Variation
Variation due to treatment (between groups): Sum of Squares Among / Sum of Squares Between / Sum of Squares Treatment (SST), the among-groups variation.
Variation due to random sampling (within groups), i.e., individual differences within groups: Sum of Squares Within / Sum of Squares Error (SSE), the within-groups variation.
8
Sum of Squares (SS)
Total variation = Variation explained by model (variation due to individuals + variation due to groups) + Unexplained variation
SS(total) = SS(model) + SS(residual) = SS(individuals) + SS(groups) + SSE
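A small sketch of this partition for an independent-groups design, where the model term reduces to SS(groups); the data below are made up, and the SS(individuals) term would only appear in a repeated-measures design:

```python
# Sketch checking the sums-of-squares partition for an independent-groups
# design, where SS(total) = SS(groups) + SSE.  (An SS(individuals) term
# appears only in repeated-measures designs, where variation due to
# individual subjects is removed from the error term.)
import numpy as np

groups = [np.array([4, 5, 6, 5]),
          np.array([7, 8, 6, 7]),
          np.array([10, 9, 11, 10])]

all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()

ss_total = ((all_scores - grand_mean) ** 2).sum()
ss_groups = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
sse = sum(((g - g.mean()) ** 2).sum() for g in groups)

print(np.isclose(ss_total, ss_groups + sse))   # True: the partition holds
```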
9
What is Error? Variance and Standard Deviation
Variance: a measure of dispersion, the average of the squared deviations about the mean.
Standard deviation: a measure of dispersion, the square root of the variance.
The formula s² = Σ(X − M)² / N (where M is the mean of the sample) can be used; computed this way, however, s² is a biased estimate of σ². By far the most common formula for computing variance in a sample is s² = Σ(X − M)² / (N − 1).
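A short sketch contrasting the two formulas on an arbitrary sample; NumPy's ddof argument selects the divisor:

```python
# Sketch of the two sample-variance formulas: dividing by N gives a biased
# estimate of the population variance; dividing by N - 1 (the usual sample
# formula) corrects that bias.  The sample below is illustrative.
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
m = x.mean()                              # M, the sample mean
n = len(x)

biased = ((x - m) ** 2).sum() / n         # sum of squared deviations / N
unbiased = ((x - m) ** 2).sum() / (n - 1) # the most common sample formula

print(biased, np.var(x, ddof=0))          # numpy's default matches the biased form
print(unbiased, np.var(x, ddof=1))        # ddof=1 matches the unbiased form
```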
11
Inferential statistic
Analyzes different sources of variation in an experiment: between-group variation and within-group variation (individual differences). If the null hypothesis (H0) is true, there is no difference among groups (μ1 = μ2 = μ3), and any observed differences reflect only random sampling error.
12
If the null hypothesis is true
There is no systematic variation between groups (no effect of the independent variable). The F-test has a value of approximately 1 if the null is true. If the null is false (Ha), the expected value of F is greater than 1. How much greater than 1 must F be before we can be sure that it reflects true systematic variation?
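One way to see this is to simulate data for which H0 is true and watch where F lands; this is a sketch with arbitrary population parameters:

```python
# Sketch: when H0 is true (all groups drawn from the same population), the
# F statistic hovers around 1.  Simulated, illustrative data only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
f_values = []
for _ in range(5000):
    # three groups of 10 scores from the SAME normal population (H0 true)
    groups = [rng.normal(loc=50, scale=10, size=10) for _ in range(3)]
    f_values.append(stats.f_oneway(*groups).statistic)

print(np.mean(f_values))   # close to 1 when the null hypothesis holds
```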
13
H0 and Ha
H0: the population means are equal (no effect of the independent variable).
Ha: one or more of the population means are not equal; there is a difference somewhere. If the null is false, variation caused by the experiment/group should be systematic.
14
Assumptions of ANOVA: Homogeneity of Variance
Unequal within-group variation may cause discrepancies and bias interpretation.
Assumptions of repeated measures ANOVA: the estimation of variance differs from independent-groups ANOVA. The key assumption is sphericity (a form of homogeneity of variance), checked with Mauchly's sphericity test; when it is violated, the Greenhouse-Geisser and Huynh-Feldt corrections are applied.
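For independent groups, homogeneity of variance can be checked with Levene's test in SciPy; the sketch below uses made-up scores (Mauchly's sphericity test is not in SciPy and is typically obtained from other packages, e.g. pingouin):

```python
# Sketch: checking the homogeneity-of-variance assumption with Levene's test
# (scipy.stats.levene).  A small p-value suggests the group variances differ.
import numpy as np
from scipy import stats

g1 = np.array([4, 5, 6, 5, 4])
g2 = np.array([7, 8, 6, 7, 9])
g3 = np.array([10, 9, 11, 10, 2])   # one extreme score inflates this variance

stat, p = stats.levene(g1, g2, g3)
print(f"Levene W = {stat:.2f}, p = {p:.3f}")
```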
16
Table 13.3 (p. 407): A portion of the F distribution table. Entries in roman type are critical values for the .05 level of significance, and bold type values are for the .01 level of significance. The critical values for df = 2, 12 have been highlighted (see text).
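The same critical values can be obtained from SciPy's F distribution instead of the printed table; this sketch uses the highlighted df = 2, 12 entry:

```python
# Sketch: reproducing the highlighted table entries with scipy's F
# distribution (df = 2, 12).
from scipy import stats

df_between, df_within = 2, 12
crit_05 = stats.f.ppf(0.95, df_between, df_within)   # .05 level (roman type)
crit_01 = stats.f.ppf(0.99, df_between, df_within)   # .01 level (bold type)
print(crit_05, crit_01)   # roughly 3.89 and 6.93
```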
17
One-Way ANOVA F-Test Critical Value
If the means are equal, F = MST / MSE ≈ 1. Only a large F leads to rejection: reject H0 when F exceeds the critical value Fα(p - 1, n - p); otherwise, do not reject H0. The test is always one-tailed.
18
Multiple Comparisons Procedure
1. Tells which population means are significantly different. Example: μ1 = μ2 ≠ μ3.
2. A post hoc procedure, done after rejecting equal means in ANOVA. Output is available from many statistical computer programs, in various versions (Tukey, Bonferroni, etc.).
Eta squared (η²): the proportion of variance in the dependent variable that is attributable to the independent variable.
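A sketch of one post hoc option plus the effect-size computation, assuming SciPy >= 1.8 for tukey_hsd and using made-up data:

```python
# Sketch of a post hoc procedure and effect size after a significant ANOVA:
# Tukey's HSD via scipy.stats.tukey_hsd, and eta squared computed as
# SS_between / SS_total.  Data are illustrative.
import numpy as np
from scipy import stats

groups = [np.array([4, 5, 6, 5]),
          np.array([7, 8, 6, 7]),
          np.array([10, 9, 11, 10])]

print(stats.tukey_hsd(*groups))          # pairwise comparisons with p-values

all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_total = ((all_scores - grand_mean) ** 2).sum()
eta_squared = ss_between / ss_total      # proportion of variance explained
print(f"eta squared = {eta_squared:.2f}")
```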