1
ANOVA and Post-ANOVA Tests
541 PHL
By: Asma Al-Oneazi
Supervised by: Dr. Amal Fatani
King Saud University, Pharmacy College, Pharmacology Department
2
ANOVA: ANOVA is an abbreviation for ANalysis Of Variance. It is the most widely used method of statistical analysis of quantitative data. It calculates the probability that differences among the observed means could simply be due to chance, and enables us to compare more than two groups.
3
It is closely related to Student's t-test, but the t-test is only suitable for comparing two treatment means. ANOVA can be used to compare several means at once and to handle more complex situations.
4
Why ANOVA Instead of Multiple t-Tests?
The problem with the multiple t-test approach is that as the number of groups increases, the number of two-sample t-tests required also increases. As the number of tests increases, the probability of making a Type I error also increases.
5
The advantage of using ANOVA over multiple t-tests is that ANOVA can detect whether any two of the group means differ significantly with a single test. If the significance level is set at 0.05, the probability of a Type I error for ANOVA remains 0.05 regardless of the number of groups being compared. If the ANOVA F-test is significant, further comparisons can then be done to determine which groups have significantly different means.
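The Type I error inflation described above can be sketched numerically. This is an illustrative calculation, assuming the pairwise t-tests are independent (in practice they are correlated, but the inflation is similar in spirit):

```python
# Sketch: family-wise Type I error rate when running k independent
# pairwise t-tests, each at alpha = 0.05.
from math import comb

def familywise_error(k, alpha=0.05):
    """Probability of at least one false positive across k independent tests."""
    return 1 - (1 - alpha) ** k

# Comparing 5 groups pairwise requires C(5, 2) = 10 t-tests.
k = comb(5, 2)                    # 10 pairwise comparisons
print(familywise_error(k))        # roughly 0.40, far above the nominal 0.05
```

A single ANOVA F-test avoids this inflation by testing all group means at once at the chosen significance level.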
6
ANOVA Hypotheses
The null hypothesis for ANOVA is that the means of all groups are equal. The alternative hypothesis is that at least two of the means are not equal. The test statistic for ANOVA is the ANOVA F-statistic.
7
ANOVA is used to compare means between three or more groups, so why is it called analysis of variance? The ANOVA F-test is a comparison of the average variability between groups to the average variability within groups. The variability within each group is a measure of the spread of the data within that group. The variability between groups is a measure of the spread of the group means around the overall mean for all groups combined.
F = (average variability between groups) / (average variability within groups)
8
ANOVA: F-Statistic
If variability between groups is large relative to the variability within groups, the F-statistic will be large. If variability between groups is similar to or smaller than variability within groups, the F-statistic will be small. If the F-statistic is large enough, the null hypothesis that all means are equal is rejected.
9
Total Variation in Two Parts
The difference of each observation from the overall mean can be divided into two parts:
1. the difference between the observation and its group mean
2. the difference between the group mean and the overall mean
ANOVA makes use of this partitioned variability.
10
SST = SSW + SSB
This partitioned relationship also holds for the squared differences:
- The variability between each observation and the overall (or grand) mean is measured by the sum of squares total (SST).
- The variability within groups is measured by the sum of squares within (SSW), the denominator of F.
- The variability between groups is measured by the sum of squares between (SSB), the numerator of F.
As SSW decreases, F increases, meaning a significant difference is declared with more confidence, because it is due to the treatments and not to error.
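The partition SST = SSW + SSB can be verified numerically. A minimal sketch, using made-up data for three hypothetical groups:

```python
# Sketch: verify SST = SSW + SSB for three groups (data are made up).
groups = [[4.0, 5.0, 6.0], [7.0, 8.0, 9.0], [5.0, 6.0, 10.0]]

all_obs = [x for g in groups for x in g]
grand_mean = sum(all_obs) / len(all_obs)
group_means = [sum(g) / len(g) for g in groups]

# Total: each observation vs. the grand mean
sst = sum((x - grand_mean) ** 2 for x in all_obs)
# Within: each observation vs. its own group mean
ssw = sum((x - m) ** 2 for g, m in zip(groups, group_means) for x in g)
# Between: each group mean vs. the grand mean, weighted by group size
ssb = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, group_means))

print(round(sst, 6), round(ssw + ssb, 6))  # the two values match
```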
11
Mean Square Within and Mean Square Between
The mean squares are measures of the average variability and are calculated from the sum of squares divided by the degrees of freedom:
MSW = SSW / (N − t), with N − t degrees of freedom, where N = total number of observations and t = number of groups.
MSB = SSB / (t − 1), with t − 1 degrees of freedom.
F-statistic = MSB / MSW.
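These formulas can be applied directly and checked against a library implementation. A sketch with made-up data, comparing the hand computation to `scipy.stats.f_oneway`:

```python
# Sketch: compute MSW, MSB, and F from the formulas above, then compare
# with scipy.stats.f_oneway. Data are made up for illustration.
from scipy import stats

groups = [[4.0, 5.0, 6.0], [7.0, 8.0, 9.0], [5.0, 6.0, 10.0]]
N = sum(len(g) for g in groups)     # total number of observations
t = len(groups)                     # number of groups
all_obs = [x for g in groups for x in g]
grand_mean = sum(all_obs) / N
means = [sum(g) / len(g) for g in groups]

ssw = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
ssb = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))

msw = ssw / (N - t)                 # mean square within, N - t df
msb = ssb / (t - 1)                 # mean square between, t - 1 df
f_stat = msb / msw

scipy_f, p_value = stats.f_oneway(*groups)
print(f_stat, scipy_f)              # both computations give the same F
```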
12
ANOVA Assumptions
- The observations come from a random sample and are independent of each other.
- The observations are normally distributed within each group. ANOVA is still appropriate if this assumption is not met when the sample size in each group is at least 30.
- The variances are approximately equal between groups. If the ratio of the largest SD to the smallest SD is less than 2, this assumption is considered to be met.
It is not required to have equal sample sizes in all groups.
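The equal-variance rule of thumb above is easy to check in practice. A minimal sketch with made-up data:

```python
# Sketch: check the equal-variance rule of thumb
# (largest SD / smallest SD < 2). Data are made up for illustration.
from statistics import stdev

groups = [[4.1, 5.0, 6.2], [7.3, 8.1, 8.9], [5.5, 6.4, 7.8]]
sds = [stdev(g) for g in groups]    # sample SD of each group
ratio = max(sds) / min(sds)
print(ratio < 2)  # True here: the assumption is considered met
```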
13
Once a significant F-value is obtained in an analysis of variance, it tells us only that the means are not all equal (i.e., we reject the null hypothesis). We still do not know exactly which means are significantly different from which others. To say exactly where the significant differences lie, we need to examine the means more carefully; therefore we need post hoc tests.
14
For example, suppose an ANOVA on five groups indicates a significant difference. Is it between means I and II? Means II and IV? Means III and IV? To find out which means differ significantly, a post hoc test needs to be done: a test designed to perform pairwise comparisons of the means and locate the significant differences.
15
Post Hoc Tests
- Bonferroni gives strict control of the Type I error rate, but it is conservative and can miss real differences (false negatives).
- Tukey HSD has good control of Type I errors with good power; the Tukey test is a good choice for comparing large numbers of means.
- The REGWQ test is good for comparing all pairs of means, but it is not useful when group sizes are different.
- If the sample sizes vary a little, use Gabriel's procedure. If they vary significantly, use Hochberg's GT2.
- If the populations may not have homogeneous variances, Games-Howell may be used.
- The Scheffé test is the most conservative; Newman-Keuls is more liberal and does not strictly control the Type I error rate.
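The two-step workflow described in these slides (omnibus F-test, then pairwise comparisons) can be sketched with SciPy. This assumes a recent SciPy version that provides `scipy.stats.tukey_hsd`; the data are made up for illustration:

```python
# Sketch: follow a significant ANOVA F-test with Tukey's HSD to locate
# which pairs of means differ. Data are made up for illustration.
from scipy import stats

group_a = [4.1, 5.0, 6.2, 5.4, 4.8]
group_b = [7.3, 8.1, 8.9, 7.6, 8.4]
group_c = [5.5, 6.4, 7.8, 6.1, 6.9]

# Step 1: omnibus ANOVA across all three groups
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)

# Step 2: pairwise post hoc comparisons only if the F-test is significant
if p_value < 0.05:
    result = stats.tukey_hsd(group_a, group_b, group_c)
    print(result)  # table of pairwise mean differences with adjusted p-values
```

The Tukey procedure adjusts each pairwise p-value so that the family-wise Type I error rate stays at the chosen significance level.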
16
Thank You…