
1 Multiple comparisons - multiple pairwise tests - orthogonal contrasts
- independent tests - labelling conventions

2 Multiple tests Problem:
Because we examine the same data in multiple comparisons, the result of the first comparison affects our expectation of the next comparison. This is a general problem in statistics. We often need to do multiple tests on our data. But when we do two tests that involve the same data, the outcome of one affects the outcome of the other…they are not INDEPENDENT.
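A quick numeric illustration (not on the slide, and assuming for the moment that the tests were independent and each run at alpha = 0.05): the chance of at least one false positive grows quickly with the number of tests.

```python
# Family-wise error rate if each of k tests is run at alpha = 0.05
# (illustration only; it assumes the k tests are independent).
alpha = 0.05
for k in (1, 3, 6, 10):
    familywise = 1 - (1 - alpha) ** k  # P(at least one false positive in k tests)
    print(f"{k:2d} tests: P(at least one false positive) = {familywise:.3f}")
```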

3 Look at the effect of each treatment on Orang-utan births
3 treatments: (a) no active management, (b) selective logging, (c) replanting. We look at the effect of each treatment on Orang-utan births.

4 Multiple tests
ANOVA gives F(2,27) = 5.82, p < 0.05. Therefore at least one treatment is different, but which one(s)? One of the most common instances of multiple tests is "post hoc" testing after a significant treatment effect in an ANOVA. Remember, an ANOVA just tests whether one level is different from the rest…you then have to figure out which one. Often we do all possible combinations, as shown here…
[Graph: births per km² for each treatment, with t-tests of all pairwise combinations marked significant, not significant, significant.]
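A minimal sketch of this workflow in Python, with invented births-per-km² numbers for the three treatments (scipy assumed; the data and effect sizes are hypothetical):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Invented births-per-km2 data, 10 plots per treatment
no_mgmt     = rng.normal(2.0, 0.5, 10)   # a: no active management
sel_logging = rng.normal(2.8, 0.5, 10)   # b: selective logging
replanting  = rng.normal(2.1, 0.5, 10)   # c: replanting

# One-way ANOVA: is at least one treatment mean different?
f, p = stats.f_oneway(no_mgmt, sel_logging, replanting)
print(f"ANOVA: F(2,27) = {f:.2f}, p = {p:.4f}")

# Naive post hoc step: t-tests on all pairwise combinations (non-independent!)
pairs = {"a vs b": (no_mgmt, sel_logging),
         "a vs c": (no_mgmt, replanting),
         "b vs c": (sel_logging, replanting)}
for name, (x, y) in pairs.items():
    t, p = stats.ttest_ind(x, y)
    print(f"{name}: t = {t:.2f}, p = {p:.4f}")
```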

5 Multiple tests
However, as we just discovered in the card example, these tests are non-independent. Setting alpha tells us that although the second treatment could be higher than the first just because of a fluke, this will happen less than 5% of the time. But if the second treatment did happen to be high by chance, that would affect the probability of it also being higher than the third…so the alpha for our second test is not 0.05 anymore, because our expectations have changed.
[Graph: births per km². T-test: <5% chance that this difference was a fluke… which affects the likelihood of finding a difference in this pair!]

6 Multiple tests Solution:
Make alpha your overall "experiment-wise" error rate. So how do we change our alphas to get around the problem of non-independence?
[Graph: births per km². T-test: <5% chance that this difference was a fluke… affects the likelihood (alpha) of finding a difference in this pair!]

7 Multiple tests Solution:
Make alpha your overall "experiment-wise" error rate, e.g. simple Bonferroni: divide alpha by the number of tests. The simplest solution is a Bonferroni test. The regression and multiple comparison summary sheet gives some variations on this theme, but they all have the same basic purpose…adjust our expectations for non-independent tests.
[Graph: births per km², with each of the three pairwise comparisons tested at alpha / 3 = 0.0167.]
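A sketch of the simple Bonferroni adjustment applied to three hypothetical pairwise p-values:

```python
# Simple Bonferroni: test each comparison against alpha / (number of tests).
alpha = 0.05
p_values = {"a vs b": 0.030, "a vs c": 0.600, "b vs c": 0.012}  # hypothetical
adjusted_alpha = alpha / len(p_values)  # 0.05 / 3 = 0.0167
for pair, p in p_values.items():
    verdict = "significant" if p < adjusted_alpha else "not significant"
    print(f"{pair}: p = {p:.3f} vs alpha/3 = {adjusted_alpha:.4f} -> {verdict}")
```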

8 Multiple tests Tukey post-hoc testing does this for you
Uses the q-distribution. Does all the pairwise comparisons.
[Graph: births per km². Pairwise results: 1-2, p < 0.05; 1-3, p > 0.05; 2-3, p < 0.05.]
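A sketch of the same post hoc step in Python using statsmodels' pairwise_tukeyhsd, which runs all pairwise comparisons on the q (studentized range) distribution while holding the experiment-wise alpha (data again invented):

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
# Invented births-per-km2 data, 10 plots per treatment
births = np.concatenate([rng.normal(2.0, 0.5, 10),   # a: no active management
                         rng.normal(2.8, 0.5, 10),   # b: selective logging
                         rng.normal(2.1, 0.5, 10)])  # c: replanting
treatment = np.repeat(["a", "b", "c"], 10)

# Tukey's HSD: all pairwise comparisons, experiment-wise alpha held at 0.05
result = pairwise_tukeyhsd(births, treatment, alpha=0.05)
print(result.summary())
```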

9 Orthogonal contrasts Orthogonal = perpendicular = independent
Contrast = comparison. Example: we compare the growth of three types of plants: legumes, graminoids, and asters. These 2 contrasts are orthogonal: 1. Legumes vs. non-legumes (graminoids, asters); 2. Graminoids vs. asters. Independent is also called orthogonal. The reasons for this will be clear to students who have taken linear algebra, but otherwise we'll just say that orthogonal means perpendicular, and two axes that are perpendicular to each other are uncorrelated, i.e. independent. Here's a stats example totally analogous to the card example…

10 Trick for determining if contrasts are orthogonal:
1. In the first contrast, label all treatments in one group with "+" and all treatments in the other group with "-" (doesn't matter which way round).
Contrast 1 (legumes vs. non-legumes):  Legumes +   Graminoids -   Asters -
How do we figure out if our contrasts (= comparisons) are orthogonal? We could try to reason it out, or, here's an easy trick…

11 Trick for determining if contrasts are orthogonal:
1. In the first contrast, label all treatments in one group with "+" and all treatments in the other group with "-" (doesn't matter which way round). 2. In each group composed of t treatments, put the number 1/t as the coefficient. If a treatment is not in the contrast, give it the value "0".
Contrast 1 (legumes vs. non-legumes):  Legumes +1   Graminoids -1/2   Asters -1/2

12 Trick for determining if contrasts are orthogonal:
1. In the first contrast, label all treatments in one group with "+" and all treatments in the other group with "-" (doesn't matter which way round). 2. In each group composed of t treatments, put the number 1/t as the coefficient. If a treatment is not in the contrast, give it the value "0". 3. Repeat for all other contrasts.
Contrast 1 (legumes vs. non-legumes):  Legumes +1   Graminoids -1/2   Asters -1/2
Contrast 2 (graminoids vs. asters):    Legumes 0    Graminoids +1     Asters -1

13 Trick for determining if contrasts are orthogonal:
4. Multiply each column, then sum these products.
Contrast 1 (legumes vs. non-legumes):  Legumes +1   Graminoids -1/2   Asters -1/2
Contrast 2 (graminoids vs. asters):    Legumes 0    Graminoids +1     Asters -1
Products:                              0            -1/2              +1/2
Sum of products = 0

14 Trick for determining if contrasts are orthogonal:
4. Multiply each column, then sum these products. 5. If this sum = 0, then the contrasts were orthogonal! If the sum is zero, then the tests are orthogonal to each other.
Contrast 1 (legumes vs. non-legumes):  Legumes +1   Graminoids -1/2   Asters -1/2
Contrast 2 (graminoids vs. asters):    Legumes 0    Graminoids +1     Asters -1
Products:                              0            -1/2              +1/2
Sum of products = 0
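The same bookkeeping can be done in a couple of lines of Python (coefficients as in the table above):

```python
# Sum-of-products check for orthogonality of two contrasts.
#               Legumes  Graminoids  Asters
contrast_1 = [   1.0,     -0.5,      -0.5]   # legumes vs. non-legumes
contrast_2 = [   0.0,      1.0,      -1.0]   # graminoids vs. asters

sum_of_products = sum(a * b for a, b in zip(contrast_1, contrast_2))
print(sum_of_products)   # 0.0 -> the contrasts are orthogonal
```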

15 What about these contrasts?
1. Monocots (graminoids) vs. dicots (legumes and asters). 2. Legumes vs. non-legumes. Group work! Try these two contrasts…are they orthogonal to each other? Hint for those who are lost: non-legumes are graminoids and asters.

16 Important! You need to assess orthogonality in each pairwise combination of contrasts. So if 4 contrasts: Contrast 1 and 2, 1 and 3, 1 and 4, 2 and 3, 2 and 4, 3 and 4.
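A sketch that automates this pairwise check for a whole set of contrasts (the four-level coefficients here are purely illustrative):

```python
from itertools import combinations

# Hypothetical contrasts over four treatment levels; every pair of contrasts
# must have a zero sum of products to be mutually orthogonal.
contrasts = {
    "contrast 1": [1.0, -1.0,  0.0,  0.0],
    "contrast 2": [0.0,  0.0,  1.0, -1.0],
    "contrast 3": [0.5,  0.5, -0.5, -0.5],
}

for (name_a, a), (name_b, b) in combinations(contrasts.items(), 2):
    s = sum(x * y for x, y in zip(a, b))
    status = "orthogonal" if s == 0 else "NOT orthogonal"
    print(f"{name_a} vs {name_b}: sum of products = {s:.1f} ({status})")
```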

17 How do you program contrasts in JMP (etc.)?
Orthogonal contrasts are just ways to divide up the treatment effect (SS). You can perfectly divide up k levels of a treatment into k-1 contrasts (e.g. the example we did a few minutes ago). [Diagram: the treatment SS divided into Contrast 1 and Contrast 2.]

18 How do you program contrasts in JMP (etc.)?
Sound hard? Not really. Just code the treatments that you are averaging over in the same way. For example:
  Treatment    Normal treatments    Legumes vs. non-legumes
  Legume       1                    1
  Graminoid    2                    2
  Aster        3                    2
Then run your ANOVA again with your new "treatment". JMP will give you a new SS. What about df? Simple rule of thumb: each contrast costs 1 df (JMP will tell you this anyway). Then look at the F test…significant?
ANOVA quantities: SStreat = …, df treat = 2 (normal) or 1 (contrast), MStreat = 60, MSerror = 10, df error = 20.
For the legumes vs. non-legumes contrast: F1,20 = (67 / 1) / 10 = 6.7, using the MSerror from the full model.
"There was a significant treatment effect (F…). About 53% of the variation between treatments was due to differences between legumes and non-legumes (F1,20 = 6.7)."
Q: What is the SS for the graminoid vs. aster contrast? A: 55.
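Outside JMP, the same recoding trick can be sketched with statsmodels (the growth numbers are invented; the point is only that the recoded factor has 1 df and its SS is one slice of the full treatment SS):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "group": np.repeat(["legume", "graminoid", "aster"], 8),
    "growth": np.concatenate([rng.normal(12, 2, 8),    # invented growth data
                              rng.normal(9, 2, 8),
                              rng.normal(8, 2, 8)]),
})

# Full model: the normal 3-level treatment (2 df)
full = ols("growth ~ C(group)", data=df).fit()
print(sm.stats.anova_lm(full))

# Contrast "treatment": graminoids and asters coded the same way (1 df)
df["legume_contrast"] = np.where(df["group"] == "legume", "legume", "non-legume")
contrast = ols("growth ~ C(legume_contrast)", data=df).fit()
print(sm.stats.anova_lm(contrast))

# The contrast SS (1 df) is a slice of the full treatment SS (2 df); the
# leftover slice is the graminoid-vs-aster contrast. For the slide's F test,
# divide the contrast MS by the MSerror from the FULL model.
```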

19 Even different statistical tests may not be independent!
Example. We examined effects of fertilizer on growth of dandelions in a pasture using an ANOVA. We then repeated the test for growth of grass in the same plots. Problem? The problem is that if the overall data give you one trend, then subsets are likely to do so as well!

20 Multiple tests Convention:
Treatments with a common letter are not significantly different.
[Graph: births per km², bars labelled a, a,b, and b; pairwise comparisons marked significant, not significant, not significant.]

21 The data on the next graph is from a study of forestry effects on Cameroonian butterflies (Diane’s data).

22 Challenge: The first person to give Spencer the correct datasheet (all combinations identified as sig or ns) can hand in the lab of their choice 2 days late without penalty! Reminder: We need to take the link to the answer sheet off the website!

