PSY2004 Research Methods PSY2005 Applied Research Methods Week Five.


1 PSY2004 Research Methods PSY2005 Applied Research Methods Week Five

2

3 Today: general principles (how it works, what it tells you, etc.). Next week: extra bits and bobs (assumptions, follow-on analyses, effect sizes).

4 ANalysis Of VAriance: a ‘group’ of statistical tests. Useful, hence widely used.

5 Used with a variety of designs. We start with independent groups & 1 independent variable; other designs later.

6 Revision [inferential stats]: we try to make an inference about the POPULATION on the basis of a SAMPLE, but sampling ‘error’ means the sample is not quite equal to the population.

7 Two hypotheses for, e.g., a difference between group means in your sample. H0: just sampling error, no difference between the population means. H1: a difference between the population means. Decide on the basis of the probability of getting your sample were H0 to be true.

8 If that probability is low, i.e., if such a difference between sample means would be unlikely were H0 true, if it would be a rare event, we reject H0 (and so accept H1). Low / unlikely / rare means p < 0.05 (5%).

9 What is ANOVA for? Despite its name (analysis of variance), it looks at differences between means: differences between group means in the sample, used to make an inference about differences between group means in the population.

10 But we already have the t-test, used to compare means. E.g., PSY1016 last year: the difference between males’ & females’ mean Trait Emotional Intelligence scores. But the t-test can only compare two means at a time; what if we have more than two groups/means?
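The two-group case the slide describes can be sketched with `scipy.stats.ttest_ind`; the scores below are made up for illustration, not the actual PSY1016 data.

```python
# Hedged sketch: an independent-samples t-test comparing two group means.
# The scores are invented illustration data, not the PSY1016 dataset.
from scipy.stats import ttest_ind

males = [4.1, 3.8, 4.5, 3.9, 4.2, 4.0]
females = [4.6, 4.4, 4.9, 4.3, 4.7, 4.5]

t, p = ttest_ind(males, females)  # two-sided, independent groups
print(f"t = {t:.2f}, p = {p:.3f}")
```

With only two means this works fine; the slides that follow show why it gets awkward with three or more groups.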

11 E.g., comparison of drug treatments for offenders: 12-step programme, cognitive-behavioural, motivational intervention, standard care. DV: no. of days drugs taken in a month.

12 E.g., comparison of coaching methods: professional coaching, peer coaching, standard tutorial. DV: self-reported goal attainment (1–5 scale).

13 We can use the t-test here, just lots of them (multiple comparisons). Drug example: 12-step vs cog-behavioural, 12-step vs standard care, cog-behavioural vs standard care. Coaching example: professional coaching vs peer, professional vs standard tutorial, peer vs standard tutorial. A bit messy / longwinded, but the computer does the hard work. There is a far more serious potential problem …
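The pairwise comparisons listed above can be generated mechanically; a small sketch using `itertools.combinations`, with group labels taken from the slides:

```python
# Each family of three groups yields three pairwise comparisons.
from itertools import combinations

drug_groups = ["12-step", "cog-behavioural", "standard care"]
coach_groups = ["professional coaching", "peer coaching", "standard tutorial"]

for a, b in combinations(drug_groups, 2):
    print(f"{a} vs {b}")

n_pairs = len(list(combinations(coach_groups, 2)))
print(f"{n_pairs} comparisons per family")
```

With more groups the number of pairs grows quickly (k groups give k(k-1)/2 comparisons), which is what makes the error-rate problem on the next slides bite.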

14 Increased chance of making a mistake.

15 Statistical inference is based on probability, not certainty; there is always a chance that we will make the wrong decision. Two types of mistake. Type I: reject H0 when it is in fact true, i.e., decide there is a difference between the population means when there isn’t [false positive]. Type II: fail to reject H0 when it is in fact false [false negative].

16 Type I error [false positive]: we reject H0 when p < 0.05, i.e., less than a 5% chance of getting our data (or more extreme) were H0 to be true. 5% is small, but it isn’t zero: there is still a chance of H0 being true and us getting our data, so still a chance of rejecting H0 when it is true.

17 Alpha (α) [the criterion for rejecting H0, typically 5%] sets a limit on the probability of making a Type I error: if H0 were true, we would only reject it 5% of the time. But multiple comparisons change the situation …
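The 5% limit can be checked by simulation; a sketch (not from the slides) that repeatedly draws two groups from the same population, so H0 is true by construction, and counts how often a t-test rejects it:

```python
# Simulation sketch: with a single t-test and alpha = 0.05, roughly 5%
# of experiments reject H0 even though H0 is true by construction
# (both groups are drawn from the same normal population).
import random
from scipy.stats import ttest_ind

random.seed(42)
n_experiments = 2000
false_positives = 0
for _ in range(n_experiments):
    a = [random.gauss(0, 1) for _ in range(15)]
    b = [random.gauss(0, 1) for _ in range(15)]
    _, p = ttest_ind(a, b)
    if p < 0.05:
        false_positives += 1

rate = false_positives / n_experiments
print(f"observed Type I error rate = {rate:.3f}")  # close to 0.05
```

Running several tests per experiment, as on the next slides, pushes this family-wise rate well above 5%.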

18 Russian roulette [with a six-chamber revolver]: one bullet, spin the cylinder, muzzle to temple, pull trigger. A one-in-six chance of blowing your brains out.

19 Russian roulette with three guns: each gun on its own gives a one-in-six chance of blowing your brains out, but for the ‘family’ of three guns the probability of getting a bullet in your brain is worryingly higher.

20 Russian roulette [with a twenty-chamber revolver]: a one-in-twenty (5%) chance of blowing your brains out. With three such guns, probability = 1 – (.95 × .95 × .95) = .14.

21 Just the same with multiple comparisons [with one obvious difference]: each individual t-test has a maximum Type I error rate (α) of 5%, but for a ‘family’ of three such t-tests the error rate = 1 – (.95 × .95 × .95) = .14.
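The slide's family-wise figure is just arithmetic, and can be checked directly:

```python
# Family-wise error rate for three independent tests, each at alpha = 0.05,
# as on the slide: 1 - (probability of no Type I error in any test).
alpha = 0.05
k = 3
familywise = 1 - (1 - alpha) ** k
print(f"family-wise Type I error rate = {familywise:.2f}")  # 0.14
```

Strictly, the three pairwise t-tests are not independent (they share groups), so .14 is an approximation; the slide flags this with "maximum … error rate".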

22 Controlling family-wise error: various techniques, e.g., the Bonferroni correction. Adjust α for each individual comparison: use α/k, where k = number of comparisons.

23 For our ‘family’ of three comparisons, use an adjusted α of .0167 for each t-test. The family-wise Type I error rate is then limited to 5%, and all is well …
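The adjusted α on the slide comes from the Bonferroni formula α/k:

```python
# Bonferroni correction: divide alpha by the number of comparisons
# in the family (k = 3 pairwise t-tests here, as on the slide).
alpha = 0.05
k = 3
adjusted_alpha = alpha / k
print(f"adjusted alpha = {adjusted_alpha:.4f}")  # 0.0167
```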

24 … actually it isn’t. Such ‘corrections’ come at a price: an increased chance of making a Type II error, i.e., failing to reject H0 when it is in fact false [false negative]. Less chance of detecting an effect when there is one, aka low ‘power’ [more of this in Week 10].

25 Moving from comparing two means to considering three has complicated matters: we seem to face either an increased Type I error rate or an increased Type II error rate [lower power].

26 This [finally] is where ANOVA comes in: it can help us detect any difference between our 3 (or more) group means without increasing the Type I error rate or reducing power.

27 ANOVA is another NHST [Null Hypothesis Significance Test], so you need to know what your H0 and H1 are. H0: all the population means are the same; any differences between sample means are simply due to sampling error. H1: the population means are not all the same. NB: the one-tailed vs two-tailed distinction doesn’t apply.

28 How does ANOVA work? The heart of ANOVA is the F ratio: a ratio of two estimates of the population variance, both based on the sample. What’s that got to do with differences between means? Be patient.

29 E.g., comparison of drug treatments for offenders: 12-step programme, cognitive-behavioural, motivational intervention, standard care. DV: no. of days drugs taken in a month.

30 Random data generated by SPSS [just like the PSY1017 W09 labs last year]: 3 samples (N=48), all from the same population. H0 [null hypothesis] (no difference between population means): TRUE.
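The same kind of simulation can be done in Python instead of SPSS; a sketch where H0 is true by construction. The population mean (6.6) and SD (1.4) here are illustrative guesses, not the slide's actual settings.

```python
# Python stand-in for the SPSS simulation: three samples drawn from the
# SAME normal population, so H0 (equal population means) is true.
# Population parameters 6.6 and 1.4 are assumed for illustration.
import random
from statistics import mean

random.seed(0)
samples = [[random.gauss(6.6, 1.4) for _ in range(16)] for _ in range(3)]

for i, s in enumerate(samples, 1):
    print(f"group {i} mean = {mean(s):.2f}")
# The three group means differ slightly purely through sampling error.
```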

31 The sample means are not all the same: due to ‘sampling error’ they vary around the overall mean of 6.61. This is between-group variability.

32 The standard deviations show how varied the individual scores are within each group: within-group variability.

33 Both between-groups variability and within-groups variability can be used to estimate the population variance. Don’t worry [for now] how this is done.

34 Estimate of the population variance based on between-groups variability (differences of group means around the overall mean) = 3.07. Estimate based on within-groups variability (how varied individual scores are within each group) = 2.02.

35 F ratio = between-groups estimate / within-groups estimate = 3.07 / 2.02 = 1.52. The two estimates are unlikely to be exactly the same, but they will be similar, and so the F ratio will be approximately 1 WHEN H0 IS TRUE.
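The two variance estimates behind the F ratio can be computed by hand; a minimal pure-Python sketch using small made-up groups (not the slide's SPSS data), assuming equal group sizes:

```python
# Minimal sketch of the two variance estimates that form the F ratio.
# Groups are invented illustration data with equal n.
from statistics import mean, variance

groups = [
    [5, 7, 6, 8, 6],   # e.g. 12-step
    [6, 5, 7, 6, 7],   # e.g. cognitive-behavioural
    [7, 6, 8, 7, 6],   # e.g. standard care
]
k = len(groups)        # number of groups
n = len(groups[0])     # scores per group

grand_mean = mean(x for g in groups for x in g)

# Between-groups estimate: spread of the group means around the grand
# mean, scaled by n (the 'mean square between').
ms_between = n * sum((mean(g) - grand_mean) ** 2 for g in groups) / (k - 1)

# Within-groups estimate: average of the sample variances
# (the 'mean square within').
ms_within = mean(variance(g) for g in groups)

F = ms_between / ms_within
print(f"F = {F:.2f}")
```

With these particular made-up data the group means are close together, so F comes out below 1, consistent with the slide's point that F is around 1 when H0 looks true.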

36 Random data generated by SPSS: 3 samples (N=48), all as before, but with +1 added to all ‘Standard Care’ scores. H0 [null hypothesis] (no difference between population means): FALSE.

37 Previous example: H0 true. New example: H0 false. Within-groups variability UNCHANGED; only between-groups variability affected.

38 Estimate of the population variance based on between-groups variability (differences of group means around the overall mean) = 3.07 [previous], 9.79 [new]. Estimate based on within-groups variability (how varied individual scores are within each group) = 2.02 [previous], 2.02 [new].

39 F ratio = between-groups estimate / within-groups estimate: 3.07 / 2.02 = 1.52 [previous]; 9.79 / 2.02 = 4.85 [new]. The F ratio will tend to be larger WHEN H0 IS FALSE, as only the between-groups estimate is affected by differences between means.

40 ANOVA is another NHST: we find the probability of getting our F ratio (or more extreme) if H0 is true. If p < 0.05, reject H0 (all the population means are the same) and so accept H1 (the population means are not all the same). NB: this doesn’t say anything about which means are different from which other ones.
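The whole procedure on these slides can be run end to end with `scipy.stats.f_oneway`, which computes the F ratio and its p-value for a one-way independent-groups ANOVA; the groups below are made-up illustration data, not the SPSS samples from the slides.

```python
# Hedged end-to-end sketch: one-way ANOVA on three invented groups.
from scipy.stats import f_oneway

g1 = [5, 7, 6, 8, 6]
g2 = [6, 5, 7, 6, 7]
g3 = [7, 6, 8, 7, 6]

F, p = f_oneway(g1, g2, g3)
print(f"F = {F:.2f}, p = {p:.3f}")
if p < 0.05:
    print("reject H0: the population means are not all the same")
else:
    print("fail to reject H0")
```

Note the decision is a single test of the whole family of means, which is how ANOVA avoids the multiple-comparisons problem; finding *which* means differ needs the follow-on analyses mentioned for next week.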

