Using Statistics in Research
Psych 231: Research Methods in Psychology
Announcements
– Final drafts of the class project are due in labs this week
– Remember to turn in the rough draft with the final draft
– Also turn in the checklist from the PIP packet
“Generic” statistical test
Tests the question:
– Are there differences between groups due to a treatment (comparing sample means X̄A and X̄B)?
Two possibilities in the “real world”:
– H0 is true (no treatment effect): one population, two samples
– H0 is false (there is a treatment effect): two populations, two samples
“Generic” statistical test
Why might the samples (X̄A and X̄B) be different? What is the source of the variability between groups?
– ER: random sampling error
– ID: individual differences (if a between-subjects factor)
– TR: the effect of a treatment
“Generic” statistical test
The generic test statistic is a ratio of sources of variability:

Computed test statistic = Observed difference / Difference expected by chance = (TR + ID + ER) / (ID + ER)

– ER: random sampling error
– ID: individual differences
– TR: the effect of a treatment
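To make the ratio concrete, here is a minimal simulation sketch (the group sizes, means, and variability below are assumptions for illustration, not course data): when H0 is true, TR = 0, so the observed difference is no bigger than what chance alone produces and the computed statistic tends to be small.

```python
import numpy as np

rng = np.random.default_rng(0)  # arbitrary seed for reproducibility

# Hypothetical scores with no treatment effect (TR = 0): only ID + ER
group_a = rng.normal(loc=50, scale=10, size=25)
group_b = rng.normal(loc=50, scale=10, size=25)

observed_diff = group_a.mean() - group_b.mean()        # TR + ID + ER (TR = 0 here)
chance_diff = np.sqrt(group_a.var(ddof=1) / 25
                      + group_b.var(ddof=1) / 25)      # ID + ER, estimated from within-group variability

print(observed_diff / chance_diff)  # a t-like ratio; tends to stay small when H0 is true
```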
“Generic” statistical test
The distribution of the generic test statistic:
– To reject H0, you want a computed test statistic that is large
  – A large value reflects a large treatment effect (TR)
– What’s large enough? The alpha level gives us the decision criterion
  – The alpha level determines where the boundaries between “reject H0” and “fail to reject H0” go
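As a sketch of how the alpha level sets those boundaries, scipy can look up the critical value of a t distribution (the df value below is an arbitrary example, not from the slides):

```python
from scipy import stats

alpha = 0.05
df = 24  # example degrees of freedom

# Two-tailed decision boundaries: reject H0 if the computed statistic falls beyond them
critical = stats.t.ppf(1 - alpha / 2, df)
print(f"Reject H0 if |t| > {critical:.2f}, otherwise fail to reject H0")  # about 2.06 for df = 24
```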
“Generic” statistical test
Things that affect the computed test statistic:
– Size of the treatment effect: the bigger the effect, the bigger the computed test statistic
– Difference expected by chance (sampling error), which depends on:
  – Sample size
  – Variability in the population
Effect of sample size on sampling error
[Figures: a population distribution with its mean marked, and samples of n = 1, n = 2, and n = 10 drawn from it; sampling error = population mean - sample mean]
– Generally, as the sample size increases, the sampling error decreases
Effect of sample size on sampling error
– Typically, the narrower the population distribution, the narrower the range of possible samples, and the smaller the “chance” differences
[Figures: a distribution with small population variability vs. one with large population variability]
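A small simulation sketch of both points (the population mean, standard deviations, and sample sizes are made-up values): average sampling error shrinks as n grows, and grows with population variability.

```python
import numpy as np

rng = np.random.default_rng(1)
pop_mean = 100

def mean_sampling_error(n, pop_sd, reps=10_000):
    """Average |population mean - sample mean| over many samples of size n."""
    samples = rng.normal(pop_mean, pop_sd, size=(reps, n))
    return np.abs(pop_mean - samples.mean(axis=1)).mean()

for n in (1, 2, 10):
    print(f"n = {n:2d}: average sampling error = {mean_sampling_error(n, pop_sd=15):.2f}")

print("narrow population:", round(mean_sampling_error(10, pop_sd=5), 2))
print("wide population:  ", round(mean_sampling_error(10, pop_sd=30), 2))
```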
Some inferential statistical tests
1 factor with two groups
– t-tests
  – Between groups: 2 independent samples
  – Within groups: repeated measures samples (matched, related)
1 factor with more than two groups
– Analysis of Variance (ANOVA) (either between groups or repeated measures)
Multi-factorial
– Factorial ANOVA
T-test
Design
– 2 separate experimental conditions
– Degrees of freedom: based on the size of the sample and the kind of t-test
Formula:

t = (X̄1 - X̄2) / (difference expected by chance)

– The numerator is the observed difference between conditions; the denominator is based on sampling error
– The computation differs for between-groups and within-groups t-tests
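A hedged example of both kinds of t-test using scipy (all scores below are made up for illustration):

```python
from scipy import stats

# Between groups: 2 independent samples (hypothetical scores)
control = [12, 15, 11, 14, 13, 16, 12, 15]
treatment = [24, 27, 23, 26, 25, 28, 24, 27]

t_ind, p_ind = stats.ttest_ind(treatment, control)
print(f"independent samples: t({len(control) + len(treatment) - 2}) = {t_ind:.2f}, p = {p_ind:.4f}")

# Within groups: repeated measures (same participants measured twice)
pre = [10, 12, 11, 13, 12, 14]
post = [15, 17, 15, 18, 16, 19]

t_rel, p_rel = stats.ttest_rel(post, pre)
print(f"repeated measures:   t({len(pre) - 1}) = {t_rel:.2f}, p = {p_rel:.4f}")
```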
T-test
Reporting your results:
– The observed difference between conditions
– Kind of t-test
– Computed t-statistic
– Degrees of freedom for the test
– The “p-value” of the test
Examples:
– “The mean of the treatment group was 12 points higher than that of the control group. An independent samples t-test yielded a significant difference, t(24) = 5.67, p < 0.05.”
– “The mean score on the post-test was 12 points higher than on the pre-test. A repeated measures t-test demonstrated that this difference was significant, t(12) = 5.67, p < 0.05.”
Analysis of Variance (ANOVA)
Designs
– More than two groups (e.g., conditions with means X̄A, X̄B, and X̄C)
– 1 factor ANOVA, factorial ANOVA
– Both within-groups and between-groups factors
Test statistic is an F-ratio
Degrees of freedom
– Several to keep track of
– The number of them depends on the design
Analysis of Variance
More than two groups
– Now we can’t just compute a simple difference score, since there is more than one difference
– So we use variance instead of a simple difference; variance is essentially an average difference

F-ratio = Observed variance / Variance from chance
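To show what “variance instead of a difference” looks like, here is a by-hand sketch of the F-ratio for three hypothetical groups (the scores are invented): the numerator is how much the group means vary around the grand mean, the denominator is the average variability of scores within their own groups.

```python
import numpy as np

# Hypothetical scores for three groups A, B, and C
groups = [np.array([12, 14, 11, 13]),
          np.array([25, 27, 24, 26]),
          np.array([27, 29, 26, 28])]

all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()
k = len(groups)        # number of groups
n = len(groups[0])     # scores per group (equal here)

# "Observed variance": variability of the group means around the grand mean
ms_between = n * sum((g.mean() - grand_mean) ** 2 for g in groups) / (k - 1)

# "Variance from chance": variability of scores within their own group
ms_within = sum(((g - g.mean()) ** 2).sum() for g in groups) / (len(all_scores) - k)

print("F =", ms_between / ms_within)  # a large F means the groups differ more than chance predicts
```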
1 factor ANOVA
1 factor, with more than two levels
– Now we can’t just compute a simple difference score, since there is more than one difference: A - B, B - C, and A - C
1 factor ANOVA
Null hypothesis:
– H0: all the groups are equal (X̄A = X̄B = X̄C)
Alternative hypotheses:
– HA: not all the groups are equal
  – X̄A ≠ X̄B ≠ X̄C
  – X̄A ≠ X̄B = X̄C
  – X̄A = X̄B ≠ X̄C
  – X̄A = X̄C ≠ X̄B
– The ANOVA tests H0 against “not all the groups are equal”; further tests are needed to pick between the specific alternatives
1 factor ANOVA
Planned contrasts and post-hoc tests:
– Further tests used to rule out the different alternative hypotheses
  – X̄A ≠ X̄B ≠ X̄C
  – X̄A ≠ X̄B = X̄C
  – X̄A = X̄B ≠ X̄C
  – X̄A = X̄C ≠ X̄B
– e.g., Test 1: A ≠ B; Test 2: A ≠ C; Test 3: B = C
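A sketch of a 1-factor ANOVA followed by pairwise post-hoc t-tests in scipy (the data are the same invented groups as above; the Bonferroni correction is one common choice, not something the slides specify):

```python
from scipy import stats

a = [12, 14, 11, 13]   # hypothetical Group A scores
b = [25, 27, 24, 26]   # hypothetical Group B scores
c = [27, 29, 26, 28]   # hypothetical Group C scores

f_stat, p_val = stats.f_oneway(a, b, c)
print(f"F(2, {len(a) + len(b) + len(c) - 3}) = {f_stat:.2f}, p = {p_val:.4f}")

# Post-hoc pairwise comparisons, Bonferroni-corrected for the number of tests
pairs = {"A vs B": (a, b), "A vs C": (a, c), "B vs C": (b, c)}
corrected_alpha = 0.05 / len(pairs)
for label, (g1, g2) in pairs.items():
    t, p = stats.ttest_ind(g1, g2)
    verdict = "significant" if p < corrected_alpha else "not significant"
    print(f"{label}: t({len(g1) + len(g2) - 2}) = {t:.2f}, p = {p:.4f} ({verdict})")
```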
1 factor ANOVA
Reporting your results:
– The observed differences
– Kind of test
– Computed F-ratio
– Degrees of freedom for the test
– The “p-value” of the test
– Any post-hoc or planned comparison results
Example:
– “The mean score of Group A was 12, Group B was 25, and Group C was 27. A 1-way ANOVA was conducted and the results yielded a significant difference, F(2,25) = 5.67, p < 0.05. Post hoc tests revealed that the differences between groups A and B and between groups A and C were statistically reliable (respectively t(1) = 5.67, p < 0.05 and t(1) = 6.02, p < 0.05). Groups B and C did not differ significantly from one another.”
Factorial ANOVAs
We covered much of this in our experimental design lecture
More than one factor
– Factors may be within or between subjects
– The overall design may be entirely within, entirely between, or mixed
Many F-ratios may be computed
– An F-ratio is computed to test the main effect of each factor
– An F-ratio is computed to test each of the potential interactions between the factors
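One way to get all of these F-ratios in Python is statsmodels’ formula interface; the sketch below uses an invented 2 x 2 completely between-subjects dataset, and the factor and column names are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(2)

# Hypothetical 2 x 2 completely between-subjects design, 10 participants per cell
data = pd.DataFrame({
    "factor_a": np.repeat(["a1", "a2"], 20),
    "factor_b": np.tile(np.repeat(["b1", "b2"], 10), 2),
})
data["score"] = rng.normal(50, 10, size=len(data)) + np.where(data["factor_a"] == "a2", 5, 0)

# One F-ratio per main effect plus one for the A x B interaction
model = ols("score ~ C(factor_a) * C(factor_b)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))
```

The output table lists one row (and one F-ratio) for each main effect and one for the interaction, which is what the write-up then reports factor by factor.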
Factorial ANOVA
Reporting your results:
– The observed differences
  – Because there may be a lot of these, you may present them in a table instead of directly in the text
– Kind of design
  – e.g., “2 x 2 completely between-subjects factorial design”
– Computed F-ratios
  – You may see separate paragraphs for each factor and for the interactions
– Degrees of freedom for the test
  – Each F-ratio will have its own set of dfs
– The “p-value” of the test
  – You may want to just say “all tests were conducted with an alpha level of 0.05”
– Any post-hoc or planned comparison results
  – Typically only the theoretically interesting comparisons are presented