Inferential Statistics Psych 231: Research Methods in Psychology
Announcements
- Final draft of the class experiment is due in labs this week.
- Poster presentations are in labs next week.
Inferential Statistics
- Purpose: to make claims about populations based on data collected from samples.
- What's the big deal? Example experiment:
  - Group A gets a treatment intended to improve memory.
  - Group B gets no treatment (control).
  - After the treatment period, both groups take a memory test.
- Results: Group A's average memory score is 80%; Group B's is 76%.
- Is the 4% difference a "real" difference (statistically significant), or is it just sampling error?
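To make the question concrete, here is a minimal sketch of how such a two-group comparison could be run in Python. The score arrays are simulated, hypothetical data (not the class experiment), and `scipy.stats.ttest_ind` stands in for whatever test the actual design calls for.

```python
# Minimal sketch of the memory-experiment comparison using SciPy.
# The score arrays below are made-up illustration data, not the slide's actual dataset.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=80, scale=8, size=25)  # treatment group scores (%)
group_b = rng.normal(loc=76, scale=8, size=25)  # control group scores (%)

t_stat, p_value = stats.ttest_ind(group_a, group_b)  # independent-samples t-test
print(f"Mean A = {group_a.mean():.1f}, Mean B = {group_b.mean():.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A small p-value suggests the 4-point gap is unlikely to be sampling error alone.
```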
Testing Hypotheses
- Step 1: State your hypotheses.
  - Null hypothesis (H0): this is the hypothesis that you are testing.
  - Alternative hypothesis (HA).
- Step 2: Set your decision criteria: the alpha level (the probability of a Type I error).
- Step 3: Collect your data from your sample(s).
- Step 4: Compute your test statistic.
- Step 5: Make a decision about your null hypothesis.
  - Reject H0: "statistically significant differences."
  - Fail to reject H0: "not statistically significant differences."
- Possible outcomes (rows are the experimenter's conclusions, columns are the real world, the "truth"):

  Conclusion        | H0 is correct    | H0 is wrong
  Reject H0         | Type I error     | correct decision
  Fail to reject H0 | correct decision | Type II error
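The decision in Step 5 comes down to comparing the p-value from Step 4 against the alpha level chosen in Step 2. A minimal sketch of that decision rule (the p-value here is just a placeholder):

```python
# Decision rule sketch: compare an obtained p-value to the alpha criterion.
alpha = 0.05      # Step 2: decision criterion (Type I error rate we will tolerate)
p_value = 0.03    # placeholder; in practice this comes from the test in Step 4

if p_value < alpha:
    print("Reject H0: the difference is statistically significant.")
else:
    print("Fail to reject H0: the difference is not statistically significant.")
```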
“Generic” statistical test
- The generic test statistic is a ratio of sources of variability:

  Computed test statistic = Observed difference / Difference expected by chance
                          = (TR + ID + ER) / (ID + ER)

- TR: the effect of the treatment
- ID: individual differences (if a between-subjects factor)
- ER: random sampling error
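A toy numeric illustration of this ratio; the variability values are invented purely to show the logic that the statistic hovers near 1 when there is no treatment effect and grows when there is one:

```python
# Toy illustration of the generic test-statistic ratio; all numbers are invented.
ID = 4.0   # variability from individual differences
ER = 2.0   # variability from random sampling error

# If the treatment has no effect (TR = 0), the ratio hovers around 1.
no_effect = (0.0 + ID + ER) / (ID + ER)      # = 1.0

# If the treatment adds variability (TR > 0), the ratio grows above 1.
with_effect = (6.0 + ID + ER) / (ID + ER)    # = 2.0

print(no_effect, with_effect)
```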
Some inferential statistical tests
- 1 factor with two groups: t-tests
  - Between groups: 2 independent samples
  - Within groups: repeated measures samples (matched, related)
- 1 factor with more than two groups: analysis of variance (ANOVA), either between groups or repeated measures
- Multi-factorial: factorial ANOVA
- Correlation and regression
T-test
- Design: 2 separate experimental conditions.
- Degrees of freedom: based on the size of the sample and the kind of t-test.
- Formula:

  t = (X̄1 − X̄2) / (difference expected by chance)

  The numerator is the observed difference; the denominator is based on sampling error.
- The computation differs for between-subjects and within-subjects t-tests.
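A minimal sketch of the between-subjects version of this formula, computed both by hand and with `scipy.stats.ttest_ind`; the score arrays are invented example data, not data from the course:

```python
# Sketch: independent-samples t-test as (observed difference) / (difference by chance).
# The two arrays are made-up example scores, not data from the slides.
import numpy as np
from scipy import stats

x1 = np.array([84, 79, 88, 75, 81, 90, 77, 83])   # condition 1
x2 = np.array([72, 70, 78, 69, 74, 80, 71, 76])   # condition 2

diff_observed = x1.mean() - x2.mean()

# Pooled standard error of the difference (the "difference expected by chance").
n1, n2 = len(x1), len(x2)
sp2 = ((n1 - 1) * x1.var(ddof=1) + (n2 - 1) * x2.var(ddof=1)) / (n1 + n2 - 2)
se_diff = np.sqrt(sp2 * (1 / n1 + 1 / n2))

t_manual = diff_observed / se_diff
df = n1 + n2 - 2                                   # degrees of freedom for this design

t_scipy, p = stats.ttest_ind(x1, x2)               # should match t_manual
print(f"t({df}) = {t_manual:.2f} (SciPy: {t_scipy:.2f}), p = {p:.4f}")
```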
T-test: reporting your results
- The observed difference between conditions
- The kind of t-test
- The computed t-statistic
- The degrees of freedom for the test
- The "p-value" of the test
- Example (independent samples): "The mean of the treatment group was 12 points higher than that of the control group. An independent samples t-test yielded a significant difference, t(24) = 5.67, p < 0.05."
- Example (repeated measures): "The mean score on the post-test was 12 points higher than on the pre-test. A repeated measures t-test demonstrated that this difference was significant, t(12) = 5.67, p < 0.05."
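If the test is run in software, the report sentence can be assembled directly from the output. A small sketch, with placeholder values standing in for real results:

```python
# Sketch: assembling a report sentence from t-test output.
# t_value, df, and p are placeholders standing in for results computed elsewhere.
t_value, df, p = 5.67, 24, 0.012

sig_text = "significant" if p < 0.05 else "non-significant"
p_text = "p < 0.05" if p < 0.05 else f"p = {p:.2f}"

report = (f"An independent samples t-test yielded a {sig_text} difference, "
          f"t({df}) = {t_value:.2f}, {p_text}.")
print(report)
```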
Analysis of Variance
- Designs: more than two groups.
  - 1-factor ANOVA, factorial ANOVA
  - Both within-groups and between-groups factors
- The test statistic is an F-ratio.
- Degrees of freedom: several to keep track of; the number of them depends on the design.
Analysis of Variance
- With more than two groups we can't just compute a single difference score, since there is more than one difference.
- So we use variance instead of a simple difference; variance is essentially an average difference.

  F-ratio = Observed variance / Variance expected by chance
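A minimal sketch of this F-ratio computed with `scipy.stats.f_oneway`; the three groups are invented illustration data:

```python
# Sketch: one-way ANOVA for three groups with scipy.stats.f_oneway.
# The three score lists are invented illustration data.
from scipy import stats

group_a = [80, 82, 79, 85, 81, 78]
group_b = [76, 74, 77, 73, 75, 78]
group_c = [75, 77, 74, 76, 73, 75]

f_ratio, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_ratio:.2f}, p = {p_value:.4f}")
# A large F means the variance between group means is large relative to
# the variance expected from chance (the within-group variability).
```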
1-factor ANOVA
- 1 factor with more than two levels.
- We can't just compute a single difference score, since there is more than one difference: A − B, B − C, and A − C.
1-factor ANOVA
- Null hypothesis (H0): all the groups are equal.
  X̄A = X̄B = X̄C
- Alternative hypothesis (HA): not all the groups are equal. The ANOVA tests this one.
- Further tests are needed to pick between the specific patterns:
  X̄A ≠ X̄B ≠ X̄C
  X̄A ≠ X̄B = X̄C
  X̄A = X̄B ≠ X̄C
  X̄A = X̄C ≠ X̄B
1-factor ANOVA
- Planned contrasts and post-hoc tests: further tests used to rule out the different alternative hypotheses:
  X̄A ≠ X̄B ≠ X̄C
  X̄A ≠ X̄B = X̄C
  X̄A = X̄B ≠ X̄C
  X̄A = X̄C ≠ X̄B
- Test 1: A ≠ B
- Test 2: A ≠ C
- Test 3: B = C
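One common way to run these follow-up tests is pairwise t-tests with a corrected alpha level. The sketch below assumes a Bonferroni correction (one standard choice, not necessarily the one used in lab) and reuses the invented group data from the ANOVA sketch above:

```python
# Sketch: pairwise follow-up t-tests after a significant one-way ANOVA.
# Bonferroni correction is used here as one common (assumed) choice of adjustment;
# the group data are the same invented scores as the ANOVA sketch above.
from itertools import combinations
from scipy import stats

groups = {
    "A": [80, 82, 79, 85, 81, 78],
    "B": [76, 74, 77, 73, 75, 78],
    "C": [75, 77, 74, 76, 73, 75],
}

pairs = list(combinations(groups, 2))     # ("A","B"), ("A","C"), ("B","C")
alpha_corrected = 0.05 / len(pairs)       # Bonferroni-adjusted criterion

for name1, name2 in pairs:
    t, p = stats.ttest_ind(groups[name1], groups[name2])
    verdict = "differ" if p < alpha_corrected else "do not differ"
    print(f"{name1} vs {name2}: t = {t:.2f}, p = {p:.4f} -> groups {verdict}")
```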
1-factor ANOVA: reporting your results
- The observed differences
- The kind of test
- The computed F-ratio
- The degrees of freedom for the test
- The "p-value" of the test
- Any post-hoc or planned comparison results
- Example: "The mean score of Group A was 12, Group B was 25, and Group C was 27. A 1-way between groups ANOVA was conducted and the results yielded a significant difference, F(2, 25) = 5.67, p < 0.05. Post hoc tests revealed that the differences between Groups A and B and Groups A and C were statistically reliable (respectively t(1) = 5.67, p < 0.05 and t(1) = 6.02, p < 0.05). Groups B and C did not differ significantly from one another."
Factorial ANOVAs
- We covered much of this in our experimental design lecture.
- More than one factor; factors may be within or between subjects.
- The overall design may be entirely within, entirely between, or mixed.
- Many F-ratios may be computed:
  - An F-ratio is computed to test the main effect of each factor.
  - An F-ratio is computed to test each of the potential interactions between the factors.
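As a sketch of how the several F-ratios come out of one analysis, here is a 2 x 2 completely between-subjects example using statsmodels; the library choice, column names, and data are all assumptions made for illustration:

```python
# Sketch: a 2 x 2 completely between-subjects factorial ANOVA using statsmodels
# (one common tool choice; the column names and data frame are invented).
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.DataFrame({
    "score": [80, 82, 79, 85, 70, 72, 69, 75, 76, 74, 77, 73, 65, 67, 64, 66],
    "memory_strategy": ["imagery"] * 8 + ["rote"] * 8,   # factor 1, between subjects
    "delay": (["short"] * 4 + ["long"] * 4) * 2,         # factor 2, between subjects
})

# One F-ratio per main effect, plus one for the interaction.
model = smf.ols("score ~ C(memory_strategy) * C(delay)", data=df).fit()
print(anova_lm(model, typ=2))
```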
Factorial ANOVAs: reporting your results
- The observed differences: because there may be a lot of these, they may be presented in a table instead of directly in the text.
- The kind of design, e.g. a "2 x 2 completely between factorial design."
- The computed F-ratios: you may see separate paragraphs for each factor and for the interactions.
- The degrees of freedom for the test: each F-ratio will have its own set of dfs.
- The "p-value" of the test: you may want to just say "all tests were tested with an alpha level of 0.05."
- Any post-hoc or planned comparison results: typically only the theoretically interesting comparisons are presented.
Relationships between variables
- Properties of a correlation:
  - Form (linear or non-linear)
  - Direction (positive or negative)
  - Strength (none, weak, strong, perfect)
- To examine this relationship you should:
  - Make a scatterplot.
  - Compute the correlation coefficient.
- Reporting your results:
  - The kind of test
  - The computed correlation
  - The degrees of freedom for the test
  - The "p-value" of the test
  - Example: "The relationship between Variable A and Variable B yielded a significant linear correlation, r(25) = 0.67, p < 0.05."
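A minimal sketch of both steps, the scatterplot and the correlation coefficient, using `scipy.stats.pearsonr` on invented data (27 points, so df = 25 as in the example sentence):

```python
# Sketch: scatterplot plus Pearson correlation for two invented variables.
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
variable_a = rng.normal(50, 10, size=27)
variable_b = 0.6 * variable_a + rng.normal(0, 8, size=27)   # built to correlate positively

r, p = stats.pearsonr(variable_a, variable_b)
df = len(variable_a) - 2                  # df for a correlation is N - 2
print(f"r({df}) = {r:.2f}, p = {p:.4f}")

plt.scatter(variable_a, variable_b)       # check form, direction, and strength by eye
plt.xlabel("Variable A")
plt.ylabel("Variable B")
plt.show()
```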
Regression
- Compute the equation for the line that best fits the data points:

  Y = (slope)(X) + (intercept)

  slope = (change in Y) / (change in X)

- For the line in the example plot, the slope is 0.5 and the intercept is 2.0.
Regression
- We can make specific predictions about Y based on X. With the example line Y = (0.5)(X) + 2.0, what is Y when X = 5?

  Y = (0.5)(5) + 2.0 = 2.5 + 2.0 = 4.5
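A minimal sketch of the same fit and prediction using `scipy.stats.linregress`; the points are constructed to lie exactly on the slide's line Y = 0.5X + 2.0:

```python
# Sketch: fitting the example line with scipy.stats.linregress.
# The points are constructed to fall exactly on Y = 0.5 * X + 2.0 from the slide.
import numpy as np
from scipy import stats

x = np.array([1, 2, 3, 4, 5, 6])
y = 0.5 * x + 2.0                      # 2.5, 3.0, 3.5, 4.0, 4.5, 5.0

fit = stats.linregress(x, y)
print(f"slope = {fit.slope:.2f}, intercept = {fit.intercept:.2f}")  # 0.50, 2.00

x_new = 5
y_pred = fit.slope * x_new + fit.intercept
print(f"Predicted Y at X = {x_new}: {y_pred:.1f}")                  # 4.5
```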
Regression
- We also need a measure of error:

  Y = (0.5)(X) + 2.0 + error

- Two data sets can share the same best-fitting line but differ in how tightly the points cluster around it: same line, but different relationships (a strength difference).
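A sketch of this strength difference: two invented data sets built around the same line but with different amounts of scatter, compared by their correlation and standard error of estimate:

```python
# Sketch: the same underlying line with different amounts of scatter.
# The noise levels are arbitrary, chosen only to illustrate a strength difference.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = np.linspace(1, 6, 50)

tight = 0.5 * x + 2.0 + rng.normal(0, 0.1, size=x.size)   # small error term
loose = 0.5 * x + 2.0 + rng.normal(0, 1.0, size=x.size)   # large error term

for label, y in [("tight", tight), ("loose", loose)]:
    fit = stats.linregress(x, y)
    residuals = y - (fit.slope * x + fit.intercept)
    see = np.sqrt(np.sum(residuals ** 2) / (len(x) - 2))   # standard error of estimate
    print(f"{label}: slope = {fit.slope:.2f}, r = {fit.rvalue:.2f}, error = {see:.2f}")
```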
Reporting regression results
- The kind of test (there are a variety of different regression procedures)
- The computed slope(s) and intercept (the "parameters")
- A measure of the error (there are a variety of these that may be reported)
- The degrees of freedom for the test(s)
- The "p-value" of the tests