Using Statistics in Research
Psych 231: Research Methods in Psychology
From Correlation to Regression
- Last time we "imagined" a line through the points
[Scatterplot of Y against X with an eyeballed line through the points]
From Correlation to Regression
- Compute the equation for the line that best fits the data points: Y = (X)(slope) + (intercept)
- slope = change in Y / change in X
[Scatterplot of Y against X with the best-fitting line; in this example the slope is 0.5 and the intercept is 2.0]
Regression
- Can make specific predictions about Y based on X
- Y = (X)(0.5) + (2.0)
- If X = 5, what is Y? Y = (5)(0.5) + (2.0) = 2.5 + 2 = 4.5
[Scatterplot of Y against X with the regression line; the predicted value 4.5 is marked at X = 5]
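To make the prediction step concrete, here is a minimal Python sketch (not part of the original slides): it fits a least-squares line with numpy.polyfit and predicts Y at X = 5. The data points are invented to roughly match the slide's line (slope 0.5, intercept 2.0).

# Hypothetical data points, chosen to roughly follow Y = 0.5*X + 2.0
import numpy as np

x = np.array([1, 2, 3, 4, 5, 6])
y = np.array([2.4, 3.1, 3.4, 4.1, 4.4, 5.1])

slope, intercept = np.polyfit(x, y, 1)        # least-squares best-fit line
print(f"Y = {slope:.2f} * X + {intercept:.2f}")

x_new = 5
y_pred = slope * x_new + intercept            # specific prediction for X = 5
print(f"Predicted Y at X = {x_new}: {y_pred:.2f}")   # comes out near 4.5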
Regression
- Also need a measure of error: Y = (X)(0.5) + (2.0) + error
- Same line, but different relationships (strength difference)
[Two scatterplots of Y against X with the same regression line but different amounts of scatter around it]
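The "strength difference" between the two panels can be quantified with the correlation coefficient r. Below is a small illustrative Python sketch with made-up points, constructed so that both data sets yield the same best-fit line but different r values.

# Two invented data sets that share the fitted line Y = 0.5*X + 2.0
# but differ in how tightly the points hug that line.
import numpy as np
from scipy import stats

x = np.array([1, 2, 3, 4, 5, 6], dtype=float)
wiggle = np.array([1, -1, 0, 0, -1, 1], dtype=float)  # pattern chosen so the fitted line is unchanged

for label, noise_size in (("tight scatter", 0.2), ("loose scatter", 1.2)):
    y = 0.5 * x + 2.0 + noise_size * wiggle
    slope, intercept = np.polyfit(x, y, 1)
    r, _ = stats.pearsonr(x, y)
    print(f"{label}: slope = {slope:.2f}, intercept = {intercept:.2f}, r = {r:.2f}")

The line is the same both times, but r (the strength of the relationship) drops as the scatter grows.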
Cautions with correlation & regression
- Don't make causal claims
- Don't extrapolate
- Extreme scores (outliers) can strongly influence the calculated relationship
Inferential Statistics
- Purpose: to make claims about populations based on data collected from samples
- What's the big deal? Example experiment:
  - Group A gets a treatment to improve memory
  - Group B gets no treatment (control)
  - After the treatment period, test both groups for memory
  - Results: Group A's average memory score is 80%; Group B's is 76%
- Is the 4% difference a "real" difference (statistically significant) or is it just sampling error?
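One way to see how this question gets answered is to run an independent-samples t-test, sketched below in Python with scipy. The individual scores are invented so that the group means land near 80% and 76%; this is only an illustration of the logic, not the course's data or a required method.

# Hypothetical memory scores for the treatment and control groups
from scipy import stats

group_a = [82, 78, 85, 80, 77, 81, 79, 83, 76, 79]  # treatment, mean = 80
group_b = [75, 79, 73, 78, 74, 77, 76, 80, 72, 76]  # control,   mean = 76

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A small p-value (e.g. below .05) would suggest the 4% gap is unlikely
# to be sampling error alone.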
Testing Hypotheses
- Step 1: State your hypotheses
- Step 2: Set your decision criteria
- Step 3: Collect your data from your sample(s)
- Step 4: Compute your test statistics
- Step 5: Make a decision about your null hypothesis: "Reject H0" or "Fail to reject H0"
Testing Hypotheses
- Step 1: State your hypotheses
  - Null hypothesis (H0): "There are no differences (effects)." This is the hypothesis that you are testing.
  - Alternative hypothesis(es): generally, "not all groups are equal"
  - You aren't out to prove the alternative hypothesis (although it feels like this is what you want to do)
  - If you reject the null hypothesis, then you're left with support for the alternative(s) (NOT proof!)
Testing Hypotheses
- Step 1: State your hypotheses
  - In our memory example experiment:
    - Null, H0: mean of Group A = mean of Group B
    - Alternative, HA: mean of Group A ≠ mean of Group B (or, more precisely: Group A > Group B)
  - It seems like our theory is that the treatment should improve memory. That's the alternative hypothesis. That's NOT the one that we'll test with inferential statistics. Instead, we test the H0.
Testing Hypotheses
- Step 1: State your hypotheses
- Step 2: Set your decision criteria
  - Your alpha level will be your guide for when to "reject the null hypothesis" or "fail to reject the null hypothesis"
  - Either decision could be the correct conclusion or the incorrect conclusion
  - Two different ways to go wrong:
    - Type I error: saying that there is a difference when there really isn't one (the probability of making this error is the "alpha level")
    - Type II error: saying that there is not a difference when there really is one
Error types

  The experimenter's conclusion versus the real world ('truth'):

                       H0 is correct        H0 is wrong
  Reject H0            Type I error         (correct decision)
  Fail to reject H0    (correct decision)   Type II error
Error types: Courtroom analogy

  The jury's decision versus the real world ('truth'):

                      Defendant is innocent    Defendant is guilty
  Find guilty         Type I error             (correct decision)
  Find not guilty     (correct decision)       Type II error
Error types
- Type I error: concluding that there is an effect (a difference between groups) when there really isn't one
  - Its probability, alpha, is sometimes called the "significance level"
  - We try to minimize this (keep it low) by picking a low alpha level; in psychology, 0.05 and 0.01 are most common
- Type II error: concluding that there isn't an effect when there really is one
  - Related to the statistical power of a test: how likely you are to detect a difference if it is really there
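As a rough illustration of what the alpha level controls, the simulation below (all parameter values are arbitrary assumptions) draws both groups from the same population, so H0 is actually true, and counts how often a t-test still "finds" a difference. The long-run rate of these false alarms comes out near alpha.

# Simulate many experiments in which H0 is true and tally Type I errors.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
n_experiments = 10_000
false_alarms = 0

for _ in range(n_experiments):
    a = rng.normal(loc=75, scale=10, size=20)  # both groups sampled from
    b = rng.normal(loc=75, scale=10, size=20)  # the SAME population
    _, p = stats.ttest_ind(a, b)
    if p < alpha:
        false_alarms += 1

print(f"Type I error rate ~ {false_alarms / n_experiments:.3f}")  # close to 0.05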
Testing Hypotheses
- Step 1: State your hypotheses
- Step 2: Set your decision criteria
- Step 3: Collect your data from your sample(s)
- Step 4: Compute your test statistics
  - Descriptive statistics (means, standard deviations, etc.)
  - Inferential statistics (t-tests, ANOVAs, etc.)
- Step 5: Make a decision about your null hypothesis
  - Reject H0: "statistically significant differences"
  - Fail to reject H0: "not statistically significant differences"
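A minimal sketch of how Step 2 and Step 5 connect: the p-value produced by the Step 4 test statistic is compared against the alpha level chosen in advance. The p-value below is just a placeholder number, not a real result.

alpha = 0.05     # decision criterion set in Step 2
p_value = 0.03   # would come from the t-test or ANOVA in Step 4

if p_value < alpha:
    print("Reject H0: statistically significant difference")
else:
    print("Fail to reject H0: not a statistically significant difference")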
Statistical significance
- "Statistically significant differences": when you "reject your null hypothesis"
- Essentially this means that the observed difference is above what you'd expect by chance
- "Chance" is determined by estimating how much sampling error there is
- Factors affecting "chance": sample size and population variability
Sampling error, n = 1
[Population distribution with the population mean marked and a single sampled score x; sampling error = population mean - sample mean]
Sampling error, n = 2
[Population distribution with the population mean, two sampled scores, and their sample mean; sampling error = population mean - sample mean]
Sampling error, n = 10
[Population distribution with the population mean, ten sampled scores, and their sample mean; sampling error = population mean - sample mean]
- Generally, as the sample size increases, the sampling error decreases
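A small simulation sketch of that last point (the population mean, standard deviation, and sample sizes are arbitrary assumptions): the average distance between a sample mean and the population mean shrinks as n grows.

# Average sampling error (|population mean - sample mean|) for several n.
import numpy as np

rng = np.random.default_rng(1)
pop_mean, pop_sd = 100, 15

for n in (1, 2, 10):
    sample_means = rng.normal(pop_mean, pop_sd, size=(10_000, n)).mean(axis=1)
    avg_error = np.abs(pop_mean - sample_means).mean()
    print(f"n = {n:2d}: average sampling error ~ {avg_error:.2f}")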
Sampling error
- Typically, the narrower the population distribution, the narrower the range of possible samples, and the smaller the "chance"
[Two population distributions: one with small population variability, one with large population variability]
Sampling error
- These two factors (sample size and population variability) combine to impact the distribution of sample means
- The distribution of sample means is a distribution of all possible sample means of a particular sample size that can be drawn from the population
[Diagram: a population, samples of size n with their sample means (X̄A, X̄B, X̄C, X̄D), and the resulting distribution of sample means; the average sampling error is the "chance"]
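A brief simulation sketch of the same idea (all numbers are arbitrary assumptions): the spread of the distribution of sample means narrows with larger samples and widens with a more variable population, following the standard-error rule sigma / sqrt(n).

# Spread of the distribution of sample means for combinations of
# population variability and sample size.
import numpy as np

rng = np.random.default_rng(2)

for pop_sd in (5, 20):        # small vs. large population variability
    for n in (4, 25):         # smaller vs. larger samples
        means = rng.normal(100, pop_sd, size=(10_000, n)).mean(axis=1)
        print(f"pop SD = {pop_sd:2d}, n = {n:2d}: "
              f"SD of sample means ~ {means.std():.2f} "
              f"(theory: {pop_sd / np.sqrt(n):.2f})")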
Significance
- "A statistically significant difference" means the researcher is concluding that there is a difference above and beyond chance, with the probability of making a Type I error at 5% (assuming an alpha level of 0.05)
- Note: "statistical significance" is not the same thing as theoretical significance
  - It only means that there is a statistical difference
  - It doesn't mean that it is an important difference
Non-Significance
- Failing to reject the null hypothesis
- Generally, we are not interested in "accepting the null hypothesis" (remember, we can't prove things, only disprove them)
- Usually check to see if you made a Type II error (failed to detect a difference that is really there):
  - Check the statistical power of your test (see the sketch below): perhaps the sample size is too small, or the effects you're looking for are really small
  - Check your controls; maybe there is too much variability
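One way to check power is by simulation. The sketch below assumes a hypothetical true difference of 4 points, a standard deviation of 10, and 20 participants per group, then estimates how often a t-test would detect that effect; all of these numbers are assumptions for illustration only.

# Estimate power: proportion of simulated experiments that reject H0
# when a true (assumed) effect is present.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
alpha, n_sims = 0.05, 5_000
detections = 0

for _ in range(n_sims):
    treated = rng.normal(80, 10, size=20)  # assumed true mean of 80
    control = rng.normal(76, 10, size=20)  # assumed true mean of 76
    _, p = stats.ttest_ind(treated, control)
    if p < alpha:
        detections += 1

print(f"Estimated power ~ {detections / n_sims:.2f}")
# Low power here would mean a real effect of this size is often missed (a Type II error).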
Next time: Inferential Statistical Tests
- Different statistical tests: a "generic test", the t-test, and Analysis of Variance (ANOVA)
- Have a great Thanksgiving break