
Descriptive Statistics Psych 231: Research Methods in Psychology.




1 Descriptive Statistics Psych 231: Research Methods in Psychology

2 Statistics Why do we use them? Descriptive statistics: used to describe, simplify, & organize data sets; describing distributions of scores. Inferential statistics: used to test claims about the population, based on data gathered from samples; takes sampling error into account: are the results above and beyond what you'd expect by random chance?


4 Describing Distributions Properties of a distribution: Shape. Center: mode, median, mean. Spread (variability): how similar/dissimilar are the scores in the distribution? Standard deviation (variance), range.

5 Variability Low variability: the scores are fairly similar. High variability: the scores are fairly dissimilar.

6 Spread (Variability) How similar are the scores? Range: the maximum value minus the minimum value. Only takes two scores from the distribution into account; influenced by extreme values (outliers). Standard deviation (SD): (essentially) the average amount that the scores in the distribution deviate from the mean. Takes all of the scores into account; also influenced by extreme values (but not as much as the range). Variance: the standard deviation squared.

7 Standard deviation The standard deviation is the most popular and most important measure of variability. The standard deviation measures how far off all of the individuals in the distribution are from a standard, where that standard is the mean (μ) of the distribution. Essentially, it is the average of the deviations.

8 An Example: Computing the Mean Our population: 2, 4, 6, 8. The mean is μ = (2 + 4 + 6 + 8) / 4 = 5.

9 An Example: Computing Standard Deviation (population) Step 1: To get a measure of the deviation, we subtract the population mean from every individual in our distribution (X − μ = deviation score). Our population: 2, 4, 6, 8. First score: 2 − 5 = −3

10 An Example: Computing Standard Deviation (population) Step 1 (continued): 4 − 5 = −1

11 An Example: Computing Standard Deviation (population) Step 1 (continued): 6 − 5 = +1

12 An Example: Computing Standard Deviation (population) Step 1 (continued): 8 − 5 = +3. Notice that if you add up all of the deviations, they must equal 0.

13 An Example: Computing Standard Deviation (population) Step 2: Get rid of the negative signs by squaring the deviations and summing them: the sum of squared deviations (SS). SS = Σ(X − μ)² = (−3)² + (−1)² + (+1)² + (+3)² = 9 + 1 + 1 + 9 = 20

14 An Example: Computing Standard Deviation (population) Step 3: Compute the variance, which is simply the average of the squared deviations. To get this average, divide the SS by the number of individuals in the population: variance = σ² = SS/N = 20/4 = 5

15 An Example: Computing Standard Deviation (population) Step 4: Compute the standard deviation by taking the square root of the population variance: standard deviation = σ = √(SS/N) = √5 ≈ 2.24

16 An Example: Computing Standard Deviation (population) To review: Step 1: Compute the deviation scores. Step 2: Compute the SS. Step 3: Determine the variance: take the average of the squared deviations (divide the SS by N). Step 4: Determine the standard deviation: take the square root of the variance.
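The four steps above can be sketched in a few lines of Python, using the slides' population {2, 4, 6, 8}:

```python
import math

# A minimal sketch of Steps 1-4 for the population {2, 4, 6, 8}.
population = [2, 4, 6, 8]
N = len(population)
mu = sum(population) / N                   # mean = 5.0

# Step 1: deviation scores (note that they sum to 0)
deviations = [x - mu for x in population]  # [-3.0, -1.0, 1.0, 3.0]

# Step 2: sum of squared deviations (SS)
ss = sum(d ** 2 for d in deviations)       # 20.0

# Step 3: variance = SS / N
variance = ss / N                          # 5.0

# Step 4: standard deviation = square root of the variance
sigma = math.sqrt(variance)                # ~2.236
```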

17 An Example: Computing Standard Deviation (sample) To review: Step 1: Compute the deviation scores. Step 2: Compute the SS. Step 3: Determine the variance: take the average of the squared deviations (divide the SS by n − 1). Step 4: Determine the standard deviation: take the square root of the variance. Dividing by n − 1 is done because samples are biased to be less variable than the population; this "correction factor" increases the sample's SD, making it a better estimate of the population's SD.
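Treating the same numbers as a sample shows the effect of the n − 1 correction:

```python
import math

# The same numbers treated as a *sample*: divide SS by n - 1 rather
# than n, which enlarges the estimate of the population SD.
sample = [2, 4, 6, 8]
n = len(sample)
mean = sum(sample) / n
ss = sum((x - mean) ** 2 for x in sample)   # 20.0

sample_variance = ss / (n - 1)              # 20 / 3, about 6.667
sample_sd = math.sqrt(sample_variance)      # ~2.582, vs ~2.236 with /n
```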

18 Statistics Why do we use them? Descriptive statistics: used to describe, simplify, & organize data sets; describing distributions of scores. Inferential statistics: used to test claims about the population, based on data gathered from samples; takes sampling error into account: are the results above and beyond what you'd expect by random chance?

19 Inferential Statistics Purpose: to make claims about populations based on data collected from samples. What's the big deal? Example experiment: Group A gets a treatment to improve memory; Group B gets no treatment (control). After the treatment period, test both groups for memory. Results: Group A's average memory score is 80%; Group B's is 76%. Is the 4% difference a "real" difference (statistically significant), or is it just sampling error?

20 Testing Hypotheses Step 1: State your hypotheses. Step 2: Set your decision criteria. Step 3: Collect your data from your sample(s). Step 4: Compute your test statistics. Step 5: Make a decision about your null hypothesis ("Reject H0" or "Fail to reject H0").

21 Testing Hypotheses Step 1: State your hypotheses. Null hypothesis (H0): "there are no differences (effects)." This is the hypothesis that you are testing. Alternative hypothesis(es): generally, "not all groups are equal." You aren't out to prove the alternative hypothesis (although it feels like this is what you want to do). If you reject the null hypothesis, then you're left with support for the alternative(s) (NOT proof!).

22 Testing Hypotheses Step 1: State your hypotheses. In our memory example experiment: Null H0: mean of Group A = mean of Group B. Alternative HA: mean of Group A ≠ mean of Group B (or, more precisely: Group A > Group B). It seems like our theory is that the treatment should improve memory. That's the alternative hypothesis, and it's NOT the one we'll test with inferential statistics. Instead, we test the H0.

23 Testing Hypotheses Step 2: Set your decision criteria. Your alpha level will be your guide for when to "reject the null hypothesis" or "fail to reject the null hypothesis." Either could be the correct conclusion or the incorrect conclusion. There are two different ways to go wrong. Type I error: saying that there is a difference when there really isn't one (the probability of making this error is the "alpha level"). Type II error: saying that there is not a difference when there really is one.

24 Error types Real world ('truth'): H0 is correct, or H0 is wrong. Experimenter's conclusions: reject H0, or fail to reject H0. Rejecting H0 when H0 is correct is a Type I error; failing to reject H0 when H0 is wrong is a Type II error.

25 Error types: Courtroom analogy Real world ('truth'): the defendant is innocent, or the defendant is guilty. Jury's decision: find guilty, or find not guilty. Finding an innocent defendant guilty is a Type I error; finding a guilty defendant not guilty is a Type II error.

26 Error types Type I error: concluding that there is an effect (a difference between groups) when there really isn't one. The probability of making this error is alpha, sometimes called the "significance level." We try to minimize it (keep it low) by picking a low level of alpha; in psychology, 0.05 and 0.01 are most common. Type II error: concluding that there isn't an effect when there really is one. Related to the statistical power of a test: how likely you are to detect a difference if it is there.

27 Testing Hypotheses Step 1: State your hypotheses. Step 2: Set your decision criteria. Step 3: Collect your data from your sample(s). Step 4: Compute your test statistics: descriptive statistics (means, standard deviations, etc.) and inferential statistics (t-tests, ANOVAs, etc.). Step 5: Make a decision about your null hypothesis: reject H0 ("statistically significant differences") or fail to reject H0 ("no statistically significant differences").

28 Statistical significance "Statistically significant differences" are what you claim when you "reject your null hypothesis." Essentially this means that the observed difference is above what you'd expect by chance. "Chance" is determined by estimating how much sampling error there is. Factors affecting "chance": sample size and population variability.

29 Sampling error n = 1: a single score drawn from the population distribution. Sampling error = population mean − sample mean.

30 Sampling error n = 2: the sample mean of two scores drawn from the population distribution. Sampling error = population mean − sample mean.

31 Sampling error n = 10: the sample mean of ten scores drawn from the population distribution. Generally, as the sample size increases, the sampling error decreases.
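The shrinking of sampling error with sample size can be illustrated with the standard error of the mean: for a population with standard deviation sigma, the typical sampling error of a sample mean is sigma / √n. The sigma value below is made up for illustration.

```python
import math

# Typical sampling error of a sample mean: sigma / sqrt(n).
# sigma = 10.0 is an assumed, illustrative population SD.
sigma = 10.0
standard_error = {n: sigma / math.sqrt(n) for n in (1, 2, 10)}
# n=1 -> 10.0, n=2 -> ~7.07, n=10 -> ~3.16: bigger samples, less error
```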

32 Sampling error Typically, the narrower the population distribution (small population variability), the narrower the range of possible samples, and the smaller the "chance." (Figure: population distributions with small vs. large variability.)

33 Sampling error These two factors combine to impact the distribution of sample means. The distribution of sample means is the distribution of all possible sample means of a particular sample size (n) that can be drawn from the population. (Diagram: samples of size n are drawn from the population; their means X̄A, X̄B, X̄C, X̄D form the distribution of sample means, whose average sampling error is "chance.")

34 Significance "A statistically significant difference" means: the researcher is concluding that there is a difference above and beyond chance, with the probability of making a Type I error at 5% (assuming an alpha level of 0.05). Note that "statistical significance" is not the same thing as theoretical significance: it only means that there is a statistical difference, not that it is an important difference.

35 Non-Significance Failing to reject the null hypothesis. Generally, we are not interested in "accepting the null hypothesis" (remember: we can't prove things, only disprove them). Usually check to see if you made a Type II error (failed to detect a difference that is really there): check the statistical power of your test (maybe the sample size is too small, or the effects you're looking for are really small), and check your controls (maybe too much variability).

36 From last time Example experiment: Group A gets a treatment to improve memory; Group B gets no treatment (control). After the treatment period, test both groups for memory. Results: Group A's average memory score is 80%; Group B's is 76% (two sample distributions, X̄A and X̄B). Is the 4% difference a "real" difference (statistically significant), or is it just sampling error? The hypotheses are about populations: H0: there is no difference between Group A and Group B (μA = μB).

37 "Generic" statistical test Tests the question: are there differences between groups due to a treatment? There are two possibilities in the "real world." Possibility 1: H0 is true (no treatment effect), and the two sample distributions (X̄A = 80%, X̄B = 76%) come from one population.

38 "Generic" statistical test Possibility 2: H0 is false (there is a treatment effect), and the two sample distributions come from two populations. People who get the treatment change: they form a new population (the "treatment population").

39 "Generic" statistical test Why might the samples be different? (What is the source of the variability between groups?) ER: random sampling error. ID: individual differences (if a between-subjects factor). TR: the effect of a treatment.

40 "Generic" statistical test The generic test statistic is a ratio of sources of variability: computed test statistic = observed difference / difference from chance = (TR + ID + ER) / (ID + ER).

41 Sampling error The distribution of sample means is the distribution of all possible sample means of a particular sample size (n) that can be drawn from the population. The average sampling error of this distribution is "chance."

42 "Generic" statistical test The generic test statistic distribution: to reject H0, you want a computed test statistic that is large, reflecting a large treatment effect (TR) in the ratio (TR + ID + ER) / (ID + ER). What's large enough? The alpha level gives us the decision criterion: the α-level determines where the boundaries go on the distribution of the test statistic.

43 "Generic" statistical test The distribution of the test statistic is divided into a "reject H0" region and a "fail to reject H0" region. To reject H0, you want a computed test statistic that is large, reflecting a large treatment effect (TR); the alpha level gives us the decision criterion for what is large enough.

44 "Generic" statistical test "One-tailed test": sometimes you know to expect a particular direction of difference (e.g., "improve memory performance"), so the "reject H0" region sits in one tail of the test statistic distribution.

45 "Generic" statistical test Things that affect the computed test statistic: the size of the treatment effect (the bigger the effect, the bigger the computed test statistic) and the difference expected by chance (sampling error), which depends on sample size and variability in the population.

46 Significance "A statistically significant difference" means: the researcher is concluding that there is a difference above and beyond chance, with the probability of making a Type I error at 5% (assuming an alpha level of 0.05). Note that "statistical significance" is not the same thing as theoretical significance: it only means that there is a statistical difference, not that it is an important difference.

47 Non-Significance Failing to reject the null hypothesis. Generally, we are not interested in "accepting the null hypothesis" (remember: we can't prove things, only disprove them). Usually check to see if you made a Type II error (failed to detect a difference that is really there): check the statistical power of your test (maybe the sample size is too small, or the effects you're looking for are really small), and check your controls (maybe too much variability).

48 Some inferential statistical tests One factor with two groups: t-tests. Between groups: two independent samples. Within groups: repeated measures samples (matched, related). One factor with more than two groups: analysis of variance (ANOVA), either between groups or repeated measures. Multi-factorial: factorial ANOVA.

49 T-test Design: two separate experimental conditions. Degrees of freedom: based on the size of the sample and the kind of t-test. Formula: t = observed difference (X̄1 − X̄2) / difference expected by chance (based on sampling error). The computation differs for between and within t-tests.
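The between-groups (independent samples) version of this formula can be sketched by hand. The two groups below are made-up scores, not data from the slides, and the "difference by chance" denominator uses the pooled variance:

```python
import math

# Sketch of an independent-samples t statistic, computed by hand.
def independent_t(group_a, group_b):
    n1, n2 = len(group_a), len(group_b)
    m1, m2 = sum(group_a) / n1, sum(group_b) / n2
    ss1 = sum((x - m1) ** 2 for x in group_a)
    ss2 = sum((x - m2) ** 2 for x in group_b)
    df = n1 + n2 - 2                       # degrees of freedom
    pooled_var = (ss1 + ss2) / df          # estimate of common variance
    se = math.sqrt(pooled_var / n1 + pooled_var / n2)  # chance difference
    return (m1 - m2) / se, df

# Hypothetical memory scores for a treatment and a control group
t, df = independent_t([80, 82, 78, 84, 76], [76, 74, 78, 72, 75])
# t ~ 2.89 with df = 8; compare against the critical t for your alpha
```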

50 T-test Reporting your results: the observed difference between conditions, the kind of t-test, the computed t-statistic, the degrees of freedom for the test, and the "p-value" of the test. "The mean of the treatment group was 12 points higher than the control group. An independent samples t-test yielded a significant difference, t(24) = 5.67, p < 0.05." "The mean score of the post-test was 12 points higher than the pre-test. A repeated measures t-test demonstrated that this difference was significant, t(12) = 5.67, p < 0.05."

51 Analysis of Variance Designs: more than two groups (1-factor ANOVA, factorial ANOVA), with both within- and between-groups factors. The test statistic is an F-ratio. Degrees of freedom: several to keep track of; the number of them depends on the design.

52 Analysis of Variance With more than two groups, we can't just compute a simple difference score, since there is more than one difference. So we use variance instead of a simple difference; variance is essentially an average difference. F-ratio = observed variance / variance from chance.
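The F-ratio for a one-way ANOVA can be sketched as the variance between group means ("observed variance") over the variance within groups ("variance from chance"). The three groups below are made-up numbers for illustration:

```python
# Rough sketch of the one-way ANOVA F-ratio, computed by hand.
def one_way_f(groups):
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    means = [sum(g) / len(g) for g in groups]

    # Between-groups: treatment effect + chance
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ms_between = ss_between / (k - 1)

    # Within-groups: chance alone
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, means) for x in g)
    ms_within = ss_within / (n_total - k)
    return ms_between / ms_within

f = one_way_f([[1, 2, 3], [4, 5, 6], [7, 8, 9]])   # F(2, 6) = 27.0
```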

53 1 factor ANOVA One factor with more than two levels: we can't just compute a simple difference score, since there is more than one difference (A − B, B − C, and A − C).

54 1 factor ANOVA Null hypothesis H0: all the groups are equal (X̄A = X̄B = X̄C). The ANOVA tests this one! Alternative hypotheses HA: not all the groups are equal: X̄A ≠ X̄B ≠ X̄C, X̄A ≠ X̄B = X̄C, X̄A = X̄B ≠ X̄C, or X̄A = X̄C ≠ X̄B. Do further tests to pick between these.

55 1 factor ANOVA Planned contrasts and post-hoc tests: further tests used to rule out the different alternative hypotheses (X̄A ≠ X̄B ≠ X̄C, X̄A ≠ X̄B = X̄C, X̄A = X̄B ≠ X̄C, X̄A = X̄C ≠ X̄B). Test 1: A ≠ B. Test 2: A ≠ C. Test 3: B = C.

56 1 factor ANOVA Reporting your results: the observed differences, the kind of test, the computed F-ratio, the degrees of freedom for the test, the "p-value" of the test, and any post-hoc or planned comparison results. "The mean score of Group A was 12, Group B was 25, and Group C was 27. A 1-way ANOVA was conducted and the results yielded a significant difference, F(2,25) = 5.67, p < 0.05. Post hoc tests revealed that the differences between groups A and B and groups A and C were statistically reliable (respectively t(1) = 5.67, p < 0.05 and t(1) = 6.02, p < 0.05). Groups B and C did not differ significantly from one another."

57 Factorial ANOVAs We covered much of this in our experimental design lecture More than one factor Factors may be within or between Overall design may be entirely within, entirely between, or mixed Many F-ratios may be computed An F-ratio is computed to test the main effect of each factor An F-ratio is computed to test each of the potential interactions between the factors

58 Factorial ANOVAs Reporting your results: the observed differences (because there may be a lot of these, they may be presented in a table instead of directly in the text); the kind of design (e.g., "2 x 2 completely between factorial design"); the computed F-ratios (you may see separate paragraphs for each factor and for the interactions); the degrees of freedom for the test (each F-ratio will have its own set of dfs); the "p-value" of the test (you may want to just say "all tests were tested with an alpha level of 0.05"); and any post-hoc or planned comparison results (typically only the theoretically interesting comparisons are presented).

59 Relationships between variables Example: Suppose that you notice that the more you study for an exam, the better your score typically is. This suggests that there is a relationship between study time and test performance. We call this relationship a correlation.

60 Relationships between variables Properties of a correlation Form (linear or non-linear) Direction (positive or negative) Strength (none, weak, strong, perfect) To examine this relationship you should: Make a scatterplot Compute the Correlation Coefficient

61 Scatterplot Plots one variable against the other Useful for “seeing” the relationship Form, Direction, and Strength Each point corresponds to a different individual Imagine a line through the data points

62 Scatterplot Hours studied (X) plotted against exam performance (Y). (Figure: a scatterplot with both axes running from 1 to 6; each point is one individual.)

63 Correlation Coefficient A numerical description of the relationship between two variables. For the relationship between two continuous variables we use Pearson's r. It basically tells us how much our two variables vary together: as X goes up, what does Y typically do?
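A minimal Pearson's r can be written as the co-variation of X and Y scaled by their spreads, so r falls between −1 and +1. The data points below are made up:

```python
import math

# Pearson's r: how much X and Y vary together, scaled to [-1, +1].
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r_pos = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])   # perfectly linear: 1.0
r_neg = pearson_r([1, 2, 3, 4], [8, 6, 4, 2])   # reversed: -1.0
```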

64 Form Linear or non-linear.

65 Direction Positive: as X goes up, Y goes up; X and Y vary in the same direction; positive Pearson's r. Negative: as X goes up, Y goes down; X and Y vary in opposite directions; negative Pearson's r.

66 Strength The strength of the relationship: spread around the line (note the axis scales). Zero means "no relationship"; the farther r is from zero, the stronger the relationship.

67 Strength r = +1.0: "perfect positive corr." r = 0.0: "no relationship." r = −1.0: "perfect negative corr." The farther from zero, the stronger the relationship.

68 Strength Which relationship is stronger: Relationship A, r = −0.8, or Relationship B, r = +0.5? Relationship A: −0.8 is farther from zero than +0.5, so it is the stronger relationship.
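The slide's comparison comes down to distance from zero, regardless of sign:

```python
# Strength is the absolute value of r, so A (r = -0.8) beats B (r = +0.5).
r_a, r_b = -0.8, 0.5
stronger = "A" if abs(r_a) > abs(r_b) else "B"
```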

69 Regression Compute the equation for the line that best fits the data points: Y = (X)(slope) + (intercept), where slope = change in Y / change in X. In the example plot, the slope is 0.5 and the intercept is 2.0.

70 Regression Can make specific predictions about Y based on X: Y = (X)(0.5) + 2.0. For X = 5: Y = (5)(0.5) + 2.0 = 2.5 + 2.0 = 4.5.
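The slide's prediction from the best-fit line, with the example plot's slope 0.5 and intercept 2.0:

```python
# Prediction from the example line Y = (X)(slope) + intercept.
def predict(x, slope=0.5, intercept=2.0):
    return x * slope + intercept

y = predict(5)   # 2.5 + 2.0 = 4.5, as on the slide
```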

71 Regression Also need a measure of error: Y = X(0.5) + 2.0 + error. The same line can fit two different scatterplots that differ in spread around the line (a strength difference).

72 Cautions with correlation & regression Don't make causal claims. Don't extrapolate. Extreme scores (outliers) can strongly influence the calculated relationship.

