Published by Bernard Conley. Modified over 6 years ago.
Chapter 6: Making Sense of Statistical Significance
Decision Errors, Effect Size, and Statistical Power
Part 1: Sept. 24, 2013
Decision Errors
Decision errors arise because we use samples to estimate effects in populations.
Type I error: rejecting the null hypothesis when in fact it is true. Example? Its probability equals alpha (α): with α = .05, we run a 5% risk of making a Type I error.
Type II error: failing to reject the null hypothesis when in reality it is false (being too conservative). Its probability equals beta (β).
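As a minimal Python sketch (not from the slides; the one-sample z-test setup and all numbers are illustrative assumptions), the long-run rate of Type I errors when the null is actually true should hover near alpha:

```python
import random

random.seed(0)

def falsely_rejects(n=16, z_cutoff=1.645):
    # One simulated study in which the null is TRUE (population mean 0, SD 1).
    # Returns True when we (incorrectly) reject the null, one-tailed at alpha = .05.
    sample = [random.gauss(0, 1) for _ in range(n)]
    mean = sum(sample) / n
    z = mean / (1 / n ** 0.5)   # z on the distribution of means
    return z > z_cutoff

trials = 20_000
type1_rate = sum(falsely_rejects() for _ in range(trials)) / trials
print(round(type1_rate, 3))     # hovers near alpha = .05
```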
Possible Decisions in Hypothesis Testing
                          Null actually TRUE         Null actually FALSE
Reject the null           Type I error (α)           Correct decision (power = 1 - β)
Fail to reject the null   Correct decision (1 - α)   Type II error (β)
Effect Size
We may reject the null and conclude there is a significant effect, but how large is it? Effect size estimates this: it is the amount that two populations (the population our sample represents vs. the comparison population) do not overlap.
Figuring effect size:
d = (μ1 - μ2) / σ
where μ1 = experimental group mean (use M from the sample), μ2 = population/comparison group mean, and σ = population SD.
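The formula is a one-liner in Python (a sketch; the argument names are mine, and the numbers are borrowed from the deck's later 'crashed' example):

```python
def cohens_d(sample_mean, comparison_mean, population_sd):
    """Effect size d = (M - mu) / sigma, per the slide's formula."""
    return (sample_mean - comparison_mean) / population_sd

# Values borrowed from the later power example: M = 5.9, mu = 5.5, sigma = .8
print(round(cohens_d(5.9, 5.5, 0.8), 2))  # 0.5 -> a medium effect
```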
Effect Size
Effect size conventions let us draw conclusions about how large/important an effect is:
small: around d = .2 (or -.2)
medium: around d = .5 (or -.5)
large: around d = .8 (or -.8)
Example interpretation? Effect size speaks to 'practical significance' - an indication of the importance of a statistically significant effect.
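A tiny helper makes the conventions concrete (the cutpoints are Cohen's conventions from the slide; treating them as hard >= thresholds, and the "negligible" label below .2, are my own simplifications):

```python
def effect_size_label(d):
    """Label |d| using the conventional cutpoints: .2 small, .5 medium, .8 large."""
    d = abs(d)                 # sign only reflects direction, not size
    if d >= 0.8:
        return "large"
    if d >= 0.5:
        return "medium"
    if d >= 0.2:
        return "small"
    return "negligible"        # below .2: my own label, not from the slides

print(effect_size_label(0.5))    # "medium"
print(effect_size_label(-0.85))  # "large" (sign does not matter)
```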
Effect Size Interpretation
What is a desired effect size? Interpretation:
For an experiment…
For a group comparison…
For a correlational study…
Meta-Analysis
Combines results from multiple studies.
How are effect sizes used here? Example:
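As a hedged sketch of how effect sizes feed a meta-analysis (the study values are invented, and a real meta-analysis would typically weight each study's d, e.g. by sample size, rather than take a plain average):

```python
def mean_effect_size(ds):
    # Plain unweighted average of per-study d values -- a simplification;
    # real meta-analyses usually weight studies (e.g. by sample size).
    return sum(ds) / len(ds)

study_ds = [0.30, 0.55, 0.42]  # hypothetical effect sizes from three studies
print(round(mean_effect_size(study_ds), 2))  # 0.42
```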
Statistical Power
The probability that the study will produce a statistically significant result when the research hypothesis is in fact true. That is, the power to correctly reject the null (the upper right quadrant of the decision table). We want to maximize the chance that our study has the power to find a true/real effect. Power can be calculated before the study using predicted means.
Statistical Power: Steps for Figuring Power
1. Gather the needed information (N = 16):
* Mean and SD of the comparison distribution (the distribution of means from Ch. 5, now known as Population 2)
* Predicted mean of the experimental group (now known as Population 1)
'Crashed' example:
Population 1 ('crashed' group) predicted mean = 5.9
Population 2 ('neutral' group / comparison population): μ = 5.5, σ = .8
σm = sqrt(σ² / N) = sqrt(.8² / 16) = .2
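The Step 1 arithmetic can be checked in a couple of lines of Python (a sketch; the variable names are mine):

```python
# Step 1 from the slides: SD of the comparison distribution of means,
# sigma_m = sqrt(sigma^2 / N), with sigma = .8 and N = 16.
sigma, n = 0.8, 16
sigma_m = (sigma ** 2 / n) ** 0.5
print(round(sigma_m, 4))  # 0.2
```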
Statistical Power
2. Figure the raw-score cutoff point on the comparison distribution needed to reject the null hypothesis (using Population 2 info).
For alpha = .05, one-tailed (remember, we predicted the 'crashed' group would have higher fault ratings), the z-score cutoff = 1.64.
Convert z to a raw score: x = z(σm) + μ = 1.64(.2) + 5.5 = 5.83 (rounded).
Draw the distribution and mark the cutoff point at 5.83; shade the area to the right of the cutoff: the 'critical/rejection region'.
Statistical Power
3. Figure the Z score for this same point, but on the distribution of means for Population 1 (see example on board). That is, convert the raw score of 5.83 to a z score using Population 1's information:
Z = (x from Step 2 - μ of Population 1 from Step 1) / σm (from Step 1) = (5.83 - 5.9) / .2 = -.35
Draw another distribution and shade everything to the right of -.35.
Statistical Power
4. Use the normal curve table to figure the probability of getting a score higher than the Z score from Step 3.
Find the % between the mean and z = -.35 (look up .35): 13.68%.
Add another 50% because we are also interested in the area to the right of the mean: 13.68% + 50% = 63.68%. That is the power of the experiment.
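All four steps can be sketched end-to-end in Python (using the slides' numbers; `norm_cdf` built on `math.erf` stands in for the normal curve table, and working with the unrounded cutoff gives roughly .64 rather than exactly 63.68%, since the slides round the cutoff to 5.83 before Step 3):

```python
from math import erf, sqrt

def norm_cdf(z):
    # Standard normal CDF via the error function (stand-in for the table)
    return 0.5 * (1 + erf(z / sqrt(2)))

# Numbers from the slides' "crashed" example
mu_null, mu_pred, sigma, n = 5.5, 5.9, 0.8, 16

sigma_m = sqrt(sigma ** 2 / n)            # Step 1: SD of the distribution of means (= .2)
cutoff = 1.64 * sigma_m + mu_null         # Step 2: raw-score cutoff (alpha = .05, one-tailed)
z_on_pop1 = (cutoff - mu_pred) / sigma_m  # Step 3: same cutoff as a z on Population 1
power = 1 - norm_cdf(z_on_pop1)           # Step 4: area to the right of that z
print(round(power, 3))                    # roughly 0.64, i.e. ~64% power
```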
Power Interpretation
Our study (with N = 16) has around 64% power to find a difference between the 'crashed' and 'neutral' groups if one truly exists. This is based on our estimate of the 'crashed' mean (5.9), so if that prediction is incorrect, the power will change.
In the decision error table, 1 - power = beta (the Type II error rate). So here:
Alpha = .05 (5% chance of incorrectly rejecting a true null)
Power = .64 (64% chance of correctly rejecting a false null)
Beta = .36 (36% chance of incorrectly failing to reject a false null)