1
Third year project – review of basic statistical concepts
- Descriptive statistics
- Statistical significance
- Significance and effect size
- Interpreting a significant effect
- Interpreting a non-significant effect
2
Descriptive statistics
Choose descriptive statistics that are:
- Appropriate
- Relevant
- Revealing
Previous research articles can be a useful guide.
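If a concrete starting point helps, here is a minimal sketch of computing a small set of per-group descriptives with pandas. The data and column names ('group', 'score') are made up for illustration; which statistics are appropriate depends on your measurement scale and distribution.

```python
# Sketch (hypothetical data): per-group descriptives with pandas.
import pandas as pd

df = pd.DataFrame({
    "group": ["men"] * 5 + ["women"] * 5,
    "score": [28, 35, 30, 39, 28, 33, 40, 29, 36, 32],
})

# count, mean, SD and median per group -- pick what is revealing for your data
summary = df.groupby("group")["score"].agg(["count", "mean", "std", "median"])
print(summary)
```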
3
Error bars
- If n is small, show data points, not error bars
- You must state what n is in the figure legend
- You must say what kind of error bar you are using
- Standard-error-based error bars are often used
- Confidence intervals are better
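A minimal plotting sketch, using made-up data, of the points above: 95% confidence-interval error bars with the individual data points overlaid, and n stated in the figure title (in a report it would go in the legend).

```python
# Sketch (made-up data): 95% CI error bars plus raw data points for small n.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(1)
groups = {"men": rng.normal(32, 6, 8), "women": rng.normal(34, 5, 8)}

fig, ax = plt.subplots()
for i, (name, x) in enumerate(groups.items()):
    mean = x.mean()
    sem = stats.sem(x)
    # 95% CI half-width from the t distribution (appropriate when n is small)
    ci = sem * stats.t.ppf(0.975, df=len(x) - 1)
    ax.errorbar(i, mean, yerr=ci, fmt="o", capsize=4, color="black")
    ax.plot(np.full(len(x), i) + 0.1, x, ".", alpha=0.6)  # individual points

ax.set_xticks([0, 1], labels=list(groups))
ax.set_ylabel("score")
ax.set_title("Means with 95% CIs; n = 8 per group (state n in the legend)")
plt.show()
```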
4
Descriptive statistics almost never license an inference
Men (M = 32, SD = 6) vs. women (M = 34, SD = 5) → no way to conclude from this alone that women (in the population) have a higher mean than men
Exception: the direction of the difference may contradict a hypothesis
- If the hypothesis was that men have a higher mean than women, these data do not support it
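A quick sketch of why the descriptives alone are not enough: the same means and SDs can be significant or not depending on n. This uses scipy's t-test from summary statistics (Welch's test); the sample sizes are hypothetical.

```python
# Sketch: identical descriptives, different conclusions depending on n.
from scipy import stats

for n in (20, 200):
    res = stats.ttest_ind_from_stats(
        mean1=32, std1=6, nobs1=n,   # men: M = 32, SD = 6
        mean2=34, std2=5, nobs2=n,   # women: M = 34, SD = 5
        equal_var=False,
    )
    print(f"n = {n} per group: t = {res.statistic:.2f}, p = {res.pvalue:.4f}")
```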
5
Inferential statistics
- t-test; ANOVA; Wilcoxon matched pairs
- Chi-squared
- Regression
- Correlation
- … and many more
These tests assess the effects seen, comparing them to differences we'd expect 'anyway' (i.e. differences attributable merely to sampling variation)
For example: is the difference between men and women greater than the difference you might get between two different samples of women?
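For illustration, here is a sketch of two of the tests named above run in scipy, on made-up data; the numbers in the contingency table are arbitrary.

```python
# Sketch (made-up data): an independent-samples t-test and a chi-squared test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
men = rng.normal(32, 6, 30)
women = rng.normal(34, 5, 30)

# Is the men/women difference larger than the difference we'd expect
# between two samples drawn from the same population?
t, p = stats.ttest_ind(men, women)
print(f"t-test: t = {t:.2f}, p = {p:.3f}")

# Chi-squared test of independence on a 2x2 contingency table
table = np.array([[20, 10],
                  [12, 18]])
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi-squared: chi2 = {chi2:.2f}, df = {dof}, p = {p:.3f}")
```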
6
Statistical significance
significance, p-value, alpha-level
p < .05
"If there were no real effect and you ran this study many times, fewer than five times in a hundred would you see a difference this great."
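The interpretation above can be made concrete by simulation. The sketch below (arbitrary population values) draws both groups from the same population many times and counts how often a difference at least as large as some observed difference appears purely by sampling.

```python
# Sketch: simulate many "null" studies and count large differences.
import numpy as np

rng = np.random.default_rng(42)
observed_diff = 2.0           # e.g. the 34 - 32 difference from the earlier slide
n, n_studies = 30, 10_000

count = 0
for _ in range(n_studies):
    a = rng.normal(33, 5.5, n)    # both groups drawn from the SAME population
    b = rng.normal(33, 5.5, n)
    if abs(a.mean() - b.mean()) >= observed_diff:
        count += 1

print(f"Proportion of null studies with a difference this great: {count / n_studies:.3f}")
```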
7
Exact and approximate p-values
- Some inferential statistics give you an exact p-value
- Some only give an approximation
- Usually, with large samples, the approximation is very good
- Most inferential statistics rely on assumptions about the distribution of the data
- Textbooks say the tests are 'robust' when assumptions are violated
- But, really, we don't have a very clear picture
- The assumptions often are violated
- It's up to you to check the assumptions (ANOVA etc. won't check them for you)
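One way to check assumptions yourself, sketched with made-up data: test normality (Shapiro-Wilk) and homogeneity of variance (Levene) before interpreting a one-way ANOVA. These are only examples of assumption checks, not the only ones.

```python
# Sketch (made-up data): checking ANOVA assumptions before the F-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
g1, g2, g3 = (rng.normal(m, 5, 25) for m in (30, 32, 35))

for name, g in [("g1", g1), ("g2", g2), ("g3", g3)]:
    w, p = stats.shapiro(g)                       # normality within each group
    print(f"Shapiro-Wilk {name}: W = {w:.3f}, p = {p:.3f}")

lev_stat, lev_p = stats.levene(g1, g2, g3)        # homogeneity of variance
print(f"Levene: W = {lev_stat:.3f}, p = {lev_p:.3f}")

f, p = stats.f_oneway(g1, g2, g3)                 # the ANOVA itself
print(f"One-way ANOVA: F = {f:.2f}, p = {p:.4f}")
```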
8
Significance and effect size
We are interested in effects:
- Significance – rarity [how rarely we would observe this if there were no real effect]
- Size – is it a big difference?
9
Effect size and significance are separate
- An effect can be significant yet tiny, especially if the sample is huge
- A large effect can look non-significant, especially with a small sample
- Apart from sample size, the reliability of measures and other sources of error variance can make it hard to detect an effect
- Power – the probability of detecting an effect if it really exists
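A sketch of the first bullet and of power, using made-up data: a trivially small effect (d = 0.05) reaches significance with a huge sample, and a power calculation (statsmodels, assumed available) shows how large n has to be to detect such an effect reliably.

```python
# Sketch: tiny effect + huge n = significant; power analysis for the same d.
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestIndPower

rng = np.random.default_rng(7)
n = 20_000
a = rng.normal(0.00, 1, n)
b = rng.normal(0.05, 1, n)            # true effect size d = 0.05
t, p = stats.ttest_ind(a, b)
print(f"huge n: t = {t:.2f}, p = {p:.4f}  (significant, but a trivially small effect)")

# n per group needed for 80% power to detect d = 0.05 at alpha = .05
n_needed = TTestIndPower().solve_power(effect_size=0.05, alpha=0.05, power=0.8)
print(f"n per group for 80% power at d = 0.05: {n_needed:.0f}")
```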
10
Report effect size:
- d
- Partial eta squared
- r (and r²)
- R² (and adjusted R²)
And compare effect size with previous research...
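A sketch of computing three of these effect sizes for a two-group comparison, on made-up data: Cohen's d from a pooled SD, r derived from the t statistic, and eta squared from sums of squares (which, for two groups, equals r²).

```python
# Sketch (made-up data): Cohen's d, r (and r^2), and eta squared.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
men, women = rng.normal(32, 6, 40), rng.normal(34, 5, 40)

t, p = stats.ttest_ind(men, women)
n1, n2 = len(men), len(women)
df = n1 + n2 - 2

# Cohen's d with a pooled standard deviation
pooled_sd = np.sqrt(((n1 - 1) * men.var(ddof=1) + (n2 - 1) * women.var(ddof=1)) / df)
d = (women.mean() - men.mean()) / pooled_sd

# r (and r^2) from the t statistic
r = np.sqrt(t**2 / (t**2 + df))

# Eta squared = SS_between / SS_total (for two groups this equals r^2)
scores = np.concatenate([men, women])
grand = scores.mean()
ss_between = n1 * (men.mean() - grand) ** 2 + n2 * (women.mean() - grand) ** 2
ss_total = ((scores - grand) ** 2).sum()
eta_sq = ss_between / ss_total

print(f"d = {d:.2f}, r = {r:.2f}, r^2 = {r**2:.2f}, eta^2 = {eta_sq:.2f}")
```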
11
Interpreting a significant effect
If p < .05 (conventionally, significant):
- It is conventional to conclude that the null hypothesis [e.g. no difference between men and women] can be rejected
- Bear in mind, however, that up to five times in a hundred we would get an effect like this if there were no real effect
- Allow for multiplicity: the more tests you run, the more likely one of them is 'significant' by chance alone
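One common way to allow for multiplicity, sketched with hypothetical raw p-values, is to adjust them across the family of tests (Holm or Bonferroni) before declaring anything significant.

```python
# Sketch: correcting a set of p-values for multiple comparisons.
from statsmodels.stats.multitest import multipletests

p_values = [0.012, 0.030, 0.048, 0.200]          # hypothetical raw p-values
reject, p_adj, _, _ = multipletests(p_values, alpha=0.05, method="holm")

for p, pa, r in zip(p_values, p_adj, reject):
    print(f"raw p = {p:.3f} -> adjusted p = {pa:.3f}, reject null: {r}")
```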
12
Reporting p-values
I recommend:
- Report the exact p-value if p < .10
- Report p < .001 if p < .001
- Report p > .20 if p > .20
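A small helper function that follows the recommendation above; the rounding for the in-between range (.10 to .20) is an assumption, not part of the recommendation.

```python
# Sketch: format p-values per the reporting recommendation on this slide.
def format_p(p: float) -> str:
    if p < 0.001:
        return "p < .001"
    if p < 0.10:
        return f"p = {p:.3f}".replace("0.", ".")   # exact p below .10
    if p > 0.20:
        return "p > .20"
    return f"p = {p:.2f}".replace("0.", ".")       # assumption: two decimals otherwise

for p in (0.0004, 0.023, 0.15, 0.47):
    print(format_p(p))
```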
13
Interpreting a non-significant effect
- p > .20 – don't quibble
- p > .10 – don't quibble, unless there is a substantial reason to
- p < .10 – mmm...
Bear in mind that you may have low power
In exploratory research, a more liberal approach is often taken
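When a result is non-significant, one way to address the low-power worry is a sensitivity check: what effect size could this n plausibly have detected? A sketch with an arbitrary sample size, using statsmodels (assumed available):

```python
# Sketch: minimum detectable effect size given the n you actually had.
from statsmodels.stats.power import TTestIndPower

n_per_group = 15
min_d = TTestIndPower().solve_power(nobs1=n_per_group, alpha=0.05, power=0.8)
print(f"With n = {n_per_group} per group, 80% power only for d >= {min_d:.2f}")
```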