Marshall University School of Medicine Department of Biochemistry and Microbiology BMS 617 Lecture 7 – Non-normality and outliers.

1 Marshall University School of Medicine Department of Biochemistry and Microbiology BMS 617 Lecture 7 – Non-normality and outliers

2 Normally distributed data

Many of the statistical tests we will study rely on the assumption that the data were sampled from a normal distribution. How reasonable is this assumption?
The normal distribution is an idealization that likely never exists in reality:
– it includes arbitrarily large values and arbitrarily small (negative) values.
However, simulations show that most tests that rely on the assumption of normality are robust to deviations from the normal distribution.

3 The ideal normal distribution

[Figure: data sampled from a theoretical normal distribution. A very large sample size gives a close approximation to the theoretical distribution.]
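For reference, a figure like this takes only a few lines to generate. This is an added sketch (not part of the slide deck), assuming numpy and matplotlib are available:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
sample = rng.normal(loc=0.0, scale=1.0, size=100_000)  # very large sample

# With this many points, the histogram closely traces the theoretical curve
plt.hist(sample, bins=100, density=True)
plt.title("Data sampled from a theoretical normal distribution")
plt.show()
```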

4 Samples from a normal distribution

[Figure]

5 Tests for normality

It is possible to perform tests to see if the sample data are consistent with the assumption that they were sampled from a normal distribution.
– Unfortunately, this is not quite what we really want to know…
– We would really like to know whether the distribution is close enough to normal for the test we plan to use to be useful.

6 Tests for normality

A test for normality is a statistical test for which the null hypothesis is: the data were sampled from a normal distribution.
Common normality tests include:
– the D’Agostino-Pearson omnibus K2 normality test
– the Shapiro-Wilk test
– the Kolmogorov-Smirnov test
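All three of these tests are available in scipy.stats; the following is an added sketch (not from the slides) showing how they might be run on a sample. Note that feeding the Kolmogorov-Smirnov test parameters estimated from the sample itself technically makes it the Lilliefors variant, so its p-value is only approximate here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.normal(loc=10.0, scale=2.0, size=50)  # simulated measurements

# Shapiro-Wilk test: null hypothesis is that x was sampled from a normal
sw_stat, sw_p = stats.shapiro(x)

# D'Agostino-Pearson omnibus K2 test (combines skewness and kurtosis)
k2_stat, k2_p = stats.normaltest(x)

# Kolmogorov-Smirnov test against a normal with the sample's own mean/SD
# (estimating the parameters from the data makes this the Lilliefors
# variant, so the p-value is approximate)
ks_stat, ks_p = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1)))

print(f"Shapiro-Wilk:       p = {sw_p:.3f}")
print(f"D'Agostino-Pearson: p = {k2_p:.3f}")
print(f"Kolmogorov-Smirnov: p = {ks_p:.3f}")
```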

7 D’Agostino-Pearson omnibus K2 normality test

The D’Agostino-Pearson omnibus K2 normality test works by computing two values for the data set:
– the skewness, which measures how far the data are from being symmetric
– the kurtosis, which measures how sharply peaked the data are
The test then combines these into a single value that describes how far from normal the data appear to lie, and computes a p-value for this combined value.

8 Problem with normality tests

If the p-value for a normality test is small, the interpretation is: if the data were sampled from an ideal normal distribution, it is unlikely the sample would be this skewed and/or kurtotic.
If the p-value for a normality test is large, then the data are not inconsistent with being sampled from a normal distribution.
However…
– If the sample size is large, it is possible to get a small p-value even for small deviations from the normal distribution: the data are likely sampled from a distribution that is close to, but not exactly, normal.
– If the sample size is small, it is possible to get a large p-value even if the underlying distribution is far from normal: the data simply do not provide sufficient evidence to reject the null hypothesis.
It is therefore useful to examine the values for skewness and kurtosis as well as the p-value.
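A quick simulation (an added illustration, not from the slides) makes both failure modes concrete:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Large sample, mildly non-normal: a normal with a small exponential
# component added (population skewness is only about 0.18). With
# n = 5000, even this small deviation is detected, so p is typically tiny.
big = rng.normal(size=5000) + 0.5 * rng.exponential(size=5000)
print("n=5000, mildly skewed:  p =", stats.normaltest(big).pvalue)

# Small sample, clearly non-normal: a uniform distribution (flat, with
# excess kurtosis -1.2). With only n = 20, the test usually cannot
# reject normality, so p is typically large.
small = rng.uniform(-1, 1, size=20)
print("n=20, far from normal:  p =", stats.normaltest(small).pvalue)
```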

9 Skewness and kurtosis

[Figure]

10 Interpreting skewness and kurtosis

The real question we would like to answer is: how much skewness and kurtosis are acceptable? This is difficult to answer, but in general:
– a skewness between -0.5 and 0.5 can be interpreted as approximately symmetric;
– a skewness between -1.0 and -0.5, or between 0.5 and 1.0, is moderately skewed;
– a skewness less than -1.0 or greater than 1.0 is highly skewed.
For kurtosis, values between -2 and 2 are generally accepted as being “within limits”; values outside this range are evidence the distribution is far from normal.
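These rules of thumb are easy to wrap in a helper. This is an added sketch, and it assumes the ±2 kurtosis limits refer to excess kurtosis (scipy’s default, where a normal distribution scores 0):

```python
import numpy as np
from scipy import stats

def describe_shape(x):
    # Report sample skewness and excess kurtosis with the rule-of-thumb
    # labels from the slide. scipy's kurtosis() defaults to Fisher's
    # definition (normal = 0), assumed here to be the scale the +/-2
    # limits refer to.
    skew = stats.skew(x)
    kurt = stats.kurtosis(x)  # excess kurtosis
    if abs(skew) <= 0.5:
        skew_label = "approximately symmetric"
    elif abs(skew) <= 1.0:
        skew_label = "moderately skewed"
    else:
        skew_label = "highly skewed"
    kurt_label = "within limits" if -2 <= kurt <= 2 else "far from normal"
    print(f"skewness = {skew:+.2f} ({skew_label}), "
          f"kurtosis = {kurt:+.2f} ({kurt_label})")

rng = np.random.default_rng(1)
describe_shape(rng.normal(size=200))     # should look approximately normal
describe_shape(rng.lognormal(size=200))  # strongly right-skewed
```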

11 What to do if the data fail a test for normality

If the data fail a test for normality, the following options are available:
– Can the data be transformed to data that come from a normal distribution? For example, if the data are positively skewed (as lognormal data are), transforming to logarithms may give normally distributed data.
– Are there a small number of outliers causing the data to fail the normality test? The next section discusses outliers.
– Is the departure from normality small, i.e. are the skewness and kurtosis “small”? If so, your statistical tests may still be accurate enough.
– Use a test that does not assume a normal distribution (a non-parametric test).
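A minimal sketch of the transformation option (an added illustration): lognormal data fail a normality test on the raw scale but pass after taking logarithms, since their logs are normal by construction.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
raw = rng.lognormal(mean=1.0, sigma=0.8, size=100)  # positively skewed

# On the raw scale the data are strongly right-skewed
print("raw: skew = %.2f, Shapiro p = %.4f"
      % (stats.skew(raw), stats.shapiro(raw).pvalue))

# The logarithms are normally distributed by construction, so the
# transformed data should pass the normality test
logged = np.log(raw)
print("log: skew = %.2f, Shapiro p = %.4f"
      % (stats.skew(logged), stats.shapiro(logged).pvalue))
```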

12 Non-parametric tests

The most common statistical tests assume the data are sampled from a normal distribution:
– t-tests, ANOVA, Pearson correlation, etc.
Some other tests do not make this assumption:
– the Mann-Whitney test, the Kruskal-Wallis test, Spearman correlation, etc.
However, these tests have (much) lower statistical power than their parametric equivalents when the data are normally distributed.

13 Choosing non-parametric tests

When running a series of similar experiments, all data should be analyzed the same way:
– use normality tests to choose the statistical test for all the experiments together;
– following “common practice” is acceptable;
– ideally, run one experiment just to determine whether the data look like they come from a normal distribution.
For small data sets:
– a test for normality does not tell you much (it is not likely to give a small p-value anyway);
– violations of the normality assumption are more egregious;
– non-parametric tests have very low statistical power.

14 The Mann-Whitney test

The Mann-Whitney test is the non-parametric equivalent of the unpaired t-test.
Use it when you want to compare a variable between two groups, but have reason to believe the data are not sampled from a normally distributed population.

15 How the Mann-Whitney test works

The Mann-Whitney test works as follows:
– Compute the rank of every value, regardless of which group it comes from (the smallest value has rank 1, the next smallest rank 2, etc.).
– Choose one group; for each data point in that group, count the number of data points in the other group which are smaller. Sum these counts and call the sum U1.
– Similarly compute U2, or use the fact that U1 + U2 = n1 × n2.
– Let U = min(U1, U2).
The distribution of U under the null hypothesis is known, so software can compute a p-value.
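This procedure translates almost line for line into code. Below is an added sketch, checked against scipy.stats.mannwhitneyu; the naive counting ignores ties, matching the simplified description above (scipy assigns half-counts to tied values).

```python
import numpy as np
from scipy import stats

def mann_whitney_u(group1, group2):
    # For each value in group1, count how many values in group2 are
    # smaller; this sum is U1, and U2 follows from U1 + U2 = n1 * n2
    u1 = sum(np.sum(np.asarray(group2) < v) for v in group1)
    u2 = len(group1) * len(group2) - u1
    return min(u1, u2)

a = [12.1, 14.3, 11.8, 15.0, 13.2]
b = [10.4, 11.1, 12.7, 10.9]
print("hand-computed U =", mann_whitney_u(a, b))

# scipy reports U for the first group, so min(U1, U2) should agree
res = stats.mannwhitneyu(a, b, alternative="two-sided")
print("scipy U =", min(res.statistic, len(a) * len(b) - res.statistic),
      " p =", round(res.pvalue, 4))
```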

16 Pros and cons of non-parametric tests

Pros of non-parametric tests:
– Since non-parametric tests do not rely on the assumption of normally distributed populations, they can be used when that assumption fails, or cannot be verified.
Cons of non-parametric tests:
– If the data really do come from normally distributed populations, non-parametric tests are less powerful than their parametric counterparts, i.e. they will give higher p-values.
– For small sample sizes they are much less powerful: a Mann-Whitney p-value can never fall below 0.05 if the total sample size is 7 or fewer.
– Non-parametric tests typically do not compute confidence intervals. (They can sometimes be computed, but often require additional assumptions.)
– Non-parametric tests are not related to regression models, so they cannot be extended to account for confounding variables using multiple regression techniques.
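The small-sample floor on the p-value is easy to verify (an added check; the method="exact" argument requires SciPy 1.7 or later): with groups of 3 and 4, even the most extreme possible data cannot reach p < 0.05.

```python
from scipy import stats

# Most extreme arrangement possible for groups of sizes 3 and 4:
# every value in one group is below every value in the other
low = [1, 2, 3]
high = [4, 5, 6, 7]

res = stats.mannwhitneyu(low, high, alternative="two-sided", method="exact")
# Only 2 of the C(7,3) = 35 equally likely rankings are this extreme,
# so the smallest achievable two-sided p-value is 2/35, about 0.057
print("U =", res.statistic, " p =", res.pvalue)
```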

17 Choosing between parametric and non-parametric tests

The choice between parametric and non-parametric tests is not straightforward. A common, but invalid, approach is to use normality tests to automate the choice:
– the choice matters most for small data sets, for which normality tests are of limited use;
– using the data set itself to determine the statistical analysis will underestimate p-values;
– if the data fail normality tests, a transformation may be more appropriate.
The most “honest” approach is to perform an independent experiment with a large sample to test for normality, and then design the experiment in hand based on the results:
– this is almost always impractical;
– for well-used experimental designs, an almost-equivalent approach is to follow customary procedure, essentially assuming this has already been carried out in some form.

18 How much difference does it make?

The central limit theorem ensures that parametric tests work well with non-normal distributions if the sample is large enough. How large is large enough? It depends on the distribution, but for most distributions sample sizes in the range of dozens will remove any issues with normality.
– You will still increase your statistical power by using a transformation where appropriate.
Conversely, if the data really come from a normally distributed population and you choose a non-parametric test, you will lose statistical power, although for large samples the difference is minimal.
Small samples present problems:
– non-parametric tests have very little power for small samples;
– parametric tests can give misleading results for small samples if the population data are non-normal;
– tests for normality are not helpful for small samples.
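A simulation (an added illustration) makes the question concrete; the population and sample sizes below are arbitrary choices. Sampling from a right-skewed exponential population and testing a true null hypothesis, the false positive rate of a one-sample t-test should drift toward the nominal 5% as n grows.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_sims = 5000

# Sample from an exponential population (right-skewed, true mean = 1)
# and t-test the TRUE null hypothesis that the mean is 1. Every
# "significant" result is a false positive; the rate should approach
# the nominal 5% as the central limit theorem takes hold.
for n in (5, 10, 30, 100):
    false_pos = sum(
        stats.ttest_1samp(rng.exponential(size=n), popmean=1.0).pvalue < 0.05
        for _ in range(n_sims)
    )
    print(f"n = {n:3d}: false positive rate = {false_pos / n_sims:.3f}")
```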

19 Conclusions

The bottom-line conclusion is that large samples are better than small samples; in general, the larger the better. Of course, it can be prohibitively time-consuming and/or expensive to analyze large samples.
If your experimental design is going to use a small sample, you need to be able to justify that the data come from a normally distributed population:
– if this is a common experimental design that is conventionally analyzed this way, that may be good enough;
– for a new methodology, you should really perform an independent experiment with a large sample to test for normality first, and use the results to guide the data analysis for future experiments.

20 Computationally intensive non-parametric methods

The non-parametric methods we have examined work by analyzing the ranks of the data. Another class of non-parametric tests is the class of computationally intensive methods. There are two subclasses:
– Permutation (or randomization) tests: simulate the null distribution by repeatedly randomly reassigning group labels, then compare the “real” data to the generated null distribution.
– Bootstrapping techniques: effectively generate many samples from the population by resampling from the original sample, then look at the distribution of summary statistics from the generated samples.
These techniques still require a reasonable sample size to begin with: big enough to generate enough distinct permutations or bootstrap samples.
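A minimal sketch of a permutation test for a difference in group means (an added illustration; the data and the choice of test statistic are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)

def permutation_test_means(x, y, n_perm=10_000):
    # Two-sided permutation test for a difference in means: repeatedly
    # shuffle the pooled values (equivalent to reassigning group labels)
    # and ask how often the shuffled difference in means is at least as
    # extreme as the observed one
    x, y = np.asarray(x), np.asarray(y)
    observed = abs(x.mean() - y.mean())
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(pooled[:len(x)].mean() - pooled[len(x):].mean())
        if diff >= observed:
            count += 1
    # the +1 correction keeps the p-value from ever being exactly zero
    return (count + 1) / (n_perm + 1)

a = [12.1, 14.3, 11.8, 15.0, 13.2, 12.9]
b = [10.4, 11.1, 12.7, 10.9, 11.6]
print("permutation p =", permutation_test_means(a, b))
```

A bootstrap has the same loop structure, except each iteration resamples the original data with replacement (e.g. rng.choice(x, size=len(x), replace=True)) and the collected statistics are used to form a confidence interval rather than a null distribution.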

21 Outliers

Outliers are values in the data that are “far” from the other values. They occur for several reasons:
– Invalid data entry.
– Experimental mistakes.
– Random chance: in any distribution, some values are far from the others. In a normal distribution these values are rarer, but they still exist.
– Biological diversity: if your samples are from patients or animals, the outlier may be “correct” and due to biological diversity. This may be an interesting finding!
– Wrong assumptions: for example, in a lognormal distribution, some values are expected to be far from the others.

22 Why test for outliers

The presence of erroneous outliers, or assuming the wrong distribution, can introduce spurious results or mask real results.
Trying to detect outliers without a test can be problematic:
– we tend to want to observe patterns in data;
– anything that appears to run counter to these patterns looks like an outlier;
– as a result, we tend to see too many outliers.

23 Before testing for outliers

Before testing for outliers:
– Check the data entry. Errors here can often be fixed.
– Were there problems with the experiment? If errors were observed during the experiment, remove the data associated with those errors. Many experimental protocols have quality-control measures.
– Is it possible your data are not normally distributed? Most outlier tests assume the (non-outlier) data are normally distributed.
– Was there anything different about any of the samples? Was one of the mice phenotypically different, etc.?

24 Outlier tests

After addressing the concerns on the previous slide, if you still suspect an outlier you can run an outlier test.
Outlier tests answer the following question: if the data were sampled from a normal distribution, what is the chance of observing one value as far from the others as in the observed data?

25 Results of an outlier test

If an outlier test results in a small p-value, the conclusion is that the outlying value is (probably) not from the same distribution as the other values, which justifies excluding it from the analysis.
If the outlier test results in a high p-value, there is no evidence the value came from a different distribution.
– This doesn’t prove it did come from the same distribution, just that there is no strong evidence to the contrary.

26 Guidelines on removing outliers

If you address all the previous concerns, and an outlier test gives strong evidence of an outlier, then it is legitimate to remove it from the analysis:
– the rules for eliminating outliers should be established before you generate the data;
– you should report the number of outliers removed, and the rationale for removing them, in any publication using the data.

27 How outlier tests work

Outlier tests work by computing the difference between the extreme value and some measure of central tendency. That difference is typically divided by a measure of the variability, and the resulting ratio is compared with a table or expected distribution of such values.

28 Grubbs’ outlier test

Grubbs’ outlier test calculates the difference between the extreme value and the mean of all values (including the extreme value), and divides by the standard deviation.
The resulting value is then compared to a table of critical values:
– the critical value depends on the sample size;
– if the computed value is larger than the critical value, the extreme value can be considered an outlier.
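A sketch of the calculation (an added illustration): rather than looking the critical value up in a printed table, it can be computed from the t distribution using the standard closed-form expression for Grubbs’ two-sided critical value.

```python
import numpy as np
from scipy import stats

def grubbs_test(x, alpha=0.05):
    # Two-sided Grubbs test for a single outlier. G is the distance of
    # the most extreme value from the mean, in standard deviations; the
    # critical value comes from the usual t-distribution formula rather
    # than a printed table
    x = np.asarray(x, dtype=float)
    n = len(x)
    g = np.max(np.abs(x - x.mean())) / x.std(ddof=1)
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    g_crit = ((n - 1) / np.sqrt(n)) * np.sqrt(t**2 / (n - 2 + t**2))
    return g, g_crit, g > g_crit

data = [9.8, 10.1, 10.4, 9.9, 10.2, 10.0, 14.5]  # 14.5 looks suspicious
g, g_crit, is_outlier = grubbs_test(data)
print(f"G = {g:.2f}, critical value = {g_crit:.2f}, outlier: {is_outlier}")
```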

29 Demo

We’ll experiment with the GRHL2 Basal-A and Basal-B data sets in GraphPad, checking for outliers and testing for normality.

