1 Canadian Bioinformatics Workshops www.bioinformatics.ca


3 Module 5: Hypothesis testing. Boris Steipe, Department of Biochemistry, Department of Molecular Genetics. Exploratory Data Analysis of Biological Data using R. Toronto, May 21 and 22, 2015. [Slide image: Oedipus ponders the riddle of the Sphinx. Classical (~400 BCE).] † This workshop includes bits of material originally developed by Raphael Gottardo, FHCRC, and by Sohrab Shah, UBC.

4 Module 5: Hypothesis testing bioinformatics.ca Learning Objectives: Understand the principal idea behind a statistical test; know the concepts of true/false positives/negatives, p-value, and significance; be able to apply simple parametric and non-parametric tests to your data; know how to interpret the results; understand the problem behind multiple testing; and know what to do about it in the context of expression data analysis.

5 Hypothesis testing Once we have a statistical model that describes the distribution of our data, we can explore data points with reference to our model. In hypothesis testing we typically ask questions such as: Is a particular sample part of the distribution, or is it an outlier? Could two sets of samples have been drawn from the same distribution, or did they come from different distributions?

6 Hypothesis testing Hypothesis testing is confirmatory data analysis, in contrast to exploratory data analysis. Concepts: null and alternative hypothesis; region of acceptance/rejection and critical value; error types; p-value; significance level; power of a test (1 − false negative rate).

7 Null hypothesis / alternative hypothesis The null hypothesis H0 states that nothing of consequence is apparent in the data distribution. The data correspond to our expectation; we learn nothing new. The alternative hypothesis H1 states that some effect is apparent in the data distribution. The data are different from our expectation; we need to account for something new. Not in all cases will this result in a new model, but a new model always begins with the observation that the old model is inadequate.

8 Test types Just like the large variety of types of hypotheses, the number of tests is large. The proper application of tests can be confusing, and it is easy to make mistakes. Common types of tests: A one-sample test compares a sample with a population. A two-sample test compares samples with each other. Paired-sample tests compare matched pairs of observations with each other; typically we ask whether their difference is significant.

9 Test types ... common types of tests (as you would find them in a statistics textbook): A Z-test compares a sample mean with a normal distribution. A t-test compares a sample mean with a t-distribution and thus relaxes the requirements on normality for the sample. Chi-squared tests analyze whether samples are drawn from the same distribution. F-tests analyze the variance of populations (ANOVA). Non-parametric tests can be applied if we have no reasonable model from which to derive a distribution for the null hypothesis.
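The simplest of these, the Z-test, translates directly into a few lines of R. Base R has no built-in z-test function, so a minimal sketch (all sample and population values below are made up for illustration) computes the statistic by hand:

```r
# One-sample Z-test: does the sample mean differ from a hypothesized
# population mean mu0, given a KNOWN population sd sigma?
set.seed(42)
x <- rnorm(20, mean = 10.5, sd = 2)   # simulated sample
mu0 <- 10                             # hypothesized population mean
sigma <- 2                            # known population sd (Z-test assumption)

z <- (mean(x) - mu0) / (sigma / sqrt(length(x)))  # standardized sample mean
p <- 2 * pnorm(-abs(z))                           # two-sided p-value
```

When sigma must instead be estimated from the sample, this becomes the t-test described on the following slides.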

10 The hypothesis test principle Think about what hypothesis testing really means: you have some observation; you have a model of the data; you ask about the probability that the model of your data would contain your observation.

11 Error types

                Truth: H0                             Truth: H1
Accept H0       correct decision (1 − α)              "false negative", Type II error (β)
Reject H0       "false positive", Type I error (α)    correct decision (1 − β)
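The α entry in this table can be checked by simulation: when H0 is true, a test at significance level 0.05 should reject about 5% of the time. A quick sketch:

```r
# Simulate many datasets under H0 (true mean = 0) and count how often
# a two-sided one-sample t-test at alpha = 0.05 (wrongly) rejects.
set.seed(1)
alpha <- 0.05
rejections <- replicate(5000, t.test(rnorm(10))$p.value < alpha)
mean(rejections)   # close to 0.05, the nominal Type I error rate
```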

12 Introduction One-sample and two-sample t-tests are used to test a hypothesis about the mean(s) of a distribution. Gene expression: is the mean expression level under condition 1 different from the mean expression level under condition 2? We assume that the data come from a normal distribution.

13 One-sample t-test t-tests apply to n observations that are independent and normally distributed with equal variance about a mean μ. The one-sample t-statistic is defined as: t = (x̄ − μ0) / (s / √n), i.e. t is the difference between the sample mean and μ0, divided by the standard error of the mean, to penalize noisy samples. If the sample mean is indeed μ0, t follows a t-distribution with n − 1 degrees of freedom.
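The statistic can be computed by hand and checked against R's t.test() (the data here are simulated for illustration):

```r
set.seed(7)
x <- rnorm(12, mean = 1)      # simulated sample
mu0 <- 0                      # null-hypothesis mean

# t = (sample mean - mu0) / SEM
t.manual <- (mean(x) - mu0) / (sd(x) / sqrt(length(x)))
# two-sided p-value from the t-distribution with n - 1 df
p.manual <- 2 * pt(-abs(t.manual), df = length(x) - 1)

fit <- t.test(x, mu = mu0)
all.equal(unname(fit$statistic), t.manual)   # TRUE
all.equal(fit$p.value, p.manual)             # TRUE
```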

14 What is a p-value? a) A measure of how much evidence we have against the alternative hypothesis. b) The probability of making an error. c) Something that biologists want to be below 0.05. d) The probability of observing a value as extreme or more extreme by chance alone. e) All of the above.

15 Two-sample t-test Test whether the means of two distributions are the same. The datasets y_i1, ..., y_in are independent and normally distributed with mean μ_i and variance σ², N(μ_i, σ²), where i = 1, 2. In addition, we assume that the data in the two groups are independent and that the variance is the same.

16 Two-sample t-test The two-sample t-statistic is t = (ȳ1 − ȳ2) / (s_p √(1/n1 + 1/n2)), where s_p² = ((n1 − 1)s1² + (n2 − 1)s2²) / (n1 + n2 − 2) is the pooled variance estimate. Under the null hypothesis, t follows a t-distribution with n1 + n2 − 2 degrees of freedom.
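A sketch of the equal-variance two-sample t-statistic with a pooled variance estimate, checked against R's t.test(..., var.equal = TRUE); the data are simulated for illustration:

```r
set.seed(11)
y1 <- rnorm(10, mean = 0)
y2 <- rnorm(10, mean = 1)
n1 <- length(y1); n2 <- length(y2)

# pooled variance estimate across the two groups
sp2 <- ((n1 - 1) * var(y1) + (n2 - 1) * var(y2)) / (n1 + n2 - 2)
t.manual <- (mean(y1) - mean(y2)) / sqrt(sp2 * (1/n1 + 1/n2))

fit <- t.test(y1, y2, var.equal = TRUE)
all.equal(unname(fit$statistic), t.manual)   # TRUE
```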

17 t-test assumptions Normality: the data need to be normally distributed. If not, one can use a transformation or a non-parametric test. If the sample size is large enough (n > 30), the t-test will work just fine (CLT). Independence: usually satisfied; if the observations are not independent, more complex modeling is required. Independence between groups: in the two-sample t-test, the groups need to be independent. If not, one can use a paired t-test. Equal variances: if the variances are not equal in the two groups, use Welch's t-test (the default in R).
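In R, t.test() applies Welch's correction by default; var.equal = TRUE requests the classical pooled-variance test instead. A short sketch with simulated groups of deliberately unequal variance:

```r
set.seed(3)
a <- rnorm(15, sd = 1)
b <- rnorm(15, mean = 0.5, sd = 3)      # clearly unequal variances

w <- t.test(a, b)                       # Welch's t-test (R's default)
pooled <- t.test(a, b, var.equal = TRUE)  # classical pooled test

unname(pooled$parameter)   # df = n1 + n2 - 2 = 28
unname(w$parameter)        # Welch df: fractional, at most 28
```

Welch's fractional degrees of freedom come from the Welch-Satterthwaite approximation, which down-weights the noisier group.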

18 Non-parametric tests Non-parametric tests constitute a flexible alternative to t-tests if you don't have a model of the distribution. In cases where a parametric test would be appropriate, non-parametric tests have less power. Several non-parametric alternatives exist, e.g. the Wilcoxon and Mann-Whitney tests.

19 Wilcoxon test principle For each observation in a, count the number of observations in b that have a smaller rank. The sum of these counts is the test statistic. On the slide, the observations sit in column 1 of a matrix M with group labels in column 2:

o <- order(M[,1])
plot(M[o,1], col=M[o,2])
wilcox.test(M[1:n,1], M[(1:n)+n,1])
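The matrix M is not defined in the transcript, so here is a self-contained version of the same idea with simulated groups, including a direct check that the W statistic really is the pairwise count described above:

```r
set.seed(5)
n <- 20
a <- rnorm(n, mean = 0)
b <- rnorm(n, mean = 1)

# Rank-based two-sample test: no normality assumption, robust to outliers
wilcox.test(a, b)

# For continuous (tie-free) data, the statistic W counts, over all pairs,
# how many observations in b are smaller than each observation in a.
W <- sum(outer(a, b, ">"))
unname(wilcox.test(a, b)$statistic) == W   # TRUE
```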

20 Permutation test A p-value characterizes where an observation lies with reference to the distribution of our statistic under the null hypothesis. How can we estimate the null distribution? In the two-sample case, to simulate the null distribution, one can simply randomly permute the group labels and recompute the statistic. Repeat this for a (sufficiently large) number of permutations and count the number of times you randomly observed a value as extreme or more extreme than the observation of interest.

21 Permutation test For data that has multiple "categories" associated with each observation: select a statistic (e.g. mean difference, t-statistic); compute the statistic for the observation of interest, t; for a number of permutations, randomly permute the labels and compute the associated statistic; count how often the permuted statistic exceeds the observation.
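The steps above can be sketched in R for the two-sample mean-difference case (data simulated for illustration):

```r
set.seed(9)
a <- rnorm(10)
b <- rnorm(10, mean = 1)
pooled <- c(a, b)
labels <- rep(c(1, 2), each = 10)

obs <- mean(a) - mean(b)                 # statistic for the observed labeling

nPerm <- 2000
permStat <- replicate(nPerm, {
  shuffled <- sample(labels)             # randomly permute the group labels
  mean(pooled[shuffled == 1]) - mean(pooled[shuffled == 2])
})

# two-sided permutation p-value: fraction of permuted statistics
# as extreme or more extreme than the observed one
p <- mean(abs(permStat) >= abs(obs))
```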

22 The bootstrap The basic idea is to resample the data we have observed and compute a new value of the statistic/estimator for each resampled data set. Then one can assess the estimator by looking at the empirical distribution across the resampled data sets.

set.seed(100)
x <- rnorm(15)
muHat <- mean(x)
Nrep <- 100
muHatNew <- rep(0, Nrep)
for (i in 1:Nrep) {
  xNew <- sample(x, replace=TRUE)   # resample with replacement
  muHatNew[i] <- mean(xNew)         # recompute the same estimator on the resample
}
se <- sd(muHatNew)                  # bootstrap estimate of the standard error
muHat
se

23 Statistical "power" The power of a statistical test is the probability that the test will reject the null hypothesis when the null hypothesis is false (i.e. that it will not make a Type II error, or a false negative decision). As the power increases, the chances of a Type II error occurring decrease. The probability of a Type II error occurring is referred to as the false negative rate (β). Therefore power is equal to 1 − β, which is also known as the sensitivity. Power analysis can be used to calculate the minimum sample size required so that one can be reasonably likely to detect an effect of a given size. It can also be used to calculate the minimum effect size that is likely to be detected in a study using a given sample size. In addition, the concept of power is used to make comparisons between different statistical testing procedures: for example, between a parametric and a non-parametric test of the same hypothesis. (From Wikipedia: Statistical power)

24 One-sample t-test: power calculation One-sample t-test: if the mean is μ0, t follows a t-distribution with n − 1 degrees of freedom. If the mean is instead μ1, t follows a noncentral t-distribution with n − 1 degrees of freedom and noncentrality parameter (μ1 − μ0) / (σ/√n).
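This noncentral-t formulation can be evaluated directly; the sketch below uses n = 5, δ = μ1 − μ0 = 1, σ = 2, the same parameters as the power.t.test() call on the next slide, and reproduces its result:

```r
n <- 5; delta <- 1; sigma <- 2; alpha <- 0.05
df <- n - 1
ncp <- delta / (sigma / sqrt(n))    # noncentrality parameter
tcrit <- qt(1 - alpha/2, df)        # two-sided critical value at level alpha

# power = P(|T| > tcrit) where T ~ noncentral t(df, ncp)
power <- pt(-tcrit, df, ncp) + 1 - pt(tcrit, df, ncp)
power   # 0.1384528, matching power.t.test(n = 5, delta = 1, sd = 2)
```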

25 Power, error rates and decision Power calculation in R:

> power.t.test(n = 5, delta = 1, sd = 2, alternative = "two.sided", type = "one.sample")

     One-sample t test power calculation

              n = 5
          delta = 1
             sd = 2
      sig.level = 0.05
          power = 0.1384528
    alternative = two.sided

Other tests are available; see ??power.
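Leaving n unspecified and supplying the desired power instead makes power.t.test() solve for the minimum sample size, which is the usual planning question:

```r
# How many observations are needed to detect delta = 1 with sd = 2
# at 80% power and a two-sided alpha of 0.05?
res <- power.t.test(delta = 1, sd = 2, power = 0.8,
                    alternative = "two.sided", type = "one.sample")
ceiling(res$n)   # round up: sample sizes are whole numbers
```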

26 Power, error rates and decision [Figure: two overlapping sampling distributions centered at μ0 and μ1, illustrating Pr(false positive) = Pr(Type I error) and Pr(false negative) = Pr(Type II error).]

28 Multiple testing Single-hypothesis testing: fix the false positive error rate (e.g. α = 0.05) and minimize the false negatives (maximize sensitivity). This is what traditional testing does. What if we perform many tests at once? Does this affect our false positive rate?

29 Multiple testing With high-throughput methods, we usually face a very large number of decisions in each experiment. For example, we ask for every gene on an array whether it is significantly up- or downregulated. This creates a multiple testing paradox: the more tests we perform, the more false positives we expect by chance, and the harder it becomes for any individual observation to appear significant after correction. Therefore: we need ways to assess error probability in multiple testing situations correctly, and we need approaches that address the paradox.
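The scale of the problem is easy to simulate: for truly null hypotheses, p-values are uniform on [0, 1], so testing 10,000 unchanged genes at α = 0.05 still flags about 500 of them:

```r
set.seed(123)
nGenes <- 10000
# p-values of genes with no true effect are uniform on [0, 1]
pNull <- runif(nGenes)
sum(pNull < 0.05)   # roughly 0.05 * 10000 = 500 false positives
```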

30 FWER The familywise error rate (FWER) is the probability of having at least one false positive (making at least one Type I error) in a "family" of observations. Example: Bonferroni multiple adjustment. For N tests, adjust each p-value as p̃g = N × pg; if p̃g ≤ α, then FWER ≤ α. This is simple and conservative, but there are many other (more powerful) FWER procedures.

31 False Discovery Rate (FDR) The FDR is the proportion of false positives among the genes called differentially expressed (DE). FDR (Benjamini and Hochberg, 1995): order the p-values of the N observations, p(1) ≤ ... ≤ p(i) ≤ ... ≤ p(N). Let k be the largest i such that p(i) ≤ (i/N) × α; then the FDR for genes 1 ... k is controlled at α. Hypotheses need to be independent!
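Both corrections are available in base R through p.adjust(); a sketch with a simulated mix of true effects and nulls shows their relative stringency:

```r
set.seed(42)
# 100 tests: the first 10 have a strong real effect, the other 90 are null
p <- c(runif(10, 0, 0.001), runif(90))

pBonf <- p.adjust(p, method = "bonferroni")  # FWER control: N * p, capped at 1
pBH   <- p.adjust(p, method = "BH")          # FDR control (Benjamini-Hochberg)

sum(pBonf < 0.05)   # conservative: fewer discoveries
sum(pBH   < 0.05)   # more powerful: more discoveries at a controlled FDR
```

BH-adjusted p-values are never larger than Bonferroni-adjusted ones, which is why the FDR approach recovers more true effects in large screens.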

32 SAM SAM (Significance Analysis of Microarrays) is a statistical technique to find significant expression changes of genes in microarray experiments. The input is an expression profile. SAM measures the strength of the association between the expression values and the conditions of the expression profile. It employs a modified t-statistic that is more stable when the number of conditions is small. False discovery rates are estimated through permutations. In R: library(samr); ?samr; ?SAM

33 Summary Multiple testing: if hypotheses are independent or weakly dependent, use an FDR correction; otherwise use Bonferroni's FWER. For more complex hypotheses, try an ANOVA (p = 1) or limma (p > 1).

Sample size     Number of tests p = 1          Number of tests p > 1
n < 30          non-parametric t-test/F-test   regularized t-test/F-test (e.g. SAM, limma) + multiple testing
n ≥ 30          t-test, F-test                 t-test, F-test + multiple testing

34 From here... Get a book (e.g. Peter Dalgaard, Introductory Statistics with R, available online through the UofT library). Simulate your data (don't just use the packaged functions). Have fun.

35 boris.steipe@utoronto.ca
