Dealing With Statistical Uncertainty

Dealing With Statistical Uncertainty
Richard Mott, Wellcome Trust Centre for Human Genetics

Synopsis
Hypothesis testing
P-values
Confidence intervals
T-tests
Non-parametric tests
Sample size calculations
Permutations
Bootstrap

Hypothesis testing
Null hypothesis H0; alternative hypothesis H1.
Examples of H0:
The mean of a population is 3.0.
In a genetic association study, there is no association between disease state and the genotypes of a particular SNP.
Examples of H1:
The mean of the population is > 3.0.
There is an association between disease and genotype.

The Likelihood Ratio Test
A general framework for constructing hypothesis tests:
S = Likelihood(data | H0) / Likelihood(data | H1)
Reject H0 if S < s(H0, H1).
The threshold s is chosen so that there is a probability α of making a false rejection under H0. α is the size of the test (the false positive rate, or Type I error), e.g. α = 0.05.
β is the power of the test, the probability of correctly rejecting H0 when H1 is true, e.g. β = 0.8; 1 − β is the Type II error, the false negative rate.
Generally, for a fixed sample size n, if we fix α then we cannot also fix β; if we fix both α and β then we must let n vary.
The Neyman-Pearson Lemma states that the likelihood ratio test is the most powerful of all tests of a given size.
(Type III error: "correctly rejecting the null hypothesis for the wrong reason".)
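As a minimal sketch (our own example, not from the slides), the following R code computes the likelihood ratio S for two simple hypotheses about the mean of a Normal sample with known variance; all names are ours:

# Sketch: likelihood ratio for H0: mu = 0 vs H1: mu = 1.5, known sd = 1
set.seed(1)
y <- rnorm(20, mean = 1.5, sd = 1)                      # data drawn under H1
logL0 <- sum(dnorm(y, mean = 0, sd = 1, log = TRUE))    # log Likelihood(data | H0)
logL1 <- sum(dnorm(y, mean = 1.5, sd = 1, log = TRUE))  # log Likelihood(data | H1)
S <- exp(logL0 - logL1)                                 # small S is evidence against H0
S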

P-values
The P-value of a statistic Z is the probability of sampling a value more extreme than the observed value z if the null hypothesis is true.
For a one-sided test, the P-value is p = Prob(Z > z | H0); for a two-sided test it is p = Prob(|Z| > z | H0).
Note this is not the probability density at z, but the tail of the cumulative distribution function.
[Figure: upper and lower 2.5% tails of the N(0,1) distribution, corresponding to |z| = 1.96.]
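A quick check in R (our own example): the tail probabilities above can be computed with pnorm:

z <- 1.96
pnorm(z, lower.tail = FALSE)           # one-sided: Prob(Z > z | H0), about 0.025
2 * pnorm(abs(z), lower.tail = FALSE)  # two-sided: Prob(|Z| > z | H0), about 0.05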

Power
The power of a statistic Z is the probability of sampling a value more extreme than the critical value z if the alternative hypothesis is true.
For a one-sided test, the power is β = Prob(Z > z | H1); for a two-sided test it is β = Prob(|Z| > z | H1).
Again, this is not the probability density at z, but the tail of the cumulative distribution function.
[Figure: densities of H0: N(0,1), H1: N(1.5,1) and H2: N(4,1), with the 5% upper tail of H0 marked.]
0.05 = Prob(Z > 1.644 | H0) (5% upper tail)
0.44 = Prob(Z > 1.644 | H1)
0.99 = Prob(Z > 1.644 | H2)
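These three tail probabilities are easy to reproduce in R (a sketch using the distributions above):

z_crit <- qnorm(0.95)                                  # 5% upper tail of N(0,1), about 1.645
pnorm(z_crit, mean = 0, sd = 1, lower.tail = FALSE)    # size 0.05 under H0
pnorm(z_crit, mean = 1.5, sd = 1, lower.tail = FALSE)  # power about 0.44 under H1
pnorm(z_crit, mean = 4, sd = 1, lower.tail = FALSE)    # power about 0.99 under H2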

Multiple Testing
Conventionally, the significance threshold α for a test is 0.05 or 0.01. However, statistical genetics problems frequently involve multiple testing: e.g. a genome scan may test over one million SNPs for association, and a gene expression microarray will screen tens of thousands of transcripts.
On the one hand, significance thresholds should be adjusted to be more conservative, in order to prevent too many false positives occurring. In N independent tests, the expected P-value of the most significant result to occur by chance is 1/(N+1) [because P-values are uniformly distributed on [0,1]]. While it is not strictly true that the tests are independent, this is a surprisingly good guide to what is observed in reality. At these scales P-values are often reported as −log10(P-value), so N = 10^6 implies the most significant P-value expected by chance is about 10^−6, i.e. logP = 6.
On the other hand, raising the significance threshold lowers the power to detect associations, so larger sample sizes are needed to compensate, or a different approach based around False Discovery Rates (FDR) is employed. The idea behind FDR is to choose a laxer threshold so that many false positives may occur, but to limit the proportion of false positives to, say, 10%. The FDR is useful when the purpose of the screen is to identify items (SNPs, genes, etc.) for further experimental investigation, which will provide more information.
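As an illustration of both strategies (our own simulated example), R's built-in p.adjust supports Bonferroni-style family-wise control and the Benjamini-Hochberg FDR:

set.seed(2)
p <- c(runif(995), runif(5, 0, 1e-4))            # 995 null tests plus 5 true signals
sum(p.adjust(p, method = "bonferroni") < 0.05)   # conservative family-wise control
sum(p.adjust(p, method = "fdr") < 0.10)          # laxer threshold, FDR limited to 10%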

Example: the mean of a Normal distribution
H0: mean = μ0 vs H1: mean = μ1.
Data: y1, …, yn iid N(μ, σ²). Assume the variance σ² is the same under both hypotheses and is known.
The likelihood ratio then depends on the data only through the sample mean, so we base all inferences on the sample mean, compared with the difference in the hypothesised means.

The Normal and T distributions
For a sample from a Normal distribution with known mean and variance, the sample mean can be standardised to follow the standard Normal distribution:
z = (ȳ − μ) / (σ/√n) ~ N(0,1)
and the 95% confidence interval for the mean is given by ȳ ± 1.96 σ/√n.
But we often wish to make inferences about samples from a Normal distribution when the variance is unknown. The variance must then be estimated from the data as the sample variance
s² = Σi (yi − ȳ)² / (n − 1).
Because of this uncertainty the distribution of the standardised sample mean is broader: t = (ȳ − μ) / (s/√n) follows the Tn-1 distribution.
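In R (our own example), the two intervals differ only in the quantile used and the variance estimate; for small n the T interval is noticeably wider:

set.seed(3)
y <- rnorm(10, mean = 3, sd = 2)
n <- length(y)
mean(y) + c(-1, 1) * qnorm(0.975) * 2 / sqrt(n)               # known sigma = 2: Normal interval
mean(y) + c(-1, 1) * qt(0.975, df = n - 1) * sd(y) / sqrt(n)  # estimated sigma: wider T interval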

The T distribution
The Tn and Normal distributions are almost identical for n > 20. As a rule of thumb, an approximate 95% confidence interval for the mean when n > 20 is ȳ ± 2 s/√n.

T-tests
The T-test compares the means of samples.
The T-test is optimal (in the likelihood ratio sense) if the data are sampled from a Normal distribution.
Even if the data are not Normal, the test is useful in large samples.
There are several versions, depending on the details of the design.

T tests for the comparison of sample means: One-Sample Test
One sample y1, …, yn.
H0: the population mean = μ0; H1: the population mean ≠ μ0.
Test statistic: t = (ȳ − μ0) / (s/√n).
Reject H0 if |t| > tn-1(α/2), where tn-1(α/2) is the quantile of the Tn-1 distribution such that the probability of exceeding it is α/2 (a two-sided test).
In R, the quantile tn-1(0.025) is qt(0.975, n-1).
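A minimal sketch (our own data): computing the one-sample t statistic by hand and checking it against t.test:

set.seed(4)
y <- rnorm(15, mean = 3.5, sd = 1)
mu0 <- 3.0
t_stat <- (mean(y) - mu0) / (sd(y) / sqrt(length(y)))          # t = (ybar - mu0)/(s/sqrt(n))
p_val <- 2 * pt(abs(t_stat), df = length(y) - 1, lower.tail = FALSE)
c(t_stat, p_val)
t.test(y, mu = mu0)    # should report the same t and p-value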

T tests for the comparison of sample means: Paired T-test
Two samples of paired data (x1,y1), (x2,y2), …, (xn,yn).
Example: two experimental replicates taken from n individuals; we wish to test whether the replicates are statistically similar.
H0: mean(x) = mean(y).
Take differences d1 = x1 − y1, …, dn = xn − yn. Then H0 becomes mean(d) = 0: a one-sample T-test with μ0 = 0.
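The equivalence is easy to verify in R (our own simulated pairs):

set.seed(5)
x <- rnorm(12, mean = 5, sd = 1)
y <- x + rnorm(12, mean = 0.3, sd = 0.5)   # paired replicates with a small shift
t.test(x, y, paired = TRUE)                # paired T-test
t.test(x - y, mu = 0)                      # one-sample T-test on differences: identical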

T tests for the comparison of sample means: Two-Sample T-test
Two samples of unpaired data x1, x2, …, xn and y1, y2, …, ym.
H0: mean(x) = mean(y).
If we assume the variance is the same in each group then we can estimate it with the pooled estimator
s² = [(n − 1) sx² + (m − 1) sy²] / (n + m − 2).
The test statistic is
t = (x̄ − ȳ) / (s √(1/n + 1/m)),
which follows the Tn+m-2 distribution under H0.
(The case with unequal variances is more complicated.)
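A sketch (our own example) of the pooled calculation, checked against t.test with var.equal = TRUE:

set.seed(6)
x <- rnorm(20, 0, 1)
y <- rnorm(25, 0.8, 1)
n <- length(x); m <- length(y)
s2 <- ((n - 1) * var(x) + (m - 1) * var(y)) / (n + m - 2)   # pooled variance
t_stat <- (mean(x) - mean(y)) / sqrt(s2 * (1/n + 1/m))
t_stat
t.test(x, y, var.equal = TRUE)$statistic                    # should match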

T tests in R

t.test(x, y = NULL,
       alternative = c("two.sided", "less", "greater"),
       mu = 0, paired = FALSE, var.equal = FALSE,
       conf.level = 0.95, ...)

x: a numeric vector of data values.
y: an optional numeric vector of data values.
alternative: a character string specifying the alternative hypothesis; must be one of "two.sided" (default), "greater" or "less".
mu: a number indicating the true value of the mean.
paired: a logical indicating whether you want a paired t-test.
var.equal: a logical variable indicating whether to treat the two variances as being equal.
conf.level: confidence level of the interval.
formula: a formula of the form lhs ~ rhs, where lhs is a numeric variable giving the data values and rhs a factor with two levels giving the corresponding groups.
data: an optional data frame containing the variables in the model formula.
subset: an optional vector specifying a subset of observations to be used.

T tests in R

samp1 <- rnorm(20, 0, 1)    # sample 20 numbers from the N(0,1) distribution
samp2 <- rnorm(20, 1.5, 1)  # sample from N(1.5,1)
t.test(samp1, samp2, var.equal = TRUE)

        Two Sample t-test

data:  samp1 and samp2
t = -3.5892, df = 38, p-value = 0.000935
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -1.9272037 -0.5372251
sample estimates:
mean of x mean of y
0.4040969 1.6363113

Outliers
Outliers are a small number of contaminating observations with a different distribution.
Outliers can destroy genuine statistical significance. Reason: the estimate of the variance is inflated, reducing the T statistic.
They can also sometimes create false significance (Type III error: "correctly rejecting the null hypothesis for the wrong reason").
There are robust alternatives to the T-test: the Wilcoxon tests.

> samp3 <- samp2
> samp3[1] <- 40   # add one outlier
> t.test(samp1, samp3, var.equal = TRUE)

        Two Sample t-test

data:  samp1 and samp3
t = -1.5911, df = 38, p-value = 0.1199
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -7.0519594  0.8450837
sample estimates:
mean of x mean of y
0.4040969 3.5075347

> var(samp2)
[1] 1.160251
> var(samp3)
[1] 74.88977

Wilcoxon signed rank test (alternative to the paired T-test)
Two samples of paired data (x1,y1), (x2,y2), …, (xn,yn).
Take differences d1 = x1 − y1, …, dn = xn − yn.
H0: the distribution of the d's is symmetric about 0.
Compute the ranks of the absolute differences, rank(|d1|), etc.
Compute the signs of the differences.
Compute W, the sum of the ranks with positive signs.
If H0 is true then W should be close to its expected value n(n+1)/4, half the sum of all the ranks.
The distribution of W is known, and for n > 20 a Normal approximation is used.
In R:
wilcox.test(x, y, alternative = c("two.sided", "less", "greater"),
            mu = 0, paired = TRUE, exact = NULL, correct = TRUE,
            conf.int = FALSE, conf.level = 0.95, ...)
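A sketch (our own data) computing W by hand; R's wilcox.test reports the same quantity as its statistic V:

set.seed(7)
x <- rnorm(15, mean = 5, sd = 1)
y <- x + rnorm(15, mean = 0.5, sd = 1)
d <- x - y
W <- sum(rank(abs(d))[d > 0])     # sum of the ranks of the positive differences
W
n <- length(d)
n * (n + 1) / 4                   # expected value of W under H0
wilcox.test(x, y, paired = TRUE)  # statistic V equals W above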

Wilcoxon rank sum test (alternative to the unpaired T-test)
Two samples of unpaired data x1, x2, …, xn and y1, y2, …, ym.
Arrange all the observations into a single ranked series of length N = n + m.
R1 = the sum of the ranks of the observations that came from sample 1. The sum of ranks R2 in sample 2 follows by subtraction, since the sum of all the ranks equals N(N+1)/2.
Compute U1 = R1 − n(n+1)/2, U2 = R2 − m(m+1)/2 and U = min(U1, U2).
Normal approximation: z = (U − μ)/σ, with μ = nm/2 and σ² = nm(n+m+1)/12.
The Wilcoxon rank sum test is only about 5% less efficient than the unpaired T-test (in terms of the sample size required to achieve the same power) in large samples.
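A sketch (our own data) of the rank-sum calculation; R's wilcox.test reports U1 for the first sample as its statistic W:

set.seed(8)
x <- rnorm(20, 0, 1)
y <- rnorm(20, 1.5, 1)
n <- length(x); m <- length(y)
r <- rank(c(x, y))               # ranks in the combined series of length N = n + m
R1 <- sum(r[1:n])                # rank sum for sample 1
U1 <- R1 - n * (n + 1) / 2
U1
wilcox.test(x, y)$statistic      # W: should equal U1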

Wilcoxon rank sum test (alternative to the unpaired T-test)

> wilcox.test(samp1, samp2)

        Wilcoxon rank sum test

data:  samp1 and samp2
W = 82, p-value = 0.001041
alternative hypothesis: true mu is not equal to 0

> wilcox.test(samp1, samp3)   # with outliers

data:  samp1 and samp3

Binomial Sign Test (alternative to the one-sample T-test)
One sample y1, …, yn.
H0: the population mean = μ0 (strictly the median, which coincides with the mean for a symmetric distribution).
Count M, the number of y's that are > μ0.
If H0 is true, M should be about n/2, and M is distributed as a Binomial random variable B(n, 1/2).
The probability of observing more than r of the y's above μ0 is 0.5^n Σi>r C(n, i).
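In R (our own example) the test can be run with binom.test, and the Binomial tail sum computed directly:

set.seed(9)
y <- rnorm(30, mean = 3.4, sd = 1)
mu0 <- 3.0
M <- sum(y > mu0)                       # number of observations above mu0
binom.test(M, length(y), p = 0.5)       # two-sided sign test
n <- length(y)
sum(dbinom(M:n, size = n, prob = 0.5))  # tail probability Prob(at least M above mu0)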

Permutation Tests
Another non-parametric alternative. Permutation is a general principle with many applications to hypothesis testing, but it is not useful for parameter estimation.
The method is best understood by an example. Suppose a data set is classified into N groups, and we are interested in whether there are differences between the group means (e.g. if N = 2 then the two-sample T-test or Wilcoxon rank sum test would be appropriate).
Compute a statistic S (e.g. the difference between the maximum and minimum group mean).
If there are no differences between groups, then any permutation of the group labellings between individuals should produce much the same value of S.
Repeat for i = 1 to M: permute the group labels and compute the statistic Si on the permuted data.
The permutation p-value is the fraction of permutations for which Si exceeds S.
It's that simple! (A sketch in R follows below.)
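A minimal permutation test in R (our own simulated two-group example):

set.seed(10)
values <- c(rnorm(20, 0, 1), rnorm(20, 1, 1))
groups <- rep(c("A", "B"), each = 20)
stat <- function(v, g) abs(diff(tapply(v, g, mean)))  # |difference of group means|
S <- stat(values, groups)                             # observed statistic
M <- 10000
S_perm <- replicate(M, stat(values, sample(groups)))  # permute the group labels
mean(S_perm >= S)                                     # permutation p-value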

Bootstrapping
Bootstrapping is a general way to evaluate the uncertainty in estimates of statistics while making few assumptions about the data. It is particularly useful for complex statistics whose distributions cannot be evaluated analytically.
Example: in a gene expression study, we wish to find a minimal set of SNPs that in combination will predict the variation in a phenotype. Suppose we have a deterministic algorithm A that will find us a plausible set of SNPs (i.e. a model). This is likely to be a complex programme containing some arbitrary steps, and it is unlikely to give any measure of uncertainty for the model. There may be many possible sets of SNPs with similar claims, so we would like a way to average across different models, and to evaluate the evidence for a SNP as the fraction of models in which it occurs.
To do this, we need a way to perturb the data, generating new data sets on which we can apply the procedure A. Bootstrapping is one way to do this.

Bootstrapping
Given a set of n things S = {x1, …, xn}, a bootstrap sample is formed by sampling with replacement from S to create a new set S* of size n.
This means some elements of S will be missing from S* and some will be present multiple times (on average, about 1 − 1/e ≈ 63% of the elements of S will be present in S*).
S* can be used as a replacement for S in any algorithm that works on S, and we can repeatedly draw further bootstrap samples. Any statistic Z that can be calculated on S can also be calculated on S*, so we can construct the bootstrap sampling distribution of Z from repeated samples. From this we can make inferences about the variation of Z, e.g. construct a 95% confidence interval (if Z is numeric). A sketch follows below.
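A sketch (our own example): a percentile bootstrap confidence interval for a median, a statistic with no simple analytic distribution:

set.seed(11)
S <- rexp(50, rate = 1)         # the original sample of n = 50 things
B <- 10000
boot_med <- replicate(B, median(sample(S, replace = TRUE)))  # resample with replacement
quantile(boot_med, c(0.025, 0.975))   # bootstrap 95% confidence interval for the median
# fraction of distinct elements appearing in a resample: about 1 - 1/e = 63% on average
mean(replicate(1000, length(unique(sample(S, replace = TRUE))))) / length(S)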

Degrees of Freedom
Suppose x1, …, xn are iid N(0,1). Then Σi xi² is distributed as χ² with n degrees of freedom, and Σi (xi − 0)²/n is an estimate of the variance. Although E(xi) = 0, knowing the values of x1, …, xn-1 tells us nothing extra about xn, because the observations are independent.
But what about Σi (xi − x̄)²? Here, knowing x1 − x̄, …, xn-1 − x̄ forces the value of xn to be n·x̄ − (x1 + … + xn-1). Therefore the sum is distributed as if it comprised one fewer observation, hence it has n − 1 degrees of freedom (for example, its expectation is n − 1).
In general, if there are p constraints placed on a set of n numbers, then the number of degrees of freedom is n − p. In particular, if p parameters are estimated from a data set, then the residuals y − x1b1 − … − xpbp have p constraints on them, so they behave like n − p independent variables.
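A numerical check in R (our own simulation) that centring at the sample mean costs one degree of freedom:

set.seed(12)
n <- 10
sims <- replicate(50000, {
  x <- rnorm(n)
  c(sum(x^2), sum((x - mean(x))^2))
})
rowMeans(sims)   # approximately n and n - 1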