Analysis of gene expression data (Nominal explanatory variables) Shyamal D. Peddada Biostatistics Branch National Inst. Environmental Health Sciences (NIH)


1 Analysis of gene expression data (Nominal explanatory variables) Shyamal D. Peddada Biostatistics Branch National Inst. Environmental Health Sciences (NIH) Research Triangle Park, NC

2 Outline of the talk Two types of explanatory variables (“experimental conditions”) Some scientific questions of interest A brief discussion on false discovery rate (FDR) analysis Some existing statistical methods for analyzing microarray data

3 Types of explanatory variables

4 Types of explanatory variables (“experimental conditions”) Nominal variables: – No intrinsic order among the levels of the explanatory variable(s). – No loss of information if we permuted the labels of the conditions. E.g. Comparison of gene expression of samples from “normal” tissue with those from “tumor” tissue.

5 Types of explanatory variables (“experimental conditions”) Ordinal/interval variables: – Levels of the explanatory variables are ordered. – E.g. Comparison of gene expression of samples from different stages of severity of lesions, such as “normal”, “hyperplasia”, “adenoma” and “carcinoma”. (categorically ordered) Time-course/dose-response experiments. (numerically ordered)

6 Focus of this talk: Nominal explanatory variables

7 Types of microarray data Independent samples – E.g. comparison of gene expression of independent samples drawn from normal patients versus independent samples from tumor patients. Dependent samples – E.g. comparison of gene expression of samples drawn from normal tissues and tumor tissues from the same patient.

8 Possible questions of interest Identify significant “up/down” regulated genes for a given “condition” relative to another “condition” (adjusted for other covariates). Identify genes that discriminate between various “conditions” and predict the “class/condition” of a future observation. Cluster genes according to patterns of expression over “conditions”. Other questions?

9 Challenges Small sample size but a large number of genes. Multiple testing – Since each microarray has thousands of genes/probes, several thousand hypotheses are being tested. This impacts the overall Type I error rates. Complex dependence structure between genes and possibly among samples. – Difficult to model and/or account for the underlying dependence structures among genes.

10 Multiple Testing: Type I Errors - False Discovery Rates …

11 The Decision Table

                    Not rejected    Rejected    Total
  True nulls             U              V         m0
  False nulls            T              S       m - m0
  Total                m - R            R          m

Only the bottom row (the number of rejections R and the total m) is observable.

12 Strong and weak control of type I error rates Strong control: control the type I error rate under any combination of true and false null hypotheses. Weak control: control the type I error rate only when all null hypotheses are true. Since we do not know a priori which hypotheses are true, we will focus on strong control of the type I error rate.

13 Consequences of multiple testing Suppose we test each hypothesis at the 5% level of significance. – Suppose n = 10 independent tests are performed. Then the probability of declaring at least 1 of the 10 tests significant is 1 - 0.95^10 ≈ 0.401. – If 50,000 independent tests are performed, as in Affymetrix microarray data, then you should expect 50,000 x 0.05 = 2,500 false positives!
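The numbers on this slide can be verified directly; a minimal Python sketch (the `fwer` helper name is ours):

```python
# P(at least one false positive) among n independent tests, each at
# level alpha, when all null hypotheses are true.
def fwer(n, alpha=0.05):
    return 1 - (1 - alpha) ** n

print(round(fwer(10), 3))   # 0.401, as on the slide

# Expected number of false positives among 50,000 independent
# true-null tests at alpha = 0.05.
print(50_000 * 0.05)        # 2500.0
```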

14 Types of errors in the context of multiple testing Per-Family Error “Rate” (PFER): E(V) – Expected number of false rejections of true null hypotheses. Per-Comparison Error Rate (PCER): E(V)/m – Expected proportion of false rejections among all m hypotheses. Family-Wise Error Rate (FWER): P(V > 0) – Probability of at least one false rejection among all m hypotheses.

15 Types of errors in the context of multiple testing False Discovery Rate (FDR): FDR = E[V/R | R > 0] P(R > 0) – Expected proportion of Type I errors among all rejected hypotheses. Benjamini-Hochberg (BH): set V/R = 0 if R = 0, which gives the expression above. Storey: only interested in the case R > 0, which gives the positive FDR, pFDR = E[V/R | R > 0].

16 Some useful inequalities Since V/R ≤ 1 whenever V > 0, FDR = E[V/R | R > 0] P(R > 0) ≤ P(V > 0) = FWER, with equality when all null hypotheses are true. Also, FDR = pFDR · P(R > 0) ≤ pFDR.

19 Conclusion It is conservative to control FWER rather than FDR! It is conservative to control pFDR rather than FDR!

20 Some useful inequalities pFDR = FDR / P(R > 0) ≥ FDR.

23 However, in most applications such as microarrays, one expects P(R > 0) ≈ 1, so that pFDR ≈ FDR. In general, there is no proof of this statement.

24 q-values versus p-values. Suppose we are interested in a one-sided test of a null hypothesis H0, and suppose t is the observed value of the test statistic for a given data set.

25 q-values versus p-values. Under a mixture model in which each hypothesis is truly null with some prior probability, the pFDR of a rejection region Γ can be rewritten as pFDR(Γ) = P(H0 is true | T ∈ Γ). If t is the observed value of the test statistic, the q-value is the smallest pFDR over all rejection regions containing t; for a one-sided test, q(t) = P(H0 is true | T ≥ t). In this sense the q-value is a posterior-Bayesian p-value.
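In practice, q-values are computed from the ranked p-values. A minimal sketch of the Benjamini-Hochberg-style adjustment (Storey's estimator additionally rescales by an estimate of the proportion of true nulls, pi0, which is taken as 1 here for simplicity):

```python
# q-value-like quantities from p-values: BH-adjusted p-values made
# monotone, walking from the largest p-value down to the smallest.
def bh_q_values(pvals):
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    q = [0.0] * m
    prev = 1.0
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        # Adjusted value p * m / rank, forced to be non-increasing
        # as the p-values decrease.
        prev = min(prev, pvals[i] * m / rank)
        q[i] = prev
    return q

print(bh_q_values([0.01, 0.04, 0.03, 0.20]))
```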

26 Some popular Type I error controlling procedures Let p(1) ≤ p(2) ≤ … ≤ p(m) denote the ordered p-values for the ‘m’ tests that are being performed. Let α1 ≤ α2 ≤ … ≤ αm denote the ordered levels of significance used for testing the ‘m’ null hypotheses, respectively.

27 Some popular controlling procedures Step-down procedure: start with the smallest p-value; reject H(i) as long as p(j) ≤ αj for every j ≤ i, and stop at the first i for which the inequality fails.

28 Some popular controlling procedures Step-up procedure: find the largest k such that p(k) ≤ αk and reject the k hypotheses with the smallest p-values; if no such k exists, reject none.

29 Some popular controlling procedures Single-step procedure: a stepwise procedure with the same critical constant for all ‘m’ hypotheses (αi identical for every i).

30 Some typical stepwise procedures: FWER controlling procedures Bonferroni: A single-step procedure with αi = α/m. Sidak: A single-step procedure with αi = 1 - (1 - α)^(1/m). Holm: A step-down procedure with αi = α/(m - i + 1). Hochberg: A step-up procedure with αi = α/(m - i + 1). minP method: A resampling-based single-step procedure whose critical constant is the α quantile of the distribution of the minimum p-value.
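Three of these procedures can be sketched directly in a few lines, using the standard critical constants above; each function returns the set of indices whose null hypotheses are rejected:

```python
def bonferroni(pvals, alpha=0.05):
    # Single-step: compare every p-value with alpha / m.
    m = len(pvals)
    return {i for i, p in enumerate(pvals) if p <= alpha / m}

def holm(pvals, alpha=0.05):
    # Step-down: alpha_i = alpha / (m - i + 1); stop at first failure.
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    rejected = set()
    for step, i in enumerate(order, start=1):
        if pvals[i] <= alpha / (m - step + 1):
            rejected.add(i)
        else:
            break
    return rejected

def hochberg(pvals, alpha=0.05):
    # Step-up: find the largest k with p_(k) <= alpha / (m - k + 1)
    # and reject the k hypotheses with the smallest p-values.
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    for k in range(m, 0, -1):
        if pvals[order[k - 1]] <= alpha / (m - k + 1):
            return set(order[:k])
    return set()

pvals = [0.001, 0.013, 0.02, 0.8]
# Bonferroni rejects only gene 0; Holm and Hochberg reject genes 0-2.
print(bonferroni(pvals), holm(pvals), hochberg(pvals))
```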

31 Comments on the methods Bonferroni: Very general but can be too conservative for a large number of hypotheses. Sidak: More powerful than Bonferroni, but applicable only when the test statistics are independent or have certain types of positive dependence.

32 Comments on the methods Holm: More powerful than Bonferroni and applicable for any type of dependence structure among the test statistics. Hochberg: More powerful than Holm’s procedure, but the test statistics should either be independent or have the MTP2 property.

33 Comments on the methods Multivariate Total Positivity of Order 2 (MTP2): a joint density f is MTP2 if f(x) f(y) ≤ f(x ∧ y) f(x ∨ y) for all x, y, where ∧ and ∨ denote the componentwise minimum and maximum.

34 Some typical stepwise procedures: FDR controlling procedure Benjamini-Hochberg: A step-up procedure with αi = iα/m.
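The BH step-up rule with αi = iα/m can be sketched as follows: find the largest k with p_(k) ≤ kα/m and reject the k hypotheses with the smallest p-values.

```python
def benjamini_hochberg(pvals, alpha=0.05):
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    # Step up from the largest p-value toward the smallest.
    for k in range(m, 0, -1):
        if pvals[order[k - 1]] <= k * alpha / m:
            return set(order[:k])
    return set()

# Rejects the two smallest p-values (indices 0 and 1):
# p_(2) = 0.013 <= 2 * 0.05 / 4, while p_(3) = 0.04 > 3 * 0.05 / 4.
print(benjamini_hochberg([0.001, 0.013, 0.04, 0.8]))
```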

35 An Illustration Lobenhofer et al. (2002) data: breast cancer cells exposed to estradiol for 1 hour or for 12, 24, or 36 hours. Number of genes on the two-spot cDNA array: 1,900. Number of samples per time point: 8. Compare 1 hour with 12, 24, and 36 hours using a two-sided bootstrap t-test.

36 Some Popular Methods of Analysis

37 1. Fold-change

38 1. Fold-change in gene expression For gene “g” compute the fold change between two conditions (e.g. treatment and control): FC_g = (mean expression under treatment) / (mean expression under control).

39 1. Fold-change in gene expression c1 > 1 and c2 < 1: pre-defined constants. FC_g ≥ c1: gene “g” is “up-regulated”. FC_g ≤ c2: gene “g” is “down-regulated”.

40 1. Fold-change in gene expression Strengths: – Simple to implement. – Biologists find it very easy to interpret. – It is widely used. Drawbacks: – Ignores variability in mean gene expression. – Genes with subtle but real changes in expression can be overlooked, i.e. potentially high false negative rates. – Conversely, high false positive rates are also possible.
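A minimal fold-change screen for one gene, assuming expression values on the raw scale and hypothetical cutoffs c1 = 2 and c2 = 0.5:

```python
def fold_change_call(trt, ctl, c1=2.0, c2=0.5):
    # FC_g = mean(treatment) / mean(control), then threshold.
    fc = (sum(trt) / len(trt)) / (sum(ctl) / len(ctl))
    if fc >= c1:
        return fc, "up-regulated"
    if fc <= c2:
        return fc, "down-regulated"
    return fc, "no call"

print(fold_change_call([8.0, 12.0], [2.0, 3.0]))   # (4.0, 'up-regulated')
```

Note that the rule never looks at the spread of the replicate values, which is exactly the drawback listed above.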

41 2. t-test type procedures

42 2.1 Permutation t-test For each gene “g” compute the standard two-sample t-statistic: t_g = (xbar_g1 - xbar_g2) / (s_g sqrt(1/n1 + 1/n2)), where xbar_g1 and xbar_g2 are the sample means in the two conditions and s_g is the pooled sample standard deviation.

43 2.1 Permutation t-test Statistical significance of a gene is determined by computing the null distribution of t_g using either a permutation or a bootstrap procedure.

44 2.1 Permutation t-test Strengths: – Simple to implement. – Biologists find it very easy to interpret. – It is widely used. Drawback: – Potentially, for some genes the pooled sample standard deviation could be very small, which may result in inflated Type I errors and inflated false discovery rates.
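A sketch of the permutation t-test for a single gene, using 2,000 random label shuffles rather than full enumeration (with many genes, this loop would run once per gene):

```python
import random
from math import sqrt

def t_stat(x, y):
    # Standard pooled two-sample t-statistic.
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    ssx = sum((v - mx) ** 2 for v in x)
    ssy = sum((v - my) ** 2 for v in y)
    sp = sqrt((ssx + ssy) / (nx + ny - 2))          # pooled SD
    return (mx - my) / (sp * sqrt(1 / nx + 1 / ny))

def permutation_pvalue(x, y, n_perm=2000, seed=0):
    rng = random.Random(seed)
    observed = abs(t_stat(x, y))
    pooled = list(x) + list(y)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                          # permute group labels
        if abs(t_stat(pooled[:len(x)], pooled[len(x):])) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)                 # add-one correction

p = permutation_pvalue([5.1, 5.4, 5.2, 5.3], [6.8, 7.1, 6.9, 7.2])
print(p)   # small; the two groups are completely separated
```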

45 2.2 SAM procedure (Significance Analysis of Microarrays) (Tusher et al., PNAS 2001) For each gene “g” modify the standard two-sample t-statistic as: d_g = (xbar_g1 - xbar_g2) / (s_g + s0). The “fudge” factor s0 is chosen so that the coefficient of variation of the above test statistic is minimized.
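A simplified SAM-style statistic: the ordinary t-denominator plus a constant s0. Tusher et al. choose s0 to minimize the coefficient of variation of d_g across genes; the median-of-SDs choice below is only an illustrative stand-in for that optimization.

```python
from math import sqrt

def sam_stats(genes):
    # genes: list of (treatment_values, control_values), one per gene.
    stats = []
    for x, y in genes:
        nx, ny = len(x), len(y)
        mx, my = sum(x) / nx, sum(y) / ny
        ss = sum((v - mx) ** 2 for v in x) + sum((v - my) ** 2 for v in y)
        sp = sqrt(ss / (nx + ny - 2)) * sqrt(1 / nx + 1 / ny)
        stats.append((mx - my, sp))
    s = sorted(sp for _, sp in stats)
    s0 = s[len(s) // 2]                # median SD as a stand-in for s0
    return [diff / (sp + s0) for diff, sp in stats]

genes = [([5.0, 5.2], [4.1, 3.9]),    # clear shift, small SD
         ([2.0, 2.1], [2.0, 1.9]),    # tiny shift, tiny SD
         ([9.0, 7.0], [3.0, 5.0])]    # large shift, large SD
d = sam_stats(genes)
print(d)
```

The point of s0 is visible in the second gene: its pooled SD is tiny, but the fudge factor keeps its statistic from dominating.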

46 3. F-test and its variations for more than 2 nominal conditions The usual F-test may be applied, with P-values obtained by a suitable permutation procedure. Regularized F-test: a generalization of the Baldi and Long methodology to multiple groups. – It better controls the false discovery rate and has power comparable to the F-test. Cui and Churchill (2003) is a good review paper.

47 4. Linear fixed effects models Effects: – Array (A) - sample – Dye (D) – Variety (V) – test groups – Genes (G) – Expression (Y)

48 4. Linear fixed effects models (Kerr, Martin, and Churchill, 2000) Linear fixed effects model: log(y_ijkg) = mu + A_i + D_j + V_k + G_g + (AG)_ig + (VG)_kg + e_ijkg, where y_ijkg is the intensity from array i, dye j, variety k, and gene g.

49 4. Linear fixed effects models All effects are assumed to be fixed effects. Main drawback – all genes are assumed to have the same variance!

50 5. Linear mixed effects models (Wolfinger et al. 2001) Stage 1 (Global normalization model): y_ijg = mu + T_i + A_j + r_ijg, with treatment effect T and a random array effect A. Stage 2 (Gene-specific model): for each gene g, the Stage 1 residuals are modeled as r_ijg = G_g + (GT)_ig + (GA)_jg + e_ijg.

51 5. Linear mixed effects models Assumptions: the array effects, gene-by-array effects, and error terms are mutually independent, normally distributed random effects with mean zero.

52 5. Linear mixed effects models (Wolfinger et al. 2001) Perform inferences on the interaction term (GT)_ig, the gene-specific treatment effect.

53 A popular graphical representation: The Volcano Plot A scatter plot of -log10(p-value) vs. log2(fold change). Genes with large fold change will lie outside a pair of vertical “threshold” lines. Further, genes which are highly significant with large fold change will lie in either the upper right-hand or the upper left-hand corner.
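The corner points of a volcano plot can be computed directly; a sketch with hypothetical cutoffs (|log2 FC| >= 1, i.e. a two-fold change, and p <= 0.01):

```python
from math import log2, log10

def volcano_points(results, fc_cut=1.0, p_cut=0.01):
    # results: list of (gene, fold_change, p_value).
    # Returns genes landing in the upper-left/upper-right corners,
    # with their (x, y) = (log2 FC, -log10 p) coordinates.
    flagged = []
    for gene, fc, p in results:
        x, y = log2(fc), -log10(p)
        if abs(x) >= fc_cut and p <= p_cut:
            flagged.append((gene, x, y))
    return flagged

results = [("g1", 4.0, 0.001),   # large up-regulation, significant
           ("g2", 1.1, 0.5),     # neither
           ("g3", 0.2, 0.004)]   # large down-regulation, significant
print(volcano_points(results))
```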


55 A useful review article Cui, X. and Churchill, G. (2003), Genome Biology. Software: R package: statistics for microarray analysis. http://www.stat.berkeley.edu/users/terry/zarray/Software/smacode.html SAM: Significance Analysis of Microarrays. http://www-stat.stanford.edu/%7Etibs/SAM

56 Supervised classification algorithms

57 Discriminant analysis based methods A. Linear and Quadratic Discriminant analysis based methods: Strength: – Well studied in the classical statistics literature. Limitations: – Assumes normality. – Imposes constraints on the covariance matrices; one needs to be concerned about singularity when there are many more genes than samples. – No convenient strategy has been proposed in the literature to select the “best” discriminating subset of genes.

58 Discriminant analysis based methods B. Nonparametric classification using a Genetic Algorithm and K-nearest neighbors. – Li et al. (Bioinformatics, 2001) Strengths: – Entirely nonparametric. – Takes into account the underlying dependence structure among genes. – Does not require the estimation of a covariance matrix. Weakness: – Computationally very intensive.

59 GA/KNN methodology – very brief description Computes the Euclidean distance between all pairs of samples based on a sub-vector of, say, 50 genes. Classifies each sample into a treatment group (i.e. condition) based on its K nearest neighbors. Computes a fitness score for each subset of genes based on how many samples are correctly classified; this is the objective function. The objective function is optimized using a Genetic Algorithm.

60 [Figure: K-nearest neighbors classification (k = 3); axes show expression levels of gene 1 and gene 2.]

61 [Figure: subcategories within a class; axes show expression levels of gene 1 and gene 2.]

62 Advantages of the KNN approach Simple; performs as well as or better than more complex methods. Free from assumptions such as normality of the distribution of expression levels. Multivariate: takes into account dependence among expression levels. Accommodates or even identifies distinct subtypes within a class.
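The classification step can be sketched as a minimal KNN over expression vectors (Euclidean distance, majority vote); the two-gene “normal”/“tumor” profiles below are hypothetical:

```python
from collections import Counter
from math import dist

def knn_predict(train, labels, query, k=3):
    # Indices of the k training samples closest to the query.
    nearest = sorted(range(len(train)), key=lambda i: dist(train[i], query))[:k]
    vote = Counter(labels[i] for i in nearest)
    return vote.most_common(1)[0][0]

train = [[1.0, 1.1], [0.9, 1.0], [1.2, 0.8],      # "normal" profiles
         [3.0, 3.2], [3.1, 2.9], [2.8, 3.1]]      # "tumor" profiles
labels = ["normal"] * 3 + ["tumor"] * 3
print(knn_predict(train, labels, [2.9, 3.0]))      # tumor
```

In GA/KNN, each sample plays the role of the query in turn (with itself left out), and the number of correct votes becomes the fitness of the gene subset.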

63 Expression data: many genes and few samples There may be many subsets of genes that can statistically discriminate between the treated and the untreated. There are too many possible subsets to examine exhaustively: with 3,000 genes, there are about 10^72 subsets of size 30.

64 The genetic algorithm A computer algorithm (due to John Holland) that works by mimicking Darwinian natural selection. Has been applied to many optimization problems, ranging from engine design to protein folding and sequence alignment. Effective in searching high-dimensional spaces.

65 GA works by mimicking evolution Randomly select sets (“chromosomes”) of 30 genes from all the genes on the chip Evaluate the “fitness” of each “chromosome” – how well can it separate the treated from the untreated? Pass “chromosomes” randomly to next generation, with preference for the fittest
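A toy version of this loop. The fitness function here is a hypothetical stand-in that simply rewards including two designated “informative” genes; a real GA/KNN run would plug in the KNN classification accuracy instead, and would use subsets of ~30-50 genes rather than 3:

```python
import random

def evolve(n_genes=20, subset=3, pop=30, gens=40, seed=1):
    rng = random.Random(seed)
    informative = {0, 1}                     # hypothetical "true" genes

    def fitness(ch):                         # stand-in for KNN accuracy
        return len(ch & informative)

    # Random initial "chromosomes": subsets of the genes on the chip.
    population = [frozenset(rng.sample(range(n_genes), subset))
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop // 2]   # the fittest half survives
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)  # crossover of two parents
            genes = list(a | b)
            rng.shuffle(genes)
            child = set(genes[:subset])
            if rng.random() < 0.2:           # occasional mutation
                child.pop()
                child.add(rng.randrange(n_genes))
            children.append(frozenset(child))
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print(sorted(best))
```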

66 Summary Pay attention to the multiple testing problem. – Use FDR rather than FWER for large data sets such as gene expression microarrays. Linear mixed effects models may be used for comparing expression data between groups. For classification problems, one may want to consider the GA/KNN approach.


