1
Topics in the Development and Validation of Gene Expression Profiling-Based Predictive Classifiers
Richard Simon, D.Sc.
Chief, Biometric Research Branch, National Cancer Institute
http://linus.nci.nih.gov/brb
2
BRB Website: http://linus.nci.nih.gov/brb
–Powerpoint presentations and audio files
–Reprints & Technical Reports
–BRB-ArrayTools software
–BRB-ArrayTools Data Archive
–Sample Size Planning for Targeted Clinical Trials
3
Simplified Description of a Microarray Assay
Extract mRNA from the cells of interest
–Each mRNA molecule is transcribed from a single gene and has a linear structure complementary to that gene
Convert the mRNA to cDNA, introducing a fluorescently labeled dye into each molecule
Distribute the cDNA sample over a solid surface containing DNA “probes” representing all “genes”; the probes are at known locations on the surface
Let the molecules from the sample hybridize with the probes for the corresponding genes
Remove excess sample and illuminate the surface with a laser at the frequency corresponding to the dye
Measure the fluorescence intensity over each probe
4
Resulting Data
Intensity over a probe is approximately proportional to the abundance of mRNA molecules in the sample for the gene corresponding to that probe
40,000 variables measured for each case
–Excessive hype
–Excessive skepticism
–Some familiar statistical paradigms don’t work well
5
Good Microarray Studies Have Clear Objectives
Class Comparison (Gene Finding)
–Find genes whose expression differs among predetermined classes, e.g. tissue or experimental condition
Class Prediction
–Prediction of a predetermined class (e.g. treatment outcome) using information from the gene expression profile
–Survival risk-group prediction
Class Discovery
–Discover clusters of specimens having similar expression profiles
6
Class Comparison and Class Prediction Not clustering problems Supervised methods
7
Class Prediction ≠ Class Comparison
A set of genes is not a predictive model
Emphasis in class comparison is often on understanding biological mechanisms
–More difficult than accurate prediction and usually requires a different experiment
Demonstrating statistical significance of prognostic factors is not the same as demonstrating predictive accuracy
8
Components of Class Prediction
Feature (gene) selection
–Which genes will be included in the model
Select model type
–E.g. diagonal linear discriminant analysis, nearest-neighbor, …
Fitting parameters (regression coefficients) for the model
–Selecting values of tuning parameters
9
Feature Selection
Genes that are differentially expressed among the classes at a significance level (e.g. 0.01)
–The level is a tuning parameter
–Number of false discoveries is not of direct relevance for prediction
For prediction it is usually more serious to exclude an informative variable than to include some noise variables
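To make the tuning-parameter role of the significance level concrete, here is a minimal sketch of univariate gene selection, assuming a samples-by-genes expression matrix and 0/1 class labels; select_genes is an illustrative helper, not a BRB-ArrayTools function.

```python
import numpy as np
from scipy import stats

def select_genes(X, y, alpha=0.01):
    """Indices of genes differentially expressed between two classes at
    significance level alpha (two-sample t-test computed gene by gene)."""
    t, p = stats.ttest_ind(X[y == 0], X[y == 1], axis=0)
    return np.where(p < alpha)[0]

# Illustrative use on simulated noise data (not a real microarray dataset)
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5000))          # 30 samples x 5000 genes
y = np.repeat([0, 1], 15)
print(len(select_genes(X, y)), "genes pass the 0.01 cutoff (~50 expected by chance)")
```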
11
Optimal significance level cutoffs for gene selection: 50 differentially expressed genes out of 22,000 genes on the microarrays

2δ/σ    n=10    n=30      n=50
1       0.167   0.003     0.00068
1.25    0.085   0.0011    0.00035
1.5     0.045   0.00063   0.00016
1.75    0.026   0.00036   0.00006
2       0.015   0.0002    0.00002
12
Complex Gene Selection
Small subset of genes which together give most accurate predictions
–Genetic algorithms
Little evidence that complex feature selection is useful in microarray problems
13
Linear Classifiers for Two Classes
14
Fisher linear discriminant analysis
Diagonal linear discriminant analysis (DLDA)
–Ignores correlations among genes
Compound covariate predictor
Golub’s weighted voting method
Support vector machines with inner product kernel
Perceptrons
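As an illustration of how simple these linear classifiers are, the sketch below implements a compound-covariate-style predictor: each selected gene is weighted by its two-sample t-statistic, and a new sample is assigned to the class whose mean score is closer. This is a simplified rendering for exposition, not the exact Golub or Radmacher formulation and not BRB-ArrayTools code; DLDA differs mainly in its choice of per-gene weights while likewise ignoring gene-gene correlations.

```python
import numpy as np
from scipy import stats

class CompoundCovariate:
    """Simplified compound covariate predictor for two classes."""

    def fit(self, X, y, genes):
        self.genes = genes
        # weight each selected gene by its (signed) two-sample t-statistic
        self.w, _ = stats.ttest_ind(X[y == 0][:, genes], X[y == 1][:, genes], axis=0)
        scores = X[:, genes] @ self.w              # compound covariate per sample
        self.m0 = scores[y == 0].mean()            # class mean scores
        self.m1 = scores[y == 1].mean()
        return self

    def predict(self, X):
        scores = X[:, self.genes] @ self.w
        # assign each sample to the class whose mean score is closer
        return (np.abs(scores - self.m1) < np.abs(scores - self.m0)).astype(int)
```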
15
When p>>n It is always possible to find a set of features and a weight vector for which the classification error on the training set is zero. There is generally not sufficient information in p>>n training sets to effectively use more complex methods
16
Myth Complex classification algorithms such as neural networks perform better than simpler methods for class prediction.
17
Comparative studies have shown that simpler methods work as well or better for microarray problems because they avoid overfitting the data.
18
Other Simple Methods
Nearest neighbor classification
Nearest k-neighbors
Nearest centroid classification
Shrunken centroid classification
19
Evaluating a Classifier
Most statistical methods were not developed for p>>n prediction problems
Fit of a model to the same data used to develop it is no evidence of prediction accuracy for independent data
Demonstrating statistical significance of prognostic factors is not the same as demonstrating predictive accuracy
Testing whether analysis of independent data results in selection of the same set of genes is not an appropriate test of predictive accuracy of a classifier
22
Internal Validation of a Classifier
Re-substitution estimate
–Develop classifier on dataset, test predictions on same data
–Very biased for p>>n
Split-sample validation
Cross-validation
23
Split-Sample Evaluation
Training-set
–Used to select features, select model type, determine parameters and cut-off thresholds
Test-set
–Withheld until a single model is fully specified using the training-set
–Fully specified model is applied to the expression profiles in the test-set to predict class labels
–Number of errors is counted
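A hedged sketch of that workflow, reusing the illustrative select_genes and CompoundCovariate helpers from the earlier sketches: everything is decided on the training set, and the test set is used exactly once, to count errors.

```python
import numpy as np

def split_sample_error(X, y, train_idx, test_idx, alpha=0.01):
    """Develop the classifier entirely on the training set; the withheld
    test set is only used to count prediction errors at the end."""
    genes = select_genes(X[train_idx], y[train_idx], alpha=alpha)
    model = CompoundCovariate().fit(X[train_idx], y[train_idx], genes)
    return int(np.sum(model.predict(X[test_idx]) != y[test_idx]))
```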
24
Leave-one-out Cross Validation
Omit sample 1
–Develop multivariate classifier from scratch on training set with sample 1 omitted
–Predict class for sample 1 and record whether prediction is correct
25
Leave-one-out Cross Validation
Repeat analysis for training sets with each single sample omitted one at a time
e = number of misclassifications determined by cross-validation
Subdivide e for estimation of sensitivity and specificity
26
With proper cross-validation, the model must be developed from scratch for each leave-one-out training set. This means that feature selection must be repeated for each leave-one-out training set.
–Simon R, Radmacher MD, Dobbin K, McShane LM. Pitfalls in the analysis of DNA microarray data. Journal of the National Cancer Institute 95:14-18, 2003.
The cross-validated estimate of misclassification error is an estimate of the prediction error for the model obtained by applying the specified algorithm to the full dataset.
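Restated in code, again with the illustrative helpers from the sketches above rather than BRB-ArrayTools internals: the point is that gene selection sits inside the leave-one-out loop, so it is redone for every reduced training set.

```python
import numpy as np

def loocv_error(X, y, alpha=0.01):
    """Leave-one-out cross-validation with feature selection repeated
    from scratch for every leave-one-out training set."""
    n, errors = len(y), 0
    for i in range(n):
        train = np.delete(np.arange(n), i)
        genes = select_genes(X[train], y[train], alpha=alpha)   # inside the loop
        if len(genes) == 0:
            genes = np.array([0])    # degenerate fallback so the sketch always runs
        model = CompoundCovariate().fit(X[train], y[train], genes)
        errors += int(model.predict(X[i:i + 1])[0] != y[i])
    return errors / n
```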
27
Prediction on Simulated Null Data
Generation of gene expression profiles
–14 specimens (P_i is the expression profile for specimen i)
–Log-ratio measurements on 6000 genes
–P_i ~ MVN(0, I_6000)
–Can we distinguish between the first 7 specimens (Class 1) and the last 7 (Class 2)?
Prediction method
–Compound covariate prediction
–Compound covariate built from the log-ratios of the 10 most differentially expressed genes
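A rough re-creation of that experiment, for illustration only: it reuses the CompoundCovariate sketch above and does not reproduce the exact numbers from the original slide. On null data the resubstitution error is typically at or near zero, while properly cross-validated error hovers around 50%, which is what honest error estimation should report when there is no real class difference.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
X = rng.normal(size=(14, 6000))     # P_i ~ MVN(0, I_6000), 14 specimens
y = np.repeat([0, 1], 7)            # first 7 = Class 1, last 7 = Class 2

def top10(Xtr, ytr):
    _, p = stats.ttest_ind(Xtr[ytr == 0], Xtr[ytr == 1], axis=0)
    return np.argsort(p)[:10]       # 10 most differentially expressed genes

# Resubstitution: select genes and fit on all 14 specimens, then "test" on the same 14
model = CompoundCovariate().fit(X, y, top10(X, y))
resub = np.mean(model.predict(X) != y)

# Proper LOOCV: gene selection and fitting are repeated with each specimen held out
errs = 0
for i in range(14):
    tr = np.delete(np.arange(14), i)
    m = CompoundCovariate().fit(X[tr], y[tr], top10(X[tr], y[tr]))
    errs += int(m.predict(X[i:i + 1])[0] != y[i])

print("resubstitution error:", resub, " LOOCV error:", errs / 14)
```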
30
Major Flaws Found in 40 Studies Published in 2004
Inadequate control of multiple comparisons in gene finding
–9/23 studies had unclear or inadequate methods to deal with false positives
–10,000 genes × .05 significance level = 500 false positives
Misleading report of prediction accuracy
–12/28 reports based on incomplete cross-validation
Misleading use of cluster analysis
–13/28 studies invalidly claimed that expression clusters based on differentially expressed genes could help distinguish clinical outcomes
50% of studies contained one or more major flaws
31
Myth Split sample validation is superior to LOOCV or 10-fold CV for estimating prediction error
33
Comparison of Internal Validation Methods (Molinaro, Pfeiffer & Simon)
–For small sample sizes, LOOCV is much less biased than split-sample validation
–For small sample sizes, LOOCV is preferable to 10-fold or 5-fold cross-validation or repeated k-fold versions
–For moderate sample sizes, 10-fold CV is preferable to LOOCV
–Some claims for bootstrap resampling for estimating prediction error are not valid for p>>n problems
35
Simulated Data: 40 cases, 10 genes selected from 5000

Method              Estimate   Std Deviation
True                .078
Resubstitution      .007       .016
LOOCV               .092       .115
10-fold CV          .118       .120
5-fold CV           .161       .127
Split sample 1-1    .345       .185
Split sample 2-1    .205       .184
.632+ bootstrap     .274       .084
36
Simulated Data: 40 cases

Method              Estimate   Std Deviation
True                .078
10-fold CV          .118       .120
Repeated 10-fold    .116       .109
5-fold CV           .161       .127
Repeated 5-fold     .159       .114
Split 1-1           .345       .185
Repeated split 1-1  .371       .065
37
DLBCL Data

Method            Bias    Std Deviation   MSE
LOOCV             -.019   .072            .008
10-fold CV        -.007   .063            .006
5-fold CV         .004    .07             .007
Split 1-1         .037    .117            .018
Split 2-1         .001    .119            .017
.632+ bootstrap   -.006   .049            .004
39
Ordinary bootstrap
–Training and test sets overlap
Bootstrap cross-validation (Fu, Carroll, Wang)
–Perform LOOCV on bootstrap samples
–Training and test sets overlap
Leave-one-out bootstrap
–Predict for cases not in the bootstrap sample
–Training sets are too small
Out-of-bag bootstrap (Breiman)
–Predict for case i based on majority rule of predictions for bootstrap samples not containing case i
.632+ bootstrap
–w*LOOBS + (1-w)*RSB
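For reference, the .632+ weighting abbreviated above as w*LOOBS + (1-w)*RSB is, in Efron and Tibshirani's formulation (stated here from that paper rather than from the slide):

\[
\widehat{\mathrm{Err}}^{(.632+)} \;=\; w\,\widehat{\mathrm{Err}}^{(1)} + (1-w)\,\overline{\mathrm{err}},
\qquad
w \;=\; \frac{0.632}{1 - 0.368\,\hat{R}},
\qquad
\hat{R} \;=\; \frac{\widehat{\mathrm{Err}}^{(1)} - \overline{\mathrm{err}}}{\hat{\gamma} - \overline{\mathrm{err}}},
\]

where \(\widehat{\mathrm{Err}}^{(1)}\) is the leave-one-out bootstrap estimate, \(\overline{\mathrm{err}}\) the resubstitution estimate, and \(\hat{\gamma}\) the no-information error rate.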
43
Permutation Distribution of Cross-Validated Misclassification Rate of a Multivariate Classifier
Randomly permute the class labels and repeat the entire cross-validation
Re-do for all (or 1000) random permutations of the class labels
The permutation p-value is the fraction of random permutations that gave as few misclassifications as e in the real data
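A sketch of that permutation test, built on the illustrative loocv_error function above; the choice of 1000 permutations follows the slide, everything else is an assumption of this sketch.

```python
import numpy as np

def permutation_pvalue(X, y, n_perm=1000, alpha=0.01, seed=0):
    """Permutation distribution of the cross-validated misclassification rate:
    the entire cross-validation (including gene selection) is repeated for
    each random relabeling of the samples."""
    rng = np.random.default_rng(seed)
    observed = loocv_error(X, y, alpha=alpha)
    hits = sum(loocv_error(X, rng.permutation(y), alpha=alpha) <= observed
               for _ in range(n_perm))
    return hits / n_perm
```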
46
Does an Expression Profile Classifier Predict More Accurately Than Standard Prognostic Variables?
Not an issue of which variables are significant after adjusting for which others or which are independent predictors
–Predictive accuracy, not significance
The two classifiers can be compared by ROC analysis as functions of the threshold for classification
The predictiveness of the expression profile classifier can be evaluated within levels of the classifier based on standard prognostic variables
47
Does an Expression Profile Classifier Predict More Accurately Than Standard Prognostic Variables?
Some publications fit a logistic model to the standard covariates and the cross-validated predictions of the expression profile classifier
This is valid only with split-sample analysis because the cross-validated predictions are not independent
48
Survival Risk Group Prediction
For analyzing right-censored data to develop predictive classifiers it is not necessary to make the data binary
Can do cross-validation to predict a high or low risk group for each case
Compute Kaplan-Meier curves of the predicted risk groups
Permutation significance of the log-rank statistic
Implemented in BRB-ArrayTools
BRB-ArrayTools also provides for comparing the risk-group classifier based on expression profiles to one based on standard covariates and one based on a combination of both types of variables
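A rough sketch of only the downstream Kaplan-Meier step, assuming cross-validated high/low risk-group labels have already been produced for each case; it uses the lifelines package for illustration, which is not what BRB-ArrayTools uses. The permutation significance mentioned above would come from re-running the whole cross-validation on permuted survival data, not from the ordinary log-rank p-value.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def km_for_predicted_risk_groups(time, event, risk_group):
    """Kaplan-Meier curves and log-rank statistic for cross-validated
    risk-group predictions (risk_group: 0 = predicted low risk, 1 = high risk)."""
    lo, hi = risk_group == 0, risk_group == 1
    km_lo = KaplanMeierFitter().fit(time[lo], event[lo], label="predicted low risk")
    km_hi = KaplanMeierFitter().fit(time[hi], event[hi], label="predicted high risk")
    lr = logrank_test(time[lo], time[hi],
                      event_observed_A=event[lo], event_observed_B=event[hi])
    return km_lo, km_hi, lr.test_statistic
```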
49
Myth Huge sample sizes are needed to develop effective predictive classifiers
50
Sample Size Planning References
K Dobbin, R Simon. Sample size determination in microarray experiments for class comparison and prognostic classification. Biostatistics 6:27-38, 2005.
K Dobbin, R Simon. Sample size planning for developing classifiers using high dimensional DNA microarray data. Biostatistics (2007).
51
Sample Size Planning for Classifier Development
The expected value (over training sets) of the probability of correct classification, PCC(n), should be within a specified tolerance of the maximum achievable PCC(∞)
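Stated symbolically (the tolerance symbol did not survive the slide capture, so γ below is a stand-in for whatever tolerance is specified): choose the smallest training-set size n such that

\[
\mathbb{E}\bigl[\mathrm{PCC}(n)\bigr] \;\ge\; \mathrm{PCC}(\infty) - \gamma ,
\]

where the expectation is over random training sets of size n and \(\mathrm{PCC}(\infty)\) is the probability of correct classification achievable with unlimited training data.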
52
Probability Model
Two classes
Log expression or log ratio is MVN in each class with a common covariance matrix
m differentially expressed genes, p-m noise genes
Expression of the differentially expressed genes is independent of expression of the noise genes
All differentially expressed genes have the same inter-class mean difference 2δ
Common variance for differentially expressed genes and for noise genes
53
Classifier
Feature selection based on univariate t-tests for differential expression at significance level α
Simple linear classifier with equal weights (except for sign) for all selected genes
Power for selecting each of the informative genes that are differentially expressed by mean difference 2δ is 1-β(n)
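Under these assumptions the "equal weights except for sign" rule can be written as follows, with S the set of genes passing the level-α univariate t-test and \(\bar{c}_1, \bar{c}_2\) the training-class means of the score (a restatement consistent with the slide rather than the exact notation of the Dobbin-Simon paper):

\[
c(\mathbf{x}) \;=\; \sum_{j \in S} \operatorname{sign}(t_j)\, x_j ,
\qquad
\hat{k}(\mathbf{x}) \;=\; \arg\min_{k \in \{1,2\}} \bigl| c(\mathbf{x}) - \bar{c}_k \bigr| .
\]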
54
For 2 classes of equal prevalence, let λ1 denote the largest eigenvalue of the covariance matrix of the informative genes.
56
Sample size as a function of effect size (log-base-2 fold change between classes divided by the standard deviation). Two different tolerances are shown. Each class is equally represented in the population; 22,000 genes on an array.
58
b) PCC(60) as a function of the proportion in the under-represented class. Parameter settings same as a), with 10 differentially expressed genes among 22,000 total genes. If the proportion in the under-represented class is small (e.g., <20%), then PCC(60) can decline significantly.
61
Acknowledgements
Kevin Dobbin
Alain Dupuy
Wenyu Jiang
Annette Molinaro
Ruth Pfeiffer
Michael Radmacher
Joanna Shih
Yingdong Zhao
BRB-ArrayTools Development Team
62
BRB-ArrayTools
Contains analysis tools that I have selected as valid and useful
Analysis wizard and multiple help screens for biomedical scientists
Imports data from all platforms and major databases
Automated import of data from the NCBI Gene Expression Omnibus
63
Predictive Classifiers in BRB-ArrayTools
Classifiers
–Diagonal linear discriminant
–Compound covariate
–Bayesian compound covariate
–Support vector machine with inner product kernel
–K-nearest neighbor
–Nearest centroid
–Shrunken centroid (PAM)
–Random forest
–Tree of binary classifiers for k classes
Survival risk-group
–Supervised principal components
Feature selection options
–Univariate t/F statistic
–Hierarchical variance option
–Restricted by fold effect
–Univariate classification power
–Recursive feature elimination
–Top-scoring pairs
Validation methods
–Split-sample
–LOOCV
–Repeated k-fold CV
–.632+ bootstrap
64
Selected Features of BRB-ArrayTools
Multivariate permutation tests for class comparison to control the number and proportion of false discoveries with a specified confidence level
–Permits blocking by another variable, pairing of data, averaging of technical replicates
SAM
–Fortran implementation 7X faster than R versions
Extensive annotation for identified genes
–Internal annotation of NetAffx, Source, Gene Ontology, Pathway information
–Links to annotations in genomic databases
Find genes correlated with a quantitative factor while controlling the number or proportion of false discoveries
Find genes correlated with censored survival while controlling the number or proportion of false discoveries
Analysis of variance
65
Selected Features of BRB-ArrayTools
Gene set enrichment analysis
–Gene Ontology groups, signaling pathways, transcription factor targets, micro-RNA putative targets
–Automatic data download from Broad Institute
–KS & LS test statistics for the null hypothesis that the gene set is not enriched
–Hotelling’s and Goeman’s Global test of the null hypothesis that no genes in the set are differentially expressed
–Goeman’s Global test for survival data
Class prediction
–Multiple classifiers
–Complete LOOCV, k-fold CV, repeated k-fold, .632 bootstrap
–Permutation significance of cross-validated error rate
66
Selected Features of BRB-ArrayTools
Survival risk-group prediction
–Supervised principal components with and without clinical covariates
–Cross-validated Kaplan-Meier curves
–Permutation test of cross-validated KM curves
Clustering tools for class discovery with reproducibility statistics on clusters
–Internal access to Eisen’s Cluster and Treeview
Visualization tools including rotating 3D principal components plot exportable to Powerpoint with rotation controls
Extensible via R plug-in feature
Tutorials and datasets
67
BRB-ArrayTools Extensive built-in gene annotation and linkage to gene annotation websites Publicly available for non-commercial use –http://linus.nci.nih.gov/brb
68
BRB-ArrayTools, December 2006
6635 registered users
1938 distinct institutions
68 countries
311 citations