Introduction to Translational and Clinical Bioinformatics
Connecting complex molecular information to clinically relevant decisions using molecular profiles

Constantin F. Aliferis, M.D., Ph.D., FACMI
Director, NYU Center for Health Informatics and Bioinformatics
Informatics Director, NYU Clinical and Translational Science Institute
Director, Molecular Signatures Laboratory
Associate Professor, Department of Pathology
Adjunct Associate Professor in Biostatistics and Biomedical Informatics, Vanderbilt University

Alexander Statnikov, Ph.D.
Director, Computational Causal Discovery Laboratory
Assistant Professor, NYU Center for Health Informatics and Bioinformatics, General Internal Medicine
Key Principles for Developing Robust Molecular Signatures
1. Use a supervised method to select genes relevant to prediction of the phenotype.
2. Use a supervised method to build the molecular signature/profile (or classification model) for prediction of the phenotype.
3. Apply steps 1 and 2 within cross-validation, so that the testing data is used only once, for estimation of predictive accuracy. That is, the testing data is used neither for gene selection nor for building the classification model.
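The three principles can be sketched in code. The following is a minimal, hypothetical Python illustration (stdlib only): toy data, a crude mean-difference gene score, and a nearest-centroid classifier stand in for real methods. The key point is that gene selection and model training see only the training folds, and each test fold is scored exactly once.

```python
import random
import statistics

def select_genes(X, y, k):
    """Supervised gene selection: rank genes by the absolute difference
    of class means (a crude stand-in for a t-test-style score)."""
    scores = []
    for g in range(len(X[0])):
        pos = [row[g] for row, lab in zip(X, y) if lab == 1]
        neg = [row[g] for row, lab in zip(X, y) if lab == 0]
        scores.append(abs(statistics.mean(pos) - statistics.mean(neg)))
    return sorted(range(len(X[0])), key=lambda g: -scores[g])[:k]

def train_centroid(X, y, genes):
    """Supervised classifier: nearest centroid on the selected genes."""
    centroids = {}
    for c in (0, 1):
        rows = [[row[g] for g in genes] for row, lab in zip(X, y) if lab == c]
        centroids[c] = [statistics.mean(col) for col in zip(*rows)]
    return centroids

def predict(centroids, row, genes):
    v = [row[g] for g in genes]
    dist = {c: sum((a - b) ** 2 for a, b in zip(v, centroids[c]))
            for c in centroids}
    return min(dist, key=dist.get)

def cross_validate(X, y, n_folds=5, k=3):
    idx = list(range(len(X)))
    random.Random(0).shuffle(idx)
    folds = [idx[i::n_folds] for i in range(n_folds)]
    accuracies = []
    for test in folds:
        train = [i for i in idx if i not in test]
        Xtr, ytr = [X[i] for i in train], [y[i] for i in train]
        genes = select_genes(Xtr, ytr, k)        # Principle 1: training data only
        model = train_centroid(Xtr, ytr, genes)  # Principle 2: training data only
        hits = sum(predict(model, X[i], genes) == y[i] for i in test)
        accuracies.append(hits / len(test))      # Principle 3: test fold used once
    return sum(accuracies) / len(accuracies)

# Toy data: 20 samples, 10 genes; only the first 3 genes carry signal.
y = [0] * 10 + [1] * 10
rng = random.Random(1)
X = [[rng.gauss(2.0 * lab if g < 3 else 0.0, 1.0) for g in range(10)]
     for lab in y]
accuracy = cross_validate(X, y)
```

The accuracy returned here is an honest estimate: no held-out sample influenced either the gene list or the centroids used to classify it.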
Principle #1: Use a supervised method for gene selection
Select genes to predict patients who will survive > 5 years. [Figure callout: the method will select these 3 genes!]
Select genes to predict patients who will respond to treatment. [Figure callout: the method will select these 3 genes!]
Principle #2: Use a supervised method for classification (development of molecular signatures or profiles)
Input data for two genes. [Figure: samples plotted by expression of Rb and p53.]
We apply a clustering algorithm that finds 2 clusters. [Figure: the same plot with the two discovered clusters.]
Unfortunately, clustering is a non-specific method and falls into the 'one solution fits all' trap when used for prediction. [Figure: the same plot labeled by phenotype, 'Respond to treatment Tx2' vs. 'Do not respond to treatment Tx2'; the discovered clusters need not match these classes.]
Principle #3: Gene selection and training of the classifier should be performed within cross-validation, so that the testing data is used only once, for estimation of predictive accuracy.
Hold-out validation method. [Figure: the data are split once into a training set and a test set.]
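A minimal stdlib-only Python sketch of the hold-out design (the helper name is hypothetical): the data are split a single time, so the accuracy estimate comes from one test set only.

```python
import random

def holdout_split(n_samples, test_fraction=0.3, seed=0):
    """Split sample indices once into a training set and a held-out test set."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    n_test = int(round(n_samples * test_fraction))
    return idx[n_test:], idx[:n_test]  # (train, test)

train_idx, test_idx = holdout_split(20)
```

With small sample sizes, a single split gives a high-variance estimate, which is what motivates cross-validation.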
N-Fold Cross-validation. [Figure: the data are split into N disjoint folds; each fold serves once as the test set while the remaining folds form the training set. Example per-fold accuracies: 0.9, 0.8, 0.8, 0.9, 0.8, 0.9; average accuracy = 0.85.]
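In code, N-fold cross-validation amounts to partitioning the sample indices into N disjoint test folds and averaging the per-fold accuracies. A stdlib-only sketch, reusing the slide's example accuracies:

```python
def n_fold_indices(n_samples, n_folds):
    """Partition sample indices into n_folds disjoint test folds."""
    idx = list(range(n_samples))
    return [idx[i::n_folds] for i in range(n_folds)]

folds = n_fold_indices(12, 6)

# Per-fold accuracies from the slide's example:
fold_accuracies = [0.9, 0.8, 0.8, 0.9, 0.8, 0.9]
average_accuracy = sum(fold_accuracies) / len(fold_accuracies)  # 0.85
```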
Repeated N-Fold Cross-validation. [Figure: the data are re-split into N folds several times; each split yields its own average accuracy (shown: 0.85, 0.8, 0.9), and the final estimate is the average accuracy over the different splits into N folds = 0.87.]
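Repeated cross-validation redoes the split with a different shuffle each time; the per-split averages are then averaged once more. A stdlib-only sketch (hypothetical helper):

```python
import random

def repeated_cv_splits(n_samples, n_folds, n_repeats, seed=0):
    """Generate n_repeats differently shuffled splits of the data into
    n_folds disjoint test folds; accuracies are averaged over all repeats."""
    rng = random.Random(seed)
    splits = []
    for _ in range(n_repeats):
        idx = list(range(n_samples))
        rng.shuffle(idx)
        splits.append([idx[i::n_folds] for i in range(n_folds)])
    return splits

splits = repeated_cv_splits(12, 6, n_repeats=3)
```

Repeating the split reduces the variance that comes from any one particular partitioning of a small dataset.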
Leave-one-out cross-validation. Each test set consists of a single sample. [Figure: the data split so that every sample is held out exactly once.] Leave-one-out cross-validation (LOOCV) is equivalent to N-fold cross-validation where N equals the number of samples in the dataset.
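The equivalence is direct: build N-fold test folds with N set to the number of samples, and every fold contains exactly one sample. A stdlib-only sketch:

```python
def n_fold_indices(n_samples, n_folds):
    """Partition sample indices into n_folds disjoint test folds."""
    idx = list(range(n_samples))
    return [idx[i::n_folds] for i in range(n_folds)]

# LOOCV = N-fold CV with N equal to the number of samples:
loocv_folds = n_fold_indices(8, 8)
```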
Simon’s Experiment
Avoid training and testing on the same subject. [Figure: subjects A-D with four samples each. Unbiased CV: all samples from a given subject are placed together, so the training and test sets contain different subjects. Biased CV: samples from the same subject appear in both the training and the test set.]
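The unbiased design can be enforced in code by assigning folds per subject rather than per sample. A stdlib-only sketch (hypothetical helper), using the slide's four subjects with four samples each:

```python
def subject_wise_folds(subject_of_sample, n_folds=2):
    """Assign whole subjects to folds, so that all samples of a subject
    land in the same fold and no subject appears in both train and test."""
    subjects = sorted(set(subject_of_sample))
    fold_of_subject = {s: i % n_folds for i, s in enumerate(subjects)}
    folds = [[] for _ in range(n_folds)]
    for sample, subj in enumerate(subject_of_sample):
        folds[fold_of_subject[subj]].append(sample)
    return folds

# Subjects A-D, four samples each, as on the slide:
subjects = [s for s in "ABCD" for _ in range(4)]
folds = subject_wise_folds(subjects, n_folds=2)
```

Because whole subjects move between folds, the classifier can never "recognize" a subject it has already seen during training.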
Developing molecular profiles for diagnosis of acute respiratory infections Alexander Statnikov, Nikita Lytkin, Lauren McVoy, Constantin Aliferis
Experimental design of Zaas et al.
[Figure: timeline with three points: baseline (everybody is healthy), baseline + ε (inoculation with virus), and peak (for observing symptoms). The subjects form four groups: asymptomatic at baseline, symptomatic at baseline, asymptomatic at peak, and symptomatic at peak.]
Data analysis tasks
[Figure: the same timeline, with tasks I(a)/(b), II, III, and IV(a)/(b) marked as comparisons between the four patient groups (asymptomatic/symptomatic, at baseline and at peak).]
Data analysis tasks

| Task ID | Class 1 | Class 2 | N of Class 1 | N of Class 2 | Notes | Cross-validation design |
|---|---|---|---|---|---|---|
| I(a) | symptomatic @ peak | asymptomatic @ peak and asymptomatic @ baseline | 26 | 60 | Same patients in asymptomatic groups | Repeated 10-fold CV |
| I(b) | | | | | | Repeated 10-fold CV, ensuring different patients in training and testing sets |
| II | symptomatic @ baseline | asymptomatic @ baseline | 30 | | Different patients | |
| III | | asymptomatic @ peak | | | | |
| IV(a) | | | | | | |
| IV(b) | | | | | | |

Previous results from Zaas et al.:
- AUC of the signature for task I(a) (with inappropriate gene selection using all data), estimated by LOOCV = 0.983.
- When the signature for task I(a) is applied to independent data (but only with one type of virus), its AUC = 1.000.
- They were unable to find a signature to distinguish the groups in tasks II-IV.
Our results in the data of Zaas et al. using linear SVM classifiers

| Task ID | Class 1 | Class 2 | Average AUC without gene selection (N = 22,215) | Average AUC with gene selection by HITON-PC (max-k = 1, alpha = 0.05) | Average number of selected genes | Number of features selected on complete data |
|---|---|---|---|---|---|---|
| I(a) | symptomatic @ peak | asymptomatic @ peak and asymptomatic @ baseline | 0.932 | 0.911 | 10 | 12 |
| I(b) | | | 0.914 | 0.908 | | |
| II | symptomatic @ baseline | asymptomatic @ baseline | 0.445 | 0.497 | 3 | 4 |
| III | | asymptomatic @ peak | 0.817 | 0.847 | 6 | 7 |
| IV(a) | | | 0.824 | 0.833 | 5 | |
| IV(b) | | | 0.893 | 0.903 | 9 | |

[Callout on tasks III and IV: Zaas et al. could not find signal here!]
A closer examination of task II with non-linear classifiers

| Classifier | Average AUC without gene selection (N = 22,215) | Average AUC with gene selection by HITON-PC (max-k = 1, alpha = 0.05) |
|---|---|---|
| SVM (polynomial kernel) | 0.519 | 0.483 |
| SVM (RBF kernel) | 0.369 | 0.474 |
| Random Forest | 0.399 | 0.496 |

That is, no predictive signal can be found!
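AUC near 0.5 is chance level, which is why these numbers indicate no signal. For reference, here is a minimal stdlib implementation of AUC via its Mann-Whitney formulation (the probability that a randomly chosen positive sample is scored above a randomly chosen negative one, ties counting half); the example inputs are toy values:

```python
def auc(labels, scores):
    """AUC = P(score of a random positive > score of a random negative),
    with ties counted as 0.5 (Mann-Whitney formulation)."""
    pos = [s for lab, s in zip(labels, scores) if lab == 1]
    neg = [s for lab, s in zip(labels, scores) if lab == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

perfect = auc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9])  # fully separated classes
chance = auc([0, 1, 0, 1], [0.8, 0.4, 0.2, 0.6])   # no separation
```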
Validation of our results in the independent data from Ramilo et al. for the signature from task I

| Model | AUC in Zaas et al. (biased CV: task I(a) / unbiased CV: task I(b)) | AUC in Ramilo et al. |
|---|---|---|
| Using model with all genes | 0.932 / 0.914 | 0.981 |
| Using model from HITON-PC | 0.911 / 0.908 | 0.954 |
Statistical comparison with the signature of Zaas et al. for task I

| Experiment | Notes | AUC from Zaas et al. | Our AUC | Different? |
|---|---|---|---|---|
| Obtain AUC in Zaas et al. data by cross-validation | Biased cross-validation, as in Zaas et al. | 0.983 | 0.932 [0.870, 0.994] | No |
| | Unbiased cross-validation | | 0.914 [0.842, 0.986] | |
| Independent validation of the model using data from Ramilo et al. | - | 1.000 | 0.981 [0.940, 1.000] | |
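The bracketed ranges above are confidence intervals for the AUC. One common way to obtain such intervals is a percentile bootstrap; the slide does not state which method was used, so the following stdlib-only sketch is an illustration under that assumption, with toy labels and scores:

```python
import random

def auc(labels, scores):
    """Mann-Whitney AUC; ties count as 0.5."""
    pos = [s for lab, s in zip(labels, scores) if lab == 1]
    neg = [s for lab, s in zip(labels, scores) if lab == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_auc_ci(labels, scores, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the AUC."""
    rng = random.Random(seed)
    n = len(labels)
    aucs = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        lab = [labels[i] for i in idx]
        if len(set(lab)) < 2:          # resample must contain both classes
            continue
        aucs.append(auc(lab, [scores[i] for i in idx]))
    aucs.sort()
    lo = aucs[int(alpha / 2 * len(aucs))]
    hi = aucs[int((1 - alpha / 2) * len(aucs)) - 1]
    return lo, hi

# Toy example with imperfect separation:
labels = [0] * 5 + [1] * 5
scores = [0.1, 0.2, 0.3, 0.4, 0.6, 0.5, 0.7, 0.8, 0.9, 1.0]
lo, hi = bootstrap_auc_ci(labels, scores)
```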
Visualization of our signature for task I
Genes in our signature for task I
Genes in our signature for task III
Genes in our signature for task IV
Why could Zaas et al. not find signatures for tasks III and IV? The most plausible explanation so far is that they used genes obtained by a supervised gene selection method for a different classification task!
Next time: we will build and validate molecular signatures for acute respiratory viral infections using the GEMS software.