Introduction to translational and clinical bioinformatics
Connecting complex molecular information to clinically relevant decisions using molecular profiles

Constantin F. Aliferis, M.D., Ph.D., FACMI
Director, NYU Center for Health Informatics and Bioinformatics; Informatics Director, NYU Clinical and Translational Science Institute; Director, Molecular Signatures Laboratory; Associate Professor, Department of Pathology; Adjunct Associate Professor in Biostatistics and Biomedical Informatics, Vanderbilt University

Alexander Statnikov, Ph.D.
Director, Computational Causal Discovery Laboratory; Assistant Professor, NYU Center for Health Informatics and Bioinformatics, General Internal Medicine

Key Principles for Developing Robust Molecular Signatures
1. Use a supervised method to select genes relevant to prediction of the phenotype.
2. Use a supervised method to build the molecular signature/profile (classification model) for prediction of the phenotype.
3. Steps #1 and #2 should be applied within cross-validation, so that the testing data are used only once, for estimation of predictive accuracy; i.e., the testing data are used neither for gene selection nor for building the classification model.
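These principles can be illustrated with a minimal, self-contained sketch (pure Python on toy data; this is not the authors' actual pipeline, and the scoring rule and threshold classifier are simplifications chosen for brevity). The key point is that gene selection and classifier fitting are both redone inside every fold, so the test fold never influences either step.

```python
import random

random.seed(0)

# Toy dataset: 40 samples x 50 "genes"; only gene 0 carries signal
# (its mean is shifted by 2.0 in class 1), the rest is pure noise.
n_samples, n_genes = 40, 50
labels = [i % 2 for i in range(n_samples)]
data = [[random.gauss(2.0 if (labels[i] == 1 and g == 0) else 0.0, 1.0)
         for g in range(n_genes)] for i in range(n_samples)]

def select_top_gene(train_idx):
    """Principle #1 inside Principle #3: pick the gene with the largest
    class-mean difference, computed on TRAINING samples only."""
    def mean(vals):
        return sum(vals) / len(vals)
    diffs = []
    for g in range(n_genes):
        m1 = mean([data[i][g] for i in train_idx if labels[i] == 1])
        m0 = mean([data[i][g] for i in train_idx if labels[i] == 0])
        diffs.append(abs(m1 - m0))
    return max(range(n_genes), key=lambda g: diffs[g])

def cross_validated_accuracy(n_folds=5):
    """Principles #2 and #3: the classifier (a midpoint threshold on the
    selected gene) is also fit on training samples only; each test fold
    is used exactly once, for accuracy estimation."""
    accuracies = []
    for fold in range(n_folds):
        test_idx = [i for i in range(n_samples) if i % n_folds == fold]
        train_idx = [i for i in range(n_samples) if i % n_folds != fold]
        g = select_top_gene(train_idx)
        class1 = [data[i][g] for i in train_idx if labels[i] == 1]
        class0 = [data[i][g] for i in train_idx if labels[i] == 0]
        m1, m0 = sum(class1) / len(class1), sum(class0) / len(class0)
        threshold, high_class = (m1 + m0) / 2, (1 if m1 > m0 else 0)
        correct = sum((data[i][g] > threshold) == (labels[i] == high_class)
                      for i in test_idx)
        accuracies.append(correct / len(test_idx))
    return sum(accuracies) / len(accuracies)
```

Selecting the gene on all of the data first, and only cross-validating the classifier afterwards, would leak test information into step #1 and inflate the accuracy estimate.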

Principle #1: Use a supervised method for gene selection

Select genes to predict which patients will survive > 5 years. (Figure: gene expression plots; the supervised method will select these 3 genes!)

Select genes to predict which patients will respond to treatment. (Figure: gene expression plots; the supervised method will select these 3 genes!)

Principle #2: Use a supervised method for classification (development of molecular signatures or profiles)

Input data for two genes. (Figure: scatterplot of expression for the two genes, p53 and Rb.)

We apply a clustering algorithm that finds 2 clusters. (Figure: the same p53/Rb scatterplot with the two discovered clusters.)

Unfortunately, clustering is a non-specific method and falls into the 'one solution fits all' trap when used for prediction. (Figure: the p53/Rb scatterplot; the patients who respond to treatment Tx2 and those who do not respond are not separated by the discovered clusters.)

Principle #3: Gene selection and training of the classifier should be performed within cross-validation, so that the testing data are used only once, for estimation of predictive accuracy.

Hold-out validation: the data are split once into a training set and a test set.

N-fold cross-validation: the data are split into N folds; each fold serves once as the test set while the remaining folds are used for training. Example with N = 6: per-fold accuracies of 0.9, 0.8, 0.8, 0.9, 0.8, 0.9 give an average accuracy of 0.85.
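The fold construction and averaging can be sketched as follows (a minimal pure-Python illustration; the per-fold accuracies are the example values from the slide, not real results):

```python
def kfold_indices(n_samples, n_folds):
    """Partition sample indices 0..n_samples-1 into n_folds disjoint test folds."""
    return [list(range(fold, n_samples, n_folds)) for fold in range(n_folds)]

# Each fold serves once as the test set; the remaining samples are for training.
folds = kfold_indices(12, 6)

# Using the slide's example per-fold accuracies:
fold_accuracies = [0.9, 0.8, 0.8, 0.9, 0.8, 0.9]
average_accuracy = sum(fold_accuracies) / len(fold_accuracies)
print(round(average_accuracy, 2))  # 0.85
```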

Repeated N-fold cross-validation: the whole N-fold procedure is repeated over different random splits of the data into N folds, and the results are averaged. The slide's example shows per-split average accuracies of 0.85, 0.8, and 0.9, and reports an average accuracy over the different splits of 0.87.

Leave-one-out cross-validation (LOOCV): each test set consists of a single sample. LOOCV is equivalent to N-fold cross-validation where N equals the number of samples in the dataset.
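The equivalence to N-fold CV with N = number of samples can be made concrete in a few lines (an illustrative pure-Python sketch):

```python
def loocv_splits(n_samples):
    """Leave-one-out CV: each sample is the test set exactly once,
    which is N-fold CV with N = n_samples."""
    for held_out in range(n_samples):
        test = [held_out]
        train = [i for i in range(n_samples) if i != held_out]
        yield train, test

splits = list(loocv_splits(5))
print(len(splits))  # 5 splits, one per sample
```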

Simon’s Experiment

Avoid training and testing on the same subject. (Figure: four subjects A, B, C, and D, with four samples each. In the biased CV design, samples from the same subject appear in both the training and the test set. In the unbiased CV design, all four samples from a given subject are assigned together to either the training set or the test set.)
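Unbiased, subject-aware folding can be sketched as below (pure Python; the subject labels A-D mirror the figure, everything else is illustrative):

```python
from collections import defaultdict

def group_kfold(groups, n_folds):
    """Yield (train, test) index lists such that all samples from one
    subject (group) land entirely in the training or the test set."""
    members = defaultdict(list)
    for idx, g in enumerate(groups):
        members[g].append(idx)
    group_ids = sorted(members)
    for fold in range(n_folds):
        test_groups = set(group_ids[fold::n_folds])
        test = [i for i, g in enumerate(groups) if g in test_groups]
        train = [i for i, g in enumerate(groups) if g not in test_groups]
        yield train, test

# Four subjects with four samples each, as in the figure:
subjects = [s for s in "ABCD" for _ in range(4)]
for train, test in group_kfold(subjects, 2):
    # No subject contributes samples to both sides of the split.
    assert {subjects[i] for i in train}.isdisjoint({subjects[i] for i in test})
```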

Developing molecular profiles for diagnosis of acute respiratory infections Alexander Statnikov, Nikita Lytkin, Lauren McVoy, Constantin Aliferis

Experimental design of Zaas et al. (Figure: a timeline with three points: Baseline, where everybody is healthy; Baseline + ε, when the virus is injected; and Peak, when symptoms are observed. Subjects who develop symptoms form the symptomatic group and those who do not form the asymptomatic group; each group is also characterized at baseline, giving symptomatic-at-baseline and asymptomatic-at-baseline samples.)

Data analysis tasks. (Figure: the same timeline, annotated with the classification tasks. Task I(a)/(b) compares symptomatic at peak against the asymptomatic samples; Task II compares the symptomatic and asymptomatic groups at baseline; Tasks III and IV(a)/(b) involve the asymptomatic-at-peak samples and the remaining group/time-point combinations, as detailed in the task table.)

Data analysis tasks:

| Task ID | Class 1 | Class 2 | N of Class 1 | N of Class 2 | Notes | Cross-validation design |
|---|---|---|---|---|---|---|
| I(a) | symptomatic @ peak | asymptomatic @ peak and asymptomatic @ baseline | 26 | 60 | same patients in asymptomatic groups | repeated 10-fold CV |
| I(b) | symptomatic @ peak | asymptomatic @ peak and asymptomatic @ baseline | 26 | 60 | same patients in asymptomatic groups | repeated 10-fold CV, ensuring different patients in the training and testing sets |
| II | symptomatic @ baseline | asymptomatic @ baseline | 30 | | different patients | |
| III | | asymptomatic @ peak | | | | |
| IV(a) | | | | | | |
| IV(b) | | | | | | |

Previous results from Zaas et al.:
- The AUC of the signature for task I(a) (with inappropriate gene selection using all of the data), estimated by LOOCV, was 0.983.
- When the signature for task I(a) was applied to independent data (but with only one type of virus), its AUC was 1.000.
- Zaas et al. were unable to find a signature that distinguishes the groups in tasks II-IV.

Our results in the data of Zaas et al. using linear SVM classifiers:

| Task ID | Class 1 | Class 2 | Average AUC without gene selection (N = 22,215 genes) | Average AUC with gene selection by HITON-PC (max-k = 1, alpha = 0.05) | Average number of selected genes | Number of features selected on complete data |
|---|---|---|---|---|---|---|
| I(a) | symptomatic @ peak | asymptomatic @ peak and asymptomatic @ baseline | 0.932 | 0.911 | 10 | 12 |
| I(b) | | | 0.914 | 0.908 | | |
| II | symptomatic @ baseline | asymptomatic @ baseline | 0.445 | 0.497 | 3 | 4 |
| III | | asymptomatic @ peak | 0.817 | 0.847 | 6 | 7 |
| IV(a) | | | 0.824 | 0.833 | 5 | |
| IV(b) | | | 0.893 | 0.903 | 9 | |

Note: Zaas et al. could not find signal in tasks II-IV!

A closer examination of task II with non-linear classifiers:

| Classifier | Average AUC without gene selection (N = 22,215 genes) | Average AUC with gene selection by HITON-PC (max-k = 1, alpha = 0.05) |
|---|---|---|
| SVM, polynomial kernel | 0.519 | 0.483 |
| SVM, RBF kernel | 0.369 | 0.474 |
| Random Forest | 0.399 | 0.496 |

I.e., no predictive signal can be found in task II.
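AUC near 0.5 is exactly what "no predictive signal" means: the classifier's scores rank positive samples no better than chance. A minimal rank-based AUC sketch (pure Python, illustrative only, not the evaluation code used in the study):

```python
def auc(labels, scores):
    """AUC = probability that a randomly chosen positive sample is scored
    higher than a randomly chosen negative sample (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfectly separating scores give AUC = 1.0; scores that carry no class
# information hover around 0.5, as in the task II results above.
print(auc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9]))  # 1.0
print(auc([0, 1, 0, 1], [0.5, 0.5, 0.5, 0.5]))  # 0.5
```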

Validation of our results in the independent data from Ramilo et al. for the signature from task I. AUCs in the Zaas et al. data are reported as biased CV (task I(a)) / unbiased CV (task I(b)):

| Model | AUC in Zaas et al. (biased CV, I(a) / unbiased CV, I(b)) | AUC in Ramilo et al. |
|---|---|---|
| Using model with all genes | 0.932 / 0.914 | 0.981 |
| Using model from HITON-PC | 0.911 / 0.908 | 0.954 |

Statistical comparison with the signature of Zaas et al. for task I:

| Experiment | Notes | AUC from Zaas et al. | Our AUC [95% CI] | Different? |
|---|---|---|---|---|
| Obtain AUC in Zaas et al. data by cross-validation | biased cross-validation, as in Zaas et al. | 0.983 | 0.932 [0.870 - 0.994] | No |
| | unbiased cross-validation | | 0.914 [0.842 - 0.986] | |
| Independent validation of the model using data from Ramilo et al. | - | 1.000 | 0.981 [0.940 - 1.000] | |

Visualization of our signature for task I

Genes in our signature for task I

Genes in our signature for task III

Genes in our signature for task IV

Why could Zaas et al. not find signatures for tasks III and IV? The most plausible explanation so far is that they used genes obtained from a supervised gene selection method that was applied to a different classification task!

Next time: we will build and validate molecular signatures for acute respiratory viral infections using the GEMS software.