Master of Science in Data Science


Master of Science in Data Science. Deep Learning in Medical Physics: When small is too small? Intersection of Medical Physics & Deep Learning. Miguel Romero, Yannet Interian PhD, Gilmer Valdes PhD, and Timothy Solberg PhD. *This work was partially supported by the Wicklow AI and Medicine Research Initiative at the Data Institute.

Sections Motivation & Problem Experiments Resources & Techniques Results Discussion

Motivation: Diagnosis of Diabetic Retinopathy

Question: Is it really that easy? Can we do the same with our images and problems? Diagnosis: Diabetic Retinopathy

Performance as a function of the # of samples available

Performance as a function of the # of samples available (Medical Physics)

Performance as a function of the # of samples available. DL < Ridge?! But wait… see the statistician at the end of the room.

Our problem: when is small data too small? Can we say something about those cases?

Our problem: when is small data too small? What about complexity? The amount of noise? But wait… see the statistician at the end of the room.

Sections Motivation & Problem Experiments Resources & Techniques Results Discussion

#1 Set of Tasks: "With real labels". Assess performance for different amounts of data in those ranges, with different models/techniques. Given a chest X-ray image, can we infer which of the 14 diseases the patient has?

#1 Set of Tasks: "With real labels" - binary task: Pneumonia?

#1 Set of Tasks: "With real labels" [plots: performance vs. # of samples, swept from 50 to 2000 and from 50 to 1600]
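The experiment above sweeps the number of training samples (roughly 50 to 2000) and measures performance at each size. A minimal sketch of that learning-curve setup with scikit-learn, using a ridge classifier and synthetic stand-in features (the data, feature count, and sample grid here are illustrative assumptions, not the authors' code):

```python
import numpy as np
from sklearn.linear_model import RidgeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical stand-in for per-image feature vectors and binary labels
X = rng.normal(size=(4000, 79))
y = (X[:, :5].sum(axis=1) + rng.normal(scale=1.0, size=4000) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Learning curve: performance as a function of the # of training samples
aucs = {}
for n in [50, 100, 200, 400, 800, 1600]:
    clf = RidgeClassifier(alpha=1.0).fit(X_tr[:n], y_tr[:n])
    aucs[n] = roc_auc_score(y_te, clf.decision_function(X_te))
```

Repeating the loop with each candidate model (logistic regression, random forest, a CNN, ...) at each `n` gives the performance-vs-samples curves shown in the plots.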

#2 Set of Tasks: "With synthetic labels" (vary complexity & noise)

#2 Set of Tasks: "With synthetic labels" [diagram: Original Dataset [0/1] → Model X (f) → threshold selection & hard-label creation → Synthetic Dataset [0/1] → Model Y (g), trained at varying # of samples]
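The pipeline above fits a model X (f) on the original dataset, thresholds its scores to create hard 0/1 labels, and then trains a model Y (g) on that synthetic dataset. A hedged sketch with scikit-learn (the data, model choices, and median-threshold rule are illustrative assumptions, not necessarily the ones used in the talk):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical stand-in for the original dataset (features + 0/1 labels)
X = rng.normal(size=(2000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2000) > 0).astype(int)

# Model X (f): fit on the original labels, then score every sample
f = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
scores = f.predict_proba(X)[:, 1]

# Threshold selection & hard-label creation: thresholding at the median
# score yields (approximately) balanced synthetic 0/1 labels
threshold = np.median(scores)
y_synthetic = (scores > threshold).astype(int)

# Model Y (g): trained on the synthetic labels instead of the real ones
g = LogisticRegression(max_iter=1000).fit(X, y_synthetic)
```

Because the labels are now generated by a known function f, the task's complexity (choice of f) and noise (label flips added after thresholding) can be controlled directly.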

Sections Motivation & Problem Experiments Resources & Techniques Results Discussion

Data & algorithms

Datasets:
- NIH Chest X-rays → detect 14 diseases
- RSNA Bone Age → detect the age (under/over median)
- MURA → abnormality detection
- ImageNet → image classification

Deep Learning:
- ResNet 18 → 18 layers → 11M parameters
- ResNet 34 → 34 layers → 21M parameters
- DenseNet 121 → 121 layers → 9M parameters
- Custom CNN → 7-9 layers

ML/Statistical models:
- Logistic Regression → 90 parameters
- Ridge, Lasso, Elastic Net
- Random Forest → variable # of parameters

The specific radiomic features used were: grey level co-occurrence matrix (GLCM) features, with 1, 2, 4 and 6 pixel offsets and 0, 45 and 90-degree angles (72 features/image); grey image mean, standard deviation and second, third, fourth and fifth moments (6 features/image); and the image entropy (1 feature/image).

Rough idea: 4 datasets; deep learning models with on the order of millions (≃ M) of parameters vs. ML/statistical models with on the order of hundreds (≃ H) of parameters.

DL techniques:
- Data augmentation
- Transfer learning from ImageNet & MURA:
  - CNN as a feature extractor
  - Fine-tuning the CNN all at once
  - Fine-tuning the CNN with progressive unfreezing + differential learning rates
- Training approaches (some):
  - Regular lr policy
  - One-cycle policy with cosine annealing of lr and momentum
  - Adam optimizer
- Test-time data augmentation (TTA)

But wait… see the statistician at the end of the room.
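Of the training tricks listed, the one-cycle policy with cosine annealing of the learning rate and momentum can be written down explicitly. A generic sketch of that schedule (the warm-up fraction and divisor values are illustrative assumptions, not the talk's settings):

```python
import math

def one_cycle(step, total_steps, lr_max=1e-3, lr_start_div=25.0,
              lr_final_div=1e4, pct_warmup=0.3,
              mom_max=0.95, mom_min=0.85):
    """One-cycle policy: lr warms up from lr_max/lr_start_div to lr_max,
    then anneals down to lr_max/lr_final_div; momentum moves in the
    opposite direction. Both phases use cosine interpolation."""
    warmup_steps = int(total_steps * pct_warmup)

    def cos_interp(a, b, t):
        # Cosine interpolation from a (t=0) to b (t=1)
        return b + (a - b) * (1 + math.cos(math.pi * t)) / 2

    if step < warmup_steps:
        t = step / max(1, warmup_steps)
        lr = cos_interp(lr_max / lr_start_div, lr_max, t)
        mom = cos_interp(mom_max, mom_min, t)
    else:
        t = (step - warmup_steps) / max(1, total_steps - warmup_steps)
        lr = cos_interp(lr_max, lr_max / lr_final_div, t)
        mom = cos_interp(mom_min, mom_max, t)
    return lr, mom
```

"Differential learning rates" then amount to scaling the returned lr by a per-layer-group multiplier (e.g. a smaller lr for early, pretrained layers than for the freshly initialized head).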

Rough idea: there are multiple ways of doing DL (data augmentation, transfer learning and fine-tuning schemes, lr policies, TTA). But wait… see the statistician at the end of the room.

Sections Motivation & Problem Experiments Resources & Techniques Results Discussion

Real labels - Approach 1 [plots: DL vs. Linear Model on the Pneumonia and 14-diseases tasks]

Improves on the previous approach [plots: Pneumonia and 14-diseases tasks]. Compare with the previous slide and look at the left: of the methods we tested, most need a minimum of ~200 samples to perform as well as ridge. After that, they seem to take off like a rocket. Are NNs good with 2000 samples? It depends.

Not that good in perspective [same plots as the previous slide; Pneumonia and 14-diseases tasks].

Sections Motivation & Problem Experiments Resources & Techniques Results Discussion

Discussion: DL ≠ magic. Production-level models require large amounts of rich data. DL seems to improve on traditional approaches "earlier" in the binary balanced task. If trained appropriately, DL improves on traditional approaches with small data, but not data small enough for most medical physics problems.

Thank You, Questions?