Deep CNN for Breast Cancer Histology Image Analysis


Deep CNN for Breast Cancer Histology Image Analysis Honghao Zheng

Background In recent decades, cancer incidence has risen, and breast cancer is one of the most commonly diagnosed cancers in women. Advances in machine learning have made it an efficient aid in the medical field, and CNNs are currently the most popular models in medical AI.

Dataset 400 H&E (hematoxylin and eosin) stained images (2048 × 1536 pixels). All images were digitized under the same acquisition conditions, with a magnification of 200× and a pixel size of 0.42 µm × 0.42 µm. The images are labeled into 4 classes: normal (A), benign (B), in-situ carcinoma (C), invasive carcinoma (D).
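A minimal loading sketch for such a dataset might look like the following; the directory layout, folder names, and .tif extension are assumptions for illustration, not details given in the slides.

```python
# Hedged sketch: load the 400 H&E images and their 4-class labels.
from pathlib import Path
import numpy as np
from PIL import Image

CLASSES = {"Normal": 0, "Benign": 1, "InSitu": 2, "Invasive": 3}  # A, B, C, D

def load_dataset(root="breast_histology"):            # hypothetical folder layout
    images, labels = [], []
    for cls_name, cls_idx in CLASSES.items():
        for path in sorted(Path(root, cls_name).glob("*.tif")):
            images.append(np.asarray(Image.open(path)))  # 2048 x 1536 x 3 RGB array
            labels.append(cls_idx)
    return images, np.array(labels)
```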

How to solve it? Overfitting problem: if we train a deep CNN architecture such as VGG, Inception, or ResNet, each consisting of millions of parameters, on only 400 images, it will overfit. Fine-tuning alone does not work very well here. How can we solve it?

New Idea 1st stage: unsupervised feature extraction with the deep CNN VGG-16 pre-trained on the general ImageNet dataset. Before training, sparse coding of the original images is applied as a pre-processing step. 2nd stage: use LightGBM, an implementation of gradient boosted trees, for supervised classification.
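As a rough sketch of the first stage, VGG-16 pre-trained on ImageNet can be used as a fixed feature extractor (no fine-tuning). The pooling mode and input size below are illustrative choices, not the presenter's confirmed settings.

```python
# Hedged sketch: pre-trained VGG-16 as a fixed descriptor extractor.
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input

encoder = VGG16(weights="imagenet", include_top=False, pooling="avg")

def encode_patches(patches):
    """patches: array of shape (n, 224, 224, 3), RGB, values in 0-255."""
    x = preprocess_input(patches.astype("float32"))
    return encoder.predict(x, verbose=0)   # (n, 512) descriptors per patch
```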

Pre-processing and encoding
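One hedged sketch of the patch-cropping part of pre-processing: sample fixed-size crops from each 2048 × 1536 image before encoding. The crop size here is an assumption; the 50 crops per image matches the descriptor count mentioned later for the test images.

```python
# Hedged sketch: random fixed-size crops from one whole slide image.
import numpy as np

def random_crops(image, crop_size=400, n_crops=50, rng=None):
    """image: (H, W, 3) array; returns (n_crops, crop_size, crop_size, 3)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    h, w = image.shape[:2]
    crops = []
    for _ in range(n_crops):
        y = rng.integers(0, h - crop_size + 1)
        x = rng.integers(0, w - crop_size + 1)
        crops.append(image[y:y + crop_size, x:x + crop_size])
    return np.stack(crops)
```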

Unsupervised Training

LightGBM: a gradient boosting framework. Faster training speed and higher efficiency. Lower memory usage. Better accuracy. Support for parallel and GPU learning. Capable of handling large-scale data.
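A minimal LightGBM training sketch for the supervised stage, assuming the CNN descriptors and labels are already in memory; the hyper-parameters are illustrative, not the presenter's settings.

```python
# Hedged sketch: gradient boosted trees on the CNN descriptors.
import lightgbm as lgb

def train_lgbm(X_train, y_train, X_valid, y_valid):
    params = {
        "objective": "multiclass",
        "num_class": 4,                  # normal, benign, in-situ, invasive
        "learning_rate": 0.05,
        "num_leaves": 31,
        "metric": "multi_logloss",
    }
    train_set = lgb.Dataset(X_train, label=y_train)
    valid_set = lgb.Dataset(X_valid, label=y_valid, reference=train_set)
    return lgb.train(params, train_set, num_boost_round=500,
                     valid_sets=[valid_set])
```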

Training in LightGBM 10 stratified folds to preserve the class distribution. Augmentation enlarges the dataset by a factor of 300. To prevent leakage, all descriptors from one image have to stay in the same fold. For each combination of encoder, crop size, and scale we train 10 gradient boosting models with 10-fold cross-validation. For the test data, we similarly extract 50 descriptors per image and use them with all models trained for the particular patch size and encoder.
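One way to implement the leakage-safe 10-fold split with scikit-learn: stratify at the image level, then let every descriptor inherit its source image's fold so augmented crops never straddle train and validation. The expansion below assumes descriptors are stored grouped by image, 50 per image.

```python
# Hedged sketch: image-level stratified folds expanded to descriptor level.
import numpy as np
from sklearn.model_selection import StratifiedKFold

def image_level_folds(image_labels, descriptors_per_image=50, n_splits=10):
    """image_labels: (n_images,) class id of each original image."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    folds = np.empty(len(image_labels), dtype=int)
    dummy_X = np.zeros((len(image_labels), 1))
    for fold_id, (_, val_idx) in enumerate(skf.split(dummy_X, image_labels)):
        folds[val_idx] = fold_id
    # Every descriptor of an image gets that image's fold id.
    return np.repeat(folds, descriptors_per_image)
```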

Result 10-fold stratified cross-validation. Confusion matrix, without normalization; vertical axis: ground truth, horizontal axis: predictions. Left: non-carcinoma vs. carcinoma classification, ROC curve; 96.5% sensitivity at the high-sensitivity operating point (green).
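A sketch of how these evaluation artifacts could be computed with scikit-learn, assuming the cross-validated predictions and a carcinoma probability score are available; the variable names are placeholders.

```python
# Hedged sketch: 4-class confusion matrix and non-carcinoma vs. carcinoma ROC.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_curve, auc

def evaluate(y_true, y_pred, carcinoma_score):
    """y_true, y_pred: 4-class labels; carcinoma_score: P(in-situ or invasive)."""
    cm = confusion_matrix(y_true, y_pred)            # rows: ground truth, cols: predictions
    y_binary = np.isin(y_true, [2, 3]).astype(int)   # classes C and D count as carcinoma
    fpr, tpr, _ = roc_curve(y_binary, carcinoma_score)
    return cm, fpr, tpr, auc(fpr, tpr)
```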

Result

Thank you