Convolutional neural network based Alzheimer's disease classification from magnetic resonance brain images. Rachna Jain, Nikita Jain, Akshay Aggarwal (Computer Science and Engineering, Bharati Vidyapeeth's College of Engineering, New Delhi, India) and D. Jude Hemanth (Department of ECE, Karunya University, Coimbatore, India)

Overview: Introduction, Methods, Results, Future Extension

Alzheimer's disease and dementia. Dementia is a general term for a set of symptoms associated with a decline in memory or other thinking skills severe enough to reduce a person's ability to perform ordinary activities; in the late stage, patients lose the ability to carry out ordinary activities entirely. Alzheimer's disease (AD) is the most common form of dementia, accounting for 60%–80% of cases (other forms make up the remaining share), and has no cure. Besides AD, the study considers two other groups: cognitively normal (CN) subjects and those with mild cognitive impairment (MCI), in which memory loss is mild.

Problems and solution. AD has no cure, so early diagnosis is needed. Diagnosis involves various medical tests, and it is exhausting to manually compare and analyze the resulting data. The current solution is CNNs, which have performed well in the medical field for tasks such as visual object recognition, detection, and segmentation.

Limitations of the current CNN solution. Proper training of deep neural networks requires a huge amount of annotated data, which is a problem especially in medical imaging. Training such architectures also requires a huge amount of computational resources. Finally, deep learning requires careful and tedious tuning of hyper-parameters; without optimal tuning the network can overfit or underfit, resulting in overall poor performance. Improvement: transfer learning, which takes much less time (a few hours rather than weeks).

New approach. Transfer learning is a machine learning method where a model developed for one task is reused as the starting point for a model on a second task. Training a CNN from scratch takes weeks, whereas fine-tuning a pretrained CNN takes much less time, typically a few hours. In practice, very few people train an entire CNN from scratch (with random initialization), because it is relatively rare to have a dataset of sufficient size. In this paper, VGG16 is taken as the base model for applying transfer learning. VGG16 is a 16-layer network built by Oxford's Visual Geometry Group (VGG) that participated in the ILSVRC 2014 ImageNet competition; it was one of the first architectures to explore network depth by pushing to 16 layers while using small (3 × 3) convolution filters.

Methodology:
- Building components of CNN
- Data collection
- Alzheimer classification mathematical model (PFSECTL)
- Selecting the most informative MRI slices based on entropy
- Creating training and test sets
- Classification using transfer learning

Building components of CNN. The convolutional layer is the core building block of a CNN: each output value is the sum of the element-wise products between the filter and the image patch it currently covers, e.g. 2 + (−2 + 1 − 2) + (1 − 2 − 2) + 1 = 2 − 3 − 3 + 1 = −3.
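To make that per-position computation concrete, here is a minimal NumPy sketch of a stride-1, no-padding convolution (as in CNN layers, without kernel flipping); the patch values are illustrative, chosen only so the single output reproduces the −3 sum above.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide the kernel over the image (stride 1, no padding) and sum
    the element-wise products at each position."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Hypothetical 3x3 patch whose values sum to -3, with an all-ones kernel,
# mirroring the 2 - 3 - 3 + 1 = -3 arithmetic on the slide.
patch = np.array([[ 2, -2,  1],
                  [-2,  1, -2],
                  [-2,  1,  0]])
print(conv2d_valid(patch, np.ones((3, 3))))  # [[-3.]]
```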

Max-pooling. Max-pooling is an aggregation operation that extracts the maximum value from each window of the feature map; its main purpose is to reduce the spatial size. Average pooling instead keeps the mean of each window.
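A minimal sketch of 2×2 max-pooling in NumPy; replacing `.max()` with `.mean()` would give average pooling instead.

```python
import numpy as np

def max_pool2d(x, size=2, stride=2):
    """Keep the maximum of each size x size window, shrinking the map."""
    oh = (x.shape[0] - size) // stride + 1
    ow = (x.shape[1] - size) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = x[i * stride:i * stride + size,
                          j * stride:j * stride + size].max()
    return out

fmap = np.array([[1, 3, 2, 0],
                 [4, 6, 5, 1],
                 [7, 2, 9, 8],
                 [0, 1, 3, 4]])
print(max_pool2d(fmap))  # [[6. 5.]
                         #  [7. 9.]]
```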

Dropout regularization. Dropout randomly drops out neurons (both hidden and visible) in a neural network during training; it mainly tackles the problem of overfitting.
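A minimal sketch of dropout, assuming the usual "inverted" formulation in which surviving activations are rescaled during training so nothing needs to change at test time:

```python
import numpy as np

def dropout(activations, rate=0.5, training=True, seed=None):
    """Inverted dropout: during training, zero a fraction `rate` of the
    units at random and rescale the survivors by 1/(1 - rate); at test
    time the input passes through unchanged."""
    if not training:
        return activations
    rng = np.random.default_rng(seed)
    mask = rng.random(activations.shape) >= rate
    return activations * mask / (1.0 - rate)

x = np.ones((2, 4))
print(dropout(x, rate=0.5, seed=0))  # roughly half the entries become 0
```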

Non-linearity layers (also called activation functions): the rectified linear unit (ReLU). ReLU converges about six times faster than the tanh and sigmoid activation functions. It converts all negative inputs to zero, so those neurons do not activate, which makes it computationally efficient: few neurons are active at any one time.
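ReLU itself is one line; a sketch:

```python
import numpy as np

def relu(x):
    """Rectified linear unit: negatives become 0, positives pass through."""
    return np.maximum(0, x)

print(relu(np.array([-2.0, -0.5, 0.0, 1.5])))  # [0.  0.  0.  1.5]
```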

Fully-connected layers. Neurons in a fully-connected layer have full connections to all activations in the previous layer. Fully-connected layers are usually appended after a combination of convolution, max-pooling, and dropout layers in a CNN.

Data collection. Data used was obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI). The ADNI was launched in 2003 as a public-private partnership, led by Principal Investigator Michael W. Weiner, MD. T1-weighted sMRI data of 150 subjects (50 AD, 50 CN, 50 MCI) was selected for the classification task.

Data pre-processing using FreeSurfer. The FreeSurfer image analysis suite, which is documented and freely available for download online, is used for cortical reconstruction and volumetric segmentation, removing unnecessary details of the brain MR images that might otherwise cause poor training of the classification model. There are five processing steps.

Data pre-processing using FreeSurfer:
- Motion correction and conform: when there are multiple source volumes, this process corrects for minor motion between them and averages them together. Let inVol(I)_1, inVol(I)_2, …, inVol(I)_n be the input volumes of image I; after motion correction the output volume is outVol(I) = (inVol(I)_1 + inVol(I)_2 + ⋯ + inVol(I)_n) / n.
- Non-uniform intensity normalization (NU, also called N3): corrects MR data by removing non-uniformity in image intensity.
- Talairach transform computation: the pixel coordinates are converted to Talairach coordinates.

Data pre-processing using FreeSurfer (continued):
- Intensity normalization: corrects for fluctuations in intensity by scaling the intensities of all voxels so that the mean intensity of white matter is 110. White matter pathways play an important role in the human brain by connecting spatially separated areas of the CNS and enabling rapid and efficient information exchange.
- Skull stripping: removes the skull from the normalized image.

Selecting the most informative MRI slices based on entropy. After pre-processing, each 3D image (256 × 256 × 256) is split into 256 2D slices of 256 × 256, and the 32 slices with the highest entropy are selected as the most informative.
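The slide does not give the exact entropy formula; a plausible sketch using the Shannon entropy of each slice's intensity histogram (the histogram binning and slicing axis are assumptions):

```python
import numpy as np

def slice_entropy(slice_2d, bins=256):
    """Shannon entropy (in bits) of a slice's intensity histogram."""
    hist, _ = np.histogram(slice_2d, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins to avoid log(0)
    return -np.sum(p * np.log2(p))

def top_k_slices(volume, k=32, axis=0):
    """Rank the 2D slices of a 3D volume by entropy; keep the k highest."""
    slices = [np.take(volume, i, axis=axis) for i in range(volume.shape[axis])]
    order = np.argsort([slice_entropy(s) for s in slices])[::-1]
    return [slices[i] for i in order[:k]]

# Stand-in volume (the paper's volumes are 256 x 256 x 256).
vol = np.random.rand(64, 64, 64)
selected = top_k_slices(vol, k=32)
```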

Classification using transfer learning. VGG16 pretrained on the ImageNet dataset is taken as the base model. The fully-connected layers from the base model are removed, and new fully-connected layers are added at the end: the first consists of 256 neurons, and the second is a dropout layer with dropout ratio 0.5, which means half of the neurons output 0 at any instance.
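A minimal Keras sketch of the described setup, with VGG16 as an ImageNet-pretrained base, a new 256-neuron dense layer, 0.5 dropout, and a 3-way softmax head for AD/CN/MCI; the input size and the choice to freeze the base are assumptions, not stated on the slide:

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# Convolutional base pretrained on ImageNet; the original fully-connected
# top is dropped via include_top=False.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # assumption: train only the new head

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),   # new 256-neuron layer
    layers.Dropout(0.5),                    # half the units output 0 per step
    layers.Dense(3, activation="softmax"),  # AD vs CN vs MCI
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```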


Creating training and test sets. The balanced dataset of 4800 images (150 subjects × 32 slices each) is shuffled and split into training and test sets with a split ratio of 80:20.
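As a sketch, the shuffle and 80:20 split could be done with scikit-learn as below; the stand-in feature array, label encoding, and stratification are assumptions for illustration:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Stand-in features for the 4800 slice images (balanced: 1600 per class).
X = np.random.rand(4800, 32).astype("float32")
y = np.repeat([0, 1, 2], 1600)  # 0 = AD, 1 = CN, 2 = MCI

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, shuffle=True, stratify=y, random_state=42)
print(len(X_train), len(X_test))  # 3840 960
```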

Results. Accuracy on the test (validation) set came out to be 95.73%, with accuracies of 99.14%, 99.30%, and 99.22% for the AD vs CN, AD vs MCI, and MCI vs CN classifications respectively.

Results. A false negative (FN) indicates that a condition does not hold while in fact it does, e.g. a test reporting that a woman is not pregnant when she actually is. A false positive (FP) indicates that a given condition exists when in fact it does not.
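For reference, a small sketch showing how such confusion-matrix counts translate into the standard metrics (the counts below are hypothetical, not the paper's):

```python
def binary_metrics(tp, fp, fn, tn):
    """Standard metrics from binary confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    sensitivity = tp / (tp + fn)  # true-positive rate (recall)
    specificity = tn / (tn + fp)  # true-negative rate
    return accuracy, sensitivity, specificity

# Hypothetical counts for one binary task (e.g. AD vs CN):
print(binary_metrics(tp=475, fp=4, fn=5, tn=476))
```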


Future perspectives. One could try other neural networks, such as Inception networks, residual networks, and more recent state-of-the-art architectures, as the base model for the classifier. One could also try to achieve similar or better results while skipping pre-processing steps such as skull stripping and intensity normalization.