CLASSIFICATION OF TUMOR HISTOPATHOLOGY VIA SPARSE FEATURE LEARNING
Nandita M. Nayak1, Hang Chang1, Alexander Borowsky2, Paul Spellman3 and Bahram Parvin1
1Life Sciences Division, Lawrence Berkeley National Laboratory; 2Center for Comparative Medicine, University of California, Davis; 3Center for Spatial Systems Biomedicine, Oregon Health & Science University, Portland

Introduction
Objective: To evaluate tumor composition in terms of multiparametric morphometric indices and link them to the clinical data. Decompose histology sections into their components (e.g., stroma, tumor) and test nuclear-compartment-specific morphometric indices against outcome.

Major Challenges and Approach
Challenges: The method requires a large cohort of histology sections, which may be generated at different labs with a significant amount of technical variation, and it is expensive to generate a large amount of annotated training data.
Approach: Learn a set of automated features from unlabeled data, then train the learned features against an annotated dataset to classify a collection of small patches in each image.
Fig: Example images of tumor samples in GBM showing diversity in sample preparation.

Proposed Model
a) Learn a dictionary D using a generative model based on an extended version of the restricted Boltzmann machine (RBM), with two stages of feedforward (encoding) and feedback (decoding). A second layer of pooling makes the system robust to translation.
Fig: Diagram of the proposed method.
Fig: (a) Architecture of the restricted Boltzmann machine (RBM); (b) illustration of the 2-layer recognition framework, including the encoder, decoder, and pooling.

Unsupervised Feature Learning
Vectorized image patches X are input to generate an overcomplete set of k basis functions D and a sparse representation Z for each input. An encoder W is also learnt.
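At the level of shapes, the encode/decode round trip can be sketched as follows (a minimal numpy illustration: the dictionary and encoder here are random rather than learned, and color channels are omitted for simplicity; the patch size and number of bases come from the Experimental Design section):

```python
import numpy as np

rng = np.random.default_rng(0)

# Shapes from the poster: vectorized 25x25 patches and an overcomplete
# dictionary of k = 1000 bases, so k > n.
n, k = 25 * 25, 1000

X = rng.random((n, 50))                  # 50 vectorized patches in [0, 1]
D = rng.standard_normal((n, k)) * 0.01   # decoder / dictionary of bases, n x k
W = rng.standard_normal((k, n)) * 0.01   # learned encoder, k x n

Z = W @ X        # code for each patch: an overcomplete k-dim representation
X_hat = D @ Z    # feedback (decoding) pass: reconstruct the patches
recon_error = np.sum((X_hat - X) ** 2)   # the ||DZ - X||^2 term of F(X)
```

Because k > n, the representation Z is overcomplete; the sparsity penalty in the objective below is what keeps the code from being degenerate.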
Optimization function:
F(X) = ‖WX − Z‖² + λ‖Z‖₁ + ‖DZ − X‖², where X ∈ ℝⁿ, D ∈ ℝⁿˣᵏ, W ∈ ℝᵏˣⁿ, Z ∈ ℝᵏ.
λ is a parameter that controls the sparsity of the solution and is chosen by cross-validation. Compute the optimal D, Z, and W given X by:
1. Randomly initializing D and W.
2. Fixing D and W, then minimizing F(X) with respect to Z via gradient descent.
3. Fixing Z, then estimating D and W via stochastic gradient descent.

Experimental Design
Experiments were conducted on two datasets, derived from (i) glioblastoma multiforme (GBM) and (ii) kidney clear cell renal carcinoma (KIRC). Each image is 1K-by-1K pixels, cropped from whole slide images (WSI); 1000 bases were constructed for each dataset.
GBM: Necrosis has been shown to be predictive of outcome; we curate three classes that correspond to necrosis, transition to necrosis (an intermediate step), and tumor. The dataset contains 1400 images of samples scanned with a 20X objective. Feature learning was performed using 50 randomly selected patches of size 25 x 25 from each image. Max pooling was performed on 100 x 100 patches in a 4 x 4 neighborhood. The patches were downsampled by a factor of 2 and normalized to the range 0-1 in the color space.
KIRC: Tumor type is the best prognostic indicator of outcome, and most sections contain mixed grading of clear cell carcinoma (CCC) and granular tumors. The histology is typically complex, since it contains components of stroma, blood, and cystic space. We opted for the strategy of labeling each image patch as normal, granular tumor type, CCC, stroma, or other. The dataset contains 2,500 images of samples scanned with a 40X objective. The patches were downsampled by a factor of 4 and normalized to the range 0-1 in the color space.
Fig: (a) A heterogeneous tissue section with "necrosis transition" on the left and tumor on the right, and (b) its reconstruction after encoding and decoding.

Classification and Reconstruction
Classification: The labeled training data is divided into non-overlapping image patches. Codes for these patches are computed as Z = WX.
Codes are pooled over a local neighborhood. The pooled codes are modeled for the different tissue types using a multiclass regularized support vector machine (SVM), implemented using LIBSVM.
Reconstruction: The original image can be reconstructed from the codes using the decoder, X̂ = DZ. The reconstruction error is a measure of how well the computed bases represent the data.
Fig: Representative set of computed basis functions D for (a) the KIRC dataset and (b) the GBM dataset.

Classification results for GBM and KIRC
For GBM, a total of 12,000, 8,000, and 16,000 patches were obtained for necrosis, transition to necrosis, and tumor, respectively; 4,000 patches were randomly selected from each class for training. An overall accuracy of 84.3% was obtained.
For KIRC, from a total of 10,000 patches for CCC, 16,000 patches for normal and stromal tissues, and 6,500 patches for tumor and others, we used 3,250 patches from each class for training. The overall classification accuracy was 80.9%.

Conclusion
A method for automated feature learning from unlabeled images has been proposed for classifying distinct morphometric regions. Automated feature learning provides a rich representation when a cohort of WSI has to be processed in the context of batch effects. It is a generative model that reconstructs the original image from the sparse representation of an autoencoder. The system has been tested on two tumor types from the TCGA archive. The proposed approach will enable identifying morphometric indices that are predictive of outcome.

Classification of Heterogeneous Tissue Sections
The preliminary performance of the computational protocol for labeling tumor composition was tested on several GBM sections. Whole slide sections of size 20,000 x 20,000 pixels were selected, and each 100-by-100-pixel patch was classified against the learned model. Classification has been consistent with pathological evaluation.
Fig: Two examples of classification results of heterogeneous GBM tissue sections. The left and right images correspond to the original and classification results, respectively. Color coding is black (tumor), pink (necrosis), and green (transition to necrosis).
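The encode, pool, and classify pipeline described under Classification and Reconstruction can be sketched as follows (a minimal illustration on synthetic data: scikit-learn's LinearSVC serves as a convenient stand-in for the multiclass LIBSVM classifier, the encoder is random rather than learned, and the sizes and two "tissue types" are assumptions):

```python
import numpy as np
from sklearn.svm import LinearSVC  # stand-in for the LIBSVM multiclass SVM

rng = np.random.default_rng(0)

# A random encoder stands in for the learned W; sizes are illustrative.
n, k = 625, 100
W = rng.standard_normal((k, n)) * 0.1

def encode_and_pool(patches):
    """Compute codes Z = W X for the patches of one region, then max-pool
    the codes over that local neighborhood into one feature vector."""
    Z = W @ patches          # (k, num_patches)
    return Z.max(axis=1)     # pooled k-dim code for the region

# Two synthetic "tissue types" with different intensity statistics.
regions = ([rng.random((n, 16)) for _ in range(20)]
           + [rng.random((n, 16)) + 0.5 for _ in range(20)])
labels = np.array([0] * 20 + [1] * 20)
features = np.stack([encode_and_pool(r) for r in regions])

clf = LinearSVC(C=1.0).fit(features, labels)
train_acc = clf.score(features, labels)
```

In the poster's setting the same pipeline is run with the learned encoder over the three GBM classes (or five KIRC classes), with pooled codes computed per 100 x 100 patch.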