Visual Vocabulary Construction for Mining Biomedical Images Arnab Bhattacharya, Vebjorn Ljosa, Jia-Yu Pan Presented by Li An, CIS, TU

Introduction
Given a large collection of biomedical images of several conditions and treatments, how do we describe the important regions in the images, or the differences between different experimental conditions?
Figure 1. (a) A normal retina and (b) a retina after 3 days of detachment. The retinas were labeled with antibodies to rhodopsin (red) and glial fibrillary acidic protein (GFAP, green).

Introduction
Build a system that will automatically detect and highlight patterns differentiating image classes, after processing hundreds or thousands of pictures.
Problem 1 – Summarize an image automatically.
Problem 2 – Identify patterns that distinguish image classes.
Problem 3 – Highlight interesting regions in an image.

Related work
Textual vocabulary and visual vocabulary. Previous work applied transformations to image pixels.
– The high dimensionality of pixels limited these methods to small images.
– Clustering algorithms and transformation-based methods, such as k-means, PCA, wavelet transforms, and ICA, also suffer from orientation and registration issues.
– One way to deal with this curse of dimensionality is to extract a small number of features from the image pixels.

Proposed Method
We propose to automatically develop a visual vocabulary by breaking images into n-by-n tiles and deriving key tiles (ViVos) for each image and condition. This method avoids issues such as registration and the curse of dimensionality.
Tiles – The idea of tiles is introduced for visual term generation.
ViVos – A novel approach to group tiles into visual terms. We call our automatically extracted visual terms ViVos.

ViVo Definition
A ViVo is defined by either the positive or the negative direction of an ICA basis vector, and represents a characteristic pattern in image tiles. The ViVo-vector of a tile t̃_{i,j} is a vector v(t̃_{i,j}) = [f_1, ..., f_m], where f_i indicates the contribution of the i-th ViVo in describing the tile. The ViVo-vector of an image is defined as the sum of the ViVo-vectors of all its tiles.

ViVo Construction

The first step partitions the images into non-overlapping tiles.
– The optimal tile size depends on the nature of the images.
– The tiles must be large enough to capture the characteristic textures of the images.
– On the other hand, they cannot be too large.
– For the retinal images in Figure 1(a), we use a tile size of 64-by-64 pixels, so each retinal image has 8×12 tiles.
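For concreteness, a minimal sketch of this tiling step, assuming a NumPy image array; tile_image is an illustrative helper name, not code from the paper, and tile_size=64 only matches the retinal images described above.

```python
# Minimal sketch: partition an image (H x W x C) into non-overlapping tiles.
import numpy as np

def tile_image(image: np.ndarray, tile_size: int = 64):
    tiles = []
    h, w = image.shape[:2]
    for y in range(0, h - tile_size + 1, tile_size):
        for x in range(0, w - tile_size + 1, tile_size):
            tiles.append(image[y:y + tile_size, x:x + tile_size])
    return tiles
```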

ViVo Construction
In the second step, a feature vector is extracted from each tile, representing its image content.
– MPEG-7 features: color structure descriptor (CSD), color layout descriptor (CLD), homogeneous texture descriptor (HTD).
– The vector representing a tile using the features of, say, CSD is called a CSD tile-vector.
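Extracting the MPEG-7 descriptors typically requires a dedicated library; purely as a hedged stand-in to illustrate the tile-to-feature-vector mapping, the sketch below uses a simple per-channel color histogram. extract_tile_features is an illustrative name and is not the paper's feature extractor.

```python
# Illustrative stand-in for the MPEG-7 descriptors: a per-channel color
# histogram per tile. Real CSD/CLD/HTD features would replace this.
import numpy as np

def extract_tile_features(tile: np.ndarray, bins: int = 16) -> np.ndarray:
    """Return a d-dim feature vector for one tile (H x W x C, values 0..255)."""
    hists = [np.histogram(tile[..., c], bins=bins, range=(0, 256))[0]
             for c in range(tile.shape[-1])]
    v = np.concatenate(hists).astype(float)
    return v / max(v.sum(), 1.0)   # normalize so tiles of any size are comparable
```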

ViVo Construction
The third step derives a set of symbols from the feature vectors of all the tiles of all the images. We derive the symbols by applying ICA or PCA to the feature vectors. Each basis vector found by ICA or PCA becomes a symbol. We call the symbols ViVos and the set of symbols a visual vocabulary. Each basis vector defines two ViVos, one along the positive direction of the vector and another along the negative direction.

ViVo Construction
Figure 2. ViVos (ICA vs. PCA). Each point corresponds to a tile. Basis vectors (P1, P2, I1, I2, I3) are scaled for visualization.

ViVo Construction
Let T0 be a t-by-d matrix, where t is the number of tiles from all training images and d is the number of features extracted from each tile. Each row of T0 corresponds to a tile-vector t̃_{i,j}, with the overall mean subtracted. Suppose we want to generate m ViVos. We first reduce the dimensionality of T0 from d to m′ = m/2 using PCA, yielding a t-by-m′ matrix T. Next, ICA is applied to decompose T into two matrices H (t-by-m′) and B (m′-by-m′) such that T = HB. The rows of B are the ICA basis vectors (solid lines in Figure 2). Considering the positive and negative directions of each basis vector, the m′ ICA basis vectors define m = 2m′ ViVos, which are the outputs of the function gen_vv().
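A minimal sketch of this step, assuming NumPy and scikit-learn's PCA and FastICA; gen_vv mirrors the name on the slide, but the exact ICA variant used by the paper is not specified here, so treat this as an assumption.

```python
# Sketch of gen_vv(): PCA down to m' dimensions, then ICA; each row of B
# yields two ViVos (positive and negative direction).
import numpy as np
from sklearn.decomposition import PCA, FastICA

def gen_vv(tile_vectors: np.ndarray, m: int):
    """tile_vectors: t-by-d matrix of tile feature vectors; m: desired number of ViVos (even)."""
    m_prime = m // 2
    pca = PCA(n_components=m_prime).fit(tile_vectors)  # PCA centers the data (overall-mean subtraction)
    T = pca.transform(tile_vectors)                    # t-by-m' matrix
    ica = FastICA(n_components=m_prime, random_state=0).fit(T)
    B = ica.mixing_.T                                  # m'-by-m'; rows are the ICA basis vectors (T ≈ H B)
    return pca, ica, B
```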

ViVo Construction
How do we determine the number of ViVos? We follow a rule of thumb and choose m′ = m/2 to be the dimensionality that preserves 95% of the spread (energy) of the distribution.
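A hedged sketch of that rule of thumb, using PCA's cumulative explained variance; choose_m_prime is an illustrative name.

```python
# Pick m' as the smallest number of principal components whose cumulative
# explained variance reaches the target energy (95% here).
import numpy as np
from sklearn.decomposition import PCA

def choose_m_prime(T0: np.ndarray, energy: float = 0.95) -> int:
    cumulative = np.cumsum(PCA().fit(T0).explained_variance_ratio_)
    return int(np.searchsorted(cumulative, energy)) + 1
```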

ViVo Construction
In the fourth step, with the ViVos ready, we can use them to represent an image.
– Represent each d-dim tile-vector in terms of ViVos by projecting the tile-vector into the m′-dim PCA space and then into the m′-dim ICA space.
– The positive and negative projection coefficients are then considered separately, yielding the 2m′-dim ViVo-vector of a tile.
– The m = 2m′ coefficients in the ViVo-vector of a tile also indicate the contributions of each of the m ViVos to the tile.
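A minimal sketch of this projection, reusing the pca and ica objects from the gen_vv() sketch above; vivo_vector is an illustrative name.

```python
# Map one d-dim tile-vector to its 2m'-dim ViVo-vector: project into PCA
# space, then ICA space, and split positive and negative coefficients.
import numpy as np

def vivo_vector(tile_vec: np.ndarray, pca, ica) -> np.ndarray:
    t_pca = pca.transform(tile_vec.reshape(1, -1))   # m'-dim PCA coordinates
    h = ica.transform(t_pca)[0]                      # m'-dim ICA coefficients
    pos = np.clip(h, 0, None)                        # contributions of the "+" ViVos
    neg = np.clip(-h, 0, None)                       # contributions of the "-" ViVos
    return np.concatenate([pos, neg])                # 2m'-dim ViVo-vector
```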

ViVo Construction
In the final step, each image is expressed by simply adding up the ViVo-vectors of its tiles. This is a good description of the entire image because ICA produces ViVos that do not interfere with each other: ICA makes the columns of H as independent as possible.
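Continuing the illustrative sketches above (tile_image, extract_tile_features, vivo_vector are all assumed helper names), the image-level ViVo-vector is just the sum over tiles:

```python
# Image-level representation: sum of the per-tile ViVo-vectors (illustrative).
image_vivo_vec = sum(vivo_vector(extract_tile_features(t), pca, ica)
                     for t in tile_image(image))
```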

ViVo Visualization
A ViVo is represented by a tile that strongly expresses the characteristics of that ViVo. The representative tiles of a ViVo v_k, T(v_k), are selected from its tile group (essentially the tiles at the tip of the tile group).
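One simple way to realize this selection, again only as a hedged sketch (the paper's exact criterion may differ): pick the tiles whose ViVo-vectors have the largest coefficient for ViVo k.

```python
# Top-k representative tiles for ViVo k: tiles with the largest k-th
# coefficient ("tip" of the tile group). Illustrative only.
import numpy as np

def representative_tiles(tile_vivo_vectors: np.ndarray, k: int, top: int = 5):
    """tile_vivo_vectors: (n_tiles, 2m') matrix; returns indices of the top tiles for ViVo k."""
    return np.argsort(tile_vivo_vectors[:, k])[::-1][:top]
```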

ViVo Tile Groups
Figure 3 visualizes the tile groups of two ViVos on the 2-D plane defined by the PCA basis vectors (P1, P2). The top 5 representative tiles of the two ViVos are shown as light triangles.

Quantitative Evaluation
Classification
– Classification of Retinal Images
– Classification of Subcellular Protein Localization Images
Data Mining Using ViVos
– Biological Interpretation of ViVos
– Finding Most Discriminative ViVos
– Highlighting Interesting Regions by ViVos

Classification

Biological Interpretation of ViVos Figure 4

Finding Most Discriminative ViVos
Figure 5. Pairs of conditions and the corresponding discriminative ViVos. There is an edge in the graph for each pair of conditions that is important from a biological point of view. The numbers on each edge indicate the ViVos that contribute the most to the differences between the conditions connected by that edge. The ViVos are specified in the order of their discriminating power.
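As a hedged sketch, one simple way to rank ViVos by discriminating power between two conditions is to compare mean per-ViVo contributions across each condition's image ViVo-vectors; this is illustrative and not necessarily the exact criterion used in the paper.

```python
# Rank ViVos by the absolute difference of their mean contribution in two
# conditions; larger difference = more discriminative (illustrative criterion).
import numpy as np

def rank_vivos(vivos_a: np.ndarray, vivos_b: np.ndarray):
    """Each input is an (n_images, 2m') matrix of image ViVo-vectors for one condition."""
    diff = np.abs(vivos_a.mean(axis=0) - vivos_b.mean(axis=0))
    return np.argsort(diff)[::-1]   # ViVo indices, most discriminative first
```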

Highlighting Interesting Regions by ViVos
Figure 6. Two examples of images with ViVo annotations (highlighting) added. (a) GFAP labeling in the inner retina; (b) rod photoreceptors recovered as a result of reattachment treatment.
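As another hedged sketch, one simple way to produce such highlights is to mark the tiles whose dominant ViVo is among the discriminative ViVos for the condition pair of interest; this is illustrative, not the paper's exact procedure.

```python
# Return indices of tiles to highlight: those whose strongest coefficient
# belongs to one of the discriminative ViVos (illustrative highlighting rule).
import numpy as np

def tiles_to_highlight(tile_vivo_vectors: np.ndarray, discriminative_vivos) -> list:
    dominant = tile_vivo_vectors.argmax(axis=1)   # dominant ViVo per tile
    keep = set(discriminative_vivos)
    return [i for i, v in enumerate(dominant) if v in keep]
```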

Conclusion
Biological Significance – The terms of the visual vocabulary correspond to concepts biologists use when describing images and biological processes.
Quantitative Evaluation – The ViVo-tiles are successful in classifying images, with accuracies of 80% and above.
Generality – The technique was successfully applied to two diverse classes of images; we believe it will be applicable to other biomedical images.
Biological Process Summarization – The visual vocabulary can be used to describe the essential differences between classes.

Thanks!