Feature-Based Classification of Mouse Eye Images


Feature-Based Classification of Mouse Eye Images. This work is sponsored by the NIH Interdisciplinary Center for Structural Informatics, National Library of Medicine, under grant No. P20 LM. Jenny Yuen, Yi Li, and Linda Shapiro.

Problem
• "A cataract is a clouding of the eye's lens." — The University of Michigan Kellogg Eye Center
• Cataracts have been associated with other illnesses.
• Different cataracts have different effects, so classifying them correctly is important.
• Classification is currently done manually by an expert, which requires substantial time and domain knowledge.

Problem Statement - Eye Image (figure: an eye image with the cornea and the center of the eye labeled)

Problem Statement (1)
• We are given a set of cataract categories S = {C1, C2, …, Cn}:
  • Control
  • CP115 knockout
  • Alpha A-Crystallin knockout
  • Schwann Cell I factor knockout
  • Secreted Protein Acidic and Rich in Cysteine (SPARC) knockout

Proposed Solution
• Automate the classification: a classifier maps an eye image to its class (e.g., Control, SPARC, SCI).

Training Data
• For each cataract category, we are given a set of mouse eye images that exhibit that cataract:
  • Control
  • CP115 knockout
  • Alpha A-Crystallin knockout
  • Schwann Cell I factor knockout
  • Secreted Protein Acidic and Rich in Cysteine (SPARC) knockout

Problem Statement
• We are given:
  • S = {C1, C2, …, Cn} classes of cataracts, and for each class Ci a set of mouse eye training images exhibiting the corresponding cataract.
  • A test image I that is not in the training set.
• We want:
  • A classifier that determines the class to which I belongs.

Approach
• Feature Detection
  • Ring Extraction
  • Intensity Curve
  • Texture
  • Eye Center Appearance
• Training

Ring Detection
• For each pixel, apply local equalization on a 3x3 window (for edge enhancement).
• (figure: the eye image before and after local equalization)
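The local-equalization step can be sketched as follows. The slides only specify the 3x3 window, so the rank-based equalization rule and replicate-border handling below are assumptions:

```python
import numpy as np

def local_equalize(img, win=3):
    """Local equalization on a win x win window, sketched as a rank filter:
    each pixel becomes the rank of its intensity within its neighborhood,
    scaled to 0..255. The exact equalization rule is an assumption; the
    slides only specify the 3x3 window."""
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")   # replicate borders
    out = np.zeros_like(img, dtype=np.uint8)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + win, x:x + win]
            rank = np.sum(patch < img[y, x])   # pixels darker than the center
            out[y, x] = int(255 * rank / (win * win - 1))
    return out
```

Because each pixel is mapped by the statistics of its own neighborhood, low-contrast edges are stretched locally, which is the intended enhancement before ring detection.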

Ring Detection (2)
• Take the horizontal axis that passes through the middle of the image (row nrows/2). For each pixel p(x, xi) in that row:
  1) Take the region of the image in the circular sector Ci with center p(x, xi) and central angle θ.
  2) Map this circular sector to a rectangle R using a polar coordinate transformation.
  3) Represent each column of R by its average intensity.
  4) Compute the variance of the intensities in this curve. This is the score for p(x, xi).
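A minimal sketch of the per-pixel score, steps 1-4 above. The sector orientation (along +x), nearest-neighbor polar sampling, and the sampling densities are assumptions not stated on the slide:

```python
import numpy as np

def sector_score(img, cx, cy, max_r, angle=np.pi / 3, n_theta=16):
    """Score one candidate center pixel (cx, cy): map the circular sector
    with central angle `angle` to a rectangle (rows = sampled angles,
    columns = radii) via a polar transform, average each column to get a
    radial intensity curve, and return the curve and its variance (the
    pixel's score)."""
    thetas = np.linspace(-angle / 2, angle / 2, n_theta)
    radii = np.arange(1, max_r)
    h, w = img.shape
    rect = np.zeros((n_theta, len(radii)))
    for i, t in enumerate(thetas):
        xs = np.clip((cx + radii * np.cos(t)).astype(int), 0, w - 1)
        ys = np.clip((cy + radii * np.sin(t)).astype(int), 0, h - 1)
        rect[i] = img[ys, xs]                 # one row of R per sampled angle
    curve = rect.mean(axis=0)                 # average intensity per column
    return curve, float(np.var(curve))
```

In the full detector, this score would be evaluated at every pixel of row nrows/2 and the highest-scoring pixel taken as the ring center: concentric rings around the true center keep their contrast after angular averaging, so the variance peaks there.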

Ring Feature Extraction

Ring Detection (3)
• Let L_best be the curve to which the circular sector with the highest score was mapped.
• Create a feature vector with the following information from L_best:
  • Number, mean, and variance of the maxima.
  • Number, mean, and variance of the minima.
  • Mean and variance of the distance between consecutive maxima.
  • Mean and variance of the distance between consecutive minima.
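The ten-component feature vector above can be computed from L_best as follows. The slide does not say how plateaus or boundary points are handled, so strict interior extrema are assumed:

```python
import numpy as np

def ring_features(curve):
    """Feature vector from L_best: number/mean/variance of the maxima,
    number/mean/variance of the minima, and mean/variance of the gaps
    between consecutive maxima and between consecutive minima."""
    c = np.asarray(curve, dtype=float)
    interior = np.arange(1, len(c) - 1)
    max_idx = interior[(c[1:-1] > c[:-2]) & (c[1:-1] > c[2:])]
    min_idx = interior[(c[1:-1] < c[:-2]) & (c[1:-1] < c[2:])]

    def stats(vals):
        """Mean and variance, with (0, 0) for an empty set of values."""
        vals = np.asarray(vals, dtype=float)
        if len(vals) == 0:
            return [0.0, 0.0]
        return [float(vals.mean()), float(vals.var())]

    return np.array([len(max_idx)] + stats(c[max_idx]) +
                    [len(min_idx)] + stats(c[min_idx]) +
                    stats(np.diff(max_idx)) + stats(np.diff(min_idx)))
```

For a ring-like eye, the extrema counts and spacings summarize how many rings there are and how regularly they repeat.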

Intensity Curve Feature
• The intensity curve of a line through the center of the eye, with respect to the plane described by the image (figure: 3-D plot with axes x, y, z).

Intensity Curve (2)
• Let P = {p | p is a pixel in row r/2 of image I}.
• Fit a polynomial to the function C(x) = Intensity(I(r/2, x)). Call this polynomial fit Fit_C.

Intensity Curve (3)
• We could use the coefficients of Fit_C as features to describe each class.
• But there is a problem: noise in the image might affect the polynomial.
• Solution: use the Fourier transform.

Intensity Curve (4) - Modified Algorithm
• Let P = {p | p is a pixel in row r/2 of image I}, and C(x) = Intensity(I(r/2, x)).
• Apply a Fourier transform to C: Fourier(C).
• Fit a polynomial to Fourier(C). Call this polynomial fit Fit_Fourier(C).
• Use the coefficients of Fit_Fourier(C) as features.

Intensity Curve (5)
• Applying a Fast Fourier Transform to the intensity values yields the feature vector {coeff1_Re, coeff1_Im, coeff2_Re, coeff2_Im, coeff3_Re, coeff3_Im, coeff4_Re, coeff4_Im}.
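A sketch of the eight-coefficient feature above, assuming the first four low-frequency FFT coefficients of the middle-row curve; skipping the DC term and normalizing by the curve length are assumptions:

```python
import numpy as np

def intensity_curve_features(img, n_coeffs=4):
    """Intensity-curve feature: FFT of the middle row of the image,
    keeping the real and imaginary parts of the first `n_coeffs`
    low-frequency coefficients (8 features for n_coeffs = 4)."""
    arr = np.asarray(img, dtype=float)
    curve = arr[arr.shape[0] // 2]            # row r/2: the intensity curve C
    spectrum = np.fft.fft(curve) / len(curve)
    feats = []
    for k in range(1, n_coeffs + 1):          # k = 0 is the DC (mean) term
        feats += [spectrum[k].real, spectrum[k].imag]
    return np.array(feats)
```

Truncating to a few low frequencies keeps the overall shape of the curve while discarding high-frequency pixel noise, which is the motivation given on the previous slides.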

Eye Center Appearance
• Some of the classes have a cloudy circular region in the center of the eye.
• We plan to identify these regions, extract features such as their size and intensity, and use this information for classification.

Training
• Using the features from a training set, we feed the data to a neural net:
  • Training data: (f11, f12, …, f1n, C1), (f21, f22, …, f2n, C2), …, (fk1, fk2, …, fkn, Ck)
  • A test image vector (f1, f2, …, fn) is then passed to the trained net for classification.
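The slides use a neural net; as an illustration of the same training setup (feature vectors paired with class labels, then classification of a test vector), here is a minimal softmax classifier standing in for it:

```python
import numpy as np

def train_softmax(X, y, n_classes, lr=0.1, epochs=200):
    """Train on (feature vector, class label) pairs by gradient descent on
    the softmax cross-entropy loss. A linear stand-in for the neural net on
    the slide; the training-data layout is the same."""
    rng = np.random.default_rng(0)
    Xb = np.hstack([X, np.ones((len(X), 1))])        # append a bias input
    W = rng.normal(0.0, 0.01, (Xb.shape[1], n_classes))
    onehot = np.eye(n_classes)[y]
    for _ in range(epochs):
        logits = Xb @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)            # softmax probabilities
        W -= lr * Xb.T @ (p - onehot) / len(X)       # gradient step
    return W

def classify(W, x):
    """Classify a test feature vector (f1, ..., fn)."""
    return int(np.argmax(np.append(x, 1.0) @ W))
```

A real multilayer net would add hidden layers and backpropagation, but the interface is the same: rows of features with class labels in, a class prediction for a new feature vector out.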

Preliminary Results on 3 Classes
• Confusion matrix over predicted vs. actual classes (SCI, SPARC, WT).
• 382 images: 82.7% correctly classified, 17.2% incorrectly classified (χ², p < 0.001).

Confusion Between Images
• Some of the images are so similar to each other that it is very hard, even for an expert, to determine the class. There are 50 such images (figure: similar WT and SPARC examples).

Continued Work
• Retrain and retest without the ambiguous images.
• Try more features separately.
• Try combinations of features.
• Try other classifiers (e.g., SVMs).
• Get more training images.