Large Lump Detection by SVM
Sharmin Nilufar, Nilanjan Ray

Outline
Introduction
Proposed classification method
– Scale space analysis of LLD images
– Features for classification
Experiments and results
Conclusion

Introduction
Large lump detection is important because large lumps are related to downtime in the oil-sand mining process. We investigate a solution to this problem using scale-space analysis followed by support vector machine classification.
(Figures: a frame with large lumps; a frame with no large lump.)

Proposed Method
– Feature extraction
– Supervised classification using a support vector machine
(Pipeline diagram: Image → DoG convolution → Feature → Support Vector Machine → Classification result, with the SVM trained on a training set and evaluated on a test set.)

Scale Space
The scale space of an image is defined as a function L(x, y, σ) produced by convolving a variable-scale Gaussian G(x, y, σ) with an input image I(x, y):

L(x, y, σ) = G(x, y, σ) ∗ I(x, y),   with   G(x, y, σ) = (1 / (2πσ²)) exp(−(x² + y²) / (2σ²)),

where ∗ is the convolution operation in x and y.
– The parameter σ in this family is referred to as the scale parameter.
– Image structures of spatial size smaller than about σ have largely been smoothed away at scale-space level σ.
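For concreteness, a minimal sketch of computing one scale-space level L(x, y, σ) with SciPy's Gaussian filter; the frame size and σ values below are illustrative, not taken from the presentation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def scale_space_level(image, sigma):
    """Return L(x, y, sigma): the image smoothed with a Gaussian of
    standard deviation sigma (the scale parameter)."""
    return gaussian_filter(image.astype(np.float64), sigma=sigma)

# Illustrative use: fine structure survives at small sigma, not at large sigma.
I = np.random.rand(240, 320)           # stand-in for a video frame
L_fine = scale_space_level(I, 1.6)
L_coarse = scale_space_level(I, 6.4)
```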

Scale Space

Difference of Gaussian
Difference of Gaussians (DoG) subtracts one blurred version of an original grayscale image from another, less blurred version of the original. DoG can be computed as the difference of two nearby scales separated by a constant multiplicative factor k:

D(x, y, σ) = L(x, y, kσ) − L(x, y, σ).

Subtracting one image from the other preserves the spatial information lying in the band of frequencies between those preserved in the two blurred images.
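A minimal sketch of one DoG level under this definition; the factor k = √2 is an assumption, since the slides do not state the value used.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def difference_of_gaussians(image, sigma, k=np.sqrt(2)):
    """D(x, y, sigma) = L(x, y, k*sigma) - L(x, y, sigma): the difference
    between the same image blurred at two nearby scales."""
    image = image.astype(np.float64)
    return gaussian_filter(image, k * sigma) - gaussian_filter(image, sigma)
```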

Why Difference of Gaussian?
DoG scale-space:
– Efficient to compute
– "Blob" characteristics are extracted from the image
– Good theory behind DoG (e.g., the SIFT feature)

Constructing Scale Space
The scale space represents the same information at different levels of scale. To reduce this redundancy, the scale space can be sampled in the following way:
– The domain of the scale variable σ is discretized in logarithmic steps arranged in O octaves.
– Each octave is further subdivided into S sub-levels.
– At each successive octave the data is spatially downsampled by half.
– The octave index o and the sub-level index s are mapped to the corresponding scale by the formula

σ(o, s) = σ₀ · 2^(o + s/S),   o ∈ {o_min, …, o_min + O − 1},   s ∈ {0, …, S − 1},

where O is the number of octaves, o_min is the index of the first octave, S is the number of sub-levels, and σ₀ is the base smoothing.
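A rough sketch of this octave/sub-level sampling, assuming illustrative defaults (σ₀ = 1.6, O = 4, S = 3); a full implementation such as SIFT would smooth each level incrementally from the previous one rather than from the original image.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def build_scale_space(image, n_octaves=4, n_sublevels=3, sigma0=1.6, o_min=0):
    """Sample sigma(o, s) = sigma0 * 2**(o + s/S) over O octaves and S sub-levels,
    downsampling the image by half at each successive octave."""
    img = image.astype(np.float64)
    pyramid = {}
    for o in range(o_min, o_min + n_octaves):
        for s in range(n_sublevels):
            # Relative to the (already downsampled) grid of this octave,
            # the smoothing applied here is sigma0 * 2**(s / S).
            sigma_rel = sigma0 * 2.0 ** (s / n_sublevels)
            pyramid[(o, s)] = gaussian_filter(img, sigma_rel)
        img = img[::2, ::2]  # halve the resolution for the next octave
    return pyramid
```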

Constructing Scale Space…

Scale Space Analysis of LL images

Scale Space Analysis of non-LL images

Feature From DoG
One possibility is to use the DoG image (as a vector) for classification.
Problem: this feature is not shift invariant.
Remedy: construct a shift-invariant kernel.

Shift Invariant Kernel: Convolution Kernel
Given two images I and J, their convolution is given by:

(I ∗ J)(x, y) = Σ_u Σ_v I(u, v) J(x − u, y − v).

Define a kernel K(I, J) from this convolution: this is the convolution kernel. Can we prove this is indeed a kernel?
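The slide does not spell out the kernel formula, so the sketch below is only one plausible reading: correlate the two DoG images over all relative shifts (via FFT) and reduce the response map to a single number, which makes the value insensitive to where the blob sits in the frame. The normalisation and the max-reduction are assumptions, not the authors' stated definition.

```python
import numpy as np
from scipy.signal import fftconvolve

def convolution_kernel(I, J):
    """Shift-invariant similarity between two (DoG) images: correlate them
    over all relative shifts and keep the peak response.  The choice of
    'max' as the reduction is an assumption, not taken from the slides."""
    I = (I - I.mean()) / (I.std() + 1e-12)
    J = (J - J.mean()) / (J.std() + 1e-12)
    response = fftconvolve(I, J[::-1, ::-1], mode="full")  # correlation via flipped convolution
    return float(response.max())
```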

Feature Selection and Classification
Feature selection:
Classification method:
– Support vector machine
– Construct the convolution kernel matrix (Gram matrix)
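A sketch of training an SVM on such a precomputed Gram matrix with scikit-learn; `convolution_kernel`, the image lists and the labels are the hypothetical pieces carried over from the earlier sketches.

```python
import numpy as np
from sklearn.svm import SVC

def gram_matrix(images, kernel):
    """K[i, j] = kernel(images[i], images[j]) for every pair of training images."""
    n = len(images)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            K[i, j] = K[j, i] = kernel(images[i], images[j])
    return K

# Usage outline (placeholder variable names):
# K_train = gram_matrix(train_dogs, convolution_kernel)
# clf = SVC(kernel="precomputed").fit(K_train, train_labels)
# K_test = np.array([[convolution_kernel(t, x) for x in train_dogs] for t in test_dogs])
# predictions = clf.predict(K_test)
```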

Kernel
(Figures: polynomial kernel function on the DoG training images without convolution; convolution kernel matrix on the training DoG images.)

Supervised Classification
Classification method: support vector machine (SVM) with a polynomial kernel.
– Using cross-validation, a polynomial kernel of degree 2 gave the best results.
Training set: 20 images
– 10 large-lump images
– 10 non-large-lump images
Test set: images (including the training set)
– 45 large lumps
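A sketch of the cross-validation step the slide describes, using scikit-learn's polynomial-kernel SVM; the candidate degrees, the fold count, and the feature layout (one flattened DoG image per row of X) are assumptions.

```python
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def pick_polynomial_degree(X, y, degrees=(1, 2, 3, 4), folds=5):
    """Cross-validate a polynomial-kernel SVM for each candidate degree and
    return the degree with the best mean accuracy (the slides report that
    degree 2 worked best for this data)."""
    scores = {d: cross_val_score(SVC(kernel="poly", degree=d), X, y, cv=folds).mean()
              for d in degrees}
    best = max(scores, key=scores.get)
    return best, scores
```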

Experimental Results
Without convolution the system detects 40 out of 45 large lumps.
FP – no large lump, but the system reports a lump
FN – there is a large lump, but the system reports none
Precision = TP/(TP+FP) = 40/(40+72) ≈ 0.36
Recall = TP/(TP+FN) = 40/(40+5) ≈ 0.89

Experimental Results
With convolution the system detects 42 out of 45 large lumps.
FP – no large lump, but the system reports a lump
FN – there is a large lump, but the system reports none
Precision = TP/(TP+FP) = 42/(42+22) ≈ 0.66
Recall = TP/(TP+FN) = 42/(42+3) ≈ 0.93
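A quick arithmetic check of the reported figures, with TP, FP, FN taken from the two results slides:

```python
def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP); Recall = TP / (TP + FN)."""
    return tp / (tp + fp), tp / (tp + fn)

print(precision_recall(40, 72, 5))   # without convolution -> (0.357, 0.889)
print(precision_recall(42, 22, 3))   # with convolution    -> (0.656, 0.933)
```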

Conclusions
– In most cases, DoG successfully captures blob-like structure in sequences containing a large lump.
– LLD based on scale-space analysis is very fast and simple; no parameter tuning is required.
– The shift-invariant kernel improves classification accuracy.
– We believe that optimizing the kernel function will yield better classification accuracy (future work).
– Temporal information can also be used to avoid false positives (future work).

References
[1] H. Xiong, M. N. S. Swamy, and M. O. Ahmad, "Optimizing the kernel in the empirical feature space," IEEE Transactions on Neural Networks, 16(2).
[2] G. Lanckriet, N. Cristianini, P. Bartlett, L. E. Ghaoui, and M. I. Jordan, "Learning the kernel matrix with semidefinite programming," J. Machine Learning Res., vol. 5.
[3] N. Cristianini, J. Kandola, A. Elisseeff, and J. Shawe-Taylor, "On kernel target alignment," in Proc. Neural Information Processing Systems (NIPS'01), pp. 367–373.
[4] D. Lowe, "Object recognition from local scale-invariant features," in Proc. International Conference on Computer Vision, 1999, pp. 1150–1157.

Thanks