Large Lump Detection by SVM Sharmin Nilufar Nilanjan Ray.


Outline
– Introduction
– Proposed classification method
  – Scale space analysis of LLD images
  – Features for classification
– Experiments and results
– Conclusion

Introduction
Large lump detection is important because large lumps cause downtime in the oil-sand mining process. We investigate a solution to this problem using scale-space analysis followed by support vector machine classification.
Figures: a frame with large lumps; a frame with no large lump.

Proposed Method
– Feature extraction
– Supervised classification using a support vector machine
Pipeline: image → DoG convolution → feature → support vector machine (trained and evaluated on the training and test sets) → classification result

Scale Space
The scale space of an image is defined as a function L(x, y, σ), produced from the convolution of a variable-scale Gaussian, G(x, y, σ), with an input image, I(x, y):

L(x, y, σ) = G(x, y, σ) ∗ I(x, y)

where ∗ is the convolution operation in x and y, and

G(x, y, σ) = (1 / (2πσ²)) exp(−(x² + y²) / (2σ²))

– The parameter σ in this family is referred to as the scale parameter.
– Image structures of spatial size smaller than about σ have largely been smoothed away in the scale-space level at scale σ.
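As a concrete illustration, here is a minimal Python sketch (assuming NumPy and SciPy are available; the toy image and σ values are made up) of computing scale-space levels L(x, y, σ):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def scale_space(image, sigmas):
    """L(x, y, sigma) = G(x, y, sigma) * I(x, y), one level per sigma."""
    image = image.astype(float)
    return [gaussian_filter(image, sigma) for sigma in sigmas]

# Toy input: a single bright point on a dark background.
img = np.zeros((32, 32))
img[16, 16] = 1.0
levels = scale_space(img, sigmas=[1.0, 2.0, 4.0])
```

As σ grows, fine structure is smoothed away: the bright peak at (16, 16) flattens out with each successive level.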

Scale Space

Difference of Gaussian
Difference of Gaussians (DoG) involves the subtraction of one blurred version of an original grayscale image from another, less blurred version of the original. DoG can be computed as the difference of two nearby scales separated by a constant multiplicative factor k:

D(x, y, σ) = L(x, y, kσ) − L(x, y, σ)

Subtracting one image from the other preserves the spatial information lying in the band of frequencies between those preserved in the two blurred images.
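A short sketch of the DoG computation (NumPy/SciPy; the factor k = 1.6 is a conventional choice, not taken from the slide):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def difference_of_gaussians(image, sigma, k=1.6):
    """D(x, y, sigma) = L(x, y, k*sigma) - L(x, y, sigma)."""
    image = image.astype(float)
    return gaussian_filter(image, k * sigma) - gaussian_filter(image, sigma)

# A centered blob gives a strong band-pass response; flat regions give ~0.
img = np.zeros((32, 32))
img[16, 16] = 1.0
dog = difference_of_gaussians(img, sigma=2.0)
```

Because both Gaussian levels conserve the image's total mass, the DoG response sums to (approximately) zero, acting as a band-pass filter.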

Why Difference of Gaussian? DoG scale-space:
– Efficient to compute
– Extracts "blob"-like structure from the image
– Good theory behind DoG (e.g., the SIFT feature)

Constructing Scale Space
The scale space represents the same information at different levels of scale. To reduce this redundancy, the scale space can be sampled in the following way:
– The domain of the scale variable σ is discretized in logarithmic steps arranged in O octaves.
– Each octave is further subdivided into S sub-levels.
– At each successive octave the data is spatially downsampled by half.
– The octave index o and the sub-level index s are mapped to the corresponding scale by the formula

  σ(o, s) = σ₀ · 2^(o + s/S),  o = o_min, …, o_min + O − 1,  s = 0, …, S − 1

  where O is the number of octaves, o_min is the index of the first octave, S is the number of sub-levels, and σ₀ is the base smoothing.
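The octave/sub-level mapping above can be sketched directly (σ₀ = 1.6 and S = 3 are illustrative values, not taken from the slide):

```python
def scale_of(o, s, sigma0=1.6, S=3):
    """Map octave index o and sub-level index s to sigma(o, s) = sigma0 * 2**(o + s/S)."""
    return sigma0 * 2.0 ** (o + s / S)

# Scales grow geometrically: consecutive sub-levels differ by a factor of
# 2**(1/S), and moving up one octave doubles the scale.
```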

Constructing Scale Space…

Scale Space Analysis of LL images

Scale Space Analysis of non-LL images

Feature From DoG
One possibility is to use the DoG image (as a vector) for classification. Problem: this feature is not shift invariant. Remedy: construct a shift-invariant kernel.

Shift Invariant Kernel: Convolution Kernel
Given two images I and J, their convolution is given by

(I ∗ J)(x, y) = Σ_u Σ_v I(u, v) J(x − u, y − v)

For LLD, we define a kernel between I and J from this convolution; this is the convolution kernel. Can we prove that this is indeed a kernel? Note that this kernel function depends on parameters. How do we choose the parameter values?
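The slide does not spell out the kernel's exact form, but a shift-invariant similarity in this spirit can be sketched with FFT-based cross-correlation (everything below, including taking the maximum over shifts, is an assumption for illustration, not the authors' exact kernel):

```python
import numpy as np

def conv_similarity(I, J):
    """Maximum of the full 2-D cross-correlation of I and J, computed via FFTs.
    Because the maximum ranges over all shifts, translating J does not change
    the score -- an illustrative shift-invariant similarity."""
    h, w = I.shape[0] + J.shape[0], I.shape[1] + J.shape[1]
    corr = np.fft.ifft2(np.fft.fft2(I, s=(h, w)) *
                        np.conj(np.fft.fft2(J, s=(h, w)))).real
    return corr.max()

# Two frames containing the same 4x4 bright patch at different positions.
A = np.zeros((32, 32)); A[10:14, 10:14] = 1.0
B = np.zeros((32, 32)); B[13:17, 12:16] = 1.0
```

The score for A against its shifted copy B equals the score of A against itself, which is exactly the shift invariance the plain DoG-vector feature lacks.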

Choosing Kernel Parameters
Figures: the Gram matrix of a polynomial kernel on the DoG training images without convolution, and the convolution kernel matrix on the same training DoG images.
The hint is there in the Gram matrix (aka kernel matrix): the right kernel matrix produces the better result. Do you see any cue?

Choosing Kernel Parameters…
Here is one criterion for choosing a better Gram matrix: look for a Gram matrix that is more "block-structured". Let K be a Gram matrix, and consider a criterion r that measures how far K departs from block structure. For smaller values of r, K is more block-structured, so we choose the parameters of K for which we obtain the smallest r.
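The slide leaves the criterion r unstated; one plausible formalisation (an assumption here, not necessarily the authors') is the ratio of between-class to within-class Gram-matrix mass:

```python
import numpy as np

def block_structure_ratio(K, y):
    """r = sum of K[i, j] over pairs with different labels, divided by the sum
    over pairs with the same label. Smaller r => more block-structured K.
    (Hypothetical formalisation of the slide's unstated criterion.)"""
    y = np.asarray(y)
    same = y[:, None] == y[None, :]
    return K[~same].sum() / K[same].sum()

# A perfectly block-structured Gram matrix scores r = 0; a featureless
# all-ones Gram matrix scores r = 1.
K_block = np.kron(np.eye(2), np.ones((2, 2)))
K_flat = np.ones((4, 4))
labels = [0, 0, 1, 1]
```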

A Mixture Kernel
A mixture kernel combines several kernels:

K = Σ_l α_l K_l,  α_l ≥ 0

We can prove that if the K_l are kernels, then K is also a kernel (exercise: prove it). For the LLD problem, let us consider such a mixture kernel. Question: how do we find the mixture weights from the training data? We can minimize r as a function of the weights. Another such criterion is called the kernel alignment criterion. This is a very active area of research these days.
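A minimal check that a non-negative combination of Gram matrices stays a valid (positive semidefinite) kernel matrix; the two component matrices below are illustrative stand-ins, not the actual polynomial and convolution kernels:

```python
import numpy as np

def mixture_kernel(grams, alphas):
    """K = sum_l alpha_l * K_l; valid whenever each K_l is a kernel and alpha_l >= 0."""
    assert all(a >= 0 for a in alphas), "mixture weights must be non-negative"
    return sum(a * K for a, K in zip(alphas, grams))

# Illustrative Gram matrices.
K1 = np.eye(3)
K2 = np.ones((3, 3))
K = mixture_kernel([K1, K2], [0.7, 0.3])
```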

Supervised Classification
– Classification method: support vector machine (SVM) with a polynomial kernel. Using cross-validation, we found that a polynomial kernel of degree 2 gives the best results.
– Training set: 20 images (10 large-lump images, 10 non-large-lump images).
– Test set: images (including the training set), of which 45 contain large lumps.
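The training setup can be sketched with scikit-learn (assumed available; the features below are random stand-ins for the real DoG-based features, and the class means are made up):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_lump = rng.normal(loc=2.0, size=(10, 16))   # stand-in "large lump" feature vectors
X_clear = rng.normal(loc=0.0, size=(10, 16))  # stand-in "no large lump" vectors
X = np.vstack([X_lump, X_clear])
y = np.array([1] * 10 + [0] * 10)

# Degree-2 polynomial kernel -- the degree the slide reports as best under
# cross-validation; coef0 makes the polynomial inhomogeneous.
clf = SVC(kernel="poly", degree=2, coef0=1.0)
clf.fit(X, y)
```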

Experimental Results
Without convolution the system detects 40 out of 45 large lumps.
– FP: no large lump, but the system says lump
– FN: there is a large lump, but the system says none
Precision = TP / (TP + FP) = 40 / (40 + 72) = 0.35
Recall = TP / (TP + FN) = 40 / (40 + 5) = 0.89

Experimental Results
With convolution the system detects 42 out of 45 large lumps.
– FP: no large lump, but the system says lump
– FN: there is a large lump, but the system says none
Precision = TP / (TP + FP) = 42 / (42 + 22) = 0.66
Recall = TP / (TP + FN) = 42 / (42 + 3) = 0.93
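The two result slides can be reproduced arithmetically; the helper below simply recomputes precision and recall from the reported counts:

```python
def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP); recall = TP / (TP + FN)."""
    return tp / (tp + fp), tp / (tp + fn)

# Counts reported on the slides.
p_plain, r_plain = precision_recall(tp=40, fp=72, fn=5)  # without convolution
p_conv, r_conv = precision_recall(tp=42, fp=22, fn=3)    # with convolution
```

The convolution kernel improves both measures, roughly doubling precision.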

Conclusions
– In most cases, DoG successfully captures blob-like structure in the presence of a large lump.
– LLD based on scale-space analysis is very fast and simple; no parameter tuning is required.
– The shift-invariant kernel improves the classification accuracy.
– We believe that optimizing the kernel function will achieve better classification accuracy (future work).
– Temporal information can also be used to avoid false positives (future work).

References
[1] H. Xiong, M. N. S. Swamy, and M. O. Ahmad, "Optimizing the kernel in the empirical feature space," IEEE Transactions on Neural Networks, 16(2).
[2] G. Lanckriet, N. Cristianini, P. Bartlett, L. E. Ghaoui, and M. I. Jordan, "Learning the kernel matrix with semidefinite programming," J. Machine Learning Res., vol. 5.
[3] N. Cristianini, J. Kandola, A. Elisseeff, and J. Shawe-Taylor, "On kernel target alignment," in Proc. Neural Information Processing Systems (NIPS'01), pp. 367–373.
[4] D. Lowe, "Object recognition from local scale-invariant features," in Proc. International Conference on Computer Vision, pp. 1150–1157, 1999.

Thanks