Categorization by Learning and Combining Object Parts B. Heisele, T. Serre, M. Pontil, T. Vetter, T. Poggio. Presented by Manish Jethwa.

Overview Learn discriminative components of objects with Support Vector Machine (SVM) classifiers.

Background
Global approach
– Attempts to classify the entire object
– Successful when applied to problems in which the object pose is fixed
Component-based techniques
– Individual components vary less when the object pose changes than the whole object does
– Usable even when some of the components are occluded

Linear Support Vector Machines Linear SVMs discriminate between two classes by determining the separating hyperplane; the training points closest to that hyperplane are the support vectors.

Decision function The decision function of the SVM has the form:

f(x) = ∑_{i=1}^{l} α_i y_i (x · x_i) + b

where:
– l is the number of training data points
– x_i are the training data points
– y_i ∈ {-1, 1} is the class label of x_i
– α_i are adjustable coefficients: the solution of a quadratic programming problem, positive weights for the support vectors and zero for all other data points
– b is the bias
f(x) = 0 defines a hyperplane dividing the data; the sign of f(x) indicates the class of a new data point x.
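The decision function above can be evaluated directly from the dual coefficients. Below is a minimal sketch on a hypothetical 2-D toy problem whose numbers were chosen by hand so that the two support vectors lie exactly on the margin (all values are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical hard-margin toy problem: 4 points in 2-D, labels in {-1, +1}.
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -2.0]])
y = np.array([1, 1, -1, -1])
# alpha_i is positive only for the two support vectors x_1 and x_3.
alpha = np.array([0.0625, 0.0, 0.0625, 0.0])
b = 0.0

def f(x):
    """f(x) = sum_{i=1}^{l} alpha_i y_i (x . x_i) + b"""
    return float(np.sum(alpha * y * (X @ x)) + b)

# Support vectors lie exactly on the margin: y_i * f(x_i) == 1.
print(f(X[0]), f(X[2]))                  # 1.0 and -1.0
print(np.sign(f(np.array([5.0, 4.0]))))  # a new point, classified as +1
```

Note how f(x) only involves inner products with the training points; this is what later allows kernels to replace x · x_i.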

Significance of α_i
– Correspond to the weights of the support vectors
– Learned from the training data set
– Used to compute the margin M of the support vectors to the hyperplane:

M = (√(∑_{i=1}^{l} α_i))⁻¹
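On the same hand-solved toy problem, the slide's formula for M can be cross-checked against the primal weight vector w = ∑ α_i y_i x_i, for which the margin is 1/‖w‖ (all numbers are hypothetical):

```python
import numpy as np

# Hand-solved hard-margin toy problem (hypothetical numbers).
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -2.0]])
y = np.array([1, 1, -1, -1])
alpha = np.array([0.0625, 0.0, 0.0625, 0.0])

# Margin via the slide's formula: M = (sqrt(sum_i alpha_i))^{-1}
M_from_alpha = 1.0 / np.sqrt(alpha.sum())

# Cross-check via the weight vector w = sum_i alpha_i y_i x_i, M = 1/||w||.
w = (alpha * y) @ X
M_from_w = 1.0 / np.linalg.norm(w)

print(M_from_alpha, M_from_w)  # both ≈ 2.828
```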

Non-separable Data The notion of a margin extends to non-separable data as well. Misclassified points incur errors. The hyperplane is now found by maximizing the margin while minimizing the summed error. The expected error probability of the SVM satisfies the bound:

E P_err ≤ l⁻¹ E[D²/M²]

where D is the diameter of the smallest sphere containing all the training data.
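To get a feel for the scale of this bound, here is a plug-in evaluation with hypothetical numbers (a single-sample stand-in for the expectation; D and M are assumed, only l comes from the face experiments later in the talk):

```python
# Evaluating E P_err <= l^{-1} E[D^2 / M^2] with assumed values.
l = 2457      # training set size from the face experiments
D = 7.8       # assumed diameter of the sphere containing the training data
M = 2.8       # assumed margin
bound = (D ** 2 / M ** 2) / l
print(round(bound, 4))  # ≈ 0.0032
```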

Measuring Error The probability of error is proportional to the ratio ρ = D²/M². Scaling the data scales the diameter D and the margin M by the same factor, so D₁²/M₁² = D₂²/M₂², i.e. ρ₁ = ρ₂: ρ, and therefore the probability of error, is invariant to scale.
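The scale invariance of ρ can be checked numerically. In this sketch the diameter is approximated by the largest pairwise distance (a lower bound on the enclosing-sphere diameter, used as a simple stand-in), and the margin is the assumed value from the toy problem above:

```python
import numpy as np
from itertools import combinations

def diameter(X):
    """Largest pairwise distance among the points (a simple stand-in
    for the diameter of the smallest enclosing sphere)."""
    return max(np.linalg.norm(a - b) for a, b in combinations(X, 2))

X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -2.0]])
M = 2 * np.sqrt(2)                     # assumed margin for this toy data
rho = diameter(X) ** 2 / M ** 2

# Rescaling every point by s scales both D and M by s, leaving rho unchanged.
s = 10.0
rho_scaled = diameter(s * X) ** 2 / (s * M) ** 2
print(np.isclose(rho, rho_scaled))  # True
```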

Learning Components Components are grown from small seed regions: at each step the region is expanded in one direction (e.g. to the left), and the expansion that yields the smallest bound ρ is kept.
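This growing procedure can be sketched as a greedy search. Here `rho_of_region` is a hypothetical stand-in for retraining an SVM on the cropped region and evaluating D²/M²; in this sketch it is any callable that scores a rectangle, lower being better:

```python
# Greedy component growing (sketch; rho_of_region is a hypothetical stub).
def grow_component(seed, image_w, image_h, rho_of_region, steps=10):
    """seed/region: (left, top, right, bottom) in pixel coordinates.
    Expand one pixel at a time in the direction that lowers rho most."""
    region = seed
    for _ in range(steps):
        l, t, r, b = region
        candidates = [(l - 1, t, r, b), (l, t - 1, r, b),
                      (l, t, r + 1, b), (l, t, r, b + 1)]
        # Keep only expansions that stay inside the image.
        candidates = [c for c in candidates
                      if c[0] >= 0 and c[1] >= 0
                      and c[2] <= image_w and c[3] <= image_h]
        best = min(candidates, key=rho_of_region)
        if rho_of_region(best) >= rho_of_region(region):
            break  # no expansion lowers the bound: stop growing
        region = best
    return region
```

With a toy score such as `lambda r: abs((r[2]-r[0]) * (r[3]-r[1]) - 100)`, the loop grows a 5x5 seed until its area reaches 100 and then stops.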

Learning Facial Components
Extracting face components is time consuming
– Requires manually extracting each component from all training images
Use textured head models instead
– Automatically produce a large number of faces under differing illumination and poses
Seven textured head models were used to generate 2,457 face images of size 58x58.

Negative Training Set Extract 58x58 patches from 502 non-face images to give 10,209 negative training points. Train an SVM classifier on this data, then add its false positives to the negative training set. This augments the negative training set with exactly those images that look most like faces.
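The bootstrapping step above can be sketched as a short loop. The `train` and `classify` callables are hypothetical stand-ins for SVM training and evaluation (classify returning a positive score means "looks like a face"):

```python
# Bootstrapping negatives (sketch; train/classify are hypothetical stubs).
def bootstrap_negatives(train, classify, positives, negatives,
                        nonface_patches, rounds=2):
    """Train, collect false positives on non-face patches, retrain."""
    model = train(positives, negatives)
    for _ in range(rounds):
        false_positives = [p for p in nonface_patches
                           if classify(model, p) > 0]
        if not false_positives:
            break  # the classifier no longer mistakes any patch for a face
        negatives = negatives + false_positives
        model = train(positives, negatives)
    return model, negatives
```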

Learned Components Start with fourteen manually selected 5x5 seed regions. The learned components are:
– The eyes (17x17 pixels)
– The nose (15x20 pixels)
– The mouth (31x15 pixels)
– The cheeks (21x20 pixels)
– The lip (13x16 pixels)
– The nostrils (22x12 pixels)
– The corners of the mouth (15x20 pixels)
– The eyebrows (15x20 pixels)
– The bridge of the nose (15x20 pixels)

Combining Components The component experts (e.g. left eye, nose, mouth), each a linear SVM, are shifted over a 58x58 window; the maximum output of each expert and its location are passed to the combining classifier, also a linear SVM, which makes the final face/background decision. The 58x58 window itself is shifted over the input image.
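The two-level architecture can be sketched as follows. All weights here are hypothetical, and for brevity only each expert's maximum output (not its location) is fed to the combining classifier:

```python
import numpy as np

# Two-level classifier sketch (hypothetical weights throughout).
def expert_max_output(window, w, b, ph, pw):
    """Shift one linear expert (weights w, bias b, patch size ph x pw)
    over the window and return its maximum score."""
    H, W = window.shape
    best = -np.inf
    for i in range(H - ph + 1):
        for j in range(W - pw + 1):
            patch = window[i:i + ph, j:j + pw].ravel()
            best = max(best, float(w @ patch + b))
    return best

def classify_window(window, experts, comb_w, comb_b):
    """Final face/background decision from the experts' maximum outputs."""
    feats = np.array([expert_max_output(window, w, b, ph, pw)
                      for (w, b, ph, pw) in experts])
    return np.sign(comb_w @ feats + comb_b)  # +1 face, -1 background
```

For example, a single 2x2 expert with all-ones weights fires strongly on a window containing a bright 2x2 block, and the combining SVM thresholds that response.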

Experiments