
Thesis title: “Studies in Pattern Classification – Biological Modeling, Uncertainty Reasoning, and Statistical Learning”
3 parts:
(1) Handwritten Digit Recognition with a Vision-Based Model (part in CVPR-2000)
(2) An Uncertainty Framework for Classification (UAI-2000)
(3) Selection of Support Vector Kernel Parameters (ICML-2000)

Handwritten Digit Recognition with a Vision-Based Model Loo-Nin Teow & Kia-Fock Loe School of Computing National University of Singapore

OBJECTIVE To develop a vision-based system that extracts features for handwritten digit recognition based on the following principles: –Biological Basis; –Linear Separability; –Clear Semantics.

Developing the model 2 main modules: Feature extractor –generates feature vector from raw pixel map. Trainable classifier –outputs the class based on the feature vector.

General System Structure (diagram): Raw Pixel Map → Feature Extractor → Feature Vector → Feature Classifier → Digit Class; the two modules together form the Handwritten Digit Recognizer.

The Biological Visual System (diagram): eye → optic nerve → optic chiasm → optic tract → lateral geniculate nucleus → optic radiation → primary visual cortex of the brain.

Receptive Fields (diagram): a visual cell receives input from its receptive field on the visual map and produces output activations.

Simple Cell Receptive Fields

Simple Cell Responses Cases with activation Cases without activation

Hypercomplex Receptive Fields

Hypercomplex Cell Responses Cases without activation Cases with activation

Biological Vision Local spatial features; Edge and corner orientations; Dual-channel (bright/dark; on/off); Non-hierarchical feature extraction.

The Feature Extraction Process: I → selective convolution → Q → feature aggregation → F (2 channels of 36x36 → 32 maps of 32x32 → 32 maps of 9x9)

Dual Channel
On-Channel ← intensity-normalize(Image)
Off-Channel ← complement(On-Channel)
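A minimal sketch of this dual-channel split, assuming the raw pixel map is a grayscale array and that intensity normalization means rescaling to [0, 1] (that choice of normalization is an assumption, not stated on the slide):

```python
import numpy as np

def dual_channel(image):
    """Split a grayscale image into on/off channels.

    On-channel: intensity-normalized image (bright regions -> high values).
    Off-channel: complement of the on-channel (dark regions -> high values).
    Rescaling to [0, 1] is an assumed form of intensity normalization.
    """
    img = image.astype(float)
    rng = img.max() - img.min()
    on = (img - img.min()) / rng if rng > 0 else np.zeros_like(img)
    off = 1.0 - on
    return on, off
```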

Selective Convolution Local receptive fields –same spatial features at different locations. Truncated linear  halfwave rectification –strength of feature’s presence. “Soft” selection based on central pixel –reduce false edges and corners.

Selective Convolution (formulae)
q(x, y) = s(x, y) · max(0, Σ_{u,v} m(u, v) · p(x+u, y+v))
where m is the convolution mask, p the channel, and s(x, y) the soft selection weight derived from the central pixel.
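Since the slide's formulae were not fully captured, here is a hedged sketch of selective convolution as described in words on the previous slide: a halfwave-rectified (truncated linear) mask response, gated by the central pixel; gating by simple multiplication with the central pixel value is an assumption, not necessarily the paper's exact selection function:

```python
import numpy as np

def selective_convolution(channel, mask):
    """Halfwave-rectified convolution "softly selected" by the central pixel."""
    h, w = channel.shape
    mh, mw = mask.shape
    out = np.zeros((h - mh + 1, w - mw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = channel[i:i + mh, j:j + mw]
            # truncated linear / halfwave rectification of the mask response
            response = max(0.0, float((patch * mask).sum()))
            # assumed soft-selection gate: weight by the central pixel value
            centre = float(patch[mh // 2, mw // 2])
            out[i, j] = centre * response
    return out
```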

Convolution Mask Templates Simplified models of the simple and hypercomplex receptive fields. Detect edges and end-stops of various orientations. Corners - more robust than edges –On-channel end-stops : convex corners; –Off-channel end-stops : concave corners.

Some representatives of the 16 mask templates used in the feature extraction

Feature Aggregation Similar to subsampling: –reduces number of features; –reduces dependency on features’ positions; –local invariance to distortions and translations. Different from subsampling: –magnitude-weighted averaging; –detects presence of feature in window; –large window overlap.

Feature Aggregation (formulae)
Magnitude-Weighted Average: F(i, j) = Σ_{(x,y)∈W_ij} q(x, y)² / Σ_{(x,y)∈W_ij} q(x, y)
where W_ij is the (overlapping) aggregation window for output cell (i, j).
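A sketch of the magnitude-weighted average, assuming it means an average in which each response q is weighted by its own magnitude (Σq²/Σq for non-negative q), and using an 8x8 window with stride 3, which would take a 32x32 map to 9x9 with the large overlap the slide mentions; both the weighting rule and the window sizes are assumptions:

```python
import numpy as np

def aggregate(feature_map, window=8, stride=3):
    """Magnitude-weighted average over overlapping windows.

    Each output cell is sum(q*q)/sum(q) over its window, so stronger
    responses dominate; window=8, stride=3 maps 32x32 -> 9x9 (assumed sizes).
    """
    h, w = feature_map.shape
    rows = (h - window) // stride + 1
    cols = (w - window) // stride + 1
    out = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            q = feature_map[i * stride:i * stride + window,
                            j * stride:j * stride + window]
            total = q.sum()
            out[i, j] = (q * q).sum() / total if total > 0 else 0.0
    return out
```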

Classification
Linear discrimination systems
–Single-layer Perceptron Network: minimize cross-entropy cost function.
–Linear Support Vector Machines: maximize interclass margin width.
k-nearest neighbor
–Euclidean distance
–Cosine similarity

Multiclass Classification Schemes for linear discrimination systems
One-per-class (1 vs 9)
Pairwise (1 vs 1)
Triowise (1 vs 2)
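These schemes differ in how many binary discriminants they need. For C classes, one-per-class trains C units, pairwise trains C(C, 2), and triowise (reading "1 vs 2" as one class against a pair of two others, which is an assumption) trains C · C(C−1, 2). A small sketch of the counting:

```python
from math import comb

def num_discriminants(n_classes):
    """Discriminant counts per multiclass scheme (triowise reading assumed)."""
    return {
        "one-per-class": n_classes,                      # 1 vs the rest
        "pairwise": comb(n_classes, 2),                  # 1 vs 1
        "triowise": n_classes * comb(n_classes - 1, 2),  # 1 vs 2
    }
```

For the 10 digit classes this gives 10, 45, and 360 units, consistent with the 360 triowise units in the perceptron convergence table.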

Experiments
MNIST database of handwritten digits: 60000 training samples, 10000 testing samples.
36x36 input image.
32 feature maps of 9x9.

Preliminary Experiments (table): train/test error (%) on the 60000 training and 10000 test samples, comparing the Perceptron Network (1-per-class; pairwise hard/soft voting; triowise hard/soft voting), Linear SVMs (pairwise hard/soft voting; triowise hard/soft voting), and k-Nearest Neighbor (Euclidean distance, k = 3; cosine similarity, k = 3).

Experiments on Deslanted Images (table): the same classifiers and voting schemes re-run on deslanted images; the best result, Linear SVMs with triowise soft voting, achieved 0.00% train error and 0.59% test error (marked *).

Misclassified Characters

Comparison with Other Models
Classifier Model — Test Error (%)
LeNet-4, boosted [distort]: 0.70
LeNet-5 [distort]: 0.80
Tangent distance: 1.10
Virtual SVM: 0.80
This model [deslant]: 0.59 *

Conclusion Our model extracts features that are –biologically plausible; –linearly separable; –semantically clear. Needs only a linear classifier –relatively simple structure; –trains fast; –gives excellent classification performance.

Hierarchy of Features?
Idea originated from Hubel & Wiesel
–LGN → simple → complex → hypercomplex
–later studies show these to be parallel.
Hierarchy - too many feature combinations.
Simpler to have only one convolution layer.

Linear Discrimination
Output: y = g(f(x)), where f defines a hyperplane: f(x) = w · x + b, and g is the activation function: g(f) = 1 / (1 + e^(−f)) or g(f) = sign(f).
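A minimal sketch of such a discriminant; the logistic and sign activations are assumed choices for the perceptron and SVM cases respectively:

```python
import numpy as np

def linear_output(x, w, b, activation="logistic"):
    """y = g(f(x)) with f(x) = w.x + b defining a hyperplane."""
    f = float(np.dot(w, x)) + b
    if activation == "logistic":
        return 1.0 / (1.0 + np.exp(-f))   # smooth activation (perceptron case)
    return 1.0 if f >= 0.0 else -1.0      # sign activation (SVM case)
```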

One-per-class Classification: the unit with the largest output value indicates the class of the character: c = arg max_i y_i

Pairwise Classification
Soft Voting: each 1-vs-1 discriminant adds its graded output to the votes of the two classes it separates.
Hard Voting: each 1-vs-1 discriminant casts one full vote for the class it prefers.
The class with the highest vote total wins.
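A sketch of both voting rules, assuming each pairwise output lies in [0, 1] and is read as evidence for the first class of its pair; this tallying rule is one reading of the slide, whose formulae were not captured:

```python
import numpy as np

def pairwise_vote(outputs, hard=False):
    """Combine 1-vs-1 outputs (upper triangle of an n x n array) into a class.

    outputs[i, j] (i < j) in [0, 1] is evidence for class i over class j.
    Hard voting: one full vote to the preferred class of each pair.
    Soft voting: the graded outputs themselves are accumulated.
    """
    n = outputs.shape[0]
    votes = np.zeros(n)
    for i in range(n):
        for j in range(i + 1, n):
            y = outputs[i, j]
            if hard:
                votes[i if y > 0.5 else j] += 1.0
            else:
                votes[i] += y
                votes[j] += 1.0 - y
    return int(np.argmax(votes))
```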

Triowise Classification
Soft Voting and Hard Voting over the 1-vs-2 discriminants, analogous to the pairwise case.

k-Nearest Neighbor
Euclidean Distance: d(x, y) = ||x − y||
Cosine Similarity: cos(x, y) = (x · y) / (||x|| ||y||)
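A sketch of k-NN classification under both measures (Euclidean distance: smaller is nearer; cosine similarity: larger is nearer):

```python
import numpy as np

def knn_predict(x, X, labels, k=3, metric="euclidean"):
    """Majority label among the k training points nearest to x."""
    if metric == "euclidean":
        scores = -np.linalg.norm(X - x, axis=1)   # negate so higher = nearer
    else:  # cosine similarity
        scores = (X @ x) / (np.linalg.norm(X, axis=1) * np.linalg.norm(x))
    nearest = np.argsort(scores)[::-1][:k]        # k best-scoring neighbours
    values, counts = np.unique(labels[nearest], return_counts=True)
    return values[np.argmax(counts)]
```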

Confusion Matrix (triowise SVMs / soft voting / deslanted)

Number of iterations to convergence for the perceptron network
Scheme        # units   # epochs
1-per-class      10        –
Pairwise         45        –
Triowise        360       147