Face Recognition Using Neural Networks. Presented by: Hadis Mohseni, Leila Taghavi, Atefeh Mirsafian

Presentation transcript:


2 Outline Overview Scaling Invariance Rotation Invariance Face Recognition Methods Multi-Layer Perceptron Hybrid NN SOM Convolutional NN Conclusion

3 Overview

4 Scaling Invariance Magnifying an image while minimizing the loss of perceptual quality. Interpolation methods: weighted sum of neighboring pixels. Content-adaptive methods:  Edge-directed.  Classification-based. Using multilayer neural networks. Proposed method: content-adaptive neural filters using pixel classification.

5 Scaling Invariance (Cont.) Pixel Classification: Adaptive Dynamic Range Coding (ADRC). Concatenating ADRC(x) of all pixels in the window gives the class code. If we invert the picture data, the filter coefficients should remain the same ⇒ the number of classes can be halved. Number of classes: 2^(N−1) for a window with N pixels.
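The classification step above can be sketched as follows. This is a minimal 1-bit ADRC, assuming each pixel is thresholded at the midpoint of the window's dynamic range; the helper name `adrc_class` is ours, not from the slides:

```python
import numpy as np

def adrc_class(window):
    """1-bit ADRC class code for a pixel window.

    Each pixel is coded 1 if it lies at or above the midpoint of the
    window's dynamic range, else 0; the concatenated bits form the
    class code. A code and its bitwise complement are merged (inverting
    the image must not change the filter), leaving 2**(N-1) classes
    for an N-pixel window.
    """
    w = np.asarray(window, dtype=float).ravel()
    mid = (w.min() + w.max()) / 2.0
    bits = (w >= mid).astype(int)
    code = int("".join(map(str, bits)), 2)
    # Merge complementary codes: keep the smaller of code / ~code.
    return min(code, (2 ** len(bits) - 1) ^ code)
```

Note that a window and its inverted counterpart map to the same class, which is exactly the halving of the class count mentioned on the slide.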

6 Scaling Invariance (Cont.) Content-adaptive neural filters: The original high-resolution image, y, and the downscaled image, x, are employed as the training set. These pairs (x, y) are classified by applying ADRC to the input vector x. The optimal coefficients are obtained for each class and stored at the corresponding index of a look-up table (LUT).

7 Scaling Invariance (Cont.) A simple 3-layer feedforward architecture with a few neurons in the hidden layer. The activation function in the hidden layer is tanh. y2, y3 and y4 can be calculated in the same way by flipping the window symmetrically.
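The slide's network equation was shown as a figure that did not survive extraction; the following is a generic sketch of the same structure (a small tanh hidden layer with a linear output), not the paper's exact parameterization:

```python
import numpy as np

def forward(x, W1, b1, W2, b2):
    """Forward pass of a simple 3-layer feedforward filter.

    x is the vector of window pixels; the hidden layer uses tanh and
    the output is linear, producing one interpolated pixel value.
    """
    h = np.tanh(W1 @ x + b1)  # few hidden units, tanh activation
    return W2 @ h + b2        # linear output layer
```

The other three output pixels (y2, y3, y4) would reuse the same weights on symmetrically flipped windows, as the slide states.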

8 Scaling Invariance (Cont.) Pixel classification set reduction: 1. Calculate the Euclidean distance between the normalized coefficient vectors of each pair of classes. 2. If the distance is below the threshold, combine the classes; the coefficients are obtained by training on the combined data of the corresponding classes. 3. Repeat from step 1 with the new class set until no pair of classes is closer than the threshold.
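The reduction loop above can be sketched as a greedy merge. This is an illustrative version under two assumptions we add: the closest pair is merged first, and the mean of the two normalized vectors stands in for retraining on the combined data:

```python
import numpy as np

def reduce_classes(coeffs, threshold):
    """Merge classes whose normalized coefficient vectors are within
    `threshold` Euclidean distance of each other.

    coeffs: dict mapping class id -> coefficient vector.
    Returns the surviving classes with their (re-normalized) vectors.
    """
    classes = {k: np.asarray(v, float) / np.linalg.norm(v)
               for k, v in coeffs.items()}
    merged = True
    while merged and len(classes) > 1:
        merged = False
        keys = list(classes)
        best = None  # (distance, id_a, id_b) of closest mergeable pair
        for i in range(len(keys)):
            for j in range(i + 1, len(keys)):
                d = np.linalg.norm(classes[keys[i]] - classes[keys[j]])
                if d < threshold and (best is None or d < best[0]):
                    best = (d, keys[i], keys[j])
        if best:
            _, a, b = best
            v = (classes[a] + classes[b]) / 2  # stand-in for retraining
            classes[a] = v / np.linalg.norm(v)
            del classes[b]
            merged = True
    return classes
```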

9 Scaling Invariance (Cont.)

10 Rotation Invariance Handling in-plane rotation of faces using a neural network called a router. The router's input is the same region that the detector network will receive as input. The router returns the angle of the face.

11 Rotation Invariance (Cont.) The output angle can be represented by: a single unit; 1-of-N encoding; Gaussian output encoding. An array of 72 output units is used for the proposed method. For a face with angle θ, each output i is trained to have a value of cos(θ − i × 5°). The input face angle is then computed from the output vector.
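The angle-computation formula on the slide was shown as an image. A decoding consistent with the cos(θ − i × 5°) targets is to fit a cosine to the outputs via atan2 of the sine- and cosine-weighted sums; treat the exact form as our assumption:

```python
import math

def encode_angle(theta_deg, n=72, step=5.0):
    """Target vector for the router: unit i is trained toward
    cos(theta - i*step degrees)."""
    return [math.cos(math.radians(theta_deg - i * step)) for i in range(n)]

def decode_angle(outputs, step=5.0):
    """Recover the face angle from the 72 outputs by a cosine fit:
    theta = atan2(sum_i y_i*sin(i*step), sum_i y_i*cos(i*step))."""
    s = sum(y * math.sin(math.radians(i * step)) for i, y in enumerate(outputs))
    c = sum(y * math.cos(math.radians(i * step)) for i, y in enumerate(outputs))
    return math.degrees(math.atan2(s, c)) % 360.0
```

Because the units sample a full circle, the weighted sums reduce to 36·sin θ and 36·cos θ, so atan2 recovers θ exactly for clean outputs and degrades gracefully for noisy ones.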

12 Rotation Invariance (Cont.) Router architecture: Input is a 20 × 20 window of the scaled image. The router has a single hidden layer consisting of a total of 100 units, in 4 sets. Each unit connects to a 4 × 4 region of the input, and each set of 25 units covers the entire input without overlap. The activation function for the hidden layer is tanh. The network is trained using the standard error backpropagation algorithm.

13 Rotation Invariance (Cont.) Generating a set of manually labeled example images. Aligning the labeled faces: 1. Initialize F, a vector which will hold the average position of each labeled feature over all the training faces. 2. Align each face with F by computing a rotation and scaling. 3. Since the transformation can be written as a linear function of its parameters, it can be solved directly for the best alignment. 4. After iterating these steps a small number of times, the alignments converge.
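Step 3 above relies on the fact that a rotation+scale+translation is linear in the parameters (a, b, tx, ty) with [x'; y'] = [a −b; b a][x; y] + [tx; ty]. A least-squares sketch of one alignment step (the function name `align` is ours):

```python
import numpy as np

def align(points, target):
    """Best similarity transform mapping `points` onto `target`,
    solved as linear least squares in (a, b, tx, ty)."""
    A, rhs = [], []
    for (x, y), (u, v) in zip(points, target):
        A.append([x, -y, 1, 0]); rhs.append(u)
        A.append([y,  x, 0, 1]); rhs.append(v)
    sol, *_ = np.linalg.lstsq(np.array(A, float), np.array(rhs, float),
                              rcond=None)
    a, b, tx, ty = sol
    R = np.array([[a, -b], [b, a]])  # rotation and uniform scale
    return np.asarray(points, float) @ R.T + np.array([tx, ty])
```

In the iterative procedure, `target` would be the running average feature vector F, recomputed from the aligned faces after each pass until the alignments converge.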

14 Rotation Invariance (Cont.) To generate the training set, the faces are rotated to a random orientation.

15 Rotation Invariance (Cont.) Empirical results:

16 Rotation Invariance (Cont.)

17 Face Recognition Methods Database: ORL (Olivetti Research Lab). The database consists of 400 images, 92 × 112 pixels each, of 40 distinct subjects: 5 images per person for the training set and 5 for the test set. There are variations in facial expression and facial detail.

18 Face Recognition Methods Multi-Layer Perceptron: The training set faces are run through a PCA, and the 200 corresponding eigenvectors (principal components) are found, which can be displayed as eigenfaces. Each face in the training set can be reconstructed by a linear combination of all the principal components. By projecting the test set images onto the eigenvector basis, the eigenvector expansion coefficients can be found (a dimensionality reduction!).
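The eigenface projection described above can be sketched with an SVD of the centered training matrix; this is a generic PCA sketch (function names ours), not the exact pipeline from the slides:

```python
import numpy as np

def eigenfaces(train, k):
    """PCA on vectorized faces: rows of `train` are flattened images.

    Returns the mean face and the top-k principal components
    (eigenfaces); rows of Vt from the SVD of the centered data are the
    principal directions in order of decreasing variance.
    """
    mean = train.mean(axis=0)
    _, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
    return mean, Vt[:k]

def project(face, mean, comps):
    """Expansion coefficients of a face in the eigenface basis --
    the k-dimensional feature vector fed to the MLP classifier."""
    return (face - mean) @ comps.T
```

A face is then approximated by `coeffs @ comps + mean`; with enough components the training faces are reconstructed exactly, which is the linear-combination property the slide mentions.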

19 Face Recognition Methods (Cont.) MLP: Training the classifier on the coefficients of the training set images. Using a variable number of principal components, ranging from 25 to 200, in different simulations. Each simulation is repeated 5 times with random initialization of all MLP parameters, and the results are averaged. The error backpropagation learning algorithm was applied with a small constant learning rate (normally < 0.01).
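One step of the error backpropagation update can be sketched as follows for a 1-hidden-layer MLP with tanh hidden units, linear outputs, and squared error; the architecture details are our simplification, but the learning rate matches the slide's constraint:

```python
import numpy as np

def backprop_step(x, t, W1, b1, W2, b2, lr=0.01):
    """One stochastic gradient step on squared error, lr < 0.01-ish.

    Forward: h = tanh(W1 x + b1), y = W2 h + b2.
    Backward: chain rule through the linear output and tanh hidden layer.
    """
    h = np.tanh(W1 @ x + b1)
    y = W2 @ h + b2
    e = y - t                     # output error
    dW2 = np.outer(e, h)
    db2 = e
    dh = (W2.T @ e) * (1 - h**2)  # backprop through tanh (tanh' = 1 - h^2)
    dW1 = np.outer(dh, x)
    db1 = dh
    return W1 - lr*dW1, b1 - lr*db1, W2 - lr*dW2, b2 - lr*db2
```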

20 Face Recognition Methods (Cont.) MLP Results:

21 Face Recognition Methods (Cont.) Hybrid NN

22 Face Recognition Methods (Cont.) Hybrid NN 1. Local Image Sampling

23 Face Recognition Methods (Cont.) Hybrid NN 2. Self-Organizing Map

24 Face Recognition Methods (Cont.) Hybrid NN

25 Face Recognition Methods (Cont.) Hybrid NN SOM image samples corresponding to each node before training and after training

26 Face Recognition Methods (Cont.) Hybrid NN 3. Convolutional NNs. Invariant to some degree of:  Shift  Deformation. Using these 3 ideas:  Local Receptive Fields  Shared Weights (aiding generalization)  Spatial Subsampling

27 Face Recognition Methods (Cont.) Hybrid NN

28 Face Recognition Methods (Cont.) Hybrid NN Network Layers: Convolutional layers: each layer contains one or more planes. Each plane can be considered a feature map with a fixed feature detector that is convolved with a local window scanned over the planes in the previous layer. Subsampling layers: a local averaging and subsampling operation.
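The two layer types above can be sketched directly. This is a minimal single-plane version (loop-based for clarity, with an assumed tanh nonlinearity and 2 × 2 averaging), not the paper's full multi-plane network:

```python
import numpy as np

def feature_map(image, kernel, bias=0.0):
    """One convolutional plane: a fixed feature detector scanned over
    the input (valid 2-D correlation), followed by tanh."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Same shared weights at every window position.
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel) + bias
    return np.tanh(out)

def subsample(plane):
    """Subsampling layer: 2x2 local averaging, halving each dimension."""
    H, W = plane.shape
    return plane[:H//2*2, :W//2*2].reshape(H//2, 2, W//2, 2).mean(axis=(1, 3))
```

Shared weights are what make the plane a feature *map*: the same detector responds wherever the feature appears, giving the shift tolerance mentioned two slides back.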

29 Face Recognition Methods (Cont.) Hybrid NN Convolutional and Sampling relations:

30 Face Recognition Methods (Cont.) Hybrid NN Simulation details: Initial weights are uniformly distributed random numbers in the range [−2.4/Fi, 2.4/Fi], where Fi is the fan-in of neuron i. Target outputs are −0.8 and 0.8, using the tanh output activation function. Weights are updated after each pattern presentation.
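The initialization rule quoted on the slide is straightforward to write down (the seeded generator is our addition for reproducibility):

```python
import numpy as np

def init_weights(fan_in, fan_out, seed=0):
    """Uniform initial weights in [-2.4/fan_in, 2.4/fan_in], the range
    given on the slide; one row per neuron, one column per input."""
    rng = np.random.default_rng(seed)
    lim = 2.4 / fan_in
    return rng.uniform(-lim, lim, size=(fan_out, fan_in))
```

Scaling the range by the fan-in keeps each neuron's initial pre-activation of comparable magnitude regardless of how many inputs it has, so the tanh units start in their linear region rather than saturated.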

31 Face Recognition Methods (Cont.) Hybrid NN Experimental Results.  Experiment #1: Variation of the number of output classes

32 Face Recognition Methods (Cont.) Hybrid NN Variation of the dimensionality of the SOM

33 Face Recognition Methods (Cont.) Hybrid NN Substituting the SOM with the KLT Replacing the CN with an MLP

34 Face Recognition Methods (Cont.) Hybrid NN The tradeoff between rejection threshold and recognition accuracy

35 Face Recognition Methods (Cont.) Hybrid NN Comparison with other known results on the same database

36 Face Recognition Methods (Cont.) Hybrid NN Variation of the number of training images per person

37 Face Recognition Methods (Cont.) Hybrid NN

38 Face Recognition Methods (Cont.)  Experiment #2:

39 Face Recognition Methods (Cont.)

40 Conclusion The results of the face recognition experiments are greatly influenced by: the training data; the preprocessing function; the type of network selected; the activation functions. A fast, automatic system for face recognition has been presented which is a combination of a SOM and a CN. This network is partially invariant to translation, rotation, scale and deformation.