FACE RECOGNITION, EXPERIMENTS WITH RANDOM PROJECTION


FACE RECOGNITION, EXPERIMENTS WITH RANDOM PROJECTION
Navin Goel, Graduate Student
Advisor: Dr. George Bebis, Associate Professor
Department of Computer Science and Engineering, University of Nevada, Reno

Overview
- Introduction and Thesis Scope
- Principal Component Analysis
- Method of Eigenfaces
- Random Projection
- Properties of Random Projection
- Random Projection for Face Recognition
- Experimental Procedure and Data Sets
- Recognition Approaches and Results
- Conclusion and Future Work

Introduction
Problem statement: identify a person from a face image by matching it against a face database.
Applications: human-computer interfaces, static matching of photographs, video surveillance, biometric security, and image and film processing.

Challenges
- Pose variations: head position (frontal view, profile view, head tilt) and facial expressions
- Illumination changes: light direction and intensity, cluttered backgrounds, low-quality images
- Camera parameters: resolution, color balance, etc.
- Occlusion: glasses, facial hair, and makeup

Thesis Scope
- Investigate the application of Random Projection (RP) to face recognition.
- Evaluate the performance of RP for face recognition under various conditions and assumptions.
- Propose an algorithm that replaces the learning step of PCA with a cheaper and more efficient step.

Principal Component Analysis (PCA)
For a set of M N-dimensional vectors {x_1, x_2, ..., x_M}, where each face image is stored as a 1-D vector, PCA finds the eigenvalues and eigenvectors of the covariance matrix of the vectors:

C = (1/M) Σ_{i=1..M} (x_i − Ψ)(x_i − Ψ)^T

where Ψ is the average of the image vectors, u_k are the eigenvectors, and λ_k are the corresponding eigenvalues. Only the k eigenvectors corresponding to the k largest eigenvalues are kept.
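A minimal NumPy sketch of this PCA step (not code from the thesis; function and variable names are illustrative). It uses the standard snapshot trick, diagonalizing the small M x M matrix A^T A instead of the N x N covariance matrix:

```python
import numpy as np

def pca_eigenfaces(X, k):
    """Compute the top-k eigenfaces from training images.

    X : (N, M) array with one N-dimensional image vector per column.
    k : number of eigenvectors (eigenfaces) to keep.
    """
    psi = X.mean(axis=1, keepdims=True)      # average image vector (Psi)
    A = X - psi                              # mean-centered data
    # Snapshot trick: eigenvectors of the small (M x M) matrix A^T A yield the
    # eigenvectors of the (N x N) covariance matrix A A^T with the same
    # (unscaled) eigenvalues, which is much cheaper when M << N.
    eigvals, eigvecs = np.linalg.eigh(A.T @ A)
    order = np.argsort(eigvals)[::-1][:k]    # indices of the k largest eigenvalues
    U = A @ eigvecs[:, order]                # map back to image space
    U /= np.linalg.norm(U, axis=0)           # unit-length eigenfaces, one per column
    return psi, U, eigvals[order] / X.shape[1]
```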

Method of Eigenfaces
1. Apply PCA to the training dataset.
2. Project the gallery set images onto the reduced-dimensional eigenspace.
3. For each test set image:
   - Project the image onto the reduced-dimensional eigenspace.
   - Measure similarity by computing the distance between its projection coefficients and those of each gallery image.
   - The face is recognized correctly if the closest gallery image belongs to the same person as the test image.
A sketch of steps 2 and 3 follows.
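Continuing the sketch above, a hedged illustration of the projection and closest-match steps; Euclidean distance on the projection coefficients is an assumption, since the slides do not fix the distance measure:

```python
def project(images, psi, U):
    """Project mean-centered images onto the eigenspace; returns (k, n_images) coefficients."""
    return U.T @ (images - psi)

def recognition_rate(gallery, gallery_ids, test, test_ids, psi, U):
    """Closest-match recognition in the reduced space (Euclidean distance assumed)."""
    G = project(gallery, psi, U)
    T = project(test, psi, U)
    correct = 0
    for j in range(T.shape[1]):
        dists = np.linalg.norm(G - T[:, j:j+1], axis=0)   # distance to every gallery image
        if gallery_ids[int(np.argmin(dists))] == test_ids[j]:
            correct += 1
    return correct / T.shape[1]
```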

Random Projection (RP)
The original N-dimensional data is projected onto a d-dimensional subspace (d << N) using a random matrix:

Y_{d×M} = R_{d×N} X_{N×M}

where X_{N×M} is the original data (one image per column) and R_{d×N} is the random matrix. The random matrix is calculated using the following steps:
- Each entry of the matrix is drawn from N(0, 1), i.e., zero mean and unit variance.
- The d rows of the matrix are orthogonalized using the Gram-Schmidt algorithm and then normalized to unit length.
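A small sketch of this construction (illustrative only); NumPy's QR factorization is used in place of an explicit Gram-Schmidt loop, since it produces the same orthonormalization of the rows:

```python
def random_projection_matrix(d, N, rng=None):
    """Build a (d, N) random projection matrix with orthonormal rows."""
    rng = np.random.default_rng(rng)
    R = rng.standard_normal((d, N))     # each entry drawn from N(0, 1)
    # QR factorization of R^T orthonormalizes the rows of R (equivalent to
    # Gram-Schmidt followed by normalization to unit length).
    Q, _ = np.linalg.qr(R.T)            # Q is (N, d) with orthonormal columns
    return Q.T

# Projecting the data:  Y (d x M) = R (d x N) @ X (N x M)
```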

Random Projection – Data Independence
(S. Dasgupta, "Experiments with Random Projection," Uncertainty in Artificial Intelligence, 2000.)
Random projection does not depend on the data itself, so the same construction applies to digital images, document databases, signal processing, and similar domains.
Figure: two 1-separated spherical Gaussians projected onto a random subspace of dimension 20; error bars show 1 standard deviation over 40 trials per dimension.

Random Projection – Eccentricity
(S. Dasgupta, "Experiments with Random Projection," Uncertainty in Artificial Intelligence, 2000.)
RP makes highly eccentric Gaussian clusters more spherical. It is conceptually easier to design algorithms for spherical clusters than for ellipsoidal ones.
Figure: a Gaussian lying in a 50-dimensional subspace with eccentricity 1,000, projected onto lower dimensions.

Random Projection – Complexity
(E. Bingham and H. Mannila, "Random Projection in Dimensionality Reduction: Applications to Image and Text Data," Proceedings of the 7th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 245-250, August 26-29, 2001.)
The complexity of RP is quadratic, O(n^2), in contrast to PCA, which is cubic, O(n^3).
Figure: number of floating-point operations needed when reducing the dimensionality of image data using RP (+), SRP (*), PCA, and DCT, on a logarithmic scale.

Random Projection – Lower Bound
(S. Dasgupta, "Experiments with Random Projection," Uncertainty in Artificial Intelligence, 2000.)
What value of d (the dimensionality of the lower-dimensional space) should be chosen? A 1-separated mixture of k Gaussians in dimension 100 was projected onto d = ln k dimensions. In contrast, PCA cannot be expected to reduce the dimensionality of k Gaussians below Ω(k).

Random Projection for Face Recognition
1. Generate the lower-dimensional random subspace.
2. Project the gallery set images onto the reduced-dimensional random space.
3. For each test set image:
   - Project the image onto the reduced-dimensional random space.
   - Measure similarity by computing the distance between its projection coefficients and those of each gallery image.
   - The face is recognized correctly if the closest gallery image belongs to the same person as the test image.
A sketch combining the earlier pieces follows.
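Putting the earlier sketches together, a hedged end-to-end illustration; mean-centering before projection and Euclidean distance are assumptions, not details taken from the slides:

```python
def recognize_rp(gallery, gallery_ids, test, test_ids, d, seed=0):
    """Closest-match recognition in a random subspace (no learning step)."""
    N = gallery.shape[0]
    R = random_projection_matrix(d, N, rng=seed)
    psi = gallery.mean(axis=1, keepdims=True)   # mean-centering is an assumption here
    G = R @ (gallery - psi)
    T = R @ (test - psi)
    correct = 0
    for j in range(T.shape[1]):
        nearest = int(np.argmin(np.linalg.norm(G - T[:, j:j+1], axis=0)))
        correct += gallery_ids[nearest] == test_ids[j]
    return correct / T.shape[1]
```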

Experimental Procedure
Figure: main steps of the approach.

Data Sets
Figures: face images of a particular subject from the ORL, CVL, and AR data sets.

Closest Match Approach
Results are averaged over 5 experiments.
Figure: flowchart for calculating the recognition rate using the closest match approach.
A sketch of the averaging loop follows.
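An illustrative reading of the averaging step, assuming each of the 5 experiments uses a different random projection matrix; the exact experimental splits are defined by the flowchart in the thesis and are not reproduced here:

```python
def average_recognition_rate(gallery, gallery_ids, test, test_ids, d, runs=5):
    """Average the closest-match recognition rate over several runs,
    each run assumed to use a different random projection matrix."""
    rates = [recognize_rp(gallery, gallery_ids, test, test_ids, d, seed=r)
             for r in range(runs)]
    return sum(rates) / len(rates)
```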

Closest Match Approach + Majority Voting
Figure: flowchart for calculating the recognition rate using the closest match approach combined with the majority voting technique.
A sketch of the voting step follows.
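A minimal sketch of majority voting over several random ensembles, building on the functions above; the number of ensembles and the use of mean-centering are assumptions:

```python
from collections import Counter

def recognize_majority_vote(gallery, gallery_ids, test, test_ids, d, n_ensembles=5):
    """Each random ensemble votes with its closest-match identity for every
    test image; the most frequent identity is the final prediction."""
    N = gallery.shape[0]
    psi = gallery.mean(axis=1, keepdims=True)
    votes = [[] for _ in range(test.shape[1])]
    for seed in range(n_ensembles):
        R = random_projection_matrix(d, N, rng=seed)
        G, T = R @ (gallery - psi), R @ (test - psi)
        for j in range(T.shape[1]):
            nearest = int(np.argmin(np.linalg.norm(G - T[:, j:j+1], axis=0)))
            votes[j].append(gallery_ids[nearest])
    predictions = [Counter(v).most_common(1)[0][0] for v in votes]
    correct = sum(p == t for p, t in zip(predictions, test_ids))
    return correct / test.shape[1]
```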

Closest Match Approach + Scoring
Figure: flowchart for calculating the recognition rate using the closest match approach combined with the scoring technique.
One plausible reading of the scoring step is sketched below.
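The exact scoring rule is defined by the thesis flowchart; the sketch below is only one plausible interpretation, in which each ensemble awards decreasing scores to its top few closest gallery identities and the scores are accumulated across ensembles:

```python
from collections import Counter

def recognize_scoring(gallery, gallery_ids, test, test_ids, d,
                      n_ensembles=5, top_hits=3):
    """Illustrative scoring: every ensemble awards decreasing scores to its
    top_hits closest gallery identities; scores are summed across ensembles
    and the highest-scoring identity is the prediction."""
    N = gallery.shape[0]
    psi = gallery.mean(axis=1, keepdims=True)
    scores = [Counter() for _ in range(test.shape[1])]
    for seed in range(n_ensembles):
        R = random_projection_matrix(d, N, rng=seed)
        G, T = R @ (gallery - psi), R @ (test - psi)
        for j in range(T.shape[1]):
            dists = np.linalg.norm(G - T[:, j:j+1], axis=0)
            for rank, idx in enumerate(np.argsort(dists)[:top_hits]):
                scores[j][gallery_ids[int(idx)]] += top_hits - rank  # closer hits score more
    predictions = [s.most_common(1)[0][0] for s in scores]
    correct = sum(p == t for p, t in zip(predictions, test_ids))
    return correct / test.shape[1]
```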

Results for the ORL database
Experiment on the ORL database using the closest match approach + majority voting technique, where the training set consists of the same subjects as the gallery and test sets.
Experiment on the ORL database using the closest match approach + majority voting technique, where the training set consists of different subjects from the gallery and test sets.

Results for the CVL database
Experiment on the CVL database using the closest match approach + majority voting technique, where the training set consists of the same subjects as the gallery and test sets.
Experiment on the CVL database using the closest match approach + majority voting, where the training set consists of different subjects from the gallery and test sets.

Results for the AR database
Experiment on the AR database using the closest match approach + majority voting, where the training set consists of randomly chosen subjects and the gallery and test sets contain different combinations.

ORL database for Multiple Ensembles
Plot on RCA, majority-voting technique for 5 and 30 different random seeds, where the training set consists of different subjects from the gallery and test sets.

Results for the ORL database with Scoring Technique
Experiment on the ORL database using the closest match approach + scoring, where the training set consists of the same subjects as the gallery and test sets.
Experiment on the ORL database using the closest match approach + scoring, where the training set consists of different subjects from the gallery and test sets.

Results for the CVL database with Scoring Technique
Experiment on the CVL database using the closest match approach + scoring, where the training set consists of different subjects from the gallery and test sets.

Results for the AR database with Scoring Technique
Experiment on the AR database using the closest match approach + scoring, where the training set consists of random subjects and the gallery and test sets contain different combinations.

Conclusion
- We obtained recognition rates equivalent to PCA and, in most cases, better.
- The RP matrix is independent of the training data.
- The main advantage of RP is its computational complexity: quadratic for RP versus cubic for PCA.
- RP works better when the gallery-to-test-set ratio is higher.
- RP works better than PCA when the training set images differ from the gallery and test sets.
- RP shows irregular results over single runs, but performance improves with multiple ensembles.
- Majority voting over the closest match further improves the performance of RP.
- For the scoring technique, the greater the number of top hits per image, the better the performance.

Future Work
Combine different random ensembles in ways that further improve efficiency and accuracy.