
Outline Peter N. Belhumeur, Joao P. Hespanha, and David J. Kriegman, “Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711-720, 1997.

The Goal: Face recognition that is insensitive to large variations in lighting and facial expression. Note that lighting variability here includes the intensity, direction, and number of light sources.
November 15, 2018 Computer Vision

The Difficulty: The problem is hard because the same person, with the same facial expression and seen from the same viewpoint, can appear dramatically different when light sources illuminate the face from different directions.

Observation: All images of a Lambertian surface, taken from a fixed viewpoint but under varying illumination, lie in a 3D linear subspace of the high-dimensional image space. Image formation can be modeled as a function of the surface albedo a(p), the surface normal n(p), and the light source s. For a Lambertian surface, the amount of reflected light does not depend on the viewing direction, but only on the cosine of the angle between the incident light ray and the surface normal. So, for Lambertian surfaces, the intensity at point p is E(p) = a(p) n(p)^T s, which is linear in the light-source vector s.

Observation: Therefore, in the absence of shadowing, given three images of a Lambertian surface taken from the same viewpoint under three known, linearly independent light-source directions, the albedo and surface normal can be recovered. One can then reconstruct the image of the surface under an arbitrary lighting direction as a linear combination of the three original images.
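As a sanity check on this claim, here is a minimal NumPy sketch on a toy shadow-free Lambertian "surface" (the data and names are illustrative, not from the paper): an image under any new light direction is the corresponding linear combination of the three original images.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Lambertian surface: row p of B holds albedo(p) * normal(p).
n_pixels = 100
B = rng.normal(size=(n_pixels, 3))

# Three linearly independent light-source directions, one per column.
S = np.array([[1.0, 0.0, 0.2],
              [0.0, 1.0, 0.3],
              [0.1, 0.2, 1.0]])

images = B @ S                      # column j = image under light S[:, j]

# A new light direction is a linear combination of the three originals,
# so the corresponding image is the same combination of the three images.
s_new = np.array([0.5, -0.2, 0.8])
coeffs = np.linalg.solve(S, s_new)  # solves S @ coeffs == s_new
reconstructed = images @ coeffs
direct = B @ s_new                  # image rendered directly under s_new
assert np.allclose(reconstructed, direct)
```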

3D Linear Space Example

The Problem Statement: Given a set of face images labeled with each person's identity, and an unlabeled set of face images of the same group of people, identify the person in each test image.

Correlation: Nearest neighbor in the image space. If all images are normalized to have zero mean and unit variance, this is equivalent to choosing the image in the learning set that best correlates with the test image. Because of the normalization, the result is independent of light-source intensity.

Correlation: Covariance: cov(x, y) = E[(x - mu_x)(y - mu_y)]. Correlation: corr(x, y) = cov(x, y) / (sigma_x sigma_y).

Correlation: Problems with correlation: (1) If the images are gathered under varying lighting conditions, the corresponding points in image space may not be tightly clustered. (2) Computing the correlation between two full-resolution images is computationally expensive. (3) All training images must be stored, which can require a large amount of memory.
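A minimal NumPy sketch of this correlation classifier (toy "images" and function names of my own choosing): with zero-mean, unit-variance normalization, global intensity changes do not affect the prediction.

```python
import numpy as np

def normalize(images):
    """Zero-mean, unit-variance normalization of each flattened image (row)."""
    X = images - images.mean(axis=1, keepdims=True)
    return X / X.std(axis=1, keepdims=True)

def correlation_nn(train, labels, test):
    """Label each test image by the training image with maximal correlation.
    After normalization this coincides with nearest neighbor in image space."""
    scores = normalize(test) @ normalize(train).T
    return labels[np.argmax(scores, axis=1)]

rng = np.random.default_rng(1)
train = rng.normal(size=(6, 50))             # six training "images", 50 pixels each
labels = np.array([0, 0, 1, 1, 2, 2])
test = train[[1, 4]] * 3.0 + 5.0             # rescaled and shifted copies
preds = correlation_nn(train, labels, test)  # intensity change has no effect
```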

Eigenfaces: As discussed last time, we can reduce the computation through dimensionality reduction using PCA. Suppose we have a set of N images x_1, ..., x_N belonging to c classes. We define a linear transformation y_k = W^T x_k, mapping each n-pixel image to an m-dimensional feature vector (m << n). The total scatter of the training set is S_T = sum_k (x_k - mu)(x_k - mu)^T, where mu is the mean image.

Eigenfaces: PCA chooses W to maximize the total scatter of the transformed feature vectors, W^T S_T W. Mathematically, W_opt = arg max_W |W^T S_T W| = [w_1 w_2 ... w_m], where the w_i are the eigenvectors of S_T corresponding to the m largest eigenvalues; these are the eigenfaces.

Eigenfaces: The total scatter is due both to the between-class scatter, which is useful for classification, and to the within-class scatter, which is unwanted for classification purposes. When lighting changes, much of the variation from one image to the next is due to illumination. An ad hoc way of dealing with this is to discard the three most significant principal components, which reduces the variation due to lighting.

Linear Subspaces: Recall that all images of a Lambertian surface under different lighting lie in a 3D linear subspace. For each face, use three or more images taken under different lighting directions to construct a 3D basis for its linear subspace. To perform recognition, we simply compute the distance from a new image to each linear subspace and choose the face corresponding to the shortest distance. If there were no noise or shadowing, this would achieve error-free classification under any lighting conditions.
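The subspace classifier can be sketched with a least-squares projection (toy 50-pixel "images" and hypothetical bases, purely for illustration):

```python
import numpy as np

def subspace_distance(x, B):
    """Euclidean distance from image x to the span of the columns of B."""
    coeffs, *_ = np.linalg.lstsq(B, x, rcond=None)
    return np.linalg.norm(x - B @ coeffs)

rng = np.random.default_rng(3)
# One 3D basis per face, built from three images under different lighting.
bases = [rng.normal(size=(50, 3)) for _ in range(2)]

# A noise-free, shadow-free image of face 0 under some new lighting.
x = bases[0] @ np.array([0.3, -1.0, 0.7])

dists = [subspace_distance(x, B) for B in bases]
best = int(np.argmin(dists))            # face 0: its subspace distance is zero
```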

Fisherfaces: Use Fisher's linear discriminant to find class-specific linear projections. More formally, we define the between-class scatter S_B = sum_i N_i (mu_i - mu)(mu_i - mu)^T and the within-class scatter S_W = sum_i sum_{x_k in X_i} (x_k - mu_i)(x_k - mu_i)^T, where mu_i is the mean of class X_i and N_i the number of samples in it. We then choose W to maximize the ratio of the determinant of the projected between-class scatter matrix to that of the projected within-class scatter matrix.

Fisherfaces: That is, W_opt = arg max_W |W^T S_B W| / |W^T S_W W| = [w_1 w_2 ... w_m], where the w_i are the generalized eigenvectors of S_B w_i = lambda_i S_W w_i with the m largest eigenvalues. Since S_B is the sum of c rank-one matrices, at most c - 1 of the eigenvalues are nonzero.
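A minimal sketch of this criterion on toy low-dimensional data (here S_W is nonsingular, so the generalized eigenvectors can be taken as eigenvectors of S_W^{-1} S_B; the data is synthetic, not face images):

```python
import numpy as np

rng = np.random.default_rng(4)
d, c, per_class = 5, 3, 20
X = np.vstack([rng.normal(loc=3.0 * i, size=(per_class, d)) for i in range(c)])
y = np.repeat(np.arange(c), per_class)

mu = X.mean(axis=0)
S_B = np.zeros((d, d))                 # between-class scatter
S_W = np.zeros((d, d))                 # within-class scatter
for i in range(c):
    Xi = X[y == i]
    mi = Xi.mean(axis=0)
    S_B += len(Xi) * np.outer(mi - mu, mi - mu)
    S_W += (Xi - mi).T @ (Xi - mi)

# Columns of W_opt are the leading generalized eigenvectors of (S_B, S_W),
# i.e. eigenvectors of S_W^{-1} S_B; at most c - 1 eigenvalues are nonzero.
eigvals, eigvecs = np.linalg.eig(np.linalg.inv(S_W) @ S_B)
order = np.argsort(eigvals.real)[::-1]
W = eigvecs[:, order[: c - 1]].real
```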

Comparison of PCA and FDA

Fisherfaces: Singularity problem: the within-class scatter matrix S_W is always singular in face recognition, because its rank is at most N - c while the number of images N is far smaller than the number of pixels. This is overcome by first projecting the images with PCA down to N - c dimensions, where the projected S_W is nonsingular, and then applying standard FLD; the combined method can be called PCA/LDA.
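The two-step PCA-then-LDA pipeline might look like the following sketch (the rank choices N - c and c - 1 follow the paper; the data and helper names are illustrative):

```python
import numpy as np

def lda(Z, y, c):
    """Leading c - 1 eigenvectors of S_W^{-1} S_B in the reduced space."""
    mu = Z.mean(axis=0)
    k = Z.shape[1]
    S_B, S_W = np.zeros((k, k)), np.zeros((k, k))
    for i in range(c):
        Zi = Z[y == i]
        mi = Zi.mean(axis=0)
        S_B += len(Zi) * np.outer(mi - mu, mi - mu)
        S_W += (Zi - mi).T @ (Zi - mi)
    vals, vecs = np.linalg.eig(np.linalg.inv(S_W) @ S_B)
    return vecs[:, np.argsort(vals.real)[::-1][: c - 1]].real

def fisherfaces(X, y, c):
    """PCA down to N - c dimensions (so the projected S_W is nonsingular),
    then LDA; the combined projection is W_pca @ W_lda."""
    N = X.shape[0]
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W_pca = Vt[: N - c].T
    W_lda = lda(Xc @ W_pca, y, c)
    return W_pca @ W_lda

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(loc=2.0 * i, size=(10, 60)) for i in range(4)])
y = np.repeat(np.arange(4), 10)
W = fisherfaces(X, y, 4)         # 60-pixel images -> c - 1 = 3 dimensions
```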

Experimental Results: Variation in lighting


Variations in Facial Expression, Eye Wear, and Lighting


Glasses Recognition: glasses vs. no-glasses recognition