Eigenfaces
As we discussed last time, we can reduce the computation by dimensionality reduction using PCA
– Suppose we have a set of N images $x_1, \dots, x_N$ and there are c classes
– We define a linear transformation $y_k = W^T x_k$
– The total scatter of the training set is given by $S_T = \sum_{k=1}^{N} (x_k - \mu)(x_k - \mu)^T$, where $\mu$ is the mean image of all samples

Eigenfaces
– PCA chooses $W$ to maximize the total scatter of the transformed feature vectors, which is $|W^T S_T W|$
– Mathematically, we have $W_{\mathrm{opt}} = \arg\max_W |W^T S_T W| = [w_1 \; w_2 \; \dots \; w_m]$, where the $w_i$ are the eigenvectors of $S_T$ corresponding to the $m$ largest eigenvalues
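A minimal numpy sketch of this eigenface computation (function and variable names are illustrative, not from the lecture). It uses the standard trick for $N \ll n$ pixels: the eigenvectors of the small $N \times N$ matrix $A A^T$ map to eigenvectors of $S_T = A^T A$ via $v \mapsto A^T v$.

```python
import numpy as np

def eigenfaces(X, m):
    """Top-m eigenfaces from a data matrix X of shape (N images, n pixels).

    Returns the mean image and an (n, m) matrix W whose columns are the
    eigenvectors of the total scatter S_T with the largest eigenvalues.
    """
    mu = X.mean(axis=0)
    A = X - mu                       # centered data, N x n
    L = A @ A.T                      # small N x N matrix A A^T
    vals, vecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:m]
    W = A.T @ vecs[:, order]         # lift to pixel space: n x m
    W /= np.linalg.norm(W, axis=0)   # normalize each eigenface
    return mu, W

# A new image x is projected as y = W.T @ (x - mu).
```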

One Difficulty: Lighting
The same person, with the same facial expression and seen from the same viewpoint, can appear dramatically different when light sources illuminate the face from different directions.

Another Difficulty: Facial Expression
Changes in facial expression also create variations that PCA will retain, yet these variations may confuse recognition.

Fisherfaces The idea here is to try to throw out the variability that is not useful for recognition and hold onto the variability that is…

Fisherfaces
Using Fisher's linear discriminant to find class-specific linear projections
– More formally, we define the between-class scatter $S_B = \sum_{i=1}^{c} N_i (\mu_i - \mu)(\mu_i - \mu)^T$, where $N_i$ and $\mu_i$ are the size and mean of class $i$
– The within-class scatter $S_W = \sum_{i=1}^{c} \sum_{x_k \in X_i} (x_k - \mu_i)(x_k - \mu_i)^T$
– Then we choose $W$ to maximize the ratio of the determinant of the between-class scatter matrix to that of the within-class scatter matrix of the projected samples

Fisherfaces
That is, $W_{\mathrm{opt}} = \arg\max_W \frac{|W^T S_B W|}{|W^T S_W W|} = [w_1 \; w_2 \; \dots \; w_m]$, where the $w_i$ are the generalized eigenvectors of $S_B w_i = \lambda_i S_W w_i$ with the $m$ largest eigenvalues $\lambda_i$
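A small numpy sketch of this criterion (names are my own). It solves the generalized eigenproblem by computing the eigenvectors of $S_W^{-1} S_B$, which assumes $S_W$ is nonsingular, e.g. after the PCA step discussed below; useful directions are limited to $m \le c - 1$ since $S_B$ has rank at most $c - 1$.

```python
import numpy as np

def fisherfaces(X, labels, m):
    """Top-m Fisher discriminant directions for data X (N, n) with labels.

    Builds S_B and S_W as defined above, then takes the eigenvectors of
    inv(S_W) @ S_B with the largest eigenvalues. Assumes S_W nonsingular.
    """
    mu = X.mean(axis=0)
    n = X.shape[1]
    Sb = np.zeros((n, n))
    Sw = np.zeros((n, n))
    for ci in np.unique(labels):
        Xi = X[labels == ci]
        mui = Xi.mean(axis=0)
        Sb += len(Xi) * np.outer(mui - mu, mui - mu)   # between-class term
        Sw += (Xi - mui).T @ (Xi - mui)                # within-class term
    # Generalized eigenvectors of (S_B, S_W) via eig of S_W^{-1} S_B.
    vals, vecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(vals.real)[::-1][:m]
    return vecs[:, order].real
```

On toy data with two well-separated classes, projecting onto the single returned direction keeps the classes far apart relative to their within-class spread.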

Comparison of PCA and FDA

Fisherfaces
Singularity problem
– The within-class scatter $S_W$ is always singular in face recognition: its rank is at most $N - c$, while the image dimension (number of pixels) far exceeds the number of training images $N$
– This problem is overcome by applying PCA first to reduce the dimension to $N - c$ and then applying FLD, a two-stage approach which can be called PCA/LDA
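The two-stage recipe can be sketched as follows (a minimal illustration with my own naming, not the authors' code): PCA via SVD keeps the top $N - c$ directions, which removes the null space of $S_W$, and Fisher's discriminant is then solved in that reduced space.

```python
import numpy as np

def pca_lda(X, labels, c):
    """PCA to dimension N - c, then FLD: the PCA/LDA (Fisherface) pipeline.

    X: (N, n) data matrix, labels: length-N class labels, c: class count.
    Returns a combined (n, c - 1) projection matrix.
    """
    N, n = X.shape
    A = X - X.mean(axis=0)
    # PCA step: top N - c principal directions from the SVD of the data.
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    Wpca = Vt[: N - c].T                  # n x (N - c)
    Z = A @ Wpca                          # reduced features, zero mean
    d = N - c
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for ci in np.unique(labels):
        Zi = Z[labels == ci]
        mi = Zi.mean(axis=0)
        Sb += len(Zi) * np.outer(mi, mi)  # global mean of Z is zero
        Sw += (Zi - mi).T @ (Zi - mi)
    # FLD step: S_W is now (generically) nonsingular in the reduced space.
    vals, vecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(vals.real)[::-1][: c - 1]
    return Wpca @ vecs[:, order].real     # combined n x (c - 1) projection
```

Nearest-class-mean classification in the resulting $(c-1)$-dimensional space is then straightforward.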

Experimental Results Variation in lighting

Experimental Results

Variations in Facial Expression, Eye Wear, and Lighting

Glasses Recognition
Glasses / no-glasses recognition