Eigenfaces
As we discussed last time, we can reduce the computation by dimensionality reduction using PCA.
– Suppose we have a set of N training images $x_1, \dots, x_N$ belonging to $c$ classes.
– We define a linear transformation $y_k = W^T x_k$, where $W$ is a matrix with orthonormal columns.
– The total scatter of the training set is given by $S_T = \sum_{k=1}^{N} (x_k - \mu)(x_k - \mu)^T$, where $\mu$ is the mean of all training images.
Eigenfaces
PCA chooses the projection that maximizes the total scatter of the transformed feature vectors.
– Mathematically, we have $W_{opt} = \arg\max_W |W^T S_T W| = [w_1\; w_2\; \dots\; w_m]$, where the $w_i$ are the eigenvectors of $S_T$ with the $m$ largest eigenvalues; these eigenvectors are the eigenfaces.
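A minimal NumPy sketch of this computation, under the usual assumption that images are flattened into row vectors (the function name and shapes are illustrative, not from the slides):

```python
import numpy as np

def eigenfaces(X, m):
    """Compute the top-m eigenfaces from an (N, d) matrix of flattened face images."""
    mu = X.mean(axis=0)                    # mean face
    A = X - mu                             # centered data, shape (N, d)
    # S_T = A^T A is d x d; for d >> N we instead diagonalize the small
    # N x N matrix A A^T (the standard eigenfaces trick) and map back.
    L = A @ A.T                            # (N, N)
    eigvals, V = np.linalg.eigh(L)         # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:m]  # indices of the m largest
    W = A.T @ V[:, order]                  # corresponding d-dimensional eigenvectors
    W /= np.linalg.norm(W, axis=0)         # normalize columns: the eigenfaces
    return mu, W

# A new image x is then projected as y = W.T @ (x - mu).
```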
One Difficulty: Lighting
The same person with the same facial expression, seen from the same viewpoint, can appear dramatically different when light sources illuminate the face from different directions.
Another Difficulty: Facial Expression
Changes in facial expression also create variations that PCA will hold onto, yet these variations may confuse recognition.
Fisherfaces
The idea here is to try to throw out the variability that is not useful for recognition and hold onto the variability that is…
Fisherfaces
Use Fisher’s linear discriminant to find class-specific linear projections.
– More formally, we define the between-class scatter $S_B = \sum_{i=1}^{c} N_i (\mu_i - \mu)(\mu_i - \mu)^T$, where $\mu_i$ is the mean of class $X_i$ and $N_i$ is the number of samples in class $X_i$.
– The within-class scatter is $S_W = \sum_{i=1}^{c} \sum_{x_k \in X_i} (x_k - \mu_i)(x_k - \mu_i)^T$.
– Then we choose the projection that maximizes the ratio of the determinant of the between-class scatter of the projected samples to the determinant of their within-class scatter.
Fisherfaces
That is, $W_{opt} = \arg\max_W \frac{|W^T S_B W|}{|W^T S_W W|} = [w_1\; w_2\; \dots\; w_m]$, where the $w_i$ are the generalized eigenvectors of $S_B$ and $S_W$ with the $m$ largest generalized eigenvalues, i.e. $S_B w_i = \lambda_i S_W w_i$.
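A minimal NumPy/SciPy sketch of this criterion, assuming for the moment that $S_W$ is nonsingular (the helper name fisher_lda and its signature are illustrative, not from the slides):

```python
import numpy as np
from scipy.linalg import eigh

def fisher_lda(X, y, m):
    """Fisher's linear discriminant: top-m generalized eigenvectors of (S_B, S_W).
    Assumes S_W is nonsingular; see the singularity discussion below."""
    classes = np.unique(y)
    d = X.shape[1]
    mu = X.mean(axis=0)
    S_B = np.zeros((d, d))
    S_W = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mu_c = Xc.mean(axis=0)
        diff = (mu_c - mu)[:, None]
        S_B += Xc.shape[0] * diff @ diff.T   # between-class scatter
        Ac = Xc - mu_c
        S_W += Ac.T @ Ac                     # within-class scatter
    # Generalized symmetric eigenproblem S_B w = lambda S_W w
    eigvals, W = eigh(S_B, S_W)              # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:m]
    return W[:, order]
```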
Comparison of PCA and FDA
Fisherfaces
Singularity problem
– The within-class scatter matrix $S_W$ is always singular for face recognition, because the number of training images N is far smaller than the number of pixels per image, so the rank of $S_W$ is at most $N - c$.
– This problem is overcome by applying PCA first to reduce the dimension before applying Fisher’s linear discriminant; the combined method can be called PCA/LDA. A sketch of this pipeline follows below.
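One way to sketch the PCA/LDA pipeline, here using scikit-learn (the fit_fisherfaces helper and its exact dimensions are illustrative assumptions, not from the slides):

```python
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fit_fisherfaces(X, y, c):
    """PCA down to N - c dimensions, then LDA down to c - 1 dimensions,
    so the within-class scatter seen by LDA is nonsingular.
    Assumes the number of pixels exceeds N - c, as is typical for images."""
    n_samples = X.shape[0]
    pca = PCA(n_components=n_samples - c).fit(X)
    X_pca = pca.transform(X)
    lda = LinearDiscriminantAnalysis(n_components=c - 1).fit(X_pca, y)
    return pca, lda

# A new image x is projected as lda.transform(pca.transform(x.reshape(1, -1))).
```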
Experimental Results
Variation in lighting
Experimental Results
Variations in Facial Expression, Eye Wear, and Lighting
Glasses Recognition
Glasses / no glasses recognition