On the Dimensionality of Face Space
Marsha Meytlis and Lawrence Sirovich
IEEE Transactions on PAMI, July 2007
Outline
– Introduction
– Background
– Experiment
– Analysis of Data
– Results
– Discussion
Introduction A low-dimensional description of face space first appears in [1] for face recognition. The eigenface approach [2], [3] was then built on the premise that a small number of elements, or features, could be used efficiently.
[1] L. Sirovich and M. Kirby, "Low-Dimensional Procedure for the Characterization of Human Faces," J. Optical Soc. Am., vol. 4, pp. 519-524, 1987.
[2] M. Turk and A. Pentland, "Eigenfaces for Recognition," J. Cognitive Neuroscience, vol. 3, pp. 71-86, 1991.
[3] M. Turk and A. Pentland, "Face Recognition Using Eigenfaces," Proc. IEEE Computer Vision and Pattern Recognition, pp. 586-591, 1991.
Introduction The dimension of face space may reasonably be defined as an acceptable threshold number of dimensions necessary to specify an identifiable face. How do we find this threshold number of dimensions?
Background Eigenface approach:
– Acquire the training set of face images and calculate the eigenfaces, which define the face space.
– Calculate a set of weights based on a new face image and the M eigenfaces by projecting the input image onto each of the eigenfaces.
– Determine whether the image is a face and classify the weight pattern as either a known person or unknown.
Background Calculating eigenfaces: A face image, a two-dimensional array of intensity values (say N by N), can be treated as a vector of dimension N². A set of images then maps to a collection of points in this huge space. Face images are similar in overall configuration and can be described by a relatively low-dimensional subspace. Principal component analysis (PCA, or the Karhunen-Loeve expansion) finds the vectors that best account for the distribution of face images.
Background Face images of the training set are Γ_1, Γ_2, …, Γ_M, and the average face of the set is defined by Ψ = (1/M) Σ_{n=1..M} Γ_n. Each face differs from the average by the vector Φ_i = Γ_i − Ψ, and the covariance matrix is C = (1/M) Σ_{n=1..M} Φ_n Φ_n^T = (1/M) A A^T, where A = [Φ_1, Φ_2, …, Φ_M]. The N orthonormal vectors u_n that best describe the data are the eigenvectors of C.
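As a rough illustration of the computation above, here is a minimal NumPy sketch (not the authors' code; the flattened-image layout and function names are assumptions) that obtains the average face, the eigenfaces, and the eigenvalues via an SVD of the mean-subtracted data:

```python
import numpy as np

def compute_eigenfaces(images):
    """images: array of shape (M, D), one flattened face per row (D = N*N)."""
    psi = images.mean(axis=0)                 # average face Psi
    A = (images - psi).T                      # columns are Phi_n = Gamma_n - Psi
    # The SVD of A yields the eigenvectors of C = (1/M) A A^T without ever
    # forming the huge D x D covariance matrix explicitly.
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    eigenvalues = s**2 / images.shape[0]      # eigenvalues of C
    return psi, U, eigenvalues                # columns of U are the eigenfaces u_n
```

Working through a thin SVD (or, equivalently, the small M x M matrix A^T A) is what keeps the computation tractable when the images have far more pixels than there are training samples.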
Background Using eigenfaces to classify a face image: a new face Γ is transformed into its eigenface components (projected into "face space") by the simple operation ω_k = u_k^T (Γ − Ψ) for k = 1, 2, …, N. The weights form a vector Ω^T = [ω_1, ω_2, …, ω_N]. We then compare Ω with the patterns Ω_n of the known face classes to determine which class the face belongs to.
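A hedged sketch of this projection and matching step, continuing the function above; the Euclidean distance and the dictionary of stored class patterns are assumptions of this sketch, not details from the slide:

```python
import numpy as np

def project(face, psi, U, k):
    """Weight vector Omega = [w_1, ..., w_k] for a flattened face Gamma."""
    return U[:, :k].T @ (face - psi)          # w_i = u_i^T (Gamma - Psi)

def classify(face, psi, U, k, class_patterns):
    """class_patterns: dict mapping a person id to a stored Omega_n vector."""
    omega = project(face, psi, U, k)
    # assign the face to the class whose stored pattern lies closest in face space
    return min(class_patterns,
               key=lambda c: np.linalg.norm(omega - class_patterns[c]))
```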
Background With the SVD of the training set, we obtain the eigenfunctions (eigenfaces) and the corresponding eigenvalues, as in [2], [3]. For the experiment, we can consider the average probability that an eigenface appears in the representation of a face.
Background [Figure: eigenvalue spectrum of the training set, showing the signal line and the noise line.]
Background The remnants of facial structure in the eigenfaces decay slowly after the first 100 components.
Background The SNR (signal-to-noise ratio) is the measure of error in the reconstruction, i.e., the amount of variance that has been captured in the reconstruction. In [4], most of the face identity information necessary for recognition is captured within an SNR span of approximately 7-7.5 octaves.
[4] P. Penev and L. Sirovich, "The Global Dimensionality of Face Space," Proc. IEEE CS Int'l Conf. Automatic Face and Gesture Recognition, pp. 264-270, 2000.
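For illustration only, here is a sketch of how a reconstruction level could be tied to the eigenvalue spectrum, under the assumption (not stated on this slide) that the SNR in octaves is the base-2 logarithm of the ratio of captured to residual variance:

```python
import numpy as np

def dims_for_snr(eigenvalues, target_snr_octaves):
    """Smallest number of eigenfaces whose captured variance reaches the target SNR."""
    lam = np.sort(np.asarray(eigenvalues, dtype=float))[::-1]
    captured = np.cumsum(lam)
    residual = lam.sum() - captured
    # assumed definition: SNR (octaves) = log2(captured / residual)
    snr = np.log2(captured[:-1] / residual[:-1])   # skip the last point (residual = 0)
    hits = np.nonzero(snr >= target_snr_octaves)[0]
    return int(hits[0]) + 1 if hits.size else None
```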
Experiment The goal was to arrive at an estimate of the dimension of face space, that is, the threshold number of dimensions. Human observers were shown partial reconstructions of faces and asked whether they recognized them. Observers: five men and five women, mean age 27, range 20-35, all right-handed.
Experiment The first part: assess a baseline for the observers' knowledge of familiar faces. The observers rated 46 people (three images of each) with one of the following options:
– high familiarity
– medium familiarity
– low or no familiarity
Experiment The second part: the observers viewed truncated versions of 80 faces, referred to as test faces. The test faces included:
– 20 familiar faces in the FERET training set
– 20 unfamiliar faces in the FERET training set
– 20 familiar faces not in the FERET training set
– 20 unfamiliar faces not in the FERET training set
Experiment All 80 test faces were first reconstructed to an SNR of 5.0 and viewed by the observers in a random sequence. In the same manner, the SNR was then incremented in even steps of 0.5 until 10.0 was reached, giving 11 SNR levels in all.
Experiment Observers rated the degree to which a face was familiar or unfamiliar, responding with one of the following options:
– 1. high certainty a face is unfamiliar
– 2. medium certainty a face is unfamiliar
– 3. low certainty a face is unfamiliar
– 4. low certainty a face is familiar
– 5. medium certainty a face is familiar
– 6. high certainty a face is familiar
Experiment The third part: the 80 faces were used to furnish a baseline comparison of reconstruction error. In-population faces are better reconstructed.
Analysis of Data Data gathered in the second part of the experiment were analyzed using Receiver Operating Characteristic (ROC) curves to classify familiar versus unfamiliar faces.
Analysis of Data The ROC can equivalently be represented by plotting the fraction of true positives (TPR) vs. the fraction of false positives (FPR).
Analysis of Data For classification, we need to transform the six-point response into a binary recognition, based on five different thresholds for an observer's response r: r>5, r>4, r>3, r>2, and r>1. Then r>5 may be regarded as the probability that the observer is certain he is viewing a familiar face, r>4 is this probability plus the probability of medium certainty, and so forth.
Analysis of Data An image that received a score above a specific threshold was classified as familiar and, otherwise, as unfamiliar. The proportion of true positives was determined as the percentage of familiar faces that were classified as familiar at that threshold.
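A minimal sketch of this thresholding, assuming the 1-6 response coding from the previous slide; the array names are illustrative, not the authors' code:

```python
import numpy as np

def roc_points(responses, is_familiar):
    """responses: scores 1..6 for each test face; is_familiar: true labels.
    Returns (FPR, TPR) pairs for the thresholds r>1, r>2, ..., r>5."""
    responses = np.asarray(responses)
    is_familiar = np.asarray(is_familiar, dtype=bool)
    points = []
    for t in range(1, 6):
        called_familiar = responses > t            # classified as familiar at this threshold
        tpr = called_familiar[is_familiar].mean()  # true positive rate
        fpr = called_familiar[~is_familiar].mean() # false positive rate
        points.append((fpr, tpr))
    return points
```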
Analysis of Data For each observer, we obtain a series of ROC curves, one per SNR level. [Figure: ROC curves; the 45° line corresponds to pure chance; curves far above it carry a high signal, curves close to it are noisy.] The area between each curve and the 45° line corresponds to classification accuracy, an increasing function of SNR.
Analysis of Data Following [5], we use the area under the ROC curve (AUC) as a measure of classifier performance. The numerical classification accuracy is the area under the ROC curve, which includes a baseline value of 0.5 corresponding to chance performance.
[5] A. Bradley, "The Use of the Area under the ROC Curve in the Evaluation of Machine Learning Algorithms," Pattern Recognition, vol. 30, pp. 1145-1159, 1997.
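A sketch of the AUC computation from the five ROC points above; closing the curve with the (0,0) and (1,1) endpoints and integrating with the trapezoid rule is an assumption of this sketch:

```python
import numpy as np

def auc(points):
    """points: list of (FPR, TPR) pairs, e.g. from roc_points()."""
    pts = sorted(points + [(0.0, 0.0), (1.0, 1.0)])   # close the curve at both ends
    fpr = np.array([p[0] for p in pts])
    tpr = np.array([p[1] for p in pts])
    return float(np.trapz(tpr, fpr))                  # area under the ROC curve
```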
Results In the first part of the experiment, we obtained the familiarity rating of each observer. Not all observers were equally familiar with the faces; those with high ratings have a good representation of the familiar faces in memory.
Results In the second part of the experiment, we use the ROC curves to analyze classification accuracy, both for the 3 best observers and for all observers.
Results For all observers, face classification accuracy was averaged as a function of SNR. The resulting functions are fitted by the cumulative distribution function of the Weibull distribution. [Figure: averaged classification accuracy vs. SNR, with fitted curves for the 3 best observers and for all observers.]
Results The fitted functions take the form of the Weibull cumulative distribution function, F(x) = 1 − exp(−(x/λ)^k). A classification accuracy of 1.0 indicates perfect stimulus detection. The point at which there is a 50% improvement over chance (0.5) in classification accuracy, i.e., an accuracy of 0.75, is chosen as the detection threshold [6].
[6] R. Quick, "A Vector Magnitude Model of Contrast Detection," Kybernetik, vol. 16, pp. 65-67, 1974.
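A hedged sketch of the fitting step: accuracy-vs-SNR data are fitted with a Weibull CDF scaled to run from chance (0.5) to perfect (1.0), an assumed parameterization rather than the paper's exact form, and the SNR at 0.75 accuracy is read off:

```python
import numpy as np
from scipy.optimize import brentq, curve_fit

def acc_model(snr, lam, k):
    # Weibull CDF rescaled to the range [0.5, 1.0] (assumed parameterization)
    return 0.5 + 0.5 * (1.0 - np.exp(-(snr / lam) ** k))

def threshold_snr(snr_levels, accuracies, target=0.75):
    (lam, k), _ = curve_fit(acc_model, snr_levels, accuracies, p0=[7.0, 5.0])
    # solve acc_model(snr) = target within the measured SNR range
    return brentq(lambda s: acc_model(s, lam, k) - target,
                  min(snr_levels), max(snr_levels))
```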
Results Parameter values were obtained for the fitted Weibull distributions. With a classification accuracy threshold of 0.75, the average over all observers reaches threshold at an SNR of 7.74, and the 3 best observers reach it at an SNR of 7.24.
Results [Figure: fitted accuracy curves with the 0.75 threshold marked; the threshold SNRs of 7.24 (3 best observers) and 7.74 (all observers) correspond to dimension estimates of roughly 107-124 and 161-196 eigenfaces, respectively.]
Results The dimensionality measure based on the observers with the highest baseline familiarity ratings is significantly lower than the estimate based on the average over all observers. A person's measure of dimensionality might therefore depend on how well the familiar faces are coded in memory.
Discussion On average, the dimension of face space is in the range of 100 to 200 eigenfeatures. The error tolerance of observers may be related to their prior familiarity with the familiar faces.