Preprocessing Overview
- Preprocessing to enhance recognition performance in the presence of:
  - Illumination variations
  - Pose/expression/scale variations
  - Low resolution and blur (resolution enhancement / deblurring)
- Stand-alone recognition system
- Preprocessing/recognition results: Face Recognition Grand Challenge (NIST)
Face Recognition
Current state-of-the-art face recognition systems degrade significantly in performance due to variations in pose, illumination, and blurring.
Problem: images go straight from image capture to the face recognition system.
Solution: insert a preprocessing (restoration/enhancement) stage between image capture and the face recognition system, providing:
- Pose correction, due to mismatch in facial position, facial expression, and scale
- Illumination correction, due to mismatch in lighting conditions in both indoor and outdoor environments
- Deblurring, due to mismatch in camera focus, camera lenses, camera resolution, and motion blur
Highlights of the Approach (Preprocessing and Stand-Alone Recognition)
- No a priori information with regard to pose orientation, camera parameters, etc.
- No laser-scanned images for 3D reconstruction
- No manual detection of feature points
Principle: find a function which maps a given test (probe) image into the correct train (gallery) image.
Approach: estimate a mapping f_i : X → Y_i for every train (gallery) image Y_i, i = 1, ..., M, where M is the number of training images, and select the f_i that is maximally bijective.
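A minimal sketch of this selection rule, assuming a hypothetical bijectivity_score(probe, gallery) helper that returns a higher value for a more bijective mapping (the score itself is discussed later in this section):

```python
import numpy as np

def recognize(probe, gallery_images, bijectivity_score):
    """Return the index of the gallery image whose mapping from the probe
    is maximally bijective. bijectivity_score is a caller-supplied helper
    (hypothetical here); higher means more bijective."""
    scores = [bijectivity_score(probe, g) for g in gallery_images]
    return int(np.argmax(scores))
```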
Recognition Principle
A function f is found which maps points in the test (probe) image to equivalent points in the train (gallery) image:
  f : X → Y, one-to-one and onto (a bijection)
where
  X = test image (domain)
  Y = train image (co-domain)
  f = bijective function mapping X → Y
(Figure: test image X as domain, train image Y as the range of f.)
Inverse Estimation
A function g is found which maps points in the train (gallery) image to equivalent points in the test (probe) image:
  g : Y → X
where
  Y = train image (domain)
  X = test image (co-domain)
  g = bijective function mapping Y → X
(Figure: train image Y as domain, test image X as the range of g.)
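One plausible way to realize f and g, given that the result tables later in this section mention 8x8 blocks and exhaustive search over a window, is block matching: partition the source image into fixed-size blocks and, for each block, exhaustively search a window of the destination image for the best-matching patch. The following is a sketch under those assumptions, not necessarily the exact implementation:

```python
import numpy as np

def block_mapping(src, dst, block=8, search=28):
    """Estimate a block-wise mapping from src to dst by exhaustive search.
    For each block x block patch of src, the best-matching patch of dst
    within +/- search pixels (minimum sum of squared differences) is found.
    Returns an array of (dy, dx) displacements, one per block."""
    H, W = src.shape
    disp = np.zeros((H // block, W // block, 2), dtype=int)
    for bi, i in enumerate(range(0, H - block + 1, block)):
        for bj, j in enumerate(range(0, W - block + 1, block)):
            patch = src[i:i + block, j:j + block].astype(float)
            best, best_d = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = i + dy, j + dx
                    if y < 0 or x < 0 or y + block > H or x + block > W:
                        continue
                    cand = dst[y:y + block, x:x + block].astype(float)
                    ssd = float(np.sum((patch - cand) ** 2))
                    if ssd < best:
                        best, best_d = ssd, (dy, dx)
            disp[bi, bj] = best_d
    return disp
```

With same-size grayscale images, the forward mapping f would correspond to block_mapping(test, train) and the backward mapping g to block_mapping(train, test); a 56x56 search window corresponds to search=28.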
Measure of Bijectivity (forward mapping f : X → Y)
Partition X into blocks, where n is the total number of distinct blocks in X.
(Figure: blocks of X shown in blue, green, cyan and red, and their images in Y under f.)
Measure of Bijectivity (backward mapping g : Y → X)
Partition Y into blocks, where p is the total number of distinct blocks in Y.
(Figure: blocks of Y shown in blue, green, cyan and red, and their images in X under g.)
Measure of Bijectivity
The bijectivity score is a weighted combination of four terms:
  B = α1·B_F + α2·B_AF + α3·B_B + α4·B_AB
where
  B_F  = forward term (test → train)
  B_AF = adaptive forward term (test → train)
  B_B  = backward term (train → test)
  B_AB = adaptive backward term (train → test)
  α1, ..., α4 = constants
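The slide does not spell out the individual terms, so the following is a sketch under two stated assumptions: each directional term is taken to be the fraction of blocks whose forward and backward matches are mutually consistent (a round-trip test of "one-to-one and onto"), and the weights α1..α4 are fixed constants. The adaptive terms would use an adaptive partition instead of the fixed blocks of block_mapping above:

```python
import numpy as np

def consistency_fraction(disp_fwd, disp_bwd, block=8, tol=4):
    """Fraction of blocks whose forward match maps back (via the backward
    displacement field) to its starting location, within tol pixels.
    disp_fwd and disp_bwd come from block_mapping() on same-size images;
    the round-trip test is an assumed proxy for bijectivity."""
    rows, cols, _ = disp_fwd.shape
    consistent = 0
    for bi in range(rows):
        for bj in range(cols):
            dy, dx = disp_fwd[bi, bj]
            gi = bi + int(round(dy / block))   # block reached in the other image
            gj = bj + int(round(dx / block))
            if not (0 <= gi < rows and 0 <= gj < cols):
                continue
            rdy, rdx = disp_bwd[gi, gj]
            if abs(dy + rdy) <= tol and abs(dx + rdx) <= tol:
                consistent += 1
    return consistent / (rows * cols)

def combine_bijectivity_terms(b_f, b_af, b_b, b_ab,
                              alphas=(0.25, 0.25, 0.25, 0.25)):
    """Weighted combination of the forward, adaptive forward, backward and
    adaptive backward terms; the equal weights are an assumption."""
    return sum(a * b for a, b in zip(alphas, (b_f, b_af, b_b, b_ab)))
```

A full bijectivity_score(probe, gallery), as assumed in the earlier selection sketch, would compute the forward and backward displacement fields, evaluate the four terms, and combine them with combine_bijectivity_terms.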
Mapping Properties
(Figure: mapping properties illustrated for Train Image No. 1.)
Preprocessing Example
Preprocessing Performance
Face Recognition Performance

Table 4.2: Yale Results, Stand-Alone Recognition II (metric used: one-to-none mapping)

Search method                        Yale Subset-I                Yale Subset-II
                                     (block 8x8; search 56x56)    (block 8x8; search 88x88)
Exhaustive search                    100 %                        80 %
Exhaustive search with constraints   100 %                        83.33 %
Fast search                          98 %                         83.33 %
Face Recognition Performance

Table 4.4: PIE Database, Stand-Alone Recognition (PIE subset, exhaustive search; block size 8x8; search 72x72)

                                      One-to-none mapping    One-to-one mapping
Stand-alone recognition performance                          95.59 %

PCA-based approach recognition accuracy: 5.88 %
Preprocessing for Illumination Correction
(Figure: training image, test image, and preprocessed result.)
Preprocessing for Illumination Correction
The algorithm is based on image-adaptive least-squares illumination correction:
- Inputs: training image A and testing image B
- Adaptive segmentation of both images
- Least-squares estimate of the illumination
- Outputs: image A illuminated as B, and image B illuminated as A
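A minimal sketch of the least-squares illumination transfer, assuming the illumination mismatch within each segmented region can be modeled by a per-region gain and offset (the affine model and the shared label map are assumptions; the slide only states adaptive segmentation followed by a least-squares estimate of the illumination):

```python
import numpy as np

def relight_as(image_a, image_b, labels):
    """Return 'image A illuminated as B'. For every segment of the shared
    label map, fit gain and offset so the pixels of A best match the
    corresponding pixels of B in the least-squares sense, then apply them
    to A. The per-segment affine model is an assumption."""
    out = np.zeros(image_a.shape, dtype=float)
    for lab in np.unique(labels):
        mask = labels == lab
        a = image_a[mask].astype(float)
        b = image_b[mask].astype(float)
        # Solve min || gain * a + offset - b ||^2 for this segment.
        A = np.column_stack([a, np.ones_like(a)])
        (gain, offset), *_ = np.linalg.lstsq(A, b, rcond=None)
        out[mask] = gain * a + offset
    return out
```

"Image B illuminated as A" is obtained by swapping the arguments: relight_as(image_b, image_a, labels). The label map is assumed to come from the adaptive segmentation step.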
Preprocessing Results: Illumination Correction

Set tested (Yale)    Enrollment rate (commercial face recognition system)
Original             56 %
Preprocessed         90 %

Enrollment: the process of accepting the image and creating a feature set for recognition.
Comparison with Existing Methods

Test subset    3D Morphable Model    Our algorithm
PIE frontal    99.8 %                99.6 %

3D morphable models give good results (FRVT 2002), but are very complex and computationally expensive, and require manual labeling of features [1].

1. V. Blanz and T. Vetter, "Face Recognition Based on Fitting a 3D Morphable Model," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 9, pp. 1063-1074, Sept. 2003.
Preprocessing Example
(Figure: train and test images, the vector field representation of f, and the preprocessed test image.)
Notre Dame Database: Preprocessing Example
Notre Dame Database
Recognition Example: with the correct gallery image
(Figure: gallery, probe, and bijective mapping; the white region gives the measure of bijectivity: 52.91 %.)
Recognition Example: with an incorrect gallery image
(Figure: gallery, probe, and bijective mapping; the white region gives the measure of bijectivity: 33.94 %.)
Notre Dame Database: Face Recognition Performance
Conclusion and Future Work
- New algorithm for registration and illumination correction to enhance the performance of face recognition systems
- The algorithm is based on properties of the mapping between test and train data
- The mapping produces similarity scores which can be used as a stand-alone face recognition algorithm
- Future work: extend the algorithm to high-resolution data and reduce its computational complexity