Face Recognition with Haar Transforms and SVMs EE645 Final Project May 11, 2005 J Stautzenberger
Outline
i. Motivation
ii. Description of Face Recognition System
   a. Overview
   b. Feature Extraction
   c. Haar Transform
   d. SVM
iii. Experiments
   a. Structure
   b. Data Set
   c. Results
iv. Conclusions
Motivation
Face recognition is a very active field in computer science right now, with applications in security, multimedia retrieval, human-computer interaction, and more. Many "good" algorithms already exist:
– Correlation (nearest neighbor)
– Eigenfaces (PCA based)
– Fisher Faces
I am interested in doing real-time face detection and feature extraction.
Proposed System Overview
Proposed system: cascaded SVMs using Haar wavelet features, with feature selection done by AdaBoost. This is a combination of simple-to-complex classifiers.
– Using SVMs for the entire problem can lead to a lot of useless computation on easily distinguishable background patterns.
– AdaBoost feature selection will cut out almost all background patterns quickly (a sketch of stump-based selection follows below). "The aim of boosting is to improve the classification of any given simple learning algorithm." (Schapire)
– Training with AdaBoost can be very slow if it is used for the complete classification algorithm, though: the algorithm converges very slowly once the remaining examples become very hard.
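For illustration, here is a minimal sketch of AdaBoost-style feature selection with single-feature decision stumps, in the spirit of Viola and Jones [4]. This is not the project's actual implementation; the function name, the coarse threshold grid, and the default round count are my assumptions.

```python
import numpy as np

def adaboost_select(X, y, n_rounds=10):
    """Pick one feature per boosting round using decision stumps.

    X: (n_samples, n_features) feature matrix; y: labels in {-1, +1}.
    The features chosen across rounds form the reduced feature set
    that a downstream SVM would then be trained on.
    """
    n, d = X.shape
    w = np.full(n, 1.0 / n)            # example weights
    chosen = []
    for _ in range(n_rounds):
        best = None                    # (error, feature, threshold, polarity)
        for j in range(d):
            for t in np.percentile(X[:, j], (25, 50, 75)):  # coarse grid
                for s in (1.0, -1.0):
                    pred = np.where(s * (X[:, j] - t) > 0, 1.0, -1.0)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, s)
        err, j, t, s = best
        err = np.clip(err, 1e-10, 1.0 - 1e-10)
        alpha = 0.5 * np.log((1.0 - err) / err)
        pred = np.where(s * (X[:, j] - t) > 0, 1.0, -1.0)
        w *= np.exp(-alpha * y * pred)  # hard examples gain weight
        w /= w.sum()
        chosen.append(j)
    return chosen
```

The re-weighting step is also where the slow convergence noted above comes from: once only hard examples carry weight, each round's stump barely improves the weighted error.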
Feature Extraction
Two general types of feature selection:
– Filter methods: preprocessing steps performed independently of the classification algorithm
– Wrapper methods: search through feature space using a criterion of the classification algorithm to select optimal features
Two popular filter methods:
– Haar transform (simple)
– Gabor transform (not simple)
Haar Feature Extraction
The Haar wavelet breaks the image down into 4 sub-bands:
– HH: high passed in the vertical and horizontal directions
– LH: low passed in the vertical and high passed in the horizontal direction
– HL: high passed in the vertical and low passed in the horizontal direction
– LL: low passed in the vertical and horizontal directions
[Diagram: recursive sub-band layout; at each level the LL band is split again into LL, LH, HL, and HH.]
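A minimal sketch of one level of this decomposition in NumPy, averaging and differencing pairs of adjacent pixels (normalization conventions vary; this uses plain averages):

```python
import numpy as np

def haar_level(img):
    """One level of the 2D Haar transform (sub-band naming as on the slide:
    first letter = vertical filter, second letter = horizontal filter)."""
    img = img.astype(np.float64)
    # Filter along the horizontal direction (pairs of adjacent columns).
    h_lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    h_hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    # Filter along the vertical direction (pairs of adjacent rows).
    ll = (h_lo[0::2, :] + h_lo[1::2, :]) / 2.0  # low vertical, low horizontal
    lh = (h_hi[0::2, :] + h_hi[1::2, :]) / 2.0  # low vertical, high horizontal
    hl = (h_lo[0::2, :] - h_lo[1::2, :]) / 2.0  # high vertical, low horizontal
    hh = (h_hi[0::2, :] - h_hi[1::2, :]) / 2.0  # high vertical, high horizontal
    return ll, lh, hl, hh

# A 2-level transform re-applies the split to the LL band:
# ll2, lh2, hl2, hh2 = haar_level(ll)
```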
Rectangle Features
Rectangle feature examples:
– horizontal
– vertical
– diagonal
The sum of the pixels which lie in the white rectangles is subtracted from the sum of the pixels in the grey rectangles, as in the sketch below.
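A sketch of how such a feature can be computed in constant time using an integral image, as in Viola and Jones [4]; the helper names and the particular two-rectangle layout here are my own choices, not the project's code:

```python
import numpy as np

def integral_image(img):
    """Cumulative sums so any rectangle sum costs at most four lookups."""
    return img.astype(np.float64).cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of pixels in the half-open rectangle [r0, r1) x [c0, c1)."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        total -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

def two_rect_vertical(ii, r, c, h, w):
    """Grey right half minus white left half: a vertical edge feature."""
    white = rect_sum(ii, r, c, r + h, c + w // 2)
    grey = rect_sum(ii, r, c + w // 2, r + h, c + w)
    return grey - white
```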
Haar Example
SVMs
The SVM determines the optimal hyperplane which maximizes the margin, where the margin is the distance between the hyperplane and the nearest sample.
Decision function: f(x) = sign( Σᵢ αᵢ yᵢ K(xᵢ, x) + b )
– The αᵢ are the solutions of a quadratic programming problem.
– A sample with non-zero αᵢ is called a support vector.
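A minimal sketch of this in scikit-learn (my choice of tool for illustration, not necessarily what the project used), with random data standing in for the 3072-length Haar feature vectors; it also checks the decision function above by hand against the model's stored αᵢyᵢ coefficients:

```python
import numpy as np
from sklearn.svm import SVC

# Toy data standing in for 3072-length Haar feature vectors;
# +1 = target subject, -1 = everyone else.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 3072)),
               rng.normal(1.0, 1.0, (50, 3072))])
y = np.hstack([np.ones(50), -np.ones(50)])

clf = SVC(kernel="linear").fit(X, y)
print("support vectors:", clf.n_support_.sum())

# dual_coef_ holds alpha_i * y_i for the support vectors, so the
# decision function f(x) = sign(sum_i alpha_i y_i <x_i, x> + b)
# can be evaluated directly:
scores = (clf.dual_coef_ @ (clf.support_vectors_ @ X.T) + clf.intercept_).ravel()
pred = np.where(scores > 0, 1.0, -1.0)
assert np.array_equal(pred, clf.predict(X))
```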
Images
Take some initial images for simple testing, then either create or find a large database:
– 2100 images of 2 people
– Yale Database B [1]: 5760 single-light-source images of 10 subjects under 576 viewing conditions (9 poses, shown on the Poses slide, x 64 illumination conditions, shown on the Illumination slide)
Illumination
Poses
Experiments
Initial test, no feature selection:
– 100 64x64 training images
– 3072-length feature vector
– 2000 test images
Training Results
Test Results
Yale Image Experiment 1
Two subjects:
– 512 training images
– 512 test images
– No feature selection
– 3072-length feature vector
– Linear SVM
Training:
– 18 support vectors
– No misclassified images
Testing:
– All images classified correctly
Yale Image Experiment 2
10 test subjects, 1024 faces:
– 384 training faces (only 2 subjects trained)
– 640 test faces (10 subjects tested)
Training:
– 38 support vectors
– 0 misclassified
Testing:
– All faces positively classified... very bad
Yale Experiment 3
Same setup as before, but this time with feature extraction: a 1-level Haar transform.
– 4 filtered images
– 3072-length feature vector
– No feature selection this time
Training:
– 20 support vectors
– None misclassified
Testing:
– Classification error rate was 0.020
– All positive labels classified correctly
Training
Testing
Yale Experiment 4
Same setup as 3, but with a 2-level Haar transform.
– 7 filtered images
– 3072-length feature vector
– No feature selection this time
Training:
– 36 support vectors
– None misclassified
Testing:
– Classification error rate was 0.000
– Very good...
Training
Testing
Yale Experiment 5
Same setup as 3 and 4, but now with feature selection.
Feature selection algorithm (sketched below):
– 1-level Haar transform
– Sum the 4 filtered images
– 4 features
Training:
– Nonlinear support vector machine (RBF)
– 372 support vectors
– 9 misclassified
Testing:
– Classification error rate was 0.2391
– Positive examples badly mislabeled
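A sketch of how those 4 features could be computed, reusing the haar_level helper from the earlier sketch; reading "sum the 4 filtered images" as the plain sum over each sub-band is my interpretation of the slide:

```python
import numpy as np

def four_features(img):
    """Experiment-5 style features: one Haar level, then the sum of
    each filtered image, giving a 4-dimensional feature vector."""
    ll, lh, hl, hh = haar_level(img)  # helper defined earlier
    return np.array([ll.sum(), lh.sum(), hl.sum(), hh.sum()])
```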
Training
Testing
Yale Experiment 6
Same setup as 5, but now with 16 selected features.
Feature selection algorithm:
– 4-level Haar transform
– Sum the 16 filtered images
– 16 features
Training:
– Back to the linear support vector machine
– 11 support vectors
– 0 misclassified
Testing:
– Classification error rate was 0.1812
– Positive examples labeled correctly
Training
Testing
Conclusions
Feature extraction is simple but very powerful.
Better feature selection for a better error rate:
– Better rectangle filters
– Use boosting to eliminate background patterns and reduce features
– Gabor transform
Better testing needed:
– Test against known results
– Crop images better
Implement a 3-layer system:
– Feature extraction
– Boosting (soft classifier)
– SVMs (hard classifier)
References
[1] A. S. Georghiades, P. N. Belhumeur, and D. J. Kriegman, "From Few to Many: Illumination Cone Models for Face Recognition under Variable Lighting and Pose," IEEE Trans. Pattern Anal. Mach. Intell., vol. 23, no. 6, pp. 643-660, 2001.
[2] D. D. Le and S. Satoh, "Feature Selection by AdaBoost for SVM-Based Face Detection."
[3] F. Smeraldi, O. Carmona, and J. Bigün, "Saccadic Search with Gabor Features Applied to Eye Detection and Real-Time Head Tracking," 1998.
[4] P. Viola and M. Jones, "Fast and Robust Classification using Asymmetric AdaBoost and a Detector Cascade."