Recognition with Expression Variations
Pattern Recognition Theory – Spring 2003
Prof. Vijayakumar Bhagavatula
Derek Hoiem, Tal Blum

Method Overview
Training and test images (N variables) → Dimensionality Reduction → reduced representation (m < N variables) → 1-NN Euclidean Classification
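A minimal sketch of this pipeline (illustrative only, not the authors' code; `classify_1nn` is a hypothetical helper and `W` stands for whichever projection the later slides derive):

```python
import numpy as np

def classify_1nn(W, train_images, train_labels, test_images):
    """Project images with weight matrix W (d x m), then assign each test
    image the label of its nearest training projection under Euclidean
    distance (1-NN). Rows of the image arrays are vectorized images."""
    train_proj = train_images @ W          # N x m
    test_proj = test_images @ W            # T x m
    # Pairwise Euclidean distances between test and training projections
    dists = np.linalg.norm(test_proj[:, None, :] - train_proj[None, :, :], axis=2)
    return train_labels[np.argmin(dists, axis=1)]
```

The projection matrix W here could come from PCA, LDA, or PCA+LDA as described on the following slides.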

Principal Components Analysis
Minimize the representational error in a lower-dimensional subspace of the input. Choose the eigenvectors corresponding to the m largest eigenvalues of the total scatter matrix as the weight matrix:

  S_T = \sum_{k=1}^{N} (x_k - \mu)(x_k - \mu)^T

  W_{opt} = \arg\max_W |W^T S_T W| = [w_1 \; w_2 \; \cdots \; w_m]
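A small numpy sketch of this construction (an illustrative implementation, not the original code; for real images one would typically work with the N × N Gram matrix rather than the full d × d scatter):

```python
import numpy as np

def pca_weights(X, m):
    """Return the d x m weight matrix whose columns are the eigenvectors
    of the total scatter matrix with the m largest eigenvalues.
    X is an N x d matrix whose rows are vectorized images."""
    mu = X.mean(axis=0)
    Xc = X - mu
    S_T = Xc.T @ Xc                    # total scatter matrix, d x d
    vals, vecs = np.linalg.eigh(S_T)   # eigenvalues in ascending order
    return vecs[:, ::-1][:, :m]        # keep the top-m eigenvectors
```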

Linear Discriminant Analysis
Maximize the ratio of the between-class scatter to the within-class scatter in a space of lower dimension than the input. Choose the top m eigenvectors of the generalized eigenvalue problem:

  S_B = \sum_{i=1}^{c} (\mu_i - \mu)(\mu_i - \mu)^T

  S_W = \sum_{i=1}^{c} \sum_{x_k \in \omega_i} (x_k - \mu_i)(x_k - \mu_i)^T

  W_{opt} = \arg\max_W \frac{|W^T S_B W|}{|W^T S_W W|} = [w_1 \; w_2 \; \cdots \; w_m], \qquad S_B w_i = \lambda_i S_W w_i
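A numpy sketch of this construction (illustrative only; it assumes S_W is invertible, which is exactly what fails for high-dimensional image data and motivates the PCA preprocessing on the following slides):

```python
import numpy as np

def lda_weights(X, y, m):
    """Build the between-class scatter S_B and within-class scatter S_W,
    solve the generalized eigenproblem S_B w = lambda S_W w, and return
    the m eigenvectors with the largest eigenvalues as columns."""
    mu = X.mean(axis=0)
    d = X.shape[1]
    S_B = np.zeros((d, d))
    S_W = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mu_c = Xc.mean(axis=0)
        diff = (mu_c - mu)[:, None]
        S_B += diff @ diff.T                    # between-class scatter
        S_W += (Xc - mu_c).T @ (Xc - mu_c)      # within-class scatter
    # Generalized eigenproblem via S_W^{-1} S_B (requires S_W nonsingular)
    vals, vecs = np.linalg.eig(np.linalg.inv(S_W) @ S_B)
    order = np.argsort(vals.real)[::-1]
    return vecs[:, order[:m]].real
```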

LDA: Avoiding Singularity
For N samples and c classes:
1. Reduce dimensionality to N − c using PCA
2. Apply LDA to the reduced space
3. Combine the weight matrices: W_{opt} = W_{PCA} W_{LDA}

Discriminant Analysis of Principal Components
For N samples and c classes:
1. Reduce dimensionality to m < N − c using PCA
2. Apply LDA to the reduced space
3. Combine the weight matrices: W_{opt} = W_{PCA} W_{LDA}
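The three steps above might be sketched as follows (a toy implementation under the stated assumptions, not the authors' code; `m_pca` plays the role of the dimensionality m < N − c):

```python
import numpy as np

def pca_then_lda(X, y, m_pca, m_lda):
    """Two-stage projection: PCA down to m_pca dimensions (chosen so the
    within-class scatter is nonsingular there), then LDA down to m_lda.
    The combined weight matrix is W_pca @ W_lda."""
    # PCA stage: top eigenvectors of the total scatter matrix
    mu = X.mean(axis=0)
    Xc = X - mu
    _, vecs = np.linalg.eigh(Xc.T @ Xc)
    W_pca = vecs[:, ::-1][:, :m_pca]
    Z = Xc @ W_pca                               # samples in PCA space

    # LDA stage in the reduced space
    S_B = np.zeros((m_pca, m_pca))
    S_W = np.zeros((m_pca, m_pca))
    mu_z = Z.mean(axis=0)
    for c in np.unique(y):
        Zc = Z[y == c]
        mc = Zc.mean(axis=0)
        diff = (mc - mu_z)[:, None]
        S_B += diff @ diff.T
        S_W += (Zc - mc).T @ (Zc - mc)
    vals, gvecs = np.linalg.eig(np.linalg.inv(S_W) @ S_B)
    order = np.argsort(vals.real)[::-1]
    W_lda = gvecs[:, order[:m_lda]].real

    return W_pca @ W_lda                         # combined weight matrix
```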

When PCA+LDA Can Help
- The test set includes subjects not present in the training set
- Very few (1–3) examples are available per class
- Test samples vary significantly from training samples

Why Would PCA+LDA Help?
- Allows more freedom of movement for maximizing the between-class scatter
- Removes potentially noisy low-ranked principal components before determining the LDA projection
- The goal is improved generalization to non-training samples

PCA Projections (figure: best 2-D projection, training and testing sets)

LDA Projections (figure: best 2-D projection, training and testing sets)

PCA+LDA Projections (figure: best 2-D projection, training and testing sets)

Processing Time
Training time: < 3 seconds (Matlab, 1.8 GHz)
Testing time: O(d · (N + T))

Method         images/sec
PCA (30)       224
LDA (12)       267
PCA+LDA (12)

Results
Dimensions   PCA       LDA       PCA+LDA
1            37.52%    11.28%    14.15%
2            2.56%     1.13%     1.03%
3            0.41%     0.00%     0.72%
4

Sensitivity of PCA+LDA to Number of PCA Vectors Removed
13   23.69%   11.28%
14   7.08%
15   14.15%
16   9.33%
17   6.56%

Conclusions
- Recognition under varying expressions is an easy problem
- LDA and PCA+LDA produce better subspaces for discrimination than PCA
- Simply removing the lowest-ranked PCA vectors may not be a good strategy for PCA+LDA
- Maximizing the minimum between-class distance may be a better strategy than maximizing the Fisher ratio

References
M. Turk and A. Pentland, “Face recognition using eigenfaces,” in Proc. IEEE Conf. on Computer Vision and Pattern Recognition, pp. 586–591, 1991.
P.N. Belhumeur, J.P. Hespanha, and D.J. Kriegman, “Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection,” in Proc. European Conf. on Computer Vision, April 1996.
W. Zhao, R. Chellappa, and P.J. Phillips, “Discriminant Analysis of Principal Components for Face Recognition,” in Proc. Int. Conf. on Automatic Face and Gesture Recognition, pp. 336–341, 1998.