Face Recognition in Hyperspectral Images. Z. Pan, G. Healey, M. Prasad, and B. Tromberg, University of California. Published in IEEE Trans. on PAMI, Vol. 25, No. 12, December 2003.


Face Recognition in Hyperspectral Images. Z. Pan, G. Healey, M. Prasad, and B. Tromberg, University of California. Published in IEEE Trans. on PAMI, Vol. 25, No. 12, December 2003.

Introduction What is a hyperspectral image? RGB: red, green, and blue channels covering the visible electromagnetic spectrum (roughly 0.4 to 0.7 µm).

Introduction What is a hyperspectral image? Spectral regions: UV = ultraviolet; Vis = visible; NIR = near infrared; SWIR = short wavelength infrared; MWIR = medium wavelength infrared; LWIR = long wavelength infrared.
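To make the data layout concrete, here is a minimal NumPy sketch (an illustration only, not the authors' code): an RGB image stores three broad channels per pixel, while a hyperspectral cube stores one sample per narrow band, so every pixel carries a full spectrum.

```python
import numpy as np

# RGB image: three broad channels per pixel.
rgb = np.zeros((468, 498, 3))

# Hyperspectral cube: 31 narrow NIR bands per pixel (band count from the paper;
# the array layout itself is just an illustration).
cube = np.zeros((468, 498, 31))

# Every pixel now carries a full spectrum that can be used as a discriminant.
spectrum = cube[100, 200, :]   # 31-element vector for one pixel
```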

Introduction “Hyperspectral cameras provide useful discriminants for human face that cannot be obtained by other imaging methods.”

Introduction The paper studies the utility of near-infrared (NIR) hyperspectral images for face recognition; Spectral measurements over the NIR allow sensing of subsurface tissue structure; Subsurface tissue is: – significantly different from person to person, – relatively stable over time, – nearly invariant to face orientation and expression.

Introduction “Significantly different from person to person”

Introduction “Nearly invariant to face orientations”

Data Collection 200 subjects; 31 spectral bands covering the NIR (0.7 to 1.0 µm); Tunable filter; 468x498 spatial resolution; Uniform illumination; About 10 seconds of acquisition time per image.

Data Collection

Seven images were acquired for each subject, and at most five regions (17x17 pixels) were sampled from each image; 20 subjects took part in multiple imaging sessions.

Experiments Setup – Cumulative Match Characteristic (CMC) curves; – Minimum Mahalanobis distance from query to gallery, where ω_x is 1 if region x was sampled and 0 otherwise, and D_x(i, j) is computed from the average intensities of the sampled region x in images i and j.
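A minimal sketch of this matching step, under the assumptions that each sampled 17x17 region is summarized by its mean spectrum, that D_x is a squared Mahalanobis distance between those means, and that region distances are summed over the regions present in both images; the covariance matrix, dictionaries, and function names are illustrative, not from the paper.

```python
import numpy as np

def region_mean_spectrum(cube, top, left, size=17):
    """Mean spectrum of a size x size region of a hyperspectral cube of shape (H, W, bands)."""
    patch = cube[top:top + size, left:left + size, :]
    return patch.reshape(-1, cube.shape[-1]).mean(axis=0)

def mahalanobis_sq(u, v, cov_inv):
    """Squared Mahalanobis distance between two mean spectra."""
    d = u - v
    return float(d @ cov_inv @ d)

def face_distance(query_regions, gallery_regions, cov_inv):
    """Sum of region distances over regions sampled in both images (omega_x = 1)."""
    total = 0.0
    for name, q_spec in query_regions.items():
        if name in gallery_regions:          # omega_x = 1 only if region x was sampled in both
            total += mahalanobis_sq(q_spec, gallery_regions[name], cov_inv)
    return total

def cmc_rank(query_regions, gallery, true_id, cov_inv):
    """Rank of the correct subject; a CMC curve accumulates these ranks over all queries."""
    dists = {sid: face_distance(query_regions, regs, cov_inv) for sid, regs in gallery.items()}
    ranking = sorted(dists, key=dists.get)
    return ranking.index(true_id) + 1
```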

First Experiment – Verification of the utility of various tissue types for hyperspectral face recognition; – Only frontal images were used (Gallery: fg; Query: fa, fb).

First Experiment Better performance is achieved when different tissues are combined

First Experiment Changes in expression do not significantly impact the hyperspectral discriminants.

First Experiment The forehead is the region least affected by changes in expression.

Second Experiment – Examination of the impact of changes in face orientation on hyperspectral face recognition; – The gallery contains only frontal images (Gallery: fg; Query: all other views).

Second Experiment At 45° rotation, recognition reaches 75% at rank n = 1 and 94% at rank n = 5; at 90°, it reaches 80% at rank n = 10. The distance function assumes that tissue spectral reflectance does not depend on the photometric angles.

Second Experiment Performance degrades as the size of the subset considered increases.

Analysis of the First and Second Experiments

Third Experiment – Examination of the variation of the hyperspectral discriminants over time; – 20 subjects were imaged again between 3 days and 5 weeks after the first session; – The same 200-subject gallery is used.

Third Experiment – Results are similar for images taken at different later times; – Performance drops significantly compared to "single day" images.

Third Experiment The difference in performance can be attributed to changes in subject condition (blood flow, water concentration, blood oxygenation, melanin concentration) and also to sensor characteristics.

Questions?

Face Recognition Based on Fitting a 3D Morphable Model V. Blanz and T. Vetter Published in IEEE Trans. on PAMI, Vol. 25, No. 9, September 2003.

Introduction Color values in a face image depend not only on the person's identity but also on pose and illumination; Goal: separate the intrinsic characteristics of a face (shape and texture) from the conditions of image acquisition; These conditions can be described consistently across the entire image by a small set of extrinsic parameters.

Introduction The algorithm combines a deformable 3D model with computer graphics simulations of illumination and projection; It makes the estimated face shape and texture fully independent of the extrinsic parameters; Given a single image of a person, the algorithm automatically estimates the 3D face shape, the texture, and all relevant 3D scene parameters.

Model-Based Recognition

Morphable Model A vector space is constructed such that any "convex combination" of the shape and texture vectors S_i and T_i describes a human face; Continuous changes in the model parameters generate a smooth transition that moves the initial surface toward a final one.
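In equation form (standard morphable-model notation; a_i and b_i are the combination weights and m is the number of example faces):

```latex
S_{\mathrm{model}} = \sum_{i=1}^{m} a_i S_i , \qquad
T_{\mathrm{model}} = \sum_{i=1}^{m} b_i T_i , \qquad
\sum_{i=1}^{m} a_i = \sum_{i=1}^{m} b_i = 1
```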

Database of 3D Laser Scans Laser scans of 200 faces were used to create the morphable model;

Correspondence Establish a dense point-to-point correspondence between each face and a reference face; a generalization of "optical flow" to 3D surfaces is used to determine the vector field V_i.

Generalized Optical Flow To find the face vector field, the following expression is minimized over a 5x5 neighborhood R:
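The expression itself was lost in the transcript; a hedged reconstruction, following the standard formulation in the original morphable-model papers, where I(h, φ) stacks the surface radius and the three color channels of the cylindrical scan and v = (v_h, v_φ) is the flow at the center of the 5x5 neighborhood R:

```latex
E(\mathbf{v}) = \sum_{(h,\phi)\in R}
\left\| v_h \frac{\partial \mathbf{I}}{\partial h}
      + v_\phi \frac{\partial \mathbf{I}}{\partial \phi}
      + \Delta \mathbf{I} \right\|^2 ,
\qquad
\mathbf{I}(h,\phi) = \bigl( r(h,\phi),\, R(h,\phi),\, G(h,\phi),\, B(h,\phi) \bigr)^{\mathsf T}
```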

Face Vectors One scanned face is chosen as the reference I_0. Reference shape and texture vectors are defined by converting each cylindrical coordinate to Cartesian coordinates:
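A hedged sketch of that conversion and of the resulting reference vectors (the component ordering follows the usual morphable-model convention and is an assumption here): a scan point at cylinder height h, angle φ, and radius r(h, φ) maps to Cartesian coordinates, and the n vertices are stacked into one shape vector and one texture vector.

```latex
(x, y, z) = \bigl( r(h,\phi)\cos\phi,\; h,\; r(h,\phi)\sin\phi \bigr), \qquad
S_0 = (x_1, y_1, z_1, \ldots, x_n, y_n, z_n)^{\mathsf T}, \quad
T_0 = (R_1, G_1, B_1, \ldots, R_n, G_n, B_n)^{\mathsf T}
```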

Face Vectors For a novel scan I, the flow field from I_0 to I is computed and converted to Cartesian coordinates, giving S and T.

Principal Component Analysis PCA is performed on S_i and T_i. Shape and texture eigenvectors (s_i and t_i) and variances (σ_S and σ_T) are computed:
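A minimal NumPy sketch of this step (illustrative only; the paper performs PCA separately on the shape and texture vectors after subtracting the mean face):

```python
import numpy as np

def pca(vectors):
    """PCA on the rows of `vectors` (one face vector per row).

    Returns the mean face, the principal directions (one per row),
    and the corresponding variances sigma^2.
    """
    mean = vectors.mean(axis=0)
    centered = vectors - mean
    # SVD of the centered data matrix yields the eigenvectors of the covariance.
    _, singular_values, components = np.linalg.svd(centered, full_matrices=False)
    variances = singular_values ** 2 / (len(vectors) - 1)
    return mean, components, variances

# shapes, textures: (200, 3n) arrays of stacked face vectors S_i and T_i
# s_mean, s_eig, s_var = pca(shapes)     # shape eigenvectors s_i, variances sigma_S^2
# t_mean, t_eig, t_var = pca(textures)   # texture eigenvectors t_i, variances sigma_T^2
```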

Model Fitting Given a novel face image, the shape coefficients α and texture coefficients β are estimated to reconstruct the 3D shape; Pose, camera focal length, light intensity, color, and direction are found automatically.

Model Fitting

Optimization of the shape coefficients α and texture coefficients β, along with pose angles, translation, focal length, Lambertian light intensity and direction, contrast, and the gains and offsets of the color channels (collectively ρ); Cost function: see the reconstruction below; Optimization method: Stochastic Newton algorithm, similar to stochastic gradient descent; it makes use of the first derivative of E.
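The cost function did not survive the transcript; a hedged reconstruction of the MAP-style objective described in the paper, where E_I is the sum of squared differences between the input image and the image rendered from the current model, and the remaining terms are priors that penalize unlikely shape coefficients α_j, texture coefficients β_j, and rendering parameters ρ_j (with prior means ρ̄_j):

```latex
E = \frac{1}{\sigma_I^{2}} E_I
  + \sum_{j} \frac{\alpha_j^{2}}{\sigma_{S,j}^{2}}
  + \sum_{j} \frac{\beta_j^{2}}{\sigma_{T,j}^{2}}
  + \sum_{j} \frac{(\rho_j - \bar{\rho}_j)^{2}}{\sigma_{\rho,j}^{2}}
```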

Experiments Model fitting and identification were tested on the PIE (4,488 images) and FERET (1,940 images) databases; None of the test faces are in the model database; Feature points were manually defined; A gallery-and-query recognition approach was used.

Results of Model Fitting

Results of Recognition Metrics used for comparison: – Sum of Mahalanobis distances: d_M = ||c_1 - c_2||² – Cosine of the angle between two vectors: d_A = ⟨c_1, c_2⟩ / (||c_1|| · ||c_2||) – Maximum-Likelihood and LDA. Here c is a face, represented by its shape and texture coefficients; d_W is superior because it takes fitting inaccuracy into account (different coefficients obtained for the same subject).
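A minimal sketch of how the first two metrics could drive identification (illustrative only, not the authors' code; any per-segment weighting used in the paper is omitted):

```python
import numpy as np

def d_M(c1, c2):
    """Sum-of-squared-differences distance between coefficient vectors."""
    return float(np.sum((c1 - c2) ** 2))

def d_A(c1, c2):
    """Cosine of the angle between two coefficient vectors (larger means more similar)."""
    return float(c1 @ c2 / (np.linalg.norm(c1) * np.linalg.norm(c2)))

def identify(query_coeffs, gallery):
    """Return the gallery id whose coefficient vector has the largest cosine similarity."""
    return max(gallery, key=lambda subject_id: d_A(query_coeffs, gallery[subject_id]))
```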

Results of Recognition

Comment The fitting process depends on user interaction (manually defined feature points) and takes about 4.5 minutes on a 2 GHz Pentium 4 workstation.

Questions?