Comparison and Combination of Ear and Face Images in Appearance-Based Biometrics, IEEE Trans. on PAMI, Vol. 25, No. 9, 2003, Kyong Chang, Kevin W. Bowyer, Sudeep Sarkar, Barnabas Victor

Presentation transcript:

Comparison and Combination of Ear and Face Images in Appearance-Based Biometrics, IEEE Trans. on PAMI, Vol. 25, No. 9, 2003, Kyong Chang, Kevin W. Bowyer, Sudeep Sarkar, Barnabas Victor
Integrating Faces and Fingerprints for Personal Identification, IEEE Trans. on PAMI, Vol. 20, No. 12, 1998, Lin Hong and Anil Jain
Presented by: Zhiming Liu
Instructor: Dr. Bebis

Multimodal Biometrics All these biometric techniques have their own advantages and disadvantages and may be admissible depending on the application domain. Combining them can improve overall recognition performance.

Face versus Ear Normalization: original face images are cropped to 768 × 1,024 pixels and original ear images are cropped to 400 × 500 pixels.

Face versus Ear The cropped images are normalized to 130 × 150 pixels. Masks are applied to the face and ear images to remove the background, and the images are histogram equalized.
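A minimal preprocessing sketch of these steps, assuming OpenCV, a grayscale input image, and a hypothetical precomputed binary mask file (face_mask.png); the file names and the ordering of equalization and masking are illustrative assumptions, not taken from the paper.

```python
import cv2
import numpy as np

def normalize_image(img_gray, mask, size=(130, 150)):
    """Resize to the common size, histogram-equalize, and zero out the
    background using a binary mask, as described on the slide."""
    img = cv2.resize(img_gray, size)                     # normalize to 130 x 150
    img = cv2.equalizeHist(img)                          # histogram equalization
    img = np.where(mask > 0, img, 0).astype(np.uint8)    # remove background
    return img

# Hypothetical usage: file names are placeholders, not from the paper.
face = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)
mask = cv2.resize(cv2.imread("face_mask.png", cv2.IMREAD_GRAYSCALE), (130, 150))
normalized = normalize_image(face, mask)
```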

Face versus Ear Eigen-faces and eigen-ears: PCA computes the eigenvectors and eigenvalues of the training images. Following the FERET approach, the eigenvectors corresponding to the first 60 percent of the largest eigenvalues are used, and the first eigenvector is dropped because it mainly captures illumination. An alternative approach keeps enough eigenvectors to cover a fixed percentage of the total energy.
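A sketch of this PCA step in NumPy, assuming a matrix of vectorized training images; keeping the leading 60 percent of the eigenvectors is one reading of the FERET-style rule, and the function and variable names are illustrative.

```python
import numpy as np

def compute_eigenfaces(X, keep_frac=0.60, drop_first=True):
    """X: (num_images, num_pixels) matrix of vectorized, normalized images.
    Returns the mean image and the retained eigenvectors (one per row)."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centered data gives the PCA eigenvectors as rows of Vt.
    _, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    if drop_first:                               # first eigenvector ~ illumination
        S, Vt = S[1:], Vt[1:]
    k = max(1, int(round(keep_frac * len(S))))   # keep the leading 60 percent
    return mean, Vt[:k]

def project(img_vec, mean, eigvecs):
    """Project a vectorized image into the eigenspace."""
    return eigvecs @ (img_vec - mean)
```

An energy-based variant would instead keep the smallest k such that the cumulative sum of the squared singular values exceeds the chosen fraction of the total energy.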

Face versus Ear Database The training set consists of 197 subjects, each of whom has both a face image and an ear image. There is a separate (gallery, probe) data set for each of three experiments: day variation, lighting variation, and pose variation.

Face versus Ear Experimental Results: face and ear recognition performance in the day variation experiment

Face versus Ear Face and ear recognition performance in the lighting variation experiment

Face versus Ear Face and ear recognition performance in the pose variation experiment: 22.5 degree rotation to the left between the gallery and probe images

Face versus Ear Experimental Results Simple combination technique: the normalized, masked ear and face images of a subject are concatenated to form a combined face-plus-ear image. Eigenvectors and eigenvalues are then computed from these combined images.
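This combination scheme reduces to a few lines. The sketch below, with illustrative names, concatenates the vectorized face and ear images and performs nearest-neighbor matching in the combined eigenspace; the eigenvectors are assumed to have been computed from combined training images, e.g. with a routine like the PCA sketch above.

```python
import numpy as np

def combine(face_vec, ear_vec):
    """Concatenate a vectorized, normalized face image and ear image
    into one combined face-plus-ear vector."""
    return np.concatenate([face_vec, ear_vec])

def rank_one_match(probe_vec, gallery_vecs, mean, eigvecs):
    """Nearest-neighbor matching in the combined eigenspace.
    Returns the index of the closest gallery entry."""
    proj = lambda v: eigvecs @ (v - mean)
    p = proj(probe_vec)
    dists = [np.linalg.norm(proj(g) - p) for g in gallery_vecs]
    return int(np.argmin(dists))
```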

Face versus Ear Face combined with ear recognition performance in the day variation experiment Rank-one recognition rate: 90.9% for combination versus 71.6% for the ear and 70.5% for the face

Face versus Ear Face combined with ear recognition performance in the lighting variation experiment Rank-one recognition rate: 87.4% for combination versus 68.5% for the ear and 64.9% for the face

Face versus Ear Face combined with ear recognition performance in the pose variation experiment

Face versus Ear Discussion The results do not support a conclusion that an ear-based or a face-based biometric necessarily offers better performance than the other. The results do support the conclusion that a multimodal biometric using both the ear and the face can outperform a biometric using either one alone. Do different choices of eigenvectors affect the performance?

Face versus Ear

Face versus Fingerprint Fingerprint Verification An alignment-based “elastic” matching algorithm: Alignment stage: transformations such as translation, rotation, and scaling between an input and a template in the database are estimated, and the input minutiae are aligned with the template minutiae. Matching stage: both the input minutiae and the template minutiae are converted to “strings” in the polar coordinate system, and an “elastic” string matching algorithm is used to match the resulting strings.

Face versus Fingerprint 1) Let P = ((x_1, y_1, θ_1), ..., (x_p, y_p, θ_p)) and Q = ((x'_1, y'_1, θ'_1), ..., (x'_q, y'_q, θ'_q)) denote the p minutiae in the template and the q minutiae in the input image. 2) After estimating the transformation parameters and aligning the two minutiae patterns, convert the template pattern and the input pattern into polar coordinate representations P* and Q*, in which each minutia is expressed by its radial distance, radial angle, and orientation with respect to a reference minutia. 3) Match P* and Q* with a modified dynamic-programming algorithm. 4) The matching score, S, is computed from the number of matched minutiae pairs.
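A minimal sketch of step 2, assuming the minutiae are already aligned (x, y, θ) triples; the choice of reference minutia and the sorting by radial angle to form the “string” are the standard ingredients of this representation, but the exact conventions here are assumptions, and the dynamic-programming match itself is not shown.

```python
import math

def to_polar_string(minutiae, ref):
    """minutiae: list of aligned (x, y, theta) triples.
    ref: the reference minutia (x, y, theta) used as the origin.
    Returns a list of (r, e, theta_rel) triples sorted by radial angle e,
    i.e. the 'string' that the elastic matching step would consume."""
    xr, yr, tr = ref
    polar = []
    for x, y, t in minutiae:
        r = math.hypot(x - xr, y - yr)            # radial distance
        e = math.atan2(y - yr, x - xr)            # radial angle
        theta_rel = (t - tr) % (2 * math.pi)      # orientation relative to ref
        polar.append((r, e, theta_rel))
    polar.sort(key=lambda m: m[1])                # order by radial angle
    return polar
```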

Face versus Fingerprint Decision Fusion 1) Abstract level: the output from each module is only a set of possible labels without any confidence associated with the labels; in this case, a simple majority rule may be employed to reach a more reliable decision. 2) Rank level: the output from each module is a set of possible labels ranked by decreasing confidence values, but the confidence values themselves are not specified. 3) Measurement level: the output from each module is a set of possible labels with associated confidence values; in this case, a more accurate decision can be made by integrating the different confidence measures into a more informative one.

Face versus Fingerprint We need to define a measure that indicates the confidence of a decision criterion, as well as a decision fusion criterion. The confidence of a given decision criterion may be characterized by its false acceptance rate (FAR). In order to estimate the FAR, the impostor distribution needs to be computed.

Face versus Fingerprint Impostor Distribution for Fingerprint Verification –The regions of interest of the input fingerprint and the template are of the same size, W × W. –Let the cell size be w × w; there are then a total of (W × W)/(w × w) = N_c different cells in the region of interest of a fingerprint. –Assume that each fingerprint has the same number of minutiae, N_m (≤ N_c), distributed randomly over the cells with at most one minutia per cell. –Each minutia is directed toward one of D possible orientations with equal probability.

Face versus Fingerprint –For a given cell, the probability that the cell is empty, with no minutia present, is P_empty = 1 − N_m/N_c, and the probability that the cell has a minutia directed toward a specific orientation is P = (1 − P_empty)/D = N_m/(N_c · D). –A pair of corresponding minutiae between a template and an input is considered identical if and only if they fall in cells at the same position and are directed in the same direction.

Face versus Fingerprint –With the above simplifying assumptions, the number of corresponding minutiae pairs between any two randomly selected minutiae patterns is a random variable, Y, which has a binomial distribution with parameters N_m and P: P(Y = k) = C(N_m, k) · P^k · (1 − P)^(N_m − k). –The probability that the number of corresponding minutiae pairs between any two sets of minutiae patterns is less than a given threshold value, y, is P(Y < y) = Σ_{k=0}^{y−1} C(N_m, k) · P^k · (1 − P)^(N_m − k).
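A small numerical sketch of this cell model using SciPy's binomial distribution; the false acceptance rate at a threshold y is the complementary probability P(Y ≥ y), and the parameter values below are made-up examples rather than the ones used in the paper.

```python
from scipy.stats import binom

def fingerprint_far(y, W=320, w=16, Nm=30, D=8):
    """Probability that two random minutiae patterns share at least y
    corresponding minutiae under the cell model described above."""
    Nc = (W * W) // (w * w)        # number of cells in the region of interest
    p_empty = 1.0 - Nm / Nc        # probability that a cell is empty
    P = (1.0 - p_empty) / D        # cell holds a minutia with a given orientation
    # Y ~ Binomial(Nm, P); the FAR at threshold y is P(Y >= y).
    return binom.sf(y - 1, Nm, P)

print(fingerprint_far(y=10))       # illustrative threshold
```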

Face versus Fingerprint Impostor Distribution for Face Recognition –The top n matches are obtained by computing the DFFS (distance from face space) values and arranging them in increasing order. –The relative distances between consecutive DFFS values are invariant to a mean shift of the DFFSs. –The probability that a retrieved top-n match is incorrect differs across ranks. –Thus, the impostor distribution is a function of both the relative DFFS value, δ, and the rank order, i: F_i(δ) represents the probability that the consecutive DFFS value between an impostor and the claimed individual at rank i is larger than a value δ, and P_order(i) is the probability that the retrieved match at rank i is an impostor.

Face versus Fingerprint Estimate P_order(i) –Let X^α denote the DFFS between an individual and his own template; it is a random variable with density function f_α(X^α). –Let X_1^β, X_2^β, ..., X_{N−1}^β denote the DFFS values between an individual and the templates of the other individuals in the database, with density functions f_β(·). –For an individual, π, the rank, I, of X^α among X_1^β, X_2^β, ..., X_{N−1}^β is a random variable whose distribution can be approximated when p << 1 and N is very large. –P(I = i) is the probability that the match at rank i is the genuine individual; therefore, P_order(i) = 1 − P(I = i).
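P_order(i) can also be estimated empirically. The sketch below is an illustrative alternative to the analytic derivation on the slide: over a hypothetical labeled validation set of DFFS values, it counts how often the match retrieved at each rank is an impostor.

```python
import numpy as np

def estimate_p_order(dffs_matrix, probe_ids, gallery_ids, n=5):
    """dffs_matrix: (num_probes, num_gallery) DFFS values, smaller is better.
    probe_ids / gallery_ids: true identity labels.
    Returns P_order(i) for ranks 1..n: the empirical probability that the
    match retrieved at rank i is an impostor."""
    impostor_counts = np.zeros(n)
    for row, true_id in zip(dffs_matrix, probe_ids):
        top_n = np.argsort(row)[:n]                  # top-n matches by DFFS
        for rank, g in enumerate(top_n):
            if gallery_ids[g] != true_id:
                impostor_counts[rank] += 1
    return impostor_counts / len(probe_ids)
```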

Face versus Fingerprint Estimate F_i(δ) –Assume that, for a given individual, π, the values X_1^β, X_2^β, ..., X_{N−1}^β are arranged in increasing order. –Define the non-negative distance between the (i+1)th and the ith DFFS values as the ith DFFS distance, δ_i = X_{i+1}^β − X_i^β. –The distribution, f_i(δ_i), of the ith distance, δ_i, is obtained by marginalizing the joint distribution, w_i(X^β, δ_i), of the ith value, X^β, and the ith distance, δ_i: f_i(δ_i) = ∫ w_i(X^β, δ_i) dX^β.

Face versus Fingerprint Estimate F_i(δ) (cont'd) –With the distribution, f_i(δ_i), of the ith distance defined, the probability that the DFFS distance of the impostor at rank i is larger than a threshold value, δ, is F_i(δ) = ∫_δ^∞ f_i(δ_i) dδ_i.

Face versus Fingerprint Decision Fusion –Each of the top n possible identities established by the face recognition module is verified by the fingerprint verification module, which either rejects all n possibilities or accepts exactly one of them as the genuine identity. –It is usually specified that the FAR of the system should be less than a given value. –The goal of decision fusion, in essence, is to derive a decision criterion that satisfies the FAR specification.

Face versus Fingerprint Decision Fusion (cont'd) –The composite impostor distribution at rank i is defined as the probability that an impostor is accepted at rank i with consecutive relative DFFS, δ, and fingerprint matching score, Y. –Let I_1, I_2, ..., I_n denote the n possible identities established by face recognition, {X_1, X_2, ..., X_n} the corresponding n DFFSs, {Y_1, Y_2, ..., Y_n} the corresponding n fingerprint matching scores, and FAR_0 the specified value of the FAR.
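A schematic sketch of such a fusion rule, written under two loud assumptions: the composite impostor probability is taken to be the product of a face term (e.g. F_i(δ) · P_order(i)) and a fingerprint term (e.g. P(Y ≥ y)), treating the two modalities as independent, and the acceptance rule keeps the single candidate with the lowest composite probability below the FAR budget. This is an illustration of the idea, not the exact criterion from the paper.

```python
def fuse_decision(candidates, face_impostor_prob, finger_impostor_prob, far0=1e-4):
    """candidates: list of (identity, delta, rank, finger_score) tuples for the
    top-n identities returned by the face recognition module.
    face_impostor_prob(delta, rank): e.g. F_i(delta) * P_order(i).
    finger_impostor_prob(score):     e.g. P(Y >= score) from the cell model.
    Accepts at most one identity whose composite impostor probability is
    below the specified FAR budget far0; otherwise rejects all candidates."""
    best = None
    for identity, delta, rank, score in candidates:
        # Simplifying independence assumption: composite probability is a product.
        composite = face_impostor_prob(delta, rank) * finger_impostor_prob(score)
        if composite <= far0 and (best is None or composite < best[1]):
            best = (identity, composite)
    return best[0] if best else None
```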

Face versus Fingerprint Experimental Results –1,500 fingerprint images from 150 individuals, with 10 images per individual. –1,132 face images of 86 individuals. –640 fingerprints of 64 individuals were randomly selected as the training set.

Face versus Fingerprint Experimental Results (cont’d) Impostor distribution for fingerprint: the mean and standard deviation of the impostor distribution are estimated to be 0.7 and 0.64.

Face versus Fingerprint Experimental Results (cont'd) –542 face images were used as training samples. –The first 64 eigenfaces were used for face recognition. –The impostor distributions for the top 5 ranks were approximated. –Figure: the impostor distribution for face recognition at rank 1, where the stars (*) represent empirical data and the solid curve represents the fitted distribution; the fit is assessed by the mean square error between the empirical and fitted distributions.

Face versus Fingerprint Experimental Results (cont'd) –Each of the remaining 86 individuals in the fingerprint database was randomly assigned to an individual in the face database. –One fingerprint of each individual was randomly selected as that individual's template. –Each of the remaining 590 face images was paired with a fingerprint to produce a test pair.

Face versus Fingerprint Experimental Results (cont’d)

Face versus Fingerprint Experimental Results (cont’d)

Questions?