Biometrics: Faces and Identity Verification in a Networked World
CSI7163/ELG5121 Donald Chow Mathew Samuel
Agenda
- Identification
- Biometrics
- Facial Recognition: PCA, 3D Expression Invariant Recognition, 3D Morphable Model
- Biometric Communication: XML implementation of CBEFF
- Conclusion
- Questions
Three Basic Identification Methods
Possession ("something I have"): keys, passport, smart card. Knowledge ("something I know"): password, PIN. Biometrics ("something I am"): face, fingerprint, iris. [Slide graphic: each method rated against five criteria: Universal, Unique, Permanent, Collectable, Acceptance.]

Whether purchasing goods, boarding an airplane, crossing a border, or performing a financial transaction, reliable authorization and authentication have become necessary for many daily interactions. Essentially, these activities rely upon ensuring the identities and authenticity of the people involved. Traditionally, authentication is based upon possession-based and knowledge-based identification.

What you have. Examples: user IDs, accounts, cards, badges, keys. Shortcomings: can be shared, may be duplicated, may be lost or stolen.

What you know. Examples: password, PIN. Shortcomings: many passwords are easily guessed; passwords can be forgotten.

Today, daily interactions are becoming increasingly automated, interfacing people with computers. Until recently, the primary authentication components of human-computer interaction consisted of passwords and personal identification numbers. However, new authentication technologies are emerging that are capable of providing higher degrees of certainty for identifying an individual. One of these technologies is biometrics.

What you are (biometrics). Examples: fingerprint, voiceprint, face, iris. Advantages: not possible to share, repudiation unlikely, difficult to forge, cannot be lost or stolen.

Source: Bolle, R.M. et al. (2004) Guide to Biometrics, New York: Springer-Verlag: 1-5
Biometrics refers to a broad range of technologies
These technologies automate the identification or verification of an individual based on human characteristics:
- Physiological: face, fingerprint, iris
- Behavioural: hand-written signature, gait, voice

Biometrics refers to a broad range of technologies, systems, and applications that automate the identification or verification of an individual based on his or her physiological or behavioural characteristics. Source: Bolle, R.M. et al. (2004) Guide to Biometrics, New York: Springer-Verlag: 1-5

Physiological biometrics are based on direct measurements of a part of the human body at a point in time. The most common physiological biometrics involve fingerprints, face, hand geometry, and iris. Less common physiological biometrics involve DNA, ear shape, odor, retina, skin reflectance, and thermogram.

Behavioural biometrics are based on measurements and data derived from the way a person carries out an action over an extended period of time. The most common behavioural biometrics involve hand-written signature and voice. Less common behavioural biometrics involve gait, keystroke pattern, and lip motion.

According to Turk and Pentland, in their seminal 1991 paper "Eigenfaces for Recognition": "In the language of information theory, we want to extract the relevant information in a face image, encode it as efficiently as possible, and compare one face encoding with a database of models encoded similarly."

Many rationales for deploying biometrics center on improved certainty in determining an individual's identity, and on perceived cost savings from the reduced risk of financial losses for the individual or institution deploying the biometric. Source: Nanavati, S. et al. (2002) Biometrics: Identity Verification in a Networked World, New York: John Wiley & Sons, Inc: 1-5
Typical Biometric Authentication Workflow
Enroll (enrollment subsystem): biometric reader → feature extractor → template → template database. Authenticate (authentication subsystem): biometric reader → feature extractor → matcher, compared against stored templates → match or no match.

Most biometric applications can be seen as pattern recognition systems. Such systems consist of sensors, feature extractors, and feature matchers.
- A user enrolls in a biometric system by providing biometric data, which is converted into a template.
- Templates are stored in biometric systems for the purpose of subsequent comparison.
- In order to be verified or identified after enrollment, the user provides biometric data, which is converted into a template.
- The verification template is compared with one or more enrollment templates.
- The result of a comparison between biometric templates is rendered as a score or confidence level, which is compared to a threshold used for a specific technology, system, user, or transaction.
- If the score exceeds the threshold, the comparison is a match, and the result is transmitted. If the score does not meet the threshold, the comparison is not a match, and the result is not transmitted.

Source: Nanavati, S. et al. (2002) Biometrics: Identity Verification in a Networked World, New York: John Wiley & Sons, Inc: 1-5
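The enroll-and-authenticate flow above can be sketched in a few lines. This is a minimal illustration only: the feature extractor, the cosine-similarity score, and the 0.9 threshold are invented placeholders, not any real biometric pipeline.

```python
# Toy enroll/authenticate workflow; extractor, score, and threshold are
# illustrative stand-ins, not a real biometric system.
import numpy as np

TEMPLATE_DB = {}          # user_id -> enrolled template
MATCH_THRESHOLD = 0.9     # hypothetical score threshold

def extract_features(sample: np.ndarray) -> np.ndarray:
    """Stand-in feature extractor: normalize the raw sample vector."""
    v = sample.astype(float).ravel()
    return v / (np.linalg.norm(v) + 1e-12)

def enroll(user_id: str, sample: np.ndarray) -> None:
    """Enrollment: convert biometric data to a template and store it."""
    TEMPLATE_DB[user_id] = extract_features(sample)

def authenticate(user_id: str, sample: np.ndarray) -> bool:
    """Authentication: compare a fresh template against the enrolled one."""
    enrolled = TEMPLATE_DB.get(user_id)
    if enrolled is None:
        return False
    score = float(np.dot(enrolled, extract_features(sample)))  # cosine similarity
    return score >= MATCH_THRESHOLD

enroll("alice", np.array([1.0, 2.0, 3.0]))
print(authenticate("alice", np.array([1.1, 2.0, 2.9])))  # near-identical sample -> True
print(authenticate("alice", np.array([3.0, -1.0, 0.5])))  # dissimilar sample -> False
```

Note how the decision is score-versus-threshold, as described in the slide: the raw biometric is never compared directly, only its template.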
Identification vs. Verification
Identification (1:N): biometric reader → matcher → entire database; the system answers "This person is Emily Dawson." Verification (1:1): the user claims "I am Emily Dawson"; biometric reader → matcher → the claimed record in the database → match or no match.

Identification systems answer the question, "Who am I?" They do not require that a user claim an identity before biometric comparisons take place. The user provides biometric data, which is compared to data from a number of users to find a match. The answer returned by the system is an identity.

Verification systems answer the question, "Am I who I claim to be?" They require that a user claim an identity in order for a biometric comparison to be performed. After a user claims an identity, the user provides biometric data, which is then compared against his or her enrolled biometric data. To claim an identity, the user may use a username, a given name, or an ID number. The answer returned by the system is a match or a non-match.

Source: Nanavati, S. et al. (2002) Biometrics: Identity Verification in a Networked World, New York: John Wiley & Sons, Inc: 1-5
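The 1:N versus 1:1 distinction can be made concrete with a toy template store. The names, template vectors, and threshold below are invented for the example; the point is only that `identify` scans every record while `verify` touches exactly one.

```python
# Identification (1:N) vs. verification (1:1) on a toy template store.
# Names, vectors, and the 0.95 threshold are illustrative assumptions.
import numpy as np

templates = {
    "emily":  np.array([0.9, 0.1, 0.1]),
    "donald": np.array([0.1, 0.9, 0.2]),
}
THRESHOLD = 0.95

def similarity(a, b):
    """Cosine similarity between two template vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(sample):
    """1:N search: compare against every template, return the best identity or None."""
    best_id, best_score = None, 0.0
    for user_id, tmpl in templates.items():
        s = similarity(sample, tmpl)
        if s > best_score:
            best_id, best_score = user_id, s
    return best_id if best_score >= THRESHOLD else None

def verify(claimed_id, sample):
    """1:1 comparison against only the claimed identity's template."""
    tmpl = templates.get(claimed_id)
    return tmpl is not None and similarity(sample, tmpl) >= THRESHOLD

probe = np.array([0.88, 0.12, 0.09])
print(identify(probe))         # "emily" -- the system answers with an identity
print(verify("emily", probe))  # True   -- the system answers match / non-match
```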
Faces are integral to human interaction
Manual facial recognition is already used in everyday authentication applications:
- ID card systems (passports, health cards, and driver's licenses)
- Booking stations
- Surveillance operations

Face appearance is a compelling biometric because it is integral to everyday human interaction: it is the primary means by which people recognize one another. As such, facial scan is more acceptable than other biometrics, save for fingerprints. Since the advent of photography, faces have been the primary method of identification in passports and ID card systems. Because optical imaging devices can easily capture faces, there are large legacy databases that can provide template sources for facial-scan technology, including police mug-shot databases and television footage. Cameras can acquire the biometric passively, so facial-scan technology is easy to use and can be deployed overtly or covertly.

Source: Bolle, R.M. et al. (2004) Guide to Biometrics, New York: Springer-Verlag: 35-37
Facial recognition requires two steps:
1. Facial detection (will not present today)
2. Facial recognition

Typical facial recognition technology automates the recognition of faces using one of two modeling approaches:
- Face appearance: 2D eigenfaces, 3D morphable model
- Face geometry: 3D expression-invariant recognition

According to Jeng and Liao, in "Facial Feature Detection Using Geometrical Face Model: An Efficient Approach", published in 1997: "Facial recognition requires two major steps. Firstly, faces have to be detected from a scene, either simple or cluttered. Then the detected faces have to be normalized and recognized by a specially designed classifier."

According to Antonini et al., in "Independent Component Analysis and Support Vector Machine for Face Feature Extraction": "Face detection-recognition algorithms are made up of three different steps: 1 - localization of the face regions; 2 - extraction of meaningful features; 3 - normalization of the image with respect to the features to perform a recognition step."

Face appearance: reduce a facial image containing thousands of pixels to a handful of numbers; detecting and recognizing faces then becomes a statistical comparison between a collection of encoded faces and a newly encoded face. Face geometry: model a human face in terms of particular facial features, such as the eyes and mouth, and their geometrical layout; face recognition is then a matter of matching feature geometry.

Source: Bolle, R.M. et al. (2004) Guide to Biometrics, New York: Springer-Verlag: 35-38
Facial Recognition Algorithms
- 2D Eigenface: Principal Component Analysis (PCA)
- 3D Face Recognition: 3D Expression Invariant Recognition, 3D Morphable Model
Facial Recognition: Eigenface
Decompose face images into a small set of characteristic feature images. A new face is compared against these stored images; a match is found if the new face is close to one of them.
Facial Recognition: PCA - Overview
1. Create a training set of faces and calculate the eigenfaces.
2. Project the new image onto the eigenfaces.
3. Check whether the image is close to "face space".
4. Check closeness to one of the known faces.
5. Add unknown faces to the training set and re-calculate.
Facial Recognition: PCA – Training Set
Facial Recognition: PCA Training
1. Find the average of the training images.
2. Subtract the average face from each image.
3. Create the covariance matrix.
4. Generate the eigenfaces.

Each original image can then be expressed as a linear combination of the eigenfaces; this basis spans the "face space".
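The training steps above can be sketched with NumPy. Random vectors stand in for flattened, aligned face images, and the eigenfaces are computed via SVD of the centered data, which is equivalent to diagonalizing the covariance matrix but avoids forming the large pixel-by-pixel matrix explicitly.

```python
# Sketch of PCA/eigenface training; random vectors stand in for face images.
import numpy as np

rng = np.random.default_rng(0)
n_images, n_pixels = 8, 100
faces = rng.random((n_images, n_pixels))      # each row is a flattened image

# Steps 1-2: average face, then subtract it from every image
mean_face = faces.mean(axis=0)
centered = faces - mean_face

# Steps 3-4: eigenfaces via SVD of the centered data (equivalent to
# eigendecomposing the covariance matrix, without building it explicitly)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt                               # rows are eigenfaces

# Each centered image is a linear combination of the eigenfaces ("face space")
weights = centered @ eigenfaces.T             # coordinates in face space
reconstruction = weights @ eigenfaces + mean_face
print(np.allclose(reconstruction, faces))     # True: full basis reconstructs exactly
```

In practice only the top few eigenfaces are kept, trading exact reconstruction for a compact representation.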
Facial Recognition: PCA Recognition
A new image is projected into the "face space", creating a vector of weights that describes the image. The distance between this weight vector and that of each known face is then computed; if the distance falls within a certain threshold, the face is recognized.
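The recognition step above can be sketched as follows: project the probe into face space and find the nearest enrolled face by Euclidean distance. The data are synthetic stand-ins and the acceptance threshold is an invented value for illustration.

```python
# Sketch of eigenface recognition: project a probe into face space and
# find the nearest known face. Data and threshold are illustrative.
import numpy as np

rng = np.random.default_rng(1)
faces = rng.random((6, 50))                   # training set, flattened images
mean_face = faces.mean(axis=0)
centered = faces - mean_face
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:4]                           # keep the top 4 eigenfaces

train_weights = centered @ eigenfaces.T       # each known face in face space
THRESHOLD = 2.0                               # hypothetical acceptance radius

def recognize(image):
    """Project a probe into face space; return the nearest face index or None."""
    w = (image - mean_face) @ eigenfaces.T
    dists = np.linalg.norm(train_weights - w, axis=1)
    best = int(np.argmin(dists))
    return best if dists[best] < THRESHOLD else None

probe = faces[2] + rng.normal(0, 0.01, 50)    # noisy copy of a known face
print(recognize(probe))                       # 2: the noisy probe matches face 2
```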
Facial Recognition: 3D Expression Invariant Recognition
Treats the face as a deformable object. A 3D system maps the face and captures its facial geometry in a canonical form, which can then be compared to other canonical forms.
Facial Recognition: 3D Morphable Model
Creates a 3D face model from 2D images. Synthetic facial images are generated from the model and added to the training set; PCA can then be performed using these images.
Pros and Cons
- 2D face recognition methods are sensitive to lighting, head orientation, facial expressions, and makeup.
- 2D images contain limited information.
- A 3D representation of the face is less susceptible to isometric deformations (expression changes).
- The 3D approach overcomes the problem of large changes in facial orientation.
Common Biometric Exchange Formats Framework (CBEFF)
XML implementation of CBEFF.

CBEFF data elements:
- Standard Biometric Header
- Biometric Specific Memory Block
- Signature or MAC
Conclusion
- Facial scan has unique advantages over other biometrics.
- The core technologies are highly researched.
- Automated facial detection and facial recognition algorithms are not yet mature.
References
- Antonini, G. et al. (2003) "Independent Component Analysis and Support Vector Machine for Face Feature Extraction", Signal Processing Institute, Swiss Federal Institute of Technology, Lausanne, Switzerland: 1-8
- Bolle, R.M. et al. (2004) Guide to Biometrics, New York: Springer-Verlag: 1-5
- Bronstein, A.M. et al. (2003) "Expression-Invariant 3D Face Recognition", AVBPA, LNCS (2688): 62-70, Springer-Verlag Berlin Heidelberg
- Huang, J. et al. (2003) "Component-based Face Recognition with 3D Morphable Models", Center for Biological and Computational Learning, MIT
- Jeng, S.H. et al. (1998) "Facial Feature Detection Using Geometrical Face Model: An Efficient Approach", Pattern Recognition, vol. 31(3)
- Nanavati, S. et al. (2002) Biometrics: Identity Verification in a Networked World, New York: John Wiley & Sons, Inc: 1-5
- Storring, M. (2004) "Computer Vision and Human Skin Colour", PhD Dissertation, Computer Vision and Media Technology Laboratory, Aalborg University
- Turk, M. and Pentland, A. (1991) "Eigenfaces for Recognition", Journal of Cognitive Neuroscience, vol. 3(1), The Media Laboratory: Vision and Modeling Group, MIT
- Vezhnevets, V. et al. (2002) "A Survey on Pixel-Based Skin Color Detection Techniques", Graphics and Media Laboratory, Faculty of Computational Mathematics and Cybernetics, Moscow State University
Questions
Facial Detection: Colour
Algorithms: pixel-based, region-based. Approaches: an explicitly defined region within a specific colour space, or a dynamic skin distribution model.

The majority of colour-related facial detection algorithms focus on detecting human skin. Human skin is composed of a thin layer, the epidermis, and the dermis, a thicker layer directly below the epidermis. On average, human skin directly reflects about 5 percent of light, independent of its wavelength. The rest of the incident light enters the two layers of skin and is reflected depending on the colourant substance, called melanin, in the dermis layer. As the dermis itself is basically the same for all humans, it is the varying concentration of melanin that results in skin colour.

To determine skin colour, an algorithm must choose a colour value within a specific colour space. In the literature, the following colour spaces are predominant:
- RGB is a colour space that originated in display applications, where it is most convenient to combine three colored rays.
- Normalized RGB represents RGB using a normalization procedure that reduces the values to components summing to 1; these components are called pure colours.
- Hue-saturation based colour spaces were introduced when colours needed to be represented numerically.
- YCrCb is an encoded nonlinear RGB signal commonly used by European television studios and in image compression. Colour is represented by luma, a weighted sum of the RGB values, and two colour-difference values formed by subtracting luma from RGB's red and blue components.

No matter the algorithm, the goal of skin colour detection is to build a decision model that discriminates between skin and non-skin. These decision models are then used to classify skin based on independent pixels or on regions.
Pixel-based skin detection methods classify each pixel as skin or non-skin individually and independently of its neighbors. Region-based skin detection methods take the spatial arrangement of skin pixels into account during the detection stage. Source: Storring, M. (2004) "Computer Vision and Human Skin Colour", PhD Dissertation, Computer Vision and Media Technology Laboratory, Aalborg University

One method of building a decision model is to explicitly define the boundaries within which skin colour can exist in a colour space. This approach creates a very rapid classifier.

A second method is non-parametric skin modeling, which estimates the skin colour distribution from a training set without deriving an explicit model of the skin colour. The result is referred to as a "Skin Probability Map". One implementation of this model is the "Self-Organizing Map" (SOM), one of the more popular types of unsupervised neural networks. Essentially, the network is fed two classes of manually labeled images: one skin-only, the other non-skin. SOM skin detectors are able to function across colour spaces.

A third method involves designing and tuning a model during face tracking. This method presupposes that another face detection mechanism is present to quickly determine the presence of a face. Essentially, once a face is detected, an ad hoc model of its skin is created for that face; afterwards, the model is used to detect other occurrences of human skin. Source: Vezhnevets, V. et al. (2002) "A Survey on Pixel-Based Skin Color Detection Techniques", Graphics and Media Laboratory, Faculty of Computational Mathematics and Cybernetics, Moscow State University
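The first decision model, an explicitly defined region of colour space, can be written directly as a per-pixel rule. The thresholds below are the widely cited Peer et al. daylight rule for RGB, shown here as one concrete instance of this family of classifiers rather than the rule used by any source cited above.

```python
# Pixel-based skin classifier using an explicitly defined RGB region
# (the Peer et al. daylight-illumination thresholds, as an example rule).
def is_skin_rgb(r: int, g: int, b: int) -> bool:
    """Classify a single pixel as skin/non-skin from its 8-bit RGB values."""
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15   # enough colour spread
            and abs(r - g) > 15                     # red dominates green
            and r > g and r > b)                    # red is the largest channel

print(is_skin_rgb(220, 170, 140))  # typical skin tone -> True
print(is_skin_rgb(40, 90, 200))    # blue pixel -> False
```

Rules like this are extremely fast, which is why the explicit-boundary approach "creates a very rapid classifier", but the fixed region makes them brittle under changing illumination.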
Facial Detection: Geometry
Faces decompose into four main organs: eyebrows, eyes, nose, and mouth. The algorithm has three steps: pre-processing, labeling, and grouping (matching).

1. Pre-processing. All reflections on the eyes and glasses (if present) are removed. A boost-filtering mask is employed to enhance the contrast between the face features and the rest of the face image. To prevent the side effect of over-enhancing the background, a gradient is introduced for computing the center weight of the mask. The image is then binarized into a sketch-like image.
2. Labeling. A run-length local-table method is employed to group the active pixels into connected blocks and label them. This algorithm makes two passes. After labeling, the center of mass of each block is determined, and a bounding rectangle is computed. Once grouped into blocks, each block is regarded as a facial feature candidate.
3. Grouping. A geometric determination based on the distribution of the blocks is then made to decide whether a face is present.
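The labeling step can be sketched as follows. For simplicity this uses a flood fill rather than the two-pass run-length table described above, but it produces the same kind of output the grouping step consumes: connected blocks of active pixels and their centers of mass.

```python
# Sketch of connected-component labeling on a binarized image, computing
# each block's center of mass (flood fill stands in for the run-length method).
from collections import deque

def label_blocks(binary):
    """binary: list of rows of 0/1. Returns {label: (row_center, col_center)}."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    centroids, next_label = {}, 1
    for i in range(h):
        for j in range(w):
            if binary[i][j] == 1 and labels[i][j] == 0:
                pixels, queue = [], deque([(i, j)])
                labels[i][j] = next_label
                while queue:                       # flood-fill one block
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                           and binary[ny][nx] == 1 and labels[ny][nx] == 0:
                            labels[ny][nx] = next_label
                            queue.append((ny, nx))
                rc = sum(p[0] for p in pixels) / len(pixels)
                cc = sum(p[1] for p in pixels) / len(pixels)
                centroids[next_label] = (rc, cc)   # block's center of mass
                next_label += 1
    return centroids

img = [[1, 1, 0, 0],
       [1, 1, 0, 1],
       [0, 0, 0, 1]]
print(label_blocks(img))  # {1: (0.5, 0.5), 2: (1.5, 3.0)}
```

Each labeled block then becomes a facial-feature candidate for the geometric grouping step.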
Facial Detection: Demo (Torch3Vision)
Features: basic image processing and feature extraction algorithms such as rotation, flip, photometric normalisations (Histogram Equalization, Multiscale Retinex, Self-Quotient Image or Gross-Brajovic), edge detection, 2D DCT, 2D FFT, 2D Gabor, PCA to do Eigen-Faces, LDA to do Fisher-Faces; face detection using MLP or a cascade of Haar-like classifiers.

Neural networks. A neural network is an interconnected group of nodes (called artificial neurons) that uses a computational model for information processing. It is a network of relatively simple processing elements whose behavior is determined by the connections between the processing elements and by element parameters. Neural networks are useful for working with bounded real-valued data and for determining patterns.

The earliest implementation of a neural network is the single-layer perceptron (SLP), which consists of a single layer of output nodes. In a single-layer perceptron network, inputs are fed directly to the output nodes through a series of weights. The sum of the products of the weights and the inputs is calculated in each node; if the value is above some threshold, the neuron fires and takes the activated value. As such, single-layer perceptrons can be trained by calculating the errors between the calculated output and sample output data and making adjustments to the weights. However, single-layer perceptrons are only capable of learning linearly separable patterns.

The multi-layer perceptron (MLP) is a class of neural network that consists of multiple layers of computational units, usually interconnected in a feed-forward fashion: each neuron in one layer has directed connections to the neurons in the next, and again these connections are weighted. Unlike a single-layer perceptron network, multi-layer networks can use the back-propagation learning technique: the calculated output values are compared to the sample output values to compute the value of a predefined error function.
The error is then fed back through the network. Using this information, the weights of each connection are adjusted to reduce the value of the error function. After repeating this process for a sufficiently large number of training cycles, the network learns the target function. As such, multi-layer perceptrons are more robust and are capable of learning complex patterns, such as the variety of cues required for face detection; such patterns are not linearly separable. Source: Wikipedia
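The single-layer training rule described above (adjust each weight by the error between computed and sample output) can be sketched in a few lines. Learning the AND function is used here because it is linearly separable, exactly the class of patterns an SLP can handle; the learning rate and epoch count are arbitrary illustrative choices.

```python
# Minimal single-layer perceptron trained with the error-driven update rule;
# AND is a linearly separable pattern such a network can learn.
def train_perceptron(samples, epochs=20, lr=0.1):
    """Train weights and bias on (inputs, target) pairs with 0/1 targets."""
    w = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
            err = target - out            # error between sample and computed output
            w[0] += lr * err * x1         # adjust each weight by the error
            w[1] += lr * err * x2
            bias += lr * err
    return w, bias

and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in and_data])
# [0, 0, 0, 1]
```

A non-separable pattern such as XOR would never converge under this rule, which is what motivates the multi-layer networks and back-propagation described above.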