MITRE Computer Science Clinic
Landmarking and Pose Correction for Face Recognition

Background
MITRE Corporation is a federally funded research and development corporation that has developed its own facial recognition system, known as MITRE Matcher. Non-frontal facial images present a significant challenge to the recognition process for MITRE Matcher and other facial recognition systems, even when the pose variation is as small as ten or twenty degrees. This project's goal was to research, implement, and evaluate facial-landmarking algorithms and approaches to pose analysis and pose correction.

Modeling Faces with Active Shape Models
An Active Shape Model (ASM) uses a dataset's statistics to capture the possible shapes that objects of a certain class can take. We used face shapes consisting of sets of landmarks (nose tip, mouth corners, etc.) and trained the model to recognize the configuration of the average face in our training data. We also find the most significant parameters that describe the ways a face can vary from the average while still representing a viable face.
[Figure: The mean face (green) computed from 300 faces (white), after alignment via the Procrustes algorithm the team implemented.]
[Figure: Three standard deviations from the mean along the largest source of variation, roughly corresponding to yaw.]
[Figure: Three standard deviations from the mean along a principal source of variation roughly representing pitch.]

Facial Landmarking

ASEF
The Average of Synthetic Exact Filters (ASEF) is a texture-based method for landmarking. To create an ASEF filter we specify a desired synthetic output for each training image: a Gaussian dot centered at the ground-truth landmark position in that image. The result is an "exact" filter that transforms the training image precisely into the synthetic image. We average all of a dataset's exact filters to obtain the final ASEF filter, which can then be applied to facial images to locate the landmark.
[Figure: Synthetic image; exact filter; average filter (ASEF).]

UMACE
We also investigated and implemented a feature-detection algorithm based on Unconstrained Minimum Average Correlation Energy (UMACE) filters. Working in the frequency domain, we divide the average of standard square regions centered on the ground-truth eye location of each training image by their average power spectrum. This gives us a correlation filter that we can apply to a standard eye-containing region to determine possible eye locations within that region.
[Figure: Training image; UMACE filter; facial image with right-eye region; response of the eye region to the filter.]
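To make the filter construction concrete, here is a minimal sketch of ASEF-style training and landmark localization. It is an illustration rather than the team's delivered code; the NumPy implementation, the Gaussian width sigma, and the regularization constant eps are assumptions.

```python
import numpy as np

def exact_filter(image, landmark, sigma=2.0):
    """Frequency-domain 'exact' filter for one training image.

    The filter H is defined so that ifft2(fft2(image) * H) reproduces the
    desired synthetic output: a Gaussian dot at the ground-truth landmark.
    """
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Desired synthetic output for this training image, landmark = (x, y).
    desired = np.exp(-((xs - landmark[0]) ** 2 + (ys - landmark[1]) ** 2)
                     / (2 * sigma ** 2))
    F = np.fft.fft2(image)
    G = np.fft.fft2(desired)
    eps = 1e-8  # assumed regularizer to avoid dividing by near-zero frequencies
    return G / (F + eps)

def train_asef(images, landmarks, sigma=2.0):
    """Average the per-image exact filters (in the frequency domain)."""
    filters = [exact_filter(im, lm, sigma) for im, lm in zip(images, landmarks)]
    return np.mean(filters, axis=0)

def locate_landmark(image, asef):
    """Apply the averaged filter to a new image and return the peak response (x, y)."""
    response = np.real(np.fft.ifft2(np.fft.fft2(image) * asef))
    y, x = np.unravel_index(np.argmax(response), response.shape)
    return int(x), int(y)
```

Correlation-filter systems typically also preprocess the crops (for example, log-transforming and cosine-windowing them) before filtering; those steps are omitted here for brevity.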
Landmarking heuristics

Multiple responses
King of the Hill is a technique for finding the n strongest local maxima in a two-dimensional array. We use it to determine the top three possible locations in our UMACE or ASEF filter responses (a minimal sketch of one way to implement this appears at the end of this transcript).

Geometric constraints
Facial features tend to end up in roughly the same area of each facial crop. We take advantage of this by constraining the area in which we search for each landmark.

Combining features
An Active Shape Model (described above) and/or feature-strength heuristics can provide a probability that a particular set of landmarks forms a face shape. The current system uses only feature strength, but it can support additional metrics in the future.

Image warping
We use thin-plate splines to smoothly warp from one face's set of landmarks to another. By isolating the yaw component from the ASM, we transform a landmarked face into a neutral, frontal pose; the other ASM-derived vectors enable other transformations.
[Figure: Example yaw warp: off-pose (right) to frontal pose (middle).]

Pose correction pipeline
Overview: The original image is cropped to the face and landmarked to determine possible feature locations on the face. The best combination of feature locations is selected using a combination of spatial heuristics and statistical estimation. Using a statistical model of landmark variation with pose, the landmarks are neutrally posed, and these landmarks are then used to warp the image to a neutral pose.
[Figure: Original off-pose image; best landmarks; landmark map; pose-corrected image.]

Results and Deliverables
The team is delivering to MITRE: code implementing ASEF and UMACE landmarking, Active Shape Models and their component algorithms, and our pose-correction technique; scripts and applications for testing and demonstrating all of these algorithms; and accuracy results for landmarking and for pose-corrected match scores. Because full-image pose correction can lead to lower match scores, improved face recognition may result from comparing feature-relative patches instead of warped full images. The team's feature-extraction routines will form the basis of that process.
[Figure: Accuracy results for the best-match feature in each of six locations; circles show 5%, 10%, and 25% of inter-ocular distance.]
[Figure: The original and a forward-facing comparison image and the resulting match scores; for reference, the self-match score is about 6.29.]

Acknowledgments
Team Members: Elliot Godzich '12, Dylan Marriner '12, Emily Myers-Stanhope '12, Emma Taborsky '12 (PM), Heather Williams '12
MITRE Liaisons: Joshua Klontz '10, Mark Burge
Faculty Advisor: Zachary Dodds
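As referenced in the landmarking heuristics above, here is a minimal sketch of the King of the Hill peak-finding step. The greedy strategy (take the global maximum, suppress its neighborhood, repeat) and the suppression radius are assumptions; the poster does not specify the exact implementation.

```python
import numpy as np

def king_of_the_hill(response, n=3, radius=5):
    """Return (x, y) locations of the n strongest peaks in a 2D response map.

    One plausible realization: greedily take the global maximum, zero out a
    square neighborhood around it so nearby pixels cannot win again, and
    repeat n times.  The suppression radius is an assumed parameter.
    """
    work = response.astype(float)  # float copy so peaks can be suppressed with -inf
    h, w = work.shape
    peaks = []
    for _ in range(n):
        y, x = np.unravel_index(np.argmax(work), work.shape)
        peaks.append((int(x), int(y)))
        # Suppress the chosen peak's neighborhood.
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        work[y0:y1, x0:x1] = -np.inf
    return peaks
```

Applied to an ASEF or UMACE response map, this yields the top three candidate locations for a landmark; the geometric constraints and feature-combining heuristics described above then choose among them.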