Quantification of Facial Asymmetry for Expression-invariant Human Identification
Yanxi Liu (yanxi@cs.cmu.edu)
The Robotics Institute, School of Computer Science
Carnegie Mellon University, Pittsburgh, PA, USA

Acknowledgement Joint work with Drs. Karen Schmidt and Jeff Cohn (Psychology, U. of Pitt). Students who worked on the data as research projects: Sinjini Mitra, Nicoleta Serban, and Rhiannon Weaver (statistics, CMU), Yan Karklin, Dan Bohus (computer science), and Marc Fasnacht (physics). Helpful discussions and advice provided by Drs. T. Minka, J. Schneider, B. Eddy, A. Moore, and G. Gordon. Partially funded by a DARPA HID grant to CMU entitled "Space Time Biometrics for Human Identification in Video".

Human Faces are Asymmetrical. (Figure: left face vs. right face.)

Under Balanced Frontal Lighting (from CMU PIE Database)

What is Facial Asymmetry? Intrinsic facial asymmetry in individuals is determined by biological growth, injury, age, expression … Extrinsic facial asymmetry is affected by viewing orientation, illumination, shadows, highlights …

Extrinsic facial asymmetry in an image is pose-variant. (Figure: left face, original image, right face.)

Facial Asymmetry Analysis
Many studies in psychology have addressed:
- attractiveness vs. facial asymmetry (Thornhill & Buelthoff 1999)
- expression vs. facial movement asymmetry
- identification: humans are extremely sensitive to facial asymmetry; facial attractiveness for men is inversely related to recognition accuracy (O'Toole 1998)
Limitations: qualitative, subjective, still photos.

Motivations
Facial (a)symmetry is a holistic structural feature that has not previously been explored quantitatively.
It is unknown whether intrinsic facial asymmetry is characteristic of human expressions or of human identities.

The question to be answered in this work: how does intrinsic facial asymmetry affect human face identification?

DATA: Expression Videos from the Cohn-Kanade AU-Coded Facial Expression Database. (Figure: neutral and peak frames for joy, anger, and disgust.)

Sample Facial Expression Frames: 55 subjects in total; each subject has three distinct expression videos with varying numbers of frames, giving 3703 frames in total. (Figure: neutral, joy, disgust, and anger frames.)

Face Image Normalization: affine deformation based on 3 reference points. (Figure: inner canthus, face midline, and philtrum marked.)
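
A minimal sketch (not the authors' implementation) of this normalization step using OpenCV: the canonical destination coordinates and output size below are illustrative assumptions.

```python
import cv2
import numpy as np

# Canonical landmark positions in the normalized image (assumed values):
# left inner canthus, right inner canthus, philtrum.
CANONICAL_PTS = np.float32([[40, 48], [88, 48], [64, 88]])
OUT_SIZE = (128, 128)  # (width, height) of the normalized face image

def normalize_face(image, left_canthus, right_canthus, philtrum):
    """Affinely deform `image` so the three reference points land on the
    canonical positions, which also makes the face midline vertical."""
    src = np.float32([left_canthus, right_canthus, philtrum])
    M = cv2.getAffineTransform(src, CANONICAL_PTS)  # 2x3 affine from 3 point pairs
    return cv2.warpAffine(image, M, OUT_SIZE)
```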

Quantification of Facial Asymmetry
1. Density Difference (D-face): D(x,y) = I(x,y) - I'(x,y), where I(x,y) is the normalized face image and I'(x,y) is the bilateral reflection of I(x,y) about the face midline.
2. Edge Orientation Similarity (S-face): S(x,y) = cos φ(x,y), where φ(x,y) is the angle between the two gradient vectors of the edge images Ie and I'e of I and I' at each pair of corresponding points.
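
A sketch of the two measures, assuming the input is a normalized grayscale face whose midline is the image's vertical center line; Sobel gradients stand in for the edge images Ie and I'e.

```python
import cv2
import numpy as np

def d_face(face):
    """Density Difference: D(x,y) = I(x,y) - I'(x,y), where I' is the
    left-right reflection of I about the (vertical) face midline."""
    face = face.astype(np.float32)
    return face - face[:, ::-1]

def s_face(face):
    """Edge Orientation Similarity: cosine of the angle between the gradient
    vectors of the face and of its reflection at corresponding points."""
    face = face.astype(np.float32)
    gx = cv2.Sobel(face, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(face, cv2.CV_32F, 0, 1)
    rgx, rgy = -gx[:, ::-1], gy[:, ::-1]   # gradient field of the reflected image
    dot = gx * rgx + gy * rgy
    norms = np.sqrt(gx**2 + gy**2) * np.sqrt(rgx**2 + rgy**2)
    return dot / (norms + 1e-8)            # cos(angle) in [-1, 1]
```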

Asymmetry Faces: one half of a D-face or S-face contains all the needed information. We call these half faces, Dh and Sh, together with their axis projections Dhx, Dhy, Shx, Shy, the AsymFaces. (Figure: original, D-face, S-face.)
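
A sketch, under the assumption that Dhx and Dhy are the column and row averages (axis projections) of the half D-face, of how the AsymFaces might be formed from a D-face; the S-face case is analogous.

```python
import numpy as np

def asym_faces(d):
    """Split a D-face into its informative half and its axis projections."""
    dh = d[:, : d.shape[1] // 2]   # half D-face: the other half is its negation
    dhx = dh.mean(axis=0)          # projection onto the x-axis (one value per column)
    dhy = dh.mean(axis=1)          # projection onto the y-axis: forehead-to-chin profile
    return dh, dhx, dhy
```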

Facial Asymmetry as a Biometric: spatiotemporal facial asymmetry of expression videos from the Cohn-Kanade database. (Figure: original, D-face, and S-face frames for subjects 85 and 10 across joy, anger, and disgust, from neutral to peak; inner canthus and philtrum marked.)

Asymmetry measure Dhy for two subjects, each with three distinct expressions. (Figure: Dhy from forehead to chin over the joy | anger | disgust sequences of each subject.)

(Figures: spatial and temporal plots of the Dhy asymmetry measure, forehead to chin.)

Evaluation of the Discriminative Power of Each Dimension in the AsymFace Dhy. (Figure: variance ratio plotted from forehead to chin, with the bridge of the nose marked.)
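
A sketch of a variance-ratio score that could produce such a plot, assuming the standard between-subject over within-subject variance ratio computed independently for each Dhy dimension.

```python
import numpy as np

def variance_ratio(dhy_features, subject_ids):
    """dhy_features: (n_frames, n_dims) array of Dhy vectors;
    subject_ids: (n_frames,) array of subject labels.
    Returns one discriminability score per dimension (higher = better)."""
    subject_ids = np.asarray(subject_ids)
    classes = np.unique(subject_ids)
    class_means = np.stack([dhy_features[subject_ids == c].mean(axis=0) for c in classes])
    within = np.stack([dhy_features[subject_ids == c].var(axis=0) for c in classes]).mean(axis=0)
    between = class_means.var(axis=0)
    return between / (within + 1e-8)
```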

Most Discriminating Facial Regions Found

Experiment Setup
55 subjects, each with three expression video sequences (joy, anger, disgust); 3703 frames in total. Human identification tests (see the sketch after this list):
- Experiment #1: train on joy and anger, test on disgust
- Experiment #2: train on joy and disgust, test on anger
- Experiment #3: train on disgust and anger, test on joy
- Experiment #4: train on neutral expression frames, test on peak expression frames
- Experiment #5: train on peak expression frames, test on neutral expression frames
The five experiments are carried out using (1) AsymFaces, (2) FisherFaces, and (3) AsymFaces and FisherFaces together.
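
A sketch of the train/test protocol listed above; the 1-nearest-neighbour identification step is a stand-in for the actual classifiers (AsymFaces, FisherFaces, or both), and the array names are assumptions.

```python
import numpy as np

def identification_rate(features, subject_ids, expr_labels, test_expr):
    """Train on all expressions except `test_expr`, test on `test_expr`,
    and report the fraction of test frames whose nearest training frame
    (Euclidean distance in feature space) belongs to the correct subject."""
    subject_ids, expr_labels = np.asarray(subject_ids), np.asarray(expr_labels)
    train, test = expr_labels != test_expr, expr_labels == test_expr
    dists = np.linalg.norm(features[test][:, None, :] - features[train][None, :, :], axis=2)
    predicted = subject_ids[train][dists.argmin(axis=1)]
    return (predicted == subject_ids[test]).mean()

# Experiments #1-#3, for example:
# for expr in ("disgust", "anger", "joy"):
#     print(expr, identification_rate(X, subjects, expressions, expr))
```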

Sample Results: Combining Fisherfaces (FF) with AsymFaces (AF) (Liu et al. 2002). The data set is composed of 55 subjects, each with three expression videos: 1218 joy frames, 1414 anger frames, and 1071 disgust frames, 3703 frames in total.

All combinations of FF and AF features are tested and evaluated quantitatively

Complementing Conventional Face Classifiers
107 pairs of face images taken from the FERET database. The asymmetry signature's discriminating power is shown to
(1) differ from chance with a p-value << 0.001, and
(2) be independent of the features used in conventional classifiers, reducing the error rate of a PCA classifier by 38% (15% → 9.3%).

Quantified Facial Asymmetry Used for Pose Estimation

Summary
Quantification of facial asymmetry is computationally feasible.
The intrinsic facial asymmetry of specific regions captures individual differences that are robust to variations in facial expression.
AsymFaces provide discriminating information that is complementary to conventional face identification methods (FisherFaces).

Future Work
(1) Construct multiple, more robust facial asymmetry measures that can capture intrinsic facial asymmetry under illumination and pose variations, using PIE as well as other publicly available facial data.
(2) Develop computational models for studying how recognition rates are affected by facial asymmetry under gender, race, attractiveness, and hyperspectral variations.
(3) Study pose estimation using a combination of facial asymmetry and skewed symmetry.