The Extended Cohn-Kanade Dataset (CK+): A complete dataset for action unit and emotion-specified expression By: Patrick Lucey, Jeffrey F. Cohn, Takeo Kanade, Jason Saragih, and Zara Ambadar.

Presentation transcript:

The Extended Cohn-Kanade Dataset (CK+): A complete dataset for action unit and emotion-specified expression By: Patrick Lucey, Jeffrey F. Cohn, Takeo Kanade, Jason Saragih, and Zara Ambadar At: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) 2010 I am going to present the extended Cohn-Kanade Dataset expansion. I selected this paper for two reasons: 1. It is related to our emotion classifier work. 2. It is the most influential dataset for emotion and AU studies. Presented by: Gustavo Augusto 15/11/2012

Introduction Emotion classification has been a popular research topic for decades. One of the most widely used datasets for studying and developing facial expression detection was the Cohn-Kanade (CK) database. This paper covers: Expansion and proper labelling of the CK Emotion classification and Action Units (AU) Emotion classification is a popular research topic, but in order to study it, it is necessary to have some baseline data. The Cohn-Kanade dataset aided countless projects by sharing this baseline data; therefore, it is a well-known dataset in the research community. However, the CK dataset had poor labelling and no information regarding the action units. This paper has two main parts: 1. the construction of the dataset and its labelling for emotions and Action Units; 2. the system built to automatically recognize action units and emotions.

Extended Cohn-Kanade The Cohn-Kanade dataset was composed of 486 FACS-coded sequences from 97 subjects. The Cohn-Kanade+ dataset expanded it to 593 sequences from 123 subjects! FACS coder revision for the sequences Validated emotional labelling for the sequences Talking a little about the Cohn-Kanade dataset: it was composed of 486 sequences from 97 subjects labelled with emotions. It was expanded to 593 sequences, and a full FACS coder review was added to detect all action units at the peak of the emotion in each sequence, contributing mainly the proper labelling.

Emotion Validation First label: the subject's impression of each of the 6 basic emotion categories plus contempt: Anger, Disgust, Fear, Happy, Sadness, Surprise, Contempt. The first attempt to label the emotions was based on the subject's impression: they asked a subject to make a happy face, and the sequence was labelled as happy. However...

Emotion Validation Unreliable This approach was unreliable because people would vary from neutral to something different from the target emotion. For example, some people might involuntarily laugh while making the angry expression, invalidating the anger label.

Emotion Validation FACS codes with emotion prediction table Second FACS filtering Visual inspection Therefore they proposed 3 steps to validate the emotion presented in each sequence. After having FACS coders (people trained to detect each AU) write down every AU present in each sequence: 1. They use an emotion prediction table to validate the emotions, e.g. if this AU is present, then the sequence is classified as Happy. 2. A second FACS filtering defines criteria for each emotion, e.g. the sequence must have BOTH AU X and AU Y present and must not have AU Z; a sequence with AUs X and Y passes the first filter, but if it also has AU Z it fails this one. 3. Finally, a visual inspection checks whether the change across the whole sequence visually matches the expression (e.g. subjects should not drift into other expressions in the middle, only from neutral to the target expression). A minimal sketch of this kind of rule check follows below.
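To make step 2 concrete, here is a minimal sketch of how a criteria check over the coded AUs could look. The rules and names below are illustrative placeholders, not the paper's actual criteria table.

```python
# Hypothetical sketch of the second FACS-filtering step: each emotion has a
# rule over the set of AUs coded at the peak frame. These rules are toy
# examples, not the paper's actual criteria.

def matches_happy(aus):
    # Toy rule: AU 12 must be present and AU 15 must be absent.
    return 12 in aus and 15 not in aus

def matches_surprise(aus):
    # Toy rule: AUs 1 and 2 plus either AU 5 or AU 26 must be present.
    return {1, 2} <= aus and (5 in aus or 26 in aus)

CRITERIA = {"happy": matches_happy, "surprise": matches_surprise}

def validate_label(aus, candidate):
    """Keep the sequence only if its coded AUs satisfy the candidate emotion's rule."""
    rule = CRITERIA.get(candidate)
    return rule(set(aus)) if rule else False

print(validate_label([6, 12], "happy"))    # True under the toy rule
print(validate_label([12, 15], "happy"))   # False: AU 15 blocks the label
```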

Emotion Validation In the following table we can see the frequency with which each AU appears in the CK+ database. In the table below we can see the criteria used to validate each emotion label.

Emotion Validation In this picture we can see examples of emotion label validation, e.g. picture (g), sadness, composed of AU 1+2+4+15+17.

The Automatic System Overview Now that we have explained how the database was created, we are going to explain the system they used to test the created dataset: a semi-automatic system that classifies action units and emotions. First we have the video as input, then we fit an AAM (explained later), retrieve features from the AAM, and classify them with a previously trained SVM classifier.

The Automatic System Active Appearance Models (AAM): Defined by a 2D triangulated mesh Contains rigid and non-rigid geometric deformations Similarity parameters for simple transforms Keyframes within each sequence manually labelled The AAM is based on a specific shape with the characteristics of a human face, divided into rigid areas (e.g. face contour) and non-rigid areas (e.g. mouth). There are also several similarity parameters to perform simple transforms such as rotations, translations and scales. This method is not fully automatic, because it requires manually inserting landmarks corresponding to the neutral shape on the first images of a sequence; for the following frames it adapts without requiring manual landmarking.
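As a rough illustration of the similarity parameters mentioned above, the NumPy sketch below (my own illustration, not the authors' AAM code) applies a 2D similarity transform to a 68-point shape and then strips the rigid part again by Procrustes alignment, leaving only the non-rigid deformation.

```python
import numpy as np

def apply_similarity(points, scale=1.0, angle=0.0, tx=0.0, ty=0.0):
    """Apply a 2D similarity transform (scale, rotation, translation)
    to an (N, 2) array of landmark coordinates."""
    c, s = np.cos(angle), np.sin(angle)
    R = scale * np.array([[c, -s], [s, c]])
    return points @ R.T + np.array([tx, ty])

def procrustes_align(points, reference):
    """Remove translation, scale and rotation relative to a reference shape,
    so that only the non-rigid deformation remains (reflections ignored)."""
    p = points - points.mean(axis=0)
    r = reference - reference.mean(axis=0)
    p = p / np.linalg.norm(p)
    r = r / np.linalg.norm(r)
    U, _, Vt = np.linalg.svd(p.T @ r)   # orthogonal Procrustes solution
    return p @ (U @ Vt)

# Example: a rotated, scaled, shifted copy aligns back onto the original shape.
shape = np.random.default_rng(0).normal(size=(68, 2))
moved = apply_similarity(shape, scale=1.3, angle=0.4, tx=5.0, ty=-2.0)
shape_c = shape - shape.mean(axis=0)
print(np.allclose(procrustes_align(moved, shape), shape_c / np.linalg.norm(shape_c)))
```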

The Automatic System Feature Extraction AAM 2D points (68 vertex points) Canonical normalized appearance (CAPP) There are 2 kinds of features extracted: the 2D landmarks of the AAM, i.e. the x and y coordinates of each landmark, and the canonical normalized appearance, which is the image corresponding to the face. P.S.: the CAPP examples on the right are not from the paper; the CAPPs used in the system are grayscale.
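A hedged sketch of how the two feature types could be assembled. OpenCV (`cv2`) is my assumption for the illustration, not something the paper specifies, and the real CAPP is obtained by warping the face through the AAM mesh; the crop-and-resize below is only a crude stand-in.

```python
import numpy as np
import cv2  # assumed dependency, used only for this illustration

def spts_feature(landmarks):
    """Shape feature: centre and scale-normalize the 68 (x, y) landmarks
    (rotation removal, as in the alignment sketch above, is omitted here)
    and flatten them into a 136-dimensional vector."""
    p = landmarks - landmarks.mean(axis=0)
    return (p / np.linalg.norm(p)).reshape(-1)

def capp_feature(gray_frame, landmarks, size=(96, 96)):
    """Crude stand-in for the canonical normalized appearance (CAPP):
    crop the landmark bounding box from the grayscale frame, resize it to
    a fixed canonical size, and flatten to a vector scaled to [0, 1]."""
    x, y, w, h = cv2.boundingRect(landmarks.astype(np.int32))
    face = gray_frame[y:y + h, x:x + w]
    return cv2.resize(face, size).reshape(-1).astype(np.float32) / 255.0
```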

The Automatic System Support Vector Machine Finally there is the classification. They use SVM classification, which consists of creating a division in the feature space, separating the answer from everything that is not the answer. E.g. if we use as features one landmark corresponding to the eye, we have a 2-dimensional space; in this space, the SVM finds the maximum-margin hyperplane that separates AU X from not-AU X.
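A minimal sketch of the one-vs-rest linear SVM idea on synthetic data. The use of scikit-learn and all names below are my assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Synthetic stand-in data: rows are feature vectors (e.g. SPTS or CAPP),
# labels say whether a given AU is present (1) or not (0) -- one vs. rest.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 136))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # toy "AU present" label

clf = LinearSVC(C=1.0)                # linear kernel: a single separating hyperplane
clf.fit(X, y)
scores = clf.decision_function(X)     # signed distance to the hyperplane
print("training accuracy:", clf.score(X, y))
```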

Results Evaluation method for AUs: SVM classification, supervised learning, one vs. others. Evaluation method for emotions: multiclass system (all vs. all). The results for AUs were based on "is it AU X or not" (one vs. others). The results for emotions were based on "is it emotion X and not any of the other emotions" (all vs. all).
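As a hedged illustration of the difference between the two protocols (scikit-learn and toy data assumed): AU detection is a binary "this AU vs. everything else" decision, while emotion recognition picks one label out of all seven classes.

```python
import numpy as np
from sklearn.svm import LinearSVC, SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 136))                  # toy feature vectors

# One vs. others (AU detection): binary labels for a single AU.
au_present = (X[:, 0] > 0).astype(int)
au_clf = LinearSVC().fit(X, au_present)

# All vs. all (emotion recognition): one label out of 7 emotion classes
# (scikit-learn's SVC trains pairwise, one-vs-one, classifiers internally).
emotions = rng.integers(0, 7, size=300)
emo_clf = SVC(kernel="linear").fit(X, emotions)

print(au_clf.predict(X[:3]), emo_clf.predict(X[:3]))
```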

Results AU detection SVM: one vs. all ROC curve as result (true positives vs. false positives) Linear logistic regression to combine scores In this table we can see, for each action unit, the frequency of that AU and the ROC result for each kind of feature, landmarks or texture. They also used linear logistic regression to combine both feature scores, improving the result.
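A sketch (again assuming scikit-learn; the scores below are made-up numbers, not results from the paper) of fusing the shape-based and texture-based SVM scores with logistic regression and summarizing the result as area under the ROC curve.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Made-up per-sequence SVM scores for one AU, from the two feature types.
shape_scores = np.array([1.2, -0.3, 0.8, -1.5, 0.1, 2.0])
texture_scores = np.array([0.9, 0.2, 1.1, -0.7, -0.4, 1.6])
labels = np.array([1, 0, 1, 0, 0, 1])          # AU present / absent

# Logistic regression learns a weighted combination of the two scores.
both = np.column_stack([shape_scores, texture_scores])
fusion = LogisticRegression().fit(both, labels)
fused = fusion.predict_proba(both)[:, 1]

print("AUC of the fused detector:", roc_auc_score(labels, fused))
```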

Results Emotion detection 2D features Texture Both We have three tables with the results for emotion classification using 2D features, texture features, and finally the combination of the two. As we can see, because the AAM shape captures little detail around the mouth, the 2D features give poor detection of emotions such as anger, fear, sadness and contempt. Since the texture has more detail around the mouth, the anger and sadness accuracy increased. After combining both features, the accuracy for emotions like contempt and fear improved, because they were previously mistaken for other emotions.

Conclusion This paper contributes an improved dataset that may benefit many works on emotion detection and AU detection. Raw SVM + texture classification results. This paper contributes a well-labelled dataset which should be used in our next emotion studies to perform automatic emotion classification and action unit classification. Their results were based solely on raw texture and 2D landmarks; we should test how well our features perform on this dataset.

Questions? Thanks for watching. Questions?