
1 The Extended Cohn-Kanade Dataset (CK+): A complete dataset for action unit and emotion-specified expression By: Patrick Lucey, Jeffrey F. Cohn, Takeo Kanade, Jason Saragih, and Zara Ambadar At: IEEE Computer Society Conference CVPRW 2010 I'm going to present the extended Cohn-Kanade dataset expansion. I selected this paper for two reasons: 1) it's related to our emotion classifier work; 2) it's the most influential dataset for emotion and AU studies. Presented by: Gustavo Augusto 15/11/2012

2 Introduction Emotion classification has been a popular research topic for decades. One of the most widely used datasets for the study and development of facial expression detection was the Cohn-Kanade (CK) database. This paper talks about: Expansion and proper labelling of the CK Emotion classification and Action Units (AU) Emotion classification is a popular research topic, but in order to study it, it is necessary to have some base information. The Cohn-Kanade dataset aided countless projects by sharing this base information, so it is a well-known dataset in the research community. However, the CK dataset had poor labelling and no information regarding the action units. This paper is composed of two main parts: 1) the construction of the dataset and its labelling for emotions and Action Units; 2) the system built to automatically recognize action units and emotions.

3 Extended Cohn-Kanade Cohn-Kanade was composed of 486 FACS-coded sequences from 97 subjects; Cohn-Kanade + expanded it to 593 sequences from 123 subjects! FACS coder revision for the sequences Validated emotion labelling for the sequences Talking a little bit about the Cohn-Kanade dataset: it was composed of 486 sequences from 97 subjects, labelled with emotions. It was expanded to 593 sequences, and a full FACS coder review was added to detect all action units at the peak of the emotion in each sequence, contributing mainly the proper labelling.

4 Emotion Validation First label: the subject's impression of each of the 6 basic emotion categories plus contempt Anger Disgust Fear Happy Sadness Surprise Contempt The first attempt to label the emotions was based on the subject's impression. So they asked a subject to make a happy face and it was defined as happy. However...

5 Emotion Validation Unreliable
This approach wasn't good because people would vary from neutral to something different from the target emotion. For example, some people might involuntarily laugh while making the angry expression, invalidating the angry label.

6 Emotion Validation FACS codes with emotion prediction table
Second FACS filtering Visual inspection Therefore they proposed 3 steps to validate the emotion presented in each sequence, after having a FACS coder (someone trained to detect each AU) write down every AU present in each sequence: 1) They use an emotion prediction table to validate the emotions, e.g. if this AU is present then it is classified as Happy. 2) A second FACS filtering defines stricter criteria for each emotion, e.g. a sequence must have BOTH AU X and AU Y present and must not have AU Z; a sequence with AU X and Y passes the first filter, but if it also has AU Z it fails this one. 3) Finally, a visual inspection checks whether the visual change over the whole sequence matches the expression (e.g. people shouldn't change to other expressions in the middle, only from neutral to the target expression). A toy sketch of such a rule-based check follows.
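Below is a minimal, hypothetical Python sketch of that kind of rule-based check. The AU criteria here are simplified placeholders I made up for illustration, not the paper's actual prediction table.

# Toy sketch of the rule-based emotion validation described above.
# The AU criteria below are simplified placeholders, NOT the paper's table.
REQUIRED = {"Happy": {12}, "Surprise": {1, 2, 25}}    # AUs that must be present
FORBIDDEN = {"Happy": {4}, "Surprise": {4, 6}}        # AUs that must be absent

def passes_filter(emotion, coded_aus):
    """Check the FACS-coded AUs at the peak frame against the criteria."""
    required = REQUIRED.get(emotion, set())
    forbidden = FORBIDDEN.get(emotion, set())
    return required <= coded_aus and not (forbidden & coded_aus)

print(passes_filter("Happy", {6, 12}))  # True: AU12 present, no forbidden AU
print(passes_filter("Happy", {4, 12}))  # False: forbidden AU4 is also present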

7 Emotion Validation In the following table we can see the frequency of each AU's presence in the CK+ database. In the table below we can see the criteria used to validate each emotion label.

8 Emotion Validation In this picture we can see examples of emotion label validation, e.g. picture (g), sadness, composed of AU

9 The Automatic System Overview
Now that we have explained how the database was created, we are going to explain the system they used to test the created dataset: a semi-automatic system that classifies action units and emotions. First we have the video as input, then an AAM is applied (explained later), features are retrieved from the AAM, and they are tested with a previously trained SVM classifier. A toy sketch of how these stages connect is shown below.
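The following is only a runnable toy sketch of the stage ordering (video frames -> AAM fit -> features -> SVM). Every component is an illustrative stand-in with hypothetical names; it is not the authors' implementation.

import numpy as np

def track_landmarks(frame):
    """Stand-in for AAM fitting: returns 68 (x, y) landmark points."""
    return np.random.rand(68, 2) * frame.shape[0]

def extract_features(landmarks):
    """Stand-in for feature extraction: flatten the landmarks."""
    return landmarks.reshape(-1)

class TrainedSVM:
    """Stand-in for a previously trained SVM classifier."""
    def predict(self, feature_vectors):
        return ["AU12 present" for _ in feature_vectors]

frames = [np.zeros((480, 640)) for _ in range(10)]    # fake video sequence
peak_landmarks = track_landmarks(frames[-1])          # fit the AAM on the peak frame
print(TrainedSVM().predict([extract_features(peak_landmarks)]))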

10 The Automatic System Active Appearance Models (AAM):
Defined by a 2D triangulated mesh Contains rigid and non-rigid geometric deformations Similarity parameters for simple transforms Keyframes within each sequence manually labelled First, the AAM is based on a specific shape with the characteristics of a human face, divided into rigid areas (e.g. face contours) and non-rigid areas (e.g. mouth). Then there are several similarity parameters to perform simple transforms such as rotations, translations and scales. This method isn't fully automatic, because it is required to manually insert landmarks corresponding to the neutral shape on the first images of a sequence; for the following frames it adapts without requiring manual landmarking. A minimal sketch of such a similarity transform follows.
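A minimal sketch of what those similarity parameters do: a rotation, uniform scale and translation applied to 2D landmark points. The landmark values below are made up for illustration.

import numpy as np

def similarity_transform(points, scale, angle_rad, tx, ty):
    """Apply a 2D similarity transform (rotate, scale, translate) to (N, 2) landmarks."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    rotation = np.array([[c, -s], [s, c]])
    return scale * points @ rotation.T + np.array([tx, ty])

landmarks = np.array([[10.0, 20.0], [15.0, 22.0], [12.0, 30.0]])  # made-up points
print(similarity_transform(landmarks, scale=1.2, angle_rad=0.1, tx=5.0, ty=-3.0))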

11 The Automatic System Feature Extraction
AAM 2D points (68 vertex points) Canonical normalized appearance (CAPP) There are 2 kinds of features extracted: the 2D landmarks of the AAM, i.e. the x and y of each landmark, and the canonical normalized appearance, which is the face image warped to the canonical shape. A small sketch of assembling both feature vectors follows. PS: the examples of CAPP on the right aren't from the paper; the CAPPs used in the system are black and white.
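A sketch of how the two feature vectors could be assembled, assuming the AAM has already produced the 68 landmarks and the warped (canonical) face image; the arrays below are random stand-ins, not real data.

import numpy as np

def shape_features(landmarks):
    """(68, 2) landmark array -> flat 136-dim shape feature vector."""
    return landmarks.reshape(-1)

def appearance_features(canonical_face):
    """Grayscale face already warped to the canonical shape -> flat pixel vector in [0, 1]."""
    return canonical_face.astype(np.float32).reshape(-1) / 255.0

landmarks = np.random.rand(68, 2) * 100                   # stand-in landmark positions
face = (np.random.rand(48, 48) * 255).astype(np.uint8)    # stand-in warped face image
print(shape_features(landmarks).shape, appearance_features(face).shape)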

12 The Automatic System Support Vector Machine
Then at last there is the classification. They use SVM classification, which consists of finding a division of the feature space that separates the answer from everything that is not the answer. E.g. if we use as features one landmark corresponding to the eye, we have a 2-dimensional space; in this space, the SVM computes the maximum-margin hyperplane that separates AU X from not AU X. A minimal sketch with synthetic data follows.
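A minimal sketch of such a binary "AU X vs. not AU X" SVM on synthetic 2-D features. The use of scikit-learn's LinearSVC is my assumption for illustration; the paper does not prescribe this implementation.

import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)),      # synthetic "AU absent" points
               rng.normal(3.0, 1.0, (50, 2))])     # synthetic "AU present" points
y = np.array([0] * 50 + [1] * 50)

clf = LinearSVC(C=1.0).fit(X, y)                   # learns the separating hyperplane
print(clf.coef_, clf.intercept_)                   # hyperplane parameters w, b
print(clf.predict([[3.2, 2.8]]))                   # classify a new feature point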

13 Results Results methods for AUs Results methods for Emotion
SVM classification Supervised learning 1 vs. others Results methods for Emotion Multi-class system (all vs. all) The results for AUs were based on "is it AU X or not" (1 vs. others). The results for emotion were based on "is it emotion X and not any of the other emotions" (all vs. all). A sketch of the two set-ups follows.
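A sketch of the two evaluation set-ups using scikit-learn, with synthetic data: each AU treated as an independent one-vs-rest binary problem, and the emotions as a single multi-class all-vs-all problem. The choice of LinearSVC and the wrapper classes are my assumptions for illustration.

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.multiclass import OneVsRestClassifier, OneVsOneClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 10))                      # synthetic feature vectors

au_labels = rng.integers(0, 2, size=(120, 4))       # 4 AUs, each present/absent
au_detector = OneVsRestClassifier(LinearSVC()).fit(X, au_labels)   # one SVM per AU

emotion_labels = rng.integers(0, 7, size=120)       # 7 emotion classes
emotion_clf = OneVsOneClassifier(LinearSVC()).fit(X, emotion_labels)  # all vs. all
print(au_detector.predict(X[:2]), emotion_clf.predict(X[:2]))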

14 Results AU detection SVM: 1 vs. all ROC curve as result (TP vs. FP)
Linear logistic regression to combine scores In this table we can see, for each action unit, the frequency of that AU and the area under the ROC curve for each kind of feature, landmarks or textures. They also used linear logistic regression to combine both feature streams, improving the result. A sketch of this score fusion follows.
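A sketch of that score fusion step, assuming we already have per-sequence SVM decision scores from the shape stream and the appearance stream (the scores below are synthetic stand-ins): logistic regression combines the two scores, and the area under the ROC curve summarizes each detector.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
y = rng.integers(0, 2, size=200)                     # AU present / absent
shape_scores = y + rng.normal(0, 1.0, size=200)      # noisy stand-in SVM scores (shape)
app_scores = y + rng.normal(0, 0.8, size=200)        # noisy stand-in SVM scores (appearance)

both = np.c_[shape_scores, app_scores]
fused_scores = LogisticRegression().fit(both, y).decision_function(both)

for name, scores in [("shape", shape_scores), ("appearance", app_scores), ("fused", fused_scores)]:
    print(name, round(roc_auc_score(y, scores), 3))  # area under each ROC curve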

15 Results Emotion detection 2D features Texture Both
We have three tables with the results for emotion classification: using the 2D features, the texture features, and finally the combination of the two. As we can notice, given the limited mouth detail captured by the AAM landmarks, the 2D features give poor detection of emotions such as anger, fear, sadness and contempt. Since the texture has more detail around the mouth, the anger and sadness accuracy was increased. After combining both features, the accuracy for emotions like contempt and fear improved, because they were previously mistaken for other emotions. A sketch of how such per-emotion tables are computed follows.
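A sketch of how per-emotion tables like these are typically produced: a row-normalized confusion matrix of true vs. predicted emotion labels. The labels below are synthetic stand-ins, not the paper's results.

import numpy as np
from sklearn.metrics import confusion_matrix

labels = ["Anger", "Disgust", "Fear", "Happy", "Sadness", "Surprise", "Contempt"]
rng = np.random.default_rng(3)
y_true = rng.integers(0, 7, size=100)                          # synthetic ground truth
y_pred = np.where(rng.random(100) < 0.7,                       # ~70% kept correct,
                  y_true, rng.integers(0, 7, size=100))        # rest confused at random

cm = confusion_matrix(y_true, y_pred, labels=range(7)).astype(float)
cm /= cm.sum(axis=1, keepdims=True)                            # each row sums to 1
print(labels)
print(np.round(cm, 2))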

16 Conclusion This paper contributes an improved dataset that may benefit several works on emotion detection and AU detection, plus raw SVM + texture classification results. This paper contributes a well-labelled dataset which should be used in our next emotion studies to perform automatic emotion classification and action unit classification. Their results were based solely on raw texture and 2D landmarks; we should test how well our features will perform with this dataset.

17 Questions? Thanks for watching. Questions?

