DynEmo: A Database of Dynamic and Spontaneous Emotional Facial Expressions

Tcherkassof Anna*, Dupré Damien*, Dubois Michel*, Mandran Nadine, Meillon Brigitte, Boussard Gwenn, Adam Jean Michel, Caplier Alice, Guérin-Dugué Anne, Benoît Anne-Marie & Mermillod Martial

* LIP, University of Grenoble 2; LIG, University of Grenoble 1; GIPSA-LAB, University of Grenoble 1; IEP, University of Grenoble 2; LAPSCO, University of Clermont-Ferrand

Research on facial expression highlights several relevant expressive characteristics. To study them in more detail, researchers need comprehensive facial expression databases. DynEmo is a database of dynamic and spontaneous emotional facial expression videos, that is, naturalistic recordings of emotional facial displays in realistic settings. It offers dynamic and spontaneous expressive material combined with emotional self-report data (from the expressers) and on-line assessment data (from observers). The faces of ordinary participants were videotaped while they carried out an emotion-inducing task, and the recordings were later watched by observers who assessed the emotions displayed. The database thus consists of 358 videos (1 to 15 min long), each associated with two types of data:
1. the emotional state of the expresser, self-reported once the inducing task was completed;
2. the timeline of observers' assessments of the emotions displayed throughout the recording.

Method

Emotional Induction
Encoders (358 ordinary participants: 182 women and 176 men, aged 25 to 65, M = 48, SD = 9.21) were recruited for a study ostensibly devoted to a visual ergonomics task (cover story). They were covertly videotaped by 2 hidden cameras (Fig. 1) while carrying out one of ten emotion-inducing tasks: Annoyance / Astonishment / Boredom / Cheerfulness / Disgust / Fright / Curiosity / Moved / Pride / Shame.

Figure 1. Whole face (camera 1), overview (camera 2), and participant's screen (emotion-induction task).

Once the task was over, the encoders filled out a questionnaire of 51 six-point scales regarding their emotional state:
- 35 action readiness scales. Ex.: The visual task you just carried out stirs up a tendency to approach, to make contact.
- 3 dimensional scales. Ex.: The state you feel after carrying out this visual task is: Unpleasant …… Pleasant.
- 1 comfort-state scale. Ex.: During this visual task, you were: Ill at ease …… At ease.
- 12 emotional label scales. Ex.: How much did this visual task make you feel disgusted? How much did this visual task make you feel annoyed?

Dynamic Assessment
Decoders (171 students) assessed the videos via Oudjat, a data-processing interface (Tcherkassof et al., 2007). Oudjat allows decoders to assess on-line the emotions they perceive in the encoder's face (Fig. 2). First, the decoder marks the video on-line, by pressing a key, each time he or she perceives the beginning and the end of an emotion displayed by the face (Fig. 2.1). Then, the decoder rates each previously marked emotional sequence by selecting the term that best describes it among 12 proposed emotional labels (Annoyance, Astonishment, Boredom, Cheerfulness, Curiosity, Disappointment, Disgust, Fright, Humiliation, Moved, Pride, Shame) or a "No emotion" option (Fig. 2.2). Each video has been assessed by about 20 decoders; a code sketch of the resulting judgment data follows the Results section.

Figure 2. Oudjat interface for real-time emotional assessment.
Figure 2.1. Tagging device: during the expression's unfolding, the judge indicates the emotional intervals.
Figure 2.2. Rating device: the judge selects the emotional label best describing each emotional interval.

Results

Emotional Self-Report
The questionnaire provides an indicator of the encoder's emotional state during the task. (At present, only the 12 emotional label scales have been analyzed.)

Figure 3. Encoder's self-report on the emotional labels questionnaire while carrying out a disgust-inducing task (video 26_1).

Dynamic Assessment
To characterize the decoders' assessments from a dynamic point of view, we computed, every 1/10 second during the unfolding of each video, the between-decoder agreement for each label. This yields a timeline (cf. Fig. 4) on which the emotional assessments of the decoders are superimposed as mass curves, 1/10 sec. after 1/10 sec., along the whole video (a computational sketch is given below).

At one glance, one can visually identify when the target emotion is displayed. In the present video of a woman confronted with a disgusting stimulus (video 26_1), 70% of decoders considered that she was expressing disgust (in blue), essentially from sec. 34 to sec. 54. During the same interval, about 30% of decoders rated her face as expressing fright instead (cf. yellow mass curve). From sec. 0 to sec. 33, several labels compete to describe what her face expresses.

Figure 4. Video of an encoder carrying out the disgust-induction task, with its corresponding emotional expressive timeline underneath.
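To make the judgment data concrete, here is a minimal Python sketch of one decoder's two-step Oudjat session as described in the Method: intervals are first marked, then each is labelled. The type and function names (TaggedInterval, rate) are illustrative assumptions, not part of Oudjat or of the DynEmo distribution.

```python
from dataclasses import dataclass
from typing import Optional

# The 12 emotional terms proposed in the rating step; the Oudjat screen
# also offers a "No emotion" option.
TERMS = ("Annoyance", "Astonishment", "Boredom", "Cheerfulness", "Curiosity",
         "Disappointment", "Disgust", "Fright", "Humiliation", "Moved",
         "Pride", "Shame", "No emotion")

@dataclass
class TaggedInterval:
    """One emotional sequence: marked in step 1, labelled in step 2."""
    start: float                 # key press at perceived onset (seconds)
    end: float                   # key press at perceived offset (seconds)
    label: Optional[str] = None  # filled in during the rating step

def rate(intervals, choices):
    """Step 2: attach one of the proposed terms to each marked sequence."""
    for interval, choice in zip(intervals, choices):
        if choice not in TERMS:
            raise ValueError(f"unknown label: {choice}")
        interval.label = choice

# Hypothetical session: a decoder marked two sequences while watching a
# video, then labelled them afterwards.
marks = [TaggedInterval(0.0, 33.0), TaggedInterval(34.0, 54.0)]
rate(marks, ["Curiosity", "Disgust"])
```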
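From such per-decoder intervals, pooled over all judges of one video, the per-label agreement timeline (the mass curves of Fig. 4) can be approximated as below. This is a hypothetical re-implementation of the idea at the 1/10 sec. resolution described above, not the actual DynEmo processing code; the function name agreement_curves and the tuple layout are assumptions.

```python
import numpy as np

TICK = 0.1  # agreement is recomputed every 1/10 second, as in the poster

def agreement_curves(annotations, n_decoders, duration, labels):
    """For each label, the fraction of decoders asserting it at each tick.

    annotations: iterable of (decoder_id, start_s, end_s, label) tuples,
    one per labelled interval, pooled over all decoders of one video.
    """
    n_ticks = int(round(duration / TICK)) + 1
    curves = {lab: np.zeros(n_ticks) for lab in labels}
    for _decoder, start, end, label in annotations:
        a = int(round(start / TICK))
        b = int(round(end / TICK))
        curves[label][a:b + 1] += 1  # this decoder asserts `label` here
    # Normalise counts into proportions of decoders: the mass curves.
    return {lab: counts / n_decoders for lab, counts in curves.items()}

# Toy example: 2 of 3 decoders agree on "Disgust" around sec. 34-54.
demo = [(1, 34.0, 54.0, "Disgust"),
        (2, 35.0, 52.0, "Disgust"),
        (3, 40.0, 50.0, "Fright")]
curves = agreement_curves(demo, n_decoders=3, duration=60.0,
                          labels=["Disgust", "Fright"])
print(curves["Disgust"][int(round(45 / TICK))])  # ~0.667 at t = 45 s
```

Plotting each curve against time reproduces the kind of timeline shown in Fig. 4, where high, sustained values for a single label mark the episodes on which judges converge.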
Discussion

To investigate facial expressions of emotion, DynEmo offers a comprehensive database of videos of dynamic and spontaneous faces. Each video is associated with the expresser's emotional state and with the on-line ratings of decoders who assessed all the emotions displayed during the unfolding of the expression (the emotional expressive timeline). The spontaneous and dynamic expressions are thus characterized very precisely in real time. The emotional expressive timelines (Fig. 4) instantly signal, for each video, when the target emotion is displayed by the face. They also signal the periods where observers decode different emotions, that is, when only a weak consensus exists between judges regarding what the face displays (ambiguous expression). The timelines demonstrate that facial expressions of emotion are rarely prototypical and that idiosyncratic characteristics of expressers are often salient. DynEmo therefore provides expressive material close to natural social interactions (human-to-human communication).

To date, DynEmo is the most thorough database available. All 358 videos, together with the expressers' emotional self-reports, are accessible. Of these, 33 videos have been judged with real-time emotional assessments, and their emotional expressive timelines are also available (the remaining videos are currently undergoing the judgment process). Free access to DynEmo (videos of dynamic and spontaneous emotional facial expressions and associated emotional data):

References

Tcherkassof, A., Bollon, T., Dubois, M., Pansu, P., & Adam, J. M. (2007). Facial expressions of emotions: A methodological contribution to the study of spontaneous and dynamic emotional faces. European Journal of Social Psychology, 37.

ISRE 2009, August 6-8, Leuven, Belgium