Human Capabilities on Video-based Facial Expression Recognition
Matthias Wimmer, Ursula Zucker and Bernd Radig
Chair for Image Understanding, Computer Science, Technische Universität München
{ wimmerm, zucker, radig

Motivation
 Facial Expression Recognition
  goal: human-like man-machine communication
  six universal facial expressions [Ekman]: anger, disgust, fear, happiness, sadness, surprise
  minimal muscle activity -> reliable recognition is difficult
  recognition rate of state-of-the-art approaches: ~70%
 Question
  How reliably do humans identify facial expressions? -> a survey to determine human capabilities

The Facial Expression Database
Cohn-Kanade AU-Coded Facial Expression Database
 488 image sequences (containing 4 to 66 images each)
 each showing one of the six universal facial expressions
 no natural facial expressions (acted, hence only simulated ground truth)
 no context information

Description of Our Survey
 Execution of the Survey
  participants are shown randomly selected sequences
  250 participants
  5413 annotations -> approx. 11 per sequence
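A minimal sketch, in Python, of the random selection described above; the function name and the per-participant count k are hypothetical, since the slide only states that participants saw randomly selected sequences.

    import random

    def select_sequences(all_sequence_ids, k):
        # Draw k distinct sequences at random for one participant.
        # Both k and the sampling scheme are assumptions; the slide
        # only says sequences were randomly selected.
        return random.sample(all_sequence_ids, k)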

Evaluation
 Evaluation of the Survey
  no ground truth -> comparison of the annotations to one another
  annotation rate for each sequence and each facial expression
  relative agreement for an expression
  confusion between facial expressions
 Comparison to algorithms
  recognition rate
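Since the survey has no ground truth, all of the following measures are computed from the annotations themselves. A minimal sketch of a data layout the later metrics could build on; the names by_sequence and EXPRESSIONS, and the (sequence_id, label) pair format, are assumptions, not the authors' code.

    from collections import defaultdict

    # The six universal expressions from the Motivation slide.
    EXPRESSIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

    def by_sequence(annotations):
        # Group raw annotations per sequence. `annotations` is an assumed
        # layout: an iterable of (sequence_id, label) pairs, e.g.
        # (12, "happiness"). Returns {sequence_id: [label, label, ...]}.
        groups = defaultdict(list)
        for seq_id, label in annotations:
            groups[seq_id].append(label)
        return groups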

Annotation Rate for Each Sequence
 Explanation:
  488 rows, 1 row = 1 sequence
  darker regions denote a higher annotation rate
  rows sorted by similar annotation
 Result:
  happiness: best annotation rates
  surprise and fear: get confused often
  fear: difficult to tell apart
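A sketch of how the annotation-rate matrix behind this plot could be computed, reusing by_sequence and EXPRESSIONS from the sketch above; the dictionary-of-rows layout is an assumption.

    def annotation_rates(annotations):
        # One row per sequence (488 for Cohn-Kanade), one column per
        # expression; each entry is the fraction of that sequence's
        # annotators who chose that expression ("darker" on the slide).
        rows = {}
        for seq_id, labels in by_sequence(annotations).items():
            n = len(labels)
            rows[seq_id] = [labels.count(e) / n for e in EXPRESSIONS]
        return rows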

Relative Agreement
 Explanation:
  example: annotating the sequences as happiness: ~350 sequences were annotated as happiness by nobody, ~50 sequences by everybody
  well-recognized facial expressions have peaks at "0" and at "1"
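A sketch of the relative-agreement histogram for a single expression, again reusing by_sequence; the bin count is an arbitrary choice.

    def agreement_histogram(annotations, expression, bins=10):
        # For one expression, count how many sequences fall into each
        # agreement bin. counts[0] holds the "annotated by nobody" end,
        # counts[bins] the "annotated by everybody" end; peaks at both
        # ends indicate a well-recognized expression, as on the slide.
        counts = [0] * (bins + 1)
        for labels in by_sequence(annotations).values():
            rate = labels.count(expression) / len(labels)
            counts[round(rate * bins)] += 1
        return counts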

Confusion Between Facial Expressions
 fear and surprise: high confusion
 happiness and disgust: low confusion
(figure: confusion rate between the facial expressions)
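One possible way to quantify confusion without ground truth is to measure how strongly two labels overlap on the same sequences; the slide does not give the exact formula, so the overlap measure below is an assumption.

    from itertools import combinations

    def pairwise_confusion(annotations):
        # Average, over all sequences, of the share of annotators on
        # which two labels overlap. min(count_a, count_b) / n is one
        # possible overlap measure, not necessarily the survey's.
        groups = by_sequence(annotations)
        totals = {pair: 0.0 for pair in combinations(EXPRESSIONS, 2)}
        for labels in groups.values():
            n = len(labels)
            for a, b in totals:
                totals[(a, b)] += min(labels.count(a), labels.count(b)) / n
        return {pair: total / len(groups) for pair, total in totals.items()}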

Comparison: Humans vs. Algorithms
 ground truth: provided by Michel et al.
 Results:
  Michel et al.: worse than humans at recognizing anger
  Schweiger et al.: worse than humans at recognizing disgust, fear, and happiness, and on average
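A sketch of the recognition-rate comparison: scoring the annotators' majority vote per sequence against an external ground truth such as the labels of Michel et al.; the majority rule and the {sequence_id: label} layout are assumptions.

    def recognition_rate(annotations, ground_truth):
        # Fraction of sequences whose majority human label matches the
        # externally provided ground-truth label.
        groups = by_sequence(annotations)
        hits = 0
        for seq_id, true_label in ground_truth.items():
            labels = groups.get(seq_id, [])
            if labels and max(set(labels), key=labels.count) == true_label:
                hits += 1
        return hits / len(ground_truth)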

Conclusion
 The survey applies the same assumptions as the algorithms:
  only visual information is considered
  no context information
  no natural facial expressions
 Summary of our results:
  poor recognition rate of humans, worse than expected
  some facial expressions are easily confused
 Conclusion & Outlook:
  integrating further sources of information is highly recommended, e.g. audio/language, context, ...

Thank you