Facial Expression Analysis: Theoretical Results
– Low-level and mid-level segmentation
– High-level feature extraction for expression analysis (FACS – MPEG-4 FAPs)

Research Issues
– Which models/features (spatial/temporal)?
– Which emotion representation?
– Generalization over races/individuals
– Environment, context
– Multimodal analysis and synchronization (hand gestures, postures, visemes, pauses)

Emotion Analysis System Overview
– f: values derived from the calculated facial distances
– G: the value of the corresponding FAP

Multiple-Cue Facial Feature Boundary Extraction: eyes and mouth, eyebrows, nose
– Edge-based mask
– Intensity-based mask
– NN-based mask (Y, Cr, Cb and DCT coefficients of the neighborhood)
Each mask is validated independently; a fusion sketch follows.
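The slides do not specify how the validated masks are combined; a minimal sketch, assuming a simple pixel-wise majority vote over the three binary masks:

import numpy as np

def fuse_masks(edge_mask, intensity_mask, nn_mask):
    # Combine three independently validated binary feature masks.
    # The majority-vote rule here is an assumption; the slides only state
    # that each mask (edge, intensity, NN-based) is validated on its own.
    stacked = np.stack([edge_mask, intensity_mask, nn_mask]).astype(int)
    return stacked.sum(axis=0) >= 2  # keep a pixel if at least 2 of 3 cues agree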

Multiple-Cue Feature Extraction – An Example

Final Mask Validation through Anthropometry
– Facial distances measured by the US Army over a 30-year period, with separate male/female statistics
– The measured distances are normalized by division with Distance 7, the distance between the inner corners of the left and right eyes, two points a human cannot move (see the sketch below)
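A minimal sketch of this validation, assuming hypothetical range tables keyed by distance index (the actual US Army anthropometric values are not reproduced here):

# Hypothetical [min, max] bounds for normalized distances d_i / d_7;
# the real bounds come from the US Army anthropometric tables.
VALID_RANGES = {
    1: (0.8, 1.4),  # illustrative values only
    2: (0.9, 1.6),
}

def validate_mask(distances):
    # `distances` maps a distance index to its measured value in pixels.
    # Distance 7 (between the inner eye corners) is the normalizer, chosen
    # because neither endpoint can be moved by facial muscles.
    d7 = distances[7]
    for idx, (lo, hi) in VALID_RANGES.items():
        if idx in distances and not lo <= distances[idx] / d7 <= hi:
            return False  # reject the mask: distance outside the valid range
    return True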

Detected Feature Points (FPs)

FAPs Estimation
– There is no clear quantitative definition of FAPs
– FAPs can nevertheless be modeled through the movement of FDP feature points, using distances s(x, y)
– e.g. close_t_r_eyelid (F20) / close_b_r_eyelid (F22) → D13 = s(3.2, 3.4) → f13 = D13 - D13,NEUTRAL
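A minimal sketch of this computation, assuming feature points are stored as 2-D coordinates keyed by their FDP numbers (e.g. "3.2" and "3.4" for the right-eyelid points) and that a neutral-expression frame is available:

import math

def s(points, a, b):
    # Euclidean distance s(a, b) between two FDP feature points.
    (xa, ya), (xb, yb) = points[a], points[b]
    return math.hypot(xa - xb, ya - yb)

def estimate_f13(points, neutral_points):
    # D13 = s(3.2, 3.4), the right-eyelid opening, corresponding to the
    # close_t_r_eyelid (F20) / close_b_r_eyelid (F22) FAP pair.
    d13 = s(points, "3.2", "3.4")
    d13_neutral = s(neutral_points, "3.2", "3.4")
    return d13 - d13_neutral  # f13: deviation from the neutral expression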

Sample Profiles of Anger
A1: F4 ∈ [22, 124], F31 ∈ [-131, -25], F32 ∈ [-136, -34], F33 ∈ [-189, -109], F34 ∈ [-183, -105], F35 ∈ [-101, -31], F36 ∈ [-108, -32], F37 ∈ [29, 85], F38 ∈ [27, 89]
A2: F19 ∈ [-330, -200], F20 ∈ [-335, -205], F21 ∈ [200, 330], F22 ∈ [205, 335], F31 ∈ [-200, -80], F32 ∈ [-194, -74], F33 ∈ [-190, -70], F34 ∈ [-190, -70]
A3: F19 ∈ [-330, -200], F20 ∈ [-335, -205], F21 ∈ [200, 330], F22 ∈ [205, 335], F31 ∈ [-200, -80], F32 ∈ [-194, -74], F33 ∈ [70, 190], F34 ∈ [70, 190]
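A minimal sketch of checking an estimated FAP vector against such a profile; the ranges below transcribe profile A2, while the rule that every listed FAP must fall inside its range is an assumption:

# Anger profile A2 from the slide: allowed [min, max] range per FAP index.
ANGER_A2 = {
    19: (-330, -200), 20: (-335, -205), 21: (200, 330), 22: (205, 335),
    31: (-200, -80), 32: (-194, -74), 33: (-190, -70), 34: (-190, -70),
}

def matches_profile(faps, profile):
    # `faps` maps FAP index -> estimated value (e.g. f13 above).
    # Assumed rule: the profile matches only if every listed FAP lies
    # inside its range; a missing FAP counts as a mismatch.
    return all(i in faps and lo <= faps[i] <= hi
               for i, (lo, hi) in profile.items())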

Problems: Low-Level Segmentation
– Environmental changes
– Illumination
– Pose
– Capturing-device characteristics
– Noise

Problems: Low-Level to High-Level Feature (FAP) Generation
– Accuracy of estimation
– Validation of results
  – Anthropometric/psychological constraints
  – 3D information, analysis by synthesis
– Adaptation to context

Problems: Statistical/Rule-Based Recognition of High-Level Features
– Definition of general rules
– Adaptation of rules to context/individuals
– Multimodal recognition and dynamic analysis: speech/face/gesture/biosignals, temporal dynamics
  – Relation between modalities (significance, attention, adaptation)
  – Neurofuzzy approaches
– Portability of systems to avatars/applications (ontologies, languages)