Animation improvements and face creation tool for ECAs


University of Paris 8
Nicolas Ech Chafai, Benjamin Dariouch, Maurizio Mancini, Catherine Pelachaud

Overview
Aiming at improving the agent's facial animation quality:
- we are studying motion-captured data
- we apply the results to our ECA
To allow the creation of individualized ECAs:
- we developed a tool for MPEG-4 face creation
- we propose some refinements to the MPEG-4 specification

MOCAP data analysis
Three main goals:
- displacement of FAPs during the display of an emotion
- synchronization between different FAPs
- FAP values during the transition between consecutive emotions

Collected MOCAP data
- 2 actors
- 33 markers, 21 of them corresponding to MPEG-4 FAP points
- 78 sequences:
  - basic movements: raising eyebrows, smiling, ...
  - basic emotions: anger, happiness, surprise, ...

MOCAP problems
We discovered that obtaining usable data is not straightforward:
- markers of the right size and shape have to be used
- cameras have to be placed properly
- the data has to be translated into the needed reference system
- the data has to be filtered to remove noise
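The last two steps lend themselves to a short sketch. This is a minimal illustration, assuming the capture is loaded as a (frames, markers, 3) NumPy array and that marker 0 is a rigid head reference; the 6 Hz cutoff and 120 fps rate are illustrative assumptions, not values from our pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def to_head_frame(markers, ref_idx=0):
    """Express all trajectories relative to a rigid head reference marker,
    removing global head translation."""
    return markers - markers[:, ref_idx:ref_idx + 1, :]

def lowpass(markers, fps, cutoff_hz=6.0):
    """Zero-phase Butterworth low-pass filter to suppress capture noise."""
    b, a = butter(4, cutoff_hz / (fps / 2.0), btype="low")
    return filtfilt(b, a, markers, axis=0)  # filter along the time axis

# usage on a (frames, 33, 3) capture recorded at 120 fps:
# clean = lowpass(to_head_frame(raw), fps=120.0)
```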

Data example
- smile

Video examples
- frown clip (file: clips/coline 36 eyebrows)
- fear clip (file: clips/coline 56 fear)

Facial animation model
- FAP displacement during basic emotions
- our model was based on a simple onset-apex-offset scheme
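For concreteness, a minimal sketch of such an envelope; the phase durations and peak value below are illustrative, not taken from the captured data.

```python
def onset_apex_offset(t, onset=0.2, apex=0.6, offset=0.2, peak=1.0):
    """Piecewise-linear FAP envelope: rise to peak, hold, fall back to zero."""
    if t < onset:                         # onset: linear rise
        return peak * t / onset
    if t < onset + apex:                  # apex: hold
        return peak
    if t < onset + apex + offset:         # offset: linear fall
        return peak * (1.0 - (t - onset - apex) / offset)
    return 0.0
```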

Observed data
- on real data we observed other, more general behaviors than this simple model

Results
We started to introduce the ADSR model:
- given a sequence of (phase, intensity, duration) triples, where phase is one of {Attack, Decay, Sustain, Release}, the FAP curve is built using keyframe Hermite interpolation
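A minimal sketch of this keyframing under stated assumptions: each phase contributes one keyframe at its target intensity, and tangents are chosen Catmull-Rom style (the tangent rule is our assumption, not part of the slide).

```python
def adsr_keyframes(phases):
    """Turn (phase, intensity, duration) triples into (time, value) keyframes."""
    keys, t = [(0.0, 0.0)], 0.0
    for _phase, intensity, duration in phases:
        t += duration
        keys.append((t, intensity))
    return keys

def hermite(keys, t):
    """Evaluate a cubic Hermite spline with Catmull-Rom tangents at time t."""
    for i in range(len(keys) - 1):
        (t0, p0), (t1, p1) = keys[i], keys[i + 1]
        if t0 <= t <= t1:
            dt = t1 - t0
            # one-sided tangents at the curve ends, central otherwise
            m0 = (p1 - keys[i - 1][1]) / (t1 - keys[i - 1][0]) if i > 0 else (p1 - p0) / dt
            m1 = (keys[i + 2][1] - p0) / (keys[i + 2][0] - t0) if i + 2 < len(keys) else (p1 - p0) / dt
            s = (t - t0) / dt
            h00, h10 = 2*s**3 - 3*s**2 + 1, s**3 - 2*s**2 + s
            h01, h11 = -2*s**3 + 3*s**2, s**3 - s**2
            return h00*p0 + h10*dt*m0 + h01*p1 + h11*dt*m1
    return keys[-1][1]

# usage: sample one FAP curve at 25 fps over an Attack-Decay-Sustain-Release sequence
keys = adsr_keyframes([("Attack", 1.0, 0.2), ("Decay", 0.8, 0.1),
                       ("Sustain", 0.8, 0.5), ("Release", 0.0, 0.3)])
samples = [hermite(keys, f / 25.0) for f in range(int(keys[-1][0] * 25) + 1)]
```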

ADSR vs real data

ADSR example
- clip

MPEG-4 face tool

- imports models from Poser
- allows the selection of the areas influenced by FDPs

Tool's features
- automatic selection and symmetrization of vertex regions, as sketched below
- automatic association between region names and the available FDPs
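A minimal sketch of how the symmetrization step can work, assuming the mesh is modeled roughly symmetrically about the x = 0 plane (the usual convention for MPEG-4 faces); the tolerance is an illustrative choice, not a value from the tool.

```python
import numpy as np

def mirror_selection(vertices, selected, tol=1e-3):
    """For each selected vertex, find its closest counterpart across the
    x = 0 plane and add it to the selection."""
    mirrored = vertices * np.array([-1.0, 1.0, 1.0])  # flip x coordinate
    out = set(selected)
    for i in selected:
        d = np.linalg.norm(vertices - mirrored[i], axis=1)
        j = int(np.argmin(d))
        if d[j] < tol:            # accept only near-exact mirror matches
            out.add(j)
    return sorted(out)
```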

Example
- exports into a data file (containing geometry + regions) readable by Greta's player

Example
- flat.avi (note: female speech)

Added new FAPUs
- after adding new faces, some refinements to the MPEG-4 player will be needed
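For context, FAPUs (Facial Animation Parameter Units) are what make FAP values portable across faces: each FAPU is a key facial distance divided by 1024, so the same FAP value yields proportionally scaled motion on differently sized faces. A minimal sketch of the normalization; the coordinates below are illustrative placeholders, not values from our tool.

```python
def fapu(distance):
    """MPEG-4 defines each FAPU as a key facial distance divided by 1024."""
    return distance / 1024.0

# e.g. the mouth-nose separation (MNS) measured on a newly created face
nose_bottom_y, mouth_top_y = 0.42, 0.35      # placeholder model coordinates
mns = fapu(abs(nose_bottom_y - mouth_top_y))

fap_value = 120                              # example FAP amplitude, in MNS units
displacement = fap_value * mns               # absolute displacement in model units
```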

Example
- clip without new FAPUs
- clip with new FAPUs

Conclusions
- more data has to be captured in a proper way
- focus more on the interaction between different FAPs and on the transition between sequential expressions
- the ADSR model has to be fully implemented
- for documentation, papers, and Greta applications available to the other HUMAINE members, please contact us