Full-body motion analysis for animating expressive, socially-attuned agents

Elisabetta Bevacqua (Paris8), Ginevra Castellano (DIST), Maurizio Mancini (Paris8), Chris Peters (Paris8)

People involved
- DIST: full-body movement and gesture analysis
- Paris8: agent processing and behaviour

Overview
Scenario: an agent that senses, interprets and copies a range of full-body movements from a person in the real world.
A system able to:
- acquire input from a video camera
- process information related to the expressivity of human movement
- generate copying behaviours
Towards a system that recognizes users' emotions from their movement, and an expressive agent that shows empathy towards them.

General framework
Reference: E. Bevacqua, A. Raouzaiou, C. Peters, G. Caridakis, K. Karpouzis, C. Pelachaud, M. Mancini, "Multimodal sensing, interpretation and copying of movements by a virtual agent", PIT.
Encompasses the domains of:
- Sensing
- Interpretation
- Planning
- Generation

The application
From human motion to behaviour generation for expressive agents:
- Full-body motion analysis of a dancer, in the real and the virtual world
- Agent's response to expressive human motion descriptors:
  - quantity of motion
  - contraction/expansion
- Copying behaviour

Part 1. Sensing and analysis
Analysis in the real world:
- Computer vision techniques
- Facial analysis
- Gesture analysis
- Full-body analysis
Ambition: 'switchable' sensing
- Input from the real world and from the virtual environment
- Bridge the gap between ECAs and embedded virtual agents

Full-body analysis
Expressive cues from human full-body movement:
- Real motion
- Virtual motion
Global indicators
EyesWeb Expressive Gesture Processing Library*:
- MotionAnalysis: motion trackers (e.g., Lucas-Kanade), movement expressive cues (QoM, CI, ...)
- TrajectoryProcessing: processing of 2D (physical or abstract) trajectories (e.g., kinematics, directness, ...)
- SpaceAnalysis
* Camurri, A., Mazzarino, B. and Volpe, G., "Analysis of Expressive Gesture: The EyesWeb Expressive Gesture Processing Library", in A. Camurri, G. Volpe (Eds.), Gesture-based Communication in Human-Computer Interaction, LNAI 2915, Springer Verlag, 2004.

SMI and Quantity of Motion
Quantity of Motion (QoM) is an approximation of the amount of detected movement, computed from Silhouette Motion Images (SMIs):

QoM = Area(SMI[t, n]) / Area(Silhouette[t])

where SMI[t, n] accumulates the silhouette motion over the last n frames and Silhouette[t] is the current silhouette.
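A minimal sketch of this computation in Python, assuming binary silhouette masks are already available (e.g., from background subtraction); the function name and the window size n are illustrative, not EyesWeb's actual API:

```python
import numpy as np

def quantity_of_motion(silhouettes, n=4):
    """QoM at the latest frame: Area(SMI[t, n]) / Area(Silhouette[t]).

    `silhouettes` is a list of binary masks, most recent last. The SMI
    is built here as the union of the previous n silhouettes with the
    current one removed, keeping only the 'motion trail' pixels.
    """
    current = silhouettes[-1].astype(bool)
    trail = np.zeros_like(current)
    for past in silhouettes[-1 - n:-1]:   # up to n previous frames
        trail |= past.astype(bool)
    smi = trail & ~current
    area = current.sum()
    return float(smi.sum()) / float(area) if area else 0.0
```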

Contraction Index
A measure, ranging from 0 to 1, of how the dancer's body uses the space surrounding it. It can be calculated using a technique based on the bounding region, i.e., the minimum rectangle surrounding the dancer's body: the algorithm compares the area covered by this rectangle with the area currently covered by the silhouette.
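Under the same assumptions (a binary silhouette mask), the ratio can be sketched as follows; orienting it as silhouette area over bounding-rectangle area makes a contracted posture, which fills its rectangle densely, score high:

```python
import numpy as np

def contraction_index(silhouette):
    """CI in [0, 1]: silhouette area over the area of the minimum
    axis-aligned rectangle enclosing it. Outstretched limbs enlarge
    the rectangle and lower the index."""
    ys, xs = np.nonzero(silhouette)
    if xs.size == 0:
        return 0.0
    rect_area = (xs.max() - xs.min() + 1) * (ys.max() - ys.min() + 1)
    return float(np.count_nonzero(silhouette)) / float(rect_area)
```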

Full-body analysis: examples in the real world and in the virtual environment (I)
Analysis of quantity of motion and contraction index with EyesWeb (G. Castellano, C. Peters, "Full-body analysis of real and virtual human motion for animating expressive agents", HUMAINE Presentation, Athens, 2006).
Real world and virtual environment
Switchable sensing: analysis algorithms capable of
- handling input both from a real-world video stream and from virtual data
- providing similar results in the two cases (see the sketch below)
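One way to realise such switchable sensing, shown here as a design sketch rather than the system's actual architecture, is to hide the origin of the silhouettes behind a common interface and run the same analysis on either source; `quantity_of_motion` and `contraction_index` are the sketches from the previous slides:

```python
from typing import Iterator, Protocol
import numpy as np

class SilhouetteSource(Protocol):
    """Anything yielding binary masks: a background-subtraction
    pipeline on camera video, or a rendering of the virtual scene."""
    def frames(self) -> Iterator[np.ndarray]: ...

def analyse(source: SilhouetteSource, n: int = 4):
    """Stream (QoM, CI) pairs; identical code for both worlds."""
    history = []
    for mask in source.frames():
        history.append(mask)
        yield quantity_of_motion(history, n), contraction_index(mask)
```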

Full-body analysis: examples in the real world and in the virtual environment (II)

Comparison of metrics: contraction index

Comparison of metrics: quantity of movement

Part 2. Interpretation and Behaviour
What do we use the expressive cues for?
- Ideal goal: planning how the agent behaves according to the user's quality of gesture
- In this work: copying the dancer's quality of gesture

Analysis of gesture data
- Full-body analysis of a dancer
- Manual segmentation of the dancer's gestures
- Mean value of the quantity of motion and of the contraction index computed for each gesture (see the sketch below)
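With per-frame QoM and CI values and hand-labelled gesture boundaries, the per-gesture statistics reduce to simple averaging; a sketch, where the cue list and the (start, end) segments are assumed inputs:

```python
def gesture_means(cues, segments):
    """Mean (QoM, CI) per manually segmented gesture.

    `cues` holds one (qom, ci) pair per frame; `segments` holds one
    (start, end) frame-index range per gesture.
    """
    means = []
    for start, end in segments:
        qoms, cis = zip(*cues[start:end])
        means.append((sum(qoms) / len(qoms), sum(cis) / len(cis)))
    return means
```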

CI & QoM copying
- Greta performs a single gesture type (same shape) but copies the dancer's quality of movement
- Greta uses expressivity parameters to modulate the quality of her gestures
- Mapping of expressive cues to expressivity parameters:
  - CI → Spatial extent
  - QoM → Temporal extent

Parameters scaling
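The transcript does not preserve the scaling itself; a plausible form, offered purely as an assumption, is a clamped linear map from the cue's observed range into the expressivity parameter range (Greta's expressivity parameters are commonly taken to lie in [-1, 1]). The observed ranges below are invented for illustration:

```python
def scale(value, src_min, src_max, dst_min=-1.0, dst_max=1.0):
    """Clamped linear rescaling of an expressive cue into an
    expressivity parameter range."""
    t = (value - src_min) / (src_max - src_min)
    t = min(max(t, 0.0), 1.0)   # clamp outliers from noisy frames
    return dst_min + t * (dst_max - dst_min)

# Hypothetical cue ranges observed for the dancer:
spatial_extent  = scale(0.55, 0.20, 0.90)   # CI  -> Spatial extent
temporal_extent = scale(0.12, 0.00, 0.25)   # QoM -> Temporal extent
```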

Copying: an example
Video of the dancer moving and the virtual agent performing gestures that copy the quality of the dancer's motion. DEMO!

Facial expressions (1)
- Show emotional facial expressions depending on the user's quality of movement
- Study the relation between quality of movement and emotion
- Example: linking QoM and CI to threat

Facial expressions (2)
- Example: linking QoM and CI to empathy
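The slides state only that QoM and CI are linked to threat and to empathy, without giving the rule; the thresholds and directions below are therefore invented for illustration, on the intuition that fast, expanded movement reads as threatening while slow, contracted movement invites an empathic response:

```python
def emotion_from_movement(qom, ci):
    """Toy rule mapping movement quality to an emotional colouring.
    All thresholds are hypothetical, not from the original work."""
    if qom > 0.20 and ci < 0.40:    # fast and expanded
        return "threat"
    if qom < 0.05 and ci > 0.60:    # slow and contracted
        return "empathy"
    return "neutral"
```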

Future
- This is preliminary work
- Validation of both the analysis and the synthesis: perceptive tests to study how users associate an emotional label with an expressive behaviour
- Towards a virtual agent able to recognize users' emotions from their movement and to show empathy
- Real-time system with continuous input