Expressive Gestures for NAO
NAO TechDay, 13/06/2012, Paris
Le Quoc Anh, Catherine Pelachaud
CNRS, LTCI, Telecom-ParisTech, France

Objectives
- Generate communicative gestures for the Nao robot
- Integrate them within an existing platform for virtual agents
- Describe nonverbal behaviors symbolically
- Synchronize gestures and speech
- Model the expressivity of gestures

GVLEX project (Gesture & Voice for Expressive Reading): the robot tells a story expressively.
Partners: LIMSI (linguistic aspects), Aldebaran (robotics), Acapela (speech synthesis), Telecom ParisTech (expressive gestures)

State of the art
Several recent initiatives:
- Salem and Kopp (2012): ASIMO robot, the virtual framework MAX, gesture description with MURML
- Holroyd and Rich (2011): Melvin robot, motion scripts in BML, simple gestures, feedback to synchronize gestures and speech
- Ng-Thow-Hing et al. (2010): ASIMO robot, gesture selection, synchronization between gestures and speech
- Nozawa et al. (2006): motion scripts in MPML-HP, HOAP-1 robot
Our system: focuses on expressivity and on the synchronization of gestures with speech, using a common platform for both Greta and Nao

Steps
1. Build a library of gestures from a corpus of storytelling videos: the gesture shapes need not be identical across human, virtual agent, and robot, but they have to convey the same meaning.
2. Use the GRETA system to generate gestures for Nao, following the SAIBA framework:
- Two representation languages: FML (Function Markup Language) and BML (Behavior Markup Language)
- Three separate modules: plan communicative intents, select and plan gestures, and realize gestures

[Diagram: Text → Intent Planning → FML → Behavior Planning → BML → Behavior Realizer (one realizer for Greta, one for Nao)]
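The three SAIBA modules above can be sketched as a small pipeline. This is a minimal illustration with simplified data in place of real FML/BML documents; the function names and dictionary fields are invented for the sketch and are not the actual GRETA API.

```python
# Illustrative SAIBA-style pipeline: intents (FML-like) are mapped to
# behaviors (BML-like), which are expanded into timed keyframes.
# All names and fields here are hypothetical, not the GRETA API.

def plan_intents(text):
    # Intent Planning: annotate the text with communicative intents.
    return [{"intent": "emphasis", "word": "faim", "time": 1.2}]

def plan_behaviors(fml, lexicon):
    # Behavior Planning: select a gesture from the lexicon for each intent.
    return [{"gesture": lexicon[i["intent"]], "stroke": i["time"]} for i in fml]

def realize(bml):
    # Behavior Realization: expand each gesture into timed keyframes,
    # aligning the stroke phase with the planned time.
    keyframes = []
    for b in bml:
        keyframes.append({"time": b["stroke"] - 0.4, "phase": "preparation"})
        keyframes.append({"time": b["stroke"], "phase": "stroke"})
        keyframes.append({"time": b["stroke"] + 0.4, "phase": "retraction"})
    return keyframes

lexicon = {"emphasis": "beat"}
frames = realize(plan_behaviors(plan_intents("J'ai très faim!"), lexicon))
```

Keeping the three stages separate is what lets the same FML/BML front end drive two different realizers (Greta and Nao).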

Global diagram
[Diagram: FML → BML → keyframes, using a gesture lexicon]
- Gesture selection
- Planning of gesture durations
- Synchronization with speech
- Modification of gesture expressivity

Gesture Animation
Planning and synchronization with speech:
- The stroke phase coincides with or precedes the emphasized words of the speech (McNeill, 1992)
- Gesture stroke timing is specified by synch points
Expressivity of gestures: the same gesture prototype yields different animations, controlled by parameters:
- Spatial Extent (SPC): amplitude of movement
- Temporal Extent (TMP): speed of movement
- Power (PWR): acceleration of movement
- Repetition (REP): number of stroke repetitions
- Fluidity (FLD): smoothness and continuity
- Stiffness (STF): tension/flexibility
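To make the parameter idea concrete, here is a toy sketch (not the GRETA implementation) of how two of these parameters could reshape one gesture prototype: SPC scales keyframe amplitudes around a rest pose, and TMP scales the timing.

```python
# Hypothetical illustration of expressivity parameters: SPC widens or
# narrows the movement, TMP speeds it up or slows it down. Keyframes
# are (time_in_seconds, joint_angle_in_radians) pairs for one joint.

def apply_expressivity(keyframes, spc=1.0, tmp=1.0, rest=0.0):
    out = []
    for t, angle in keyframes:
        scaled_angle = rest + spc * (angle - rest)  # amplitude around rest pose
        scaled_time = t / tmp                       # tmp > 1 means faster
        out.append((scaled_time, scaled_angle))
    return out

# One prototype, two very different animations:
proto = [(0.0, 0.0), (0.5, 0.8), (1.0, 0.0)]
wide_fast = apply_expressivity(proto, spc=1.5, tmp=2.0)
```

With spc=1.5 and tmp=2.0, the stroke apex rises from 0.8 to 1.2 rad and occurs at 0.25 s instead of 0.5 s, while the gesture shape itself is unchanged.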

Example
keyframe[1] keyframe[2] keyframe[3]
<speech id="s1" start="0.0">
  \vce=speaker=Antoine\ \spd=180\ Et le troisième dit tristement :
  \vce=speaker=AntoineSad\ \spd=90\ \pau=200\ J'ai très faim !
</speech>
<gesture id="beat_hungry" start="s1:tm1" end="start+1.5" stroke="0.5">
  YCC XCenter Zmiddle OPENHAND INWARD
  YLowerEP XCenter ZNear OPEN INWARD
</gesture>
(The speech reads: "And the third one says sadly: I'm very hungry!")

Compilation
BML Realizer → angleInterpolation(joints, values, times)
- Timed key positions are sent to the robot using the available APIs.
- The animation is obtained by interpolating between joint values with the robot's built-in proprietary procedures.
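Conceptually, the robot receives per-joint lists of target angles with timestamps and fills in the motion between them. A plain linear version of that interpolation can be sketched as follows (Nao's actual built-in procedures are proprietary and typically produce smoother trajectories):

```python
# Sketch of keyframe interpolation, the operation the robot performs
# internally after receiving timed key positions. Linear interpolation
# is used here for illustration; the real procedures are proprietary.

def interpolate(keyframes, t):
    """Return the joint angle at time t from (time, angle) keyframes."""
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    if t >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (t0, a0), (t1, a1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            u = (t - t0) / (t1 - t0)  # fraction of the segment elapsed
            return a0 + u * (a1 - a0)

# A gesture that raises a joint to 1.0 rad and back over two seconds:
frames = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]
```

On the physical robot these keyframes would instead be passed to NAOqi's ALMotion `angleInterpolation` call, which accepts joint names, angle lists, and time lists.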

Demo
« Trois petits morceaux de nuit » ("Three little pieces of night")

Conclusion
- A gesture model has been designed and implemented for Nao, taking into account the physical constraints of the robot
- Common platform for both the virtual agent and the robot
- Expressivity model
Future work:
- Create gestures with different emotional colours and personal styles
- Validate the model through perceptual evaluations

Acknowledgment
This work has been funded by the ANR GVLEX project and supported by members of the TSI laboratory, Telecom-ParisTech.