GESTURE EXPRESSIVITY
Norman I. Badler
Center for Human Modeling and Simulation, University of Pennsylvania, Philadelphia, PA 19104-6389 USA
CASA 2003 Tutorial, May 7, 2003

The “Realism Ceiling” for Human Models
[Figure: curves of visual and behavioral realism versus time to create, spanning special effects, inanimate objects, life-like virtual humans, and real-time agents.]

What Approaches Push the Curve Toward More Realism?
[Figure: the same realism curves, annotated with motion capture and parameterization (face, gait, gesture) as routes from inanimate objects toward life-like virtual humans.]

Why is Realism Still Hard?
[Figure: the same realism curves, annotated with the obstacles: motion capture is difficult to generalize, and for parameterization, what are the “right” parameters?]

Goal: Real-Time Interaction between Real and Virtual People
– Requires parameterized behaviors for flexible expression of the virtual person’s internal state.
– Requires the virtual person to sense the state of live participants.
– What information is salient to both?

Outline
– Parameterized Action Representation (PAR)
– EMOTE, eye movements, FacEMOTE
– Character consistency
– Recognizing movement qualities
– The LiveActor environment

Parameterized Behaviors for Embodied Agents
– Use existing human behavior models as a guide.
– Want to drive the embodied agent from internal state: goals, emotions, culture, motivation…
– External behaviors help the observer perceive these internal [hidden] agent states.

Human Movement Categories
– Voluntary (tasks, reach, look-at)
– Dynamic (run, jump, fall)
– Involuntary (breathe, balance, blink)
– Subconscious
  – Low-level motor functions (fingers, legs, lips)
  – Communicative acts (facial expressions, limb gestures, body posture, eye movement)

Parameterized Action Representation (PAR)
– Derived from BOTH Natural Language representation and animation requirements.
– Lets people instruct virtual agents.
– May be associated with a process-based agent cognitive model.
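As a rough illustration of what such a representation might look like in code, here is a minimal sketch; the field names are hypothetical, not the published PAR schema:

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class PAR:
    """Hypothetical sketch of one Parameterized Action Representation entry."""
    action: str                    # verb, e.g. "rotate", "disconnect"
    agent: str                     # the performing agent
    objects: List[str]             # physical objects that participate
    applicability: Optional[Callable[[], bool]] = None   # can the action run now?
    preparatory: List["PAR"] = field(default_factory=list)  # actions that enable it
    manner: dict = field(default_factory=dict)  # e.g. EMOTE Effort/Shape settings

# An "Actionary" (the stored action dictionary in the diagram below)
# would then map verbs to PAR templates:
actionary = {"rotate": PAR(action="rotate", agent="virtual-human", objects=["handle"])}
```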

Parameterized Action Representation (PAR)
[Diagram: PAR mediates between language and animation. Language (NLP) feeds PARs, which drive synthesized actions; observed actions feed back through PARs to language generation. PARs connect to the virtual agent’s internal state and to a stored “Actionary” of actions.]

PAR Examples
– Virtual Reality checkpoint trainer.
– Maintenance instruction validation.
– The EMOTE motion qualities model.

Checkpoint Virtual Environment

Maintenance Instruction Validation
Example: F-22 power supply removal.

Instructions
1. Rotate the handle at the base of the unit.
2. Disconnect the 4 bottom electric connectors.
3. Disconnect the 5 top electric connectors.
4. Disconnect the 2 coolant lines.
5. Unbolt the 8 bolts retaining the power supply to the airframe, support it accordingly, and remove it.

Executing the Corresponding PARs
The instructions are translated to PARs; the PARs control the actions. [Video: eye view; note the attention control.]

EMOTE Motion Quality Model
– EMOTE: a real-time motion quality model.
– Based on the Effort and Shape components of Laban Movement Analysis.
– Defines qualities of movement with 8 parameters.
– Controls numerous lower-level parameters of an articulated figure.

Effort Motion Factors
Four factors range from an indulging extreme to a fighting extreme:
– Space: Indirect ↔ Direct
– Weight: Light ↔ Strong
– Time: Sustained ↔ Sudden
– Flow: Free ↔ Bound

Shape Motion Factors
Four factors relating body movement to space:
– Sagittal: Advancing ↔ Retreating
– Vertical: Rising ↔ Sinking
– Horizontal: Spreading ↔ Enclosing
– Flow: Growing ↔ Shrinking
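Together, the four Effort and four Shape factors are the 8 EMOTE parameters. A minimal sketch of the parameter set, assuming each axis is a signed scalar in [-1,+1] (the range stated on the arm-motion slides below); which sign denotes which extreme is an assumption here:

```python
from dataclasses import dataclass

def _clamp(v: float) -> float:
    """Keep a value inside the [-1, +1] range EMOTE parameters use."""
    return max(-1.0, min(1.0, v))

@dataclass
class EmoteParams:
    # Effort axes (sign convention assumed: -1 indulging, +1 fighting)
    space: float = 0.0       # Indirect (-1) .. Direct (+1)
    weight: float = 0.0      # Light (-1) .. Strong (+1)
    time: float = 0.0        # Sustained (-1) .. Sudden (+1)
    flow: float = 0.0        # Free (-1) .. Bound (+1)
    # Shape axes (sign convention likewise assumed)
    horizontal: float = 0.0  # Enclosing (-1) .. Spreading (+1)
    vertical: float = 0.0    # Sinking (-1) .. Rising (+1)
    sagittal: float = 0.0    # Retreating (-1) .. Advancing (+1)
    shape_flow: float = 0.0  # Shrinking (-1) .. Growing (+1)

    def __post_init__(self):
        # Clamp every axis so out-of-range inputs stay valid.
        for name, v in vars(self).items():
            setattr(self, name, _clamp(v))
```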

EMOTE Inputs and Output
[Diagram: inputs (key poses, end-effector goals, motion capture, procedures) plus the 4 Efforts and 4 Shapes pass through inverse kinematics and interpolation to produce frame-rate poses.]

Applying Effort to Arm Motions
– Effort parameters consist of values in [-1,+1] for Space, Weight, Time, and Flow.
– These are translated into low-level movement parameters (see the sketch below):
  – Trajectory definition
  – Timing control
  – Flourishes
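The published EMOTE model defines the exact translation; the mapping below is only an illustrative sketch with invented coefficients, showing the kind of low-level parameters the Effort values drive:

```python
def effort_to_low_level(space: float, weight: float, time: float, flow: float) -> dict:
    """Illustrative (invented) translation of Effort values in [-1, +1]
    into low-level movement parameters of the kind EMOTE controls."""
    return {
        # Time: sudden movements are shorter, with more anticipation/overshoot
        "duration_scale": 1.0 - 0.5 * time,
        "anticipation": max(0.0, 0.3 * time),
        "overshoot": max(0.0, 0.3 * time),
        # Weight: strong movements accelerate harder
        "acceleration_scale": 1.0 + 0.5 * weight,
        # Space: direct movements follow straighter paths
        "path_curvature": 0.5 * (1.0 - space),
        # Flow: bound movements are stiffer (more damped)
        "damping": 0.5 * (1.0 + flow),
    }

# A strong, sudden, fairly direct, somewhat free movement:
print(effort_to_low_level(space=0.8, weight=0.6, time=0.9, flow=-0.4))
```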

Applying Shape to Arm Motions
– Shape parameters consist of values in [-1,+1] for the horizontal, vertical, and sagittal dimensions.
– For each dimension, we define an ellipse in the corresponding plane.
– The parameter value specifies the magnitude of movement of a keypoint along that ellipse (see the sketch below).
– A Reach Space parameter moves the keypoint away from or toward the body’s center of mass.
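A minimal geometric sketch of the ellipse idea; the radii and the quarter-ellipse range are assumed values for illustration, not taken from the EMOTE paper:

```python
import math

def shape_offset(param: float, radius_a: float = 0.10, radius_b: float = 0.05):
    """Displace a keypoint along an ellipse in one Shape plane.

    param in [-1, +1] sets how far the keypoint travels along the
    ellipse from its neutral position (angle 0); the radii are assumed
    values in meters.
    """
    theta = param * math.pi / 2            # up to a quarter ellipse each way
    dx = radius_a * math.sin(theta)        # in-plane displacement components
    dy = radius_b * (1.0 - math.cos(theta))
    return dx, dy

# Spreading (horizontal +0.7) versus enclosing (horizontal -0.7):
print(shape_offset(0.7), shape_offset(-0.7))
```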

EMOTE Controls Other Performance Parameters
Velocity, anticipation, overshoot, time exponent, number-of-frames multiplier, wrist bend multiplier, wrist extension multiplier, hand shape, squash, limb volume, path curvature, displacement multiplier, elbow twist magnitude and frequency, and wrist twist magnitude and frequency.

Motion Qualities are Significant
– Movements with EMOTE qualities give insight into the agent’s cognitive state.
– When EMOTE qualities spread from the limbs to the body, movements appear more sincere.
[Diagram: the agent’s actions plus EMOTE reach the observer, who infers the agent’s hidden state.]

Start with Scripted Motions

The Actor with Rising Efforts (Strong emotions?)

The Actor with Neutral Efforts (A politician?)

Actor with Less Rising Shape (Not quite as excited?)

Moving the Shapes Inward (Woody Allen?)

With Light and Sustained Efforts (More solemn and serious?)

Without Torso Movements (Used-car salesman?)

Application of EMOTE to Manner Variants (Adverbs) in PAR: HIT
Hit the ball … forcefully. … softly.
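One plausible way to realize such adverbs is a table that biases the Effort settings in a PAR’s manner slot; the table values below are invented for illustration:

```python
# Hypothetical adverb table: each manner adverb biases Effort settings.
ADVERB_EFFORTS = {
    "forcefully": {"weight": +0.9, "time": +0.6},  # strong, sudden
    "softly":     {"weight": -0.8, "time": -0.5},  # light, sustained
}

def apply_manner(par_manner: dict, adverb: str) -> dict:
    """Merge an adverb's Effort biases into a PAR's manner slot."""
    manner = dict(par_manner)
    manner.update(ADVERB_EFFORTS.get(adverb, {}))
    return manner

print(apply_manner({"verb": "hit", "object": "ball"}, "forcefully"))
```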

Define Links between Gesture Selection and Agent Model
Gesture performance cues agent state!
– Normal people show a variety of EMOTE parameters during gestures.
– Emotional states and some pathologies are indicated by a reduced spectrum of EMOTE parameters.
– That’s why some synthetic characters appear bland and uninspired.

Eye Saccade Generation: to Distinguish Various Agent States
[Video: Face2Face. Separate saccade statistics for speaking, listening, and thinking states, gathered in a pre-process and applied at run time.]

Eye Movements Modeled from Human Performance Data
[Video comparison: source footage; eyes fixed ahead; eyes moved by the statistical model; full MPEG-4 face.]
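A minimal sketch of the statistical-model idea: sample saccade timing and magnitude from per-state distributions. The distribution families and parameters below are invented placeholders; the real model would be fit to the human performance data the slide mentions:

```python
import random

def next_saccade(mode: str):
    """Sample one saccade (interval s, magnitude deg, direction deg)
    from invented placeholder distributions per agent state."""
    if mode == "speaking":
        interval = random.expovariate(1.0 / 1.2)   # saccades more frequent
        magnitude = random.lognormvariate(1.9, 0.5)
    else:  # listening
        interval = random.expovariate(1.0 / 2.0)
        magnitude = random.lognormvariate(1.6, 0.5)
    direction = random.uniform(0.0, 360.0)          # polar angle in view plane
    return interval, magnitude, direction

print(next_saccade("speaking"))
```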

Extending EMOTE to the Face: FacEMOTE
[Diagram: inputs (an MPEG-4 FAP stream plus the 4 Efforts and 4 Shapes) pass through FacEMOTE’s action-unit modifications to produce the output MPEG-4 FAP stream.]

FAPs Used
Eight primary parameters: ‘open_jaw’, ‘lower_t_midlip’, ‘raise_b_midlip’, ‘stretch_cornerlip’, ‘raise_l_cornerlip’, ‘close_t_l_eyelid’, ‘raise_l_i_eyebrow’, ‘squeeze_l_eyebrow’.
– 47 FAPs used out of 66 (not visemes and expressions; nor tongue, nose, ears, pupils).
– The rest are interpolated from the 8 primary parameters.

Approach
– Preprocess the FAP stream for the target face model to find maxima and minima (optional, but useful to avoid caricatures). A peak-finding sketch follows this list.
– Maintain the overall structure of muscle actions (contracting or relaxing), but:
– Change the path/time of the muscle trajectory.
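A minimal sketch of that preprocessing pass, assuming the stream is a list of per-frame values for one FAP track:

```python
def find_extrema(stream):
    """Return (frame, value) pairs where a FAP track peaks or bottoms out."""
    extrema = []
    for i in range(1, len(stream) - 1):
        prev, cur, nxt = stream[i - 1], stream[i], stream[i + 1]
        if (cur > prev and cur >= nxt) or (cur < prev and cur <= nxt):
            extrema.append((i, cur))
    return extrema

# A toy 'open_jaw' track: opens, closes, opens again
print(find_extrema([0, 3, 7, 5, 2, 4, 9, 6]))  # [(2, 7), (4, 2), (6, 9)]
```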

Main Steps
– Threshold local maxima and minima peaks as required by the EMOTE parameters.
– Modulate (multiply) the primary FAPs by EMOTE parameters to adjust their strength.
– Modulate the secondary FAPs with weighted blending functions reflecting the relative influence of each primary parameter on particular secondary parameters.
A sketch of the last two steps follows.
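Sketched below with invented weights and an invented secondary-FAP name; the real blending functions belong to FacEMOTE:

```python
# Hypothetical influence weights of primary FAPs on one secondary FAP.
SECONDARY_WEIGHTS = {
    "stretch_l_cornerlip": {"stretch_cornerlip": 0.7, "raise_l_cornerlip": 0.3},
}

def modulate_primary(value: float, emote_gain: float) -> float:
    """Scale a primary FAP by an EMOTE-derived gain (the multiply step)."""
    return value * emote_gain

def blend_secondary(name: str, primaries: dict) -> float:
    """Weighted blend of already-modulated primary FAPs."""
    return sum(w * primaries[p] for p, w in SECONDARY_WEIGHTS[name].items())

primaries = {"stretch_cornerlip": modulate_primary(10.0, 1.3),
             "raise_l_cornerlip": modulate_primary(4.0, 1.3)}
print(blend_secondary("stretch_l_cornerlip", primaries))
```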

Toward Character Consistency (very much in progress…)
Apply a consistent set of parameters to:
– Arms (torso) – EMOTE
– Face – FacEMOTE
– [Gait?]
– [Speech?!]
The following examples use gesture motion capture data and MPEG-4 face data.

Body without Face Motion (body by Salim Zayat)

Face without Body Motion (‘Greta’ face courtesy of Catherine Pelachaud)

Face and Body with EMOTE (body: indirect, widening; face: indirect)

Mismatched Face and Body Qualities (body: indirect, light, free; face: direct, heavy, bound)

Next Steps
– Add torso shape changes and rotations.
– Fix gesture timing and synchronization (use BEAT?).
– Adapt EMOTE phrasing to speech prosody.
– Find a suitable speech generator.
– Investigate the role of body realism.
– Develop an evaluation methodology.

Recognizing EMOTE Qualities
– Professional LMA notators can do it.
– Gives a “reading” of performer state.
– The manner in which a gesture is performed may be more salient to understanding action (intent) than careful classification of the gesture itself.
– Labeling EMOTE qualities in real time would inform virtual agents of human state.

Recognizing EMOTE Parameters in a Live Performance
– Neural networks trained to recognize the occurrence of significant EMOTE parameters from motion capture (both in video and from 3D electromagnetic sensors). A schematic of the setup follows this list.
– Ground truth from 2 LMA notators.
– The results are encouraging: EMOTE features may be detectable in everyday actions, and even from a single camera view.
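A schematic of the training setup, with made-up features and labels; the slide does not give the actual features or network architecture, so everything concrete here is an assumption:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Made-up stand-ins: 100 gesture clips, 12 motion features each
# (e.g., velocity/acceleration statistics of tracked limb keypoints).
X = rng.normal(size=(100, 12))
# Labels for the Time factor (0 = Sustained, 1 = Neutral, 2 = Sudden),
# as annotated by the LMA notators.
y = rng.integers(0, 3, size=100)

# One small network per Effort factor, as the per-factor confusion
# matrices on the next slides suggest.
time_net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
time_net.fit(X, y)
print(time_net.predict(X[:5]))
```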

Experimental Results – Confusion Matrices (3D Motion Capture)

Time network      Predicted:  Sustained  Neutral  Sudden
  Sustained                   44         4        0
  Neutral                     0          0        0
  Sudden                      0          0        50

Weight network    Predicted:  Light  Neutral  Strong
  Light                       12     0        2
  Neutral                     0      0        0
  Strong                      2      0        18

Flow network      Predicted:  Free  Neutral  Bound
  Free                        27    1        0
  Neutral                     0     0        0
  Bound                       2     0        14

Space network     Predicted:  Indirect  Neutral  Direct
  Indirect                    13        1        0
  Neutral                     0         0        0
  Direct                      0         1        11

Experimental Results – Confusion Matrices (2-Camera Video Input)

Time network      Predicted:  Sustained  Neutral  Sudden
  Sustained                   8          1        1
  Neutral                     0          0        0
  Sudden                      0          1        6

Weight network    Predicted:  Light  Neutral  Strong
  Light                       12     1        1
  Neutral                     0      0        0
  Strong                      0      2        10

Flow network      Predicted:  Free  Neutral  Bound
  Free                        7     2        0
  Neutral                     0     0        0
  Bound                       0     2        11

Space network     Predicted:  Indirect  Neutral  Direct
  Indirect                    13        2        0
  Neutral                     0         0        0
  Direct                      0         1        10
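For reference, per-factor accuracy can be read off such a matrix as the diagonal over the total; for example, the video-input Time network as transcribed above:

```python
def accuracy(confusion):
    """Fraction of samples on the diagonal of a confusion matrix."""
    total = sum(sum(row) for row in confusion)
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    return correct / total

# Time network, 2-camera video input (rows/cols: Sustained, Neutral, Sudden)
time_video = [[8, 1, 1],
              [0, 0, 0],
              [0, 1, 6]]
print(f"{accuracy(time_video):.0%}")  # 14/17 ≈ 82%
```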

EMOTE Parameters Determined from a Single Camera View
First- and second-order features are mostly preserved. Initial results are encouraging.

LiveActor: Immersive Environment for Real + Virtual Player Interaction
– Based on the Ascension ReActor IR real-time motion capture system.
– EON Reality stereo display wall.
– Emphasis on non-verbal communication channels.

The LiveActor Environment

Live Motion Capture

Example Based on DI-Guy from BDI plus Motion Capture and Scripts

Conclusion: A Plan for an Embodied Agent Architecture
– LiveActor environment for interactions between live and virtual agents.
– Embed qualities of real people into agents.
– Sense actions, and especially the motion qualities, of the live player.
– Link agent animation across body parts, multiple people, the environment, and the situation.

Acknowledgments
Martha Palmer, Aravind Joshi, Dimitris Metaxas; Jan Allbeck, Koji Ashida, Matt Beitler, Aaron Bloomfield, MeeRan Byun, Diane Chi, Monica Costa, Rama Bindiganavale, Neil Chatterjee, Charles Erignac, Karin Kipper, Seung-Joo Lee, Sooha Lee, Matt Leiker, Ying Liu, William Schuler, Harold Sun, Salim Zayat, and Liwei Zhao. Special thanks to Catherine Pelachaud and Eric Petajan.
Sponsors: NSF, ONR, NASA, USAF, ARO, EDS, Alias-Wavefront, & Ascension Technologies.