Sunee Holland
University of South Australia
School of Computer and Information Science
Supervisor: Dr G Stewart Von Itzstein

• What is Facial Expression Synthesis?
• Background
• Motivation
• Methodology
  › Research Questions
• Implementation
  › Why the Source SDK?
  › System Architecture
  › Script Design
  › Voice Recording
  › Source SDK – Face Poser
  › Source SDK – Hammer
• User Study
• Results
• Conclusion

• The simulation of human facial communication in graphical form
• The goal is to increase realism and enhance the user's experience
• Facial expression synthesis techniques can be extended to show deception leakage in virtual faces

• Techniques for facial expression synthesis
  › Motion capture
  › Muscle-based actuation
  › Deformations
  › Morphing or blending (see the sketch below)
• Toolkits
  › Source SDK (Face Poser), Xface, BEAT, Face Toolkit, Expression
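To make the blending technique concrete, the sketch below shows blend-shape (morph target) animation, the usual form of morphing: each expression is stored as per-vertex offsets from a neutral face, and the rendered mesh is a weighted sum. This is an illustrative C# sketch only, not code from any of the toolkits above.

    using System.Numerics;

    static class BlendShapes
    {
        // neutral: the rest-pose vertices; targets: one full-strength mesh
        // per expression; weights: how strongly each expression is applied.
        public static Vector3[] Blend(Vector3[] neutral, Vector3[][] targets, float[] weights)
        {
            var result = (Vector3[])neutral.Clone();
            for (int t = 0; t < targets.Length; t++)
                for (int v = 0; v < result.Length; v++)
                    result[v] += weights[t] * (targets[t][v] - neutral[v]);
            return result;
        }
    }

A half-strength smile, for instance, uses the smile target with weight 0.5; subtle deception leakage then amounts to briefly mixing in a small weight of a contradictory expression.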

• Embodied Conversational Agents (ECAs)
  › Realistic, virtual avatars
  › Gestures
  › Facial expressions
  › Speech
• Psychology
  › Deceptive expressions in humans
  › Paul Ekman's universal emotions
  › Micro expressions
  › Duchenne de Boulogne

• The aim of this research is to evaluate the means of communication between human and computer
  › Specifically, deception in virtual agents
  › Using micro expressions
• The literature covers how facial expression synthesis is performed
• We want to focus on what these systems can be used for

• Question the feasibility of deceptive facial expression synthesis
• Develop a deceptive facial expression synthesis technique
• Evaluate this technique through a user study

• Is it feasible to create a facial expression synthesis technique that portrays deception?
  › Is the user able to detect and recognise deception leakage on a computer-generated model?
  › What effect does the synthesised deception leakage of an animated 3D character have on the user's experience?

• What are the challenges in implementing a technique for synthesising deception leakage in computer-generated characters?
  › What existing software is available for synthesising facial deception leakage, and which is the optimal choice for portraying deception?
  › How can deception be implemented in facial expression synthesis with the tools currently available?

• Using the Source SDK
• Design the scenarios used in the experiment
• Generate audio dialogue to create choreographies in Face Poser
• Use those choreographies to make maps in Hammer
• The map runs through the Source Engine in Half Life 2 (see the file flow below)
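In file terms, the pipeline is roughly as follows (the extensions are the standard Source formats; the exact project layout is an assumption):

    script --> recorded dialogue (.wav, 16-bit PCM)
               --> Face Poser choreography (.vcd)
                   --> Hammer map (.vmf, compiled to .bsp)
                       --> played back by the Source Engine in Half Life 2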

• Other toolkits evaluated:
  › Xface, rFace, Expression
• A high level of detail is needed so that subtlety can be accurately portrayed

• Need to create a script for the scenarios to be included in the experiment
• As we want to evaluate the expressions only, other confounding factors must be removed
• Contextual bias could be a confounding factor
  › Bad: "I can't wait to see your parents. It will be great"
  › Good: "I can't wait to see my parents. It will be great"
• Need to express emotions equally
  › Anger, disgust, fear, happiness, sadness and surprise, plus the Duchenne (genuine) smile and the Pan American (non-genuine) smile
• Scenarios need to be realistic

• Choosing voice actors
  › One male and one female
• They were given a script and asked to voice the dialogue
  › Recorded using computer software
  › Encoded into a format Face Poser works with
  › WAV, 16-bit PCM (an example of this step is sketched below)
• The speech was required to be delivered in a neutral tone so as to avoid bias induced by the voice
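The slides do not name the recording software, but as an illustration of the encoding step, here is a minimal C# sketch using the NAudio library to re-encode a raw recording to 16-bit PCM WAV; the file names and the 44.1 kHz mono format are assumptions.

    using NAudio.Wave;

    class Reencode
    {
        static void Main()
        {
            // Hypothetical file names: re-encode a raw take to 16-bit PCM
            // WAV so Face Poser can import it and run phoneme extraction.
            using (var reader = new MediaFoundationReader(@"raw\female_scenario1.mp3"))
            using (var resampler = new MediaFoundationResampler(
                       reader, new WaveFormat(44100, 16, 1))) // 44.1 kHz, 16-bit, mono
            {
                WaveFileWriter.CreateWaveFile(@"wav\female_scenario1.wav", resampler);
            }
        }
    }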

• An SDK tool that produces choreographed sequences
• Audio files are imported
  › Phoneme extraction / lip-synching
• Choreographies are created on a timeline of events
  › Containing expressions and audio
• The result is exported as a Valve Choreography Data (VCD) file (an illustrative snippet follows)
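For orientation, a hand-trimmed approximation of what a VCD contains is shown below: an actor with channels, each channel holding timed events such as speech and flex (expression) animation. Face Poser writes these files itself; the actor, event and file names here are hypothetical, and the exact syntax should be treated as approximate.

    actor "Female01"
    {
      channel "audio"
      {
        event speak "scenario1_line1"
        {
          time 0.000000 3.200000
          param "experiment/scenario1_line1.wav"
        }
      }
      channel "expression"
      {
        event flexanimation "sadness_leak"
        {
          time 1.400000 1.600000   // a brief window for the micro expression
          param "sadness_leak"
        }
      }
    }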

• An SDK tool to create, edit, and export maps for Source Engine games
• Build the physical geometry
• Place entities in the map
  › Player spawn location, triggers, map changes, choreographies, characters
• Place an event on a generic character that links to a choreography file (wired up as sketched below)
• Compile and run through Half Life 2
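The standard Source way to link a map to a choreography (assumed here, since the slides do not show the entity setup) is a logic_choreographed_scene entity whose SceneFile key points at the VCD, started by a trigger volume. In Hammer this amounts to a few key-values and one I/O connection, roughly:

    logic_choreographed_scene        (entity key-values set in Hammer)
        targetname   scene_scenario1
        SceneFile    scenes/experiment/scenario1.vcd

    trigger_once at the scenario start point, with the output:
        OnStartTouch -> scene_scenario1 -> Start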

• In the process of finalising the experiment
• Participants
  › Ideal population: over 30 participants
  › Students, researchers and staff
• Training phase
  › Some people will have a predisposition to recognising deception
  › Reduce this confounding factor by placing all participants on a similar experience level

• 1. Evaluating the recognition of expressions
  › How well does the technique express emotion?
  › The user is shown an expression or scenario and asked whether they thought it was genuine or deceptive, and/or what emotion they thought they were perceiving

• 2. Evaluating subjective user experience
  › What is the quality of the interaction between computer and user?
  › Participants will be asked to fill out a feedback form
  › Subjective feedback on how easily they could identify deception and emotions

• We predict that there will be difficulty in accurately recognising deception
  › As is the case with human-to-human deception detection
• However, the technique should show a recognition rate slightly greater than chance
  › This is all that is required to validate the research (one way to test this is sketched below)
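Not from the slides, but to make "greater than chance" testable: for genuine-versus-deceptive judgements chance is 50%, and an exact one-sided binomial test gives the probability of scoring at least as well by guessing. A minimal C# sketch:

    using System;

    class ChanceTest
    {
        // P(X >= k) for X ~ Binomial(n, 0.5): the probability of getting
        // at least k of n trials correct by pure guessing.
        static double PValueAboveChance(int k, int n)
        {
            double p = 0.0;
            for (int i = k; i <= n; i++)
                p += Choose(n, i) * Math.Pow(0.5, n);
            return p;
        }

        static double Choose(int n, int k)
        {
            double c = 1.0;
            for (int i = 1; i <= k; i++)
                c = c * (n - k + i) / i;
            return c;
        }

        static void Main()
        {
            // e.g. 21 correct out of 30 trials is unlikely by chance alone
            Console.WriteLine(PValueAboveChance(21, 30)); // ~0.021
        }
    }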

• The main contribution of this research is the evaluation of facial expression synthesis portraying deception
• The user study answers the question regarding feasibility through the recognition rate and subjective user experience
• The question regarding challenges is answered in the implementation, through selecting a pre-existing solution and describing how the deception was implemented

• More complex facial expressions
• Ways of portraying deception other than facially
  › Body leakage is important when detecting deception
• Possible applications in gaming to improve the immersive experience

• Want a way to randomise the order of the choreography files for every participant in the experiment
• Solution: custom software that reads the files from a location, randomises their ordering, then outputs them into the Half Life 2 scenes directory
• Written in C# (a minimal sketch follows)
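A minimal sketch of what that randomiser might look like; the directory paths and the numbered output naming scheme are assumptions, not taken from the project.

    using System;
    using System.IO;
    using System.Linq;

    class SceneRandomiser
    {
        static void Main()
        {
            // Hypothetical paths: where the master VCDs live, and the
            // Half Life 2 scenes directory the game loads them from.
            const string inputDir  = @"C:\experiment\choreographies";
            const string scenesDir = @"C:\Games\half-life 2\hl2\scenes";

            var rng = new Random();
            var files = Directory.GetFiles(inputDir, "*.vcd")
                                 .OrderBy(_ => rng.Next())   // random permutation
                                 .ToArray();

            // Copy each file out under a fixed, numbered name so the map's
            // triggers play the scenes in the (now shuffled) order.
            for (int i = 0; i < files.Length; i++)
            {
                string dest = Path.Combine(scenesDir, $"scenario{i + 1:D2}.vcd");
                File.Copy(files[i], dest, overwrite: true);
            }
        }
    }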