An Emotive Lifelike Robotic Face for Face-to-Face Communication

Presentation transcript:

ExpressionBot: An Emotive Lifelike Robotic Face for Face-to-Face Communication
Ali Mollahosseini, Gabriel Graitzer, Eric Borts, Stephen Conyers, Richard M. Voyles, Ronald Cole, and Mohammad H. Mahoor
Presented by Jake Kwon

Categories of Robotic Faces
1) Mechatronic faces: Kismet by MIT (2000)
2) Android faces: Albert Hubo (2010)

Categories of Robotic Faces
3) Onscreen faces: Grace (2004), Baxter (2012)
   Pro: highly flexible and low cost, since the appearance can be changed at will
   Con: no physical embodiment, and the face can still look uncanny
4) Light-projected physical avatars
   Pro: also highly flexible and low cost; the facial animation is projected onto a physical robot mask
   Con: little beyond the uncanny appearance that afflicts robotic faces in general, which makes this the most promising category so far

Light-Projected Physical Avatar
- 3D-printed facial mask, modeled in Autodesk Maya
- Wig for aesthetics
- Portable projector

Projection Calibration
- Direct projection of the animation onto the mask appears distorted: the projection itself is 2D, while the projection surface is 3D.
- Calibration displays a checkerboard on the screen and defines a piecewise homography mapping between corresponding rectangles on the screen and on the mask (sketched below).
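As a rough illustration of that calibration step, the sketch below detects checkerboard corners with OpenCV and builds one homography per checkerboard cell. The grid size, helper names, and overall flow are assumptions for illustration, not the authors' implementation.

```python
# Sketch of piecewise-homography calibration with OpenCV (cv2).
# GRID and the helper functions are hypothetical, not from the paper.
import cv2
import numpy as np

GRID = (8, 6)  # interior checkerboard corners (cols, rows); assumed size

def detect_corners(image):
    """Locate checkerboard corners in a camera view of the projection."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, GRID)
    if not found:
        raise RuntimeError("checkerboard not detected")
    return corners.reshape(GRID[1], GRID[0], 2)  # rows x cols x (x, y)

def piecewise_homographies(screen_corners, mask_corners):
    """One homography per checkerboard cell, mapping a screen rectangle
    to the corresponding (distorted) rectangle seen on the mask."""
    rows, cols = mask_corners.shape[:2]
    maps = {}
    for r in range(rows - 1):
        for c in range(cols - 1):
            quad = lambda g: np.float32([g[r, c], g[r, c + 1],
                                         g[r + 1, c + 1], g[r + 1, c]])
            maps[(r, c)] = cv2.getPerspectiveTransform(quad(screen_corners),
                                                       quad(mask_corners))
    return maps
```

Warping each animation-frame rectangle through its cell's homography before projection compensates for the 3D curvature of the mask.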

Speech
- Uses the Bavieca speech recognizer, an open-source speech recognition toolkit written in C++
- Visually similar phonemes are grouped together, e.g. "buy", "pie", "my"
To animate the lips, ExpressionBot groups visually similar phonemes into 20 unique viseme classes, a technique used extensively by animation studios. A sketch of such a grouping appears below.
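As a rough illustration of viseme grouping, visually indistinguishable phonemes can share one mouth shape. The class names and membership below are hypothetical; the paper's actual 20-class table is not reproduced here.

```python
# Illustrative phoneme-to-viseme grouping; class names and membership are
# hypothetical placeholders, not the paper's actual 20-class table.
VISEME_OF = {
    "b": "BMP", "p": "BMP", "m": "BMP",  # lips closed: "buy", "pie", "my"
    "f": "FV",  "v": "FV",               # lower lip against upper teeth
    "o": "OW",  "w": "OW",               # rounded, puckered lips
}

def to_visemes(phonemes):
    """Collapse a recognized phoneme sequence into viseme classes,
    falling back to a neutral mouth shape for unmapped phonemes."""
    return [VISEME_OF.get(p, "REST") for p in phonemes]

print(to_visemes(["b", "ay"]))  # "buy" -> ['BMP', 'REST']
```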

Emotion Lip Blending
Six basic expressions: joy, fear, sadness, anger, disgust, and surprise.
Examples:
- Cheeks and lip corners are raised to express joy.
- Inner eyebrows are raised, and eyebrows and lip corners are lowered to express sadness.
Conflicts:
- The surprise expression opens the mouth fully, which conflicts with phonemes like /b/, /f/, and /v/ that are produced with the lips closed or nearly closed.
- Combining joy with puckered-mouth visemes such as /o/ produces visual speech that looks abnormal or creepy.
To overcome this, the authors designed a table that provides a phoneme weight factor and a maximum emotion weight for every phoneme-emotion combination; the values are adjusted empirically. A hypothetical excerpt follows.
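The numeric values below are placeholders chosen to illustrate the idea; the paper's actual weights were tuned empirically and are not given in this presentation.

```python
# Hypothetical (phoneme, emotion) -> (phoneme weight factor, max emotion weight)
# entries; values are illustrative, not the paper's empirically tuned table.
WEIGHTS = {
    ("b", "surprise"): (1.0, 0.2),  # lips must close: cap the open-mouth emotion
    ("o", "joy"):      (1.0, 0.4),  # puckered lips limit the smile
    ("a", "joy"):      (0.8, 1.0),  # an open vowel blends freely with joy
}
```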

Emotion Lip Blending
To blend expressions with lip movement, the animation generates the current face from the phoneme and emotion morph targets:

    F = F_c + Σ_j λ_j (F_j^max − F_0)

where F_c is the current phoneme (viseme) model, F_j^max is the j-th expression model at maximum intensity, F_0 is the neutral model, and λ_j ∈ [0, 1] is the intensity of the j-th expression model.

The problem this solves is natural emotional expression during speech. When the robot must show happiness while pronouncing the phoneme /o/, as in "hello", joy pulls the mouth into a smile while /o/ rounds the lips, and a naive combination looks creepy. The formula weighs what the expression should look like against the current lip shape, with each λ_j capped by the maximum emotion weight from the phoneme-emotion table on the previous slide. A sketch of the blend follows.
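A minimal sketch of that blend in NumPy, assuming the formula above. The vertex arrays, the `blend` helper, and the way the table cap is applied are illustrative assumptions, not the authors' code.

```python
import numpy as np

def blend(F_c, expressions, F_0):
    """F = F_c + sum_j lambda_j * (F_j_max - F_0).
    `expressions` is a list of (F_j_max, lambda_j, max_emotion_weight) tuples;
    the vertex arrays are hypothetical stand-ins for face morph targets."""
    F = F_c.copy()
    for F_j_max, lam, max_w in expressions:
        lam = min(lam, max_w)  # cap intensity per the phoneme/emotion table
        F += lam * (F_j_max - F_0)
    return F

# Example: blend a /b/ viseme with surprise, capped at 0.2 by the table,
# so the lips stay (nearly) closed despite the open-mouth emotion.
F_0 = np.zeros((3, 3))            # neutral model (toy 3-vertex face)
F_b = F_0 + [[0, -1, 0]] * 3      # /b/ viseme: lips closed
F_surprise = F_0 + [[0, 3, 0]] * 3  # surprise: mouth fully open
print(blend(F_b, [(F_surprise, 1.0, 0.2)], F_0))
```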

Experiments

Emotion Identification Experiment
- Participants were asked to classify the expressions of the 3D (physical) and 2D (onscreen) animations.
- Each expression was shown for 5 seconds.
- Participants identified anger much more reliably in 3D than in 2D.

Speech Realism Experiment
- Participants were presented with two speech animations: a basic one and the proposed one (grouped phonemes).
- They rated the realism of each on a scale of 0 to 5.
- Goal: measure the effectiveness of grouping visually similar phonemes, such as those in "buy" and "pie".

Eye Gaze Experiment
Five subjects were seated around the head and asked whether it was gazing at them.
- Shifting eye gaze only: screen agent 50%, physical agent 88%
- Shifting eye gaze plus random head movement: screen agent 42%, physical agent 77%

Conclusion
- Relatively low cost ($1,500), thanks to the 3D-printed mask and the open-source speech recognition toolkit
- Better emotion expression
- More realistic speech
- Better eye-gaze identification
https://www.youtube.com/watch?v=4-hH2r1Dmlk&t=6s

Competition
$249 on Kickstarter: https://www.youtube.com/watch?v=TJaLVbjfn7I

What Did I Learn