Animating Idle Gaze Humanoid Agents in Social Game Environments Angelo Cafaro Raffaele Gaito



Before Starting...  What?  Erasmus Project  When?  14th September 2008 – 18th March 2009  Where?  Reykjavik, Iceland 2

...Before Starting  Háskólinn í Reykjavík (Reykjavik University)  CADIA:  Center for Analysis & Design of Intelligent Agents  Hannes Högni Vilhjálmsson  CCP – EVE Online 3

4 Introduction

Scenario  When players of Online Games can virtually meet face-to-face, all the animated behaviours that normally support and exhibit social interaction become important. 5 From golem.de /Copyright CCP Games

6 Scenario  An avatar is the graphical puppet that represents the player inside the game world;  In realistic-looking virtual environments, like the upcoming space stations in CCP’s EVE Online, players cannot be tasked with micromanagement of avatar behaviour;  The avatars themselves have to exhibit a certain level of social intelligence. This is what we want to give them.

Scenario  In any social situation there’s a number of things that determine natural social behavior, including where a person is looking. We have divided these determiners into:  The type of social situation  The personal state of participants. Together, these determiners will impact the target, manner and timing of a gaze behavior. 7

8 Goals To model some of these factors in a virtual environment, in order to produce natural-looking gaze behavior for animated agents and avatars. Everyone should be able to look at each other and react (or not react) to each other’s gaze.

9 Methodology 1. Build a general model of gaze behavior based on existing research literature; 2. Collect statistical data on particular determiners of gaze through targeted video experiments; 3. Reproduce the observed gaze behavior in avatars within the CADIA Populus virtual environment.

10 The Model

11 Model Description Three main splits:  Personal State Factors;  Types of Social Situations;  Human Behaviour/Movements.

12 Personal State Factors Some personal state factors: Emotion; Mood; Personality/Character; Social Role; Conversation Model/History; Cognitive Activity; And more…

13 Types of Social Situations Some types of social situations: Idling “alone”; Having a conversation; Walking through the environment; Greeting and farewell rituals; Talking while performing a task; Emergency situations; And more…

14 Human Behaviour/Movements Gaze can involve many body parts: Eyes; Head; Torso; And more…

15 The Dynamics of Personal State The following sketch visualizes the possible differences in duration for the personal states: Green: the state typically lasts this long; Yellow: the state could possibly last this long; Red: the state would never last this long.
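As a minimal sketch of this legend (state names and duration ranges here are illustrative assumptions, not values from the slides), the green and yellow bands can be encoded as nested duration ranges and used to sample a plausible state duration:

```python
import random

# Illustrative duration bands, in seconds, for two personal states.
# "green":  the state typically lasts this long;
# "yellow": the state could possibly last this long (a superset of green);
# anything beyond yellow is "red": the state would never last that long.
STATE_BANDS = {
    "emotion": {"green": (5.0, 60.0), "yellow": (1.0, 300.0)},
    "mood":    {"green": (600.0, 7200.0), "yellow": (60.0, 86400.0)},
}

def sample_duration(state, rng, p_typical=0.8):
    """Sample a duration, usually from the typical (green) band,
    occasionally from the wider possible (yellow) band."""
    bands = STATE_BANDS[state]
    band = bands["green"] if rng.random() < p_typical else bands["yellow"]
    return rng.uniform(*band)
```

The point of the encoding is only that a sampled duration never falls in the red region, while most samples land in the typical band.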

16 Model ‘‘Reading Key’’ “A combination of a Social Situation and one or more Personal States, some of which can be regarded as constant over the situation, influences people’s Behaviour, such as eye movements, etc.”

17 ‘‘Walking through the Model’’ Due to the high number of combinations between social situations and personal state factors, we focused on two particular configurations, both very common: 1. Idle Gaze Standing:  Personal State Factor: cognitive activity;  Social Situation: idling “alone” (standing and waiting);  Movements: eyes, head and torso. 2. Idle Gaze Walking:  Personal State Factor: cognitive activity;  Social Situation: idling “alone” (walking down a street);  Movements: eyes, head and torso.
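The two configurations can be written down as plain data (a sketch; the field names are ours, not from the original implementation):

```python
# The two studied model configurations, as plain data.
CONFIGURATIONS = [
    {
        "name": "Idle Gaze Standing",
        "personal_state_factor": "cognitive activity",
        "social_situation": 'idling "alone" (standing and waiting)',
        "movements": ["eyes", "head", "torso"],
    },
    {
        "name": "Idle Gaze Walking",
        "personal_state_factor": "cognitive activity",
        "social_situation": 'idling "alone" (walking down a street)',
        "movements": ["eyes", "head", "torso"],
    },
]

# The two configurations share factor and movements; only the
# social situation differs.
shared_factors = {c["personal_state_factor"] for c in CONFIGURATIONS}
```

Laying the configurations out as data makes it easy to see that only the social situation varies between them.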

18 The Video Experiments

19 Experiments Description To test the soundness and validity of our Gaze Model we recorded three kinds of video experiments: 1. Idle Gaze Standing: behaviour/movements of people standing idle and alone in public places, waiting for something (bus stops, etc.); 2. Idle Gaze Walking: behaviour/movements of people walking alone on a main street with shops on the left, right, or both sides; 3. Affected Gaze: behaviour/movements of people in public places, walking alone or with other people, affected by the gaze of an observer for a fixed time.

20 Experiments Description – N° 1

21 Experiment N° 1 - Analysis

22 Experiment N° 1 – Case 3

23 Experiment N° 1 - Results From the video analysis we can extract 3 main patterns: 1. Subjects look in various directions for short durations; 2. Subjects look in various directions for long durations; 3. Subjects look around.

24 Experiment N° 1 - Statistics

25 Experiments Description – N° 2

26 Experiment N° 2 - Analysis

27 Experiment N° 2 – Case 9

28 Experiment N° 2 - Results From the video analysis we can extract 4 main patterns: 1. Subjects’ preferred direction for gaze aversion is down towards the ground; 2. Subjects close their eyelids just before a movement of the head (or eyes); 3. Along many parts of the walking line, subjects look at the ground; 4. Subjects almost never look up, up-left or up-right.

29 Experiment N° 2 – Statistics (1)

30 Experiment N° 2 – Statistics (2) People Cars

31 Gaze-Experiments Player Demo

32 Gaze-Experiment Player (Demo)  Recreating 2 scenes:  Hlemmur for experiment n° 1;  Laugavegur for experiment n° 2;  2 cases chosen from among all the video experiments;  Simple comparison between the video and the virtual environment;  Behaviour preset (locomotion, head and eyes).

33 CADIA Populus  CADIA Populus is a social simulation environment;  Powered by a reactive framework (Claudio Pedica, Hannes Högni Vilhjálmsson: Social Perception and Steering for Online Avatars. IVA 2008);  We used CADIA Populus to simulate our gaze behaviours.

34 Autonomous Generated Gaze Demo

35 General Process

36 Potential Targets Selection 1. Area 1 2. Area 2 3. Area 3

37 Decision Process (Exp. 1) 5 proxemics areas:  Choose Probability  Min. Duration  Max. Duration  Look Probability 2 target types:  Objects  Persons

38 Decision Process (Exp. 2) 5 categories of targets: Same Gender and Opposite Gender (Avatars); Shops; Cars; Other (moving or fixed). For each category:  Choose Probability;  Min. Duration;  Max. Duration;  Look Probability.
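Both decision processes (Exp. 1 and Exp. 2) draw on the same per-category parameter set: a choose probability, a look probability, and a min/max gaze duration. A hedged sketch of how such a table could drive target selection — the numeric values below are invented placeholders, not the statistics measured in the experiments:

```python
import random

# Per-category gaze parameters. Values are illustrative placeholders,
# not the measured experiment statistics.
GAZE_TABLE = {
    "same gender":     {"choose_p": 0.20, "look_p": 0.5, "min_d": 0.5, "max_d": 2.0},
    "opposite gender": {"choose_p": 0.30, "look_p": 0.7, "min_d": 0.5, "max_d": 3.0},
    "shops":           {"choose_p": 0.25, "look_p": 0.8, "min_d": 1.0, "max_d": 4.0},
    "cars":            {"choose_p": 0.15, "look_p": 0.6, "min_d": 0.3, "max_d": 1.5},
    "other":           {"choose_p": 0.10, "look_p": 0.4, "min_d": 0.3, "max_d": 1.0},
}

def decide_gaze(visible_categories, rng):
    """Pick one visible category weighted by its choose probability,
    then decide whether to actually look and for how long.
    Returns (category, duration) or None for 'don't look'."""
    cats = [c for c in visible_categories if c in GAZE_TABLE]
    if not cats:
        return None  # no potential targets -> default behaviour
    weights = [GAZE_TABLE[c]["choose_p"] for c in cats]
    cat = rng.choices(cats, weights=weights, k=1)[0]
    p = GAZE_TABLE[cat]
    if rng.random() >= p["look_p"]:
        return None  # decision process result: don't look
    return cat, rng.uniform(p["min_d"], p["max_d"])
```

Separating the "choose" and "look" probabilities lets a target attract the avatar's attention without forcing a gaze every time, which matches the observed mix of looking and not looking.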

39 Decision Process (Common Features - 1)  Default Behaviour:  triggered when there are no potential targets, or when the decision process results in “don’t look”;  Different default directions (standing or walking);  Use of short-term memory:  tracks changes in the potential targets;
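The short-term memory mentioned above can be sketched as a small recency buffer that keeps the avatar from immediately re-selecting a target it just looked at, and that resets when the set of potential targets changes. This is our own sketch, not the CADIA Populus implementation:

```python
from collections import deque

class ShortTermGazeMemory:
    """Remember the last few gaze targets so the decision process can
    skip them; reset when the set of potential targets changes."""

    def __init__(self, capacity=3):
        self.recent = deque(maxlen=capacity)
        self.last_targets = frozenset()

    def update_targets(self, potential_targets):
        # A change in the potential targets invalidates the memory.
        targets = frozenset(potential_targets)
        if targets != self.last_targets:
            self.recent.clear()
            self.last_targets = targets

    def filter(self, potential_targets):
        # Prefer targets not looked at recently; fall back to the full
        # list if everything is recent.
        fresh = [t for t in potential_targets if t not in self.recent]
        return fresh or list(potential_targets)

    def remember(self, target):
        self.recent.append(target)
```

The fixed-size `deque` means old targets naturally become eligible again after a few gaze shifts, avoiding both repetitive staring and permanent avoidance.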

40 Decision Process (Common Features - 2)  Avatar Profiles:  Gender;  Extrovert;  And so on…  Avatar Preferences:  Values between 0.0 and 1.0;  Same and Opposite Gender;  Cars;  Shops;  Gaze aversion:  Introvert avatars avert gaze in the mutual-gaze case.
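A hedged sketch of an avatar profile with 0.0–1.0 gaze preferences and the introvert gaze-aversion rule described above; the class and field names are our own illustration, not the original code:

```python
class AvatarProfile:
    """Avatar profile with gaze preferences in [0.0, 1.0] per category
    and introvert/extrovert gaze-aversion behaviour."""

    def __init__(self, gender, extrovert, preferences):
        self.gender = gender
        self.extrovert = extrovert      # False -> introvert
        self.preferences = preferences  # category -> value in [0.0, 1.0]
        for v in preferences.values():
            assert 0.0 <= v <= 1.0, "preference values must lie in [0.0, 1.0]"

    def averts_mutual_gaze(self):
        # Introvert avatars avert gaze in the mutual-gaze case.
        return not self.extrovert

    def interest_in(self, category):
        return self.preferences.get(category, 0.0)

# Example: an introvert avatar that mostly looks at shops.
shy = AvatarProfile("female", extrovert=False,
                    preferences={"same gender": 0.3, "opposite gender": 0.6,
                                 "cars": 0.2, "shops": 0.8})
```

The preference values can then scale the per-category choose probabilities in the decision process, so two avatars with the same surroundings still gaze differently.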

41 Conclusions

42 Conclusions  Analysis of the experiment data showed that the patterns we discovered confirm findings in the pre-existing literature;  The Autonomous Gaze Generation implementation is fully coherent with our gaze model;  An initial subjective evaluation suggests that moving from the Gaze Experiment Player to the Autonomous Generated Gaze implementation yields a more realistic result than we expected.

43 Future Work &amp; Drawbacks  Control the speed of head movements;  Add eyelids to the avatar’s head model;  Run another kind of experiment for the no-potential-target case;  Implement more detailed avatar profiles and features;  Expand experiment n° 3;  Limited head vertical orientation in the avatar’s model;  Autonomous Gaze Generation is strictly dependent on the scene;  Perception is not based on scene occlusion.

44 Questions?