Animating Idle Gaze for Humanoid Agents in Social Game Environments
Angelo Cafaro (angelcaf@inwind.it)
Raffaele Gaito (raffaele@cosmopoli.it)
Before Starting...
What? Erasmus Project
When? 14th September 2008 – 18th March 2009
Where? Reykjavik, Iceland
...Before Starting
Háskólinn í Reykjavík (Reykjavik University)
CADIA: Center for Analysis & Design of Intelligent Agents
Hannes Högni Vilhjálmsson
CCP – EVE Online
Introduction
Scenario
When players of online games can virtually meet face-to-face, all the animated behaviours that normally support and exhibit social interaction become important.
(Image from golem.de / Copyright CCP Games)
Scenario
- An avatar is the graphical puppet that represents the player inside the game world;
- In realistic-looking virtual environments, like the upcoming space stations in CCP's EVE Online, players cannot be tasked with micromanagement of avatar behaviour;
- The avatars themselves have to exhibit a certain level of social intelligence. This is what we want to give them.
Scenario
In any social situation there are a number of things that determine natural social behaviour, including where a person is looking. We have divided these determiners into:
- The type of social situation;
- The personal state of participants.
Together, these determiners impact the target, manner and timing of a gaze behaviour.
Goals
To model some of these factors in a virtual environment, in order to produce natural-looking gaze behaviour for animated agents and avatars.
Everyone should be able to look at each other and react (or not react) to each other's gaze.
Methodology
1. Build a general model of gaze behaviour based on existing research literature;
2. Gather statistical data on particular determiners of gaze through targeted video experiments;
3. Reproduce the observed gaze behaviour in avatars within the CADIA Populus virtual environment.
The Model
Model Description
Three main splits:
- Personal State Factors;
- Types of Social Situations;
- Human Behaviour/Movements.
Personal State Factors
Some personal state factors:
- Emotion;
- Mood;
- Personality / Character;
- Social Role;
- Conversation Model / History;
- Cognitive Activity;
- And more...
Types of Social Situations
Some types of social situations:
- Idling "alone";
- Having a conversation;
- Walking through the environment;
- Greeting and farewell rituals;
- Talking while performing a task;
- Emergency situations;
- And more...
Human Behaviour/Movements
Gaze can involve many body parts:
- Eyes;
- Head;
- Torso;
- And more...
The Dynamics of Personal State
The following sketch visualizes the possible differences in duration for the personal states:
- Green: the state typically lasts this long;
- Yellow: the state could possibly last this long;
- Red: the state would never last this long.
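The green/yellow/red zones above can be sketched as a simple duration classifier. This is a hypothetical illustration only: the presentation's sketch is qualitative, so the state names and the numeric bounds below are invented assumptions, not values from the model.

```python
# Hypothetical duration bounds (in seconds) per personal state.
# "typical" ends the green zone, "possible" ends the yellow zone;
# anything beyond is red (the state would never last this long).
STATE_DURATIONS = {
    "emotion": {"typical": 60, "possible": 600},       # assumed values
    "mood":    {"typical": 3600, "possible": 86400},   # assumed values
}

def duration_zone(state, seconds):
    """Classify how plausible a duration is for a given personal state."""
    bounds = STATE_DURATIONS[state]
    if seconds <= bounds["typical"]:
        return "green"
    if seconds <= bounds["possible"]:
        return "yellow"
    return "red"
```

Such a classifier would let an agent decide when a personal state has outlived its plausible duration and should decay.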
Model "Reading Key"
"A combination of a Social Situation and one or more Personal States, some of which can be regarded as constant over the situation, influences people's Behaviour, such as eye movements, etc..."
"Walking through the Model"
Due to the high number of combinations of social situations and personal state factors, we focused on two particular configurations, both very common:
1. Idle Gaze Standing:
   - Personal State Factor: cognitive activity;
   - Social Situation: idling "alone" (standing and waiting);
   - Movements: eyes, head and torso.
2. Idle Gaze Walking:
   - Personal State Factor: cognitive activity;
   - Social Situation: idling "alone" (walking down a street);
   - Movements: eyes, head and torso.
The Video Experiments
Experiments Description
To test the soundness and validity of our gaze model we recorded three kinds of video experiments:
1. Idle Gaze Standing: behaviour/movements of people standing idle and alone in public places, waiting for something (bus stops, etc...);
2. Idle Gaze Walking: behaviour/movements of people walking alone on a main street with shops on the left, right, or both sides;
3. Affected Gaze: behaviour/movements of people in public places (walking, alone or with other people) affected by the gaze of an observer for a fixed time.
Experiments Description – N° 1
Experiment N° 1 - Analysis
Experiment N° 1 – Case 3
Experiment N° 1 - Results
From the video analysis we can extract 3 main patterns:
1. Subjects looking in various directions for short durations;
2. Subjects looking in various directions for long durations;
3. Subjects looking around.
Experiment N° 1 - Statistics
Experiments Description – N° 2
Experiment N° 2 - Analysis
Experiment N° 2 – Case 9
Experiment N° 2 - Results
From the video analysis we can extract 4 main patterns:
1. Subjects' preferred direction for gaze aversion is down to the ground;
2. Subjects close their eyelids when a movement of the head (or eyes) is coming;
3. In many parts of the walking line the subjects look at the ground;
4. Subjects almost never look up, up-left or up-right.
Experiment N° 2 – Statistics (1)
Experiment N° 2 – Statistics (2): People and Cars
Gaze-Experiments Player Demo
Gaze-Experiment Player (Demo)
- Recreated 2 scenes: Hlemmur for experiment n° 1; Laugavegur for experiment n° 2;
- 2 cases chosen from among all the video experiments;
- Simple comparison between the video and the virtual environment;
- Behaviour preset (locomotion, head and eyes).
CADIA Populus
- CADIA Populus is a social simulation environment;
- Powered by a reactive framework (Claudio Pedica, Hannes Högni Vilhjálmsson: Social Perception and Steering for Online Avatars. IVA 2008: 104-116);
- We used CADIA Populus to simulate our gaze behaviours.
Autonomous Generated Gaze Demo
General Process
Potential Targets Selection
1. Area 1
2. Area 2
3. Area 3
Decision Process (Exp. 1)
5 proxemics areas, each with:
- Choose Probability;
- Min. Duration;
- Max. Duration;
- Look Probability.
2 target types:
- Objects;
- Persons.
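The per-area parameters above suggest a simple probabilistic selection step: each candidate target is tested against its area's choose and look probabilities, and a successful candidate is fixated for a duration drawn between the area's bounds. The sketch below is a minimal illustration under assumed parameter values; the area names, numbers, and function names are inventions, not the actual CADIA Populus implementation.

```python
import random

# Hypothetical parameters per proxemics area (all values assumed).
# Durations are in seconds.
AREA_PARAMS = {
    "intimate": {"choose_prob": 0.2, "look_prob": 0.5, "min_dur": 0.3, "max_dur": 0.8},
    "personal": {"choose_prob": 0.5, "look_prob": 0.7, "min_dur": 0.5, "max_dur": 1.5},
    "social":   {"choose_prob": 0.6, "look_prob": 0.5, "min_dur": 0.8, "max_dur": 2.0},
    "public":   {"choose_prob": 0.4, "look_prob": 0.4, "min_dur": 1.0, "max_dur": 2.5},
    "far":      {"choose_prob": 0.2, "look_prob": 0.3, "min_dur": 1.5, "max_dur": 3.0},
}

def choose_gaze_target(targets, rng=random):
    """targets: list of (name, area) pairs, nearest first.
    Returns (name, duration_seconds) or None (default: don't look)."""
    for name, area in targets:
        p = AREA_PARAMS[area]
        # First decide whether this candidate is chosen at all,
        # then whether the avatar actually looks at it.
        if rng.random() < p["choose_prob"] and rng.random() < p["look_prob"]:
            return name, rng.uniform(p["min_dur"], p["max_dur"])
    return None
```

A fuller version would also weight the two target types (objects vs. persons) differently; that dimension is omitted here for brevity.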
Decision Process (Exp. 2)
5 categories of targets:
- Same Gender (avatars);
- Opposite Gender (avatars);
- Shops;
- Cars;
- Other (moving or fixed).
Each with: Choose Probability; Min. Duration; Max. Duration; Look Probability.
Decision Process (Common Features - 1)
- Default behaviour: used when there are no potential targets, or when the decision process result is "don't look"; different default directions when standing or walking;
- Use of short-term memory;
- Handling of changes in the potential targets.
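The short-term memory mentioned above can be pictured as a small recency buffer that keeps the decision process from immediately re-selecting a target the avatar just looked at. This is a hedged sketch: the class, its capacity, and the filtering helper are assumptions for illustration, not the actual implementation.

```python
from collections import deque

class GazeMemory:
    """Crude short-term memory: remembers the last few fixated targets
    so they can be skipped until they expire from the buffer."""
    def __init__(self, capacity=3):
        # Oldest entries fall out automatically once capacity is exceeded.
        self.recent = deque(maxlen=capacity)

    def remember(self, target):
        self.recent.append(target)

    def seen_recently(self, target):
        return target in self.recent

def filter_targets(memory, candidates):
    """Drop candidates the avatar looked at recently; if nothing remains,
    the caller falls back to the default behaviour (don't look)."""
    return [t for t in candidates if not memory.seen_recently(t)]
```

Filtering before the probabilistic selection step spreads gaze over the scene instead of letting the avatar fixate one salient target repeatedly.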
Decision Process (Common Features - 2)
Avatar Profiles:
- Gender;
- Extrovert;
- And so on...
Avatar Preferences:
- Values between 0.0 and 1.0;
- Same and Opposite Gender;
- Cars;
- Shops.
Gaze aversion: introvert avatars avert gaze in the mutual-gaze case.
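The profile and preference scheme above can be sketched as a small data structure whose 0.0-1.0 preference weights scale the per-category look probabilities, plus an introversion flag driving mutual-gaze aversion. All field names, default values, and methods here are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class AvatarProfile:
    """Hypothetical avatar profile: preference weights in [0.0, 1.0]
    scale the per-category probabilities of the decision process."""
    gender: str = "female"
    extrovert: bool = True
    preferences: dict = field(default_factory=lambda: {
        "same_gender": 0.4,      # assumed weights
        "opposite_gender": 0.7,
        "cars": 0.5,
        "shops": 0.6,
    })

    def look_probability(self, category, base_prob):
        # Unknown categories fall back to a neutral 0.5 weight.
        return base_prob * self.preferences.get(category, 0.5)

    def mutual_gaze_reaction(self):
        # Introvert avatars avert gaze in the mutual-gaze case.
        return "hold" if self.extrovert else "avert"
```

Keeping preferences in the profile rather than in the decision process lets different avatars react differently to the same scene with no change to the selection logic.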
Conclusions
Conclusions
- Analysis of the experiment data confirmed, in the preexisting literature, several of the patterns we discovered;
- The Autonomous Gaze Generation implementation is completely coherent with our gaze model;
- An initial subjective evaluation suggests that moving from the Gaze Experiment Player to the Autonomous Generated Gaze implementation leads to a more realistic result than we expected.
Future Work & Drawbacks
- Control the speed of head movements;
- Eyelids in the avatar's head model;
- The no-potential-target case: another kind of experiment;
- More detailed avatar profiles and features implementation;
- Expansion of experiment n° 3;
- Limited head vertical orientation in the avatar's model;
- Autonomous Gaze Generation is strictly dependent on the scene;
- Perception is not based on scene occlusion.
Questions?