Control of Attention and Gaze in Natural Environments
Selecting information from visual scenes
What happens when we're in a visual scene like this? Natural scenes contain much more information than we can perceive in a brief exposure. If we view this scene for a second or two, we move our gaze around the image, perhaps looking at the bicycle in the center, or at large objects like the building. This process of selecting particular information in the scene isn't random, but we don't really know what determines where we look and what we attend to. What controls the selection process?
What controls these processes?
Fundamental Constraints: Acuity is spatially restricted. Attention is limited. Visual working memory is limited. Humans must therefore select a limited subset of the available information in the environment, and can retain only a limited amount of it. What controls these processes?
Now this is not a question we usually ask
Now this is not a question we usually ask. Typically we ask slightly different questions. Here's an example: you'll see a sequence of two brief images of simple shapes, with one object changed in the second view. Your job is to identify the changed item.
Anyone see the one that changed?
Anyone see the one that changed? If you happened to be looking at the right spot, you may have seen it change. Or you might have looked at a couple of the objects, but then forgotten what they were like between the two presentations. When people run experiments like this, they find that you can remember about 4 items. This gives us a visceral sense of our limitations.
Saliency (bottom-up): Image properties, e.g. contrast, edges, and chromatic saliency, can account for some fixations when viewing images of scenes. One approach to the problem is to try to predict where you look from an examination of the properties of the image alone (exogenous attention; gaze typically goes with attention).
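As an illustration of the bottom-up approach, here is a minimal sketch of a contrast-based saliency map in Python. It is a toy stand-in for full saliency models such as Itti & Koch's: the function name, the parameters, and the use of a single center-surround contrast channel are our own simplifications, not a published implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def contrast_saliency(image, center_sigma=2.0, surround_sigma=10.0):
    """Toy saliency map: center-surround contrast via difference of
    Gaussian-blurred copies of a grayscale image (2D float array)."""
    center = gaussian_filter(image.astype(float), center_sigma)
    surround = gaussian_filter(image.astype(float), surround_sigma)
    saliency = np.abs(center - surround)          # local contrast
    return saliency / (saliency.max() + 1e-9)     # normalize to [0, 1]

# Predicted fixation target: the most salient pixel.
# y, x = np.unravel_index(np.argmax(contrast_saliency(image)), image.shape)
```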
Limitations of Saliency Models
However, there are important ways in which saliency models are inadequate to explain the distribution of gaze in a scene. Important information may not be salient (e.g. stop signs in a cluttered environment). Salient information may not be important (e.g. retinal image transients from eye/body movements). And saliency doesn't account for many observed fixations, especially in natural behavior (e.g. Land et al.).
Need to Study Natural Behavior
Natural vision is not the same as viewing pictures. Behavioral goals determine what information is needed, and task structure (often) allows interpretation of the role of fixations. We are inclined to think of vision as viewing a picture, but more often we are acting in the environment. When an observer views a 2D image, we don't really know what they are doing - maybe remembering objects, maybe judging image quality. With a task requiring overt actions, we have a good idea of what the observer is doing from moment to moment. The need for action means different information is required: not only is the stimulus different (2D vs. 3D, field of view, etc.), the information you need is different.
Top-down factors: Viewing pictures of scenes is different from acting within scenes, which involves sub-tasks such as judging heading, avoiding obstacles, and choosing foot placement. The other problem with trying to explain fixation patterns or the distribution of attention by looking at fixations of images is that real vision really is different from looking at an image of a scene. If you are in a scene, you need different kinds of information than when you are looking at an image. We can think of natural vision as being composed of a set of mini-tasks like these, and gaze needs to be doled out in the service of each of them. When looking at an image, it is not clear what the observer is doing - recognition? memory?
To what extent is the selection of information from scenes determined by cognitive goals (i.e. top-down) and how much by the stimulus itself (i.e. salient regions - bottom-up effects)?
Modeling Top Down Control
Walter the Virtual Humanoid (Sprague & Ballard, 2003). Could a purely top-down system work? This idea is behind the work of Sprague & Ballard, who developed a model of gaze behavior in a walking context. Walter is a virtual agent whose task is to walk through a virtual environment, using vision to do three things. The agent has a small library of simple visual behaviors that need visual input: sidewalk following, picking up blocks, and avoiding obstacles. Each behavior uses a limited, task-relevant selection of visual information from the scene, which is computationally efficient. Through reinforcement learning, the humanoid learns the appropriate policy by which to schedule the extraction of visual information. In this model, a top-down scheduler for acquiring visual information is adequate for obstacle avoidance and Walter's other tasks.
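The scheduling idea can be sketched in a few lines. This is not Sprague & Ballard's actual algorithm (which learns Q-values by reinforcement learning and weighs the expected cost of uncertainty in each behavior); it is a hedged toy version in which each behavior's uncertainty grows until gaze refreshes it, and gaze goes to the behavior with the largest expected loss. All names and numbers are illustrative.

```python
class Behavior:
    """One visual routine competing for gaze (all values illustrative)."""
    def __init__(self, name, error_cost, noise_growth):
        self.name = name
        self.error_cost = error_cost    # cost of acting on a stale estimate
        self.noise_growth = noise_growth
        self.uncertainty = 0.0          # grows while gaze is elsewhere

    def expected_loss(self):
        return self.error_cost * self.uncertainty

    def tick(self, got_gaze):
        # A fixation refreshes this behavior's state estimate.
        self.uncertainty = 0.0 if got_gaze else self.uncertainty + self.noise_growth

behaviors = [Behavior("sidewalk_following", 1.0, 0.2),
             Behavior("obstacle_avoidance", 5.0, 0.3),
             Behavior("litter_pickup",      2.0, 0.1)]

for step in range(10):
    # Gaze goes to the behavior that would lose the most without an update.
    chosen = max(behaviors, key=lambda b: b.expected_loss())
    for b in behaviors:
        b.tick(b is chosen)
    print(step, chosen.name)
```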
Walter’s sequence of fixations
Walter's three sub-tasks: litter collection, obstacle avoidance, sidewalk following. The model suggests that such a system is feasible: the agent has a set of sub-tasks to perform, and gaze reflects performance of those sub-tasks. Walter learns where and when to direct gaze using a reinforcement learning algorithm.
Sprague & Ballard (VSS 2004)
What about unexpected events? What Walter would not be able to handle is an unexpected salient event, such as the appearance of another pedestrian in the field of view. Walter would be in trouble, because he doesn't have looking for other pedestrians in his behavioral repertoire.
Dynamic Environments
Computational load vs. unexpected events:
Bottom-up: expensive, but can handle unexpected salient events.
Top-down: efficient, but misses things not on the agenda.
How should a system deal with unexpected events? Top-down systems are more efficient because they select limited, task-specific information from the image, but they will miss things that are not on the agenda. Bottom-up systems that do extensive pre-processing of the image can catch a wider variety of information, but are computationally expensive. How would a top-down system deal with unexpected events? Through learning, or frequent checking?
Reward weights estimated from human behavior using Inverse Reinforcement Learning (Rothkopf, 2008). [Figure: avatar path vs. human path.]
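The IRL step can be sketched as feature-expectation matching: find reward weights w such that a linear reward w·f(s) makes the observed human paths look optimal. The sketch below follows the flavor of apprenticeship-learning methods (e.g. Abbeel & Ng, 2004), not Rothkopf's specific algorithm; `rollout_policy` and `featurize` are assumed helpers, not functions from the original work.

```python
import numpy as np

def feature_expectations(trajectories, featurize, gamma=0.95):
    """Average discounted feature counts over a set of state trajectories."""
    mu = None
    for traj in trajectories:
        phi = sum((gamma ** t) * featurize(s) for t, s in enumerate(traj))
        mu = phi if mu is None else mu + phi
    return mu / len(trajectories)

def irl_weights(mu_human, rollout_policy, featurize, lr=0.1, n_iters=20):
    """Adjust reward weights until the agent's feature statistics match
    the human's. rollout_policy(w) is assumed to return trajectories of
    a policy that is (approximately) optimal for reward w . f(s)."""
    w = np.zeros_like(mu_human)
    for _ in range(n_iters):
        mu_agent = feature_expectations(rollout_policy(w), featurize)
        w += lr * (mu_human - mu_agent)   # move toward the human's statistics
    return w
```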
Driving Simulator
Gaze distribution is very different for different tasks
[Figure: time spent fixating the intersection under each task.]
The Problem: Any selective perceptual system must choose the right visual computations, and when to carry them out. This is the essential problem for top-down systems: how do you know what to look for, and when to look for it? The tight link between vision and task demands raises the problem of scheduling behaviors, since the visual system has limited capacity and computational ability. How does the visual system balance current task goals against new stimuli that may change task demands? How do we deal with the unpredictability of the natural world? Answer: through learning and frequent checking - the world is not all that unpredictable, and we are really good at learning it.
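One way to make "learning plus frequent checking" concrete: keep a running estimate of how often each kind of event occurs, and check the source whose expected cost of going unchecked is largest. This is our own minimal sketch of the idea on the slide, not a model from the talk; the event names, costs, and learning rate are illustrative assumptions.

```python
class EventMonitor:
    """Tracks the estimated probability of one kind of environmental event."""
    def __init__(self, prior_prob, cost, lr=0.3):
        self.prob = prior_prob   # current estimate of event frequency
        self.cost = cost         # cost of missing the event
        self.lr = lr

    def observe(self, occurred):
        # Exponential running average: a few encounters shift the estimate.
        self.prob += self.lr * (float(occurred) - self.prob)

    def priority(self):
        return self.prob * self.cost   # expected cost of not checking now

monitors = {"oncoming_pedestrian": EventMonitor(0.10, cost=5.0),
            "sidewalk_edge":       EventMonitor(0.02, cost=1.0)}

# On each fixation opportunity, check the highest-priority source,
# then update its probability estimate with what was seen there.
target = max(monitors, key=lambda k: monitors[k].priority())
monitors[target].observe(occurred=False)
```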
Human Gaze Distribution when Walking
Experimental question: How sensitive are subjects to unexpected salient events? General design: Subjects walked along a footpath in a virtual environment while avoiding pedestrians. Do subjects detect unexpected potential collisions? To examine these tradeoffs we designed a walking experiment in virtual reality in which we could manipulate the bottom-up signal: what happens if a pedestrian suddenly starts to come at you - a looming stimulus?
Virtual Walking Environment
Virtual Research V8 Head Mounted Display with 3rd Tech HiBall Wide Area motion tracker; V8 optics with ASL501 Video Based Eye Tracker (left) and ASL 210 Limbus Tracker (right). Our lab integrates several systems to allow such a virtual reality experiment. We have a head-mounted display with two eye trackers installed: a video-based tracker for point-of-gaze recording, complemented by a limbus tracker used for saccade-contingent updates. To allow subjects to walk a sufficient distance, a wide-area motion tracking system updates the view inside the display while the subject walks the roughly 27-meter perimeter of a rectangular path in the lab.
Bird’s Eye view of the virtual walking environment.
Virtual Environment - Monument. Bird's-eye view of the footpath that the subjects walked. Six subjects each performed six trials of walking: three in the no-following condition and three in the following condition. Each trial consisted of walking around the path six times, about 3-4 minutes. (Short clip at normal speed.)
Experimental Protocol
1 - Normal Walking: Avoid the pedestrians while walking at a normal pace and staying on the sidewalk. 2 - Added Task (Follow Leader): Identical to condition 1, but with the additional instruction of following a yellow pedestrian. Subjects performed 3 blocks of 6 circuits in each condition.
What Happens to Gaze in Response to an Unexpected Salient Event?
The unexpected event: pedestrians on a non-colliding path changed onto a collision course for 1 second (10% frequency). The change occurred during a saccade, and was contingent on the pedestrian being 3-5 meters away with an angular change of no more than 30 degrees. Does a potential collision evoke a fixation? [Figure: pedestrians' paths, with the colliding pedestrian's path highlighted.]
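A saccade-contingent change of this kind is typically implemented by watching gaze velocity and making the scene edit while the eye is in flight, so the change itself produces no usable retinal transient. The sketch below is a hedged illustration only: `tracker.get_gaze`, `pedestrian.in_range`, and `pedestrian.set_collision_course` are hypothetical APIs, and the 100 deg/s threshold is a common choice rather than the study's documented value.

```python
import math, time

SACCADE_VELOCITY = 100.0   # deg/s; a common saccade-detection threshold

def gaze_speed(prev, curr, dt):
    """Angular gaze speed between two (x, y) samples in degrees."""
    return math.hypot(curr[0] - prev[0], curr[1] - prev[1]) / max(dt, 1e-6)

def trigger_during_saccade(tracker, pedestrian):
    """Put the pedestrian on a 1 s collision course, but only while the
    eye is mid-saccade and the protocol's distance constraint holds."""
    prev, t_prev = tracker.get_gaze(), time.time()
    while True:
        curr, t = tracker.get_gaze(), time.time()
        in_saccade = gaze_speed(prev, curr, t - t_prev) > SACCADE_VELOCITY
        if in_saccade and pedestrian.in_range(3.0, 5.0):  # 3-5 m away
            pedestrian.set_collision_course(duration=1.0)
            return
        prev, t_prev = curr, t
```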
Fixation on Collider. In this clip, a purple pedestrian appears in the visual field; shortly afterward, the pedestrian starts on a collision path.
No Fixation During Collider Period
In this clip the purple pedestrian turns the corner; the subject fixates the pedestrian, then looks back to the path and maintains that fixation during the collision period and as the pedestrian passes. The subject does not fixate the collider during its collision course.
Probability of Fixation During Collision Period
More fixations on colliders than on control pedestrians in normal walking; no effect in the Leader condition. So the collision event does seem to attract gaze, but only to a limited extent, and not if the subject has the added task of following a leader. [Figure: probability of fixation for controls vs. colliders, in the Normal Walking and Follow Leader conditions.]
Why are colliders fixated?
There is only a small increase in the probability of fixating the collider, and the failure of colliders to attract attention under an added task (following) suggests that detections result from top-down monitoring.
Detecting a Collider Changes Fixation Strategy
Top-down systems rely on estimating the likelihood of environmental events, so detection of an unlikely or significant event like a potential collision might lead subjects to spend more time monitoring pedestrians. Indeed, fixations on normal pedestrians are longer following detection of a collider ("hit") than following a "miss", in both the Normal Walking and Follow Leader conditions. This indicates that subjects can quickly modify their fixation strategy in response to information that signals a need to change policy. [Figure: time fixating normal pedestrians after a "miss" vs. a "hit".]
Subjects rely on active search to detect potentially hazardous events like collisions, rather than reacting to bottom-up looming signals. To make a top-down system work, subjects need to learn the statistics of environmental events and distribute gaze/attention based on these expectations.
Possible reservations…
Perhaps the looming robots are not similar enough to real pedestrians to evoke a bottom-up response.
Walking - Real World. Experimental question: Do subjects learn to deploy gaze in response to the probability of environmental events? General design: Subjects walked on an oval path and avoided pedestrians. This experiment moves the paradigm into the real world, manipulating the general task demands as well as a salient bottom-up signal to probe the questions we have framed.
A subject wearing the ASL Mobile Eye
Experimental Setup. System components: head-mounted optics (76 g), color scene camera, modified DVCR recorder, Eye Vision Software, PC with Pentium 4 2.8 GHz processor.
Experimental Design (continued)
Occasionally some pedestrians veered onto a collision course with the subject (for approx. 1 second). There were 3 types of pedestrians. Trial 1: a Rogue pedestrian (always collides), a Safe pedestrian (never collides), and an Unpredictable pedestrian (collides 50% of the time). Trial 2: the Rogue becomes Safe, the Safe becomes Rogue, and the Unpredictable remains the same.
Fixation on Collider
Effect of Collision Probability
Probability of fixating increased with higher collision probability.
Detecting Collisions: pro-active or reactive?
The probability of fixating the risky pedestrian is similar whether or not he/she actually collides on that trial, suggesting detection is pro-active rather than reactive. This may seem obvious, but in contrast, a lot of work has tried to predict fixation locations by analyzing properties of the image. It is not clear what role saliency might play in normal vision, since body motion generates image motion over the whole retina.
Learning to Adjust Gaze
Changes in fixation behavior are fairly fast, occurring over 4-5 encounters (fixations on the Rogue get longer, on the Safe shorter).
Shorter Latencies for Rogue Fixations
Rogues are fixated earlier after they appear in the field of view. This change is also rapid.
Effect of Behavioral Relevance
Fixations on all pedestrians go down when pedestrians STOP instead of COLLIDING, even though stopping and colliding should have comparable salience. Note that the Safe pedestrians behave identically in both conditions; only the Rogue changes behavior.
Fixation probability increases with probability of a collision.
Fixation probability is similar whether or not the pedestrian collides on that encounter. Changes in fixation behavior are fairly rapid (fixations on the Rogue get longer and earlier; on the Safe, shorter and later).
Our Experiment: Allocation of gaze when driving.
Effect of task on gaze allocation: does task affect the ability to detect unexpected events? Subjects drive along a street with other cars and pedestrians under 2 instructions - drive normally, or follow a lead car - and we measure fixation patterns in the two conditions. A competing task of following a leader diminished fixations on colliders, consistent with a top-down strategy (reprioritizing resources).
Conclusions: Subjects must learn the probabilistic structure of the world and allocate gaze accordingly; that is, gaze control is model-based. Subjects behave very similarly despite the unconstrained environment and absence of instructions. Control of gaze is proactive, not reactive. Anticipatory use of gaze is probably necessary for much visually guided behavior.
Behaviors Compete for Gaze/Attentional Resources. The probability of fixation is lower for both Safe and Rogue pedestrians in the Leader conditions than in the baseline condition: all pedestrians are allocated fewer fixations, even the Safe ones. Pedestrian monitoring competes for gaze resources and, we infer, attentional resources.
Conclusions: The data are consistent with task-driven sampling of visual information rather than bottom-up capture of attention: there was no effect of increased salience of the collision event, and colliders fail to attract gaze in the Leader condition, suggesting the competing task interferes with detection. Observers rapidly learn to deploy visual attention based on environmental probabilities, and such learning is necessary in order to deploy gaze and attention effectively.
Certain stimuli are thought to capture attention bottom-up (e.g. Theeuwes et al., 2001). Looming stimuli seem like good candidates for bottom-up attentional capture (Regan & Gray, 2000; Franconeri & Simons, 2003). We all have the intuition that attention is attracted by certain stimuli - e.g. something about to hit you - and there is an extensive literature, and considerable debate, on what does and does not capture attention exogenously.
No effect of increased collider speed.
To get more evidence on this issue, we increased the salience of the colliding pedestrian by increasing its speed at the moment it turned onto a collision course. Greater salience of the unexpected event did not increase fixations. [Figure: fixation probability in the Normal Walking (No Leader) and Follow Leader conditions.]
Other evidence for detection of colliders?
Do subjects slow down during the collider period? Subjects slow down, but only when they fixate the collider, which implies that fixation measures "detection". Slowing is greater if the collider was not previously fixated, consistent with peripheral monitoring of previously fixated pedestrians.
Conclusions: Subjects learn the probabilities of events in the environment and distribute gaze accordingly. The findings from the Leader manipulation support the claim that different tasks compete for attention.
Effect of Context: The probability of fixating the Safe pedestrian is higher in the context of a riskier environment.
Summary: Direct comparison between real and virtual collisions is difficult, but colliders are still not reliably fixated. Presumably, with such a highly salient stimulus, one would expect a high detection rate; our preliminary results show only a marginal increase in fixations on colliders in the real environment (0-20% depending on the condition) compared with those from the experiment described in Chapter 2. This favors active search as the source of information (colliders are missed if they do not coincide with an active search episode) rather than a bottom-up interpretation. If we compare fixations on the Risky pedestrian (62-70%) to colliders in the virtual environment (40-60%)… In this experiment there are many more collisions, so there is an overall context effect: fixations on the Safe pedestrian are higher with collisions than with stops. Subjects appear to be sensitive to several parameters of the environment, particularly experience: experience with the Rogue pedestrian elevated fixation probability on the Safe pedestrian to 70% (50% without experience), and experience with the Safe led to 80% fixation probability on the Rogue (89% without experience). Experience of the Safe carries less weight than experience of the Rogue.
Detection of signs at intersection results from frequent looks.
Shinoda et al. (2001): subjects were told either "Follow the car" or "Follow the car and obey traffic rules." What do we know from previous work on the distribution of attention in natural environments? Detection of signs at an intersection results from frequent looks. [Figure: time spent fixating the road, the car, the roadside, and the intersection.]
How well do human subjects detect unexpected events?
Shinoda et al. (2001): detection of briefly presented stop signs. Probability of detection: intersection P = 1.0, mid-block P = 0.3. The greater probability of detection at probable locations suggests subjects learn where to attend/look.
What do Humans Do? Shinoda et al. (2001) found better detection of unexpected stop signs in a virtual driving task. To try to answer this question it is worth looking at human behavioral data: Shinoda et al.'s subjects in a driving simulation strategically deployed fixations at key moments, based on learning. What are the capabilities and limitations of a top-down scheduler, and does Shinoda's result generalize to a more demanding situation? [Cartoon: a stop sign at the intersection and in the middle of the block.]