The longer one looks, the less one sees --- catching the features before they merge into objects


The longer one looks, the less one sees --- catching the features before they merge into objects Li Zhaoping, Nathalie Guyader, University College London http://www.gatsby.ucl.ac.uk/~zhaoping Presented at the meeting of Experimental Psychology Society, Birmingham, UK, April 10, 2006.

First, a demo: visual search for an orientation singleton. Now, superpose on each bar a vertical or horizontal bar.

Difficult --- even though the target still has a uniquely oriented bar. Is this due to dissimilarities between the distractors in the background (Duncan & Humphreys 1989)? --- only partly!

In a control condition, the singleton is 20 deg from vertical. Now, superpose on each bar a vertical or horizontal bar.

The target is not as difficult to spot: it has a unique shape.

Comparing the two stimuli: add horizontal/vertical bars to each bar of Asimple and Bsimple. A: target difficult to find. B: target easy to find.

Outline of the talk (A: target difficult to find; B: target easy to find):
(1) To find the target, bottom-up saliency from the unique orientation feature, computed pre-attentively, is mostly sufficient.
(2) Target object recognition is unnecessary.
(3) Viewpoint-invariant object recognition (involving attention): the target in A, but not in B, is identical to the distractors, which is confusing!
(4) Object/attention processes interfere with feature/pre-attentive processes. Hence the title.

Two levels of processing:
Low-level image features, represented in V1 (primary visual cortex; Hubel & Wiesel 1968). This representation requires no attention (Wolfe & Bennett 1997) and operates when gaze is not on the target. Bottom up, a unique feature attracts attention (Treisman, Julesz, Duncan & Humphreys, Wolfe, Itti & Koch, Li, etc.).
Higher-level, viewpoint-independent objects, represented in IT (infero-temporal cortex), LOC, and parietal cortex (Tanaka, Rolls, Logothetis, Humphreys, Riddoch & Price, Kourtzi, etc.). This representation requires attention (Stankiewicz, Hummel & Cooper 1998) and operates when gaze is on the target, since attention is at the gaze position in free viewing, except very briefly before saccades (Hoffman 1998). It supports viewpoint-invariant object/shape recognition (Treisman, Humphreys et al., Grill-Spector, Kourtzi, Kanwisher, etc.).

Pre-attentively: a unique low-level image feature attracts attention, sufficient to locate the target. Attentively: higher-level, viewpoint-independent object recognition; in A but not in B, the target is identical to the distractors, which is confusing! Typically, more processing is better... perhaps not here!

In a visual search experiment: fixation, then the stimulus until the button response; subjects' eye movements were tracked. Task: locate the target with the uniquely oriented bar, as fast as possible, by pressing the left/right button for a target in the left or right half of the display. Stimuli: A, B, Asimple, Bsimple. The uniquely oriented bar may be randomly tilted left or right, and the task-irrelevant bar on the target may be randomly vertical or horizontal; hence, subjects cannot use feature-based top-down attention to locate the target initially.

The display spans 46x32 degrees of visual angle. Condition A.

Gaze arrives at the target after a few saccades, mainly driven by the bottom-up saliency of the unique orientation. Then ...

Gaze dawdled around the target, then abandoned it and returned. We call this an arrival-abandon-return (AAR) trial.

Condition B

Gaze arrival at target

Followed immediately by the button response. Measurements: RThand, the reaction time of the button press, and RTeye, the reaction time of gaze arrival at the target. Performance: was the button press correct? Eye scan paths.
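The measurements above can be sketched in code. This is a minimal illustration with a hypothetical trial layout (the field names `t_eye`, `t_hand`, `correct` are assumptions, not the authors' data format): RTeye is the time of first gaze arrival at the target, RThand the time of the button press, and their difference is the extra deliberation time after the eyes have found the target.

```python
def summarize_trials(trials):
    """Each trial is a dict with hypothetical keys:
    't_eye'   - gaze arrival time at the target (ms from stimulus onset)
    't_hand'  - button-press time (ms from stimulus onset)
    'correct' - whether the left/right press matched the target side
    """
    n = len(trials)
    rt_eye = sum(t["t_eye"] for t in trials) / n
    rt_hand = sum(t["t_hand"] for t in trials) / n
    delay = rt_hand - rt_eye  # time from gaze arrival to button press
    accuracy = sum(t["correct"] for t in trials) / n
    return {"RTeye": rt_eye, "RThand": rt_hand,
            "hand_minus_eye": delay, "pct_correct": 100 * accuracy}

# Illustrative condition-A-like pattern: fast gaze arrival, long press delay
trials = [{"t_eye": 600, "t_hand": 4100, "correct": True},
          {"t_eye": 700, "t_hand": 4300, "correct": False}]
summary = summarize_trials(trials)
```

On such data, `hand_minus_eye` is the several-second gap the slides report for condition A.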

RT data for hand and gaze (3 subjects: red/green/blue): RTeye for Asimple, Bsimple, A, and B. A and B had similar RTeye.

RT data for hand and gaze (3 subjects: red/green/blue): RTeye (dark bars) and RThand - RTeye (light bars) for Asimple, Bsimple, A, and B, plus the percentage of correct button presses. In A, there is a 3-4 second delay before the button press!

Eye scan paths (target location marked): an arrival-abandon-return (AAR) trial, an example for A; a non-AAR trial, an example for B. Also shown: the percentage of AAR trials, and RThand - RTeye for non-AAR trials, for A and B. Note: visual attention is directed to the eye position --- top-down interference with the task in A.

Experiment 2, visual search with a time limit, to probe the time course of the interference: fixation, then the search stimulus, then a mask stimulus after a seemingly random time interval. Task: subjects were instructed to take their time pressing the button for the target location, before or after the mask appearance as they please, guessing if necessary.

Gaze contingency: fixation, search stimulus onset, gaze arrival at the target, mask onset at time t. Before gaze arrival, the target is viewed without attention; T, the target viewing time with attention, runs from gaze arrival to mask onset. Subjects were unaware of the link between mask onset times and eye positions; trials were interleaved with control trials of random and earlier mask onset times, such that
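The gaze-contingent timing can be sketched as follows. This is an illustrative reconstruction under the assumptions stated in the slide (the function name and the control-trial rule are assumptions, not the published procedure): on experimental trials the mask is scheduled T ms after gaze first arrives at the target, so the attentive viewing time is controlled; control trials use random, earlier onsets so mask timing does not reveal the contingency.

```python
import random

def mask_onset(t_gaze_arrival, T, control=False):
    """Return the mask onset time t (ms from search-stimulus onset).

    Experimental trial: t = t_gaze_arrival + T, so the target is viewed
    with attention for exactly T ms before the mask appears.
    Control trial: a random onset earlier than gaze arrival, hiding the
    gaze contingency from the subject.
    """
    if control:
        return random.uniform(0, t_gaze_arrival)
    return t_gaze_arrival + T

# Example: gaze reaches the target at 650 ms; allow 100 ms of attentive viewing
t = mask_onset(t_gaze_arrival=650, T=100)
attentive_viewing_time = t - 650  # recovers T
```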

Comparing conditions A and B (interleaved in each session): the same performance at the beginning! As the target (attentive) viewing time T (ms), i.e., the eye-to-mask delay, grows, performance improves marginally for B but deteriorates significantly for A.

The longer one looks, the less one sees. Percent correct button presses, for condition A only in blocked sessions, versus target (attentive) viewing time T (ms), i.e., eye-to-mask delay: at first a good guess, as good as without any mask at all; then, confused; finally, clarified! This is not because of any strategy to respond according to the eye position at mask onset: only 71% of button presses agree with the eye position at mask onset (averaged over all trials in a session).

Dissection of events in the task: (1) Before gaze arrival at the target, the target is viewed without attention (condition A), because visual attention is at the eye position in free viewing, except very briefly before saccades (Hoffman 1998). Pre-attentive objects are bundles of shapeless features (Wolfe & Bennett 1997), so the target is seen as unbound stimulus features.

Dissection of events in the task: (2) Still before gaze arrival at the target, without attention, the unique feature (orientation) attracts attention and gaze (Treisman, Julesz, Duncan & Humphreys, Wolfe, Koch & Ullman, Itti & Koch, etc.). V1 is the neural basis for a bottom-up saliency map: via intra-cortical interactions, the most active V1 cells attract attention regardless of their feature preferences (Li 1999, 2002). V1 activities drive saccades, via the superior colliculus, to the receptive field location of the active cell (Tehovnik, Slocum & Schiller 2003). This is a bottom-up, feature-based decision by V1 --- no object/orientation recognition is necessary.
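The iso-orientation suppression idea behind the V1 saliency account can be caricatured in a few lines. This is a toy sketch, not the published model (Li 1999, 2002): each bar's response is suppressed by similarly oriented neighbours, so an orientation singleton keeps the highest response and wins the saliency competition. The Gaussian suppression kernel and its 15-deg width are illustrative assumptions.

```python
import math

def saliency(orientations_deg):
    """Toy saliency: response of each item is 1 divided by (1 + suppression),
    where suppression comes from similarly oriented items (hypothetical
    iso-orientation kernel)."""
    sal = []
    for i, oi in enumerate(orientations_deg):
        suppression = 0.0
        for j, oj in enumerate(orientations_deg):
            if i == j:
                continue
            diff = abs(oi - oj) % 180
            diff = min(diff, 180 - diff)                  # orientation distance
            suppression += math.exp(-(diff / 15.0) ** 2)  # iso-orientation term
        sal.append(1.0 / (1.0 + suppression))
    return sal

# One 20-deg singleton among vertical (0-deg) bars: least suppressed, most salient
bars = [0, 0, 0, 20, 0, 0]
s = saliency(bars)
```

The singleton's high response can then attract gaze with no recognition of what the item is, which is the point of step (2).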

Dissection of events in the task: (3) After gaze arrives at the target location, the target is viewed attentively. Features are bound into object shape by attention (Treisman, etc.), and the target is recognized as the same object as the distractors, since priming with attention is insensitive to viewpoint (Stankiewicz, Hummel & Cooper 1998). IT, parietal cortex, and LOC are the neural basis (Tanaka, Rolls, Logothetis, Lawson & Humphreys, Riddoch & Price, Kourtzi, Grill-Spector, Treisman, etc.). The object-based decision by these higher-level processes interferes with the bottom-up, feature-based decision by V1. The interference did not have to occur; the data indicate that it actually does.

Dissection of events in the task: catching the features before they merge into objects!! At least 100 ms with attention is needed to achieve a viewpoint-invariant object representation, after which the object-based decision by the higher-level processes interferes with the bottom-up, feature-based decision by V1. Prediction: lesioning the brain area responsible for viewpoint-invariant object recognition (e.g., by TMS or in clinical populations) should improve task performance.

Subjects quickly learned to reduce/remove the interference (condition A): percent correct button presses versus target (attentive) viewing time T (ms), i.e., eye-to-mask delay, for the first and second sessions.

Summary: attentive, high-level object processes interfere with bottom-up, inattentive, low-level feature processes. This is demonstrated in a visual search task. The interference can be removed or reduced by limiting processing time. There are implications for interactions between different processing levels. More info: http://www.gatsby.ucl.ac.uk/~zhaoping

Comparing the two stimuli (A: target difficult to find; B: target easy to find): they have the same background distractors, and both targets have a uniquely oriented bar, unique in the whole image, so both should be salient by bottom-up saliency. But in A, the target object is a rotated and/or mirrored version of all the distractors, whereas in B the target object is uniquely shaped.