Eye Movements and Working Memory
Marc Pomplun
Department of Computer Science, University of Massachusetts at Boston

Eye Movements and Working Memory
Overview:
Image Processing: Convolution Filters
Iconic Memory Representations for Visual Search
Working Memory Use in a Natural Task
The Working Memory - Eye Movement Tradeoff

Convolution Filters
A grayscale image (a grid of intensity values) is convolved with a 3 × 3 averaging filter whose coefficients are all 1/9.

Image Processing
Original image and filtered image: the filtered value at a pixel is obtained by multiplying each of the nine pixels in its 3 × 3 neighborhood by 1/9 and summing the products. In the example shown, the neighborhood values sum to 47, so the filtered value is 47/9 ≈ 5.2.

Image Processing
At the next pixel position, the nine neighborhood values are again each multiplied by 1/9 and summed; here they sum to 60, so the filtered value is 60/9 ≈ 6.7.

Image Processing
Original image and filtered image: now you can see the averaging (smoothing) effect of the 3 × 3 filter that we applied.
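To make the averaging step concrete, here is a minimal Python sketch (assuming NumPy and SciPy are available; the image values are made up, not the ones from the slides):

```python
import numpy as np
from scipy.ndimage import convolve

# 3 x 3 averaging (box) filter: every coefficient is 1/9.
kernel = np.full((3, 3), 1.0 / 9.0)

# A small made-up grayscale image (intensity values).
image = np.array([[1, 6, 3, 2, 4],
                  [7, 3, 5, 1, 2],
                  [9, 6, 8, 4, 3],
                  [2, 5, 7, 6, 1],
                  [4, 3, 2, 8, 5]], dtype=float)

# Each output pixel is the mean of its 3 x 3 neighborhood,
# which produces the smoothing effect described above.
smoothed = convolve(image, kernel, mode='nearest')
print(np.round(smoothed, 1))
```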

Gaussian Filters
Discrete version: a kernel of integer weights normalized by 1/273 (the common 5 × 5 discrete Gaussian); the weights implement the decreasing influence of more distant pixels.

Gaussian Filters original 3333999915  15 Effect of Gaussian smoothing:

Sobel Filters
Sobel filters are an example of edge detection filters. Two small convolution filters, Sx and Sy, are applied in turn.

Sobel Filters
Sobel filters yield two interesting pieces of information:
The magnitude of the gradient (local change in brightness): |∇I| = √(Sx² + Sy²)
The angle of the gradient (tells us about the orientation of an edge): θ = arctan(Sy / Sx)
(Here Sx and Sy denote the two filter responses at a pixel.)

Sobel Filters Original image (left) and result of calculating the magnitude of the brightness gradient with a Sobel filter (right)
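A minimal Python sketch of the Sobel computation, assuming the standard Sx and Sy kernels; magnitude and angle follow the formulas above:

```python
import numpy as np
from scipy.ndimage import convolve

# Standard Sobel kernels for horizontal (Sx) and vertical (Sy) brightness changes.
Sx = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)
Sy = np.array([[ 1,  2,  1],
               [ 0,  0,  0],
               [-1, -2, -1]], dtype=float)

def sobel_gradient(image):
    """Return gradient magnitude and angle (in radians) for a grayscale image."""
    gx = convolve(image, Sx, mode='nearest')  # response to Sx
    gy = convolve(image, Sy, mode='nearest')  # response to Sy
    magnitude = np.sqrt(gx ** 2 + gy ** 2)    # local change in brightness
    angle = np.arctan2(gy, gx)                # orientation of the edge
    return magnitude, angle
```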

Rao, Zelinsky, Hayhoe & Ballard (2002): Eye Movements in Iconic Visual Search
Question: How do people represent items in their memory for efficient visual search?
Idea: An iconic (appearance-based) multiscale representation. Such representations were modeled using spatiochromatic convolution filters of different scales and orientations.

Rao, Zelinsky, Hayhoe & Ballard Convolution filters used for the model.

Rao, Zelinsky, Hayhoe & Ballard According to the model, iconic visual search proceeds as follows: The first saccade is aimed at the point in the visual scene whose low-frequency features have the best match with the low-frequency features of the memorized object. For the programming of the following saccades, higher and higher frequencies are included, until the target is found.
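The coarse-to-fine idea can be sketched roughly as follows. This is only a simplified illustration with made-up helper names (best_match_location, coarse_to_fine_scan), not the actual Rao et al. model; it assumes that low spatial frequencies can be approximated by heavily blurred (low-pass) versions of scene and target:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def best_match_location(scene, target, sigma):
    """Return the window position whose low-pass content best matches the target."""
    scene_lp = gaussian_filter(scene, sigma)    # low-pass version of the scene
    target_lp = gaussian_filter(target, sigma)  # low-pass version of the memorized target
    th, tw = target_lp.shape
    best_dist, best_pos = np.inf, (0, 0)
    for y in range(scene_lp.shape[0] - th + 1):
        for x in range(scene_lp.shape[1] - tw + 1):
            window = scene_lp[y:y + th, x:x + tw]
            dist = np.sum((window - target_lp) ** 2)  # feature distance
            if dist < best_dist:
                best_dist, best_pos = dist, (y, x)
    return best_pos

def coarse_to_fine_scan(scene, target, sigmas=(8.0, 4.0, 2.0, 1.0)):
    """Simulated saccade targets: start with low frequencies (large sigma),
    then include higher and higher frequencies (smaller sigma)."""
    return [best_match_location(scene, target, s) for s in sigmas]
```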

Rao, Zelinsky, Hayhoe & Ballard Coarse-to-fine scanning mechanism

Rao, Zelinsky, Hayhoe & Ballard Conclusion: Good correspondence between modeled and empirical scanpaths

Ballard, Hayhoe & Pelz (1995): Memory Representations in Natural Tasks Task: Copy a pattern of colored blocks

Ballard, Hayhoe & Pelz Possible strategies for completing the block copying task. Participants performed the following operations: (M)odel inspection, (P)ickup, and (D)ropoff.

Ballard, Hayhoe & Pelz Typical hand and gaze trajectories for a single copying step

Ballard, Hayhoe & Pelz Empirical frequency of individual strategies in the block copying task

Ballard, Hayhoe & Pelz Conclusions: In the block copying task, working memory is only sparsely used. Instead, subjects prefer to make additional eye movements. Because eye movements are “inexpensive”, subjects use the visual scene as an “external memory” rather than building an internal representation of it.

Eye Movement - Working Memory Tradeoff (Inamdar & Pomplun, 2003)
Based on the previous study by Ballard et al., it seems that using working memory is clearly more "expensive" than performing eye movements. A "cost model" may therefore be an adequate way of describing and predicting behavior in visual tasks.

Inamdar & Pomplun
The basic idea is that the visual system (including the cognitive mechanisms required for performing the task) optimizes visual behavior, i.e., minimizes its effort (cost). Is there such a tradeoff between the use of working memory and eye movements? If so, what exactly is minimized, and can it be quantified?

Inamdar & Pomplun Let subjects perform a visual task that requires eye movements and extensive use of visual working memory. Vary the “cost” of eye movements. Hypothesis: If the assumed tradeoff between eye movements and working memory exists, costlier eye movements should lead to increased use of working memory.

Stimuli in Experiment 1
Subjects were presented with two columns of simple geometrical objects in three different colors and three different shapes. The columns were identical except for one object that differed in either its color or its shape (in target-present trials). Subjects had to indicate whether such a target was present or not. The objects in the non-attended hemifield were always masked. The cost of eye movements was varied by changing the distance between the two columns.

Stimuli in Experiment 1

Eye Movements in Experiment 1

Results of Experiment 1

Experiment 2 What happens if the capacity limit of visual working memory is reached? By just varying the distance between columns, the cost of eye movements cannot be dramatically increased. Idea: “Artificially” increase the cost of eye movements in the present paradigm by delaying the unmasking of objects after any gaze switch between hemifields.

Stimuli in Experiment 2
We used the same stimuli as in Experiment 1, but only those for the "medium-distance" condition. Three visibility delays were used: 0 ms, 500 ms, and 1000 ms. During the delays, objects in both hemifields were masked.

Results of Experiment 2

Conclusions
There clearly is cost-minimizing behavior with regard to eye movements and working memory. However, the current data do not allow us to build a quantitative model of this phenomenon. It seems that people slightly overestimate their working memory capacity when they are forced to heavily increase their working memory load.