Optimal Eye Movement Strategies In Visual Search

Visual Acuity

Images from Laura Walker Renninger

Attention (And Fixation) Shifting Strategies

- Visual saliency: attend to what stands out from the background.
- Experience-guided search (last class): attend to locations of maximum posterior probability (MAP).
- Information maximization: given the low resolution of the parafovea, perhaps we move our eyes to gather as much information as possible. Low parafoveal resolution creates uncertainty; moving the eyes reduces this uncertainty (i.e., gathers information).

Information maximization and MAP both predict task specificity of eye movements.

Yarbus (1967)

Najemnik & Geisler (2005)

- Background: 1/f noise (the same spatial-frequency statistics as natural images)
- Target: sine-wave grating
- Manipulations: target contrast, background noise contrast
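As a concrete illustration of the stimuli, here is a minimal numpy sketch for generating a 1/f-noise background at a chosen RMS contrast. The function name and parameters are mine, not the paper's; a real experiment would also control mean luminance and display calibration.

```python
import numpy as np

def make_1f_noise(size=256, rms_contrast=0.20, seed=0):
    """Illustrative 1/f-amplitude noise image with random phases."""
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(size)[:, None]
    fy = np.fft.fftfreq(size)[None, :]
    f = np.sqrt(fx**2 + fy**2)
    f[0, 0] = 1.0                           # avoid divide-by-zero at DC
    spectrum = (1.0 / f) * np.exp(1j * rng.uniform(0, 2 * np.pi, (size, size)))
    img = np.real(np.fft.ifft2(spectrum))   # real part of the inverse FFT
    img -= img.mean()
    img *= rms_contrast / img.std()         # scale to the desired RMS contrast
    return img
```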

Measuring Visibility At The Fovea

- Subject fixates at the center.
- Two displays are shown in quick succession.
- Task: determine which one contains the target grating.
- Measure accuracy as a function of target and background contrast at each location.
- For the center location: threshold = 82% accuracy.

Derive Discriminability Curves

- For a given background contrast and target contrast, d' is the signal-to-noise ratio.
- If the noise is Gaussian, 1/d'^2 is the variance of the noise distribution.
- Two noise components:
  - External noise: due to the 1/f background.
  - Internal noise: due to inefficiency of the sensory system.

(Example conditions plotted: background contrast .05 with target contrast .07; background contrast .20 with target contrast .19.)

Terminology d’ E (i): discriminability due to external noise at location i d’ I (i,k(t)): discriminability at location i due to internal noise given fixation at current time is k(t) Combined noise from two independent Gaussian sources:

Snapshot Likelihood (Observation) Model

Imagine a feature detector at each location that matches the target template (the grating) against the visual information at that location.

- W_{i,k(t)}: the observation at location i at time t, when fixating k(t).
- Mean = +0.5 if the target is present at i, -0.5 if absent.
- Drawn from a Gaussian with variance g[i, k(t)]^{-1}, where g[i, k(t)] is the squared discriminability d'(i, k(t))^2 from the previous slide.
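A minimal sketch of this observation model, assuming a precomputed table g of squared discriminabilities (all names and shapes here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_snapshot(target_loc, g, fixation):
    """Draw one observation W_{i,k(t)} for every display location i.

    g[i, k] is the squared discriminability of location i when fixating
    location k; its reciprocal is the observation variance. The mean is
    +0.5 at the target's location and -0.5 elsewhere, per the slide.
    """
    n = g.shape[0]
    mean = np.where(np.arange(n) == target_loc, 0.5, -0.5)
    std = 1.0 / np.sqrt(g[:, fixation])
    return rng.normal(mean, std)
```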

Integrating A Sequence Of Observations

- A sequence of t = 1…T fixations.
- At each fixation, obtain noisy evidence concerning target presence at every location i: W_{k(1)}, W_{k(2)}, …, W_{k(T)}.
- Bayesian ideal observer: the posterior that the target is at location i is the prior times the product of the observation likelihoods,

  p_i(T) ∝ p_i(0) · Π_{t=1..T} p(W_{k(t)} | target at i)

Quiz

Are the W's independent, conditioned on the target's location (i or j)? It depends on the nature of the internal and external noise.
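A toy simulation (with made-up noise variances) makes the quiz concrete: if the external noise sample is frozen across snapshots, as it is for a static background, repeated observations at the same location share it and are correlated; with fresh external noise on every snapshot they would be independent.

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials = 20000

# One frozen external-noise draw per trial, shared by both snapshots...
x = rng.normal(0.0, 1.0, n_trials)
w1 = x + rng.normal(0.0, 0.5, n_trials)   # snapshot 1: x + internal noise
w2 = x + rng.normal(0.0, 0.5, n_trials)   # snapshot 2: same x, fresh internal noise
print(np.corrcoef(w1, w2)[0, 1])          # ~0.8: strongly correlated

# ...versus fresh external noise for every snapshot: independent.
w1 = rng.normal(0.0, 1.0, n_trials) + rng.normal(0.0, 0.5, n_trials)
w2 = rng.normal(0.0, 1.0, n_trials) + rng.normal(0.0, 0.5, n_trials)
print(np.corrcoef(w1, w2)[0, 1])          # ~0: uncorrelated
```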

Conditioning On External Noise

- x_i: the (unknown) external noise at location i.
- Marginalize over x: the likelihood of the observations at location i integrates over the external noise sample,

  p(W_{i,k(1)}, …, W_{i,k(T)} | target) = ∫ p(x_i) · Π_{t=1..T} p(W_{i,k(t)} | x_i, target) dx_i

- This assumes the internal noise is independent over time and space given the external noise, and that the external noise is independent over space.

Conjugate Priors To The Rescue

Because the internal noise and the external noise are both Gaussian, the integral can be computed analytically: a Gaussian prior combined with a Gaussian likelihood yields a Gaussian posterior. (The sign variable on the slide, +1 if q = i and -1 otherwise, encodes whether location q contains the target, setting the sign of the ±0.5 observation mean.)
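The underlying computation is the standard conjugate-Gaussian update; a generic version (not the paper's exact parameterization) is:

```latex
% Gaussian prior times Gaussian likelihoods stays Gaussian (sketch).
% Prior x ~ N(0, sigma_0^2); observations W_t | x ~ N(m + x, sigma^2).
\[
p(x \mid W_{1:T}) \propto
  \exp\!\Big(-\tfrac{x^2}{2\sigma_0^2}\Big)
  \prod_{t=1}^{T}\exp\!\Big(-\tfrac{(W_t - m - x)^2}{2\sigma^2}\Big)
  \;\propto\; \exp\!\Big(-\tfrac{(x-\mu_T)^2}{2\sigma_T^2}\Big),
\]
\[
\sigma_T^{-2} = \sigma_0^{-2} + T\sigma^{-2}, \qquad
\mu_T = \sigma_T^2\,\sigma^{-2} \sum_{t=1}^{T} (W_t - m).
\]
```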

Conjugate Priors To The Rescue (continued)

Writing out the form of the likelihood and the form of the posterior, an ugly normalization constant appears, but it is the same in the numerator and the denominator, so it cancels.

Final Result

  p_i(T) ∝ p_i(0) · exp( Σ_{t=1..T} g[i, k(t)] · W_{i,k(t)} )

An intuitive result:
- A weighted sum of the evidence, where each weight g[i, k(t)] reflects the reliability of the observation (it is 1/variance of that observation).
- A simple incremental rule for computing the posterior over time.
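A minimal sketch of the incremental rule (the names are mine, not the paper's): keep a running weighted sum of evidence per location and exponentiate only when a posterior is needed.

```python
import numpy as np

def update_posterior(evidence, g_col, w, prior):
    """One fixation's worth of the slide's incremental rule.

    evidence[i] accumulates sum_t g[i,k(t)] * W_{i,k(t)}; the posterior
    is then p_i(T) proportional to prior[i] * exp(evidence[i]).
    """
    evidence = evidence + g_col * w        # add this fixation's term
    log_post = np.log(prior) + evidence
    log_post -= log_post.max()             # guard against overflow
    post = np.exp(log_post)
    return evidence, post / post.sum()

# Hypothetical usage: n locations, uniform prior; g_col and w would come
# from the discriminability table and snapshot sampler sketched above.
n = 25
evidence = np.zeros(n)
prior = np.full(n, 1.0 / n)
```

Because each fixation only adds one term per location, the observer never needs to store past observations.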

What We Haven’t Discussed Yet How is next fixation location chosen?  Go to location most likely to contain target (MAP location)  Go to location that will obtain the greatest expected reduction in uncertainty (entropy)  Go to location that will obtain information that will maximize probability of correctly identifying target Comparison to random searcher  Ideal decision maker, but chooses fixations randomly

Choosing The Next Fixation: Some Ugly Details

Let C denote correct identification of the target. The ideal searcher picks the fixation k(T+1) that maximizes the posterior-weighted probability of being correct:

  k(T+1) = argmax_k Σ_i p_i(T) · P(C | target at i, fixate k)

Each P(C | …) term is an integral involving a normal density and a normal cdf, and it depends on the mean and variance of the probability density for an observation.
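The closed-form integral is tedious, so here is a Monte-Carlo sketch of the same decision rule (sampling stands in for the normal-density/normal-cdf integral; the function names and sample counts are illustrative): for each candidate fixation, average over hypothetical target locations and simulated next snapshots, and check whether the updated MAP location would be correct.

```python
import numpy as np

rng = np.random.default_rng(3)

def expected_accuracy(post, g, k, n_samples=2000):
    """Estimate P(correct identification | fixate k next) by simulation."""
    n = len(post)
    std = 1.0 / np.sqrt(g[:, k])
    acc = 0.0
    for i in range(n):                        # hypothetical target location
        if post[i] < 1e-6:
            continue                          # negligible posterior weight
        mean = np.where(np.arange(n) == i, 0.5, -0.5)
        w = rng.normal(mean, std, size=(n_samples, n))
        new_log_post = np.log(post + 1e-12) + g[:, k] * w
        acc += post[i] * np.mean(new_log_post.argmax(axis=1) == i)
    return acc

def next_fixation(post, g):
    """Ideal rule: fixate where expected identification accuracy is highest."""
    return max(range(len(post)), key=lambda k: expected_accuracy(post, g, k))
```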

Generating Fixation Sequences

Average Spatial Distribution Of Fixations For The 1st, 3rd, and 5th Saccades: Ideal and Human Observers

Results I

Median number of fixations needed to locate the target, as a function of the foveated target's visibility, for Observer 1 and Observer 2 at background noise contrasts of .025 and .20. Solid = ideal searcher; dashed = random searcher.

Results II

Median number of fixations needed to locate the target as a function of target eccentricity (x axis) and target visibility at the fovea (d'), at background noise contrasts of .05 and .20. Solid = ideal observer; dots = medians (less reliable at small eccentricities).

Results III

Posterior probability at the target location as a function of the number of fixations prior to finding the target. Dashed = random searcher; solid = ideal observer.

Are Fixations Information Seeking?

Comparison to MAP selection: these data can't distinguish the two strategies.

Distribution of Fixations

MAP selection vs. information seeking.

Distribution of Fixations II

Direction of fixations relative to the center of the display. Confirms the previous result.

Take Home

Visual search can be cast as optimal:
- Optimal choice of the next fixation.
- Possibly not optimal integration of information over fixations.

…subject to limitations on the quality of the visual information:
- Noise in the images.
- The acuity limitation of the retina.

Take Home II

We've discussed several Bayesian accounts that cast vision and attention in terms of ideal observers. How does this analysis give us insight into how the visual system works?
- It is a rigorous starting point for developing models.
- It provides a well-motivated computational framework.
- We can ask how human behavior deviates from optimal computation.
- We can ask how people achieve near-optimal performance with imperfect, noisy neural hardware.