From: Objects predict fixations better than early saliency

Presentation transcript:

From: Objects predict fixations better than early saliency. Journal of Vision. 2008;8(14):18. doi:10.1167/8.14.18

Figure Legend: Object maps predict fixations. (A) Area under the curve (AUC) for fixations predicted by saliency maps (x-axis) and object maps (y-axis). Each data point corresponds to one image; the distribution of each AUC is depicted as marginals (same axes as the scatter plot). For points above the diagonal, the object map's prediction is better than the saliency map's (68 images); below the diagonal, the opposite is the case (25 images). Magenta numbers identify the examples in panel B. (B) Example images in which fixations are predicted well by the object map and reasonably by the saliency map (top), well by the object map despite poor prediction by the saliency map (2nd from top), well by the saliency map despite poor prediction by the object map (2nd from bottom), and poorly by both maps (bottom). Left: image; middle: object map; right: saliency map. Fixations of all observers are shown in cyan.

Date of download: 1/2/2018. The Association for Research in Vision and Ophthalmology. Copyright © 2018. All rights reserved.
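The AUC comparison in panel A can be sketched as follows. This is a minimal illustration, not the authors' implementation: it scores a prediction map (saliency or object map) by the probability that a randomly chosen fixated pixel receives a higher map value than a randomly chosen non-fixated pixel (the Mann-Whitney formulation of ROC AUC). The function name and the pixel-level positive/negative split are assumptions; the paper's exact procedure for constructing the ROC may differ.

```python
import numpy as np

def fixation_auc(pred_map, fix_rows, fix_cols):
    """ROC area under the curve (AUC) for a fixation-prediction map.

    Fixated pixels are positives, all remaining pixels negatives.
    Returns P(value at fixated pixel > value at non-fixated pixel),
    with ties counted as 0.5 (exact Mann-Whitney formulation).
    """
    pred = np.asarray(pred_map, dtype=float)
    pos = pred[fix_rows, fix_cols]          # map values at fixated pixels
    mask = np.ones(pred.shape, dtype=bool)
    mask[fix_rows, fix_cols] = False
    neg = pred[mask]                        # map values everywhere else
    # Pairwise comparison is O(len(pos) * len(neg)); fine for a sketch,
    # but use a rank-based AUC for full-resolution maps.
    gt = (pos[:, None] > neg[None, :]).mean()
    eq = (pos[:, None] == neg[None, :]).mean()
    return gt + 0.5 * eq
```

Computing this score per image for both the object map and the saliency map yields one (x, y) point per image in the panel-A scatter plot; points above the diagonal are images where the object map wins.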