From: Pattern matching is assessed in retinotopic coordinates

Presentation transcript:

From: Pattern matching is assessed in retinotopic coordinates. Journal of Vision. 2009;9(13):19. doi:10.1167/9.13.19

Figure Legend: Testing the coordinate frame for scene matching. Experimental conditions and results:

(a) For any given base stimulus, there could be six different conditions: four match and two non-match, according to the test stimulus identity and position. The screen is depicted by the black elongated rectangular frame, and the locations of the fixation point and the images on the screen are illustrated by their relative positions inside the rectangle. In all conditions, the test stimulus could appear at one of two eccentricities (near: "option 1", or far: "option 2"). The conditions were:

1. "Same retina": the test (match) picture appeared in the same retinotopic position but in a different screen position.
2. "Same hemifield": the test (match) stimulus appeared in the same hemifield (5.6 degrees from its original retinotopic position) in a different screen position.
3. "Opposite hemifield": the third (match) picture appeared in the opposite hemifield and in a different screen position.
4. "Same screen": the third (match) picture appeared in the opposite hemifield but in the same screen position.

Two non-match conditions:

5. "Same hemifield": the third (non-match) picture appeared in the same hemifield but in a different screen position from the base pictures.
6. "Opposite hemifield": the third (non-match) picture appeared in the opposite hemifield but in the same screen position as the base pictures.

(b) Performance level for the various match and non-match conditions. Asterisks denote significant differences between the marked match performance levels (* p < 0.05; ** p < 0.005).

Date of download: 11/5/2017. The Association for Research in Vision and Ophthalmology. Copyright © 2017. All rights reserved.
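The six conditions above vary along three factors: whether the test picture matches the base, whether its retinotopic position is preserved, and whether its screen position is preserved. As a minimal sketch (the field names and this table are illustrative, not taken from the study's materials), the design can be enumerated like this:

```python
# Hypothetical encoding of the six conditions from the figure legend.
# Each entry records match status and which coordinate frame is preserved
# relative to the base stimulus (field names are our own, for illustration).
CONDITIONS = [
    {"name": "same retina",                "match": True,  "same_retinotopic": True,  "same_screen": False},
    {"name": "same hemifield",             "match": True,  "same_retinotopic": False, "same_screen": False},
    {"name": "opposite hemifield",         "match": True,  "same_retinotopic": False, "same_screen": False},
    {"name": "same screen",                "match": True,  "same_retinotopic": False, "same_screen": True},
    {"name": "same hemifield (non-match)", "match": False, "same_retinotopic": False, "same_screen": False},
    {"name": "opposite hemifield (non-match)", "match": False, "same_retinotopic": False, "same_screen": True},
]

n_match = sum(c["match"] for c in CONDITIONS)
n_nonmatch = len(CONDITIONS) - n_match
```

Note that only the "same retina" condition preserves the retinotopic position, which is what lets the design separate retinotopic from screen-based (spatiotopic) matching.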