From: Visualizing fMRI BOLD responses to diverse naturalistic scenes using retinotopic projection. Journal of Vision. 2017;17(6):18. doi:10.1167/17.6.18

Presentation transcript:

From: Visualizing fMRI BOLD responses to diverse naturalistic scenes using retinotopic projection. Journal of Vision. 2017;17(6):18. doi:10.1167/17.6.18

Figure Legend: RP-images of four photos, pooling V1 and V2 voxels. (Column 1) Photographs of real scenes. (Columns 2–4) Data RP-images for the three subjects. The degree of similarity across subjects would not be easily apparent if voxel responses were viewed in anatomical space, in which each subject has a unique layout for V1 and V2. (Rightmost column) Model RP-images based on the output of the local contrast integration model; the model RP-images shown here pool receptive fields from all three subjects. As described in Methods, the receptive fields are based on subject data, but the responses used with these receptive fields in retinotopic projection can be based on either model data or subject data. The data RP-images and the model RP-images are strongly similar for the first three photos; the photo in the bottom row is an example for which the model is a poor fit to the data. The first three photos are modified from the copyright-free Berkeley Segmentation Dataset (Martin et al., 2001), https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/resources.html; the bottom photo is by Kendrick Kay, who has made his images available for research and publication (see https://crcns.org/data-sets/vc/vim-1).

Copyright © 2017 The Association for Research in Vision and Ophthalmology. All rights reserved.
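The legend's central operation, retinotopic projection, maps each voxel's response back into visual-field coordinates using that voxel's receptive-field estimate, so that responses from different subjects (or from a model) can be compared in a common space. The paper's exact projection rule is given in its Methods; the sketch below is only a minimal illustration of the general idea, assuming Gaussian pRFs and a simple weighted-average rendering. The function name rp_image, the grid parameters, and the Gaussian weighting are all assumptions for illustration, not the authors' code.

    import numpy as np

    def rp_image(prf_x, prf_y, prf_sigma, responses, extent=10.0, n_pix=128):
        # Render a retinotopic-projection (RP) image on a visual-field grid.
        # prf_x, prf_y, prf_sigma: per-voxel pRF centers and sizes (degrees).
        # responses: per-voxel amplitudes, either measured BOLD or model output.
        xs = np.linspace(-extent, extent, n_pix)
        X, Y = np.meshgrid(xs, -xs)          # row 0 = upper visual field

        num = np.zeros((n_pix, n_pix))
        den = np.zeros((n_pix, n_pix))
        for x0, y0, sig, r in zip(prf_x, prf_y, prf_sigma, responses):
            # Gaussian pRF weight of this voxel at every visual-field location.
            w = np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2.0 * sig ** 2))
            num += w * r
            den += w

        with np.errstate(invalid="ignore"):  # leave uncovered pixels as NaN
            return num / den

Under these assumptions, pooling V1 and V2 voxels as in the figure would amount to concatenating the two areas' pRF parameters and responses before calling the function, and because the pRFs stay fixed while responses is swapped between measured and model-predicted amplitudes, the same routine would yield both the data RP-images and the model RP-images the legend describes.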
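The model RP-images derive from the paper's local contrast integration model, whose precise form is likewise specified in the Methods. As a loose, hypothetical illustration of that family of models, a voxel's predicted response can be computed as pRF-weighted local contrast energy; everything here (the RMS pooling, the mean-luminance normalization, and the name local_contrast_response) is an assumption for illustration, not the published model.

    import numpy as np

    def local_contrast_response(image, prf_x, prf_y, prf_sigma, extent=10.0):
        # Hypothetical per-voxel response: RMS luminance contrast pooled
        # under each voxel's Gaussian pRF. `image` is a square luminance
        # array covering [-extent, extent] degrees on both axes.
        n = image.shape[0]
        xs = np.linspace(-extent, extent, n)
        X, Y = np.meshgrid(xs, -xs)
        energy = (image - image.mean()) ** 2       # local contrast energy

        out = np.empty(len(prf_x))
        for i, (x0, y0, sig) in enumerate(zip(prf_x, prf_y, prf_sigma)):
            w = np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2.0 * sig ** 2))
            w /= w.sum()                           # normalize pRF weights
            out[i] = np.sqrt(np.sum(w * energy))   # RMS contrast in the pRF
        return out

Feeding such predicted responses into rp_image above would produce a model RP-image to set beside the data RP-image, mirroring the comparison shown in the figure's rightmost column.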