From: Visual motion aftereffects arise from a cascade of two isomorphic adaptation mechanisms. Journal of Vision. 2009;9(9):9. doi:10.1167/9.9.9

Figure Legend: Simulation of adaptation effects arising from gain reduction at two stages of a velocity processing cascade. (a) The first stage consists of two (classes of) nondirectional channels, tuned for low and high speeds. In the illustrated example, adaptation changes the relative response gain of the high-speed channel (blue region). Channels of the first stage then provide input to a second processing stage containing more narrowly tuned direction-selective velocity channels, which inherit the adaptation-induced gain changes from the first stage but also undergo additional (directional) gain changes themselves (red region). The percept is decoded from the second stage. The middle column shows one-dimensional slices along the horizontal velocity axis.

(b) Simulation results for the case of inherited gain reduction from the first stage only (blue), as well as the full model with gain changes in both stages (red). The population consisted of 21 equally spaced channels with rectified-cosine tuning curves and equal tuning width when plotted in the chosen perceptual space. The mean response of each channel is determined by its tuning curve (shown in (a)), and response variability was assumed to follow a Poisson process. The perceptual estimate was assumed to be the population average, computed as v̂(v_s) = ∑_i v_i^0 r_i(v_s) / ∑_i r_i(v_s), where v_i^0 is the preferred velocity of channel i and r_i(v_s) is its response to a stimulus with velocity v_s. The model qualitatively predicts the subjects' perceptual bias as shown in Figure 3. Results represent the average bias measured over 30,000 samples of the population response for each test velocity (see Supplementary material for simulation results for estimation variability).
The gain reduction profile was Gaussian, twice as wide and half as deep for the nondirectional mechanism as for the directional one, roughly matched to the parameters fitted to our subjects' data (see Figure 4b).
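The simulation the legend describes can be sketched as follows. This is a minimal re-implementation of the stated recipe (21 equally spaced channels with rectified-cosine tuning, Gaussian gain-reduction profiles with the nondirectional one twice as wide and half as deep, Poisson response variability, population-average decoding averaged over 30,000 samples), not the authors' code; the velocity range, tuning width, peak rate, and adaptation amplitude are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Population of 21 equally spaced velocity channels (legend); the range,
# tuning width, and peak spike count below are assumed for illustration.
N_CHANNELS = 21
PREFS = np.linspace(-10.0, 10.0, N_CHANNELS)   # preferred velocities v_i^0
WIDTH = 3.0                                    # tuning half-width (assumed)
PEAK_RATE = 50.0                               # expected spike count at peak (assumed)

def tuning(v_s, gains):
    """Mean response r_i(v_s): rectified-cosine tuning curve, scaled by gain."""
    x = np.clip(np.abs(v_s - PREFS) / WIDTH, 0.0, 1.0)
    return gains * PEAK_RATE * np.cos(0.5 * np.pi * x)

def adapted_gains(v_adapt, amp=0.4, sigma=2.0):
    """Gaussian gain-reduction profiles around the adapting velocity.

    Per the legend, the nondirectional (first-stage) profile is twice as
    wide and half as deep as the directional one; both multiply into the
    gain of the second-stage channels the percept is decoded from.
    """
    g_dir = 1.0 - amp * np.exp(-((PREFS - v_adapt) ** 2) / (2 * sigma**2))
    g_nondir = 1.0 - 0.5 * amp * np.exp(-((PREFS - v_adapt) ** 2) / (2 * (2 * sigma) ** 2))
    return g_dir * g_nondir

def decode_bias(v_s, gains, n_trials=30_000):
    """Average bias of v_hat = sum_i v_i^0 r_i / sum_i r_i over Poisson samples."""
    rates = tuning(v_s, gains)
    counts = rng.poisson(rates, size=(n_trials, N_CHANNELS))
    totals = counts.sum(axis=1)
    valid = totals > 0                         # skip (rare) all-zero trials
    v_hat = counts[valid] @ PREFS / totals[valid]
    return v_hat.mean() - v_s

baseline = decode_bias(2.0, np.ones(N_CHANNELS))
adapted = decode_bias(2.0, adapted_gains(v_adapt=0.0))
print(f"bias without adaptation: {baseline:+.3f}")
print(f"bias after adaptation:   {adapted:+.3f}")
```

With the gain dip centered on the adapter, channels preferring velocities near the adapting velocity respond less, so the population average is pushed away from the adapter: the decoded bias for a nearby test velocity is repulsive, the qualitative effect shown in Figure 3.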