From: Differences in perceptual learning transfer as a function of training task
Journal of Vision. 2015;15(10):5. doi:10.1167/15.10.5

Figure Legend: Learning solutions and their effect on transfer. (A) In learning a simple discriminative mapping, there is initial uncertainty about where the boundary lies (gray region), the extent of which is reduced over time. However, this discriminative mapping is of no value at an orthogonal orientation. (B) In learning a continuous relationship between perceived orientation and output estimate, there is initial uncertainty regarding the slope of the relationship (i.e., many possible lines that are consistent with the data, here represented as many individual black lines within the gray region indicative of overall uncertainty). As data are observed over time, the degree of uncertainty is reduced (e.g., the space of possible lines is narrowed to include only the true relationship). Finally, because the relationship is continuous with orientation, the learning is applicable to the orthogonal orientation, represented as an extrapolation to a new angle in the final panel.

Date of download: 12/26/2017
Copyright © 2017 The Association for Research in Vision and Ophthalmology. All rights reserved.
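
The legend describes the two learning solutions only conceptually. The sketch below is a minimal illustration, not the authors' model: it assumes a linear generative relation between orientation and estimate, Gaussian noise, and a Gaussian prior on the slope, with all variable names and parameter values invented for illustration. It shows why posterior uncertainty about a continuous slope shrinks as trials accumulate and supports extrapolation to a new angle, whereas a category boundary learned along the trained axis carries no information about an orthogonal orientation.

    # Minimal sketch (illustrative assumptions only; not the authors' model)
    import numpy as np

    rng = np.random.default_rng(0)

    # (B) Continuous mapping: estimate = w * orientation + noise
    true_w, noise_sd = 1.0, 2.0      # assumed generative parameters
    prior_var = 4.0                  # broad prior on the slope (initial "gray region")

    orientations = rng.uniform(-30, 30, size=50)   # trained range (deg)
    estimates = true_w * orientations + rng.normal(0, noise_sd, size=50)

    def slope_posterior(x, y, prior_var, noise_var):
        """Posterior mean/variance of w for y = w*x + e, zero-mean Gaussian prior on w."""
        precision = 1.0 / prior_var + np.dot(x, x) / noise_var
        post_var = 1.0 / precision
        post_mean = post_var * np.dot(x, y) / noise_var
        return post_mean, post_var

    # Uncertainty about the slope shrinks as more trials are observed
    for n in (2, 10, 50):
        m, v = slope_posterior(orientations[:n], estimates[:n], prior_var, noise_sd**2)
        print(f"after {n:2d} trials: slope = {m:.2f} +/- {np.sqrt(v):.2f}")

    # Because the mapping is continuous in orientation, it extrapolates to a new angle
    new_angle = 90.0
    m, _ = slope_posterior(orientations, estimates, prior_var, noise_sd**2)
    print(f"predicted estimate at {new_angle} deg: {m * new_angle:.1f}")

    # (A) Discriminative boundary: a threshold along the trained axis only
    labels = (orientations > 0).astype(int)        # e.g., clockwise vs. counterclockwise
    boundary = 0.5 * (orientations[labels == 1].min() + orientations[labels == 0].max())
    print(f"learned boundary near {boundary:.1f} deg (uninformative about the orthogonal axis)")

In this toy version, the narrowing posterior over the slope plays the role of the shrinking gray region in panel B, and the fixed decision threshold plays the role of the boundary in panel A that does not transfer.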