Journal of Vision. 2014;14(14):2. doi:10.1167/14.14.2. Figure Legend:


From: Are optical distortions used as a cue for material properties of thick transparent objects? Journal of Vision. 2014;14(14):2. doi:10.1167/14.14.2

Figure Legend: The settings for the refractive indices lie between the predictions for similarity matches of background distortions (D match) and similarity matches of specular reflections (SR match). (a) Relative position of the refractive index of the test object (RT) in the interval between the respective predictions of D matches and SR matches, for all four thickness ratios (TT/TS), as a function of the refractive index of the standard object (RS). The black line represents the average setting across all four thickness ratios. (b) Example of a fixed standard object (left column, RS = 1.5) and a test object (right column, TT/TS = 100/150) with different settings for the refractive index RT. The topmost object shows a similarity match based solely on background distortions (a D match, obtained from simulations using the R-T compensation); the lowermost object shows a similarity match of specular reflections (an SR match, obtained empirically). The refractive index of the center object corresponds to the subjects' mean setting. The images show only a part of the complete stimulus illustrated in Figure 9.

Date of download: 12/23/2017. The Association for Research in Vision and Ophthalmology. Copyright © 2017. All rights reserved.
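The relative-position measure plotted in panel (a) can be sketched as a simple normalization: a setting RT is expressed as its fractional position between the D-match prediction and the SR-match prediction. This is a minimal illustrative sketch; the function name and the numeric values below are assumptions for illustration, not taken from the paper.

```python
def relative_position(r_t, r_d, r_sr):
    """Normalized position of an observer's refractive-index setting r_t
    within the interval spanned by the two predictions:
    0.0 -> setting coincides with the D-match prediction (r_d),
    1.0 -> setting coincides with the SR-match prediction (r_sr).
    Values between 0 and 1 indicate a compromise between the two cues."""
    return (r_t - r_d) / (r_sr - r_d)

# Hypothetical example values: a setting of 1.40 between a D-match
# prediction of 1.30 and an SR-match prediction of 1.50 lies halfway.
pos = relative_position(1.40, 1.30, 1.50)
```

A mean relative position near the middle of the interval, as the black line in panel (a) suggests, would indicate that observers weight background distortions and specular reflections jointly rather than relying on either cue alone.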