From: Central and peripheral vision for scene recognition: A neurocomputational modeling exploration. Journal of Vision. 2017;17(4):9. doi:10.1167/17.4.9

Figure Legend: The network architecture of TM and TDM. (a) The structure of one of the two pathways in TDM; this network is used in Experiment 3.1. The network has seven layers: five convolutional layers, each with filter size M × M and N feature maps (the number to the right of each layer), and two fully connected layers (fc6 and output). (b) The architecture of TM. The input is preprocessed by Gabor filter banks and PCA before being fed into a two-layer neural network, and the output layer is modulated by the gating layer (Gate). (c) The two-pathway TDM (used in Experiment 3.2), which models central and peripheral visual information processing. The two side-by-side pathways have identical structure and converge at the output layer, with the weights between fc6 and the output layer modulated by the gating layer (Gate).

Date of download: 11/13/2017. Copyright © 2017 The Association for Research in Vision and Ophthalmology. All rights reserved.
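The convergence described in panel (c) can be sketched in code: two pathways each produce an fc6-style feature vector, and the weights from fc6 to the output layer are multiplicatively modulated by a gate vector before the pathways' evidence is combined. This is a minimal NumPy sketch, not the authors' implementation: the five convolutional layers are abstracted into a single random projection, and all dimensions (32 × 32 inputs, 64 fc6 features, 10 output classes) are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def pathway(x, n_features=64):
    """Stand-in for one convolutional pathway: maps an input patch to an
    fc6-style feature vector. (The paper's five conv layers are abstracted
    here into a single random projection followed by a ReLU.)"""
    w = rng.standard_normal((n_features, x.size)) * 0.01
    return np.maximum(w @ x.ravel(), 0.0)  # ReLU nonlinearity

def gated_output(fc6_central, fc6_peripheral, n_classes=10, gate=None):
    """Converge the two pathways at the output layer. As in TDM, the gate
    vector multiplicatively modulates the fc6-to-output weights; a gate of
    all ones leaves the weights unchanged."""
    fc6 = np.concatenate([fc6_central, fc6_peripheral])
    w_out = rng.standard_normal((n_classes, fc6.size)) * 0.01
    if gate is None:
        gate = np.ones(fc6.size)          # no modulation by default
    logits = (w_out * gate) @ fc6         # gate scales each fc6 weight column
    e = np.exp(logits - logits.max())
    return e / e.sum()                    # softmax over class probabilities

# Hypothetical central- and peripheral-field image patches.
central = rng.random((32, 32))
peripheral = rng.random((32, 32))
p = gated_output(pathway(central), pathway(peripheral))
```

Suppressing one pathway then amounts to zeroing the gate entries over that pathway's half of the concatenated fc6 vector, which is how a gating layer can route the decision toward central or peripheral information without retraining the pathways themselves.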