Presentation transcript:

Date of download: 9/17/2016 Copyright © 2016 SPIE. All rights reserved. Examples of synthetic simple scenes (in anaglyph 3-D). Each visual stimulus in (a) and (b) includes two gray meteor objects: one intended to induce visual discomfort through excessive screen disparities (disparity-induced discomfort), the other through fast changes of disparity (motion-induced discomfort). The lateral distances between the centers of the two meteor objects were set to (a) 5 deg and (b) 15 deg. Figure (c) is a magnified view of a meteor object, and (d) illustrates the presentation of the synthetic simple scenes in our experimental environment. Figure Legend: From: Experimental investigation of discomfort combination: toward visual discomfort prediction for stereoscopic videos J. Electron. Imaging. 2014;23(1): doi: /1.JEI

Viewing condition in our subjective experiments: (a) the stereoscopic display used in our experiment and (b) a top view of the apparatus for the presentation of visual stimuli.

Two-dimensional distribution of the 64 stereoscopic videos along the disparity and in-depth motion directions. The x-axis indicates disparity magnitude (angular disparity, in deg) and the y-axis indicates in-depth motion velocity (change of angular disparity, in deg/s).
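The two axes of this distribution can be computed from viewing geometry. As a minimal sketch (the function names, the default interpupillary distance of 6.3 cm, and the frame-difference velocity estimate are assumptions, not the paper's stated procedure), angular disparity is the difference between the vergence angle at the object's depth and at the screen plane, and in-depth motion velocity is its change per second:

```python
import math

def vergence_angle_deg(distance_cm, ipd_cm=6.3):
    """Vergence angle (deg) for a point at the given viewing distance;
    ipd_cm is the interpupillary distance (6.3 cm is a common assumption)."""
    return math.degrees(2.0 * math.atan(ipd_cm / (2.0 * distance_cm)))

def angular_disparity_deg(object_cm, screen_cm, ipd_cm=6.3):
    """Angular disparity (deg) of a point relative to the screen plane.
    Positive for crossed disparity (point in front of the screen)."""
    return vergence_angle_deg(object_cm, ipd_cm) - vergence_angle_deg(screen_cm, ipd_cm)

def in_depth_velocity_deg_per_s(disparities_deg, fps):
    """In-depth motion velocity (deg/s): per-frame change of angular
    disparity scaled by the frame rate."""
    return [(b - a) * fps for a, b in zip(disparities_deg, disparities_deg[1:])]
```

A point on the screen plane has zero angular disparity by construction, so the x-axis of the distribution is centered on the screen depth.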

Normalized mean opinion scores for each synthetic simple scene. The abscissa represents the content index and the ordinate the normalized comfort scale; the error bars represent the 95% confidence interval.

Realistic and natural stereoscopic videos used in the verification experiment. The first two columns show examples of animated scenes; the remaining columns show examples of natural scenes. These scenes contain diverse living and nonliving entities, such as humans and man-made objects, in a variety of places.

Normalized MOS (nMOS) for each visual stimulus. The abscissa represents the content index and the ordinate the normalized comfort scale; the error bars represent the 95% confidence interval. Note that a higher nMOS indicates greater discomfort.
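For readers reproducing plots of this kind, the per-content statistics can be sketched as follows. This is an illustrative implementation, not the paper's exact procedure: the 1.96 normal-approximation interval and the min-max normalization are assumptions (the authors may have used a t-based interval or a different rescaling):

```python
import math
import statistics

def mos_with_ci(ratings):
    """Mean opinion score for one content item and a ~95% confidence
    half-width (normal approximation: 1.96 * standard error)."""
    mean = statistics.fmean(ratings)
    sem = statistics.stdev(ratings) / math.sqrt(len(ratings))
    return mean, 1.96 * sem

def normalize_scores(means):
    """Min-max rescaling of per-content mean scores to [0, 1] --
    one plausible normalization; the paper's exact scheme may differ."""
    lo, hi = min(means), max(means)
    return [(m - lo) / (hi - lo) for m in means]
```

With the "higher nMOS = more uncomfortable" convention, the raw comfort ratings would be inverted before normalization.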