The Human Face as a Dynamic Tool for Social Communication
Rachael E. Jack, Oliver G.B. Garrod, Philippe G. Schyns 
Presentation transcript:

From: Space-by-time manifold representation of dynamic facial expressions for emotion categorization. Journal of Vision. 2016;16(8):14. doi:10.1167/16.8.14

Figure Legend: Illustrative application of our method to simulated AU data.

(A) Comparison of dimensionality-reduction methods. Here we show how three dimensionality-reduction methods (NMF, color-coded in orange; PCA, color-coded in cyan; and ICA, color-coded in magenta) recode a simulated data set comprising the activations of two AUs: Upper Lid Raiser (AU5) on the y-axis and Jaw Drop (AU26) on the x-axis. Data are generated by the linear combination of two basis functions, w1 and w2, that represent two functional dimensions. The first dimension consists of a low activation of Upper Lid Raiser and a high activation of Jaw Drop; the second, vice versa (see bars). Gray spheres reflect individual trials (600 in total), each represented by different coefficients (c1 and c2) for the basis functions. On half of the trials, we impose correlations (ρ = 0.7) between c1 and c2 to represent AU synergies. NMF recovers the correct basis functions (see orange axes). In contrast, neither PCA (cyan axes) nor ICA (magenta axes) recovers the original basis, because of their underlying assumptions: PCA starts from the dimension that explains the most variance and then adds one orthogonal dimension, whereas ICA looks for independent dimensions. For both PCA and ICA, the second dimension comprises negative values for one of the two AUs, which is incompatible with their functional role as representations of AU activations.

(B) Emotion discrimination in the NMF space. In the space defined by w1 and w2, we apply LDA to discriminate trials categorized as fear (blue spheres) from those categorized as surprise (red spheres). LDA determines the categorization boundary (green line) that reliably discriminates the two emotions (93% correct discrimination).

Copyright © 2017 The Association for Research in Vision and Ophthalmology. All rights reserved.
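
To make the analysis in the legend concrete, the sketch below reruns a comparable simulation in Python with NumPy and scikit-learn. It is not the authors' code: the basis values, coefficient distributions, random seed, and the rule used to create stand-in fear/surprise labels are illustrative assumptions; only the overall structure follows the legend (two non-negative basis functions over AU5 and AU26, 600 trials, correlated coefficients on half of them, an NMF/PCA/ICA comparison, and LDA on the recovered coefficients).

import numpy as np
from sklearn.decomposition import NMF, PCA, FastICA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Two non-negative basis functions over the two AUs (columns: AU5, AU26).
# w1 = low Upper Lid Raiser, high Jaw Drop; w2 = the reverse.
W_true = np.array([[0.2, 1.0],   # w1
                   [1.0, 0.2]])  # w2

# 600 trials; on half of them the coefficients c1, c2 are correlated
# (rho = 0.7) to mimic AU synergies. Magnitudes are illustrative choices.
n_trials = 600
cov = np.array([[1.0, 0.7], [0.7, 1.0]])
c_corr = np.abs(rng.multivariate_normal([1.0, 1.0], cov, n_trials // 2))
c_ind = np.abs(rng.normal(1.0, 1.0, size=(n_trials // 2, 2)))
C = np.vstack([c_corr, c_ind])   # trial-by-dimension coefficients (non-negative)
X = C @ W_true                   # simulated AU activations, shape (600, 2)

# (A) Compare the three dimensionality-reduction methods on the same data.
nmf = NMF(n_components=2, init="nndsvda", max_iter=2000).fit(X)
pca = PCA(n_components=2).fit(X)
ica = FastICA(n_components=2, random_state=0).fit(X)
print("NMF components (non-negative):\n", nmf.components_)
print("PCA components:\n", pca.components_)  # orthogonality forces negative weights
print("ICA mixing (transposed):\n", ica.mixing_.T)

# (B) LDA on the NMF coefficients. The paper's labels come from human
# categorization; here a simple rule on the generating coefficients serves
# as a hypothetical stand-in so the example is self-contained.
coeffs = nmf.transform(X)
labels = (C[:, 0] > C[:, 1]).astype(int)
lda = LinearDiscriminantAnalysis().fit(coeffs, labels)
print("LDA discrimination accuracy (stand-in labels):", lda.score(coeffs, labels))

Because NMF constrains both the basis and the coefficients to be non-negative, its components can be read directly as AU activation patterns, which is why the legend treats the negative loadings returned by PCA and ICA as functionally uninterpretable.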