Presentation transcript:

View-Tolerant Face Recognition and Hebbian Learning Imply Mirror-Symmetric Neural Tuning to Head Orientation
Joel Z. Leibo, Qianli Liao, Fabio Anselmi, Winrich A. Freiwald, Tomaso Poggio
Current Biology, Volume 27, Issue 1, Pages 62-67 (January 2017). DOI: 10.1016/j.cub.2016.10.015. Copyright © 2017 Elsevier Ltd.

Figure 1 Schematic of the Macaque Face-Patch System. (A) Side view of computer-inflated macaque cortex with six areas of face-selective cortex (red) in the temporal lobe, together with their connectivity graph (orange) [13]. Face areas are named for their anatomical location: AF, anterior fundus; AL, anterior lateral; AM, anterior medial; MF, middle fundus; ML, middle lateral; PL, posterior lateral. They have been found to be directly connected to each other, forming a face-processing network [12]. Recordings from three face areas, ML, AL, and AM, during presentations of faces at different head orientations revealed qualitatively different tuning properties, schematized in (B). (B) Prototypical ML neurons are tuned to head orientation, e.g., as shown, a left profile. A prototypical neuron in AL, when tuned to one profile view, is tuned to the mirror-symmetric profile view as well. A typical neuron in AM, in contrast, is only weakly tuned to head orientation. Because of this increasing invariance to in-depth rotation, increasing invariance to size and position (not shown), and increasing average response latencies from ML to AL to AM, it is thought that the main AL properties, including mirror symmetry, have to be understood as transformations of ML representations, and the main AM properties as transformations of AL representations [7].

Figure 2 Illustration of the Model. Inputs are first encoded in a V1-like model: its first layer (simple cells) corresponds to the S1 layer of the HMAX model, and its second layer (complex cells) corresponds to the C1 layer of HMAX [14]. In the view-based model, the V1-like encoding $x$ is then projected onto stored frames $g_i w^k$ at orientation $i$, taken from videos of transforming faces $k = 1, \ldots, K$. Finally, the last layer is computed as $\mu_k = \sum_i \eta(\langle x, g_i w^k \rangle)$; that is, the $k$th element of the output is obtained by summing the responses of all cells tuned to views of the $k$th template face. In the PCA model, the V1-like encoding is instead projected onto templates $w_i^k$ describing the $i$th principal component (PC) of the $k$th template face's transformation video. The pooling in the final layer is then over all of the PCs derived from the same identity; that is, it is computed as $\mu_k = \sum_i \eta(\langle x, w_i^k \rangle)$. In both the view-based and PCA models, units in the output layer pool over all units in the previous layer corresponding to projections onto the same template individual's views (view-based model) or PCs (PCA model).
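To make the two pooling schemes concrete, the following is a minimal NumPy sketch of the computation described above. It is an illustration under stated assumptions, not the authors' code: templates are stored as plain arrays, the nonlinearity η is taken to be squaring (as in Figure 3), and all function and variable names are invented for this example.

import numpy as np

def view_based_pooling(x, template_views, eta=np.square):
    # View-based model: project the V1-like encoding x (shape (d,)) onto every
    # stored view g_i w^k of each template face, apply the nonlinearity eta,
    # and sum over views i to obtain one output mu_k per template identity k.
    # template_views has shape (K, n_views, d).
    projections = np.einsum('d,kid->ki', x, template_views)
    return eta(projections).sum(axis=1)

def pca_pooling(x, template_pcs, eta=np.square):
    # PCA model: identical pooling, but the stored templates are the principal
    # components w_i^k of each template face's transformation video;
    # template_pcs has shape (K, r, d).
    projections = np.einsum('d,krd->kr', x, template_pcs)
    return eta(projections).sum(axis=1)

The only difference between the two models is whether the stored templates are raw views or principal components of the transformation video; the pooling step itself is identical.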

Figure 3 Model Performance on the Task of Same-Different Face Pair Matching. (A1–A3) The structure of the models tested in (B). (A1) The V1-like model encodes an input image in the C1 layer of HMAX, which models complex cells in V1 [14]. (A2) The view-based model encodes an input image as $\mu_k(x) = \sum_{i=1}^{|G|} \langle x, g_i w^k \rangle^2$, where $x$ is the V1-like encoding. (A3) The top-$r$ PCs model encodes an input image as $\mu_k(x) = \sum_{i=1}^{r} \langle x, w_i^k \rangle^2$, where $x$ is the V1-like encoding. (B1 and B2) The test of depth-rotation invariance required discriminating unfamiliar faces: the template faces did not appear in the test set, so this is a test of depth-rotation invariance from a single example view. (B1) In each trial, two face images appear, and the task is to indicate whether they depict the same or different individuals. The two images may appear at different orientations from each other. To classify an image pair (a, b) as depicting the same or a different individual, the cosine similarity of the two representations was compared to a threshold. The threshold was varied systematically in order to compute the area under the receiver operating characteristic (ROC) curve (AUC). (B2) In each test, 600 pairs of face images were sampled from the set of faces with orientations in the current testing interval. Three hundred pairs depicted the same individual, and 300 pairs depicted different individuals. Testing intervals were ordered by inclusion and were always symmetric about 0°, the set of frontal faces; i.e., they were [−x, x] for x = 5°, …, 95°. The radius x of the testing interval, dubbed the invariance radius, is plotted on the abscissa. AUC declines as the range of testing orientations is widened. As long as enough PCs are used, the proposed model performs on par with the view-based model; it even exceeds the view-based model's performance if the complete set of PCs is used. Both models outperform the baseline HMAX C1 representation. The error bars were computed over repetitions of the experiment with different template and test sets; see the Supplemental Experimental Procedures. (C1 and C2) Mirror-symmetric orientation tuning of the raw-pixel-based model. $\langle x_\theta, w_i \rangle^2$ is shown as a function of the orientation $\theta$ of $x_\theta$. Here, each curve represents a different PC. Below, the PCs $w_i^k$ are visualized as images; they are either symmetric (even) or antisymmetric (odd) about the vertical midline. See also Figure S1.
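The classification protocol in (B1) amounts to thresholding a cosine similarity and sweeping the threshold to obtain the area under the ROC curve. Below is a minimal sketch of that evaluation, assuming the model representations are plain NumPy vectors; the rank-based AUC computed here is mathematically equivalent to the threshold sweep described above, and the function names are invented for this example.

import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two model representations.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def pair_matching_auc(same_pairs, diff_pairs):
    # same_pairs / diff_pairs: lists of (mu_a, mu_b) representation pairs
    # depicting the same or different individuals, respectively.
    same_scores = np.array([cosine_similarity(a, b) for a, b in same_pairs])
    diff_scores = np.array([cosine_similarity(a, b) for a, b in diff_pairs])
    # The AUC equals the probability that a randomly chosen "same" pair scores
    # higher than a randomly chosen "different" pair (ties count as 0.5),
    # which is exactly what sweeping the decision threshold measures.
    greater = (same_scores[:, None] > diff_scores[None, :]).mean()
    ties = (same_scores[:, None] == diff_scores[None, :]).mean()
    return greater + 0.5 * ties

With 300 same and 300 different pairs per testing interval, as in (B2), this yields one AUC value per invariance radius.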

Figure 4 Population Representations of Face View and Identity. (A) Model population similarity matrices corresponding to the simulation of Figure 3B, obtained by computing Pearson's linear correlation coefficient between each pair of test samples. Left: the similarity matrix for the V1-like representation, the C1 layer of HMAX [14]. Middle: the similarity matrix for the penultimate layer of the PCA model of Figure 3B; it models AL. Right: the similarity matrix for the final layer of the PCA model of Figure 3B; it models AM. (B) Compare the results in (A) to the corresponding neuronal similarity matrices from [7]. See also Figure S1.
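The similarity matrices in (A) are Pearson correlations between population response vectors. As a one-function sketch, assuming the responses of a given model layer are collected into a NumPy array with one row per test image (the function name is invented for this example):

import numpy as np

def population_similarity_matrix(responses):
    # responses: array of shape (n_images, n_units), one row per test image
    # and one column per model unit of the layer being analyzed (C1 layer,
    # penultimate layer, or output layer).
    # np.corrcoef treats rows as variables, so this returns the
    # (n_images x n_images) matrix of Pearson correlation coefficients
    # between the population responses to each pair of test images.
    return np.corrcoef(responses)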