Inference in the Brain: Statistics Flowing in Redundant Population Codes
Xaq Pitkow, Dora E. Angelaki
Neuron, Volume 94, Issue 5, Pages 943-953 (June 2017). DOI: 10.1016/j.neuron.2017.05.028. Copyright © 2017 Elsevier Inc.

Figure 1. Inference in Graphical Models Embedded in Neural Activity.

(A) A probabilistic graphical model compactly represents direct relationships (statistical interactions or conditional dependencies; see Box 1) among many variables. This can be depicted as a factor graph, with circles showing variables x and squares showing interactions ψ between subsets of variables. Three variable nodes (red, green, and blue) and two interaction factors (brown) are highlighted. Observed variables are shaded gray.

(B) Illustration of our message-passing neural inference model. Neural activity resides in a high-dimensional space r (top). Distinct statistics T_i(r) reflect the strength of certain patterns within the neural activity (red, green, and blue uniformly colored activation patterns). These statistics serve as estimators of parameters θ_i, such as the mode or inverse covariance, for the brain's approximate posterior probabilities over latent variables (colored probability distributions). As neuronal activities evolve over time, the specific patterns that serve as summary statistics are updated as well. We hypothesize that the resultant recoding dynamics obey a canonical message-passing equation that has the same form for all interacting variables. This generic equation updates the posterior parameters at time t according to θ_i(t+1) = M[θ_i(t), θ_{N_i}(t); ψ, G], where M represents a family of message-passing functions, θ_{N_i} represents all parameters in the set of neighbors N_i on the graphical model that interact with θ_i, the factors ψ determine the particular interaction strengths between groups of variables, and G are global hyperparameters that specify one message-passing algorithm from within a given family. The message-passing parameter updates can be expressed equivalently in terms of the neural statistics, since T_i(r,t) ≈ θ_i(t). The statistics flow through the population codes as the neural activity evolves, thereby representing the dynamics of probabilistic inference on a graphical model.
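A minimal sketch may help make the update θ_i(t+1) = M[θ_i(t), θ_{N_i}(t); ψ, G] concrete. The Python below is our own illustration, not the authors' implementation: it picks one familiar member of the message-passing family (loopy sum-product belief propagation) and runs it on a tiny discrete pairwise model, with each node's belief playing the role of θ_i and the pairwise factors playing the role of ψ.

```python
import numpy as np

def loopy_sum_product(unary, pairwise, n_iters=50):
    """Synchronous loopy belief propagation on a pairwise model.

    unary:    dict node -> (k,) nonnegative local evidence
    pairwise: dict (i, j) -> (k_i, k_j) interaction factor psi
    Returns approximate posterior marginals (the beliefs theta_i).
    """
    # Directed messages m[(i, j)]: what node i tells node j.
    msgs = {}
    for (i, j) in pairwise:
        msgs[(i, j)] = np.ones_like(unary[j])
        msgs[(j, i)] = np.ones_like(unary[i])

    def factor(i, j):
        # psi oriented so rows index node i's states, columns node j's.
        return pairwise[(i, j)] if (i, j) in pairwise else pairwise[(j, i)].T

    for _ in range(n_iters):
        new = {}
        for (i, j) in msgs:
            # Product of node i's evidence and all incoming messages
            # except the one coming back from j (the N_i neighborhood).
            incoming = unary[i].copy()
            for (k, l) in msgs:
                if l == i and k != j:
                    incoming = incoming * msgs[(k, l)]
            out = factor(i, j).T @ incoming
            new[(i, j)] = out / out.sum()   # normalize for stability
        msgs = new

    beliefs = {}
    for i in unary:
        b = unary[i].copy()
        for (k, l) in msgs:
            if l == i:
                b = b * msgs[(k, l)]
        beliefs[i] = b / b.sum()
    return beliefs

# Tiny 3-node chain 0 -- 1 -- 2 of binary variables; node 0 is observed.
psi = np.array([[2.0, 1.0], [1.0, 2.0]])       # "agree" coupling
unary = {0: np.array([0.9, 0.1]),              # evidence favors state 0
         1: np.array([1.0, 1.0]),
         2: np.array([1.0, 1.0])}
print(loopy_sum_product(unary, {(0, 1): psi, (1, 2): psi}))
```

In this toy version the global hyperparameters G reduce to the choice of update rule and the number of iterations; the hypothesis in the text is that the brain's recoding dynamics implement some member of this family over the statistics T_i(r).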

Figure 2. Nonlinearity and Redundancy in Neural Computation.

(A) The task of separating yellow from blue dots cannot be accomplished by linear operations, because the task-relevant variable (color) is entangled with a nuisance variable (horizontal position). After embedding the data nonlinearly into a higher-dimensional space, the task-relevant variable becomes linearly separable.

(B) Signals from the sensory periphery have limited information content, illustrated here by a cartoon tiling of an abstract neural response space r = (r_1, r_2). Each square represents the resolution at which the responses can be reliably discriminated, up to a precision determined by noise (η). When sensory signals are embedded into a higher-dimensional space r' = (r'_1, r'_2, r'_3), the noise is transformed the same way as the signals. This produces a redundant population code with high-order correlations.

(C) Such a code is redundant, with many copies of the same information in different neural response dimensions. Consequently, there are many ways to accomplish a given nonlinear transformation of the encoded variable, and thus not all details about neural transformations (red) matter. For this reason, it is advantageous to model the more abstract, representational level on which the nonlinearity affects the information content. Here, a basis of simple nonlinearities (e.g., polynomials, bluish) can be a convenient representation for these latent variables, and the relative weighting of different coarse types of nonlinearities may be more revealing than fine details, as long as the different readouts of fine scale are dominated by information-limiting noise.

(D) A simple example showing that the brain needs nonlinear computation due to nuisance variation. Linear computation can discriminate angles x_1 of Gabor images I of a fixed polarity x_2 = ±1, because the nuisance-conditional mean of the linear function r = w · I is tuned to angle (red). If the polarity is unknown, however, the mean is not tuned (black). Tuning is recovered in the variance (blue), since the two polarities cause large amplitudes (plus and minus) for some angles. Quadratic statistics T(r) = r² therefore encode the latent variable x_1.

(E) A quadratic transformation r² can be produced by the sum of several rectified linear functions [·]_+ with uniformly spaced thresholds T.
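A small numerical sketch can make panels (D) and (E) concrete. The Python below is our own illustration under simplifying assumptions: a 1-D sinusoid stands in for a Gabor image and a single linear filter w for the readout; none of these choices come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Panel (D): linear responses to "Gabors" of unknown polarity.
# I = x2 * sin(pixels - x1) stands in for a Gabor image; x1 is the
# angle of interest and x2 = +/-1 is the nuisance polarity.
pixels = np.linspace(0, 2 * np.pi, 64, endpoint=False)
w = np.sin(pixels)                               # one fixed linear filter

for x1 in np.linspace(0, np.pi, 5):
    x2 = rng.choice([-1.0, 1.0], size=2000)      # polarity varies per trial
    I = x2[:, None] * np.sin(pixels[None, :] - x1)
    r = I @ w                                    # linear responses r = w . I
    # The mean of r is untuned once polarity is marginalized out, but the
    # variance -- the quadratic statistic T(r) = r^2 -- is tuned to x1.
    print(f"x1={x1:4.2f}  mean r={r.mean():+7.3f}  var r={r.var():8.2f}")

# Panel (E): a quadratic arises from summing rectified-linear units
# [v - T]_+ with uniformly spaced thresholds T (a Riemann sum of ramps).
v = np.linspace(-1, 1, 201)                      # input to the nonlinearity
T = np.linspace(-1, 1, 80)                       # uniformly spaced thresholds
dT = T[1] - T[0]
approx = sum(np.maximum(v - t, 0.0) for t in T) * dT
exact = 0.5 * (v + 1.0) ** 2                     # the limiting quadratic in v
print("max |approx - exact| =", np.abs(approx - exact).max())
```

Running it shows the mean response near zero at every angle while the variance varies strongly with x_1, and the ReLU sum matches the quadratic up to a small discretization error that shrinks as more thresholds are used.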

Figure 3. Schematic for Understanding Distributed Codes.

This figure sketches how behavioral models (left) that predict latent variables, interactions, and actions should be compared against neural representations of those same features (right). The comparison first finds a best-fit neural encoding of behaviorally relevant variables, and then checks whether the recoding and decoding based on those variables match between behavioral and neural models. The behavioral model defines the latent variables needed for the task, and we measure the neural encoding by using targeted dimensionality reduction between neural activity and the latent variables (top). We quantify recoding by the transformations of latent variables, comparing the transformations established by the behavioral model to the corresponding changes measured between the neural representations of those variables (middle row). Our favored model predicts that these interactions are described by a message-passing algorithm, which describes one particular class of transformations (Figure 1). Finally, we measure decoding by predicting actions, comparing predictions from the behavioral model and from the corresponding neural representations (bottom row). A match between behavioral and neural models, by measures like log-likelihood or variance explained, provides evidence that these models accurately reflect the encoding, recoding, and decoding processes and thus describe the core elements of neural computation.
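To illustrate the encoding step (top row), here is a hedged sketch in which targeted dimensionality reduction is approximated by ordinary least-squares regression from neural activity to the model's latent variables, scored by variance explained as in the caption. The simulated data and all names are our own assumptions, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_neurons, n_latents = 500, 80, 3

# Simulated experiment: latent variables Z from a behavioral model are
# embedded in population activity R through an unknown linear map plus noise.
Z = rng.standard_normal((n_trials, n_latents))          # model latent variables
W_true = rng.standard_normal((n_latents, n_neurons))    # unknown embedding
R = Z @ W_true + 0.5 * rng.standard_normal((n_trials, n_neurons))

# Encoding: regress each latent variable onto the neural activity,
# Z_hat = R @ B, recovering the neural dimensions that track the latents.
B, *_ = np.linalg.lstsq(R, Z, rcond=None)
Z_hat = R @ B

# Variance explained per latent variable -- the match metric in the caption.
for i in range(n_latents):
    ss_res = np.sum((Z[:, i] - Z_hat[:, i]) ** 2)
    ss_tot = np.sum((Z[:, i] - Z[:, i].mean()) ** 2)
    print(f"latent {i}: R^2 = {1 - ss_res / ss_tot:.3f}")
```

The recoding and decoding comparisons in the middle and bottom rows would then ask whether the transformations of Z_hat over time, and the actions predicted from Z_hat, match those prescribed by the behavioral model.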