Reading population codes: a neural implementation of ideal observers. Sophie Deneve, Peter Latham, and Alexandre Pouget.

Presentation transcript:

Reading population codes: a neural implementation of ideal observers. Sophie Deneve, Peter Latham, and Alexandre Pouget

Encoding and decoding: stimulus (s) → neurons encode → response (r) → decode → estimate of s

Tuning curves. Sensory and motor information is often encoded in "tuning curves": each neuron gives a characteristic bell-shaped response as a function of the stimulus.

Difficulty of decoding. Noisy neurons give variable responses to the same stimulus, so the brain must estimate the encoded variables from the "noisy hill" of a population response.

Population vector estimator. Assign each neuron a vector whose length is proportional to its activity and whose direction is the neuron's preferred direction, then sum the vectors.

Population vector estimator. Vector summation is equivalent to fitting a cosine function to the population activity; the peak of the cosine is the estimate of the direction.
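
As a concrete illustration, here is a minimal numpy sketch of the population vector estimator; the neuron count, tuning shape, and Poisson noise are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the population vector estimator (illustrative, not the
# authors' code). Each neuron i has a preferred direction pref[i] and an
# activity r[i]; tuning parameters and Poisson noise are assumptions.
import numpy as np

def population_vector(r, pref):
    """Sum unit vectors along preferred directions, weighted by activity."""
    x = np.sum(r * np.cos(pref))
    y = np.sum(r * np.sin(pref))
    return np.arctan2(y, x)  # direction of the summed vector = estimate

# Example: 64 neurons, true direction 90 deg, bell-shaped tuning + Poisson noise
pref = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
true_dir = np.pi / 2.0
rates = 50.0 * np.exp(3.0 * (np.cos(pref - true_dir) - 1.0))
spikes = np.random.poisson(rates)
print(np.degrees(population_vector(spikes, pref)))  # close to 90
```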

How good is an estimator? Compare the variance of the estimate over repeated presentations of the same stimulus to a theoretical lower bound; for a given amount of independent noise, the maximum likelihood estimate achieves this lower bound on the variance.
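
One way to make this concrete is to simulate many repeated presentations of the same stimulus, decode each trial, and measure the variance of the estimates across trials. The sketch below does this for the population vector under independent ("flat") Gaussian noise; the setup and parameters are illustrative, not taken from the paper.

```python
# Minimal sketch of evaluating an estimator: present the same stimulus many
# times, decode each trial, and measure the variance of the estimates.
import numpy as np

pref = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)

def tuning(theta):
    return 50.0 * np.exp(3.0 * (np.cos(theta - pref) - 1.0))

def population_vector(r):
    return np.arctan2(np.sum(r * np.sin(pref)), np.sum(r * np.cos(pref)))

theta0, noise_sd, n_trials = np.pi / 2.0, 2.0, 2000
estimates = np.array([
    population_vector(tuning(theta0) + np.random.normal(0.0, noise_sd, pref.size))
    for _ in range(n_trials)
])
print("estimator variance across trials:", np.var(estimates))
# This empirical variance can be compared with the Cramer-Rao bound (later slide).
```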

Encoding and decoding: stimulus (s) → neurons encode → response (r) → decode → estimate of s

Maximum likelihood decoding. Encoding maps the stimulus s to a noisy response r, described by P(r|s); decoding inverts this mapping. The maximum likelihood estimator selects the stimulus that makes the observed response most likely: s_ML = argmax_s P(r|s).
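
A minimal sketch of ML decoding by grid search, assuming independent Gaussian noise of fixed variance around known tuning curves (in which case ML reduces to a least-squares fit of the tuning curves); the tuning function and parameters are illustrative, not the paper's.

```python
# Minimal sketch of maximum likelihood decoding by grid search.
import numpy as np

pref = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)

def tuning(theta):
    """Circular normal tuning curves evaluated at stimulus theta."""
    return 50.0 * np.exp(3.0 * (np.cos(theta - pref) - 1.0))

def ml_decode(r, grid=np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)):
    # With flat Gaussian noise, log-likelihood up to constants is
    # -sum_i (r_i - f_i(theta))^2, so ML is a least-squares fit.
    ll = [-np.sum((r - tuning(theta)) ** 2) for theta in grid]
    return grid[int(np.argmax(ll))]

true_theta = np.pi / 2.0
r = tuning(true_theta) + np.random.normal(0.0, 2.0, pref.size)  # flat noise
print(np.degrees(ml_decode(r)))  # close to 90
```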

Goal: a biological ML estimator. A recurrent neural network of broadly tuned units can achieve the ML estimate when the noise is independent of firing rate, and can approximate the ML estimate when the noise is activity-dependent.

General architecture. Units are fully connected and are arranged in frequency columns and orientation rows; the weights implement a 2-D Gaussian filter over preferred orientation (P_Θ) and preferred spatial frequency (P_λ).
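
Below is a minimal sketch of such a separable 2-D Gaussian weight profile over the grid of units; the grid size, widths, and the use of wrap-around distance on both axes are illustrative assumptions, not the paper's exact values.

```python
# Minimal sketch of a separable 2-D Gaussian weight profile over the grid of
# units (orientation rows x frequency columns).
import numpy as np

n_theta, n_lam = 20, 20            # units per orientation row / frequency column
sigma_theta, sigma_lam = 2.0, 2.0  # spatial extent of the weights (illustrative)

def circular_distance(n):
    d = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
    return np.minimum(d, n - d)    # wrap-around distance on the grid

w_theta = np.exp(-circular_distance(n_theta) ** 2 / (2.0 * sigma_theta ** 2))
w_lam = np.exp(-circular_distance(n_lam) ** 2 / (2.0 * sigma_lam ** 2))
# Weight from unit (i, j) to unit (k, l) is the separable product:
# W[(i, j), (k, l)] = w_theta[i, k] * w_lam[j, l]
```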

Input tuning curves. The input tuning curves are circular normal functions with some spontaneous (baseline) activity, and Gaussian noise is added to the inputs.
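
A 1-D sketch of this input layer (the paper's network is 2-D over orientation and frequency, but one dimension suffices to show the form): circular normal tuning plus spontaneous activity, with additive Gaussian noise. Gain, concentration, baseline, and noise level are illustrative assumptions.

```python
# Minimal sketch of the input layer: circular normal tuning with a baseline,
# plus additive Gaussian noise.
import numpy as np

def input_activity(theta, pref, gain=50.0, kappa=3.0, baseline=5.0, noise_sd=2.0):
    f = gain * np.exp(kappa * (np.cos(theta - pref) - 1.0)) + baseline
    return f + np.random.normal(0.0, noise_sd, size=pref.shape)

pref = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
a = input_activity(np.pi / 2.0, pref)   # one noisy "hill" centred near 90 deg
```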

Unit updates and normalization. On each iteration the unit activities are convolved with the filter (local excitation), and the responses are then normalized divisively (global inhibition).
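
A minimal sketch of one such iteration, using scipy's Gaussian filter for the local excitation followed by a global divisive step. The squaring nonlinearity, the wrap-around boundaries, and the constants are assumptions drawn from this family of models, not necessarily the paper's exact update rule.

```python
# Minimal sketch of one network iteration: local excitation by convolution
# with a 2-D Gaussian kernel, then global divisive inhibition.
import numpy as np
from scipy.ndimage import gaussian_filter

def iterate(activity, sigma=(2.0, 2.0), s_const=0.1, mu=0.002):
    u = gaussian_filter(activity, sigma=sigma, mode="wrap")  # local excitation
    u = u ** 2                                               # assumed squaring step
    return u / (s_const + mu * u.sum())                      # divisive normalization

xs = np.arange(20)
hill = np.exp(-((xs[:, None] - 10) ** 2 + (xs[None, :] - 10) ** 2) / 8.0)
a = hill + 0.3 * np.random.rand(20, 20)   # noisy hill of input activity
for _ in range(20):
    a = iterate(a)                        # iterates toward a smooth, stereotyped hill
```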

Results. The network converges rapidly, with the convergence strongly dependent on contrast.

Results. The contrast response curve is sigmoidal after 3 iterations and becomes a step after 20 iterations, resembling the response of an actual neuron.

Noise effects. The width of the input tuning curve is held constant while the width of the output tuning curve is varied by adjusting the spatial extent of the weights; performance is compared under flat noise and under proportional (activity-dependent) noise.

Analysis. Q1: Why does the optimal width depend on the noise? Q2: Why does the network perform better for flat noise?

Analysis. The smallest achievable variance is the inverse of the Fisher information, σ²_min = 1 / I(Θ). For Gaussian noise, I(Θ) = f'(Θ)^T R(Θ)^-1 f'(Θ) + (1/2) Tr[R'(Θ) R(Θ)^-1 R'(Θ) R(Θ)^-1], where R^-1 is the inverse of the covariance matrix of the noise and f'(Θ) is the vector of derivatives of the input tuning curves with respect to Θ. The trace term is 0 when R is independent of Θ (flat noise).
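
For the flat-noise case the bound reduces to 1 / (f'(Θ)^T R^-1 f'(Θ)), which is easy to evaluate numerically. The sketch below uses an illustrative 1-D population with a diagonal covariance; all parameters are assumptions, not the paper's values.

```python
# Minimal sketch of the Cramer-Rao bound for flat (stimulus-independent)
# Gaussian noise: variance >= 1 / (f'(theta)^T R^-1 f'(theta)).
import numpy as np

pref = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
gain, kappa, noise_sd = 50.0, 3.0, 2.0

def f(theta):
    return gain * np.exp(kappa * (np.cos(theta - pref) - 1.0))

theta0, eps = np.pi / 2.0, 1e-4
f_prime = (f(theta0 + eps) - f(theta0 - eps)) / (2.0 * eps)  # df/dtheta per neuron
R_inv = np.eye(pref.size) / noise_sd ** 2                    # flat, independent noise
fisher = f_prime @ R_inv @ f_prime
print("Cramer-Rao bound on variance:", 1.0 / fisher)
```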

Summary. The network gives a good approximation of the optimal (maximum likelihood) estimator; the type of noise (flat vs. proportional) affects both the variance and the optimal tuning width.