The linear/nonlinear model s*f1. The spike-triggered average.

The linear/nonlinear model: s*f1

The spike-triggered average

Dimensionality reduction. More generally, one can conceive of the action of the neuron or neural system as selecting a low-dimensional subset of its inputs: start with a very high-dimensional description (e.g. an image or a time-varying waveform s(t)) and pick out a small set of relevant dimensions. Writing the stimulus as S(t) = (S1, S2, S3, …, Sn), P(response | stimulus) becomes P(response | s1, s2, …, sn).
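
To make the dimensionality-reduction step concrete, here is a minimal Python/NumPy sketch of computing a spike-triggered average from a white-noise stimulus. All names and parameter values (T, D, the toy filter, the spiking rule) are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a white-noise stimulus driving a simple linear/nonlinear spiker.
T, D = 100_000, 50                    # number of time bins; window length in bins
stimulus = rng.standard_normal(T)
true_filter = np.exp(-np.arange(D) / 10.0) * np.sin(np.arange(D) / 3.0)
drive = np.convolve(stimulus, true_filter, mode="full")[:T]
spikes = rng.random(T) < 0.02 * np.clip(drive, 0.0, None)  # per-bin spike probability

# Spike-triggered average: the mean of the D-dimensional stimulus windows
# preceding each spike, i.e. one relevant dimension picked out of D.
spike_times = np.flatnonzero(spikes)
spike_times = spike_times[spike_times >= D]    # keep spikes with a full window
windows = np.stack([stimulus[t - D + 1 : t + 1] for t in spike_times])
sta = windows.mean(axis=0)
```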

Linear filtering. Linear filtering = convolution = projection: the stimulus feature is a vector in a high-dimensional stimulus space.
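
A small numerical check of this identity (a sketch; the filter and stimulus are arbitrary random vectors): the convolution output at time t equals the inner product of the stimulus window ending at t with the time-reversed filter.

```python
import numpy as np

rng = np.random.default_rng(1)
stimulus = rng.standard_normal(1000)
f = rng.standard_normal(20)                   # some linear filter, D = 20 bins

# Convolution view: the filtered signal over all times.
filtered = np.convolve(stimulus, f, mode="full")[: len(stimulus)]

# Projection view: at a single time t, the same number is a dot product
# of the stimulus window ending at t with the time-reversed filter.
t = 500
window = stimulus[t - len(f) + 1 : t + 1]     # last D samples, oldest first
assert np.isclose(filtered[t], window @ f[::-1])
```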

How to find the components of this model, s*f1?

Determining the nonlinear input/output function. The input/output function is P(spike|s), which can be derived from data using Bayes' rule: P(spike|s) = P(s|spike) P(spike) / P(s).

Nonlinear input/output function. Tuning curve: P(spike|s) = P(s|spike) P(spike) / P(s). [Figure: spike-conditional distribution P(s|spike) compared with the prior P(s) along the filtered stimulus axis s.]
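
A sketch of estimating this tuning curve from binned histograms (assumed inputs: `projections`, the filtered stimulus value in every time bin, and a boolean `spikes` array; both names are hypothetical):

```python
import numpy as np

def tuning_curve(projections, spikes, n_bins=30):
    """Estimate P(spike|s) = P(s|spike) P(spike) / P(s) from histograms."""
    edges = np.linspace(projections.min(), projections.max(), n_bins + 1)
    p_s, _ = np.histogram(projections, bins=edges, density=True)        # P(s)
    p_s_spike, _ = np.histogram(projections[spikes], bins=edges, density=True)  # P(s|spike)
    p_spike = spikes.mean()                                             # P(spike) per bin
    # Bin widths cancel in the ratio of the two densities.
    return np.where(p_s > 0, p_s_spike * p_spike / p_s, 0.0), edges
```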

Models with multiple features. Linear filters and a nonlinearity: r(t) = g(f1*s, f2*s, …, fn*s).

Determining linear features from white noise. [Figure: Gaussian prior stimulus distribution, spike-conditional distribution, spike-triggered average, covariance.]

Identifying multiple features. The covariance matrix is ΔC = C_spike − C_prior: the spike-triggered stimulus correlation (computed about the spike-triggered average) minus the stimulus prior covariance. Properties: the number of eigenvalues significantly different from zero is the number of relevant stimulus features, and the corresponding eigenvectors are the relevant features (or span the relevant subspace). Bialek et al., 1988; Brenner et al., 2000; Bialek and de Ruyter, 2005.
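
A minimal sketch of spike-triggered covariance analysis along these lines (the spike times here are random placeholders; in practice they come from the recorded neuron):

```python
import numpy as np

rng = np.random.default_rng(2)
T, D = 100_000, 50
stimulus = rng.standard_normal(T)
spike_times = rng.integers(D, T, size=2_000)   # placeholder spike times

# D-dimensional stimulus windows: all times (prior) vs. spike-conditional.
all_windows = np.lib.stride_tricks.sliding_window_view(stimulus, D)
spike_windows = np.stack([stimulus[t - D + 1 : t + 1] for t in spike_times])

# Spike-triggered covariance: spike-conditional covariance minus the prior.
C_spike = np.cov(spike_windows, rowvar=False)
C_prior = np.cov(all_windows, rowvar=False)    # ~ identity for white noise
dC = C_spike - C_prior

# Eigenvalues far from zero mark the relevant dimensions; their eigenvectors
# are the features (or span the relevant subspace).
eigvals, eigvecs = np.linalg.eigh(dC)
order = np.argsort(np.abs(eigvals))[::-1]
relevant_features = eigvecs[:, order[:2]]      # e.g. keep the two leading modes
```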

Inner and outer products. Inner product: u·v = Σ_i u_i v_i; u·v gives the projection of u onto v, and u·u is the squared length of u. Outer product: u v^T = M, where M_ij = u_i v_j.

Eigenvalues and eigenvectors: M u = λ u.

Singular value decomposition: an m×n matrix factors as M = U Σ V, with U and V unitary and Σ diagonal.

Principal component analysis. With M = U Σ V as above, M M* = (U Σ V)(V* Σ* U*) = U Σ (V V*) Σ* U* = U Σ Σ* U*, so the covariance matrix diagonalizes as C = U Λ U*, with the eigenvalues Λ = Σ Σ* given by the squared singular values.
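
This identity is easy to verify numerically (a sketch with an arbitrary random matrix):

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((5, 8))

# NumPy's SVD returns M = U @ diag(sigma) @ Vh.
U, sigma, Vh = np.linalg.svd(M, full_matrices=False)

# M M* = U Sigma Sigma* U*: the eigenvalues of C = M M* are the squared
# singular values, and its eigenvectors are the columns of U.
C = M @ M.T
assert np.allclose(np.linalg.eigvalsh(C)[::-1], sigma**2)
assert np.allclose(C, U @ np.diag(sigma**2) @ U.T)
```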

Identifying multiple features (recap). As above: the eigenvectors of ΔC = C_spike − C_prior with eigenvalues significantly different from zero are the relevant features, or span the relevant subspace (Bialek et al., 1988; Brenner et al., 2000; Bialek and de Ruyter, 2005).

A toy example: a filter-and-fire model. Let's develop some intuition for how this works: a filter-and-fire, threshold-crossing model with an afterhyperpolarization (AHP) (Keat, Reinagel, Reid and Meister, Predicting every spike, Neuron, 2001). Spiking is controlled by a single filter, f(t), and spikes happen generally on an upward threshold crossing of the filtered stimulus, so we expect 2 relevant features: the filter f(t) and its time derivative f'(t).
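
A minimal sketch of such a model (all parameter values invented; the afterhyperpolarization/reset is omitted for simplicity):

```python
import numpy as np

rng = np.random.default_rng(4)
T, D = 50_000, 60
taxis = np.arange(D)                       # time in bins

# A single smooth biphasic filter f(t) (made-up shape).
f = np.exp(-taxis / 15.0) * np.sin(2 * np.pi * taxis / 40.0)

stimulus = rng.standard_normal(T)
filtered = np.convolve(stimulus, f, mode="full")[:T]

# Spike on upward threshold crossings: both a high value and a positive
# slope are required, which is why covariance analysis recovers two
# features, the filter f(t) and its time derivative f'(t).
theta = 1.5 * filtered.std()
crossings = (filtered[1:] >= theta) & (filtered[:-1] < theta)
spike_times = np.flatnonzero(crossings) + 1
```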

Covariance analysis of a filter-and-fire model. [Figure: the STA alongside the recovered modes, f(t) and f'(t).]

Some real data. Example: rat somatosensory (barrel) cortex; recordings from single units in barrel cortex (Ras Petersen and Mathew Diamond, SISSA).

White noise analysis in barrel cortex. [Figure: spike-triggered average; normalised velocity vs. pre-spike time (ms).]

White noise analysis in barrel cortex. Is the neuron simply not very responsive to a white noise stimulus?

Covariance matrices from barrel cortical neurons. [Figure: prior, spike-triggered, and difference covariance matrices.]

Eigenspectrum from barrel cortical neurons. [Figure: eigenspectrum and leading modes; velocity vs. pre-spike time (ms).]

Input/output relations from barrel cortical neurons, with respect to the first two filters: each alone, and in quadrature (as a function of the root sum of squares of the two projections).

Less significant eigenmodes from barrel cortical neurons. How about the other modes? [Figure: next pair with positive eigenvalues, and the pair with negative eigenvalues; velocity (arbitrary units) vs. pre-spike time (ms).]

Negative eigenmode pair. Input/output relations for the negative pair: firing rate decreases with increasing projection, so these are suppressive modes.

Salamander retinal ganglion cells perform a variety of computations (Michael Berry).

Almost filter-and-fire-like

Not a threshold-crossing neuron

Complex-cell-like

Bimodal: two separate features are encoded as either/or

Complex cells in V1 (Rust et al., Neuron 2005).

Basic types of computation:
Integrators: H1, some single cortical neurons.
Differentiators: retina, simple cells, the Hodgkin-Huxley neuron, auditory neurons.
Frequency-power detectors: V1 complex cells, somatosensory, auditory, retina.

When have you done a good job? When the tuning curve over your variable is interesting. How do we quantify "interesting"?

When have you done a good job? Tuning curve: P(spike|s) = P(s|spike) P(spike) / P(s). Goodness measure: D_KL(P(s|spike) || P(s)). [Figure: boring case, spikes unrelated to stimulus, so P(s|spike) ≈ P(s); interesting case, spikes are selective, so P(s|spike) differs markedly from P(s).]
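
A sketch of this goodness measure on binned distributions (assumes P(s|spike) and P(s) have been histogrammed over the same bins, as in the tuning-curve sketch above):

```python
import numpy as np

def dkl_bits(p, q, eps=1e-12):
    """D_KL(p || q) in bits for two binned distributions over the same bins."""
    p = np.asarray(p, dtype=float) + eps   # small floor avoids log(0)
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log2(p / q)))

# Large D_KL(P(s|spike) || P(s)): spikes are selective for the feature.
# Near zero: the filter captures nothing about what drives spiking.
```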

Maximally informative dimensions. Choose the filter f to maximize D_KL between the spike-conditional and prior distributions of the projection s = I*f; this is equivalent to maximizing the mutual information between stimulus and spike (Sharpee, Rust and Bialek, Neural Computation, 2004). The method does not depend on white-noise inputs, so it is likely to be most appropriate for deriving models from natural stimuli. [Figure: P(s|spike) vs. P(s).]

Finding relevant features:
1. A single best filter, determined by the first moment (the spike-triggered average).
2. A family of filters, derived using the second moment (spike-triggered covariance).
3. Use the entire distribution: information-theoretic methods, which remove the requirement for Gaussian stimuli.

Less basic coding models. Linear filters and a nonlinearity: r(t) = g(f1*s, f2*s, …, fn*s). …shortcomings?

Less basic coding models. GLM: r(t) = g(f1*s + f2*r) (Pillow et al., Nature 2008; Truccolo et al., J. Neurophysiol. 2005).

Less basic coding models. GLM: r(t) = g(f*s + h*r). …shortcomings?

Less basic coding models. GLM: r(t) = g(f1*s + h1*r1 + h2*r2 + …). …shortcomings?
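
A sketch of how such a GLM generates spikes, with a stimulus filter and a suppressive spike-history filter feeding back the neuron's own output (all shapes, names, and parameter values invented; coupling filters from other neurons would enter the sum the same way):

```python
import numpy as np

rng = np.random.default_rng(5)
T, Df, Dh = 10_000, 40, 20
dt = 0.001                                            # 1 ms bins

f = rng.standard_normal(Df) * np.exp(-np.arange(Df) / 10.0)  # stimulus filter
h = -2.0 * np.exp(-np.arange(1, Dh + 1) / 5.0)               # suppressive history filter
stimulus = rng.standard_normal(T)
drive_stim = np.convolve(stimulus, f, mode="full")[:T]

# r(t) = g(f*s + h*r) with an exponential nonlinearity g.
spikes = np.zeros(T)
for t in range(T):
    recent = spikes[max(0, t - Dh):t][::-1]           # most recent spike first
    rate = np.exp(drive_stim[t] + recent @ h[:recent.size])
    spikes[t] = rng.poisson(rate * dt)                # conditionally Poisson count
```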

Poisson spiking

Shadlen and Newsome, 1998

Poisson spiking. Properties:
Count distribution: P_T[k] = (rT)^k exp(-rT) / k!
Mean: <k> = rT
Variance: Var(k) = rT
Interval distribution: P(T) = r exp(-rT)
Fano factor: F = 1
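
A quick simulation checking these properties (the rate, window, and trial count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(6)
r, T, n_trials = 20.0, 1.0, 100_000     # rate (spikes/s), window (s), trials

counts = rng.poisson(r * T, size=n_trials)
print(counts.mean(), counts.var())      # both ~ rT = 20
print(counts.var() / counts.mean())     # Fano factor ~ 1

isis = rng.exponential(1.0 / r, size=n_trials)
print(isis.mean())                      # exponential intervals, mean 1/r = 0.05 s
```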

Poisson spiking in the brain. Fano factor: data are fit to variance = A × mean^B. [Figure: fitted values of A and B for neurons in area MT.]

Poisson spiking in the brain. How good is the Poisson model? ISI analysis. [Figure: ISI distribution from an area MT neuron compared with the ISI distribution generated from a Poisson model with a Gaussian refractory period.]
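
One plausible reading of "a Poisson model with a Gaussian refractory period" is an exponential interval plus a Gaussian-distributed dead time; a sketch under that assumption (parameter values invented):

```python
import numpy as np

rng = np.random.default_rng(7)
r, n = 50.0, 100_000                    # rate (spikes/s), number of intervals

isi_poisson = rng.exponential(1.0 / r, size=n)
# Gaussian refractory period (~3 ms), clipped so dead time is never negative.
refractory = np.clip(rng.normal(0.003, 0.001, size=n), 0.0, None)
isi_refractory = isi_poisson + refractory   # suppresses the shortest intervals
```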

Hodgkin-Huxley neuron driven by noise