Presentation transcript:

Functional models of neural computation
[Diagram: stimulus X(t) → spike-triggering stimulus features x1, x2, x3 → multidimensional decision function f1, f2, f3 → spike output Y(t)]

Given a set of data, we want to find the best reduced-dimensional description. The data are the set of stimuli that lead up to a spike, S_n(t), where t = 1, 2, 3, …, D.
Variance of a random variable x: Var(x) = ⟨(x − ⟨x⟩)²⟩.
Covariance of the stimulus: C(t, t′) = ⟨(S(t) − ⟨S(t)⟩)(S(t′) − ⟨S(t′)⟩)⟩.
Compute the difference matrix ΔC = C_spike − C_prior between the covariance matrix of the spike-triggered stimuli and that of all stimuli, then find its eigensystem to define the dimensions of interest.
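As a concrete illustration, here is a minimal numpy sketch of the difference-matrix computation; the toy stimulus, spike times, and window length D below are invented for the demo, not part of the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)
stim = rng.standard_normal(100_000)                          # toy white-noise stimulus
spike_times = np.flatnonzero(rng.random(stim.size) < 0.01)   # toy spike indices
D = 30                                                       # pre-spike history (bins)

# Stimulus history preceding each spike, and all stimulus windows.
windows = np.array([stim[t - D:t] for t in spike_times if t >= D])
all_windows = np.array([stim[t - D:t] for t in range(D, stim.size)])

c_spike = np.cov(windows, rowvar=False)      # covariance of spike-triggered stimuli
c_prior = np.cov(all_windows, rowvar=False)  # covariance of all (prior) stimuli
dC = c_spike - c_prior                       # difference matrix to diagonalize
```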

Eigensystem: any symmetric matrix M can be decomposed as M = U V Uᵀ, where U is an orthogonal matrix and V is a diagonal matrix, V = diag(λ1, λ2, …, λD). Each eigenvalue λi has a corresponding eigenvector, the i-th of the orthogonal columns of U. The magnitude of the eigenvalue classifies its eigenvector as belonging to either the
column space = orthogonal basis for the relevant dimensions, or the
null space = orthogonal basis for the irrelevant dimensions.
We will project the stimuli into the column space.
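Continuing the sketch above, the eigendecomposition and the projection into the column space might look like this; keeping exactly two modes is an arbitrary choice for illustration:

```python
# dC is symmetric, so np.linalg.eigh is the appropriate decomposition.
evals, U = np.linalg.eigh(dC)
order = np.argsort(np.abs(evals))[::-1]   # rank modes by |eigenvalue|
relevant = U[:, order[:2]]                # e.g. keep the two largest modes
projections = windows @ relevant          # each spike-triggered stimulus in 2-D
```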

This method finds an orthogonal basis in which to describe the data and ranks each "axis" according to its importance in capturing the data. It is closely related to principal component analysis, and yields a functional basis set.

Two large eigenmodes: f(t) and f’(t)

Example: an auditory neuron is responsible for detecting sound at a certain frequency ω; phase is not important. The appropriate "directions" describing this neuron's relevant feature space are cos(ωt) and sin(ωt). These describe any signal at that frequency, independent of phase:
cos(A + B) = cos(A) cos(B) − sin(A) sin(B)
⇒ cos(ωt + φ) = a cos(ωt) + b sin(ωt), with a = cos(φ), b = −sin(φ).
Note that a² + b² = 1; all such stimuli lie on a ring.
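A quick numerical check of this claim, with an arbitrary frequency chosen for the demo: whatever the phase φ, the squared projections onto cos(ωt) and sin(ωt) sum to 1:

```python
import numpy as np

w = 2 * np.pi * 5.0                        # arbitrary: 5 cycles per unit time
t = np.linspace(0, 1, 1000, endpoint=False)
basis_c, basis_s = np.cos(w * t), np.sin(w * t)

for phi in (0.0, 0.7, 2.1):
    sig = np.cos(w * t + phi)
    a = sig @ basis_c / (basis_c @ basis_c)   # projection onto cos(wt)
    b = sig @ basis_s / (basis_s @ basis_s)   # projection onto sin(wt)
    print(f"phi={phi:.1f}: a^2 + b^2 = {a**2 + b**2:.4f}")   # ~1 for any phase
```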

[Figure: eigenmodes plotted against pre-spike time (ms); vertical axis, velocity.] Modes look like local frequency detectors, in conjugate pairs (sine and cosine)… and they sum in quadrature, i.e. the decision function depends only on x² + y².

Basic types of computation:
– integrators (H1)
– differentiators (retina, simple cells, single neurons)
– frequency-power detectors (complex cells, somatosensory, auditory, retina)

Functional models of neural computation
[Diagram, repeated from the opening slide: stimulus X(t) → spike-triggering stimulus features x1, x2, x3 → multidimensional decision function f1, f2, f3 → spike output Y(t)]

Spike statistics
A stochastic process that generates a sequence of events: a point process.
If the probability of an event at time t depends only on the preceding event: a renewal process.
If all events are statistically independent: a Poisson process.
Homogeneous Poisson: r(t) = r, independent of time; the probability of seeing n spikes depends only on how long you watch:
P_T[n] = (rT)^n exp(−rT) / n!
Exercise: show that the mean of this distribution is rT, and that the variance is also rT.
The Fano factor = variance/mean = 1 for Poisson processes.
The CV = coefficient of variation = STD/mean = 1 for Poisson.
Interspike interval distribution: P(T) = r exp(−rT).
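A short simulation sketch confirming that the Fano factor and CV are both ≈ 1 for a homogeneous Poisson process; the rate, window, and trial count are arbitrary choices for the demo:

```python
import numpy as np

rng = np.random.default_rng(1)
r, T, n_trials = 20.0, 1.0, 5000          # rate (Hz), window (s), number of trials

counts = rng.poisson(r * T, size=n_trials)                 # spike counts per window
print("mean:", counts.mean(), "variance:", counts.var())   # both ~ rT = 20
print("Fano factor:", counts.var() / counts.mean())        # ~1

isi = rng.exponential(1.0 / r, size=100_000)   # ISIs drawn from P(T) = r exp(-rT)
print("CV:", isi.std() / isi.mean())           # ~1
```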

The Poisson model (homogeneous). [Figure: probability of n spikes in time T as a function of rT; the Poisson distribution approaches a Gaussian for large rT (here rT = 10).]

How good is the Poisson model? Fano factor. [Figure: spike-count variance vs. mean for area MT data, fit to variance = A · mean^B, with the fitted values of A and B shown.]

How good is the Poisson model? ISI analysis. [Figure: ISI distribution from an area MT neuron, alongside an ISI distribution generated from a Poisson model with a Gaussian refractory period.]

How good is the Poisson model? CV analysis. [Figure: coefficients of variation for a set of V1 and MT neurons, compared with the predictions of a Poisson model and a Poisson model with a refractory period.]

[Figure: interval distribution of a Hodgkin-Huxley neuron driven by noise.]

What is the language of single cells? What are the elementary symbols of the code? Most typically, we think of the response as a firing rate, r(t), or a modulated spiking probability, P(spike | s(t)). What we measure are spike times. Implicit here is a Poisson model, in which spikes are generated randomly with local rate r(t). However, most spike trains are not Poisson (refractoriness, internal dynamics), and fine temporal structure might be meaningful. ⇒ Consider spike patterns or "words", e.g. symbols consisting of multiple spikes and the intervals between them; in retinal ganglion cells such symbols can convey both "when" and "how much".
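One simple way to make the "words" idea concrete is to binarize the spike train into small time bins and tabulate L-bin patterns; a minimal sketch with invented toy data:

```python
import numpy as np
from collections import Counter

def word_counts(spike_train, L):
    """Count the frequency of every L-bin binary pattern in a spike train."""
    words = (tuple(spike_train[i:i + L]) for i in range(len(spike_train) - L + 1))
    return Counter(words)

train = np.array([0, 1, 0, 0, 1, 1, 0, 1, 0, 0])   # toy binarized spike train
print(word_counts(train, 3).most_common(3))        # most frequent 3-bin words
```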

[Figure: multiple-spike symbols from the fly motion-sensitive neuron — the spike-triggered average, the 2-spike-triggered average at 10 ms separation, and the 2-spike-triggered average at 5 ms separation.]

Predicting the firing rate. Let's start with a rate response r(t) and a stimulus s(t). The optimal linear estimator
r_est(t) = ∫ dτ K(τ) s(t − τ)
is the one that comes closest to minimizing the mean squared error ⟨(r(t) − r_est(t))²⟩. We want to solve for K. Multiply by s(t − τ′) and integrate over t; note that we have produced terms which are simply correlation functions:
∫ dτ K(τ) C_ss(τ′ − τ) = C_rs(τ′).
The left side is a convolution, so Fourier transform both sides:
K(ω) C_ss(ω) = C_rs(ω).
Now we have a straightforward algebraic equation for K(ω):
K(ω) = C_rs(ω) / C_ss(ω).
Solving for K(τ) is then an inverse Fourier transform.
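A sketch of this solution in the Fourier domain, using FFT-based estimates of the spectra; the function name and the small floor eps (guarding against division by near-zero power) are illustrative choices, not part of the lecture:

```python
import numpy as np

def optimal_linear_filter(stim, rate, eps=1e-6):
    """Estimate K via K(w) = C_rs(w) / C_ss(w), with FFT-based spectra."""
    S = np.fft.rfft(stim - stim.mean())
    R = np.fft.rfft(rate - rate.mean())
    c_ss = (S * np.conj(S)).real          # stimulus power spectrum ~ C_ss(w)
    c_rs = R * np.conj(S)                 # cross-spectrum ~ C_rs(w)
    return np.fft.irfft(c_rs / np.maximum(c_ss, eps), n=len(stim))
```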

Predicting the firing rate: the white-noise case. For white noise, the stimulus correlation function is C_ss(τ) = σ² δ(τ), so K(τ) is simply proportional to C_rs(τ): K(τ) = C_rs(τ) / σ². Going back to the estimate: r_est(t) = ∫ dτ K(τ) s(t − τ).
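In the white-noise case the filter can therefore be estimated directly from the rate-stimulus cross-correlation; a minimal sketch (the helper name is invented):

```python
import numpy as np

def white_noise_filter(stim, rate, D):
    """K(tau) ~ C_rs(tau) / sigma^2 for tau = 0..D-1, assuming white noise."""
    sigma2 = stim.var()
    # C_rs(tau) = <r(t) s(t - tau)>, estimated by a time average at each lag.
    c_rs = [np.mean(rate[tau:] * stim[:len(stim) - tau]) for tau in range(D)]
    return np.array(c_rs) / sigma2
```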