Spike-triggering stimulus features


Functional models of neural computation

[Diagram: the stimulus X(t) passes through a bank of spike-triggering stimulus features (filters) f1, f2, f3, producing projections x1, x2, x3; a multidimensional decision function on (x1, x2, x3) determines the spike output Y(t).]

Given a set of data, we want to find the best reduced-dimensional description. The data are the set of stimuli that lead up to a spike, s_n(t), where t = 1, 2, 3, ..., D.

Variance of a random variable: <(X − mean(X))^2>
Covariance matrix: <(X − mean(X))^T (X − mean(X))>

Compute the difference matrix between the covariance matrix of the spike-triggered stimuli and that of all stimuli, then find its eigensystem to define the dimensions of interest.
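A minimal sketch of this procedure in Python; the array names, the windowing convention, and the use of np.cov are illustrative assumptions, not anything specified on the slides:

```python
import numpy as np

# Spike-triggered covariance (STC) sketch, assuming:
#   stim        -- 1D stimulus array sampled at the spike-train resolution
#   spike_times -- indices into stim where spikes occurred
#   D           -- number of pre-spike samples kept per stimulus window
def stc_difference(stim, spike_times, D):
    # Build the D-dimensional stimulus windows s_n(t) preceding each spike.
    windows = np.array([stim[t - D:t] for t in spike_times if t >= D])
    # Covariance of the spike-triggered ensemble (rows = observations).
    c_spike = np.cov(windows, rowvar=False)
    # Covariance of the full ("prior") stimulus ensemble, from all windows.
    all_windows = np.array([stim[t - D:t] for t in range(D, len(stim))])
    c_prior = np.cov(all_windows, rowvar=False)
    # Difference matrix whose eigensystem defines the dimensions of interest.
    return c_spike - c_prior
```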

Eigensystem: any symmetric matrix M can be decomposed as M = U V U^T, where U is an orthogonal matrix and V is a diagonal matrix, diag([λ_1, λ_2, ..., λ_D]). Each eigenvalue has a corresponding eigenvector, one of the orthogonal columns of U. The value of the eigenvalue classifies its eigenvector as belonging to either the

column space: an orthogonal basis for the relevant dimensions, or the
null space: an orthogonal basis for the irrelevant dimensions.

We will project the stimuli into the column space.
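Continuing the sketch above, the eigensystem of the (symmetric) difference matrix can be found with a standard eigensolver; the cutoff `eps` separating "relevant" from "irrelevant" eigenvalues is an illustrative choice, not from the slides:

```python
import numpy as np

def relevant_subspace(delta_c, eps=0.1):
    # delta_c is symmetric, so np.linalg.eigh applies: M = U diag(lam) U^T.
    lam, U = np.linalg.eigh(delta_c)
    # Keep eigenvectors whose eigenvalues are far from zero (column space).
    keep = np.abs(lam) > eps * np.abs(lam).max()
    return U[:, keep]   # orthogonal basis for the relevant dimensions

def project(windows, basis):
    # Project stimulus windows into the low-dimensional relevant subspace.
    return windows @ basis
```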

This method finds an orthogonal basis in which to describe the data, and ranks each "axis" according to its importance in capturing the data. It is closely related to principal component analysis.

Example: an auditory neuron is responsible for detecting sound at a certain frequency ω, but phase is not important. The appropriate "directions" describing this neuron's relevant feature space are cos(ωt) and sin(ωt). These describe any signal at that frequency, independent of phase: since cos(A + B) = cos(A)cos(B) − sin(A)sin(B), we have cos(ωt + φ) = a cos(ωt) + b sin(ωt), with a = cos(φ), b = −sin(φ). Note that a^2 + b^2 = 1: all such stimuli lie on a ring.
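A quick numerical check of this identity (the frequency and phase values here are arbitrary illustrations):

```python
import numpy as np

w, phi = 2.0 * np.pi * 5.0, 0.7          # illustrative frequency and phase
t = np.linspace(0.0, 1.0, 1000)
a, b = np.cos(phi), -np.sin(phi)          # coefficients from the identity
lhs = np.cos(w * t + phi)
rhs = a * np.cos(w * t) + b * np.sin(w * t)
assert np.allclose(lhs, rhs)              # cos(wt + phi) = a cos(wt) + b sin(wt)
print(a**2 + b**2)                        # 1.0: all such stimuli lie on a ring
```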

[Figure: the two leading modes plotted against pre-spike time (ms); vertical axis: velocity.] The modes look like local frequency detectors, in conjugate pairs (sine and cosine)... and they sum in quadrature, i.e. the decision function depends only on x^2 + y^2.

Basic types of computation:
- integrators (H1)
- differentiators (retina, simple cells, single neurons)
- power detectors (complex cells, somatosensory, auditory, retina)

Decoding: how well can we learn what the stimulus is by looking at the neural responses? Two approaches:
- devise explicit algorithms for extracting a stimulus estimate
- directly quantify the relationship between stimulus and response using information theory

Motion detection task: two-alternative forced choice. Britten et al.: behavioral monkey data + neural responses.

Discriminability: d' = (<r>_+ − <r>_−) / σ_r

[Figure: behavioral performance, and neural data at different coherences.]
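A minimal d' computation might look like the following; the pooled standard deviation used for σ_r is one common convention, assumed here rather than specified on the slide:

```python
import numpy as np

# r_plus and r_minus hold firing rates on "+" (preferred) and "-" (null)
# trials; the names are illustrative.
def d_prime(r_plus, r_minus):
    sigma_r = np.sqrt(0.5 * (r_plus.var() + r_minus.var()))  # pooled spread
    return (r_plus.mean() - r_minus.mean()) / sigma_r
```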

[Figure: the distributions p(r|−) and p(r|+), with means <r>_− and <r>_+, and a decision threshold z.]

Signal detection theory: r is the "test".
α(z) = P[r ≥ z | −]: false alarm rate, "size"
β(z) = P[r ≥ z | +]: hit rate, "power"

We could maximize P[correct] = (β(z) + 1 − α(z)) / 2.
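An empirical version of these quantities, sweeping the threshold z over the observed responses (a sketch; array names as in the d' example above):

```python
import numpy as np

def roc_curve(r_plus, r_minus):
    # Candidate thresholds: every observed response value.
    zs = np.sort(np.concatenate([r_plus, r_minus]))
    alpha = np.array([(r_minus >= z).mean() for z in zs])  # false alarms, "size"
    beta = np.array([(r_plus >= z).mean() for z in zs])    # hits, "power"
    return alpha, beta

def best_p_correct(alpha, beta):
    # P[correct] = (beta(z) + 1 - alpha(z)) / 2, maximized over thresholds z.
    return ((beta + 1.0 - alpha) / 2.0).max()
```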

ROC curves summarize the performance of the test across all thresholds z. We want β → 1 and α → 0.

The area under the ROC curve corresponds to P[correct] for a two-alternative forced-choice task: the first presentation acts as the threshold for the second. If p[r|+] and p[r|−] are both Gaussian, P[correct] = (1/2) erfc(−d'/2). An ideal observer performs at the level given by the area under the ROC curve.
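A simulation sketch of the Gaussian case, checking the area under the ROC curve against (1/2) erfc(−d'/2); the parameters are illustrative:

```python
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(0)
d = 1.5                                    # illustrative d'
r_plus = rng.normal(d, 1.0, 200_000)       # samples from p[r|+]
r_minus = rng.normal(0.0, 1.0, 200_000)    # samples from p[r|-]
# AUC = P[r_+ > r_-] for independent draws: the 2AFC P[correct].
auc = (r_plus > r_minus).mean()
print(auc, 0.5 * erfc(-d / 2.0))           # both ~0.856 for d' = 1.5
```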

There is a close correspondence between neural and behavioral performance. Why, then, are there so many neurons? Correlations limit performance.

What is the best test function to use (other than r itself)? By the Neyman-Pearson lemma, the optimal test function is the likelihood ratio, l(r) = p[r|+] / p[r|−]. Note that l(z) = (dβ/dz) / (dα/dz) = dβ/dα, the slope of the ROC curve at threshold z.
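For Gaussian class-conditional distributions the likelihood ratio is monotone in r, so thresholding l(r) is equivalent to thresholding r itself; a small numerical illustration (the means and variance are assumed, not from the slides):

```python
import numpy as np
from scipy.stats import norm

# Equal-variance Gaussians: l(r) = p[r|+]/p[r|-] grows monotonically with r,
# so the likelihood-ratio test and the simple threshold test on r coincide.
r = np.linspace(-4.0, 6.0, 500)
l_r = norm.pdf(r, loc=1.5, scale=1.0) / norm.pdf(r, loc=0.0, scale=1.0)
assert np.all(np.diff(l_r) > 0)   # l(r) is monotone increasing in r
```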