Spike Coding
Adrienne Fairhall
Summary by Kim, Hoon Hee (SNU-BI LAB) [Bayesian Brain]


(C) 2007 SNU CSE Biointelligence Lab

Spike Coding: Outline
- Spikes carry information: single spikes, spike sequences
- Spike encoding: cascade models, covariance method
- Spike decoding
- Adaptive spike coding

Spikes: What Kind of Code?

Spikes: Timing and Information
- Entropy of the response: H(R) = -Σ_r P(r) log2 P(r)
- Mutual information (S: stimulus, R: response):
  I(S;R) = H(R) - H(R|S) = total entropy - noise entropy
- The total entropy H(R) measures the response's coding capacity; the noise entropy H(R|S) measures the variability of the response to a fixed stimulus.
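The decomposition I(S;R) = H(R) − H(R|S) can be computed directly for any discrete stimulus and response. A minimal sketch with a hypothetical two-stimulus, spike/no-spike example (the probabilities are made up for illustration):

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability vector (zero entries ignored)."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(p_s, p_r_given_s):
    """I(S;R) = H(R) - H(R|S) for discrete distributions.
    p_s: shape (S,); p_r_given_s: shape (S, R), rows summing to 1."""
    p_r = p_s @ p_r_given_s                       # marginal response distribution
    total_entropy = entropy(p_r)                  # H(R)
    noise_entropy = np.sum(p_s * np.array([entropy(row) for row in p_r_given_s]))
    return total_entropy - noise_entropy          # H(R) - H(R|S)

# Toy example: two equiprobable stimuli, binary response (no spike / spike)
p_s = np.array([0.5, 0.5])
p_r_given_s = np.array([[0.9, 0.1],   # stimulus A rarely evokes a spike
                        [0.1, 0.9]])  # stimulus B usually evokes a spike
print(mutual_information(p_s, p_r_given_s))  # ~0.53 bits
```

With perfectly reliable responses the noise entropy would vanish and the information would equal the full 1 bit of total entropy; the 10% response variability here costs roughly half a bit.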

Spikes: Information in Single Spikes
- Treat the response in a small time bin as binary: spike (r = 1) or no spike (r = 0).
- Comparing the time-varying rate r(t) with the mean rate r̄ gives the noise entropy, and hence the information carried per spike:
  I_spike = (1/T) ∫ dt (r(t)/r̄) log2(r(t)/r̄)
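The per-spike information formula is just a time average over the normalized rate, so it is a one-liner to evaluate from a trial-averaged PSTH. A minimal sketch (the example rate vectors are illustrative):

```python
import numpy as np

def info_per_spike(rate):
    """Information per spike (bits) from a trial-averaged rate r(t):
    I = < (r/rbar) * log2(r/rbar) >, averaging over time bins."""
    rbar = rate.mean()
    ratio = rate / rbar
    terms = np.zeros_like(ratio)
    nz = ratio > 0                      # convention: 0 * log 0 = 0
    terms[nz] = ratio[nz] * np.log2(ratio[nz])
    return terms.mean()

# Sparse, reliable firing: spikes confined to half the bins -> 1 bit/spike
print(info_per_spike(np.array([2.0, 0.0])))  # -> 1.0
# Constant rate: spike timing carries no information
print(info_per_spike(np.ones(100)))          # -> 0.0
```

The more sharply the rate is modulated relative to its mean, the more each spike tells the observer about when it occurred, and hence about the stimulus.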

Spikes: Information in Spike Sequences (1)
- Discretize the spike train into binary "letters": N consecutive bins form an N-letter binary word w.
- P(w): distribution of words over the whole experiment (gives the total entropy).
- P(w|s(t)): distribution of words at a fixed time across repeated presentations of the stimulus (gives the noise entropy).

Spikes: Information in Spike Sequences (2)
- Two parameters: the bin width dt and the total duration of the word, L = N·dt.
- Information rate: (total entropy − noise entropy) / L, in the limit of small dt and large L.
- The issue of finite sampling poses something of a problem for information-theoretic approaches: entropy estimates are biased when the number of observed words is small relative to the number of possible words.
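The "direct method" described on these slides can be sketched end to end: pool words over the whole experiment for the total entropy, and average the entropy of words at each fixed time for the noise entropy. Everything below (repeat count, word length, flip probability, the 2 ms bin) is an illustrative assumption; note that with only 50 repeats per time slot the noise-entropy estimate suffers exactly the finite-sampling bias mentioned above:

```python
import numpy as np

def word_entropy(words):
    """Entropy (bits) of observed N-letter binary words; one word per row."""
    _, counts = np.unique(words, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
n_repeats, n_words, N = 50, 200, 8
# Hypothetical data: a deterministic word per time slot, plus rare noise flips
base = rng.integers(0, 2, size=(n_words, N))
trials = np.repeat(base[None], n_repeats, axis=0)
flips = rng.random(trials.shape) < 0.02
trials = np.where(flips, 1 - trials, trials)

# Total entropy: words pooled over all times and repeats
H_total = word_entropy(trials.reshape(-1, N))
# Noise entropy: mean entropy of words at a fixed time (fixed stimulus)
H_noise = np.mean([word_entropy(trials[:, t]) for t in range(n_words)])
info = H_total - H_noise               # bits per word
rate = info / (N * 0.002)              # bits/s, assuming dt = 2 ms bins
print(H_total, H_noise, info, rate)
```

Because the responses are nearly reproducible, the noise entropy is a small fraction of the total entropy and most of the word entropy is information about the stimulus.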

Encoding and Decoding: Linear Decoding
- Optimal linear kernel K(t), given in the frequency domain by the ratio of C_rs, the stimulus-response cross-correlation (for spikes, the spike-triggered average, STA), to C_ss, the stimulus autocorrelation.
- Using a white noise stimulus, C_ss is flat, so the optimal kernel is proportional to the STA.
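The white-noise shortcut can be checked numerically: drive a model neuron with white noise and verify that the STA recovers its linear kernel. A minimal sketch (the exponential filter, threshold nonlinearity, and all parameters are illustrative assumptions, not taken from the chapter):

```python
import numpy as np

rng = np.random.default_rng(1)
T, L = 200_000, 50
stimulus = rng.standard_normal(T)           # white noise: flat C_ss
true_filter = np.exp(-np.arange(L) / 10.0)  # hypothetical causal kernel
true_filter /= np.linalg.norm(true_filter)

# Linear filtering + threshold nonlinearity -> spikes
proj = np.convolve(stimulus, true_filter)[:T]
spikes_at = np.nonzero(proj > 1.5)[0]
spikes_at = spikes_at[spikes_at >= L]       # keep spikes with a full history

# Spike-triggered average: mean stimulus window preceding each spike,
# reversed so index 0 is the most recent sample (matching the kernel)
sta = np.mean([stimulus[t - L + 1:t + 1] for t in spikes_at], axis=0)[::-1]
sta /= np.linalg.norm(sta)
print(np.dot(sta, true_filter))             # close to 1: STA recovers the kernel
```

For a Gaussian white-noise stimulus the STA points along the true filter regardless of the (monotonic) nonlinearity; a correlated stimulus would require dividing out C_ss, which is the general formula on this slide.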

Encoding and Decoding: Cascade Models
- Cascade (LN) model: the stimulus is passed through a linear filter, and the filtered value is converted to a firing rate by a nonlinear decision function.
- Two principal weaknesses:
  - It is limited to only one linear feature.
  - As a predictor of neural output it generates only a time-varying probability, or rate; spikes are then drawn as a Poisson process, so every spike is independent.
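The three stages of the cascade model can be sketched in a few lines. Everything here (the biphasic kernel shape, the sigmoid nonlinearity, the 40 Hz ceiling, the bin size) is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(2)
dt, T, L = 0.001, 100_000, 100          # 1 ms bins, 100 s of stimulus
stimulus = rng.standard_normal(T)

# Stage 1: linear filter (hypothetical biphasic kernel)
t = np.arange(L) * dt
kernel = np.sin(2 * np.pi * t / 0.05) * np.exp(-t / 0.02)
filtered = np.convolve(stimulus, kernel)[:T]

# Stage 2: static nonlinearity mapping the projection to a firing rate (Hz)
rate = 40.0 / (1.0 + np.exp(-2.0 * filtered))

# Stage 3: inhomogeneous Poisson spike generation -- exactly the weakness
# noted above: every spike is independent, with no refractoriness or history
spikes = rng.random(T) < rate * dt
print(spikes.sum())
```

Both weaknesses are visible in the sketch: only the single projection `filtered` influences the rate, and nothing about past spikes influences future ones.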

Encoding and Decoding: Cascade Models (cont.)
- Modified cascade model: replace the Poisson spike generator with a deterministic spiking mechanism such as the integrate-and-fire model.

Encoding and Decoding: Finding Multiple Features
- Spike-triggered covariance matrix: the covariance of the stimuli preceding spikes, compared with the covariance of the prior stimulus ensemble.
- Eigenvalue decomposition of the difference:
  - Irrelevant dimensions: eigenvalues close to zero.
  - Relevant dimensions: variance either less than the prior or greater.
- This is closely related to principal component analysis (PCA).
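A case where covariance analysis succeeds while the STA fails: a model neuron that spikes on the *energy* in two features, so the STA averages to zero by symmetry but the spike-triggered variance grows along both features. A minimal sketch (the two features and the energy threshold are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
T, L = 200_000, 40
stimulus = rng.standard_normal(T)

# Two hypothetical orthogonal features
f1 = np.sin(np.linspace(0, np.pi, L)); f1 /= np.linalg.norm(f1)
f2 = np.cos(np.linspace(0, np.pi, L)); f2 /= np.linalg.norm(f2)

# Stimulus history windows (one row per time point)
windows = np.lib.stride_tricks.sliding_window_view(stimulus, L)
p1, p2 = windows @ f1, windows @ f2
spikes = (p1**2 + p2**2) > 4.0          # symmetric rule: STA ~ 0

# Spike-triggered covariance minus the prior (identity for white noise)
delta_C = np.cov(windows[spikes].T) - np.eye(L)
eigvals, eigvecs = np.linalg.eigh(delta_C)   # ascending eigenvalues

# Relevant dimensions: eigenvalues far from zero (here, increased variance);
# the top eigenvectors should span the {f1, f2} plane
top2 = eigvecs[:, np.argsort(np.abs(eigvals))[-2:]]
overlap = np.linalg.norm(top2.T @ np.column_stack([f1, f2]))
print(eigvals[-2:], overlap)            # two large eigenvalues, overlap ~ sqrt(2)
```

All other eigenvalues sit near zero, which is how the method separates the handful of relevant stimulus dimensions from the irrelevant ones.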

Examples of the Application of Covariance Methods (1)
- Neural model with a second filter: the covariance analysis finds two significant (negative) modes.
- The STA is a linear combination of the filter f and its derivative f'.
- The analysis also exposes noise effects and interdependence between spikes.

Examples of the Application of Covariance Methods (2)
- Leaky integrate-and-fire (LIF) neuron: C dV/dt = -V/R + I(t), with a spike fired and V reset when the membrane potential V reaches the threshold V_c (C: capacitance, R: resistance).
- The corresponding filter is a causal exponential kernel; the reset sets the lower limit of integration at the previous spike.
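The LIF dynamics on this slide can be simulated with forward Euler in a few lines. A minimal sketch (the parameter values are illustrative, not taken from the chapter):

```python
import numpy as np

def lif(I, dt=1e-4, C=1e-9, R=1e7, V_th=0.02, V_reset=0.0):
    """Leaky integrate-and-fire: C dV/dt = -V/R + I(t);
    emit a spike and reset V when it crosses V_th."""
    V, spikes = 0.0, []
    for step, i_t in enumerate(I):
        V += dt / C * (-V / R + i_t)    # forward-Euler membrane update
        if V >= V_th:
            spikes.append(step)
            V = V_reset
    return np.array(spikes)

# Constant suprathreshold current (3 nA, steady state I*R = 30 mV > 20 mV
# threshold) for 1 s of simulated time -> regular firing
spike_steps = lif(np.full(10_000, 3e-9))
print(len(spike_steps))
```

Between spikes the membrane integrates the input through an exponential kernel with time constant RC (here 10 ms), and the reset at each spike is exactly the "lower limit of integration" noted above.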

Examples of the Application of Covariance Methods (3)
- How changes in a neuron's biophysics alter its coding, studied by reverse correlation:
  - Neurons of the nucleus magnocellularis (NM).
  - Effect of DTX (dendrotoxin).

Using Information to Assess Decoding
- Decoding question: to what extent has one captured what is relevant about the stimulus?
- Use Bayes' rule to build an N-dimensional model of the spike-conditional stimulus distribution from the recovered features, and evaluate it with the single-spike information.
- The 1D STA-based model recovers ~63% of the information; the 2D model recovers ~75%.

Adaptive Spike Coding (1)
- Adaptation of the firing rate to sustained stimulation (example: mechanoreceptor in the cat's toe pad).
- Example: fly large monopolar cells (LMCs).

Adaptive Spike Coding (2)
- Although the firing rate is changing, a variant of the information methods can still be used.
- White noise stimulus whose standard deviation is varied: the neuron's input/output relation rescales with the stimulus standard deviation.
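The rescaling of the input/output relation can be illustrated with a toy neuron whose gain tracks the stimulus standard deviation. This is a hypothetical stand-in for the adaptation described above, not the chapter's model; the sigmoid, its slope, and the two contrast values are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

def io_relation(sigma, n=200_000, edges=np.linspace(-3, 3, 16)):
    """Empirical spike probability vs. stimulus (in units of sigma) for a
    toy neuron whose gain rescales with the stimulus standard deviation."""
    s = rng.standard_normal(n) * sigma
    p_spike = 1.0 / (1.0 + np.exp(-3.0 * (s / sigma - 1.0)))  # adaptive gain
    spike = rng.random(n) < p_spike
    x = s / sigma                       # stimulus rescaled by its std. dev.
    return np.array([spike[(x >= lo) & (x < hi)].mean()
                     for lo, hi in zip(edges[:-1], edges[1:])])

# After rescaling by sigma, the input/output relations at low and high
# stimulus contrast coincide (up to sampling noise)
p_low, p_high = io_relation(0.5), io_relation(2.0)
print(np.max(np.abs(p_low - p_high)))
```

Plotted against the raw stimulus the two curves would differ by a factor of four in width; plotted against s/σ they collapse onto a single curve, which is the signature of adaptive rescaling.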