Application of Statistical Techniques to Neural Data Analysis
Aniket Kaloti
03/07/2006

Introduction
Levels of analysis in systems and cognitive neuroscience:
- Spikes: the primary neural signals
- Single cells and receptive fields
- Multiple-electrode recordings
- fMRI
- EEG and ERPs
[Figure: receptive fields of a retinal ganglion cell and of a visual cortical (V1) cell]

Receptive Field Estimation: A New Information-Theoretic Method (Sharpee et al., 2004)
- V1 cells are the primary concern
- Linear-nonlinear (LN) model: estimate the linear (Wiener) filter, then read off the static nonlinearity graphically
- Classically, white-noise stimuli were used; this works best for Gaussian stimulus ensembles
- Natural stimuli, however, are non-Gaussian
[Figure from Simoncelli et al., 2003]
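
As a point of reference for the classical approach, here is a minimal sketch of estimating the linear filter of an LN model by spike-triggered averaging under Gaussian white-noise stimulation. All data are simulated; the filter shape, the exponential nonlinearity, and the parameter values are illustrative assumptions, not from the original talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a Gaussian white-noise stimulus and an LN model neuron.
T, D = 100_000, 40                     # time steps, filter length
stim = rng.standard_normal(T)
k_true = np.exp(-np.arange(D) / 8.0) * np.sin(np.arange(D) / 3.0)
k_true /= np.linalg.norm(k_true)

# Linear stage g[t] = sum_i k[i] * stim[t - i], then a static nonlinearity.
g = np.convolve(stim, k_true)[:T]
rate = 0.1 * np.exp(1.5 * g)           # assumed exponential nonlinearity
spikes = rng.poisson(np.minimum(rate, 10.0))

# Spike-triggered average: mean stimulus history preceding each spike.
sta = np.zeros(D)
for t in np.nonzero(spikes)[0]:
    if t >= D - 1:
        sta += spikes[t] * stim[t - D + 1 : t + 1][::-1]
sta /= spikes[D - 1 :].sum()

# For Gaussian white noise the STA is proportional to the true filter.
sta /= np.linalg.norm(sta)
print(f"correlation(STA, true filter) = {np.dot(sta, k_true):.3f}")
```

For non-Gaussian (natural) stimulus ensembles this proportionality breaks down, which is exactly the gap the information-theoretic method below addresses.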

The Model
- Treat the receptive field as a special dimension in the high-dimensional stimulus space
- Hence, reduce the dimensionality of the stimulus space conditioned on the neural response
- To formulate this, define the spike-conditional density $P(\mathbf{s} \mid \text{spike}) = P(\text{spike} \mid \mathbf{s})\, P(\mathbf{s}) / P(\text{spike})$
- $I_{\text{spike}}$ is the mutual information between the entire stimulus ensemble and the occurrence of a single spike: $I_{\text{spike}} = \int d\mathbf{s}\, P(\mathbf{s} \mid \text{spike}) \log_2 \frac{P(\mathbf{s} \mid \text{spike})}{P(\mathbf{s})}$
- In practice, use the time-average form $I_{\text{spike}} = \left\langle \frac{r(t)}{\bar{r}} \log_2 \frac{r(t)}{\bar{r}} \right\rangle_t$, where $r(t)$ is the firing rate and $\bar{r}$ its mean (Sharpee et al., 2004)
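
The time-average formula can be evaluated directly from a firing-rate trace. A minimal sketch, using simulated rates chosen purely for illustration:

```python
import numpy as np

def info_per_spike(rate):
    """Information per spike (bits) via the time average
    I_spike = < (r / r_mean) * log2(r / r_mean) >_t."""
    r = np.asarray(rate, dtype=float)
    ratio = r / r.mean()
    pos = ratio > 0
    out = np.zeros_like(ratio)
    out[pos] = ratio[pos] * np.log2(ratio[pos])  # 0*log(0) -> 0 by convention
    return out.mean()

# A strongly modulated rate carries more information per spike
# than a constant one (which carries none).
t = np.linspace(0.0, 10.0, 1000)
print(info_per_spike(2.0 + 2.0 * np.sin(2 * np.pi * t)))  # > 0 bits
print(info_per_spike(np.full_like(t, 2.0)))               # 0 bits
```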

Optimization Algorithm and Results
Finding the "most informative" dimensions:
- $I_{\text{spike}}$ is the total mutual information between stimulus and spike
- If only a few dimensions of the stimulus space are relevant, then $I_{\text{spike}}$ should equal the mutual information between the spike and the projection of the stimulus onto the relevant subspace in the direction of the vector $v$
- Estimate the pdfs $P_v(x)$ and $P_v(x \mid \text{spike})$ of the stimulus projections $x = \mathbf{s} \cdot v$
- Maximize $I_v = \int dx\, P_v(x \mid \text{spike}) \log_2 \frac{P_v(x \mid \text{spike})}{P_v(x)}$ with respect to $v$ to obtain the relevant dimension, i.e., the receptive field (see the sketch below)
[Figure: comparison of the standard reverse-correlation method with the present method, applied to the model of the previous slide]
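
A schematic sketch of this search: estimate $I_v$ from histograms of the projections with and without a spike, and optimize over $v$. The crude random hill climbing here is only a stand-in for the gradient-ascent-with-annealing scheme of Sharpee et al. (2004); the data, spiking model, and step size are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def info_along(v, stim, spikes, bins=16):
    """Histogram estimate of I_v (bits): KL divergence between the
    spike-conditional and prior distributions of the projection s.v."""
    x = stim @ (v / np.linalg.norm(v))
    edges = np.histogram_bin_edges(x, bins=bins)
    p_all, _ = np.histogram(x, bins=edges)
    p_spk, _ = np.histogram(x, bins=edges, weights=spikes)
    p_all = p_all / p_all.sum()
    p_spk = p_spk / p_spk.sum()
    ok = (p_spk > 0) & (p_all > 0)
    return np.sum(p_spk[ok] * np.log2(p_spk[ok] / p_all[ok]))

# Toy data: Gaussian stimuli; spiking depends on a single direction v_true.
N, D = 20_000, 20
stim = rng.standard_normal((N, D))
v_true = rng.standard_normal(D)
v_true /= np.linalg.norm(v_true)
p = 1.0 / (1.0 + np.exp(-3.0 * (stim @ v_true - 1.0)))
spikes = rng.binomial(1, p).astype(float)

# Crude random hill climbing as a stand-in for gradient ascent.
v = rng.standard_normal(D)
best = info_along(v, stim, spikes)
for _ in range(500):
    cand = v + 0.3 * rng.standard_normal(D)
    i_cand = info_along(cand, stim, spikes)
    if i_cand > best:
        v, best = cand, i_cand

v /= np.linalg.norm(v)
print(f"I_v = {best:.3f} bits; |overlap with true direction| = {abs(v @ v_true):.3f}")
```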

Independent Component Analysis (ICA)
Blind source separation: $x = f(s) + n$, where $x$ are the observed signals, $s$ the unknown sources, $n$ additive/observational noise, and $f$ an unknown mixing function.
- "Blind": both the input and the transfer function are unknown, so the problem is very ill-posed without further assumptions
- Assume $f$ is linear: $x = A s + n$, with the mixing matrix $A$ usually taken to be square and invertible
- Assume the sources $s$ are statistically independent (hence "independent component" analysis)
- Most commonly, assume $n$ is zero
- Independence: the joint density factorizes, $p(s) = \prod_i p_i(s_i)$; equivalently, the mutual information between the components is zero
The problem: estimate the independent sources by inverting the mixing matrix $A$.
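
A minimal blind-source-separation sketch under the noiseless linear model $x = A s$, using scikit-learn's FastICA as the estimator. The sources and mixing matrix are simulated; recovery is only up to permutation, sign, and scale, as is inherent to ICA.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two independent, non-Gaussian sources: a sinusoid and a square wave.
t = np.linspace(0, 8, 2000)
s = np.c_[np.sin(2 * np.pi * t), np.sign(np.sin(3 * np.pi * t))]

# Unknown square mixing matrix A; observed signals x = s A^T (noiseless).
A = np.array([[1.0, 0.5],
              [0.7, 1.2]])
x = s @ A.T

# Recover the sources up to permutation, sign, and scale.
ica = FastICA(n_components=2, whiten="unit-variance", random_state=0)
s_hat = ica.fit_transform(x)

# Each estimated component should correlate strongly with exactly
# one true source (rows: true sources, columns: estimates).
c = np.corrcoef(np.hstack([s, s_hat]).T)[:2, 2:]
print(np.round(np.abs(c), 2))
```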

ICA Estimation Techniques
Basic idea: minimize the mutual information between the components of $s$.
Maximum likelihood (ML) method:
- Likelihood: writing the unmixing matrix $W = A^{-1}$ with rows $w_i^T$, the density of the observations is $p_x(x) = |\det W| \prod_i p_i(w_i^T x)$
- Log-likelihood for a batch of $T$ samples: $L(W) = \sum_{t=1}^{T} \sum_i \log p_i(w_i^T x(t)) + T \log |\det W|$
- Maximize $L$ with respect to $W$; this is equivalent to minimizing the mutual information between the estimated components
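
A sketch of the ML approach via natural-gradient ascent on $L(W)$, using the $\tanh$ score function appropriate for super-Gaussian sources (essentially the Bell-Sejnowski/Amari update). The Laplacian sources, mixing matrix, step size, and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Mixed super-Gaussian (Laplacian) sources, x = A s, noiseless.
T = 20_000
s = rng.laplace(size=(2, T))
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])
x = A @ s
x -= x.mean(axis=1, keepdims=True)

# Natural-gradient ML ICA: dW = lr * (I - E[phi(y) y^T]) W,
# with phi(y) = tanh(y) as the assumed source score function.
W = np.eye(2)
lr = 0.1
for _ in range(300):
    y = W @ x
    phi = np.tanh(y)
    W += lr * (np.eye(2) - (phi @ y.T) / T) @ W

# W should approximate A^{-1} up to permutation and scaling,
# so W @ A should be close to a (scaled, permuted) identity.
print(np.round(W @ A, 2))
```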

ICA Estimation (contd.)
- Cumulant (moment) based methods: kurtosis, the fourth cumulant, $\mathrm{kurt}(x) = E[x^4] - 3(E[x^2])^2$ for zero-mean $x$; mutual-information approximations can be built from kurtosis
- Negentropy: $J(x) = H(x_{\text{gauss}}) - H(x)$, the entropy difference between a Gaussian vector of the same covariance and the vector of interest; a measure of non-Gaussianity
- Infomax ICA: maximize information transmission through a neural network
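
For the cumulant-based family, here is a sketch of the one-unit fixed-point iteration that maximizes |kurtosis| on whitened data (the kurtosis variant of FastICA, Hyvärinen 1999): $w \leftarrow E[z\,(w^T z)^3] - 3w$, then renormalize. The sources and mixing are simulated; whitening uses an eigendecomposition of the covariance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Mixed non-Gaussian sources (Laplacian and uniform).
T = 20_000
s = np.vstack([rng.laplace(size=T), rng.uniform(-1, 1, size=T)])
x = np.array([[1.0, 0.8], [0.3, 1.0]]) @ s

# Whiten: z = V x with cov(z) = I.
x -= x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(x))
V = E @ np.diag(d ** -0.5) @ E.T
z = V @ x

# Kurtosis-based fixed-point iteration for one component.
w = rng.standard_normal(2)
w /= np.linalg.norm(w)
for _ in range(50):
    w_new = (z * (w @ z) ** 3).mean(axis=1) - 3 * w
    w_new /= np.linalg.norm(w_new)
    if abs(abs(w_new @ w) - 1.0) < 1e-8:  # converged (up to sign)
        w = w_new
        break
    w = w_new

y = w @ z
kurt = (y ** 4).mean() - 3 * (y ** 2).mean() ** 2
print(f"estimated component kurtosis: {kurt:.2f}")
```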

Applications of ICA
- EEG and ERP analysis: Infomax ICA is the most commonly applied technique; it yields temporally independent EEG signals (sketched below). Independent components: can they tell us anything about brain activity?
- fMRI: spatially independent processes (?)
- Speech separation
- Natural images: independent components give V1-like receptive fields
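
As a pointer to practice, a sketch of running extended-Infomax ICA on continuous EEG with the MNE-Python toolbox. The file path and the excluded component index are placeholders; the calls follow MNE's documented `mne.preprocessing.ICA` interface.

```python
import mne
from mne.preprocessing import ICA

# Load continuous EEG (path is a placeholder) and high-pass filter,
# which tends to improve ICA decompositions.
raw = mne.io.read_raw_fif("sample_eeg_raw.fif", preload=True)
raw.filter(l_freq=1.0, h_freq=None)

# Extended-Infomax ICA, the variant most often used on EEG/ERP data.
ica = ICA(n_components=20, method="infomax",
          fit_params=dict(extended=True), random_state=0)
ica.fit(raw)

# Inspect the temporally independent components, mark artifactual
# ones (e.g., blinks) for removal, then reconstruct cleaned signals.
ica.plot_components()
ica.exclude = [0]                 # hypothetical blink component
raw_clean = ica.apply(raw.copy())
```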

Other Techniques Applicable to Neuroscience
- Point-process analysis of neural coding
- Information-theoretic analysis of coding in the nervous system
- Principal component analysis of neural recordings and spike sorting (see the sketch below)
- Recently developed nonlinear dimensionality-reduction techniques such as Isomap, Hessian eigenmaps, and Laplacian eigenmaps, e.g., in face and object recognition
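
For the PCA-based spike-sorting item above, a toy sketch: project simulated spike waveforms onto their first two principal components and cluster the scores. The waveform shapes, noise level, and cluster count are invented purely for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Simulate waveforms from two units with different shapes, plus noise.
t = np.linspace(0, 1, 32)
proto = np.stack([np.exp(-((t - 0.3) / 0.08) ** 2),
                  -1.5 * np.exp(-((t - 0.4) / 0.15) ** 2)])
labels_true = rng.integers(0, 2, size=400)
waves = proto[labels_true] + 0.15 * rng.standard_normal((400, 32))

# Reduce each 32-sample waveform to 2 principal-component scores,
# then cluster in the low-dimensional space.
scores = PCA(n_components=2).fit_transform(waves)
labels_hat = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)

# Clustering should match the true units up to label permutation.
agree = (labels_hat == labels_true).mean()
print(f"agreement with true units: {max(agree, 1 - agree):.2f}")
```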