Zen, and the Art of Neural Decoding using an EM Algorithm Parameterized Kalman Filter and Gaussian Spatial Smoothing Michael Prerau, MS.

Presentation transcript:

Zen, and the Art of Neural Decoding using an EM Algorithm Parameterized Kalman Filter and Gaussian Spatial Smoothing Michael Prerau, MS

Encoding/Decoding Process
1. Generate a smoothed Gaussian white-noise stimulus.
2. Generate a random kernel, K, and convolve it with the stimulus to generate a spike rate.
3. Drive a Poisson spike generator with that rate.
4. Decode the resulting spike train and find K.
5. Use K to decode new stimuli in "real time".
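A minimal sketch of this simulation pipeline (Python; the kernel length, bin size, peak rate, and smoothing width below are illustrative assumptions, not values from the talk):

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    rng = np.random.default_rng(0)

    # 1. Smoothed Gaussian white-noise stimulus
    n_bins, dt = 2000, 0.001                    # assumed number of bins and bin size (s)
    stimulus = gaussian_filter1d(rng.standard_normal(n_bins), sigma=5.0)

    # 2. Random kernel K, convolved with the stimulus to give a driving signal
    K = rng.standard_normal(50)                 # assumed 50-bin kernel
    drive = np.convolve(stimulus, K, mode="same")

    # 3. Positive spike rate driving a Poisson spike generator
    rate = 100.0 * np.exp(drive - drive.max())  # assumed 100 spikes/s peak rate
    spikes = rng.poisson(rate * dt)

    # 4-5. Decoding (estimating K and reconstructing new stimuli) would follow here.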

Encoding/Decoding Process

Encoding/Decoding

Stimulus

Decoded Estimate

State-Space Modeling
- Hidden state: where Sputnik really is
- Observations: what the towers see
- State equation: how Sputnik ideally moves
- Observation equation: if we knew where Sputnik was, how would that relate to our observations?
- Parameters:
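In symbols, this is the standard linear Gaussian state-space model (textbook notation, not necessarily the slide's):

\[ x_k = F x_{k-1} + w_k, \quad w_k \sim N(0, Q) \qquad \text{(state equation)} \]
\[ y_k = H x_k + v_k, \quad v_k \sim N(0, R) \qquad \text{(observation equation)} \]

with parameters \(\{F, Q, H, R\}\), plus the initial state mean and variance.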

State-Space Modeling (figure: observations and state estimate)

The Kalman Filter
(figure labels: Gaussian state / the actual stimulus intensity; Gaussian observations / the filtered estimate; state equation; observation equation; state estimate)

The Kalman Filter: Application to the Intensity Estimate
- State equation: random-walk AR model
- Observation equation: linear model
- Parameters
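One consistent reading of this slide, with the missing equations written out as the standard forms (the parameter names \(\beta\), \(\sigma_\varepsilon^2\), \(\sigma_v^2\) are assumptions for illustration):

\[ x_k = x_{k-1} + \varepsilon_k, \quad \varepsilon_k \sim N(0, \sigma_\varepsilon^2) \qquad \text{(random-walk AR state equation)} \]
\[ y_k = \beta x_k + v_k, \quad v_k \sim N(0, \sigma_v^2) \qquad \text{(linear observation equation)} \]

with parameters \(\theta = \{\beta, \sigma_\varepsilon^2, \sigma_v^2\}\).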

The Kalman Filter: Application to the Intensity Estimate
- Complete-data likelihood
- Log-likelihood
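Under the assumed model above, the complete-data log-likelihood has the standard Gaussian form (a sketch, not the slide's exact expression):

\[
\log L(\theta) = -\frac{K}{2}\log\sigma_\varepsilon^2 - \sum_{k=1}^{K}\frac{(x_k - x_{k-1})^2}{2\sigma_\varepsilon^2}
                 -\frac{K}{2}\log\sigma_v^2 - \sum_{k=1}^{K}\frac{(y_k - \beta x_k)^2}{2\sigma_v^2} + \text{const.}
\]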

Forward Filter Derivation
- The most likely hidden state maximizes the log-likelihood.
- Maximize with respect to x_k and solve.
- Arrange in Kalman form.
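Carried through for the assumed model above, setting \(\partial \log L / \partial x_k = 0\) with one-step prediction \(x_{k|k-1} = x_{k-1|k-1}\) and prediction variance \(\sigma^2_{k|k-1} = \sigma^2_{k-1|k-1} + \sigma_\varepsilon^2\) gives the familiar update (sketch):

\[
x_{k|k} = x_{k|k-1} + \frac{\beta\,\sigma^2_{k|k-1}}{\beta^2\sigma^2_{k|k-1} + \sigma_v^2}\,\bigl(y_k - \beta\,x_{k|k-1}\bigr).
\]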

Forward Filter Derivation (continued)
- For the hidden-state variance, first take the 2nd derivative of the log-likelihood.
- Then take the negative of its inverse as the variance of the hidden state.
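For the same assumed model, the second derivative does not depend on \(x_k\), so (sketch):

\[
\frac{\partial^2 \log L}{\partial x_k^2} = -\frac{1}{\sigma^2_{k|k-1}} - \frac{\beta^2}{\sigma_v^2}
\quad\Longrightarrow\quad
\sigma^2_{k|k} = \left(\frac{1}{\sigma^2_{k|k-1}} + \frac{\beta^2}{\sigma_v^2}\right)^{-1}.
\]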

The EM Algorithm
- What if we don't know the parameter values?
- Use the Expectation-Maximization (EM) algorithm (Dempster, Laird, and Rubin, 1977).
- Iterative maximization (sketched in the code below):
  - E-step: take the expected value of the state process given the current parameters.
  - M-step: maximize for the most likely parameters given the estimated state values.
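A runnable sketch of that loop for a simplified version of the model above (Python; the observation gain is fixed at 1 and the initial condition is ad hoc, so this illustrates the E-step/M-step structure rather than the talk's exact estimator):

    import numpy as np

    def em_random_walk(y, sigma_eps2=1.0, sigma_v2=1.0, n_iter=100):
        """EM for x_k = x_{k-1} + eps_k, y_k = x_k + v_k; y is a 1-D array of observations."""
        K = len(y)
        xp = np.zeros(K)   # one-step predicted means
        pp = np.zeros(K)   # one-step predicted variances
        xf = np.zeros(K)   # filtered means
        pf = np.zeros(K)   # filtered variances
        for _ in range(n_iter):
            # E-step, forward pass: Kalman filter
            x_prev, p_prev = y[0], sigma_v2          # crude initial condition (assumed)
            for k in range(K):
                xp[k], pp[k] = x_prev, p_prev + sigma_eps2
                gain = pp[k] / (pp[k] + sigma_v2)
                xf[k] = xp[k] + gain * (y[k] - xp[k])
                pf[k] = (1.0 - gain) * pp[k]
                x_prev, p_prev = xf[k], pf[k]
            # E-step, backward pass: RTS smoother with lag-one covariances
            xs, ps = xf.copy(), pf.copy()
            cs = np.zeros(K)                         # Cov(x_k, x_{k-1} | all data)
            for k in range(K - 2, -1, -1):
                J = pf[k] / pp[k + 1]
                xs[k] = xf[k] + J * (xs[k + 1] - xp[k + 1])
                ps[k] = pf[k] + J**2 * (ps[k + 1] - pp[k + 1])
                cs[k + 1] = J * ps[k + 1]
            # M-step: closed-form updates of the two variances
            e_dx2 = (xs[1:] - xs[:-1])**2 + ps[1:] + ps[:-1] - 2.0 * cs[1:]
            sigma_eps2 = e_dx2.mean()
            sigma_v2 = ((y - xs)**2 + ps).mean()
        return xs, sigma_eps2, sigma_v2

Here the lag-one covariances cs play the role of the cross-expectation terms mentioned on the E-step slide that follows.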

E-Step for Intensity Model
- Take the expected value of the joint (complete-data) likelihood.
- We will encounter expectation terms (shown on the slide) that can be computed with the state-space covariance algorithm (De Jong and MacKinnon, 1988).
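For the assumed model, those expectations are the smoothed moments of the state path, for example:

\[
E[x_k \mid y_{1:K}], \qquad E[x_k^2 \mid y_{1:K}], \qquad E[x_k x_{k-1} \mid y_{1:K}],
\]

with the cross term being the quantity the state-space covariance algorithm supplies.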

Example: M-Step for Intensity Model
- For the M-step, maximize the expected complete-data log-likelihood with respect to each parameter.
- Set each derivative equal to zero and solve.
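For the assumed parameters \(\{\beta, \sigma_\varepsilon^2, \sigma_v^2\}\), these zero-derivative conditions give closed-form updates of the usual form (a sketch; all expectations are conditioned on the full data and the current parameter values):

\[
\hat\beta = \frac{\sum_k y_k\,E[x_k]}{\sum_k E[x_k^2]}, \qquad
\hat\sigma_\varepsilon^2 = \frac{1}{K}\sum_k E\!\left[(x_k - x_{k-1})^2\right], \qquad
\hat\sigma_v^2 = \frac{1}{K}\sum_k E\!\left[(y_k - \hat\beta x_k)^2\right].
\]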

M-Step for Intensity Model
- M-step summary:

The EM Algorithm

Kalman Estimate

2D Gaussian Spatial Smoothing
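A minimal sketch of the 2D smoothing step (Python, using scipy.ndimage.gaussian_filter; the grid size and kernel width are assumptions, not the talk's values):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    # estimate: decoded intensity on a 2D spatial grid (placeholder data)
    estimate = np.random.default_rng(0).standard_normal((64, 64))

    # Convolve with an isotropic 2D Gaussian kernel
    smoothed = gaussian_filter(estimate, sigma=2.0)   # sigma in pixels (assumed)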

Gaussian Spatially Smoothed Estimate

Kalman Filtering the Gaussian Smoothed Estimate

Comparison

Comparison figure panels: stimulus, S_est, Kalman estimate, Gaussian-smoothed Kalman estimate.

fin