Using Feed Forward NN for EEG Signal Classification
Amin Fazel, April 2006
Department of Computer Science and Electrical Engineering, University of Missouri – Kansas City

Outline: Introduction, Features, Classifier, Results

Introduction
Developed systems:
– Allow trained subjects to communicate via BCIs
– BCI control signals: amplitude of mu (8-12 Hz) or high-beta (18-26 Hz) rhythms, the P300 event-related potential, and Slow Cortical Potentials (SCPs)
– Frequency-domain information: mu and/or beta rhythm amplitude
– Time-domain information: P300, dc potential in the form of SCPs

Data Set
This data set pertains to SCPs recorded from a healthy human subject:
– Subjects were asked to move a cursor up or down on a computer screen while their SCPs were recorded
– 6 channels, sampling rate 256 Hz
– Training set: 268 trials (168 originated from day I, 100 trials from day II)
– Test set: 293 trials

Data Set (figure)

Outline: Introduction, Features, Classifier, Results

Feature Used
SCP features:
– Mean subtraction of the time-domain data for each of the 6 channels, to prevent dc-offset effects

Mean Subtraction (figure)
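A minimal sketch of the per-channel mean subtraction described above; the (channels, samples) array layout and the random data are assumptions for illustration, not from the slides.

```python
import numpy as np

def remove_dc(trial):
    """Subtract each channel's mean from a (channels, samples) EEG trial."""
    return trial - trial.mean(axis=1, keepdims=True)

# Example: one trial, 6 channels x 896 samples (3.5 s at 256 Hz)
trial = np.random.randn(6, 896)
centered = remove_dc(trial)
```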

Feature Extraction
Linear Predictive Coding (LPC): the current signal sample is estimated as a linear combination of past signal samples, and the parameters of that combination are used as features.

LPC Feature Extraction: Framing
Frame blocking of s(n), followed by LPC analysis, produces one feature vector per frame:
– Framing window = 0.5 sec (N samples) with 50% overlap (M-sample shift)
– 13 frames, covering 896 samples (3.5 sec) per channel
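A sketch of this frame blocking under the numbers stated above (0.5 s frames = 128 samples at 256 Hz, 50% overlap = 64-sample shift); the function and variable names are illustrative.

```python
import numpy as np

def frame_signal(x, frame_len=128, hop=64):
    """Split a 1-D signal into overlapping frames of frame_len samples, shifted by hop."""
    n_frames = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n_frames)])

x = np.random.randn(896)       # 3.5 s of one channel at 256 Hz
frames = frame_signal(x)       # shape (13, 128): 13 frames per channel
```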

Features: LPC
Linear Predictive Coding (LPC) provides:
– a low-dimension representation of the signal in one frame
– an "analytically tractable" method
LPC models the signal as an approximate linear combination of the previous p samples:
\(s(n) \approx a_1 s(n-1) + a_2 s(n-2) + \dots + a_p s(n-p)\)
where \(a_1, a_2, \dots, a_p\) are constant over each frame of the signal.
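A small numerical illustration of the prediction equation above; the order-2 coefficients and the signal are made up for illustration (the actual coefficients come from the extraction procedure on the following slides).

```python
import numpy as np

def lpc_predict(s, a):
    """Predict s[n] as sum_{k=1..p} a[k-1] * s[n-k], for n = p .. len(s)-1."""
    p = len(a)
    return np.array([np.dot(a, s[n - p:n][::-1]) for n in range(p, len(s))])

a = np.array([1.3, -0.4])      # hypothetical order-2 LPC coefficients
s = np.random.randn(128)       # one frame of (mean-subtracted) signal
s_hat = lpc_predict(s, a)      # predictions for samples p .. N-1
residual = s[len(a):] - s_hat  # prediction error per sample
```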

Linear Predictive Model
The LPC coefficients define an all-pole filter H(z) that produces s(n) from an excitation u(n) scaled by a gain G:
\(H(z) = \dfrac{G}{1 - \sum_{k=1}^{p} a_k z^{-k}}\)
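A sketch of synthesizing a signal through this all-pole model using SciPy's standard IIR filter routine; the coefficients, gain, and excitation here are placeholders.

```python
import numpy as np
from scipy.signal import lfilter

a = np.array([1.3, -0.4])      # hypothetical LPC coefficients a_1..a_p
G = 0.1                        # gain
u = np.random.randn(896)       # excitation u(n)

# H(z) = G / (1 - a_1 z^-1 - ... - a_p z^-p): denominator polynomial is [1, -a_1, ..., -a_p]
s_synth = lfilter([G], np.concatenate(([1.0], -a)), u)
```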

LPC Extraction
Given a window of N signal samples, the first p+1 terms of the autocorrelation sequence are calculated from
\(r_i = \sum_{n=i}^{N-1} s(n)\, s(n-i), \quad i = 0, \dots, p.\)
The filter coefficients are then computed recursively using a set of auxiliary coefficients \(k_i\), which can be interpreted as reflection coefficients, and the prediction error E, which is initially equal to \(r_0\). Let \(k_j\) and \(a_j^{(i-1)}\) be the reflection and filter coefficients for a filter of order i-1; a filter of order i can then be calculated in three steps.
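A sketch of the autocorrelation computation above for a single frame; the frame length and model order follow the earlier slides, but the signal itself is synthetic.

```python
import numpy as np

def autocorrelation(frame, p):
    """Return r_0..r_p with r_i = sum_{n=i}^{N-1} frame[n] * frame[n-i]."""
    N = len(frame)
    return np.array([np.dot(frame[i:], frame[:N - i]) for i in range(p + 1)])

frame = np.random.randn(128)      # one 0.5 s frame at 256 Hz
r = autocorrelation(frame, p=12)  # 13 autocorrelation terms for a 12th-order model
```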

LPC Extraction (continued)
First, the new reflection coefficient is calculated from the order i-1 coefficients:
\(k_i = \dfrac{1}{E^{(i-1)}} \Big( r_i - \sum_{j=1}^{i-1} a_j^{(i-1)} r_{i-j} \Big)\)
Second, E is updated:
\(E^{(i)} = (1 - k_i^2)\, E^{(i-1)}\)
Finally, the new filter coefficients are computed:
\(a_j^{(i)} = a_j^{(i-1)} - k_i\, a_{i-j}^{(i-1)}\) for j = 1, ..., i-1, and \(a_i^{(i)} = k_i.\)
This process is repeated from i = 1 through to the required filter order i = p.
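A sketch of the Levinson-Durbin recursion just described, kept self-contained by recomputing the autocorrelation terms for a synthetic frame; the variable names are illustrative.

```python
import numpy as np

def levinson_durbin(r, p):
    """Solve for LPC coefficients a_1..a_p from autocorrelation terms r_0..r_p."""
    a = np.zeros(p + 1)                 # a[j] holds a_j; a[0] is unused
    E = r[0]                            # prediction error, initially r_0
    for i in range(1, p + 1):
        # step 1: new reflection coefficient k_i
        k = (r[i] - np.dot(a[1:i], r[i - 1:0:-1])) / E
        # step 2: update the prediction error
        E *= (1.0 - k * k)
        # step 3: update a_1..a_{i-1}, then set a_i = k_i
        a_prev = a.copy()
        for j in range(1, i):
            a[j] = a_prev[j] - k * a_prev[i - j]
        a[i] = k
    return a[1:], E

frame = np.random.randn(128)            # one 0.5 s frame
r = np.array([frame[i:] @ frame[:128 - i] for i in range(13)])
a, err = levinson_durbin(r, p=12)       # 12th-order LPC coefficients for this frame
```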

Outline: Introduction, Feature Extraction, Classifier, Results

Feed Forward NN
Multilayer feed-forward networks are universal approximators (Hornik, 1989):
– as few as one hidden layer is sufficient
However, in practice things are not so simple.

Feed Forward NN
The features per trial are:
– 13 frames
– 12 LPC coefficients for each frame
– 6 channels
So the network has:
– 936 PEs in the input layer (13 x 12 x 6)
– 10 or 40 PEs in the hidden layer
– 1 PE in the output layer
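A sketch of a network with these dimensions; the slides do not say which implementation was used, so scikit-learn's MLPClassifier stands in here, and the feature matrix and labels are placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# 13 frames x 12 LPC coefficients x 6 channels = 936 input features per trial
n_trials, n_features = 268, 13 * 12 * 6

X = np.random.randn(n_trials, n_features)   # placeholder feature matrix (training trials)
y = np.random.randint(0, 2, n_trials)       # cursor up / down labels

# one hidden layer (e.g. 40 units) and a single binary output
clf = MLPClassifier(hidden_layer_sizes=(40,), max_iter=500)
clf.fit(X, y)
print(clf.score(X, y))
```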

Outline: Introduction, Features, Classifier, Results

Results
Number of hidden-layer PEs | Using validation data set | Test set (% mean) | Test set (% best)
10                         | No                        |                   |
                           | Yes                       |                   |
                           | Yes                       |                   |
                           | Yes                       |                   |
                           | Yes                       |                   |
                           | No                        |                   |
Mean data subtraction was applied. Features: 12th-order LPC.

Outline: Introduction, Features, Classifier, Results

Thanks for your patience!