Automated Interpretation of EEGs: Integrating Temporal and Spectral Modeling Christian Ward, Dr. Iyad Obeid and Dr. Joseph Picone Neural Engineering Data Consortium College of Engineering Temple University Philadelphia, Pennsylvania, USA

NEDC Tutorial, November 8

Abstract

The goal of this presentation, the second part of a two-part presentation on our research on EEGs, is to describe how we apply contemporary machine learning technology to the problem of interpreting EEGs. The input to the system is an EEG. The output is a transcribed signal and a probability vector containing probabilities associated with various diagnoses. Machine learning is used to optimize the parameters of the underlying mathematical models. Central to this process is the existence of a large corpus of EEGs. In this case, we leverage the TUH EEG Corpus, which contains over 20,000 EEGs collected over an 11-year period beginning in 2002. The corpus contains patient histories and diagnoses, but does not contain time-aligned, detailed transcriptions of the signals. A major goal of this presentation is to explain how we can build powerful systems using a semi-supervised, or even unsupervised, process. [… this paragraph will eventually say something about performance …]

Transcriptions

[Figure: N-channel EEG signal with transcriptions (prm_07, prm_03, prm_05, prm_09, prm_01)]

What are primitives? A set of events (e.g., prm_07 above), or artifacts, appearing in the EEG waveforms that physicians use to confirm a diagnosis. Each diagnosis can be represented as a sequence of these primitives. There are typically 6 to 12 types of artifacts (e.g., spike, GPED, eye blink).

What is a transcription? A sequence of primitives, or labels, that describes each segment, or epoch, of the waveform. A time-aligned transcription includes the start/stop times of each symbol.

Transcriptions and Graphs

Transcriptions, often called "truth markings", are used by machine learning systems to discover a mapping between the waveform (input) and the transcription (output). Transcriptions can be simple high-level classifications (e.g., stroke) or lower-level descriptions (e.g., "noise prm_07 noise prm_05 noise"). Transcriptions can be linear (a left-to-right graph with one level) or hierarchical (containing high-level symbols that can be represented as sequences of primitives). Transcriptions can be represented as graphs and are often coded in XML.
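Since transcriptions are often coded in XML, a linear, time-aligned transcription for one channel might look like the following sketch. The schema here (tag and attribute names) is hypothetical and purely illustrative, not the actual NEDC format:

```xml
<!-- Hypothetical schema: tag and attribute names are illustrative only -->
<transcription channel="1">
  <event label="background" start="0.0" stop="2.4"/>
  <event label="prm_07"     start="2.4" stop="3.1"/>
  <event label="background" start="3.1" stop="5.0"/>
</transcription>
```

A hierarchical transcription would add higher-level elements (e.g., a diagnosis) whose children are sequences of these primitive events.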

Supervised Learning

Modern pattern recognition systems often require transcriptions, but not time alignments, for training. A generative model is trained by increasing the likelihood that the model produced the data given the transcription. Transcriptions can be used in different ways during the training process:

 Fully supervised: every sample of the waveform is assigned to one or more labels; time alignments for the labels are provided (start and stop times).
 Semi-supervised: only the labels are provided; unlabeled data is often assumed to be "background noise"; no alignments.
 Partially supervised (or flexible): some labels are missing.
 Unsupervised: no labels; self-organization (e.g., clustering).

Forced Alignment

Assume a single channel is labeled: "background" prm_07 "background" prm_05 "background". Further, assume each primitive is represented by some sort of finite state machine (e.g., a hidden Markov model). During semi-supervised training, the system attempts to find the best alignment between the primitives and the data. The result of this optimization process is a "state-frame map": each state of the model is mapped to one or more frames of data. Every frame of data is accounted for and mapped to some state in some model.
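The alignment step described above can be sketched as a Viterbi dynamic program over a strictly left-to-right chain of states. This is a minimal illustration under simplifying assumptions (scalar log-likelihoods per frame, fixed loop/next transition probabilities), not the NEDC implementation:

```python
import math

def forced_align(log_b, loop_lp=math.log(0.5), next_lp=math.log(0.5)):
    """Viterbi forced alignment for a strictly left-to-right model.

    log_b[t][s] is the log-likelihood of frame t under state s.
    The alignment must start in state 0, end in the last state, and
    only stay in a state or advance by one (so len(log_b) >= n_states).
    Returns the state index assigned to each frame: the "state-frame map".
    """
    T, S = len(log_b), len(log_b[0])
    NEG = float("-inf")
    delta = [[NEG] * S for _ in range(T)]   # best log-score ending in (t, s)
    back = [[0] * S for _ in range(T)]      # backpointers
    delta[0][0] = log_b[0][0]               # must start in the first state
    for t in range(1, T):
        for s in range(S):
            stay = delta[t - 1][s] + loop_lp
            move = delta[t - 1][s - 1] + next_lp if s > 0 else NEG
            if move > stay:
                delta[t][s], back[t][s] = move + log_b[t][s], s - 1
            else:
                delta[t][s], back[t][s] = stay + log_b[t][s], s
    # Backtrace from the final state to recover the state-frame map.
    path = [S - 1]
    for t in range(T - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]
```

For example, with two states and four frames whose likelihoods favor state 0 then state 1, `forced_align([[0.0, -5.0], [0.0, -5.0], [-5.0, 0.0], [-5.0, 0.0]])` returns `[0, 0, 1, 1]`: every frame is accounted for and mapped to some state.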

Case Study: Speech Recognition and Phonemes

Case Study: Hidden Markov Models

[Figure: 3-state left-to-right HMM topology for prm_07]

Each primitive is represented by an N-state hidden Markov model (HMM). The topology of the model is typically a 3-state "left-to-right" HMM, as shown. The supervised learning process aligns the models to the data, and then reestimates the parameters of each model based on all the data assigned to its states (based on EM and Baum-Welch training). The process is iterative, and the models are guaranteed to converge to a local maximum of the likelihood. Three passes of training are usually adequate for each stage of parameter training.
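The reestimation half of this loop can be sketched as follows: given a state-frame map from the alignment step, pool the frames assigned to each state and refit that state's output distribution. This is a simplified Viterbi-style (hard-assignment) update with scalar features and Gaussian outputs, shown for illustration; full Baum-Welch uses soft state occupancies instead:

```python
import statistics

def reestimate(frames, state_map, n_states):
    """One hard-assignment reestimation step.

    frames: list of scalar feature values, one per frame.
    state_map: state index assigned to each frame by the alignment step.
    Returns (mean, variance) of the frames pooled for each state.
    """
    params = []
    for s in range(n_states):
        pooled = [f for f, st in zip(frames, state_map) if st == s]
        mu = statistics.fmean(pooled)
        var = statistics.pvariance(pooled, mu=mu)
        params.append((mu, var))
    return params
```

Training then iterates: realign the models to the data using the new parameters, reestimate again, and repeat until the likelihood stops improving.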

Future Work

(1) Baseline System No. 1 (Jan. 1):
 Simple MFCC-like features
 Fixed-size epochs (approx. 1 sec)
 Process each channel independently
 Classify each epoch on each channel as normal/abnormal
 Classify the session as normal/abnormal based on the epoch classifications

(2) Baseline System No. 2 (Feb. 1):
 MFCC-like features / channel-independent processing
 HMM models for 6 to 12 primitives
 Standard HMM training using semi-supervised transcriptions
 NN or RF classifier for mapping HMM scores to diagnoses

(3) Optimal System (Jun. 1):
 Deep learning for the primitive models
 NN or RF classifier for mapping to diagnoses
 Explore different features, topologies, etc. using systems (2) and (3)
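The epoch-to-session decision in Baseline System No. 1 can be sketched as a simple aggregation rule. Both thresholds below are hypothetical placeholders for illustration; the slides do not specify the actual decision rule:

```python
def classify_session(epoch_scores, epoch_thresh=0.5, session_thresh=0.1):
    """Hypothetical two-stage decision rule for Baseline System No. 1.

    Flag each fixed-size epoch as abnormal when its score exceeds
    epoch_thresh, then call the session abnormal when the fraction of
    abnormal epochs exceeds session_thresh. Thresholds are illustrative.
    """
    flags = [score > epoch_thresh for score in epoch_scores]
    abnormal_fraction = sum(flags) / len(flags)
    return "abnormal" if abnormal_fraction > session_thresh else "normal"
```

In a per-channel variant, this rule would run once per channel and the channel decisions would be aggregated in a similar way.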

Brief Bibliography of Relevant Documentation

[1] Picone, J. (1993). Signal modeling techniques in speech recognition. Proceedings of the IEEE, 81(9), 1215–1247.
[2] Picone, J. (1990). Continuous speech recognition using hidden Markov models. IEEE ASSP Magazine, 7(3), 26–41.