Wong, Gardner, Krieger, Litt (2006) Zack Dvey-Aharon, March 2008.

- What EEG is and how it may help to indicate seizures
- Past work and the goal of the study
- HMM: a short introduction
- The model used in the study and its restrictions
- Methodology overview
- Results and remarks
- Criticism
- Questions

- EEG (electroencephalography) is the measurement of electrical activity produced by the brain, as recorded from electrodes placed on the scalp.

- Seizure (epilepsy)

- When a seizure occurs, it is visible in some of the areas/channels of the EEG signal.
- In all of these cases there are pre-seizure spikes that appear very clearly.
- However, spikes and long disturbances often occur even in a totally healthy patient.

- As we have seen, medical studies state that spikes appear differently before seizures, and that they can therefore be used for predictive analysis.
- However, no method has convincingly demonstrated sufficient prospective seizure prediction.
- The problematic trade-off: accuracy vs. a low false-positive rate (FPR).
- The authors claim that the current top methods in the literature (1) rely on study designs that add many assumptions, and (2) address only extreme cases with a high rate of seizures, failing to control the FPR.

- HMM = Hidden Markov Model. These mathematical models rest on the Markovian assumption:
- (1) Each observation is an outcome of a "hidden state", one state in a chain that represents the state of the object.
- (2) The probability of changing from one state to another depends only on the last N transitions (the N-th-order Markovian assumption).
- (3) Observations are stochastically distributed according to the current state.

- HMM parameters:
  - X: the states of the model
  - Y: the observations
  - A: a matrix of transition probabilities
  - B: a matrix of emission probabilities
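Concretely, these parameters can be held as a few plain arrays. A minimal Python sketch (the state names, observation alphabet, and all probability values below are illustrative, not taken from the paper):

```python
# A minimal discrete HMM as plain Python data structures.
states = ["baseline", "detected", "seizure"]   # X: hidden states
obs_symbols = ["normal", "alarm"]              # Y: observation alphabet

# A[i][j] = P(next state = j | current state = i); each row sums to 1.
A = [
    [0.98, 0.02, 0.00],
    [0.05, 0.90, 0.05],
    [0.00, 0.00, 1.00],
]

# B[i][k] = P(observation = k | state = i); each row sums to 1.
B = [
    [0.95, 0.05],
    [0.20, 0.80],
    [0.00, 1.00],
]

def check_stochastic(M, tol=1e-9):
    """Every row of a probability matrix must sum to 1."""
    return all(abs(sum(row) - 1.0) < tol for row in M)
```

Each row of A and B is a probability distribution over next states or symbols, which `check_stochastic` verifies.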

- First, we train a three-state HMM, with states 1, 2, and 3 denoting the baseline, detected, and seizure states, respectively.
- Model restrictions:
  (1) a_ii = 1 - 1/D_i, where D_i is the average duration of state i
  (2) a_13 < a_23
  (3) b_11 > b_12, b_21 < b_22
  (4) b_33 = 1, b_13 = b_23 = b_31 = b_32 = 0
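Restriction (1) ties the self-transition probability to the average state duration: a state with a_ii = 1 - 1/D_i has a geometric dwell time whose mean is D_i steps. A small sketch (the durations in D and the split ratios for the leftover transition mass are made-up numbers; only the a_ii formula comes from the slide):

```python
def self_transition(mean_duration):
    """a_ii = 1 - 1/D_i: geometric dwell time with mean D_i steps."""
    return 1.0 - 1.0 / mean_duration

# Illustrative mean state durations, in analysis windows:
# baseline, detected, seizure.
D = [1000.0, 50.0, 20.0]
a_ii = [self_transition(d) for d in D]

# Restriction (2): the direct baseline-to-seizure transition must be
# rarer than detected-to-seizure (a_13 < a_23). Splitting the leftover
# probability mass 1 - a_ii accordingly (split ratios are illustrative):
a_13 = 0.1 * (1.0 - a_ii[0])   # baseline -> seizure
a_23 = 0.9 * (1.0 - a_ii[1])   # detected -> seizure
```

With these numbers the baseline state is expected to persist for about 1000 windows before leaving, matching the intuition that seizures are rare events.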

- The prediction algorithm is trained on the raw EEG signal, and an expert's labeled state observations are added to train the HMM network.
- Once the model is trained, the Viterbi algorithm finds the most probable state sequence, clearing "transition" noise.
- Then the statistical association between the seizure and detected states can be measured in order to validate the hypothesis.
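The Viterbi step, which recovers the most probable hidden-state sequence from the observations, can be sketched as follows. This is a generic discrete-HMM implementation run on toy two-state matrices, not the trained three-state model from the paper:

```python
import math

def _log(x):
    # Zero probabilities (e.g. the forbidden transitions such as b_13 = 0)
    # map to -inf in the log domain.
    return math.log(x) if x > 0 else float("-inf")

def viterbi(obs, pi, A, B):
    """Most probable hidden-state path for a discrete HMM (log domain)."""
    n = len(A)
    # delta[i]: best log-probability of any path ending in state i.
    delta = [_log(pi[i]) + _log(B[i][obs[0]]) for i in range(n)]
    back = []
    for o in obs[1:]:
        prev, new = [], []
        for j in range(n):
            best = max(range(n), key=lambda i: delta[i] + _log(A[i][j]))
            prev.append(best)
            new.append(delta[best] + _log(A[best][j]) + _log(B[j][o]))
        back.append(prev)
        delta = new
    # Backtrack from the best final state.
    path = [max(range(n), key=lambda i: delta[i])]
    for prev in reversed(back):
        path.append(prev[path[-1]])
    return path[::-1]

# Toy two-state example (all values illustrative):
pi = [0.9, 0.1]
A = [[0.8, 0.2], [0.3, 0.7]]
B = [[0.9, 0.1], [0.2, 0.8]]
path = viterbi([0, 0, 1, 1, 1], pi, A, B)  # -> [0, 0, 1, 1, 1]
```

Because Viterbi scores whole paths rather than individual windows, a single noisy observation is not enough to flip the decoded state, which is exactly the "transition noise" smoothing described above.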

- The algorithm shows two major achievements:
- (1) Demonstrated on a specific prediction algorithm (Gardner, 2006), the HMM smoothed the output, lowering the FPR by more than 70%.
- (2) Using the algorithm as a post-processing tool can increase the detection ratio (demonstrated against that same prediction algorithm: 17/29 versus 5/29).
[Figure legend: red arrows mark false positives; the black arrow marks a false negative.]

[Top figure: global minima of the HMM training process. Bottom figure: how using the model can help drop false positives.]

- Enthusiasm caused the authors to lose focus, drifting from evaluating prediction algorithms to improving them.
- All experiments were based on evaluating and improving one specific algorithm, which is not explained and is, by every measure shown, a very weak predictor.
- The model definition could easily be improved by adding more restrictions (as the B-matrix results suggest, for example).
- The model also contains statistical estimations, contradicting the claim that the methodology is free from study designs and data assumptions.

Questions?