Presentation transcript:

A Lightweight and Inexpensive In-ear Sensing System for Automatic Whole-night Sleep Stage Monitoring
Anh Nguyen, Raghda Alqurashi, Zohreh Raghebi
Presenters: Justin Shen & Ruomin Ba

CONTENTS
01 Background (Section 1, 2)
02 Challenges (Section 3, 7)
03 Hardware (Section 3, 4, 5, 6, 7)
04 Architecture
05 Evaluation (Section 8)
06 Conclusion (Section 10)

01 BACKGROUND

Background
Sleep is important for healthy body function and has an impact on:
- Learning
- Psychological health
- Hormone regulation
- Appetite
- And much more
Thus, it is clinically important to reliably monitor a person's sleep health.
Figure: https://www.israel21c.org/sleep-vital-to-repair-dna-damage-israeli-study-finds/

Background
Sleep Monitoring: Polysomnography (PSG)
Typically, sleep studies involve polysomnography (PSG) in a sleep laboratory.
PSG monitors three main signals (among others):
(1) Brain activity (EEG)
(2) Eye movements (EOG)
(3) Muscle activity (EMG)
Figure: http://www.alphasleeplab.com/blog/2017/5/22/sleep-studies-part-1

Background
Drawbacks of PSG:
- Obtrusive attachment of a large number of wired sensors
- Need to travel to a sleep laboratory
- Risk of losing sensor contact whenever the patient moves
- Need for a well-trained expert to review the sleep staging results
The authors propose a low-cost in-ear biosignal sensing system (LIBS) to monitor sleep without these drawbacks.

02 CHALLENGES

Challenges
- The desired signal has low amplitude, so LIBS must be able to measure it well without sacrificing comfort
- EEG, EOG, and EMG signals must be separated from the single-channel in-ear mixture
- The separation algorithm must be robust to differences in physiological condition and other variation among individuals

03 HARDWARE

Signal Acquisition: Hardware Design (1/3)
The sensor is composed of an earplug base with electrodes.
- Block foam earplug as the base
- Allows the sensor to follow the shape of the ear canal
- Provides comfort while avoiding the high cost of personalization
- Also reduces motion noise caused by jaw movement

Signal Acquisition: Hardware Design (2/3)
The sensor is composed of an earplug base with electrodes.
- The surface is coated with layers of thin silver leaves
- Conductive silver cloth acts as the electrode
- Together with the base, this gives a low and consistent surface resistance -> reliable signals

Signal Acquisition: Hardware Design (3/3)
- Oval-shaped electrodes based on the anatomy of the ear canal
- Experimented with different materials (silver leaves, fabric, copper)
- Increasing the distance between the main electrodes -> better signal fidelity
- Signals are amplified and passed to a microcontroller

04 ARCHITECTURE

Architecture
Three separate modules: data acquisition and preprocessing, signal separation, and sleep stage classification.

Architecture
Data Acquisition and Preprocessing
- LIBS uses a brain-computer interface (BCI) board, OpenBCI, to sample and digitize the in-ear signal
- 6V battery source for power and safety
- Sampling rate of 2 kHz and a gain of 24 dB
- Captured signals are preprocessed to eliminate possible signal interference (from body movement, electrical noise, etc.)
- Signals are stored on an SD card and then processed offline: signal separation, followed by signal classification
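The slides do not specify the exact preprocessing chain; a minimal sketch of one plausible version (a power-line notch filter plus a band-pass filter, assuming SciPy and the 2 kHz sampling rate above; the cut-off frequencies are illustrative, not the paper's values) could look like this:

    import numpy as np
    from scipy.signal import butter, filtfilt, iirnotch

    FS = 2000  # Hz, OpenBCI sampling rate reported in the slides

    def preprocess(raw, fs=FS):
        """Remove power-line hum, drift, and high-frequency noise from one channel."""
        # Notch filter to suppress electrical interference (assuming 60 Hz mains)
        b_notch, a_notch = iirnotch(w0=60.0, Q=30.0, fs=fs)
        x = filtfilt(b_notch, a_notch, raw)
        # Band-pass roughly covering EEG/EOG/EMG content (illustrative cut-offs)
        b_bp, a_bp = butter(N=4, Wn=[0.3, 100.0], btype="bandpass", fs=fs)
        return filtfilt(b_bp, a_bp, x)

    # Example: filter 30 seconds of synthetic single-channel data
    signal = np.random.randn(30 * FS)
    clean = preprocess(signal)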

Architecture
Signal Separation
The biosignal sensed by LIBS is a single-channel mixture of four components: EEG, EOG, EMG, and noise.
Why is separating EEG, EOG, EMG, and noise difficult?
- The three signals have overlapping characteristics in both the time and frequency domains
- The sources generating them activate randomly
- The signals vary from person to person and across recordings

Architecture
Signal Separation
What tools could we use to solve the problem?
A. Principal Component Analysis (PCA)
B. Independent Component Analysis (ICA)
C. Empirical Mode Decomposition (EMD)
D. Non-negative Matrix Factorization (NMF)
Why did the authors choose NMF?
- The other methods (PCA, ICA, EMD) require the number of collected channels to be equal to or larger than the number of source signals
- With those methods, the factorized components describing the source signals must be selected manually
- LIBS has only 1 channel, which is fewer than the number of signals of interest (3 signals), so NMF is the suitable choice

Architecture
Signal Separation
What is Non-negative Matrix Factorization (NMF)?
- NMF is a kind of non-linear dimensionality reduction algorithm
- Optimization target: given a non-negative matrix V, find non-negative matrices W and H such that V ≈ WH, i.e. minimize a divergence D(V || WH) subject to W, H >= 0
- The non-negativity makes the resulting matrices easier to inspect
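As an illustration of the factorization itself (not the authors' implementation), here is a minimal NMF sketch using the classic multiplicative updates for the Frobenius cost, assuming NumPy; V could be, for example, a magnitude spectrogram of the in-ear signal:

    import numpy as np

    def nmf(V, k, n_iter=200, eps=1e-9):
        """Factor a non-negative matrix V (m x n) into W (m x k) and H (k x n)."""
        m, n = V.shape
        rng = np.random.default_rng(0)
        W = rng.random((m, k)) + eps
        H = rng.random((k, n)) + eps
        for _ in range(n_iter):
            # Multiplicative updates (Lee & Seung) for the objective ||V - WH||^2
            H *= (W.T @ V) / (W.T @ W @ H + eps)
            W *= (V @ H.T) / (W @ H @ H.T + eps)
        return W, H

    # Toy example: a random non-negative "spectrogram" split into 3 components
    V = np.abs(np.random.randn(129, 500))
    W, H = nmf(V, k=3)
    print(W.shape, H.shape)  # (129, 3) (3, 500)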

Architecture
Signal Separation
What are the problems with applying NMF to LIBS?
- The non-convex solution space of NMF leads to non-unique estimates of the original source signals
- The biosignals vary across different nights of sleep
How are these two problems solved?
- Combine NMF with a spectral template learning algorithm
- Use the Itakura-Saito (IS) divergence as the cost function: its scale-invariance property helps minimize the variance across different nights of sleep

Architecture
Signal Separation
- W is the spectral template matrix representing basis patterns (components)
- H is the activation matrix expressing the time points (positions) at which the signal patterns in W are activated
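To make the template idea concrete, here is a hypothetical sketch (not the paper's code) of estimating only the activations H under the Itakura-Saito divergence while a pre-learned spectral template matrix W is held fixed, using the standard beta-divergence multiplicative update with beta = 0; NumPy is assumed:

    import numpy as np

    def is_nmf_activations(V, W, n_iter=200, eps=1e-9):
        """Estimate activations H with the spectral templates W held fixed,
        minimizing the Itakura-Saito divergence D_IS(V || WH)."""
        k = W.shape[1]
        rng = np.random.default_rng(0)
        H = rng.random((k, V.shape[1])) + eps
        for _ in range(n_iter):
            WH = W @ H + eps
            # Beta-divergence multiplicative update with beta = 0 (Itakura-Saito)
            H *= (W.T @ (V / WH**2)) / (W.T @ (1.0 / WH) + eps)
        return H

    # Usage idea: W_eeg, W_eog, W_emg would come from per-user template learning;
    # stacking them lets each source's activations be read off from the rows of H.
    # W = np.hstack([W_eeg, W_eog, W_emg]); H = is_nmf_activations(V, W)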

Architecture
Sleep Stage Classification
- Feature Extraction: find the patterns relating EEG, EOG, and EMG to the sleep stages
- Feature Selection: among all extracted features, find which ones are important
- Classification: Decision Tree and Random Forest
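The slides do not list the exact features; as one hedged example of the kind of features commonly extracted from such signals, the sketch below computes spectral band powers per 30-second epoch with SciPy's Welch estimator (the band limits and epoch length are assumptions, not necessarily the paper's values):

    import numpy as np
    from scipy.signal import welch

    FS = 2000          # Hz
    EPOCH_SEC = 30     # sleep is conventionally scored in 30-second epochs

    # Illustrative frequency bands; the paper's actual feature set may differ
    BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

    def band_power_features(epoch, fs=FS):
        """Return one feature vector of band powers for a single epoch of one signal."""
        freqs, psd = welch(epoch, fs=fs, nperseg=4 * fs)
        feats = []
        for lo, hi in BANDS.values():
            mask = (freqs >= lo) & (freqs < hi)
            feats.append(psd[mask].sum())  # sum PSD bins in the band (~band power)
        return np.array(feats)

    # Example: features for every epoch of a separated (e.g. EEG) component
    signal = np.random.randn(10 * EPOCH_SEC * FS)
    epochs = signal.reshape(-1, EPOCH_SEC * FS)
    X = np.vstack([band_power_features(e) for e in epochs])  # (n_epochs, n_bands)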

Architecture Sleep Stage Classification

Architecture
Sleep Stage Classification
What is the reason to do feature selection?
- The performance of a classification algorithm can degrade when all extracted features are used to determine the sleep stages
Comment (presenters): The figure above only shows results for the decision tree and random forest algorithms. Many sophisticated classification algorithms can automatically weight the features relevant to the output; for example, a neural network can reduce the influence of unimportant features given sufficient training. In that case, explicit feature selection may not be necessary.

Architecture
Sleep Stage Classification
How is feature selection done?
- Forward Selection (FSP) identifies the most effective combination of features extracted from the in-ear signal
- Features are selected sequentially until adding a new feature yields no improvement in prediction performance
- A feature is added to the set of selected features only if it both improves the misclassification error and is not redundant given the features already selected
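As a hedged illustration of this greedy procedure (not the authors' exact implementation), the sketch below adds one feature at a time and stops once cross-validated accuracy no longer improves; scikit-learn is assumed:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    def forward_selection(X, y, tol=1e-3):
        """Greedy forward feature selection driven by cross-validated accuracy."""
        remaining = list(range(X.shape[1]))
        selected, best_score = [], 0.0
        while remaining:
            scores = []
            for j in remaining:
                cols = selected + [j]
                clf = RandomForestClassifier(n_estimators=100, random_state=0)
                scores.append((cross_val_score(clf, X[:, cols], y, cv=5).mean(), j))
            score, j = max(scores)
            if score <= best_score + tol:   # no meaningful improvement -> stop
                break
            selected.append(j)
            remaining.remove(j)
            best_score = score
        return selected

    # Example with synthetic features and 5 sleep-stage labels
    X = np.random.randn(300, 12)
    y = np.random.randint(0, 5, size=300)
    print(forward_selection(X, y))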

Architecture
Sleep Stage Classification
Which classification algorithms did the authors consider?
- Neural Network. Rejected: long training time and complexity
- SVM. Rejected: long training time and difficulty understanding the learned function
- Decision Tree & Random Forest. Chosen: easy to implement and interpret, highest performance, flexible, and works well with categorical data
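A minimal sketch of training and evaluating such a classifier on epoch-level feature vectors, assuming scikit-learn (the feature matrix X and stage labels y here are hypothetical, not the paper's data):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score, confusion_matrix
    from sklearn.model_selection import train_test_split

    # Hypothetical inputs: one row of selected features per 30-second epoch,
    # labels 0..4 for the sleep stages (e.g. Wake, N1, N2, N3, REM)
    X = np.random.randn(1000, 8)
    y = np.random.randint(0, 5, size=1000)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_train)
    pred = clf.predict(X_test)

    print("accuracy:", accuracy_score(y_test, pred))
    print(confusion_matrix(y_test, pred))
    print("feature importances:", clf.feature_importances_)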

05 EVALUATION

Evaluation System Performance

Evaluation Signal Acquisition

Evaluation Signal Acquisition

Evaluation Signal Separation

Evaluation Sleep Stage Classification

06 CONCLUSION

Conclusion
In this paper, the authors presented the design, implementation, and evaluation of LIBS.
- LIBS achieves sleep stage classification accuracy similar to PSG
- It has the advantages of low cost, easy operation, and comfort when worn during sleep
- It has the potential to become a fundamental sleep monitoring device in the future