Tonal Index in Digital Recognition of Lung Auscultation
Marcin Wiśniewski, Tomasz Zieliński
Signal Processing Algorithms, Architectures, Arrangements, and Applications Conference Proceedings (SPA), 2011
Presenter: Kun-Han Jhan
Advisor: Dr. Chun-Ju Hou
Date:

Outline
◦ Introduction
◦ Lung sounds
◦ Wheezes
◦ Descriptors
◦ Testing methodology
◦ Experiments
◦ Conclusions
◦ References

Introduction
Asthma
◦ Secretion or mucus
◦ Muscle contraction
◦ Main indicator: an appearance of wheezes in the breath cycle

Introduction
Lung auscultation
◦ A non-invasive test in asthma
◦ Evaluates the stage of the disease
◦ Evaluates the level of wheeze appearance
The main problems of lung auscultation
◦ Depends on the doctor's experience
◦ Subjective

Introduction
The advantages of digital lung auscultation
◦ Objective and unambiguous
◦ Telemedicine
  - The doctor can see the results without the necessity of a direct meeting
  - Patients do not have to go to the hospital
  - Increases the comfort of the patient's life
  - Decreases stress from direct meetings with doctors

Lung sounds
Location
◦ Trachea
◦ Bronchus
◦ Alveoli
Characteristics
◦ Noise-like
◦ Frequency range: 20 Hz ~ 1.6 kHz
(Figure: lung sound)

Wheezes
◦ A wheeze is a single- or multi-tone sound
  - Duration > 80 ms
  - Frequency: 100 Hz ~ 1 kHz
◦ Normal lung sounds mixed with wheezes

Wheezes Descriptors
Features:
◦ Kurtosis (K)
◦ Spectral Peaks Entropy (SPE)
◦ Frequency Ratio (FR)
◦ Spectral Flatness (SF)
◦ Tonal Index (TI)

Wheezes Descriptors
Kurtosis (K)
◦ Measures the level of peakedness of a probability distribution in the time domain
◦ K = 3 for a noisy signal with a normal distribution (lower for sub-Gaussian)
◦ K > 3 for a signal with wheezes

K = E[(x − μ)⁴] / σ⁴

where x is the input signal, μ its mean and σ² its variance.
(Figure: probability distribution with the mode, median and mean μ marked)
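A minimal sketch of the kurtosis descriptor (function and variable names are illustrative, not taken from the paper), assuming one 1024-sample lung-sound frame is passed in as a NumPy array:

```python
import numpy as np

def kurtosis(frame):
    """Kurtosis K = E[(x - mu)^4] / sigma^4 of one time-domain frame."""
    x = np.asarray(frame, dtype=float)
    mu = x.mean()
    var = x.var()                       # biased variance, matching the moment definition
    return np.mean((x - mu) ** 4) / var ** 2

# The slide reports K close to 3 for noise-like frames and K > 3 for frames with wheezes.
```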

Wheezes Descriptors
Spectral Peaks Entropy (SPE)
◦ Defined in the frequency domain
◦ C_n: the peak values of the frequency spectrum, normalized by the total sum of these peaks: p_n = C_n / Σ_k C_k
◦ Entropy: SPE = −Σ_n p_n · log p_n
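A sketch of the spectral peaks entropy under the reconstruction above; the peak-picking step (local maxima of the magnitude spectrum via SciPy) and the log base are my assumptions, since the slide does not specify them:

```python
import numpy as np
from scipy.signal import find_peaks

def spectral_peaks_entropy(frame, nfft=1024):
    """Entropy of the normalized spectral peak values C_n."""
    spectrum = np.abs(np.fft.rfft(frame, n=nfft))
    peak_idx, _ = find_peaks(spectrum)        # assumption: C_n are local maxima
    c = spectrum[peak_idx]
    p = c / c.sum()                           # p_n = C_n / sum_k C_k
    return -np.sum(p * np.log2(p + 1e-12))    # SPE = -sum_n p_n log p_n
```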

Wheezes Descriptors
Frequency Ratio (FR)
◦ A frequency-domain feature
◦ Signals with wheezes have higher values of this ratio than normal lung sounds
◦ FR = (area under the power spectral density within the ROI) / (area under the total power spectral density)
◦ The FR descriptor was modified and tested once again as an Energy Ratio (ER) descriptor
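A sketch of the frequency ratio; the region of interest is assumed here to be the 100 Hz ~ 1 kHz wheeze band, since the slides do not spell out the ROI:

```python
import numpy as np
from scipy.signal import periodogram

def frequency_ratio(frame, fs=8000, roi=(100.0, 1000.0)):
    """Area of the PSD inside the ROI divided by the area of the whole PSD."""
    f, psd = periodogram(frame, fs=fs)
    in_roi = (f >= roi[0]) & (f <= roi[1])
    return psd[in_roi].sum() / psd.sum()      # the bin width cancels in the ratio
```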

Wheezes Descriptors
Spectral Flatness (SF)
◦ A signal feature defined in the frequency domain
◦ The ratio of the geometric to the arithmetic mean of the spectral values: SF = G / A, where G is the geometric mean and A the arithmetic mean
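A sketch of spectral flatness as the ratio of the geometric to the arithmetic mean of the power spectrum; the small epsilon for numerical safety is my addition, not the paper's:

```python
import numpy as np

def spectral_flatness(frame, nfft=1024, eps=1e-12):
    """Geometric mean / arithmetic mean of the power spectrum.
    Close to 1 for flat, noise-like spectra; close to 0 for tonal (wheezy) ones."""
    power = np.abs(np.fft.rfft(frame, n=nfft)) ** 2 + eps
    geometric = np.exp(np.log(power).mean())
    arithmetic = power.mean()
    return geometric / arithmetic
```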

Wheezes Descriptors
Tonal Index (TI)
◦ A spectral feature
◦ Based on the MPEG psychoacoustic model
◦ Computed from the FFT magnitude and phase
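A hedged sketch of a tonality measure in the spirit of the MPEG-1 psychoacoustic model 2: the FFT magnitude and phase of each bin are predicted linearly from the two previous frames, and the prediction error (unpredictability) is mapped to a tonality value in [0, 1]. The paper's exact formulation may differ; the constants and names below follow the standard model description.

```python
import numpy as np

def tonality_index(frames):
    """Per-bin tonality of the newest frame, given at least three consecutive frames."""
    X = [np.fft.rfft(f) for f in frames[-3:]]
    mag = [np.abs(s) for s in X]
    ph = [np.angle(s) for s in X]
    # predict magnitude and phase of the current frame from the two previous ones
    mag_pred = 2.0 * mag[1] - mag[0]
    ph_pred = 2.0 * ph[1] - ph[0]
    # unpredictability: ~0 for perfectly predictable (tonal) bins, ~1 for noisy bins
    err = np.hypot(mag[2] * np.cos(ph[2]) - mag_pred * np.cos(ph_pred),
                   mag[2] * np.sin(ph[2]) - mag_pred * np.sin(ph_pred))
    c = err / (mag[2] + np.abs(mag_pred) + 1e-12)
    # map unpredictability to a tonality index in [0, 1] (MPEG model-2 style constants)
    return np.clip(-0.299 - 0.43 * np.log(c + 1e-12), 0.0, 1.0)
```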

Testing methodology
Tonal signals simulation
◦ Artificial wheezes: multi-frequency signals with three random frequencies (100 ~ 1200 Hz)
◦ Normal breathing signals
Features testing
◦ Signal frames: 1024 samples
◦ White Gaussian noise added at different SNR levels
◦ Sampling frequency: 8 kHz
Recognition
◦ SVM (Support Vector Machine)
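A sketch of the test-signal generation described above; the helper names are illustrative, and the SVM line only indicates how the descriptor vectors would be used (scikit-learn is an assumption, the paper does not name a toolkit):

```python
import numpy as np
from sklearn.svm import SVC

FS, N = 8000, 1024                      # sampling frequency and frame length from the slide
rng = np.random.default_rng()

def artificial_wheeze():
    """Multi-frequency signal with three random frequencies in 100-1200 Hz."""
    t = np.arange(N) / FS
    freqs = rng.uniform(100.0, 1200.0, size=3)
    return sum(np.sin(2 * np.pi * f * t) for f in freqs)

def add_white_noise(x, snr_db):
    """Add white Gaussian noise so that the frame has the requested SNR in dB."""
    noise = rng.standard_normal(x.size)
    noise *= np.sqrt(x.var() / (noise.var() * 10 ** (snr_db / 10.0)))
    return x + noise

# Descriptor vectors (K, SPE, FR/ER, SF, TI) computed from such frames would then
# be used to train and test the classifier, e.g.:
# clf = SVC(kernel="rbf").fit(train_features, train_labels)
```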

Experiments
Modeled wheezes
◦ Artificial wheezes mixed with normal, noise-like signals
◦ Training signals with different gains
Recognition process
◦ 100 samples

Experiments
Artificial signals (results figures)

Experiments
Hybrid data
◦ Artificial wheezes added to normal lung sounds taken from chest auscultation
◦ Recorded at 8 kHz / 16 bit
◦ Panasonic WM-61 microphones
Recognition process
◦ 28 samples

Experiments
Hybrid signals (results figures)

Experiments
Best-performing descriptors
◦ Artificial data: TI and SPE
◦ Hybrid data: TI
In both studies the FR descriptor shows the worst effectiveness.

Conclusions
The tonal index is more sensitive to tonality in noisy signals and reaches full efficiency at lower SNR values as well.
Increasing the number of features in the algorithm does not necessarily improve the effectiveness of recognition.
The best results are reached by algorithms with two features: {TI, ER} with 94.2% and {K, TI} with 94.6% effectiveness.

References
[1] S. Aydore, I. Sen, Y. P. Kahya, M. K. Mihcak, "Classification of respiratory signals by linear analysis," Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2009.
[2] Jianmin Zhang, Wee Ser, Jufeng Yu, T. T. Zhang, "A novel wheeze detection method for wearable monitoring systems," International Symposium on Intelligent Ubiquitous Computing and Education, 2009.
[3] A. H. Gray, J. D. Markel, "A spectral-flatness measure for studying the autocorrelation method of linear prediction of speech analysis," IEEE Trans. Acoust. Speech Signal Process., vol. 22, pp. 207–217, 1974.
[4] H. Pasterkamp, S. S. Kraman, G. R. Wodicka, "Respiratory sounds: advances beyond the stethoscope," Am. J. Respir. Crit. Care Med., vol. 156, no. 3, September 1997.