Dual Peak Detection and Its Functions in Thoracic Identification
Purpose
Dr. McGregor previously created and implemented a biometric identification system called THIS. The THIS software attempts to solve an identification problem: given a population of subjects, can we correctly identify a given individual in that population based on some quantifiable traits? Classification software had already been used successfully to identify individuals from electrocardiogram (ECG) and phonocardiogram (PCG) data. Laser Doppler Vibrometer (LDV) signals, however, stood at only 61% accuracy, leaving clear room for improvement.
So why was LDV accuracy low? Part I: segment analysis
An LDV signal represents a person's heartbeat, measured at the carotid artery. Heartbeats themselves consist of multiple parts.
Questions:
Do all parts contain equal information?
Which parts should we focus on for information?
Which parts (if any) should we ignore?
So why was LDV accuracy low? Part II: segment comparisons
Time dependence: our computational analysis is performed on spectrograms of the LDV signal. Spectrograms are broken into time-dependent bins, so our spectrogram features are time dependent. But heartbeat segments are chopped out of a continuous signal, so their timing is not synchronized.
Question: how should we compare two heartbeat segments?
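The time-binned spectrogram idea above can be sketched as follows. This is an illustrative pure-Python version (THIS itself was implemented in MATLAB), and the frame and hop sizes are placeholder values, not the parameters used in the project.

```python
# Sketch of time-dependent binning: a spectrogram is computed over
# successive frames of the signal, so each column corresponds to one
# time bin. Plain DFT for illustration; frame/hop sizes are placeholders.
import cmath

def spectrogram(x, frame=8, hop=4):
    """Magnitude spectrum of each frame; one column per time bin."""
    cols = []
    for start in range(0, len(x) - frame + 1, hop):
        w = x[start:start + frame]
        col = []
        for k in range(frame // 2 + 1):          # non-negative frequencies
            s = sum(w[n] * cmath.exp(-2j * cmath.pi * k * n / frame)
                    for n in range(frame))
            col.append(abs(s))
        cols.append(col)
    return cols
```

Because each column is tied to a time bin, two segments whose peaks fall in different bins produce misaligned spectrogram columns, which is exactly the comparison problem the question above raises.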
Dynamic Time Warping (DTW) and Optimal Subsequence Bijection (OSB)
Both are commonly used in signal comparison: algorithms that "warp" signals in time so that corresponding features are matched, rather than comparing samples at fixed Cartesian coordinates.
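As a reference point, a minimal DTW implementation looks like the sketch below. This is a textbook illustration in Python, not the code used in THIS (which was written in MATLAB).

```python
# Minimal sketch of Dynamic Time Warping: find the cheapest monotone
# alignment between two sequences, allowing samples to be stretched.

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) DTW with absolute-difference cost."""
    n, m = len(a), len(b)
    INF = float("inf")
    # dp[i][j] = cost of the best warping path aligning a[:i] with b[:j]
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # stretch a
                                  dp[i][j - 1],      # stretch b
                                  dp[i - 1][j - 1])  # match
    return dp[n][m]
```

The stretching step is the "warping" the next slide criticizes: it makes the segments comparable, but at the cost of distorting the very timing information that may distinguish one subject from another.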
Shift to Dual Peak Detection
Follow-up work on Dr. Chen's LDV research showed that including either the DTW or the OSB algorithm produced no statistically significant difference in results under a similar approach. Why? This suggests that "warping" heartbeat segments destroys too much of the information.
Additionally, discussion with Dr. Moshe Khurgel from the Biology department raised the idea that the majority of identifiable information in a heartbeat comes not from systolic pressure but from diastolic pressure.
This led to a focus on increased analysis of the secondary peaks as points of interest.
Problem at hand
The previous implementation only allowed for primary peak detection; it did not consider secondary peaks.
Desynchronization problem: once we have the secondary peaks, how can we make them comparable with one another in a time-dependent space?
Secondary Peak Detection
The previous algorithm required a window-size parameter to determine primary peaks; a poorly chosen value could produce many false positives or false negatives. Based on conversations with Jose Corona and Dr. Verne Leininger from the Math department, we implemented a new peak detection algorithm that requires no window size and also detects secondary peaks.
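The deck does not name the window-free algorithm, so the sketch below shows one common approach under that assumption: rank local maxima by their prominence (height above the higher of the surrounding valleys), which needs no caller-supplied window. All function names here are illustrative, and the original was MATLAB, not Python.

```python
# Hedged sketch of window-free peak detection via prominence ranking.

def local_maxima(x):
    """Indices i with x[i-1] < x[i] >= x[i+1]."""
    return [i for i in range(1, len(x) - 1)
            if x[i - 1] < x[i] >= x[i + 1]]

def prominence(x, i):
    """Peak height above the higher of the two flanking valleys."""
    peak = x[i]
    lo, j = peak, i - 1
    while j >= 0 and x[j] <= peak:       # walk left until a higher sample
        lo = min(lo, x[j]); j -= 1
    left_base = lo
    lo, j = peak, i + 1
    while j < len(x) and x[j] <= peak:   # walk right until a higher sample
        lo = min(lo, x[j]); j += 1
    right_base = lo
    return peak - max(left_base, right_base)

def two_peaks(x):
    """(primary, secondary) peak indices, ordered by descending prominence.

    Assumes the segment contains at least two local maxima.
    """
    idx = sorted(local_maxima(x), key=lambda i: prominence(x, i), reverse=True)
    return idx[0], idx[1]
```

Prominence is scale-aware by construction: a small ripple next to a large peak gets a low score regardless of how wide a "window" around it would be, which is why no window-size parameter is needed.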
Now to align the heartbeat segments
Rather than warping the data around our two key points of interest (and possibly destroying information), we:
extract a fixed number of samples before and after the primary and secondary peaks, and
concatenate those windows to form the signal for that heartbeat segment.
This guarantees that the primary and secondary peaks of every segment, for every subject, are aligned, which lets feature selection produce the best possible results.
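The extract-and-concatenate step above can be sketched as follows. The function name and window sizes are placeholders, not the project's optimized values, and the original implementation was in MATLAB.

```python
# Sketch of peak-aligned segment construction: take fixed windows around
# the primary and secondary peaks and concatenate them.

def align_segment(signal, primary, secondary,
                  pre_p, post_p, pre_s, post_s):
    """Concatenate fixed windows around the two peak indices.

    Every segment built this way has its primary peak at index pre_p and
    its secondary peak at index pre_p + post_p + pre_s, so segments are
    directly comparable sample-for-sample.
    """
    p_win = signal[primary - pre_p : primary + post_p]
    s_win = signal[secondary - pre_s : secondary + post_s]
    return p_win + s_win
```

Because every aligned segment has the same length and the same peak positions, no warping is needed downstream: the spectrogram bins of any two segments now refer to the same part of the heartbeat.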
Segments: Old versus New
Alignment of Key Features:
Testing Results
With optimized parameters of:
11 training segments
15 testing segments
25 bins per segment
240 samples before the primary peak
220 samples after the primary peak
25 samples before the secondary peak
30 samples after the secondary peak
We achieved:
A 13% relative increase in rank-1 accuracy (61% to 69%)
An overall increase in non-rank-1 accuracies
Higher robustness with respect to the parameters
Additional minor items
Implemented a caching system that saves and loads peak locations to a file, keyed by each signal's unique ID, to cut down on computation time.
Made numerous edits to the THIS software's GUI and backend data manipulation to eliminate redundancies, add dialogue options, and modify the visible parameters.
Began an alteration to THIS that would let the user enter the training and testing parameters once, then automatically choose the sample windows around the peaks, self-optimizing via Monte Carlo.
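The peak-location cache can be sketched like this. The JSON file format, file name, and function names are assumptions for illustration; THIS itself stored its data via MATLAB, and the deck does not specify the file format.

```python
# Hedged sketch of the peak cache: peak indices are saved to disk keyed
# by the signal's unique ID, so repeated runs skip the detection step.
import json
import os

def load_cache(path):
    """Load the ID -> peak-locations map, or start empty."""
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return {}

def get_peaks(signal_id, signal, detect, cache, path):
    """Return cached peaks for signal_id, computing and saving on a miss."""
    if signal_id not in cache:
        cache[signal_id] = detect(signal)   # the expensive detection step
        with open(path, "w") as f:
            json.dump(cache, f)
    return cache[signal_id]
```

Keying on the signal's unique ID means the cache survives parameter sweeps: peak detection runs once per signal, while the (cheap) windowing and feature-selection steps can be re-run freely during optimization.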
Pictures Cited
content/uploads/2013/05/ecg-2.gif
The remaining images are screenshots from THIS, which was implemented in MATLAB.