1 Classifying Physiological and Response Data to Detect Distracted Driving Events
Mitchell Kossoris, Catelyn Scholl, Zhi Zheng

2 Distracted Driving Dataset
- Controlled driving simulator
- 68 volunteers
- Same highway
- 4 driving stimuli types: no stimuli, cognitive stimuli, emotional stimuli, sensorimotor stimuli

3 Distracted Driving Dataset
Data collected:
- Speed
- Acceleration
- Brake force
- Steering
- Lane position
- Palm EDA
- Heart rate
- Breathing rate
- Gaze position

4 Processing - Normalization
Used to account for differences in each participant's range for each feature, e.g. person 1 has a resting heart rate of 70 bpm while person 2 rests at 80 bpm.
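
As a rough illustration of this step, the sketch below rescales each feature within each participant's own distribution (a per-participant z-score). The column names and the choice of z-scoring are assumptions for illustration; the slides do not state the exact normalization used.

    import pandas as pd

    def normalize_per_participant(df: pd.DataFrame, feature_cols, id_col="participant_id"):
        # Rescale each feature relative to that participant's own mean and
        # spread, so e.g. a 70 bpm vs. 80 bpm resting heart rate no longer
        # dominates comparisons between participants.
        out = df.copy()
        grouped = out.groupby(id_col)[feature_cols]
        out[feature_cols] = (out[feature_cols] - grouped.transform("mean")) / grouped.transform("std")
        return out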

5 Processing - Interpolation
- Forward-fills missing data
- Missing segments shorter than 10 seconds are linearly interpolated
- More data points available for analysis and less data removed
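
A minimal sketch of this gap-filling step in pandas, assuming a regularly sampled signal so that 10 samples correspond roughly to the 10-second threshold; the helper name and that interpretation of the threshold are assumptions, not facts from the slides.

    import pandas as pd

    def fill_short_gaps(signal: pd.Series, max_gap: int = 10) -> pd.Series:
        # Label each run of consecutive NaNs and measure its length.
        is_na = signal.isna()
        run_id = (~is_na).cumsum()
        run_len = is_na.groupby(run_id).transform("sum")
        # Linearly interpolate, then restore NaNs in runs longer than
        # max_gap so the later removal step can drop them.
        filled = signal.interpolate(method="linear")
        filled[is_na & (run_len > max_gap)] = float("nan")
        return filled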

6 Processing - Data Removal
- Continuous missing segments of data longer than 10 seconds were removed
- Fewer data inaccuracies over large segments
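
Continuing the sketch above, rows that still contain NaNs after short-gap interpolation belong to the long missing segments and can simply be dropped; the column list is a placeholder.

    import pandas as pd

    def drop_long_gaps(df: pd.DataFrame, feature_cols) -> pd.DataFrame:
        # Any NaNs remaining after short-gap interpolation mark segments
        # longer than the 10-second threshold; remove those rows entirely.
        return df.dropna(subset=list(feature_cols))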

7 Processing - Rolling Mean
- Mean of each consecutive 10-second interval
- Dampens large differences caused by the measurement devices
- Reduces outliers
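
One way to express this smoothing step, assuming a 1 Hz sampling rate so that a 10-sample window matches the 10-second interval (the actual rate is not stated on the slide):

    import pandas as pd

    def smooth_features(df: pd.DataFrame, feature_cols, window: int = 10) -> pd.DataFrame:
        # A rolling mean over a 10-sample window dampens short spikes from
        # the measurement devices and reduces the influence of outliers.
        out = df.copy()
        out[feature_cols] = out[feature_cols].rolling(window=window, min_periods=1).mean()
        return out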

8 Processing - Balancing
- Ensures classifiers are not biased toward one class
- Data is split evenly between classes
- Classifiers were trained and tested on the balanced data
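
The slides do not say how balancing was performed; a common approach is random undersampling to the size of the smallest class, sketched below with a hypothetical label column.

    import pandas as pd

    def balance_classes(df: pd.DataFrame, label_col: str = "stimulus", seed: int = 0) -> pd.DataFrame:
        # Downsample every class to the size of the rarest one so the
        # classifiers see an equal number of examples per class.
        n_min = df[label_col].value_counts().min()
        return (
            df.groupby(label_col, group_keys=False)
              .apply(lambda g: g.sample(n=n_min, random_state=seed))
        )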

9 Classifiers
- K-Nearest Neighbor
- Support Vector Machines
- Random Forest
- Naive Bayes
- Neural Network
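
All five classifiers listed above are available in scikit-learn; the instantiation below uses library defaults, since the hyperparameters actually used in the project are not given.

    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neural_network import MLPClassifier

    classifiers = {
        "K-Nearest Neighbor": KNeighborsClassifier(),
        "Support Vector Machines": SVC(),
        "Random Forest": RandomForestClassifier(),
        "Naive Bayes": GaussianNB(),
        "Neural Network": MLPClassifier(),
    }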

10 Evaluation Accuracy
Baseline accuracy of our data: the percentage of correctly classified data points.
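
With scikit-learn this is a one-liner; the labels below are placeholders, not values from the dataset.

    from sklearn.metrics import accuracy_score

    y_true = [0, 1, 2, 2, 1]   # placeholder ground-truth stimulus labels
    y_pred = [0, 1, 2, 1, 1]   # placeholder classifier output
    print(accuracy_score(y_true, y_pred))  # 0.8 = fraction classified correctly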

11 Evaluation Mean Squared Error
Mean squared error regression loss between ground truth and estimated target values
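
This matches scikit-learn's mean_squared_error; again the values are placeholders.

    from sklearn.metrics import mean_squared_error

    y_true = [0, 1, 2, 2]  # placeholder ground-truth targets
    y_pred = [0, 1, 1, 2]  # placeholder estimates
    print(mean_squared_error(y_true, y_pred))  # mean of squared errors = 0.25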

12 Evaluation F1 Scores
Harmonic mean of precision and recall; its best value is 1 and its worst value is 0.
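
The relationship F1 = 2PR / (P + R) can be checked directly; the labels are illustrative only.

    from sklearn.metrics import precision_score, recall_score, f1_score

    y_true = [0, 0, 1, 1, 1]
    y_pred = [0, 1, 1, 1, 0]
    p = precision_score(y_true, y_pred)   # 2/3
    r = recall_score(y_true, y_pred)      # 2/3
    print(2 * p * r / (p + r))            # harmonic mean of precision and recall
    print(f1_score(y_true, y_pred))       # same value, ~0.667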

13 Evaluation Confusion Matrix
Count of true positives, false positives, true negatives, and false negatives.
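
For the four stimulus classes this would be a 4x4 matrix; the example below uses placeholder labels 0 through 3.

    from sklearn.metrics import confusion_matrix

    y_true = [0, 1, 2, 3, 2, 1]  # placeholder true stimulus types
    y_pred = [0, 1, 2, 3, 1, 1]  # placeholder predictions
    print(confusion_matrix(y_true, y_pred))  # rows = true class, columns = predicted class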

14 Evaluation K-Fold Cross Validation
The dataset was split into k consecutive folds. Each fold was then used once as the validation set while the remaining k-1 folds formed the training set.
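
A sketch of this evaluation loop with scikit-learn, using synthetic stand-in data in place of the processed driving and physiological features; the choice of k = 5 and of Random Forest as the estimator are assumptions for illustration.

    from sklearn.model_selection import KFold, cross_val_score
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.datasets import make_classification

    # Synthetic stand-in for the processed feature matrix and stimulus labels.
    X, y = make_classification(n_samples=200, n_features=9, n_informative=6,
                               n_classes=4, random_state=0)
    # k consecutive folds (no shuffling); each fold serves once as validation.
    scores = cross_val_score(RandomForestClassifier(random_state=0), X, y,
                             cv=KFold(n_splits=5))
    print(scores.mean())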

15 Results
- Random Forest: 99.38% accurate
- K-Nearest Neighbor: 93.9% accurate
- Support Vector Machines: 80.4% accurate
- Neural Net: 79.2% accurate
- Naive Bayes: 75.7% accurate

