Activity Recognition: Classification in Action
Mobile Activity Recognition
- Mobile devices like smartphones and smartwatches have many sensors
- Some sensors measure motion: the tri-axial accelerometer, the gyroscope, and GPS and other location sensors
- Activity recognition is now fairly common, but it wasn't when this research started
  - It took a year for our Fitbit order to be delivered
What is Activity Recognition?
- Identifying a user's activity based on data; in our case, the mobile sensor data from the accelerometer and gyroscope
- What type of data mining task is this? Classification
- How would you formulate this as a classification task?
  - Not so obvious if you have not read the paper, since the time dimension complicates things
More on Activity Recognition
- Examples of activities: walking, jogging, running, jumping, washing dishes, playing basketball, reading, partying, studying, eating, drinking, etc.
- Why do we care?
  - Context-sensitive "smart" devices
  - Fitness and health applications
  - Tracking what we do for other purposes
The Data
- The data is collected at 20 Hz
- A timestamped sequence of values for each of the 3 axes, for both sensors (one possible record layout is sketched below)
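As a concrete illustration, here is a minimal sketch of how one stream of raw samples might be represented; the field names and CSV layout are assumptions for illustration, not the exact WISDM file format.

```python
import csv
from dataclasses import dataclass

@dataclass
class Reading:
    """One raw sensor sample: who, what, when, and the three axis values."""
    subject_id: int
    activity: str      # e.g. "Walking", "Jogging"
    timestamp_ms: int  # at 20 Hz, consecutive samples are ~50 ms apart
    x: float
    y: float
    z: float

def load_readings(path):
    """Parse a CSV of raw samples (assumed column order: subject, activity, timestamp, x, y, z)."""
    readings = []
    with open(path, newline="") as f:
        for sid, act, ts, x, y, z in csv.reader(f):
            readings.append(Reading(int(sid), act, int(ts), float(x), float(y), float(z)))
    return readings
```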
Walking Data: watch gyroscope and phone accelerometer plots
Phone Accelerometer (Jogging)
Phone Accelerometer (Standing)
WISDM Activity Recognition Studies
- 2010 study using only smartphones: good results, but only 6 basic activities (29 subjects)
- More refined studies over the next few years, including the impact of personal models
- 2017 study: 18 activities and 51 test subjects, including eating activities
- Sensors: evaluates the accelerometer and gyroscope on both the watch and the phone (4 sensors), and evaluates 5 fused sensors
The 2016 Smartwatch Activities
- General Activities: Walking*, Jogging*, Climbing Stairs*, Sitting*, Standing*, Kicking Soccer Ball
- General Activities (hand-oriented): Dribbling Basketball, Playing Catch with Tennis Ball, Typing, Handwriting, Clapping, Brushing Teeth, Folding Clothes
- Eating Activities (hand-oriented): Eating Pasta, Eating Soup, Eating Sandwich, Eating Chips, Drinking from a Cup
* These were used in the 2010 smartphone study
Formulation as Classification
- Take the raw time series sensor data in non-overlapping 10-second chunks and create one example per chunk (see the segmentation sketch below)
- Use higher-level features to describe the behavior over the 10-second period
- This is data transformation: mapping the data to a very different representation
- It is needed because most classification algorithms assume examples, not time series data
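A minimal sketch of this segmentation step, assuming 20 Hz sampling so each non-overlapping 10-second window holds 200 samples; the array and function names are illustrative, not from the paper.

```python
import numpy as np

SAMPLE_RATE_HZ = 20
WINDOW_SECONDS = 10
WINDOW_SIZE = SAMPLE_RATE_HZ * WINDOW_SECONDS  # 200 samples per example

def segment(signal, labels):
    """Split a (num_samples, 3) axis array into non-overlapping windows.

    Each complete window becomes one example; its label is the activity
    performed during the window (assumed constant within a window).
    """
    windows, window_labels = [], []
    for start in range(0, len(signal) - WINDOW_SIZE + 1, WINDOW_SIZE):
        windows.append(signal[start:start + WINDOW_SIZE])
        window_labels.append(labels[start])  # assumption: one activity per window
    return np.array(windows), np.array(window_labels)
```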
High Level Features: 43 Total
- Average [3]: average acceleration per axis
- Standard Deviation [3]: standard deviation per axis
- Average Absolute Difference [3]: average absolute difference from the mean, per axis
- Average Resultant Acceleration [1]: average of the square root of the sum of squares of the 3 axis values
- Time Between Peaks [3]: per axis
- Binned Distribution [30]: for each axis, take the max – min value, create 10 equal-sized bins, and record the fraction of values in each bin (a feature-extraction sketch follows)
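A sketch of how a feature vector in this spirit could be computed with NumPy/SciPy; the peak-detection and binning details below are reasonable guesses, not the paper's exact rules.

```python
import numpy as np
from scipy.signal import find_peaks

def extract_features(window, sample_rate_hz=20):
    """Turn one (n_samples, 3) window into a 43-value feature vector."""
    feats = []
    feats.extend(window.mean(axis=0))                                # 3: average per axis
    feats.extend(window.std(axis=0))                                 # 3: standard deviation per axis
    feats.extend(np.abs(window - window.mean(axis=0)).mean(axis=0))  # 3: average absolute difference per axis
    feats.append(np.sqrt((window ** 2).sum(axis=1)).mean())          # 1: average resultant acceleration

    for axis in range(3):                                            # 3: time between peaks per axis
        peaks, _ = find_peaks(window[:, axis])
        feats.append(np.diff(peaks).mean() / sample_rate_hz if len(peaks) > 1 else 0.0)

    for axis in range(3):                                            # 30: binned distribution, 10 bins per axis
        counts, _ = np.histogram(window[:, axis], bins=10)           # bins span the axis's min-max range
        feats.extend(counts / len(window))                           # fraction of samples in each bin

    return np.array(feats)                                           # 3 + 3 + 3 + 1 + 3 + 30 = 43
```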
Types of Models
- Impersonal Models
  - Generated using data from a panel of other users
  - Build a model based on 50 subjects and test on the 51st
  - Repeat 51 times, so every subject is evaluated using a model trained on all other subjects
- Personal Models
  - Generated using data from the intended user
  - Must generate 51 models and carefully partition the data for each subject (both protocols are sketched below)
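A sketch of both evaluation protocols with scikit-learn, assuming X, y, and subjects are the feature matrix, activity labels, and subject IDs produced by the segmentation and feature-extraction steps; the random forest and split sizes are illustrative choices, not the study's exact setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, train_test_split

def impersonal_accuracy(X, y, subjects):
    """Impersonal: train on all other subjects, test on the held-out subject, repeat for everyone."""
    scores = []
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X[train_idx], y[train_idx])
        scores.append(clf.score(X[test_idx], y[test_idx]))
    return np.mean(scores)  # average over the held-out subjects

def personal_accuracy(X, y, subjects):
    """Personal: one model per subject, trained and tested only on that subject's own partitioned data."""
    scores = []
    for sid in np.unique(subjects):
        mask = subjects == sid
        X_tr, X_te, y_tr, y_te = train_test_split(X[mask], y[mask], test_size=0.3, random_state=0)
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X_tr, y_tr)
        scores.append(clf.score(X_te, y_te))
    return np.mean(scores)
```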
Results
2010 Study using Impersonal Model (IB3 Method)
72.4% accuracy. [Confusion matrix: actual vs. predicted class for Walking, Jogging, Stairs, Sitting, Standing, and Lying Down.]
2010 Study using Personal Model (IB3 Method)
98.4% accuracy. [Confusion matrix: actual vs. predicted class for Walking, Jogging, Stairs, Sitting, Standing, and Lying Down.]
2010 Study Accuracy Results
% of Records Correctly Classified

Activity    | Personal (IB3 / J48 / NN) | Universal (IB3 / J48 / NN) | Straw Man
Walking     | 99.2 / 97.5 / 99.1        | 72.4 / 77.3 / 60.6         | 37.7
Jogging     | 99.6 / 98.9 / 99.9        | 89.5 / 89.7 / 89.9         | 22.8
Stairs      | 96.5 / 91.7 / 98.0        | 64.9 / 56.7 / 67.6         | 16.5
Sitting     | 98.6 / 97.6 / 97.7        | 62.8 / 78.0 / –            | 10.9
Standing    | 96.8 / 96.4 / 97.3        | 85.8 / 92.0 / 93.6         | 6.4
Lying Down  | 95.9 / 95.0 / 96.9        | 28.6 / 26.2 / 60.7         | 5.7
Overall     | 98.4 / 96.6 / 98.7        | 74.9 / 71.2 / –            | –
Personal Model Accuracy (RF)
Impersonal Model Accuracy (RF)
Accuracy of Different Classification Algorithms
Personal Model Learning Curves
The x-axis represents the amount of training data per activity
Impersonal Model Learning Curves
The x-axis represents the amount of training data per activity per panelist (50 panelists)
Personal Model Learning Curves (RF with varying sensors)
Impersonal Model Learning Curves (RF with varying sensors)
Impersonal Model Learning Curves (varying number of panelists)
Hybrid Models
- Much related work uses hybrid models rather than personal and impersonal models
- Hybrid models are treated as if they were impersonal, but the test subject's own data is also allowed into the training panel
- They are easy to generate, since one can simply run cross-validation over labeled data pooled from many users
- But this is cheating! One might assume the small overlap is not a problem: with 50 people in the panel, each test subject accounts for only about 2% of the training data
- Our prior research shows that it is a huge problem, and hybrid models perform like personal models
- The classifier must implicitly identify the subject, and our work on biometrics shows that this is achievable (the sketch below contrasts the two evaluation protocols)
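To make the leakage concrete, here is a sketch contrasting a hybrid-style split (record-level cross-validation that ignores subject identity) with a subject-level split; the function name and fold counts are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, KFold, cross_val_score

def compare_protocols(X, y, subjects):
    clf = RandomForestClassifier(n_estimators=100, random_state=0)

    # Hybrid-style: folds mix records from every subject, so each test subject's
    # own data leaks into training and scores tend toward personal-model levels.
    hybrid = cross_val_score(clf, X, y, cv=KFold(n_splits=10, shuffle=True, random_state=0))

    # Impersonal-style: folds are grouped by subject, so no test subject
    # contributes any training data.
    impersonal = cross_val_score(clf, X, y, groups=subjects, cv=GroupKFold(n_splits=10))

    return hybrid.mean(), impersonal.mean()
```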
Actitracker
- The phone-based research was incorporated into a deployed app/system called Actitracker
- The development effort needed to handle real-time activity recognition was substantial
- Actitracker is no longer supported