Hold and Sign: A Novel Behavioral Biometrics for Smartphone User Authentication
Presented by: Dhruva Kumar Srinivasa Team-mate: Nagadeesh Nagaraja
Authors
Attaullah Buriro - Department of Information Engineering and Computer Science, University of Trento, Trento, Italy
Bruno Crispo - DistriNet, KU Leuven, Leuven, Belgium
Filippo Delfrari - Department of Information Engineering and Computer Science, University of Trento, Trento, Italy
Konrad Wrona - NATO Communications and Information Agency, The Hague, Netherlands
User authentication
Pattern, PIN, Password, Gestures, Biometrics
Biometric-based authentication
Physiological: Fingerprint, Face, Retina, Odor
Behavioral: Typing rhythm, Gait, Voice
Handwritten Signature?
A socially and legally accepted form of personal identification, and feasible to implement on smartphones.
Challenges?
Intra-class variability leads to a high FRR.
Inter-class similarity leads to a high FAR.
Proposed approach
An authentication system based on how the user holds the phone while signing on the screen. The system profiles the user based on the touch-points and the micro-movements of the phone. It is safer than a PIN or password, since "shoulder surfing" is almost impossible.
Existing authentication systems
Sensor-based authentication: uses the physical three-dimensional sensors built into most smartphones (accelerometers, gyroscopes and orientation sensors). E.g. on-body detection.
Touch-based authentication: compares the geometry of the gesture/pattern. E.g. gestures, patterns, knock codes.
Signature-based authentication: computes a similarity score between signatures. E.g. voice, face, signature.
What makes Hold & Sign different?
It is bi-modal: it takes into account both phone and finger movements during the signing process. It relies on the screen touch-points and the velocity of the finger movement during signing; neither the image nor the geometry of the signature is used. It imposes no restriction on the gesture to be used: the user is free to choose any pattern he/she is already familiar with, such as a signature.
Threat model
The attacker is already in possession of the device.
The attacker can be a stranger, family member, friend or co-worker.
Goal of the attacker: to gain access to the device and its contents.
Solution
Consider all the touch-points recorded for the entire signature and the velocity of the finger movement. All the physical sensors are triggered and kept running during the whole signing process. The features extracted from the built-in sensors and the touchscreen are combined to profile user behavior. A user profile template is formed from the selected feature subset and stored in the main database.
Data Source - Sensors
Three built-in three-dimensional sensors: the accelerometer, the gravity sensor, and the magnetometer.
Two additional sensor readings derived from the accelerometer:
High-Pass Filter - the contribution of the force of gravity is eliminated.
Low-Pass Filter - the force of gravity is isolated.
In Android, the SensorEvent API is used to collect these readings.
A fourth dimension was calculated for each of these sensors:
Magnitude: S_M = sqrt(a_x^2 + a_y^2 + a_z^2), where S_M is the resultant dimension and a_x, a_y and a_z are the accelerations along the X, Y and Z axes.
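The magnitude computation can be sketched in a few lines (function name is illustrative, not from the paper):

```python
import math

def magnitude(ax, ay, az):
    """Resultant magnitude S_M of a three-axis sensor reading."""
    return math.sqrt(ax**2 + ay**2 + az**2)

# A phone lying flat at rest reports roughly gravity on one axis:
print(magnitude(0.0, 0.0, 9.81))  # ~9.81
```

Using the magnitude as a fourth stream makes the features less sensitive to how the phone is oriented while being held.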
Data Source - Touchscreen
In Android, the MotionEvent API provides a class for tracking the motion of the finger on the screen. The VelocityTracker API is used to track the motion of the pointer on the touchscreen.
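Android's VelocityTracker computes pointer velocity natively from MotionEvent samples; the underlying idea is a finite difference over successive touch points. A minimal Python sketch (function name and units are illustrative):

```python
def finger_velocity(points):
    """Estimate per-segment finger speed from (x, y, t) touch samples.

    points: list of (x_px, y_px, t_seconds) tuples in time order.
    Returns one speed (pixels/second) per consecutive pair of samples.
    """
    speeds = []
    for (x0, y0, t0), (x1, y1, t1) in zip(points, points[1:]):
        dt = t1 - t0
        if dt > 0:  # skip duplicate timestamps
            speeds.append(((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt)
    return speeds

# Three samples 10 ms apart, moving 3 px then 4 px:
print(finger_velocity([(0, 0, 0.00), (3, 0, 0.01), (3, 4, 0.02)]))
```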
Classifiers chosen
Generally, the problem of biometric user authentication is solved in one of two ways: binary classification or anomaly detection. Four different verifiers were chosen: BayesNET, K-Nearest Neighbor (KNN), Multilayer Perceptron (MLP), and Random Forest (RF).
Success metrics
True Acceptance Rate (TAR) - the proportion of attempts by a legitimate user correctly accepted by the system.
False Acceptance Rate (FAR) - the proportion of attempts by an adversary wrongly granted access to the system. FAR = 1 - TRR.
False Rejection Rate (FRR) - the proportion of attempts by a legitimate user wrongly rejected by the system. FRR = 1 - TAR.
True Rejection Rate (TRR) - the proportion of attempts by an adversary correctly rejected by the system.
Failure to Acquire Rate (FTAR) - the proportion of failed recognition attempts due to system limitations, e.g. the sensor failing to capture, insufficient sample size, or too few features.
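The four basic rates follow directly from raw attempt counts; a small sketch (function and field names are illustrative):

```python
def rates(genuine_accepted, genuine_total, impostor_accepted, impostor_total):
    """Compute TAR/FRR/FAR/TRR from counts of accepted attempts."""
    tar = genuine_accepted / genuine_total    # legitimate user accepted
    far = impostor_accepted / impostor_total  # adversary accepted
    return {"TAR": tar, "FRR": 1 - tar, "FAR": far, "TRR": 1 - far}

# 90 of 100 genuine attempts accepted, 5 of 100 impostor attempts accepted:
print(rates(90, 100, 5, 100))
```

The complementary pairs (TAR/FRR and FAR/TRR) mean only two of the four numbers are independent.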
Data Collection
Android supports data collection at both fixed and custom intervals after registering the sensors; such intervals are termed sensor delay modes. Hold & Sign uses SENSOR_DELAY_GAME, since SENSOR_DELAY_NORMAL and SENSOR_DELAY_UI were too slow and SENSOR_DELAY_FASTEST introduces noise into the data collection. 30 volunteers (22 male, 8 female) of several nationalities took part; the majority were Master's or Ph.D. students, but not security experts. Data was collected during three different activities (sitting, standing and walking) on a Google Nexus 5.
Features
Gathered 4 data streams from every three-dimensional sensor, and extracted 4 statistical features (mean, standard deviation, skewness, and kurtosis) from every data stream. In total, 16 features were obtained from the four dimensions of each sensor. Similarly, 13 features were extracted from the touchscreen data. The extracted touchscreen features are shown on the slide.
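A sketch of the per-sensor feature extraction (the 4 stats over the X, Y, Z and magnitude streams, giving 16 values; function names are illustrative and the streams are assumed non-constant):

```python
import numpy as np

def stream_features(stream):
    """Mean, std, skewness and excess kurtosis of one data stream."""
    s = np.asarray(stream, dtype=float)
    mu, sigma = s.mean(), s.std()
    z = (s - mu) / sigma  # standardized samples
    return [mu, sigma, float((z**3).mean()), float((z**4).mean() - 3.0)]

def sensor_features(x, y, z):
    """16 features per sensor: 4 stats from each of X, Y, Z and magnitude."""
    streams = [np.asarray(a, dtype=float) for a in (x, y, z)]
    streams.append(np.sqrt(sum(a**2 for a in streams)))  # fourth dimension
    return [f for s in streams for f in stream_features(s)]

rng = np.random.default_rng(0)
feats = sensor_features(rng.normal(size=50), rng.normal(size=50), rng.normal(size=50))
print(len(feats))  # 16
```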
Features fusion
Fusing data as early as possible may increase the recognition accuracy of the system; here, fusion was done at the feature level. The 16 features from each sensor are fused into a new feature vector, called the pattern of the user's hold behavior; similarly, the feature vector built from the touchscreen data is called the sign pattern. The fused feature vector over both modalities contains 93 features.
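Feature-level fusion is simply concatenation of the per-modality vectors; the counts below (five 16-feature sensor vectors for "hold", 13 touchscreen features for "sign") match the totals stated above, though the random values are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for the five per-sensor "hold" vectors (16 features each)
# and the 13 touchscreen "sign" features:
hold_pattern = np.concatenate([rng.random(16) for _ in range(5)])  # 80 features
sign_pattern = rng.random(13)

fused = np.concatenate([hold_pattern, sign_pattern])
print(fused.shape)  # (93,)
```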
Feature Subset Selection
Feature subset selection is the process of choosing the best possible subset, i.e. the one that gives the maximum accuracy, from the original feature set. The feature set was evaluated with the Recursive Feature Elimination (RFE) feature subset selection method using scikit-learn.
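A minimal scikit-learn RFE sketch on stand-in data; the base estimator, the target feature count, and the random data are assumptions, as the slides do not specify them:

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

# Toy data standing in for the 93-feature fused vectors
# (labels: owner vs. other user).
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 93))
y = rng.integers(0, 2, size=60)

# Recursively drop the weakest feature until 25 remain (count is illustrative).
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=25).fit(X, y)
X_reduced = selector.transform(X)
print(X_reduced.shape)  # (60, 25)
```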
Analysis
Data was analyzed in two settings:
the verifying-legitimate-user scenario
the attack scenario
In the verifying-legitimate-user scenario, the system was trained with data from the owner and then tested with patterns belonging to that owner; results were reported in terms of TAR and FRR. In the attack scenario, the system was trained with all the data samples from the owner and then tested with patterns belonging to the other 29 users; results were reported in terms of FAR and TRR.
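The two evaluation scenarios can be sketched with a binary verifier on synthetic stand-in data (the well-separated clusters, classifier settings and sample counts are illustrative, not the paper's):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in data: the owner's samples cluster away from other users'.
rng = np.random.default_rng(1)
owner = rng.normal(loc=0.0, size=(40, 10))
others = rng.normal(loc=3.0, size=(40, 10))

# Train a binary verifier on part of each group (1 = owner, 0 = other).
X = np.vstack([owner[:30], others[:30]])
y = np.array([1] * 30 + [0] * 30)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X, y)

tar = clf.predict(owner[30:]).mean()   # legitimate-user scenario
far = clf.predict(others[30:]).mean()  # attack scenario
print(f"TAR={tar:.2f} FRR={1-tar:.2f} FAR={far:.2f} TRR={1-far:.2f}")
```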
Results
Results were reported in three ways: intra-activity, inter-activity and activity fusion.
Intra-activity - training and testing on the same single activity (e.g. training on walking to test walking only).
Inter-activity - training on one single activity and using that model to test all activities.
Activity fusion - the combined data of all 3 activities for both training and testing (i.e. training with fused data from walking, sitting and standing to test all activities).
Results contd.
Intra-activity: ≥79% TAR with the full feature set and ≥85% TAR with the chosen RFE feature subset.
Inter-activity: unsatisfactory results (65.82% at best).
Need for activity fusion
Training the system in just one activity and using it in multiple activities does not lead to good results. Instead, the patterns of multiple activities were combined and the RFE feature selection method was applied on the combined data.
Hold & Sign implementation
Uses the MLP classifier on the feature set extracted using the RFE method. The analysis was performed with this application on a Google Nexus 5 smartphone running Android 4.4.4.
Performance
Measured three different timings:
sample acquisition time
training time
testing time
These times were computed for 3 different settings: with 15, 30 and 45 patterns. Each setting was tested on the Google Nexus 5 with 35 tries per timing; results are averaged over all 35 runs.
Performance contd. (charts: sample acquisition time; training/testing time)
Training time is the time required to train the classifier. 3.497s, 6.193s and 9.310s for classifier training with 15, 30 and 45 patterns. Testing time is the time required by the system to accept/reject the authentication attempt. 0.200s, 0.213s, and 0.253s for testing with 15, 30 and 45 patterns.
Power consumption
Hold & Sign:
All steps (sensor data collection, feature extraction, etc.) disabled = 460 mW
Only sensor data collection enabled = 493 mW
Sensor data collection and feature extraction enabled = 588 mW
Full functionality enabled ≈ 1000 mW
Common tasks:
A one-minute phone call: 1054 mW
Sending a text message: 302 mW
Sending or receiving over WiFi: 432 mW
Sending or receiving over a mobile network: 610 mW
Tradeoffs Between Training and Accuracy
Feedback
An 11-question questionnaire, adapted from the System Usability Scale (SUS), was given to the 30 volunteers, plus an optional subjective question: "What did you like or dislike about the mechanism?" Feedback was received from 18 of the 30 volunteers (60%). Hold & Sign achieved an average SUS score of 68.33%, better than the well-established voice recognition score (66%) and its fusion with face (46%) and gestures (50%).
Some negative responses:
Initial setup is too cumbersome.
Having to sign multiple times, whereas setting up a PIN is easier.
Requires the use of both hands.
Limitations
Requires the use of both hands.
Cannot predict the user's ongoing activity in order to extract the best pre-selected features and use them for verifying user identity.
How does it stack up against the increasingly popular fingerprint authentication?
References
Attaullah Buriro, Bruno Crispo, Filippo Delfrari and Konrad Wrona, "Hold and Sign: A Novel Behavioral Biometrics for Smartphone User Authentication," IEEE Security and Privacy Workshops (SPW), 2016.
Thank you