1
When to engage in interaction – and how
When to engage in interaction – and how? EEG-based enhancement of robot's ability to sense social signals in HRI
Paper by: Stefan Ehrlich, Agnieszka Wykowska, Karinne Ramirez-Amaro, and Gordon Cheng
Presentation by: Calvin Li, COMS 6731 Humanoid Robotics
2
Social Interaction: How do people do it?
Nonverbal cues can often signal whether to engage socially, and how to do so: touch, gestures, body posture, gaze, etc.
The ability of robots to interact with humans socially is becoming increasingly important: Human-Robot Interaction (HRI), e.g. working in households, with the elderly, in hospitals, etc.
3
Motivation
To identify a human's intention to initiate eye contact with a robot: allows the humanoid to determine whether and when to engage with a human.
To distinguish whether an observer is the initiator of or the responder to eye contact: helps determine the social role of the robot.
4
But how do you do that?
Picking up on nonverbal social cues using traditional robot sensors can be a very difficult task.
Just go straight to the brain: use electroencephalography (EEG) to pick up on electrophysiological patterns produced by the brain, and use the EEG data to train classifiers.
5
Related Work: Brain Computer Interfaces (BCI)
A young, hip area of research.
Active BCI – users actively modulate brain activity to control a computer device.
Passive BCI – the user's state of mind is continuously monitored so that an application can adjust in order to maintain the user's engagement (e.g. video game difficulty).
Robot storyteller: a robot adapted and exaggerated its gestures while telling a story in response to varying levels of vigilance in human observers, which improved overall vigilance and the observers' ability to recall details of the story.
6
The Experiment
A human participant sits in front of the iCub robot with an EEG system attached; the robot sits with its gaze averted toward a computer screen.
Belief manipulation: participants were told that an algorithm running in real time allowed the robot to detect EEG activity related to the intent to initiate eye contact, and to respond by returning eye contact.
No such algorithm existed, but leading participants to believe that they could actively initiate eye contact with the robot yielded EEG data from which this intent could later be classified.
7
The Experiment
8
The Experiment
12 consecutive "blocks", each consisting of 10 trials as per the illustration below.
At the beginning of each block, the participant knows whether it is a "YOU INITIATE" or "ROBOT INITIATES" block. A beep plays after a random delay of 5-8 seconds into each trial, cueing when the intended initiator should start the gaze.
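To make the trial structure concrete, here is a minimal sketch of how such a block/trial schedule with randomized beep onsets could be generated. It is purely illustrative and not from the paper; the alternation of block types and all names are assumptions.

```python
import random

BLOCK_TYPES = ["YOU INITIATE", "ROBOT INITIATES"]  # announced at the start of each block
N_BLOCKS = 12
TRIALS_PER_BLOCK = 10

def build_schedule(seed=0):
    """Build a hypothetical 12-block x 10-trial schedule with a beep 5-8 s into each trial."""
    rng = random.Random(seed)
    schedule = []
    for block in range(N_BLOCKS):
        block_type = BLOCK_TYPES[block % 2]  # assumption: block types simply alternate
        trials = [{"block": block,
                   "type": block_type,
                   "beep_onset_s": rng.uniform(5.0, 8.0)}  # random delay before the beep
                  for _ in range(TRIALS_PER_BLOCK)]
        schedule.append(trials)
    return schedule

if __name__ == "__main__":
    for trial in build_schedule()[0][:3]:  # print the first few trials of block 0
        print(trial)
```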
9
Data Segmentation
Baseline segments: EEG data taken during the rest periods between trials
Intention segments: EEG data taken after the beep in the "YOU INITIATE" blocks – brain activity representing the intention to initiate gaze
Initiator segments: EEG data taken during the actual gaze in "YOU INITIATE" blocks
Responder segments: EEG data taken during the actual gaze in "ROBOT INITIATES" blocks
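A minimal sketch of how such segments might be cut from a continuous recording. The sampling rate, segment duration, and event times below are hypothetical placeholders, not values from the paper.

```python
import numpy as np

FS = 250  # assumed sampling rate in Hz (hypothetical)

def extract_segment(eeg, onset_s, duration_s):
    """Slice a (channels x samples) array into one segment starting at onset_s."""
    start = int(onset_s * FS)
    stop = start + int(duration_s * FS)
    return eeg[:, start:stop]

# eeg: continuous recording, shape (32 channels, n_samples); stand-in data for illustration
eeg = np.random.randn(32, 60 * FS)

# hypothetical event times (in seconds) recovered from experiment markers
beep_times = [5.6, 12.3, 19.1]   # beeps in "YOU INITIATE" blocks -> intention segments
rest_times = [9.0, 16.0, 23.0]   # inter-trial rest periods -> baseline segments

intention_segments = [extract_segment(eeg, t, 2.0) for t in beep_times]   # 2 s is an assumed length
baseline_segments  = [extract_segment(eeg, t, 2.0) for t in rest_times]
```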
10
Feature Extraction
Each data segment was separated into 5 frequency bands: theta (4-7 Hz), low alpha (7-10 Hz), high alpha (10-13 Hz), beta (14-30 Hz), and gamma (above 30 Hz).
32 channels x 5 frequency bands = 160 features per data segment
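A minimal sketch of this kind of band-power feature extraction. The Welch-based power estimate, the sampling rate, and the gamma upper cutoff are assumptions; the paper's exact spectral method may differ.

```python
import numpy as np
from scipy.signal import welch

FS = 250  # assumed sampling rate in Hz (hypothetical)

BANDS = {                     # 5 bands x 32 channels = 160 features per segment
    "theta":      (4, 7),
    "low_alpha":  (7, 10),
    "high_alpha": (10, 13),
    "beta":       (14, 30),
    "gamma":      (30, 45),   # assumed upper cutoff; the slide leaves it unspecified
}

def band_power_features(segment):
    """Compute one band-power value per channel and band for a (32 x samples) segment."""
    freqs, psd = welch(segment, fs=FS, nperseg=min(segment.shape[1], FS))
    features = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        features.append(psd[:, mask].mean(axis=1))  # average power in the band, per channel
    return np.concatenate(features)                 # shape: (160,)

segment = np.random.randn(32, 2 * FS)  # stand-in 2-second segment
print(band_power_features(segment).shape)  # -> (160,)
```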
11
Classification
Aimed to distinguish:
intention and baseline segments (when to engage)
initiator and responder segments (how to engage)
Support vector machine (SVM) with an RBF kernel, often reported as the most suitable kernel for EEG-based classification.
2 evaluation techniques:
5-times-5-fold participant-individual cross-validation (CV): each participant's data is separated into 5 folds, 4 of which are used for training and the remaining one for testing; the test score is averaged over all 5 folds for each of the 5 participants.
Leave-1-participant-out validation (L1O): the SVM is trained on the data of 4 participants, and the remaining participant's data is used to test the model; this is measured for leaving out each of the participants.
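A minimal sketch of these two evaluation schemes using scikit-learn. The feature matrix, labels, participant IDs, and the standardization step are hypothetical placeholders, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import RepeatedStratifiedKFold, LeaveOneGroupOut, cross_val_score

# Hypothetical data: 160-dimensional band-power features, binary labels
# (e.g. intention vs. baseline), and a participant ID per segment.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 160))
y = rng.integers(0, 2, size=500)
participants = rng.integers(0, 5, size=500)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))  # RBF-kernel SVM

# 5-times-5-fold cross-validation (run here on pooled data for brevity;
# the paper evaluates each participant's data separately).
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=5, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=cv).mean())

# Leave-one-participant-out validation: train on 4 participants, test on the 5th.
logo = LeaveOneGroupOut()
print("L1O accuracy:", cross_val_score(clf, X, y, groups=participants, cv=logo).mean())
```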
12
Model accuracy
Two different classifications:
Distinction between intention to initiate eye contact and baseline brain activity
Distinction between brain activity when the participant is the initiator vs. the responder
High within-participant (CV) averages (80.4%, 77.0%), promising across-participant (L1O) averages (64.2%, 61.0%)
13
Moving Forward
Get many more participants, which would probably yield better L1O accuracy.
Combine EEG signals with visual, auditory, and tactile signals to create more comprehensive social engagement mechanisms.
Implement the algorithm the participants were led to believe existed, so that a robot could actually detect the intention to initiate eye contact as well as the social role it plays during eye contact.