Can a Humanoid Robot Spot a Liar?


Can a Humanoid Robot Spot a Liar?
2018 IEEE-RAS 18th International Conference on Humanoid Robots, Beijing, China, November 6-9, 2018
by A.M. Aroyo, J. Gonzalez-Billandon, A. Tonelli, A. Sciutti, M. Gori, G. Sandini and F. Rea
Presented by Kaylah Kennedy

Motivation
Lie detection is a necessary skill in several social roles, including teaching, reporting, and law enforcement. Robots with this skill could greatly assist professionals in these positions by automating lie detection with less invasive techniques (such as measuring eye blinking and pupil dilation)

Typical Lying Cues
Lying imposes extra cognitive load, as liars have to build a plausible story and monitor its coherence. This cognitive load has a measurable effect on traits such as eye blinking, pupil dilation, and response time

Overview
Inspired by the literature on human-human interaction, this work investigates whether the behavioral cues associated with lying, including eye movements and temporal response features, are also apparent during human-humanoid interaction and can be leveraged by the robot to detect deception. Using the results of this study, the authors then propose machine learning models that could be used to train an interactive robotic platform to detect deception

Setup of Experiment
The experiment room was set up as an interrogation room divided into three zones by black curtains, with the participant seated in the middle on a rotating chair. Participants were monitored with cameras, a heart-rate chest sensor, a microphone, and Tobii eye-tracking glasses, which recorded pupil dilation and eye movements

Procedure
Before starting the experiment, participants filled out a questionnaire to identify psychological traits that could influence the results. Participants were told that they were witnesses to two crimes and had to help investigators find out who was responsible. The experiment was separated into three phases:
1. General questions: participants were asked to answer 20 general questions truthfully and quickly
2. First question session: 10 questions asked by the robot investigator
3. Second question session: 10 questions asked by the human investigator

Procedure Pt. 2
After completing the general questions, participants "randomly" picked a role from a box. The participants thought their role was random, but in fact the experimenters had chosen the roles a priori. Participants were either truth-tellers or protectors. They were then shown a video before each round of interrogation, each recording a theft. The participants then went through the two question sessions, in which the interrogator asked a mix of yes/no and open-ended questions based on what they saw in the video

Results
A psychological profile was made for each participant based on their questionnaire score. There is a statistically significant difference in response time when participants tell the truth versus when they lie. There are also significant differences in the average pupil dilation of both eyes when telling the truth rather than a lie. There were no significant differences in the time participants spent talking with the robot versus the human investigator
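The slides do not name the statistical test behind these comparisons; a paired test on per-participant means is one common way to assess such within-subject differences. A minimal sketch with hypothetical response-time data (the sample size, means, and spreads below are made up for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 14  # illustrative number of participants, not the study's actual n

# Hypothetical per-participant mean response times in seconds
rt_truth = rng.normal(1.8, 0.3, n)
rt_lie = rt_truth + rng.normal(0.5, 0.2, n)  # lying assumed slower on average

# Paired t-test: each participant is their own control
t_stat, p_value = stats.ttest_rel(rt_lie, rt_truth)
print(f"t({n - 1}) = {t_stat:.2f}, p = {p_value:.4f}")
```

With paired data like this, the test operates on the per-participant differences, which removes between-participant variability in baseline speaking speed.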

Machine Learning System
The system was defined as a binary classification task, with an autonomous robotic system labeling each answer as true or false. The dataset used to train the models was extracted from the data gathered in the experiment. It was then split into two sub-datasets, D1 and D2, to address two different levels of lie detection: (i) detecting future lies from known participants; (ii) detecting lies from unseen participants

Datasets
D1 is the set of participants' answers without any participant identification. D2 is the set of answers with each answer associated to the corresponding participant. The D1 dataset is used to demonstrate the possibility of spotting future lies from already known participants, but not from unseen ones. The D2 dataset is instead used to try to obtain a general lie detector, using a subset of participants as the test set. The literature on deception detection and evidence from the post-analysis provide a starting point for selecting the features that make up the input vector X
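The two protocols differ only in how the train/test split is made: D1 mixes answers across participants, while D2 holds out whole participants. A sketch of the two split strategies, assuming hypothetical feature rows and participant IDs (the array shapes and feature contents are invented for illustration):

```python
import numpy as np
from sklearn.model_selection import train_test_split, GroupShuffleSplit

rng = np.random.default_rng(1)
n_answers, n_features = 200, 5
X = rng.normal(size=(n_answers, n_features))  # behavioral features per answer
y = rng.integers(0, 2, n_answers)             # 1 = lie, 0 = truth
pid = rng.integers(0, 10, n_answers)          # participant id for each answer

# D1-style split: answers shuffled regardless of participant, so the same
# participant can appear in both train and test (known-participant setting)
X_tr1, X_te1, y_tr1, y_te1 = train_test_split(X, y, test_size=0.2, random_state=0)

# D2-style split: whole participants held out (unseen-participant setting)
gss = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(gss.split(X, y, groups=pid))
```

The group-wise split is the stricter test: any accuracy it reports reflects generalization to people the model has never seen.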

D1 ML Model
To detect future lies of a known person, a decision tree classifier was trained on D1. The learned tree can be translated directly into a set of rules, producing an expert system that can be easily ported
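The tree-to-rules step can be sketched with scikit-learn, whose `export_text` prints the fitted tree as nested if/then conditions. The feature names and synthetic labels below are assumptions for illustration, not the paper's actual feature set:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(2)
# Hypothetical behavioral features per answer
feature_names = ["response_time", "pupil_left", "pupil_right"]
X = rng.normal(size=(120, 3))
# Synthetic lie/truth labels driven mostly by response time
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The fitted tree reads directly as expert-system rules
rules = export_text(tree, feature_names=feature_names)
print(rules)
```

Because the output is plain threshold rules (e.g. "if response_time > t then lie"), it can be re-implemented on a robot platform without shipping the ML library at all.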

D2 ML Model
The second algorithm was a multilayer perceptron (MLP) with one hidden layer. A binary cross-entropy loss function with Adam optimization was used to train the network. The MLP was trained on D2 to demonstrate the possibility of spotting lying answers from unseen participants
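A minimal sketch of this setup: scikit-learn's `MLPClassifier` with the `adam` solver minimizes log loss, which for two classes is binary cross entropy, matching the slide's description. The hidden-layer size, data, and participant grouping below are assumptions, not values from the paper:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 5))                              # synthetic features
y = (X @ np.array([1.0, -0.5, 0.3, 0.0, 0.2]) > 0).astype(int)
pid = rng.integers(0, 15, 300)                             # participant ids

# Hold out whole participants, as in the D2 protocol
train_idx, test_idx = next(GroupShuffleSplit(
    n_splits=1, test_size=0.2, random_state=0).split(X, y, groups=pid))

# One hidden layer; Adam minimizes the (binary) cross-entropy objective.
# Hidden size 16 is an assumed hyperparameter for this sketch.
mlp = MLPClassifier(hidden_layer_sizes=(16,), solver="adam",
                    max_iter=2000, random_state=0)
mlp.fit(X[train_idx], y[train_idx])
acc = mlp.score(X[test_idx], y[test_idx])
```

Evaluating only on held-out participants is what lets `acc` be read as unseen-person performance rather than memorization of individual habits.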

Accuracies of Models
Several refinements were made to the data:
- Psychological scores from the pre-questionnaire were added to the input vector X. To find a connection between participants' psychological profiles and the behavioral cues, a regression analysis was run on the difference between lie and truth for each participant. Eloquence correlates with openness to experience, F(1,12) = 6.13, p = .029, and response time correlates with neuroticism, F(1,12) = 8.37, p = .013
- Response time and eloquence were normalized with the baseline data of the experiment to remove individual speaking differences
- The last refinement was inspired by analysis of the experiment's videos: some participants chose to deceive through avoidance (e.g., "I don't know") or by telling a true statement followed by a lie. These different strategies introduced more variability in the dataset, producing ambiguity in the labeling. The lie class was therefore refined by manual labeling using the videos of the experiments, so as to identify clear lies
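The baseline normalization step can be sketched as a per-participant z-score: each interrogation-phase measurement is rescaled against that participant's own general-question (baseline) statistics. The helper name and the sample values are hypothetical, and the slides do not specify the exact normalization formula, so this is one plausible reading:

```python
import numpy as np

def normalize_to_baseline(values, baseline):
    """Z-score interrogation-phase measurements against the same
    participant's baseline phase, removing individual speaking differences."""
    baseline = np.asarray(baseline, dtype=float)
    mu, sigma = baseline.mean(), baseline.std()
    return (np.asarray(values, dtype=float) - mu) / sigma

# Hypothetical response times (seconds) for one participant
baseline_rt = [1.2, 1.4, 1.3, 1.5, 1.1]       # general-question phase
interrogation_rt = [1.9, 1.3, 2.2]            # question sessions
z = normalize_to_baseline(interrogation_rt, baseline_rt)
```

After this transform, a z of 0 means "as fast as this person's own baseline", so naturally slow and naturally fast speakers become comparable in the same feature vector.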

Summary
This research demonstrates that the same set of behavioral cues studied in the human literature can be used to detect lies and, more importantly, can be extracted with non-invasive measures. The results also suggest there are no main differences between an interrogation performed by a human and one performed by a robot

Next Steps
Future work will focus on refining the experimental methodology, in particular on better motivating participants to deceive, for instance by offering monetary rewards for successfully deceiving the interrogators. This could increase the effort invested in the deception, producing a stronger cognitive load and hence more evident behavioral cues. Efforts could also be made to explore how robot appearance affects participants' deceptive behavior (e.g., a virtual agent on a screen). A bigger sample size would also help increase the accuracy of the ML models

Thank You! Any questions?