Webcam-based attention tracking


Webcam-based attention tracking
Christoph Lofi, Yue Zhao, Wing Nguyen, Claudia Hauff
Wing Nguyen is the project programmer; the remaining members are researchers.

THE PROPOSAL
The proposal as accepted by CEL.

“... explore the design and development of a scalable, privacy-aware technology that automatically tracks learners’ attention states … in … xMOOCs … once inattention is detected, the learner is alerted” Main goal as stated in the proposal.

3 Research Questions
RQ1: Is browser-based eye tracking sufficiently accurate to enable tracking and detection of attention states in real-time?
RQ2: What signals are most suitable to communicate the observed attention drops to the learner (e.g. audio signals, visual signals, multi-modal)?
RQ3: Does the deployment of a real-time attention tracker positively impact MOOC learners' engagement and performance?
High-quality eye tracking vs. webcam-based eye tracking.

OUR WORK (so far)

3 Research Questions: progress so far
RQ1 (browser-based eye tracking accuracy): 80%
RQ2 (signals to communicate attention drops): 40%
RQ3 (impact on engagement and performance): 0%

Our argument chain & study (submitted to ECTEL)
- Mind-wandering and attention lapses have been studied for a long time in the traditional classroom.
- In video-heavy MOOCs mind-wandering may be even more severe due to constant temptations (emailing, chatting, etc.).
- Previous research (though not in a video-watching scenario) has shown eye movements and gaze patterns to be predictive of mind-wandering.
- Previous work has made use of expensive and specialized eye-tracking hardware.
→ How well does a webcam-based setup fare in the detection of eye movements and gaze features that are predictive of mind-wandering? (We left out the real-time part for now.)

Study:
- 13 participants
- 2 MOOC videos (7-8 minutes each)
- Periodic self-reports of mind-wandering ("Were you distracted in the last 30s?") → ground truth
- Webcam and Tobii data were logged and features extracted → supervised machine learning
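The self-report probes above yield the ground-truth labels: each answer covers the 30 seconds of gaze data preceding the probe. A minimal sketch of this labeling step (the probe times and data format are illustrative assumptions, not the study's actual log format):

```python
# Sketch: turning periodic self-report probes into labeled 30s windows.
# A probe at time t asks "Were you distracted in the last 30s?", so its
# answer labels the window [t - 30, t]. Probe times are hypothetical.

WINDOW = 30  # seconds covered by each self-report probe

def label_windows(probes):
    """probes: list of (probe_time_seconds, was_distracted) tuples.
    Returns a list of ((start, end), label) windows for feature extraction."""
    return [((t - WINDOW, t), distracted) for t, distracted in probes]

# Hypothetical probe log for one participant and one video
probes = [(150, False), (300, True), (450, False)]
windows = label_windows(probes)
```

Features (e.g. fixation counts, saccade statistics) would then be computed per window and paired with its label for the supervised learner.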

Gaze heatmaps of two participants over a 30s interval: one with reported mind-wandering, one without.
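Heatmaps like these can be approximated by binning gaze points into a coarse screen grid. A minimal sketch, assuming normalized gaze coordinates in [0, 1); the grid size and sample points are illustrative, not taken from the study:

```python
# Sketch: a gaze heatmap as a 2D grid of gaze-sample counts.
# Points are (x, y) in normalized screen coordinates in [0, 1).

GRID = 8  # 8x8 cells; purely illustrative resolution

def gaze_heatmap(points, grid=GRID):
    heat = [[0] * grid for _ in range(grid)]
    for x, y in points:
        # clamp to the last cell so samples at exactly 1.0 stay in range
        cx = min(int(x * grid), grid - 1)
        cy = min(int(y * grid), grid - 1)
        heat[cy][cx] += 1
    return heat

# Hypothetical samples: two near screen center, one near the top-right
points = [(0.5, 0.5), (0.51, 0.49), (0.9, 0.1)]
heat = gaze_heatmap(points)
```

Focused viewing concentrates counts in a few cells; mind-wandering tends to spread them out.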

Exploratory analysis
- Mind-wandering is frequent (rate of 29%), even in short videos
- Participants grow tired; mind-wandering increases in the second video

Detection results (F1 measure, leave-one-out cross-validation)
- Detectability of mind-wandering is highly user-dependent
- Webcam-based data works slightly better for our purposes than the Tobii data (we don't yet know why)

Open issues
- Our study required a calibration step: how do we get around this in a real MOOC setting?
- Our trained model is optimized for features derived from 30-60 second time spans. How well does it work in a real-time setting (second-by-second decisions)?
- How can we acquire large-scale training data?
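The real-time question above amounts to re-scoring a sliding 30s window of gaze history on every new second of data. A hypothetical sketch of that buffer logic (not the project's implementation; the scoring function is a placeholder):

```python
# Sketch: second-by-second decisions from a model trained on 30s windows.
# Each update appends the latest gaze sample, drops samples older than
# 30s, and re-scores the trailing window. score_window is a stand-in.
from collections import deque

WINDOW = 30  # seconds of gaze history behind each decision

class SlidingDetector:
    def __init__(self, score_window):
        self.buffer = deque()             # (timestamp, gaze_sample) pairs
        self.score_window = score_window  # model stand-in

    def update(self, t, sample):
        self.buffer.append((t, sample))
        while self.buffer and self.buffer[0][0] < t - WINDOW:
            self.buffer.popleft()
        return self.score_window([s for _, s in self.buffer])

# Stand-in "model": just counts samples in the window
detector = SlidingDetector(lambda samples: len(samples))
score = None
for t in range(41):  # one gaze sample per second, hypothetical stream
    score = detector.update(t, None)
```

Whether features computed over such a trailing window remain predictive when decisions are made every second, rather than at probe times, is exactly the open question.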