Detecting Eye Contact Using Wearable Eye-Tracking Glasses



Eye contact is an important aspect of face-to-face interaction. In children, an atypical pattern of gaze and eye contact can be an early sign of autism.

Goal: eye contact detection between two individuals, using a single wearable eye-tracking device, in clinical settings.

Alternative approaches
– Multiple static cameras: placed at a distance, they have trouble capturing a frontal view
– Mutual eye tracking using EOG

Face analysis
– The problem of finding and analysing faces in video (a computer vision task)
– Localize the face and facial parts (eyebrows, eyes, nose, mouth, etc.)
Gaze estimation using the 2D appearance of the eye
– Estimate the 3D gaze direction from a single image of an eye
– Key idea: learn an appearance model of the eye for different gaze directions from a large number of samples
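The appearance-based idea above can be sketched as nearest-neighbour regression from eye-appearance features to gaze angles. This is a minimal illustration on synthetic data, not the model used in the work; the linear appearance map `W` and all sizes are assumptions.

```python
import numpy as np

# Minimal sketch of appearance-based gaze estimation: map an eye-image
# feature vector to a gaze direction by averaging the labels of its
# k nearest training appearances. Training data here is synthetic.
rng = np.random.default_rng(0)

n_samples, dim = 500, 32
gaze_train = rng.uniform(-30, 30, size=(n_samples, 2))   # (yaw, pitch), degrees
# Pretend appearance features vary linearly with gaze, plus noise.
W = rng.normal(size=(2, dim))
X_train = gaze_train @ W + 0.1 * rng.normal(size=(n_samples, dim))

def estimate_gaze(x, k=5):
    """Average the gaze labels of the k nearest training appearances."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(d)[:k]
    return gaze_train[nearest].mean(axis=0)

true_gaze = np.array([10.0, -5.0])
x_query = true_gaze @ W          # appearance of an unseen eye image
est = estimate_gaze(x_query)     # estimate close to (10, -5)
```

With a large, dense sample of labelled eye images, nearby appearances correspond to nearby gaze directions, which is why this simple scheme works at all.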

OMRON OKAO
– Commercial computer-vision library used to detect and analyse the child's face
– Takes the video as input and localizes all facial parts
– Estimates the 3D head pose
– If the eyes are localized correctly, also estimates the gaze direction
– Promising results for frontally presented faces

Face analysis result from OKAO: bounding box, facial parts, head pose, and gaze direction.
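The per-frame output of such a face analyser can be represented as a simple record. All field names here are illustrative, not OKAO's actual API:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class FaceInfo:
    """Per-frame face-analysis output (field names are illustrative)."""
    bbox: Tuple[int, int, int, int]          # face bounding box (x, y, w, h)
    eye_center: Tuple[float, float]          # child's eye centre in image coords
    head_pose: Tuple[float, float, float]    # yaw, pitch, roll in degrees
    gaze_dir: Optional[Tuple[float, float]]  # gaze direction as (yaw, pitch)
    eye_confidence: float                    # confidence of the eye detection

# One frame's worth of face information for the child.
frame = FaceInfo(bbox=(120, 80, 64, 64), eye_center=(150.0, 100.0),
                 head_pose=(5.0, -2.0, 0.0), gaze_dir=(3.0, -1.0),
                 eye_confidence=0.9)
```

Making `gaze_dir` optional reflects that gaze is only available when the eyes are localized correctly.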

Experimental setup
Objectives
– Record the video and gaze data with minimum obtrusiveness for the children
– Allow analysis of the data for eye contact detection
Protocol
– Interactive session of 5 to 8 minutes
– The examiner wears the eye-tracking glasses and interacts with the child while sitting in front of them

Online annotation, provided by pressing a foot pedal, proved error-prone.

Experimental setup
During the interaction
– The examiner's eye gaze was tracked and egocentric video was recorded
The OKAO vision library is applied to obtain face information of the child
– Location/orientation of the face
– 3D gaze direction
The adult's gaze information is provided by the eye tracker

Experimental setup
Participants: one female subject, 16 months old
Recorded session: 7 minutes

Method
Combines
– the eye gaze of the examiner
– face information of the child (gaze direction)
Extract features from the gaze and face information, then train a classifier to detect the existence of eye contact in each frame.

Features
– Relative location (RL): the examiner's gaze point with respect to the child's eye centre
– 3D gaze direction of the child (GD), with respect to the image plane
– Head orientation (HO): the 3D head pose of the child
– Confidence of eye detection (CE)
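A per-frame feature vector combining these four cues might be assembled as follows; the function signature and the exact encoding of each cue are assumptions, not taken from the paper.

```python
import numpy as np

def extract_features(gaze_point, eye_center, child_gaze, head_pose, eye_conf):
    """Build the per-frame feature vector [RL, GD, HO, CE].

    gaze_point : examiner's 2D gaze point in the egocentric image
    eye_center : child's eye centre in the same image
    child_gaze : child's gaze direction (yaw, pitch) w.r.t. the image plane
    head_pose  : child's head orientation (yaw, pitch, roll)
    eye_conf   : confidence of the eye detection
    """
    rl = np.subtract(gaze_point, eye_center)   # RL: gaze offset from eye centre
    return np.concatenate([rl, child_gaze, head_pose, [eye_conf]])

f = extract_features((160.0, 95.0), (150.0, 100.0),
                     (2.0, -1.0), (5.0, -3.0, 0.0), 0.9)
# f has 2 (RL) + 2 (GD) + 3 (HO) + 1 (CE) = 8 components
```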

Method
Detect eye contact
– Binary classification with ground truth: for each frame, given the features, detect eye contact
– A simple rule already works: fix thresholds on RL and GD, i.e. the examiner's gaze point is close to the child's eye and the child's gaze faces the examiner
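The thresholding rule can be sketched directly; the threshold values below are illustrative, as the slide does not give them.

```python
import numpy as np

RL_THRESH = 20.0   # pixels: examiner's gaze must be this close to the child's eye
GD_THRESH = 10.0   # degrees: child's gaze must be this close to facing the examiner

def eye_contact(gaze_point, eye_center, child_gaze):
    """Rule-based detector: eye contact iff both RL and GD fall below
    their fixed thresholds."""
    rl = np.linalg.norm(np.subtract(gaze_point, eye_center))
    gd = np.linalg.norm(child_gaze)   # deviation from facing the examiner
    return bool(rl < RL_THRESH and gd < GD_THRESH)

eye_contact((152.0, 101.0), (150.0, 100.0), (2.0, -1.0))   # both cues agree
```

The rule is deliberately simple: both conditions must hold at once, since either person looking away breaks eye contact.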

[Figure: gaze direction and face orientation]

Results
– Randomly select 60% of the frames as training data; the rest for testing
– Classifier: 5 trees with depth 6
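A classifier with these hyperparameters can be sketched with scikit-learn. The slide does not name the exact ensemble method; a random forest is one natural reading of "5 trees, depth 6". The features and labels here are synthetic stand-ins for the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for the 8-dimensional per-frame features [RL, GD, HO, CE].
n = 1000
X = rng.normal(size=(n, 8))
# Synthetic ground truth: "eye contact" when the RL offset and GD deviation
# (here, the first and third feature) are both small.
y = ((np.abs(X[:, 0]) < 1.0) & (np.abs(X[:, 2]) < 1.0)).astype(int)

# 60% training / 40% testing, as on the slide.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.6, random_state=0)

clf = RandomForestClassifier(n_estimators=5, max_depth=6, random_state=0)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)   # held-out accuracy
```

Shallow trees suit this problem because the hand-designed features are already discriminative: a few axis-aligned splits on RL and GD approximate the threshold rule above.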

1. RL is the more reliable feature.
2. Vertical deviations are more frequent than horizontal ones.

Performance

Failure case: the OKAO vision library sometimes fails to detect the correct gaze direction.