Non-invasive Techniques for Human Fatigue Monitoring Qiang Ji Dept. of Electrical, Computer, and Systems Engineering Rensselaer Polytechnic Institute

Funded by AFOSR and Honda

Visual Behaviors
Visual behaviors that typically reflect a person's level of fatigue include:
– Eyelid movement
– Head movement
– Gaze
– Facial expressions

Eye Detection and Tracking

Eye Detection

Eye Tracking
Developed an eye-tracking technique that combines mean-shift tracking with Kalman filtering. It robustly tracks the eyes under different face orientations, varying illumination, and large head movements.
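The predict-correct loop behind such a tracker can be sketched with a constant-velocity Kalman filter; in the combined scheme, the measurement fed to `update` would come from a mean-shift search around the predicted location. The class below is an illustrative sketch, not the paper's implementation, and all noise parameters are assumptions.

```python
import numpy as np

class EyeKalmanTracker:
    """Minimal constant-velocity Kalman filter for a 2-D eye position.
    The measurement passed to update() would come from mean-shift."""

    def __init__(self, x0, y0, dt=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0])      # state: [x, y, vx, vy]
        self.P = np.eye(4) * 10.0                  # state covariance
        self.F = np.eye(4)                         # constant-velocity dynamics
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.zeros((2, 4))                  # we observe position only
        self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = np.eye(4) * 0.01                  # process noise (assumed)
        self.R = np.eye(2) * 1.0                   # measurement noise (assumed)

    def predict(self):
        """Propagate the state; gives the search centre for mean-shift."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        """Fold in a measured eye position z = (x, y)."""
        y = np.asarray(z, dtype=float) - self.H @ self.x   # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)           # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]
```

When the mean-shift measurement is lost (e.g. during a blink), the tracker can coast on `predict()` alone until the eye reappears.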

Eyelid Movement Characterization
Eyelid movement parameters:
– Percentage of Eye Closure (PERCLOS)
– Average Eye Closure/Open Speed (AECS)
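A minimal sketch of how these two parameters could be computed from a per-frame eye-openness signal (1.0 = fully open, 0.0 = closed). The thresholds and frame rate are illustrative assumptions, not the paper's values.

```python
import numpy as np

def perclos(openness, closed_thresh=0.2):
    """PERCLOS: percentage of frames in which the eye is (nearly) closed."""
    openness = np.asarray(openness, dtype=float)
    return 100.0 * np.mean(openness < closed_thresh)

def aecs(openness, fps=30.0, open_thresh=0.8, closed_thresh=0.2):
    """Average eye-closure speed: mean time (s) from open to closed."""
    durations = []
    start = None
    for i, v in enumerate(openness):
        if v >= open_thresh:
            start = i                      # last frame the eye was open
        elif v < closed_thresh and start is not None:
            durations.append((i - start) / fps)   # one closure event
            start = None
    return float(np.mean(durations)) if durations else 0.0
```

A fatigued subject typically shows a higher PERCLOS and a slower (larger) AECS than an alert one.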

Gaze (Pupil Movements)
Developed a real-time gaze-tracking technique. No calibration is needed, and natural head movements are allowed.

Gaze Estimation
Gaze is determined by:
– Pupil location (local gaze): characterized by the relative positions of the glint and the pupil.
– Head orientation (global gaze): estimated from the pupil's shape, position, orientation, and size.
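The local-gaze idea — reading gaze from the glint-to-pupil offset — can be illustrated with a simple affine map fitted by least squares. The actual system needs no explicit calibration, so this is only a sketch of the underlying geometry; both function names are hypothetical.

```python
import numpy as np

def fit_gaze_map(pupil_glint_vecs, screen_points):
    """Fit an affine map from pupil-glint offset vectors to screen points."""
    V = np.asarray(pupil_glint_vecs, dtype=float)
    X = np.hstack([V, np.ones((len(V), 1))])       # homogeneous coordinates
    A, *_ = np.linalg.lstsq(X, np.asarray(screen_points, dtype=float),
                            rcond=None)
    return A                                       # 3x2 affine matrix

def estimate_gaze(A, pupil_glint_vec):
    """Map one pupil-glint vector to an estimated screen point (x, y)."""
    v = np.append(np.asarray(pupil_glint_vec, dtype=float), 1.0)
    return v @ A
```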

Gaze Parameters
– Gaze spatial distribution over time
– PERSAC: percentage of saccadic eye movement over time
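PERSAC can be approximated by thresholding gaze velocity, a common saccade criterion. The velocity threshold and frame rate below are illustrative assumptions.

```python
import numpy as np

def persac(gaze_xy, fps=30.0, vel_thresh=100.0):
    """Percentage of samples classified as saccades (high gaze velocity)."""
    g = np.asarray(gaze_xy, dtype=float)
    # Frame-to-frame displacement scaled to units per second.
    vel = np.linalg.norm(np.diff(g, axis=0), axis=1) * fps
    return 100.0 * np.mean(vel > vel_thresh)
```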

Gaze distribution over time while alert

Gaze distribution over time while fatigued

Gaze distribution over time for inattentive driving

Plot of PERSAC parameter over 30 seconds.

Head Movement
Real-time head pose tracking: performs 3D face pose estimation from a single uncalibrated camera.
Head movement parameter: head tilt frequency over time (TiltFreq).
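TiltFreq could be computed by counting the onsets of large pitch excursions (head nods) per minute; the nod threshold below is an illustrative assumption.

```python
import numpy as np

def tilt_freq(pitch_deg, fps=30.0, nod_thresh=15.0):
    """Head-nod frequency: pitch excursions past a threshold, per minute."""
    p = np.asarray(pitch_deg, dtype=float)
    nodding = p > nod_thresh                    # frames with a large tilt
    onsets = np.sum(nodding[1:] & ~nodding[:-1])  # rising edges = nod starts
    minutes = len(p) / fps / 60.0
    return onsets / minutes if minutes > 0 else 0.0
```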

The flowchart of face pose tracking

Examples Face Model Acquisition

Head pitches (tilts) monitoring over time (seconds)

Facial Expressions
– Tracking facial features
– Recognizing fatigue-related facial expressions, such as yawning, and computing their frequency (YawnFreq)
– Building a database of fatigue expressions for training

The plot of the openness of the mouth over time

Facial expression demo

Fatigue Modeling
Observations of fatigue are uncertain, incomplete, dynamic, and come from different perspectives.
Fatigue represents the affective state of an individual; it is not directly observable and can only be inferred.

Overview of Our Approach
Propose a probabilistic framework based on Dynamic Bayesian Networks (DBNs) to:
– systematically represent and integrate various sources of fatigue-related information over time;
– infer and predict fatigue from the available observations and the relevant contextual information.

Bayesian Network Construction
A DBN model consists of target hypothesis variables (hidden nodes) and information variables (information nodes). Fatigue is the target hypothesis variable that we intend to infer; the contextual factors and visual cues are the information nodes.

Causes of Fatigue
Major factors causing fatigue include:
– Sleep quality
– Circadian rhythm (time of day)
– Physical condition
– Working environment
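How such causes and visual cues combine in a Bayesian network can be illustrated with a toy two-layer model: one cause (sleep quality), the hidden fatigue node, and one observable cue (high PERCLOS). All conditional probabilities below are made up for illustration and are not values from the paper.

```python
# Toy Bayesian network: SleepQuality -> Fatigue -> PERCLOS evidence.
# Every number here is an illustrative assumption.
P_sleep_poor = 0.3                         # P(SleepQuality = poor)
P_fatigue = {True: 0.8, False: 0.2}        # P(Fatigue | SleepPoor = key)
P_high_perclos = {True: 0.9, False: 0.1}   # P(PERCLOS high | Fatigue = key)

def posterior_fatigue(perclos_high):
    """P(Fatigue | PERCLOS evidence), by enumerating over SleepQuality."""
    joint = {}
    for f in (True, False):
        # Marginalize the hidden cause to get the prior P(Fatigue = f).
        prior = 0.0
        for sleep_poor in (True, False):
            ps = P_sleep_poor if sleep_poor else 1.0 - P_sleep_poor
            pf = P_fatigue[sleep_poor] if f else 1.0 - P_fatigue[sleep_poor]
            prior += ps * pf
        # Weight by the likelihood of the observed cue.
        pe = P_high_perclos[f] if perclos_high else 1.0 - P_high_perclos[f]
        joint[f] = prior * pe
    return joint[True] / (joint[True] + joint[False])   # normalize
```

Observing a high PERCLOS raises the fatigue posterior well above its prior of 0.38 in this toy model.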

Bayesian Fatigue Model

Dynamic Fatigue Modeling
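The dynamic part of the model can be sketched as HMM-style filtering over the fatigue node: propagate the belief forward through a transition model, then correct it with the current frame's evidence. The transition and observation probabilities below are illustrative assumptions.

```python
# One step of DBN (HMM-style) filtering over a binary fatigue state.
# All probabilities are illustrative assumptions.
P_stay = {True: 0.9, False: 0.8}    # P(F_t = F_{t-1} | F_{t-1} = key)
P_obs = {True: 0.9, False: 0.2}     # P(drowsy cue observed | F_t = key)

def dbn_step(belief, cue_observed):
    """belief = P(F_{t-1} = 1); returns P(F_t = 1 | evidence up to t)."""
    # Prediction: push the belief through the transition model.
    pred = belief * P_stay[True] + (1.0 - belief) * (1.0 - P_stay[False])
    # Correction: Bayes update with the current observation likelihood.
    num = pred * (P_obs[True] if cue_observed else 1.0 - P_obs[True])
    den = num + (1.0 - pred) * (P_obs[False] if cue_observed
                                else 1.0 - P_obs[False])
    return num / den
```

Repeated drowsy observations drive the belief toward 1, while alert observations pull it back down, giving the smooth temporal integration the DBN is designed for.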

Bayesian Fatigue Model Demo

Interface with Vision Module
An interface has been developed to connect the output of the computer vision system to the information fusion engine. The interface instantiates the evidence in the fatigue network, which then performs fatigue inference and displays the fatigue index in real time.

Conclusions
– Developed non-intrusive, real-time computer vision techniques to extract multiple fatigue parameters related to eyelid movements, gaze, head movement, and facial expressions.
– Developed a probabilistic framework based on Dynamic Bayesian Networks to model and integrate contextual information and visual cues for fatigue detection over time.

Effective Fatigue Monitoring
– The technology must be non-intrusive and operate in real time.
– It should simultaneously extract multiple parameters and systematically combine them over time to obtain a robust and consistent characterization of fatigue.
– A fatigue model is needed that can represent the uncertain and dynamic knowledge associated with fatigue and integrate it over time to infer and predict human fatigue.