Non-invasive Techniques for Human Fatigue Monitoring Qiang Ji Dept. of Electrical, Computer, and Systems Engineering Rensselaer Polytechnic Institute

Visual Behaviors
Visual behaviors that typically reflect a person's level of fatigue include:
– Eyelid movement
– Head movement
– Gaze
– Facial expressions

Eyelid Movements
– Tracking eyes: develop techniques that can robustly track the eyes under different face orientations, illumination conditions, and large head movements.
– Compute eye movement parameters (see the sketch below):
– PERCLOS (percentage of eyelid closure over time)
– Average Eye Closure/Open Speed (AECS)
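As an illustration only, not the authors' implementation, the sketch below computes the two eyelid parameters named above from a per-frame eye-openness signal in [0, 1] such as an eye tracker might produce. The 20% "closed" threshold, the 80% "fully open" threshold, and the particular AECS definition are assumptions.

```python
from typing import Sequence

CLOSED_THRESHOLD = 0.2   # eye counted as "closed" below 20% openness (assumed value)
OPEN_THRESHOLD = 0.8     # eye counted as "fully open" above 80% openness (assumed value)

def perclos(openness: Sequence[float]) -> float:
    """PERCLOS: fraction of frames in the window during which the eye is closed."""
    closed = sum(1 for o in openness if o < CLOSED_THRESHOLD)
    return closed / len(openness)

def average_eye_closure_speed(openness: Sequence[float], fps: float) -> float:
    """AECS proxy: mean duration in seconds of each closing transition,
    measured from the last fully open frame to the first closed frame."""
    durations, start = [], None
    for i in range(1, len(openness)):
        if start is None and openness[i] < openness[i - 1] and openness[i - 1] >= OPEN_THRESHOLD:
            start = i - 1                                  # a closure begins
        elif start is not None and openness[i] < CLOSED_THRESHOLD:
            durations.append((i - start) / fps)            # the closure completes
            start = None
    return sum(durations) / len(durations) if durations else 0.0

# Example: a 10-frame window at 30 fps containing one slow blink.
window = [1.0, 0.9, 0.6, 0.3, 0.1, 0.1, 0.4, 0.9, 1.0, 1.0]
print(perclos(window))                        # 0.2 (2 of 10 frames closed)
print(average_eye_closure_speed(window, 30))  # ~0.13 s for the single closure
```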

Eye tracking demo

PERCLOS measurement over time

Average Eye Closure Speed over time

Gaze (Pupil Movements)
– Real-time gaze tracking: no calibration is needed, and natural head movements are allowed.
– Gaze parameters (see the sketch below):
– Spatial gaze distribution over time
– Ratio of fixation time to saccade time
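For illustration, the fixation-to-saccade ratio can be sketched with a simple velocity threshold over the tracked gaze points. The threshold value, the pixel units, and the sampling rate below are assumptions, not values from the original system.

```python
import math
from typing import List, Tuple

def fixation_saccade_ratio(gaze: List[Tuple[float, float]],
                           fps: float,
                           saccade_speed: float = 500.0) -> float:
    """Ratio of time spent in fixations to time spent in saccades.

    A frame-to-frame gaze movement faster than `saccade_speed` (pixels/second,
    an assumed threshold) is labelled a saccade; everything else is a fixation."""
    fixation_frames = saccade_frames = 0
    for (x0, y0), (x1, y1) in zip(gaze, gaze[1:]):
        speed = math.hypot(x1 - x0, y1 - y0) * fps      # pixels per second
        if speed > saccade_speed:
            saccade_frames += 1
        else:
            fixation_frames += 1
    return fixation_frames / max(saccade_frames, 1)

# Example: mostly stable gaze with one fast jump, sampled at 60 Hz.
points = [(100, 100), (101, 100), (102, 101), (400, 300), (401, 300), (402, 301)]
print(fixation_saccade_ratio(points, fps=60))   # 4 fixation frames / 1 saccade frame = 4.0
```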

Gaze distribution over time while alert

Gaze distribution over time under fatigue

Head Movement
– Real-time head pose tracking: perform 3D face pose estimation from a single uncalibrated camera.
– Head movement parameters (see the sketch below):
– Head tilt frequency over time
– Percentage of side views (PerSideV)
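Below is a minimal sketch, under assumed angle thresholds, of how these two head-movement parameters could be derived from per-frame (yaw, pitch) estimates produced by a 3D pose tracker; it is an illustration, not the authors' code.

```python
from typing import List, Tuple

def per_side_v(poses: List[Tuple[float, float]], yaw_threshold: float = 30.0) -> float:
    """PerSideV: fraction of frames in which the head is turned away from the camera
    by more than `yaw_threshold` degrees (threshold is an assumed value)."""
    side = sum(1 for yaw, _ in poses if abs(yaw) > yaw_threshold)
    return side / len(poses)

def head_tilt_frequency(poses: List[Tuple[float, float]],
                        fps: float,
                        pitch_threshold: float = 20.0) -> float:
    """Head tilts (nods) per minute: count downward crossings of the pitch threshold."""
    tilts = 0
    for (_, p0), (_, p1) in zip(poses, poses[1:]):
        if p0 < pitch_threshold <= p1:         # head pitches down past the threshold
            tilts += 1
    minutes = len(poses) / fps / 60.0
    return tilts / minutes if minutes > 0 else 0.0

# Example: a short (yaw, pitch) track at 30 fps with one side glance and one nod.
track = [(0, 0), (5, 0), (40, 0), (45, 0), (5, 5), (0, 25), (0, 30), (0, 5)]
print(per_side_v(track))                   # 0.25 (2 of 8 frames beyond 30 degrees yaw)
print(head_tilt_frequency(track, fps=30))  # 1 nod over this very short window (225/min extrapolated)
```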

Facial Expressions
– Track facial features.
– Recognize certain facial expressions related to fatigue, such as yawning (a detection sketch follows).
– Build a database of fatigue expressions.
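One common way to recognize a yawning-like expression from tracked facial features is a mouth-opening ratio sustained over several frames; this is an assumption for illustration, not necessarily the authors' method, and the landmark choice, the 0.6 ratio threshold, and the frame count are all illustrative.

```python
import math
from typing import Sequence, Tuple

Point = Tuple[float, float]

def mouth_open_ratio(top: Point, bottom: Point, left: Point, right: Point) -> float:
    """Mouth opening ratio: lip-to-lip height divided by corner-to-corner width."""
    height = math.dist(top, bottom)
    width = math.dist(left, right)
    return height / width if width > 0 else 0.0

def is_yawning(ratios: Sequence[float], threshold: float = 0.6, min_frames: int = 15) -> bool:
    """Flag a yawn when the mouth stays wide open for at least `min_frames` consecutive frames."""
    run = 0
    for r in ratios:
        run = run + 1 if r > threshold else 0
        if run >= min_frames:
            return True
    return False
```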

Facial expression demo

Fatigue Modeling
– Knowledge about fatigue is uncertain and comes from different levels of abstraction.
– Fatigue represents the affective state of an individual; it is not directly observable and can only be inferred.

Overview of Our Approach
– Propose a probabilistic framework based on Bayesian Networks (BN) to model fatigue.
– Systematically integrate various sources of information related to fatigue.
– Infer and predict fatigue from the available observations and the relevant contextual information.

Bayesian Network Construction
– A BN model consists of target hypothesis variables (hidden nodes) and information variables (information nodes).
– Fatigue is the target hypothesis variable that we intend to infer.
– Other contextual factors and visual cues are the information nodes.

Causes of Fatigue
Major factors that cause fatigue include:
– Sleep quality
– Circadian rhythm (time of day)
– Physical conditions
– Working environment

Bayesian Network Model for Monitoring Human Fatigue
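For illustration only, the toy network below follows the structure described on the preceding slides: two contextual causes are parents of the hidden Fatigue node and two visual cues are its children, with all variables binary. The conditional probabilities are made up, and the posterior is computed by direct enumeration rather than with the authors' actual model or a BN library.

```python
from itertools import product

# Priors over the contextual causes: P(SleepQuality = poor), P(TimeOfDay = late)
P_SLEEP_POOR, P_TIME_LATE = 0.3, 0.4

# P(Fatigue = 1 | SleepQuality, TimeOfDay), indexed by (poor_sleep, late)
P_FATIGUE = {(0, 0): 0.05, (0, 1): 0.30, (1, 0): 0.40, (1, 1): 0.80}

# P(visual cue = 1 | Fatigue) for the two cue children
P_HIGH_PERCLOS = {0: 0.10, 1: 0.85}
P_NARROW_GAZE = {0: 0.20, 1: 0.70}

def fatigue_posterior(high_perclos: int, narrow_gaze: int) -> float:
    """P(Fatigue = 1 | observed visual cues), obtained by summing out the hidden causes."""
    joint = {0: 0.0, 1: 0.0}
    for poor_sleep, late in product((0, 1), repeat=2):
        p_context = ((P_SLEEP_POOR if poor_sleep else 1 - P_SLEEP_POOR)
                     * (P_TIME_LATE if late else 1 - P_TIME_LATE))
        for fatigue in (0, 1):
            p_f = P_FATIGUE[poor_sleep, late] if fatigue else 1 - P_FATIGUE[poor_sleep, late]
            p_cues = ((P_HIGH_PERCLOS[fatigue] if high_perclos else 1 - P_HIGH_PERCLOS[fatigue])
                      * (P_NARROW_GAZE[fatigue] if narrow_gaze else 1 - P_NARROW_GAZE[fatigue]))
            joint[fatigue] += p_context * p_f * p_cues
    return joint[1] / (joint[0] + joint[1])

print(fatigue_posterior(high_perclos=1, narrow_gaze=1))   # ~0.92 with both cues observed
print(fatigue_posterior(high_perclos=0, narrow_gaze=0))   # ~0.02 with neither cue observed
```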

Interface with Vision Module
– An interface has been developed to connect the output of the computer vision system to the information fusion engine.
– The interface instantiates the evidence in the fatigue network, which then performs fatigue inference and displays the fatigue index in real time (see the sketch below).
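A minimal sketch of such an interface loop: on each cycle the vision module's parameters are discretized into evidence values, the evidence is pushed into the fatigue network, and the resulting fatigue index is displayed. The reader function, the discretization thresholds, and the stand-in inference routine are hypothetical placeholders, not the authors' components.

```python
import random
import time

def read_vision_parameters() -> dict:
    """Hypothetical stand-in for the per-frame output of the vision module."""
    return {"perclos": random.uniform(0.0, 0.6), "per_side_v": random.uniform(0.0, 0.5)}

def fatigue_index(evidence: dict) -> float:
    """Hypothetical stand-in for the Bayesian-network inference engine
    (e.g. the fatigue_posterior sketch shown after the network slide)."""
    return min(1.0, 0.2 + 0.5 * evidence["HighPERCLOS"] + 0.2 * evidence["FrequentSideViews"])

for _ in range(10):                      # a few update cycles for the demo
    params = read_vision_parameters()
    evidence = {                         # instantiate the network's evidence nodes
        "HighPERCLOS": int(params["perclos"] > 0.3),           # assumed discretization
        "FrequentSideViews": int(params["per_side_v"] > 0.2),  # assumed discretization
    }
    print(f"fatigue index: {fatigue_index(evidence):.2f}")
    time.sleep(0.5)                      # roughly real-time display rate
```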

Conclusions
– Developed non-intrusive, real-time computer vision techniques to extract multiple fatigue parameters related to eyelid movements, gaze, head movement, and facial expressions.
– Developed a probabilistic framework based on Bayesian networks to model and integrate contextual and visual cue information for fatigue monitoring.