Automated Drowsiness Detection For Improved Driving Safety Aytül Erçil November 13, 2008

Outline: Problem Background and Description; Technological Background; Action Unit Detection; Drowsiness Prediction.

Objectives/Overview: statistical inference of fatigue using machine learning techniques.

In traffic accidents in Turkey: 123,985 people were injured, 3,215 people died, and the financial loss was 651,166,236 USD. Driver error has been blamed as the primary cause of approximately 80% of these traffic accidents.

The US National Highway Traffic Safety Administration estimates that in the US alone approximately 100,000 crashes each year are caused primarily by driver drowsiness or fatigue.

Problem Background: growing interest in intelligent vehicles, including a US Department of Transportation initiative and the European Transport Policy for 2010, which set a target to halve road fatalities by 2010.

The Drivesafe Project

Current funding status: Turkish Development Agency funding of Drive-Safe (August 2005 - July 2009); Japanese New Energy and Industrial Technology Development Organization (NEDO) (October - December 2008); FP6 SPICE Project at Sabancı University (May - October 2008); FP6 AUTOCOM Project at ITU Mekar (May - April 2008).

Fatigue detection and prediction technologies: readiness-to-perform technologies; mathematical models of alertness dynamics; vehicle-based performance technologies (vehicle speed, lateral position, pedal movement); in-vehicle, on-line operator status monitoring technologies.

In-vehicle, on-line operator status monitoring technologies: physiological signals (heart rate, pulse rate, and electroencephalography (EEG)); computer vision systems, which detect and recognize the facial motion and appearance changes occurring during drowsiness.

Computer vision systems track visual behaviors, for example gaze direction, head movement, and yawning, with no requirement for physical contact.

Facial Actions (Ekman & Friesen, 1978)

Background Information: Action Units

Proposed work: detection of driver fatigue from recorded video using facial appearance changes. The framework will be based on graphical models and machine learning approaches.

Proposed architecture (diagram): sensing channels such as an eye tracker and a gaze tracker supply features (pupil motion, AU 61/AU 62; gaze, AU 51/AU 52) at time n-1 and time n; inference builds up from single AUs through partial-face and entire-face behavior to the states inattentive, falling asleep, and fatigue.

Action unit tracking, previous techniques: they do not employ a spatially and temporally dependent structure for action unit tracking; contextual information is not exploited; temporal information is not exploited.

Classification challenges: which action units, or combinations of action units, are cues for fatigue?

Learning from real examples: posed drowsiness versus actual drowsiness; posed and spontaneous expressions are driven by different neural pathways.

Initial experimental setup: subjects played a driving video game on a Windows machine using a steering wheel and an open-source, multi-platform driving game. At random times, a wind effect was applied that dragged the car to the right or left, forcing the subject to correct the position of the car.

Head movement measures: head movement was measured using an accelerometer with three degrees of freedom. This three-dimensional accelerometer consists of three one-dimensional accelerometers mounted at right angles, measuring accelerations in the range of -5 g to +5 g.
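As a rough illustration of how such a 3-axis accelerometer can yield head-orientation measures like the head roll referenced later in the results, here is a minimal sketch; the axis convention, the quasi-static assumption, and the example sample values are assumptions, not details from the study.

```python
import math

def head_roll_pitch(ax, ay, az):
    """Estimate head roll and pitch (in radians) from a single 3-axis
    accelerometer sample (ax, ay, az) given in g, assuming the head is
    quasi-static so that gravity dominates the reading."""
    roll = math.atan2(ay, az)                    # rotation about the forward axis
    pitch = math.atan2(-ax, math.hypot(ay, az))  # rotation about the lateral axis
    return roll, pitch

# Example with a made-up sample (head tilted slightly to one side):
roll, pitch = head_roll_pitch(0.05, 0.30, 0.95)
print(round(math.degrees(roll), 1), round(math.degrees(pitch), 1))
```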

The one minute preceding a sleep episode or a crash was identified as a non-alert state. There was a mean of 24 non-alert episodes per subject, with a minimum of 9 and a maximum of 35. Fourteen alert segments for each subject were collected from the first 20 minutes of the driving task.

Figure: steering, distance from lane center, and eye opening over a 20-second window, annotated with an eyes-closed interval, an overcorrection, and the resulting crash.

Figure: histograms for eye closure (AU 45) and brow raise (AU 2), with the area under the ROC curve reported for each.
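The per-action-unit discriminability summarized by the area under the ROC can be computed directly from framewise detector outputs and alert/non-alert labels. A minimal sketch using scikit-learn follows; the array names and the random placeholder data are assumptions standing in for the real recordings.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# au_scores: framewise detector outputs per action unit (placeholder data);
# labels: 1 for non-alert frames (the minute before a sleep episode or crash),
# 0 for alert frames.
rng = np.random.default_rng(0)
au_scores = {
    "AU45 eye closure": rng.normal(size=1000),
    "AU2 brow raise": rng.normal(size=1000),
}
labels = rng.integers(0, 2, size=1000)

for name, scores in au_scores.items():
    print(name, "A' =", round(roc_auc_score(labels, scores), 3))
```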

Facial action unit detection pipeline: machine learning with feature selection and pattern recognition (AdaBoost, SVM) yields detectors for AU 1, AU 2, AU 4, ..., AU 46.
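One way to realize the AdaBoost-plus-SVM pipeline named on the slide is to let boosting rank features and then train an SVM on the selected subset. The sketch below assumes a generic image feature vector per frame (for example filter-bank responses), a cutoff of 30 selected features, and placeholder data; it is not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC

# X: image features per frame, y: 1 if the target AU is present in the frame.
# Placeholder data stands in for real features extracted from face video.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 200))
y = rng.integers(0, 2, size=500)

# AdaBoost on decision stumps doubles as a feature selector: keep the features
# its weak learners relied on, then train an SVM on that subset.
booster = AdaBoostClassifier(n_estimators=50).fit(X, y)
selected = np.argsort(booster.feature_importances_)[::-1][:30]

au_detector = SVC(kernel="linear", probability=True).fit(X[:, selected], y)
scores = au_detector.decision_function(X[:, selected])  # continuous AU output per frame
```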

Drowsiness prediction: the facial action outputs were passed to a classifier for predicting drowsiness based on the automatically detected facial behavior. Two learning-based classifiers, AdaBoost and multinomial logistic regression, are compared. Within-subject prediction of drowsiness and across-subject (subject-independent) prediction of drowsiness were both tested.

Classification task: a multinomial logistic regression (MLR) classifier labels each frame as alert or as belonging to the 60 seconds before a crash, using 31 facial action channels (AU 1, AU 2, AU 4, ..., AU 31) and producing a continuous output for each frame.
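A per-frame classifier in the spirit of this MLR stage can be sketched with logistic regression over the 31 AU channel outputs. The array shapes and placeholder data below are assumptions; for the two-class case (alert versus pre-crash), multinomial logistic regression reduces to ordinary binary logistic regression.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# au_channels: (n_frames, 31) framewise facial action outputs;
# frame_labels: 0 = alert, 1 = within the 60 seconds before a crash.
rng = np.random.default_rng(0)
au_channels = rng.normal(size=(2000, 31))
frame_labels = rng.integers(0, 2, size=2000)

mlr = LogisticRegression(max_iter=1000).fit(au_channels, frame_labels)

# Continuous per-frame drowsiness score, matching the slide's
# "continuous output for each frame".
frame_scores = mlr.predict_proba(au_channels)[:, 1]
```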

Testing: MLR Weighted Temporal Windows

Within-subject drowsiness prediction: 80% of the alert and non-alert episodes were used for training and the other 20% were reserved for testing. This resulted in a mean of 19 non-alert and 11 alert episodes for training, and 5 non-alert and 3 alert episodes for testing per subject.

Across-subject drowsiness prediction: the 31 facial actions were passed to the MLR classifier with framewise training. Cross-validation used 3 subjects for training and 1 subject for testing. For crash prediction, the 5 best features were chosen by sequential feature selection and the MLR-weighted features were summed over a 12-second time interval, giving 0.98 across subjects (area under the ROC).
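The across-subject protocol (leave one subject out, sequential feature selection, and summation of the MLR output over a 12-second window) can be sketched as follows. The frame rate, placeholder data, and the use of scikit-learn's SequentialFeatureSelector and LeaveOneGroupOut are assumptions about one reasonable realization, not the original code.

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneGroupOut

FPS = 30                  # assumed video frame rate
WINDOW = 12 * FPS         # 12-second temporal window from the slide

# X: (n_frames, 31) AU outputs, y: framewise alert / pre-crash labels,
# subjects: subject id per frame (placeholder data for four subjects).
rng = np.random.default_rng(0)
X = rng.normal(size=(4 * 1000, 31))
y = rng.integers(0, 2, size=4 * 1000)
subjects = np.repeat(np.arange(4), 1000)

aucs = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
    # Pick the 5 most useful channels using only the training subjects.
    selector = SequentialFeatureSelector(
        LogisticRegression(max_iter=1000), n_features_to_select=5
    ).fit(X[train_idx], y[train_idx])

    mlr = LogisticRegression(max_iter=1000).fit(
        selector.transform(X[train_idx]), y[train_idx]
    )
    frame_scores = mlr.predict_proba(selector.transform(X[test_idx]))[:, 1]

    # Sum the MLR-weighted output over a sliding 12-second window before scoring.
    window_scores = np.convolve(frame_scores, np.ones(WINDOW), mode="same")
    aucs.append(roc_auc_score(y[test_idx], window_scores))

print("mean A' over held-out subjects:", round(float(np.mean(aucs)), 3))
```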

Predictive performance of individual facial actions, more frequent when critically drowsy: eye closure, brow raise, chin raise, frown, nose wrinkle, jaw sideways.

Less frequent when critically drowsy: smile, squint, nostril compressor, brow lower, jaw drop (A' > 0.75).

We observed during this study that many subjects raised their eyebrows in an attempt to keep their eyes open, and the strong association of the AU 2 detector is consistent with that observation. Also of note is that action 26, jaw drop, which occurs during yawning, actually occurred less often in the critical 60 seconds prior to a crash. This is consistent with the prediction that yawning does not tend to occur in the final moments before falling asleep.

Drowsiness detection performance, using an MLR classifier with different feature combinations.

Figure: effect of temporal window length, plotting A' against window length in seconds, with the 12-second window marked.

Figure: coupling of facial movements, comparing alert and drowsy states. Traces of eye openness and brow raise over 10-second windows show brow raise and eye closure becoming coupled when drowsy (r = 0.87).

Figure: coupling of steering and head motion. The correlation between steering and head acceleration rises from r = 0.27 in the alert state to r = 0.65 in the drowsy state (60-second windows).
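The coupling values shown in the figure are plain correlations between the two time series over a segment. A minimal sketch of such a computation is below; the 30 Hz sampling rate and the placeholder segments are assumptions.

```python
import numpy as np

def coupling_r(steering, head_accel):
    """Pearson correlation between two equally sampled signals, one way to
    quantify the steering / head-motion coupling shown in the figure."""
    return float(np.corrcoef(steering, head_accel)[0, 1])

# Placeholder 60-second segments at an assumed 30 Hz sampling rate.
rng = np.random.default_rng(1)
alert_steer, alert_head = rng.normal(size=(2, 60 * 30))
print("alert segment r =", round(coupling_r(alert_steer, alert_head), 2))
```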

New associations between facial behavior and drowsiness: brow raise, chin raise, more head roll, and possibly less yawning just before a crash. Coupling of behaviors: head movement and steering; brow raise and eye opening.

Future work: extend the graphical model so that it captures the temporal relationships using a discriminative approach.

Future work: more data collection in the simulator environment "Uykucu" (Sleepy).

Thank you