ARTIFICIAL PASSENGER

What is an Artificial Passenger? A natural-language e-companion. A sleep-preventive device in cars to overcome driver drowsiness. A life-safety system.

What does it do? Detects alarm conditions through sensors. Broadcasts pre-stored voice messages over the speakers. Captures images of the driver.

Devices used in the AP The main devices used in the Artificial Passenger are: an eye tracker and a voice (speech) recognizer.

About AP The AP is an artificial intelligence–based companion that will be resident in software and chips embedded in the automobile dashboard. The system has a conversation planner that holds a profile of you, including details of your interests and profession.

A microphone picks up your answer and breaks it down into separate words with speech-recognition software. A camera built into the dashboard also tracks your lip movements to improve the accuracy of the speech recognition.

A voice analyzer then looks for signs of tiredness by checking to see if the answer matches your profile. Slow responses and a lack of attention are signs of fatigue. If you reply quickly and clearly, the system judges you to be alert and tells the conversation planner to continue the line of questioning.

If your response is slow or doesn’t make sense, the voice analyzer assumes you are dropping off and acts to get your attention.

Detecting driver vigilance Aim a single camera at the driver's head. Detect the frequency of up-and-down nodding and left-to-right rotations of the head within a selected time period. Determine the frequency of eye blinks and eye closures.
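
As a rough illustration only, such per-window frequencies could be tracked with a sliding time window; in this Python sketch the window length and the idea of timestamped events are assumptions, not details from the slides.

```python
import time
from collections import deque

class EventRateMeter:
    """Counts events (nods, blinks, closures) in a sliding time window."""

    def __init__(self, window_s=60.0):
        self.window_s = window_s   # selected time period, in seconds
        self.events = deque()      # timestamps of observed events

    def record(self, t=None):
        """Log one detected event (e.g., one downward nod)."""
        self.events.append(time.monotonic() if t is None else t)

    def frequency(self, now=None):
        """Events per second over the last window_s seconds."""
        now = time.monotonic() if now is None else now
        while self.events and now - self.events[0] > self.window_s:
            self.events.popleft()
        return len(self.events) / self.window_s
```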

HOW DOES THE TRACKING DEVICE WORK? Data collection and analysis are handled by eye-tracking software. Data are stored as a series of x/y coordinates relative to specific grid points on the computer screen.

Our head tracker tracks the lip corners, eye centers, and sides of the face. "Occlusion" of the eyes and mouth often occurs when the head rotates or the eyes close, so our system tracks through such occlusion and can automatically reinitialize when it mis-tracks.

Representative Image

Eye Tracker

Monitoring System

Tracking includes the following steps Automatically initialize the lips and eyes using color predicates and connected components. Track the lip corners using the dark line between the lips and the color predicate, even through large mouth movements such as yawning.

Construct a bounding box of the head. Determine rotation using the distances between the eye and lip feature points and the sides of the face. Determine eye blinking and eye closing from the number and intensity of pixels in the eye region.

The lip and eye colors (red, green, and blue: RGB) are marked in the image offline. Marking the lip pixels in the image is important.

Each pixel has a red (R), green (G), and blue (B) component. For a pixel that is marked as important, go to this location in the RGB array, indexing on the R, G, and B components. This array location (and its neighbors at offsets i, j, k) is incremented by equation (1): exp(−1.0*(i*i + j*j + k*k)/(2*sigma*sigma)) (1) where sigma is approximately 2.

If a color (pixel value) is marked as important multiple times, its new value is added to the current value. Pixel values that are marked as unimportant decrease the value of the RGB-indexed location via equation (2): exp(−1.0*(i*i + j*j + k*k)/(2*(sigma−1)*(sigma−1))) (2) where sigma is again approximately 2.
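
The following is a minimal Python sketch of the color-predicate update in equations (1) and (2); the 256×256×256 array, the neighborhood radius, and the function name are illustrative assumptions, not details from the slides.

```python
import numpy as np

SIGMA = 2.0  # "sigma is approximately 2" per equations (1) and (2)

def update_color_predicate(predicate, r, g, b, important, radius=2):
    """Spread a small Gaussian around one marked pixel's RGB value:
    add it for important (lip/eye) pixels, equation (1), and subtract
    a narrower one for unimportant pixels, equation (2)."""
    sigma = SIGMA if important else SIGMA - 1.0
    sign = 1.0 if important else -1.0
    for i in range(-radius, radius + 1):
        for j in range(-radius, radius + 1):
            for k in range(-radius, radius + 1):
                ri, gj, bk = r + i, g + j, b + k
                if 0 <= ri < 256 and 0 <= gj < 256 and 0 <= bk < 256:
                    w = np.exp(-1.0 * (i*i + j*j + k*k) /
                               (2.0 * sigma * sigma))
                    predicate[ri, gj, bk] += sign * w

# predicate = np.zeros((256, 256, 256), dtype=np.float32)  # ~64 MB
# After all marked pixels are processed, predicate > THRESHOLD gives
# the lip (or eye) color predicate.
```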

The values in the array that are above a threshold are marked as being one of the specified colors. Another RGB array is generated for the skin colors, and the largest non-skin components above the lips are marked as the eyes. First, a dark pixel is located.
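
One plausible realization of the "largest non-skin components above the lips" step, sketched here with scipy.ndimage; the lip_row input and the choice of exactly two blobs are assumptions.

```python
import numpy as np
from scipy import ndimage

def find_eye_candidates(non_skin_mask, lip_row):
    """Return centroids of the two largest non-skin blobs above the lips."""
    region = non_skin_mask.copy()
    region[lip_row:, :] = False              # keep only pixels above the lips
    labels, n = ndimage.label(region)        # connected components
    if n < 2:
        return []
    sizes = ndimage.sum(region, labels, index=range(1, n + 1))
    biggest = np.argsort(sizes)[-2:] + 1     # label ids of the two largest
    return [ndimage.center_of_mass(region, labels, lbl) for lbl in biggest]
```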

The system goes to the eye center of the previous frame and finds the center of mass of the eye-region pixels. It then looks for the darkest pixel, which corresponds to the pupil. This estimate is tested to see whether it is close enough to the previous eye location.

The estimate is feasible when the newly computed eye centers are close, in pixel distance, to the previous frame's computed eye centers. This makes sense because the video runs at 30 frames per second, so eye motion between consecutive frames should be relatively small.
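
A small sketch of this feasibility test; the 15-pixel jump threshold is an assumed value, not one given in the slides.

```python
import math

MAX_EYE_JUMP_PX = 15  # assumed bound; at 30 fps inter-frame motion is small

def eyes_feasible(prev_centers, new_centers, max_jump=MAX_EYE_JUMP_PX):
    """Accept the new eye centers only if each is within max_jump pixels
    of the corresponding center from the previous frame."""
    return all(math.hypot(nx - px, ny - py) <= max_jump
               for (px, py), (nx, ny) in zip(prev_centers, new_centers))
```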

If the new points are too far away, the system searches a window around the eyes, finds all non-skin connected components in an approximately 7×20 pixel window, and computes the slant of the line between the lip corners using equation (5), which gives the slope between two points in general: (y2 − y1)/(x2 − x1) (5) where (x1, y1) is the coordinate of one feature and (x2, y2) is the coordinate of the other corresponding feature.

The system selects the pair of eye centroids whose slant is closest to the slant between the lip corners, again using equation (5). These two stages are called the eye "black hole" tracker.
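
A brief sketch of equation (5) and the slant-matching selection; representing features as (x, y) tuples and using a brute-force pair search are assumptions.

```python
from itertools import combinations

def slope(p1, p2):
    """Equation (5): slope of the line through two (x, y) feature points."""
    (x1, y1), (x2, y2) = p1, p2
    return (y2 - y1) / (x2 - x1) if x2 != x1 else float('inf')

def pick_eye_pair(candidates, lip_left, lip_right):
    """Choose the candidate pair whose slant best matches the lip line."""
    lip_slant = slope(lip_left, lip_right)
    return min(combinations(candidates, 2),
               key=lambda pair: abs(slope(*pair) - lip_slant))
```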

Eye occlusion is detected by analyzing the bright regions. As long as there are eye-white pixels in the eye region, the eyes are open; if not, a blink or closure is happening. To determine the eye-white color, the brightest pixel in the eye region of the first frame of each sequence is used as the eye-white color.
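
A minimal sketch of the eye-white test just described; the grayscale input and the intensity tolerance tol are assumptions.

```python
import numpy as np

def learn_eye_white(first_frame_gray, eye_box):
    """First frame: the brightest pixel in the eye region is eye-white."""
    y0, y1, x0, x1 = eye_box
    return int(first_frame_gray[y0:y1, x0:x1].max())

def eyes_open(frame_gray, eye_box, eye_white, tol=20):
    """Eyes count as open while near-eye-white pixels remain in the region."""
    y0, y1, x0, x1 = eye_box
    patch = frame_gray[y0:y1, x0:x1]
    return bool(np.any(patch >= eye_white - tol))
```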

If the eyes have been closed for more than approximately 40 of the last approximately 60 frames, the system declares that the driver's eyes have been closed for too long.
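
This rule amounts to a sliding window over per-frame eye states; a brief sketch follows (the class and names are illustrative, not from the slides).

```python
from collections import deque

WINDOW_FRAMES = 60   # the last ~60 frames (about 2 s at 30 fps)
CLOSED_LIMIT = 40    # alarm if eyes closed in more than ~40 of them

class DrowsinessMonitor:
    def __init__(self):
        self.history = deque(maxlen=WINDOW_FRAMES)

    def update(self, eyes_closed):
        """Record one frame; True means 'eyes closed for too long'."""
        self.history.append(bool(eyes_closed))
        return sum(self.history) > CLOSED_LIMIT
```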

Output On-site alarms within the vehicle. Remote alarms.

Other applications 1) Cabins in airplanes. 2) Watercraft such as boats. 3) Trains and subways.

SUMMARY A method for monitoring driver alertness. It allows sufficient time to avert an accident. It keeps monitoring even through full facial occlusion of the driver.

THANK YOU