Automated Reading Assistance System Using Point-of-Gaze Estimation
M.A.Sc. Thesis Presentation


Automated Reading Assistance System Using Point-of-Gaze Estimation
M.A.Sc. Thesis Presentation
Jeffrey J. Kang
Supervisor: Dr. Moshe Eizenman
Department of Electrical and Computer Engineering
Institute of Biomaterials and Biomedical Engineering
January 24, 2006

Introduction
Reading is the visual examination of text: words are converted to sounds to activate word recognition.
We learn the appropriate conversions through repeated exposure to word-to-sound mappings.
Insufficient reader skill or irregular spelling can cause the conversion to fail, so assistance is required.
Objective: develop an automated reading assistance system that automatically vocalizes unknown words in real time on the reader's behalf, operating in a natural reading setting.

What We Need To Do: Step 1
1. Identify the word being read, in real time
2. Detect when the word being read is unknown
3. Vocalize the unknown word

Identifying the Word Being Read
Identify the viewed word using point-of-gaze estimation.
The point-of-gaze is where we look with the highest-acuity region of the retina. It can be defined as:
the intersection of the visual axes of the two eyes within the 3D scene, or
the intersection of the visual axis of one eye with a 2D plane.
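The second definition is the one used in the systems that follow: the visual axis is intersected with the plane of the reading object. As a minimal sketch (the function name and coordinates are illustrative assumptions, not the thesis implementation), this reduces to a ray-plane intersection:

```python
import numpy as np

def gaze_on_plane(eye_pos, gaze_dir, plane_point, plane_normal):
    """Intersect the visual axis (a ray from the eye) with a 2D scene plane.

    eye_pos:      3D origin of the visual axis (e.g. centre of corneal curvature)
    gaze_dir:     3D direction of the visual axis
    plane_point:  any point on the scene plane
    plane_normal: the plane's normal vector
    Returns the 3D point-of-gaze, or None if the axis is parallel to the plane.
    """
    eye_pos, gaze_dir = np.asarray(eye_pos, float), np.asarray(gaze_dir, float)
    plane_point = np.asarray(plane_point, float)
    plane_normal = np.asarray(plane_normal, float)
    denom = gaze_dir @ plane_normal
    if abs(denom) < 1e-12:
        return None  # visual axis never meets the plane
    t = ((plane_point - eye_pos) @ plane_normal) / denom
    return eye_pos + t * gaze_dir

# Eye at the origin looking toward a screen plane at z = -50
p = gaze_on_plane([0, 0, 0], [0.1, 0.0, -1.0], [0, 0, -50], [0, 0, 1])
# p is the 3D point-of-gaze on that plane
```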

Point-of-Gaze Estimation Methodologies 1. Head-mounted 2. Remote (no head-worn components)

Head-mounted Point-of-Gaze Estimation
Based on the principle of tracking the pupil centre and corneal reflections to measure eye position.
Point-of-gaze is estimated with respect to a coordinate system attached to the head.
[Diagram: scene camera, eye camera, IR LEDs, and hot mirror; corneal reflections and pupil centre labelled in the eye image]

Point-of-Gaze in Head Coordinate System Point-of-gaze is measured in the head coordinate system, and placed on the scene camera image

Locating the Reading Object The position of the reading object is determined by tracking markers

Mapping the Point-of-Gaze
Establish point correspondences between:
the estimated positions of the markers in the scene image, and
the known positions of the markers on the reading object.
Compute a homographic mapping of the point-of-gaze from the scene camera image to the reading object coordinate system.
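From four or more marker correspondences, the homography can be estimated with the standard direct linear transform and then applied to the gaze point. This is an illustrative sketch, not the thesis code; the marker coordinates and function names here are invented for the example:

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src (DLT, >= 4 points)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A, i.e. the last right-singular vector
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def map_point(H, pt):
    """Apply H to a 2D point using homogeneous coordinates."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])

# Marker corners seen in the scene image vs. their known positions on the card
scene = [(10, 10), (200, 12), (205, 150), (8, 148)]
card = [(0, 0), (100, 0), (100, 70), (0, 70)]
H = homography_from_points(scene, card)
gaze_on_card = map_point(H, (105, 80))  # point-of-gaze in card coordinates
```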

Identify the Reading Object
Extract the barcode from the scene camera image to identify the reading object (e.g. the page number).
Match the barcode against a database of reading objects to determine which text is being read.

Identifying the Word Being Read Using the mapped point-of-gaze, identify the word being read by table lookup

Sample Reading Video

Mapping Accuracy

Point-of-Gaze Estimation Methodologies 1. Head-mounted 2. Remote (no head-worn components)

Remote Point-of-Gaze Estimation
Point-of-gaze is estimated with respect to a fixed coordinate system.
[Diagram: IR LEDs, eye camera, and computer screen; C = centre of corneal curvature, P = point-of-gaze where the visual axis meets the 2D scene object]

Moving Reading Card
How can point-of-gaze be estimated in a coordinate system attached to a moving reading object?
[Diagram: the visual axis from C intersects the assumed position of the 2D scene object at P, but its true position at P']

Estimate Motion
Estimate the rigid motion (R, T) of the reading object between times t0 and t1.
[Diagram: assumed vs. true position of the 2D scene object at t0 and t1]

Use a Scene Camera and Targets
Observe targets on the reading object with a scene camera at t0 and t1.
[Diagram: scene camera viewing the 2D scene object at t0 and t1]

Calculate Two Homographies
Compute homographies H0 and H1 between the scene camera image and the reading object at times t0 and t1.
[Diagram: scene camera viewing the 2D scene object at t0 and t1]

Decompose Homography Matrices
Decompose H0 and H1 to obtain the object poses (R0, T0) and (R1, T1).
[Diagram: scene camera with the two decomposed poses at t0 and t1]

Calculate Motion of 2D Scene Object
Combine the two poses to obtain the motion (R, T) of the 2D scene object between t0 and t1.
[Diagram: scene camera, poses (R0, T0) and (R1, T1), and the resulting motion (R, T)]
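Given the two decomposed poses, the object motion follows by composing them. A sketch, assuming each pose maps object coordinates to camera coordinates as p = R_i @ X + T_i (an assumption about the convention, since the slides do not state it):

```python
import numpy as np

def object_motion(R0, T0, R1, T1):
    """Rigid motion of the reading object between t0 and t1.

    (R_i, T_i) is the object pose from the homography decomposition at time t_i,
    with object point X appearing in camera coordinates as R_i @ X + T_i.
    Returns (R, T) such that p1 = R @ p0 + T for any point rigidly attached
    to the object, where p0 and p1 are its camera coordinates at t0 and t1.
    """
    R = R1 @ R0.T          # relative rotation between the two poses
    T = T1 - R @ T0        # relative translation
    return R, T
```

This follows from eliminating X between p0 = R0 @ X + T0 and p1 = R1 @ X + T1.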

Point-of-Gaze Accuracy

What We Need To Do: Step 2
1. Identify the word being read, in real time
2. Detect when the word being read is unknown
3. Vocalize the unknown word

Dual Route Reading Model Coltheart, M. et al. (2001)

Dual Route Reading Model
Lexical route: each word’s graphemes are processed in parallel.

Dual Route Reading Model
Non-lexical route: each word’s graphemes are individually converted into phonemes based on mapping rules.

Detecting Unknown Words For unknown words, the lexical route fails and the slower non-lexical route is used Hypothesis: we can differentiate between known and unknown words by the duration of the processing time

Processing Time

Setting a Threshold Curve

Setting the Threshold
The threshold curve is a function of word length.
Model the processing time for known words of length k as a Gaussian random variable N(μ_k, σ_k²).
Estimate μ_k and σ_k² from a short training set for each subject.
Each point on the threshold curve is given by t_k = μ_k + z_(1-α) · σ_k, where α is the constrained probability of false alarm and z_(1-α) is the corresponding standard normal quantile.
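Under this Gaussian model, each word length's threshold can be computed from the training set with the standard normal quantile. A sketch with invented timing data (the function name and the exact training-set format are assumptions for illustration):

```python
from statistics import NormalDist, mean, stdev

def threshold_curve(training_times, alpha=0.10):
    """Per-word-length detection thresholds from a subject's training set.

    training_times: dict mapping word length k -> list of processing times (ms)
    for known words. Known-word times of length k are modelled as
    N(mu_k, sigma_k^2); the threshold is set so a known word exceeds it
    with probability alpha (the constrained false-alarm probability).
    """
    z = NormalDist().inv_cdf(1 - alpha)  # standard normal quantile z_(1-alpha)
    return {k: mean(t) + z * stdev(t) for k, t in training_times.items()}

# Invented training data: processing times (ms) for known words of lengths 3 and 7
train = {3: [210, 230, 250, 220], 7: [400, 460, 430, 440]}
thresholds = threshold_curve(train, alpha=0.10)
is_unknown = 600 > thresholds[7]  # a 7-letter word taking 600 ms is flagged
```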

Experiment: Detecting Unknown Words
Remote point-of-gaze estimation system; reading material presented on a computer screen; head position stabilized using a chinrest.
Four subjects each read 40 passages of text: 20 passages aloud and 20 silently.
Passages were divided into a training set, used to learn μ_k and σ_k² and set the detection threshold curves, with the remainder used to evaluate detection performance.
False alarm probability set to α = 0.10.

Experiment: Detecting Unknown Words

Experiment: Natural-Setting Reading Assistance
Natural reading pose: unrestricted head movement; reading material is hand-held.
Head-mounted eye-tracker: identify the viewed word in real time; measure per-word processing time.
Detecting unknown words: processing-time threshold curves established in the previous experiment.
Assistance: detection of an unknown word activates vocalization.

Experiment: Natural-Setting Reading Assistance: Results
The point-of-gaze mapping method accommodated head and reading-material movement without reducing detection performance.

Subject | Detection Rate | False Alarm Rate
M.E     |                |
P.L     |                |

Conclusions
Developed methods to map point-of-gaze estimates to an object coordinate system attached to a moving 2D scene object (e.g. a reading card), for both a head-mounted system and a remote system.
Developed a method to detect when a reader encounters an unknown word.
Demonstrated the principle of operation for an automated reading assistance system.

Future Work
Implement the reading assistant using the remote gaze-estimation methodology.
Validate the efficacy of the system as a teaching tool for unskilled English readers, in collaboration with an audiologist.
Evaluate other forms of assistive intervention, e.g. translation or definitions.

Questions?