Research Background: Depth Exam Presentation

Presentation transcript:

Research Background: Depth Exam Presentation Susan Kolakowski Committee: Juan Cockburn (Chair), Jeff Pelz (Adviser), Andrew Herbert, Mitchell Rosen, Carl Salvaggio Eye Tracking: Why We Use It and Trying To Improve It March 20, 2006

Research Background Introduction Human Visual System Eye Movements Eye Trackers RIT Wearable Eye Tracker My Research First I will give a brief introduction to why eye tracking is used. Then I will talk a little about the Human Visual System and Eye Movements - we move our eyes more than 150,000 times per day. I will conclude the first part of this presentation by mentioning a few eye trackers, including the eye tracker my research has been utilizing, the RIT Wearable Eye Tracker.

Introduction Why are eye trackers used? Objective measure of where people look Interest in the Human Visual System Examples: Understanding Behaviors: How do humans read? Improving Skill: Train people to move their eyes as an expert would. Improving Quality: What parts of an image are important to the image’s overall quality? Study the HVS - how we capture an image, where we are looking when we make decisions, the order in which we move our eyes. We make more than 150,000 eye movements a day, so we couldn’t possibly describe where we are looking throughout the day or as we perform a specific task. Eye trackers allow us to see where a subject is looking during a specific task. For instance, where do we look when we are trying to judge the overall quality of an image? This information may be useful when creating images but may not be easy to describe on our own - eye trackers can show us exactly where we are looking. What if we want to understand how humans read? Watching subjects’ eye movements while they read an excerpt can help us see this. VIDEO Finally, why are some people better at search tasks than others? We can eye track someone who is very good at finding a target and use this information to train others. In many computer vision tasks it is desirable to mimic the HVS (have a robot perform a task as well as a human).

The Human Eye Optic Axis Pupil Cornea Iris Ciliary Muscle Retina Eye Lens The cornea and eye lens bend light rays to form an image on the retina. The retina is analogous to the film in a camera; it is where light is received. The fovea is a small portion of the retina where the most detailed view is created (more on this later). The ciliary muscle contracts and ___ the lens to change its focus. The iris is the stop which determines how much light may pass through its opening, the pupil. The optic nerve transfers the signal received within the retina to be processed. Optic Nerve Fovea

Human Visual System What we see is determined by: How the photoreceptors in our retina are connected and distributed How our brain processes this information What we already accept as truth (previous knowledge) How we move our eyes throughout a scene The way we receive information from our eyes is determined by how our rods and cones are connected, how the light they perceive gets to our visual cortex, and what we already accept as truth. Processes… adaptations… aberrations

The Retina Contains two types of photoreceptors Rods that offer wide field of view (and night vision) Cones that provide high acuity (and color vision) Our retina is like the film in a camera - it collects the light to create the image. The retina contains two types of photoreceptors: cones, which perceive detail, and rods, which are more sensitive and used in low light. Rods are more interconnected - they act as an averaging filter that blurs the periphery. These photoreceptors are connected such that the signal from neighboring receptors inhibits the signal at a detector. This is called lateral inhibition, and it explains much about how we perceive things and the illusions we may see.
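The neighbor-inhibition idea above can be illustrated with a toy model. The sketch below (illustrative only; the inhibition weight of 0.25 is an arbitrary assumption, not a physiological value) applies simple lateral inhibition to a 1-D luminance step and shows how the response exaggerates the edge:

```python
import numpy as np

def lateral_inhibition(signal, inhibition=0.25):
    """Each receptor outputs its own input minus a fraction of its
    two neighbors' inputs (a toy center-surround model)."""
    padded = np.pad(signal, 1, mode="edge")
    neighbors = padded[:-2] + padded[2:]   # left neighbor + right neighbor
    return signal - inhibition * neighbors

# A step edge between two uniform luminance regions.
profile = np.array([10.0, 10.0, 10.0, 20.0, 20.0, 20.0])
response = lateral_inhibition(profile)
print(response)
```

The response dips just before the edge and overshoots just after it, so most of the information is carried by the sharp intensity change at the boundary - consistent with the edge-driven perception behind the Craik-O’Brien illusion.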

The Craik-O’Brien Illusion The visual system responds to changes; it therefore gets most of its information from the edge, where there is a sharp intensity change, and fills in the rest of the information where the change is very gradual, so that we perceive two uniform regions, one lighter than the other. There is much more to color perception than the cone mechanisms and additive and subtractive color production systems. The complexity of the human visual system gives it huge advantages in image processing and understanding, but can also lead to incorrect interpretations of objects. The visual system is designed to be more sensitive to edges than it is to large areas. The “Cornsweet edge” shown above separates two identical colors with a narrow edge, inducing the perception of a color difference across the edge. Covering the edge allows the visual system to correctly interpret the two colors as identical.

Lateral Inhibition Center grey squares have SAME intensity

Effect of Previous Knowledge Rotating Mask

Effect of Previous Knowledge www.michaelbach.de/ ot/cog_dalmatian/

The Fovea At its center: contains only cones (no rods) Perceives the greatest detail and color vision To get the most detailed representation of a scene, you must move your eyes rapidly so that different areas of the scene fall on your fovea Along the visual axis - lowest potential for aberrations

Serial Execution (fovea covers <0.1% of the field) Most vision experiments are performed under ‘reduced conditions’ to make it easier to analyze the results. While this makes life easier for the experimenter, it has led to a situation in which we know a great deal about how vision works when an observer is seated in a dark room, head fixed by a chin and forehead rest, with a 2 degree field illuminated for 200 msec!

Eye Movements… and lack thereof Saccades Smooth Pursuit Optokinesis (OKN) Vestibulo-Ocular Reflex (VOR) Fixations To study with monocular eye tracking…

Fixations Stabilizations of the eye for higher acuity at a given point Drifts and tremors of the eye occur during fixations such that the view is always changing slightly Tremor, drift, - the eye responds to change…

Saccades Eye Movements Rapid ballistic movement of the eye from one position to another Shifts the point of gaze so that a new region falls on the fovea Can make up to 4 saccades per second Amplitudes from less than 1 deg to greater than 45 deg Velocities up to 500 deg/sec Preprogrammed: if the target the eye is moving toward moves during the saccade, the eye cannot change direction mid-saccade.
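Because saccadic velocities (up to about 500 deg/sec) are far higher than fixational drift, gaze recordings are often labeled with a simple velocity threshold. A minimal sketch, assuming a 60 Hz tracker and a 100 deg/sec threshold (both illustrative values, not parameters from this work):

```python
import numpy as np

def label_saccades(gaze_deg, fs_hz=60.0, threshold_deg_s=100.0):
    """Mark each gaze sample as part of a saccade (True) or not,
    using a velocity threshold on the sample-to-sample gradient."""
    velocity = np.abs(np.gradient(gaze_deg)) * fs_hz   # deg/sec
    return velocity > threshold_deg_s

# Fixation at 0 deg, a rapid 10-deg shift, then fixation at 10 deg.
gaze = np.array([0.0, 0.0, 0.0, 5.0, 10.0, 10.0, 10.0])
labels = label_saccades(gaze)
print(labels)
```

Real classifiers also enforce minimum durations and handle noise, but the velocity criterion is the core idea.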

Smooth Pursuit Eye Movements Smooth eye movement to track a moving target Involuntary - can’t be produced without a moving object Velocity of the eye matches that of the object up to about 30 deg/sec Follow a small moving target

Vestibulo-Ocular Reflex Eye Movements Optokinesis Invoked to stabilize an image on the retina; the eye rotates with a large object or with its field-of-view Vestibulo-Ocular Reflex Invoked to stabilize an image on the retina; stabilizes an image as the head or body moves relative to the image Similar to smooth pursuit When the field of view moves - e.g., looking out the window of a car

Eye Trackers Invasive: painful devices which cause discomfort to the subject’s eye Restrictive: devices that require strict stabilization of the subject’s head, not allowing for natural movement Modern Video-Based Trackers: Remote - constrained to 2D stimuli; Head-mounted - allows natural movement The first permanent objective eye movement records to be made non-invasively date to 1901 (Dodge and Cline), who recorded horizontal eye movements on a photographic plate. Before this, eye movements were studied through subject introspection or experimenter observation, or recorded invasively, as in Delabarre’s method of 1898, in which he attached a mechanical stalk to the eye using plaster of paris. Early invasive methods such as these were criticized for impeding motion and straining the eye. 1960s - Yarbus used suction, a mechanical stalk, and mirrors. Robinson (1963) used the scleral search coil - two orthogonal wire coils that perturb a magnetic field surrounding the subject’s head. High level of discomfort - no longer used today. Video-based eye trackers came about in the early 1970s. Limbus detection gives a very rapid measure of horizontal movement but a poor measure of vertical movement. Dark-pupil tracking requires contrast between pupil and iris. Bright-pupil tracking uses on-axis illumination for a bright pupil and larger contrast. Everything up to here tracks the eye in relation to the head - the head needed to be completely stabilized to understand where the eye was looking in the world. In the 1970s, trackers began measuring two features of the eye to account for head movement, adding the corneal reflection. This still required the head to be restrained with a bite bar or chin rest but allowed for slight movements of the head. 1973, Cornsweet and Crane - the Dual Purkinje image eye tracker. It detects the first and fourth Purkinje images - reflections off the outer surface of the cornea and the rear of the lens. A series of servo motors is adjusted in response to the movement of these images; the degrees the servos move equal the eye rotation (independent of head rotation). The head must still be in a bite bar or chin rest so that the eye can be detected by the instruments. Remarkably fast and accurate (limited only by the speed of the servo motors). Head-mounted eye trackers point a scene camera at the subject’s field-of-view; the camera moves with the subject’s head, so the point-of-regard can be superimposed on the scene camera’s image. Remote eye trackers have been developed to allow some head movement while the subject sits in front of a computer for 2D stimulus presentation.

Intrusive Eye Trackers Delabarre 1898 Yarbus 1965 Mechanical stalk

Intrusive Eye Trackers Robinson 1963, Search Coils 3D eye movements

Video-based Eye Trackers Early 1970’s, Limbus RESTRICTIVE

Video-based Eye Trackers Cornsweet and Crane 1973, Dual Purkinje RESTRICTIVE

Video-based Eye Trackers Early 1970’s Dark-Pupil Bright Pupil Show illumination angle

Video-based Eye Trackers Head-Mounted Remote

R.I.T. Wearable Eye Tracker Video-based Eye Trackers SCENE CAMERA Most eye trackers require the subject to sit still - subjects can only be tracked looking at a screen or image. Our tracker allows people to walk around and perform tasks like walking through the woods. Talk about the backpack, what’s inside it, and how it works. IR LED EYE CAMERA

R.I.T. Wearable Eye Tracker How it works Off-axis illumination Off-line processing Off-axis illumination produces dark-pupil image. Off-line processing allows us to perform extra processing on the data without the constraint of a real-time application.

Example Video

My Research Objective: Improve the performance of video-based eye trackers in the processing stage. Compensate for camera movement with respect to the subject’s head Reduce noise

R.I.T. Wearable Eye Tracker Advantage: the subject is less constrained and can perform more natural tasks Disadvantage: the camera (eye tracker) is not stabilized - we need to account for any movement of the camera relative to the head LOWER PRECISION

Lower Precision Analysis of Disadvantages Accounting for movement of the camera with respect to the head requires additional data: the corneal reflection. Corneal reflection data is not as precise as pupil data. Show large corneal reflection, same size as pupil. Too bad we can’t just use the pupil data.

Oversimplifying Assumption Analysis of Disadvantages Assumption: when the camera moves with respect to the head, the pupil and corneal reflection move the same amount. To account for camera movement: the assumption, why it’s wrong, and the problems it causes. SHOW P-CR EQUATION and images. The virtual image of the pupil is affected by the optics - the cornea and eye lens - so as the camera moves, light bends at different angles to create this virtual image, which does not move the exact same amount. See illustration of this later.

Why this assumption is wrong Corneal reflection data comes from the center of the reflection off the curved outer surface of the eye. Pupil data comes from the center of the flat virtual image of the pupil inside the eye. These features are not on a flat surface that the camera is translating with respect to… SHOW ILLUSTRATION. Show pupil and CR arrays during camera movement only. THEY DON’T MOVE THE SAME AMOUNT WHEN THE CAMERA MOVES.

Result of Oversimplification The P-CR vector difference changes with camera movement, producing artifacts in the final data. SHOW DATA: the P-CR changes may appear to be a saccade or noise. The eye is looking horizontally through three points while the camera is moving; the resulting P-CR shows circular artifacts. Show Pupil array, CR array and Camera array.
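The artifact can be seen in a toy version of the P-CR calculation. In this sketch (all pixel coordinates are invented for illustration, and the 0.85 factor simply stands in for the pupil and CR moving unequally), the P-CR vector is invariant when both features shift identically, but a camera slip that moves the two features by different amounts leaves a spurious offset that looks like an eye movement:

```python
import numpy as np

def p_minus_cr(pupil, cr):
    """Classic P-CR gaze feature: pupil center minus CR center."""
    return pupil - cr

pupil = np.array([100.0, 50.0])   # pupil center in pixels (hypothetical)
cr    = np.array([ 90.0, 48.0])   # corneal reflection center (hypothetical)
shift = np.array([  5.0,  3.0])   # camera slip in pixels

# If both features moved identically, P-CR would be unchanged:
ideal = p_minus_cr(pupil + shift, cr + shift) - p_minus_cr(pupil, cr)

# But the virtual pupil image and the CR move by different amounts,
# so a residual offset appears and masquerades as an eye movement:
residual = p_minus_cr(pupil + shift, cr + 0.85 * shift) - p_minus_cr(pupil, cr)
print(ideal, residual)
```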

The Solution Determine the actual relationship between the pupil and corneal reflection during BOTH: Camera movements (with respect to the head) Eye movements Use these relationships to develop a new equation in terms of pupil and corneal reflection position.

Eye Movements When you look into a person’s eye, the pupil you see is actually the virtual image of the pupil produced by the optics of the eye. The CR hardly moves; the pupil moves. Eye gain - the amount the CR moves when the pupil moves 1 degree during an eye movement.

Camera Movements Pictures from paper, Cam gain

Camera and Eye Gains Eye Gain: amount corneal reflection moves when pupil moves 1 degree during an eye movement Camera Gain: amount corneal reflection moves when pupil moves 1 degree during a camera movement

The Equations 4 Initial Equations, 4 Unknowns [equations (1)-(4) shown as slide images] 4 unknowns - given that E and C can be found experimentally
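The four equations appear only as slide images in the transcript. A hedged reconstruction, consistent with the eye gain and camera gain definitions above (E and C, with P the measured pupil displacement and R the measured corneal reflection displacement), might read:

```latex
\begin{align}
P   &= P_E + P_C && \text{pupil displacement: eye part plus camera part} \\
R   &= R_E + R_C && \text{CR displacement: eye part plus camera part} \\
R_E &= E \, P_E  && \text{eye gain relates CR and pupil motion (eye movement)} \\
R_C &= C \, P_C  && \text{camera gain relates CR and pupil motion (camera movement)}
\end{align}
```

Given measured \(P\) and \(R\) and experimentally determined \(E\) and \(C\), these four equations determine the four unknowns; in particular \(P_E = (R - C P)/(E - C)\), the pupil displacement due to eye movement alone.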

Added Benefit We can smooth the Camera array without loss of information from the Pupil array, assuming the camera moves more slowly than the eye. The result is on the same level as pupil-only data. Compensate for camera movement better AND reduce noise. Show smoothing animation - the Eye array changes as the Camera array is smoothed.
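The smoothing step can be sketched with a simple moving-average filter; the synthetic drift-plus-noise signal and the window length below are illustrative assumptions, not the filter actually used in this work:

```python
import numpy as np

def smooth(x, window=11):
    """Moving-average low-pass filter; edges are handled by
    replicating the end values before convolving."""
    pad = window // 2
    padded = np.pad(x, pad, mode="edge")
    kernel = np.ones(window) / window
    return np.convolve(padded, kernel, mode="valid")

# Synthetic camera signal: slow linear drift plus measurement noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
camera = 2.0 * t + rng.normal(0.0, 0.2, t.size)
camera_smooth = smooth(camera)

# The noise around the drift shrinks while the slow drift survives,
# so subtracting the smoothed Camera array from the Pupil array
# removes camera motion without adding the Camera array's noise.
print(np.std(camera - 2.0 * t), np.std(camera_smooth - 2.0 * t))
```

This works only because the camera moves slowly relative to the eye: a low-pass filter that would destroy a saccade leaves the camera-slip signal nearly intact.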

Determining the Gains Eye Gain: (Instruct subject to…) Look at the center of the field-of-view. Keep the camera and head perfectly still. Look through the calibration points. Cam Gain: (Instruct subject to…) Keep the eye fixated while moving the camera on the nose. How to do this: start by looking at the center of the field-of-view - no bias value to worry about and can deal with fraction … Move the camera a very small amount; the camera would not move off the subject’s nose during a task. Make realistic camera movements that we would like to compensate for. Linear regression - why is this okay? Show graph.
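Each gain is the slope of a least-squares line fit to corneal reflection displacement versus pupil displacement collected during the corresponding calibration procedure. A minimal sketch with synthetic data (the 0.5 slope, 0.3 intercept, and noise level are invented for illustration):

```python
import numpy as np

def estimate_gain(pupil_disp, cr_disp):
    """Least-squares fit of cr = gain * pupil + bias."""
    gain, bias = np.polyfit(pupil_disp, cr_disp, 1)
    return gain, bias

# Synthetic calibration data: the CR moves about half as far as the
# pupil during eye movements, plus a little measurement noise.
rng = np.random.default_rng(1)
pupil = np.linspace(-10.0, 10.0, 21)
cr = 0.5 * pupil + 0.3 + rng.normal(0.0, 0.05, pupil.size)

gain, bias = estimate_gain(pupil, cr)
print(round(gain, 3), round(bias, 3))
```

Small eye movements during the camera-gain trial perturb individual fits, which is why gains are averaged over multiple subjects.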

Eye Gain Results: y = 0.5161x + 0.3322, R² = 0.9878 (single subject ABC)

Camera Gain Results: y = 0.8143x + 4.5981, R² = 0.9768 (single subject ABC); slope = gain, average gain over 5 subjects = 0.8524. Comparing this line with the eye-gain equation seen earlier suggests the subject made some small eye movements during the trial; these errors in the linear regression are averaged out when the gains of multiple subjects are averaged.

Testing the Algorithm. Collect data: 5 subjects look through 9 calibration points while moving the eye tracker’s headgear. Extract eye movements: use the average gains to calculate the Camera array, smooth the Camera array, and subtract the smoothed Camera array from the Pupil array to obtain the Eye array. The Eye array can also be considered a “corrected Pupil array”: it represents the amount the pupil has moved during eye movements only.
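Putting the pieces together, the extraction step can be sketched end to end as below. The gain values, the assumed linear model, and the running median standing in for the full median-plus-Gaussian smoothing are illustrative assumptions, not the paper’s exact implementation:

```python
import numpy as np

def extract_eye(pupil, cr, eye_gain, cam_gain, med_win=5):
    """End-to-end sketch: Camera array -> smooth -> subtract from Pupil.

    pupil/cr are calibrated position arrays; eye_gain/cam_gain would be
    the average gains from the regression step.
    """
    pupil = np.asarray(pupil, dtype=float)
    cr = np.asarray(cr, dtype=float)
    # Camera array from the two-equation linear model
    camera = (cr - eye_gain * pupil) / (cam_gain - eye_gain)
    # smooth the Camera array (running median as a stand-in filter)
    half = med_win // 2
    camera_s = np.array([np.median(camera[max(0, i - half): i + half + 1])
                         for i in range(camera.size)])
    # Eye array, i.e. the "corrected Pupil array"
    return pupil - camera_s
```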

Horizontal Results: Subjects start by looking at the center calibration point, which is defined as 0 degrees. [Figures]

Vertical Results: Subjects start by looking at the center calibration point, which is defined as 0 degrees. [Figures]

Noise Reduction Results: The noise reduction for the third trial may seem less successful, but the corresponding pupil data were noisier for that trial, which is why its Eye array is also noisier than the first trial’s. [Figures]

Conclusions: Successful application to head-mounted video-based eye trackers that track both the pupil and corneal reflection. The same gain values can be used for all subjects. Final Eye array precision is on the order of the Pupil array precision. Noise due to the corneal reflection data is reduced: the CR is used to determine camera movement but does not affect the precision of the final data.

Next Steps:
- Calibration: the Eye array represents eye movement in the head; it must be mapped to the scene image (via the scene camera) to show where the subject is looking in the world. An open question is how to apply the Camera array in this mapping.
- Investigate realistic camera movements (amplitude and velocity) and alternative smoothing options for the Camera array; the current method is a median filter followed by a Gaussian filter.
- Obtain gain values for a larger group of subjects.
- Test on larger eye movements.
- Revise the method for remote trackers: new gains will need to be calculated, and since the camera is farther away, the camera gain is likely to approach the eye gain.

Questions, Suggestions…

R.I.T. Wearable Eye Tracker. Advantage: the subject is less constrained and can perform more natural tasks. Disadvantages: the head is not stabilized, so we need to know where the subject is looking at all times; the camera (eye tracker) is not stabilized, so we need to account for any movement of the camera relative to the head; precision is lower.

Compensating for Eye Tracker Camera Movement Susan M. Kolakowski and Jeff B. Pelz Visual Perception Laboratory Rochester Institute of Technology March 28, 2006