Eyes Alive
Sooha Park Lee, Jeremy B. Badler, Norman I. Badler
University of Pennsylvania / The Smith-Kettlewell Eye Research Institute
Presentation prepared by: Chris Widmer, CSE 4280

Outline
Introduction
Motivation
Background
Overview of System
Descriptions
Results
Conclusions

Introduction
Eye movement is an important expressive technique for animated faces.
The paper presents a statistical eye movement model based on empirical data.

Motivation
Natural-looking eye movement for animations of close-up face views.
Accurate eye movement has traditionally been difficult to attain in animations.
No existing proposals for saccadic eye movement that is easy to use for speaking/expressive faces.
Recent interest in the construction of human facial models.

Background
Building a realistic face model involves:
–Geometry modeling
–Muscle behavior
–Lip synchronization
–Text synthesis
Research has traditionally not focused on eye movement.

Background
Eyes are essential for non-verbal communication:
–Regulating the flow of conversation
–Searching for feedback
–Expressing emotion
–Influencing the behavior of others
New approach based on statistical data and empirical studies.

Saccades
Rapid movements of both eyes from one gaze position to another.
–The only eye movement that can be executed consciously.
–Balance the conflicting demands of speed and accuracy.
Magnitude: the angle the eyeball rotates to change position.
Direction: the 2D axis of rotation; 0 degrees is to the right.
Duration: the time taken by the movement.
Inter-saccadic interval: the time between saccades.
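The four parameters above fully describe a single saccade; a minimal record-type sketch (the field names and example values are illustrative, not from the paper):

```python
from dataclasses import dataclass

@dataclass
class Saccade:
    """One saccade, described by the four parameters above."""
    magnitude_deg: float       # angle the eyeball rotates
    direction_deg: float       # 2D rotation direction, 0 degrees = right
    duration_ms: float         # time taken by the movement
    inter_saccade_ms: float    # interval until the next saccade

s = Saccade(magnitude_deg=10.0, direction_deg=45.0,
            duration_ms=49.0, inter_saccade_ms=500.0)
print(s)
```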

Saccades
Example: magnitude 10 degrees, direction 45 degrees
–The eyeball rotates 10 degrees, up and to the right.
Initial/final acceleration: approximately 30,000 deg/sec²
Peak velocity: 400-600 deg/sec
Reaction time: 180-220 msec
Duration and velocity are functions of magnitude.
Duration approximated from magnitude: D = D0 + d * A, where D = duration (msec), A = amplitude (degrees), d = increment in duration per degree (2-2.7 msec/deg), and D0 = intercept (20-30 msec).
Saccades are often accompanied by head rotation.
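As a worked example of the duration approximation, a minimal sketch (the default constants pick d = 2.4 msec/deg and D0 = 25 msec from within the quoted ranges):

```python
def saccade_duration_ms(amplitude_deg, d=2.4, d0=25.0):
    """Linear duration model D = D0 + d * A, in milliseconds."""
    return d0 + d * amplitude_deg

# A 10-degree saccade: 25 + 2.4 * 10 = 49 msec.
print(saccade_duration_ms(10.0))
```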

Background
Three functions of gaze:
–Sending social signals
–Keeping an open channel to receive information
–Regulating the flow of conversation

Overview of System
1. Eye-tracking images are analyzed, and a statistically based model is generated using Matlab.
2. Lip movements, eye blinks, and head rotation are analyzed by the alterEGO face motion analysis system.

Overview of System
Face Animation Parameter (FAP) file
Eye Movement Synthesis System (EMSS)
–Adds eye movement data to the FAP file
Modified from face2face's animator plug-in for 3D Studio Max.

Analysis of Data
Eye movements recorded with an eye-tracking visor (ISCAN): a monocle and two miniature cameras.
One camera views the environment from the left-eye perspective; the other is a close-up of the left eye.
The eye image is recorded.
The device tracks gaze by comparing the corneal reflection of the light source with the location of the pupil center.
–The reflection acts as a reference point while the pupil moves during eye movement.

Analysis of Data
Pupil position is found using pattern matching.
A default threshold grey level is applied using the Canny edge-detection operator.
Positional histograms along the X and Y axes are calculated.
The two center points with maximum correlation are chosen.
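The transcript gives no implementation detail; a minimal numpy sketch of the threshold-then-histogram idea (the synthetic image, the threshold value, and the argmax center pick, which stands in for the paper's correlation step, are illustrative assumptions):

```python
import numpy as np

def pupil_center(gray, threshold=50):
    """Estimate the pupil center of a grayscale eye image.

    Pixels darker than the threshold form a binary mask; positional
    histograms (column and row sums of the mask) peak at the pupil.
    """
    mask = gray < threshold
    hist_x = mask.sum(axis=0)   # positional histogram along X
    hist_y = mask.sum(axis=1)   # positional histogram along Y
    return int(hist_x.argmax()), int(hist_y.argmax())

# Synthetic 100x100 image with a dark "pupil" centered at (60, 40).
img = np.full((100, 100), 200, dtype=np.uint8)
yy, xx = np.mgrid[0:100, 0:100]
img[(xx - 60) ** 2 + (yy - 40) ** 2 < 15 ** 2] = 10
print(pupil_center(img))        # -> (60, 40)
```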

Analysis of Data

Saccade Magnitude
Frequency of each saccade magnitude, fit with a least-mean-squares fitting function.
Symbols in the fit:
–d = distance traversed by the pupil center
–r = radius of the eyeball (1/2 of x_max)
–P = percentage chance of occurrence
–A = saccade magnitude (degrees)
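The transcript does not preserve the fitted function itself. Purely to illustrate how a fitted magnitude distribution can drive synthesis, here is a minimal inverse-transform sampling sketch; the truncated-exponential shape, the 6.9-degree scale, and the 30-degree cap are assumptions for illustration, not the paper's published fit:

```python
import math
import random

def sample_magnitude(scale_deg=6.9, max_deg=30.0):
    """Inverse-transform sample from an assumed truncated-exponential
    magnitude distribution P(A) proportional to exp(-A / scale_deg).

    Pushing a uniform draw through the inverse CDF guarantees the samples
    follow the fitted distribution, which is the property the paper uses.
    """
    u = random.random()
    c = 1.0 - math.exp(-max_deg / scale_deg)   # truncation normalizer
    return -scale_deg * math.log(1.0 - u * c)

print([round(sample_magnitude(), 1) for _ in range(5)])
```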

Analysis of Data
Saccade duration measured with a 40 deg/sec velocity threshold.
Used to derive an instantaneous velocity curve for every saccade.
The duration of each movement is normalized to 6 frames.
Two classes of gaze:
–Mutual
–Away
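A minimal sketch of that velocity-threshold detection (the 60 fps sampling rate, the synthetic trace, and the function name are illustrative assumptions):

```python
import numpy as np

def detect_saccades(angles_deg, fps=60.0, threshold=40.0):
    """Return (start, end) index pairs of velocity samples whose angular
    velocity exceeds the threshold in deg/sec; duration is (end - start) / fps."""
    velocity = np.abs(np.diff(angles_deg)) * fps   # deg/sec between frames
    fast = np.concatenate(([False], velocity > threshold, [False]))
    edges = np.flatnonzero(np.diff(fast.astype(int)))
    return list(zip(edges[0::2], edges[1::2]))

# Synthetic trace: fixation, a rapid 10-degree shift, then fixation again.
trace = np.concatenate([np.zeros(10), np.linspace(0.0, 10.0, 3), np.full(10, 10.0)])
print(detect_saccades(trace))   # one saccade spanning two velocity samples
```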

Talking vs. Listening

Synthesis of Eye Movement
Three components:
–Attention Monitor (AttMon)
–Parameter Generator (ParGen)
–Saccade Synthesizer (SacSyn)

Head Rotation Monitoring

Synthesis of Natural Eye Movement
AttMon determines the mode, changes in head rotation, and the gaze state.
ParGen determines saccade magnitude, direction, duration, and instantaneous velocity.
SacSyn synthesizes the movements and encodes them as FAP values.

Synthesis of Natural Eye Movement
Magnitude is determined by the inverse of the fitting function shown earlier.
–The inverse mapping guarantees the same probability distribution as the empirical data.
Direction is determined by head rotation (via a threshold test) and a distribution table (see the sketch below):
–Draw from a uniform distribution over 0 to 100.
–Eight non-uniform intervals are assigned to the respective directions.
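Mapping a uniform draw on 0-100 through eight non-uniform intervals is a cumulative-table lookup; a minimal sketch (the per-direction percentages are illustrative placeholders, the paper derives them from the eye-tracking data):

```python
import bisect
import random

# Eight directions in degrees, 0 = right; weights sum to 100.
# The weights are illustrative placeholders, not the paper's measured table.
DIRECTIONS = [0, 45, 90, 135, 180, 225, 270, 315]
WEIGHTS = [20, 10, 15, 10, 20, 10, 5, 10]
CUMULATIVE = [sum(WEIGHTS[:i + 1]) for i in range(len(WEIGHTS))]

def sample_direction():
    """Map a uniform draw on [0, 100) through the cumulative table."""
    u = random.random() * 100
    return DIRECTIONS[bisect.bisect_right(CUMULATIVE, u)]

print([sample_direction() for _ in range(5)])
```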

Synthesis of Natural Eye Movement
Duration is determined by the earlier equation, with the respective values of d and D0.
Velocity is determined using the fitted instantaneous velocity curve.
The SacSyn system calculates the sequence of coordinates for the eye centers.
These are translated into FAP values and rendered in 3D Studio MAX.
The face2face animation plug-in renders the animations with the correct parameters.
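Putting the parameters together, one saccade expands into per-frame rotations before FAP encoding; a minimal sketch (a linear ramp stands in for the paper's fitted instantaneous-velocity curve, and the function name is illustrative):

```python
import math

def synthesize_frames(magnitude_deg, direction_deg, fps=60.0, d=2.4, d0=25.0):
    """Expand one saccade into per-frame (x, y) eye rotations in degrees.

    Duration follows D = D0 + d * A; a linear ramp replaces the fitted
    instantaneous-velocity curve used in the paper.
    """
    duration_s = (d0 + d * magnitude_deg) / 1000.0
    n_frames = max(1, round(duration_s * fps))
    dx = math.cos(math.radians(direction_deg))
    dy = math.sin(math.radians(direction_deg))
    frames = []
    for i in range(1, n_frames + 1):
        angle = magnitude_deg * i / n_frames   # linear easing (placeholder)
        # A real system would encode these as MPEG-4 eyeball-rotation FAPs.
        frames.append((angle * dx, angle * dy))
    return frames

# A 10-degree saccade up and to the right lasts ~49 msec, i.e. 3 frames at 60 fps.
print(synthesize_frames(10.0, 45.0))
```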

Results
Three different methods tested:
–Type 1: no saccadic movements
–Type 2: random eye movements
–Type 3: movements sampled from the estimated distributions, synchronized with head movements
The tests were subjective.

Results
Q1: Did the character on the screen appear interested in you (5) or indifferent to you (1)?
Q2: Did the character appear engaged (5) or distracted (1) during the conversation?
Q3: Did the personality of the character look friendly (5) or not (1)?
Q4: Did the face of the character look lively (5) or deadpan (1)?
Q5: In general, how would you describe the character?

Results

Conclusions
A saccade model for both talking and listening modes.
Three different eye-movement types compared: stationary, random, and model-based.
The model-based type scored significantly higher.
Eye-tracking data was recorded from one subject.
–New data can be recorded for every character.
The model allows any number of unique eye-movement sequences.

Drawbacks and Improvements
Aliasing with small movements.
Difficulty distinguishing eye movement from head movement during data gathering.
Future enhancements:
–Eye/eyelid data
–More gaze patterns in the model
–More subjects for data collection
–A scan-path model for close-up images

Developments
N. Badler, Director, Center for Human Modeling and Simulation
–Digital human modeling and behavior
–"Jack" software
–Simulation of workflow using virtual people

References
Lee, Sooha Park, Badler, Jeremy B., and Badler, Norman I., "Eyes Alive," ACM Transactions on Graphics 21(3) (Proc. SIGGRAPH 2002), 2002.