Gaze-Augmented Think-Aloud as an Aid to Learning
Sarah A. Vitak, Scripps College
Andrew T. Duchowski, Steve Ellis, Anand K. Gramopadhye, Clemson University
John E. Ingram, Sewanee: University of the South
CHI, May, Austin, TX

Motivation: training
Visual search:
– well-defined strategies have been developed by experts
– e.g., top-down cognitive strategies based on experience
– Chest X-Ray (CXR) inspection: Airway, Bones, Cardiac silhouette, Diaphragms, External tissue, ...
Figure: Expert (left) and novice (right) scanpaths over an abnormal CXR.

Objectives: histology
Develop histological search strategy training
Test effectiveness of expert's gaze atop video
– scanpath needs to demonstrate the expert's search strategy
Task: identify BrdU-marked cells in the epithelial layer
Figure: Example stimuli: immuno-gold (BrdU) stained cross-sections of bovine mammary tissue.

Previous work: tracking experts
Ericsson et al. (2006) surveyed experts' gaze:
1. experts' search strategies are task dependent
2. experts' shorter dwell times thought to reflect expertise
3. experts make better use of peripheral information
4. experts' patterns of visual analysis develop with training
5. experts use a larger area around fixation
6. experts make better use of extra-foveal information
Figure: Expert football player (Ronaldo, gaze captured with Dikablis tracker), expert pilots (Weibel et al., 2012 [ETRA]), expert laparoscopic surgeons (courtesy Stella Atkins and Bin Zheng).

Previous work: scanpath training
Fertile research area (see paper for review)
Selected references related to training:
– static, stylized scanpaths (VR): Sadasivan et al. (2005)
– dynamic scanpaths (still images): Nalangula et al. (2006)
– static scanpaths (still images): Litchfield et al. (2010)
– dynamic scanpaths (video): Jarodzka et al. (2010)
Figures: Sadasivan et al. (2005); Jarodzka (2010).

Contributions: GATA
Gaze-Augmented Think-Aloud (GATA) builds on:
– Feed-forward training: Sadasivan et al. (2005)
  hierarchical task analysis & scanpath visualization
– Stimulated Retrospective Think-Aloud: Guan et al. (2006)
  verbalizing while watching scanpath visualization
– Eye Movement Modeling Examples (EMME): Jarodzka et al. (2009, 2010)
  task analysis & highlighting of salient regions (e.g., foveation)

Gaze-Augmented Think-Aloud
GATA:
– recorded scanpath & verbalization (audio track)
– a specific visual search strategy is required
– for histology images, devised STAMP: Staining, Tissue, Artifacts, Magnification, Plane of section
Figure: Training video used in study.

Evaluation
Experiment:
– 2 x 2 mixed factorial design: (presence or absence of video) x (experienced or naive participant)
  video between-subjects, 8 stimulus images within-subjects
– task: "identify cells marked with BrdU in epithelial layer"
– procedure: both groups saw training slides without scanpath
Figure: Training slides: the first highlights marked cells, the second differentiates tissue type, the third shows epithelia with no marked cells, the fourth again shows marked cells in the epithelia, the fifth shows no marked cells; the same verbalization was heard as in the GATA video.

Evaluation
Experiment:
– apparatus: Tobii ET-1750 (see paper for specs)
– participants: 32 (aged 19-57, median 22)
  15 experienced, 15 naive; 2 participants dropped (no data)
– dependent measures:
  speed (time to task completion)
  accuracy (hits & misses)
  fixation counts
  fixation durations
– outliers: beyond 3 SD; one removed (from A&VS); see the sketch below
Figure: Example of participant & apparatus; half the participants (experienced) were recruited from an Animal & Veterinary Science (A&VS) class and were familiar with tissue images.
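The 3 SD outlier criterion above can be illustrated with a minimal sketch; the data, column names, and the way the threshold is applied are assumptions for illustration only, not the authors' actual pipeline:

```python
import numpy as np
import pandas as pd

# Hypothetical per-participant mean completion times (seconds).
rng = np.random.default_rng(1)
times = pd.DataFrame({
    "participant": range(1, 31),
    "mean_time_s": rng.normal(40, 8, 30),
})

# Flag participants whose mean time lies beyond 3 SD of the sample mean.
mu = times["mean_time_s"].mean()
sd = times["mean_time_s"].std()
times["outlier"] = (times["mean_time_s"] - mu).abs() > 3 * sd

cleaned = times[~times["outlier"]]
print(f"Removed {times['outlier'].sum()} outlier(s) out of {len(times)} participants")
```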

Dependent measures
Speed:
– time to task completion
– tabulated per image, then averaged (see the sketch below)
Accuracy:
– hits (true positives)
– misses (false positives)
Fixation counts
Fixation durations:
– longer durations suggest cognitive load (e.g., difficulty)
Figure: Example of two captured scanpaths: blue is from the experimental group, red is from the control group.
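As a rough illustration of how such measures could be tabulated per image and then averaged, here is a minimal pandas sketch; the fixation log, its column names, and the aggregation choices are hypothetical, not the authors' tooling:

```python
import pandas as pd

# Hypothetical fixation log: one row per detected fixation.
fixations = pd.DataFrame({
    "participant": [1, 1, 1, 1, 2, 2, 2],
    "image":       [1, 1, 2, 2, 1, 1, 2],
    "duration_ms": [220, 180, 340, 260, 200, 310, 275],
})

# Per participant and image: fixation count and mean fixation duration.
# (Time to completion and hit/miss counts would come from a separate trial log.)
per_image = (fixations
             .groupby(["participant", "image"])["duration_ms"]
             .agg(fixation_count="count", mean_fixation_ms="mean")
             .reset_index())

# Average across images per participant, mirroring "tabulated per image, then averaged".
per_participant = per_image.groupby("participant")[["fixation_count", "mean_fixation_ms"]].mean()
print(per_participant)
```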

Results: speed
Time to completion:
– two-way ANOVA revealed a significant effect of video (F(1,6) = 9.25, p < 0.05); see the analysis sketch below
– but not of population group (F(1,6) = 0.01, p = 0.92, n.s.)
– those who saw the video performed significantly faster in either group
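A minimal sketch of this kind of two-way ANOVA (video x group) on mean completion time, using statsmodels; the data frame and its values are invented, and treating the design as purely between-subjects on per-participant means is a simplification, not the authors' analysis script:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Invented per-participant mean completion times (seconds) for the 2 x 2 layout.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "video": np.repeat(["yes", "no"], 16),
    "group": np.tile(np.repeat(["experienced", "naive"], 8), 2),
})
df["time_s"] = rng.normal(35, 5, len(df)) - np.where(df["video"] == "yes", 8.0, 0.0)

# Two-way ANOVA: main effects of video and group, plus their interaction.
model = ols("time_s ~ C(video) * C(group)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```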

Results: fixations
Number of fixations:
– two-way ANOVA revealed a significant effect of video (F(1,6) = 8.93, p < 0.05)
– but not of population group (F(1,6) = 0.02, p = 0.90, n.s.)
– the number of fixations closely mirrors time to completion (correlation sketch below)
– not surprising, as the two measures are often correlated
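The "often correlated" observation amounts to a simple correlation between per-participant fixation counts and completion times; a minimal sketch with invented data:

```python
import numpy as np
from scipy.stats import pearsonr

# Invented per-participant means: completion time (s) and fixation count.
rng = np.random.default_rng(2)
time_s = rng.normal(35, 6, 30)
fix_count = time_s * 3 + rng.normal(0, 5, 30)  # counts constructed to roughly track time

r, p = pearsonr(time_s, fix_count)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")
```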

Results: fixation durations
Fixation durations:
– two-way ANOVA revealed no significant effect of video (F(1,6) = 0.00, p = 0.56, n.s.)
– nor of population group (F(1,6) = 0.00, p = 0.98, n.s.)
– durations were not expected to differ between groups
– within-group analysis shows a significant difference for the naive group but not the experienced group

Results: accuracy
Considering only the effect of video on accuracy:
– one-way ANOVA revealed a significant effect on error rate (F(1,27) = 10.15, p < 0.01); see the sketch below
– but not on correct responses (F(1,28) = 1.21, p = 0.28, n.s.)
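A minimal sketch of a one-way ANOVA on error rate with video as the single factor, using scipy; the error rates are invented, and with only two groups this test is equivalent to an independent-samples t-test:

```python
import numpy as np
from scipy.stats import f_oneway

# Invented per-participant error rates for the video and no-video conditions.
rng = np.random.default_rng(3)
errors_video = rng.normal(0.10, 0.04, 15).clip(0, 1)
errors_no_video = rng.normal(0.18, 0.06, 14).clip(0, 1)

# One-way ANOVA on error rate with video as the between-subjects factor.
F, p = f_oneway(errors_video, errors_no_video)
dof = len(errors_video) + len(errors_no_video) - 2
print(f"F(1,{dof}) = {F:.2f}, p = {p:.3f}")
```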

Discussion
Results suggest a fairly clear explanation:
– those who viewed the scanpath were faster, with fewer errors
– the effect was more pronounced in the experienced group, corroborated to a certain extent by Litchfield et al. (2010)
Eye movements provide evidence of performance:
– the number of fixations correlates with time
  might not be obvious had a stopwatch been used
– fixation duration may indicate expertise
  shorter fixations suggest faster recognition (familiarity with the task)

Conclusion
Gaze-Augmented Think-Aloud:
– simpler than more elaborate visual codifications, e.g., Sadasivan et al. (2005)
– the scanpath indicates what to look for and what to avoid
– the cost-benefit ratio is improved through simplicity
– likely to be effective for CXR training
Figure: Current training at a local hospital: from left to right, intern, resident, radiologist.

Acknowledgments
This work was supported by the US NSF:
– Research Experiences for Undergraduates grant
Thank you! Questions?