Modeling Visual Search Time for Soft Keyboards Lecture #14.


Topics to cover
 Introduction
 Models of Visual Search
 Our Proposed Model
 Model Validation
 Conclusion

Introduction
 What is Visual Search?
 Types of Visual Search
 What is a Soft Keyboard?
 Why is it important?
 Issues in designing a Soft Keyboard

Introduction (cont.)
 What is Visual Search?
– Visual search is our ability to detect a target among distractors
– Example: searching for the File menu in MS Word

Introduction (cont.)
 Types of Visual Search
 Feature Search: the process of searching for a target defined by a single visual feature, such as color, size, orientation, or shape; sometimes called Parallel Search
 Conjunction Search: occurs when a target is defined not by any single visual feature but by a combination of two or more features; sometimes called Serial Search

Introduction (cont.)
 What is a Soft Keyboard?
– Soft keyboards are on-screen representations of physical keyboards
– The time to locate a key on a soft keyboard interface is known as the visual search time

Introduction (cont.)
 Why is a Soft Keyboard important?
– For text entry on hand-held devices
– For motor-impaired people to use a computer

Introduction (cont.)
 Issues in designing a Soft Keyboard
– Problems in user testing
– The Fitts'-digraph model
– Deficiencies in the Hick-Hyman law

Introduction (cont.)
 Fitts'-digraph Model
 Used to model virtual keyboard performance
 There are three components in this model:
 The visual search time component (RT): the time to locate the target character key on the interface, modeled using the Hick-Hyman law
 The motor movement time component (MT): the time to move the finger or stylus (the common input methods on soft keyboards) from the last selected key to the target key, calculated using Fitts' law
 The digraph probability of character pairs (DP): calculated from a corpus of text

Fitts'-digraph Model
 Fitts'-digraph model
– Movement time using Fitts' law
– Visual search time using the Hick-Hyman law
– Digraph probability (calculated from a corpus)

Fitts'-digraph Model
[Equations for average movement time and for performance in CPS and WPM shown on slide]
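The average-movement-time computation can be sketched numerically. The toy layout, digraph probabilities, and Fitts' law coefficients below are illustrative assumptions, not values from the slides:

```python
import math

# Illustrative 3-key layout: key centre (x, y) and key width, in mm.
# These values are assumptions for the sketch, not the paper's layout.
keys = {
    "a": ((0.0, 0.0), 10.0),
    "b": ((12.0, 0.0), 10.0),
    "c": ((24.0, 0.0), 10.0),
}

# Assumed digraph probabilities P(i -> j); they must sum to 1.
digraph = {
    ("a", "b"): 0.5,
    ("b", "c"): 0.3,
    ("c", "a"): 0.2,
}

A, B = 0.083, 0.127  # illustrative Fitts' law coefficients (seconds)

def fitts_mt(src, dst):
    """Movement time from key src to key dst via Fitts' law."""
    (x1, y1), _ = keys[src]
    (x2, y2), w = keys[dst]
    d = math.hypot(x2 - x1, y2 - y1)
    return A + B * math.log2(d / w + 1)

# Average movement time, weighted by digraph probability (the MT component).
avg_mt = sum(p * fitts_mt(i, j) for (i, j), p in digraph.items())

cps = 1 / avg_mt      # characters per second
wpm = cps * 60 / 5    # words per minute (5-characters-per-word convention)
print(f"avg MT = {avg_mt:.3f} s, CPS = {cps:.2f}, WPM = {wpm:.2f}")
```

A real evaluation would sum over every character pair in the corpus and add the RT component for novice users; the weighting structure is the same.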

Introduction (cont.)
 Hick-Hyman Law (Hick; Hyman)
 States that the time required to make a choice among N objects, given a target, is a logarithmic function of N
 Response time: RT = c + d log2 N
– N is the number of possible responses
– c and d are constants
– d was set to the reciprocal of the slowest speed of text entry (i.e., 1/5 or 0.20)
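The law above is a one-liner in code. The slide's d = 0.20 s/bit is used; the zero intercept c and the 27-element layout are illustrative assumptions:

```python
import math

def hick_hyman_rt(n, c=0.0, d=0.20):
    """Hick-Hyman law: RT = c + d * log2(N).
    d = 0.20 s/bit follows the slide (reciprocal of the slowest
    text-entry speed); c = 0 is an illustrative intercept."""
    return c + d * math.log2(n)

# Predicted search time for a 27-element layout (26 letters + space):
print(hick_hyman_rt(27))
```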

Fitts'-digraph Model
 Drawbacks of the Hick-Hyman Law
 The number of alternatives (N) should be based on the number of keys on the keyboard rather than the number of letters
 Visual search time on a soft keyboard interface is affected by various other factors, such as familiarity with the organization of the letters, the number of letters on each key, and the grouping of letters, in addition to the number of elements on the interface

Models of Visual Search
1. Simple Search Model (Neisser)
This model predicts that the response time will increase linearly as a function of the total number of objects that must be examined.
Mean predicted response time:
 RT_a = α_a + β m_s (target absent)
 RT_p = α_p + β (m_s + 1)/2 (target present)
Where
– m_s = the total number of items in the display
– β = the search time required to examine each item in a serial, non-overlapping search
– α = a constant amount of time required for processes not related to the number of items in the display (which may differ depending on whether the target is absent or present)
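The two equations above can be sketched directly; the parameter values below (30 ms per item, 400 ms base time) are assumptions for illustration, not values from the slides:

```python
def rt_absent(m_s, alpha_a, beta):
    """Target absent: all m_s items must be examined."""
    return alpha_a + beta * m_s

def rt_present(m_s, alpha_p, beta):
    """Target present: a self-terminating serial search examines
    (m_s + 1) / 2 items on average before finding the target."""
    return alpha_p + beta * (m_s + 1) / 2

# Illustrative parameters (assumed, not from the slides):
beta = 0.030    # 30 ms per item
alpha = 0.400   # 400 ms for processes unrelated to display size
for m in (8, 16, 32):
    print(m, rt_present(m, alpha, beta), rt_absent(m, alpha, beta))
```

Note the signature prediction: the target-absent slope (β) is twice the target-present slope (β/2), which is how serial self-terminating search is diagnosed experimentally.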

Models of Visual Search (cont.)
2. Feature Integration Model (Treisman)
 Early in the processing of visual information, a separate feature map is created for each primary feature (such as size, color, shape, and orientation)
 These feature maps are searched in parallel, pre-attentively, for the presence of a single object that possesses a simple feature corresponding to one of the maps
 If the target item does not possess a distinguishing primary feature, the model must resort to a serial search process, much like the simple search model

Models of Visual Search (cont.)
3. Guided Search Model (Cave and Wolfe)
The Guided Search model has a pre-attentive stage and an attentive stage; information from the first stage is used to guide deployments of selective attention in the second stage.
In the parallel stage, the element that most resembles the target and differs most from the other elements is identified.

Models of Visual Search (cont.)
Search time is linearly related to the number of steps needed to locate the target.
The visual search time is then calculated by the formula
RT = No. of steps * Cycle time + Overhead time
Where
 RT = reaction time or response time, the time required to find an element on the interface
 Cycle time = the constant time required to process an individual element in the serial search; this value is taken from the cognitive science literature
 Overhead time = the time for key pressing during the experiment

Models of Visual Search (cont.)
 The activation score for each element is calculated by the following equation [equation shown on slide], where
– A_i = net activation of element i for a visual feature
– p = bottom-up coefficient
– q = top-down coefficient
– f_i = activation of element i for that feature
– f_j = activation of each of the other elements
– |d_i - d_j| = absolute difference in distance between the i-th and j-th elements
– t = activation of the target for that particular visual feature
– n = number of elements in the interface

Identified factors for Soft Keyboards
1. Grouping
2. Size of the keys
3. Number of groups
4. Distance between the groups
5. Size of the groups
6. Number of elements
7. Shape characteristics

Our Proposed Model
 Selecting Model Parameters and Factors
 Collecting Experimental Data from Search Trials
 Parameter Fitting
 Calculations of Activation Score and Visual Search Time

Model Parameters and Factors
 Factors
 Grouping Factor: whenever a large number of elements is present in the interface, grouping and labeling the elements decreases the visual search time
 Shape Characteristics: describes how much one element differs from the other elements in shape

Model Parameters and Factors (cont.)
 Parameters
 Activation score of every element, for every factor
 Bottom-up coefficient for the corresponding factor
 Top-down coefficient for the corresponding factor
 Cycle time
 Overhead time

Collecting Experimental Data
 Step 1: A keyboard-hooking program was run, which stored all key-press events, along with their timestamps, in a file in a temporary folder
 Step 2: After the first step, participants were shown a screen (displayed on the slide)
 Step 3: As soon as the user was ready to search, focus was set to the "layout" button; the user then hit the button

Experiment (cont.)
 Step 4: After hitting the button, a screen containing the keyboard layout appeared, and the participant had to search for the target
 Step 5: As soon as the participant found the target, he or she pressed the button again
 Step 6: Some verification was also done to check whether the participant had actually seen the target

Layouts for Experiment

Parameter Fitting
 The average search times across the 10 participants over all search trials were used to fit the model predictions
 With a given set of model parameters, the model computes a predicted visual search time for locating each target
 The predicted search times were then correlated with the experimentally measured search times, and the squared correlation coefficient was computed
 A single model parameter was then changed, and the predicted search times for all targets were computed again

Parameter Fitting (cont.)
 These predicted search times were again used to calculate the squared correlation coefficient with the experimental search times
 If the parameter change led to an increase in the squared correlation coefficient, the change was kept; otherwise it was undone
 This process iterated until no further parameter change led to an increase in the squared correlation coefficient
 The model parameters were initially set manually
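The fitting procedure described above is a one-parameter-at-a-time hill climb on the squared correlation coefficient. A minimal sketch, using a made-up power-law model and synthetic data in place of the paper's model and measurements:

```python
import random

def r_squared(pred, obs):
    """Squared Pearson correlation between predicted and observed times."""
    n = len(pred)
    mp, mo = sum(pred) / n, sum(obs) / n
    cov = sum((p - mp) * (o - mo) for p, o in zip(pred, obs))
    vp = sum((p - mp) ** 2 for p in pred)
    vo = sum((o - mo) ** 2 for o in obs)
    return cov * cov / (vp * vo) if vp and vo else 0.0

def fit(params, predict, observed, step=0.01, iters=2000):
    """Change one parameter at a time; keep the change only if the
    squared correlation with the observed times increases."""
    best = r_squared(predict(params), observed)
    for _ in range(iters):
        trial = params[:]
        trial[random.randrange(len(trial))] += random.choice((-step, step))
        r2 = r_squared(predict(trial), observed)
        if r2 > best:            # improvement: keep the change
            params, best = trial, r2
        # otherwise the change is discarded (undone)
    return params, best

# Toy demonstration: recover the exponent of a power-law search model.
random.seed(42)
xs = [1, 2, 3, 4, 5, 6, 7, 8]
observed = [0.3 * x ** 0.5 for x in xs]             # synthetic "data"
predict = lambda p: [p[0] * x ** p[1] for x in xs]
start = [1.0, 1.0]
init_r2 = r_squared(predict(start), observed)
params, r2 = fit(start, predict, observed)
print(params, r2)
```

By construction the squared correlation never decreases across iterations, which matches the keep-or-undo rule on the slide; the stopping criterion here is a fixed iteration budget rather than the slide's "no further improvement" judgment.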

Calculations of Activation Score and Visual Search Time
 After representing each element as a pixel matrix, the initial activation for shape can be calculated as follows:
 Binary values are obtained by taking blank pixels as '0' and filled pixels as '1'
 From the binary representation, a decimal value is calculated for each element
 From the initial activations, the final activations are calculated using the activation formula
 After calculating the activations for the individual factors, the total activation is given by Activation(total) = Activation(group) + Activation(shape)
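The binary-to-decimal step above can be sketched directly. The 3x3 glyph patterns below are made-up illustrations, not the paper's key bitmaps:

```python
# Illustrative 3x3 binary glyphs for two keys (1 = filled pixel, 0 = blank).
# The glyph patterns are assumptions for the sketch.
glyphs = {
    "I": [[0, 1, 0],
          [0, 1, 0],
          [0, 1, 0]],
    "L": [[1, 0, 0],
          [1, 0, 0],
          [1, 1, 1]],
}

def shape_activation(glyph):
    """Initial shape activation: read the binary pixel matrix row by
    row as one binary number and convert it to decimal."""
    bits = "".join(str(px) for row in glyph for px in row)
    return int(bits, 2)

for name, g in glyphs.items():
    print(name, shape_activation(g))
```

The resulting decimal values feed into the activation formula as the initial activations for the shape factor, after which the group and shape activations are summed per element.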

Calculations of Activation Score and Visual Search Time
 The distance between two elements is calculated as the Euclidean distance
 Visual search time, or response time (RT), is calculated as follows:
RT = No. of steps * Cycle time + Overhead time + System response time
– Cycle time = 30 msec (taken from the cognitive science literature)
– Overhead time = 50 msec
– System response time = time delay between successive key presses = 150 msec (calculated)
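The formula and the constants above translate directly into code (constants converted to seconds; the 5-step example target is illustrative):

```python
import math

# Constants from the slides, converted to seconds.
CYCLE_TIME = 0.030        # 30 msec per serial search step
OVERHEAD_TIME = 0.050     # 50 msec for key pressing
SYSTEM_RESPONSE = 0.150   # 150 msec delay between successive key presses

def euclidean(p, q):
    """Euclidean distance between two key centres (x, y)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def response_time(n_steps):
    """RT = No. of steps * cycle time + overhead + system response time."""
    return n_steps * CYCLE_TIME + OVERHEAD_TIME + SYSTEM_RESPONSE

# e.g. a target reached after 5 serial steps:
print(response_time(5))   # ~0.35 s
```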

Model Validation
Targets: A, B, division, G, I, S, Z, Shift
[Table of observed time, number of steps, and predicted time for each target shown on slide]

Comparison Between Hick-Hyman Law & our Proposed Model
Targets: A, B, division, G, I, S, Z, Shift
[Table of observed time, predicted time and % error for the Hick-Hyman law, and predicted time and % error for our model shown on slide]
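The comparison metric is straightforward percent error against the observed time. The numeric values below are hypothetical stand-ins (the slide's actual measurements did not survive the transcript), chosen only to show the calculation:

```python
def percent_error(observed, predicted):
    """% error = |observed - predicted| / observed * 100."""
    return abs(observed - predicted) / observed * 100

# Hypothetical observed/predicted times in seconds for one target.
observed = 0.60
hick_hyman_pred = 0.95
our_model_pred = 0.62

print(percent_error(observed, hick_hyman_pred))  # larger error
print(percent_error(observed, our_model_pred))   # smaller error
```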

Conclusion
 The comparison shows that our proposed model predicts visual search time more accurately than the Hick-Hyman law
 Some more factors remain to be considered:
 Size of the groups
 Size of the keys
 Spacing between the groups
 Number of groups