Component-specific usability testing
Dr Willem-Paul Brinkman, Lecturer, Department of Information Systems and Computing, Brunel University

Topics
• Introduction
• Whether and how the usability of components can be tested empirically
  - Testing different versions of a component
  - Testing different components
• Whether and how the usability of components can be affected by other components
  - Consistency
  - Memory load

Introduction
Component-Based Software Engineering
Empirical Usability Testing

Layered communication

Layered Protocol Theory (Taylor, 1988)
[Diagram: the calculator as a layered system. The user interacts with the calculator through two stacked interaction components: the Editor, which controls the equation (e.g. entering "15+23="), and the Processor, which controls the results (Add).]
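To make the layered reading of this example concrete, here is a minimal sketch (class and method names are hypothetical, not from the slides) of the calculator's message exchange: keystrokes arrive at the Editor, which composes them into a complete equation and passes that single higher-level message upward to the Processor. Counting messages per layer is exactly what the component-specific measures below build on.

```python
# Minimal sketch of layered message exchange for the calculator example.
# Class and method names are illustrative assumptions, not taken from the slides.

class Processor:
    """Higher-level component: controls the results."""
    def __init__(self):
        self.messages_received = 0

    def receive(self, equation: str) -> int:
        self.messages_received += 1
        left, right = equation.rstrip("=").split("+")
        return int(left) + int(right)            # "15+23=" -> 38

class Editor:
    """Lower-level component: controls the equation."""
    def __init__(self, upper: Processor):
        self.upper = upper
        self.messages_received = 0
        self.buffer = ""

    def receive(self, keystroke: str):
        self.messages_received += 1              # every keystroke is one message
        self.buffer += keystroke
        if keystroke == "=":                     # equation complete: send it upward
            result, self.buffer = self.upper.receive(self.buffer), ""
            return result

processor = Processor()
editor = Editor(processor)
for key in "15+23=":
    result = editor.receive(key)

print(result)                          # 38
print(editor.messages_received)        # 6 keystrokes handled at the Editor layer
print(processor.messages_received)     # 1 equation handled at the Processor layer
```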

Usability Testing
Aim: to evaluate the usability of a component based on the message exchange between a user and a specific component

Two paradigms
• Multiple-versions testing paradigm
• Single-version testing paradigm
[Diagram: component life cycle (Create, Manage, Support, Re-use).]

Test Procedure
• Normal procedures of a usability test
• User task which requires interaction with the components under investigation
• Users must complete the task successfully

Component-specific usability measures
• Objective performance: number of messages received (the effort users put into the interaction)
• Perceived ease-of-use
• Perceived satisfaction
[Diagram: an interaction component with its control process and control loop.]

Component-specific usability measures: increasing the statistical power
(Objective performance | Perceived ease-of-use | Perceived satisfaction)
A component-specific measure y1 (based on the messages a component receives) and an overall measure y2 (based on keystrokes) both contain a usability signal plus error:
  y1 = x_k + ε_k   (messages)
  y2 = x_m + ε_m   (keys)
  ε_k = ε_k,component + ε_k,rest
  ε_m = ε_m,component + ε_m,rest
Assumption: ε_k,rest ≤ ε_m,rest, so the component-specific measure carries less irrelevant variance and therefore more statistical power.
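A small Monte Carlo sketch (illustrative effect size, noise levels and sample size; not data from the study) of why the assumption ε_k,rest ≤ ε_m,rest matters: two simulated measures carry the same component effect, but the measure with less irrelevant noise detects the difference between two component versions far more often.

```python
# Monte Carlo sketch: same component effect, different amounts of irrelevant noise.
# All numbers are illustrative assumptions, not values from the experiment.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_users, effect, runs = 20, 1.0, 2000
sd_component, sd_rest_small, sd_rest_large = 1.0, 0.5, 2.0

hits_specific = hits_overall = 0
for _ in range(runs):
    # True usability difference between version A and version B of one component
    base_a = rng.normal(0.0, sd_component, n_users)
    base_b = rng.normal(effect, sd_component, n_users)

    # Component-specific measure: little noise from the rest of the interaction
    y1_a = base_a + rng.normal(0.0, sd_rest_small, n_users)
    y1_b = base_b + rng.normal(0.0, sd_rest_small, n_users)

    # Overall measure (e.g. total keystrokes): much noise from other components
    y2_a = base_a + rng.normal(0.0, sd_rest_large, n_users)
    y2_b = base_b + rng.normal(0.0, sd_rest_large, n_users)

    hits_specific += stats.ttest_ind(y1_a, y1_b).pvalue < 0.05
    hits_overall += stats.ttest_ind(y2_a, y2_b).pvalue < 0.05

print("power, component-specific measure:", hits_specific / runs)
print("power, overall measure:           ", hits_overall / runs)
```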

Component-specific usability measures
(Objective performance | Perceived ease-of-use | Perceived satisfaction)
Component-specific questionnaires increase the statistical power because they help the users to remember their control experience with a particular interaction component.

Component-specific usability measures
(Objective performance | Perceived ease-of-use | Perceived satisfaction)
Perceived Usefulness and Ease-of-Use questionnaire (Davis, 1989), 6 questions, e.g.
• Learning to operate [name] would be easy for me.
• I would find it easy to get [name] to do what I want it to do.
(Rating scale anchors: Unlikely / Likely)
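As a small illustration (hypothetical helper names and made-up ratings), the same items can be instantiated for each interaction component by substituting [name], and a respondent's ratings averaged into one component-specific ease-of-use score.

```python
# Instantiate ease-of-use items per component by substituting [name],
# then average one respondent's 7-point ratings (1 = Unlikely, 7 = Likely).
# Helper names and the ratings are illustrative assumptions.
items = [
    "Learning to operate [name] would be easy for me.",
    "I would find it easy to get [name] to do what I want it to do.",
]

def component_items(component: str) -> list[str]:
    return [item.replace("[name]", component) for item in items]

def ease_of_use_score(ratings: list[int]) -> float:
    return sum(ratings) / len(ratings)

for question in component_items("the Function Selector"):
    print(question)
print("Score:", ease_of_use_score([6, 5]))   # two item ratings -> 5.5
```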

Component-specific usability measures
(Objective performance | Perceived ease-of-use | Perceived satisfaction)
Post-Study System Usability Questionnaire (Lewis, 1995), e.g.
• The interface of [name] was pleasant.
• I like using the interface of [name].
(Rating scale anchors: Strongly disagree / Strongly agree)

Experimental validation
• 80 users
• 8 mobile telephones
• 3 components were manipulated according to Cognitive Complexity Theory (Kieras & Polson, 1985):
  1. Function Selector
  2. Keypad
  3. Short Text Messages

Architecture
[Diagram: component architecture of the mobile telephone, built from Send Text Message, Function Selector, and Keypad components.]

Experimental validation: Function Selector
• Broad/shallow version
• Narrow/deep version

Experimental validation: Keypad
• Repeated-Key method (entering "L")
• Modified-Model-Position method (entering "J")

Experimental validation: Send Text Message
• Simple version
• Complex version

Results
Average probability that a measure finds a significant (α = 0.05) effect for the usability difference between the two versions of the Function Selector (FS), Short Text Message (STM), or Keypad components. [Chart not reproduced in the transcript.]

Results
Wilcoxon matched-pairs signed-ranks tests between the number of correct classifications made by discriminant analyses on overall and on component-specific measures. [Table not reproduced in the transcript.]

Topics
• Introduction
• Whether and how the usability of components can be tested empirically
  - Testing different versions of a component
  - Testing different components
• Whether and how the usability of components can be affected by other components
  - Consistency
  - Memory load

Two paradigms
• Multiple-versions testing paradigm
• Single-version testing paradigm
[Diagram: component life cycle (Create, Manage, Support, Re-use).]

Testing Different Components
Component-specific objective performance measure:
1. Messages received + weight factor: a common currency
2. Comparison with an ideal user: a common point of reference
The usability of the individual components in a single device can then be compared with each other and the components prioritized for potential improvements.

Assigning weight factors to represent the user's effort in the case of an ideal user
[Diagram: drawing-application example with the Right Mouse Button Menu and Properties components; the messages exchanged are annotated with weight factors: Click {1}, Click {1}, Call <> {2}, Set <Fill colour red, no border> {7}.]

Total effort value
Total effort = Σ MRi.W
MRi.W: the weight factor of message i received by the component
[Diagram: worked example with the Right Mouse Button Menu and Properties components and the messages Click {1}, Click {1}, Call <> {2}; the per-component effort values sum to the total (5 + 2 = 7).]
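A minimal sketch of the total-effort computation (hypothetical function name; the example messages and weights echo the slide): sum the weight factors of every message the component received from the ideal user.

```python
# Total effort = sum of the weight factors of the messages a component received.
# Function name is illustrative; the example trace follows the slide.

def total_effort(messages_received: list[tuple[str, int]]) -> int:
    """Sum the weight factors (ideal-user effort) of all received messages."""
    return sum(weight for _name, weight in messages_received)

# Ideal-user trace for the Right Mouse Button Menu component
rmb_menu_messages = [("Click", 1), ("Click", 1)]
print(total_effort(rmb_menu_messages))   # 2
```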

Assigning weight factors in the case of a real user
Correction for the inefficiency of higher- and lower-level components
[Diagram: component stack with Visual Drawing Objects, Properties, and Right Mouse Button Menu.]

Assigning weight factors in the case of a real user
Assign weight factors as if the lower components operate optimally.
Inefficiency of lower-level components: they need more messages to pass a message upwards than ideally required.
[Diagram: component stack with Visual Drawing Objects, Properties, and Right Mouse Button Menu.]

Assigning weight factors in the case of a real user
Inefficiency of higher-level components: more messages are requested than ideally required.
UE = Σ MRi.W × (#MSUideal / #MSUreal)
UE: user effort
MRi.W: the weight factor of message i received by the component
#MSUreal: number of messages sent upward by the real user
#MSUideal: number of messages sent upward by the ideal user
[Diagram: component stack with Visual Drawing Objects, Properties, and Right Mouse Button Menu.]
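A minimal sketch of the corrected user-effort computation (hypothetical names and an invented trace), following the UE formula above: the effort received from the real user is scaled by #MSUideal / #MSUreal so that extra requests caused by higher-level components do not count against this component.

```python
# User effort for one component, corrected for higher-level inefficiency:
# received effort is scaled by (#MSU_ideal / #MSU_real).
# Function name and the example trace are illustrative assumptions.

def user_effort(messages_received: list[tuple[str, int]],
                msu_real: int, msu_ideal: int) -> float:
    """Sum of received weight factors, scaled by #MSU_ideal / #MSU_real."""
    received = sum(weight for _name, weight in messages_received)
    return received * msu_ideal / msu_real

# The higher-level component requested the menu twice where once would have been
# enough, so the Right Mouse Button Menu received four clicks instead of two.
trace = [("Click", 1)] * 4
print(user_effort(trace, msu_real=2, msu_ideal=1))   # 4 * (1/2) = 2.0
```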

Ideal User versus Real User
Extra User Effort = User Effort - Total Effort
• Total Effort: the total effort an ideal user would make
• User Effort: the total effort the real user made
• Extra User Effort: the extra effort the real user made
Calculate this for each component, then prioritize.
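Putting the two values together, a short sketch (made-up per-component numbers) of the prioritization step: for each component, subtract the ideal user's total effort from the real user's corrected effort and rank the components by the extra effort.

```python
# Prioritize components: extra user effort = corrected real effort - ideal effort.
# The per-component numbers below are made-up example data.

ideal_total_effort = {"Function Selector": 6.0, "Send Text Message": 10.0, "Keypad": 30.0}
real_user_effort   = {"Function Selector": 11.5, "Send Text Message": 13.0, "Keypad": 33.0}

extra_effort = {name: real_user_effort[name] - ideal_total_effort[name]
                for name in ideal_total_effort}

# The components with the most extra effort are the best candidates for improvement.
for name, extra in sorted(extra_effort.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {extra:+.1f} extra effort")
```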

Experimental validation
• 40 users
• 40 mobile telephones
• 2 components were manipulated (the Keypad used only the Repeated-Key method):
  1. Function Selector
  2. Short Text Messages

Results
[Chart: extra user effort per mobile phone; not reproduced in the transcript.]

Results
Partial correlation between extra user effort regarding the two components and other usability measures:

Measure                              Function Selector   Send Text Message
Objective
  Extra keystrokes                    0.64**               0.44**
  Task duration                       0.63**               0.39**
Perceived
  Overall ease-of-use                -0.43**              -0.26*
  Overall satisfaction               -0.25*               -0.22
  Component-specific ease-of-use     -0.55**              -0.34**
  Component-specific satisfaction    -0.41**              -0.37**

*p < .05. **p < .01.

Comparison with other evaluation methods
Methods considered: overall measures; sequential data analysis; GOMS; thinking-aloud, Cognitive Walkthrough and heuristic evaluation.
Overall measures (e.g. keystrokes, task duration, overall perceived usability):
• Relatively easy to obtain
• Unsuitable for evaluating components

Comparison with other evaluation methods
Sequential data analysis:
• Based only on lower-level events
• Pre-processing: selection, abstraction, and re-coding
• The relation between a higher-level component and a compound message is less direct
• Components' status is not recorded

Comparison with other evaluation methods
GOMS:
• Helps to understand the problem
• Only looks at error-free task execution
• Considers the system only at the lowest-level layer

Comparison with other evaluation methods
Thinking-aloud, Cognitive Walkthrough and heuristic evaluation:
• Quicker
• Evaluator effect (reliability)

Topics
• Introduction
• Whether and how the usability of components can be tested empirically
  - Testing different versions of a component
  - Testing different components
• Whether and how the usability of components can be affected by other components
  - Consistency
  - Memory load

Consistency problems

Consistency Activation of the wrong mental model

Consistency experiments
• 48 users
• 3 types of applications were used:
  1. 4 room thermostats
  2. 4 web-enabled TV sets (2 TV sets × 2 web page layouts)
  3. 4 applications (2 timers × 2 application domains)

Within one layer

Within one layer – Experimental Design
[2 × 2 design: the day-time temperature component and the night-time temperature component are each implemented as either a Moving Pointer or a Moving Scale.]

Within one layer - Results

Between layers
Web-enabled TV set: browser versus web pages

Between layers - Page Layout
• List layout
• Matrix layout

Between layers - Browser

Between layers – Experimental Design
[2 × 2 design: web page version (List or Matrix layout) × browser version (Linear or Plane).]

Between layers - Results

Application domain

Between application domains – Experimental Design
[2 × 2 design: timer (Mechanical alarm or Hot dish) × application domain (Alarm radio or Microwave).]

Application domain - Results

Topics
• Introduction
• Whether and how the usability of components can be tested empirically
  - Testing different versions of a component
  - Testing different components
• Whether and how the usability of components can be affected by other components
  - Consistency
  - Memory load

Mental effort problems

Mental Effort - Calculator
[Diagram: the layered calculator again; the user interacts via the Editor (controls the equation) and the Processor (controls the results).]

Memory load – Experimental Design
[2 × 2 design: equation (Easy or Difficult) × editor display (Large display or Small display).]

Mental Effort - Heart-rate variability

Mental Effort - Control of the higher-level layer

Conclusions
• Whether and how the usability of components can be tested empirically
  - Testing different versions of a component: component-specific measures are more powerful
  - Testing different components: components can be prioritized for potential improvements
• Whether and how the usability of components can be affected by other components
  - Consistency: components on the same or on higher-level layers can activate the wrong mental model
  - Memory load: lower-level interaction affects the higher-level interaction strategy

Questions
Thank you for your attention