1
Empirical Usability Testing in a Component-Based Environment: Improving Test Efficiency with Component-Specific Usability Measures
Willem-Paul Brinkman, Brunel University, London
Reinder Haakma, Philips Research Laboratories Eindhoven
Don Bouwhuis, Eindhoven University of Technology
2
Topics
- Research Motivation
- Testing Method
- Experimental Evaluation of the Testing Method
- Conclusions
3
Research Motivation Studying the usability of a system
4
Research Motivation
- External Comparison: relating differences in usability to differences between the systems
- Internal Comparison: trying to link usability problems to parts of the system
5
Component-Based Software Engineering
- Multiple-versions testing paradigm (external comparison)
- Single-version testing paradigm (internal comparison)
[Diagram: component life cycle (Create, Manage, Support, Re-use)]
6
Research Motivation
PROBLEM
1. Empirical analysis only of the overall system (task time, keystrokes, questionnaires, etc.): not powerful
2. Usability tests, heuristic evaluations, and cognitive walkthroughs in which experts identify problems: unreliable
SOLUTION
Component-specific usability measures: more powerful and more reliable
7
Testing Method: Procedure
- Normal procedures of a usability test
- A user task that requires interaction with the components under investigation
- Users must complete the task successfully
8
Component-specific usability measures
- Perceived ease-of-use
- Perceived satisfaction
- Objective performance
A component-specific questionnaire helps users remember their interaction experience with a particular component.
9
Component-specific usability measures: perceived ease-of-use
Perceived Usefulness and Ease-of-Use questionnaire (Davis, 1989), 6 questions rated on a scale from Unlikely to Likely, e.g.:
- Learning to operate [name] would be easy for me.
- I would find it easy to get [name] to do what I want it to do.
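A minimal scoring sketch, not something the slides prescribe: the function name and the simple averaging scheme are assumptions; the only givens are the six items and the 7-point Unlikely-to-Likely scale.

```python
# Hypothetical sketch: average one participant's 7-point ratings
# (1 = Unlikely ... 7 = Likely) over the six ease-of-use items that
# were asked for one specific component.
def ease_of_use_score(ratings):
    if len(ratings) != 6 or not all(1 <= r <= 7 for r in ratings):
        raise ValueError("expected six ratings on a 1-7 scale")
    return sum(ratings) / len(ratings)

# One participant's made-up ratings for the Function Selector:
print(ease_of_use_score([6, 5, 7, 6, 6, 5]))  # -> 5.83...
```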
10
Component-specific usability measures: perceived satisfaction
Post-Study System Usability Questionnaire (Lewis, 1995), rated from Strongly disagree to Strongly agree, e.g.:
- The interface of [name] was pleasant.
- I like using the interface of [name].
11
Component-specific usability measures: objective performance
- Number of messages a component receives directly, or indirectly from lower-level components
- Reflects the effort users put into the interaction
- Control loop: each message is one cycle of the component's control loop (a sketch of such instrumentation follows)
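A minimal sketch of how this message counting could be instrumented; the class and method names are hypothetical, since the slides define the measure but not an implementation. Each component counts every message it receives, whether it arrives directly from the user or is forwarded upward by a lower-level component, and each count is one cycle of that component's control loop.

```python
class CountingComponent:
    """Hypothetical interaction component that logs control-loop cycles."""

    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent          # next higher-level component, if any
        self.messages_received = 0    # the objective performance measure

    def receive(self, message):
        self.messages_received += 1   # one more control-loop cycle
        if self.parent is not None:   # forward upward: the parent receives
            self.parent.receive(message)  # the message indirectly

# Made-up layering: keystrokes reach the Keypad directly and the
# higher-level components indirectly.
stm = CountingComponent("Send Text Message")
selector = CountingComponent("Function Selector", parent=stm)
keypad = CountingComponent("Keypad", parent=selector)
for key in "hello":
    keypad.receive(key)
print(keypad.messages_received)  # 5 cycles at every level in this toy setup
```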
12
Architectural Element: Interaction Component
- Elementary unit of an interactive system on which behaviour-based evaluation is possible
- A unit within an application that can be represented as a finite state machine and that receives signals from the user directly, or indirectly via other components
- Users must be able to perceive or infer the state of the interaction component
[Diagram: examples of suitable agent models, including the Interactor (CNUCE model), MVC, and PAC]
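As a toy illustration of this definition, here is an interaction component written as an explicit finite state machine; the menu states and key signals are invented for the example, and the perceivable state is simply the screen the user would see.

```python
class FunctionSelectorFSM:
    """Toy finite state machine: states are menu screens the user can see."""

    TRANSITIONS = {
        ("main", "down"): "messages",
        ("messages", "down"): "settings",
        ("settings", "down"): "main",
        ("messages", "select"): "write_message",
    }

    def __init__(self):
        self.state = "main"  # perceivable state: the currently shown screen

    def receive(self, signal):
        # Signals arrive from the user directly, or via other components;
        # unknown signals leave the state unchanged.
        self.state = self.TRANSITIONS.get((self.state, signal), self.state)

fsm = FunctionSelectorFSM()
for signal in ("down", "select"):
    fsm.receive(signal)
print(fsm.state)  # -> write_message
```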
13
Interaction layers
[Diagram: a calculator with two interaction layers between user and calculator. At the Editor layer the user controls the equation, keying in "15 + 23 =". At the Processor layer the result is controlled: in binary, 01111 + 10111 = 100110, i.e. 15 + 23 = 38.]
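The calculator example can be sketched in code (an assumed reading of the diagram, with invented class names): the Editor layer controls the equation string, and only a completed equation travels up to the Processor layer, which controls the result.

```python
class Processor:
    """Higher layer: controls the result (binary addition in the diagram)."""

    def __init__(self):
        self.result = None

    def receive(self, equation):
        left, right = equation.rstrip("=").split("+")
        self.result = int(left) + int(right)  # 0b01111 + 0b10111 = 0b100110

class Editor:
    """Lower layer: controls the equation the user is keying in."""

    def __init__(self, processor):
        self.processor = processor
        self.equation = ""  # perceivable state: what the display shows

    def receive(self, key):
        self.equation += key
        if key == "=":  # equation complete: one message to the higher layer
            self.processor.receive(self.equation)

processor = Processor()
editor = Editor(processor)
for key in "15+23=":
    editor.receive(key)
print(editor.equation, processor.result)  # -> 15+23= 38
```

Note the asymmetry the layering creates: the user's six keystrokes are six cycles of the Editor's control loop but only one cycle of the Processor's, which is exactly what the per-component message counts capture.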
14
Control Loop Evaluation
[Diagram: a control loop between user and system: the user holds a reference value, sends user messages to the component, and receives feedback.]
15
Lower-Level Control Loop
[Diagram: the lower-level control loop between user and calculator.]
16
Higher-Level Control Loop
[Diagram: the higher-level control loop between user and calculator.]
17
Experimental Evaluation of the Testing Method
- 80 users
- 8 mobile telephones
- 3 components manipulated according to Cognitive Complexity Theory (Kieras & Polson, 1985):
1. Function Selector
2. Keypad
3. Short Text Messages
18
Architecture
[Diagram: component architecture of the mobile telephone, with the Send Text Message and Function Selector components built on top of the Keypad component.]
19
Evaluation study – Function Selector
Versions: broad/shallow vs. narrow/deep
20
Evaluation study – Keypad
Versions: Repeated-Key method ("L") vs. Modified-Model-Position method ("J")
21
Evaluation study – Send Text Message
Versions: simple vs. complex
22
Statistical Tests
[Plot: distributions of the number of keystrokes and task time.]
x̄ = sample mean (estimator of µ)
s = estimate of the standard deviation σ
s_x̄ = estimate of the standard error of the mean, where s_x̄² = s²/n
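A short numeric sketch of these estimators; the task times below are made up for illustration.

```python
import math
from statistics import mean, stdev

times = [41.0, 38.5, 44.2, 40.1, 39.7, 43.3]  # hypothetical task times (s)

x_bar = mean(times)             # sample mean, estimator of mu
s = stdev(times)                # sample standard deviation, estimator of sigma
se = s / math.sqrt(len(times))  # standard error of the mean: s / sqrt(n)
print(x_bar, s, se)
```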
23
Statistical Tests
p-value: the probability of making a Type I, or α, error: wrongly rejecting the hypothesis that the underlying distributions are the same.
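A hedged sketch of such a test with made-up keystroke counts; the slides do not name a specific test, so a two-sample t-test is assumed here.

```python
from scipy import stats

version_a = [52, 48, 55, 60, 47, 51, 58, 49]  # made-up keystroke counts
version_b = [41, 44, 39, 46, 42, 40, 45, 43]

t, p = stats.ttest_ind(version_a, version_b)
alpha = 0.05  # accepted Type I error rate
if p < alpha:
    print(f"p = {p:.4f} < {alpha}: reject the hypothesis of equal distributions")
else:
    print(f"p = {p:.4f} >= {alpha}: fail to reject")
```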
25
Results – Function Selector Results of two multivariate analyses and related univariate analyses of variance with the version of the Function Selector as independent between-subjects variable.
26
Results – Keypad Results of multivariate and related univariate analyses of variance with the version of the Keypad as independent between-subjects variable.
27
Results – Send Text Message Results of two multivariate analyses and related univariate analyses of variance with the version of the STM component as independent between-subjects variable.
28
Power of component-specific measures
Statistical power: 1 − β
Type II, or β, error: failing to reject the hypothesis when it is false
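For a concrete feel of 1 − β, a sketch using statsmodels; the effect size and group size are assumptions for illustration, not values from the study.

```python
from statsmodels.stats.power import TTestIndPower

# Power of a two-sample t-test for an assumed standardized effect size
# of 0.5 with 40 participants per group (80 users split over two versions).
power = TTestIndPower().solve_power(effect_size=0.5, nobs1=40, alpha=0.05)
print(f"1 - beta = {power:.2f}")
```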
29
Power of component-specific measures
x̄ = sample mean (estimator of µ)
s = estimate of the standard deviation σ
s_x̄ = estimate of the standard error of the mean, where s_x̄² = s²/n
30
Power of component-specific measures
Statistical power: 1 − β
Component-specific measures are less affected by usability problems that users may or may not encounter with other parts of the system.
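A Monte-Carlo sketch of this claim (all numbers invented): the same true difference between two component versions is tested once with a clean component-specific measure and once with an overall measure that also carries variance from unrelated parts of the system.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, runs, alpha, effect = 20, 2000, 0.05, 5.0

def detection_rate(extra_noise_sd):
    """Fraction of simulated studies in which the difference is detected."""
    hits = 0
    for _ in range(runs):
        # extra_noise_sd models variance contributed by other components.
        a = rng.normal(100.0, 10.0, n) + rng.normal(0.0, extra_noise_sd, n)
        b = rng.normal(100.0 + effect, 10.0, n) + rng.normal(0.0, extra_noise_sd, n)
        if stats.ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / runs

print("component-specific measure:", detection_rate(0.0))
print("overall measure:           ", detection_rate(15.0))
```

Under these assumed numbers the extra variance sharply lowers the detection rate; that gap is the power advantage claimed for component-specific measures.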
31
Results – Power Analysis
Average probability that a measure finds a significant (α = 0.05) effect for the usability difference between the two versions of the Function Selector, STM, or Keypad component.
32
Conclusions
Component-specific measures can be used to test the difference in usability between different versions of an interaction component:
1. Objective performance measure: number of messages received directly, or indirectly via lower-level components
2. Subjective usability measures: ease-of-use and satisfaction questionnaires
Component-specific measures are potentially more powerful than overall usability measures.
33
Questions / Discussion Thanks for your attention
34
Layered Protocol Theory (Taylor, 1988) and Component-Based Interactive Systems
35
Reflection: Limitations
1. Different lower-level versions imply different effort involved when sending a message
2. The usability of a component can affect the interaction users have with other components (might an overall measure then be more powerful?)
3. Can instrumentation code be inserted?
36
Reflection: Other Evaluation Methods
1. Unit testing: lacks the context of a real task
2. Sequential data analysis: lacks a direct link with higher layers
3. Event-based usability evaluation: lacks a direct link with the component
37
Reflection: Exploitation of the Testing Method
1. Creation process: reduces the need to evaluate a component each time it is deployed
2. Re-use process: a final usability test is still needed
38
Testing Method
Aim: to evaluate the difference in usability between two or more versions of a component