Chapter 27: Variations and more complex evaluations
Evaluation elements

Element | User-observation choices                        | Heuristic inspection choices
--------|-------------------------------------------------|------------------------------------------------------
Observe | Evaluator observes participants                 | Inspectors observe themselves
Compare | Participant's personal standard of a good UI    | List of heuristics
Listen  | Think-aloud or retrospective protocol           | Inspectors ask themselves questions posed by heuristics
Measure | Performance metrics, satisfaction ratings, etc. | Frequency of heuristic violations
Numerous options: Observe
- direct observation
- indirect observation
  - video
  - one-way mirror
  - eye-tracking software
- retrospection
Numerous options: Compare
- users' personal concepts
- heuristics
- design principles
- design guidelines
- usability standards
- style guides
Numerous options: Listen
- think-aloud protocols
- cognitive walkthroughs
- debriefings
- retrospective protocols
- users' opinions
- questionnaires
Numerous options: Measure
- questionnaires (satisfaction)
- performance metrics (see the sketch below)
  - efficiency (speed; time to complete a task)
  - effectiveness (accuracy; error rates)
- process metrics
- eye movements
- other physiological measures (blood pressure, etc.)
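To make the performance metrics concrete, here is a minimal sketch that computes efficiency (mean time on task), effectiveness (completion rate), and a simple error rate from test-session logs; the participant data and record layout are invented for the example.

```python
# Summarizing hypothetical usability-test logs with the standard library.
from statistics import mean

# Each record: (participant_id, seconds_to_complete, error_count, completed)
sessions = [
    ("P1", 74.2, 1, True),
    ("P2", 51.0, 0, True),
    ("P3", 98.5, 3, False),
    ("P4", 63.3, 0, True),
]

successes = [s for s in sessions if s[3]]
mean_time = mean(s[1] for s in successes)                      # efficiency: speed
completion_rate = sum(s[3] for s in sessions) / len(sessions)  # effectiveness
errors_per_attempt = mean(s[2] for s in sessions)              # accuracy

print(f"Mean time to complete (successes only): {mean_time:.1f} s")
print(f"Completion rate: {completion_rate:.0%}")
print(f"Mean errors per attempt: {errors_per_attempt:.2f}")
```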
Other approaches
- Remote moderated testing: testing done remotely (e.g., via the internet)
- Focus groups
- Card sorting
- Automatic checkers (see the sketch below)
  - accessibility checkers (web-based)
  - HTML adherence
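As a much-simplified illustration of what an automatic checker does, here is a sketch of one common accessibility rule: flagging <img> elements that lack alt text. It uses only Python's standard-library HTMLParser; the class name and sample markup are hypothetical, not taken from any particular tool.

```python
# One rule an accessibility checker might apply: find <img> tags with no alt.
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Records the position of each <img> start tag lacking an alt attribute."""
    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.violations.append(self.getpos())  # (line, column) of the tag

checker = MissingAltChecker()
checker.feed('<p>ok</p><img src="logo.png"><img src="x.png" alt="Logo">')
print(checker.violations)  # one violation: the first <img> has no alt text
```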
Other types of evaluations
- Diagnostic evaluations: what we've been talking about so far (usability evaluations)
- Formative evaluations: part of a continuing development process
- Summative evaluations: performed after completion (has the system met its [usability] goals?)

The latter two are also important in education: a midterm is formative; a final is summative.
Other types of evaluations
- Exploratory evaluations: done early on, on low-fidelity prototypes
- Validation evaluations: seek to make absolute claims in support of a statement such as "the UI meets these requirements"
  - establish a hypothesis
  - choose an appropriate sample size and population (see the sketch below)
  - etc.

Here the authors suggest experimental design and refer to Coolican (1996): Coolican, H. (1996). Introduction to Research Methods and Statistics in Psychology. London: Arnold Publishers.
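Choosing the sample size is the step most easily gotten wrong. As a rough illustration (a textbook normal approximation, not a method taken from these slides or from Coolican), here is an estimate of participants per group for a two-sided, two-group comparison; the effect size, alpha, and power values are conventional defaults assumed for the example.

```python
# Normal-approximation sample size for a two-sided, two-group comparison.
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate participants per group needed to detect a given Cohen's d."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for the test
    z_beta = norm.ppf(power)            # quantile giving the desired power
    return 2 * ((z_alpha + z_beta) / effect_size) ** 2

# A medium effect (d = 0.5) at alpha = .05 and 80% power needs ~63 per group.
print(round(n_per_group(0.5)))
```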
Other types of evaluations
- Assessment evaluations
  - can be formative and diagnostic
  - may start to establish measures of usability
- Comparison evaluations: between competing designs
  - "within subjects": the same participant does both
    - learning effects (need to counteract them, e.g., with a Latin square; see the sketch below)
  - "between subjects": one group does A, another does B
    - can't query preference (each participant evaluated only one design)
    - good for performance metrics, however
  - either way, you will probably need to do inferential statistics (why are they dreaded?)
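Two short sketches of the points above, with hypothetical names and data. The first builds a simple cyclic Latin square for counterbalancing condition order in a within-subjects study (a fully carryover-balanced square needs a different construction); the second runs the independent-samples t-test that a between-subjects A/B comparison typically ends with.

```python
# Counterbalancing and a between-subjects comparison, sketched with SciPy.
from scipy import stats

def latin_square(conditions):
    """Cyclic Latin square: row i is the condition list rotated by i,
    so each condition appears exactly once in every ordinal position."""
    n = len(conditions)
    return [[conditions[(i + j) % n] for j in range(n)] for i in range(n)]

# Assign each row of orderings to a different participant (or subgroup).
for row in latin_square(["A", "B", "C"]):
    print(row)   # ['A', 'B', 'C'], ['B', 'C', 'A'], ['C', 'A', 'B']

# Between subjects: independent-samples t-test on task times (seconds).
group_a = [61.2, 55.4, 70.1, 64.8, 58.9]   # design A
group_b = [72.5, 80.3, 69.9, 77.0, 74.6]   # design B
t, p = stats.ttest_ind(group_a, group_b)
print(f"t = {t:.2f}, p = {p:.4f}")   # p < .05 here, so the designs likely differ
```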