SY DE 542: User Testing
March 7, 2005
R. Chow
User Testing
Objective: evaluate the design, using users (NOT evaluate the users)
Outcomes: design problems and design recommendations (NOT the user's level of proficiency)
Usability
Nielsen (1993) identifies five attributes of usability:
- Learnability
- Efficiency
- Memorability
- Errors
- Satisfaction
Does usability equal usefulness?
Usefulness
Usefulness = Utility AND Usability
Utility:
- Does the design do what is needed?
- Does it help the user make decisions? (real decisions, typical decisions, critical decisions)
- Does it provide the right information, in the right context?
Planning the User Test
Decide on scope and approach based on:
- time available
- money available
- criticality of the system and of its functions
- the user base
Levels of User Testing
- Cognitive Walkthrough
- Heuristic Evaluation
- Performance Testing
- Field Studies
Cognitive Walkthrough
- A relatively cheap form of review
- Set up tasks or scenarios
- The user tries to step through the interface, with help from the designer / facilitator
- Pretend the interface is already built
Cognitive Walkthrough (cont'd)
Look for:
- tasks the user can't complete
- areas of user confusion
- circuitous paths
Finds major problems and "first use" problems.
Cognitive Walkthrough (cont'd)
Best used early in the design process, at the paper prototype stage.
Heuristic Evaluation
- Basically an inspection done by several people who are experts in usability
- Each expert evaluates the design against a set of usability principles and looks for problems
- More experts, more problems found; Nielsen recommends 3 to 5 (see the sketch after this list)
- Again, establish scenarios or tasks
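Nielsen and Landauer (1993) model the share of problems found by i evaluators as 1 - (1 - λ)^i, where λ is the chance that a single evaluator spots a given problem; Nielsen reports λ ≈ 0.31 on average. A minimal sketch of that curve, assuming λ = 0.31 (your own value will differ):

# Nielsen and Landauer's model of problems found by i evaluators:
# found(i) = 1 - (1 - lam)^i, where lam is the chance that one
# evaluator finds a given problem. lam = 0.31 is Nielsen's reported
# average; treat it as an assumption for your own system.
def proportion_found(i, lam=0.31):
    return 1 - (1 - lam) ** i

for i in range(1, 11):
    print(f"{i} evaluators: {proportion_found(i):.0%} of problems found")

Three evaluators find roughly two thirds of the problems and five find about 84%, which is where the "3 to 5" rule of thumb comes from.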
Heuristic Evaluation (cont'd)
Best done early in the design process:
- avoids problems before implementation
- relatively low cost
- but doesn't give direct information on how users will interact with the system
Some Usability Principles
- Use clear and natural language
- Support recognition rather than recall
- Be consistent
- Provide feedback
- Prevent errors
- Support error detection and recovery (both illustrated in the sketch below)
- Provide shortcuts
(adapted from Nielsen, 1993)
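As a concrete, hypothetical illustration of "prevent errors" and "support error detection and recovery": accept only one unambiguous input format, and when a bad value arrives anyway, name the problem and show a valid example. The function below is an assumption for illustration, not something from the slides.

from datetime import date

def parse_iso_date(text):
    """Prevent errors by accepting one unambiguous date format; support
    detection and recovery by naming the problem and a valid example."""
    try:
        return date.fromisoformat(text.strip())
    except ValueError:
        raise ValueError(
            f"Could not read {text!r} as a date. "
            "Please use YYYY-MM-DD, e.g. 2005-03-07."
        )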
Performance Testing
- Needs at least a realistic prototype, or a beta version
- Goal is hard quantitative results
- Design relevant tasks
- Decide on participants (number, characteristics)
- Develop quantifiable measures
Performance Testing (cont'd)
- Used to create a strong case
- Used in research
- Used to quantify the benefits of one interface over another
Field Study
Watch users interact with the system in a realistic setting while they are:
- doing other tasks
- distracted by co-workers
- running other software, answering the phone, etc.
- working on real problems
Field Study (cont'd)
- Identify requirements at the start
- Confirm or improve the design after implementation
- Observations and interviews
- Requires on-site access
- Most realistic results
Test Scenarios
Typical scenarios:
- normal operations
- frequent, routine
Critical scenarios:
- abnormal operations: problems or opportunities
- less frequent, but (can be) anticipated
- never seen before, completely unanticipated
Use different but representative scenarios. Give representative context: initial state, sequence of events / tasks.
Test Tasks
Some possibilities:
- system start-up
- system shut-down
- system re-configuration
- fault detection, diagnosis, recovery
- opportunity recognition, exploitation
Each scenario will involve one or more tasks (one way to write this down is sketched below).
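One way to pin a scenario and its tasks down before the test is to record them as data: kind, initial state, sequence of events, and the tasks being probed. A minimal sketch; the class and field names are illustrative assumptions, using the navigation example from later in this deck.

# Illustrative sketch of a test-scenario record. Field names are
# assumptions for illustration, not part of the course material.
from dataclasses import dataclass, field

@dataclass
class Scenario:
    name: str
    kind: str              # "typical" or "critical"
    initial_state: str     # representative context set up before the test
    events: list = field(default_factory=list)  # sequence of events
    tasks: list = field(default_factory=list)   # one or more tasks

closure = Scenario(
    name="road closure on the planned route",
    kind="critical",
    initial_state="route active, driver en route, closure ahead",
    events=["closure alert received", "traffic slows"],
    tasks=["fault detection", "system re-configuration: accept a detour"],
)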
Performance Measures
Quantifiable measures:
- speed: time to do the task
- accuracy: was the task done correctly?
- errors made: number and type
- paths taken
Need a benchmark: what level of performance is OK? (A sketch of computing these measures from a session log follows.)
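A minimal sketch of turning a per-task event log into the measures above; the (timestamp_seconds, event_kind) tuple format is an assumption for illustration.

# Sketch: computing performance measures (speed, accuracy, errors)
# from a per-task event log.
def summarize_task(events):
    """events: list of (timestamp_seconds, kind) tuples for one task."""
    times = [t for t, _ in events]
    errors = [kind for _, kind in events if kind.startswith("error")]
    return {
        "time_on_task_s": max(times) - min(times),         # speed
        "completed": any(k == "done" for _, k in events),  # accuracy
        "error_count": len(errors),                        # errors: number
        "error_types": sorted(set(errors)),                # errors: type
    }

print(summarize_task([(0.0, "start"), (12.4, "error:wrong-menu"), (41.9, "done")]))

Compare each summary against the benchmark: flag any task whose time or error count exceeds the level you decided is acceptable.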
Navigation Example (re-visited)
An in-vehicle or PDA-based navigation system with route and travel information:
- What are possible scenarios?
- What are the tasks for each scenario?
- What are appropriate performance measures?
Report 2
- Number of users: 4 to 10, closer to 4
- Number of scenarios: >= 3
- Number of tasks: 5 to 10, within 2 hours
- Profile of users: novices, with training!
Consider:
- Data collection needs: note-taking, audio and/or video-taping, forms
- Pilot test
- Data analysis needs: problem severity vs. prevalence
Severity vs. Prevalence
Severity:
- task completed, but inefficiently
- task partially completed
- task failure
Prevalence:
- all users
- most users
- some users
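To decide what to fix first, one common move (not prescribed by the slides) is to score each problem by severity times prevalence and sort. A minimal sketch, with ordinal weights as illustrative assumptions:

# Sketch: ranking usability problems by severity x prevalence.
# The 1-3 ordinal weights are illustrative assumptions.
SEVERITY = {"inefficient": 1, "partial": 2, "failure": 3}
PREVALENCE = {"some users": 1, "most users": 2, "all users": 3}

problems = [
    ("detour option hard to find", "inefficient", "all users"),
    ("re-route silently fails", "failure", "some users"),
]

def priority(problem):
    _, sev, prev = problem
    return SEVERITY[sev] * PREVALENCE[prev]

for name, sev, prev in sorted(problems, key=priority, reverse=True):
    print(f"{name} (severity: {sev}, prevalence: {prev})")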
Report 2 (cont'd)
Relate findings to your WDA, information requirements and availability table, and elemental, configural, and/or mass data displays …
Where were the issues?
- Wrong information, missing information?
- Wrong measures, missing measures?
- Wrong presentation? (structure and form)
- Wrong organization, missing navigational support?
More Refs
Nielsen, J. (1993). Usability Engineering. Boston: Academic Press.
Rubin, J. (1994). Handbook of Usability Testing. New York: John Wiley & Sons.