1 Usability Studies
2 Evaluate Usability
Run a usability study to judge how an interface facilitates tasks with respect to the aspects of usability mentioned earlier
3 Usability Study Process
Define tasks (and their importance)
Develop questionnaires
Administer the test
–Pre-test
–Tasks with a selected protocol (discussed in later slides)
–Post-test
–Interview
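To make the process concrete, here is a minimal sketch in Python of how a study plan could be recorded. Every field name and example value is an illustrative assumption, not anything prescribed by these slides.

# Minimal sketch of recording a usability-study plan; every field name and
# example value here is an illustrative assumption, not part of the slides.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Task:
    description: str   # phrased as a task, not a question (see the slides that follow)
    importance: int    # e.g. 1 (low) to 5 (high)


@dataclass
class StudyPlan:
    pre_test_questions: List[str] = field(default_factory=list)
    tasks: List[Task] = field(default_factory=list)
    protocol: str = "think-aloud"  # or "silent observer", "constructive interaction"
    post_test_questions: List[str] = field(default_factory=list)


plan = StudyPlan(
    pre_test_questions=["Age?", "How often do you use travel websites?"],
    tasks=[Task("Create an itinerary with flights from BWI to Las Vegas "
                "departing December 30th and returning January 5th.", importance=5)],
    post_test_questions=["Overall I found this website (difficult) ... (very easy)"],
)
print(len(plan.tasks), "task(s) under the", plan.protocol, "protocol")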
4 Selecting Tasks
What are the most important things a user would do with this interface?
Present it as a task, not a question
–Good: Create an itinerary with flights from BWI to Las Vegas departing December 30th and returning January 5th.
–Bad: How many flights are available from BWI to Las Vegas departing December 30th and returning January 5th?
–Users come to plan itineraries, not to count them
There are occasional exceptions to this rule, but try to follow it.
5 Selecting Tasks
Be specific!
–Good: Find the calories, vitamins, and minerals in a 1-cup serving of broccoli.
–Bad: Find nutrition information
–Users shouldn't have to be creative to figure out what you want them to do
6 Selecting Tasks
Don't give instructions
–Good: Using Google Maps, find a street view of the Empire State Building
–Bad: Go to maps.google.com. In the search box, enter 350 5th Ave, New York, NY and click Search Maps. Then, using the zoom scale on the left of the map, click the person icon to see a street view
–You aren't testing anything if you give step-by-step instructions
7 Selecting Tasks
Don't be vague or assign tiny, insignificant tasks
–Good: Using Google Maps, find a close-up view that shows just the block with the Empire State Building
–Bad: Zoom in on a Google map
–Users don't come to the site to zoom. Zooming is something that needs to be done as part of a real task.
8 Selecting Tasks
Choose representative tasks that reflect the most important things a user would do with the interface.
–Good: For Google, tasks could include a web search, a map search with directions, changing the language, conducting an advanced search (with the options specified), etc.
–Bad: Do 5 basic web searches for different things
–Repeated tasks do not provide new insights
9 Pre-Test Questionnaires
Learn any relevant background about the subjects
–Age, gender, education level, experience with the web, experience with this type of website, experience with this site in particular
–Perhaps more specific questions based on the site, e.g. color blindness, or whether the user has children
10 Evaluation
Users are given a list of tasks and asked to perform each task
Interaction with the user is governed by different protocols
11 Examples of Evaluations
Silent Observer
–Evaluator observes users interacting with the system: in the lab, the user is asked to complete pre-determined tasks; in the field, the user goes through normal duties
–Basically no interaction between the evaluator and the user
–Validity depends on how controlled/contrived the situation is
12 Examples of Evaluations
Think-aloud protocol
–Users speak their thoughts while doing the task
–Gives insight into what the user is thinking
–Downsides: may alter the way users do the task; unnatural and potentially distracting
13 Examples of Evaluations
Constructive Interaction
–Two users work together
–They can ask each other questions
–Downside: users may not work together in real life
14 Interview
Ask users to give you feedback
Easier for the user than writing it down
They will tell you things you never thought to ask
15 Reporting
After the evaluation, report your results
Summarize the experiences of the users
Emphasize your insights with specific examples or quotes
Offer suggestions for improvement for tasks that were difficult to perform
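When summarizing the experiences of users, quantitative measures such as completion rates and task times are often reported alongside quotes. A minimal sketch in Python, assuming per-task observations were logged (the data below is invented for illustration):

# Sketch of summarizing study results; the observations are invented example data.
from statistics import mean

# One record per (participant, task): whether it was completed and time on task.
observations = [
    {"participant": "P1", "task": "find street view", "completed": True,  "seconds": 95},
    {"participant": "P2", "task": "find street view", "completed": False, "seconds": 240},
    {"participant": "P3", "task": "find street view", "completed": True,  "seconds": 120},
]

successes = [o for o in observations if o["completed"]]
print(f"Completion rate: {len(successes) / len(observations):.0%}")
print(f"Mean time (successful attempts): {mean(o['seconds'] for o in successes):.0f} s")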
16 Post-Test Questionnaire
Have users provide feedback on the interface
–e.g. Overall, I found this interface/website: (difficult) ... (very easy)
–e.g. Finding directions on a map was: (frustrating) ... (enjoyable)
Can rate multiple features for each question
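If the post-test ratings are collected on a numeric scale, they can be tabulated per feature. The sketch below assumes a 1-7 scale between the two anchors (the slides do not specify the scale) and uses made-up responses.

# Sketch of tabulating post-test ratings; assumes a 1-7 scale
# (1 = difficult/frustrating, 7 = very easy/enjoyable) and uses made-up data.
from statistics import mean, stdev

ratings = {
    "overall ease of use": [5, 6, 4, 6, 5],
    "finding directions on a map": [2, 3, 2, 4, 3],
}

for feature, scores in ratings.items():
    print(f"{feature}: mean {mean(scores):.1f}, sd {stdev(scores):.1f}, n={len(scores)}")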