
Preparing a User Test
Alfred Kobsa, University of California, Irvine

Specifying Global Test Goals
- (Intended) purpose of product
- Product development status
- Test goals (with priorities)
- User profiles (personae)

Even if you can clearly specify, for the purposes of the user test, what the intended purpose of the product is, once it launches to actual users at scale there are cases where that purpose is poorly communicated. Can someone give examples? (Facebook.) What are some industry scenarios where it is difficult to specify the intended purpose of a product? Understanding where your product is in its developmental stage is also crucial - why? Can you give examples? If you don't understand where your product is, what does it even mean to know its development status? Examples: user reach / penetration / developmental stage; having 2 of 3 intended features implemented (does it make sense to develop the third?), e.g. Chat with Friends; Facebook Live already has commenting.

Specifying Global Test Goals
- (Intended) purpose of product
- Product development status
- Test goals (with priorities)
- User profiles (personae)

Even if you can clearly specify the intended purpose of the product for the user test, once it launches to actual users at scale there are cases where that purpose is poorly communicated. Take a look at the Facebook desktop application.

Specifying Global Test Goals
Take a look at Facebook here: Insights vs. Explore Feed vs. Pages Feed. When the Explore Feed team and the Insights team each came up with their own product, both presumably had very clear intended purposes in mind while running usability tests and interviews about how users viewed their respective features. Once the features rolled out to the desktop app, the purposes became unclear:
- The product application is too dense
- Competing / overlapping feature purposes
What are some industry scenarios where it is difficult to specify the intended purpose of the product?

Specifying Global Test Goals
- (Intended) purpose of product
- Product development status
- Test goals (with priorities)
- User profiles (personae)

Understanding where your product is in its developmental stage is crucial before you actually run the user test. What does it mean to understand where your product is? Is it just knowing that only 3 of the 4 planned features are implemented? "Where the product is at" can include:
- Known UX issues: with what, who, and why
- Hacks: how are users working around these issues?
- User metrics around product use
- Regional penetration of the product
- What else?
The more you can specify about the product development status, the more options you have in designing the user test.

Specifying Global Test Goals
- (Intended) purpose of product
- Product development status
- Test goals (with priorities)
- User profiles (personae)

It is very important to have clear hypotheses, and even explicit assumptions, about what you are trying to test. Why? They are the goals around which you design the user test. What does "(with priorities)" mean? Clear test goals should lead to clear results - but not always. Why? Consider communication with non-researchers and engineers.

Specifying Global Test Goals
- (Intended) purpose of product
- Product development status
- Test goals (with priorities)
- User profiles (personae)

Why are user profiles important? They let you narrow down participant selection for the user test. Why does that matter?

Selecting Tasks
Test subjects should not merely "try out the system for n minutes", but rather carry out selected tasks with the system. One cannot test every possible user task; usability tests must focus on representative and/or important tasks.

1. Make the task realistic
User goal: Browse product offerings and purchase an item.
Poor task: Purchase a pair of orange Nike running shoes.
Better task: Buy a pair of shoes for less than $40.
Source: https://www.nngroup.com/articles/task-scenarios-usability-testing/

Selecting Tasks
2. Make the task actionable
User goal: Find movie and show times.
Poor task: You want to see a movie Sunday afternoon. Go to www.fandango.com and tell me where you'd click next.
Better task: Use www.fandango.com to find a movie you'd be interested in seeing on Sunday afternoon.
Source: https://www.nngroup.com/articles/task-scenarios-usability-testing/

Selecting Tasks
3. Avoid giving clues and describing the steps
User goal: Look up grades.
Poor task: You want to see the results of your midterm exams. Go to the website, sign in, and tell me where you would click to get your transcript.
Better task: Look up the results of your midterm exams.

Task scenarios that include terms used in the interface also bias the users. If you are interested in learning whether people can sign up for the newsletter, and your site has a large button labeled "Sign up for newsletter", you should not phrase the task as "Sign up for this company's weekly newsletter." It is better to use a task such as: "Find a way to get information on upcoming events sent to your email on a regular basis."
Source: https://www.nngroup.com/articles/task-scenarios-usability-testing/

Selecting Tasks
Select tasks that:
- may be fraught with usability problems, as suggested by earlier concerns and usage experience;
- will be frequently carried out by users (20% of the tasks account for 80% of use);
- are mission-critical;
- are performed under time pressure; or
- are new, or have been modified in comparison with a previous version or a competing program.
☞ Brainstorm and then filter tasks using these criteria
☞ Test the comprehensibility of task descriptions
☞ Specify a timeout for each task (possibly do not reveal it to subjects)
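The "brainstorm and then filter" step above can be sketched as a simple scoring pass over a candidate task list. This is only an illustration: the task names and the boolean criterion fields below are invented, and a real team would weight the criteria to match its own priorities.

```python
# Hypothetical sketch of filtering brainstormed tasks against the
# selection criteria above. All task data here is invented.
CRITERIA = ["suspected_problems", "frequent", "mission_critical",
            "time_pressure", "new_or_modified"]

def priority(task):
    """Score a task by how many selection criteria it meets."""
    return sum(task.get(c, False) for c in CRITERIA)

brainstormed = [
    {"name": "checkout", "suspected_problems": True, "frequent": True,
     "mission_critical": True},
    {"name": "edit profile", "new_or_modified": True},
    {"name": "browse help pages"},
]

# Keep the highest-scoring tasks for the usability test.
shortlist = sorted(brainstormed, key=priority, reverse=True)
print([t["name"] for t in shortlist])  # "checkout" ranks first
```

In practice the shortlist would then be cut to however many tasks fit in the session, and each surviving task would get a written description and a timeout.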

Creating Scenarios
Scenarios are created to contextualize user experiments (which in general yields more representative test results). Scenario descriptions should:
- be short;
- be formulated in the words of the user / task domain;
- be unambiguous;
- contain enough information for test subjects to carry out the tasks;
- be directly linked to tasks and concerns.
☛ Users should read the scenario descriptions (and experimenters should possibly read them aloud at the same time)
☛ Scenario descriptions should be tested (in the pilot test or even earlier)
Examples: http://www.usability.gov/how-to-and-tools/methods/scenarios.html

Deciding how to measure usability
Performance measures:
- Time needed to carry out a task
- Error rate
- Task completion rate
- Time spent on "unproductive" activities (navigation, looking up help, recovery after an error)
- Frequency of "unproductive" activities
- Counting keystrokes / mouse clicks
- Etc. (see Dumas & Redish)
Measures of satisfaction:
- Observed(?): frustration / confusion / surprise / satisfaction
- User-provided: user ratings (e.g., SUS, the System Usability Scale); comparisons with a previous version / competitors' software / the current way of doing it; behavioral intentions (use, buy(?), recommend to a friend); free comments (during and after the experiment)
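Several of the measures above reduce to simple arithmetic over the session logs. The sketch below (with invented sample data for one participant) computes a task completion rate, mean time on task, and a SUS score; the SUS formula is the standard one: odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the sum is multiplied by 2.5 to yield a 0–100 score.

```python
def completion_rate(results):
    """Fraction of attempted tasks completed successfully."""
    return sum(r["completed"] for r in results) / len(results)

def mean_time(results):
    """Mean time on task, in seconds, over completed tasks only."""
    times = [r["seconds"] for r in results if r["completed"]]
    return sum(times) / len(times)

def sus_score(responses):
    """System Usability Scale score (0-100) from ten 1-5 Likert answers.
    Odd items are positively worded (response - 1); even items are
    negatively worded (5 - response); the sum is scaled by 2.5."""
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5

# Invented sample data for one participant:
results = [
    {"task": "find showtime", "completed": True,  "seconds": 95},
    {"task": "buy ticket",    "completed": True,  "seconds": 140},
    {"task": "cancel order",  "completed": False, "seconds": 300},
]
print(completion_rate(results))                   # 2 of 3 tasks completed
print(mean_time(results))                         # mean over completed tasks
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # one participant's SUS
```

Note that a single SUS score is only meaningful in aggregate; report the mean (and spread) across participants rather than individual scores.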

Preparing the Test Materials
Legal documents:
- Informed consent form
- Non-disclosure form
- Waiver of liability form
- Permissions form (e.g., for video recordings)
Instruction- and training-related materials:
- Software / PowerPoint slides / video to be shown
- Summary of software functionality
- Write-up of oral instructions
- (Guided) training tasks
Task-related materials:
- Scenario description
- Task descriptions (☛ one task on each page, large font for task/page number)
- Pre-test and post-test questionnaires
Experiment-related materials:
- "Screener" with participant selection criteria
- Experimental time sheet / log book
- To-do list for all experimenters
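The "screener" mentioned above can be as simple as a checklist applied to each recruit. The sketch below is a minimal illustration; every criterion in it (the age range, prior-use requirement, and occupation exclusions) is an invented placeholder to be replaced by the user profiles from your actual test plan.

```python
# Hypothetical screener sketch; all criteria are invented placeholders.
SCREENER = {
    "min_age": 18,
    "max_age": 65,
    "requires_prior_use": True,
    "excluded_occupations": {"usability specialist", "software developer"},
}

def qualifies(p):
    """Check one recruit against every screener criterion."""
    return (SCREENER["min_age"] <= p["age"] <= SCREENER["max_age"]
            and (not SCREENER["requires_prior_use"] or p["has_used_product"])
            and p["occupation"] not in SCREENER["excluded_occupations"])

print(qualifies({"age": 34, "has_used_product": True,
                 "occupation": "teacher"}))  # meets all criteria
```

Excluding usability and software professionals is a common (though not universal) practice, since their behavior is rarely representative of the target users.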

Preparing the testing environment
- Hardware equipment ☛ cater to users' normal equipment; remove all potentially distracting programs
- Sample data ☛ make it look real
- Voice recording ☛ take care of ambient noise (including exceptional noise), direction of the microphone, ...
- Screen recording ☛ mind a possible slowdown of the tested program
- Video recording ☛ take care of the camera angle, blocked views, glare, changing sunlight over the day, ...
- Time taking ☛ avoid races (between participants, or participants against the stopwatch)
- Lab layout (see Courage & Baxter) ☛ participants should not influence each other

Setting up a test team
Typical roles:
- Greeter: welcomes subjects, makes them relaxed, bridges wait time
- Briefer: informs subjects about the study, has them sign the forms
- Instructor / trainer: instructs them about the software
- Test administrator: tells subjects what to do
- Note taker
- Video operator
- Backup technician for emergencies
Many of these roles can be combined in a single person, but do not switch roles over the duration of an experiment, to ensure comparability. The number of team members also depends on the number of parallel / overlapping subjects and on the experimental design; teams of 2-3 are typical. Tests are typically carried out by the UI design team and/or outside usability specialists. Developers, managers, and user representatives should be able to watch (invisible to the subjects, or in the background).

Exercise
Facebook search sucks, and we don't know why; we just know that user metrics are low for search-bar activity. Design a user test that would inform you about the barriers people face in using the Facebook search product.
- You can make assumptions about "where the product is at"
- What are your user test goals? How would you prioritize them?
- Specify the user goal (for the tasks)
- Provide the user task description and steps
- List what you are going to measure, and how that would answer your test goals