User Testing
CS 160, Fall ‘98
Professor James Landay
November 10, 1998

Web Performance Measurements (’97)
- By Keynote Systems
  * averages response time at 40 major sites
- This last week: 10 sec (this week last year: sec.)
- Range for the year: 6-11 sec. (last year: 6-18 sec.)
- Often violates Nielsen’s 10 sec. rule

Web Performance Measurements (’98)
- By Keynote Systems
  * averages response time at 40 major sites
- This last week: 10 sec (this week last year: sec.)
- Range for the year: 6-11 sec. (last year: 6-18 sec.)
- Often violates Nielsen’s 10 sec. rule

Outline
- Review
- Why do user testing?
- Choosing participants
- Designing the test
- Collecting data
- Analyzing the data

Review
- Non-intuitive empirical results
  * “readable” pages were less effective
    + style of links matters
- Hints for improved web design
  * use page tags
  * watch assumptions
    + user’s text size, browser type, & window size
  * make better thumbnails & use image tags
  * give better feedback about links
  * write for the web

Why do User Testing?
- Can’t tell how good or bad a UI is until?
  * people use it!
- Other methods are based on evaluators who?
  * may know too much
  * may not know enough (about tasks, etc.)
- Summary: hard to predict what real users will do

Choosing Participants
- Representative of eventual users in terms of
  * job-specific vocabulary / knowledge
  * tasks
- If you can’t get real users, get an approximation
  * system intended for doctors
    + get medical students
  * system intended for electrical engineers
    + get engineering students
- Use incentives to get participants

Ethical Considerations
- Sometimes tests can be distressing
  * users have left in tears (embarrassed by mistakes)
- You have a responsibility to alleviate this
  * make participation voluntary, with informed consent
  * avoid pressure to participate
  * let them know they can stop at any time [Gomoll]
  * stress that you are testing the system, not them
  * make collected data as anonymous as possible
- Often must get human subjects approval

User Test Proposal
- A report that contains
  * objective
  * description of the system being tested
  * task environment & materials
  * participants
  * methodology
  * tasks
  * test measures
- Get it approved & then reuse it for the final report

Selecting Tasks
- Should reflect what real tasks will be like
- Tasks from analysis & design can be used
  * may need to shorten if
    + they take too long
    + they require background the test user won’t have
- Avoid bending tasks in the direction of what your design best supports
- Don’t choose tasks that are too fragmented
  * e.g., phone-in bank test

Deciding on Data to Collect
- Two types of data
  * process data
    + observations of what users are doing & thinking
  * bottom-line data
    + summary of what happened (time, errors, success…)
    + i.e., the dependent variables
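
To make the two data types concrete, here is a minimal sketch of how bottom-line (dependent-variable) data might be recorded, one row per participant per task. The field names and example values are illustrative assumptions, not from the lecture; process data, by contrast, would be free-form timestamped observer notes.

    # Illustrative bottom-line record: one per participant per task.
    # ("errors" and "completed" need definitions fixed in advance,
    # as a later slide notes.)
    from dataclasses import dataclass

    @dataclass
    class BottomLineRecord:
        participant: str   # anonymized ID, e.g. "P1"
        task_id: str
        seconds: float     # time to complete the task
        errors: int
        completed: bool

    records = [
        BottomLineRecord("P1", "find-flight", 320.0, 2, True),
        BottomLineRecord("P2", "find-flight", 415.5, 5, False),
    ]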

Process Data vs. Bottom-Line Data
- Focus on process data first
  * gives a good overview of where problems are
- Bottom-line data doesn’t tell you where to fix
  * just says: “too slow”, “too many errors”, etc.
- Hard to get reliable bottom-line results
  * need many users for statistical significance

The “Thinking Aloud” Method
- Need to know what users are thinking, not just what they are doing
- Ask users to talk while performing tasks
  * tell us what they are thinking
  * tell us what they are trying to do
  * tell us questions that arise as they work
  * tell us things they read
- Make a recording or take good notes
  * make sure you can tell what they were doing

Thinking Aloud (cont.)
- Prompt the user to keep talking
  * “tell me what you are thinking”
- Only help on things you have pre-decided
  * keep track of anything you do give help on
- Recording
  * use a digital watch/clock
  * take notes, plus if possible
    + record audio and video (or even event logs)
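
One lightweight way to keep notes aligned with a recording is to stamp each typed observation with the elapsed session time, so notes can be lined up with the audio/video afterwards. This is an illustrative sketch, not something from the lecture; the log format and file name are assumptions.

    # Minimal think-aloud note logger (illustrative). Type "quit" to stop.
    import time

    def take_notes(path="session_notes.txt"):
        start = time.monotonic()
        with open(path, "a") as log:
            # Stamp each note with seconds since the session began.
            while (note := input("note> ")) != "quit":
                log.write(f"[{time.monotonic() - start:7.1f}s] {note}\n")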

Using the Test Results
- Summarize the data
  * make a list of all critical incidents (CIs)
    + positive & negative
  * include references back to original data
  * try to judge why each difficulty occurred
- What does the data tell you?
  * did the UI work the way you thought it would?
    + consistent with your cognitive walkthrough?
    + did users take the approaches you expected?
  * is something missing?

Using the Results (cont.)
- Update the task analysis and rethink the design
  * rate the severity & ease of fixing each CI
  * fix both the severe problems & make the easy fixes
- Will thinking aloud give the right answers?
  * not always
  * if you ask a question, people will always give an answer, even if it has nothing to do with the facts
    + panty hose example
  * try to avoid specific questions
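
A simple way to act on the “rate severity & ease of fixing” advice is to score each critical incident and sort. The sketch below is illustrative only; the 1-3 scales and the example incidents are assumptions, not from the lecture.

    # Hypothetical critical-incident triage: score each CI for severity and
    # ease of fixing (3 = most severe / easiest here), then handle the most
    # severe problems and the cheapest fixes first.
    incidents = [
        ("lost work: missed the Save button",  3, 1),
        ("confusing icon label",               1, 3),
        ("date field rejects common formats",  2, 3),
    ]

    for desc, severity, ease in sorted(incidents,
                                       key=lambda ci: (ci[1], ci[2]),
                                       reverse=True):
        print(f"severity={severity} ease={ease}: {desc}")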

Measuring Bottom-Line Usability
- Situations in which numbers are useful
  * time requirements for task completion
  * successful task completion
  * comparing two designs on speed or # of errors
- Do not combine with thinking aloud
  * talking can affect speed & accuracy (neg. & pos.)
- Time is easy to record
- Errors & successful completion are harder
  * define in advance what these mean

Analyzing the Numbers
- Example: trying to get task time <= 30 min.
  * test gives: 20, 15, 40, 90, 10, 5
  * mean (average) = 30
  * median (middle) = 17.5
  * looks good!
  * wrong answer: not certain of anything
- Factors contributing to our uncertainty
  * small number of test users (n = 6)
  * results are very variable (standard deviation = 32)
    + std. dev. measures dispersal from the mean
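
The slide’s numbers are easy to reproduce with Python’s statistics module; a quick check (a sketch, not part of the lecture):

    # Reproduce the summary statistics for the six task times (minutes).
    import statistics

    times = [20, 15, 40, 90, 10, 5]
    print(statistics.mean(times))    # 30
    print(statistics.median(times))  # 17.5 (average of the middle pair 15, 20)
    print(statistics.stdev(times))   # ~31.8, the "32" on the slide (sample std. dev.)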

Analyzing the Numbers (cont.)
- This is what statistics is for
  * get The Cartoon Guide to Statistics
    + see the class web page readings list for the full cite
- Crank through the procedures and you find
  * 95% certain that the typical value is between 5 & 55
- Usability test data is quite variable
  * need lots to get good estimates of typical values
  * 4 times as many tests will only narrow the range by 2x
    + breadth of the range depends on the sqrt of the # of test users
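
The “between 5 & 55” range is a 95% confidence interval for the mean. A sketch of the computation, using the normal approximation (z = 1.96), which appears to match the slide’s numbers; a t-based interval for n = 6 would be somewhat wider:

    # 95% confidence interval for the mean task time (normal approximation).
    import math
    import statistics

    times = [20, 15, 40, 90, 10, 5]
    mean = statistics.mean(times)
    sem = statistics.stdev(times) / math.sqrt(len(times))  # standard error
    half = 1.96 * sem
    print(mean - half, mean + half)  # roughly 4.6 to 55.4: "between 5 & 55"

    # The sqrt(n) in the denominator is why 4x as many test users
    # only halves the width of the interval.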

Measuring User Preference
- How much users like or dislike the system
  * can ask them to rate it on a scale of 1 to 10
  * or have them choose among statements
    + “best UI I’ve ever…”, “better than average”…
  * hard to be sure what the data will mean
    + novelty of the UI, feelings, not a realistic setting, etc.
- If many give you low ratings -> trouble
- Can get some useful data by asking
  * what they liked, disliked, where they had trouble, best part, worst part, etc. (redundant questions)
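
For ordinal data like 1-to-10 ratings, the median is a safer “typical” value than the mean, and counting low ratings flags the “many low ratings -> trouble” case directly. A tiny illustrative sketch with made-up ratings:

    # Hypothetical 1-10 preference ratings from six participants.
    import statistics

    ratings = [7, 8, 3, 6, 7, 2]
    print(statistics.median(ratings))         # 6.5, the typical rating
    print(sum(1 for r in ratings if r <= 4))  # 2 low ratings -> investigate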

Comparing Two Alternatives
- Between-groups experiment
  * two groups of test users
  * each group uses only 1 of the systems
- Within-groups experiment
  * one group of test users
    + each person uses both systems
    + can’t use the same tasks (learning)
  * best for low-level interaction techniques
- See if the differences are statistically significant
  * assumes normal distribution & same std. dev.
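
One standard way to run the significance check this slide mentions is a t-test: independent-samples for a between-groups design, paired for within-groups. A sketch using SciPy; the task times below are made up for illustration:

    # Illustrative significance checks with made-up task times (seconds).
    from scipy import stats

    design_a = [320, 415, 250, 380, 290, 305]
    design_b = [280, 360, 240, 310, 275, 260]

    # Between-groups: independent samples, equal variances assumed
    # (matching the slide's "same std. dev." assumption).
    print(stats.ttest_ind(design_a, design_b))

    # Within-groups: the same people used both systems, so pair the samples.
    print(stats.ttest_rel(design_a, design_b))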

Experimental Details
- Order of tasks
  * choose one simple order (simple -> complex)
    + unless doing a within-groups experiment
- Training
  * depends on how the real system will be used
- What if someone doesn’t finish?
  * assign a very large time & a large # of errors
- Pilot study
  * helps you fix problems with the study
  * do 2: first with colleagues, then with real users

Instructions to Participants [Gomoll]
- Describe the purpose of the evaluation
  * “I’m testing the product; I’m not testing you”
- Tell them they can quit at any time
- Demonstrate the equipment
- Explain how to think aloud
- Explain that you will not provide help
- Describe the task
  * give written instructions

Details (cont.)
- Keeping variability down
  * recruit test users with similar backgrounds
  * brief users to bring them to a common level
  * perform the test the same way every time
    + don’t help some more than others (plan in advance)
  * make instructions clear
- Debriefing test users
  * they often don’t remember, so show video segments
  * ask for comments on specific features
    + show them the screen (online or on paper)

Summary
- User testing is important, but takes time/effort
- Early testing can be done on mock-ups (low-fi)
- Use real tasks & representative participants
- Be ethical & treat your participants well
- We want to know what people are doing & why
  * i.e., collect process data
- Using bottom-line data requires more users to get statistically reliable results

User Testing Assignment
- On the web this afternoon
- User testing with 3 subjects
  * pay attention
  * write up a report
- Due next Tue., 11/17

Next Time
- Output Models
  * read Olsen Ch. 3
- Hand back HE assignments during next class