CS 160: Lecture 16
Professor John Canny, Spring 2003

Outline
More on user testing:
  Choosing participants
  Designing the test
Basics of quantitative methods:
  Collecting data
  Analyzing the data

Why do User Testing?
You can't tell how good or bad a UI is until people use it!
Other methods rely on evaluators, who may know too much, or may not know enough (about the tasks, the users, etc.).
Summary: it is hard to predict what real users will do (Jakob Nielsen).

Choosing Participants
Participants should be representative of eventual users in terms of:
  job-specific vocabulary / knowledge
  tasks
If you can't get real users, get the closest approximation:
  system intended for doctors: get medical students
  system intended for electrical engineers: get engineering students
Use incentives to recruit participants.

Incentives
Paying every subject a fixed amount can be too expensive.
A small chance at a large reward is a good incentive, e.g. each subject gets a 1/n chance to receive an MP3 player.
Offer to tell participants the results of the study.
Software companies can offer free software.
Some courses require participation as a research subject.

Ethical Considerations
Tests can sometimes be distressing; users have left in tears (embarrassed by their mistakes).
You have a responsibility to alleviate this:
  make participation voluntary, with informed consent
  avoid pressure to participate
  make clear that participation will not affect their job status either way
  let them know they can stop at any time
  stress that you are testing the system, not them
  make collected data as anonymous as possible
Get human-subjects approval if needed, typically when results are going to be published.

User Test Proposal
A report that contains:
  objective
  description of the system being tested
  task environment & materials
  participants
  methodology
  tasks
  test measures
Get it approved, then reuse it for the final report.

Qualitative vs. Quantitative Studies
Qualitative methods are what we've been doing so far:
  Contextual inquiry: trying to understand users' tasks and their conceptual model.
  Usability studies: looking for critical incidents in a user interface.
In general, we use qualitative methods to understand what's going on, look for problems, or get a rough idea of the usability of an interface.

Qualitative vs. Quantitative Studies
Quantitative methods are used to reliably measure something; this requires us to know how to measure it.
Examples:
  Time to complete a task.
  Average number of errors on a task.
  Users' ratings of an interface*: ease of use, elegance, performance, robustness, speed, ...
* You could argue that users' perception of speed, error rates, etc. is more important than the actual values.

Quantitative Methods
Very often we want to compare values for two different designs (which is faster?). This requires some statistical methods.
We begin by defining the quantities to measure: variables.

Variable Types
Independent variables: the ones you control
  Aspects of the interface design
  Characteristics of the testers
  Discrete: A, B, or C
  Continuous: time between clicks for a double-click
Dependent variables: the ones you measure
  Time to complete tasks
  Number of errors

Deciding on Data to Collect
Two types of data:
  Process data: observations of what users are doing & thinking.
  Bottom-line data: a summary of what happened (time, errors, success, ...), i.e., the dependent variables.

Process Data vs. Bottom-Line Data
Focus on process data first: it gives a good overview of where the problems are.
Bottom-line data doesn't tell you what to fix; it just says "too slow", "too many errors", etc.
It is hard to get reliable bottom-line results: you need many users for statistical significance.

The "Thinking Aloud" Method
We need to know what users are thinking, not just what they are doing.
Ask users to talk while performing tasks:
  tell us what they are thinking
  tell us what they are trying to do
  tell us questions that arise as they work
  tell us things they read
Make a recording or take good notes; make sure you can tell what they were doing.

Thinking Aloud (cont.)
Prompt the user to keep talking: "tell me what you are thinking".
Only help on things you have pre-decided; keep track of anything you do give help on.
Recording: use a digital watch/clock and take notes, plus, if possible, record audio and video (or even event logs).

Some Statistics
Variables X & Y, and a relation (hypothesis), e.g. X > Y.
We would often like to know whether the relation is true, e.g.:
  X = time taken by novice users
  Y = time taken by users with some training
To find out whether the relation is true, we run experiments to get lots of x's and y's (observations).
Suppose avg(x) > avg(y), or that most of the x's are larger than all of the y's. What does that prove? (A sketch of such a comparison follows.)
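As a concrete illustration (not part of the original slides), the sketch below compares two such sets of observations with an independent-samples t-test in Python; the timing numbers are invented purely for illustration.

```python
# Minimal sketch (made-up data): comparing novice vs. trained task times.
from scipy import stats

x = [48.2, 55.1, 61.3, 50.7, 58.9, 53.4, 60.2, 49.5]  # novice users (seconds, illustrative)
y = [41.8, 44.6, 39.9, 47.2, 42.3, 45.1, 40.7, 43.8]  # trained users (seconds, illustrative)

# Independent two-sample t-test of the hypothesis "novices are slower".
t, p_two_sided = stats.ttest_ind(x, y)
p_one_sided = p_two_sided / 2 if t > 0 else 1 - p_two_sided / 2

print(f"t = {t:.2f}, one-sided p = {p_one_sided:.4f}")
# A small p-value means avg(x) > avg(y) is unlikely to be due to chance alone.
```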

Significance
The significance or p-value of an outcome is the probability that it happens by chance if the relation does not hold.
E.g. p = 0.05 means that there is a 1/20 chance of the observation occurring if the hypothesis is false.
So the smaller the p-value, the greater the significance.

Significance (cont.)
For instance, p = 0.001 means there is a 1/1000 chance that the observation would happen if the hypothesis were false, so the hypothesis is almost surely true.
Significance increases with the number of trials (see the simulation sketch below).
CAVEAT: You have to make assumptions about the probability distributions to get good p-values.
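A small simulation can illustrate the point about trials (an added sketch with assumed numbers, not from the slides): when design B really is 20% slower and times have a standard deviation of about 30% of the mean, the p-value of a t-test tends to shrink as the number of trials grows.

```python
# Minimal sketch (assumed numbers): more trials -> smaller p-values, on average.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mean_a, mean_b, sd = 10.0, 12.0, 3.0   # design B is 20% slower; sd is 30% of mean

for n in (5, 10, 20, 40, 80):
    a = rng.normal(mean_a, sd, n)      # simulated task times on design A
    b = rng.normal(mean_b, sd, n)      # simulated task times on design B
    t, p = stats.ttest_ind(b, a)
    print(f"n = {n:3d}  p = {p:.4f}")
```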

Normal Distributions
Many variables have a normal distribution.
(Figure: the density at left, the cumulative probability at right.)
Normal distributions are completely characterized by their mean and variance (the mean squared deviation from the mean).

Normal Distributions (cont.)
The difference between two independent normal variables is also a normal variable, whose variance is the sum of the two variances.
Asserting that X > Y is the same as asserting (X - Y) > 0, whose probability we can read off the cumulative curve (see the sketch below).
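For example (an added sketch; the means and standard deviations are illustrative assumptions, not from the slides), P(X > Y) can be computed directly from the normal CDF of the difference:

```python
# Minimal sketch: P(X > Y) via the difference of two independent normal variables.
# D = X - Y is normal with mean mu_x - mu_y and variance var_x + var_y,
# so P(X > Y) = P(D > 0) = 1 - CDF_D(0).
from math import sqrt
from scipy.stats import norm

mu_x, sd_x = 55.0, 9.0    # e.g. novice times (illustrative numbers)
mu_y, sd_y = 45.0, 8.0    # e.g. trained-user times (illustrative numbers)

mu_d = mu_x - mu_y
sd_d = sqrt(sd_x**2 + sd_y**2)

p_x_greater = 1 - norm.cdf(0, loc=mu_d, scale=sd_d)
print(f"P(X > Y) = {p_x_greater:.3f}")
```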

Analyzing the Numbers
Example: prove that task 1 is faster on design A than on design B.
Suppose the average time for design B is 20% higher than for A.
Suppose subjects' times have a standard deviation of 30% of their mean time (typical).
How many subjects are needed?
  At least 13 subjects for significance p = 0.01.
  At least 22 subjects for significance p = 0.001.
(This assumes subjects use both designs; see the sketch below.)
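The 13 and 22 can be reproduced with a simple one-tailed z-test back-of-the-envelope calculation; the sketch below is an assumption about how such numbers arise, not a derivation taken from the slides.

```python
# Minimal sketch (assumed one-tailed z-test, within-subjects design): each
# subject's A-vs-B difference is taken to have mean 20% of the task time and
# standard deviation 30% of the task time.
from math import ceil
from scipy.stats import norm

effect = 0.20   # design B is 20% slower than design A
sd     = 0.30   # per-subject standard deviation, as a fraction of the mean

def subjects_needed(p):
    z = norm.ppf(1 - p)              # one-tailed critical value
    return ceil((z * sd / effect) ** 2)

print(subjects_needed(0.01))    # -> 13
print(subjects_needed(0.001))   # -> 22
```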

Analyzing the Numbers (cont.)
Even with a strong (20%) difference, you need lots of subjects to prove it.
Usability test data is quite variable:
  4 times as many tests only narrows the range by 2x
  the breadth of the range depends on the square root of the number of test users
This is where online methods become useful: it is easy to test with large numbers of users (e.g., Landay's NetRaker system).

Lies, Damn Lies, and Statistics...
A common mistake (made by famous HCI researchers*): increasing n (the number of trials) by running each subject several times.
No! The analysis only works when trials are independent. All the trials for one subject are dependent, because that subject may be faster, slower, or less error-prone than others.
* Making this error will not help you become a famous HCI researcher.

Statistics with Care
What you can do to get better significance:
  Run each subject several times and compute the average for each subject.
  Run the analysis as usual on the subjects' average times, with n = the number of subjects.
This decreases the per-subject variance while keeping the data points independent. (See the sketch below.)
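A minimal sketch of this procedure (with made-up timing data) might look as follows: average each subject's trials, then run a paired t-test with n equal to the number of subjects.

```python
# Minimal sketch (made-up data): average each subject's repeated trials first,
# then analyze with n = number of subjects (paired t-test on per-subject means).
import numpy as np
from scipy import stats

# trials[subject][trial]: several repeated task times per subject (illustrative)
trials_a = np.array([[10.2, 9.8, 10.5], [12.1, 11.7, 12.4], [9.3, 9.9, 9.5], [11.0, 10.6, 11.2]])
trials_b = np.array([[12.0, 12.6, 11.8], [13.9, 14.3, 13.5], [11.2, 11.0, 11.6], [13.1, 12.7, 13.4]])

per_subject_a = trials_a.mean(axis=1)   # one number per subject
per_subject_b = trials_b.mean(axis=1)

t, p = stats.ttest_rel(per_subject_b, per_subject_a)   # n = 4 subjects, not 12 trials
print(f"t = {t:.2f}, p = {p:.4f}")
```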

Measuring User Preference
How much users like or dislike the system:
  can ask them to rate it on a scale of 1 to 10
  or have them choose among statements: "best UI I've ever...", "better than average", ...
  hard to be sure what the data will mean (novelty of the UI, feelings, unrealistic setting, etc.)
If many users give you low ratings, you are in trouble.
You can get some useful data by asking what they liked, disliked, where they had trouble, the best part, the worst part, etc. (redundant questions).

Using Subjects
Between-subjects experiment:
  Two groups of test users
  Each group uses only one of the systems
Within-subjects experiment:
  One group of test users
  Each person uses both systems

Between Subjects
Two groups of testers; each group uses one system.
Advantages:
  Users only have to use one system (practical).
  No learning effects.
Disadvantages:
  Per-user performance differences are confounded with system differences.
  Much harder to get significant results (many more subjects needed).
  Harder even to predict how many subjects will be needed (depends on the subjects).

Within Subjects
One group of testers who use both systems.
Advantages:
  Much more significance for a given number of test subjects.
Disadvantages:
  Users have to use both systems (two sessions).
  Order and learning effects (these can be minimized by the experiment design, e.g. counterbalancing the order of the systems).

Example
Same experiment as before: system B is 20% slower than A, and subjects have a 30% std. dev. in their times.
Within subjects: need 13 subjects for significance p = 0.01.
Between subjects: typically require 52 subjects for significance p = 0.01, but depending on the subjects we may get lower or higher significance.
(A rough sanity check of these numbers appears below.)
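Under the same back-of-envelope z-test assumptions as before (this sketch is an added illustration, not from the slides), the between-subjects design needs roughly four times the within-subjects sample, because the variance of the difference of two independent group means is 2*sigma^2/n per group; the result of about 50 is in line with the 4 x 13 = 52 rule of thumb quoted on the slide.

```python
# Minimal sketch (same assumed one-tailed z-test as before): between-subjects
# needs roughly 4x the within-subjects sample size.
from math import ceil
from scipy.stats import norm

effect, sd, p = 0.20, 0.30, 0.01
z = norm.ppf(1 - p)

within    = ceil((z * sd / effect) ** 2)        # one paired sample
per_group = ceil(2 * (z * sd / effect) ** 2)    # each of two independent groups

print(within)           # -> 13
print(2 * per_group)    # -> 50, roughly the 4 x 13 = 52 rule of thumb
```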

Experimental Details
Order of tasks: choose one simple order (simple to complex), unless doing a within-groups experiment.
Training: depends on how the real system will be used.
What if someone doesn't finish? Assign a very large time and a large number of errors.
Pilot study: helps you fix problems with the study; do two, first with colleagues, then with real users.

Reporting the Results
Report what you did & what happened.
Images & graphs help people get it!

Summary
Use real tasks & representative participants.
Be ethical & treat your participants well.
We want to know what people are doing & why, i.e., collect process data.
Using bottom-line data requires more users to get statistically reliable results.