CS 160: Lecture 16
Professor John Canny, Fall 2004

Outline
- Basics of quantitative methods
- Random variables, probabilities, distributions
- Review of statistics
- Collecting data
- Analyzing the data

Qualitative vs. Quantitative Studies
Qualitative studies are what we've been doing so far:
- Contextual Inquiry: trying to understand users' tasks and their conceptual model.
- Usability Studies: looking for critical incidents in a user interface.
In general, we use qualitative methods to understand what's going on, look for problems, or get a rough idea of the usability of an interface.

Qualitative vs. Quantitative Studies
Quantitative studies are used to reliably measure something, which requires us to know how to measure it.
Examples:
- Time to complete a task.
- Average number of errors on a task.
- Users' ratings of an interface*: ease of use, elegance, performance, robustness, speed, …
* You could argue that users' perception of speed, error rates, etc. is more important than their actual values.

Quantitative Methods
- Very often we want to compare values for two different designs (which is faster?).
- This requires some statistical methods.
- We begin by defining some quantities to measure: variables.

Random variables
- Random variables take on different values according to a probability distribution.
- E.g. X ∈ {1, 2, 3} is a discrete random variable.
- To characterize the variable, we need to define the probabilities:
  Pr[X=1] = Pr[X=2] = ¼, Pr[X=3] = ½

Random variables
Given Pr[X=1] = Pr[X=2] = ¼ and Pr[X=3] = ½, we can also represent the distribution with a graph. (Figure: a bar chart over the values 1, 2, 3 with bar heights ¼, ¼, and ½.)
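
As a small illustration (a numpy sketch, not part of the original slides), we can sample this distribution and check that the empirical frequencies approach the defined probabilities:

```python
import numpy as np

rng = np.random.default_rng(0)

values = [1, 2, 3]
probs = [0.25, 0.25, 0.5]   # Pr[X=1], Pr[X=2], Pr[X=3]

# Draw many independent samples of X.
samples = rng.choice(values, size=100_000, p=probs)

# The empirical frequencies should be close to 1/4, 1/4, 1/2.
for v, p in zip(values, probs):
    print(v, round((samples == v).mean(), 3), "expected", p)
```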

Continuous Random variables
- Some random variables take on continuous values, e.g. Y ∈ [-1, 1].
- The probability must be defined by a probability density function (pdf), e.g. p(Y) = ¾ (1 − Y²).
- Note that the area under the curve is the total probability, which is 1.
(Figure: the pdf is a parabola that peaks at ¾ at Y = 0 and falls to 0 at Y = ±1.)
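
As a quick check that this example pdf really integrates to 1:

$$\int_{-1}^{1} \tfrac{3}{4}\,(1 - y^2)\,dy \;=\; \tfrac{3}{4}\Big[\,y - \tfrac{y^3}{3}\,\Big]_{-1}^{1} \;=\; \tfrac{3}{4}\cdot\tfrac{4}{3} \;=\; 1.$$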

Continuous Random variables
The area between two values a and b gives the probability that the value of the variable lies in that range, i.e.
Pr[a < Y < b] = ∫_a^b p(Y) dY
(Figure: the example pdf with the area between a and b shaded.)

Meaning of the distribution
The limit of the area, as the width of the range [a, b] goes to zero, gives the value of p(Y):
Pr[a < Y < a + dY] ≈ p(a) dY for small dY.
(Figure: the example pdf with a narrow strip at Y = a shaded.)

CDF: Cumulative Distribution
- The CDF is the area under the distribution from −∞ up to some value v: C(v) = Pr[Y < v] = ∫_{−∞}^{v} p(Y) dY.
- So C(−∞) = 0 and C(∞) = 1.
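
For the example density p(Y) = ¾ (1 − Y²) on [−1, 1], the CDF works out to:

$$C(v) \;=\; \int_{-1}^{v} \tfrac{3}{4}\,(1 - y^2)\,dy \;=\; \tfrac{3}{4}\Big(v - \tfrac{v^3}{3} + \tfrac{2}{3}\Big), \qquad C(-1) = 0,\; C(1) = 1.$$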

Mean and Variance
- The mean is the expected value of the variable. It's roughly the average value of the variable over many trials.
- Mean = E[Y] = ∫ Y p(Y) dY
- In the example shown on the slide, E[Y] = ½.

Variance
- Variance is the expected value of the squared difference from the mean. It's roughly the squared "width" of the distribution.
- Var[Y] = E[(Y − E[Y])²] = ∫ (Y − E[Y])² p(Y) dY
- The standard deviation is the square root of the variance.
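
As a worked example using the discrete distribution from the earlier slides (Pr[X=1] = Pr[X=2] = ¼, Pr[X=3] = ½):

$$\mathrm{E}[X] = 1\cdot\tfrac14 + 2\cdot\tfrac14 + 3\cdot\tfrac12 = \tfrac94, \qquad
\mathrm{Var}[X] = \big(1-\tfrac94\big)^2\tfrac14 + \big(2-\tfrac94\big)^2\tfrac14 + \big(3-\tfrac94\big)^2\tfrac12 = \tfrac{11}{16},$$

so Std[X] = √(11/16) ≈ 0.83.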

Mean and Variance
What are the mean and variance of the distribution shown on the slide? (Figure: a bar chart over the values 2, 3, 4, with probability ticks at ¼ and ½.)

Independent trials
For independent trials, both the means and the variances add, i.e. for random variables X and Y:
E[X+Y] = E[X] + E[Y]
Var[X+Y] = Var[X] + Var[Y]

Identical trials
For independent trials with the same mean and variance:
E[X1 + … + Xn] = n E[X]
Var[X1 + … + Xn] = n Var[X]
Std[X1 + … + Xn] = √n Std[X]

Identical trials
As the number of trials increases, the ratio of the standard deviation to the mean decreases, i.e. the distribution narrows in a relative sense.
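
To see why, divide the previous slide's results:

$$\frac{\mathrm{Std}[X_1 + \cdots + X_n]}{\mathrm{E}[X_1 + \cdots + X_n]}
= \frac{\sqrt{n}\,\mathrm{Std}[X]}{n\,\mathrm{E}[X]}
= \frac{1}{\sqrt{n}}\cdot\frac{\mathrm{Std}[X]}{\mathrm{E}[X]},$$

which shrinks like 1/√n.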

Variable types
Independent variables: the ones you control.
- Aspects of the interface design.
- Characteristics of the testers.
- Discrete (A, B, or C) or continuous (e.g. time between clicks for a double-click).
Dependent variables: the ones you measure.
- Time to complete tasks.
- Number of errors.

Deciding on Data to Collect
Two types of data:
- Process data: observations of what users are doing and thinking.
- Bottom-line data: a summary of what happened (time, errors, success, …), i.e., the dependent variables.

Process Data vs. Bottom-Line Data
- Focus on process data first: it gives a good overview of where the problems are.
- Bottom-line data doesn't tell you where to fix things; it just says "too slow", "too many errors", etc.
- It is hard to get reliable bottom-line results: you need many users for statistical significance.

Break

Some statistics
- Variables X & Y, and a relation (hypothesis), e.g. X > Y.
- We would often like to know if a relation is true, e.g. X = time taken by novice users, Y = time taken by users with some training.
- To find out if the relation is true, we do experiments to get lots of x's and y's (observations).
- Suppose avg(x) > avg(y), or that most of the x's are larger than all of the y's. What does that prove?

Significance
- The significance or p-value of an outcome is the probability that it happens by chance if the relation does not hold.
- E.g. p = 0.05 means that there is a 1/20 chance that the observation happens if the hypothesis is false.
- So the smaller the p-value, the greater the significance.

Significance
- For instance, p = 0.001 means there is a 1/1000 chance that the observation would happen if the hypothesis is false. So the hypothesis is almost surely true.
- Significance increases with the number of trials.
- CAVEAT: you have to make assumptions about the probability distributions to get good p-values. There is always an implied model of user performance.

Normal distributions
- Many variables have a Normal distribution (pdf).
- (Figure: at left is the density; at right is the cumulative probability.)
- Normal distributions are completely characterized by their mean and variance (mean squared deviation from the mean).
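
For reference, the Normal density with mean μ and standard deviation σ is:

$$p(x) \;=\; \frac{1}{\sigma\sqrt{2\pi}}\,\exp\!\left(-\,\frac{(x-\mu)^2}{2\sigma^2}\right).$$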

Normal distributions
At one standard deviation away from the mean, the Normal density falls to about 60% of its peak value. (Figure: the density with one standard deviation marked on either side of the mean.)

T-test
- The T-test asks for the probability that E[X] > E[Y] is false.
- I.e. the null hypothesis for the T-test is that E[X] = E[Y]. What is the probability of the observations, given that hypothesis?

T-test
We actually ask for the probability of seeing sample means at least as different as the observed ones, under the hypothesis that E[X] = E[Y]. (Figure: the overlapping distributions of X and Y.)
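
A minimal sketch of running such a test in Python with scipy (the task times below are made-up placeholder data, not from the lecture):

```python
from scipy import stats

# Hypothetical task-completion times (seconds) from two separate groups of users.
times_design_a = [42.1, 38.5, 45.0, 40.2, 39.8, 44.3, 41.7, 43.0]
times_design_b = [48.9, 52.3, 47.1, 50.6, 49.4, 53.2, 46.8, 51.0]

# Two-sample t-test: the null hypothesis is E[A] = E[B].
t_stat, p_value = stats.ttest_ind(times_design_a, times_design_b)

# A small p-value means sample means this far apart would rarely occur by chance
# if the two designs really had the same expected time.
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

(scipy.stats.ttest_rel is the paired variant used when the same people try both designs, as in the within-subjects case later in the lecture.)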

Analyzing the Numbers
- Example: prove that task 1 is faster on design A than design B.
- Suppose the average time for design B is 20% higher than for A.
- Suppose subjects' times in the study have a standard deviation which is 30% of their mean time (typical).
- How many subjects are needed?

Analyzing the Numbers
- Example: prove that task 1 is faster on design A than design B.
- Suppose the average time for design B is 20% higher than for A, and subjects' times have a standard deviation which is 30% of their mean time (typical).
- How many subjects are needed?
  - At least 13 subjects for significance p = 0.01.
  - At least 22 subjects for significance p = 0.001.
  - (Assumes subjects use both designs.)
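
One way to sketch this kind of sample-size estimate in Python is a power calculation with statsmodels; the choice of power (0.8) and the effect-size convention here are assumptions, so the result will not necessarily reproduce the slide's 13 and 22 exactly:

```python
from statsmodels.stats.power import TTestPower

# Effect size: a 20% difference in means against a 30% per-subject std. deviation.
effect_size = 0.20 / 0.30

# Paired (within-subjects) t-test power analysis: each subject uses both designs.
analysis = TTestPower()
n = analysis.solve_power(effect_size=effect_size, alpha=0.01, power=0.8,
                         alternative="larger")
print(f"estimated subjects needed: {n:.1f}")
```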

Analyzing the Numbers (cont.)
- I.e. even with a strong (20%) difference, you need lots of subjects to prove it: usability test data is quite variable.
- Running 4 times as many tests only narrows the range by 2x; the breadth of the range depends on the square root of the number of test users.
- This is when online methods become useful: it is easy to test with large numbers of users (e.g., Landay's NetRaker system).

Lies, damn lies and statistics…
- A common mistake (made by famous HCI researchers*): increasing n, the number of trials, by running each subject several times.
- No! The analysis only works when trials are independent. All the trials for one subject are dependent, because that subject may be faster, slower, or less error-prone than others.
* Making this error will not help you become a famous HCI researcher.

Statistics with care
What you can do to get better significance:
- Run each subject several times and compute the average for each subject.
- Run the analysis as usual on the subjects' average times, with n = number of subjects.
- This decreases the per-subject variance while keeping the data independent.
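
A minimal sketch of this procedure (assuming a hypothetical `trial_times` structure of repeated measurements per subject, with each subject using both designs):

```python
import numpy as np
from scipy import stats

# Hypothetical repeated trial times: trial_times[design][subject] -> list of times (s).
trial_times = {
    "A": {"s1": [40, 42, 39], "s2": [55, 57, 54], "s3": [47, 45, 48], "s4": [50, 49, 52]},
    "B": {"s1": [48, 50, 47], "s2": [63, 66, 64], "s3": [55, 54, 57], "s4": [58, 61, 59]},
}
subjects = sorted(trial_times["A"])

# Step 1: collapse each subject's repeated trials to a single average
# (this is what keeps the data points independent).
avg_a = np.array([np.mean(trial_times["A"][s]) for s in subjects])
avg_b = np.array([np.mean(trial_times["B"][s]) for s in subjects])

# Step 2: run the usual analysis with n = number of subjects.
# Each subject used both designs, so a paired t-test is appropriate here.
t_stat, p_value = stats.ttest_rel(avg_b, avg_a)
print(f"n = {len(subjects)} subjects, t = {t_stat:.2f}, p = {p_value:.4f}")
```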

Measuring User Preference
How much users like or dislike the system:
- Can ask them to rate it on a scale of 1 to 10, or have them choose among statements: "best UI I've ever…", "better than average", …
- It is hard to be sure what the data will mean: novelty of the UI, feelings, an unrealistic setting, etc.
- If many users give you low ratings, you are in trouble.
- You can get some useful data by asking what they liked, disliked, where they had trouble, the best part, the worst part, etc. (redundant questions).

Using Subjects
- Between-subjects experiment: two groups of test users; each group uses only one of the systems.
- Within-subjects experiment: one group of test users; each person uses both systems.

Between subjects
Two groups of testers, each using one system.
Advantages:
- Users only have to use one system (practical).
- No learning effects.
Disadvantages:
- Per-user performance differences are confounded with system differences.
- Much harder to get significant results (many more subjects needed).
- Harder even to predict how many subjects will be needed (depends on the subjects).

Within subjects
One group of testers who use both systems.
Advantages:
- Much more significance for a given number of test subjects.
Disadvantages:
- Users have to use both systems (two sessions).
- Order and learning effects (can be minimized by experiment design).

Example
Same experiment as before:
- System B is 20% slower than A.
- Subjects have a 30% std. dev. in their times.
Within subjects: need 13 subjects for significance p = 0.01.
Between subjects: typically require 52 subjects for significance p = 0.01, but depending on the subjects, we may get lower or higher significance.
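
To see why the within-subjects design needs far fewer people, here is a small simulation sketch (the 60-second baseline and noise levels are assumptions, not from the slides):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 13

# Each subject has their own baseline speed: mean 60 s, 30% std. deviation.
# Design B is 20% slower than design A for any given person.
baseline = rng.normal(60, 18, size=n)

# Within-subjects: the same n people use both designs.
a_within = baseline + rng.normal(0, 3, size=n)
b_within = 1.2 * baseline + rng.normal(0, 3, size=n)
_, p_within = stats.ttest_rel(b_within, a_within)

# Between-subjects: a separate group of n people per design.
a_between = rng.normal(60, 18, size=n) + rng.normal(0, 3, size=n)
b_between = 1.2 * rng.normal(60, 18, size=n) + rng.normal(0, 3, size=n)
_, p_between = stats.ttest_ind(b_between, a_between)

print(f"within-subjects  p = {p_within:.4f}")   # per-subject variation cancels out
print(f"between-subjects p = {p_between:.4f}")  # per-subject variation adds noise
```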

Experimental Details
- Order of tasks: choose one simple order (simple -> complex), unless doing a within-groups experiment.
- Training: depends on how the real system will be used.
- What if someone doesn't finish? Assign a very large time and a large number of errors.
- Pilot study: helps you fix problems with the study. Do two: first with colleagues, then with real users.

Reporting the Results
- Report what you did and what happened.
- Images and graphs help people get it!

Summary
- Random variables
- Distributions
- Some statistics
- Experiment design guidelines