Presentation on theme: "Observation & Experiments Watch, listen, and learn…"— Presentation transcript:

1 Observation & Experiments Watch, listen, and learn…

2 Observing Users
Not as easy as you think
One of the best ways to gather feedback about your interface
Watch, listen, and learn as a person interacts with your system
Qualitative & quantitative, end users, experimental or naturalistic

3 Observation
Direct
  In same room
  Can be intrusive; users aware of your presence
  Only see it one time
  May use a 1-way mirror to reduce intrusiveness
Indirect
  Video or app recording
  Reduces intrusiveness, but doesn’t eliminate it
  Cameras focused on screen, face & keyboard
  Gives an archival record, but you can spend a lot of time reviewing it

4 Location
Observations may be:
In lab - maybe a specially built usability lab
  Easier to control
  Can have the user complete a set of tasks
In field
  Watch their everyday actions
  More realistic
  Harder to control other factors

5 Observation Room State-of-the-art observation room equipped with three monitors to view participant, participant's monitor, and composite picture in picture. One-way mirror plus angled glass captures light and isolates sound between rooms. Comfortable and spacious for three people, but room enough for six seated observers. Digital mixer for unlimited mixing of input images and recording to VHS, SVHS, or MiniDV recorders.

6 Task Selection What tasks are people performing? Representative and realistic? Tasks dealing with specific parts of the interface you want to test? Problematic tasks? Don’t forget to pilot your entire evaluation!!

7 Engaging Users in Evaluation
What’s going on in the user’s head?
Use a verbal protocol, where users describe their thoughts
Qualitative techniques:
  Think-aloud - can be very helpful
  Post-hoc verbal protocol - review video
  Critical incident logging - positive & negative
  Structured interviews - good questions
    “What did you like best/least?”
    “How would you change..?”

8 Think Aloud
User describes verbally what s/he is thinking and doing:
  What they believe is happening
  Why they take an action
  What they are trying to do
Widely used, popular protocol
Potential problems:
  Can be awkward for the participant
  Thinking aloud can modify the way the user performs the task

9 Cooperative approach
Another technique: co-discovery learning (constructive interaction)
  Join pairs of participants to work together
  Use think aloud
  Perhaps have one person be a semi-expert (coach) and one a novice
  More natural (like conversation), so it removes some of the awkwardness of individual think aloud
Variant: let the coach be from the design team (cooperative evaluation)

10 Alternative
What if thinking aloud during the session will be too disruptive?
Can use a post-event protocol:
  User performs the session, then watches the video afterwards and describes what s/he was thinking
  Sometimes difficult to recall
  Opens the door to interpretation

11 What if a user gets stuck?
Decide ahead of time what you will do
  Offer assistance or not? What kind of assistance?
You can ask (in cooperative evaluation):
  “What are you trying to do..?”
  “What made you think..?”
  “How would you like to perform..?”
  “What would make this easier to accomplish..?”
Maybe offer hints
This is why cooperative approaches are used

12 Inputs
Need an operational prototype (could use a Wizard of Oz simulation)
Need tasks and descriptions
  Reflect real tasks
  Avoid choosing only tasks that your design best supports
  Minimize necessary background knowledge
  Pay attention to time and training required

13 Capturing a Session
1. Paper & pencil
  Can be slow
  May miss things
  Is definitely cheap and easy
Sample log sheet:
  Time    Task 1   Task 2   Task 3   ...
  10:00
  10:03
  10:08
  10:22

14 Capturing a Session
2. Recording (screen, audio and/or video)
  Good for think-aloud
  Multiple cameras may be needed
  Good, rich record of the session
  Can be intrusive
  Can be painful to transcribe and analyze

15 Recording
2b. With usability software (such as Morae)
  Combines audio/video, screen, mouse and keyboard clicks, window opening/closing, etc.
  Synchronizes everything
  Allows for annotations
  Can create clips or presentations

16 Capturing a Session
3. Software logging
  Modify the software to log user actions
  Can give time-stamped key press or mouse events
  Two problems:
    Too low-level; want higher-level events
    Massive amount of data; need analysis tools
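(Sketch, not from the original slides: the simplest version of such logging just appends one time-stamped, delimited record per user action. The file name and the example event are made up for illustration.)

import time

LOG_PATH = "session.log"  # hypothetical output file

def log_event(session_id, user, app, event, *args):
    # Append one pipe-delimited, time-stamped record per user action
    timestamp_ms = int(time.time() * 1000)
    fields = [str(session_id), user, str(timestamp_ms), app, event] + [str(a) for a in args]
    with open(LOG_PATH, "a") as f:
        f.write("|".join(fields) + "\n")

# Example call, e.g. from a slide-change event handler:
log_event(2303761098721869683, "hrichter", "MV", "SLIDECHANGE", 5)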

17 Example logs 2303761098721869683|hrichter|1098722080134|MV|START|566 2303761098721869683|hrichter|1098722122205|MV|QUESTION|false|false|false|false|false|false| 2303761098721869683|hrichter|1098724978982|MV|TAB|AGENDA 2303761098721869683|hrichter|1098724981146|MV|TAB|PRESENTATION 2303761098721869683|hrichter|1098724985161|MV|SLIDECHANGE|5 2303761098721869683|hrichter|1098724986904|MV|SEEK|PRESENTATION-A|566|604189|0 2303761098721869683|hrichter|1098724996257|MV|SEEK|PRESENTATION-A|566|604189|604189 2303761098721869683|hrichter|1098724998791|MV|SEEK|PRESENTATION-A|566|604189|604189 2303761098721869683|hrichter|1098725002506|MV|TAB|AGENDA 2303761098721869683|hrichter|1098725003848|MV|SEEK|AGENDA|566|149613|604189 2303761098721869683|hrichter|1098725005981|MV|TAB|PRESENTATION 2303761098721869683|hrichter|1098725007133|MV|SLIDECHANGE|3 2303761098721869683|hrichter|1098725009326|MV|SEEK|PRESENTATION|566|315796|149613 2303761098721869683|hrichter|1098725011569|MV|PLAY|566|315796 2303761098721869683|hrichter|1098725039850|MV|TAB|AV 2303761098721869683|hrichter|1098725054241|MV|TAB|PRESENTATION 2303761098721869683|hrichter|1098725056053|MV|SLIDECHANGE|2 2303761098721869683|hrichter|1098725057365|MV|SEEK|PRESENTATION|566|271191|315796 2303761098721869683|hrichter|1098725064986|MV|TAB|AV 2303761098721869683|hrichter|1098725083373|MV|TAB|PRESENTATION 2303761098721869683|hrichter|1098725084534|MV|TAB|AGENDA 2303761098721869683|hrichter|1098725085255|MV|TAB|PRESENTATION 2303761098721869683|hrichter|1098725088690|MV|TAB|AV 2303761098721869683|hrichter|1098725130500|MV|TAB|AGENDA 2303761098721869683|hrichter|1098725139643|MV|TAB|AV 2303761098721869683|hrichter|1098726430039|MV|STOP|566|271191 2303761098721869683|hrichter|1098726432482|MV|END
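(Sketch, not from the slides: assuming the fields above are session id | user | timestamp in ms | application | event | arguments, a few lines of code can lift the raw log to higher-level summaries, which is the "analysis tools" problem mentioned on slide 16.)

from collections import Counter

def parse_log(path):
    # One dict per pipe-delimited record; field meanings are assumed, not documented
    with open(path) as f:
        for line in f:
            parts = line.strip().split("|")
            if len(parts) >= 5:
                yield {"session": parts[0], "user": parts[1], "time_ms": int(parts[2]),
                       "app": parts[3], "event": parts[4], "args": parts[5:]}

events = list(parse_log("session.log"))
print(Counter(e["event"] for e in events))  # e.g. how many TAB switches, SEEKs, PLAYs
print((events[-1]["time_ms"] - events[0]["time_ms"]) / 1000, "seconds from START to END")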

18 Analysis
Many approaches
Task based
  How do users approach the problem?
  What problems do users have?
  Need not be exhaustive; look for interesting cases
Performance based
  Frequency and timing of actions, errors, task completion, etc.
Can be very time consuming!!

19 Experiments Testing hypotheses…

20 Experiments Test hypotheses in your design More controlled examination than just observation Generally quantitative, experimental, with end users.

21 Types of Variables
Independent
  What you’re studying; what you intentionally vary (e.g., interface feature, interaction device, selection technique, design)
Dependent
  Performance measures you record or examine (e.g., time, number of errors)
Controlled
  What you do not want to affect your study

22 “Controlling” Variables
Prevent a variable from affecting the results in any systematic way
Methods of controlling for a variable:
  Don’t allow it to vary (e.g., all males)
  Allow it to vary randomly (e.g., randomly assign participants to different groups)
  Counterbalance - systematically vary it (e.g., equal number of males and females in each group)
The appropriate option depends on circumstances

23 Hypotheses
What you predict will happen
More specifically, the way you predict the dependent variable (e.g., accuracy) will depend on the independent variable(s)
“Null” hypothesis (H0)
  States that there will be no effect
  e.g., “There will be no difference in performance between the two groups”
  Data are used to try to disprove this null hypothesis

24 Defining Performance
Based on the task
Specific, objective measures/metrics
Examples:
  Speed (reaction time, time to complete)
  Accuracy (errors, hits/misses)
  Production (number of files processed)
  Score (number of points earned)
  …others…?
Preference, satisfaction, etc. (e.g., questionnaire responses) are also valid measurements

25 Example
Do people complete operations faster with a black-and-white display or a color one?
  Independent variable - display type (color or b/w)
  Dependent variable - time to complete task (minutes)
  Controlled variables - same number of males and females in each group
Hypothesis: time to complete the task will be shorter for users with the color display
H0: Time(color) = Time(b/w)

26 What about subjects?
How many?
  Book advice: at least 10
  Other advice: 6 subjects per experimental condition
  Real advice: depends on the statistics
Relating subjects and experimental conditions: within- vs. between-subjects design

27 Experimental Designs
Within Subjects Design
Every participant provides a score for all levels or conditions

       Color      B/W
  P1   12 secs.   17 secs.
  P2   19 secs.   15 secs.
  P3   13 secs.   21 secs.
  ...

28 Experimental Designs
Between Subjects
Each participant provides results for only one condition

       Color            B/W
  P1   12 secs.    P2   17 secs.
  P3   19 secs.    P5   15 secs.
  P4   13 secs.    P6   21 secs.
  ...

29 Within Subjects Designs
More efficient: each subject gives you more data - they complete more “blocks” or “sessions”
More statistical “power”: each person is their own control
Therefore, can require fewer participants
May mean a more complicated design to avoid “order effects”
  e.g., seeing color then b/w may be different from seeing b/w then color

30 Between Subjects Designs
Fewer order effects
  Participant may learn from the first condition
  Fatigue may make the second performance worse
Simpler design & analysis
Easier to recruit participants (only one session, less time)
Less efficient

31 Now What…?
Performed initial data inspection
  Removed outliers, have a general idea what occurred
Descriptive statistics
  Totals, averages, ranges, etc.
Subgroup statistics
Inferential analysis to test hypotheses

32 Descriptive Statistics
For all variables and subgroups, get a feel for the results:
  Total scores, times, ratings, etc.
  Minimum, maximum
  Mean, median, ranges, etc.
What is the difference between mean & median? Why use one or the other?
  e.g., “Twenty participants completed both sessions (10 males, 10 females; mean age 22.4, range 18-37 years).”
  e.g., “The median time to complete the task in the mouse-input group was 34.5 s (min=19.2, max=305 s). The median time to complete the task in the keyboard-input group was 32.1 s (min=17.6, max=286 s).”
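(Sketch, not from the slides: computing these summary numbers with Python’s standard library; the completion times are invented, with one long outlier to show why the median is often the better summary for timing data.)

import statistics

times_s = [19.2, 28.7, 34.5, 41.0, 305.0]  # made-up task completion times (seconds)

print("mean:  ", statistics.mean(times_s))    # 85.68, dragged up by the 305 s trial
print("median:", statistics.median(times_s))  # 34.5, closer to a "typical" trial
print("range: ", min(times_s), "-", max(times_s))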

33 Inferential Stats and the Data Are these really different? What would that mean?

34 Goal of analysis
Get >95% confidence in the significance of the result - that is, the null hypothesis is disproved
  H0: Time(color) = Time(b/w)
Or: there is an influence
Or: there is only a 1 in 20 chance that the difference occurred due to random chance

35 Hypothesis Testing
Tests to determine differences
  t-test to compare two means
  ANOVA (Analysis of Variance) to compare several means
Need to determine “statistical significance”
“Significance level” (p): the probability of getting the observed difference simply by chance, if the null hypothesis is true
The alpha level is often set at 0.05, i.e., only 5% of the time would you get the result you saw just by chance
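(Sketch, not from the slides: for the earlier color vs. black-and-white example, a two-sample t-test with scipy; the timing data are invented.)

from scipy import stats

color_times = [12, 19, 13, 15, 14, 16]  # made-up task times (seconds), color display
bw_times    = [17, 15, 21, 20, 18, 22]  # made-up task times (seconds), b/w display

t, p = stats.ttest_ind(color_times, bw_times)
print(f"t = {t:.2f}, p = {p:.3f}")
if p < 0.05:
    print("Difference is statistically significant: reject H0")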

36 Feeding Back Into Design
What were the conclusions you reached?
How can you improve on the design?
What are the quantitative benefits of the redesign?
  e.g., 2 minutes saved per transaction, which means a 24% increase in production, or $45,000,000 per year in increased profit
What are the qualitative, less tangible benefits?
  e.g., workers will be less bored, less tired, and therefore more interested -> better customer service

37 Example: Resolution Effects
Task: given a filename, locate the icon with that filename on a desktop scattered with other icons (distractors)
Compare task time under 3 resolutions (800 x 600, 1024 x 768, 1280 x 1024)
Control for:
  Number of distractors (same)
  Icon shape/size (all text document icons)
  Position of target (randomized)
  Etc.

38 Resolution Experiment
Task: the cursor starts in the middle of the screen; the filename flashes at the top of the screen, and the user has to find the file and click on it once with the cursor
Hypothesis: time to select the target icon increases with higher resolution (because icons and font are smaller)
Independent variable: resolution (3 settings)
Dependent variable: time to click on the target
Controlled variables: experience, number of distractors, icon size/shape, target position, distance from start, direction to target

39 Resolution Experiment
Within-participants design
  1 hour: 3 blocks of 15 minutes each, 100 trials in each condition
  24 participants, split into 6 groups of 4 people
  Why 6 groups? 3 conditions, and 3! = 6 possible orderings
User preference questionnaire, ratings

40
  Group   Block 1   Block 2   Block 3
  A       Res 1     Res 2     Res 3
  B       Res 1     Res 3     Res 2
  C       Res 2     Res 3     Res 1
  D       Res 2     Res 1     Res 3
  E       Res 3     Res 2     Res 1
  F       Res 3     Res 1     Res 2

  Res 1: 800 x 600    Res 2: 1024 x 768    Res 3: 1280 x 1024

3! = 3 * 2 * 1 = 6, so there are six possible orderings of the blocks.
Divide the 24 participants evenly among these 6 groups.
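(Sketch, not from the slides: one way to generate the six orderings and randomly assign 24 participants, four per group.)

import itertools
import random

conditions = ["Res 1", "Res 2", "Res 3"]
orderings = list(itertools.permutations(conditions))  # 3! = 6 block orderings

participants = [f"P{i + 1}" for i in range(24)]
random.shuffle(participants)  # random assignment controls for individual differences

for group, ordering in zip("ABCDEF", orderings):
    members, participants = participants[:4], participants[4:]
    print(group, ordering, members)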

41 Analysis Process (Within-Participants)
1. Look at descriptive statistics: average trial time, user preferences on the questionnaire, etc.
2. Remove outliers (any trial with a measure greater than 2 standard deviations from the mean).
3. Check the null hypothesis for your control variables to ensure no effects exist.
4. Check for learning effects - make sure the counterbalancing worked! An ANOVA of trial time vs. group should show no effect.
5. Do an ANOVA of trial time vs. resolution to test the significance of the findings. Hopefully there is an effect here!
6. Do post-hoc pairwise tests.
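(Sketch, not from the slides: the outlier removal and the two ANOVAs from steps 2, 4, and 5, using numpy and scipy on made-up per-trial data.)

import numpy as np
from scipy import stats

# Made-up per-trial data: completion time (s), group label, resolution condition
times  = np.array([7.4, 9.6, 11.2, 8.1, 8.9, 12.5, 7.8, 9.1, 10.7, 30.0])
groups = np.array(["A", "A", "A", "B", "B", "B", "C", "C", "C", "C"])
res    = np.array(["Res1", "Res2", "Res3", "Res1", "Res2", "Res3", "Res1", "Res2", "Res3", "Res3"])

# Step 2: remove outliers beyond 2 standard deviations from the mean
keep = np.abs(times - times.mean()) <= 2 * times.std()
times, groups, res = times[keep], groups[keep], res[keep]

# Step 4: ANOVA of trial time vs. group - should show no effect if counterbalancing worked
print(stats.f_oneway(*[times[groups == g] for g in np.unique(groups)]))

# Step 5: ANOVA of trial time vs. resolution - this is the effect you hope to find
print(stats.f_oneway(*[times[res == r] for r in np.unique(res)]))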

42 Results (trial time in seconds)

                    800 x 600   1024 x 768   1280 x 1024
  Group A           7.5         9.4          11.1
  Group B           8.3         8.8          12.8
  …
  Mean trial time   7.6         9.5          11.0

Hypothesis: appears to be supported…
NOTE: These are MADE UP numbers. I have not run this study!!!!!!

43 ANOVAs
ANOVA for mean trial time by group:

  Source   Degrees of Freedom   F-Ratio   Probability
  Group    5                    0.457     0.635

  The low F-ratio (less than 1) means not much effect of group on trial time.
  The high probability indicates that whatever relationship was found had a 63% chance of being found in random data.

ANOVA for mean trial time by resolution:

  Source       Degrees of Freedom   F-Ratio   Probability
  Resolution   2                    4.337     <= 0.0001

  The higher F-ratio here indicates that resolution does have an effect on trial time (it doesn’t tell you what the effect is, exactly).
  The low probability indicates the effect is very unlikely to be found in random data.

Conclusion: the counterbalancing worked, and resolution has a statistically significant effect on selecting a target in this experiment.

44 Post-Hoc Pairwise Test
Once you’ve found an effect using an ANOVA, you need to figure out whether each condition is significantly different from the others. You do this with post-hoc tests, such as Tukey HSD:

  Conditions              Difference (seconds)   Probability
  800x600 - 1024x768      1.9                    0.037         Significant
  1280x1024 - 800x600     3.4                    0.002         Significant
  1280x1024 - 1024x768    1.5                    0.054         Not significant
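(Sketch, not from the slides: with per-trial data, statsmodels can run the Tukey HSD comparisons directly; the numbers below are invented.)

import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Made-up trial times (seconds), labelled by resolution condition
times = np.array([7.5, 7.8, 7.4, 9.4, 9.7, 9.3, 11.1, 11.4, 10.9])
conds = np.array(["800x600"] * 3 + ["1024x768"] * 3 + ["1280x1024"] * 3)

result = pairwise_tukeyhsd(times, conds, alpha=0.05)
print(result.summary())  # one row per pair, with mean difference and reject / fail to reject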

45 What could be concluded? (if this data were real…)
In a laboratory setting, target search and selection tasks are faster on 800x600 screens than on 1024x768 or 1280x1024, all other things being equal. There is no statistically significant difference between the two higher resolutions.
What could not be concluded: lower resolution screens are better.
Why not? For other tasks, in other conditions (like different lighting, off-line distractions, etc.), the other resolutions might perform better.

46 Another Example: Heather’s simple experiment
Designing an interface for categorizing keywords in a transcript
Wanted a baseline for comparison
Experiment comparing:
  Pen and paper, not real time
  Pen and paper, real time
  Simulated interface, real time

47 Experiment
Hypothesis: fewer keywords in real time, fewer with the simulated interface
Independent variables: time, accuracy of transcript, platform
Dependent variables: number of keywords of each category
Controlled variables: gender, experience, etc.
Between-subjects design (1 hour, mentally intensive task)

48 Results

                             Non-Real Time Rate   Real Time Rate   Simulated Rate
  Domain-specific tags       7.5                  9.4              5.1
  Domain-independent tags    12                   9.8              5.8
  Conversation tags          1.8                  3                2.5

For domain-specific tags, Simulated less than Real Time, p < 0.01
For domain-independent tags, Simulated less than Real Time, p < 0.01
Hypotheses:
  Fewer in real time: not supported
  Fewer with simulated: supported for two categories

49 Example: add video to IM voice chat?
Compare voice chat with and without video
Plan an experiment: compare message time, or difficulty in communicating, or frequency…
Consider:
  Tasks
  What data you want to gather
  How you would gather it
  What analysis you would do afterwards

50 Questions:
What are the independent variables?
What are the dependent variables?
What could be the hypothesis?
Between or within subjects?
What was controlled?
What are the tasks?

