Intro to Evaluation See how (un)usable your software really is…


Why is evaluation done?
- Summative: assess an existing system; judge whether it meets some criteria
- Formative: assess a system being designed; gather input to inform the design
- Summative or formative? It depends on:
  - the maturity of the system
  - how the evaluation results will be used
- The same technique can be used for either

Other distinctions
- Form of results obtained: quantitative or qualitative
- Who is experimenting with the design: end users or HCI experts
- Approach: experimental, naturalistic, or predictive

Evaluation techniques
- Predictive evaluation: Fitts' law, Hick's law, etc. (see the sketch below)
- Observation
- Think-aloud
- Cooperative evaluation
- Watch users perform tasks with your interface; more next week…
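As a concrete illustration of predictive evaluation, here is a minimal Python sketch of a Fitts'-law movement-time prediction. The coefficients a and b are made-up placeholder values; in practice they are fit by regression for a particular device and user population.

    import math

    def fitts_movement_time(distance, width, a=0.1, b=0.15):
        """Predict pointing time (seconds) with Fitts' law (Shannon formulation).
        a and b are illustrative constants; real values come from fitting data."""
        index_of_difficulty = math.log2(distance / width + 1)  # in bits
        return a + b * index_of_difficulty

    # e.g., a 6-pixel-wide button 300 pixels away vs. a 60-pixel-wide one
    print(fitts_movement_time(300, 6))   # smaller target -> longer predicted time
    print(fitts_movement_time(300, 60))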

More techniques
- Empirical user studies (experiments): test hypotheses about your interface; examine dependent variables against independent variables (see the sketch after this list; more next lecture…)
- Interviews, questionnaires, and focus groups: get user feedback (more in two weeks…)
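A minimal sketch of what the analysis of such an experiment might look like, assuming hypothetical completion-time data: interface version is the independent variable, completion time the dependent variable, compared with an independent-samples t-test from SciPy. The numbers and the two-condition design are made up for illustration.

    from scipy import stats

    # Hypothetical completion times (seconds) under two interface versions.
    old_ui = [41.2, 38.5, 45.0, 39.8, 43.1, 40.6]
    new_ui = [33.4, 36.1, 31.9, 35.2, 34.8, 32.7]

    t, p = stats.ttest_ind(old_ui, new_ui)
    print(f"t = {t:.2f}, p = {p:.3f}")  # a small p suggests the difference is not just noise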

Still more techniques
- Discount usability techniques: use HCI experts instead of users; a fast and cheap way to get broad feedback
  - Heuristic evaluation: several experts examine the interface using guiding heuristics (like the ones we used in design)
  - Cognitive walkthrough: several experts assess the learnability of the interface for novices
- You will do one of each of these

And still more techniques
- Diary studies: users relate their experiences on a regular basis (written down, called in, etc.)
- Experience Sampling Technique: interrupt users with a very short questionnaire on a random-ish basis (toy scheduling sketch below)
- Both are good for getting an idea of regular, long-term use in the field (the real world)
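A toy sketch of how a random-ish experience-sampling prompt schedule could be generated; the study length, number of prompts per day, and waking-hours bounds are arbitrary assumptions for illustration.

    import random

    def esm_schedule(study_days=5, prompts_per_day=4,
                     day_start_hour=9, day_end_hour=21):
        """Generate random prompt times (hours of the day) for each study day.
        All counts and bounds here are illustrative, not recommendations."""
        schedule = []
        for _day in range(study_days):
            hours = sorted(random.uniform(day_start_hour, day_end_hour)
                           for _ in range(prompts_per_day))
            schedule.append([round(h, 2) for h in hours])
        return schedule

    print(esm_schedule())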

Evaluation is Detective Work
- Goal: gather evidence that can help you determine whether your usability goals are being met
- Evidence (data) should be: relevant, diagnostic, credible, and corroborated

Data as Evidence
- Relevant: appropriate for addressing the hypotheses
  - e.g., does measuring "number of errors" provide insight into how effectively your new air traffic control system supports the users' tasks?
- Diagnostic: the data unambiguously provide evidence one way or the other
  - e.g., does asking for users' preferences clearly tell you whether the system performs better? (Maybe)

Data as Evidence
- Credible: are the data trustworthy? Gather data carefully, and gather enough of it
- Corroborated: does more than one source of evidence support the hypotheses?
  - e.g., both accuracy and user opinions indicate that the new system is better than the previous one, but what if completion time is slower?

General Recommendations
- Identify evaluation goals
- Include both objective and subjective data, e.g. "completion time" and "preference"
- Use multiple measures within a type, e.g. "reaction time" and "accuracy"
- Use quantitative measures where possible, e.g. a preference score on a 1-7 scale
- Note: only gather the data you need, and do so with minimum interruption, hassle, and time

Evaluation planning
- Decide on techniques, tasks, and materials: what are the usability criteria? How much authenticity is required?
- Decide how many people you need and how long the study will take
- Decide how to record the data and how to analyze it
- Prepare materials: interfaces, storyboards, questionnaires, etc.
- Pilot the entire evaluation
  - Test all materials, tasks, questionnaires, etc.
  - Find and fix problems with wording and assumptions
  - Get a good feel for the length of the study

Recruiting Participants
- Various "subject pools":
  - volunteers
  - paid participants
  - students (e.g., psych undergrads) for course credit
  - friends, acquaintances, family, lab members
  - "public space" participants, e.g. observing people walking through a museum; newsgroup lists
- Participants must fit the user population (validity)
- Note: ethics, IRB approval, and consent apply to *all* participants, including friends and "pilot subjects"

Performing the Study
- Be well prepared so the participant's time is not wasted
- Explain the procedures without compromising the results
- The session should not be too long, and the participant can quit at any time
- Never express displeasure or anger
- Data must be stored anonymously and securely, and/or destroyed
- Expect anything and everything to go wrong!! (a little story)

Consent
- Why is it important?
  - People can be sensitive about this process and its issues
  - Errors will likely be made, and the participant may feel inadequate
  - It may be mentally or physically strenuous
- What are the potential risks? (There are always risks)

Data Inspection
- Start by just looking at the data: were there outliers, people who fell asleep, anyone who tried to mess up the study, etc.?
- Identify issues: overall, how did people do? The "5 W's": where, what, why, when, and for whom were the problems?
- Compile aggregate results and descriptive statistics (sketch below)
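A small sketch, assuming hypothetical per-participant results in a pandas DataFrame, of compiling descriptive statistics and flagging possible outliers for a closer look. The column names, values, and the two-standard-deviation cutoff are illustrative choices, not a fixed rule.

    import pandas as pd

    # Hypothetical per-participant results; names and numbers are made up.
    df = pd.DataFrame({
        "participant": ["P1", "P2", "P3", "P4", "P5"],
        "task_time_s": [52, 61, 48, 190, 55],   # P4 looks suspicious
        "errors":      [1, 0, 2, 7, 1],
    })

    print(df.describe())  # aggregate descriptive statistics

    # Flag possible outliers (here: more than 2 SD above the mean task time)
    cutoff = df["task_time_s"].mean() + 2 * df["task_time_s"].std()
    print(df[df["task_time_s"] > cutoff])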

Making Conclusions
- Where did you meet your criteria? Where didn't you?
- What were the problems? How serious are they?
- What design changes should be made? (But don't make things worse…)
- Prioritize and plan changes to the design
- Iterate on the entire process

Example: Heather's study
- Software: the MeetingViewer interface, fully functional
- Criteria: learnability, efficiency, which aspects of the interface get used, what might be missing
- Resources: subjects were students in a research group, just me as evaluator, plenty of time
- Wanted a completely authentic experience

Heather’s software

Heather’s evaluation  Task: answer questions from a recorded meeting, use my software as desired  Think-aloud  Video taped, software logs  Also had post questionnaire  Wrote my own code for log analysis  Watched video and matched behavior to software logs

Example materials

Data analysis
- Basic data compiled (tallying sketch below):
  - time to answer a question (or give up)
  - number of clicks on each type of item
  - number of times audio was played
  - length of audio played
  - user's stated difficulty with the task
  - user's suggestions for improvements
- More complicated:
  - overall patterns of behavior in using the interface
  - user strategies for finding information
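This is not Heather's actual log-analysis code; it is a small sketch, assuming a made-up timestamped event-log format, of how the basic measures above (clicks per item type, audio plays, time to answer) could be tallied.

    from collections import Counter

    # Hypothetical log: (seconds_from_start, event, detail) tuples.
    log = [
        (3.2,  "click", "timeline"),
        (7.8,  "click", "target_label"),
        (9.1,  "audio_play", "12s"),
        (15.4, "click", "timeline"),
        (21.0, "answered", "Q1"),
    ]

    clicks = Counter(detail for _, event, detail in log if event == "click")
    audio_plays = sum(1 for _, event, _ in log if event == "audio_play")
    time_to_answer = next(t for t, event, _ in log if event == "answered")

    print(clicks)          # number of clicks on each type of item
    print(audio_plays)     # number of times audio was played
    print(time_to_answer)  # time to answer the question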

Data representation example

Data presentation

Some usability conclusions
- Need fast-forward and reverse buttons (minor impact)
- Audio is too slow to load (minor impact)
- Target labels are confusing; need something different that shows the dynamics (medium impact)
- Need more labeling on the timeline (medium impact)
- Need a different place for notes vs. presentations (major impact)

Your turn: assignment
- In one week: a draft of your evaluation plan
  - What are your goals?
  - How you will test each one (the basic idea)
  - Early drafts of any materials (tasks you want people to do, questionnaires, interview questions, etc.)
- Part 2 is due in 2 weeks!

Your turn: in class
- In your project groups:
  - Which usability goals are important for you?
  - How might you measure each one?