User Interface Evaluation: Usability Testing Methods

 Conduct experiments to find specific information about a design and/or product.  The basis comes from experimental psychology.  Uses statistical data-analysis methods.  Collects both quantitative and qualitative data.

Usability Testing Methods  During usability testing, users work on specific tasks using the interface/product, and evaluators use the results to evaluate and modify the interface/product.  Widely used in practice, but often applied inappropriately.  Often abused by developers who consider themselves to be usability experts.  Can be very expensive and time-consuming.

Usability Testing Methods  Performance Measurement  Thinking-aloud Protocol  Question-asking Protocol  Coaching Method

Usability Testing Methods  Co-discovery Learning  Teaching Method  Retrospective Testing  Remote Testing

Performance Measurement  Applicable Stages:  Design, Code, Test & Deployment  Personnel:  Usability Experts: approximately 1  Developers: 0  Users: 6

Performance Measurement  Usability Issues Covered  Effectiveness: Yes  Efficiency: Yes  Satisfaction: No  Quantitative Data is collected.  Can NOT be conducted remotely.  Can be used on any system.

Performance Measurement  What is it?  Used to collect quantitative data.  Typically, you will be looking for benchmark data.  Objectives MUST be quantifiable  Example: 75% of users shall be able to complete the basic task in less than 30 minutes.

Performance Measurement  How can I do it?  Define the goals you expect users to achieve and the tasks they will perform.

Performance Measurement  How can I do it?  Quantify the goals, for example:  The time users take to complete a specific task.  The ratio of successful interactions to errors.  The time spent recovering from errors.  The number of user errors.  The number of commands or other features that were never used by the user.  The number of system features the user can remember during a debriefing after the test.  The proportion of users who say that they would prefer using the system over some specified competitor.
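Once sessions are logged, most of these metrics reduce to simple arithmetic over the recorded data. Below is a minimal Python sketch; the session records, field names, and numbers are hypothetical, invented only to illustrate the computations:

```python
# Illustrative only: session records and field names are made up,
# not the output of any standard usability-testing toolkit.
from statistics import mean

sessions = [
    # one record per participant: task time (minutes), error count, success flag
    {"minutes": 22.5, "errors": 1, "completed": True},
    {"minutes": 34.0, "errors": 4, "completed": False},
    {"minutes": 18.2, "errors": 0, "completed": True},
    {"minutes": 27.9, "errors": 2, "completed": True},
]

completed = [s for s in sessions if s["completed"]]

# Time users take to complete the task (successful sessions only)
print("mean completion time:", mean(s["minutes"] for s in completed))

# Ratio of successful interactions to errors
total_errors = sum(s["errors"] for s in sessions)
print("success/error ratio:", len(completed) / max(total_errors, 1))

# Benchmark check: did 75% of users finish the basic task in under 30 minutes?
under_30 = sum(1 for s in sessions if s["completed"] and s["minutes"] < 30)
print("benchmark met:", under_30 / len(sessions) >= 0.75)
```

In practice the records would come from your logging tool or the observer's notes rather than being hard-coded.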

Performance Measurement  How can I do it?  Recruit participants for the experiments.  Conduct very controlled experiments:  All variables must remain consistent across users, so that measured differences reflect the design rather than the setup.
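Because usability testing borrows its statistical methods from experimental psychology, a controlled comparison of two designs can be analyzed with a standard significance test. A minimal sketch, assuming SciPy is installed; the completion times below are made up for illustration:

```python
# Illustrative only: the timing data is invented.
from scipy import stats

# Task-completion times (minutes) for the same task on two interface
# variants, collected under identical, controlled conditions.
variant_a = [22.5, 18.2, 27.9, 24.1, 20.6, 25.3]
variant_b = [30.4, 28.7, 35.2, 26.9, 31.8, 29.5]

# Independent-samples t-test: is the difference in mean completion time
# likely to be real, or just noise from a small sample?
t_stat, p_value = stats.ttest_ind(variant_a, variant_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```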

Performance Measurement  Problem With Performance Measurement  No qualitative data: you learn what users did, but not why they did it.

Thinking-aloud Protocol  Applicable Stages:  Design, Code, Test & Deployment  Personnel:  Usability Experts: approximately 1  Developers: 0  Users: 4

Thinking-aloud Protocol  Usability Issues Covered  Effectiveness: Yes  Efficiency: No  Satisfaction: Yes  Quantitative Data is NOT collected.  Can NOT be conducted remotely.  Can be used on any system.

Thinking-aloud Protocol  What is it?  A technique in which the participant is asked to vocalize his or her thoughts, feelings, and opinions while interacting with the product.

Thinking-aloud Protocol  How can I do it?  Select the participants: who will be involved?  Select the tasks and design scenarios.  Ask the participant to perform a task using the software.

Thinking-aloud Protocol  How can I do it?  During the task, ask the user to vocalize:  Thoughts, opinions, feelings, etc.

Thinking-aloud Protocol  Problem With Thinking-Aloud Protocol  Cognitive Overload  Can you walk & chew gum at the same time?  Asking the participants to do too much.

Question-asking Protocol  Applicable Stages:  Design, Code, Test & Deployment  Personnel:  Usability Experts: approximately 1  Developers: 0  Users: 4

Question-asking Protocol  Usability Issues Covered  Effectiveness: Yes  Efficiency: No  Satisfaction: Yes  Quantitative Data is NOT collected.  Can NOT be conducted remotely.  Can be used on any system.

Question-asking Protocol  What is it?  Similar to the Thinking-aloud protocol.  Instead of relying on the participant to say what they are thinking, the evaluator prompts the participant with questions while the participant uses the system.

Question-asking Protocol  How can I do it?  Select the participants: who will be involved?  Select the tasks and design scenarios.  Ask the participant to perform a task using the software.

Question-asking Protocol  How can I do it?  During the task, ask the user questions about the product:  Thoughts, opinions, feelings, etc.

Question-asking Protocol  Problem With Question-asking Protocol  Cognitive Overload++  Can you walk, chew gum & talk at the same time?  Asking the participants to do too much.  Added pressure when the evaluator asks questions.  Can be frustrating for novice users.

Coaching Method  Applicable Stages:  Design, Code, Test & Deployment  Personnel:  Usability Experts: approximately 1  Developers: 0  Users: 4

Coaching Method  Usability Issues Covered  Effectiveness: Yes  Efficiency: No  Satisfaction: Yes  Quantitative Data is NOT collected.  Can NOT be conducted remotely.  Can be used on any system.

Coaching Method  What is it?  A system expert sits with the participant and acts as a coach.  The expert answers the participant’s questions.  The evaluator observes their interaction.

Coaching Method  How can I do it?  Select the participants: who will be involved?  Select the tasks and design scenarios.  Ask the participant to perform a task using the software in the presence of a coach/expert.

Coaching Method  How can I do it?  During the task, the user will ask the expert questions about the product.

Coaching Method  Problem With Coaching Method  In real use, there will not be a coach present.  This is useful for designing a coaching or help system, but not for evaluating the interface itself.

Co-Discovery Learning  Applicable Stages:  Design, Code, Test & Deployment  Personnel:  Usability Experts: approximately 1  Developers: 0  Users: 6

Co-Discovery Learning  Usability Issues Covered  Effectiveness: Yes  Efficiency: No  Satisfaction: Yes  Quantitative Data is NOT collected.  Can NOT be conducted remotely.  Can be used on any system.

Co-Discovery Learning  What is it?  Two test users attempt to perform tasks together while being observed.  They help each other in the same manner as they would if they were working together to accomplish a common goal using the product.  They are encouraged to explain what they are thinking about while working on the tasks.  Like thinking aloud, but more natural because of the partner.

Co-Discovery Learning  How can I do it?  Select the participants: who will be involved?  Select the tasks and design scenarios.  Ask the participants to perform a task using the software.

Co-Discovery Learning  How can I do it?  During the task, the users will help each other and voice their thoughts by talking to each other.

Co-Discovery Learning  Problem With Co-Discovery Learning  Neither participant is an expert:  The blind leading the blind.

Teaching Method  Applicable Stages:  Design, Code, Test & Deployment  Personnel:  Usability Experts: approximately 1  Developers: 0  Users: 4

Teaching Method  Usability Issues Covered  Effectiveness: Yes  Efficiency: No  Satisfaction: Yes  Quantitative Data is NOT collected.  Can NOT be conducted remotely.  Can be used on any system.

Teaching Method  What is it?  Have one participant use the system.  Then ask that participant to teach a new, novice participant how to use the system.

Teaching Method  How can I do it?  Select the participants: who will be involved?  Select the tasks and design scenarios.  Ask the first participant to perform a task using the software.  Ask the first participant to teach a new participant.

Teaching Method  How can I do it?  Observe their interactions.

Teaching Method  Problem With Teaching Method  Neither participant is an expert:  The blind leading the blind.  Still, it is possible to discover some interesting things about the learnability of your interface.

Retrospective Testing  Applicable Stages:  Design, Code, Test & Deployment  Personnel:  Usability Experts: approximately 1  Developers: 0  Users: 4

Retrospective Testing  Usability Issues Covered  Effectiveness: Yes  Efficiency: Yes  Satisfaction: Yes  Quantitative Data can be collected.  Can NOT be conducted remotely.  Can be used on any system.

Retrospective Testing  What is it?  A videotape of the test session is reviewed by the usability expert together with the participants.

Retrospective Testing  How can I do it?  Select the participants: who will be involved?  Select the tasks and design scenarios.  Use one of the usability testing methods that we have discussed.  Videotape the session.

Retrospective Testing  How can I do it?  Review the videotape with the users.

Retrospective Testing  Problem With Retrospective Testing  Extremely time-consuming: every session is effectively run twice, once live and once in review.

Remote Testing  Applicable Stages:  Design, Code, Test & Deployment  Personnel:  Usability Experts: approximately 1  Developers: 0  Users: 5

Remote Testing  Usability Issues Covered  Effectiveness: Yes  Efficiency: Yes  Satisfaction: Yes  Quantitative Data can be collected.  Can be conducted remotely.  Can be used on any system.

Remote Testing  What is it?  The participants are separated from the evaluators.  No formal observation.  No usability lab.

Remote Testing  How can I do it?  Give the product/software to participants.  Collect information about how they use your software/product.  Methods:  Same time, different place  Different time, different place

Remote Testing  How can I do it?  Screen recorders: Camtasia, SnagIt  Usability logger  Journaled sessions
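As a rough sketch of the journaled-session idea, the product itself can be instrumented to write a timestamped event log that participants send back (or that uploads automatically). Everything here, the file name, event names, and participant IDs, is hypothetical; this is not the API of Camtasia, SnagIt, or any specific logging tool:

```python
# Illustrative only: a minimal, hypothetical event logger of the kind a
# journaled-session tool might embed in the product to capture remote usage.
import json
import time

LOG_PATH = "session_log.jsonl"  # hypothetical output file

def log_event(participant_id: str, event: str, detail: str = "") -> None:
    """Append one timestamped interaction event as a JSON line."""
    record = {
        "ts": time.time(),
        "participant": participant_id,
        "event": event,
        "detail": detail,
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: events the instrumented product would emit during a remote session.
log_event("p05", "task_start", "basic task")
log_event("p05", "menu_open", "File")
log_event("p05", "error", "invalid input in search field")
log_event("p05", "task_complete", "basic task")
```

A log like this also feeds the performance metrics shown earlier, since completion times and error counts can be computed directly from the timestamps.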

Remote Testing  Problem With Remote Testing  The evaluator is not present.  Can’t observe facial expressions.  On the plus side, it is great for Web-based systems.

Usability Testing Methods  Select the method that works best for you and fits your implementation.  Be thorough during your experiments.  The more data, the better.

Usability Testing Methods  Hawthorne Effect  The tendency for people to improve their performance after any change when they know their performance is being studied.  Keep this effect in mind when interpreting your test results.