Evaluating Interface Designs
Session 04
Course: T0593 / Human Computer Interaction
Year: 2012

Outline
– Introduction
– Expert Reviews
– Usability Testing and Laboratories
– Survey Instruments
– Acceptance Tests
– Evaluation During Active Use

Introduction
– A designer who is deeply involved in the development of a product can make mistakes without realizing it.
– Products should be tested extensively before use in order to find errors, but extensive testing is very expensive.
– An evaluation plan is therefore needed.

Introduction (cont.)
Factors that determine the evaluation plan:
– stage of design (early, middle, late)
– novelty of project (well defined or exploratory)
– number of expected users
– criticality of the interface (life-critical medical system vs. museum exhibit support)
– costs of product and budget allocated for testing
– time available
– experience of the design and evaluation team

Introduction (cont.)
– Evaluation plans range from very thorough tests to quick three-day tests.
– Testing costs range from 20% of a project budget down to 5%.
– Usability testing includes both empirical and non-empirical methods, e.g. user sketches, consideration of design alternatives, and ethnographic studies.

Expert Reviews
– Informal demos can collect useful feedback; expert reviews are more formal and more effective.
– Expert reviews may take one half-day to one week of effort, although longer may be needed to explain the task domain and operational procedures.
– They can be scheduled at several points in the development process.
– Different experts tend to find different problems, so 3-5 expert reviewers can be highly productive.
– Expert reviewers must understand the task domain, the user communities, and first-time user behavior.

Expert Reviews (cont.)
Expert review methods to choose from:
– Heuristic evaluation
– Guidelines review
– Consistency inspection
– Cognitive walkthrough
– Metaphors of human thinking
– Formal usability inspection
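
Of these, heuristic evaluation is the most widely used: reviewers check the interface against a short list of heuristics and record each violation with a severity rating. Below is a minimal sketch of how such findings might be recorded, assuming a Nielsen-style 0-4 severity scale; the class and field names are illustrative, not taken from the slides.

from dataclasses import dataclass, field

# Nielsen-style severity scale: 0 = not a problem ... 4 = usability catastrophe.
SEVERITY = {0: "not a problem", 1: "cosmetic", 2: "minor", 3: "major", 4: "catastrophe"}

@dataclass
class Finding:
    heuristic: str     # violated heuristic, e.g. "Error prevention"
    location: str      # screen or dialog where the problem was observed
    description: str
    severity: int      # 0-4 on the scale above

@dataclass
class ExpertReview:
    reviewer: str
    findings: list = field(default_factory=list)

    def report(self):
        # List the most severe problems first, as input to the redesign.
        for fnd in sorted(self.findings, key=lambda x: x.severity, reverse=True):
            print(f"[{SEVERITY[fnd.severity]}] {fnd.heuristic} @ {fnd.location}: {fnd.description}")

review = ExpertReview(reviewer="Expert A")
review.findings.append(Finding("Error prevention", "checkout form",
                               "date field silently accepts past dates", 3))
review.report()

Sorting by severity makes it easy to merge the reports of 3-5 independent reviewers into a single prioritized problem list.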

Usability Testing and Laboratories

Usability Testing and Laboratories (cont.)
– Usability testing emerged in the early 1980s; it has sped up many projects and produced dramatic cost savings.
– Usability testing is planned and performed in usability laboratories.
– A usability lab typically consists of two 10-by-10-foot areas: one for the participants and another, separated by one-way glass, for the testers and observers.
– Participants should represent the intended user communities, with attention to computer background, task experience, motivation, education, and language ability.

Usability Testing and Laboratories (cont.)
– Participation must be voluntary, and participants must be informed about the tasks and operating procedures; informed consent should be obtained.
– Videotapes of participants performing the tasks are reviewed later to show designers or managers the problems that users encounter.
– Participants are asked to use the think-aloud technique, in which they say aloud what they are thinking while performing the tasks.

Usability Testing and Laboratories (cont.)
Eye-tracking software: in this eye-tracking setup, the participant wears a helmet that monitors and records where on the screen the participant is looking.

Usability Testing and Laboratories (cont.)
Many variant forms of usability testing have been tried:
– Paper mockups
– Discount usability testing
– Competitive usability testing
– Universal usability testing
– Field tests and portable labs
– Remote usability testing
– Can-you-break-this tests

Survey Instruments
Written user surveys are a familiar, inexpensive, and generally acceptable companion to usability tests and expert reviews.
Keys to successful surveys:
– Clear goals in advance
– Development of focused items that help attain those goals
Survey goals can be tied to the components of the Objects and Actions Interface (OAI) model of interface design. Users can be asked for their subjective impressions about specific aspects of the interface, such as the representation of:
– task domain objects and actions
– syntax of inputs and design of displays

Survey Instruments (cont.)
Other goals are to ascertain users':
– background (age, gender, origins, education, income)
– experience with computers (specific applications or software packages, length of time, depth of knowledge)
– job responsibilities (decision-making influence, managerial roles, motivation)
– personality style (introvert vs. extrovert, risk-taking vs. risk-averse, early vs. late adopter, systematic vs. opportunistic)
– reasons for not using an interface (inadequate services, too complex, too slow)
– familiarity with features (printing, macros, shortcuts, tutorials)
– feeling state after using an interface (confused vs. clear, frustrated vs. in-control, bored vs. excited)
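
As an illustration, here is a small sketch of how bipolar survey items such as the feeling-state pairs above might be summarized across respondents; the 1-7 rating scale, the item names, and the response data are assumptions made for the example.

from statistics import mean, stdev

# Hypothetical 1-7 bipolar ratings (1 = negative anchor, 7 = positive anchor),
# one dict per respondent, keyed by survey item.
responses = [
    {"confused_vs_clear": 5, "frustrated_vs_in_control": 4, "bored_vs_excited": 6},
    {"confused_vs_clear": 6, "frustrated_vs_in_control": 5, "bored_vs_excited": 5},
    {"confused_vs_clear": 4, "frustrated_vs_in_control": 6, "bored_vs_excited": 7},
]

# Report the mean, spread, and sample size for each item.
for item in responses[0]:
    scores = [r[item] for r in responses]
    print(f"{item}: mean={mean(scores):.2f}, sd={stdev(scores):.2f}, n={len(scores)}")

Reporting a spread alongside each mean helps distinguish items on which users agree from items that divide the user community.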

Acceptance Tests
Rather than the vague and misleading criterion of "user friendly," measurable criteria for the user interface can be established for the following:
– Time to learn specific functions
– Speed of task performance
– Rate of errors by users
– Human retention of commands over time
– Subjective user satisfaction
In a large system, there may be eight or ten such tests to carry out on different components of the interface and with different user communities. Once acceptance testing has been successful, there may be a period of field testing before national or international distribution.
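
Because these criteria are measurable, an acceptance test can be checked mechanically against agreed thresholds. A minimal sketch follows, with entirely hypothetical threshold values and measurement names.

# Hypothetical thresholds for three of the measurable criteria above.
CRITERIA = {
    "time_to_learn_min": 30.0,    # minutes to learn the specified functions
    "mean_task_time_sec": 120.0,  # benchmark-task completion time
    "error_rate": 0.05,           # errors per task attempt
}

def accept(measured):
    """Pass only if every measured value is within its criterion."""
    ok = True
    for key, limit in CRITERIA.items():
        if measured[key] > limit:
            print(f"FAIL {key}: measured {measured[key]}, limit {limit}")
            ok = False
    return ok

# Example run: task time exceeds its limit, so the test fails.
print(accept({"time_to_learn_min": 24.5, "mean_task_time_sec": 132.0, "error_rate": 0.03}))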

Evaluation During Active Use
Successful active use requires constant attention from dedicated managers, user-services personnel, and maintenance staff. Perfection is not attainable, but percentage improvements are possible.
Interviews and focus-group discussions:
– Interviews with individual users can be productive because the interviewer can pursue specific issues of concern.
– Group discussions are valuable to ascertain the universality of comments.

Evaluation During Active Use (cont.)
Continuous user-performance data logging (see the sketch below):
– The software architecture should make it easy for system managers to collect data about:
  – patterns of system usage
  – speed of user performance
  – rate of errors
  – frequency of requests for online assistance
– A major benefit is guidance to system maintainers in optimizing performance and reducing costs for all participants.
Online or telephone consultants, e-mail, and online suggestion boxes:
– Many users feel reassured if they know there is human assistance available.
– On some network systems, the consultants can monitor the user's computer and see the same displays that the user sees.
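
A minimal sketch of the kind of instrumentation the data-logging bullet describes: wrapping each command handler so that usage, timing, and outcome are appended to a log file. The file name, record fields, and example command are assumptions made for the sketch.

import functools, json, time

LOG_PATH = "usage_log.jsonl"  # hypothetical log file, one JSON record per line

def log_event(command, **fields):
    record = {"timestamp": time.time(), "command": command, **fields}
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(record) + "\n")

def logged(func):
    """Wrap a command handler so each invocation records its duration and outcome."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        ok = True
        try:
            return func(*args, **kwargs)
        except Exception:
            ok = False
            raise
        finally:
            log_event(func.__name__,
                      duration_sec=round(time.perf_counter() - start, 4),
                      ok=ok)
    return wrapper

@logged
def open_help():  # hypothetical command; requests for assistance are worth counting
    print("showing help")

open_help()

Aggregating such records over weeks of real use reveals usage patterns, slow operations, and error-prone commands without interrupting the users.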

Evaluation During Active Use (cont.)
Online suggestion box or trouble reporting:
– Electronic mail goes to the maintainers or designers.
– For some users, writing a letter may be seen as requiring too much effort.
Discussion groups, wikis, and newsgroups:
– Permit postings of open messages and questions
– Some are independently hosted, e.g. on America Online or Yahoo!
– Typically organized around a topic list, sometimes with moderators
– Function as social systems
– Comments and suggestions should be encouraged.

Evaluation During Active Use (cont.)
Figure: a bug-report form in Google's Chrome browser.

Supporting Materials
Formative_Evaluation.ppt: www.eng.auburn.edu/~sealscd/COMP6620/16-Formative_Evaluation.ppt

Q & A