Chapter 10 Evaluation

Objectives
- Define the role of evaluation
- Understand the importance of evaluation
- Discuss how developers cope with real-world constraints
- Explain the concepts and terms used to discuss evaluation
- Examine how different techniques are used at different stages of development

What is Evaluation?
- Evaluation is the process by which an interface is tested against the needs and practices of its users.
- Evaluation should occur throughout the design life cycle.
- The results of evaluation are fed back into modifications of the design.
- This process helps to ensure that problems are found early in the life cycle, when modifications are easier and cheaper to make.

What is Evaluation?
Evaluation takes a different form in each phase of development:
- Analysis phase: analyze work that has been done in similar fields and get feedback from users of previously designed systems; changes made here are the cheapest.
- Design phase: develop the details of the system; parts of the system are simulated and tested using prototypes (parts of the system, mock-ups, storyboards, paper systems).
- Pre-production phase (develop/implement): measure user performance.

Goals of Evaluation
There are three main goals:
- To assess the extent of the system's functionality
- To assess the effect of the interface on the user
- To identify any specific problems with the system

Goals of Evaluation
- The system's functionality must accord with the user's task requirements; in other words, the design of the system should enable users to perform their tasks more easily.
- The system's functions must be reachable by the user; this involves matching the use of the system to the user's expectations of the task.
- It is also important to measure the impact of the design on the user.

Star Lifecycle
[Figure: the Star Life Cycle, adapted from Hix and Hartson, 1993. Evaluation sits at the centre of the star, connected to task analysis/functional analysis, requirement specification, conceptual design/formal design, prototyping, and implementation.]

Styles of Evaluation
There are two main styles:
- Evaluation performed under laboratory conditions
- Evaluation performed in the work environment, or 'in the field'

Styles of Evaluation: Laboratory studies
- In some cases, the evaluation may not involve any users.
- Carried out in a laboratory fully equipped for evaluation.
- Advantage: suitable for systems that will be located in dangerous or remote places, and for single-user tasks.
- Disadvantages: the system is not tested in its real environment, so users do not handle the system in a realistic way.

Styles of Evaluation: Field studies
- Takes the designer or evaluator out into the user's work environment.
- Advantage: you can observe interactions between systems and individuals that would be missed in a laboratory study.
- Disadvantage: interruptions from noise (e.g. phone calls) and movement in the workplace.

Why do you need to evaluate?
The Nielsen Norman Group point out: "'User experience' encompasses all aspects of the end-user's interaction... The first requirement for an exemplary user experience is to meet the exact needs of the customer, without fuss or bother. Next comes simplicity and elegance that produce products that are a joy to own, a joy to use."
Evaluation is needed to check that users can use the product and like it.

Why do you need to evaluate?
Four good reasons for investing in user testing/evaluation:
- Problems are fixed before the product is delivered, not after.
- The team can concentrate on real problems, not imaginary ones.
- Engineers code instead of debating.
- Time to market is sharply reduced.

When to evaluate?
The product can be:
- A brand new product
- An upgrade of an existing product
If the product is new:
- Time is usually invested in market research.
- Designers may support this process by developing mock-ups of the potential product.
- The aim is to gain an understanding of users' needs and early requirements.

When to evaluate?
If the product is an upgrade:
- The focus is on improving the overall product.
- Evaluations compare user performance with, and attitudes towards, the previous version and the new one.
Evaluations done during design, to check that the product continues to meet users' needs, are known as formative evaluations.
Evaluations done to assess the success of a finished product, or to check that a standard is upheld, are known as summative evaluations.

Evaluating the design
Evaluation should occur throughout the design process. The first evaluation of a system should ideally be performed before any implementation work has started.
Four possible approaches:
- Cognitive walkthrough
- Heuristic evaluation
- Review-based evaluation
- Model-based evaluation

Cognitive Walkthrough
- A review technique in which expert evaluators construct task scenarios from a specification or early prototype and then role-play the part of a user working with that interface, "walking through" it. They act as if the interface were actually built and they, in the role of a typical user, were working through the tasks.
- The technique evaluates how well the interface supports "exploratory learning", i.e. first-time use without formal training.
- It can be performed by the system's designers in the early stages of design, before empirical user testing is possible.

Cognitive Walkthrough
- The early versions relied on a detailed series of questions, to be answered on paper or electronic forms.
- The cognitive walkthrough was developed as an additional tool in usability engineering, giving design teams a way to evaluate early mock-ups of designs quickly.
- It does not require a fully functioning prototype, or the involvement of users. Instead, it helps designers to take on a potential user's perspective, and therefore to identify some of the problems that might arise in interactions with the system.

Cognitive Walkthrough
To carry out a cognitive walkthrough, you need:
- A description of the prototype of the system
- A description of the task the user is to perform on the system
- A complete, written list of the actions needed to complete the task
- An indication of who the users are, and what kind of knowledge and experience the evaluators can assume about them
The evaluators then step through the action list, recording for each action whether a typical user would succeed (see the record-keeping sketch below).
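
One lightweight way to keep the walkthrough record is sketched below, using the four questions asked of every action in Wharton et al.'s version of the method. This is a minimal sketch: the class and field names are illustrative, not part of any standard tool.

```python
# A minimal sketch of recording cognitive walkthrough answers per action.
from dataclasses import dataclass, field

# The four standard walkthrough questions (Wharton et al., 1994).
WALKTHROUGH_QUESTIONS = [
    "Will the user try to achieve the right effect?",
    "Will the user notice that the correct action is available?",
    "Will the user associate the correct action with the desired effect?",
    "If the correct action is performed, will the user see progress?",
]

@dataclass
class ActionStep:
    """One action from the written task list, with the evaluator's answers."""
    description: str                              # e.g. "Click the 'Save' button"
    answers: dict = field(default_factory=dict)   # question -> (success, note)

    def record(self, question: str, success: bool, note: str = "") -> None:
        self.answers[question] = (success, note)

    def problems(self):
        """Return the notes for every question answered 'no'."""
        return [(q, note) for q, (ok, note) in self.answers.items() if not ok]

# Usage: walk through each action of the task, answering each question in turn.
step = ActionStep("Select 'Save As...' from the File menu")
step.record(WALKTHROUGH_QUESTIONS[1], False,
            "Menu label 'Export' hides the save action from first-time users")
for question, note in step.problems():
    print(f"PROBLEM: {question} -> {note}")
```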

Cognitive Walkthrough: Example
For example, operating a car begins with the goals of opening the door, sitting down in the driver's seat with the controls easily accessible, and starting the car. And we're not even driving yet! This example shows the granularity that some walkthroughs attain. The goal of "opening the door" could be broken down into sub-goals: find the key, orient the key, unlock the door, open the door. Each of these sub-goals requires cognitive (thinking) and physical actions. To open the door, do I orient my hand with the palm up or with the palm down? What affordances are provided for opening the door?

Heuristic Evaluation
- A method for quick, cheap, and easy evaluation of a user interface design.
- A heuristic is a guideline, general principle, or rule of thumb that can guide a design decision, or be used to critique a decision that has already been made.
- Heuristic evaluation is the most popular of the usability inspection methods.
- The goal is to find the usability problems in the design so that they can be attended to as part of an iterative design process.
- A small set of evaluators examine the interface and judge its compliance with recognized usability principles (the "heuristics").

Heuristic Evaluation
The heuristics are related to principles and guidelines. The list is as follows:
1. Visibility of system status: the system should always keep users informed about what is going on.
2. Match between system and the real world: the system should speak the user's language.
3. User control and freedom: support undo and redo; make the user feel in control.
4. Consistency and standards: follow platform conventions.
5. Error prevention: prevent a problem from occurring in the first place.

Heuristic Evaluation
6. Recognition rather than recall: make objects, actions and options visible.
7. Flexibility and efficiency of use: allow users to tailor frequent actions.
8. Aesthetic and minimalist design: dialogs should not contain information that is irrelevant or rarely needed.
9. Help users recognize, diagnose and recover from errors: error messages should be expressed in plain language, state the problem precisely, and constructively suggest a solution.
10. Help and documentation: provide help and documentation.
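
Findings from the evaluators are usually collated and prioritized before fixing begins. The sketch below assumes Nielsen's 0-4 severity scale and invented finding data; it simply groups duplicate reports and ranks problems by average severity.

```python
# A minimal sketch of collating heuristic evaluation findings, assuming
# Nielsen's 0-4 severity scale (0 = not a problem, 4 = usability catastrophe).
from collections import defaultdict
from statistics import mean

# Each finding: (evaluator, heuristic violated, problem description, severity).
findings = [
    ("eval1", "Visibility of system status", "No progress bar on save", 3),
    ("eval2", "Visibility of system status", "No progress bar on save", 2),
    ("eval2", "Error prevention", "Delete has no confirmation", 4),
]

# Group duplicate reports and average severity across evaluators, so the
# team can attend to the highest-severity problems first.
by_problem = defaultdict(list)
for _, heuristic, problem, severity in findings:
    by_problem[(heuristic, problem)].append(severity)

for (heuristic, problem), sevs in sorted(
        by_problem.items(), key=lambda kv: -mean(kv[1])):
    print(f"[{mean(sevs):.1f}] {heuristic}: {problem}")
```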

Review-based evaluation
- Draws on experimental results already reported for a specific domain: for example, on usability issues such as menu designs, the recall of command names, and the choice of icons.
- These results may be used as evidence to support (or refute) aspects of the design.
- The reviewer must therefore select evidence carefully, noting the experimental design chosen, the population of subjects used, the analyses performed and the assumptions made.

Model-based evaluation
- Uses cognitive or design models to perform the evaluation.
- For example, GOMS (Goals, Operators, Methods and Selection rules) can be used to predict user performance with a particular interface, and to filter particular design options.
- Design methodologies also have a role in evaluation: they can provide a framework in which design options can be evaluated.
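
As a concrete illustration, the sketch below uses the Keystroke-Level Model (KLM), the simplest member of the GOMS family: a task is written as a sequence of primitive operators, and the predicted expert time is the sum of the operator times. The operator times are the classic Card, Moran and Newell estimates; the two "delete a file" action sequences are invented for comparison, and real projects should calibrate the times for their own users.

```python
# A minimal sketch of a Keystroke-Level Model (KLM) prediction.
KLM_TIMES = {
    "K": 0.28,   # press a key (average non-secretarial typist), seconds
    "P": 1.10,   # point with a mouse to a target on screen
    "B": 0.10,   # press or release a mouse button
    "H": 0.40,   # move hands between keyboard and mouse ("homing")
    "M": 1.35,   # mental preparation for the next physical action
}

def klm_predict(sequence: str) -> float:
    """Predict expert task time (seconds) from a string of KLM operators."""
    return sum(KLM_TIMES[op] for op in sequence)

# Compare two hypothetical designs for "delete a file":
# Design A (menu): point and click on the file, the menu, then 'Delete',
#   with mental preparation before each choice.
# Design B (keyboard): select the file with the mouse, home to the
#   keyboard, then press the Delete key.
print(f"Menu design:     {klm_predict('MPBMPBMPB'):.2f} s")
print(f"Keyboard design: {klm_predict('MPBHMK'):.2f} s")
```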

Evaluating the implementation
This type of evaluation involves users and an actual implementation of the system. The approaches are:
- Experimental methods
- Observational methods
- Query techniques

Empirical methods: Experimental evaluation
- One of the most powerful methods of evaluating a design is to use a controlled experiment.
- The evaluator chooses a hypothesis to test, which can be tested by measuring some attribute of subject behaviour.
- The main components of an experimental evaluation are as follows:
- Subjects: subjects should be chosen to match the expected user population, and may include actual users. The sample size must also be identified.

Empirical methods: Experimental evaluation
- Variables: experiments manipulate and measure variables under controlled conditions in order to test the hypothesis. There are two main types: independent variables, which are manipulated, and dependent variables, which are measured.
- Hypothesis: a prediction of the outcome of an experiment. The aim of the experiment is to show that this prediction is correct.
- Experimental design: in order to produce reliable and generalizable results, an experiment must be carefully designed.
- Statistical measures: two rules apply: look at the data, and save the data. Statistics can then be used to interpret the data.
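
As an illustration of the statistics step, the sketch below assumes a between-groups design in which each subject used one of two interface designs and task completion time was the dependent variable; the data values are invented.

```python
# A minimal sketch of testing an experimental hypothesis with a t-test.
from scipy import stats

# Dependent variable: task completion time in seconds, one value per subject.
times_design_a = [41.2, 38.5, 44.0, 39.8, 42.1, 40.3, 43.7, 37.9]
times_design_b = [35.1, 33.8, 36.4, 34.9, 32.7, 36.0, 34.2, 35.5]

# Null hypothesis: the two designs produce the same mean completion time.
t_stat, p_value = stats.ttest_ind(times_design_a, times_design_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject the null hypothesis: the designs differ significantly.")
else:
    print("No significant difference detected at the 0.05 level.")
```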

Observational techniques
- A popular way to gather information about actual use of a system is to observe users interacting with it.
- Usually users are asked to complete a set of predetermined tasks, which they may perform in their own place of work.
- Two ways to perform observational evaluation:
- Think aloud
- Cooperative evaluation

Observational: Think aloud
- May be used to observe how the system is actually used.
- The user is observed performing a task and is asked to describe what he is doing and why, what he thinks is happening, and so on.
Advantages:
- Simplicity: requires little expertise
- Can provide useful insight
- Can show how the system is actually used
Disadvantages:
- Subjective
- Selective, depending on the tasks provided

Observational: Cooperative evaluation
- A variation on think aloud.
- The user is encouraged to see himself as a collaborator in the evaluation, not simply as an experimental subject.
- Both user and evaluator can ask each other questions.
Additional advantages:
- Less constrained and easier to use
- The user is encouraged to criticize the system
- Clarification is possible

Query Techniques
- Usually based on prepared questions.
- Informal, subjective and relatively cheap.
- Can be useful in eliciting detail of the user's view of a system: the best way to find out how a system meets user requirements is to ask the user.
Advantages:
- Can be varied to suit the context
- Issues can be explored more fully
- Can elicit user views and identify unanticipated problems
- Simple and cheap

Query Techniques
Disadvantages:
- Very subjective
- Time consuming
- Difficult to get accurate feedback
Two ways to perform query techniques:
- Interviews
- Questionnaires
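
Questionnaire responses are typically summarized per question before interpretation. A minimal sketch, assuming a 1-5 Likert scale and invented responses:

```python
# A minimal sketch of summarizing questionnaire data on a 1-5 Likert scale
# (1 = strongly disagree, 5 = strongly agree). The responses are invented.
from statistics import mean, stdev

responses = {
    "The system was easy to learn":        [4, 5, 3, 4, 4, 5, 2, 4],
    "Error messages were helpful":         [2, 3, 1, 2, 3, 2, 2, 1],
    "I could find the functions I needed": [4, 4, 5, 3, 4, 4, 5, 4],
}

# Report mean and spread per statement; low means flag problem areas, but
# remember these self-reports are subjective (see the disadvantages above).
for statement, scores in responses.items():
    print(f"{mean(scores):.1f} (sd {stdev(scores):.1f})  {statement}")
```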

Choosing an Evaluation Method
Factors to consider:
- What style of evaluation is required? (laboratory vs field)
- How objective should the technique be? (subjective vs objective)
- What type of measures are required? (qualitative vs quantitative)

Choosing an Evaluation Method
- What level of information is required? (high level vs low level)
- What level of interference is acceptable? (obtrusive vs unobtrusive)
- What resources are available? (time, subjects, equipment, expertise)
A sketch of using these factors as a simple filter over candidate methods follows.
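
In the sketch below, the property values attached to each method are rough characterizations for illustration only, not definitive classifications; the idea is simply that the factors above can narrow the field of candidate methods.

```python
# A minimal sketch of filtering evaluation methods by the factors above.
methods = {
    "Cognitive walkthrough": {"style": "lab",  "users": False, "measure": "qualitative",  "cost": "low"},
    "Heuristic evaluation":  {"style": "lab",  "users": False, "measure": "qualitative",  "cost": "low"},
    "Controlled experiment": {"style": "lab",  "users": True,  "measure": "quantitative", "cost": "high"},
    "Think aloud":           {"style": "both", "users": True,  "measure": "qualitative",  "cost": "medium"},
    "Questionnaire":         {"style": "both", "users": True,  "measure": "both",         "cost": "low"},
}

def candidates(**requirements):
    """Return the methods whose properties match every stated requirement."""
    return [name for name, props in methods.items()
            if all(props.get(k) in (v, "both") for k, v in requirements.items())]

# e.g. early design stage: no users available yet, low budget
print(candidates(users=False, cost="low"))
```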

Example
Choose an appropriate evaluation method for each of the situations on the following pages. In each case identify:
- The subjects
- The technique used
- Representative tasks to be examined
- Measurements that would be appropriate
- An outline plan for carrying out the evaluation

Example 1
You are at an early stage in the design of a spreadsheet package and you wish to test which type of icons will be easiest to learn.

Spreadsheet Package
- Subjects: typical users: secretaries, academics, students, accountants, home users, schoolchildren.
- Technique: heuristic evaluation.
- Representative tasks: sorting data, printing a spreadsheet, formatting cells, adding functions, producing graphs.
- Measurements: speed of recognition, accuracy of recognition, user-perceived clarity.
- Outline plan: test the subjects with examples of each icon in various styles, noting responses.

Example 2
You have developed a group decision support system for a solicitors' office.

Group Decision Support System
- Subjects: solicitors, legal assistants, possibly clients.
- Technique: cognitive walkthrough.
- Representative tasks: anything requiring shared decision making: compensation claims, plea bargaining, complex issues needing a diverse range of expertise.
- Measurements: accuracy of information presented and accessible, veracity of the audit trail of discussion, screen clutter and confusion, confusion owing to turn-taking protocols.
- Outline plan: evaluate by having experts walk through the system performing tasks, commenting as necessary.

Example 3
You have been asked to develop a system to store and manage student exam results, and would like to test two different designs prior to implementation or prototyping.

Exam Management System
- Subjects: exams officer, secretaries, academics.
- Technique: think aloud, questionnaires.
- Representative tasks: storing marks, altering marks, deleting marks, collating information, security protection.
- Measurements: ease of use, levels of security and error correction provided, user accuracy.
- Outline plan: users perform the set tasks, giving a running verbal commentary on their immediate thoughts; considered views are gathered by questionnaire at the end.

Summary
Evaluation is an important part of the design process and should take place throughout the design life cycle. Its aim is to test the functionality and usability of the design, and to identify and rectify any problems.