Evaluation techniques

Overview
Evaluation tests the usability, functionality and acceptability of an interactive system.
Evaluation may take place:
- in the laboratory
- in the field

Overview: Laboratory studies
Advantages:
- specialist equipment available
- uninterrupted environment
Disadvantages:
- lack of context
- difficult to observe several users cooperating
Appropriate:
- if the system location is dangerous or impractical
- for constrained single-user systems
- to allow controlled manipulation of use

Overview: Field studies
Advantages:
- natural environment
- context retained (though observation may alter it)
- longitudinal studies possible
Disadvantages:
- distractions
- noise
Appropriate:
- where context is crucial
- for longitudinal studies

Types of Evaluation
- Formative evaluation
- Summative evaluation

Formative evaluation
Performed repeatedly, as development proceeds.
Purpose: to support iterative refinement.
Nature: structured but fairly informal.
- Typically an average of three major design/test/redesign cycles
- Many minor cycles to check minor changes
"The earlier poor design features or errors are detected, the easier and cheaper they are to correct."

Summative evaluation
Performed once, after implementation ('beta' release); important in field or 'beta' testing.
Purpose: quality control. The product is reviewed to check for approval against:
- its own specifications
- prescribed standards, e.g. Health and Safety, ISO
Nature: formal, often involving statistical inference.

Goals of Evaluation
- Assess the extent of the system's functionality
- Assess the effect of the interface on the user; measure the level of system usability
- Identify specific problems

What to evaluate?
Usability specifications at all lifecycle stages; basically all representations produced!
- Initial designs (pre-implementation): partial and integrated
- Prototypes at various stages
- Final implementation (α- and β-releases)
- Documentation

Evaluation Approaches/Methods
Some approaches are based on expert evaluation (experts, no users):
- Analytic (cognitive walkthrough) methods
- Heuristic (review) methods
- Model-based methods
Some approaches are based on empirical evaluation (involves users):
- Observational methods
- Query (survey) methods
- Experimental methods
- Statistical validation
An evaluation method must be chosen carefully and must be suitable for the job.

Evaluation Approaches/Methods
Method      | Interface development stage | User involvement
Analytic    | Specification               | No users
Heuristic   | Specification & prototype   | No users, role playing only
Model-based | Specification & prototype   | No users
Observation | Simulation & prototype      | Real users
Query       | Simulation & prototype      | Real users
Experiment  | Normal full prototype       | Real users
(Cost increases from the expert methods down to the empirical methods.)

Expert Evaluation

Expert Evaluation
- Analytic (cognitive walkthrough) methods
- Heuristic (review) methods
- Model-based methods

Cognitive Walkthrough
Proposed by Polson et al.
- Evaluates a design on how well it supports the user in learning the task
- Usually performed by an expert in cognitive psychology
- The expert 'walks through' the design to identify potential problems using psychological principles
- Forms are used to guide the analysis

Cognitive Walkthrough (ctd)
For each task, the walkthrough considers:
- What impact will the interaction have on the user?
- What cognitive processes are required?
- What learning problems may occur?
Analysis focuses on goals and knowledge: does the design lead the user to generate the correct goals?

Heuristic Evaluation
Proposed by Nielsen and Molich.
- Usability criteria (heuristics) are identified
- The design is examined by experts to see if these are violated
Example heuristics (from Nielsen's 10 rules):
- system behaviour is predictable
- system behaviour is consistent
- feedback is provided
Heuristic evaluation 'debugs' the design.

Model-based evaluation
- Results from the literature are used to support or refute parts of the design; care is needed to ensure results are transferable to the new design
- Cognitive models are used to filter design options, e.g. GOMS prediction of user performance
- Design rationale can also provide useful evaluation information
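The GOMS family of cognitive models includes the Keystroke-Level Model (KLM), which predicts expert task time by summing standard operator durations. A minimal sketch follows; the operator times are the commonly cited Card, Moran & Newell values, and the two task sequences compared are hypothetical.

```python
# Keystroke-Level Model (KLM) sketch: predict expert execution time by
# summing standard operator times (values from Card, Moran & Newell).
KLM_TIMES = {
    "K": 0.20,  # keystroke or button press (average skilled typist)
    "P": 1.10,  # point with the mouse to a target
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def predict_time(sequence):
    """Predicted execution time (seconds) for a list of KLM operators."""
    return sum(KLM_TIMES[op] for op in sequence)

# Hypothetical comparison: invoking a command via menu vs. keyboard shortcut.
menu_route = ["H", "P", "K", "M", "P", "K"]  # home to mouse, point, click, think, point, click
shortcut_route = ["M", "K", "K"]             # recall the shortcut, press two keys

print(round(predict_time(menu_route), 2))      # 4.35
print(round(predict_time(shortcut_route), 2))  # 1.75
```

Such a prediction lets the menu and shortcut designs be compared before any implementation exists, which is exactly how model-based methods filter design options.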

Empirical Evaluation

Empirical Evaluation
- Observational methods
- Query (survey) methods
- Experimental methods

Observational Methods
- Think aloud
- Cooperative evaluation
- Protocol analysis
- Automated analysis
- Post-task walkthroughs

Think Aloud
The user is observed performing a task and asked to describe what he is doing and why, what he thinks is happening, etc.
Advantages:
- simplicity: requires little expertise
- can provide useful insight
- can show how the system is actually used
Disadvantages:
- subjective
- selective
- the act of describing may alter task performance

Cooperative evaluation
A variation on think aloud: the user collaborates in the evaluation, and both user and evaluator can ask each other questions throughout.
Additional advantages:
- less constrained and easier to use
- the user is encouraged to criticize the system
- clarification is possible

Query Techniques
- Interviews
- Questionnaires

Interviews
The analyst questions the user on a one-to-one basis, usually based on prepared questions. Informal, subjective and relatively cheap.
Advantages:
- can be varied to suit the context
- issues can be explored more fully
- can elicit user views and identify unanticipated problems
Disadvantages:
- very subjective
- time consuming

Questionnaires
A set of fixed questions given to users.
Advantages:
- quick, and reaches a large user group
- can be analyzed more rigorously
Disadvantages:
- less flexible
- less probing

Questionnaires (ctd)
Need careful design:
- What information is required?
- How are answers to be analyzed?
Styles of question:
- general
- open-ended
- scalar
- multi-choice
- ranked
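Scalar (Likert-style) items are one place where questionnaire answers really can be analyzed rigorously. A small sketch of such an analysis follows; the questions and response data are hypothetical, and since scalar responses are ordinal, it summarizes them with medians and frequency counts rather than means.

```python
# Sketch: summarizing scalar (Likert-style) questionnaire items.
# Responses are ordinal, so medians and distributions are safer than means.
from collections import Counter
from statistics import median

# Hypothetical data: 1 = strongly disagree ... 5 = strongly agree
responses = {
    "The system was easy to learn": [4, 5, 4, 3, 5, 4, 2, 4],
    "Error messages were helpful":  [2, 3, 1, 2, 3, 2, 4, 2],
}

for question, scores in responses.items():
    counts = Counter(scores)
    print(f"{question}: median={median(scores)}, "
          f"distribution={dict(sorted(counts.items()))}")
```

The per-question distributions also make it easy to spot polarized answers that a single average would hide.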

Experimental evaluation

Experimental evaluation
Controlled evaluation of specific aspects of interactive behaviour:
- the evaluator chooses a hypothesis to be tested
- a number of experimental conditions are considered which differ only in the value of some controlled variable
- changes in the behavioural measure are attributed to the different conditions

Experimental factors
- Subjects: who? a representative, sufficient sample
- Variables: things to modify and measure
- Hypothesis: what you'd like to show
- Experimental design: how you are going to do it

Subject groups
A larger number of subjects means:
- more expense
- longer time to 'settle down'
- even more variation!
- difficulty in timetabling
... so often only three or four groups are used.

Variables
Independent variable (IV):
- a characteristic changed to produce different conditions
- e.g. interface style, number of menu items
Dependent variable (DV):
- a characteristic measured in the experiment
- e.g. time taken, number of errors

Hypothesis
A prediction of the outcome, framed in terms of the IV and DV, e.g. "error rate will increase as font size decreases".
Null hypothesis:
- states no difference between conditions
- the aim is to disprove this
- e.g. null hypothesis = "no change with font size"
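Disproving the null hypothesis comes down to a statistical test on the DV measured under each condition. The sketch below computes Welch's two-sample t statistic for the font-size example; the per-subject error counts are hypothetical, and in practice a statistics package would also supply the p-value for the resulting t.

```python
# Sketch: testing the null hypothesis "no change with font size" using
# Welch's two-sample t statistic. The error-count data are hypothetical.
from statistics import mean, variance

def t_statistic(a, b):
    """Welch's t statistic for two independent samples."""
    se = (variance(a) / len(a) + variance(b) / len(b)) ** 0.5
    return (mean(a) - mean(b)) / se

small_font = [8, 9, 7, 10, 9, 8]  # errors per subject, small text condition
large_font = [4, 5, 3, 5, 4, 6]   # errors per subject, large text condition

t = t_statistic(small_font, large_font)
print(f"t = {t:.2f}")  # a large |t| is evidence against the null hypothesis
```

Here the difference in means is large relative to the within-condition variation, so the null hypothesis of "no change" would be rejected at conventional significance levels.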

Experimental design
Within-groups design:
- each subject performs the experiment under each condition
- transfer of learning is possible
- less costly, and less likely to suffer from user variation
Between-groups design:
- each subject performs under only one condition
- no transfer of learning
- more users required; variation can bias results
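The transfer-of-learning problem in a within-groups design is usually handled by counterbalancing: rotating the order of conditions across subjects so that no condition systematically benefits from practice. A minimal sketch follows, building the orders from a cyclic Latin square; the condition names are hypothetical.

```python
# Sketch: counterbalancing condition order in a within-groups design
# using a cyclic Latin square (each condition appears once per position).
def latin_square_orders(conditions):
    """Return one condition order per row of a cyclic Latin square."""
    n = len(conditions)
    return [[conditions[(row + col) % n] for col in range(n)]
            for row in range(n)]

conditions = ["menu UI", "toolbar UI", "shortcut UI"]  # hypothetical conditions
for subject, order in enumerate(latin_square_orders(conditions), start=1):
    print(f"subject {subject}: {' -> '.join(order)}")
```

With more subjects than conditions, the rows are simply reused in rotation, so every condition still occupies every ordinal position equally often.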