Human Computer Interaction


Human Computer Interaction, Chapter 9: Evaluation Techniques

Evaluation Techniques
- Evaluation tests the usability, functionality and acceptability of an interactive system.
- Its purpose is to assess the design and test the system to ensure that it actually behaves as we expect and meets user requirements.
- Evaluation should be considered at all stages of the design life cycle.
- It is not possible to perform extensive experimental testing continuously throughout the design, but analytic and informal techniques can be used.

Goals of Evaluation
Three main goals:
- assess the extent and accessibility of the system's functionality
- assess users' experience of the interaction
- identify specific problems

Evaluation through Expert Analysis
- Ideally, evaluation should be performed before any implementation work has started, so that expensive mistakes can be avoided.
- The later in the design process an error is discovered, the more costly it is to put right.
- A number of methods have been proposed for evaluating interactive systems through expert analysis. We consider four approaches: cognitive walkthrough, heuristic evaluation, the use of models, and the use of previous work.

Cognitive Walkthrough
- Proposed by Polson and colleagues.
- Evaluates a design on how well it supports the user in learning a task.
- Usually performed by an expert in cognitive psychology.
- The expert 'walks through' the design to identify potential problems, using psychological principles.
- Main focus: how easy a system is to learn.
(cognitive: relating to the process of acquiring knowledge by reasoning, intuition or perception)

Cognitive Walkthrough
For each task, the walkthrough considers:
- what impact will the interaction have on the user?
- what cognitive processes are required?
- what learning problems may occur?
Analysis focuses on goals and knowledge: does the design lead the user to generate the correct goals?
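As a rough illustration (not part of the original slides), the walkthrough is usually recorded action by action. The hypothetical sketch below records each step of a task against the questions commonly used in the cognitive walkthrough literature, so that learning problems can be collated into a report; the task and answers are invented.

```python
# Hypothetical sketch (not from the slides): recording a cognitive walkthrough,
# one record per user action in the task under evaluation.
from dataclasses import dataclass

@dataclass
class WalkthroughStep:
    action: str                 # the correct action at this point in the task
    will_user_try_it: bool      # will the user try to achieve the right effect?
    is_action_visible: bool     # will the user notice the action is available?
    will_user_connect_it: bool  # will the user associate the action with the effect?
    is_feedback_clear: bool     # will the user see that progress has been made?
    notes: str = ""             # learning problems observed by the expert

# Invented example: programming a video recorder.
steps = [
    WalkthroughStep("Press the 'timed record' button", True, False, True, True,
                    "Button icon is ambiguous; novices may not find it."),
    WalkthroughStep("Enter the start time and confirm", True, True, False, True,
                    "No prompt explains the required time format."),
]

# Collate potential learning problems for the evaluation report.
for step in steps:
    if not all([step.will_user_try_it, step.is_action_visible,
                step.will_user_connect_it, step.is_feedback_clear]):
        print(f"Problem at step '{step.action}': {step.notes}")
```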

Heuristic Evaluation
- Proposed by Nielsen and Molich.
- A heuristic is a guideline, general principle or rule of thumb that can guide a design decision.
- Heuristic evaluation can be performed on a design specification, so it is useful for evaluating early designs.
- It can also be used on prototypes, storyboards and fully functioning systems, which makes it a flexible and relatively cheap approach.
- The design is examined by experts to see whether the heuristics are violated.
(heuristic: a helpful procedure for arriving at a solution, but not necessarily a proof)

Heuristic Evaluation
Example heuristics:
- system behaviour is predictable
- system behaviour is consistent
- feedback is provided
Heuristic evaluation 'debugs' the design.

Heuristic Evaluation
Nielsen's ten heuristics are:
1. Visibility of system status
2. Match between system and the real world
3. User control and freedom
4. Consistency and standards
5. Error prevention
6. Recognition rather than recall
7. Flexibility and efficiency of use
8. Aesthetic and minimalist design
9. Help users recognize and recover from errors
10. Help and documentation
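In practice, each problem an evaluator finds is usually noted against the heuristic it violates together with a severity rating, and the findings of several evaluators are then merged. The sketch below is a hypothetical illustration of that bookkeeping (the problems, evaluators and 0-4 severity scale are invented, not from the slides):

```python
# Hypothetical sketch (not from the slides): collating usability problems
# found by several evaluators against Nielsen's heuristics, with a 0-4
# severity rating, and ranking the most severe problems first.
from collections import defaultdict

# Invented findings: (evaluator, heuristic violated, problem, severity 0-4).
findings = [
    ("Evaluator A", "Visibility of system status", "No progress bar during upload", 3),
    ("Evaluator B", "Visibility of system status", "No progress bar during upload", 4),
    ("Evaluator A", "Error prevention", "Delete has no confirmation dialog", 4),
    ("Evaluator B", "Recognition rather than recall", "Codes must be typed from memory", 2),
]

# Group duplicate problems and average their severity across evaluators.
grouped = defaultdict(list)
for _, heuristic, problem, severity in findings:
    grouped[(heuristic, problem)].append(severity)

report = sorted(grouped.items(), key=lambda kv: -sum(kv[1]) / len(kv[1]))
for (heuristic, problem), severities in report:
    avg = sum(severities) / len(severities)
    print(f"[{avg:.1f}] {heuristic}: {problem} (reported by {len(severities)} evaluator(s))")
```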

Model-based Evaluation
- Another expert-based approach is the use of models.
- Certain cognitive and design models provide a means of combining design specification and evaluation within the same framework.
- For example, the GOMS (goals, operators, methods and selection rules) model predicts user performance with a particular interface and can be used to filter design options.
- Design rationale can also provide useful evaluation information.
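As a rough illustration (not part of the original slides), a simplified member of the GOMS family, the keystroke-level model (KLM), predicts expert task time by summing the times of low-level operators. The sketch below uses commonly quoted approximate operator times; the task breakdown and comparison are invented.

```python
# Hypothetical sketch (not from the slides): a keystroke-level model (KLM)
# estimate of expert task time. Operator times are approximate textbook
# values; the task breakdown is invented.
OPERATOR_TIME = {
    "K": 0.2,   # press a key or button (skilled typist)
    "P": 1.1,   # point at a target with the mouse
    "B": 0.1,   # press or release a mouse button
    "H": 0.4,   # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation for an action
}

def klm_time(sequence):
    """Return the predicted time (seconds) for a sequence of KLM operators."""
    return sum(OPERATOR_TIME[op] for op in sequence)

# Example: 'save file via menu' modelled as
# M (decide), H (reach mouse), P B (open menu), P B (click Save).
save_via_menu = ["M", "H", "P", "B", "P", "B"]
# Alternative design: keyboard shortcut Ctrl+S modelled as M K K.
save_via_shortcut = ["M", "K", "K"]

print(f"Menu:     {klm_time(save_via_menu):.2f} s")
print(f"Shortcut: {klm_time(save_via_shortcut):.2f} s")
```

A comparison like this can be used to filter design options before any user testing takes place.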

Review-based Evaluation
- Results from the literature are used to support or refute parts of the design.
- Care is needed to ensure that the results are transferable to the new design.
(refute: disprove)

Evaluation through User Participation
- The techniques considered so far concentrate on evaluating a design or system through analysis by a designer or expert evaluator, rather than testing with actual users.
- There are a number of different approaches to evaluation through user participation.

Styles of Evaluation
We distinguish between two distinct evaluation styles:
- those performed under laboratory conditions
- those conducted in the work environment, or 'in the field'

Laboratory Studies
Advantages:
- specialist equipment available
- uninterrupted environment
Disadvantages:
- lack of context (e.g. filing cabinets, calendars, books)
- difficult to observe several users cooperating

Field Studies
Advantages:
- natural environment
- longitudinal studies possible
Disadvantages:
- distraction
- noise

Experimental Evaluation
- One of the most powerful methods of evaluating a design is to use a controlled experiment.
- It can be used to study a wide range of different issues at different levels of detail.
- A number of factors are important to the overall reliability of an experiment and must be considered carefully in the experimental design.
- These include the participants chosen, the variables tested and manipulated, and the hypothesis tested.

Experimental Factors
- Subjects: who – they should be representative
- Variables: things to modify and measure
- Hypothesis: what you'd like to show
- Experimental design: how you are going to do it

Variables
- Independent variable (IV): the elements of the experiment that are manipulated to produce different conditions for comparison, e.g. interface style, number of menu items, icon design.
- Dependent variable (DV): the variables that can be measured in the experiment; their value is 'dependent' on the changes made to the independent variable, e.g. time taken to complete a task, number of errors made.

Hypothesis
- A hypothesis is a prediction of the outcome of an experiment, framed in terms of the independent and dependent variables.
- The aim of the experiment is to show that this prediction is correct.

Experimental Design
- In order to produce reliable and generalizable results, an experiment must be carefully designed.
- The first phase in experimental design is to choose the hypothesis: to decide exactly what it is you are trying to demonstrate. In doing this we are likely to clarify the independent and dependent variables.
- A number of experimental conditions are then considered, which differ only in the values of the independent variables.
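As a rough illustration (not part of the original slides), the sketch below analyses a hypothetical experiment of this kind: interface style is the independent variable, task completion time is the dependent variable, and an independent-samples t-test checks whether the difference between the two conditions is statistically significant. All data values are invented.

```python
# Hypothetical sketch (not from the slides): comparing task completion times
# (dependent variable) under two interface styles (independent variable)
# with an independent-samples t-test.
from scipy import stats

# Invented example data: completion time in seconds per participant.
menu_interface = [45.2, 51.0, 38.7, 49.5, 44.1, 47.8]
toolbar_interface = [36.4, 40.2, 33.9, 42.5, 38.0, 35.7]

t_stat, p_value = stats.ttest_ind(menu_interface, toolbar_interface)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# At a 0.05 significance level, a small p-value suggests a real difference
# between the two interface styles rather than random variation.
if p_value < 0.05:
    print("Significant difference between the two interface styles.")
else:
    print("No significant difference detected.")
```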

Experimental Studies on Groups
More difficult than single-user experiments. Problems with:
- the complexity of human-human communication and group working
- choice of task
- data gathering
- analysis

Participant Groups
- larger number of subjects, so more expensive
- longer time to 'settle down' … and even more variation!
- difficult to timetable

The Task
Choosing a suitable task is also difficult; we may want to test a variety of different task types. Options include:
- creative task, e.g. 'write a short report on …'
- decision game, e.g. desert survival task
- control task, e.g. ARKola bottling plant

Data Gathering
- Even in a single-user experiment several video cameras may be used; in a group setting this is replicated for each participant.
- Problems: synchronisation, sheer volume of data!
- One solution: record from each participant's perspective.

Observational Techniques
- A particular way to gather information about actual use of a system is to observe users interacting with it.
- Usually they are asked to complete a set of predetermined tasks.
- The evaluator watches and records the users' actions.
- Users are asked to elaborate their actions by 'thinking aloud'.

Think Aloud
- The user is observed performing a task.
- The user is asked to describe what he is doing and why, what he thinks is happening, etc.
Advantages:
- simplicity – requires little expertise
- can provide useful insight
- can show how the system is actually used

Cooperative Evaluation
- A variation on think aloud in which the user is encouraged to see himself as a collaborator in the evaluation.
- Both the user and the evaluator can ask each other questions throughout.
Additional advantages:
- less constrained and easier to use
- the user is encouraged to criticize the system
- the evaluator can clarify points of confusion

Protocol Analysis
- paper and pencil – cheap, but limited to writing speed
- audio – good for think aloud, but difficult to match with other protocols
- video – accurate and realistic, but needs special equipment and is obtrusive
- computer logging – relatively easy to get the system to record user actions automatically; it tells us what the user is doing on the system
- user notebooks – coarse and subjective, but provide useful insights and are good for longitudinal studies
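As a rough illustration of computer logging (not part of the original slides), the hypothetical sketch below timestamps each user action and appends it to a log file so the interaction protocol can be replayed or analysed later; the file name and event names are invented.

```python
# Hypothetical sketch (not from the slides): simple computer logging of user
# actions, one timestamped record per event, written as JSON lines.
import json
import time

LOG_FILE = "interaction_log.jsonl"  # invented file name

def log_event(event_type, **details):
    """Append one timestamped user action to the interaction log."""
    record = {"time": time.time(), "event": event_type, **details}
    with open(LOG_FILE, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example calls that a UI toolkit's event handlers might make.
log_event("menu_open", menu="File")
log_event("menu_select", menu="File", item="Save As...")
log_event("key_press", key="Escape")
```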

Automated Analysis – EVA
- Analyzing protocols, whether video, audio or system logs, is time consuming.
- It is harder still if there is more than one stream of data.
- One solution to this problem is to provide automatic analysis tools to support the task.
- EVA (Experimental Video Annotator) is a system that runs on a multimedia workstation with a direct link to a video recorder.

Post-task Walkthrough
- The user reflects on his actions after the event.
- Used to fill in the intentions behind the recorded actions.

Query Techniques
- Interviews
- Questionnaires

Interviews
- The analyst questions the user on a one-to-one basis, usually based on prepared questions.
- Informal, subjective and relatively cheap.
Advantages:
- issues can be explored more fully
- can elicit user views and identify unanticipated problems
Disadvantages:
- very subjective
- time consuming

Questionnaires
A set of fixed questions given to users.
Advantages:
- quick and reaches a large user group
- can be analyzed more rigorously
Disadvantages:
- less flexible
- less probing

Questionnaires
Need careful design:
- what information is required?
- how are the answers to be analyzed?
Styles of question:
- general
- open-ended
- scalar
- multi-choice
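As a rough illustration of the "analyzed more rigorously" point (not part of the original slides), scalar questions lend themselves to simple numerical summaries. The hypothetical sketch below averages Likert-style ratings per question; the questions and responses are invented.

```python
# Hypothetical sketch (not from the slides): summarising scalar (Likert-style)
# questionnaire responses, each rating from 1 (strongly disagree) to
# 5 (strongly agree).
from statistics import mean

# Invented responses: one list of ratings per question.
responses = {
    "The system was easy to learn": [4, 5, 3, 4, 5, 4],
    "Error messages were helpful": [2, 3, 2, 1, 3, 2],
    "I felt in control at all times": [4, 4, 5, 3, 4, 4],
}

for question, ratings in responses.items():
    print(f"{question}: mean = {mean(ratings):.2f} (n = {len(ratings)})")
```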

Choosing an Evaluation Method
- A range of techniques is available for evaluating a system at all stages of the design process.
- So how do we decide which methods are most appropriate? There are no hard and fast rules: each method has its particular strengths and weaknesses, and each is useful when applied appropriately.
- There are, however, a number of factors that should be taken into account when selecting evaluation techniques.

Choosing an Evaluation Method
- when in the process: design vs. implementation
- style of evaluation: laboratory vs. field
- type of measures: qualitative vs. quantitative
- level of information: high level vs. low level
- resources available: time, subjects, equipment, expertise