1 Human Computer Interaction
Week 8 Evaluation

2 Introduction Evaluation is concerned with gathering data about the usability of a design or product by a specified group of users for a particular activity within a specified environment or work context.

3 Evaluation Factors
The characteristics of the users of the product.
The types of activities that the user will do.
The environment of the study.
The nature of the artefact being evaluated.

4 Why do evaluation? To find out what users want and what problems they experience, because the more understanding designers have about users’ needs, then the better designed their products will be.

5 Formative vs. Summative Evaluation
Formative Evaluation provides information that contributes to the development of the system. Summative Evaluation is concerned with assessing the finished product. Our focus is on formative evaluation.

6 Reasons for doing evaluations
Evaluations provide ways of answering questions about how well a design meets users’ needs. Understanding the real world Comparing design Engineering towards a target Checking conformance to a standard

7 Evaluations in the life cycle (1)
During the early design stages, evaluations tend to be done to:
Predict the usability of the product or an aspect of it.
Check the design team's understanding of users' requirements by seeing how an already existing system is being used in the field.
Test out ideas quickly and informally.

8 Evaluations in the life cycle (2)
Later on in the design process, the focus shifts to:
Identifying user difficulties so that the product can be more finely tuned to meet their needs.
Improving an upgrade of the product.

9 Evaluation Methods
Observing and monitoring usage
Collecting users' opinions
Experimenting and benchmarking
Interpretive evaluation
Predictive evaluation

10 Evaluation Methods vs. Reasons
[Table: evaluation methods (Observation, Users' Opinions, Experiments, Interpretive, Predictive) cross-tabulated against reasons for evaluating (Engineering towards a target, Understanding the real world, Comparing designs, Standard conformance). X marks a very likely choice, x a less likely one; the individual cell markings are not recoverable from the transcript.]

11 Observing and Monitoring Usage
Observing can change what is being observed – the Hawthorne effect.
Verbal protocols help to reveal what the user is thinking.
The problem with the think-aloud protocol is that in difficult problem-solving situations users usually stop talking.
Prompting, questions, or working with pairs of users are ways to avoid silences.
Two types of software logging: time-stamped key-presses and interaction logging (a sketch follows below).
Users should always be told that they are being recorded.
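To make the logging idea concrete, here is a minimal Python sketch of time-stamped event logging. The log_event() hook, the log file name, and the event shapes are illustrative assumptions; in a real study the hook would be called from the event handlers of the interface being evaluated, with users informed that logging is taking place.

```python
# A minimal sketch of time-stamped interaction logging.
# log_event(), LOG_PATH, and the event shapes are hypothetical.
import json
import time

LOG_PATH = "session.log"  # hypothetical output file

def log_event(kind, detail):
    """Append one time-stamped event (key-press, click, ...) to the log."""
    record = {"t": time.time(), "kind": kind, "detail": detail}
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(record) + "\n")

# The evaluated application would call these from its event handlers:
log_event("keypress", "a")
log_event("click", {"widget": "Save", "x": 412, "y": 88})
```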

12 Collecting Users' Opinions
Structured vs. unstructured interviews.
Questions in a questionnaire should be unambiguous.
Ranking scales should not overlap (a quick check is sketched below).
Check that the length of the questionnaire is appropriate.
Always try to carry out a pilot study to test the questionnaire design.
Pre- and post-questionnaires enable researchers to check changes in attitudes or performance.
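As a small illustration of the non-overlap rule, this sketch checks that the bands of a ranking scale are disjoint; the age bands here are hypothetical examples.

```python
# Check that ranking-scale bands (inclusive ranges) do not overlap.
bands = [(18, 24), (25, 34), (35, 44), (45, 54)]  # illustrative age bands

def overlaps(bands):
    """Return True if any two bands share a value."""
    ordered = sorted(bands)
    return any(lo <= prev_hi
               for (_, prev_hi), (lo, _) in zip(ordered, ordered[1:]))

assert not overlaps(bands)             # 18-24, 25-34, ... are disjoint
assert overlaps([(18, 25), (25, 34)])  # 25 would fall in two bands
```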

13 Experiments and Benchmarking
Experiments enable us to manipulate variables associated with a design and study their effects.
The experimenter must know the purpose of the experiment, must state hypotheses in a way that can be tested, and must choose statistical analyses that are appropriate (see the sketch below).
Pilot studies are important for determining the suitability of the experimental design.
Usability engineering – benchmarking, using special laboratories, video recording, keystroke logging; the test condition is artificial, but it is good for fine-tuning product upgrades.
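To illustrate what a testable hypothesis and a matching analysis can look like, here is a sketch comparing task-completion times for two designs with an independent-samples t-test via scipy. The data are made up, and a t-test is only one of several analyses that might be appropriate for a between-groups design.

```python
# H0: mean task-completion time is the same for designs A and B.
# Times (seconds) below are illustrative, not real measurements.
from scipy import stats

design_a = [34.1, 29.8, 41.2, 38.5, 30.9, 35.0]
design_b = [27.3, 25.1, 30.8, 26.9, 29.4, 24.7]

t, p = stats.ttest_ind(design_a, design_b)
print(f"t = {t:.2f}, p = {p:.3f}")  # reject H0 at alpha = 0.05 if p < 0.05
```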

14 Interpretive Evaluation
Contextual inquiry, cooperative evaluation, and participative evaluation.
Users and researchers participate to identify and understand usability problems within the normal working environment of the user.
Ethnographic researchers strive to immerse themselves in the situation that they want to learn about.
Video tends to be the main data-capture technique.

15 Predictive Evaluation
Usability testing is expensive; predictive evaluation is cheaper.
Method: predicting aspects of usage without involving users.
Reviewer selection: HCI expertise, application-area expertise, impartiality (e.g., not having been involved in the product's past development).
Three kinds of reporting for reviews: structured, unstructured, predefined.
Heuristic evaluation: reviewers examine the system or prototype as in a general review or usage simulation, guided by a set of high-level heuristics (a sketch of recording findings follows).
Problems with evaluations that involve users: they are expensive. Solution: discount usability engineering.
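As a sketch of how heuristic-evaluation findings might be recorded, the snippet below pairs a few of Nielsen's well-known heuristics with a simple Finding record. The record layout and the 0–4 severity scale (as used in discount usability engineering) are illustrative assumptions, not a prescribed format.

```python
# Recording heuristic-evaluation findings; structure is illustrative.
from dataclasses import dataclass

HEURISTICS = [  # a few of Nielsen's heuristics, among others
    "Visibility of system status",
    "Match between system and the real world",
    "User control and freedom",
    "Consistency and standards",
    "Error prevention",
]

@dataclass
class Finding:
    heuristic: str  # which heuristic is violated
    location: str   # where in the interface
    severity: int   # 0 (cosmetic) .. 4 (catastrophic)
    note: str

findings = [
    Finding(HEURISTICS[0], "file-upload dialog", 3,
            "No progress indicator during long uploads"),
]
```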

16 Methods Comparison
Observation: quickly highlights difficulties; rich qualitative data; can affect user performance; time-consuming.
Users' opinions: reaches a large group of users; time-consuming (interviews); low response rate (mailed questionnaires).
Experiments: user-group comparison; statistical analysis; replicable; high resource demands; restricted / artificial tasks; unrealistic environment.
Interpretive: real-world situations; requires subjective interpretation; needs expertise in social-science methods; cannot be replicated.
Predictive: diagnostic; few resources; usable at an early design stage; restrictions in role playing; problems locating experts.

17 Further Reading
Preece, chapters 29 and 30.

