Human Computer Interaction


Human Computer Interaction, Week 8: Evaluation

Introduction
Evaluation is concerned with gathering data about the usability of a design or product by a specified group of users, for a particular activity, within a specified environment or work context.

Evaluation Factors
- The characteristics of the users of the product.
- The types of activities that the user will do.
- The environment of the study.
- The nature of the artefact being evaluated.

Why do evaluation?
To find out what users want and what problems they experience: the more understanding designers have of users' needs, the better designed their products will be.

Formative vs. Summative Evaluation
- Formative evaluation provides information that contributes to the development of the system.
- Summative evaluation is concerned with assessing the finished product.
Our focus is on formative evaluation.

Reasons for doing evaluations
Evaluations provide ways of answering questions about how well a design meets users' needs:
- Understanding the real world
- Comparing designs
- Engineering towards a target
- Checking conformance to a standard

Evaluations in the life cycle (1)
During the early design stages, evaluations tend to be done to:
- Predict the usability of the product or an aspect of it.
- Check the design team's understanding of users' requirements by seeing how an existing system is being used in the field.
- Test out ideas quickly and informally.

Evaluations in the life cycle (2)
Later in the design process, the focus shifts to:
- Identifying user difficulties so that the product can be more finely tuned to meet users' needs.
- Improving an upgrade of the product.

Evaluation Methods
- Observing and monitoring usage
- Collecting users' opinions
- Experimenting and benchmarking
- Interpretive evaluation
- Predictive evaluation

Evaluation Methods vs Reasons
Each of the five methods (observation, users' opinions, experiments, interpretive, predictive) can be rated against the four reasons for evaluating (understanding the real world, comparing designs, engineering towards a target, standard conformance), with each combination marked X (a very likely choice) or x (less likely). Engineering towards a target, for instance, aligns most strongly with experiments and benchmarking.

Observing and Monitoring Usage
- Observing can change what is being observed (the Hawthorne effect).
- Verbal protocols help to reveal what the user is thinking.
- The problem with the think-aloud protocol is that in difficult problem-solving situations users usually stop talking.
- Prompting, asking questions, or working with pairs of users are ways to avoid silences.
- There are two types of software logging: time-stamped key presses and interaction logging; a sketch of the latter follows below.
- Users should always be told that they are being recorded.
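As a minimal sketch of interaction logging, the Python snippet below appends one time-stamped record per user event to a log file. The class name InteractionLogger, the file name, and the event vocabulary are illustrative assumptions, not part of the lecture material.

```python
import json
import time

class InteractionLogger:
    """Minimal time-stamped interaction logger: appends one JSON record
    per user event (key press, click, ...) to a plain-text log file."""

    def __init__(self, path):
        self.path = path

    def log_event(self, event_type, detail=""):
        record = {
            "timestamp": time.time(),   # seconds since the epoch
            "event": event_type,        # e.g. "keypress", "click"
            "detail": detail,           # e.g. which key or widget
        }
        with open(self.path, "a") as f:
            f.write(json.dumps(record) + "\n")

# Example: record two events during a hypothetical test session.
logger = InteractionLogger("session01.log")
logger.log_event("keypress", "Ctrl+S")
logger.log_event("click", "File > Save As")
```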

Collecting Users' Opinions
- Structured vs. unstructured interviews.
- Questions in a questionnaire should be unambiguous.
- Ranking scales should not overlap.
- Check that the length of the questionnaire is appropriate.
- Always try to carry out a pilot study to test the questionnaire design.
- Pre- and post-questionnaires enable researchers to check changes in attitudes or performance, as sketched below.
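A minimal sketch of the pre/post comparison mentioned in the last bullet, using entirely hypothetical 5-point Likert ratings; this is a simple descriptive check, not a significance test.

```python
# Hypothetical pre/post questionnaire scores (5-point Likert ratings,
# one pair per participant) for the item "The system is easy to use."
pre  = [2, 3, 2, 4, 3, 2, 3, 1]
post = [4, 4, 3, 5, 4, 3, 4, 3]

# Per-participant shift: a positive mean suggests attitudes improved
# after the participants used the system.
diffs = [b - a for a, b in zip(pre, post)]
mean_change = sum(diffs) / len(diffs)
print(f"mean attitude change: {mean_change:+.2f} points on a 5-point scale")
```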

Experiments and Benchmarking
- Experiments enable us to manipulate variables associated with a design and study their effects.
- The experimenter must know the purpose of the experiment, must state hypotheses in a way that can be tested, and must choose statistical analyses that are appropriate; a sketch of one such analysis follows below.
- Pilot studies are important for determining the suitability of the experimental design.
- Usability engineering: benchmarking, using special laboratories, video recording, and keystroke logging; the test conditions are artificial, but the approach is good for fine-tuning product upgrades.
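As a sketch of one appropriate analysis, the snippet below runs an independent-samples t-test comparing task-completion times for two designs. The data are hypothetical, and the choice of a between-subjects t-test is an assumption about the experimental design, not something the lecture prescribes.

```python
from scipy import stats

# Hypothetical task-completion times (seconds) for two interface designs,
# each tested with a different group of participants (between-subjects).
design_a = [41.2, 38.5, 45.1, 39.8, 43.0, 40.7, 44.2, 37.9]
design_b = [35.4, 33.1, 36.8, 31.9, 34.5, 32.7, 35.0, 33.8]

# Test the hypothesis that mean completion times differ between designs.
t_stat, p_value = stats.ttest_ind(design_a, design_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject the null hypothesis: the designs differ in completion time.")
```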

Interpretive Evaluation
- Contextual inquiry, cooperative evaluation, and participative evaluation.
- Users and researchers participate together to identify and understand usability problems within the normal working environment of the user.
- Ethnographic researchers strive to immerse themselves in the situation that they want to learn about.
- Video tends to be the main data-capture technique.

Predictive Evaluation
- Usability testing is expensive; predictive evaluation is cheaper.
- Method: predicting aspects of usage without involving users.
- Reviewer selection: HCI expertise, application-area expertise, and impartiality (e.g., no involvement in the product's earlier development).
- Three kinds of reporting for reviews: structured, unstructured, and predefined.
- Heuristic evaluation: reviewers examine the system or prototype, as in a general review or usage simulation, guided by a set of high-level heuristics; a sketch of how findings might be tallied follows below.
- Evaluations that involve users are expensive; discount usability engineering is one solution.
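A minimal sketch of aggregating heuristic-evaluation findings across reviewers. The findings data are hypothetical; the heuristic names follow Nielsen's well-known set, which is one common choice of high-level heuristics.

```python
from collections import Counter

# Hypothetical findings from several reviewers: each tuple is
# (heuristic violated, location in the UI where it was observed).
findings = [
    ("visibility of system status", "save dialog"),
    ("error prevention", "delete button"),
    ("visibility of system status", "save dialog"),
    ("consistency and standards", "menu labels"),
    ("visibility of system status", "upload screen"),
]

# Tally how often each heuristic is violated; problems reported by
# several reviewers are usually prioritised for fixing first.
by_heuristic = Counter(h for h, _ in findings)
for heuristic, count in by_heuristic.most_common():
    print(f"{count}x  {heuristic}")
```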

Methods Comparison
- Observation: quickly highlights difficulties; rich qualitative data; but can affect user performance and is time-consuming.
- Users' opinions: can reach a large group of users; but interviews are time-consuming and mailed questionnaires have low response rates.
- Experiments: allow comparison of user groups and statistical analysis; replicable; but resource-intensive, with restricted or artificial tasks in an unrealistic environment.
- Interpretive: captures real-world situations; but requires subjective interpretation and expertise in social-science methods, and cannot be replicated.
- Predictive: diagnostic; needs few resources; usable at an early design stage; but limited by the restrictions of role playing and the difficulty of locating experts.

Further Reading
Preece, Chapters 29 and 30.