Chapter 20 Deciding on what to evaluate: the strategy.

Introduction
The strategy comes from answering four questions:
- What's the purpose of the evaluation?
- What data will you collect?
- What product, system, or prototype are you testing?
- What constraints do you have?

Purpose of evaluation
Qualitative or quantitative?
- Qualitative: not easily defined or measured
  - Often drawn from user comments, e.g., "easy", "difficult", "boring", so...
  - Listen to your subjects (a video camera helps)
- Quantitative: explicit usability metrics
  - Clearly easier to crunch the numbers if you have some numbers to crunch
  - Of course, the measurements have to be set up somewhere: in the code, with a stopwatch, or via a "wrapper" program such as Tobii's ClearView (records time, keystrokes, etc.)
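As a concrete illustration of setting measurements up "in the code", here is a minimal Python sketch of a home-grown wrapper that records task time and a count of user actions. It only illustrates the idea; commercial tools like Tobii's ClearView do far more, and the `TaskTimer` name and API here are invented for this example:

```python
import time

class TaskTimer:
    """Minimal instrumentation: record how long each task takes
    and how many user actions (keystrokes, clicks, ...) occurred."""

    def __init__(self):
        self.log = []  # list of (task_name, seconds, action_count)

    def run(self, task_name, task_fn):
        actions = 0

        def record_action():
            # In a real study the UI's event handlers would call this.
            nonlocal actions
            actions += 1

        start = time.perf_counter()
        task_fn(record_action)  # the task reports each user event
        elapsed = time.perf_counter() - start
        self.log.append((task_name, elapsed, actions))
        return elapsed, actions

# Usage: the "task" here is simulated with five recorded actions.
timer = TaskTimer()
elapsed, n = timer.run("open file", lambda rec: [rec() for _ in range(5)])
print(n)  # 5 simulated actions
```

The point is only that quantitative data requires instrumentation decided on in advance; the numbers don't collect themselves.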

Priorities and levels
- Prioritize usability requirements
  - What matters most: domain, users, tasks, environment, or constraints (costs, budgets, timescales, technology)? Whatever matters most drives the design
  - (Erm, what does this have to do with evaluation?)
- Setting usability metric levels
  - Means fixing baseline and desired performance levels, e.g., "speed will improve by 50%"
  - Can be based on a model, e.g., Fitts' law
  - Can be stated as a hypothesis
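Fitts' law can supply the model-based baseline mentioned above: predicted movement time grows with the index of difficulty of a pointing task. A quick sketch using the Shannon formulation, MT = a + b * log2(D/W + 1); the device constants a and b below are hypothetical, since in practice they come from regression on measured data:

```python
import math

def fitts_mt(a, b, distance, width):
    """Shannon formulation of Fitts' law.
    MT = a + b * log2(D/W + 1), where the log term is the
    index of difficulty (ID) in bits."""
    return a + b * math.log2(distance / width + 1)

# Hypothetical device constants: a = 0.2 s, b = 0.1 s/bit.
# D = 480 px, W = 32 px gives ID = log2(16) = 4 bits.
mt = fitts_mt(0.2, 0.1, distance=480, width=32)
print(round(mt, 2))  # 0.6 -> predicted 0.6 s to acquire the target
```

A metric level could then be stated as a testable hypothesis, e.g., "mean target-acquisition time will not exceed the Fitts' law prediction by more than 20%".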

What type of data to collect
- Quantitative or qualitative data? Didn't we already go over this?
  - (Don't you hate it when textbooks are overly repetitive? I don't know what it is about HCI books, but they tend to be this way)
  - Anyway, this seems like a wasted slide...
  - Oh wait, I get it: it's the second question of the strategy

What to test?
(Question 3 of the strategy)
- What's being evaluated: a low-fidelity or a high-fidelity prototype? (Why not an existing system?)
- Low-fidelity: better for guidance and direction of the design (more exploratory in nature)
- High-fidelity: used for exposing problems with a preliminary version of the UI

What are the constraints?
(Question 4 of the strategy)
- Hmm, the book says this is the most important question. Is it? Practically speaking, I guess so...
- These are the pragmatic concerns:
  - How much time do I have to run the experiment?
  - Money? (Paying subjects? Yeah, right...)
  - What equipment is available?
  - Subjects? Where do I get them? (The Psych pool!)
  - How much time do I have to analyze the data?
- Document the strategy (good idea)
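"Document the strategy" can be as lightweight as a structured record of the four questions, written down so the whole team can review it before testing starts. A minimal sketch; the class name and field choices are mine, not the book's:

```python
from dataclasses import dataclass, field

@dataclass
class EvaluationStrategy:
    """The four strategy questions, captured in one reviewable record."""
    purpose: str            # Q1: why are we evaluating?
    data_to_collect: str    # Q2: qualitative, quantitative, or both?
    artifact: str           # Q3: low-fi prototype, hi-fi prototype, or system?
    constraints: list = field(default_factory=list)  # Q4: time, money, ...

# Example filled in for a hypothetical study:
strategy = EvaluationStrategy(
    purpose="check navigation effectiveness for students",
    data_to_collect="qualitative comments during use",
    artifact="hi-fi prototype (UI only)",
    constraints=["two weeks to run", "no budget for subjects",
                 "inexperienced evaluators"],
)
print(len(strategy.constraints))  # 3 constraints recorded
```

Anything that forces the four answers to be stated explicitly, even a shared text file, serves the same purpose.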

Global warming example
Evaluating the Global Warming CD:
- Learnability: is it easy to learn?
- Satisfaction: is it enjoyable to use?
- Navigation: is it easy to install, navigate, and use?
Exercise 21.1 (a good one):
- Suppose you're a consultant brought in by the Global Warming developers (who think usability testing is a waste of time but want to adhere to ISO 9241)
- What are you going to tell them your strategy is?
- What concerns/requirements do you have as the experimenter?

Global Warming strategy
- Purpose: evaluate whether the navigation will be effective for students
- Concerns: will this be an enjoyable learning experience? (Will students actually learn anything?)
- Data to collect: comments on the UI during use (what about the learning effect?)
- To test: a prototype (just the UI, no math model)
- Constraints: newbie evaluators