Methodology Overview
- Why do we evaluate in HCI?
- Why should we use different methods?
- How can we compare methods?
- What methods are there?
(see www.cpsc.ucalgary.ca/~saul/681/)

Why Do We Evaluate In HCI?
1. Evaluation to produce generalized knowledge
- are there general design principles?
- are there theories of human behaviour?
  - explanatory
  - predictive
- can we validate ideas / visions / hypotheses?
Evaluation produces:
- validated theories, principles and guidelines
- evidence supporting / rejecting hypotheses / ideas / visions…

Why Do We Evaluate In HCI?
2. Evaluation as part of the design process
- an iterative cycle: design → implementation → evaluation

Why Do We Evaluate In HCI?
A. Pre-design stage
- what do people do?
- what is their real-world context, and what are its constraints?
- how do they think about their task?
- how can we understand what we need in system functionality?
- can we validate our requirements analysis?
Evaluation produces:
- key tasks and required functionality
- key contextual factors
- descriptions of work practices and organizational practices
- key requirements
- user types…

Why Do We Evaluate In HCI?
B. Initial design stage
- evaluate choices of initial design ideas and representations
- usually sketches, brainstorming exercises, paper prototypes
  - is the representation appropriate?
  - does it reflect how people think of their task?
Evaluation produces:
- user reaction to the design
- validation / invalidation of ideas
- list of conceptual problem areas (conceptual bugs)
- new design ideas

Why Do We Evaluate In HCI?
C. Iterative design stage
- iteratively refine / fine-tune the chosen design / representation
- evolve low / medium / high fidelity prototypes and products
- look for usability bugs
  - can people use this system?
Evaluation produces:
- user reaction to the design
- validation and a list of problem areas (bugs)
- variations in design ideas

Why Do We Evaluate In HCI?
D. Post-design stage
- acceptance test: did we deliver what we said we would?
  - verify that the human/computer system meets expected performance criteria
  - ease of learning, usability, user attitude, time, errors…
  - e.g., 9/10 first-time users will successfully download pictures from their camera within 3 minutes, and delete unwanted ones in an additional 3 minutes
- revisions: what do we need to change?
- effects: what did we change in the way people do their tasks?
- in the field: do actual users perform as we expected them to?
Evaluation produces:
- testable usability metrics
- end-user reactions
- validation and a list of problem areas (bugs)
- changes to original work practices / requirements
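
As a concrete illustration of checking a testable acceptance criterion like the camera example above, here is a minimal Python sketch. The task times, threshold, and required success count are invented for illustration only; they are not from the original slides.

```python
# Hypothetical check of an acceptance criterion such as:
# "9/10 first-time users download pictures from the camera within 3 minutes."
# Times (seconds) are made-up observations from a usability test.

download_times = [95, 140, 170, 210, 118, 160, 175, 230, 150, 165]  # one per user
threshold_s = 3 * 60          # 3-minute target
required_successes = 9        # out of 10 participants

successes = sum(1 for t in download_times if t <= threshold_s)
print(f"{successes}/{len(download_times)} users met the 3-minute target")
print("criterion met" if successes >= required_successes else "criterion not met")
```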

Interface Design and Usability Engineering: process overview
Overall approach (applied across all stages): task-centered system design, participatory design, user-centered design, with evaluation throughout, informed by the psychology of everyday things and user involvement. Each stage below has a goal, methods, and products.
- Articulate who the users are and their key tasks → product: user and task descriptions
- Brainstorm designs using representations & metaphors and low-fidelity prototyping methods → product: throw-away paper prototypes; evaluated via participatory interaction and task scenario walk-throughs
- Refine designs using graphical screen design, interface guidelines, style guides and high-fidelity prototyping methods → product: testable prototypes; evaluated via usability testing and heuristic evaluation
- Complete designs → product: alpha/beta systems or a complete specification; evaluated via field testing

Why Do We Evaluate In HCI?
Design and evaluation are best done together:
- evaluation suggests design
- design suggests evaluation
- use evaluation to create as well as critique
Design and evaluation methods must fit development constraints:
- budget, resources, time, product cost…
- do triage: what is most important given the constraints?
Design usually needs quick, approximate answers:
- precise results are rarely needed
- close enough, good enough, informed guesses…
See the optional reading by Don Norman, "Applying the Behavioural, Cognitive and Social Sciences to Products".

Why Use Different Methods?
Method definition (Baecker; McGrath):
- formalized procedures / tools that guide and structure the process of gathering and analyzing information
Different methods can do different things:
- each method offers potential opportunities not available by other means
- each method has inherent limitations…

Why Use Different Methods?
All methods:
- enable but also limit what can be gathered and analyzed
- are valuable in certain situations, but weak in others
- have inherent weaknesses and limitations
- can be used to complement each other's strengths and weaknesses
(McGrath, "Methodology Matters")

Why Use Different Methods?
Information requirements differ:
- pre-design, iterative design, post-design, generalizable knowledge…
Information produced differs:
- outputs should match the particular problem / needs
Relevance:
- does the method provide information that answers our question / problem?
- it is not which method is best, but which method best answers the question you are asking

How Can We Compare Methods?
Naturalistic:
- is the method applied in an ecologically valid situation?
- observations reflect real-world settings: real environment, real tasks, real people, real motivation
Repeatability:
- would the same results be achieved if the test were repeated?
Validity:
- external validity: can the results be applied to other situations? are they generalizable?
- internal validity: do we have confidence in our explanation?

How Can We Compare Methods?
Product relevance:
- does the test measure something relevant to the usability and usefulness of real products in real use outside the lab?
Some typical reliability problems of testing vs. real use:
- non-typical users are tested
- tasks are not typical tasks
- tests measure usability rather than usefulness
- the physical environment differs: quiet lab vs. very noisy open offices vs. interruptions
- social influences differ: motivation towards an experimenter vs. motivation towards a boss

How Can We Compare Methods?
Partial solution for product relevance:
- use real users
- use real tasks (task-centered system design)
- use an environment similar to the real situation
- use a context similar to the real situation

Why Use Different Methods?
Cost / benefit of using a method:
- the cost of a method should match the benefit gained from the result
Constraints and pragmatics:
- may force you to choose quick-and-dirty discount usability methods

How Can We Compare Methods?
Quickness:
- can I do a good job with this method within my time constraints?
Cost:
- is the cost of using this method reasonable for my question?
Equipment:
- what special equipment / resources are required?
Personnel, training and expertise:
- what people / expertise are required to run this method?

How Can We Compare Methods?
Subject selection:
- how many subjects do I need, who are they, and can I get them?
Scope of subjects:
- is the method good for analyzing individuals? small groups? organizations?
Type of information (qualitative vs. quantitative):
- is the information quantitative and amenable to statistical analysis?
Comparative:
- can I use it to compare different things?
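
To make the quantitative / qualitative distinction concrete, the following sketch computes simple descriptive statistics over invented task-completion times, the sort of analysis that quantitative study data readily supports (qualitative data such as interview transcripts would instead require coding and thematic analysis). All numbers are hypothetical, and the 95% interval uses a rough normal approximation.

```python
import statistics as stats

# Invented task-completion times (seconds) from ten participants.
times = [42.0, 55.5, 38.2, 61.0, 47.3, 52.8, 44.1, 58.6, 49.9, 46.4]

mean = stats.mean(times)
sd = stats.stdev(times)                        # sample standard deviation
se = sd / len(times) ** 0.5                    # standard error of the mean
ci95 = (mean - 1.96 * se, mean + 1.96 * se)    # rough normal-approximation interval

print(f"mean = {mean:.1f} s, sd = {sd:.1f} s")
print(f"approx. 95% CI = [{ci95[0]:.1f}, {ci95[1]:.1f}] s")
```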

How Can We Compare Methods?
Control:
- can I control for certain factors to see what effects they have?
Cross-sectional or longitudinal:
- can it reveal changes over time?
Setting:
- field vs. laboratory?
Support:
- are there tools for supporting the method and analyzing the data?

How Can We Compare Methods?
Routine application:
- is there a fairly standard way to apply the method to many situations?
Theoretic basis:
- is there a theoretical basis behind the method?
Result type:
- does it produce a description or an explanation?
Metrics:
- are there useful, observable phenomena that can be measured?

How Can We Compare Methods?
Measures:
- can I see processes or only outcomes?
Organizational fit:
- can the method be included within an organization as part of a software development process?
Politics:
- are there 'method religion wars' that may bias method selection?

What methods are there?
Laboratory tests:
- require human subjects who act as end users
Experimental methodologies:
- highly controlled observations and measurements to answer very specific questions, i.e., hypothesis testing
Usability testing:
- mostly qualitative, less controlled observations of users performing tasks
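
A controlled experiment of the kind described above usually ends with a statistical comparison of conditions. The sketch below is a hypothetical example, not from the original slides: invented task times for two interface designs compared with an independent-samples t-test from SciPy. A real study would also check the test's assumptions and report effect sizes.

```python
from scipy import stats  # assumes SciPy is installed

# Invented task times (seconds) for two interface designs, between-subjects.
design_a = [41.2, 47.8, 39.5, 52.1, 44.0, 48.6, 43.3, 50.9]
design_b = [55.4, 49.0, 58.2, 61.7, 53.8, 57.1, 60.3, 52.5]

result = stats.ttest_ind(design_a, design_b)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
if result.pvalue < 0.05:
    print("difference is statistically significant at the 0.05 level")
else:
    print("no significant difference detected")
```

With a within-subjects design, a paired test (scipy.stats.ttest_rel) would be used instead of the independent-samples test shown here.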

What methods are there?
Interface inspection:
- done by interface professionals; no end users necessary
Usability heuristics:
- several experts analyze an interface against a handful of principles
Walkthroughs:
- experts (and others) analyze an interface by considering what a user would have to do, a step at a time, while performing their task

What methods are there?
Field studies:
- require established end users in their work context
Ethnography:
- a field worker immerses themselves in a culture to understand what that culture is doing
Contextual inquiry:
- an interview methodology that gains knowledge of what people do in their real-world context

What methods are there?
Self-reporting:
- requires established or potential end users
- interviews
- questionnaires
- surveys

What methods are there?
Cognitive modeling:
- requires detailed interface specifications
Fitts' Law:
- a mathematical expression that predicts a user's time to select a target
Keystroke-level model (KLM):
- a low-level description of what users would have to do to perform a task, used to predict how long it would take them
GOMS:
- a structured, multi-level description of what users would have to do to perform a task, which can also be used to predict time
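
Both predictive models named above can be written down directly as small formulas. The sketch below illustrates this with assumed Fitts' law coefficients (a, b) and approximate keystroke-level-model operator times drawn from Card, Moran and Newell; in practice these constants must be calibrated for the specific device and user population being modeled.

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Fitts' law (Shannon form): MT = a + b * log2(D/W + 1).
    a and b are device-dependent constants; the values here are illustrative."""
    return a + b * math.log2(distance / width + 1)

# Keystroke-level model: sum standard operator times for a task sequence.
# Approximate operator times (seconds), after Card, Moran & Newell.
KLM_TIMES = {
    "K": 0.28,  # press a key or button (average typist)
    "P": 1.10,  # point with a mouse to a target
    "H": 0.40,  # move hand between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def klm_time(operators):
    """Predict task time from a string of KLM operators, e.g. 'MHPK'."""
    return sum(KLM_TIMES[op] for op in operators)

# Example: select a target 200 pixels away and 20 pixels wide...
print(f"Fitts prediction: {fitts_movement_time(200, 20):.2f} s")
# ...and a small sequence: think, home hand to mouse, point, click.
print(f"KLM prediction:   {klm_time('MHPK'):.2f} s")
```

Predictions like these are useful for comparing design alternatives before any implementation or user testing exists, which is why the slide groups them under cognitive modeling.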

Goals of Behavioural Evaluation
- Designer: user-centered, iterative design
- Customer: selecting among systems
- Manager: assessing effectiveness
- Marketer: building a case for the product
- Researcher: developing a knowledge base
(From Finholt & Olson's CSCW '96 tutorial)

Course goal
To provide you with a toolbox of evaluation methodologies for both research and practice in Human Computer Interaction.
To achieve this, you will:
- investigate, compare and contrast many existing methodologies
- understand how each methodology fits particular interface design and evaluation situations
- practice several of these methodologies on simple problems
- gain first-hand experience with a particular methodology by designing, running, and interpreting a study