CS 422: UI Design and Programming


CS 422: UI Design and Programming Intro to User Testing

Housekeeping GR3 is due on Thursday, 3/7; the rubric and examples are posted on Piazza. Your superhero TAs have posted both PS1 and GR2 grades! Graduate students: note the upcoming assignments and don't leave them until the last minute. Midterm syllabus: everything we have covered to date. The format of the midterm exam and the midterm presentations will be reviewed in the next class. You should already be working on PS2.

User Testing

Today we’ll learn the following: different kinds of user testing; the ethics of human subject research; formative evaluation in detail; user testing in class (get some of your GR3 done).

What is user testing? We learned about usability and how to design for it. Now we need to put our designs to the test. We can test our user interfaces during design and development, or after development. Formative evaluation is done between design iterations; summative evaluation is done after development (you will do this in GR6). User testing is about putting an interface in front of real users.

System testing vs. user testing System testing does NOT involve human subjects or real users of the system; it focuses on the functionality and utility of the system, and sometimes on non-functional requirements such as latency. User testing always involves human subjects or real users of the system, and focuses on usability dimensions such as learnability, efficiency, and safety.
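
To make the contrast concrete, here is a minimal sketch of what system testing (not user testing) looks like in code. The `search` function is a made-up stand-in for a real system component, and the 100 ms latency budget is an invented example of a non-functional requirement; neither comes from the lecture.

```python
import time

# Hypothetical system under test: a stub standing in for a real
# backend search call (name and behavior are illustrative only).
def search(query):
    return [item for item in ["apple", "banana", "cherry"] if query in item]

# Functional check: does the system return the right results?
assert search("an") == ["banana"]

# Non-functional check: is latency within an assumed 100 ms budget?
start = time.perf_counter()
search("a")
elapsed = time.perf_counter() - start
assert elapsed < 0.1

print("system tests passed")
```

Note that nothing here tells us whether the interface is learnable or efficient for a person; that is what user testing adds.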

Different kinds of user testing Formative evaluation Field study Controlled experiment

Formative evaluation Find problems for the next iteration of design (during the design process). Formative evaluation doesn't need a full working implementation; it can be done on a variety of prototypes. It evaluates a prototype or implementation, in the lab, on chosen tasks. The results are largely qualitative observations, usually a list of usability problems. You also choose the tasks given to users; these are generally realistic (drawn from task analysis, which is based on observation) but nevertheless fake.

Formative evaluation: key disadvantages The testing environment is controlled and the tasks are fake: you come up with representative tasks based on your scenarios and your understanding of the problem space. Running a test in a lab environment on tasks of your own invention may not tell you enough about how well your interface will work in a real context on real tasks. How do we fix that?

Field study A field study can answer these questions by actually deploying a working implementation to real users, then going out to the users' real environment and observing how they use it. Methods include ethnography, ethnomethodology, interviews, diaries, data logging, the Experience Sampling Method (ESM), and surveys. A field study is mostly qualitative and is not used to test a hypothesis.

Controlled experiment The goal of a controlled experiment is to test a quantifiable hypothesis about one or more interfaces; think A/B testing. Controlled experiments happen under carefully controlled conditions using carefully designed tasks, chosen much more carefully than formative evaluation tasks. Hypotheses can only be tested by quantitative measurements of usability, like time elapsed, number of errors, or subjective ratings.
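
As an illustration of analyzing such quantitative measurements, here is a sketch comparing task-completion times between two interface variants with Welch's t statistic. The times are invented numbers, and a real analysis would also compute a p value and check the test's assumptions; this only shows the shape of the comparison.

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical task-completion times (seconds) from a between-subjects
# A/B test of two interface variants; these numbers are made up.
times_a = [41.2, 38.5, 44.0, 39.9, 42.3, 40.1]
times_b = [35.0, 33.8, 36.9, 34.2, 37.5, 35.6]

def welch_t(x, y):
    """Welch's t statistic for two independent samples."""
    vx, vy = stdev(x) ** 2, stdev(y) ** 2
    return (mean(x) - mean(y)) / sqrt(vx / len(x) + vy / len(y))

t = welch_t(times_a, times_b)
print(f"mean A = {mean(times_a):.1f}s, mean B = {mean(times_b):.1f}s, t = {t:.2f}")
```

A positive t here means variant A took longer on average than variant B; whether that difference is statistically significant depends on the degrees of freedom and chosen significance level.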

Different kinds of user testing Formative evaluation: everyone (GR3). Field study: not covered in this class. Controlled experiment: graduate students only (AS1-3 / RS1-3).

Ethics of human subject research User testing always involves human subjects or real users of the system. "Real" often means representative users.

Ethics of human subject research Users are human beings. Human subjects have been seriously abused in the past: the Nazi-era experiments led to the Nuremberg Code, an international agreement on the rights of human subjects.

Basic principles (Belmont Report) Respect for persons: voluntary participation; informed consent; protection of vulnerable populations (children, prisoners, and people with disabilities, especially cognitive disabilities). Beneficence: do no harm; weigh risks against benefits, so that risks to subjects are commensurate with the benefits of the work to the subjects or to society. Justice: fair selection of subjects.

Institutional Review Boards Research with people is subject to scrutiny IRB oversight is confined to research

UIC IRB

Pressures on a user Performance anxiety; the test feels like an intelligence test; comparing oneself with other subjects; feeling stupid in front of observers; competing with other subjects. A programmer with an ironclad ego may scoff at such concerns, but these pressures are real.

Treat the user with respect Time: don't waste it. Comfort: make the user comfortable. Informed consent: inform the user as fully as possible. Privacy: preserve the user's privacy. Control: the user can stop at any time.

What to do before a test? Time: pilot-test all materials and tasks so the session doesn't waste the user's time. Comfort: "We're testing the system; we're not testing you. Any difficulties you encounter are the system's fault. We need your help to find these problems." Privacy: "Your test results will be completely confidential." Information: brief the user about the purpose of the study; inform them about audiotaping, videotaping, and any other observers; answer any questions beforehand (unless doing so would bias the test). Control: "You can stop at any time."

What to do during a test? Time: eliminate unnecessary tasks. Comfort: keep a calm, relaxed atmosphere; take breaks in long sessions; never act disappointed; give tasks one at a time; make the first task easy, for an early success experience. Privacy: the user's boss shouldn't be watching. Information: answer the user's questions, as long as they don't bias the test. Control: the user can give up a task and go on to the next, or quit entirely.

After the test Comfort: tell users what they've helped you accomplish. Information: answer the questions you had to defer to avoid biasing the experiment. Privacy: don't publish user-identifying information, and don't show video or audio without the user's permission.

Formative evaluation: how to do it Find some users; they should be representative of the target user class(es), based on your user analysis. Give each user some tasks; these should be representative of important tasks, based on your task analysis. Watch the users do the tasks.

Roles in Formative Evaluation User Facilitator Observers

User’s role The user should think aloud: say what they think is happening, what they're trying to do, and why they took an action. Problems with think-aloud: it feels weird, it may alter behavior, and it disrupts concentration. Another approach is pairs of users: two users working together are more likely to converse naturally. This is also called co-discovery or constructive interaction.

Facilitator’s role Does the briefing and provides the tasks. Coaches the user to think aloud by asking questions: "What are you thinking?" "Why did you try that?" Controls the session and prevents interruptions by observers.

Observer’s role Be quiet! Don't help, don't explain, don't point out mistakes; sit on your hands if it helps. Take notes, and watch for critical incidents: events that strongly affect task performance or satisfaction. Critical incidents are usually negative (errors, repeated attempts, curses) but may be positive ("Cool!" "Oh, now I see.").
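
One way to keep observer notes consistent across a session is a small structured record per critical incident. This is only a sketch; the field names and example incidents below are my own invention, not a format prescribed in the lecture.

```python
from dataclasses import dataclass, field
from datetime import datetime

# A minimal structured note for observers; fields are illustrative.
@dataclass
class CriticalIncident:
    task: str
    description: str
    positive: bool = False          # most critical incidents are negative
    timestamp: datetime = field(default_factory=datetime.now)

# Hypothetical notes from one session:
incidents = [
    CriticalIncident("create account", "user retried the Submit button 3 times"),
    CriticalIncident("search", "'Oh, now I see.' (recovered without help)",
                     positive=True),
]

negatives = [i for i in incidents if not i.positive]
print(f"{len(negatives)} negative incident(s) out of {len(incidents)} logged")
```

Tagging each incident with its task makes it easy to sort the resulting list of usability problems by task when writing up the next design iteration.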

How to record observations? Pen and paper notes (prepared forms can help). Audio recording, for think-aloud. Video recording: usability labs are often set up with two cameras, one for the user's face and one for the screen; the user may be self-conscious; good for closed-circuit viewing by observers in another room; generates a lot of data. Retrospective testing: go back through the video with the user, discussing critical incidents. Screen capture and event logging: cheap and unobtrusive (e.g., Camtasia, CamStudio).
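
Event logging is the easiest of these to build yourself. Here is a minimal sketch of a timestamped event log of the kind a screen-capture or instrumentation tool might produce; the event names and helper function are illustrative, not from any particular tool.

```python
import json
import time

# A minimal event logger: each entry records a timestamp, an event
# name, and arbitrary details (all names here are illustrative).
log = []

def log_event(event, **details):
    log.append({"t": time.time(), "event": event, **details})

# Hypothetical events from one task:
log_event("task_start", task=1)
log_event("click", target="Save")
log_event("error", message="validation failed")
log_event("task_end", task=1)

errors = sum(1 for e in log if e["event"] == "error")
print(f"{len(log)} events, {errors} error(s)")
print(json.dumps(log[-1]))
```

Because each entry is timestamped, elapsed time per task and error counts, two of the quantitative measures mentioned earlier, fall straight out of the log.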

User testing in class: get some of your GR3 done For GR3, you should do at least two rounds of testing, each involving at least three users. I suggest you finish one round of testing in class: draft tasks, recruit users, take notes, repeat.

Why three users? Discount usability. From Nielsen's retrospective on discount usability: "Each team tested 3 users, and I judged how well they ran the study on a 0–100 scale, with 100 representing my model of the ideal test. In total, the software package had 8 major usability problems (along with more smaller issues that I didn't analyze)." https://www.nngroup.com/articles/discount-usability-20-years/
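
The usual quantitative argument for small test groups is Nielsen and Landauer's model: the proportion of usability problems found by n users is about 1 - (1 - L)^n, where L, the average probability that a single user exposes a given problem, was roughly 0.31 in their studies. A quick sketch of what the model predicts:

```python
# Nielsen & Landauer's model of problem discovery:
# proportion found by n users = 1 - (1 - L)^n, with L ~ 0.31
# as the average per-user detection rate reported in their work.
def problems_found(n, L=0.31):
    return 1 - (1 - L) ** n

for n in (1, 3, 5, 15):
    print(f"{n:2d} users -> {problems_found(n):.0%} of problems found")
```

Under these assumptions, three users already surface roughly two-thirds of the problems, which is why several small rounds of testing (as in GR3) beat one big round: you fix what the first three users find, then test again.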

Next class Review for Mid-Terms