Evaluation of Mobile Interfaces


Evaluation of Mobile Interfaces
Professor John Canny, Spring 2006

Review
Last time:
- User testing process
- Severity and cost ratings
- Discount usability methods
- Heuristic evaluation
- HE vs. user testing

Mobile Evaluation
What changes in the mobile space?
- People are moving
- People are distracted, in a hurry
- It's hard to use both hands
- There are often other people around
- …

Mobile Evaluation
But both readings showed that "laboratory" evaluation can still work very well. The "Series 60 Voice Mailbox in Rome" study used all the methods we talked about:
- Lo-fi paper prototypes
- Wizard-of-Oz simulation
- Horizontal and vertical prototypes
- But with an Italian moderator and translator

Mobile Evaluation
Kjeldskov and Stage's "New Techniques…" paper compared 6 methods for mobile evaluation:
1. Sitting or standing around a table, the same as standard usability studies.

Mobile Evaluation 2. Walkin’ in the lab, on a treadmill at constant speed. 11/22/2018

Mobile Evaluation 3. Walkin’ on a treadmill with varying speed 11/22/2018

Mobile Evaluation 4. Walkin’ at constant speed on a track that is changing with moving obstacles 11/22/2018

Mobile Evaluation 5. Walkin’ at varying speed on a track that is changing with moving obstacles 11/22/2018

Mobile Evaluation 6. Walkin’ on a pedestrian street. 11/22/2018

Results
The largest number of usability problems was detected in case 1 (users sitting down!). But when broken down by severity (critical, serious, cosmetic), the only significant difference was in cosmetic errors, i.e. sitting users found cosmetic errors that walking users did not.

Workload Results
Task Load Index (TLX): a method from NASA for measuring the subjective difficulty of a task. TLX increased with task difficulty (roughly from condition 1 to 6), but not significantly. So did physical effort and overall workload.
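For concreteness, here is a minimal Python sketch of how the standard weighted NASA-TLX score is computed from the six subscale ratings and the 15 pairwise comparisons; the ratings and weights below are invented for illustration.

```python
# Weighted NASA-TLX: six subscales rated 0-100, weighted by how often
# each subscale was picked in the 15 pairwise comparisons (weights sum
# to 15). Example numbers are made up for a hypothetical participant.

SUBSCALES = ["mental", "physical", "temporal", "performance",
             "effort", "frustration"]

def tlx_score(ratings: dict, weights: dict) -> float:
    """Overall workload score in [0, 100]."""
    assert sum(weights.values()) == 15, "weights come from 15 pairwise comparisons"
    return sum(ratings[s] * weights[s] for s in SUBSCALES) / 15.0

ratings = {"mental": 55, "physical": 70, "temporal": 60,
           "performance": 40, "effort": 65, "frustration": 35}
weights = {"mental": 3, "physical": 4, "temporal": 2,
           "performance": 1, "effort": 4, "frustration": 1}

print(tlx_score(ratings, weights))  # 60.0
```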

Workload Results
So the elaborate variations on walking in the lab really did make the task harder for users. Their setups are quite easy to replicate (treadmills, walking obstacle courses). Other ways to recreate the outdoor experience?

Error Results
Typing error rates were higher on the difficult tasks (higher condition numbers), which is a warning against sitting-only tests. Task difficulty also explains the higher number of cosmetic errors found in case 1.

Going Mobile
In a laboratory user study we have:
- A facilitator helping the user to think aloud
- Several observers looking for critical incidents
- A wizard simulating the computer
How can we replicate this on a mobile device?

Ideas?

Ideas
The facilitator can use the phone for voice contact with the subject. It is probably important to give the facilitator some context, such as frequent images or video.

Ideas
WoZ: we need two-way data transfer, from the phone keys to the wizard and from the wizard back to the dialog elements. Paper won't work, but a Denim-like lo-fi tool could.
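A minimal sketch of that two-way channel, assuming a plain TCP connection between the phone-side client and the wizard's console; the port, message format, and demo key presses are all invented for illustration.

```python
# Wizard-of-Oz relay sketch: the phone forwards key presses to a human
# wizard, who types back the next "screen" for the phone to render.
import socket
import threading
import time

HOST, PORT = "0.0.0.0", 9000  # hypothetical wizard machine

def wizard_server():
    """Wizard side: show each key press, send back the next screen."""
    with socket.create_server((HOST, PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            while True:
                key = conn.recv(64).decode().strip()
                if not key:
                    break
                print(f"[subject pressed] {key}")
                screen = input("next screen to show> ")
                conn.sendall((screen + "\n").encode())

def phone_client(key_presses):
    """Phone side: forward each key press, render whatever comes back."""
    with socket.create_connection(("localhost", PORT)) as sock:
        for key in key_presses:
            sock.sendall((key + "\n").encode())
            screen = sock.recv(1024).decode().strip()
            print(f"[phone renders] {screen}")

threading.Thread(target=wizard_server, daemon=True).start()
time.sleep(0.2)  # let the server start listening before connecting
phone_client(["1", "menu", "ok"])
```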

Ideas
Recording: sound is OK, but video? We really want to see the user's expression and also their hands near the phone.
- Try eyeglass cameras for the hands.
- A camera on a phone for the user's face; we need a video-style phone for this.

Break

Research in Mobile Evaluation
More advanced methods:
- ESM (Experience Sampling Method): a mobile system prompts the user at random times: "What are you doing right now?"
- Diary studies: users record the significant events around them.

Experience Sampling Method
The system generates events at random times. These events trigger data capture on the mobile device, which could include:
- Location or other sensor data (no user input)
- A short question for the user, especially multiple choice
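A minimal sketch of such a trigger loop, assuming hypothetical capture_location() and prompt_user() hooks standing in for whatever the platform actually provides.

```python
# ESM trigger loop sketch: fire sampling events at random times and
# capture a sensor reading plus a short multiple-choice answer.
import random
import time

def esm_loop(capture_location, prompt_user, mean_gap_s=3600):
    """Sample at random times (exponential gaps give memoryless timing)."""
    while True:
        time.sleep(random.expovariate(1.0 / mean_gap_s))
        record = {
            "time": time.time(),
            "location": capture_location(),   # no user input needed
            "activity": prompt_user(          # short multiple-choice question
                "What are you doing right now?",
                choices=["working", "travelling", "socializing", "other"]),
        }
        print(record)  # in a real study: append to a log, upload later
```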

Experience Sampling
Even though only a tiny fraction of the user's actual behavior is sampled, random sampling gives an unbiased estimate of what they're doing the rest of the time.
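A quick simulation of the unbiasedness claim; the activity labels and proportions are invented for illustration.

```python
# Sample a day's activity stream at random moments and compare the
# sampled fraction to the true fraction.
import random

day = ["using app"] * 30 + ["other"] * 1410  # 1440 minutes, ~2.1% app use
random.shuffle(day)

samples = [random.choice(day) for _ in range(200)]  # 200 random ESM probes
estimate = samples.count("using app") / len(samples)

print(f"true fraction:    {day.count('using app') / len(day):.3f}")
print(f"sampled estimate: {estimate:.3f}")  # close to the truth, on average
```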

Experience Sampling
Experience sampling originated as a method for studying human behavior, not UI performance. However, it is quite simple to do, and it can provide good information about certain mobile applications.

Experience Sampling Tradeoffs
Pros:
- Unbiased sampling of user behavior
- Simple to do; typically text-only interaction (you can use SMS for it)
Cons:
- Randomly samples all behavior, not just system use
- Very unlikely to catch critical incidents
- Very little information is recorded, so it is hard to recognize and diagnose problems later

Experience Sampling Variants
Use system interaction as a trigger:
- Only sample when the user is "using" the system, e.g. for up to 30 seconds after the last key press (see the sketch below).
- If possible, look for apparent critical incidents and use those as triggers.
Other ideas?
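A minimal sketch of the interaction trigger: a probe is allowed only within a window after the last key press. The 30-second window follows the slide; the event hook names are hypothetical.

```python
# Interaction-triggered ESM: gate sampling on recent key activity.
import time

WINDOW_S = 30.0
_last_keypress = None

def on_key_press():
    """Call this from the platform's key-event hook."""
    global _last_keypress
    _last_keypress = time.monotonic()

def should_sample_now():
    """True only while the user is actively 'using' the system."""
    if _last_keypress is None:
        return False
    return (time.monotonic() - _last_keypress) <= WINDOW_S
```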

Diary Studies
In a diary study, the user records significant events by themselves. There are two kinds:
- Feedback studies: users answer predefined questions about each significant event.
- Elicitation studies: users make a recording when the event happens, to serve as a memory aid in a later interview. The recording could be a photo, a voice recording, or video.
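One way to picture the data the two kinds of study collect: a sketch of a diary-entry record, with field names invented for illustration.

```python
# A diary entry carries either predefined answers (feedback study) or a
# media pointer to be annotated at the later interview (elicitation study).
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DiaryEntry:
    timestamp: float
    kind: str                                    # "feedback" or "elicitation"
    answers: dict = field(default_factory=dict)  # feedback: predefined questions
    media_path: Optional[str] = None             # elicitation: photo/audio/video
    interview_notes: str = ""                    # filled in at the later interview
```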

Diary Study Tradeoffs
Feedback studies are hard: users are notoriously unreliable in reporting all events, and they are also bad at judging how often and for how long they do things.
Intel Smart Home Study:
- Participants wear location sensors all day (Ubisense)
- Their behavior is also studied using diary questionnaires

Elicitation Studies
These have a number of advantages. Some recent research results:
- Photos seem to work better than other media in terms of recall.
- But photo capture is very intrusive in social situations, so audio is often used there.
- Even better results are obtained with media capture plus question-and-answer annotations.

Summary
- Laboratory evaluations of mobile systems can work quite well (but more data is needed).
- You can simulate the cognitive and physical effects of walking in the lab.
- Experience sampling and diary studies are two methods that support truly mobile evaluation.
- Elicitation diary studies are probably the most effective.
- Annotated photo capture is the best medium for an elicitation study.