PS50118 – Interacting with Technology
Laboratory vs. Field Usability Evaluation
Jason Cooper

Outline
What is Usability?
Carrying out Usability Testing
Usability Evaluation of Mobile Devices
The Comparison
Conclusion

What is Usability?
Usability is commonly considered to be a way of ensuring that interactive systems are easy to learn, effective to use, and enjoyable from the user's perspective (Preece et al 2002).
The ISO definition states that usability refers to the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.
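As an illustrative aside (not taken from the slides), the effectiveness and efficiency components of this definition are often operationalised as simple ratios along the following lines; the exact formulas vary between studies.

```latex
% Illustrative operationalisations only; the ISO definition does not mandate these.
\[
\text{Effectiveness} = \frac{\text{number of tasks completed successfully}}{\text{number of tasks attempted}} \times 100\%
\]
\[
\text{Time-based efficiency} = \frac{1}{NR}\sum_{j=1}^{R}\sum_{i=1}^{N} \frac{n_{ij}}{t_{ij}}
\]
% N = number of tasks, R = number of users, n_{ij} = 1 if user j completed
% task i successfully (0 otherwise), and t_{ij} is the time user j spent on task i.
```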

Usability Goals (Preece et al 2002)
Effectiveness: how good the system is at doing what it is supposed to do.
Efficiency: how well the system supports users in carrying out their activities, and whether they can use the system productively once they have gained enough experience.
Safety: the extent to which the user is protected from dangerous conditions and undesirable situations.
Utility: whether the system provides the right kind of functionality at the right time, so that users can do what they need or want to do.
Learnability: how easy it is for a user to become competent in the use of the system.
Memorability: how straightforward it is for the user to remember how to use the system once it has been learned.

Usability Testing – The Lab (1)
Traditionally, usability testing is carried out in a controlled environment where the product is tested to determine whether it can be considered usable.
The controlled environment is usually a laboratory in which a set of pre-planned activities or scenarios can be run and measured repeatedly.
Its goal is to assess whether the product will do what it is intended to do.

Usability Testing – The Lab (2)
The data collected includes users' opinions of the system and their performance on the set of activities.
Quantitative performance measures are gathered, which allow the following types of data to be produced:
Time taken by the user to complete a specific activity.
Time to complete an activity after a specified time away from the product.
Number and type of errors per activity.
Number of errors per specified unit of time.
Number of times the user had to navigate to the online help or manuals.
Number of users making a particular error.
Number of users completing a specific activity successfully.
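To make these measures concrete, here is a minimal sketch of how some of them might be computed from test-session logs. The log format, field names and tasks are hypothetical and not taken from the presentation.

```python
from collections import defaultdict

# Hypothetical per-task session records from a lab test; the field names
# (user, task, seconds, errors, used_help, completed) are illustrative only.
sessions = [
    {"user": "P1", "task": "send_message", "seconds": 42.0, "errors": 1, "used_help": 0, "completed": True},
    {"user": "P2", "task": "send_message", "seconds": 65.5, "errors": 3, "used_help": 1, "completed": False},
    {"user": "P1", "task": "add_contact",  "seconds": 30.2, "errors": 0, "used_help": 0, "completed": True},
    {"user": "P2", "task": "add_contact",  "seconds": 55.0, "errors": 2, "used_help": 0, "completed": True},
]

# Group the records by activity.
per_task = defaultdict(list)
for record in sessions:
    per_task[record["task"]].append(record)

# Derive the kinds of quantitative measures listed above, per activity.
for task, records in per_task.items():
    n = len(records)
    mean_time = sum(r["seconds"] for r in records) / n          # time to complete the activity
    errors_per_user = sum(r["errors"] for r in records) / n     # errors per activity, averaged over users
    help_uses = sum(r["used_help"] for r in records)            # navigations to online help or manuals
    completion_rate = sum(r["completed"] for r in records) / n  # users completing the activity successfully
    print(f"{task}: mean time {mean_time:.1f}s, "
          f"{errors_per_user:.1f} errors/user, "
          f"{help_uses} help look-ups, "
          f"{completion_rate:.0%} completion rate")
```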

Usability Testing – The Field
Field testing is usually conducted to determine how a product or prototype is adopted and used by people in their working and everyday lives.
The length of time testing in the field can take varies from just a few minutes to months, or in some cases even years, depending on the product being tested.
It provides predominantly qualitative data, such as descriptions of people's behaviours and activities.
Data are collected by observing and interviewing users, and by gathering video, audio and field notes that attempt to detail what has occurred in the environment during the testing.

Usability Evaluation of a Mobile Device (1)
There has been a shift in focus from the lab to the field.
Nielsen et al (2002) asserted that mobile devices should always be evaluated in a realistic and natural setting.
The initial reason was that lab testing was thought unlikely to find all the problems that occur in real mobile usage (Johnson 1998).
There also appears to be an implicit assumption that the usability of a mobile device can only be properly evaluated in the field (Gregory et al 2000; Brewster 2002).

Usability Evaluation of a Mobile Device (2)
However, there remained a significant preference for lab-based evaluation, with 71% of evaluations undertaken in the lab and only 19% conducted in the field (Kjeldskov 2005).
Reasons for this included:
Field evaluation was considered time consuming in terms of organisation and data collection.
Data recording was complicated.
There was a lack of control.
It was hard to know whether everything that should have been evaluated had been evaluated.
Laboratory evaluations, in contrast, offered:
Controlled conditions.
Clearly set tasks.
A peaceful space that enabled concentration.
Control over activities and monitoring.
Special equipment.

The Comparison
Several empirical studies have attempted to compare usability evaluation of mobile systems in different settings. They provide different results, but all focus on the number, type and severity of mobile-device usability problems found in each setting:
Kjeldskov et al (2004): Is it Worth the Hassle? Exploring the Added Value of Evaluating the Usability of Context-Aware Mobile Systems in the Field.
Kaikkonen et al (2005): Usability testing of mobile applications: A comparison between laboratory and field testing.
Nielsen et al (2006): It's worth the hassle! The added value of evaluating the usability of mobile systems in the field.

Field, it's not worth the hassle! (1)
Laboratory evaluation discovered exactly the same number of usability problems as was discovered in the field.
There was a lack of control in the field.
Both the field and laboratory evaluations were able to reveal context-aware related problems, which contradicts literature suggesting that context-aware problems are better uncovered in a field setting.

Field, it's not worth the hassle! (2)
In conclusion, the two studies reported that:
Realistic aspects were not a problem.
Some lab-identified problems may be false positives.
Field studies were instead proposed for other areas of the development lifecycle, providing better insight into what was needed from the system in the first place.

Field, it's worth the hassle! (1)
Nielsen et al (2006) proposed that the contradictory results in the Kjeldskov et al (2004) and Kaikkonen et al (2005) reports were possibly due to:
A low number of test subjects.
The same data collection techniques not being employed in the field as in the lab.
Conflicting procedures.

Field, it's worth the hassle! (2)
Nielsen et al (2006) carried out a comparison study of field and laboratory usability evaluation of a mobile device using similar conditions and the same data collection equipment. It showed that:
Usability problems categorised as relating to either cognitive load or interaction style were identified only in the field evaluation.
The reason given was that the field's realistic setting meant that users became frustrated more easily.
The nature of the laboratory setting was also said to increase the mental demands and frustration level of the participants significantly.
When both evaluations were conducted in the same way, the field was more successful at identifying the more significant usability problems.
In conclusion, although the cost, complexity and time required to carry out a field evaluation are downsides, Nielsen et al consider that the added value gained, in terms of field evaluation's capability to reveal usability issues not detected in the laboratory setting, makes field evaluation worthwhile.

Conclusion (1)
So is it worth it? Yes: if we are able to detect usability problems in the field that are not detected in the lab, then we must undertake a field study.
But why are so few people doing it? For all the reasons outlined in this presentation:
Costly in terms of money and time.
Little control.
People still wonder whether it is worthwhile.
Complicated.
We must also remember that mobile systems are relatively new, so people are still used to evaluating in the lab; but as more evaluation takes place in the field, I believe more advantages are likely to emerge.

Conclusion (2)
However, alternatives do exist whereby the field is simulated in the lab (D. Svanæs ???).
A hybrid research strategy was used, with a full-scale simulated ward environment created with the help of health workers.
With the aid of video recording, they were able to observe details in patient-doctor interaction and in the technology that had been overlooked in the field study. However, the field study gave a much richer picture.
The conclusion is that the lab and the field supplement each other, and it is the combination that provides valuable insights that cannot be gained from one method alone.

References
Preece, J., Rogers, Y. and Sharp, H. (2002) Interaction Design: Beyond Human-Computer Interaction. West Sussex: Wiley.
Nielsen, C. M., Overgaard, M., Pedersen, M. B., Stage, J. and Stenild, S. (2006) It's worth the hassle! The added value of evaluating the usability of mobile systems in the field. In Proceedings of the 4th Nordic Conference on Human-Computer Interaction: Changing Roles (NordiCHI '06), Oslo, Norway, October 2006. A. Mørch, K. Morgan, T. Bratteteig, G. Ghosh and D. Svanæs, Eds. ACM, New York, NY.
Kjeldskov, J., Skov, M. B., Als, B. S. and Høegh, R. T. (2004) Is it Worth the Hassle? Exploring the Added Value of Evaluating the Usability of Context-Aware Mobile Systems in the Field. In Proceedings of the 6th International Mobile HCI 2004 Conference. LNCS, Springer-Verlag.
Kaikkonen, A., Kallio, T., Kekäläinen, A., Kankainen, A. and Cankar, M. (2005) Usability testing of mobile applications: A comparison between laboratory and field testing. Journal of Usability Studies, 1(1).
Baillie, L. (2003) Future Telecommunication: Exploring actual use. In Proceedings of the IFIP TC13 International Conference on Human-Computer Interaction (INTERACT '03). IOS Press.
Abowd, G. D. and Mynatt, E. D. (2000) Charting past, present, and future research in ubiquitous computing. ACM Transactions on Computer-Human Interaction (TOCHI), 7(1), 29-58, March 2000.