
1 Debriefing, Recommendations
CSSE 376, Software Quality Assurance
Rose-Hulman Institute of Technology
May 3, 2007

2 Outline
- Post-test Questionnaire
- Debriefing
- Final Report: Findings, Analysis, Recommendations

3 Post-test Questionnaire
Purpose: collect preference information from the participant.
May also be used to collect background information.

4 Likert Scales
"Overall, I found the widget easy to use."
a. strongly agree
b. agree
c. neither agree nor disagree
d. disagree
e. strongly disagree
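Purely as an illustration (not from the slides), a Likert item like the one above can be represented as a small data structure that fixes the prompt and the allowed choices; the names LikertItem and SCALE are hypothetical.

```python
# Minimal sketch of a Likert item with a fixed set of choices.
# Names and validation behavior are assumptions for illustration.
from dataclasses import dataclass

SCALE = ("strongly agree", "agree", "neither agree nor disagree",
         "disagree", "strongly disagree")

@dataclass
class LikertItem:
    prompt: str
    options: tuple = SCALE

    def record(self, answer: str) -> str:
        # Only accept one of the fixed choices.
        if answer not in self.options:
            raise ValueError(f"not a valid choice: {answer!r}")
        return answer

item = LikertItem("Overall, I found the widget easy to use.")
print(item.record("agree"))
```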

5 Semantic Differentials
Circle the number closest to your feelings about the product:
Simple ... Complex
Easy to use ... Hard to use
Familiar ... Unfamiliar
Reliable ... Unreliable

6 Free-form Questions
"I found the following aspects of the product particularly easy to use: ________________________________"

7 First Cartoon of the Day

8 Debriefing
Purpose: find out why the participant behaved the way they did.
Method: interview. May focus on specific behaviors observed during the test.

9 Debriefing Guidelines
1. Review the participant's behaviors and post-test answers.
2. Let the participant say whatever is on their mind.
3. Start with high-level issues and move on to specific issues.
4. Focus on understanding problems, not on problem-solving.

10 Debriefing Techniques
"What did you remember?" questions, e.g., "When you finished inserting an appointment, did you notice any change in the information display?"
Devil's advocate, e.g., "Gee, other people we've brought in have responded in quite the opposite way."

11 Findings
Summarize what you have learned:
- Performance
- Preferences

12 Performance Findings
- Mean time to complete
- Median time to complete
- Mean number of errors
- Median number of errors
- Percentage of participants performing successfully
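As an illustration only (not part of the slides), these statistics can be computed directly from per-participant results; the record fields time_sec, errors, and completed, and the sample values, are assumptions.

```python
# Hedged sketch: computing the performance findings above from
# per-participant test data. Field names and values are assumptions.
from statistics import mean, median

results = [
    {"participant": "P1", "time_sec": 95,  "errors": 1, "completed": True},
    {"participant": "P2", "time_sec": 140, "errors": 3, "completed": False},
    {"participant": "P3", "time_sec": 80,  "errors": 0, "completed": True},
]

times = [r["time_sec"] for r in results]
errors = [r["errors"] for r in results]
success_rate = 100 * sum(r["completed"] for r in results) / len(results)

print(f"Mean time to complete:   {mean(times):.1f} s")
print(f"Median time to complete: {median(times):.1f} s")
print(f"Mean number of errors:   {mean(errors):.1f}")
print(f"Median number of errors: {median(errors):.1f}")
print(f"Percentage performing successfully: {success_rate:.0f}%")
```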

13 Preference Findings
- Limited-choice questions: sum each answer; compute averages to compare questions.
- Free-form questions: group similar answers.
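A minimal sketch of the limited-choice case, assuming a 1-5 numeric coding of the Likert choices (the coding, question labels, and sample responses are not from the slides): sum each answer, then average so questions can be compared.

```python
# Tally limited-choice answers and compute a per-question average.
# The 1-5 coding, question labels, and responses are assumptions.
from collections import Counter
from statistics import mean

CODES = {"strongly agree": 5, "agree": 4, "neither agree nor disagree": 3,
         "disagree": 2, "strongly disagree": 1}

answers = {
    "Q1: easy to use": ["agree", "strongly agree", "neither agree nor disagree", "agree"],
    "Q2: reliable":    ["disagree", "neither agree nor disagree", "agree", "agree"],
}

for question, responses in answers.items():
    tally = Counter(responses)                   # sum each answer
    average = mean(CODES[r] for r in responses)  # average to compare questions
    print(f"{question}: {dict(tally)}  average = {average:.2f}")
```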

14 Second Cartoon of the Day

15 Analysis
Focus on problematic tasks, e.g., where 70% of participants failed to complete the task successfully.
Conduct a source-of-error analysis:
- look for multiple causes
- look at multiple participants
Prioritize problems by criticality: Criticality = Severity + Probability.
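To make the prioritization step concrete, here is a minimal sketch using the slide's formula (Criticality = Severity + Probability); the 1-4 rating scales and the example problems are assumptions for illustration.

```python
# Rank observed usability problems by criticality = severity + probability.
# The 1-4 scales and the example problems are made up for illustration.
problems = [
    # (description, severity 1-4, probability 1-4)
    ("Save button hidden below the fold", 4, 3),
    ("Date format unclear",               2, 4),
    ("Help link broken",                  3, 1),
]

ranked = sorted(problems, key=lambda p: p[1] + p[2], reverse=True)

for description, severity, probability in ranked:
    print(f"criticality = {severity + probability}: {description}")
```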

16 Recommendations (1/2)
Get some perspective:
- wait until a couple of days after testing
- collect thoughts from the group of testers
- get buy-in from developers
Focus on the highest-impact areas.

17 Recommendations (2/2)
Ignore "political considerations" for the first draft.
Provide short-term and long-term solutions:
- short-term: will not significantly affect the development schedule
- long-term: needed for the ultimate success of the product