Black Box Software Testing Fall 2004


Black Box Software Testing Fall 2004
Part 30 -- Credibility & Mission

by Cem Kaner, J.D., Ph.D.
Professor of Software Engineering, Florida Institute of Technology
and James Bach
Principal, Satisfice Inc.

Copyright (c) Cem Kaner & James Bach, 2000-2004

This work is licensed under the Creative Commons Attribution-ShareAlike License. To view a copy of this license, visit http://creativecommons.org/licenses/by-sa/2.0/ or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.

These notes are partially based on research that was supported by NSF Grant EIA-0113539 ITR/SY+PE: "Improving the Education of Software Testers." Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

Decision Making, Information Flow, and Credibility

The Signal Detection & Recognition Problem

                        Actual event
    Response        Bug                 Feature
    Bug             Hit                 False Alarm
    Feature         Miss                Correct Rejection

When we study people using even the best detection equipment, we find that they still make mistakes. Notes: (a) Titchener - sounds, lights, beat the students. (b) Green and Swets, signal detection theory - ducks and bombers. Don't draw an ROC.

We're looking at the conditional probabilities of:
- saying "bug" given that the real situation is a bug
- saying "bug" when the underlying state is a feature

Refer to Testing Computer Software, pages 24, 116-118.
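To make the two conditional probabilities concrete, here is a minimal sketch (not from the original course; the function and variable names are illustrative). It tallies a set of tester decisions into the four cells of the table above and computes the hit rate P(say "bug" | bug) and the false-alarm rate P(say "bug" | feature).

    from collections import Counter

    def detection_rates(decisions):
        """decisions: list of (actual, response) pairs, each 'bug' or 'feature'.
        Returns (hit rate, false-alarm rate) as conditional probabilities."""
        counts = Counter(decisions)
        hits = counts[("bug", "bug")]                  # real bug, reported as a bug
        misses = counts[("bug", "feature")]            # real bug, dismissed as a feature
        false_alarms = counts[("feature", "bug")]      # feature, reported as a bug
        correct_rejections = counts[("feature", "feature")]

        # max(..., 1) avoids division by zero when a category is empty
        hit_rate = hits / max(hits + misses, 1)
        false_alarm_rate = false_alarms / max(false_alarms + correct_rejections, 1)
        return hit_rate, false_alarm_rate

    # Example: ten evaluations by one tester
    decisions = ([("bug", "bug")] * 4 + [("bug", "feature")] +
                 [("feature", "bug")] * 2 + [("feature", "feature")] * 3)
    print(detection_rates(decisions))   # -> (0.8, 0.4)

The same counts also show the trade-off discussed on the next slide: pushing the false-alarm rate down (reporting less readily) tends to push the miss rate up.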

Lessons From Signal Detection: We Make Decisions Under Uncertainty

When you try to decide whether an item belongs to one category or the other (bug or feature), your decision will be influenced by your expectations and your motivation.
- Can you cut down on the number of false alarms without increasing the number of misses?
- Can you increase the number of hits without increasing the number of false alarms?

Pushing people to make fewer of one type of reporting error will inevitably increase another type of reporting error. Training, specifications, and similar aids help, but the basic problem remains: the signal is often unclear. Documents like specifications help clarify the signal, but there is still noise that adds to confusion.

Lessons From Signal Detection: Decisions Are Subject To Bias

We make decisions under uncertainty. Decisions are subject to bias, and much of this bias is unconscious. The prime biasing variables are:
- Perceived probability: if you think that an event is unlikely, you will be substantially less likely (beyond the actual probability) to report it.
- Perceived consequence of a decision: what happens if you make a false alarm? Is this worse than a miss, or less serious?
- Perceived importance of the task: the degree to which you care (or don't care) can affect your willingness to adopt a decision rule that you might otherwise be more skeptical about.
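As an illustration (not from the original slides), the sketch below models the reporting decision as an expected-cost comparison: the tester reports only when the perceived probability that the item is a real bug makes the expected cost of staying silent (a possible miss) exceed the expected cost of speaking up (a possible false alarm). The cost values are hypothetical.

    def should_report(perceived_prob_bug, cost_of_miss, cost_of_false_alarm):
        """Report if the expected cost of silence exceeds the expected cost of reporting.

        perceived_prob_bug: tester's subjective probability that the item is a real bug.
        cost_of_miss: perceived cost of staying silent about a real bug.
        cost_of_false_alarm: perceived cost of reporting something that turns out to be a feature.
        """
        expected_cost_of_silence = perceived_prob_bug * cost_of_miss
        expected_cost_of_reporting = (1 - perceived_prob_bug) * cost_of_false_alarm
        return expected_cost_of_silence > expected_cost_of_reporting

    # If false alarms are punished heavily, the same perceived probability no longer
    # triggers a report -- the bias shifts toward misses.
    print(should_report(0.3, cost_of_miss=10, cost_of_false_alarm=2))    # True
    print(should_report(0.3, cost_of_miss=10, cost_of_false_alarm=40))   # False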

Lessons From Signal Detection: Decisions Are Subject To Bias

Decisions are made by a series of people. Bug reporting policies must consider the effects on the overall decision-making system, not just on the tester and first-level bug reader. Trace these factors through the decisions and decision-makers (next slides). For example, what happens to your reputation if you:
- report every bug, no matter how minor, in order to make sure that no bug is ever missed?
- report only the serious problems (the "good bugs")?
- fully analyze each bug?
- only lightly analyze bugs?
- insist that every bug get fixed?

Decisions Made During Bug Processing

Bug handling involves many decisions by different people, such as:

Tester:
- Should I report this bug?
- Should I report these similar bugs as one bug or many?
- Should I report this awkwardness in the user interface?
- Should I stop reporting bugs that look minor?
- How much time should I spend on analysis and styling of this report?

Your decisions will reflect on you. They will cumulatively have an effect on your credibility, because they reflect your judgment. The comprehensibility of your reports and the extent and skill of your analysis will also have a substantial impact on your credibility.

Refer to Testing Computer Software, pages 90-97, 115-118.

Decisions Made During Bug Processing - 2

Bug handling involves many decisions by different people, such as:

Programmer:
- Should I fix this bug or defer it?

Project Manager:
- Should I approve the deferral of this bug?

Tester:
- Should I appeal the deferral of this bug?
- How much time should I spend analyzing this bug further?

Test Group Manager:
- Should I make an issue about this bug?
- Should I encourage my tester to investigate the bug further and argue it further, or to quit worrying about this one? Or should I just keep out of the discussion this time?

Refer to Testing Computer Software, pages 90-97, 115-118.

Decisions Made During Bug Processing - 3

Customer Service, Marketing, Documentation:
- Should I ask the project manager to reopen this bug? (The tester appealed the deferral.)
- Should I support the tester this time?
- Should I spend time trying to figure this thing out?
- Will this call for extra work in the answer book / advertising / manual / help?

Director, Vice President, other senior staff:
- Should I override the project manager's deferral of this bug?
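The decision chain above can be read as a set of allowed transitions on a bug report's status, each owned by a different role. The sketch below is only an illustration of that idea, not part of the course material; the states, roles, and transition policy are assumptions.

    # Hypothetical bug lifecycle: which role may move a report from one status to another.
    TRANSITIONS = {
        ("reported", "fixed"):     "programmer",        # programmer decides to fix
        ("reported", "deferred"):  "project_manager",   # PM approves a deferral
        ("deferred", "appealed"):  "tester",            # tester challenges the deferral
        ("appealed", "reported"):  "project_manager",   # PM reopens the bug
        ("appealed", "deferred"):  "project_manager",   # PM stands by the deferral
        ("deferred", "reported"):  "vice_president",    # senior staff overrides the deferral
    }

    def move(status, new_status, role):
        """Return the new status if this role is allowed to make the transition."""
        owner = TRANSITIONS.get((status, new_status))
        if owner != role:
            raise PermissionError(f"{role} may not move a bug from {status} to {new_status}")
        return new_status

    status = "reported"
    status = move(status, "deferred", "project_manager")   # deferral approved
    status = move(status, "appealed", "tester")             # tester appeals the deferral

Writing the policy down in one place, whatever form it takes, makes the hand-offs and the owners of each decision explicit.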

Decisions Made During Bug Processing - 4

Who else is in your decision loop?

Issues That Will Bias People Who Evaluate Bug Reports

These reduce the probability that the bug will be taken seriously and fixed:
- Language critical of the programmer.
- Severity inflation.
- Pestering and refusing to ever take "no" for an answer.
- A tight schedule.
- Incomprehensibility, excessive detail, or apparent narrowness of the report.
- Weak reputation of the reporter.

There are probably more factors that bias the people who evaluate bug reports in your organization or on a specific project. You need to identify what they are and use that knowledge to improve your bug reporting skill.

Issues That Will Bias People Who Evaluate Bug Reports

These increase the probability that the bug will be taken seriously and fixed:
- Reliability requirements in this market.
- Ties to real-world applications.
- A report from a customer or beta tester rather than from development.
- Strong reputation of the reporter.
- Weak reputation of the programmer.
- Poor quality or performance compared to competitive products.
- News of litigation in the press.

There are probably more factors that bias the people who evaluate bug reports in your organization or on a specific project. You need to identify what they are and use that knowledge to improve your bug reporting skill.
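A few of the report-side factors above are mechanical enough that a reporter can check for them before submitting. The toy self-check below is an invented illustration (the word list and thresholds are hypothetical); the more important credibility factors are social and cannot be automated.

    # Toy pre-submission self-check for a bug report.
    CRITICAL_WORDS = {"sloppy", "lazy", "careless", "incompetent"}

    def review_report(description, severity, steps_to_reproduce):
        warnings = []
        if set(description.lower().split()) & CRITICAL_WORDS:
            warnings.append("Language critical of the programmer -- rewrite neutrally.")
        if severity == "critical" and not steps_to_reproduce:
            warnings.append("High severity with no reproduction steps looks like severity inflation.")
        if len(description.split()) > 500:
            warnings.append("Very long description -- summarize and attach the details.")
        return warnings

    print(review_report(
        description="The sloppy save routine crashes when the file name is empty.",
        severity="critical",
        steps_to_reproduce=[],
    ))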

Clarify Expectations

Make the rules explicit. One of the important tasks of a test manager is to clarify everyone's understanding of how the bug tracking database is used and to facilitate agreement that this approach is acceptable to the stakeholders.
- Track open issues and tasks, or just bugs?
- Track documentation issues, or just code?
- Track minor issues late in the schedule, or not?
- Track issues outside of the published spec and requirements, or not?
- How should similar reports be handled?

Make the rules explicit.
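One lightweight way to make such rules explicit is to record them alongside the tracking system itself. The snippet below is a hypothetical example of such a written policy; every field name and value is invented for illustration, since the point is that the rules are written down and agreed to, not what the specific rules are.

    TRACKING_POLICY = {
        "track_open_issues_and_tasks": True,      # not just code bugs
        "track_documentation_issues": True,
        "track_minor_issues_after_code_freeze": False,
        "track_issues_outside_published_spec": True,
        "duplicate_handling": "link similar reports to a master report; do not delete",
        "agreed_by": ["test manager", "project manager", "documentation lead"],
    }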

Biasing People Who Report Bugs

The following practices reduce the probability that bugs will be reported: they discourage reporters, convince them that their work is pointless or will be filtered out, or create incentives for other people to pressure reporters not to report bugs. To avoid this:
- Never use bug statistics for employee bonuses or discipline.
- Never use bug statistics to embarrass people.
- Never filter out reports that you disagree with.
- Never change an in-house bug reporter's language, or at least not without the reporter's freely given permission. Add your comments as additional notes, not as replacement text.
- Monitor language in reports that is critical of the programmer or the tester.
- Beware of accepting lowball estimates of bug probabilities.

Biasing People Who Report Bugs

These help increase the probability that people will report bugs:
- Give results feedback to non-testers who report bugs.
- Encourage testers to report all anomalies.
- Adopt a formal system for challenging bug deferrals.
- Weigh the consequences of schedule urgency against an appraisal of quality costs. (Early in the schedule, people will report more bugs; later, people will be more hesitant to report minor problems.)
- Late in the schedule, set up a separate database for design issues (which will be evaluated for the start of the next release).

The mission of bug reporting and tracking systems

Mission of bug reporting / tracking systems

Having a clear mission is important. Without one, the system will be called on to do:
- too many things (making bug reporting inefficient)
- annoying things
- contradictory things

Mission of bug reporting / tracking systems

The mission that I operate from is this: a bug tracking process exists for the purpose of getting the right bugs fixed.

Mission of bug reporting / tracking systems

Given a primary mission, all other uses of the system are secondary and must be modified if they interfere with the primary mission. Issues to consider:
- Auditing
- Tracing to other documents
- Personal performance metrics (reward)
- Personal performance metrics (punishment)
- Progress reporting
- Schedule reality checking
- Scheduling (when can we ship?)
- Archival bug pattern analysis