Software Testing: Bug Reporting
Iain McCowatt (imccowatt)

Introduction

- Bug reports are one of the most visible work products of the tester.
- The purpose of writing a bug report is to identify an item that requires additional activity, be it further investigation or resolution.
- Those doing the investigation or resolution may be under considerable time pressure, and not disposed towards taking on more work.
- When you write a bug report, you are effectively trying to convince someone to do something.
- Both the content and the positioning of your bug report have a significant bearing on whether your bug will be resolved or ignored.

Reminder: Quality

- Gerald Weinberg: quality is "value to some person".
- Quality is subjective:
  - It only exists in relation to people's perceptions.
  - Those perceptions may vary.
  - Often it relates to the fulfillment of some person's need, or the value (to someone) attained through providing a solution to a problem.

Reminder: Bugs

- Bolton & Bach: a bug is "something that bugs somebody who matters".
- Kaner & Bach: a bug is "anything that causes an unnecessary or unreasonable reduction of the quality of a software product".
- A bug is a relationship between a person and a software product: if it reduces value to someone, then it is a bug.

A Bug Model

[Diagram: Error → Bug (+ Conditions) → Failure (Symptoms)]

- Someone makes a mistake (an error).
- This manifests as a bug in the software.
- When the software is executed with the right set of conditions, a failure occurs.
- Different conditions may result in the same failure, a different failure, or no failure.
- Testers observe failures, NOT errors or bugs.

Follow Up Testing

- The precise conditions that cause the failure to occur when the bug is executed may not be immediately apparent from the steps that we have executed.
- Under certain conditions the bug may not give rise to a failure at all.
- The same bug may give rise to a variety of different failures, under different conditions.
- The failure we observe may not be the most significant failure that might occur.
- The conditions under which we observed a failure may not be the most likely set of conditions to arise in a real-life situation.

We must perform follow-up tests to sort this out.

Follow Up Testing: Reproduction

Testers observe failures, NOT errors or bugs:
- The precise conditions that cause the failure to occur when the bug is executed may not be immediately apparent from the steps that we have executed.

Follow Up Testing: Reproduction

- One of the most common reasons for the rejection of a bug report is "cannot reproduce".
- The conditions that led to the failure may have little to do with conditions set by the test itself, e.g.:
  - Something left over from a previous test.
  - Configuration or data.
  - Something that happened in the background.
- The test that led to the failure being observed may contain a lot of "noise": steps and conditions that are irrelevant to the failure.

Follow Up Testing: Reproduction

We perform follow-up tests to:
- Ensure we can reproduce the failure.
- Identify any critical conditions not inherent in the test that exposed the failure.
- Identify the minimum set of steps and minimum conditions that will cause the failure.

Follow Up Testing: Reproduction - Example

Summary: Item shows in basket with quantity = 0

Description:
- Steps:
  - Log in, user = JDoe, password = password
  - Browse catalog
  - Select widget1, set quantity = 1, add to basket
  - Select widget2, set quantity = 2, add to basket
  - Browse basket
  - Select widget2, type quantity = 0, save
- Expected: Basket shows 1 x widget1, no widget2
- Actual: Basket shows 1 x widget1, 0 x widget2

Questions to consider:
- Is the user actually relevant to the repro steps, or is it noise?
- Is widget1 relevant to the repro steps?
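The reduction described above can be sketched in code. The following is a minimal, hypothetical model of the basket (the `Basket` class and its behavior are invented for illustration, not the real system) showing the repro cut down to its minimum: the login and widget1 steps turn out to be noise.

```python
# Hypothetical minimal model of the basket under test, to illustrate
# reducing a repro to its minimum steps. The Basket class and its
# buggy behavior are invented for this sketch.

class Basket:
    def __init__(self):
        self.items = {}  # item name -> quantity

    def add(self, item, quantity):
        self.items[item] = quantity

    def set_quantity(self, item, quantity):
        # Bug: setting quantity to 0 should remove the item, but doesn't.
        self.items[item] = quantity


def minimal_repro():
    """Minimum steps: one item, one quantity change. The login and
    widget1 steps from the original test were noise."""
    basket = Basket()
    basket.add("widget2", 2)
    basket.set_quantity("widget2", 0)
    # Expected: widget2 removed; actual: it remains with quantity 0.
    return basket.items


print(minimal_repro())  # {'widget2': 0}
```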

Follow Up Testing: Isolation

Testers observe failures, NOT errors or bugs:
- Under certain conditions the bug may not give rise to a failure at all.
- The same bug may give rise to a variety of different failures, under different conditions.

Follow Up Testing: Isolation

Debugging code can be challenging. Understanding when a failure does, or does not, occur can provide vital clues to the developers. We perform follow-up tests to:
- Identify what sets of conditions cause the failure.
- Identify what sets of conditions cause a different failure.
- Identify what sets of conditions cause no failure.

Follow Up Testing: Isolation - Example

Summary: Item shows in basket with quantity = 0

Description:
- Steps:
  - Log in, user = JDoe, password = password
  - Browse catalog
  - Select widget1, set quantity = 1, add to basket
  - Select widget2, set quantity = 2, add to basket
  - Browse basket
  - Select widget2, type quantity = 0, save
- Expected: Basket shows 1 x widget1, no widget2
- Actual: Basket shows 1 x widget1, 0 x widget2

Questions to consider:
- Does this occur with other users? Could this be configurable behavior for users or groups?
- Does this occur only with widget2? What about similar items in the catalog? What about different items? Could this be configurable behavior?
- What's the bug here? We're reporting a failure (can update quantity to 0 and the item still shows in the basket), but can you add something to the basket with quantity 0?
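Isolation questions like those above lend themselves to a systematic sweep: vary one condition at a time and record whether the failure occurs. The sketch below is hypothetical; `shows_zero_quantity` is a stand-in for driving the real application, and the users, items and allow/disallow configuration are invented.

```python
# Sketch of isolation testing: enumerate combinations of conditions
# and record when the failure occurs. shows_zero_quantity is a
# hypothetical stand-in for exercising the real application.
from itertools import product

def shows_zero_quantity(user, item):
    # Pretend the failure only occurs for items configured to
    # allow zero quantities (invented configuration).
    allow_zero = {"widget2": True, "widget1": False}
    return allow_zero.get(item, False)

users = ["JDoe", "ASmith"]
items = ["widget1", "widget2", "gadget3"]

for user, item in product(users, items):
    failed = shows_zero_quantity(user, item)
    print(f"user={user} item={item} -> {'FAIL' if failed else 'ok'}")
```

Even a small sweep like this answers two of the isolation questions at once: the failure is independent of the user, but specific to how the item is configured.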

Follow Up Testing: Maximization

Testers observe failures, NOT errors or bugs:
- The same bug may give rise to a variety of different failures, under different conditions.
- The failure we observe may not be the most significant failure that might occur.

Follow Up Testing: Maximization

Imagine that you have logged a bug report documenting a failure that you observed. It is minor, and is ultimately rejected. Now suppose the same bug, under different conditions, causes a catastrophic failure... would you want to miss that? Imagine that you could have reported the latter instead. Which report is more likely to be acted upon?

We perform follow-up tests to:
- Identify what sets of conditions cause a different failure.
- Identify the worst that can happen.

Follow Up Testing: Maximization - Example

Summary: Item shows in basket with quantity = 0

Description:
- Steps:
  - Log in
  - Browse catalog
  - Select widget1, set quantity = 1, add to basket
  - Browse basket
  - Select widget1, type quantity = 0, save
- Expected: Basket empty
- Actual: Application crashed, see attached trace

What was happening: the behavior was configurable. "widget2" was set to allow a zero quantity; "widget1" was set to disallow it, but this branch was not properly implemented and resulted in an unmanaged exception which killed the application.
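The root cause described above can be illustrated with a hypothetical sketch: a per-item allow/disallow flag where the disallow branch was never implemented, so updating a disallowed item raises an unhandled error instead of rejecting the update. All names and the configuration are invented for illustration.

```python
# Hypothetical sketch of the root cause: zero-quantity handling is
# configurable per item, but the "disallow" branch was never
# implemented, so it raises instead of rejecting the update.
ALLOW_ZERO = {"widget2": True, "widget1": False}

def set_quantity(basket, item, quantity):
    if quantity == 0:
        if ALLOW_ZERO.get(item):
            basket[item] = 0  # allowed: keep a zero-quantity line (minor failure)
        else:
            # Missing implementation: should reject the update
            # gracefully, but the unhandled error crashes the app.
            raise NotImplementedError("disallow branch not implemented")
    else:
        basket[item] = quantity
    return basket

print(set_quantity({}, "widget2", 0))  # {'widget2': 0}  (the minor failure)
# set_quantity({}, "widget1", 0) raises NotImplementedError (the crash)
```

The same bug thus produces two very different failures: a cosmetic zero-quantity line for one item, and a crash for another.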

Follow Up Testing: Generalization

Testers observe failures, NOT errors or bugs:
- The conditions under which we observed a failure may not be the most likely set of conditions to arise in a real-life situation.

Follow Up Testing: Generalization

- Another common reason for rejecting bugs is that they are "edge cases", or that "nobody would ever do that".
- Sometimes the conditions that we use in tests can be pretty obscure.
- If the conditions are so obscure as to be unbelievable, the defect may be rejected.

We perform follow-up tests to:
- Identify the most realistic, most likely set of conditions that can still cause a failure.

Follow Up Testing: Generalization - Example

Summary: Item shows in basket with quantity = 0

Description:
- Steps:
  - Log in
  - Browse catalog
  - Select any widget of class A, set quantity = 1, add to basket
  - Browse basket
  - Select entry in basket, set quantity = 0 using arrows, delete button or by typing, save
- Expected: Basket empty
- Actual: Application crashed, see attached trace

What was happening: the behavior was configurable for an entire class of products. There were many more general, and more likely, ways of causing the failure.

Follow Up Testing: Tips

Some tips for follow-up testing:
- Try similar test cases.
- Vary your test steps.
- Try omitting steps.
- Vary the test data.
- Try omitting certain test data (use whatever defaults are set).
- Vary reference data not set by the test itself.
- Vary configuration settings.
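The "try omitting steps" tip can even be mechanized. Below is a minimal sketch of a greedy reducer: drop one step at a time and keep the reduction whenever the failure still reproduces. The `reproduces` predicate is hypothetical; in practice it would re-run the steps against the real application.

```python
# A minimal sketch of the "try omitting steps" tip: greedily drop
# each step and keep the reduction if the failure still reproduces.

def minimize(steps, reproduces):
    """Greedy one-at-a-time reduction of a failing step sequence."""
    current = list(steps)
    changed = True
    while changed:
        changed = False
        for i in range(len(current)):
            candidate = current[:i] + current[i + 1:]
            if reproduces(candidate):
                current = candidate
                changed = True
                break
    return current

# Toy predicate: the failure actually needs only the last two steps.
steps = ["log in", "add widget1", "add widget2", "set quantity 0"]
needed = {"add widget2", "set quantity 0"}
repro = lambda s: needed <= set(s)
print(minimize(steps, repro))  # ['add widget2', 'set quantity 0']
```

This is the same idea behind delta debugging: the minimized sequence is exactly the "minimum set of steps" a credible repro should contain.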

Reporting with Influence: Credibility

The more credible the bug report, the more likely it is to be taken seriously and resolved. Bug reports can be made more credible by:
- Being reproducible.
- Being related to likely scenarios rather than unlikely "edge" cases.
- Being clear and unambiguous.
- Making a fair, non-exaggerated case as to the impact of the failure observed.
- Referencing credible sources or oracles (e.g. requirements, design, previous versions, previous bugs, comparable products, the opinions of stakeholders with influence, relevant standards, etc.) in order to establish that the item represents a reduction in quality.

Reporting with Influence: Tone

All bug reports should:
- Be clearly and concisely written.
- Remain as factual as possible (particularly in describing repro steps and failures).
- Where it is necessary to deviate from fact (for example, to cite opinions relating to the potential impact of failures), state this clearly and cite the source.
- Be neutral and non-antagonistic.

Reporting with Influence: Tone

Things to avoid:
- Blame.
- Judgement.
- Being drawn into lengthy discussion via defect comments: pick up the phone or use the triage process instead.

To do any of these will discredit both your bug report AND you as a tester.

Summary

- A bug report is one of a tester's most important products.
- A bug report is intended to influence someone as to the need to perform additional work.
- Testing does not stop with the observation of a failure: follow-up testing is required before an effective bug report can be written.
- Reporting with credibility and an appropriate tone is essential if you are to be taken seriously as a tester.

Exercise

- Form pairs.
- In your pairs, review the sample defect report provided.
- Identify any areas where the report could be improved:
  - How would you structure the report differently?
  - What changes would you make to tone?
  - What additional information would you add?
  - What information would you choose to omit?
- What follow-up testing might help improve the report? Include suggestions as to conditions to test.
- Present your thoughts to the group.

Further Information

Much of the content of this lecture has been inspired by or derived from the Black Box Software Testing (BBST) series of courses by Kaner, Bach and others. Additional information, such as video lectures, exercises and readings, is available from: –