Testability.


Quality “Totality of features of a software product that bear on its ability to satisfy given needs.” [Source: IEEE-STD-729] “Composite characteristics of software that determine the degree to which the software in use will meet the expectations of the customer.” [Source: IEEE-STD-729]

http://search.proquest.com/docview/215836114?pq-origsite=gscholar
www.idi.ntnu.no/emner/tdt4242/foiler/3-2-Testability.ppt
http://paris.utdallas.edu/qrs16/docs/Keynote-Jeff-Voas-slides.pdf
http://www.rti.org/sites/default/files/resources/software_testing.pdf
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.85.391&rep=rep1&type=pdf

Testability According to ISO 9126, testability is defined as: “The capability of the software product to enable modified software to be validated.” The IEEE Dictionary of Standard Terminology defines it as: “(1) the degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met; and (2) the degree to which a requirement is stated in terms that permit establishment of test criteria and performance of tests to determine whether the criteria have been met.”

PIE: Propagation, Infection, Execution. A fault leads to a failure only if the faulty location is Executed, the execution Infects the data state, and the infected state Propagates to the output.

Two concerns: How easy is it to test the implementation? How test-friendly is the requirement?
Three ways to examine:
T – Tests. Input/output; involves the computer system and peripherals.
E – Experiments. Input/output, but also involves the users.
I – Inspections. Evaluation based on documents.

In order to be testable, a requirement needs to be stated in a precise way. For some requirements this is the case right from the start: “When the ACC system is turned on, the ‘Active’ light on the dashboard shall be turned on.” In other cases we need to rephrase the requirement to get a testable version, e.g. “The system shall be easy to use.”

Requirement: Reverse thrust may only be used when the airplane is landed. The important questions are: How do you define “landed”? Whom should you ask – e.g. pilots, airplane construction engineers, or airplane designers?

First and foremost: The customer needs to know what he wants and why he wants it. In some cases it is easier to test if the user actually has achieved his goal than to test that the system implements the requirement. Unfortunately, the “why”-part is usually not stated as part of a requirement.

Each requirement needs to be:
Relevant – pertinent to the system’s purpose and at the right level of restrictiveness.
Feasible – possible to realize. If it is difficult to implement, it might also be difficult to test.
Traceable – it must be possible to relate it to one or more software components or process steps.

Automatic door opener – what is missing? If the door is closed and a person is detected then send signal Open_Door. If no person is detected after 10 sec., send signal Close_Door.

Words and phrases whose meaning is subject to interpretation make the requirement optional: “as appropriate”, “if practical”, “as required”, “to the extent necessary / practical”. Phrases like “at a minimum” only ensure the minimum, while “shall be considered” only requires the contractor to think about it.


Observability A software component is observable “if distinct outputs are generated from distinct inputs”. Progress of the test execution: important for tests that do not produce output, e.g. when the requirement is only concerned with an internal state change or an update of a database. Results of the test: important for tests where the output depends on an internal state or database content.
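A minimal sketch of the idea (class and method names are ours, not from the slides): a component that changes internal state without producing output is hard to test, while one that exposes the new state lets distinct inputs yield distinct, checkable outputs.

```python
# Hypothetical sketch: two counters illustrating observability.
# OpaqueCounter updates internal state but returns nothing, so a test
# cannot tell whether add() worked. ObservableCounter returns the new
# state, making the effect of each input directly observable.

class OpaqueCounter:
    def __init__(self):
        self._total = 0

    def add(self, n):
        self._total += n          # state changes, but nothing is returned

class ObservableCounter:
    def __init__(self):
        self._total = 0

    def add(self, n):
        self._total += n
        return self._total        # the new state is observable

opaque = OpaqueCounter()
observable = ObservableCounter()
opaque.add(5)                     # no way to check the effect from outside
assert observable.add(5) == 5     # effect is directly checkable
assert observable.add(3) == 8
```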

Controllable A software component is controllable “if, given any desired output value, an extra input exists which ‘forces’ the component output to that value”.
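A sketch of the “extra input” in practice (the Thermostat example and its parameter names are hypothetical): the component's output normally depends on a sensor, and an optional argument lets a test force any desired output value.

```python
# Hypothetical sketch: making a component controllable. status() depends
# on a reading normally supplied by hardware; the optional `reading`
# parameter is the "extra input" that forces the output to any value.

class Thermostat:
    def __init__(self, sensor):
        self._sensor = sensor

    def status(self, reading=None):
        # The extra input overrides the sensor, forcing the output.
        temp = reading if reading is not None else self._sensor()
        return "HEATING" if temp < 18.0 else "IDLE"

t = Thermostat(sensor=lambda: 21.0)
assert t.status() == "IDLE"                   # normal path via the sensor
assert t.status(reading=10.0) == "HEATING"    # forced via the extra input
```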

Test Restartability This is mostly a question of checkpoints in the code. How easy is it to:
Stop the test temporarily
Study the current state and output
Start the test again, either from the point where it was stopped or from the start
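A restartable test can be sketched with a checkpoint file (file name and helper are our own assumptions): the loop saves its state after every step, so a stopped run can be inspected and then resumed from where it left off rather than from the start.

```python
# Hypothetical sketch of a restartable test: the loop checkpoints its
# state after every step, so a stopped run can resume from the last
# checkpoint instead of restarting from scratch.

import os
import pickle

CHECKPOINT = "test_state.pkl"

def run_steps(steps, state=None):
    state = state or {"next": 0, "results": []}
    for i in range(state["next"], len(steps)):
        state["results"].append(steps[i]())   # execute one test step
        state["next"] = i + 1
        with open(CHECKPOINT, "wb") as f:     # checkpoint: safe to stop here
            pickle.dump(state, f)
    return state

steps = [lambda: 1, lambda: 2, lambda: 3]
run_steps(steps[:2])                          # run is stopped after two steps
with open(CHECKPOINT, "rb") as f:
    saved = pickle.load(f)                    # study the current state ...
state = run_steps(steps, saved)               # ... and resume from that point
assert state["results"] == [1, 2, 3]
os.remove(CHECKPOINT)
```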

Given a function Q = f(t) Domain The domain of f is the set of all possible input values, t, which yield an output Range The range of f is the corresponding set of output values Q

Domain The domain is the set of all possible inputs into the function, e.g. { 1, 2, 3, … }. The nature of some functions may mean restricting certain values as inputs.

Range The range is the set of all possible resulting outputs, e.g. { 9, 14, -4, 6, … }. The nature of a function may restrict the possible output values.
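The definitions above can be made concrete with a small example of our own (the function and sets are illustrative, not from the slides): take f(t) = t² over a finite input set and compute its range.

```python
# Illustration of domain vs. range for f(t) = t * t over a finite domain.

def f(t):
    return t * t

domain = {-2, -1, 0, 1, 2}          # all inputs we allow
range_ = {f(t) for t in domain}     # the corresponding set of outputs

assert range_ == {0, 1, 4}          # distinct inputs can share an output
# Restricting the domain restricts the range:
assert {f(t) for t in {1, 2}} == {1, 4}
```

Note that the range here is smaller than the domain because f maps t and -t to the same value; this many-to-one behaviour is exactly what the domain/range ratio later in these slides measures.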


PIE

Estimation
Execution probability is estimated, for a given input distribution, by determining the proportion of cases that execute the location of interest.
Infection probability is estimated by introducing mutations at the location and determining the proportion of input cases that give rise to a data state different from that of the non-mutated location.
Propagation probability is estimated by determining the proportion of cases for which perturbed data states at the location propagate and give rise to faulty results.
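The three estimates can be sketched by Monte Carlo sampling. Everything below (the toy program, the mutant, the input distribution) is our own illustrative assumption; the structure, not the numbers, is the point.

```python
import random

# Hypothetical sketch of PIE-style estimation. The "location" of interest
# is the assignment inside the if-branch; the mutant replaces x * 2 with
# x * 3 at that location.

def program(x, mutant=False):
    executed, state, out = False, None, 0
    if x > 0:                                 # location L is in this branch
        executed = True
        state = (x * 3) if mutant else (x * 2)
        out = state - x
    return executed, state, out

random.seed(0)
inputs = [random.randint(-10, 10) for _ in range(10_000)]
runs = [(program(x), program(x, mutant=True)) for x in inputs]

# Execution: proportion of inputs that reach location L at all.
executed = [(o, m) for (o, m) in runs if o[0]]
p_exec = len(executed) / len(inputs)

# Infection: among executions, proportion where the mutant corrupts the state.
infected = [(o, m) for (o, m) in executed if o[1] != m[1]]
p_infect = len(infected) / len(executed)

# Propagation: among infections, proportion where the corrupt state
# reaches the output.
propagated = [(o, m) for (o, m) in infected if o[2] != m[2]]
p_prop = len(propagated) / len(infected)

assert 0 < p_exec < 1        # only about half of [-10, 10] is > 0
assert p_infect == 1.0       # x*3 != x*2 whenever the location executes
assert p_prop == 1.0         # the state feeds the output directly
```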

Fault size The syntactic size of a fault is “the number of statements or tokens that need to be changed to get a correct program”. The semantic size of a fault is “the relative size of the subdomain of D for which the output mapping is incorrect”, where D is the input domain.
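The two sizes can diverge sharply, as this sketch (our own toy example) shows: a fault of syntactic size one token may have a tiny semantic size, which is precisely what makes it hard to detect by testing.

```python
# Hypothetical sketch: a one-token (syntactically tiny) fault whose
# semantic size is the fraction of the domain D it misclassifies.

def correct(x):
    return x > 0          # intended specification

def faulty(x):
    return x >= 0         # one token changed: syntactic size is 1

D = range(-100, 101)      # a finite stand-in for the input domain
wrong = [x for x in D if correct(x) != faulty(x)]
semantic_size = len(wrong) / len(D)

assert wrong == [0]                 # only x == 0 is misclassified
assert semantic_size == 1 / 201     # tiny semantic size: hard to detect
```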

DRR The domain/range ratio is defined as DRR = |D| / |R|, where |D| is the cardinality of the domain of the specification and |R| is the cardinality of the range.

Design for testability Binder [6] identifies six major factors that result in testability in the development process:
characteristics of the design documentation,
characteristics of the implementation,
built-in test capabilities,
presence of a test suite,
presence of test tools,
software development process capability/maturity.

DRR The domain/range ratio (DRR) of a specification is |D| / |R|. The closer the DRR is to 1, the more testable the function.
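A sketch over a finite domain (function names are illustrative): a predicate collapses many inputs onto two outputs, giving a high DRR where faults can hide, while an invertible function keeps the DRR at 1, so any internal fault must show up in the output.

```python
# Hypothetical illustration of the domain/range ratio |D| / |R| over a
# finite domain.

def is_positive(x):
    return x > 0                          # many-to-one: range is {True, False}

def increment(x):
    return x + 1                          # one-to-one: range is as big as D

D = range(-50, 51)                        # |D| = 101

def drr(f, domain):
    return len(domain) / len({f(x) for x in domain})

assert drr(is_positive, D) == 101 / 2     # high DRR: faults can hide
assert drr(increment, D) == 1.0           # DRR of 1: faults must show
```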

Read http://search.proquest.com/docview/215836114?pq-origsite=gscholar Write a half-page summary of the content and half a page of critique of the idea, not the presentation. Submit as usual by Oct 17th, 11:59 PM.

Fewer tests means more escapes: Suppose our tests have 1:100 odds of finding a bug and there are 1,000 latent bugs in our system. We then need to run at least 100,000 tests to find these bugs. But suppose we run only 50,000 tests and release; we will probably ship with about 500 latent bugs. Testability sets the limit on how far the risk of costly or dangerous bugs can be reduced to an acceptable level. That is, poor testability means you will probably ship/release a system with more nasty bugs than is prudent.
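The slide's rough arithmetic, made explicit (this is the slide's simple back-of-the-envelope model, not a statistical one):

```python
# Each test exposes a given bug with probability 1/100; with 1,000
# latent bugs, about 100 tests per bug are needed to flush them all out.

odds = 1 / 100            # probability that one test finds a bug
bugs = 1_000              # latent bugs in the system
tests_needed = bugs / odds
assert tests_needed == 100_000

# Running only half the tests leaves roughly half the bugs latent.
tests_run = 50_000
expected_escapes = bugs - tests_run * odds
assert expected_escapes == 500
```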

About 130 individual factors contribute to testability, according to Binder. The effect of all of this can be measured with two ratios:
Efficiency: average tests per unit of effort. Or: how much testing can we get done with the time, technology, and people on hand?
Effectiveness: average probability of killing a bug per unit of effort.

Modules with low testability:
Faults are harder to find
Coverage criteria should vary
Stopping criteria