Outsourcing, subcontracting and COTS – Tor Stålhane

Contents
We will cover the following topics:
– Testing as a confidence-building activity
– Testing and outsourcing
– Testing COTS components
– Sequential testing
– Simple Bayesian methods

Responsibility
It is important to bear in mind that:
– The company that brings the product to the marketplace carries full responsibility for the product's quality.
– It is only possible to seek redress from the company we outsourced to if we can show that they did not fulfill their contract.

Testing and confidence
The role of testing during:
– Development – find and remove defects.
– Acceptance – build confidence in the component.
When we test COTS components, or components whose development has been outsourced to a subcontractor, our goal is to build confidence.

A product trustworthiness pattern
[Diagram with the nodes: Product is trustworthy; Trustworthiness definition; Product related; Process related; People related; Environment definition; System definition.]

Means to create product trust
Based on the product trust pattern, we see that we build trust based on:
– The product itself – e.g. a COTS component
– The process – how it was developed and tested
– The people – the personnel that developed and tested the component

A process trustworthiness pattern
[Diagram with the nodes: Activity is trustworthy; Argument by considering process; Trustworthiness definition; Process definition; Team is competent; Method addresses problem; Process is traceable.]

Means to create process trust
If we apply the pattern on the previous slide, we see that trust in the process stems from three sources:
– Who does it – "Team is competent"
– How it is done – "Method addresses problem"
– Whether we can check that the process is used correctly – "Process is traceable"

Testing and outsourcing
If we outsource development, testing needs to be an integrated part of the development process. Testing is thus a contract question. If we apply the trustworthiness pattern, we need to include requirements for:
– The component – what
– The competence of the personnel – who
– The process – how

Outsourcing requirements – 1
When drawing up an outsourcing contract we should include:
– Personnel requirements – the right persons for the job. We need to see the CV for each person.
– Development process requirements – including testing. The trust can come from:
  – A certificate – e.g. ISO 9001
  – Our own process audits

Outsourcing requirements – 2
Last but not least, we need to see and inspect some important artifacts:
– Project plan – when shall they do what?
– Test strategy – how will they test our component requirements?
– Test plan – how will the tests be run?
– Test log – what were the results of the tests?

Trust in the component The trust we have in the component will depend on how satisfied we are with the answers to the questions on the previous slide. We can, however, also build our trust on earlier experience with the company. The more we trust the company based on earlier experiences, the less rigor we will need in the contract.

Testing COTS
We can test COTS components by using e.g. black box testing or domain partition testing. Experience has shown that we get the greatest benefit from our effort by focusing on tests for:
– Internal robustness
– External robustness

Robustness – 1
There are several ways to categorize these two robustness modes. We will use the following definitions:
– Internal robustness – the ability to handle faults in the component or its environment. Here we will need wrappers, fault injection etc.
– External robustness – the ability to handle faulty input. Here we only need the component "as is".

Robustness – 2
The importance of the two types of robustness varies with the component type:
– Internal robustness – most important for components that are only visible inside the system border.
– External robustness – most important for components that are part of the user interface.

Internal robustness testing
Internal robustness is the ability to:
– Survive all erroneous situations, e.g.
  – Memory faults – both code and data
  – Failing function calls, including calls to OS functions
– Go to a defined, safe state after having given the error message
– Continue after the erroneous situation with a minimum loss of information

Why do we need a wrapper?
By using a wrapper, we obtain some important effects:
– We control the component's input, even though the component is inserted into the real system.
– We can collect and report input and output from the component.
– We can manipulate the exception handling and affect this component only.

What is a wrapper – 1
A wrapper has two essential parts:
– An implementation that defines the functionality we wish to access. This may or may not be an object (one example of a non-object implementation is a DLL whose functions we need to access).
– The "wrapper" class that provides an object interface to the implementation, and methods to manage it. The client calls a method on the wrapper, which accesses the implementation as needed to fulfill the request.

What is a wrapper – 2
A wrapper provides an interface for, and services to, behavior that is defined elsewhere.
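
To make this concrete, here is a minimal wrapper sketch in Java. The names LegacyReportApi, Report and ReportWrapper are illustrative assumptions, not part of the slides; the point is only that the wrapper gives an object interface to behavior defined elsewhere and one place to log or manipulate calls.

// Minimal wrapper sketch. The "implementation" is a plain legacy API
// (standing in for e.g. a DLL or a COTS library); the wrapper gives it an
// object interface and a single place to validate input and log output.
final class LegacyReportApi {                     // hypothetical legacy code
    static String format(String rawData) {        // behavior defined "elsewhere"
        return "REPORT[" + rawData + "]";
    }
}

interface Report {                                 // the interface clients see
    String render(String rawData);
}

final class ReportWrapper implements Report {      // the wrapper class
    @Override
    public String render(String rawData) {
        // Manage the implementation: validate input, delegate, log output.
        if (rawData == null) {
            throw new IllegalArgumentException("rawData must not be null");
        }
        String result = LegacyReportApi.format(rawData);
        System.out.println("wrapper saw input=" + rawData + " output=" + result);
        return result;
    }
}

public class WrapperDemo {
    public static void main(String[] args) {
        Report report = new ReportWrapper();       // client only knows the interface
        System.out.println(report.render("sales 2023"));
    }
}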

Fault injection – 1
In order to test robustness, we need to be able to modify the component's code – usually through fault injection.
– A fault is an abnormal condition or defect that may lead to a failure.
– Fault injection is the deliberate insertion of faults or errors into a computer system in order to determine its response.
The goal is not to recreate the conditions that produced a fault, but to see how the system handles the fault.

Fault injection – 2
There are two steps to fault injection:
1. Identify the set of faults that can occur within an application, module, class or method. E.g. if the application does not use the network, there is no point in injecting network faults.
2. Exercise those faults to evaluate how the application responds: does the application detect the fault, is it isolated, and does the application recover?

Example
byte[] readFile() throws IOException {
    ...
    final InputStream is = new FileInputStream(…);
    ...
    while ((offset < bytes.length) &&
           (numRead = is.read(bytes, offset, bytes.length - offset)) >= 0)
        offset += numRead;
    ...
    is.close();
    return bytes;
}

What could go wrong with this code?
– new FileInputStream() can throw FileNotFoundException
– InputStream.read() can throw IOException and IndexOutOfBoundsException, and can return -1 for end of file
– is.close() can throw IOException
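
As a sketch (not from the original slides) of how these failure points might be handled, the same read loop can use try-with-resources so the stream is closed even when read() throws, with an explicit check for a premature end of file. File name and buffer size are illustrative.

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class SafeReadDemo {
    // Hypothetical defensive variant of the readFile() fragment above.
    static byte[] readFile(String path, int expectedLength) throws IOException {
        byte[] bytes = new byte[expectedLength];
        int offset = 0;
        // try-with-resources closes the stream even if read() throws IOException.
        try (InputStream is = new FileInputStream(path)) {   // may throw FileNotFoundException
            int numRead;
            while (offset < bytes.length
                    && (numRead = is.read(bytes, offset, bytes.length - offset)) >= 0) {
                offset += numRead;
            }
        }
        if (offset < bytes.length) {
            // read() returned -1 (EOF) before the buffer was filled.
            throw new IOException("Premature end of file: got only " + offset + " bytes");
        }
        return bytes;
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("safe-read-demo", ".dat");
        Files.write(tmp, new byte[16]);                        // create a 16-byte test file
        System.out.println(readFile(tmp.toString(), 16).length + " bytes read");
    }
}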

Fault injection – 3
Change the code:
– Replace the call to InputStream.read() with some local instrumented method
– Create our own instrumented InputStream subclass, possibly using mock objects
– Inject the subclass via IoC (requires a framework such as PicoContainer or Spring)
– Comment out the code and replace it with throw new IOException()

Fault injection – 4
Fault injection doesn't have to be all on or all off. Logic can be coded around injected faults, e.g. for InputStream.read():
– Throw IOException after n bytes are read
– Return -1 (EOF) one byte before the actual EOF occurs
– Sporadically mutate the read bytes
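
A minimal sketch of the instrumented-subclass technique from the two slides above, assuming the component under test can be handed an InputStream of our choice. The class FaultInjectingInputStream and its parameters are illustrative; it behaves normally until a configurable number of bytes have been delivered and then throws IOException, so we can observe how the component reacts.

import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

// Instrumented InputStream subclass for fault injection. Only the array read()
// is instrumented here, for brevity.
class FaultInjectingInputStream extends FilterInputStream {
    private final long failAfterBytes;
    private long delivered = 0;

    FaultInjectingInputStream(InputStream wrapped, long failAfterBytes) {
        super(wrapped);
        this.failAfterBytes = failAfterBytes;
    }

    @Override
    public int read(byte[] b, int off, int len) throws IOException {
        if (delivered >= failAfterBytes) {
            throw new IOException("Injected fault after " + delivered + " bytes");
        }
        int n = super.read(b, off, len);
        if (n > 0) {
            delivered += n;
        }
        return n;
    }
}

public class FaultInjectionDemo {
    public static void main(String[] args) {
        byte[] data = new byte[1024];
        InputStream faulty = new FaultInjectingInputStream(
                new ByteArrayInputStream(data), 512);   // inject a fault after 512 bytes
        byte[] buffer = new byte[256];
        try {
            while (faulty.read(buffer, 0, buffer.length) >= 0) {
                // The component-under-test logic would go here.
            }
        } catch (IOException e) {
            System.out.println("Component saw: " + e.getMessage());
        }
    }
}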

External robustness testing – 1
Error handling must be tested to show that:
– Wrong input gives an error message
– The error message is understandable for the intended users
– Work can continue after the error with a minimum loss of information

External robustness testing – 2
External robustness is the ability to:
– Survive the input of faulty data – no crash
– Give an easy-to-understand error message that helps the user correct the error in the input
– Go to a defined state
– Continue after the erroneous situation with a minimum loss of information

Easy-to-understand message – 1
While all the other characteristics of external robustness are easy to test, the error message requirement can only be tested by involving the users. We need to know which information the user needs in order to:
– Correct the faulty input
– Carry on with his work from the component's current state

Easy-to-understand message – 2
The simple way to test the error messages is to have a user:
– Start working on a real task
– Insert an error in the input at some point during this task
We can then observe how the user tries to get out of the situation and how satisfied he is with the assistance he gets from the component.

Sequential testing
In order to use sequential testing we need:
– A target failure rate p1
– An unacceptable failure rate p2, with p2 > p1
– The acceptable probabilities of making a type I or type II decision error – α and β
These two values are used to compute the decision limits a and b, given as:
  a = ln( β / (1 - α) )
  b = ln( (1 - β) / α )

Background – 1
We will assume that the number of failures is binomially distributed. Let p be the probability of a failure in a single test, and let x_i = 1 if test i fails and x_i = 0 otherwise. The probability of observing the failure sequence x_1, x_2, …, x_n can then be written as:
  L(p) = p^(Σ x_i) * (1 - p)^(n - Σ x_i)

Background – 2
We will base our test on the log likelihood ratio, which is defined as:
  ln L = ln[ L(p2) / L(p1) ] = Σ x_i * ln(p2 / p1) + (n - Σ x_i) * ln( (1 - p2) / (1 - p1) )
For the sake of simplicity, we introduce:
  u = ln[ p2 (1 - p1) / (p1 (1 - p2)) ]
  v = ln[ (1 - p1) / (1 - p2) ]

The test statistic
Using the notation from the previous slide, we find that:
  ln L = u * Σ x_i - v * n
and the test continues as long as a < ln L < b.
We have p1, p2 << 1 and can thus use the approximations ln(1 - p) ≈ -p, which gives v ≈ (p2 - p1), and further that (u - v) ≈ u, i.e. u ≈ ln(p2 / p1).

Sequential test – example
We will use α = 0.05 and β = 0.2. This gives us a = -1.6 and b = 2.8. We want a failure rate p1 = 10^-3 and will not accept a component with a failure rate p2 higher than 2*10^-3. Thus we have u ≈ ln 2 ≈ 0.69 and v ≈ 10^-3. The lines bounding the "no decision" area, as functions of the number of tests M, are then:
  Σ x_i (reject) = (b + M * v) / u ≈ 4.1 + 1.45 * M * 10^-3
  Σ x_i (accept) = (a + M * v) / u ≈ -2.3 + 1.45 * M * 10^-3
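
As a sketch of how this sequential test could be driven in code, assuming the formulas above, a small class can accumulate test results and report a decision after each test. All names, the test cap and the simulated failure rate are illustrative.

// Sketch of the sequential (SPRT) acceptance test described on the slides above.
public class SequentialTest {
    private final double a, b, u, v;   // decision limits and test-statistic coefficients
    private long tests = 0;            // M: number of tests run
    private long failures = 0;         // sum of x_i

    public SequentialTest(double p1, double p2, double alpha, double beta) {
        this.a = Math.log(beta / (1 - alpha));           // accept limit
        this.b = Math.log((1 - beta) / alpha);           // reject limit
        this.u = Math.log((p2 * (1 - p1)) / (p1 * (1 - p2)));
        this.v = Math.log((1 - p1) / (1 - p2));
    }

    /** Record one test result and return ACCEPT, REJECT or CONTINUE. */
    public String record(boolean failed) {
        tests++;
        if (failed) failures++;
        double logLikelihoodRatio = u * failures - v * tests;
        if (logLikelihoodRatio <= a) return "ACCEPT";
        if (logLikelihoodRatio >= b) return "REJECT";
        return "CONTINUE";
    }

    public static void main(String[] args) {
        // Example values from the slide: p1 = 1e-3, p2 = 2e-3, alpha = 0.05, beta = 0.2.
        SequentialTest t = new SequentialTest(1e-3, 2e-3, 0.05, 0.2);
        String decision = "CONTINUE";
        for (int i = 0; i < 10_000 && decision.equals("CONTINUE"); i++) {
            boolean failed = Math.random() < 1e-3;        // simulated component with p = 1e-3
            decision = t.record(failed);
        }
        System.out.println("Decision: " + decision + " after " + t.tests + " tests");
    }
}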

Sequential test – example M xx 4*10 3 Accept

Sequential testing – summary
– Testing software, where the required failure rate is low: the method needs a large number of tests. It should thus only be used for testing robustness based on automatically generated random input.
– Inspecting documents, where acceptable defect rates are higher: the method will give useful results even when inspecting a reasonable number of documents.

Simple Bayesian methods
Instead of building our trust on test results alone, or only on contractual obligations or past experience, we can combine these three factors. The easy way to do this is to use Bayesian statistics. We will give a short introduction to Bayesian statistics and show one example of how it can be applied to software testing.

Bayes' theorem
In a simplified version, Bayes' theorem says that:
  P(B | A) = P(A | B) * P(B) / P(A)
When we want to estimate B, we use the likelihood of our observations as P(A | B) and use P(B) to model our prior knowledge.

A Bayes model for reliability
For reliability it is common to use a Beta distribution as the prior for the reliability and a Binomial distribution for the number of observed failures. This gives us the following result: if the prior is Beta(α, β) and we observe x successes in n tests, the posterior is Beta(α + x, β + n - x).

Estimates
A priori we have that the expected reliability is:
  E(R) = α / (α + β) = x0 / n0
If x is the number of successes and n is the total number of tests, we have a posteriori that:
  E(R | x, n) = (α + x) / (α + β + n) = (x0 + x) / (n0 + n)

Some Beta probabilities

Testing for reliability
We will use a Beta distribution to model our prior knowledge. The knowledge is related to the company that developed the component or system, e.g.:
– How competent are the developers?
– How good is their process, e.g.
  – Are they ISO 9001 certified?
  – Have we done a quality audit?
– What is our earlier experience with this company?

Modeling our confidence
Several handbooks on Bayesian analysis contain tables where we specify two out of three values:
– R1: our mean expected reliability
– R2: our upper 5% limit, P(R > R2) = 0.05
– R3: our lower 5% limit, P(R < R3) = 0.05
When we know our R-values, we can read the two parameters n0 and x0 out of a table.

The result
We can now find the two parameters for the prior Beta distribution as α = x0 and β = n0 - x0. If we run N tests and observe x successes, the Bayesian estimate for the reliability is:
  R = (x + x0) / (N + n0)
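
A minimal sketch of this estimate, assuming x0 and n0 have already been read from a table as described on the previous slide. The numeric values below are illustrative, not from the slides.

// Sketch: Bayesian reliability estimate R = (x + x0) / (N + n0) from the slide above.
// x0 and n0 encode our prior confidence in the supplier; x and N come from the
// actual acceptance tests.
public class BayesianReliability {
    static double estimate(double x0, double n0, long successes, long tests) {
        return (successes + x0) / (tests + n0);
    }

    public static void main(String[] args) {
        double x0 = 485.0, n0 = 500.0;     // prior: roughly "485 successes in 500 virtual tests"
        long tests = 200, successes = 199; // observed acceptance-test results
        System.out.printf("Prior mean reliability:     %.4f%n", x0 / n0);
        System.out.printf("Posterior mean reliability: %.4f%n",
                estimate(x0, n0, successes, tests));
    }
}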

Sequential test with Bayes
We can combine the information supplied by the Bayesian model with a standard sequential test chart by starting at the point corresponding to n0 tests and n0 - x0 failures instead of starting at the origin, as shown in the example on the next slide. Note – we use n0 - x0 since we are counting failures. The total number of tests needed is the same, but n0 of them are virtual and stem from our confidence in the company.

Sequential test with Bayes – example M xx 4*10 3 Accept n0n0