Outsourcing, subcontracting and COTS Tor Stålhane

Contents
We will cover the following topics:
- Testing as a confidence-building activity
- Testing and outsourcing
- Testing COTS components
- Sequential testing
- Simple Bayesian methods

Responsibility
It is important to bear in mind that the company that brings the product to the marketplace carries full responsibility for the product's quality. It is only possible to seek redress from the company we outsourced to if we can show that they did not fulfill their contract.

Testing and confidence
The role of testing during:
- Development – find and remove defects.
- Acceptance – build confidence in the component.
When we test COTS components, or components whose development has been outsourced to a subcontractor, the goal is to build confidence.

A product trustworthiness pattern
(Pattern diagram: the claim "Product is trustworthy" rests on product-related, process-related and people-related arguments, stated in the context of a trustworthiness definition, an environment definition and a system definition.)

Means to create product trust
Based on the product trust pattern, we see that we build trust based on:
- The product itself – e.g. a COTS component
- The process – how it was developed and tested
- People – the personnel who developed and tested the component

A process trustworthiness pattern
(Pattern diagram: the claim "Activity is trustworthy" is argued by considering the process, in the context of a trustworthiness definition and a process definition, and is supported by three sub-claims: "Team is competent", "Method addresses problem" and "Process is traceable".)

Means to create process trust
If we apply the pattern on the previous slide, we see that trust in the process stems from three sources:
- Who does it – "Team is competent"
- How it is done – "Method addresses problem"
- Whether we can check that the process is used correctly – "Process is traceable"

Testing and outsourcing
If we outsource development, testing needs to be an integral part of the development process. Testing is thus a contract issue. If we apply the trustworthiness pattern, we need to include requirements for:
- The component – what
- The competence of the personnel – who
- The process – how

Outsourcing requirements – 1
When drawing up an outsourcing contract we should include:
- Personnel requirements – the right persons for the job. We need to see the CV for each person.
- Development process – including testing. The trust can come from:
  - A certificate – e.g. ISO 9001
  - Our own process audits

Outsourcing requirements – 2
Last but not least, we need to see and inspect some important artifacts:
- Project plan – when shall they do what?
- Test strategy – how will they test our component requirements?
- Test plan – how will the tests be run?
- Test log – what were the results of the tests?

Trust in the component
The trust we have in the component will depend on how satisfied we are with the answers to the questions on the previous slide. We can, however, also build our trust on earlier experience with the company. The more we trust the company based on earlier experience, the less rigor we will need in the contract.

Testing COTS
We can test COTS by using e.g. black box testing or domain partition testing. Experience has shown that we will get the greatest benefit from our effort by focusing on tests for:
- Internal robustness
- External robustness

Robustness – 1
There are several ways to categorize these two robustness modes. We will use the following definitions:
- Internal robustness – the ability to handle faults in the component or its environment. Here we will need wrappers, fault injection etc.
- External robustness – the ability to handle faulty input. Here we will only need the component "as is".

Robustness – 2
The importance of the two types of robustness varies with component type:
- Internal robustness – most important for components that are only visible inside the system border.
- External robustness – most important for components that are part of the user interface.

Internal robustness testing
Internal robustness is the ability to:
- Survive all erroneous situations, e.g.
  - Memory faults – both code and data
  - Failing function calls, including calls to OS functions
- Go to a defined, safe state after having given the error message
- Continue after the erroneous situation with a minimum loss of information

Why do we need a wrapper?
By using a wrapper, we obtain some important effects:
- We control the component's input, even though the component is inserted into the real system.
- We can collect and report input and output from the component.
- We can manipulate the exception handling and affect this component only.

What is a wrapper – 1
A wrapper has two essential characteristics:
- An implementation that defines the functionality that we wish to access. This may or may not be an object (one example of a non-object implementation would be a DLL whose functions we need to access).
- The "wrapper" class that provides an object interface to access the implementation, and methods to manage the implementation. The client calls a method on the wrapper, which accesses the implementation as needed to fulfill the request.

What is a wrapper – 2
A wrapper provides an interface for, and services to, behavior that is defined elsewhere.
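
To make this concrete, here is a minimal sketch of such a wrapper in Java. The names NativeFileStore and FileStoreWrapper are invented for illustration and are not part of the lecture material; the point is that the wrapper owns the implementation, exposes an object interface to the client, and gives us one place to log input and output and to manipulate exception handling for this component only.

    // Hypothetical non-object implementation we want to access,
    // e.g. functions exposed by a DLL or a legacy library.
    final class NativeFileStore {
        static byte[] read(String name)  { return new byte[0]; } // calls into the legacy code
        static void write(String name, byte[] data) { }          // calls into the legacy code
    }

    // The wrapper: provides an object interface and manages the implementation.
    // Every call passes through here, so we can observe input and output and
    // translate or inject exceptions without touching the client or the component.
    public class FileStoreWrapper {
        public byte[] read(String name) {
            System.out.println("read called with: " + name);                // report input
            byte[] data = NativeFileStore.read(name);
            System.out.println("read returned " + data.length + " bytes");  // report output
            return data;
        }

        public void write(String name, byte[] data) {
            System.out.println("write called with: " + name);               // report input
            NativeFileStore.write(name, data);
        }
    }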

Fault injection – 1
In order to test robustness, we need to be able to modify the component's code – usually through fault injection. A fault is an abnormal condition or defect which may lead to a failure. Fault injection involves the deliberate insertion of faults or errors into a computer system in order to determine its response. The goal is not to recreate the conditions that produced the fault.

Fault injection – 2
There are two steps to fault injection:
- Identify the set of faults that can occur within an application, module, class or method. E.g. if the application does not use the network, then there is no point in injecting network faults.
- Exercise those faults to evaluate how the application responds. Does the application detect the fault, is it isolated, and does the application recover?

Example

    byte[] readFile() throws IOException {
        ...
        final InputStream is = new FileInputStream(...);
        ...
        while ((offset < bytes.length) &&
               (numRead = is.read(bytes, offset, bytes.length - offset)) >= 0)
            offset += numRead;
        ...
        is.close();
        return bytes;
    }

What could go wrong with this code?
- new FileInputStream() can throw FileNotFoundException
- InputStream.read() can throw IOException and IndexOutOfBoundsException, and can return -1 for end of file
- is.close() can throw IOException

Fault injection – 3
Change the code:
- Replace the call to InputStream.read() with some local instrumented method
- Create our own instrumented InputStream subclass, possibly using mock objects (see the sketch below)
- Inject the subclass via IoC (requires some framework such as PicoContainer or Spring)
- Comment out the code and replace it with throw new IOException()
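
As a minimal sketch of the instrumented-subclass option (the class name FailingInputStream is invented for illustration): every read fails, so we can observe how code like readFile() above handles an injected I/O fault.

    import java.io.IOException;
    import java.io.InputStream;

    // Instrumented InputStream whose reads always fail. It is substituted for
    // the real stream (by hand or via an IoC container) when testing how the
    // component under test reacts to an injected I/O fault.
    public class FailingInputStream extends InputStream {
        @Override
        public int read() throws IOException {
            throw new IOException("Injected fault: read failed");
        }
    }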

Fault injection – 4
Fault injection doesn't have to be all on or all off. Logic can be coded around injected faults, e.g. for InputStream.read():
- Throw IOException after n bytes are read (a sketch of this variant is shown below)
- Return -1 (EOF) one byte before the actual EOF occurs
- Sporadically mutate the read bytes
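
The sketch below, again with invented names, shows the first variant: a stream that behaves normally until n bytes have been read and then throws. Similar subclasses could return -1 one byte early or mutate the bytes that are read.

    import java.io.FilterInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    // Wraps a real stream and throws an injected IOException once n bytes
    // have been read, so the fault appears part-way through normal operation.
    public class FaultAfterNBytesInputStream extends FilterInputStream {
        private final long faultAfter;
        private long bytesRead = 0;

        public FaultAfterNBytesInputStream(InputStream in, long faultAfter) {
            super(in);
            this.faultAfter = faultAfter;
        }

        @Override
        public int read() throws IOException {
            failIfLimitReached();
            int b = super.read();
            if (b >= 0) bytesRead++;
            return b;
        }

        @Override
        public int read(byte[] buf, int off, int len) throws IOException {
            failIfLimitReached();
            int n = super.read(buf, off, len);
            if (n > 0) bytesRead += n;
            return n;
        }

        private void failIfLimitReached() throws IOException {
            if (bytesRead >= faultAfter) {
                throw new IOException("Injected fault after " + bytesRead + " bytes");
            }
        }
    }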

External robustness testing – 1
Error handling must be tested to show that:
- Wrong input gives an error message
- The error message is understandable for the intended users
- The component continues after the error with a minimum loss of information

External robustness testing – 2
External robustness is the ability to:
- Survive the input of faulty data – no crash
- Give an easy-to-understand error message that helps the user to correct the error in the input
- Go to a defined state
- Continue after the erroneous situation with a minimum loss of information
A small test sketch illustrating this pattern follows below.
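
The lecture does not name a concrete component, so this sketch uses java.time.LocalDate as a stand-in for the COTS component under test and assumes JUnit 4. The pattern is the point: feed faulty input, check that the component survives, that the error report points at the offending input, and that the component can still be used afterwards.

    import static org.junit.Assert.*;

    import java.time.LocalDate;
    import java.time.format.DateTimeParseException;

    import org.junit.Test;

    public class ExternalRobustnessTest {

        @Test
        public void faultyInputIsRejectedWithAnInformativeError() {
            try {
                LocalDate.parse("2024-02-31");            // impossible date: faulty input
                fail("Expected the faulty input to be rejected");
            } catch (DateTimeParseException e) {
                // The error report should identify the input so the user can correct it.
                assertEquals("2024-02-31", e.getParsedString());
            }
            // The component continues to work after the erroneous input.
            assertEquals(LocalDate.of(2024, 2, 29), LocalDate.parse("2024-02-29"));
        }
    }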

Easy-to-understand message – 1
While all the other characteristics of external robustness are easy to test, the error message requirement can only be tested by involving the users. We need to know which information the user needs in order to:
- Correct the faulty input
- Carry on with his work from the component's current state

Easy-to-understand message – 2
The simple way to test the error messages is to have a user:
- Start working on a real task
- Insert an error in the input at some point during this task
We can then observe how the user tries to get out of the situation and how satisfied he is with the assistance he gets from the component.