1 Verification, validation and testing Chapter 12, Storey

2 Introduction (1)
- The development of a system can be seen as a series of transformations of its definition, from the customer's requirements to its complete implementation.
- Each phase takes the description of the system as its input and develops this to form the input to the next phase.
- In order to have confidence in the final system, it is necessary to confirm that each phase of the development work has been performed correctly.
- This is achieved through a process of verification:
  - the process of determining whether the output of a lifecycle phase fulfils the requirements specified by the previous phase.
  - Verification aims to demonstrate that the output of a phase conforms to its input, not to show that the output is actually correct.

3 Introduction (2)
- If the input specification is wrong, the verification process will not necessarily detect this.
- To overcome this, verification is supplemented by validation:
  - the process of confirming that the specification of a phase, or of the complete system, is appropriate and consistent with the customer requirements.
- Validation may be performed on individual phases, but is usually used to investigate the characteristics of the complete system.
- Often it looks at the behaviour of a prototype system, or a simulation, and determines whether this operates in a manner that satisfies the needs of the customer or user.

4 Testing (1)
- Verification and validation are achieved by performing various tests. Testing is the process used to verify or validate a system or its components.
- The results from testing may be used to assess the integrity of the system, to investigate specific characteristics such as safety, and to uncover faults (thus increasing the system's dependability).
- Testing can be performed at various stages during the development of a system.

5 Testing (2)
- The major activities in testing are:
  - Module testing: evaluates small, simple functions of software or hardware. Faults detected here are usually relatively straightforward to locate and remove.
  - System integration testing: investigates the characteristics of a collection of modules and is generally aimed at establishing their correct interaction. Faults detected here are likely to be more expensive to correct, because of the greater complexity involved.
  - System validation testing: aims to demonstrate that the complete system satisfies its requirements. Faults detected at this stage are extremely costly to correct (they usually involve weaknesses in the customer requirements documents or in the specification), since the modifications must propagate back through the entire development process.
- The complexity and cost of correcting faults therefore increase as we move from module testing to system testing, so faults should be located as early as possible.
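To make the module-testing level concrete, here is a minimal sketch in Python. The 2-out-of-3 majority voter and its expected behaviour are invented for illustration (not taken from Storey), but such voters are a common building block in fault-tolerant systems:

```python
import unittest

# Hypothetical module under test: a 2-out-of-3 majority voter.
def majority(a: bool, b: bool, c: bool) -> bool:
    return (a and b) or (a and c) or (b and c)

class MajorityVoterModuleTest(unittest.TestCase):
    """Module testing: a small, simple function exercised in isolation,
    so any fault found is easy to locate and remove."""

    def test_all_inputs_agree(self):
        self.assertTrue(majority(True, True, True))
        self.assertFalse(majority(False, False, False))

    def test_single_faulty_input_is_outvoted(self):
        self.assertTrue(majority(True, True, False))
        self.assertFalse(majority(False, True, False))

if __name__ == "__main__":
    unittest.main()
```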

6 Testing (3)
Testing may take a number of forms, and techniques may be broadly classified into:
- Dynamic testing: involves execution of a system or component in order to investigate its characteristics. Tests are carried out within the system's natural working environment or within a simulation of that environment.
- Static testing: investigates the characteristics of a system or component without operating it. Examples include reviews, inspections and design walkthroughs.
- Modelling: involves the use of a mathematical representation of the behaviour of a system or its environment. Animation of a formal specification is an example of modelling.
For a typical development programme:
- both static and dynamic testing will be included, as well as modelling;
- the importance of individual techniques tends to vary throughout the lifecycle;
- the choice of techniques will be affected by the safety integrity level.

7 Principal testing methods within the development lifecycle

Lifecycle phase                          | Dynamic testing | Static testing | Modelling
-----------------------------------------|-----------------|----------------|----------
Requirements analysis and specification  |                 | X              | X
Top-level design                         |                 | X              | X
Detailed design                          |                 | X              | X
Implementation                           | X               | X              |
Integration testing                      | X               | X              | X
System validation                        | X               |                | X

(CONTESSE, 1995)

8 Testing (4)
- Testing may also be divided into "black-box" and "white-box" techniques, depending on the amount of knowledge the test engineer has of the system being tested.
  - Black-box testing: the test engineer has no knowledge of the implementation of the system and relies simply on the information given in the specification (the tests simply check whether the system does what the specification says it should). Most commonly used on complete systems.
  - White-box testing: the engineer has access to information concerning the implementation of the system and uses this to guide the work. Such techniques are applicable to testing at all stages of development.
- Dynamic testing can be performed using both black-box and white-box techniques, while static testing can only be performed using white-box techniques.

9 Planning for verification and validation
- The tasks of verification and validation represent a very large part of the required effort when developing safety-critical systems.
- Since verification and validation are based on testing, test planning is an essential part of the development process and should begin at an early stage.
- Validation of the complete system is one of the last stages of system development, but the planning of this activity should also be performed early, while such plans may still affect the design.

10 Dynamic testing
- Involves execution of a number of test cases that investigate particular aspects of the system.
- Each test case comprises a set of input data, a specification of the expected output, and an explanation of the function being tested (see the sketch below).
- Dynamic testing normally includes:
  - Functional testing: identifies and tests all the functions of the system that are defined within its requirements. Requires no knowledge of the implementation of the system and is therefore an example of a black-box approach.
  - Structural testing: uses detailed knowledge of the system's internal structure to investigate the system's characteristics. It is therefore an example of a white-box approach.
  - Random testing: whereas functional and structural testing use inputs chosen to investigate particular characteristics of the system under test, random testing selects inputs randomly from the entire input space. It tries to detect fault conditions that are usually missed by more systematic techniques.
- Dynamic testing therefore involves a mix of black-box and white-box techniques.
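The structure of a dynamic test case described above translates directly into code. The sketch below is one possible representation, with a hypothetical `clamp` function standing in for the system under test:

```python
from dataclasses import dataclass
from typing import Any, Callable

# Hypothetical system under test: clamp a value into [low, high].
def clamp(value, low, high):
    return max(low, min(high, value))

@dataclass
class TestCase:
    """One dynamic test case: input data, expected output and an
    explanation of the function being tested."""
    test_id: str
    description: str
    input_data: tuple
    expected: Any

def run_functional_tests(func: Callable, cases: list) -> None:
    # Black-box execution: apply each input and compare the result
    # against the output specified in the requirements.
    for case in cases:
        actual = func(*case.input_data)
        status = "PASS" if actual == case.expected else "FAIL"
        print(f"{case.test_id} [{status}] {case.description}")

run_functional_tests(clamp, [
    TestCase("TC-01", "in-range value passes through", (50, 0, 100), 50),
    TestCase("TC-02", "value below range clamped to lower bound", (-5, 0, 100), 0),
    TestCase("TC-03", "value above range clamped to upper bound", (250, 0, 100), 100),
])
```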

11 Dynamic testing techniques
- Test cases based on equivalence partitioning (illustrated below)
- Test cases based on boundary value analysis (illustrated below)
- State transition testing
- Probabilistic testing
- Structure-based testing
- Process simulation
- Error guessing
- Error seeding
- Timing and memory tests
- Performance testing
- Stress testing
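The first two techniques in this list can be shown in a small sketch. The valid range 0..100 and the `setpoint_is_valid` function are assumptions made for the example:

```python
# Hypothetical specification: a setpoint is valid only in the range 0..100.
def setpoint_is_valid(value: int) -> bool:
    return 0 <= value <= 100

# Equivalence partitioning: one representative input per partition
# (below range, in range, above range).
partition_cases = [(-10, False), (50, True), (150, False)]

# Boundary value analysis: values at and adjacent to each boundary,
# where off-by-one faults tend to cluster.
boundary_cases = [(-1, False), (0, True), (1, True),
                  (99, True), (100, True), (101, False)]

for value, expected in partition_cases + boundary_cases:
    assert setpoint_is_valid(value) == expected, f"failed for {value}"
print("all partition and boundary cases pass")
```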

12 Static testing
- Static testing methods investigate the properties of a system without operating it.
- Some of the techniques are performed manually (walkthroughs, design reviews, inspections and checklists); others use automated tools.
- Static testing requires an insight into the nature of the system and therefore always uses a white-box approach.
- Many software static testing packages come under the heading of static code analysis tools (formal verification, semantic analysis, etc.).

13 Static analysis techniques
- Walkthroughs / design reviews
- Checklists
- Formal proofs
- Fagan inspections
- Control flow analysis
- Data flow analysis (see the toy example below)
- Symbolic execution
- Metrics
- Sneak circuit analysis
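As a rough illustration of what automated data flow analysis looks for, the toy checker below flags a variable that is assigned only inside a conditional branch but read unconditionally afterwards. It is a deliberately simplified sketch, nothing like a production static analyser:

```python
import ast

SOURCE = """
def average(samples):
    if samples:
        total = sum(samples)
    return total / len(samples)   # 'total' may be unbound
"""

class ReadBeforeAssign(ast.NodeVisitor):
    """Toy data flow check: flag names read in a function that are
    only assigned on some paths (here: inside an if-branch)."""
    def visit_FunctionDef(self, node):
        conditional, reads = set(), []
        for stmt in node.body:
            if isinstance(stmt, ast.If):
                for sub in ast.walk(stmt):
                    if isinstance(sub, ast.Name) and isinstance(sub.ctx, ast.Store):
                        conditional.add(sub.id)
            else:
                for sub in ast.walk(stmt):
                    if isinstance(sub, ast.Name) and isinstance(sub.ctx, ast.Load):
                        reads.append((sub.id, sub.lineno))
        params = {a.arg for a in node.args.args}
        for name, line in reads:
            if name in conditional and name not in params:
                print(f"line {line}: '{name}' may be used before assignment")

ReadBeforeAssign().visit(ast.parse(SOURCE))
```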

14 Modelling
- Involves the use of a mathematical or graphical representation of the behaviour of a system or its environment.
- The model is used to gain an insight into the likely characteristics of the system.
- Can be applied manually or by using computer-based tools.
- Used most extensively during the early phases of project development; of particular importance in the production of the specification and the top-level design.
- Also plays an important part in system validation.
- Modelling covers a wide range of methods, including some aspects of formal methods. Such techniques are neither black-box nor white-box.

15 Modelling techniques
- Formal methods
- Software prototyping / animation
- Performance modelling
- State transition diagrams (see the sketch below)
- Time Petri nets
- Data flow diagrams
- Structure diagrams
- Environmental modelling
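State transition diagrams, for instance, translate naturally into executable models. The following sketch models a hypothetical shutdown controller and then exercises every defined transition (state transition testing); all state and event names are invented:

```python
# Hypothetical state transition model of a simple shutdown controller.
TRANSITIONS = {
    ("standby",  "start"): "running",
    ("running",  "stop"):  "standby",
    ("running",  "fault"): "shutdown",
    ("shutdown", "reset"): "standby",
}

def step(state: str, event: str) -> str:
    """Return the next state; undefined events leave the state unchanged
    (one possible design choice -- they could also raise an error)."""
    return TRANSITIONS.get((state, event), state)

# State transition testing: exercise every defined transition...
for (state, event), expected in TRANSITIONS.items():
    assert step(state, event) == expected

# ...and probe illegal event/state pairs as well.
assert step("standby", "fault") == "standby"
assert step("shutdown", "start") == "shutdown"
print("all transitions behave as modelled")
```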

16 Testing for safety
- Testing of non-critical systems is primarily concerned with investigating performance with respect to functional requirements.
- For safety-critical systems one must also show that the safety requirements are satisfied, so much of the testing is aimed at demonstrating the safety of the system.
  - General safety requirements: include the achievement of appropriate levels of safety integrity, reliability and quality.
  - Specific safety requirements: include mechanisms for dealing with the various hazards associated with the system.
- Validation of a system in respect of its specific safety requirements requires tests showing that each identified hazard has been effectively countered. It may be possible to demonstrate such properties by the use of dynamic testing alone, although static testing and modelling may also be needed.
- Validation of the general safety requirements of a system will often require a combination of testing techniques.
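A hazard-directed test of a specific safety requirement might look like the following sketch; the requirement, the temperature limits and the `heater_demand` function are all hypothetical:

```python
# Hypothetical specific safety requirement: "if the temperature reading
# leaves the range 0..120 C, the heater demand shall be forced to zero."
def heater_demand(temperature: float, requested: float) -> float:
    if not (0.0 <= temperature <= 120.0):
        return 0.0            # fail-safe action for the out-of-range hazard
    return requested

# Hazard-directed test cases: one per identified hazardous condition
# (under-range, over-range, and an invalid sensor reading).
hazard_cases = [(-1.0, 50.0), (121.0, 50.0), (float("nan"), 50.0)]
for temperature, requested in hazard_cases:
    assert heater_demand(temperature, requested) == 0.0
print("all identified hazards are countered")
```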

17 Test strategies
- Several testing techniques can be used in the development of safety-critical systems, and their relative use differs considerably between the various lifecycle phases (see Table 12.3).
- The choice of testing techniques is usually determined by a number of factors:
  - in-house expertise
  - available tools
  - the integrity level of the unit being developed.
- International standards give guidance on techniques that might be suitable for systems of differing levels of integrity (see Table 12.4).
- In addition to selecting appropriate techniques for the testing process, it is also necessary to demonstrate the effectiveness of the testing. This effectiveness may be quantified through measures of:
  - test coverage
  - test adequacy.

18 Test coverage (1)
- Test coverage analysis attempts to estimate the performance of the testing procedure as a percentage of some ideal value.
- Test coverage analysis may be applied to black-box testing by considering all the possible input states of a system. If the system is tested by applying a certain number of test cases, the test coverage may be calculated by dividing the number of test vectors used by the size of the input space.
- An ideal test would provide complete input test coverage; this is called exhaustive testing.
- This is, however, almost always impossible: a system with 40 binary inputs has an input space of 2^40 combinations, so performing one test per millisecond would take about 35 years.
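The coverage ratio and the 2^40 example can be checked with a few lines of arithmetic (the figure of 10,000 applied tests is an arbitrary example):

```python
# Input-space test coverage: test vectors applied / size of input space.
tests_applied = 10_000
input_space = 2 ** 40                 # 40 binary inputs
coverage = tests_applied / input_space
print(f"coverage: {coverage:.2e}")    # ~9.09e-09, i.e. essentially zero

# Exhaustive testing time at one test per millisecond:
seconds = input_space / 1000
years = seconds / (3600 * 24 * 365.25)
print(f"exhaustive testing would take about {years:.0f} years")  # ~35
```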

19 Test coverage (2)
- Effective testing is therefore reliant on the skill of the test engineer in defining a programme of tests that will yield meaningful data.
- As not all the properties of a system can be tested, it is necessary to identify the features of importance and to determine an appropriate strategy to investigate these.
- Coverage-based testing: identifies a number of situations to be investigated and then attempts to test an appropriate proportion of these cases.
  - Requirements test coverage: the percentage of the functions within the requirements document that are investigated.
  - Structure-based testing: information on the internal structure of the system is used to perform tests. Program elements that may be covered include statements, branches and paths (see the sketch below).
- Because of the importance and high cost of testing, the needs of testing must be considered during the design stage.
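As an illustration of structure-based coverage, the sketch below records which lines of a function actually execute under a given set of test inputs, using Python's tracing hook. Real coverage tools do this far more thoroughly; the `classify` function is invented for the example:

```python
import sys

# Function under test (hypothetical): one branch per sign of the input.
def classify(value):
    if value < 0:
        return "negative"
    return "non-negative"

def lines_executed(func, inputs):
    """Record which source lines of `func` run for a set of test inputs
    (a crude form of statement coverage measurement)."""
    executed = set()
    code = func.__code__

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is code:
            executed.add(frame.f_lineno - code.co_firstlineno)
        return tracer

    sys.settrace(tracer)
    try:
        for value in inputs:
            func(value)
    finally:
        sys.settrace(None)
    return executed

print(lines_executed(classify, [1, 2, 3]))  # the 'negative' branch is never hit
print(lines_executed(classify, [1, -1]))    # all statements now covered
```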

20 Test adequacy
- Test adequacy criteria determine the form and amount of testing for a given application, and also the manner in which the test results should be obtained and analysed.
- A typical set of criteria will require the use of several testing methods and will necessitate both black-box and white-box techniques.
- Criteria may be divided into two main categories:
  - requirements-based criteria (associated with black-box testing; they take their information from the definition of the system)
  - structure-based criteria (require white-box techniques and use data on the structure of the system).
- An adequacy criterion is normally associated with an underlying testing technique that is required to satisfy it.
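A requirements-based adequacy criterion can be checked mechanically once each test case is mapped to the requirements it exercises. The requirement and test identifiers below are invented for illustration:

```python
# Hypothetical criterion: every requirement must be exercised by
# at least one test case.
requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}
tests = {
    "TC-01": {"REQ-1"},
    "TC-02": {"REQ-1", "REQ-3"},
    "TC-03": {"REQ-4"},
}

covered = set().union(*tests.values())
untested = requirements - covered
adequacy = len(covered & requirements) / len(requirements)
print(f"requirements coverage: {adequacy:.0%}")                # 75%
if untested:
    print("criterion not satisfied; untested:", sorted(untested))  # REQ-2
```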

21 Development tools
- The development of any computer-based system requires the use of a range of hardware tools (logic analysers, timing analysers, personal computers) and software tools (compilers, debuggers, editors).
- Of special interest when developing safety-critical systems are the tools associated with dynamic and static testing.
- The effectiveness of testing will be greatly affected by the automated tools used.
- Since the verification of a system will be based on test results, it is important that the tools themselves are highly dependable.
- Unfortunately, few test tools are validated, and almost no tools have been developed to the integrity levels required for testing the most critical systems.

22 Environmental simulation
- When developing safety-critical systems it is often impossible, or inadvisable, to test a system fully within its operating environment (e.g. nuclear shut-down systems).
- In such cases, systems are tested using some form of simulation of the system's environment.
- This not only guarantees safety during the testing process, but may also allow a more efficient and complete investigation of the system's performance.
- The correctness of this simulation is fundamental to the validity of the test results.
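The closed-loop arrangement described here can be sketched as follows: a controller under test driven by a software model of its environment. The plant dynamics, thresholds and names are all invented for the example:

```python
def controller(temperature: float) -> bool:
    """System under test: returns True to demand a shutdown."""
    return temperature > 100.0

class SimulatedReactor:
    """Crude environment model: temperature rises until shut down."""
    def __init__(self):
        self.temperature = 20.0
        self.shut_down = False

    def step(self, shutdown_demand: bool):
        if shutdown_demand:
            self.shut_down = True
        if not self.shut_down:
            self.temperature += 5.0   # heating while running

plant = SimulatedReactor()
for _ in range(100):                  # closed-loop test run
    plant.step(controller(plant.temperature))

# The simulated hazard must be caught before the temperature runs away.
assert plant.shut_down and plant.temperature <= 105.0
print(f"shutdown occurred at {plant.temperature:.1f} C")
```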

23 Independent verification and validation
- Testing is more effective when performed by staff who are independent of those responsible for the implementation.
- As the integrity requirements increase, the need for independence also increases (IEC 61508).

Degree of independence required for validation:

Level of independence    | SIL 1 | SIL 2 | SIL 3 | SIL 4
-------------------------|-------|-------|-------|------
Independent persons      | HR    | HR    | NR    | NR
Independent department   | -     | HR    | HR    | NR
Independent organization | -     | -     | HR    | HR

(HR = highly recommended, NR = not recommended)

24 Exercises Chapter 12: 1, 2, 3, 5, 7, 8, 11, 18 and 19