
Ch6: Software Verification

1 Decision table based testing
 Applicability: the specification is described by a decision table.
 Tables describe how combinations of inputs generate different outputs.
 Advantages:
  Tables are easy to understand.
  They support a systematic derivation of tests.

2 Decision table based testing (contd.)
Example: “The word-processor may present portions of text in three different formats: plain text (p), boldface (b), italics (i). The following commands may be applied to each portion of text: make text plain (P), make boldface (B), make italics (I), emphasize (E), super emphasize (SE). Commands are available to dynamically set E to mean either B or I (we denote such commands as E=B and E=I, respectively). Similarly, SE can be dynamically set to mean either B (command SE=B) or I (command SE=I), or B and I (command SE=B+I).”

3 Decision table based testing (contd.)
[Decision table for the word-processor example: rows are the commands (P, B, I, E, SE) and the dynamic settings (E=B, E=I, SE=B, SE=I, SE=B+I); columns are the resulting actions (p, b, i, b,i); asterisks mark the combinations that trigger each action.]
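The table's content can be sketched as a small lookup function. The semantics below are an assumed reading of the informal spec; the function and parameter names (`apply_command`, `e_means`, `se_means`) are hypothetical:

```python
def apply_command(cmd, e_means="B", se_means="B+I"):
    """Return the text format produced by a formatting command.

    Formats: 'p' plain, 'b' boldface, 'i' italics, 'b,i' both.
    e_means / se_means model the dynamic settings E=... and SE=...
    (assumed semantics; the slide gives only the informal spec).
    """
    mapping = {"P": "p", "B": "b", "I": "i"}
    if cmd in mapping:
        return mapping[cmd]            # direct formatting command
    if cmd == "E":
        return mapping[e_means]        # E acts as whatever it is set to
    if cmd == "SE":
        return "b,i" if se_means == "B+I" else mapping[se_means]
    raise ValueError(f"unknown command: {cmd}")
```

A systematic test suite would exercise every column of the table, e.g. `apply_command("E", e_means="I")` should yield `"i"`.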

4 Cause effect graphs
 Based on a formal way of structuring complex input-output specifications.
 Inputs and outputs are transformed into Boolean values, and the program's transformation into a Boolean function.
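One such cause-effect relation can be written as a Boolean function. The example below is a hypothetical reading of the word-processor spec (argument names are invented):

```python
# Effect "text is bold" (b) as a Boolean function of the causes.
def text_is_bold(B, E, E_is_B, SE, SE_is_B, SE_is_B_plus_I):
    # b holds if the user asked for boldface directly (B), or used E
    # while E is set to B, or used SE while SE includes B.
    return B or (E and E_is_B) or (SE and (SE_is_B or SE_is_B_plus_I))
```

Each effect in the graph gets such a function; test cases are truth assignments to the causes.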

5 Cause effect graphs (contd.)
[AND/OR graph for the word-processor example: the causes B, I, P, E, E=B, E=I, SE, SE=B, SE=I, SE=B+I are connected through AND and OR nodes to the effects b, i, p.]
The AND/OR graph represents the correspondence between causes and effects.

6 Cause effect graphs (contd.)
Constraints may be added to describe which combinations of causes are legal:
 e – at most one of a, b, c (exclusive)
 i – at least one of a, b, c (inclusive)
 o – one and only one of a, b, c
 r – a requires b
 m – a masks b

7 Cause effect graphs (contd.)
“Both B and I exclude P (i.e., one cannot ask both for plain text and, say, italics for the same portion of text). E and SE are mutually exclusive.”
[The AND/OR graph from the previous slide, annotated with m constraints capturing these exclusion rules.]

8 Cause effect graphs (contd.)
 Complete coverage principle: generate all possible combinations of inputs and check whether the outputs occur according to the specification.
 Consistency and completeness checks: an output whose generating inputs violate the compatibility constraints signals an inconsistency; an output that no input combination generates signals incompleteness.
 The number of test cases may be reduced by working backwards from the outputs:
  For an OR node whose output must be true, generate only one test case with at least one input true (the output is true if at least one input is true).
  For an AND node whose output must be false, use only one combination with one false input.
 The method may highlight cases where an admissible input violates the graph's consistency requirements.
 It may also reveal incompleteness.
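The reason the backward reduction matters is that complete coverage is exponential in the number of causes. A minimal sketch of the exhaustive enumeration (the helper name is hypothetical):

```python
from itertools import product

# Complete-coverage sketch: enumerate every combination of Boolean causes.
# Each combination would then be fed to the effect functions and checked
# against the specification.
def all_cause_combinations(n_causes):
    return list(product([False, True], repeat=n_causes))

combos = all_cause_combinations(3)
assert len(combos) == 2 ** 3  # grows exponentially with the number of causes
```

With ten causes (as in the word-processor example) this already yields 1024 combinations, which motivates pruning from the outputs.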

9 Testing boundary conditions
 Input domain partitioning: partition the input domain into classes, assuming that the program behaves similarly for all data within a class.
 Typical programming errors occur at the boundaries between classes, e.g., writing x < y where x <= y was intended.
 Suggestion: test using values both within the classes and at their boundaries.
 Applies to both white-box and black-box testing.
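A minimal boundary-value sketch, assuming a hypothetical spec in which a discount applies to ages 18 through 64 inclusive (an off-by-one bug such as `<` for `<=` would be caught only by the boundary values):

```python
def in_discount_range(age):
    # Assumed spec: discount applies for ages 18 through 64 inclusive.
    return 18 <= age <= 64

# Boundary-value tests: at, just below, and just above each boundary.
assert not in_discount_range(17)
assert in_discount_range(18)
assert in_discount_range(64)
assert not in_discount_range(65)
```

A single mid-class value such as 40 would pass even against a faulty `18 < age` implementation; the boundary values 18 and 64 would not.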

10 The oracle problem
 How do we inspect the results of test executions to reveal failures?
 Oracles are required at each stage of testing.
 Automated test oracles are needed for running a large number of tests.
 Oracles are difficult to design – there is no universal recipe.
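One common way to automate an oracle, sketched here for a hypothetical sorting routine, is to check properties that every correct output must satisfy rather than hard-coding expected outputs:

```python
# A property-based oracle for a sort: the output must be ordered and must
# be a permutation of the input. (Illustrative; names are made up.)
def is_valid_sort(inp, out):
    ordered = all(out[i] <= out[i + 1] for i in range(len(out) - 1))
    same_elements = sorted(inp) == sorted(out)
    return ordered and same_elements

assert is_valid_sort([3, 1, 2], [1, 2, 3])
assert not is_valid_sort([3, 1, 2], [1, 2])      # lost an element
assert not is_valid_sort([3, 1, 2], [3, 1, 2])   # not ordered
```

Such an oracle can judge the result of any test input, which is what makes large automated test runs feasible.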

11 Testing in the large: Testing and modularity
 How do we test large software systems?
 Applying the techniques used for individual modules to a whole system is extremely complex – try enumerating all paths through an entire system.
 The software architecture of the system may be used to drive verification: test modules individually and then the whole system.
 Benefits of modular testing:
  It is easier to localize errors, to discover errors early, and to classify errors by their scope.
  Good design techniques enhance verifiability.

12 Testing in the large: Testing and modularity
 Module testing: verifies that a given module has been implemented correctly with respect to its expected external behavior.
 Integration testing: testing that occurs as the system is gradually integrated.
 System testing: testing the entire system, as a collection of all its modules, usually in the delivery environment.
 Acceptance testing: system testing performed by the customer.

13 Module testing
 Modules often cannot be tested in isolation.
 We may need to provide a temporary context in which the module can execute:
  any modules it calls;
  any non-local data structures it accesses;
  any procedures that call it.

14 Module testing (contd.)
 Stub: the module under test may call other modules. A stub is a procedure that emulates the behavior of a called module: it has the same input/output interface, but a simple implementation – possibly just a look-up table.
 Driver: the module under test may be called by other modules. A driver is a piece of code that simulates the behavior of a calling module.
[Figure: the procedure under test, with a driver issuing the call, a stub receiving its outgoing calls, and access to non-local variables.]
 Harness (or scaffolding): the support software needed for testing.
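A minimal stub-and-driver sketch, with an invented procedure under test (`lookup_price`) whose database dependency is injectable so the stub can replace it:

```python
# Procedure under test: normally calls a database module via db_fetch.
def lookup_price(item_id, db_fetch):
    record = db_fetch(item_id)
    return record["price"] * 1.5   # assumed 50% markup, for illustration

# Stub: same interface as the real database module, but a look-up table.
def stub_db_fetch(item_id):
    table = {1: {"price": 10.0}, 2: {"price": 4.0}}
    return table[item_id]

# Driver: exercises the procedure with the stub in place and checks results.
assert lookup_price(1, stub_db_fetch) == 15.0
assert lookup_price(2, stub_db_fetch) == 6.0
```

The stub keeps the test deterministic and independent of the real database; the driver plays the role of the (not yet available) calling module.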

15 Integration testing
 Big-bang testing: test the individual modules and then the whole system at once.
  There is no intermediate integration step; all inter-module dependencies are resolved during a single testing phase, and the many interactions can make this highly complex.
 Incremental testing: apply the incrementality principle to integration testing.
  Modules are developed and tested incrementally.
  It is easier to localize errors.
  Collections of modules may be identified as autonomous subsystems.
  The need for stubs and drivers during module testing is reduced.
  Integration can proceed bottom-up or top-down.

16 Integration testing (contd.)
 Bottom-up aggregation:
  Start aggregation and testing from the bottom of the USES hierarchy.
  Drivers are needed to emulate the calls of higher-level modules.
 Top-down aggregation:
  Start from the top-level modules.
  Stubs are needed to simulate lower-level modules.
 The top-down and bottom-up approaches may be used in conjunction with each other.

17 Integration testing (contd.)
[Figure: a USES hierarchy of five modules, A through E.]
If integration and test proceed bottom-up, only drivers are needed; if they proceed top-down, only stubs are needed.

18 Integration testing (contd.)
M1 USES M2, and M2 IS_COMPOSED_OF {M2,1, M2,2}.
CASE 1: Test M1, providing a stub for M2 and a driver for M1. Then provide an implementation for M2,1 and a stub for M2,2.
CASE 2: Implement M2,2 and test it by using a driver. Implement M2,1 and test the combination of M2,1 and M2,2 (i.e., M2) by using a driver. Finally, implement M1 and test it with M2, using a driver for M1.
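The first step of CASE 1 (real M1, stub for M2, driver for M1) can be sketched as follows; the module bodies are invented purely for illustration:

```python
# Stub standing in for M2 = {M2,1, M2,2}: same interface, canned behavior.
def m2_stub(x):
    return x + 1

# M1 USES M2; the dependency is injectable so the stub can replace it.
def m1(x, m2=m2_stub):
    return 2 * m2(x)

# Driver for M1: supplies inputs and checks outputs against expectations.
assert m1(3) == 8
assert m1(0) == 2
```

In the next step the stub for M2 would be replaced by an implementation of M2,1 plus a stub for M2,2, and the same driver re-run.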

19 Separation of concerns in testing
 In general, testing involves several phases, with different goals, performed by different people.
 Use the principle of separation of concerns to design test cases for different qualities: performance, robustness, user friendliness.
 A sample of different concerns:
  Overload testing: test behavior under peak conditions.
  Robustness testing: test under unexpected conditions, such as erroneous user commands or power failure.
  Regression testing: test whether correctness and other qualities are maintained after modifications.

20 Testing object-oriented systems
 General testing principles may be applied to object-oriented systems.
 Classes are similar to components in the traditional approach.
 Testing in the small: add a few methods to the class for testing purposes; other classes may be stubbed in.
 Testing in the large: inheritance, interactions, public interface vs. private implementation.

21 Testing object-oriented systems (contd.)
 New issues:
  Inheritance
  Genericity
  Polymorphism
  Dynamic binding
 Open problems still exist.

22 Testing object-oriented systems (contd.)
 Two ways of handling inheritance hierarchies:
  “Flattening” the whole hierarchy and considering every class as a totally independent component.
  Finding an ad-hoc way to take advantage of the hierarchy.
 A sample strategy distinguishes:
  Tests that do not have to be repeated for any child class.
  Tests that must be performed for child class X and all of its further children.
  Tests that must be redone by applying the same input data, but verifying that the output is not (or is) changed.
  Tests that must be modified by adding other input parameters and verifying that the output changes accordingly.

23 Testing object-oriented systems (contd.)
 Black-box testing:
  Test whether the class provides the expected functionality, ignoring its implementation – essentially testing of the public interface.
  Complete coverage principle: test each method.
 White-box testing:
  Test the hidden implementation, not just the public interface; apply coverage criteria to the test cases.
 Both black-box and white-box testing must be performed.
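A black-box test of a class's public interface might look like this, using Python's `unittest`; the `Stack` class is a hypothetical class under test:

```python
import unittest

class Stack:
    """Hypothetical class under test."""
    def __init__(self):
        self._items = []          # private implementation detail
    def push(self, x):
        self._items.append(x)
    def pop(self):
        return self._items.pop()
    def is_empty(self):
        return not self._items

class TestStack(unittest.TestCase):
    # Black-box: exercises each public method (complete coverage principle)
    # without touching the private _items representation.
    def test_push_pop(self):
        s = Stack()
        s.push(1)
        s.push(2)
        self.assertEqual(s.pop(), 2)   # LIFO order
        self.assertEqual(s.pop(), 1)
        self.assertTrue(s.is_empty())
```

A white-box test of the same class would, in addition, choose inputs to cover the branches inside the methods (e.g., popping from an empty stack).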