
1 INSE - Lecture 11: Testing
• Verification testing vs pursuit testing
• Philosophy of testing
• Stages of testing
• Methods of testing
• Design of test data
• Management issues

2 Verification testing versus Pursuit testing
• Definitions

3 Verification vs Pursuit testing
• Verification testing is testing to find out whether a product is “correct” and acceptable.
• Pursuit testing is when we know there is an error, and we improvise additional tests to “chase” the error and locate it - it’s better called diagnostics.
• Pursuit testing is really part of debugging - covered in the next lecture.
• This lecture is about verification testing.

4 Philosophy of testing
• Key observations

5 What tests can/can’t do
• A test can “prove” the existence of a bug.
• A test cannot “prove” the absence of bugs:
  – there can be bugs in a program, but the tests just don’t trigger them;
  – or the bug might be triggered, but you just don’t spot it in the output.
• So a “good” test will increase our confidence that there are no evident bugs.
  – but what else does “good” mean in that context?

6 “This product”
• Software products do very diverse things.
• So the tests need to be correspondingly diverse.
• So basic thinking from tests of one product is unlikely to carry forward well to another product.
• So every new product needs a stage of original thought on how to test this unique product.

7 Stages of testing
• Test preparation - philosophy, test design, test scheduling
• Component tests - usually find coding problems
• Integration tests - usually find design problems
• User tests - usually find spec & other problems
• Maintenance tests - find introduced problems

8 Test preparation
• Derives from specification documents and design documents;
• Needed after implementation…
  – so it really needs to be a separate “stream” of the lifecycle, in parallel to implementation.
• Should not be improvised in a hurry after implementation - such tests will have “gaps” in their coverage.

9 Component tests / Unit tests
• To test a small fragment in isolation, we need a small “main program” for the purpose…
• … we call this a “test harness” (a minimal sketch follows below).
• The test harness should be designed to (try to) exhibit possible faults.
• Some IDEs permit direct execution without explicit test harnesses.
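As an illustration only (not from the lecture), here is a minimal sketch of a hand-written test harness in Python. The unit under test, word_count, is a hypothetical stand-in included here only so the sketch runs.

# A minimal hand-written test harness: a tiny "main program" that drives the
# unit under test with known inputs and checks the outputs.

def word_count(s):
    """Hypothetical unit under test: count whitespace-separated words."""
    return len(s.split())

def run_tests():
    cases = [
        ("", 0),                         # empty input
        ("hello", 1),                    # single word
        ("hello  world", 2),             # repeated whitespace
        ("  leading and trailing  ", 3), # surrounding whitespace
    ]
    failures = 0
    for text, expected in cases:
        actual = word_count(text)
        if actual != expected:
            failures += 1
            print(f"FAIL: word_count({text!r}) = {actual}, expected {expected}")
    print(f"{len(cases) - failures}/{len(cases)} cases passed")

if __name__ == "__main__":
    run_tests()

The same structure scales up: the harness chooses the inputs, calls the unit, and reports anything unexpected, so a failure is deliberately made visible rather than hoped-for.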

10 Component tests / Module tests
• Ideally, one is testing something that doesn’t have so many bugs that their symptoms confuse one another…
  – … which suggests an optimum “module size”
    » e.g. if you average one bug per 250 lines, then keep modules down to (say) 500 lines.
• Again - need a test harness (or IDE support).

11 Integration / Subsystem tests
• Again - need a test harness (or IDE support).
• Hard to test a module without having already tested and debugged any modules it needs…
• … but we might “fake” a used module by instead using a “test stub”…
• … so we can, to some extent, achieve top-down testing (a stub sketch follows below).
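A minimal sketch of a test stub, assuming a hypothetical pricing routine that depends on a currency-rate module; all names here are illustrative, not from the lecture.

# Higher-level unit under test: it depends on a rate-provider module.
def price_in_gbp(amount_usd, rate_provider):
    """Convert a USD amount to GBP using the supplied rate provider."""
    return round(amount_usd * rate_provider.usd_to_gbp(), 2)

# Test stub standing in for the real (not-yet-written or not-yet-tested) module:
class StubRateProvider:
    def usd_to_gbp(self):
        return 0.5   # fixed, predictable rate so expected results are obvious

def test_price_in_gbp_with_stub():
    stub = StubRateProvider()
    assert price_in_gbp(10.00, stub) == 5.00
    assert price_in_gbp(0.0, stub) == 0.0

if __name__ == "__main__":
    test_price_in_gbp_with_stub()
    print("stub-based tests passed")

Because the stub returns fixed, predictable values, the higher-level module can be tested before the module it depends on exists or has been debugged.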

12 Integration / System tests
• Testing the whole product - i.e. the first test against the (whole) specification since prototyping.
• It’s often very hard to devise comprehensive system tests - especially ones that reflect “live” patterns of use.

13 User tests / Beta tests
• Give a near-finished version of the product to sample customers…
• … almost a sort of late prototyping;
• … meets the problem of testing in “real” situations?
• … the lack of finish might be:
  – cosmetic;
  – fancier facilities missing;
  – debugging not yet complete.
• Biggest problem: how to collect representative feedback.

14 User tests / Acceptance tests
• For software “written to order”, these are usually specified in the contract - i.e. what the customer wants to see before agreeing that you’ve met the contract.
• Therefore usually done by (or with) customer staff; or perhaps by third-party independent specialist testers.

15 Maintenance tests
• All the usual tests, but many with an extra flavour…
• Regression tests - comparing the results of a test with the results of the same test on a prior version of the software - often to confirm there has been no change, sometimes to confirm there have been only the intended changes in the results (a minimal sketch follows below).
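A minimal regression-test sketch: it compares the current program’s output against a “golden” output file saved from a previously accepted version. The script name, input file, and golden file are all hypothetical.

# Minimal golden-file regression check (illustrative names throughout).
# expected_output/report.txt is assumed to hold output captured from a
# previously accepted version of the program.

import subprocess
from pathlib import Path

def test_report_unchanged():
    expected = Path("expected_output/report.txt").read_text()
    # Run the current version on exactly the same fixed input as before.
    result = subprocess.run(
        ["python", "generate_report.py", "--input", "test_data/orders.csv"],
        capture_output=True, text=True, check=True,
    )
    # Any difference is either a regression or an intended change;
    # if the change was intended, the golden file is updated after review.
    assert result.stdout == expected, "output differs from the prior version"

if __name__ == "__main__":
    test_report_unchanged()
    print("regression test passed")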

16 Methods of testing
• top-down testing?
• static testing
• dynamic testing & design of test data
• black-box & white-box
• be-bugging

17 Top-down testing?
• The “natural” order of testing is bottom-up.
• But using test stubs we can (to some extent) test top-down.

18 Static test methods
• Walkthroughs of the code
• Compiler checks
• Checks based on tools
  – e.g. cross-referencers
• “Proving” the source code
  – very long proofs => programs to do the proving

19 Dynamic test methods
• Running the program
  – with carefully-designed test data
  – then carefully checking the output
• Profile-running the program
  – inspect the profile for anomalies
• Running the program under a “dynamic debugger”

20 Black-box tests
• Design the test data (and harness) to determine how well the product matches its specification
  – e.g. for some “range” input, try:
    » just in range (both ends?),
    » just out of range (both ends?),
    » a sample well in range, and
    » a sample well out of range
  (a boundary-value sketch follows below).
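As an illustration of that boundary-value pattern, here is a sketch assuming a hypothetical validator that accepts integer ages in the range 0..120 inclusive; the function and the range are invented for the example.

# Boundary-value test data for a hypothetical range check (0..120 inclusive).
def is_valid_age(age):
    return 0 <= age <= 120

def test_age_range_boundaries():
    cases = [
        (0, True),     # just in range, lower end
        (120, True),   # just in range, upper end
        (-1, False),   # just out of range, lower end
        (121, False),  # just out of range, upper end
        (35, True),    # a sample well in range
        (999, False),  # a sample well out of range
    ]
    for age, expected in cases:
        assert is_valid_age(age) == expected, f"unexpected result for age={age}"

if __name__ == "__main__":
    test_age_range_boundaries()
    print("black-box boundary tests passed")

Note that the cases are chosen purely from the specification of the range, without looking at how the check is implemented.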

21 White-box tests
• Tests designed using internal knowledge of the design & code;
• attack especially any perceived weak points -
  – profiling to ensure every execution path is tested;
  – adding “print” statements to verify transient values;
  – avoid re-testing dual uses of re-used code?
  (a branch-coverage sketch follows below.)
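A small white-box sketch: the test cases below are chosen by reading the code’s structure so that every branch of this illustrative function is executed at least once.

# White-box example: one test case per branch of the function under test.
def classify_triangle(a, b, c):
    if a <= 0 or b <= 0 or c <= 0:
        return "invalid"
    if a + b <= c or a + c <= b or b + c <= a:
        return "invalid"          # fails the triangle inequality
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

def test_all_branches():
    assert classify_triangle(0, 1, 1) == "invalid"      # non-positive side branch
    assert classify_triangle(1, 2, 10) == "invalid"     # triangle-inequality branch
    assert classify_triangle(3, 3, 3) == "equilateral"  # all-sides-equal branch
    assert classify_triangle(3, 3, 5) == "isosceles"    # two-sides-equal branch
    assert classify_triangle(3, 4, 5) == "scalene"      # default branch

if __name__ == "__main__":
    test_all_branches()
    print("all branches exercised")

A coverage tool (for example coverage.py, via `coverage run` followed by `coverage report`) can then confirm which lines and branches were actually executed, rather than relying on inspection alone.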

22 The “be-bugging” method
• Deliberately introduce a known number of “typical” bugs into a fairly clean program.
• Set a new team of testers to find bugs in the program.
• Suppose they find (say) 2/3 of the bugs you “sowed” in the program, plus another 10.
• Assume those 10 are also 2/3 of the bugs that neither they nor you knew about.
• Then there are ~5 more unknown bugs to find…? (The arithmetic is sketched below.)
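A short worked version of that arithmetic. The slide fixes only the 2/3 detection ratio and the 10 extra bugs; the figure of 30 seeded bugs is an assumption chosen to make the ratio concrete.

# Worked be-bugging estimate (seeded count is assumed, ratio is from the slide).
seeded_bugs = 30            # bugs deliberately "sown" into the program
seeded_found = 20           # 2/3 of the seeded bugs were found by the testers
genuine_found = 10          # previously unknown bugs found by the same testers

detection_rate = seeded_found / seeded_bugs                 # = 2/3
estimated_genuine_total = genuine_found / detection_rate    # 10 / (2/3) = 15
remaining_unknown = estimated_genuine_total - genuine_found # ~5 still to find

print(f"detection rate: {detection_rate:.2f}")
print(f"estimated genuine bugs in total: {estimated_genuine_total:.0f}")
print(f"estimated bugs still to find: {remaining_unknown:.0f}")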

23 Design of test data

24 Design of (dynamic) test data
• The data for test runs should be designed to search in every corner of
  – the code under test (white-box testing);
  – the problem & the specification (black-box testing)
  for all imaginable errors.
• (Unimaginable errors, too!)
• For unit & subsystem tests, that usually means designing the test data and the test harness (for the unit/subsystem) together.

25 All test output needs checking
• (Something often forgotten when designing tests & test data!)
• Design the tests so that the output can easily & reliably be checked -
  – e.g. helpful layout;
  – e.g. not of excessive volume;
  – e.g. “simple” - following some evident pattern that doesn’t need careful thought
  (a self-checking sketch follows below).
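One way to keep the output checkable is to make the test check itself and print only a short summary. A minimal sketch, using a sort routine as a stand-in unit under test (the names and case counts are illustrative):

import random

def run_sort_tests(sort_fn, n_cases=1000):
    """Exercise sort_fn on random data; emit one short, easily checked summary line."""
    failures = 0
    for _ in range(n_cases):
        data = [random.randint(-1000, 1000) for _ in range(random.randint(0, 50))]
        if sort_fn(data) != sorted(data):
            failures += 1
    # One line of output instead of thousands of values for a human to eyeball:
    print(f"sort tests: {n_cases - failures}/{n_cases} passed")

if __name__ == "__main__":
    run_sort_tests(lambda xs: sorted(xs))   # stand-in for the real unit under test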

26 Management issues in testing

27 Things to ensure -
• Check that it’s done!
  – … and done well enough for that product!
  – … and done imaginatively but meticulously!
• Aptitudes and motivation of testing staff?
  – test designers?
  – test doers?
  – checkers of the test output?
• Audit trails of testing done -
  – test auditors?
  – test documenters?

28 After this lecture
• Think about the testing you are going to do, and the testing you have done.

29 © C Lester 1997-2014

