Software Engineering
Dr. Thomas E. Potok
Adjunct Professor, University of Tennessee
Research Staff Member, ORNL
Testing
Testing Objectives
Ensure that the final product works according to the requirements
Ensure that the product works correctly in a wide variety of situations
Validation
Ensuring that the product meets the initial requirements or contract
Demonstration may be required to validate that the software performs as expected
– Performance requirements
– Functional requirements
– Usability requirements
Methods
– Black box
– White box
– Random
– User centered
– Regression
Black Box
Treat the item to be tested as a black box
Concerned only with inputs and outputs
Easy test to perform
– Provide an input set, compare the results against the known correct answers
May not exercise key branches of the code
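A black-box check can be sketched as a table of inputs paired with known correct answers; `leap_year` below is a hypothetical unit under test, not code from the course:

```python
# Black-box testing: only inputs and outputs matter; the test has no
# knowledge of the branches inside the unit under test.

def leap_year(year: int) -> bool:
    # Hypothetical unit under test; its internals are irrelevant to the test.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Input set paired with known correct answers (the "oracle").
cases = {2000: True, 1900: False, 2024: True, 2023: False}

results = {year: leap_year(year) == expected for year, expected in cases.items()}
all_passed = all(results.values())
```

Note that the test would look identical no matter how `leap_year` was implemented, which is exactly why it may miss branches the chosen inputs never reach.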
White Box
Test all of the possible paths within a section of code
Requires strong knowledge of the code, and great care in generating test input
The number of possible paths through a complex section of code can be very large
Very thorough, but may be impractical
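A white-box sketch: the inputs are chosen from the code itself, one per path. `classify` is a hypothetical function with three distinct paths:

```python
# White-box testing: choose inputs so that every path through the code runs.

def classify(x: int) -> str:
    if x < 0:
        return "negative"   # path 1
    elif x == 0:
        return "zero"       # path 2
    return "positive"       # path 3

# One input per path covers this small function completely; in real code
# the number of paths grows combinatorially with the number of branches.
path_inputs = [-5, 0, 7]
outputs = [classify(x) for x in path_inputs]
```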
Random Testing
Randomly generate input, then validate that the output is correct
Easy test to perform
May not fully exercise the code
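With random inputs there is usually no precomputed answer, so a common approach is to check a property of the output instead. A minimal sketch, where `my_sort` is a hypothetical unit under test:

```python
import random

# Random testing: generate inputs at random, then validate the output
# against properties it must satisfy rather than exact expected values.

def my_sort(xs):
    # Hypothetical unit under test.
    return sorted(xs)

random.seed(0)  # fixed seed so any failure is reproducible
failures = []
for _ in range(100):
    xs = [random.randint(-1000, 1000) for _ in range(random.randint(0, 20))]
    out = my_sort(xs)
    ordered = all(out[i] <= out[i + 1] for i in range(len(out) - 1))
    same_elements = sorted(out) == sorted(xs)
    if not (ordered and same_elements):
        failures.append(xs)
```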
User Centered
Determine which features and functions most users will use
Test the software based on expected usage patterns
Will find most bugs that users will encounter
Will miss bugs found by more sophisticated users
Regression Testing
Store up test cases from previous releases
Run these tests for every new release
– New changes will not “break” older functions
– The tests are well understood, and provide a good test of the overall system
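A minimal regression sketch: cases saved from earlier releases are replayed against the new release. `discount` is a hypothetical function, and the JSON string stands in for a test-case file kept between releases:

```python
import json

# Regression testing: stored input/expected-output cases from previous
# releases are re-run to confirm new changes did not break old behavior.

def discount(price: float, rate: float) -> float:
    # Hypothetical function under regression test.
    return round(price * (1 - rate), 2)

# Each stored case is [price, rate, expected_result].
stored_cases = json.loads('[[100.0, 0.1, 90.0], [59.99, 0.25, 44.99]]')

regressions = [(price, rate) for price, rate, expected in stored_cases
               if discount(price, rate) != expected]
broke_old_behavior = bool(regressions)
```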
How Much Testing Is Required?
Theoretically, there are a fixed number of errors in a section of software
Further, this number can be estimated given the size and complexity of the code
Testing can then be planned around finding some percentage of the latent bugs in the system
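The estimate can be put into numbers; the defect density and stopping target below are illustrative assumptions, not figures from the course:

```python
# Rough latent-defect budget from size: estimated bugs = size x defect density.
# Both the density and the target fraction are assumed for illustration only.

kloc = 20                # system size in thousands of lines of code
defects_per_kloc = 5     # assumed defect density
target_fraction = 0.9    # stop after finding ~90% of the estimated latent bugs

estimated_latent = kloc * defects_per_kloc
stop_after = int(estimated_latent * target_fraction)
```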
Error Classification
It is even possible to classify the types of errors that will be found
If this is known, then the type of testing that is needed can be determined as well
For example, if performance errors are expected, then performance testing can be applied
Testing Types
Many different types of testing
– Unit
– Functional
– System
– Usability
– Performance
Unit Testing
Verifying that a single module works correctly
White box testing can be very effective, particularly for small modules
Testing is generally informal, often performed by the author of the code
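A minimal unit test of a single module, of the kind the code's author might write; `word_count` is a hypothetical unit under test:

```python
import unittest

# Unit testing: verify one small module in isolation.

def word_count(text: str) -> int:
    # Hypothetical unit under test.
    return len(text.split())

class WordCountTest(unittest.TestCase):
    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

    def test_sentence(self):
        self.assertEqual(word_count("unit tests run fast"), 4)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(WordCountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```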
Function Testing
Testing several modules that make up part of the functionality of the system
White box testing can be used, but is often impractical
Black box or random testing is often used
The testing is normally not done by the authors of the code
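A function-level test exercises several modules together through the interface that combines them, black-box style. All three functions below are hypothetical stand-ins for separately written modules:

```python
# Function testing: several modules exercised through their combined interface.

def parse_price(text: str) -> float:
    # Hypothetical module 1: input parsing.
    return float(text.strip().lstrip("$"))

def apply_tax(amount: float, rate: float = 0.08) -> float:
    # Hypothetical module 2: pricing; the 8% rate is an assumed default.
    return round(amount * (1 + rate), 2)

def checkout_total(price_text: str) -> float:
    # The combined function under test.
    return apply_tax(parse_price(price_text))

total = checkout_total(" $10.00 ")
```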
System Testing
Testing the entire system
White box testing is usually impractical
Black box and random testing can be used
User centered testing can be quite effective
Automated testing procedures can be used as well
Usability Tests
Difficult test to perform
Find a collection of typical users
Videotape the users performing typical tasks with the product
Fix and adjust the system, then retest
Performance Test
Record various performance characteristics of the system
– Response time
– CPU, memory, and disk utilization
Measurements are based on various operations performed
Under a variety of expected hardware and software loads
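A small response-time measurement sketch: run one operation repeatedly and record wall-clock timings. The operation and the repeat count are illustrative:

```python
import time

# Performance testing: record response times over repeated runs.

def operation(n: int) -> int:
    # Hypothetical operation under measurement.
    return sum(i * i for i in range(n))

timings = []
for _ in range(5):                     # repeat to smooth out noise
    start = time.perf_counter()
    operation(100_000)
    timings.append(time.perf_counter() - start)

mean_time = sum(timings) / len(timings)
worst_time = max(timings)
```

A fuller performance test would also sample CPU, memory, and disk utilization while the operation runs, and repeat the measurement under different background loads.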
Quality
What defines quality?
– Error free?
– Easy to use?
– Wide functionality?
Can quality be too high?
When do you stop using a software package because of quality?
Testing Goals
Provide the minimum amount of testing that ensures maximum quality for the user
Testing little-used features, or over-testing other features, is a waste of time
Removing every bug in a system takes a great deal of time and money, even though a user may never encounter some of these errors
Mean Time to Failure
A typical measurement of quality is MTTF
A light bulb has an MTTF of many hours, and a known failure rate
Software may have a similar failure rate, but it is much harder to determine
In some cases these values are critical
– Telecommunications, military, or space travel
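MTTF can be estimated from observed uptimes between failures; the uptime figures below are made-up sample data, not measurements:

```python
# MTTF estimated as the average observed time between successive failures.
# The uptimes are illustrative sample values.

uptimes_hours = [120.0, 95.5, 130.0, 110.5]   # hours between successive failures
mttf = sum(uptimes_hours) / len(uptimes_hours)
failure_rate = 1.0 / mttf                      # failures per hour
```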