1 Testing Overview
References:
Pressman, Software Engineering: A Practitioner's Approach, McGraw-Hill.
Pfleeger, Software Engineering: Theory and Practice, Prentice Hall.
J. McGregor and D. Sykes, A Practical Guide to Testing Object-Oriented Software, Addison-Wesley, 2001.
I. Burnstein, Practical Software Testing, Springer-Verlag, 2003.

2 Question How do you know your software works correctly?

3 Goals of testing: I want to show that my program is correct: it generates the right answer for every input.

4 How long will it take? Consider exhaustively testing X + Y for all pairs of 32-bit integers X and Y.
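To make the point concrete, here is the arithmetic for exhaustively testing 32-bit addition, assuming a (very generous) rate of one billion test executions per second:

```python
# Exhaustive testing of X + Y over all pairs of 32-bit integers:
cases = 2**32 * 2**32          # one test per (X, Y) pair
print(cases)                   # 18446744073709551616, about 1.8e19

# Even at a billion tests per second:
seconds = cases / 1e9
years = seconds / (60 * 60 * 24 * 365)
print(round(years))            # about 585 years
```

Exhaustive testing is hopeless even for this trivial function, which is why the rest of the deck is about selecting significant test cases.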

5 Example 2
[Control-flow graph: node A at the head of a loop; the loop body passes through either B or C and returns to A.]
We want to count the number of paths. The maximum number of iterations of the loop is 20.
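Under one plausible reading of the (lost) diagram, each iteration chooses between branch B and branch C, and the loop runs between 0 and 20 times; that reading is an assumption, but it shows how quickly paths multiply:

```python
# Assumed graph: 2 branch choices (B or C) per iteration,
# loop executes i times for some i in 0..20.
# i iterations contribute 2**i distinct paths.
paths = sum(2**i for i in range(0, 21))
print(paths)  # 2097151 (= 2**21 - 1)
```

Over two million paths from a three-node graph; path coverage, like exhaustive input coverage, does not scale.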

6 Example 3 Consider testing the following function:
The function fn shall take as input an integer value and return the integer part of the one number look ahead divided by 30,000. (The one number look ahead is simply the number one greater than the input.)
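The spec above can be sketched directly, along with the boundary-value tests it invites. This is a minimal sketch assuming non-negative inputs, where Python's floor division matches "integer part"; the name `fn` comes from the slide:

```python
def fn(x: int) -> int:
    # one-number look-ahead = x + 1; return its integer quotient by 30,000
    return (x + 1) // 30000

# Boundary-value test cases around the 30,000 divisor:
assert fn(0) == 0
assert fn(29998) == 0    # look-ahead is 29,999: just below the boundary
assert fn(29999) == 1    # look-ahead is exactly 30,000
assert fn(59999) == 2
```

Notice that a tester who never tries values near multiples of 30,000 can run thousands of tests without exercising the interesting behavior.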

7 Example 4 Consider testing a Java compiler.

8 Limits of testing:

9 Goals of testing:

10 Cost
Testing accounts for
Microsoft employs one tester for each developer.
We want to reduce the cost. How? Organize!

11 Levels of Software Testing
Unit/Component testing
Integration testing
System testing
Acceptance testing
Installation testing

12 Testing Phases: V-Model
[V-model diagram: each development phase on the left produces the plan for, and pairs with, a test phase on the right.]
Requirements Specification → Acceptance Test Plan → Acceptance Test
System Specification → System Integration Test Plan → System Integration Test
System Design → Sub-system Integration Test Plan → Sub-system Integration Test
Detailed Design → Unit Code and Test
Service follows the Acceptance Test.

13 Hierarchy of Testing
[Tree diagram, reconstructed from the flattened labels:]
Testing
    Ad hoc
    Program Testing
        Unit Testing
            Black Box: Equivalence, Boundary, Decision Table, State Transition, Use Case, Domain Analysis
            White Box: Control Flow, Data Flow
        Integration Testing: Top Down, Bottom Up, Big Bang, Sandwich
    System Testing
        Function
        Properties: Performance, Reliability, Availability, Security, Usability, Documentation, Portability, Capacity
    Acceptance Testing: Benchmark, Pilot, Alpha, Beta

14 Definitions-1
Test case: a particular set of input data and the expected outcome.
Input d from some domain D (d ∈ D); output r from some range R (r ∈ R).
Specification: the definition of some function S that maps inputs to outputs: S(d) = r.

15 Definitions-2
Implementation: the function P that some program implements, mapping inputs to outputs: P(d) = r.
Failure: P(d) = r, but S(d) ≠ r.

16 Definitions-3
Test set T: a finite set of test cases.
T is successful if every test in T produces the specified result.
Significant test case (set): a test case (set) with a high probability of detecting an error.

17 Definitions-4
Test selection criterion C: a set of test sets. We specify some condition that each of the test sets in C must satisfy, such as "has at least one positive and one negative number."
C is consistent if, for any two test sets satisfying C, T1 ∈ C and T2 ∈ C, T1 is successful if and only if T2 is: all the test sets satisfying a consistent criterion expose the same errors.
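A criterion can be modeled as a predicate over test sets. This hypothetical sketch uses the slide's own example condition; the function name and sample sets are illustrative:

```python
# A test set is a list of (input, expected_output) pairs.
def criterion(test_set):
    """Slide's example: at least one positive and one negative input."""
    inputs = [d for (d, _expected) in test_set]
    return any(d > 0 for d in inputs) and any(d < 0 for d in inputs)

T1 = [(5, 25), (-3, 9)]   # e.g. test cases for a square() function
T2 = [(1, 1)]

print(criterion(T1))  # True: has a positive and a negative input
print(criterion(T2))  # False: no negative input, so T2 is not in C
```

C is then the (infinite) collection of all test sets for which the predicate holds.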

18 Definitions-5
Criterion C is complete if, whenever P is incorrect, some test case in some test set satisfying C exposes the error:
∃T ∈ C . ∃(d, r) ∈ T . S(d) ≠ P(d)
C1 is finer than C2 if ∀T1 ∈ C1 . ∃T2 ∈ C2 . T2 ⊆ T1
For every test set satisfying C1 there is a smaller (subset) test set satisfying C2; C1 requires more test cases.

19 Exercise C1 is more reliable than C2 if, whenever P is incorrect, there is no test set T2 ∈ C2 that causes P to fail while some T1 ∈ C1 does not.

20 The problem with testing

21 Who Tests?
Professional testers: organize and run tests.
Analysts: involved in system requirements definition and specification.
System designers: understand the proposed solution and its constraints.
Implementers: understand constraints associated with the implementation.
Configuration management representative: arranges for changes to be made consistently across all artifacts.

22 A good test: Has a reasonable probability of catching an error

23 Competent Programmer Hypothesis
“We assume, as an article of faith, that the programmers are well trained, well supplied with the proper tools, and competent.”

24 Ad hoc Testing Most popular approach

25 Ad Hoc Testing Simple example (Kaner):
“The program is designed to add two numbers, which you enter. Each number should be one or two digits. The program will echo your entries, then print the sum. Press <Enter> after each number. To start the program, type ADDER.”
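Even for this tiny spec, an ad hoc tester quickly turns up boundary and invalid-input cases. This is a hypothetical model of the ADDER program (the real one is a standalone executable we cannot run here), with a few of the test cases such a session might try:

```python
def adder(a: str, b: str) -> int:
    """Model of Kaner's ADDER: each entry should be one or two digits."""
    if not (a.isdigit() and 1 <= len(a) <= 2):
        raise ValueError(f"bad entry: {a!r}")
    if not (b.isdigit() and 1 <= len(b) <= 2):
        raise ValueError(f"bad entry: {b!r}")
    return int(a) + int(b)

assert adder("2", "3") == 5        # the obvious happy path
assert adder("99", "99") == 198    # largest valid inputs

# Inputs the spec does not allow: three digits, a sign, empty, a decimal.
for bad in ("100", "-5", "", "1.2"):
    try:
        adder(bad, "1")
        raise AssertionError(f"should have rejected {bad!r}")
    except ValueError:
        pass
```

Whether the real ADDER rejects "-5" or "1.2" is exactly the kind of question the spec leaves open, and ad hoc testing thrives on such gaps.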

26 Automate testing
Automate whenever possible, and design the software for testability.
"A century ago the steam locomotive reached its peak. Fifty years before that, intercity stagecoaches pulled by teams of horses had an outrider on the first horse to stabilize the team. Manual testing to me is like having such a rider at the front of a speeding locomotive." — Beizer

27 Advice
View testing as part of the development process.
Buy a tool and use it.
Testing is the last line of defense: errors indicate there is a problem with the development process.

28 Closing words
"Testing is our last line of defense against bugs, not the first or only line of defense. When a bug is found by testing, it means that earlier phases of our software development process are wanting."
"I don't see testing actually disappearing, because the remaining bugs are always subtler and nastier." — Beizer

