
Presentation on theme: "TESTING". Presentation transcript:

TESTING

Overview
- Motivation
- Testing glossary
- Quality issues
- Non-execution-based testing
- Execution-based testing
- What should be tested?
- Correctness proofs
- Who should perform execution-based testing?
- Testing distributed SW
- Testing real-time SW
- When does testing stop?
- Summary


Motivation
- SW life cycle models too often include a separate testing phase.
- Nothing could be more dangerous!
- Testing should be carried out continuously throughout the SW life cycle.


Testing – Glossary
- "V & V" vs. testing:
  - Verification: determine whether a phase was completed correctly. Takes place at the end of each phase. Boehm: verification = "Are we building the product right?"
  - Validation: determine whether the product as a whole satisfies its requirements. Takes place before the product is handed to the client. Boehm: validation = "Are we building the right product?"

Testing – Glossary (Cont'd)
- Warning:
  - "Verify" is also used for all non-execution-based testing.
  - "V & V" might imply that there is a separate phase for testing.
- There are two types of testing:
  - Execution-based testing
  - Non-execution-based testing
- It is impossible to 'execute' the MRD or EPS.
- On the other hand, is code testing enough/efficient for the implementation phases?


Quality

Quality (Cont'd)
- Quality:
  - Peculiar and essential character
  - An inherent feature
  - Degree of excellence
  - Superiority in kind
  - An intelligible feature by which a thing may be identified

SW Quality
- In other areas, quality implies excellence.
- Not here!
- The quality of SW is the extent to which the product satisfies its specifications.

SW Quality (Cont'd)
- Very often bugs are found as the delivery deadline approaches: release a faulty product, or deliver late.
- Have a separate SW Quality Assurance (SQA) team.
- Instead of 100 programmers devoting 30% of their time to SQA activities, have full-time SQA professionals.
- In a small company, use cross-review.

SW Quality (Cont'd)
- Managerial independence between:
  - The development group
  - The SQA group

The SQA Team Responsibilities
- To ensure that the current phase is correct.
- To ensure that the development phases have been carried out correctly.
- To check that the product as a whole is correct.
- The development of standards and tools to which the SW and the SW development process must conform [CMM level?].
- Establishment of monitoring procedures for assuring compliance with those standards.


Non-Execution-Based Testing
- Underlying principles:
  - Group synergy.
  - We should not review our own work.
  - We all have blind spots: cover your right eye and stare at the red circle, then slowly move toward or away from the page (without looking at the blue star); at some point the blue star disappears from the picture. That is your blind spot!


Non-Execution-Based Testing (Cont'd)
- Non-execution-based testing techniques:
  - Walkthrough
  - Inspection
  - (Peer reviews)

Walkthrough – The Team
- 4–6 members, representing:
  - A specification team member (the document author?)
  - The specification team manager
  - The client
  - The next team (the spec's clients)
  - SQA – the chairman of the walkthrough

Walkthrough – Preparations
- Set up the team.
- Distribute the spec in advance.
- Each reviewer prepares two lists:
  - Things he/she does not understand
  - Things he/she thinks are incorrect
- Execute the walkthrough session(s).

The Walkthrough Session
- Chaired by SQA (the one with the most to lose), whose roles are to:
  - Elicit questions,
  - Facilitate discussion,
  - Prevent a 'point-scoring session',
  - Prevent an annual-evaluation session (remember the team leader and team manager...).
- Sessions of up to 2 hours.
- Might be participant-driven or document-driven.
- Verbalization leads to fault finding!
- Most faults are found by the presenter!

The Walkthrough Session (Cont'd)
- Detect faults – do not correct them!
- Why?
  - Cost-benefit of correcting on the spot (6 members...).
  - Faults should be analyzed carefully.
  - There is not enough time in the session.
  - The 'committee attitude.'

Inspection
A more formal process, with six stages:
- Planning – set up the team, set the schedule.
- Overview session – overview and document distribution.
- Preparation – learn the spec, aided by statistics on fault types; that is, utilize the organization's knowledge base.
- Inspection – walk through the document, verifying each item. A formal summary will be distributed, including ARs (action-required items: tasks and due dates).
- Rework – fault resolution.
- Follow-up – verify that every issue is resolved: fix or clarification.

Inspection (Cont'd)
A team of five:
- Moderator (e.g., the spec team leader):
  - Manages the inspection – the team and the object.
  - Ensures that the team takes a positive approach.
- Specification author:
  - Answers questions about the product.
- Reader:
  - Reads the document aloud.
- Recorder:
  - Documents the results of the inspection.
- SQA inspector/specialist:
  - Provides an independent assessment of the spec.

Inspections (Cont'd)
Use a checklist of potential faults:
- Is each item of the spec correctly addressed?
- In the case of an interface, do actual and formal arguments correspond?
- Have error-handling mechanisms been identified?
- Is the SW design compatible with the HW design?
- Etc.
Throughout the inspection: record faults.

Fault Statistics
- Recorded by severity and fault type:
  - Major (premature termination, DB damage, etc.)
  - Minor
- Usage of the data:
  - Compare with previous products.
  - What if there is a disproportionate number of faults in a specific module? Maybe redesign it from scratch?
  - Carry fault statistics forward to the next phase.
  - Not for performance appraisal!

Inspection – Example [Fagan, 1976]
- A 100 person-hour task.
- Rate of two 2-hour inspections per day.
- Four-person team.
- 100 / (5 * 4) = 5 working days.
- 67% of all faults were detected before module execution-based testing!
- 38% fewer faults than a comparable product.

Statistics on Inspections
- 93% of all detected faults (IBM, 1986).
- 90% decrease in the cost of detecting a fault (switching system, 1986).
- 4 major faults, 14 minor faults per 2 hours (JPL, 1990); savings of $25,000 per inspection.
- Number of faults decreased exponentially by phase (JPL, 1992).

Review Strengths and Weaknesses
- Strengths:
  - An effective way of detecting faults.
  - Early detection.
  - Saves money.
- Weaknesses:
  - Depends upon an adequate process.
  - Large-scale SW is extremely hard to review (unless built modularly – e.g., OOP).
  - Depends upon previous-phase documents.
  - Might be misused for performance appraisal.

Metrics for Inspections
- Fault density:
  - Faults per page, or
  - Faults per KLOC.
- By severity (major/minor).
- By phase.
- Fault detection rate (e.g., faults detected per hour).
- Fault detection efficiency (e.g., faults detected per person-hour).
- What does a 50% increase in the fault detection rate mean?
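The metrics above are simple ratios. A minimal sketch (the inspection figures below are hypothetical, not from the slides):

```python
def fault_density(faults, pages):
    """Faults per page (use KLOC instead of pages for code inspections)."""
    return faults / pages

def detection_rate(faults, hours):
    """Faults detected per hour of inspection."""
    return faults / hours

def detection_efficiency(faults, person_hours):
    """Faults detected per person-hour of inspection."""
    return faults / person_hours

# Hypothetical inspection: 14 faults found in a 20-page spec,
# during a 2-hour session with a team of 5.
print(fault_density(14, 20))           # 0.7 faults per page
print(detection_rate(14, 2))           # 7.0 faults per hour
print(detection_efficiency(14, 2 * 5)) # 1.4 faults per person-hour
```

Note how the last question on the slide cuts: a 50% jump in the detection rate could mean a better team, or simply a document with far more faults in it.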


Execution-Based Testing
- Definitions:
  - Failure: incorrect behavior.
  - Error: a mistake made by the programmer.
- A nonsensical statement: "Testing is a demonstration that faults are not present."
- Dijkstra: "Program testing can be a very effective way to show the presence of bugs, but it is hopelessly inadequate for showing their absence" [Dijkstra, 1972].

What Is Execution-Based Testing?
- "The process of inferring certain behavioral properties of a product based, in part, on the results of executing the product in a known environment with selected inputs." [IEEE 610.12, 1990]
- Troubling implications:
  - Inference: like trying to find out whether there is a black cat in a dark room.
  - Known environment? Neither the SW nor the HW is ever really known.
  - Selected inputs: what about real-time systems (e.g., an avionics system)?


But What Should Be Tested?
- Utility
- Reliability
- Robustness
- Performance
- Correctness

Utility
- Utility: the extent to which a user's needs are met when a correct product is used under the conditions permitted by its specification.
- Does it meet the user's needs?
  - Ease of use
  - Useful functions
  - Cost-effectiveness
- Utility should be tested first; if the product fails on that score, testing should stop.

Reliability
- Reliability: a measure of the frequency and criticality of product failure:
  - MTBF – Mean Time Between Failures
  - MTTR – Mean Time To Repair
- Also consider the mean time and cost to repair the results of a failure:
  - Suppose our SW fails only once every six months, but when it fails it completely wipes out a database. The SW can be re-run within 2 hours, but the DB reconstruction might take a week.
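MTBF and MTTR combine into the standard steady-state availability formula A = MTBF / (MTBF + MTTR). A sketch applying it to the slide's scenario (the formula is standard reliability engineering; the six-months figure is the slide's example):

```python
def availability(mtbf_hours, mttr_hours):
    """Steady-state availability: the long-run fraction of time the system is up."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

six_months = 26 * 7 * 24            # roughly 4368 hours between failures
print(availability(six_months, 2))       # re-running the SW only: ~0.9995
print(availability(six_months, 7 * 24))  # rebuilding the DB too: ~0.963
```

The same rare failure looks very different once the week of database reconstruction is counted as repair time, which is exactly the slide's point.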

Robustness
- The range of operating conditions:
  - The possibility of unacceptable results with valid input.
  - The effect of invalid input.
- A product with a wide range of permissible operating conditions is more robust than a product that is more restrictive.

Robustness (Cont'd)
- A robust product should not yield unacceptable results when the input satisfies its specifications.
- A robust product should not crash even when operated outside its permissible operating conditions.

Performance
- The extent to which space and time constraints are met.
- Real-time SW has hard time constraints: can the CPU process an image within 5 ms (for a 200 Hz sampling rate)?

Correctness
- A product is correct if it satisfies its output specifications, independent of its use of computing resources, when operated under permitted conditions.


Correctness Proofs (Verification)
- A mathematical technique for showing that a product is correct.
- Correctness means it satisfies its specifications.
- Is it an alternative to execution-based testing?

Specification Correctness
- Specifications for a sort.
- Are these good specifications?
- The function trickSort satisfies these specifications.

Specification Correctness (Cont'd)
- The incorrect specification for the sort.
- The corrected specification for the sort.
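The slides' actual listings are not in this transcript, so the following is a hedged reconstruction of the standard trickSort example. Assume the incorrect specification says only "the output array is in nondescending order", omitting that the output must be a permutation of the input; trickSort then satisfies the spec without sorting anything:

```python
def trick_sort(x):
    """Satisfies the incomplete spec: output is nondescending."""
    for i in range(len(x)):
        x[i] = 0          # an all-zero array is trivially nondescending
    return x

def satisfies_incorrect_spec(x):
    """Incorrect spec: x[i] <= x[i+1] for all i."""
    return all(x[i] <= x[i + 1] for i in range(len(x) - 1))

result = trick_sort([3, 1, 2])
print(satisfies_incorrect_spec(result))       # True, yet nothing was sorted
# The corrected spec must also require the output to be a permutation of
# the input -- a condition trickSort clearly violates:
print(sorted(result) == sorted([3, 1, 2]))    # False
```

A proof of correctness against the first spec would go through, which is why the slides warn that correctness proofs are only as good as the specifications themselves.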

Correctness
- Correctness is NOT sufficient, as the previous example demonstrated.
- Nor is incorrectness a showstopper. Consider a new compiler that:
  - Is twice as fast,
  - Produces object code that is 20% smaller,
  - Produces object code that is 20% faster,
  - Has much clearer error messages,
  - But has a single fault: a spurious error message for the first 'for' statement encountered in any class.
  - Would you use it?

Correctness Proofs – Glossary
- An assertion is a claim that a certain mathematical property holds true at a given point.
- An invariant is a mathematical expression that holds true under all conditions tested.
- An input specification is a condition that holds true before the code is executed; an output specification is a condition that must hold true after it has executed.

Example of a Correctness Proof
- The code to be proven correct.

Example (Cont'd)
- Flowchart of the code segment.


Example (Cont'd)
- We have to prove the output spec:
  @H: S = y[0] + y[1] + ... + y[n-1]
- Or an even stronger assertion:
  @H: k = n and S = y[0] + y[1] + ... + y[n-1]

Example (Cont'd)
- We will prove the loop invariant:
  @D: n ≥ k and S = y[0] + y[1] + ... + y[k-1]
- (1) @A: n ∈ {1, 2, 3, ...}
- (2) @B: k = 0 and n ∈ {1, 2, 3, ...} (we will omit that)
- (3) @C: k = 0 and S = 0
- Before the loop is entered (@D): k = 0, S = 0 and n ∈ {1, 2, 3, ...}; hence n ≥ k, and S = 0 equals the empty sum y[0] + ... + y[k-1].
- This is our induction base.

Example (Cont'd)
- The induction step: assume that at some stage k0, with n ≥ k0 ≥ 0, the loop invariant holds:
  @D: n ≥ k0 and S = y[0] + y[1] + ... + y[k0-1]
- Control now passes to the test box. One possibility: k0 ≥ n; since n ≥ k0 (our assumption), it follows that k0 = n. So we get:
  @H: k0 = n and S = y[0] + y[1] + ... + y[k0-1], which is our target!

Example (Cont'd)
- The other possibility: k0 < n, so it follows that:
- (4) @E: k0 < n and S = y[0] + ... + y[k0-1]
- We execute S = S + y[k0]:
- (5) @F: k0 < n and S = y[0] + ... + y[k0-1] + y[k0], that is, S = y[0] + ... + y[k0]
- We execute k0 = k0 + 1. We are now at point G, and we get:
  n ≥ k0 and S = y[0] + ... + y[k0-1]
- Exactly the loop invariant!

Example (Cont'd)
- So the loop invariant holds for n ≥ k ≥ 0.
- We also have to prove that the loop terminates. Obvious: each iteration increases k by 1, while n is fixed.
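The code segment behind the flowchart is not reproduced in this transcript; the proof points @A–@H imply a simple summation loop, sketched here with the invariant checked as a run-time assertion at point D:

```python
def sum_array(y):
    """Sum y[0..n-1], with the proof's assertions checked at run time."""
    n = len(y)
    assert n >= 1                              # input spec @A: n in {1, 2, 3, ...}
    S = 0
    k = 0                                      # @C: k == 0 and S == 0
    while True:
        assert n >= k and S == sum(y[:k])      # loop invariant @D
        if k >= n:
            break                              # exit with k == n
        S = S + y[k]                           # @F: S == y[0] + ... + y[k]
        k = k + 1                              # @G: invariant restored
    assert k == n and S == sum(y)              # output spec @H
    return S

print(sum_array([2, 5, 7]))  # 14
```

Running this on a few inputs does not replace the induction argument, but it catches a broken invariant immediately, which is the slides' point about combining proofs with testing.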

Correctness Proof Case Study
- Never prove a program correct without testing it as well.
- We need both testing and correctness proofs.

Naur and the Line Editor – Episode 1
- 1969: Naur's paper, the "Naur text-processing problem":
  Given a text consisting of words separated by blank or nl (newline) characters, convert it to line-by-line form in accordance with the following rules:
  (1) line breaks must be made only where the given text has a blank or nl;
  (2) each line is filled as far as possible, as long as
  (3) no line contains more than maxpos characters.
- Naur constructed a procedure (25 lines of Algol 60) and informally proved its correctness.
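The problem is small enough to sketch. This is not Naur's Algol procedure, just a minimal greedy solution (assuming, as Naur's rules leave open, that no single word exceeds maxpos characters):

```python
def fill_lines(text, maxpos):
    """Greedily pack blank/newline-separated words into lines of
    at most maxpos characters. Assumes no word is longer than maxpos."""
    lines, current = [], ""
    for word in text.split():
        if not current:
            current = word                       # first word on a line
        elif len(current) + 1 + len(word) <= maxpos:
            current += " " + word                # word still fits
        else:
            lines.append(current)                # line is full: break here
            current = word
    if current:
        lines.append(current)                    # emit the final word(s)
    return "\n".join(lines)

print(fill_lines("the quick brown fox jumps", 10))
# the quick
# brown fox
# jumps
```

Note how the faults found in the episodes that follow (a leading blank on the first line, nontermination, a lost final word) are exactly the cases a handful of test inputs like these would expose.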

Naur and the Line Editor – Episode 2
- 1970: a reviewer in Computing Reviews notes:
  In the output of Naur's procedure, the first word of the first line is preceded by a blank unless the first word is exactly maxpos characters long.
- Such a problem would most likely have been detected by testing.

Naur and the Line Editor – Episode 3
- 1971: London finds 3 more faults, including:
  - The procedure does not terminate unless a word longer than maxpos characters is encountered.
- Again, this fault would likely have been detected if the procedure had been tested.
- London presents a corrected version and a formal proof.

Naur and the Line Editor – Episode 4
- 1975: Goodenough and Gerhart find three further faults, including:
  - The last word will not be output unless it is followed by a blank or nl.
- Again, a reasonable choice of test data would have detected that fault.

Proofs and SW Engineering
- Out of the seven faults, four could have been detected simply by running the procedure on test data, such as the illustrations given in Naur's original paper.
- Lesson: even if a product is proved correct, it must STILL be tested.

Three Myths
Why is correctness proving not viewed as a standard SW engineering technique?
1. SW engineers do not have enough math for proofs.
2. Proving is too expensive to be practical.
3. Proving is too hard.

Three Myths (Cont'd)
- Math knowledge: most CS graduates today either take courses in the requisite material or have the background to learn correctness proving on the job. (Remember the acquaintance questionnaire?)
- Expense: consider SW for a space station, or anywhere else where human lives are at stake.
- Hardness: many nontrivial SW products have been successfully proved correct, including OS kernels, compilers and communication systems.

Proof Difficulties
- Can we trust a theorem prover?
- What if a theorem prover prints out: "This product is correct"?
- Consider:
  void theoremProver() { System.out.println("This product is correct"); }
- What if we submit the prover to itself?

Proof Difficulties (Cont'd)
- How do we find input–output specifications and loop invariants?
- What if the specifications themselves are wrong (trickSort...)?
- We can never be sure that the specifications or the verification system are correct [Manna & Waldinger].

Proofs and SW Engineering (Cont'd)
- Correctness proofs are a vital SW engineering tool, WHERE APPROPRIATE:
  - When human lives are at stake,
  - When indicated by cost/benefit analysis,
  - When the risk of not proving is too great.
- Informal proofs, too, can improve the quality of the product.
- Assertions in code: if at run time an assertion does not hold, the product is halted.

Proofs and SW Engineering (Cont'd)
- Languages with an assertion capability (Java, Ada, Eiffel) provide an assert statement:
  assert (checkVar > 0)
  If, at any time, checkVar is not > 0, execution is stopped.
- Assert statements are mostly enabled only in debug mode and turned off to accelerate execution.
- Using such checking while developing a product but turning it off once the product is working correctly can be likened to learning to sail on dry land wearing a life jacket, and then taking the life jacket off when actually at sea.
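A minimal sketch of the same idea in Python (the withdraw example is hypothetical, standing in for the slide's checkVar):

```python
def withdraw(balance, amount):
    """Run-time checks in the spirit of the slide's assert (checkVar > 0)."""
    assert amount > 0, "amount must be positive"    # execution halts if violated
    assert balance >= amount, "insufficient funds"
    return balance - amount

print(withdraw(100, 30))  # 70
```

Like the assert statements the slide describes, Python's assert is stripped when the interpreter runs with the -O flag: precisely the "taking the life jacket off at sea" trade-off.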


Who Should Perform Execution-Based Testing?
- Is testing a "destructive" task?
  - A successful test finds a fault.
  - A programmer does not wish to destroy his own work.
- Solution:
  1. The programmer does informal testing.
  2. SQA does systematic testing.
  3. The programmer debugs the module (that is, finds the cause of the failure and corrects the fault).
- All test cases must be:
  - Planned beforehand, including the expected output.
  - Retained afterwards.


Testing Distributed SW
- When testing code on a uniprocessor we assume:
  - There is a global environment,
  - The execution of the product within the environment is deterministic,
  - The instructions of the product are executed sequentially, and
  - Inserting debugging statements between source code statements will not modify the execution of the product.
- None of these assumptions holds in a distributed system.

Testing Distributed SW (Cont'd)
- In distributed SW:
  - There is no global environment,
  - Product execution may not be reproducible,
  - The product's instructions are executed in parallel,
  - Inserting debugging statements might affect process timing.
- We need special tools, e.g., a distributed debugger, for testing distributed SW.
- We need to maintain history files in order to reproduce the exact sequence that led to a failure.


Testing Real-Time SW
- RT systems are critically dependent upon the timing and order of inputs.
- These two factors are not controlled by the programmer.
- Examples:
  - Aircraft arrival,
  - Temperature of a nuclear reactor,
  - Patient heart rate in an intensive care unit.
- RT systems face a higher demand for robustness: they are often stand-alone systems that must handle many exceptions, and are therefore required to have self-recovery capabilities.

Techniques for RT SW Testing
- Structure analysis:
  - To investigate control flow, prove that every part of the code is 'feasible' and that there is a 'termination path' from any part of the code.
  - Detect and prevent deadlocks.
- Correctness proofs:
  - A number of theorem provers have been constructed for proving RT systems correct.
- Systematic testing:
  - Running sets of test cases consisting of the same input data arranged in all possible orderings (n inputs induce n! test cases!).
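Generating those n! orderings is a one-liner with itertools. A sketch (the event names are hypothetical):

```python
from itertools import permutations
from math import factorial

# Systematic RT testing: run the same inputs in every possible arrival order.
inputs = ["sensor_a", "sensor_b", "sensor_c"]   # hypothetical input events
test_cases = list(permutations(inputs))

print(len(test_cases))                          # 6, i.e. 3! orderings
assert len(test_cases) == factorial(len(inputs))

for case in test_cases:
    pass  # each ordering would drive one execution of the product under test
```

The factorial growth is the catch: 3 inputs need 6 runs, but 10 inputs already need 3,628,800, which is why the statistical and simulation techniques on the next slide exist.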

Techniques for RT SW Testing (Cont'd)
- Statistical techniques:
  - To decrease system failures to, say, 0.001%.
- Simulation:
  - "A simulator is a device which calculates, emulates or predicts the behavior of another device, or some aspect of the behavior of the world."
  - A simulator might serve as a test bed on which the product can be run.
  - SQA might use the simulator to provide selected inputs to the product.
  - Simulators are particularly important when it is impossible or too dangerous to test a product against suitable sets of test data – e.g., aircraft stalling.


When Does Testing Stop?
- Only when the product has been irrevocably retired.
- Do maintain old test cases.


TESTING – The End

