Slide 14.1 CHAPTER 14 IMPLEMENTATION PHASE
Slide 14.2 Overview
- Choice of programming language
- Fourth generation languages
- Good programming practice
- Coding standards
- Module reuse
- Module test case selection
- Black-box module-testing techniques
- Glass-box module-testing techniques
Slide 14.3 Overview (contd)
- Code walkthroughs and inspections
- Comparison of module-testing techniques
- Cleanroom
- Potential problems when testing objects
- Management aspects of module testing
- When to rewrite rather than debug a module
- CASE tools for the implementation phase
- Air Gourmet case study: black-box test cases
- Challenges of the implementation phase
Slide 14.4 Implementation Phase
- Programming-in-the-many
- Choice of programming language
  - The language is usually specified in the contract
  - But what if the contract specifies that the product is to be implemented in the "most suitable" programming language?
  - What language should be chosen?
Slide 14.5 Choice of Programming Language (contd)
- Example
  - QQQ Corporation has been writing COBOL programs for over 25 years
  - Over 200 software staff, all with COBOL expertise
  - What is the "most suitable" programming language?
- Obviously COBOL
Slide 14.6 Choice of Programming Language (contd)
- What happens when a new language (e.g., C++) is introduced?
  - New hires needed
  - Existing professionals must be retrained
  - Future products in C++
  - Existing COBOL products must still be maintained
  - Two classes of programmers emerge:
    - COBOL maintainers (despised)
    - C++ developers (usually paid more)
  - Expensive software needed, and hardware to run it
  - Hundreds of person-years of COBOL expertise wasted
Slide 14.7 Choice of Programming Language (contd)
- Only possible conclusion: COBOL is the "most suitable" programming language
- And yet the "most suitable" language for the latest project may be C++
  - COBOL is suitable only for data-processing (DP) applications
- How to choose a programming language?
  - Cost-benefit analysis
  - Compute the costs and benefits of all relevant languages
Slide 14.8 Choice of Programming Language (contd)
- Which is the most appropriate object-oriented language?
  - C++ is (unfortunately) C-like
  - Java enforces the object-oriented paradigm
  - Training in the object-oriented paradigm is essential before adopting any object-oriented language
- What about choosing a fourth generation language (4GL)?
Slide 14.9 Fourth Generation Languages
- First generation languages: machine languages
- Second generation languages: assemblers
- Third generation languages: high-level languages (COBOL, FORTRAN, C, C++)
- Fourth generation languages (4GLs)
  - Non-procedural high-level languages built around database systems
  - Application-specific languages
  - E.g., database query languages such as SQL; also Focus, Metafont, PostScript, RPG-II, S, IDL-PV/WAVE, Gauss, Mathematica
Slide 14.10 Fourth Generation Languages (contd)
- One 3GL statement is equivalent to 5-10 assembler statements
- Each 4GL statement is intended to be equivalent to 30 or even 50 assembler statements
- It was hoped that 4GLs would
  - Speed up application building
  - Make applications easy and quick to change, reducing maintenance costs
  - Simplify debugging
  - Make languages user friendly, leading to end-user programming
Slide 14.11 Productivity Increases with a 4GL?
- The picture is not uniformly rosy
- Problems with
  - Poor management techniques
  - Poor design methods
- James Martin (who coined the term 4GL) suggests use of
  - Prototyping
  - Iterative design
  - Computerized data management
  - Computer-aided structuring
- Is he right? Does he (or anyone else) know?
Slide 14.12 Actual Experiences with 4GLs
- Playtex used ADF and obtained an 80-to-1 productivity increase over COBOL
  - However, Playtex then used COBOL for later applications
- 4GL productivity increases of 10 to 1 over COBOL have been reported
  - However, there are plenty of reports of bad experiences
Slide 14.13 Actual Experiences with 4GLs (contd)
- Attitudes of 43 organizations to 4GLs
  - Use of a 4GL reduced users' frustrations
  - Quicker response from the DP department
  - 4GLs slow and inefficient, on average
  - Overall, the 28 organizations that had used a 4GL for over 3 years felt that the benefits outweighed the costs
Slide 14.14 Fourth Generation Languages (contd)
- Market share
  - No one 4GL dominates the software market
  - There are literally hundreds of 4GLs; dozens have sizable user groups
- Reason: no one 4GL has all the necessary features
- Conclusion: care has to be taken in selecting the appropriate 4GL
Slide 14.15 Good Programming Practice
- Use of "consistent" and "meaningful" variable names
  - "Meaningful" to the future maintenance programmer
  - "Consistent" to aid the maintenance programmer
Slide 14.16 Good Programming Practice Example
- A module contains the variables freqAverage, frequencyMaximum, minFr, frqncyTotl
- The maintenance programmer has to know whether freq, frequency, fr, and frqncy all refer to the same thing
  - If so, use the identical word, preferably frequency, perhaps freq or frqncy, but not fr
  - If not, use a different word (e.g., rate) for the different quantity
- Can use frequencyAverage, frequencyMaximum, frequencyMinimum, frequencyTotal
- Can also use averageFrequency, maximumFrequency, minimumFrequency, totalFrequency
- All four names must come from the same set
Slide 14.17 Good Programming Practice (contd)
- The issue of self-documenting code
  - Exceedingly rare
- Key issue: can the module be understood easily and unambiguously by
  - The SQA team
  - Maintenance programmers
  - All others who have to read the code
Slide 14.18 Good Programming Practice (contd)
- Example
  - Variable xCoordinateOfPositionOfRobotArm
  - Abbreviated to xCoord
  - The entire module deals with the movement of the robot arm
  - But does the maintenance programmer know this?
Slide 14.19 Prologue Comments
- Mandatory at the top of every single module
- Minimum information:
  - Module name
  - Brief description of what the module does
  - Programmer's name
  - Date the module was coded
  - Date the module was approved, and by whom
  - Module parameters
  - Variable names, in alphabetical order, and their uses
  - Files accessed by this module
  - Files updated by this module
  - Module input/output
  - Error-handling capabilities
  - Name of the file of test data (for regression testing)
  - List of modifications made, when, and approved by whom
  - Known faults, if any
Slide 14.20 Other Comments
- Suggestion: comments are essential whenever code is written in a non-obvious way, or makes use of some subtle aspect of the language
- Nonsense!
  - Recode in a clearer way
  - We must never promote or excuse poor programming
  - However, comments can assist maintenance programmers
- Code layout for increased readability
  - Use indentation
  - Better, use a pretty-printer
  - Use blank lines
Slide 14.21 Nested if Statements
- Example: a map consists of two squares. Write code to determine whether a point on the Earth's surface lies in map square 1 or map square 2, or is not on the map
Slide 14.22 Nested if Statements (contd)
- Solution 1. Badly formatted
Slide 14.23 Nested if Statements (contd)
- Solution 2. Well formatted, badly constructed
Slide 14.24 Nested if Statements (contd)
- Solution 3. Acceptably nested
Slide 14.25 Nested if Statements (contd)
- A combination of if-if and if-else-if statements is usually difficult to read
- Simplify: the if-if combination if (cond1) if (cond2) is frequently equivalent to the single condition if (cond1 && cond2)
- Rule of thumb: if statements nested to a depth greater than three should be avoided as poor programming practice
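The three solutions referenced on the preceding slides are figures that did not survive this transcript. A minimal sketch of an "acceptably nested" version of the map-square exercise, with invented square boundaries (the coordinates and method name are assumptions, not the textbook's code), also showing the if-if to && simplification:

```java
// Hypothetical sketch of Solution 3 ("acceptably nested"): decide whether a
// point lies in map square 1, map square 2, or off the map. The boundaries
// below are invented for illustration.
public class MapSquare {
    // Square 1: x in [0,10), y in [0,10). Square 2: x in [10,20), y in [0,10).
    static int squareOf(double x, double y) {
        if (y < 0 || y >= 10) {
            return 0;                   // not on the map
        }
        // "if (a) if (b)" collapses to the single condition "if (a && b)":
        if (x >= 0 && x < 10) {
            return 1;                   // map square 1
        }
        if (x >= 10 && x < 20) {
            return 2;                   // map square 2
        }
        return 0;                       // not on the map
    }

    public static void main(String[] args) {
        System.out.println(squareOf(5, 5));    // square 1
        System.out.println(squareOf(15, 5));   // square 2
        System.out.println(squareOf(25, 5));   // off the map
    }
}
```

Note that no if is nested deeper than one level, in line with the rule of thumb above.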
Slide 14.26 Programming Standards
- Can be both a blessing and a curse
- Modules of coincidental cohesion arise from rules like
  - "Every module will consist of between 35 and 50 executable statements"
- Better:
  - "Programmers should consult their managers before constructing a module with fewer than 35 or more than 50 executable statements"
Slide 14.27 Remarks on Programming Standards
- No standard can ever be universally applicable
- Standards imposed from above will be ignored
- Standards must be checkable by machine
Slide 14.28 Remarks on Programming Standards (contd)
- Examples of good programming standards
  - "Nesting of if statements should not exceed a depth of 3, except with prior approval from the team leader"
  - "Modules should consist of between 35 and 50 statements, except with prior approval from the team leader"
  - "Use of gotos should be avoided. However, with prior approval from the team leader, a forward goto may be used for error handling"
Slide 14.29 Remarks on Programming Standards (contd)
- The aim of standards is to make maintenance easier
  - If a standard makes development difficult, it must be modified
  - Overly restrictive standards are counterproductive
  - The quality of the software suffers
Slide 14.30 Software Quality Control
- After preliminary testing by the programmer, the module is handed over to the SQA group
Slide 14.31 Module Reuse
- The most common form of reuse
Slide 14.32 Module Test Case Selection
- Worst way: random testing
- A systematic way to construct test cases is needed
Slide 14.33 Module Test Case Selection (contd)
- Two extremes of testing
  1. Test to specifications (also called black-box, data-driven, functional, or input/output-driven testing)
     - Ignore the code; use the specifications to select test cases
  2. Test to code (also called glass-box, logic-driven, structured, or path-oriented testing)
     - Ignore the specifications; use the code to select test cases
Slide 14.34 Feasibility of Testing to Specifications
- Example
  - The specifications for a data-processing product include 5 types of commission and 7 types of discount
  - 35 test cases
- One cannot assume that commission and discount are computed in two entirely separate modules; the structure is irrelevant
Slide 14.35 Feasibility of Testing to Specifications (contd)
- Suppose the specifications include 20 factors, each taking on 4 values
  - 4^20, or about 1.1 x 10^12, test cases
  - If each test case takes 30 seconds to run, running all of them takes more than 1 million years
- The combinatorial explosion makes exhaustive testing to specifications impossible
Slide 14.36 Feasibility of Testing to Code
- Each path through the module must be executed at least once
  - Combinatorial explosion
Slide 14.37 Feasibility of Testing to Code (contd)
- The flowchart has over 10^12 different paths
Slide 14.38 Feasibility of Testing to Code (contd)
- Every path can be exercised without detecting every fault
Slide 14.39 Feasibility of Testing to Code (contd)
- A path can be tested only if it is present
- Weaker criteria
  - Exercise both branches of all conditional statements
  - Execute every statement
Slide 14.40 Coping with the Combinatorial Explosion
- Neither exhaustive testing to specifications nor exhaustive testing to code is feasible
- The art of testing: select a small, manageable set of test cases so as to
  - Maximize the chances of detecting a fault, while
  - Minimizing the chances of wasting a test case
- Every test case must detect a previously undetected fault
Slide 14.41 Coping with the Combinatorial Explosion (contd)
- We need a method that will highlight as many faults as possible
  - First black-box test cases (testing to specifications)
  - Then glass-box methods (testing to code)
Slide 14.42 Black-Box Module Testing Methods
- Equivalence testing
- Example
  - The specifications for a DBMS state that the product must handle any number of records between 1 and 16,383 (2^14 - 1)
  - If the system can handle 34 records and 14,870 records, then it will probably work fine for 8,252 records
- If the system works for any one test case in the range (1..16,383), then it will probably work for any other test case in the range
  - The range (1..16,383) constitutes an equivalence class
- Any one member of an equivalence class is as good a test case as any other member
Slide 14.43 Equivalence Testing (contd)
- The range (1..16,383) defines three different equivalence classes:
  - Equivalence class 1: fewer than 1 record
  - Equivalence class 2: between 1 and 16,383 records
  - Equivalence class 3: more than 16,383 records
Slide 14.44 Boundary Value Analysis
- Select test cases on or just to one side of the boundary of each equivalence class
  - This greatly increases the probability of detecting a fault
Slide 14.45 Database Example
- Test case 1: 0 records (member of equivalence class 1, and adjacent to the boundary value)
- Test case 2: 1 record (boundary value)
- Test case 3: 2 records (adjacent to the boundary value)
- Test case 4: 723 records (member of equivalence class 2)
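The test cases above can be sketched in code. The validator below is hypothetical (the slides never show one); only the range 1..16,383 and the chosen test values come from the deck:

```java
// Hypothetical validator for the DBMS example: the product must handle
// between 1 and 16,383 (2^14 - 1) records.
public class RecordCount {
    static final int MIN = 1;
    static final int MAX = 16383;

    static boolean inRange(int records) {
        return records >= MIN && records <= MAX;
    }

    public static void main(String[] args) {
        // Boundary value analysis: test on and adjacent to each boundary,
        // plus one interior member of equivalence class 2.
        int[] cases = {0, 1, 2, 723, 16382, 16383, 16384};
        for (int c : cases) {
            System.out.println(c + " records -> " + inRange(c));
        }
    }
}
```

A fault such as writing `records > MIN` instead of `records >= MIN` would be caught only by the boundary case of exactly 1 record, which is why the boundary values are tested explicitly.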
Slide 14.46 Boundary Value Analysis of Output Specs
- Example: in 2001, the minimum Social Security (OASDI) deduction from any one paycheck was $0.00, and the maximum was $4,984.80
  - Test cases must include input data that should result in deductions of exactly $0.00 and exactly $4,984.80
  - Also, test data that might result in deductions of less than $0.00 or more than $4,984.80
Slide 14.47 Overall Strategy
- Use equivalence classes together with boundary value analysis to test both input specifications and output specifications
  - This yields a small set of test data with the potential of uncovering a large number of faults
Slide 14.48 Glass-Box Module Testing Methods
- Structural testing
  - Statement coverage
  - Branch coverage
  - Linear code sequences
  - All-definition-use-path coverage
Slide 14.49 Structural Testing: Statement Coverage
- Statement coverage: a series of test cases such that every statement is executed at least once
  - A CASE tool is needed to keep track
- Weakness: branch statements
  - Both statements can be executed without the fault showing up
Slide 14.50 Structural Testing: Branch Coverage
- A series of tests such that all branches are exercised (solves the above problem)
  - Again, a CASE tool is needed
- Structural testing: path coverage
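The weakness of statement coverage mentioned above can be made concrete. The method below is a hypothetical illustration (not from the textbook): a single test case with a positive argument executes every statement, yet the fault surfaces only on the untaken branch, which branch coverage forces us to exercise:

```java
// Hypothetical illustration: statement coverage can miss a fault that
// branch coverage exposes.
public class Coverage {
    // Intended behavior: return 2*x for positive x, and x itself otherwise.
    static int scaled(int x) {
        int k = 0;              // FAULT: should be k = 1
        if (x > 0) {
            k = 2;
        }
        return x * k;           // for x <= 0 this wrongly returns 0
    }

    public static void main(String[] args) {
        // scaled(7) executes every statement and gives the correct 14,
        // so statement coverage is satisfied without revealing the fault.
        System.out.println(scaled(7));
        // Branch coverage also requires the false branch: scaled(-3)
        // returns 0 instead of the intended -3, exposing the fault.
        System.out.println(scaled(-3));
    }
}
```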
Slide 14.51 Linear Code Sequences
- In a product with a loop, the number of paths is very large, and can be infinite
  - We want a condition weaker than "all paths" but one that shows up more faults than branch coverage
- Linear code sequences
  - Identify the set of points L from which control flow may jump, plus the entry and exit points
  - Restrict test cases to paths that begin and end with elements of L
  - This uncovers many faults without testing every path
Slide 14.52 All-Definition-Use-Path Coverage
- Each occurrence of a variable, say zz, is labeled either as
  - A definition of the variable, e.g., zz = 1 or read (zz)
  - Or a use of the variable, e.g., y = zz + 3 or if (zz < 9) errorB ()
- Identify all paths from each definition of the variable to the uses of that definition
  - This can be done by an automatic tool
- A test case is set up for each such path
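The definition/use labeling above can be shown on a tiny method. This sketch is hypothetical (the variable zz and its two definitions follow the slide's notation; the surrounding method is invented):

```java
// Hypothetical sketch labeling definitions and uses of zz.
public class DefUse {
    static int defUse(int input) {
        int zz = 1;             // definition d1 of zz
        if (input > 9) {
            zz = input;         // definition d2 of zz
        }
        int y = zz + 3;         // use of zz, reachable from d1 and from d2
        return y;
        // All-definition-use-path coverage requires one test case per
        // definition-use path: here, one where d1 reaches the use
        // (input <= 9) and one where d2 does (input > 9).
    }

    public static void main(String[] args) {
        System.out.println(defUse(5));    // path d1 -> use
        System.out.println(defUse(20));   // path d2 -> use
    }
}
```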
Slide 14.53 All-Definition-Use-Path Coverage (contd)
- Disadvantage: the upper bound on the number of paths is 2^d, where d is the number of branches
- In practice, the actual number of paths is proportional to d in real cases
- This is therefore a practical test-case selection technique
Slide 14.54 Infeasible Code
- It may not be possible to test a specific statement
  - The module may have an infeasible path ("dead code")
- Frequently this is evidence of a fault
Slide 14.55 Measures of Complexity
- A quality assurance approach to glass-box testing
  - Module m1 is more "complex" than module m2
- A metric of software complexity highlights the modules most likely to have faults
- If the complexity is unreasonably high, redesign and reimplement
  - Cheaper and faster
Slide 14.56 Lines of Code
- The simplest measure of complexity
- Underlying assumption: there is a constant probability p that a line of code contains a fault
- Example
  - The tester believes each line of code has a 2% chance of containing a fault
  - If the module under test is 100 lines long, it is expected to contain 2 faults
- The number of faults is indeed related to the size of the product as a whole
Slide 14.57 Other Measures of Complexity
- Cyclomatic complexity M (McCabe)
  - Essentially the number of decisions (branches) in the module
  - Easy to compute
  - A surprisingly good measure of faults (but see later)
- Modules with M > 10 have statistically more errors (Walsh)
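For a single method, M can be computed as the number of decisions plus one. The method below is a hypothetical example for counting (it is not from the deck):

```java
// Hypothetical sketch: cyclomatic complexity M = number of decisions + 1.
// This method has 3 decisions (one while, two ifs), so M = 4 — well
// under Walsh's threshold of 10.
public class Cyclomatic {
    static int example(int n) {
        int total = 0;
        while (n > 0) {           // decision 1
            if (n % 2 == 0) {     // decision 2
                total += n;
            }
            if (n % 3 == 0) {     // decision 3
                total -= 1;
            }
            n--;
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(example(4));
    }
}
```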
Slide 14.58 Software Science Metrics
- Software science metrics [Halstead] are used for fault prediction
  - The basic elements are the number of operators and operands in the module
- Widely challenged
- Example
Slide 14.59 Problem with These Metrics
- Both software science and cyclomatic complexity face
  - Strong theoretical challenges
  - Strong experimental challenges
  - High correlation with LOC
- Thus we are measuring LOC, not complexity
- Apparent contradiction: LOC is a poor metric for predicting productivity
  - No contradiction: LOC is used here to predict fault rates, not productivity
Slide 14.60 Code Walkthroughs and Inspections
- Rapid and thorough fault detection
  - Up to 95% reduction in maintenance costs [Crossman, 1982]
Slide 14.61 Comparison: Module Testing Techniques
- Experiments comparing
  - Black-box testing
  - Glass-box testing
  - Reviews
- [Myers, 1978] 59 highly experienced programmers
  - All three methods were equally effective at finding faults
  - Code inspections were less cost-effective
- [Hwang, 1981]
  - All three methods were equally effective
Slide 14.62 Comparison: Module Testing Techniques (contd)
- Tests of 32 professional programmers and 42 advanced students in two groups [Basili and Selby, 1987]
- Professional programmers
  - Code reading detected more faults
  - Code reading had a faster fault-detection rate
- Advanced students, group 1
  - No significant difference between the three methods
- Advanced students, group 2
  - Code reading and black-box testing were equally good
  - Both outperformed glass-box testing
Slide 14.63 Comparison: Module Testing Techniques (contd)
- Conclusion: code inspection is at least as successful at detecting faults as glass-box and black-box testing
Slide 14.64 Cleanroom
- A different approach to software development
- Incorporates
  - An incremental process model
  - Formal techniques
  - Reviews
Slide 14.65 Cleanroom (contd)
- Case study: 1820 lines of FoxBASE (U.S. Naval Underwater Systems Center, 1992)
  - 18 faults detected by "functional verification" (informal proofs)
  - 19 faults detected in walkthroughs before compilation
  - NO compilation errors
  - NO execution-time failures
Slide 14.66 Cleanroom (contd)
- Fault-counting procedures differ
- Usual paradigms: count faults after informal testing (once SQA starts)
- Cleanroom: count faults after inspections (once compilation starts)
Slide 14.67 Cleanroom (contd)
- Report on 17 Cleanroom products [Linger, 1994]
  - A 350,000-line product, team of 70, 18 months: 1.0 faults per KLOC
  - Total of 1 million lines of code
  - Weighted average: 2.3 faults per KLOC
  - "[R]emarkable quality achievement"
Slide 14.68 Testing Objects
- We must inspect classes and objects
- We can run test cases on objects
- Classical module
  - About 50 executable statements
  - Give input arguments, check output arguments
- Object
  - About 30 methods, some with only 2 or 3 statements
  - Methods often do not return a value to the caller; they change state instead
  - It may not be possible to check the state, because of information hiding
  - Example: for a method determineBalance, we need to know accountBalance before and after
Slide 14.69 Testing Objects (contd)
- Additional methods are needed to return the values of all state variables
  - Part of the test plan
  - Use conditional compilation
- An inherited method may still have to be retested
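The state-visibility workaround above can be sketched as follows. The Account class is hypothetical (the slides only name accountBalance and determineBalance); note that Java has no preprocessor for conditional compilation, so the test-only accessor would typically go in a test build or use package-private access instead:

```java
// Hypothetical sketch: an object whose methods change state rather than
// return values, plus an extra accessor added purely for testing.
public class Account {
    private double accountBalance;      // hidden state (information hiding)

    public void deposit(double amount) {
        accountBalance += amount;       // changes state, returns nothing
    }

    // Test-only accessor: part of the test plan, not the product design.
    // C/C++ would guard this with conditional compilation (#ifdef TESTING);
    // in Java, package-private visibility limits it to test code in the
    // same package.
    double stateOfBalance() {
        return accountBalance;
    }

    public static void main(String[] args) {
        Account acct = new Account();
        acct.deposit(100.0);
        acct.deposit(50.0);
        System.out.println(acct.stateOfBalance());
    }
}
```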
Slide 14.70 Testing Objects (contd)
- Java implementation of a tree hierarchy
Slide 14.71 Testing Objects (contd)
- Top half
- When displayNodeContents is invoked in BinaryTree, it uses RootedTree.printRoutine
Slide 14.72 Testing Objects (contd)
- Bottom half
- When displayNodeContents is invoked in BalancedBinaryTree, it uses BalancedBinaryTree.printRoutine
Slide 14.73 Testing Objects (contd)
- Bad news
  - BinaryTree.displayNodeContents must be retested from scratch when reused in BalancedBinaryTree
  - It invokes a totally new printRoutine
- Worse news
  - For theoretical reasons, we need to retest using totally different test cases
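The Java figures for this tree hierarchy did not survive the transcript. A hypothetical reconstruction of the class names the slides mention (the method bodies are invented; printRoutine returns a String here so the dispatch is observable) showing why the inherited method must be retested:

```java
// Hypothetical reconstruction of the tree hierarchy (Slides 14.70-14.73).
// displayNodeContents is written once, but the printRoutine it dispatches
// to differs by subclass, so the inherited method must be retested.
class RootedTree {
    String printRoutine(String item) {
        return "node: " + item;
    }
    String displayNodeContents(String item) {
        return printRoutine(item);      // dynamic binding picks the subclass version
    }
}

class BinaryTree extends RootedTree {
    // Inherits displayNodeContents; it uses RootedTree.printRoutine here.
}

class BalancedBinaryTree extends BinaryTree {
    @Override
    String printRoutine(String item) {
        return "balanced node: " + item;   // a totally new printRoutine
    }
}

public class TreeDemo {
    public static void main(String[] args) {
        System.out.println(new BinaryTree().displayNodeContents("x"));
        System.out.println(new BalancedBinaryTree().displayNodeContents("x"));
    }
}
```

Tests that passed for BinaryTree.displayNodeContents say nothing about its behavior in BalancedBinaryTree, because the same source code now executes a different printRoutine.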
Slide 14.74 Testing Objects (contd)
- Two testing problems
  1. Making state variables visible
     - A minor issue
  2. Retesting before reuse
     - Arises only when methods interact
     - We can determine when this retesting is needed [Harrold, McGregor, and Fitzpatrick, 1992]
- These are not reasons to abandon the object-oriented paradigm
Slide 14.75 Module Testing: Management Implications
- We need to know when to stop testing
  - Cost-benefit analysis
  - Risk analysis
  - Statistical techniques
Slide 14.76 When to Rewrite Rather Than Debug
- When a module has too many faults, it is cheaper to redesign and recode
  - The risk and cost of further faults are too great
Slide 14.77 Fault Distribution in Modules Is Not Uniform
- [Myers, 1979]: 47% of the faults in OS/370 were in only 4% of the modules
- [Endres, 1975]
  - 512 faults in 202 modules of DOS/VS (Release 28)
  - 112 of the modules had only one fault
  - There were modules with 14, 15, 19, and 28 faults, respectively
  - The latter three were the largest modules in the product, with over 3,000 lines of DOS macro assembler language
  - The module with 14 faults was relatively small and very unstable
  - A prime candidate for discarding and recoding
Slide 14.78 Fault Distribution in Modules Is Not Uniform (contd)
- For every module, management must predetermine the maximum number of faults allowed during testing
- If this number is reached
  - Discard the module
  - Redesign
  - Recode
- The maximum number of faults allowed after delivery is ZERO
Slide 14.79 Air Gourmet Case Study: Black-Box Test Cases
- Sample black-box test cases
- Appendix J contains the complete set
Slide 14.80 Challenges of the Implementation Phase
- Module reuse needs to be built into the product from the very beginning
- Reuse must be a client requirement
- The software project management plan must incorporate reuse