Information flow - Test coverage measure
• Most data-flow testing strategies focus on the program paths that link the definition and use of variables; such a path is called a du-path. We distinguish between:
  - variable uses within computations (c-uses)
  - variable uses within predicates or decisions (p-uses)
• The most stringent and theoretically effective data-flow strategy is to find enough test cases so that every du-path lies on at least one program path executed by the test cases; this is called the all du-paths strategy. Weaker testing strategies include all c-uses, all p-uses, all defs, and all uses.
• Rapps and Weyuker (1985) note that, although the all du-paths strategy is the most discriminating, it requires a potentially exponential number of test cases. Specifically, if t is the number of conditional transfers in a program, then in the worst case there are 2^t du-paths. Despite this characteristic, empirical evidence suggests that all du-paths testing is feasible in practice, and that the minimal number of paths P required for all du-paths testing is a very useful measure for quality-assurance purposes.
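As an illustration of these definitions, here is a minimal sketch with a hypothetical Python function (not from the slides), annotated with where a variable is defined, where it has a c-use, and where it has a p-use:

    def classify(x):            # definition of x (as a parameter)
        y = x * 2               # definition of y; c-use of x (x appears in a computation)
        if y > 10:              # p-use of y (y appears in a predicate/decision)
            return y + 1        # c-use of y (y appears in a computation)
        return 0

    # One du-path for y runs from its definition in "y = x * 2" to its p-use
    # in "if y > 10:"; another runs from the same definition to its c-use in
    # "return y + 1". The all du-paths strategy requires test cases whose
    # executions cover every such path.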
Information flow - Test coverage measure (cont.)
Example. The worst-case scenario for the all du-paths criterion in the following figure is 13 tests. Yet in this case P is 3, because the set of du-paths can be covered by just 3 paths.
Empirical study of the all du-paths strategy
The following table summarizes the results of an empirical study of the all du-paths strategy. Bieman and Schultz (1992) looked at a commercial system consisting of 143 modules. For each module, they computed P, the minimum number of paths required to satisfy all du-paths. Bieman and Schultz found that, for 81% of all modules, P was less than 11, and in only one module was P prohibitively large. Thus, the strategy seems to be practical for almost all modules. Moreover, Bieman and Schultz noted that the module with an excessively high P was known to be the "rogue" module of the system. Thus, P may be a useful quality-assurance measure: to assure quality, we may want to review (and, if necessary, rewrite) those few modules with infeasible values of P, since it is likely that they are overly complex.
[Table: percentage of the 143 modules falling into each range of P (P < 10, 10 < P < 25, 25 < P < 50, 50 < P < 100, and larger values); percentage values omitted.]
Object-oriented metrics
• Chidamber and Kemerer (1994) proposed the following metrics for object-oriented software:
Metric 1: weighted methods per class (WMC). This metric is intended to relate to the notion of complexity. For a class C with methods M1, M2, ..., Mn, weighted respectively with "complexities" c1, c2, ..., cn, the measure is calculated as WMC = c1 + c2 + ... + cn (the sum of the ci).
Metric 2: depth of inheritance tree (DIT). In an object-oriented design, the application domain is modeled as a hierarchy of classes. This hierarchy can be represented as a tree, called the inheritance tree, whose nodes represent classes. For each class, DIT is the length of the maximum path from its node to the root of the tree. This measure relates to the notion of scope of properties: DIT indicates how many ancestor classes can potentially affect the class.
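A minimal sketch of WMC and DIT in Python, using a hypothetical class hierarchy and a default complexity weight of 1 per method (Li and Henry, mentioned later, instead weighted each method by its cyclomatic number):

    import inspect

    # Hypothetical hierarchy (not from the slides).
    class Shape:                      # root of the inheritance tree
        def area(self): ...
        def perimeter(self): ...

    class Polygon(Shape):             # DIT(Polygon) = 1
        def vertex_count(self): ...

    class Rectangle(Polygon):         # DIT(Rectangle) = 2
        def area(self): ...
        def is_square(self): ...

    def wmc(cls, weight=lambda method: 1):
        # WMC = sum of the complexity weights of the methods defined in the class.
        methods = [m for m in vars(cls).values() if inspect.isfunction(m)]
        return sum(weight(m) for m in methods)

    def dit(cls):
        # DIT = length of the longest path from the class to the root of the
        # inheritance tree (Python's implicit `object` base is ignored).
        return max((1 + dit(base) for base in cls.__bases__ if base is not object),
                   default=0)

    print(wmc(Rectangle))  # 2  (area, is_square)
    print(dit(Rectangle))  # 2  (Rectangle -> Polygon -> Shape)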
Object-oriented metrics (cont.)
Metric 3: number of children (NOC). This metric relates to a node (class) of the inheritance tree: it is the number of immediate successors (subclasses) of the class.
Metric 4: coupling between object classes (CBO). For a given class, this measure is defined to be the number of other classes to which the class is coupled.
Metric 5: response for a class (RFC). This measure captures the size of the response set of a class, which consists of the local methods together with all the methods called by local methods. RFC is the number of local methods plus the number of methods called by local methods.
Metric 6: lack of cohesion metric (LCOM). The cohesion of a class is characterized by how closely the local methods are related to the local instance variables in the class. LCOM is defined as the number of disjoint (that is, non-intersecting) sets of local methods.
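A minimal sketch of LCOM, following the definition above and assuming we already know, for each local method, which instance variables it uses (the method-to-variable mapping below is hypothetical):

    def lcom(method_vars):
        """method_vars maps each local method name to the set of instance
        variables it uses. Returns the number of disjoint sets of methods,
        where two methods belong to the same set if they share an instance
        variable, directly or through a chain of other methods."""
        groups = []  # each group is a pair (set of methods, set of variables)
        for method, variables in method_vars.items():
            merged_methods, merged_vars = {method}, set(variables)
            remaining = []
            for g_methods, g_vars in groups:
                if g_vars & merged_vars:      # shares a variable: merge the groups
                    merged_methods |= g_methods
                    merged_vars |= g_vars
                else:
                    remaining.append((g_methods, g_vars))
            remaining.append((merged_methods, merged_vars))
            groups = remaining
        return len(groups)

    # Hypothetical class: deposit and withdraw both touch `balance`,
    # while audit only touches `log`, so LCOM = 2.
    print(lcom({"deposit": {"balance"}, "withdraw": {"balance"}, "audit": {"log"}}))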
Object-oriented metrics (cont.)
While the use of object-oriented metrics looks promising, there is as yet no widespread agreement on what should be measured in object-oriented systems and which metrics are appropriate. Indeed, Churcher and Shepperd (1995) criticized the Chidamber-Kemerer metrics on the grounds that there is as yet no consensus on the underlying attributes. For example, they argue that even the notion of "methods per class" is ambiguous. Li and Henry (1993) used these six metrics plus several others, including some simple size metrics (such as the number of attributes plus the number of local methods), to determine whether object-oriented metrics could predict maintenance effort. For the weights in Metric 1 (WMC), they used the cyclomatic number. On the basis of an empirical study using regression analysis, they concluded that these measures are indeed useful. In particular, they claim that the Chidamber-Kemerer metrics "contribute to the prediction of maintenance effort over and beyond what can be predicted using size metrics alone." Lorenz (1993) reported that the analysis of object-oriented code at IBM has led to several recommendations about design, including maximum sizes for the number of methods per class, and restrictions on the number of different message types sent and received.
Data Structure metrics
• Globally, we want to capture the "amount of data" for a given system. An intuitively reasonable measure for this attribute is a count of the total number of (user-defined) variables. An example is Halstead's µ2 (the number of distinct operands), which is computed as:
µ2 = number of variables + number of unique constants + number of labels
• We can use µ2 as our data measure, or Halstead's measure N2, the total number of occurrences of operands.
• Boehm (1981) also addressed the general problem of measuring the amount of data in a system when he constructed the COCOMO model. He notes that a traditional measure like "number of classes of items in the database" is poorly defined; the lack of precision has led to data measures that range from 3 to … for very similar systems. To solve this problem, Boehm defines the ratio:
D/P = (database size in bytes or characters) / (program size in DSI)
where DSI is the number of delivered source instructions.
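A small worked example with hypothetical numbers: for a system with a 200,000-byte database and 20,000 delivered source instructions,
D/P = 200,000 / 20,000 = 10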
Data Structure metrics (cont.)
Using this measure, Boehm derived a simple ordinal-scale measure called DATA, used as one of the COCOMO cost drivers, to measure the amount of data. The definition of this measure and its relative multiplicative effect on cost are shown in the following table. According to these COCOMO multipliers, the cost of a project is increased by 16% when DATA is rated "very high"; on the other hand, the cost is reduced to 94% of the nominal cost when DATA is rated "low".

DATA rating                  Multiplier
Low (D/P < 10)               0.94
Nominal (10 <= D/P < 100)    1.00
High (100 <= D/P < 1000)     1.08
Very high (D/P >= 1000)      1.16
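A minimal sketch (hypothetical function name) of mapping a D/P value to the DATA rating and multiplier in the table above:

    def data_rating(d_over_p):
        # Thresholds and multipliers follow the COCOMO DATA table above.
        if d_over_p < 10:
            return "Low", 0.94
        elif d_over_p < 100:
            return "Nominal", 1.00
        elif d_over_p < 1000:
            return "High", 1.08
        else:
            return "Very high", 1.16

    print(data_rating(200_000 / 20_000))   # ('Nominal', 1.0) for the D/P = 10 example above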
Data structure (cont.)
The overall "complexity" of a system cannot be depicted completely without measures of data structure; control-flow measures can fail to identify complexity when it is hidden in the data structure. Consider the following example. The figure on the next slide presents two functionally equivalent programs that are coded quite differently. Program A has a high control-flow structural complexity (for example, its cyclomatic number is 7), whereas program B is a simple sequence of statements (its cyclomatic number is 1). The major difference between the two is that the control-flow "complexity" in A has been transferred to the data structure of B: whereas A has a single data item of simple type (integer), B requires an 11-element array of character strings and an integer variable.
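The figure itself is not reproduced in this transcript; the following Python sketch is only a hypothetical pair of programs in the same spirit (not the actual programs from the figure): A encodes a mapping in its control flow, while B encodes the same mapping in an 11-entry array of strings.

    # Program A: the mapping lives in the control flow
    # (six decisions, so the cyclomatic number is 7).
    def grade_a(score):                 # score assumed to be a non-negative integer
        if score >= 90:
            return "excellent"
        elif score >= 80:
            return "very good"
        elif score >= 70:
            return "good"
        elif score >= 60:
            return "satisfactory"
        elif score >= 50:
            return "sufficient"
        elif score >= 40:
            return "poor"
        else:
            return "fail"

    # Program B: the same mapping moved into the data structure
    # (an 11-entry array of strings plus an integer; cyclomatic number 1).
    GRADE_WORDS = ["fail", "fail", "fail", "fail", "poor", "sufficient",
                   "satisfactory", "good", "very good", "excellent", "excellent"]

    def grade_b(score):                 # score assumed to be a non-negative integer
        index = min(score // 10, 10)    # integer variable selecting the entry
        return GRADE_WORDS[index]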
Data structure (cont.)
If we wish to define data-structure measures along the lines suggested above, we might assign a value of 1 to a simple integer or character type, a multiplicative value of 2 to the operation of forming strings, and a multiplicative value of 2 (times the number of entries) to the operation of forming an array. In this case, program A has a single data item whose measure is 1, whereas B has two data items: the integer, whose measure is 1, and the array of strings, whose measure is 44. Thus, the data-structure measure characterizes the difference between A and B in a way not captured by control-flow structure.
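A worked calculation of the measure for B's array of 11 character strings, following the weighting rule just described:
measure = 1 (character type) × 2 (forming strings) × 2 × 11 (forming an array of 11 entries) = 44
The integer variable in B contributes a further 1, and A's single integer likewise measures 1.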
Example (cont.)
[Figure: the two functionally equivalent programs A and B; not reproduced.]