Software Engineering Lecture 15: Technical Metrics
Today’s Topics: Chapter 19 from SEPA 5/e. Focus: assessing the quality of the product as it is engineered (Chapter 4 focuses on metrics applied at the process and project level). Qualitative & quantitative metrics.
Product Quality: “Requirements are the foundation for measuring quality; lack of conformance to requirements is lack of quality.” Standards define development criteria that should be followed to maximize quality. Software must meet implicit as well as explicit requirements.
Measuring Quality: Direct measurement: factors that can be directly measured, e.g. defects per function point. Indirect measurement: factors that can only be measured indirectly, e.g. usability, maintainability.
McCall’s Quality Factors: product revision (“ability to undergo change”), product transition (“adaptability to new environments”), product operation (“operational characteristics”) [triangle figure from SEPA 5/e]
Quality Factors [2]: Correctness: meets spec & customer’s objectives. Reliability: performs with required precision. Efficiency: amount of resources required. Integrity: control of access to code & data.
Quality Factors [3]: Usability: effort to learn, operate, interpret. Maintainability: effort to locate & fix errors. Flexibility: effort to modify an operational program. Testability: effort required to test a program.
Quality Factors [4]: Portability: effort to switch h/w, s/w platform. Reusability: effort to reuse all or part of the code. Interoperability: effort to couple with other systems.
Measurability: Q: Is it possible to directly measure these quality factors? A: Difficult, sometimes impossible. Remedy: an indirect formula based on things you can measure: Fq = c1 x m1 + c2 x m2 + ... + cn x mn, where Fq is the quality factor, the ci are tuning coefficients, and the mi are measurable metrics.
Grading Scheme: grade each metric on a scale of 0 to 10; combine the grades via the indirect formula to derive a quality factor. Each quality factor is calculated from a subset of the metrics (see table).
Quality Factors vs. Metrics: each metric is graded on a 0-10 scale; e.g. ‘Integrity’ is derived from the grades for Auditability, Instrumentation, and Security [table from SEPA 5/e]
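For concreteness, a minimal Python sketch of this weighted-sum scheme; the metric grades and tuning coefficients below are illustrative assumptions, not values from SEPA.

```python
# Sketch: derive a quality factor from graded metrics via Fq = sum(ci * mi).
# The 0-10 grades and the tuning coefficients are illustrative assumptions.

def quality_factor(grades, coefficients):
    """Weighted sum of metric grades; coefficients tune each metric's influence."""
    return sum(coefficients[name] * grade for name, grade in grades.items())

# 'Integrity' derived from Auditability, Instrumentation, and Security grades.
integrity_grades = {"auditability": 7, "instrumentation": 6, "security": 8}
integrity_coeffs = {"auditability": 0.3, "instrumentation": 0.2, "security": 0.5}

print(quality_factor(integrity_grades, integrity_coeffs))  # 7.3 on the 0-10 scale
```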
Other Quality Factors FURPS (HP): Functionality, Usability, Reliability, Performance, Supportability ISO 9126 Standard: Functionality, Reliability, Usability, Efficiency, Maintainability, Portability Can be used as a basis for indirect measurements
Qualitative vs. Quantitative: the checklist approach is qualitative (and hence subjective) and lacks precision (two evaluators might assign different scores). Challenge for technical metrics: develop a quantitative assessment of software quality.
Measurement Principles: Formulation: derivation of appropriate metrics. Collection: mechanism used to collect data. Analysis: computation based on the data. Interpretation: evaluating the results to gain insight. Feedback: recommendations to the team.
Effective Metrics Are: simple & computable; empirically & intuitively persuasive; consistent & objective; proper mixes of units & dimensions; language-independent; an effective mechanism for feedback. Some good metrics don’t satisfy all the criteria, e.g. Function Points.
Analysis Metrics: metrics based on the output of the analysis phase. E.g., evaluate a data flow diagram to determine: number of inputs, number of outputs, number of user queries, number of files, number of external interfaces.
Analysis Metrics [2] [From SEPA 5/e]
Analysis Metrics [3] [From SEPA 5/e]
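A minimal sketch of how such information domain counts feed a function-point computation, using the standard formula FP = count_total x [0.65 + 0.01 x sum(Fi)]; the average complexity weights and adjustment values below are assumptions for illustration (SEPA tabulates simple/average/complex weights).

```python
# Sketch of a function-point computation from information domain counts.
# Average complexity weights are assumed here; SEPA gives simple/average/complex values.
WEIGHTS = {"inputs": 4, "outputs": 5, "inquiries": 4, "files": 10, "interfaces": 7}

def function_points(counts, adjustment_values):
    """FP = count_total * (0.65 + 0.01 * sum of the 14 complexity adjustment values)."""
    count_total = sum(WEIGHTS[k] * counts[k] for k in WEIGHTS)
    return count_total * (0.65 + 0.01 * sum(adjustment_values))

counts = {"inputs": 12, "outputs": 8, "inquiries": 5, "files": 4, "interfaces": 2}
fi = [3] * 14          # assumed complexity adjustment values (each rated 0-5)
print(function_points(counts, fi))   # 162 * 1.07 = 173.34
```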
Specification Metrics: Specificity, Completeness, Correctness, Understandability, Verifiability, Consistency, ... Can be made quantitative, e.g. specificity (lack of ambiguity): Qs = nui / nr, where nui = number of requirements interpreted identically by all users and nr = total number of requirements in the specification.
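A tiny worked example of the specificity metric, with assumed counts:

```python
# Sketch: specification specificity Qs = nui / nr (counts are illustrative assumptions).
nui = 42   # requirements interpreted identically by all users
nr = 50    # total requirements in the specification
print(nui / nr)   # Qs = 0.84; closer to 1.0 means a less ambiguous specification
```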
Design Metrics: “Morphology” (Fenton ’91), applied to the program structure figure from SEPA 5/e: size = |nodes| + |arcs| = 17 + 18 = 35; depth = 4; width = 6; arc/node ratio r = a/n = 18/17 = 1.06, the “connectivity density” (a coupling measure).
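A small sketch of how these morphology measures could be computed from a module hierarchy; the call tree below is a made-up example, not the structure in the SEPA figure.

```python
# Sketch: compute Fenton's morphology metrics for a program structure (call tree).
# The tree is a hypothetical module hierarchy used only for illustration.
from collections import deque

tree = {                       # module -> list of modules it calls
    "a": ["b", "c", "d"],
    "b": ["e", "f"],
    "c": [],
    "d": ["g"],
    "e": [], "f": [], "g": [],
}

nodes = len(tree)
arcs = sum(len(children) for children in tree.values())
size = nodes + arcs
arc_to_node_ratio = arcs / nodes          # "connectivity density" (coupling)

# depth = number of levels from the root; width = maximum modules at any level
depth, width = 0, 0
level = deque(["a"])
while level:
    width = max(width, len(level))
    depth += 1
    level = deque(child for node in level for child in tree[node])

print(size, depth, width, round(arc_to_node_ratio, 2))   # 13 3 3 0.86
```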
Other Design Metrics: Card & Glass (1990): structural complexity S(i) = fo(i)^2; data complexity D(i) = v(i) / [fo(i) + 1]; system complexity C(i) = S(i) + D(i). Henry & Kafura (1981): HKM(i) = length(i) x [fi(i) + fo(i)]^2.
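A brief sketch of both sets of formulas; the per-module figures (fan-in, fan-out, I/O variable count, length) are assumed for illustration.

```python
# Sketch of Card & Glass (1990) and Henry & Kafura (1981) design complexity metrics.
# Module data (fan-out, I/O variables, length, fan-in) are illustrative assumptions.

def card_glass_complexity(fan_out, io_vars):
    structural = fan_out ** 2                    # S(i) = fo(i)^2
    data = io_vars / (fan_out + 1)               # D(i) = v(i) / [fo(i) + 1]
    return structural + data                     # C(i) = S(i) + D(i)

def henry_kafura(length, fan_in, fan_out):
    return length * (fan_in + fan_out) ** 2      # HKM(i) = length(i) x [fi(i) + fo(i)]^2

print(card_glass_complexity(fan_out=3, io_vars=8))      # 9 + 2.0 = 11.0
print(henry_kafura(length=120, fan_in=2, fan_out=3))    # 120 * 25 = 3000
```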
Component-Level Metrics: Cohesion (Bieman & Ott, 1994): data slices, glue tokens, stickiness; “What % of tokens in the module are relevant to all data slices?” Coupling (Dhama, 1994): data/control flow, global, environmental; “How many input/output parameters, accesses to global data, external calls to this module?” Complexity (e.g. Cyclomatic Complexity).
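As one example of a component-level measure, a minimal sketch of cyclomatic complexity computed from a control-flow graph via V(G) = E - N + 2; the graph below is a made-up example.

```python
# Sketch: cyclomatic complexity of a connected control-flow graph, V(G) = E - N + 2.
# The edge list describes a hypothetical flow graph with two decision points.

edges = [(1, 2), (2, 3), (2, 4), (3, 5), (4, 5), (5, 2), (5, 6)]
nodes = {n for edge in edges for n in edge}

v_g = len(edges) - len(nodes) + 2
print(v_g)   # 7 - 6 + 2 = 3 linearly independent paths
```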
Interface Design Metrics: Layout Appropriateness (Sears ’93). Analyze user actions in terms of frequency of transition and cost of transition (from one GUI element to another); optimize LA by minimizing the total “cost” of the layout. Task performance and perceived satisfaction are highly correlated with layout appropriateness.
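A rough sketch of the layout-cost idea; the transition frequencies and per-transition costs below are assumed values, and a full LA analysis would compare this cost against that of an optimal layout.

```python
# Sketch: layout "cost" in the spirit of Layout Appropriateness:
# cost = sum over transitions of (transition frequency * transition cost).
# Frequencies and costs (e.g. pointer travel distance) are assumed values.

transitions = [
    # (frequency of the transition, cost of moving between the two GUI elements)
    (120, 1.5),   # e.g. "open file" -> "edit field"
    (45,  3.0),   # e.g. "edit field" -> "save button"
    (10,  6.0),   # e.g. "save button" -> "print dialog"
]

layout_cost = sum(freq * cost for freq, cost in transitions)
print(layout_cost)   # 375.0; smaller totals indicate a more appropriate layout
```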
Maintenance Metrics: Software Maturity Index (SMI) (IEEE Std. 982.1-1988). Inputs: MT = number of modules in the release, FC = number of modules changed, FA = number of modules added, FD = number of modules deleted. SMI = [MT - (FA + FC + FD)] / MT (tends to 1.0 as the product stabilizes).
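A one-function sketch of the SMI computation; the release figures are assumed for illustration.

```python
# Sketch: Software Maturity Index per IEEE Std. 982.1-1988.
# Release figures below are illustrative assumptions.

def software_maturity_index(mt, fc, fa, fd):
    """SMI = [MT - (FA + FC + FD)] / MT; approaches 1.0 as the product stabilizes."""
    return (mt - (fa + fc + fd)) / mt

print(software_maturity_index(mt=120, fc=6, fa=3, fd=1))   # 110/120 = 0.9167
```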
Summary: To be useful, metrics must be simple & computable; persuasive, consistent, and objective; language-independent; and able to provide useful feedback. They are based on analysis, design, or component descriptions. Metrics for source code, testing, and maintenance are less well developed.
Questions?