1
Software metrics
4
Measurement & Metrics
Against: collecting metrics is too hard ... it's too time consuming ... it's too political ... metrics can be used against individuals ... it won't prove anything.
For: in order to characterize, evaluate, predict and improve the process and product, a metric baseline is essential.
"Anything that you need to quantify can be measured in some way that is superior to not measuring it at all" - Tom Gilb
5
Terminology
Measure: a quantitative indication of the extent, amount, dimension, or size of some attribute of a product or process. A single data point.
Metric: the degree to which a system, component, or process possesses a given attribute. Relates several measures (e.g. average number of errors found per person-hour).
Indicator: a combination of metrics that provides insight into the software process, project or product.
Direct metrics: immediately measurable attributes (e.g. lines of code, execution speed, defects reported).
Indirect metrics: aspects that are not immediately quantifiable (e.g. functionality, quality, reliability).
Faults:
• Errors: faults found by the practitioners during software development
• Defects: faults found by the customers after release
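To make the measure/metric distinction concrete, here is a tiny sketch built on the slide's own example, errors found per person-hour; the numbers are invented for illustration.

```python
# Measures: single data points collected from one review session (illustrative values)
errors_found = 12          # a measure
review_person_hours = 8.0  # another measure

# Metric: relates the two measures
errors_per_person_hour = errors_found / review_person_hours
print(errors_per_person_hour)  # 1.5 errors found per person-hour
```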
7
A Good Manager Measures
[Diagram: the process yields process metrics and project metrics; the product yields product metrics; all feed into measurement.]
"Not everything that can be counted counts, and not everything that counts can be counted." - Einstein
What do we use as a basis?
• size?
• function?
8
Process Metrics
Focus on quality achieved as a consequence of a repeatable or managed process. Strategic and long term.
Statistical Software Process Improvement (SSPI), error categorization and analysis:
• All errors and defects are categorized by origin
• The cost to correct each error and defect is recorded
• The number of errors and defects in each category is computed
• Data is analyzed to find the categories that result in the highest cost to the organization
• Plans are developed to modify the process
Defect Removal Efficiency (DRE): the relationship between errors (E), found before release, and defects (D), found after release. The ideal is a DRE of 1:
DRE = E / (E + D)
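For example, a small sketch of the DRE calculation, using project alpha's error and defect counts from the table later in the deck (the function name is illustrative):

```python
def defect_removal_efficiency(errors_before_release: int, defects_after_release: int) -> float:
    """DRE = E / (E + D): the fraction of all faults caught before release."""
    total = errors_before_release + defects_after_release
    return errors_before_release / total if total else 1.0

# Project alpha: 134 errors found during development, 29 defects reported by customers
print(defect_removal_efficiency(134, 29))  # ~0.82; a DRE of 1.0 would be ideal
```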
9
Project Metrics
Used by a project manager and software team to adapt project work flow and technical activities. Tactical and short term.
Purpose:
• Minimize the development schedule by making the adjustments necessary to avoid delays and mitigate problems
• Assess product quality on an ongoing basis
Metrics:
• Effort or time per SE task
• Errors uncovered per review hour
• Scheduled vs. actual milestone dates
• Number of changes and their characteristics
• Distribution of effort across SE tasks
10
Product Metrics
Focus on the quality of deliverables. Product metrics are combined across several projects to produce process metrics.
Metrics for the product:
• Measures of the analysis model
• Complexity of the design model: internal algorithmic complexity, architectural complexity, data flow complexity
• Code metrics
13
Normalization for Metrics
How does an organization combine metrics that come from different individuals or projects? Raw metrics depend on the size and complexity of the project.
Normalization: compensate for complexity aspects particular to a product.
Normalization approaches:
• Size-oriented (lines-of-code approach)
• Function-oriented (function-point approach)
14
Typical Normalized Metrics
Project  LOC     FP    Effort (P/M)  R(000)  Pp. doc  Errors  Defects  People
alpha    12100   189   24            168     365      134     29       3
beta     27200   388   62            440     1224     321     86       5
gamma    20200   631   43            314     1050     256     64       6
Size-oriented: errors per KLOC (thousand lines of code), defects per KLOC, R per LOC, pages of documentation per KLOC, errors per person-month, LOC per person-month, R per page of documentation
Function-oriented: errors per FP, defects per FP, R per FP, pages of documentation per FP, FP per person-month
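As an illustration, the sketch below derives a few of the size- and function-oriented metrics listed above from the project alpha row of the table (the field names are my own):

```python
# Project alpha, taken from the table above
alpha = {"loc": 12100, "fp": 189, "effort_pm": 24, "cost_k": 168,
         "doc_pages": 365, "errors": 134, "defects": 29, "people": 3}

kloc = alpha["loc"] / 1000
size_oriented = {
    "errors per KLOC": alpha["errors"] / kloc,          # ~11.1
    "defects per KLOC": alpha["defects"] / kloc,        # ~2.4
    "LOC per person-month": alpha["loc"] / alpha["effort_pm"],
}
function_oriented = {
    "errors per FP": alpha["errors"] / alpha["fp"],     # ~0.71
    "defects per FP": alpha["defects"] / alpha["fp"],
    "FP per person-month": alpha["fp"] / alpha["effort_pm"],
}
print(size_oriented)
print(function_oriented)
```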
16
Computing Function Points
Analyze the information domain of the application and develop counts:
• Establish counts for the input domain and system interfaces
Weight each count by assessing complexity:
• Assign a level of complexity (simple, average, complex), and hence a weight, to each count
Assess the influence of global factors that affect the application:
• Grade the significance of external factors F_i, such as reuse, concurrency, OS, ...
Compute function points:
FP = SUM(count x weight) x C
where the complexity multiplier is C = 0.65 + 0.01 x N and the degree of influence is N = SUM(F_i)
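A minimal sketch of this calculation, assuming the standard "average" complexity weights (4, 5, 4, 10, 7); the function name and data layout are illustrative, not part of the original slides:

```python
# Average complexity weights for the five information domain counts (assumed standard values)
AVERAGE_WEIGHTS = {"inputs": 4, "outputs": 5, "inquiries": 4, "files": 10, "interfaces": 7}

def function_points(counts: dict, fi_ratings: list) -> float:
    """FP = SUM(count x weight) x (0.65 + 0.01 x SUM(F_i))."""
    count_total = sum(counts[k] * AVERAGE_WEIGHTS[k] for k in counts)
    n = sum(fi_ratings)            # degree of influence, N = SUM(F_i)
    c = 0.65 + 0.01 * n            # complexity multiplier C
    return count_total * c
```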
18
Exercise: Function Points
Compute the function point value for a project with the following information domain characteristics:
• Number of user inputs: 32
• Number of user outputs: 60
• Number of user enquiries: 24
• Number of files: 8
• Number of external interfaces: 2
Assume that all weights are average and that the external complexity adjustment values are not important.
Answer:
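A hedged worked answer: with average weights the unadjusted count total is fixed at 618, but how to treat the "not important" adjustment values is an interpretation. The sketch below assumes every F_i is 0 (one common reading), which gives a multiplier of 0.65.

```python
# Worked sketch for the exercise; the average weights and the treatment of the
# "not important" adjustment factors (all F_i = 0 here) are assumptions.
counts_and_weights = [(32, 4),   # user inputs, average weight
                      (60, 5),   # user outputs
                      (24, 4),   # user enquiries
                      (8, 10),   # files
                      (2, 7)]    # external interfaces
count_total = sum(c * w for c, w in counts_and_weights)  # 618
fp = count_total * (0.65 + 0.01 * 0)                     # all F_i = 0 -> C = 0.65
print(count_total, fp)                                   # 618, ~401.7
```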