1 For University Use Only
Supplementary Slides for Software Engineering: A Practitioner's Approach, 5/e copyright © 1996, 2001 R.S. Pressman & Associates, Inc. For University Use Only May be reproduced ONLY for student use at the university level when used in conjunction with Software Engineering: A Practitioner's Approach. Any other reproduction or use is expressly prohibited. This presentation, slides, or hardcopy may NOT be used for short courses, industry seminars, or consulting purposes.

3 Chapter 19 Technical Metrics for Software

4 McCall’s Triangle of Quality
[Figure: McCall's triangle of quality. Product Revision (maintainability, flexibility, testability); Product Transition (portability, reusability, interoperability); Product Operation (correctness, usability, efficiency, reliability, integrity).]

5 A Comment
McCall's quality factors were proposed in the early 1970s. They are as valid today as they were then. It is likely that software built to conform to these factors will exhibit high quality well into the 21st century, even if there are dramatic changes in technology.

6 Formulation Principles
- The objectives of measurement should be established before data collection begins.
- Each technical metric should be defined in an unambiguous manner.
- Metrics should be derived from a theory that is valid for the domain of application (e.g., metrics for design should draw upon basic design concepts and principles and attempt to indicate the presence of an attribute that is deemed desirable).
- Metrics should be tailored to best accommodate specific products and processes. [BAS84]

7 Collection and Analysis Principles
- Whenever possible, data collection and analysis should be automated.
- Valid statistical techniques should be applied to establish relationships between internal product attributes and external quality characteristics.
- Interpretative guidelines and recommendations should be established for each metric.

8 Attributes
- Simple and computable. It should be relatively easy to learn how to derive the metric, and its computation should not demand inordinate effort or time.
- Empirically and intuitively persuasive. The metric should satisfy the engineer's intuitive notions about the product attribute under consideration.
- Consistent and objective. The metric should always yield results that are unambiguous.
- Consistent in its use of units and dimensions. The mathematical computation of the metric should use measures that do not lead to bizarre combinations of units.
- Programming language independent. Metrics should be based on the analysis model, the design model, or the structure of the program itself.
- An effective mechanism for quality feedback. That is, the metric should provide the software engineer with information that can lead to a higher-quality end product.

9 Analysis Metrics
- Function-based metrics: use the function point as a normalizing factor or as a measure of the "size" of the specification.
- Bang metric: used to develop an indication of software "size" by measuring characteristics of the data, functional, and behavioral models.
- Specification metrics: used as an indication of quality by measuring the number of requirements by type.
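
As an illustration of the function-based approach, here is a minimal sketch of the standard function point computation, FP = count_total x (0.65 + 0.01 x sum(F_i)), using the average-complexity weights for the five information domain values. All example counts and ratings are hypothetical.

    # Average-complexity weights for the five information domain values.
    AVERAGE_WEIGHTS = {
        "external_inputs": 4,
        "external_outputs": 5,
        "external_inquiries": 4,
        "internal_logical_files": 10,
        "external_interface_files": 7,
    }

    def function_points(counts, adjustment_factors):
        """counts: domain-value counts; adjustment_factors: 14 ratings, 0..5."""
        count_total = sum(AVERAGE_WEIGHTS[k] * counts[k] for k in AVERAGE_WEIGHTS)
        return count_total * (0.65 + 0.01 * sum(adjustment_factors))

    fp = function_points(
        {"external_inputs": 10, "external_outputs": 7, "external_inquiries": 5,
         "internal_logical_files": 4, "external_interface_files": 2},
        [3] * 14,  # all 14 value adjustment factors rated "average"
    )
    print(round(fp, 1))  # 149 * 1.07 = 159.4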

10 Architectural Design Metrics
- Structural complexity = g(fan-out)
- Data complexity = f(input and output variables, fan-out)
- System complexity = h(structural and data complexity)
- HK metric: architectural complexity as a function of fan-in and fan-out
- Morphology metrics: a function of the number of modules and the number of interfaces between modules
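
The slide leaves g, f, and h abstract; one well-known instantiation is the Card and Glass formulation, sketched below under that assumption. Module characteristics in the example are hypothetical.

    def structural_complexity(fan_out):
        return fan_out ** 2                    # S(i) = fan_out(i)^2

    def data_complexity(io_variables, fan_out):
        return io_variables / (fan_out + 1)    # D(i) = v(i) / (fan_out(i) + 1)

    def system_complexity(io_variables, fan_out):
        # C(i) = S(i) + D(i)
        return structural_complexity(fan_out) + data_complexity(io_variables, fan_out)

    # A module with 5 input/output variables that calls 3 subordinate modules:
    print(system_complexity(io_variables=5, fan_out=3))  # 9 + 1.25 = 10.25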

11 Component-Level Design Metrics
- Cohesion metrics: a function of data objects and the locus of their definition
- Coupling metrics: a function of input and output parameters, global variables, and modules called
- Complexity metrics: hundreds have been proposed (e.g., cyclomatic complexity)
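
As a concrete example of a component-level complexity metric, this sketch computes McCabe's cyclomatic complexity, V(G) = E - N + 2, from an edge list for a connected control flow graph. The example graph (an if/else inside a loop) is hypothetical.

    def cyclomatic_complexity(edges):
        # Collect the distinct nodes, then apply V(G) = E - N + 2.
        nodes = {n for edge in edges for n in edge}
        return len(edges) - len(nodes) + 2

    cfg = [("entry", "loop"), ("loop", "then"), ("loop", "else"),
           ("then", "join"), ("else", "join"), ("join", "loop"),
           ("loop", "exit")]
    print(cyclomatic_complexity(cfg))  # 7 edges - 6 nodes + 2 = 3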

12 Interface Design Metrics
Layout appropriateness: a function of layout entities, their geographic positions, and the "cost" of making transitions among entities
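
A minimal sketch of one way layout appropriateness (LA) can be computed, assuming the formulation in which a layout's cost is the frequency-weighted sum of its transition costs and LA compares the proposed layout against the optimal one. All numbers below are hypothetical.

    def layout_cost(transitions):
        """transitions: (frequency, cost) pairs for entity-to-entity moves."""
        return sum(freq * cost for freq, cost in transitions)

    def layout_appropriateness(optimal, proposed):
        # LA = 100 * (cost of optimal layout) / (cost of proposed layout)
        return 100.0 * layout_cost(optimal) / layout_cost(proposed)

    optimal = [(120, 1.0), (30, 2.0), (5, 4.0)]   # frequent moves are cheap
    proposed = [(120, 2.5), (30, 1.0), (5, 1.0)]  # frequent moves are costly
    print(round(layout_appropriateness(optimal, proposed), 1))  # 200/335 -> 59.7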

13 Code Metrics
Halstead's Software Science: a comprehensive collection of metrics, all predicated on the number (count and occurrence) of operators and operands within a component or program
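
The sketch below computes the core Software Science measures from operator and operand counts (n1, n2 = distinct operators/operands; N1, N2 = total occurrences): length N = N1 + N2, estimated length n1*log2(n1) + n2*log2(n2), and volume V = N*log2(n1 + n2). The example counts are hypothetical.

    import math

    def halstead(n1, n2, N1, N2):
        length = N1 + N2                                    # N = N1 + N2
        return {
            "length": length,
            "estimated_length": n1 * math.log2(n1) + n2 * math.log2(n2),
            "volume": length * math.log2(n1 + n2),          # V = N log2(n1 + n2)
        }

    print(halstead(n1=10, n2=7, N1=30, N2=22))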

14 Chapter 8 Software Quality Assurance

15 Why SQA Activities Pay Off?
[Chart: relative cost to find and fix a defect, plotted on a log scale by the phase in which it is found: requirements 0.75, design 1.00, code 1.50, design test 3.00, system test 10.00, field use 100.]

16 Quality Concepts
- general objective: reduce the "variation between samples" ... but how does this apply to software?
- quality control: a series of inspections, reviews, and tests
- quality assurance: analysis, auditing, and reporting activities
- cost of quality: appraisal costs, failure costs, external failure costs

17 Software Quality Assurance
SQA activities:
- Process Definition & Standards
- Formal Technical Reviews
- Analysis & Reporting
- Test Planning & Review
- Measurement

18 Reviews & Inspections
"... there is no particular reason why your friend and colleague cannot also be your sternest critic." (Jerry Weinberg)

19 What Are Reviews?
- a meeting conducted by technical people for technical people
- a technical assessment of a work product created during the software engineering process
- a software quality assurance mechanism
- a training ground

20 What Reviews Are Not!
They are not:
- a project budget summary
- a scheduling assessment
- an overall progress report
- a mechanism for reprisal or political intrigue!!

21 The Players
- producer
- review leader
- recorder
- reviewers: the standards bearer (SQA), the maintenance oracle, the user rep

22 Conducting the Review
1. Be prepared: evaluate the product before the review.
2. Review the product, not the producer.
3. Keep your tone mild; ask questions instead of making accusations.
4. Stick to the review agenda.
5. Raise issues, don't resolve them.
6. Avoid discussions of style; stick to technical correctness.
7. Schedule reviews as project tasks.
8. Record and report all review results.

23 Review Options Matrix
Legend: IPR = informal peer review; WT = walkthrough; IN = inspection; RRR = round robin review. Each attribute below is rated yes, no, or maybe for each of the four review types:
- trained leader
- agenda established
- reviewers prepare in advance
- producer presents product
- "reader" presents product
- recorder takes notes
- checklists used to find errors
- errors categorized as found
- issues list created
- team must sign-off on result
[The individual yes/no/maybe cell values are not recoverable from this transcript.]

24 Metrics Derived from Reviews
- inspection time per page of documentation
- inspection time per KLOC or FP
- inspection effort per KLOC or FP
- errors uncovered per reviewer hour
- errors uncovered per preparation hour
- errors uncovered per SE task (e.g., design)
- number of minor errors (e.g., typos)
- number of major errors (e.g., nonconformance to req.)
- number of errors found during preparation
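
As a small illustration, the sketch below derives two of these metrics from a single inspection record; all inputs are hypothetical.

    def review_metrics(errors_found, reviewer_hours, kloc):
        return {
            "errors_per_reviewer_hour": errors_found / reviewer_hours,
            "errors_per_kloc": errors_found / kloc,
        }

    # 12 errors uncovered in 8 reviewer-hours over 2.5 KLOC:
    print(review_metrics(errors_found=12, reviewer_hours=8.0, kloc=2.5))
    # {'errors_per_reviewer_hour': 1.5, 'errors_per_kloc': 4.8}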

25 Statistical SQA
Product & Process measurement:
- collect information on all defects
- find the causes of the defects
- move to provide fixes for the process
... leading to an understanding of how to improve quality.
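
One common way to realize the "find the causes, fix the process" step is Pareto analysis of the recorded defect causes. The sketch below is a minimal illustration: it tallies defects by cause and reports the "vital few" causes that account for (here) 80% of all defects. Cause labels are hypothetical.

    from collections import Counter

    def vital_few(defect_causes, coverage=0.8):
        tally = Counter(defect_causes)
        total, running, causes = len(defect_causes), 0, []
        for cause, count in tally.most_common():
            causes.append(cause)
            running += count
            if running / total >= coverage:  # stop once coverage is reached
                break
        return causes

    defects = (["incomplete spec"] * 9 + ["misinterpretation"] * 5 +
               ["standards violation"] * 3 + ["test error"] * 2 + ["typo"])
    print(vital_few(defects))
    # ['incomplete spec', 'misinterpretation', 'standards violation']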

