1 Some Topics In Measurement/Quantitative Analysis John E. Gaffney, Jr., PE November 3, 2010

2 Topics
Some Topics In Measurement/Quantitative Analysis:
– Function Points
– Risk and Cost Estimation, COSYSMOR
– Defect Analysis, Software Reliability
Some Observations

3 Measurement/Quantitative Management
Measurement is all about supporting good management:
– Knowing where you want to go; having the resources in place to get there; determining whether you are there or are likely to get there; and taking action if you are diverging from your goals
Quantitative management is a "closed-loop" process:
1. Establish goals
2. Compare actuals with goals
3. Take action if appropriate (e.g., when a metric is out of its expected range); don't act if all is O.K.
The closed-loop (feedback) approach tends to compensate for "noise" in the system, analogous to feedback in control systems (electrical engineering)
Quantitatively characterize process, product, performance, personnel, and tools/methodology
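As a deliberately simplified illustration of steps 2 and 3 of the closed loop, the sketch below compares actual metric values against expected ranges and flags out-of-range values for action. The metric names and control limits are invented for the example, not taken from the presentation.

```python
# Minimal sketch of the closed-loop idea: compare an actual metric value
# against its expected range and flag when corrective action is needed.
# Metric names and ranges are illustrative assumptions.

EXPECTED_RANGES = {
    "defect_density_per_ksloc": (0.5, 4.0),   # assumed control limits
    "effort_variance_pct": (-10.0, 10.0),
}

def check_metric(name, actual):
    """Return None if the metric is in range, else a corrective-action flag."""
    low, high = EXPECTED_RANGES[name]
    if low <= actual <= high:
        return None  # step 3: don't act if O.K.
    return f"{name}={actual} outside [{low}, {high}]: investigate and act"

for name, actual in [("defect_density_per_ksloc", 5.2), ("effort_variance_pct", 3.0)]:
    flag = check_metric(name, actual)
    print(flag or f"{name} within expected range")
```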

4 Function Points: Some Observations
Invented by Allan J. Albrecht, IBM, in the 1970s. Developed to capture "…user/customer requirements in a way that is more easily understood by the user/customer than SLOC."*
Originally used to estimate development costs for a particular IBM organization for business applications; the measure absorbed both size and productivity
Later, in more broadly based usage, FPs were used as the metric for application size; development effort was then based on this "universal" size measure and the particular organization's productivity
The usage of FPs spread very quickly
For many, the use of FPs in lieu of SLOC became virtually a religious matter
There are many variants of function points, e.g., feature points (Capers Jones) and simplified function points (Gaffney)
– Some related measures are story points and Halstead's "unique I/O count"
Function points may not be particularly well suited to highly calculation-intensive, "engineering" type software
SLOC count and function point count/value are typically highly correlated
* Albrecht & Gaffney, IEEE Transactions on Software Engineering, Nov. 1983
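For readers unfamiliar with the mechanics, here is a minimal sketch of an unadjusted function point count using the average-complexity weights commonly associated with Albrecht's method (EI=4, EO=5, EQ=4, ILF=10, EIF=7); the component counts are hypothetical.

```python
# A minimal sketch of an unadjusted function point (UFP) count using
# average-complexity weights commonly attributed to Albrecht/IFPUG.
# The application's component counts are invented for illustration.

AVERAGE_WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}

counts = {  # hypothetical application
    "external_inputs": 12,
    "external_outputs": 8,
    "external_inquiries": 5,
    "internal_logical_files": 6,
    "external_interface_files": 3,
}

ufp = sum(AVERAGE_WEIGHTS[k] * counts[k] for k in counts)
print(f"Unadjusted function points: {ufp}")  # 48 + 40 + 20 + 60 + 21 = 189
```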

5 From Albrecht/Gaffney Data


9 Defect Management Process
Defect management is best executed as a "closed-loop" process
– On many projects it is not, however, and defect content, etc. are treated as "free variables," with no goals established for their values
Defect management should be viewed as just as much a part of management as cost and schedule management are
– Defect detection, analysis (including root cause analysis), correction, and avoidance have cost and schedule implications
We need to know where we are going (goal); measure progress against goals; determine whether we are likely to achieve each goal or already have done so (measure and compare); and take corrective action as necessary
A good measurement program is key to a successful defect management process
Active defect management provides potential early indication ("headlight" metrics) of success and/or of problems
Software reliability estimation is based on estimates of latent (delivered, relevant) defects and the prospective rate of their discovery

10 Defect Tools Overview
Purpose of the tools: fit defect-discovery data to a curve (e.g., Weibull-family curves such as the Rayleigh and exponential) to predict discoveries later in the development/testing process from data obtained earlier; track progress against goals; make early course corrections as required
The tools are a key to implementing the software defect management process
Two types of tools: activity-based and time-based
Provide "headlight" metrics; e.g., can indicate that defect discovery and content objectives are not likely to be realized; can provide better mid-course cost/schedule predictions
Activity-based tools fit data obtained from development-activity verification, e.g., code inspection, and from testing
– A key value add: they provide early prediction of testing results and latent defect content before testing has started and during testing; this can help minimize rework costs
Tool (Gaffney) evolution history: STEER I (IBM, 1985; key point: it was made an activity-based tool; before then, only time-based fits existed, to JEG's knowledge); SWEEP (Software Productivity Consortium, 1988 et seq.); STEER II (Lockheed Martin, 1997 et seq.); DEFT (post-Lockheed, 2010)
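A minimal sketch of the time-based fitting idea follows: fit a Rayleigh curve to per-period defect-discovery counts and project total (and hence latent) defect content. The data and starting values are invented; this illustrates the technique, not the STEER/SWEEP/DEFT implementations.

```python
# Fit a Rayleigh model to per-period defect discovery counts and project
# total and latent defects. Data and parameter guesses are invented.
import numpy as np
from scipy.optimize import curve_fit

def rayleigh_defects(t, N, sigma):
    """Expected defects discovered in period t: N times the Rayleigh pdf."""
    return N * (t / sigma**2) * np.exp(-t**2 / (2 * sigma**2))

periods = np.arange(1, 9)                         # e.g., months of testing so far
found = np.array([5, 14, 22, 25, 23, 18, 12, 8])  # defects found per period

(N_hat, sigma_hat), _ = curve_fit(rayleigh_defects, periods, found, p0=[100, 4])
latent = N_hat - found.sum()
print(f"Projected total defects: {N_hat:.0f}, latent after period 8: {latent:.0f}")
```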


14 Some Aspects of Uncertainty Management and Risk Modeling
Better management of uncertainty starts with recognizing its existence and with estimating cost/schedule/quality and other "risks"
Use the risk information to make better management decisions during both business acquisition and program execution
– It serves as a basis for discussion of alternatives within the project as well as with the customer
– It is a key aspect of "affordability"
Recognize that the actual project outcomes, e.g., size, cost, duration/schedule, and quality, are uncertain, as are many of the key determining parameters, e.g., productivity and size
Thus, the risk of not achieving desired objectives should be quantified to the degree possible
"Risk" where smaller values are desirable (e.g., development effort), as used in COSYSMOR: Risk = Prob[Actual Effort Will Be > Value]; Confidence = 100% - Risk
"Risk" where larger values are desirable (e.g., MTBF and reliability): Risk = Prob[Actual Reliability Will Be < Desired Value]; Confidence = 100% - Risk
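The risk definition above can be made concrete with a small Monte Carlo sketch; the triangular effort distribution and its parameters are assumptions chosen only for illustration.

```python
# Sketch of Risk = Prob[Actual Effort > Value] under an assumed
# triangular effort distribution (parameters invented for the example).
import numpy as np

rng = np.random.default_rng(0)
effort = rng.triangular(left=800, mode=1000, right=1500, size=100_000)  # person-hours

target = 1100.0
risk = (effort > target).mean()   # Prob[Actual Effort > Value]
confidence = 1.0 - risk           # Confidence = 100% - Risk
print(f"Risk of exceeding {target:.0f} PH: {risk:.1%}, confidence: {confidence:.1%}")
```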

15 A COSYSMOR Plot: Person-Hours Risk, with a marked target value (larger values of effort are less desirable)

16 A COSYSMOR Plot: Person-Hours Overrun Risk, the tail of the previous plot (larger values of effort overrun are less desirable)

17 Reliability: the probability of no failures during some time interval, starting at some point in time after the system/software/item goes into service and the clock starts (smaller values are less desirable)
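Under the common constant-failure-rate (exponential) assumption, this definition reduces to R(t) = exp(-λt). A minimal sketch, with an invented failure rate:

```python
# Reliability over a mission interval under an exponential (constant
# failure rate) assumption: R(t) = exp(-lambda * t). Rate is invented,
# e.g., as might be derived from an estimate of latent defects.
import math

failure_rate = 0.002   # assumed failures per operating hour
mission_time = 100.0   # hours of service after the clock starts

reliability = math.exp(-failure_rate * mission_time)  # Prob of no failures in [0, t]
print(f"R({mission_time:.0f} h) = {reliability:.3f}")  # ~0.819
```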

18 Original COSYSMOR Parameter Inputs: Low, Likely, and High values for each parameter; these define a probability distribution using the Keefer and Bodily (1983) non-parametric method (detailed on slide 26)

19 Some Observations-1
Expect your work life to offer you alternatives (and thus real possibilities for personal and professional growth) that you cannot plan for
Your education is never complete; look for opportunities to learn (formal and informal)
Don't be afraid to challenge "accepted wisdom" if you believe that you are correct
– Perhaps no one else has done so, or others are afraid to do so
– Ask questions of "ancient worthies"; just because the boss or someone else has more experience or seemingly greater credentials doesn't necessarily make him or her correct (but please do be polite!)
– Ask questions of yourself, of your assumptions, and of your conclusions

20 Some Observations-2
Determine the use for the metrics to be collected/developed: what information are they going to provide, what questions are they going to answer or help to answer, who wants the information, and what decisions will they make using it
Some keys to getting good metrics: use a development process that has well-defined activities, artifacts produced by each, and well-defined entry/exit criteria; good definitions for the metrics; good collection and analysis processes; training of all personnel involved; strong commitment by both management and team members; and recognition by project team members that the metrics are used to help manage the project and to make a product that has predictable attributes using a process that also has predictable attributes
Observation 1: lack of good historical and current data with which to characterize processes, products, and performance, and thence to support bases of estimates
Observation 2: mathematics/statistics skills are often found in project personnel at a less than desirable level; this can lead to not knowing what to expect, over-confidence, or no confidence

21 Backup


24 Additional Functions Provided By COSYSMOR
COSYSMOR provides four major additional functions beyond those provided by academic COSYSMO:
1. Estimation of cost/effort and schedule uncertainties/risk and confidence: quantifies the impacts of uncertainties in the values of key model parameters; provides multiple cost and schedule values with associated probabilities. Risk = Prob[Actual Effort Will Be > Estimated Effort]; Confidence = 100% - Risk
2. Representation of multiple types of size drivers: provides for entering counts of new, modified, reused, and deleted types for each of the four size driver categories.
3. Labor scheduling: provides the spread of systems engineering labor for the five systems engineering activities and across the four development phases (time).
4. Labor allocation: provides for the user to select the percentage allocations of the twenty activity/phase pairs, or effort elements.
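A sketch of function 1: given a set of effort values with associated probabilities (the form in which COSYSMOR reports results), the risk of exceeding a candidate estimate is simply the probability mass above it. The value/probability pairs here are invented.

```python
# Risk that actual effort exceeds a candidate estimate, computed from a
# discrete set of effort values with probabilities (invented numbers).

effort_values = [900, 1000, 1100, 1250, 1500]    # person-hours
probabilities = [0.10, 0.25, 0.30, 0.25, 0.10]   # sums to 1.0

def risk_of_exceeding(estimate):
    """Risk = Prob[Actual Effort > Estimated Effort]."""
    return sum(p for v, p in zip(effort_values, probabilities) if v > estimate)

estimate = 1100
risk = risk_of_exceeding(estimate)
print(f"Risk at {estimate} PH: {risk:.0%}, confidence: {1 - risk:.0%}")  # 35%, 65%
```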

25 Affordability
"Affordability" is a measure of a system's effectiveness
"Affordability" means that a given set of needs (performance requirements) can be met within stated cost and schedule constraints
"Affordability" can also be defined as the probability (confidence) of achieving a stated set of needs at a stated cost and schedule (effort); the associated "risk" is determined (estimated) on the basis of the capability of the organization to meet this set of needs
– "Risk" equals 100% minus "Confidence"
From a presentation by Cole, Gaffney, and Schimmoller at the 2009 PSM Conference

26 Keefer and Bodily Three-Point Approximation To A Continuous Probability Distribution*
0.05 fractile ↔ 0.185 probability
0.50 fractile ↔ 0.630 probability
0.95 fractile ↔ 0.185 probability
* D.L. Keefer and S.E. Bodily, "Three-Point Approximations for Continuous Random Variables," Management Science, 29(5), 1983
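A minimal sketch of how these fractile/probability pairs are used: treat the Low/Likely/High inputs as the 0.05/0.50/0.95 fractiles and weight them 0.185/0.630/0.185 to approximate the distribution's moments. The effort fractiles below are invented.

```python
# Keefer/Bodily three-point approximation: weight the 0.05/0.50/0.95
# fractiles by 0.185/0.630/0.185 to approximate mean and variance.
# The Low/Likely/High effort values are invented for illustration.

low, likely, high = 900.0, 1100.0, 1600.0   # 0.05, 0.50, 0.95 fractiles (person-hours)
weights = (0.185, 0.630, 0.185)

mean = sum(w * x for w, x in zip(weights, (low, likely, high)))
var = sum(w * (x - mean) ** 2 for w, x in zip(weights, (low, likely, high)))
print(f"Approximate mean effort: {mean:.0f} PH, std dev: {var ** 0.5:.0f} PH")
```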