California Institute of Technology Estimating and Controlling Software Fault Content More Effectively NASA Code Q Software Program Center Initiative UPN.


California Institute of Technology
Estimating and Controlling Software Fault Content More Effectively
NASA Code Q Software Program Center Initiative UPN
Kenneth McGill, Research Lead
OSMA Software Assurance Symposium, Sept 5-7, 2001

Allen P. Nikora, Autonomy and Control Section, Jet Propulsion Laboratory, California Institute of Technology
Norman F. Schneidewind, Naval Postgraduate School, Monterey, CA
John C. Munson, Q.U.E.S.T., Moscow, ID

Topics
– Overview
– Benefits
– Preliminary Results
– Work in progress
– References
– Backup information

Overview
Objectives:
– Gain a better quantitative understanding of the effects of requirements changes on the fault content of the implemented system.
– Gain a better understanding of the types of faults that are inserted into a software system during its lifetime.
[Figure: structural measurements of the specification and of the source code modules (e.g., lines of source code, maximum nesting depth, total operands, function count, number of exceptions, environmental constraints, component specifications) are used to estimate fault counts by type (conditional execution, execution order, variable usage, incorrect computation) for each module of the implemented system.]
Use measurements to PREDICT faults, and so achieve better:
– planning (e.g., time to allocate for testing, identification of fault-prone modules)
– guidance (e.g., choose a design that will lead to fewer faults)
– assessment (e.g., know when testing is close to done)
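To make the prediction step concrete, here is a minimal sketch assuming hypothetical per-module metrics and fault counts; the metric names, values, and the simple least-squares model are illustrative only, not the project's actual estimator.

```python
import numpy as np

# Hypothetical per-module structural measurements:
# columns = [lines of code, max nesting depth, total operands, function count]
X = np.array([
    [120, 3,  340,  5],
    [450, 7, 1210, 14],
    [ 80, 2,  150,  3],
    [600, 9, 1800, 20],
    [210, 4,  510,  7],
], dtype=float)

# Observed fault counts for those modules (also hypothetical).
faults = np.array([1, 6, 0, 9, 2], dtype=float)

# Least-squares fit of faults ~ metrics, with an intercept column.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, faults, rcond=None)

# Estimate the fault content of a new module from its measurements.
new_module = np.array([1.0, 300, 5, 700, 9])   # leading 1.0 is the intercept term
print("estimated faults:", max(0.0, float(new_module @ coef)))
```

In practice the project relates many more measurements to faults classified by type, but the shape of the computation is the same: measurements in, estimated fault content out.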

Overview: Goals
– Quantify the effects of requirements changes on the fault content of the implemented system by identifying relationships between measurable characteristics of requirements change requests and the number and type of faults inserted into the system in response to those requests.
– Improve understanding of the types of faults that are inserted into a software system during its lifetime by identifying relationships between types of structural change and the number and types of faults inserted.
– Improve the ability to discriminate between fault-prone modules and those that are not prone to faults.

Overview: Approach
Measure structural evolution on collaborating development efforts
– Initial set of structural evolution measurements collected
Analyze failure data
– Identify faults associated with reported failures
– Classify identified faults according to classification rules
– Identify the module version at which each identified fault was inserted
– Associate type of structural change with fault type
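The "identify the module version at which each fault was inserted" step can be sketched as a search over a module's version history; the version labels and source snippets below are hypothetical stand-ins for the real configuration management data.

```python
def version_of_fault_insertion(versions, faulty_lines):
    """Return the earliest version label whose source contains every faulty line.

    `versions` is an ordered list of (version_label, source_text) pairs, oldest first;
    `faulty_lines` is the set of lines implicated by the failure analysis.
    Both are hypothetical stand-ins for the real CM and failure-report data.
    """
    for label, source in versions:
        lines = set(source.splitlines())
        if all(fl in lines for fl in faulty_lines):
            return label
    return None  # fault not found in any recorded version

# Hypothetical module history: the faulty guard first appears in version 1.2.
history = [
    ("1.0", "if x > 0:\n    y = x\n"),
    ("1.1", "if x > 0:\n    y = x + 1\n"),
    ("1.2", "if x >= 0:\n    y = x + 1\n"),
]
print(version_of_fault_insertion(history, {"if x >= 0:"}))   # -> "1.2"
```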

Overview: Approach (cont'd)
Identify relationships between requirements change requests and implemented quality/reliability
– Measure structural characteristics of requirements change requests (CRs)
– Track each CR through implementation and test
– Analyze failure reports to identify faults inserted while implementing a CR

Overview: Structural Measurement Framework
[Diagram: structural measurement and fault measurement/identification feed the computation of fault burden.]

Overview: Requirements Risk to Quality
Based on risk factors evaluated when changes to STS on-board software are under consideration.
Risk Factors (see backup slides for more detail):
– Complexity
– Size
– Criticality of Change
– Locality of Change
– Requirements Issues and Function
– Performance
– Personnel Resources
– Tools

Overview: Status
Year 1 of planned 2-year study
Installed initial version of the measurement framework on collaborating JPL development efforts
– Mission Data System (MDS)
– Possible collaboration with ST-3 (Starlight) software development effort
Investigated use of Logistic Regression Functions (LRFs) in combination with Boolean Discriminant Functions (BDFs)
– Improve accuracy of quality and inspection cost predictions

Benefits
Use easily obtained metrics to identify software components that pose a risk to software and system quality.
– During implementation: identify modules that should receive additional review prior to integration with the rest of the system.
– Prior to implementation: estimate the impact of requirements changes on the quality of the implemented system.
Provide quantitative information as a basis for making decisions about software quality.
The measurement framework can be used to continue learning as products and processes evolve.

Preliminary Results: Quality Classification
Investigated whether Logistic Regression Functions (LRFs), when used in combination with Boolean Discriminant Functions (BDFs), would improve the quality classification ability of BDFs used alone.
When the union of a BDF and an LRF was used to classify quality, the predictive accuracy of quality and inspection cost was improved over that of using either function alone for the Space Shuttle.
The significance is that very high quality classification accuracy (1.25% error) can be obtained while reducing the inspection cost incurred in achieving high quality.
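A minimal sketch of the union classifier follows, assuming a simple form of the BDF (flag a module if any metric exceeds its critical value; the study's exact Boolean function may differ) and a precomputed Pi from the LRF. The metric names, critical values, and thresholds are illustrative, not the Shuttle study's actual values.

```python
def bdf_flags(metrics, critical_values):
    """Boolean discriminant: flag the module if any metric exceeds its critical value.
    (One simple BDF form; the study's exact Boolean function may differ.)"""
    return any(metrics[name] > cv for name, cv in critical_values.items())

def lrf_flags(pi, pi_critical):
    """Logistic regression discriminant: flag if Pi = P(drcount > 0) exceeds its critical value."""
    return pi > pi_critical

def union_classifier(metrics, pi, critical_values, pi_critical):
    """BDF-union-LRF: a module is classified as low quality if either discriminant flags it."""
    low = bdf_flags(metrics, critical_values) or lrf_flags(pi, pi_critical)
    return "low quality (inspect)" if low else "high quality"

# Hypothetical module measurements, critical values, and Pi value.
module_metrics  = {"statements": 120, "nodes": 35, "cyclomatic": 18}
critical_values = {"statements": 100, "nodes": 50, "cyclomatic": 20}
print(union_classifier(module_metrics, pi=0.12,
                       critical_values=critical_values, pi_critical=0.30))
```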

Preliminary Results: Quality Classification
For the same software system and using the same set of metrics, BDFs were superior to LRFs for quality discrimination.
LRFs used in isolation were of limited value. However, when combined with BDFs, LRFs provided a marginal improvement in quality discrimination for low quality modules.
When LRFs are added, inspection cost is reduced from that incurred when BDFs are used alone. This is a significant finding in terms of providing an accurate quality predictor for safety-critical systems at reasonable cost.

Preliminary Results: Quality Classification
The ranking of Pi provided accurate thresholds for identifying both low and high quality modules.
The method for determining the critical value of LRFs, using the inverse of the Kolmogorov-Smirnov (K-S) distance, provided a good balance between quality and inspection cost.
The results are encouraging, but more builds should be analyzed to increase confidence in them.
The methods are general and not particular to the Shuttle; thus, they should be applicable to other domains.

Preliminary Results: Fault Types vs. Structural Change
Structural measurements collected for one subsystem of MDS:
– 375 source files
– 976 unique modules
– 38,298 total measurements made
Fault index and proportional fault burdens computed for each unique module.
Domain scores computed for each version of each module.
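A simplified sketch of how domain scores and a fault index can be derived from raw metrics (standardize, project onto principal components, weight by explained variance) is shown below. The data are hypothetical, and the exact fault-index formulation used on MDS (baseline standardization, component selection, scaling) differs in detail.

```python
import numpy as np

# Hypothetical metric matrix: rows = module versions, columns = raw structural metrics.
raw = np.array([
    [120, 3,  340,  5],
    [450, 7, 1210, 14],
    [ 80, 2,  150,  3],
    [600, 9, 1800, 20],
    [210, 4,  510,  7],
], dtype=float)

# Standardize each metric (mean 0, standard deviation 1).
z = (raw - raw.mean(axis=0)) / raw.std(axis=0)

# Principal components of the standardized metrics.
cov = np.cov(z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]              # largest variance first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Domain scores: each module version projected onto the principal components.
domain_scores = z @ eigvecs

# Relative fault index: domain scores weighted by the variance each component explains.
weights = eigvals / eigvals.sum()
fault_index = domain_scores @ weights
print(np.round(fault_index, 2))
```

Differences in these values between successive versions of a module are one way to quantify the type of structural change referred to above.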

Work in progress
Analyze MDS failures to identify faults
– Identify the module and version where each fault was inserted
– Determine the fault type
– Relate fault type to structural change type (domain scores from PCA)
Relate requirements changes to implemented quality
– Measure each change request
– Identify changes to source code modules that implement the change request
  CM system is based on "change packages" – one change package per change request
– Measure structural changes to the source modules implementing the change (ΔR, ΔC, ΔF)
– Identify the number and type of faults occurring in each implemented change request
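As a rough sketch of the bookkeeping implied by the change-package approach, the fragment below aggregates, per change request, the modules touched, the total structural change, and the faults later traced back to that request. The record layout, field names, and values are invented for illustration only.

```python
from collections import defaultdict

# Hypothetical CM records: each change package implements one change request (CR)
# and lists the modules it touched along with the structural delta of each change.
change_packages = [
    {"cr": "CR-101", "module": "nav/estimator.c", "structural_delta": 4.2},
    {"cr": "CR-101", "module": "nav/filter.c",    "structural_delta": 1.1},
    {"cr": "CR-102", "module": "cmd/dispatch.c",  "structural_delta": 0.7},
]

# Hypothetical failure-analysis results: faults traced back to the CR that inserted them.
faults = [
    {"cr": "CR-101", "type": "conditional execution"},
    {"cr": "CR-101", "type": "variable usage"},
]

# Aggregate per change request: modules changed, total structural delta, fault count.
summary = defaultdict(lambda: {"modules": set(), "delta": 0.0, "faults": 0})
for cp in change_packages:
    entry = summary[cp["cr"]]
    entry["modules"].add(cp["module"])
    entry["delta"] += cp["structural_delta"]
for f in faults:
    summary[f["cr"]]["faults"] += 1

for cr, entry in summary.items():
    print(cr, "-", len(entry["modules"]), "modules, total delta",
          round(entry["delta"], 2), ",", entry["faults"], "faults")
```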

References
[Schn01] Norman F. Schneidewind, "Investigation of Logistic Regression as a Discriminant of Software Quality", Proceedings of the International Metrics Symposium, 2001.
[Nik01] A. Nikora, J. Munson, "A Practical Software Fault Measurement and Estimation Framework", to be presented in the Industrial Practices Sessions at the International Symposium on Software Reliability Engineering, Hong Kong, November 27-30, 2001.
[Schn99] N. Schneidewind, A. Nikora, "Predicting Deviations in Software Quality by Using Relative Critical Value Deviation Metrics", Proceedings of the 10th International Symposium on Software Reliability Engineering, Boca Raton, FL, November 1-4, 1999.
[Nik98] A. Nikora, J. Munson, "Software Evolution and the Fault Process", Proceedings of the 23rd Annual Software Engineering Workshop, NASA/Goddard Space Flight Center, Greenbelt, MD, December 2-3, 1998.
[Schn97] Norman F. Schneidewind, "A Software Metrics Model for Quality Control", Proceedings of the International Metrics Symposium, Albuquerque, New Mexico, November 7, 1997.
[Schn97a] Norman F. Schneidewind, "A Software Metrics Model for Integrating Quality Control and Prediction", Proceedings of the International Symposium on Software Reliability Engineering, Albuquerque, New Mexico, November 4, 1997.

Backup Slides – Quality Classification

Quality Classification: Metric Ranking
– The critical values derived from applying the Kolmogorov-Smirnov (K-S) distance method are shown in Table 1.
– Metrics were entered incrementally in the BDFs and LRFs in the sequence given by the K-S ranks in Table 1.
– A graphical picture is shown in Figure 1.
– Figure 2 shows the plot of drcount versus Pi, where Pi is the probability of drcount > 0.
– Both figures indicate the critical value of Pi.
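A minimal sketch of the K-S distance method for a single metric is shown below, using hypothetical data rather than Table 1's: the critical value is the metric value at which the empirical CDFs of fault-free modules (drcount = 0) and faulty modules (drcount > 0) are farthest apart.

```python
import numpy as np

def ks_critical_value(metric, drcount):
    """Return (critical value, K-S distance) separating drcount == 0 from drcount > 0 modules.
    Assumes both groups are non-empty; inputs are hypothetical per-module data."""
    metric, drcount = np.asarray(metric, float), np.asarray(drcount)
    clean  = np.sort(metric[drcount == 0])
    faulty = np.sort(metric[drcount > 0])
    best_x, best_d = None, -1.0
    for x in np.unique(metric):
        # Empirical CDFs of the two groups evaluated at x.
        f_clean  = np.searchsorted(clean,  x, side="right") / len(clean)
        f_faulty = np.searchsorted(faulty, x, side="right") / len(faulty)
        d = abs(f_clean - f_faulty)
        if d > best_d:
            best_x, best_d = x, d
    return best_x, best_d

# Hypothetical per-module metric (e.g., statement count) and discrepancy report counts.
metric  = [20, 35, 40, 55, 60, 80, 95, 120, 150, 200]
drcount = [ 0,  0,  0,  0,  1,  0,  2,   1,   3,   4]
print(ks_critical_value(metric, drcount))
```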

Quality Classification (cont'd): Metric Critical Values

Quality Classification (cont'd): Kolmogorov-Smirnov (K-S) Distance

Quality Classification: Pi (probability of drcount > 0)

Quality Classification: BDF – LRF Validation Predictions and Application Results
Tables 2 (validation predictions) and 3 (application results) provide a comparison of the ability of BDFs and LRFs to classify quality and the inspection cost that would be incurred.
BDF ∪ LRF is the superior quality classifier, as shown by the highlighted (red) cells.

Quality Classification (cont'd): BDF – LRF Validation Predictions, BDF – LRF Application Results

Quality Classification (cont'd): LRF Validation Predictions
Table 4 indicates that there is not a statistically significant difference between the medians of Pi and drcount at the indicated α for both the four- and six-metric LRFs (i.e., a large alpha for the result).
We performed an additional analysis to determine the percentage of drcount corresponding to the highest and lowest 100 ranks of Pi, using Build 1. The predictions are shown in Table 4 for the four- and six-metric cases.
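The rank-based percentage calculation can be sketched as follows, with randomly generated stand-ins for Pi and drcount and a top/bottom cut of 10 modules instead of the study's 100; none of the numbers correspond to the Shuttle data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-module predictions Pi = P(drcount > 0) and observed drcounts.
pi      = rng.random(50)
drcount = rng.poisson(3 * pi)       # modules with higher Pi tend to have more reports

k = 10                               # the study used the highest and lowest 100 ranks
order = np.argsort(pi)[::-1]         # rank modules by Pi, highest first
top, bottom = order[:k], order[-k:]

total = drcount.sum()
print("drcount in highest ranks: %.0f%%" % (100 * drcount[top].sum() / total))
print("drcount in lowest ranks:  %.0f%%" % (100 * drcount[bottom].sum() / total))
```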

Quality Classification (cont'd): Ranks of Pi (probability of drcount > 0)
The purpose of this analysis was to establish a predictor threshold (i.e., the highest 100 ranks of Pi) for the lowest quality modules in the Application product (Build 2).
Figure 3 shows drcount versus the rank of Pi for the four-metric LRF for Build 1.

Quality Classification: LRF Quality Rankings

Quality Classification: Rank of Pi (probability of drcount > 0)

Backup – Requirements Risk Factors

STS Software Requirements Change Risk Factors
Definitions of the risk factors evaluated when a change to the STS on-board software is under consideration are given on the following slides.
– If the answer to a yes/no question is "yes", this is a high-risk change with respect to the given factor.
– If the answer to a question that requires an estimate is an anomalous value, this is a high-risk change with respect to the given factor.

STS Software Requirements Change Risk Factors (cont'd)
Complexity Factors
– Qualitative assessment of the complexity of the change (e.g., very complex)
  Is this change highly complex relative to other software changes that have been made on the Shuttle?
– Number of modifications or iterations on the proposed change
  How many times must the change be modified or presented to the Change Control Board (CCB) before it is approved?

STS Software Requirements Change Risk Factors (cont'd)
Size Factors
– Number of lines of code affected by the change
  How many lines of code must be changed to implement the change?
– Size of data and code areas affected by the change
  How many bytes of existing data and code are affected by the change?

STS Software Requirements Change Risk Factors (cont'd)
Criticality of Change Factors
– Whether the software change is on a nominal or off-nominal program path (i.e., exception condition)
  Will a change to an off-nominal program path affect the reliability of the software?
– Operational phases affected (e.g., ascent, orbit, and landing)
  Will a change to a critical phase of the mission (e.g., ascent or landing) affect the reliability of the software?

STS Software Requirements Change Risk Factors (cont'd)
Locality of Change Factors
– The area of the program affected (i.e., a critical area such as code for a mission abort sequence)
  Will the change affect an area of the code that is critical to mission success?
– Recent changes to the code in the area affected by the requirements change
  Will successive changes to the code in one area lead to non-maintainable code?
– New or existing code that is affected
  Will a change to new code (i.e., a change on top of a change) lead to non-maintainable code?
– Number of system or hardware failures that would have to occur before the code that implements the requirement would be executed
  Will the change be on a path where only a small number of system or hardware failures would have to occur before the changed code is executed?

STS Software Requirements Change Risk Factors (cont'd)
Requirements Issues and Function Factors
– Number and types of other requirements affected by the given requirement change (requirements issues)
  Are there other requirements that will be affected by this change? If so, those requirements will have to be resolved before implementing the given requirement.
– Possible conflicts among requirements changes (requirements issues)
  Will this change conflict with other requirements changes (e.g., lead to conflicting operational scenarios)?
– Number of principal software functions affected by the change
  How many major software functions will have to be changed to make the given change?

STS Software Requirements Change Risk Factors (cont'd)
Performance Factors
– Amount of memory required to implement the change
  Will the change use memory to the extent that other functions will not have sufficient memory to operate effectively?
– Effect on CPU performance
  Will the change use CPU cycles to the extent that other functions will not have sufficient CPU capacity to operate effectively?

STS Software Requirements Change Risk Factors (cont'd)
Personnel Resources Factors
– Number of inspections required to approve the change
– Workforce required to implement the change
  Will the workforce required to implement the software change be significant?
– Workforce required to verify and validate the correctness of the change
  Will the workforce required to verify and validate the software change be significant?

STS Software Requirements Change Risk Factors (cont'd)
Tools Factor
– Any software tool creation or modification required to implement the change
  Will the implementation of the change require the development and testing of new tools?
– Requirements specification techniques (e.g., flow diagram, state chart, pseudo code, control diagram)
  Will the requirements specification method be difficult to understand and translate into code?