Achieving High Software Reliability


Achieving High Software Reliability: The Software Measurement Analysis and Reliability Toolkit & Module-Order Modeling
OSMA Software Assurance Symposium 2002
Taghi M. Khoshgoftaar (taghi@cse.fau.edu)
Empirical Software Engineering Laboratory, Florida Atlantic University, Boca Raton, Florida, USA
September 6, 2002

Overview
- SMART: The Software Measurement Analysis and Reliability Toolkit
- Module-order modeling
- Investigating the impact of underlying prediction models on module-order models
- Empirical case studies
- Summary

SMART
- Case-Based Reasoning
  - quantitative software quality prediction models: predicting faults, code churn, etc.
  - qualitative software classification (risk-based) models: two-group and three-group models
- Module-Order Models
  - priority-based ranking of modules with respect to their software quality
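
To make the case-based reasoning idea concrete, here is a minimal sketch that predicts a module's fault count as the similarity-weighted average of the fault counts of the most similar past modules. This illustrates the general technique only, not SMART's actual implementation; the Euclidean distance, inverse-distance weighting, and k=5 are assumptions.

```python
import numpy as np

def cbr_predict_faults(fit_metrics, fit_faults, new_metrics, k=5):
    """Case-based reasoning sketch: predict each new module's fault
    count as the inverse-distance-weighted mean of the fault counts
    of its k most similar modules in the fit (historical) data."""
    # Standardize metrics so no single one dominates the distance.
    mu = fit_metrics.mean(axis=0)
    sigma = fit_metrics.std(axis=0) + 1e-9
    fit_z = (fit_metrics - mu) / sigma

    preds = []
    for x in (new_metrics - mu) / sigma:
        dist = np.linalg.norm(fit_z - x, axis=1)   # similarity = closeness in metric space
        nearest = np.argsort(dist)[:k]             # indices of the k most similar cases
        weights = 1.0 / (dist[nearest] + 1e-9)     # closer cases count more
        preds.append(np.average(fit_faults[nearest], weights=weights))
    return np.array(preds)
```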

SMART (GUI) [screenshot of the toolkit's graphical user interface]

SMART (GUI) [screenshot of the toolkit's graphical user interface]

Module-Order Models
Why module-order models?
- Classification models are not well suited from a business or cost-effectiveness viewpoint: the same quality-improvement resources are applied to all modules predicted as high-risk or fault-prone.
- A priority-based approach to software quality improvement makes more cost-effective use of available resources: inspect the most fault-prone modules first.

MOMs ...
- Answer practical questions posed by project management, such as:
  - Which, and how many, modules should be targeted for V&V?
  - What is the best use of available resources?
- Several underlying quantitative software quality prediction models are available: what is their impact on the performance of module-order models?

MOMs ...
Components of a module-order model:
1. an underlying software quality prediction model,
2. a ranking of modules according to the predicted quality factor, and
3. a procedure for evaluating the accuracy and effectiveness of the predicted ranking:
   - Alberg diagrams: faults accounted for by the rankings
   - Performance diagrams: accuracy of the predicted ranking with respect to the actual (perfect) ranking

MOMs ...
- Based on the schedule and resources allocated for testing and V&V, determine a range of cutoff percentages that covers management's options for the last module (as per the ranking) to be inspected.
- Choose a set of representative cutoff percentages, c, from that range; for each c, determine the number of faults accounted for by the actual and the predicted ranking (see the sketch below).
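
A minimal sketch of that evaluation, assuming fault counts per module are known after the fact: rank modules by predicted and by actual faults, and for each cutoff c compare the fraction of total faults accounted for by the top c of each ranking. The function and variable names are illustrative, not from SMART, and the toy data is made up.

```python
import numpy as np

# Toy data: actual post-test faults per module and a model's predictions.
actual_faults = np.array([9, 0, 4, 1, 0, 7, 2, 0, 3, 5])
predicted_faults = np.array([6.5, 0.2, 5.1, 0.9, 0.1, 4.0, 2.2, 0.3, 1.8, 6.0])

def faults_accounted_for(faults, scores, cutoffs):
    """Fraction of total faults found in the top-c modules when
    modules are inspected in decreasing order of `scores`."""
    order = np.argsort(scores)[::-1]              # most fault-prone first
    cum = np.cumsum(faults[order]) / faults.sum()
    return [cum[max(int(np.ceil(c * len(faults))) - 1, 0)] for c in cutoffs]

cutoffs = [0.10, 0.20, 0.30]                      # representative cutoff percentages c
alberg = faults_accounted_for(actual_faults, predicted_faults, cutoffs)  # Alberg diagram points
perfect = faults_accounted_for(actual_faults, actual_faults, cutoffs)    # perfect (actual) ranking
performance = [a / p for a, p in zip(alberg, perfect)]                   # performance diagram: at most 1
```

A predicted ranking whose performance values stay near 1 across the chosen cutoffs accounts for nearly as many faults as the perfect ranking would.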

Case Study Example
- A large legacy telecommunications system: mission-critical software written in a procedural language.
- Software metrics from four system releases, with a few thousand modules in each release.
- Fault data comprised faults discovered after unit testing, including system operations.
- 24 product metrics and 4 execution metrics were used.
- Release 1 served as the fit data; the other releases served as test data.

Fault Prediction Models
Rank order based on the average absolute error (AAE) and average relative error (ARE) of the models:
1. CART-LAD regression tree
2. Case-Based Reasoning (SMART)
3. Multiple Linear Regression
4. Artificial Neural Networks
5. CART-LS regression tree
6. S-PLUS regression tree
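
For reference, a minimal sketch of the two error measures over n modules, with y_i the actual and yhat_i the predicted fault counts. The +1 in the ARE denominator is a common convention for handling zero-fault modules and is an assumption here, not something stated on the slides.

```python
import numpy as np

def aae(actual, predicted):
    """Average absolute error: mean of |y_i - yhat_i|."""
    return np.mean(np.abs(actual - predicted))

def are(actual, predicted):
    """Average relative error: mean of |y_i - yhat_i| / (y_i + 1).
    The +1 avoids division by zero for fault-free modules (an assumed
    convention, commonly used in fault-prediction studies)."""
    return np.mean(np.abs(actual - predicted) / (actual + 1))
```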

Results of MOMs
- Group 1: CBR, MLR, ANNs
- Group 2 (all available regression trees): CART-LS, CART-LAD, S-PLUS

Results of MOMs ... [Alberg diagram for Group 1: Release 2]

Results of MOMs ... [Performance diagram for Group 1: Release 2]

Results of MOMs ... [Alberg diagram for Group 2: Release 2]

Results of MOMs ... [Performance diagram for Group 2: Release 2]

Results Summary
- The Group 1 models (CBR, ANN, and MLR) performed similarly with respect to their module-order models.
- The S-PLUS (Group 2) module-order model performed similarly to CBR, ANN, and MLR.
- Although CART-LAD yielded the best AAE and ARE values, it showed relatively low performance as a module-order model.

Results Summary ...
- When used as a module-order model, CART-LS is better than CART-LAD; in contrast, with respect to AAE and ARE values, CART-LAD is better than CART-LS.
- Overall, for this case study, the CART-LS module-order model performed better than the other five models (CBR, CART-LAD, ANN, MLR, and S-PLUS).

Results Summary ...
- Observing the effects of data characteristics: the performance of MOMs depends on the system domain and the software application.
- Are AAE and ARE good performance metrics for selecting underlying prediction models for module-order modeling? Selecting prediction models based on AAE and ARE did not provide any conclusive insight into the performance of a module-order model.

Conclusion
- Software fault prediction and quality classification models by themselves may not be sufficient from business and practical (return-on-investment) viewpoints.
- Module-order modeling presents a more goal-oriented approach by predicting a priority-based ranking of modules with respect to software quality.

Conclusion ...
- Conducted case studies investigating the impact of different underlying prediction models on module-order models.
- Completed the ready-to-use (stand-alone) version of SMART, including:
  - its requirements and specifications document
  - its design, implementation, and integration document

Future Work
- A resource-based approach for the selection and evaluation of software quality models.
- Developing models that provide improved goal- and objective-oriented software quality assurance:
  - lowering the expected cost of misclassification
  - improving the cost-benefit factor of the models
  - a sharper focus on return on investment

Future Work ...
- Applying the SMART technology to software metrics and fault data collected from a NASA software project: evaluating the performance and benefits of SMART in the context of NASA software data.
- Incorporating SMART into a live NASA software project: demonstrating practical technology transfer.

Achieving High Software Reliability
OSMA Software Assurance Symposium 2002
Thank You ...
Taghi M. Khoshgoftaar
taghi@cse.fau.edu
(561) 297 3994
Empirical Software Engineering Laboratory, Florida Atlantic University, Boca Raton, Florida, USA
September 6, 2002