Product Reliability Measuring

Product Reliability Measuring

- Types of Quality Assessment Models
- Data Requirements and Measurement
- Comparing Quality Assessment Models
- Measurement and Model Selection

Introduction

- Analytical models provide a quantitative assessment of selected quality characteristics.
- Applied over time, they can provide accurate predictions of future quality.
- The purpose of measurement and analysis is to:
  - take corrective actions => improvement
  - provide timely feedback/assessment
  - identify problematic areas
  - support prediction, anticipating/planning for scheduling and resource allocation

Models for Quality Assessment

- Direct indicators of quality:
  - defect measurements, e.g. defect density for correctness
  - probability of failure-free operation for reliability
  - typically measured at the end of software development
- Indirect indicators of quality:
  - product internal attributes (e.g. KLOC, McCabe's complexity)
  - interaction between the product and its users
  - the development process
  - general characteristics of the product (e.g. telecom)
  - may be available early enough to make predictions
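
To make the direct reliability indicator concrete: under the common simplifying assumption of a constant failure rate, failure-free operation follows an exponential model. A minimal Python sketch (all numbers hypothetical):

```python
import math

# A sketch of a direct reliability indicator. Assumption (not from the
# slides): failures arrive at a constant rate, so the probability of
# failure-free operation for t hours is R(t) = exp(-lambda * t).

def reliability(failure_rate_per_hour: float, hours: float) -> float:
    """Probability of failure-free operation for the given period."""
    return math.exp(-failure_rate_per_hour * hours)

# Hypothetical numbers: 0.002 failures/hour over a 100-hour mission.
print(reliability(0.002, 100))   # ~0.82
```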

Generalized Models for Quality Assessment

- Require little or no project-specific data.
- Three categories:
  - Overall model - provides a single estimate of overall product quality
  - Segmented model - provides different quality estimates for different industrial segments
  - Dynamic model - provides quality trends or distributions over time or across the development process

Overall Models

- The most general subtype of generalized quality models.
- Provide a rough estimate of product quality, e.g. defect density = total defects / product size.
- Lump all products together - an abstraction of commonly observed facts about quality that hold across application domains, e.g.:
  - the 80:20 rule, which states that 80% of defects are concentrated in 20% of product modules/components
  - linkages between software defects, risk, and process maturity on one side and quality on the other
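
A minimal Python sketch of these two ideas - defect density as a rough overall estimate, and a check of how concentrated defects are across modules (all data hypothetical):

```python
# Overall-model sketch: defect density as a product-level estimate, plus a
# check of the 80:20 rule. Module counts below are fabricated illustrations.

def defect_density(total_defects: int, size_kloc: float) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return total_defects / size_kloc

def pareto_share(module_defects: list[int], top_fraction: float = 0.2) -> float:
    """Fraction of all defects found in the top `top_fraction` of modules."""
    counts = sorted(module_defects, reverse=True)
    k = max(1, int(len(counts) * top_fraction))
    return sum(counts[:k]) / sum(counts)

print(defect_density(total_defects=120, size_kloc=48.0))   # 2.5 defects/KLOC
per_module = [40, 25, 18, 9, 7, 6, 5, 4, 3, 3]             # hypothetical data
print(pareto_share(per_module))                            # ~0.54 here
```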

Segmented Models

- An abstraction of commonly observed facts about quality across product market segments, e.g. reliability levels (measured by failure rate):
  - safety-critical SW - medical devices and nuclear reactors
  - commercial SW - telecommunications and business
  - auxiliary SW - games and low-cost PC SW
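
A segmented model can be as simple as a per-segment table of expected failure rates. The sketch below only illustrates the idea; the rate values are placeholders, not figures from the slides or any published study:

```python
# Segmented-model sketch: quality estimates differ by market segment.
# The failure-rate figures are hypothetical placeholders.

FAILURE_RATE_PER_HOUR = {           # expected failures per operating hour
    "safety-critical": 1e-9,        # e.g. medical devices, nuclear reactors
    "commercial": 1e-4,             # e.g. telecommunications, business
    "auxiliary": 1e-2,              # e.g. games, low-cost PC software
}

def estimate_failure_rate(segment: str) -> float:
    return FAILURE_RATE_PER_HOUR[segment]

print(estimate_failure_rate("commercial"))
```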

Dynamic Models

- Provide information about quality over time or across development phases, e.g.:
  - defect distribution profile over development phases
  - Putnam model - effort and defect profiles over time
  - reliability growth during product testing
- Can be combined with segmented models to give segmented dynamic models.
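
To illustrate a dynamic model, here is a sketch of one common parameterization of the Rayleigh curve that underlies the Putnam model, giving instantaneous and cumulative defect (or effort) profiles over time (parameter values hypothetical):

```python
import math

# Dynamic-model sketch: a Rayleigh profile, f(t) = 2*K*a*t*exp(-a*t^2) with
# cumulative F(t) = K*(1 - exp(-a*t^2)), where a = 1/(2*t_peak^2) so the
# rate peaks at t = t_peak. K and t_peak below are hypothetical.

def rayleigh_rate(t: float, K: float, t_peak: float) -> float:
    """Instantaneous defect/effort rate at time t."""
    a = 1.0 / (2.0 * t_peak ** 2)
    return 2.0 * K * a * t * math.exp(-a * t * t)

def rayleigh_cumulative(t: float, K: float, t_peak: float) -> float:
    """Expected cumulative defects/effort by time t."""
    a = 1.0 / (2.0 * t_peak ** 2)
    return K * (1.0 - math.exp(-a * t * t))

K, t_peak = 500.0, 4.0   # hypothetical: 500 total defects, peak in month 4
for month in range(0, 13, 2):
    print(month, round(rayleigh_rate(month, K, t_peak), 1),
          round(rayleigh_cumulative(month, K, t_peak), 1))
```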

Product-Specific Models

- Provide more precise quality assessments using product-specific data.
- Three categories:
  - Semi-customized models - extrapolate product history to predict quality for the current project (Table 2)
  - Observation-based models - estimate quality based on observations from the current project
  - Measurement-driven predictive models - establish predictive relations between various early measurements and product quality

Semi-Customized Models

- Use general characteristics and historical information about the product, process, or environment.
- Provide quality extrapolations.
- Examples:
  - Defect removal models (DRMs) provide a defect distribution profile over development phases based on previous releases of the same product.
  - Combining a DRM with the orthogonal defect classification (ODC) model - which profiles defects by the phases in which they were injected and discovered, and by category - helps identify high-defect areas.
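
A minimal sketch of the DRM extrapolation step: scale the per-phase defect profile of the previous release by the estimated size of the current release. Phase names and counts are hypothetical:

```python
# Semi-customized model sketch: extrapolate a defect removal model (DRM)
# profile from a previous release to the current one. All data hypothetical.

PREVIOUS_RELEASE = {            # defects removed per phase, prior release
    "requirements": 30, "design": 80, "coding": 140,
    "testing": 90, "field": 10,
}
PREV_SIZE_KLOC, CURR_SIZE_KLOC = 50.0, 65.0

def extrapolate_drm(prev_profile: dict, prev_kloc: float, curr_kloc: float) -> dict:
    """Scale the historical per-phase profile by the size ratio."""
    scale = curr_kloc / prev_kloc
    return {phase: round(n * scale) for phase, n in prev_profile.items()}

print(extrapolate_drm(PREVIOUS_RELEASE, PREV_SIZE_KLOC, CURR_SIZE_KLOC))
```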

Observation-Based Models

- Relate observations of the software system's behavior to information about related activities for more precise quality assessments, e.g. SRGMs (software reliability growth models), whose parameters are estimated from observation data.
- Usually use data from the current project.
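
As an illustration of estimating SRGM parameters from observation data, here is a sketch that fits the Goel-Okumoto model, mu(t) = a(1 - e^(-bt)), to hypothetical cumulative failure counts with a crude grid search (a real analysis would use maximum likelihood or nonlinear least squares):

```python
import math

# Observation-based model sketch: fit the Goel-Okumoto SRGM to cumulative
# failure counts observed during testing. Data and grid search are
# illustrative only.

hours    = [10, 20, 30, 40, 50, 60]        # hypothetical test-execution time
failures = [12, 21, 27, 31, 34, 36]        # hypothetical cumulative failures

def sse(a: float, b: float) -> float:
    """Sum of squared errors between the model and the observations."""
    return sum((n - a * (1 - math.exp(-b * t))) ** 2
               for t, n in zip(hours, failures))

a, b = min(((a, b) for a in range(30, 61)
            for b in (i / 1000 for i in range(5, 100))),
           key=lambda p: sse(*p))
print(f"estimated total failures a={a}, rate b={b}")
print(f"predicted failures by hour 100: {a * (1 - math.exp(-b * 100)):.1f}")
```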

Measurement-Driven Predictive Models

- Establish predictive relations between quality and other measurements from historical data.
- Provide early predictions of quality.
- Identify problems early for timely actions.
- Use statistical analysis techniques / learning algorithms.
- Examples: relationships between defect fixes and design and code measurements:
  - high-defect modules of legacy products are associated with numerous changes and high data complexity
  - high-defect modules of new products are associated with complex design and control structures
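
A minimal sketch of such a model using a learning algorithm: a small decision tree (via scikit-learn) trained on hypothetical historical module measurements - change counts and complexity - to flag likely high-defect modules early:

```python
# Measurement-driven predictive model sketch: learn a relation between early
# module measurements and past high-defect outcomes. All data fabricated for
# illustration; a real model would use historical project measurements.
from sklearn.tree import DecisionTreeClassifier

# features per module: [number of changes, cyclomatic complexity]
X_train = [[42, 9], [35, 12], [5, 4], [8, 6], [50, 15], [3, 3], [28, 11], [6, 5]]
y_train = [1, 1, 0, 0, 1, 0, 1, 0]          # 1 = high-defect in past releases

model = DecisionTreeClassifier(max_depth=2).fit(X_train, y_train)

# early measurements for modules of the current release (hypothetical)
X_new = [[40, 13], [4, 5]]
print(model.predict(X_new))                  # e.g. [1 0]: flag the first module
```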

Model Comparison and Interconnections

- Comparisons are based on the usefulness of modeling results, the accuracy of quality estimates, and the applicability of models to different environments.
- Model interconnections can be examined in two opposite directions:
  - customization of generalized quality models to create product-specific models
  - generalization of product-specific models once enough empirical evidence from different products or projects has accumulated

Comparisons

- Usefulness must be weighed against cost (such as the cost of collecting data).
- Generalized models are more widely applicable and less expensive to use, since they do not require product-specific measurements.
- Generalized models are more useful in the product planning stage and early development phases, when product-specific data is unavailable - except when historical data exists, in which case semi-customized models are better.

More Comparisons

- Observation-based and measurement-driven predictive models become better suited to managing QA activities, and later development and maintenance activities, as more measurement data is collected.

More Comparisons

- Generalized models have counterparts among product-specific models, and vice versa:
  - generalized models can be customized into product-specific ones
  - product-specific models can be generalized
- Which direction applies depends on the kind of measurement data collected and the analysis results available.

Data Requirements and Measurement

- Different models have different data requirements (direct and/or indirect).
- Generalized models are based on industrial averages and general profiles for all products or for a product segment:
  - no data from the current project is needed directly
  - but measurements taken on the current project can be accumulated into the empirical base to calibrate the models for future applications

Data Requirements and Measurement for Product-Specific Models

- Measurement-driven models:
  - need direct quality measurements and indirect quality measurements (process, product, and people)
  - need early measurements from historical / current releases
- Semi-customized models:
  - use indirect environmental measurements to characterize the current project
  - extrapolate quality estimates from previous releases
  - use coarse-grained activity measures

Data Requirements and Measurement for Product-Specific Models

- Observation-based models:
  - use direct quality measurements
  - assume environmental characteristics

Data Requirements and Measurement (Table 19.5)

Models Supported by Kinds of Data

- Direct and indirect quality measurements from industry form the empirical basis for generalized models.
- Direct quality measurements are used in all product-specific models:
  - product-specific extrapolations in semi-customized models
  - related to development activities in observation-based models
  - predicted from early measurements in measurement-driven models

Models Supported by Kinds of Data

- Environmental measurements are mainly used in semi-customized models:
  - characterize the current product to make extrapolations
- Product internal measurements are used in measurement-driven predictive models:
  - early assessment of product quality
  - identify problematic areas
- Activity measurements are used by various models:
  - coarse-grained measures in semi-customized models, e.g. defect data grouped by phase
  - fine-grained measures in observation-based models
- Summarized in Figure 19.3.

Selecting Measurements and Models

- Use a goal-oriented approach (GQM):
  - set specific quality goals (e.g. high reliability)
  - choose specific quality assessment models that can answer our concerns (e.g. SRGMs)
  - choose appropriate measurements (e.g. failure and test-execution-time measurements)
- Examples A-C in the text.
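
The GQM chain for the reliability example above can be written down directly; the sketch below is only an illustration of the goal-question-metric structure, not a prescribed format:

```python
# GQM selection sketch: the goal, model, and measurements mirror the slide's
# reliability example; the dictionary layout itself is just an illustration.

gqm = {
    "goal": "high reliability",
    "question": "will the product meet its failure-rate target at release?",
    "model": "software reliability growth models (SRGMs)",
    "measurements": ["failure count per test run", "test execution time"],
}

for key, value in gqm.items():
    print(f"{key:>12}: {value}")
```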

Thank you