Chapter 19: Quality Models and Measurements
 Types of Quality Assessment Models
 Data Requirements and Measurement
 Comparing Quality Assessment Models
 Measurement and Model Selection

Introduction
 Analytical models provide a quantitative assessment of selected quality characteristics
 Applied over time, they can provide accurate predictions of future quality
 The purpose of measurement and analysis is to enable corrective actions => improvement
 provide timely feedback/assessment
 identify problematic areas
 support prediction, anticipating/planning for scheduling and resource allocation

Models for Quality Assessment
 Direct indicators of quality
 defect measurements - defect density for correctness
 probability of failure-free operation for reliability
 measured at the end of software development (see the sketch below)
 Indirect indicators of quality
 product internal attributes (e.g. KLOC, McCabe's cyclomatic complexity)
 interaction between product and user
 development process
 general characteristics of the product (e.g. telecom)
 may be available early enough to make predictions
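To make the two direct indicators concrete, here is a minimal Python sketch; the defect count, size, and the constant-failure-rate (exponential) reliability model are illustrative assumptions, not values from the chapter.

```python
import math

def defect_density(total_defects: int, size_kloc: float) -> float:
    """Direct correctness indicator: defects per thousand lines of code."""
    return total_defects / size_kloc

def failure_free_probability(failure_rate: float, hours: float) -> float:
    """Direct reliability indicator: P(no failure over `hours`),
    assuming a constant failure rate (exponential model)."""
    return math.exp(-failure_rate * hours)

print(defect_density(42, 10.5))              # 4.0 defects/KLOC
print(failure_free_probability(0.001, 100))  # ~0.905
```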

Models for Quality Assessment
 [Figure omitted in the transcript: classification of quality assessment models]

Generalized Models for Quality Assessment
 Require little or no project-specific data
 Three categories:
 Overall model – provides a single estimate of overall product quality
 Segmented model – provides different quality estimates for different industrial segments
 Dynamic model – provides a quality trend or distribution over time or the development process

Overall Models
 Most general subtype of generalized quality models
 Provide a rough estimate of product quality, e.g. defect density = total defects / product size
 Lump all products together – an abstraction of commonly observed facts about quality that hold across application domains, e.g.
 the 80:20 rule, which states that 80% of defects are concentrated in 20% of product modules/components (see the sketch below)
 linkages from software defects, risk, and process maturity to quality
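As an illustration of checking the 80:20 rule against module-level defect data, here is a hypothetical sketch; the module names and defect counts are invented.

```python
def defect_concentration(defects_per_module: dict, top_fraction: float = 0.2) -> float:
    """Share of all defects found in the top `top_fraction` of modules."""
    counts = sorted(defects_per_module.values(), reverse=True)
    top_n = max(1, round(len(counts) * top_fraction))
    return sum(counts[:top_n]) / sum(counts)

modules = {"parser": 48, "net": 41, "db": 5, "ui": 3, "util": 2,
           "auth": 1, "log": 0, "cfg": 0, "cli": 0, "docs": 0}
print(f"top 20% of modules hold {defect_concentration(modules):.0%} of all defects")
```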

Segmented Models
 Abstraction of commonly observed facts about quality over product market segments, e.g. reliability levels (measured by failure rate):
 safety-critical SW – medical devices and nuclear reactors
 commercial SW – telecommunications and business
 auxiliary SW – games and low-cost PC SW

Dynamic Models
 Provide information about quality over time or development phases, e.g.
 defect distribution profile over development phases
 Putnam model – effort and defect profiles over time
 reliability growth during product testing (see the sketch below)
 Can be combined with segmented models to give segmented dynamic models
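As one concrete form a dynamic model can take, the sketch below evaluates the Goel-Okumoto reliability growth curve, mu(t) = a(1 - e^(-b*t)), the expected cumulative defects found by test time t; the parameter values are assumptions for illustration.

```python
import math

def goel_okumoto(t: float, a: float = 120.0, b: float = 0.05) -> float:
    """Expected cumulative defects found by test time t (here, weeks)."""
    return a * (1.0 - math.exp(-b * t))

for week in (1, 5, 10, 20, 40):
    print(f"week {week:2d}: {goel_okumoto(week):6.1f} defects expected")
```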

Product-Specific Models
 Provide more precise quality assessments using product-specific data
 Three categories:
 Semi-customized models – extrapolate product history to predict quality for the current project (Table 2)
 Observation-based models – estimate quality based on observations from the current project
 Measurement-driven predictive models – establish predictive relations between various early measurements and product quality

Semi-Customized Models
 Use general characteristics and historical information about the product, process, or environment
 Provide quality extrapolations
 Examples:
 Defect removal models (DRMs) provide a defect distribution profile over development phases based on previous releases of the same product (see the sketch below)
 Combining a DRM with the orthogonal defect classification (ODC) model profiles defects by the individual phases in which they were injected and discovered, and by categories => identify high-defect areas
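A hedged sketch of the DRM idea: scale the per-phase defect profile observed in the previous release to a predicted total for the current release. All counts here are invented for illustration.

```python
previous_release = {"design": 40, "coding": 120, "unit test": 80,
                    "integration": 40, "system test": 20}   # defects removed
predicted_total = 360   # current-release estimate, e.g. scaled by size growth

prev_total = sum(previous_release.values())
projection = {phase: round(count / prev_total * predicted_total)
              for phase, count in previous_release.items()}
print(projection)
# {'design': 48, 'coding': 144, 'unit test': 96,
#  'integration': 48, 'system test': 24}
```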

Observation-based Models
 Relate observations of software system behavior to information about related activities for more precise quality assessments, e.g.
 SRGMs (software reliability growth models) – estimate model parameters based on observation data (see the sketch below)
 Usually use data from the current project
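A minimal sketch of observation-based estimation: fit the two parameters of the Goel-Okumoto mean value function (from the Dynamic Models sketch) to cumulative failure counts observed on the current project, using least squares. The counts are fabricated, and numpy/scipy availability is assumed.

```python
import numpy as np
from scipy.optimize import curve_fit

def mean_value(t, a, b):
    """Goel-Okumoto mean value function: expected failures by time t."""
    return a * (1.0 - np.exp(-b * t))

weeks = np.arange(1, 11)
observed = np.array([9, 17, 26, 31, 38, 41, 45, 47, 50, 51])

(a_hat, b_hat), _ = curve_fit(mean_value, weeks, observed, p0=(60.0, 0.1))
print(f"estimated eventual defects a = {a_hat:.1f}, detection rate b = {b_hat:.3f}")
```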

Measurement-driven Predictive Models
 Establish predictive relations between quality and other measurements from historical data
 Provide early predictions of quality
 Identify problems early for timely actions
 Use statistical analysis techniques / learning algorithms (see the sketch after this list)
 Examples: relationships between defect fixes and design and code measurements
 high-defect modules of legacy products are associated with numerous changes and high data complexity
 high-defect modules of new products are associated with complex design and control structures
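A hedged sketch of this idea using scikit-learn (my choice of library; a decision tree stands in for the chapter's unspecified "learning algorithms"): learn a relation between early module measurements and past high-defect outcomes, then flag a new module early. All data are invented.

```python
from sklearn.tree import DecisionTreeClassifier

# columns: [size in KLOC, max cyclomatic complexity, number of changes]
X = [[1.2, 4, 3], [5.8, 22, 19], [0.9, 3, 1], [4.1, 18, 14],
     [2.0, 7, 5], [6.3, 25, 21], [1.5, 5, 2], [3.9, 16, 12]]
y = [0, 1, 0, 1, 0, 1, 0, 1]   # 1 = module turned out high-defect

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(model.predict([[5.0, 20, 15]]))   # early flag for a new module: [1]
```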

Identify High-Risk Areas in Development
 Relationship between defect fixes and various design and code measurements:
 high-defect modules of legacy products are associated with numerous changes and high data complexity
 high-defect modules of new products are associated with complex design and control structures

Model Comparison and Interconnections
 Comparisons are based on the usefulness of modeling results, the accuracy of quality estimates, and the applicability of models to different environments
 Model interconnections can be examined in two opposite directions:
 customization of generalized quality models to create product-specific models
 generalization of product-specific models when enough empirical evidence from different products or projects has accumulated

Comparisons
 Usefulness must be weighed against cost (such as the cost of collecting data)
 Generalized models are more widely applicable and less expensive to use (they do not require product-specific measurements)
 Generalized models are more useful in the product planning stage and early development phases, when product-specific data are unavailable – except when historical data exist, in which case semi-customized models are better

More Comparisons
 Observation-based and measurement-driven predictive models are better suited to managing QA activities and later development and maintenance activities, as more measurement data are collected

More Comparisons
 Generalized models have counterparts among product-specific models and vice versa:
 generalized models can be customized into product-specific ones
 product-specific models can be generalized
 Which direction applies depends on the kind of measurement data collected and the analysis results available

Data Requirements and Measurement
 Different models have different data requirements (direct and/or indirect quality measurements)
 Generalized models are based on industrial averages and general profiles for all products or a product segment:
 no data from the current project are needed directly
 but measurements taken on the current project can be accumulated into the empirical base to calibrate models for future applications

Data Requirements and Measurement for Product-Specific Models
 Measurement-driven predictive models
 need direct quality measurements and indirect quality measurements (process, product, and people)
 need early measurements from historical / current releases
 Semi-customized models
 use indirect environmental measurements to characterize the current project
 extrapolate quality estimates from previous releases
 use coarse-grained activity measures

Data Requirements and Measurement for Product-Specific Models
 Observation-based models
 use direct quality measurements
 environmental characteristics are assumed

Data Requirements and Measurement (Table 19.5)
 [Table 19.5 omitted in the transcript: data requirements of the different quality assessment models]

Models Supported by Kinds of Data
 Direct and indirect quality measurements from industry form the empirical basis for generalized models
 Direct quality measurements are used in all product-specific models:
 for product-specific extrapolations in semi-customized models
 related to development activities in observation-based models
 predicted from early measurements in measurement-driven models

Models Supported by Kinds of Data
 Environmental measurements are mainly used in semi-customized models
 to characterize the current product and make extrapolations
 Product internal measurements are used in measurement-driven predictive models
 for early assessment of product quality
 to identify problematic areas
 Activity measurements are used by various models:
 coarse-grained measures in semi-customized models, e.g. defect data grouped by phase
 fine-grained measures in observation-based models
 Summarized in Figure 19.3

Selecting Measurements and Models
 Use a goal-oriented approach (GQM: goal / question / metric):
 set specific quality goals (e.g. high reliability)
 choose specific quality assessment models that can answer our concerns (e.g. SRGMs)
 choose appropriate measurements (e.g. failure and test execution time measurements)
 Examples A - C in the text; a toy sketch of such a mapping follows below.
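A toy sketch of the GQM mapping above, expressed as a simple lookup table; the first entry follows the slide's reliability example, while the second entry is an assumed addition for illustration.

```python
gqm = {
    "high reliability": {
        "model": "software reliability growth models (SRGMs)",
        "measurements": ["failure data", "test execution time"],
    },
    "low field-defect density": {   # assumed additional goal, for illustration
        "model": "defect removal model (DRM)",
        "measurements": ["defects found per phase", "product size (KLOC)"],
    },
}

goal = "high reliability"
print(f"{goal}: use {gqm[goal]['model']}, "
      f"measuring {', '.join(gqm[goal]['measurements'])}")
```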