1 In-Process Metrics for Software Testing
Kan Ch 10
Steve Chenoweth, RHIT
Left – In materials testing, the goal is always to break it! That’s how you know how strong it is.

2 Of course, that method has limits!

3 Lots of testing metrics get proposed
The question is – how much experience backs them up?
For each, we also need to know:
– Purpose
– Data required
– Interpretation
– Use
Right – It’s like the “Required” data on a form.

4 The test progress S curve
Looks suspiciously close to plan, huh!

5 Of course…
Some test cases are more important than others.
– We could assign scores to test cases.
– For example, 10 = most important, 1 = least (a small weighted-progress sketch follows).
– Scoring is always subjective.
Test points should be planned.
– What’s possible to test, and when.
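A minimal sketch of this idea, assuming each test case carries a subjective importance score and a planned week; the record layout, names, and numbers are illustrative, not from Kan.

```python
# Weighted test progress: cumulative planned vs. attempted "test points"
# per week. All identifiers and values here are made-up examples.
from collections import defaultdict

test_cases = [
    # (case_id, score, planned_week, attempted_week or None)
    ("TC-001", 10, 1, 1),
    ("TC-002", 7, 1, 2),
    ("TC-003", 3, 2, 2),
    ("TC-004", 10, 3, None),   # not yet attempted
]

def cumulative_points(cases):
    """Return (week, cumulative planned points, cumulative attempted points)."""
    planned, attempted = defaultdict(int), defaultdict(int)
    for _case_id, score, plan_wk, att_wk in cases:
        planned[plan_wk] += score
        if att_wk is not None:
            attempted[att_wk] += score
    cum_plan = cum_att = 0
    rows = []
    for wk in sorted(set(planned) | set(attempted)):
        cum_plan += planned[wk]
        cum_att += attempted[wk]
        rows.append((wk, cum_plan, cum_att))
    return rows

for wk, plan, att in cumulative_points(test_cases):
    print(f"week {wk}: planned {plan} points, attempted {att} points")
```

Plotting the two cumulative series against calendar weeks gives the planned and actual S curves that can then be compared across products.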

6 Can then compare products

7 Tracking defect arrivals
Testing progress – usually built into tools.
The defect arrival pattern over time is important.
Defect density during testing is a summary indicator, but
– Not an in-process indicator.
The pattern of defect arrivals over time is more informative (sketched below).
– Need to “normalize” against what’s expected
– And consider severity
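A minimal sketch of tracking weekly defect arrivals by severity, normalized per KLOC so releases of different size can be compared; the record layout, severity scale, and size figure are assumptions for illustration.

```python
# Weekly defect arrivals grouped by severity, normalized per KLOC.
from collections import Counter

KLOC = 250.0  # assumed product size in thousands of lines of code

defects = [
    # (arrival_week, severity) where severity 1 = most severe
    (1, 3), (1, 2), (2, 1), (2, 3), (3, 2), (3, 1), (3, 1),
]

arrivals = Counter((week, sev) for week, sev in defects)
weeks = sorted({week for week, _ in defects})

for week in weeks:
    by_sev = {sev: arrivals[(week, sev)] for sev in (1, 2, 3, 4)}
    total = sum(by_sev.values())
    print(f"week {week}: {total} arrivals ({total / KLOC:.3f}/KLOC), "
          f"sev1={by_sev[1]} sev2={by_sev[2]}")
```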

8 What’s good & bad?

9 Unfortunate severity pattern…
More high-severity defects at the end!

10 Putting it all together

11 Defect backlog tracking
Tracking the “defects remaining.”
Use the PTR backlog to judge (a small backlog sketch follows).
– Shows test progress.
– Notes “customer rediscoveries.”
A large number of unresolved defects during development impedes test progress.
If a different organization fixes defects in the field than the one doing development testing, don’t ship with a significant backlog! Why?
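A minimal sketch of computing the backlog (defects opened but not yet closed) at the end of each week; the record layout is an assumption for illustration.

```python
# PTR backlog over time: count records still open at the end of each week.
ptrs = [
    # (opened_week, closed_week or None if still open)
    (1, 2), (1, 3), (2, None), (2, 4), (3, None), (4, None),
]

last_week = 5
for week in range(1, last_week + 1):
    backlog = sum(
        1 for opened, closed in ptrs
        if opened <= week and (closed is None or closed > week)
    )
    print(f"end of week {week}: backlog = {backlog}")
```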

12 When to focus on the backlog
During the prime time for development testing:
– The focus should be on test effectiveness and test execution.
– Focusing too early on the overall backlog conflicts with this – the development team becomes reluctant to report defects!
– The focus should be on fix turnaround for defects that impede test progress, rather than on the entire backlog.
When testing approaches completion:
– Then place a strong focus on the overall defect backlog.

13 Product size as an indicator
Lines of code = “Effort”?
– Helps explain test progress.
– Helps measure total defect volume per unit of size (see the sketch below).
A feature being removed – very interesting!
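A minimal sketch of normalizing defect volume by size, e.g. defects per KLOC, so totals can be compared across components or releases; the component names and numbers are made up.

```python
# Defect volume per unit of size (defects/KLOC) for a few components.
components = {
    # name: (lines_of_code, defects_found_in_test)
    "parser":  (42_000, 37),
    "planner": (18_500, 9),
    "ui":      (60_200, 55),
}

for name, (loc, found) in components.items():
    kloc = loc / 1000.0
    print(f"{name}: {found / kloc:.2f} defects/KLOC over {kloc:.1f} KLOC")
```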

14 CPU Utilization?
There actually are many types of measurements you might want to take during a “stress test.” CPU usage is certainly one of these (a small sampling sketch follows).
– Kan’s discussion is important – issues like “where the peaks occur during a customer’s duty cycle” need to be investigated.
We’ll talk about all of this later, under “performance metrics” (week 7).
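A minimal sketch of sampling system-wide CPU utilization during a stress run, assuming the third-party psutil package is available; the sample count and interval are arbitrary choices for illustration.

```python
# Sample CPU utilization repeatedly and report average and peak.
import psutil

samples = [psutil.cpu_percent(interval=1.0) for _ in range(10)]  # ~10 seconds

print(f"CPU utilization: avg {sum(samples) / len(samples):.1f}%, "
      f"peak {max(samples):.1f}%")
# The interesting question is where the peaks fall in the customer's duty
# cycle, not the raw numbers themselves.
```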

15 System Crashes
Also known as reboots, or IPLs (Initial Program Loads).
Also known as “hangs.”
– These can also be starvation or deadlock, etc.
Related to CPU utilization:
– High utilization breaks things that can’t get done in their usual time (e.g., a lock release).
– Also timing errors (more going on).
– Associated resource problems (disks, memory).
– High stress level.

16 Release by crash count?
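A minimal sketch of the weekly crash-and-hang counting that sits behind a chart like this; the event records are illustrative assumptions.

```python
# Count unplanned IPLs and hangs per week of system test, plus a running total.
from collections import Counter

events = [
    # (week, kind) where kind is "crash" (unplanned IPL) or "hang"
    (1, "crash"), (1, "hang"), (2, "crash"), (3, "hang"), (3, "crash"),
    (4, "crash"),
]

per_week = Counter(week for week, _ in events)
cumulative = 0
for week in sorted(per_week):
    cumulative += per_week[week]
    print(f"week {week}: {per_week[week]} crashes/hangs, {cumulative} cumulative")
```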

17 Other showstoppers
It takes only a few to render a product dysfunctional.
A qualitative “metric”:
– Number of critical problems over time, with a release-to-release comparison.
– Types of critical problems, and the analysis and resolution of each.

18 Kan’s recommendations
– Use calendar time, instead of phases of the development process, as the measurement unit for in-process metrics.
– For time-based metrics, use the ship date as the reference point and weeks as the unit of measurement (see the sketch below).
– Metrics should show “good” or “bad” in terms of quality or schedule.
– Know which metrics are subject to strong management intervention.
– Metrics should drive improvements.
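A minimal sketch of the weeks-relative-to-ship-date convention recommended above; the ship date and sample date are placeholders.

```python
# Express a metric's time axis as whole weeks before the ship date.
from datetime import date

SHIP_DATE = date(2024, 6, 28)           # assumed ship date

def weeks_to_ship(event_date: date) -> int:
    """Whole weeks remaining before ship (negative after ship)."""
    return (SHIP_DATE - event_date).days // 7

print(weeks_to_ship(date(2024, 5, 3)))  # e.g. 8 weeks before ship
```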

19 Recommended: Use the effort / outcome model
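One way to sketch the effort/outcome idea: pair an effort indicator (e.g., test progress against plan) with an outcome indicator (e.g., defect arrivals against expectation) and read the four cells together. The cell labels below paraphrase the usual interpretation and the thresholds are illustrative assumptions, not Kan's exact wording.

```python
# Classify a (effort, outcome) pair into one of four interpretive cells.
def effort_outcome_cell(effort_vs_plan: float, defects_vs_expected: float) -> str:
    higher_effort = effort_vs_plan >= 1.0       # at or ahead of planned effort
    higher_defects = defects_vs_expected > 1.0  # more arrivals than expected
    if higher_effort and not higher_defects:
        return "good: solid testing effort, lower defect arrivals"
    if higher_effort and higher_defects:
        return "mixed: extra testing is exposing more defects"
    if not higher_effort and not higher_defects:
        return "unsure: low arrivals may just mean not enough testing"
    return "worst: behind on testing and defects are still piling up"

print(effort_outcome_cell(1.1, 0.8))
```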

20 Test coverage
– Setup
– Install
– Min/max configuration
– Concurrence
– Error recovery
– Cross-product interoperability
– Cross-release compatibility
– Usability
– Double-byte character set (DBCS)
Tends not to be tested enough! Leads to cascading problems!

21 Metrics for acceptance testing
Related to test cases:
– Percentage of test cases attempted
– Number of defects per executed test case
– Number of failing test cases without defect records
Related to test execution records:
– Success rate
– Persistent failure rate
– Defect injection rate
– Code completeness
Of course, sometimes “acceptance” means “by the customer”…
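A minimal sketch of the test-case-related metrics above, computed from a toy execution log; the record layout and values are illustrative assumptions.

```python
# Acceptance-test metrics: % attempted, defects per executed case,
# failing cases without defect records, and success rate.
test_log = [
    # (case_id, attempted, passed, defects_recorded)
    ("AT-01", True,  True,  0),
    ("AT-02", True,  False, 2),
    ("AT-03", True,  False, 0),   # failed but no defect record -- a red flag
    ("AT-04", False, False, 0),
]

total = len(test_log)
attempted = [t for t in test_log if t[1]]
pct_attempted = 100.0 * len(attempted) / total
defects_per_executed = sum(t[3] for t in attempted) / len(attempted)
failures_without_defects = sum(1 for t in attempted if not t[2] and t[3] == 0)
success_rate = 100.0 * sum(1 for t in attempted if t[2]) / len(attempted)

print(f"{pct_attempted:.0f}% attempted, "
      f"{defects_per_executed:.2f} defects per executed case")
print(f"{failures_without_defects} failing case(s) without defect records, "
      f"{success_rate:.0f}% success rate")
```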

22 Good enough to ship?
– System stability, reliability, and availability
– Defect volume
– Outstanding critical problems
– Feedback from early customer programs
– Other quality attributes of specific importance to a particular product and its acceptance
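A minimal sketch of turning the criteria above into an explicit ship/hold check; all names, thresholds, and values are assumptions a team would set for itself, not figures from Kan.

```python
# Evaluate ship-readiness criteria against team-chosen thresholds.
criteria = {
    # name: (measured_value, pass_test)
    "mean_time_between_crashes_hours": (150.0, lambda v: v >= 100.0),
    "open_defect_backlog":             (42,    lambda v: v <= 50),
    "outstanding_critical_problems":   (1,     lambda v: v == 0),
    "early_customer_satisfaction_pct": (87.0,  lambda v: v >= 80.0),
}

ready = True
for name, (value, passes) in criteria.items():
    ok = passes(value)
    ready = ready and ok
    print(f"{name}: {value} -> {'OK' if ok else 'NOT OK'}")
print("ship" if ready else "hold")
```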

23 Recommendations for small organizations
Use:
– The test progress S curve
– Defect arrival density
– Critical problems or showstoppers
– The effort / outcome model to interpret metrics and manage quality
Also conduct an evaluation of whether the product is good enough to ship.