Topics in Metrics for Software Testing [Reading assignment: Chapter 20, pp. 314-326]


Quantification One of the characteristics of a maturing discipline is the replacement of art by science. Early physics was dominated by philosophical discussions with no attempt to quantify things. Quantification was impossible until the right questions were asked.

Quantification (Cont’d) Computer Science is slowly following the quantification path. There is skepticism because so much of what we want to quantify is tied to erratic human behavior.

Software quantification Software engineers are still counting lines of code. This popular metric is highly inaccurate when used to predict:
–costs
–resources
–schedules

Science begins with quantification Physics needs measurements for time, mass, etc. Thermodynamics needs measurements for temperature. The “size” of software is not obvious. We need an objective measure of software size.

Software quantification Lines of Code (LOC) is not a good measure of software size. In software testing we need a notion of size when comparing two testing strategies. The number of tests should be normalized to software size, for example:
–Strategy A needs 1.4 tests/unit size.

Asking the right questions When can we stop testing? How many bugs can we expect? Which testing technique is more effective? Are we testing hard or smart? Do we have a strong program or a weak test suite? Currently, we are unable to answer these questions satisfactorily.

Lessons from physics Measurements lead to empirical laws, which lead to physical laws. E.g., Kepler’s measurements of planetary movement led to Newton’s laws, which led to the modern laws of physics.

Lessons from physics (Cont’d) The metrics we are about to discuss aim at getting empirical laws that relate program size to:
–expected number of bugs
–expected number of tests required to find bugs
–testing technique effectiveness

Metrics taxonomy
–Linguistic metrics: based on measuring properties of program text without interpreting what the text means. E.g., LOC.
–Structural metrics: based on structural relations between the objects in a program. E.g., number of nodes and links in a control flowgraph.

Lines of code (LOC) LOC is used as a measure of software complexity. This metric is just as good as source listing weight if we assume consistency w.r.t. paper and font size. Makes as much sense (or nonsense) to say: –“This is a 2 pound program” as it is to say: –“This is a 100,000 line program.”

Lines of code paradox Paradox: if you unroll a loop, you reduce the complexity of your software... Studies show that there is a linear relationship between LOC and error rates for small programs (i.e., LOC < 100). The relationship becomes non-linear as programs increase in size.

Halstead’s program length Halstead’s length is computed from token counts: H = n1 log2 n1 + n2 log2 n2, where n1 is the number of distinct operators and n2 is the number of distinct operands in the program.

Example of program length
if (y < 0) pow = -y; else pow = y;
z = 1.0;
while (pow != 0) { z = z * x; pow = pow - 1; }
if (y < 0) z = 1.0 / z;
n1 = 9 operators: if, <, =, - (sign), while, !=, *, - (minus), /
n2 = 7 operands: y, 0, pow, x, z, 1, 1.0
H = 9 log2 9 + 7 log2 7 ≈ 48

Example of program length
for (j = 1; j < N; j++) {
  last = N - j + 1;
  for (k = 1; k < last; k++) {
    if (list[k] > list[k+1]) {
      temp = list[k];
      list[k] = list[k+1];
      list[k+1] = temp;
    }
  }
}
n1 = 9 operators: for, =, <, ++, -, +, [], >, if
n2 = 7 operands: j, 1, N, last, k, list, temp
H = 9 log2 9 + 7 log2 7 ≈ 48
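The length computation can be checked mechanically. A minimal sketch using the operator/operand counts from the two examples above (the function name is mine):

```python
import math

def halstead_length(n1, n2):
    """Halstead program length H = n1*log2(n1) + n2*log2(n2),
    where n1 = distinct operators, n2 = distinct operands."""
    return n1 * math.log2(n1) + n2 * math.log2(n2)

# Both examples above count n1 = 9 operators and n2 = 7 operands.
print(round(halstead_length(9, 7)))  # 48
```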

Halstead’s bug prediction
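The slide gives only the title. A commonly cited form of Halstead’s bug estimate, assumed here, is B = V / 3000, where the volume is V = (N1 + N2) log2(n1 + n2). A sketch with purely illustrative counts:

```python
import math

def halstead_bugs(N1, N2, n1, n2):
    """Assumed form of Halstead's delivered-bug estimate: B = V / 3000,
    with volume V = (N1 + N2) * log2(n1 + n2).
    N1, N2: total operator/operand occurrences;
    n1, n2: distinct operators/operands."""
    volume = (N1 + N2) * math.log2(n1 + n2)
    return volume / 3000

# Illustrative counts only (not taken from the slides):
print(halstead_bugs(N1=50, N2=30, n1=9, n2=7))  # V = 320, B ≈ 0.107
```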

How good are Halstead’s metrics? The validity of the metric has been confirmed experimentally many times, independently, over a wide range of programs and languages. Lipow compared actual to predicted bug counts to within 8% over a range of program sizes from 300 to 12,000 statements.

Structural metrics Linguistic complexity is ignored. Attention is focused on control-flow and data-flow complexity. Structural metrics are based on the properties of flowgraph models of programs.

Cyclomatic complexity McCabe’s cyclomatic complexity is defined as:
M = L - N + 2P
–L = number of links in the flowgraph
–N = number of nodes in the flowgraph
–P = number of disconnected parts of the flowgraph

Property of McCabe’s metric The complexity of several graphs considered together is equal to the sum of the individual complexities of those graphs.

Examples of cyclomatic complexity
–L=1, N=2, P=1: M = 1 - 2 + 2 = 1
–L=4, N=4, P=1: M = 4 - 4 + 2 = 2
–L=4, N=5, P=1: M = 4 - 5 + 2 = 1
–L=2, N=4, P=2: M = 2 - 4 + 4 = 2
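The formula is simple enough to script. A minimal sketch reproducing the four example graphs above (the function name is mine):

```python
def cyclomatic(L, N, P=1):
    """McCabe's cyclomatic complexity: M = L - N + 2P."""
    return L - N + 2 * P

# The four example graphs above:
print(cyclomatic(1, 2))       # 1
print(cyclomatic(4, 4))       # 2
print(cyclomatic(4, 5))       # 1
print(cyclomatic(2, 4, P=2))  # 2
```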

Cyclomatic complexity heuristics To compute the cyclomatic complexity of a flowgraph with a single entry and a single exit: M = (number of binary decisions) + 1. Note:
–Count an n-way case statement as n - 1 binary decisions.
–Count looping as a single binary decision.

Compound conditionals Each predicate of each compound condition must be counted separately. E.g., for a single if statement:
–one predicate (A): M = 2
–two predicates (A & B): M = 3
–three predicates (A & B & C): M = 4
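The decision-counting heuristic can be sketched as a small helper (the name, signature, and case-statement handling are my framing of the rules above):

```python
def m_from_decisions(binary_decisions, case_ways=()):
    """Heuristic M for a single-entry, single-exit routine:
    M = total binary decisions + 1. Each predicate of a compound
    condition is one decision; an n-way case contributes n - 1
    decisions; a loop test is one decision."""
    total = binary_decisions + sum(n - 1 for n in case_ways)
    return total + 1

print(m_from_decisions(1))  # 2: a single predicate, e.g. if (A)
print(m_from_decisions(3))  # 4: if (A & B & C), as above
print(m_from_decisions(1, case_ways=[4]))  # 5: one loop plus a 4-way case
```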

Cyclomatic complexity of programming constructs
–Sequence (A; B; …; Z): M = 1
–if E then A else B; C: M = 2
–loop A; exit when E; B; end loop; C: M = 2
–case E of k - 1 alternatives; end case: M = (2(k-1)+1) - (k+2) + 2 = k - 1

Applying cyclomatic complexity to evaluate test plan completeness Count how many test cases are intended to provide branch coverage. If the number of test cases < M, then one of the following may be true:
–You haven’t calculated M correctly.
–Coverage isn’t complete.
–Coverage is complete but it can be done with more but simpler paths.
–It might be possible to simplify the routine.

Warning Use the relationship between M and the number of covering test cases as a guideline, not an immutable fact.

Subroutines & M Comparing a common part embedded k times in the main routine against factoring it into a subroutine called k times:

                     Embedded                      Subroutine for common part
Main nodes           Nm + kNc                      Nm
Main links           Lm + kLc                      Lm + k
Subroutine nodes     0                             Nc + 2
Subroutine links     0                             Lc
Main M               Lm + kLc - Nm - kNc + 2       Lm + k - Nm + 2
Subroutine M         0                             Lc - (Nc+2) + 2 = Lc - Nc = Mc
Total M              Lm + kLc - Nm - kNc + 2       Lm + Lc - Nm - Nc + k + 2

When is the creation of a subroutine cost effective? The break-even point occurs when the total complexities are equal:
Lm + kLc - Nm - kNc + 2 = Lm + Lc - Nm - Nc + k + 2
Solving for the subroutine’s complexity Mc = Lc - Nc gives Mc = k / (k - 1). The break-even point is independent of the main routine’s complexity.

Example If the typical number of calls to a subroutine is 1.1 (k=1.1), the subroutine being called must have a complexity of 11 or greater if the net complexity of the program is to be reduced.
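The break-even relationship Mc = k / (k - 1) can be sketched directly (the function name is mine):

```python
def break_even_mc(k):
    """Subroutine complexity at which factoring out a common part
    called k times leaves the total M unchanged: Mc = k / (k - 1)."""
    return k / (k - 1)

print(break_even_mc(1.1))  # ~11, matching the example above
print(break_even_mc(2.0))  # 2.0: with two calls, a tiny subroutine already pays off
```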

Cost effective subroutines (Cont’d)

Relationship plotted as a function Note that the function does not make sense for values of 0 < k < 1, because then Mc < 0! Therefore we require k > 1.

How good is M? A military software project applied the metric and found that routines with M > 10 (23% of all routines) accounted for 53% of the bugs. Also, of 276 routines, the ones with M > 10 had 21% more errors per LOC than those with M <= 10. McCabe advises partitioning routines with M > 10.

Pitfalls
–if... then... else has the same M as a loop!
–case statements, which are highly regular structures, have a high M.
Warning: McCabe’s metric should be used as a rule of thumb at best.

Rules of thumb based on M
–Bugs/LOC increases discontinuously for M > 10.
–M is better than LOC in judging life-cycle efforts.
–Routines with a high M (say > 40) should be scrutinized.
–M establishes a useful lower-bound rule of thumb for the number of test cases required to achieve branch coverage.

Software testing process metrics Bug tracking tools enable the extraction of several useful metrics about the software and the testing process. Test managers can see whether trends in the data show areas that:
–may need more testing
–are on track for the scheduled release date
Examples of software testing process metrics:
–Average number of bugs per tester per day
–Number of bugs found per module
–The ratio of Severity 1 bugs to Severity 4 bugs
–…
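A minimal sketch of how such metrics fall out of raw bug-tracker records (the records, module names, and severities below are hypothetical):

```python
from collections import Counter

# Hypothetical bug records: (module, severity) pairs.
bugs = [
    ("GUI", 1), ("GUI", 4), ("documentation", 2),
    ("floating-point", 1), ("GUI", 3), ("floating-point", 4),
]

# Bugs found per module, and the Severity-1 : Severity-4 ratio.
bugs_per_module = Counter(module for module, _ in bugs)
by_severity = Counter(severity for _, severity in bugs)
sev1_to_sev4_ratio = by_severity[1] / by_severity[4]

print(bugs_per_module.most_common(1))  # [('GUI', 3)]
print(sev1_to_sev4_ratio)              # 1.0
```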

Example queries applied to a bug tracking database What areas of the software have the most bugs? The fewest bugs? How many resolved bugs are currently assigned to John? Mary is leaving for vacation soon. How many bugs does she have to fix before she leaves? Which tester has found the most bugs? What are the open Priority 1 bugs?

Example data plots Number of bugs versus resolution:
–fixed bugs
–deferred bugs
–duplicate bugs
–non-bugs
Number of bugs versus each major functional area of the software:
–GUI
–documentation
–floating-point arithmetic
–etc.

Example data plots (cont’d) Bugs opened versus date opened over time. This view can show:
–bugs opened each day
–cumulative opened bugs
On the same plot we can show resolved bugs, closed bugs, etc. to compare the trends.
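The cumulative view is just a running sum of the daily counts. A minimal sketch with hypothetical data:

```python
from itertools import accumulate

# Hypothetical counts of bugs opened each day.
opened_per_day = [2, 3, 1, 4, 0, 2]

# Running total: the "cumulative opened bugs" curve.
cumulative_opened = list(accumulate(opened_per_day))
print(cumulative_opened)  # [2, 5, 6, 10, 10, 12]
```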

You now know … … the importance of quantification … various software metrics … various software testing process metrics and views