Realizing Quality Improvement Through Test Driven Development: Results and Experiences of Four Industrial Teams by Nachiappan Nagappan, E. Michael Maximilien, Thirumalesh Bhat, and Laurie Williams Presented by Nikolas Terani and Eric Driggs
Abstract
● Purpose: address the lack of empirical evidence supporting or refuting TDD in an industrial setting.
  – The studied teams used no other Agile/XP practices, so the comparison is TDD vs. non-TDD only.
● Findings:
  – 40% to 90% fewer pre-release defects
  – 15% to 35% longer initial development time
Variations in Groups
● 4 groups:
  – 1 from IBM
  – 3 from Microsoft
2. TDD Overview
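TDD works in a short red-green-refactor loop: write a failing unit test, write just enough code to pass it, then clean up the design. A minimal sketch of one iteration in JUnit 3 style (the BoundedCounter class and its methods are hypothetical names invented for this illustration, not from the paper):

import junit.framework.TestCase;

// Red: this test is written first and fails until BoundedCounter exists.
public class BoundedCounterTest extends TestCase {
    public void testIncrementStopsAtUpperBound() {
        BoundedCounter counter = new BoundedCounter(2); // upper bound of 2
        counter.increment();
        counter.increment();
        counter.increment(); // third call must not push the value past the bound
        assertEquals(2, counter.value());
    }
}

// Green: just enough production code to make the test pass.
class BoundedCounter {
    private final int bound;
    private int value = 0;

    BoundedCounter(int bound) { this.bound = bound; }

    void increment() {
        if (value < bound) value++;
    }

    int value() { return value; }
}
// Refactor: with the test green, the design can be cleaned up safely.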
Refactoring Example: Consolidate Conditional Expression

Before:
double disabilityAmount() {
    if (_seniority < 2) return 0;
    if (_monthsDisabled > 12) return 0;
    if (_isPartTime) return 0;
    // ... compute the disability amount ...
}

After:
double disabilityAmount() {
    if (isNotEligibleForDisability()) return 0;
    // ... compute the disability amount ...
}
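For completeness, a sketch of the extracted helper that the "after" version calls; its body is implied by the slide rather than shown, and simply consolidates the three guard conditions from the "before" version:

boolean isNotEligibleForDisability() {
    // Consolidates the three guards from the original method into one predicate.
    return (_seniority < 2) || (_monthsDisabled > 12) || _isPartTime;
}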
Potential Benefits
● Better Design
● Efficiency
● Test Assets
● Reducing Defect Injection
3. Related TDD Studies
● George and Williams (2003)
  – 24 professional programmers (½ TDD, ½ waterfall)
  – TDD passed 18% more black-box tests, but took 16% longer
● Müller and Hagner (2002)
  – Graduate computer science students
  – Wrote all tests before any code, a departure from TDD's incremental test-then-code cycle
  – No quality improvement
Related TDD Studies (cont.)
● Erdogmus et al. (2005)
  – 24 undergraduate computer science students
  – TDD improved productivity but not quality
● Müller and Tichy (2001)
  – 11 university students
  – 87% stated TDD improved their confidence in their code
4. The Projects and the Teams
● Case studies:
  – Contribute to a body of knowledge
  – Context dependent
● 1 IBM project
● 3 Microsoft projects
Keys to the Microsoft Case Study
● Comparable Project Managers
● Identical Higher-Level Manager
● Post Hoc Analysis
● No Enforcement of TDD
Team Factors: IBM
● Medium to Low:
  – Experience
  – Domain Expertise
  – Language Expertise
● High:
  – Program Manager's Experience
● Distributed team
Team Factors: Microsoft
● Windows and Visual Studio teams:
  – High: Experience Level, Domain Expertise, Language Expertise
● MSN team:
  – High: Experience Level
  – Medium: Domain Expertise, Language Expertise
● Colocated
Product Factors
● Size ranged from 6 KLOC to 155 KLOC
● Test KLOC and coverage were substantial
● Effort ranged from 20 to 119 man-months
● Size relative to the comparison (non-TDD) project:
  – IBM: 72%
  – Windows: 133%
  – MSN: 17%
  – Visual Studio: 69%
5. TDD Implementations
● IBM
  – Used TDD to reduce ambiguity and validate requirements
  – Complete unit testing with JUnit
  – Each public class has an associated public test class, and each public interface has a matching public test interface (see the sketch after this list)
  – Prototyped classes, then UML class and sequence diagrams, alternating with design
  – Unit tests after coding, with 80% automated code coverage
  – Build tests ran daily through Ant and JUnit, with pass/fail results sent by email
  – The 2 developer locations communicated largely through email
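As a hedged illustration of the one-public-test-class-per-public-class convention described above (the class names are hypothetical, not from the IBM project), here is a JUnit 3 pairing of the kind an Ant build could run nightly:

import junit.framework.TestCase;

// Hypothetical public class CurrencyConverter gets a matching
// public test class CurrencyConverterTest, per the IBM convention.
public class CurrencyConverterTest extends TestCase {
    public void testRoundTripConversionReturnsOriginalAmount() {
        CurrencyConverter converter = new CurrencyConverter(1.25);
        double original = 100.0;
        double roundTrip = converter.toDollars(converter.toForeign(original));
        assertEquals(original, roundTrip, 1e-9);
    }
}

class CurrencyConverter {
    private final double rate; // foreign units per dollar

    CurrencyConverter(double rate) { this.rate = rate; }

    double toForeign(double dollars) { return dollars * rate; }

    double toDollars(double foreign) { return foreign / rate; }
}

Ant's junit task can run such test classes as part of a scheduled build and report pass/fail results, which matches the daily build-and-email process the slide describes.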
TDD Implementations (cont.)
● Microsoft
  – Hybrid TDD (detailed requirements documents were written)
  – Requirements drove both tests and development
  – TDD teams did not use other agile practices
  – Legacy (comparison) teams used neither TDD nor agile practices
  – Unit tests were run from the command line, generating log files and sometimes emails
  – Source code and test code were tracked in a version control system
6. Quality and Productivity Results
● Defects detected post-integration and mined from IBM's and Microsoft's bug databases
● Defect density improvement (see the arithmetic sketch below):
  – 39% for IBM
  – 62% to 91% for Microsoft
● Increased development time:
  – 15% to 20% for IBM
  – 15% to 35% for Microsoft
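The improvement figures are ratios of pre-release defect densities. A sketch of the arithmetic, assuming the usual defects-per-KLOC definition (the sample densities below are invented for illustration, not the paper's raw data):

  defect density = pre-release defects / KLOC
  improvement    = (1 − TDD density / non-TDD density) × 100%

For example, hypothetical densities of 6.1 vs. 10.0 defects/KLOC give (1 − 0.61) × 100% = 39%, the IBM figure above.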
IBM Results
● The legacy product had an "inherent" advantage:
  – Maturity
  – Exposure to the field
● The new (TDD) product outperformed it despite that advantage.
  – Humphrey: changing existing code can be up to 40x harder than developing from scratch
  – Where is the defect injection point?
Examining the Data
● Visual Studio had the least test coverage and yet the most improvement.
● MSN had the highest test-to-source KLOC ratio, and yet its development time increased by the smallest percentage over the control group.
● Team variables did not appear to affect the improvement.
7. Threats to Validity
● Were the TDD developers more motivated?
  – Not necessarily, since they did not know they were part of a study.
● Were the TDD projects easier, i.e., were these valid comparisons?
  – Same high-level managers for the TDD and control groups.
  – Same corporate culture.
Conclusions
● Start TDD at the beginning of a project; keep it incremental and continuous.
● For a team new to TDD, introduce automated build-test integration about two-thirds of the way into the development phase.
● Add tests whenever a problem is found, no matter what.
● Involve the test team with TDD so they understand how to verify unit tests.
● Set code coverage targets.
● Daily unit test runs are the heartbeat of the system's health.
● Keep unit test execution fast and unit test design efficient.
● Reuse code from unit tests.
● Use metrics and check morale.
Summary
● Improved defect density at the cost of an increased schedule.
● If schedule is a primary concern, you can still use TDD effectively to ship a product on schedule with fewer features and higher quality.
● Test assets quickly pinpoint where defects are injected.
Questions
● How does TDD compare with strict unit testing in terms of defect density and time?