Cost / Benefit Arguments for Automation and Coverage
Jeff Offutt
Professor, Software Engineering
George Mason University, Fairfax, VA, USA
www.cs.gmu.edu/~offutt/
offutt@gmu.edu
Who Am I
- PhD, Georgia Institute of Technology, 1988
- Professor at George Mason University since 1992
  - BS, MS, PhD in Software Engineering (also CS)
- Lead the Software Engineering MS program
  - Oldest and largest in the USA
- Editor-in-Chief of Wiley's journal Software Testing, Verification and Reliability (STVR)
- Co-founder of the IEEE International Conference on Software Testing, Verification and Validation (ICST)
- Co-author of Introduction to Software Testing (Cambridge University Press)
NoVa TAIG, August 2011 © Jeff Offutt
Software is a Skin that Surrounds Our Civilization
Quote due to Dr. Mark Harman
Costly Software Failures
- NIST report, "The Economic Impacts of Inadequate Infrastructure for Software Testing" (2002)
  - Inadequate software testing costs the US alone between $22 and $59 billion annually
  - Better approaches could cut this amount in half
- Huge losses due to web application failures
  - Financial services: $6.5 million per hour (just in the USA)
  - Credit card sales applications: $2.4 million per hour (in the USA)
- In Dec 2006, amazon.com's buy-one-get-one (BOGO) offer turned into a double discount
- 2007: Symantec says that most security vulnerabilities are due to faulty software
Worldwide monetary loss due to poor software is staggering.
Types of Test Activities
- Testing can be broken up into four general types of activities:
  1. Test design
     a. Criteria-based
     b. Human-based
  2. Test automation
  3. Test execution
  4. Test evaluation
- Each type of activity requires different skills, background knowledge, education, and training
- No reasonable software development organization uses the same people for requirements, design, implementation, integration, and configuration control
Why do test organizations still use the same people for all four test activities? This clearly wastes resources.
1. Test Design – (a) Criteria-Based
Design test values to satisfy coverage criteria or other engineering goals.
- This is the most technical job in software testing
- Requires knowledge of:
  - Discrete math
  - Programming
  - Testing
- Requires much of a traditional CS degree
- Intellectually stimulating, rewarding, and challenging
- Test design is analogous to software architecture on the development side
- Using people who are not qualified to design tests is a sure way to get ineffective tests
1. Test Design – (b) Human-Based
Design test values based on domain knowledge of the program and human knowledge of testing.
- This is much harder than it may seem to developers
- Criteria-based approaches can be blind to special situations
- Requires knowledge of the domain, testing, and user interfaces
- Requires almost no traditional CS
  - A background in the domain of the software is essential
  - An empirical background is very helpful (biology, psychology, …)
  - A logic background is very helpful (law, philosophy, math, …)
- Intellectually stimulating, rewarding, and challenging
  - But not to typical CS majors – they want to solve problems and build things
Model-Driven Test Design – Steps
[Diagram] Design abstraction level: software artifact → (analysis) → model / structure → (criterion) → test requirements → (refine) → refined requirements / test specs → (generate, informed by domain analysis) → input values. Implementation abstraction level: input values → (add prefix, postfix, and expected values) → test cases → (automate) → test scripts → (execute) → test results → (evaluate) → pass / fail, with feedback to the test requirements.
MDTD – Activities
[Diagram] The same flow, partitioned into activities: test design covers the design abstraction level (software artifact through input values), test execution covers test cases through test results, and test evaluation covers pass / fail.
Raising our abstraction level makes test design MUCH easier.
Advantages of Criteria-Based Test Design
- Criteria maximize the "bang for the buck"
  - Fewer tests that are more effective at finding faults
- Comprehensive test set with minimal overlap
- Traceability from software artifacts to tests
  - The "why" for each test is answered
  - Built-in support for regression testing
- A "stopping rule" for testing – advance knowledge of how many tests are needed
- Natural to automate
Overview
- These slides introduce specific examples of how some of these ideas are being used in companies
- Some companies are mentioned by name
  - Some names cannot be mentioned
- I discuss some general process notes
- Then discuss examples of some of the specific criteria being used
Google
- Programmers spend up to half of their time testing
  - Unit testing is measured as part of programmer productivity
  - Programmers must solve all problems found in system testing, immediately
  - If quality is bad, system testers refuse to help
- Products are shipped daily
  - Release-and-iterate cycle
  - Focus on fast fixing instead of prevention
- All tests are fully automated
- Teams choose their own test criteria, but teams must use criteria
- They have saved tens of millions of dollars
  - Automation
  - Developer responsibility
  - Immediate feedback
Source – Patrick Copeland, Keynote Address, Intl. Conf. on Software Testing, Verification and Validation (ICST 2010)
Amazon
- All tests are automated and documented
- Developers are educated in testing
- Developers are measured by their unit tests' quality
  - Developers are rewarded for finding unit faults
  - Developers are measured by the number of faults found during system testing that trace back to them
- They have lots of internal-use tools for automation and measuring criteria
Source – visit to the company
Microsoft
- Software Development Engineer in Test (SDET)
  - Developers who specialize in testing (not SMEs)
- Goal is to automate all tests
- They use input space partitioning for many of their tests
- Many groups use graph-based criteria (branch or node coverage)
Source – How We Test Software at Microsoft, by Page, Johnston, and Rollison
Major US Government Contractor
- Last year a manager started applying these ideas in her project
  - Focused on unit / developer testing
  - Held monthly reviews of documentation quality, code structure, and unit tests
  - Required use of test automation tools
  - Required use of a simple graph criterion (all branches)
- Established a test design expert and a test automation expert
- She received a commendation for saving tens of thousands of dollars in a few months
  - She is now teaching her approach to other managers on the project
Source – personal contact
Graph Criteria
- Web software company (in Northern VA)
  - Applying graph criteria to develop tests for new web applications
  - Automation with HttpUnit
  - Reduced deployment errors by 50%; reduced cost by 25%
  - Updating automated tests is a lot of work
- Government contractor of security assessment tools
  - Applying graph criteria to test their threat assessment engines
  - Automation with JUnit and an internal automation framework
  - Cut time to deploy new products by 20%; reduced development cost by 15%
Sources – consulting / part-time student employee
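To make the graph criteria above concrete, here is a minimal sketch (a hypothetical handler, not code from either company): edge (branch) coverage requires every branch outcome in the control-flow graph to be exercised at least once, which subsumes node (statement) coverage.

```python
# Hypothetical response handler (illustrative only).
def classify_response(status_code):
    if status_code >= 500:        # branch 1: true / false
        return "server_error"
    if status_code >= 400:        # branch 2: true / false
        return "client_error"
    return "ok"

# Branch coverage needs both outcomes of each "if".
# Three inputs suffice: 503 (branch 1 true), 404 (branch 1 false,
# branch 2 true), and 200 (both branches false).
tests = {503: "server_error", 404: "client_error", 200: "ok"}
for code, expected in tests.items():
    assert classify_response(code) == expected
```

The test requirements (edges to cover) come straight from the graph, which is what makes the "stopping rule" and traceability claims on the earlier slide possible.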
Logic Criteria
- Company that builds embedded, safety-critical, real-time software for trains
  - Applied CACC (Correlated Active Clause Coverage) to post-deployment communication software
  - Found over a dozen faults: 3 safety-critical, 2 real-time
  - Fixed all problems before the software failed in the system
  - Logic testing is now mandated on all safety-critical software
- Aerospace company that manufactures planes
  - Applied CACC to flight guidance software (embedded, real-time, safety-critical)
  - Found numerous problems
  - Automation estimated to have saved 30% of testing cost
Sources – student industry project / consulting
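A sketch of how CACC generates test requirements, using a made-up three-clause predicate (not from either company's software): for each clause, pick one test where the clause is true and one where it is false, in both cases choosing the other clauses so that the clause under test determines the predicate's value.

```python
from itertools import product

def predicate(a, b, c):
    # Made-up guard predicate, three clauses: a and (b or c)
    return a and (b or c)

def determines(clause_index, assignment):
    """A clause determines the predicate if flipping just that
    clause flips the predicate's outcome."""
    flipped = list(assignment)
    flipped[clause_index] = not flipped[clause_index]
    return predicate(*assignment) != predicate(*flipped)

# CACC: for each (clause, truth value) pair, find one assignment
# where the clause has that value AND determines the predicate.
cacc_tests = set()
for i in range(3):
    for value in (True, False):
        for row in product((True, False), repeat=3):
            if row[i] == value and determines(i, row):
                cacc_tests.add(row)
                break  # one row per test requirement

# 8 possible assignments, but CACC is satisfied here with 5 tests.
print(sorted(cacc_tests))
```

This is far cheaper than exhaustive truth-table testing (2^n rows) yet still forces each clause to independently affect the outcome, which is why it suits safety-critical logic.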
Input Space Partitioning
- Freddie Mac (major financial services company)
  - System testing on calculation engines, where faults can cause millions of dollars in losses
  - A test manager tested two similar products, one with their traditional method and one using ISP
  - Special-purpose tools to support ISP
  - ISP tests found 3.5 times as many faults, with half the effort
  - ZERO defects reported in deployment (after 2 years)
  - ISP is now being disseminated throughout the company
- Dozens of companies in Northern Virginia have used ISP over the past 15 years
  - All saved money and found more faults
Sources – MS thesis at GMU / part-time student employees
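The mechanics of ISP can be sketched with an invented input model (the characteristics and blocks below are illustrative, not Freddie Mac's): each input characteristic is partitioned into disjoint blocks, and a combination criterion decides how many abstract test frames to generate.

```python
from itertools import product

# Hypothetical input model for a loan-calculation engine:
# each characteristic is split into disjoint blocks.
characteristics = {
    "loan_amount": ["below_minimum", "typical", "jumbo"],
    "term_years":  [15, 30],
    "credit_tier": ["prime", "subprime"],
}

# "All combinations" coverage: one frame per block combination.
frames = list(product(*characteristics.values()))
print(len(frames))        # 3 * 2 * 2 = 12 frames

# "Base choice" coverage is much cheaper: fix a base frame,
# then vary one characteristic at a time.
base = ("typical", 30, "prime")
base_choice = {base}
for i, blocks in enumerate(characteristics.values()):
    for block in blocks:
        frame = list(base)
        frame[i] = block
        base_choice.add(tuple(frame))
print(len(base_choice))   # 1 + 2 + 1 + 1 = 5 frames
```

The criterion choice is the cost/benefit dial: all-combinations grows multiplicatively, while base-choice grows roughly with the total number of blocks, which is one reason ISP scales to large calculation engines.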
Mutation Testing
- A major router manufacturer
  - One of my students applied mutation to an essential engine in a router – embedded, real-time software that had already been in deployment for years
  - Found 3 major problems, one of which had cost the company over $70 million in downtime and lost revenue
  - My student got a bonus of $800,000 (1999)
- Telecommunications company
  - Real-time, embedded software, plus web applications
  - I helped apply mutation testing and graph criteria to 3 software components that had passed testing and were ready for deployment
  - About 150 tests found over 50 separate issues, at 25% of the cost of their usual system testing
Sources – student / consulting
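A tiny sketch of the mutation idea (the function and fault are invented, not the router's real code): a mutation operator makes a small syntactic change, and the test suite is judged by whether some test "kills" the mutant by distinguishing it from the original.

```python
# Hypothetical routing decision (illustrative only).
def next_hop(load, capacity):
    # Route to backup when the primary link is saturated.
    return "backup" if load >= capacity else "primary"

# Relational-operator-replacement mutation: >= becomes >.
def next_hop_mutant(load, capacity):
    return "backup" if load > capacity else "primary"

# A weak suite that skips the boundary lets the mutant survive:
assert next_hop(5, 10) == next_hop_mutant(5, 10) == "primary"
assert next_hop(15, 10) == next_hop_mutant(15, 10) == "backup"

# Mutation analysis demands a test that kills it -- the boundary:
assert next_hop(10, 10) == "backup"          # original behavior
assert next_hop_mutant(10, 10) == "primary"  # mutant differs, killed
```

Surviving mutants point at exactly the kind of off-by-one and boundary faults the case studies above describe, which is why mutation-adequate suites tend to find problems that ordinary system testing missed.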
Criteria-Based Testing Summary
- Many companies still use "monkey testing"
  - A human sits at the keyboard, wiggles the mouse, and bangs the keyboard
  - No automation; minimal training required
- Some companies automate human-designed tests
  - Reduces execution cost
  - Eases repeat testing
- But companies that use automation and criteria-based testing:
  - Save money
  - Find more faults
  - Build better software
22
© Jeff Offutt 22 Contact Jeff Offutt offutt@gmu.eduhttp://cs.gmu.edu/~offutt/ NoVa TAIG, August 2011 We are in the middle of a revolution in how software is tested Research is finally meeting practice