1
Automated Code Coverage Analysis
Pacific NW Software Quality Conference (PNSQC™)
Automated Code Coverage Analysis
October 17, 2016
2
Automation Today
- In order to be Agile, we are asked to automate tests more than ever before.
- The speed of creation and the quantity of automated tests are increasing every day.
- A typical test library contains thousands of tests, many of which have multiple asserts/validations.
- It is increasingly difficult, and thus a lower priority, to keep track of what is being tested by which test(s).
- How can we better ensure that tests:
  - Cover what they need to?
  - Aren't duplicating each other?
3
Usage of Code Coverage
Typical usage of coverage data:
- Collect "% covered" numbers and report on them.
  - Deep-diving into what is or is not getting covered, and why, is too time-consuming and tedious.
  - Set some arbitrary "feel-good" goal; if reached -> SHIP IT!
- Check for coverage regressions from build to build (a sketch follows below).
  - Measure coverage from build to build.
  - Compare coverage and fail if the coverage % drops in any of the test categories (unit, integration, functional...).
  - On failure -> investigate and add coverage if appropriate and time allows.
  - Mostly being done for unit tests.
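A minimal sketch of such a build-to-build regression gate, in Python. The category names, percentages, and tolerance parameter are illustrative assumptions, not from the talk; in practice the percentages would come from the coverage tool's reports.

    # Hypothetical build-to-build coverage regression gate.
    def check_coverage_regression(previous, current, tolerance=0.0):
        """Return the categories whose coverage dropped versus the last build."""
        regressions = []
        for category, prev_pct in previous.items():
            curr_pct = current.get(category, 0.0)
            if curr_pct < prev_pct - tolerance:
                regressions.append((category, prev_pct, curr_pct))
        return regressions

    previous_build = {"unit": 78.2, "integration": 61.5, "functional": 54.0}  # sample data
    current_build = {"unit": 77.9, "integration": 62.0, "functional": 54.0}   # sample data

    for category, before, after in check_coverage_regression(previous_build, current_build):
        print(f"FAIL: {category} coverage dropped from {before}% to {after}%")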
4
Automated Analysis of Code Coverage Data
- First, determine what will be covered and how.
- Test runs with coverage need to be frequent; at least once per sprint is recommended.
- Parse and store coverage data in a database (a parsing sketch follows below).
  - Parsing and collection methods depend on the coverage tools used, e.g. Visual Studio (.NET), Cobertura (Java), Istanbul (JavaScript)...
- Automate (preliminary) analysis of code coverage by summarizing/aggregating the coverage of tests on a per-test basis (e.g. Test 1 vs. Test 2...):
  - We get counts of which tests have better coverage.
  - We know what is not getting covered.
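A sketch of the parse-and-store step for a Cobertura XML report, assuming one report per test run and a SQLite database. The table name and columns are illustrative; the talk's own tables appear on a later slide.

    # Parse a Cobertura XML report and store per-method coverage in SQLite.
    import sqlite3
    import xml.etree.ElementTree as ET

    def store_cobertura_report(db_path, build, test_name, report_path):
        root = ET.parse(report_path).getroot()  # <coverage line-rate="...">
        conn = sqlite3.connect(db_path)
        conn.execute(
            """CREATE TABLE IF NOT EXISTS method_coverage
               (build TEXT, test_name TEXT, class_name TEXT,
                method_name TEXT, line_rate REAL)"""
        )
        for cls in root.iter("class"):
            for method in cls.iter("method"):
                conn.execute(
                    "INSERT INTO method_coverage VALUES (?, ?, ?, ?, ?)",
                    (build, test_name, cls.get("name"),
                     method.get("name"), float(method.get("line-rate", 0))),
                )
        conn.commit()
        conn.close()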
5
What can we do with the data? - With Existing Tests
- Use as a guide when reviewing existing tests:
  - Should a test be removed, or can tests be merged?
  - Are we missing coverage?
- Prioritize which tests to run when time is limited, i.e. choose the tests with the highest coverage and the shortest execution time (a sketch follows below).
- Determine overall product readiness: are we comfortable with what is NOT being covered?
- Help focus efforts on where to create and run additional (e.g. manual/ad hoc) tests to increase confidence.
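One illustrative way to rank tests when time is limited: coverage gained per second of runtime. The test names and numbers below are invented sample data; in practice they would be queried from the coverage database.

    # Rank tests by coverage contributed per second of runtime.
    tests = [
        # (test name, % coverage contributed, runtime in seconds)
        ("LoginSmokeTest", 12.4, 30),
        ("CheckoutEndToEnd", 25.1, 600),
        ("SearchRegression", 18.7, 120),
    ]

    prioritized = sorted(tests, key=lambda t: t[1] / t[2], reverse=True)

    for name, coverage, runtime in prioritized:
        print(f"{name}: {coverage}% in {runtime}s")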
6
What can we do with the data? - For New Tests
- Help determine whether a planned test is needed:
  - Multiple scrum teams, local and offshore, may be working on different parts of the product.
  - Few people know what is covered by the legacy automation.
- Collect coverage for any planned tests and compare it to the existing coverage data.
  - Collect the data during design/planning to avoid duplication, i.e. don't wait until the end of the sprint or PSI.
  - For functional tests, get "new test" coverage by executing the tests manually.
- Compare coverage and decide whether the new test is needed (a sketch follows below).
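The comparison itself can be a simple set difference over covered methods. A minimal sketch, assuming the covered-method names have already been loaded into Python sets (the method names are made up):

    # Does a proposed test cover anything the existing suite does not?
    existing_coverage = {"Cart.add", "Cart.remove", "Checkout.pay"}   # sample data
    new_test_coverage = {"Cart.add", "Cart.remove"}                   # sample data

    added = new_test_coverage - existing_coverage
    if added:
        print(f"New test adds coverage for: {sorted(added)}")
    else:
        print("New test duplicates existing coverage; reconsider writing it.")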
7
Managing & Analyzing The Coverage Data
- Engineers painstakingly review and analyze the data.
  - Tedious and time-consuming; often scaled back due to higher priorities.
- Numerous scripts can be created to view data, compare data, generate reports, etc. (a sketch follows below).
- To get consistent and quick guidance across the various teams, we suggest a tool that helps view, manage, and slice and dice the data.
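As one example of such a script, a per-test coverage report over the method_coverage table assumed in the earlier parsing sketch (the table name is an assumption carried over from that example):

    # Summarize average per-method coverage for each test in a build.
    import sqlite3

    def coverage_report(db_path, build):
        conn = sqlite3.connect(db_path)
        rows = conn.execute(
            """SELECT test_name, AVG(line_rate) * 100 AS avg_pct, COUNT(*) AS methods
               FROM method_coverage WHERE build = ?
               GROUP BY test_name ORDER BY avg_pct DESC""",
            (build,),
        )
        for test_name, avg_pct, methods in rows:
            print(f"{test_name}: {avg_pct:.1f}% average over {methods} methods")
        conn.close()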
8
Tooling To Bring It All Together
- We recommend tying together everything needed to collect and compare coverage data:
  - Execute SQL scripts/commands.
  - Perform comparisons.
  - Access the database where the coverage data is inserted.
- Examples of the tables we have (a schema sketch follows below):
  - Summary: overall coverage summary info (Build, TestName, Coverage %, sources/binaries...).
  - Module: per-module coverage data.
  - Method: per-method coverage data.
- The result: analysis of the data is no longer overwhelming or inconsistent and can more easily be divided among engineers.
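A guess at minimal DDL for the three tables named on the slide. Only the Summary columns are listed in the talk; the Module and Method columns below are assumptions.

    # Create the Summary/Module/Method tables in SQLite.
    import sqlite3

    SCHEMA = """
    CREATE TABLE IF NOT EXISTS Summary (
        Build       TEXT,
        TestName    TEXT,
        CoveragePct REAL,
        Sources     TEXT   -- sources/binaries under measurement
    );
    CREATE TABLE IF NOT EXISTS Module (            -- assumed columns
        Build TEXT, TestName TEXT, ModuleName TEXT, CoveragePct REAL
    );
    CREATE TABLE IF NOT EXISTS Method (            -- assumed columns
        Build TEXT, TestName TEXT, ModuleName TEXT,
        ClassName TEXT, MethodName TEXT, CoveragePct REAL
    );
    """

    conn = sqlite3.connect("coverage.db")
    conn.executescript(SCHEMA)
    conn.close()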
9
Custom Tool We Created
10
Test Comparison Summary - Screen
11
Test Comparison Summary - Analysis
- The summary view is the starting point when comparing tests.
- Current example comparison:
  - Test1: 0 additional methods and 0 methods with better coverage.
  - Test2: 4094 additional methods and/or 144 methods with better coverage.
- Conclusion: Test1 is likely a subset of Test2 and should be considered for retirement (a sketch of the comparison logic follows below).
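A sketch of the "additional methods" and "methods with better coverage" counts behind this summary, using per-method coverage dictionaries (the method names and rates are invented):

    # Compare two tests' per-method coverage.
    def compare_tests(a, b):
        """Return methods only a covers, and methods a covers better than b."""
        additional = set(a) - set(b)
        better = {m for m in a if m in b and a[m] > b[m]}
        return additional, better

    test1 = {"Cart.add": 0.8, "Cart.remove": 0.5}                       # sample data
    test2 = {"Cart.add": 0.9, "Cart.remove": 0.5, "Checkout.pay": 1.0}  # sample data

    additional, better = compare_tests(test1, test2)
    if not additional and not better:
        print("Test1 adds nothing over Test2: candidate for retirement.")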
12
Sample Test Comparison Details View
13
Drilling down into Test Comparison details
- Module comparison: shows the high-level coverage that is common between tests.
- Class comparison: the easiest way to determine which areas of the product lack coverage.
- Method comparison: drills down further to provide greater detail on coverage differences (a query sketch follows below).
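An illustrative method-level drill-down against the Method table assumed earlier: methods in a given build where one test achieves better coverage than another.

    # Methods where test2 covers more than test1 in the same build.
    import sqlite3

    def methods_with_better_coverage(db_path, build, test1, test2):
        conn = sqlite3.connect(db_path)
        rows = conn.execute(
            """SELECT m2.ClassName, m2.MethodName,
                      m1.CoveragePct, m2.CoveragePct
               FROM Method m1
               JOIN Method m2
                 ON m1.Build = m2.Build
                AND m1.ClassName = m2.ClassName
                AND m1.MethodName = m2.MethodName
               WHERE m1.Build = ? AND m1.TestName = ? AND m2.TestName = ?
                 AND m2.CoveragePct > m1.CoveragePct""",
            (build, test1, test2),
        ).fetchall()
        conn.close()
        return rows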
14
Overall Coverage
- Having gathered all of this data into a common database, there is still more we can do with it*:
  - What has been covered.
  - What hasn't been covered.
  - What's covered in unit tests vs. functional tests vs. integration tests.
  - What's the merged coverage? (a sketch follows below)
  - Total % coverage from build to build, PSI to PSI...
*The fine print: always proceed with caution when using coverage data as an indicator of product readiness.
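A sketch of one way to compute merged coverage across test categories: a method counts as covered if any category covers it. The method sets are invented sample data; real input would come from the Method table.

    # Merge per-method coverage across test categories.
    from functools import reduce

    unit = {"Cart.add", "Cart.remove"}          # sample data
    integration = {"Cart.add", "Checkout.pay"}  # sample data
    functional = {"Search.query"}               # sample data

    merged = reduce(set.union, [unit, integration, functional])

    all_methods = {"Cart.add", "Cart.remove", "Checkout.pay",
                   "Search.query", "Search.index"}

    pct = 100 * len(merged) / len(all_methods)
    print(f"Merged coverage: {pct:.1f}% ({len(all_methods - merged)} methods uncovered)")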
15
Example: Methods With Zero Coverage
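The slide itself shows a screenshot; a query in the spirit of this example, against the Method table assumed in the earlier sketches, might look like:

    # List methods with zero coverage in a given build.
    import sqlite3

    conn = sqlite3.connect("coverage.db")
    for module, cls, method in conn.execute(
        """SELECT ModuleName, ClassName, MethodName
           FROM Method WHERE Build = ? AND CoveragePct = 0
           ORDER BY ModuleName, ClassName""",
        ("build-42",),  # hypothetical build id
    ):
        print(f"{module}: {cls}.{method} has zero coverage")
    conn.close()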
16
Demo
17
Questions?