Why Coverage Doesn't Matter (sort of)

We’ve got a product called Diffblue Cover, so naturally we get asked a lot about unit test coverage. The reality is that many companies set arbitrary, often high, percentage-coverage goals without understanding the impact and the cost.

[Image: code coverage cartoon]

And this is not just a problem for Diffblue, but for everyone in the testing community who has to explain why the number is not 100%.


We talk with a lot of leading software companies, as well as large financial services, manufacturing, and other enterprises with millions of lines of legacy code. Many of them have trouble answering some key questions:

  • What is a 1% coverage increase worth to your business?

  • What is the cost to increase coverage by 1%?

  • At what point do you reach diminishing marginal return on coverage investment?

When we consult resources online, such as this discussion on Stack Overflow, it seems everyone is a bit confused. The discussion points to some commonly expressed beliefs about 70 or 80% being “magic numbers.” But there is no specific reason for these figures; they have simply become a rule of thumb in many companies, probably because they look close enough to 100% without taking on the last mile of effort, which is likely to be the most time-consuming and expensive. Not to mention the most soul-destroying for developers.

Oftentimes, we find that the following are more important:

  • Quality of coverage and tests

  • Covering key features

  • Mitigating key business risk

Particularly when it comes to poorly understood legacy code, look for a meaningful increase in coverage of the code that matters most, rather than chasing a total coverage number, and know what is most important to your business. Set realistic goals, and your developers and your budget will thank you.