Optimizing software development using unit regression tests

Executive management teams are always challenging the rest of the business to come up with strategies to get ahead of the competition and stay there. Typical strategies revolve around winning new customers, increasing customer satisfaction to keep existing customers, and increasing efficiency to improve the bottom line.

Many organizations have identified software as a critical component to achieving success in these areas. As a result, companies are investing heavily in developing both customer-facing apps and internal software.

While adopting a software strategy to remain competitive is a good idea, companies need to ensure that they balance cost, speed, and quality to maximize their chances of success. In this article, we will examine this delicate balance and detail how, when done correctly, automating the creation of software tests can help improve quality and speed while remaining cost-effective. 

Balancing Act

When companies decide to use software as a competitive differentiator, they must move as quickly as possible to get new software in front of users and to update it with new features regularly. Failure to do this could result in a loss of competitive advantage as competitors leapfrog them with more feature-rich applications. 

While speed is vital, it is essential that companies also produce high-quality software. Software released quickly but with a high number of bugs frustrates users, leading to a decrease in employee productivity for internal software or, worse, a loss of customers who lose faith in your entire brand because of “shoddy” software. As software becomes more critical to the business, being able to test it quickly and efficiently on an ongoing basis becomes essential. 

How can you ensure quality?

When starting to consider software quality, one of the first metrics that organizations consider is code coverage. Developers use code coverage to measure what percentage of the software they have written is being exercised by their tests. 

On the surface, it sounds like a good idea: the more of the code you test, the better the quality of the software will be. If you reach 100% code coverage, then the software must be “fully tested” and, therefore, “perfect,” right? Unfortunately, this is not the case. 

Measuring how much of the code developers have tested does not indicate how good the tests are. Are they covering all the possible paths through the software? What about the edge cases? What if a user inputs something the program isn’t expecting? Developers have to consider many factors when testing software, which code coverage doesn’t measure.
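To see why coverage alone can mislead, consider a small sketch (the `Percentage` class and its test are made up for illustration, not taken from any real codebase). A single happy-path test executes every line of the method, so line coverage reports 100%, yet none of the edge cases are checked:

```java
public class Percentage {
    // Parse a percentage string such as "75%" into an int.
    public static int parse(String input) {
        // Strip any '%' sign and surrounding whitespace, then parse.
        String digits = input.replace("%", "").trim();
        return Integer.parseInt(digits);
    }

    public static void main(String[] args) {
        // This one happy-path check executes every line of parse(),
        // so a line-coverage tool reports 100%...
        assert parse("75%") == 75;
        // ...yet nothing here tests the edge cases coverage cannot see:
        //   parse(null)   -> NullPointerException
        //   parse("")     -> NumberFormatException
        //   parse("150%") -> silently accepts an out-of-range value
        System.out.println("happy-path test passed");
    }
}
```

The coverage number is identical whether or not those edge cases are handled, which is exactly the blind spot described above.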

So, a high code coverage percentage could actually harm us. It can give a false sense of security: we believe that our developers are producing high-quality software because we are testing all of it when, in reality, we are only scratching the surface.

While any testing is better than no testing, it is important to understand what the tests are doing before relying on their results. 

Is it better to reach 100% code coverage with potentially “dubious” tests, or should we aim for a lower code coverage rate but with more certainty that the tests we are carrying out are more meaningful?

Martin Fowler says that between 80% and 90% is good enough and he “would be suspicious of anything like 100% - it would smell of someone writing tests to make the coverage numbers happy, but not thinking about what they are doing.”

Let’s now discuss what “good enough” should look like for your organization.

How much is good enough?

First of all, if you only have minimal code coverage today, any additional testing will be beneficial. Like all aspects of DevOps, we are looking at a journey rather than a destination, so you should consider any additional code coverage as progress. Even if you achieve what industry best practices believe to be the right amount of code coverage today, developers will write more code and modify existing code. These changes will cause your code coverage to go down until developers create additional tests.

Generally speaking, aiming for above 75% code coverage is a good aspiration. There are only certain circumstances, such as safety-critical systems, where getting close to 100% is practical, necessary, and cost-effective. 

Getting to an aspirational figure will not happen instantly. In many cases, developers will have written code without testing in mind and will need to rewrite it before creating new tests. 

While setting a specific metric for testing is possible, many teams use qualitative measures to determine whether they are testing their software enough. You should consider increasing the number, and possibly the quality, of tests you perform if:

  • Users are reporting a significant number of bugs in production software
  • Developers are not confident that they can modify existing software without introducing bugs that will not get flagged during testing

Many customers ask us about the 20-25% of code left uncovered when they aim for the recommended 75-80% code coverage. This code is typically either unnecessary to test because it does not implement business-critical logic, or simply too difficult to verify. In these cases, developers must accept that any errors in the untested code will come to light when users start using the application.

Increasing code coverage with less stress

Developers can improve their code coverage by increasing the number of unit tests that exercise the code. A unit test runs through a small piece of application functionality to ensure that it behaves correctly and that the developer has not introduced any spurious behavior.
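As an illustration of what such a test looks like, here is a minimal sketch (the `Discount` class is hypothetical, and plain `assert` statements stand in for a framework like JUnit, which a real project would use). Each check pins down one small behavior, including the edge cases:

```java
public class Discount {
    // Apply a percentage discount to a price held in cents.
    public static long apply(long priceCents, int percent) {
        if (percent < 0 || percent > 100) {
            throw new IllegalArgumentException("percent must be 0-100");
        }
        return priceCents - (priceCents * percent) / 100;
    }

    // A unit test exercises one small piece of functionality in isolation.
    public static void main(String[] args) {
        assert apply(1000, 25) == 750;    // normal case
        assert apply(1000, 0) == 1000;    // boundary: no discount
        assert apply(1000, 100) == 0;     // boundary: full discount
        boolean rejected = false;
        try {
            apply(1000, 150);             // invalid input
        } catch (IllegalArgumentException e) {
            rejected = true;
        }
        assert rejected;                  // bad input must be refused
        System.out.println("all discount tests passed");
    }
}
```

If a later change to `apply` altered any of these behaviors, the corresponding assertion would fail immediately, which is the regression-catching role unit tests play.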

Adding unit tests sounds like an easy way to improve code coverage and, therefore, the quality of software delivered to users. However, many developers do not like writing unit tests, and some skip them to deliver software more quickly. 

In addition, if managers measure developer productivity using code coverage, then developers may be tempted to write the bare minimum number of unit tests to cover the code in question rather than writing high-quality tests that fully exercise all aspects of the software. Insisting on achieving a certain percentage of code coverage will almost certainly result in minimal tests that cover the code but don’t ensure that the software behaves as it should. 

High code coverage with low-quality testing doesn’t help anyone! Quite the reverse: 100% code coverage sounds good and suggests you have nothing to worry about, but in reality you probably have a lot to worry about if that is how it was achieved. Properly testing the things that matter (e.g., critical business logic) and falling short of 100% is better than having only superficial tests.
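The difference between a coverage-gaming test and a meaningful one can be surprisingly small. In this sketch (the `MathUtil` class is invented for illustration), both tests earn identical coverage, but only the second actually verifies behavior:

```java
public class MathUtil {
    // Return the larger of two ints.
    public static int max(int a, int b) {
        return a >= b ? a : b;
    }

    public static void main(String[] args) {
        // Superficial test: calling max() executes the code and earns
        // full coverage, but asserts nothing, so any bug would pass.
        max(3, 7);

        // Meaningful tests: the same coverage, but the behavior is
        // pinned down, including the tie case.
        assert max(3, 7) == 7;
        assert max(7, 3) == 7;
        assert max(5, 5) == 5;
        System.out.println("meaningful tests passed");
    }
}
```

A coverage report cannot tell these two apart, which is why coverage targets alone reward the first kind of test as readily as the second.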

So how can we overcome this dilemma? We need to increase code coverage and therefore write more unit tests to improve software quality, but developers do not want to write them, and if managers force them to do so, they are likely to write the bare minimum.

Diffblue Cover automatically writes unit regression tests for Java applications based on the behavior of the existing code, so that changes can be identified. Cover not only dramatically reduces the “backlog mountain” associated with legacy applications where very few unit tests have been written, but also automatically updates the unit tests as the application evolves. Cover’s tests are easy to understand, so developers can quickly work out what they need to do when a specific test flags an issue.

For more on the subject of finding the balance between achieving high code quality and reducing the time and costs associated with testing, check out our eBook. You can also try Diffblue Cover’s unit regression tests for yourself.