Intro
As Interim CTO I work closely with non-technical folks, and the topic of “test coverage” comes up often. To non-techies the concept looks very simple: high test coverage is good, low test coverage is bad, so let’s just force the teams to produce high test coverage.
But it’s not that easy - forcing teams to produce high test coverage will lead to disaster. Let’s explore.
What is Test Coverage?
Test coverage is a metric that measures what proportion of your code is executed when your automated tests run. Think of it as a way to check how much of your software has been exercised by tests. If you imagine your software as a book, test coverage would be like highlighting the parts of the text that have been proofread.
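As a minimal sketch of what the metric measures, consider a made-up discount function with two branches. We track by hand which branches a test run actually executes (real tools like coverage.py do this automatically per line):

```python
# Hypothetical pricing function; "executed" records which branches ran.
executed = set()

def discount(price, is_member):
    if is_member:
        executed.add("member branch")
        return price * 0.9
    executed.add("non-member branch")
    return price

# A "test suite" that only ever tries one input:
assert discount(100, True) == 90.0

# 1 of 2 branches executed: 50% branch coverage.
# The non-member path was never tested at all.
coverage = len(executed) / 2
```

Note that the test passing tells you nothing about the untested branch - that is all the coverage number is: a record of what ran, not of what was verified.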
The Vanity Metric Trap
Test coverage can sometimes be misleading. It’s a bit like a company’s stock price. If you focus solely on increasing the stock price, it might work in the short term, but it can lead to disastrous results in the long run. Remember Enron? They manipulated numbers to look good on paper, but it eventually led to their downfall.
The same goes for test coverage. Engineers can write meaningless tests just to boost the coverage percentage. These tests might make it look like everything is thoroughly checked, but in reality, they don’t verify anything substantial.
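To make that concrete, here is a sketch of a vanity test next to a meaningful one, using a made-up shipping-cost function. Both tests produce identical coverage, because both execute every line of the function - but only one of them would ever catch a bug:

```python
def shipping_cost(weight_kg):
    # Flat base fee plus a per-kilogram charge, in cents (made-up pricing).
    return 500 + weight_kg * 150

def test_vanity():
    # Executes the code, asserts nothing: boosts coverage, verifies nothing.
    shipping_cost(2)

def test_meaningful():
    # Fails loudly if the pricing logic ever breaks.
    assert shipping_cost(2) == 800
```

If someone changes the base fee to the wrong value, `test_vanity` still passes and the coverage report still looks perfect.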
What you really want is a team of engineers who understand their work and aim to create stable, reliable software that makes customers happy. This approach naturally leads to high test coverage.
What Test Coverage Doesn’t Tell You
High test coverage doesn’t automatically mean your software is rock solid. You don’t need 100% test coverage to have reliable software and happy customers. Instead, focus on the quality of the tests and the stability of the code.
On the flip side, low test coverage can be a red flag. It might indicate areas of your software that haven’t been thoroughly tested. This can be useful for engineers to identify parts of the code that need more attention.
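As a sketch of that diagnostic use, Python’s standard-library trace module can show which lines a test run never touched. The grading function and the deliberately partial test run below are made up for illustration:

```python
import trace

def grade(score):
    if score >= 90:
        return "A"
    if score >= 60:
        return "pass"
    return "fail"

# Run a deliberately incomplete "test suite" under the stdlib tracer.
tracer = trace.Trace(count=True, trace=False)
result = tracer.runfunc(grade, 95)  # only ever exercises the "A" branch

# results().counts maps (filename, lineno) -> execution count. Lines of
# grade() missing from it - the "pass" and "fail" returns - were never run,
# pointing engineers at exactly the code that needs more attention.
executed_lines = {lineno for (_, lineno) in tracer.results().counts}
```

In practice you would reach for a dedicated tool such as coverage.py rather than the trace module, but the signal is the same: the gaps in the report are the interesting part, not the headline percentage.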
The Rule of Thumb
A good rule of thumb is to aim for around 60% test coverage. Google’s published testing guidance, for example, describes 60% coverage as “acceptable”, 75% as “commendable”, and 90% as “exemplary”.
In my experience, when teams prioritize reliability and quality, they often achieve close to 100% test coverage. This isn’t because they’re forced to, but because they see it as part of their job to ensure the software is dependable.
Conclusion
- Aim for at least 60% test coverage. Anything less might be a cause for concern.
- Don’t force test coverage. Instead, encourage your engineers to focus on producing reliable and stable code. This will naturally lead to high test coverage.
By understanding and applying these principles, you can ensure that your software development process produces high-quality, reliable software that keeps your customers happy.
After all, your goal is happy customers, not high test coverage.