The basis of this article is a sequence of tweets between me and Mr. Robert Martin, on April 8th, 2011:
- If you have 100% coverage you don’t know if your system works, but you do know that every line you wrote does what you thought it should.
- @unclebobmartin 100% code coverage doesn’t achieve anything, save making you safer while nothing could be further from the truth.
- @nicolas_frankel 100% code coverage isn’t an achievement, it’s a minimum requirement. If you write a line of code, you’d better test it.
- @unclebobmartin I can get 100% code coverage and test nothing because I have no assert. Stop making coverage a goal, it’s only a way!
This left me a little speechless, all the more so coming from someone as respected as Mr. Martin. Since my arguments go well beyond the 140-character limit, a full-fledged article is in order.
Code coverage doesn’t test anything
Let’s begin with Mr. Martin’s base assertion, namely that 100% code coverage ensures that every line of code behaves as intended. Nothing could be further from the truth!
Code coverage is simply a way of tracing which lines the execution flow passed through during a test run. Since coverage is achieved through instrumentation, it could just as well be measured during the execution of a standard application.
Testing, however, means executing some code with a given input and verifying that the output is the expected one.
In Java applications, this is done at the unit level with the help of frameworks like JUnit and TestNG. Both provide assertion methods, such as assertEquals(), to check the output.
Thus, code coverage and testing are completely different things. Taken to the extreme, we could cover the entire application, that is, achieve 100% coverage, and yet not test a single line because we have no assertions!
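To make this concrete, here is a minimal sketch (the PriceCalculator class and its figures are made up for the illustration): both test methods below produce exactly the same coverage for computeTotal(), yet only the second one actually tests anything.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class PriceCalculatorTest {

    // Hypothetical class under test
    static class PriceCalculator {
        double computeTotal(double amount, double vatRate) {
            return amount * (1 + vatRate);
        }
    }

    // Executes every line of computeTotal(): coverage tools report 100%,
    // yet nothing is verified, so the test passes whatever the method returns.
    @Test
    public void coversButTestsNothing() {
        new PriceCalculator().computeTotal(100.0, 0.2);
    }

    // Same coverage, but the assertion actually checks the output.
    @Test
    public void totalIncludesVat() {
        double total = new PriceCalculator().computeTotal(100.0, 0.2);
        assertEquals(120.0, total, 0.001);
    }
}
```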
100% coverage means testing everything
Not all of our code is critical or complex. In fact, some of it can even be seen as downright trivial. Any doubt? Then think about getters and setters. Should they be tested, even though any IDE worth its salt can generate them faithfully for us? If your answer is yes, you should also consider testing the frameworks you use, because they are a bigger source of bugs than generated getters and setters.
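For the record, this is roughly what such a test looks like (the Person bean is a hypothetical example): it raises the coverage figure, but the only way it could ever fail is if the IDE-generated accessors were broken.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class PersonTest {

    // Hypothetical bean with IDE-generated accessors
    static class Person {
        private String name;
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    // Bumps the coverage percentage, guards against virtually nothing
    @Test
    public void getterReturnsWhatTheSetterStored() {
        Person person = new Person();
        person.setName("John");
        assertEquals("John", person.getName());
    }
}
```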
Confusion between the goal and the way to achieve it
Mr. Martin’s second assertion is that 100% code coverage is a requirement. I beg to differ, because I use the word 'requirement' with a different meaning: business analysts translate business needs into functional requirements, whereas architects translate them into non-functional requirements.
Code coverage and unit testing are just ways to decrease the likelihood of bugs; they are not requirements. In fact, users couldn’t care less about code coverage, as long as the software meets their needs. Only IT, and only in the case of long-lived applications, may have an interest in high code coverage, and even then not 100%.
Cost effectiveness
I understand that in some contexts software cannot be allowed to fail: in surgery or aeronautics, lives are at stake, whereas in financial applications a single failure can cost millions or even billions.
However, what my meager experience has taught me so far is that money reigns, whether we want it to or not. The equation is very simple: what is the cost of the test, what is the cost of the bug, and what is the likelihood of the bug occurring? If the cost of the bug is a human life and the likelihood is high, the application had better be tested like hell. On the contrary, if the cost of the bug is half a man-day and the likelihood is low, why should we spend one man-day to prevent it in advance? The technical-debt point of view helps us answer this question. Moreover, it’s a decision managers have to make, with the help of IT.
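To put illustrative numbers on that equation: a bug that costs half a man-day to fix and has, say, a 10% chance of ever occurring carries an expected cost of 0.1 × 0.5 = 0.05 man-day, so spending one man-day testing against it up front is a losing trade. Make the same bug cost 20 man-days with a 50% likelihood and the expected cost jumps to 10 man-days: that single man-day of testing suddenly becomes a bargain.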
The point is that achieving 100% testing (as opposed to 100% coverage) is overkill for most software.
For the sake of it
Last but not least, I’ve seen some interesting deviant behaviours in my line of work.
The first is quality for quality’s sake. Being a technical person and all, I must admit I have fallen for it: "Wow, if I just test this, I can gain 1% code coverage!" Was it necessary? More importantly, did it increase maintainability or decrease the likelihood of bugs? These are the questions you should ask yourself in every such case. If both answers are no, forget about it.
A slightly different case is the challenge between teams. Whereas challenges are good in that they foster healthy competition and can make everyone better, the object of the challenge should bring some added value, something the raw code coverage percentage doesn’t.
Finally, and I have already rambled about this, some egos out there do things only to put them on their CV, and 100% coverage is just one of those things. This is bad, and if you have any influence over such individuals, you should strive to steer them toward more project-oriented goals (such as 100% passing acceptance tests).
Conclusion
With the above arguments, I have shown that assertions like 'we must achieve 100% code coverage' or 'it’s a requirement' cannot be taken as general rules and are utter nonsense without proper context.
As for myself, before beginning a project, one of the things on my to-do list is to agree on a certain level of code coverage with all concerned team members (mainly QA, PM and developers). Then I take great care to explain that this metric is not a hard-and-fast one, and I list getters/setters and assertion-free tests as ways of artificially increasing it. The more critical and the more complex the code, the higher its coverage should be. Given the choice, I would rather miss the agreed-upon number and still have the critical and complex parts firmly tested.
For teams to embrace quality is a worthy goal. It’s even more attainable if we set SMART objectives: 100% code coverage is only Measurable, which is why it’s better to forget it, the sooner the better, and to focus on more business-oriented targets.