« You can’t control what you can’t measure » (Tom DeMarco)
I explained in my previous post, The 3 costs, that code analysis tools give us a large amount of qualitative information, while quantitative metrics are fewer in number but very useful. This opinion is not always shared.
Raw metrics are easy to obtain with the great majority of code analysis tools. Generally, such a tool begins by presenting a global quality score that aggregates various rules of good practice in programming, design, documentation, and so on. This single score gives a general overview of the quality of the application, but it is in fact rather subjective: it depends very strongly on the tool. The good practices that govern application quality are well standardized, so we should find the same rulesets in every tool. Well, that is not often the case. Some tools specialize in a single language, for which they offer as many rules as possible. Others are multi-technology and aim for a normalized ruleset balanced across languages rather than the widest possible coverage of each one.
An even more subjective factor is the criticality of a rule, that is, its weight in the quality model supplied with the tool. While carrying out a comparative study of two such tools, I discovered that, on a scale of 1 to 10:
- The first tool proposed 15 rules with a weight between 6 and 10 (critical or higher) out of a total of 73 rules, roughly a 20% subset.
- The second tool proposed 58 diagnostics out of 102 with a weight of 6 or higher, that is, more than half of its ruleset.
Sometimes, raising or lowering the criticality of a single rule is enough to change the global quality score drastically. And sometimes the tool is simply wrong, either because it produces a large number of false positives or because of a bug!
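To make this concrete, here is a minimal sketch of a hypothetical weighted scoring model (my own illustration, not any vendor's actual formula). The same code base and the same per-rule compliance figures yield a noticeably different global score when a single rule is re-weighted from minor to critical:

```python
# Hypothetical scoring model: a weighted average of per-rule compliance.
# The rules, weights and compliance figures below are invented for illustration.

def quality_score(rules):
    """rules: list of (weight, compliance) pairs; weight on a 1-10 scale,
    compliance as the fraction of checks that pass (0.0 to 1.0)."""
    total_weight = sum(weight for weight, _ in rules)
    return sum(weight * compliance for weight, compliance in rules) / total_weight

# Same code base, same violations -- only the weight of the third rule changes.
rules_v1 = [(3, 0.95), (5, 0.90), (4, 0.40)]   # third rule rated "minor"
rules_v2 = [(3, 0.95), (5, 0.90), (9, 0.40)]   # same rule rated "critical"

print(f"Score with the rule rated minor:    {quality_score(rules_v1):.2f}")
print(f"Score with the rule rated critical: {quality_score(rules_v2):.2f}")
```

With these example figures, the global score drops from about 0.75 to 0.64 without a single line of the application changing.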
I remember a tool that did not find all the links to the application's error page and concluded that there was a serious security defect. Imagine the emotion in the room when the project team, their managers, and often the CIO, discover that their highly critical application presents a major security risk. And the awkward moment of solitude for the Quality consultant who presents this conclusion without having first verified the accuracy of this qualitative measure.
This leads to a paradox: the quality of your application depends on the quality of the software that evaluates the quality of your application.
Two different tools might not produce the same evaluation of qualitative data, but quantitative measures are mostly similar and exact. I often hear « this application is complex », but when I ask why it is complex, the answers are never precise. Well, I just have to look at the number of lines of code (LOC) and I already have an indication. Is the size of this application below or above the average for its technology? Then Cyclomatic Complexity provides a second piece of information: does this application contain a large number of paths? Does its level of complexity weigh on its maintainability?
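As an illustration, here is a minimal sketch of that first reading of the numbers. The baseline figures are assumptions invented for the example, not published industry averages:

```python
# Situating an application's raw metrics against an assumed technology average.
# The baseline values below are illustrative placeholders, not real benchmarks.

BASELINE = {
    "java": {"loc": 150_000, "avg_cyclomatic_complexity": 2.5},
}

def situate(technology, loc, avg_cc):
    """Report whether size and complexity sit below or above the assumed average."""
    ref = BASELINE[technology]
    print(f"Size: {loc:,} LOC, {loc / ref['loc']:.1f}x the assumed {technology} average")
    print(f"Average cyclomatic complexity per method: {avg_cc} "
          f"({avg_cc / ref['avg_cyclomatic_complexity']:.1f}x the assumed average)")

situate("java", loc=420_000, avg_cc=4.1)
```

Two numbers, and we already know whether we are looking at a small, simple application or a large one with many execution paths.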
If you know that a person weighs 60 kg, is that enough to say whether they are thin or not? Of course not. But if you also know their height, then you can tell whether or not this person is overweight.
Additional measures such as the comment ratio or the duplicated code ratio allow us to draw a profile of the application (a small sketch of such a profile follows the list below). Of course, quantitative data have their limits. Some will say it is heresy to use LOC, while others swear only by Function Points. All these opinions are interesting, but raw metrics have certain undeniable advantages:
- They are easily available.
- They are generally exact.
- They are easily understood by project managers.
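Here is the sketch announced above: a minimal example of such a profile, where the thresholds are arbitrary illustrations of the idea, not recommended standards:

```python
# A raw-metrics "profile" for an application, built only from easily available
# measures. The thresholds are invented for illustration, not quality standards.

def profile(loc, avg_cc, comment_ratio, duplication_ratio):
    return {
        "size (LOC)": loc,
        "avg cyclomatic complexity": avg_cc,
        "comment ratio": f"{comment_ratio:.0%}",
        "duplicated code": f"{duplication_ratio:.0%}",
        # Each raw metric alone is only an indication; together they sketch a profile.
        "flags": [
            flag for condition, flag in [
                (avg_cc > 4, "many execution paths: changes and tests cost more"),
                (comment_ratio < 0.10, "sparsely documented"),
                (duplication_ratio > 0.15, "significant copy/paste"),
            ] if condition
        ],
    }

print(profile(loc=420_000, avg_cc=4.1, comment_ratio=0.08, duplication_ratio=0.22))
```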
These raw metrics represent a solid base for estimating the quality of an application and whether it is expensive to maintain. Qualitative metrics then let us explain the 'why' behind this cost: because the application is hard to read, because it has reliability defects, because good performance or optimization practices are not applied, and so on.
Qualitative metrics allow us to refine an assessment, build a remediation plan, and implement a Continuous Improvement process. They are invaluable for the project teams.
When the question to answer is « how much does this application cost to maintain? », quantitative data are simpler, more precise, and easier to use.
Let us measure what we can control.
This post is also available in Spanish (Leer este articulo en castellano) and in French (Lire cet article en français).