I was thinking about my previous post on measuring effort in a project – estimating development and QA costs before the project has started, when there is little data – when I came across a forum announcement for a conference on the measurement of software quality.
You know, one of those events where various speakers give presentations on topics such as ‘which metrics to use to assess projects’ or take part in roundtables on ‘methods for estimating development and maintenance costs’.
The author of the thread asked the forum members which topics they would like to see addressed at this conference, which triggered a rather unusual series of responses and reactions that I will summarize briefly:
- Halt, stop, enough: enough presentations and ‘papers’ on “How to measure productivity in software development?”, “Effective measures of risk” or “Using Function Points in the aerospace industry”.
- We have been using metrics for more than 35 years, and yet IT departments continue to ignore code quality measures, while the number of software projects that fail or run late keeps growing.
And someone commented that “the software measurement industry is in a poor state” and that “we could learn from the Agile community, which has integrated these measures into its practices.”
I completely agree with this statement, and it reminded me of a presentation by Olivier Gaudin, co-founder and CEO of SonarSource, which you can find here: Webinar: Take Continuous Inspection to Your Enterprise with Sonar 3.0.
Here is the slide from this presentation that seems to me the most important, about the vision of our market, and one that is fully in line with the points above:
Note: on the left is the current state of the market and on the right, what it should be.
Code analysis, application quality analysis, software quality measures, code quality metrics, … There are almost as many different names as there are tools, thanks to the imagination of marketing departments. And even if they deny it, all these tools perform the same functions: analyzing code to identify violations of programming best practices and aggregating various metrics into a dashboard.
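To make this concrete, here is a minimal, hypothetical Java example of the kind of best-practice violation these tools typically flag – an exception silently swallowed by an empty catch block, a classic rule in most code analyzers:

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class ConfigLoader {

    public Properties load(String path) {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream(path)) {
            props.load(in);
        } catch (IOException e) {
            // Empty catch block: the error is silently swallowed, so a
            // missing or unreadable file goes unnoticed. Most analyzers
            // report this as a best-practice violation.
        }
        return props;
    }
}
```

Tools like Sonar count such violations, classify them by severity, and roll them up into the metrics displayed on the dashboard.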
Five years ago, there were far fewer tools on the market: they were expensive, usually limited to analyzing two or three technologies, difficult to implement, and their use was restricted to cross-functional Quality teams. These teams tried to put application audit processes in place, but their efforts were often unsuccessful, both because management did not push hard enough to generalize these processes across projects and because developers did not want to see their code audited.
Today, I often meet Quality managers, some of whom I know as former clients, who tell me with a very worried look that the code analysis tool they bought at a high price goes unused, while the project teams have set up their own Continuous Integration process based on Open Source tools like Sonar and Jenkins.
First important lesson: application quality comes from developers. Controlling developers and measuring the quality of the code they produce has little chance of success if they are not fully convinced of the value of such a process and involved in it. It is obviously easier to control the code delivered by a service company, but that only yields results if the provider itself implements a Continuous Integration process within its own project teams. Otherwise, you should be prepared to change providers, which is not so easy when they are the only ones who know your code – knowledge you lost when you decided to outsource your applications.
So the first conclusion: developers are the primary users of a code analysis tool, and it must be integrated into the software delivery chain, from the programming tool on the developer’s workstation to the configuration management repository. Quality consultants who want to keep sole command of a code analysis tool in order to control developers will see their efforts fail and will remain isolated. Their role should be to push the tools and quality processes into the teams.
This brings us to the second important question: why is the work of Quality teams mostly ignored by IT departments? Because the vast majority of Quality consultants do not know how to sell the value of the information and support they can bring to IT departments and users.
Quality consultants are often more interested in the metric itself than in its use. I experience this constantly whenever I mention LOC – the Lines Of Code metric – on a forum: immediately, many experts argue that my use of this metric is ‘malpractice’ and that only Function Points should be used to measure the size of an application. I then have to deploy treasures of diplomacy and lengthy explanations to justify my use of this metric: not to measure the functional size of the application, simply to count the number of lines of code.
When you meet someone for the first time, what is the first thing you notice? Their size. Are they small, medium or tall? It is the same with an application: I start by looking at its size. Then I can use functional weight measures to check whether this person – sorry, this application – is overweight or not.
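That is all LOC is: a raw, easily understood measure of size. As a minimal sketch – the file extension and the decision to skip blank lines are my own assumptions, and real tools also exclude comments – counting lines of code takes only a few lines of Java:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class LocCounter {

    public static void main(String[] args) throws IOException {
        Path root = Paths.get(args.length > 0 ? args[0] : ".");
        long loc;
        // Walk the source tree and sum non-blank lines in .java files.
        try (Stream<Path> files = Files.walk(root)) {
            loc = files
                .filter(p -> p.toString().endsWith(".java"))
                .mapToLong(LocCounter::countNonBlankLines)
                .sum();
        }
        System.out.println("Lines of code: " + loc);
    }

    private static long countNonBlankLines(Path file) {
        try (Stream<String> lines = Files.lines(file)) {
            return lines.filter(l -> !l.trim().isEmpty()).count();
        } catch (IOException e) {
            return 0; // unreadable file: ignore it for this rough measure
        }
    }
}
```

The point is precisely its simplicity: unlike the many variants of Function Point calculation, anyone – including a CIO – can understand what is being counted.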
Quality consultants often prefer to spend their time in expert disputes about whether a specific metric is useful or not, rather than discussing the use that can be made of it and its value for project teams, stakeholders and IT managers. How many ‘papers’ explain complex variants of Function Point calculation for this application or that category of programs? How are CIOs supposed to understand such calculations and use them?
I know Infrastructure managers who always have on their desk a screen showing alerts for all the applications in the infrastructure they are responsible for. If a critical application – for example, bank payments – has a problem – say, with the payment data files – it appears in red on the screen, allowing them to react immediately, even before users become aware of the problem.
How many CIOs, VPs of application development, or managers responsible for a portfolio of applications use such a dashboard?
I do not know of any. Why? Because most of these tools are complicated to use, with metrics that are hard to understand, and above all they do not let users personalize the dashboard to their own needs (except Sonar). So you need a Quality consultant to interpret the metrics and deliver valuable information to managers.
There are more tools now than in the past, but I see very few that:
- Integrate effectively into the code production chain, starting from the developer’s workstation.
- Help developers produce high-quality code more efficiently, at no extra cost or workload.
- Offer an interface friendly and customizable enough for different users to build their own dashboards, with the indicators they consider appropriate to their role.
Instead, I keep seeing software vendors explain the complex algorithms their tools implement to detect more defects, along with long lists of metrics that only they can calculate, but that are useful only to a J2EE architect or a Quality consultant. As long as this remains the case, the market for these tools will stay, as Olivier Gaudin notes, a niche market with expensive software.
And Quality consultants will continue to wonder why nobody calls them when so many projects are late or fail.
This post is also available in Spanish (Leer este articulo en castellano) and in French (Lire cet article en français).