This is the third article in our series on creating custom metrics, and on why the ability to define your own metrics in a code analysis tool is not as important a factor as everyone believes.
We saw in the first article that the simplest metrics were not the most numerous, and in the second post that the most interesting and important rules were also the most complex to implement, sometimes (or often) impossible.
In this post, we will look at the question of customization per application technology, before a final article summarizes everything and identifies all the right questions to ask.
Custom metrics and technologies
When asked whether it is possible to create metrics with a particular tool, I answer: 'Why? Why do you want new metrics?'. In most cases, you will not get a very specific answer, which is obviously not a good sign as to the existence of a real need.
The more useful question is actually: 'For which technologies?'.
Java

Java is still the language most widely used in business, the one found in the most applications. Good programming practices for it are well known and standardized. Sure, purists sometimes debate whether the use of a particular instruction constitutes a violation of good practices, but let's be serious: everyone agrees on the most important metrics to use.
The corresponding rules can be found in many tools, and most are easy to integrate: SonarQube knows how to work with FindBugs, PMD, and Checkstyle, for example. In fact, there are so many tools incorporating so many rules that you can specialize your rule sets according to your priorities and goals:
- New and critical applications: you want to hunt for potential bugs in order to deliver robust and efficient applications and reduce problems for your users.
- Old and proven applications: problems and delays on a new release are certainly not desirable, but your priority, and what you are judged on, is to control maintenance costs and not blow your budget. You will prefer the good programming practices that affect the maintainability of your applications.
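These two priorities can be pictured as two rule profiles built from the same catalog. A minimal sketch of the idea in Python; all rule names and categories below are invented for the example, not taken from any actual tool:

```python
# Hypothetical rule catalog: each rule belongs to a category.
# The names are invented for illustration only.
RULE_CATALOG = {
    "NullPointerDereference": "reliability",
    "UnclosedResource": "reliability",
    "EmptyCatchBlock": "reliability",
    "MethodTooLong": "maintainability",
    "DuplicatedBlock": "maintainability",
    "MagicNumber": "maintainability",
}

def build_profile(goal):
    """Select the subset of rules matching a given goal: a new, critical
    application hunts for potential bugs; an old, proven one focuses on
    maintainability."""
    return sorted(rule for rule, category in RULE_CATALOG.items()
                  if category == goal)

# New critical application: activate the bug-hunting rules.
new_app_profile = build_profile("reliability")
# Old proven application: activate the maintainability rules.
legacy_profile = build_profile("maintainability")
```

In practice, tools like SonarQube let you do exactly this through quality profiles, without writing any code.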
So, if you follow me, there is no reason to define new Java rules, unless of course you want to have fun and play the apprentice guru.
Other new technologies
By 'other new technologies', I mean languages like C++, C#, and the rest of the .NET family. I must say that I am not an expert in these languages, nor a big fan, but I have still analyzed some applications of this kind (not my favorites, I must say), and I know that most quality rules for these technologies are close enough to those we find in Java. I mean, an IF is an IF and a loop is a loop, so we find the same good practices when it comes to code structure, for example. And in every language, there are always specific instructions whose use is not recommended.
I also think that the offer in terms of tools and rule repositories is becoming more abundant for these languages. So the principles above regarding the customization of Java rules apply here as well: do not reinvent the wheel.
In fact, there is only one acceptable case in my opinion: proprietary 'home-grown' frameworks. Imagine that you work in an SOA environment with an in-house middleware, developed and maintained internally, that encapsulates calls to mainframe Cobol programs or to some ERP. It is obviously desirable that everyone uses the framework and that nobody develops code outside of it.
Because when the program called in the SOA layer, or the parameters of that call, are changed, the framework (the APIs for this program) will be updated as well. But if someone has written a program outside the framework, it will stop working, and it will be difficult to identify, unless you want an end user to find the bug for you. And even then, it will be difficult to find the root cause of the problem and solve it. Assuming the author of this mess is still around.
Another case that is not so unusual: I don't know how it is in your country, but I find it amazing how many companies around here have developed their own versions of the Struts and Hibernate frameworks. OK, most of them are service companies that used these proprietary frameworks when developing applications for their customers, thereby securing maintenance contracts on these applications for life. I have seen Java portfolios with up to 3 or 4 different proprietary frameworks.
So in this case, it is acceptable to bear the cost of customizing metrics for a custom framework, to ensure that it is used. But these rules also have to be possible to implement. How do you identify that a piece of code does not use the instructions or objects of a framework when it should? In the best case, you must identify all instructions that access a specific layer (the SOA layer in our previous example, the data layer for a framework like Hibernate, etc.), and check that these instructions appear in the developer's code and not in the framework's code, nor in any framework code instantiated by the developer.
Believe me, this is horribly complicated: you will have to work with an Abstract Syntax Tree, which requires both specific tooling and high skills. The cost of creating and maintaining these rules will be fairly high, or very high.
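To make the difficulty concrete, here is a deliberately naive, regex-based sketch of such a rule in Python; every package and API name in it is invented. It flags developer code that calls a low-level mainframe API directly instead of going through the in-house framework, and it already shows why a serious implementation needs a syntax tree: a plain text match knows nothing about imports, aliases, or framework code instantiated inside the application.

```python
import re

# Hypothetical names: the in-house middleware package and the low-level
# API that only the framework itself is allowed to call directly.
FRAMEWORK_PACKAGE = "com.acme.mwbridge"
DIRECT_CALL = re.compile(r"\bCicsConnection\s*\.")

def find_bypasses(source_files):
    """Return (filename, line_number) pairs where a file outside the
    framework's own code calls the low-level API directly."""
    violations = []
    for name, text in source_files.items():
        if FRAMEWORK_PACKAGE in name:   # the framework itself is allowed
            continue
        for i, line in enumerate(text.splitlines(), start=1):
            if DIRECT_CALL.search(line):
                violations.append((name, i))
    return violations

# Invented example sources: one framework file, one bypass, one clean file.
files = {
    "com.acme.mwbridge/MainframeGateway.java":
        "class MainframeGateway { void call() { CicsConnection.open(); } }",
    "com.acme.billing/Invoice.java":
        "class Invoice { void fetch() { CicsConnection.open(); } }",  # bypass
    "com.acme.billing/Report.java":
        "class Report { void fetch() { MainframeGateway gw; } }",     # OK
}
```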
Even the cost of using these custom metrics will be higher than you think, because this is where you will find a lot of false positives. I remember a metric verifying that each and every JSP or HTML page was associated with an error page. Now, the relationship with an error page is not always straightforward: sometimes it goes through a Struts object, sometimes through a specific component dedicated to error handling, etc. The first thing I always did after a first analysis was to look at this metric, because there were always false positives, sometimes up to 100% of the results. And of course, if you tell a Java team that it is mismanaging errors when the error is yours... hum hum. A great moment of loneliness.
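That error-page metric can be sketched the same way, which also shows where the false positives come from. In this hypothetical Python example (file names and contents are invented), the naive rule only sees the JSP errorPage directive, so a page whose error handling is wired elsewhere, for instance in a Struts configuration file, gets flagged even though its errors are handled:

```python
import re

# Naive rule: a JSP must declare an error page via the errorPage directive.
ERROR_PAGE_DIRECTIVE = re.compile(r'errorPage\s*=\s*"[^"]+"')

def missing_error_page(pages):
    """Flag every page whose text lacks an explicit errorPage directive."""
    return sorted(name for name, text in pages.items()
                  if not ERROR_PAGE_DIRECTIVE.search(text))

pages = {
    "checkout.jsp": '<%@ page errorPage="oops.jsp" %> <html>...</html>',
    # This page relies on a global handler declared in a Struts config
    # file, so the naive rule flags it anyway: a false positive.
    "catalog.jsp": "<html>...</html>",
}
```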
4GL

I will not start a discussion about what a 4GL (fourth-generation language) is; let's just say I am speaking of those client-server tools we had in the 80s and 90s: SQLWindows, PowerBuilder, Delphi, NS/DK (in France), etc.
There are still some applications written in these languages, and even though someone regularly says that they should be refactored, it never gets done. After all, if it works…
There are not many tools that can analyze this kind of code: there are not enough applications and customers for such tools, so for most software publishers it is not profitable to invest in developing them. Mostly, customers are already happy with anything that can check the most basic rules, and customization is not really a priority.
The only exception may be a language like Oracle Forms, which remains in relatively wide use. It depends on which part of the world we are speaking of; I am always amazed at how some tools are widely used in some countries and not at all in other regions. But there are still a lot of very large portfolios of critical Forms applications. However, their owners are most of the time satisfied with a few good, important metrics before thinking of developing their own.
Cobol

COBOL is, with Java, one of the most standardized languages. This makes sense, considering that Sun and IBM were leading providers of hardware and software solutions and encouraged the use of their platforms by publishing a great number of papers on code quality. So the rules in this area are well known and standardized. At most, you will have to disable a rule that a project team does not enforce. More on this in this post: Your own quality model.
It has happened to me once or twice to meet a company that had defined a specific rule that did not correspond to any existing standard. For example: you know that a Cobol program is divided into sections and subsections (paragraphs), the equivalent of reusable procedures or functions, located in the program itself. A customer had made a rule saying that a paragraph could not be called if it had not been declared and coded previously (before the call). The rule was justified, and even though I knew nobody else using it, it was possible to implement with a RegExp. And the client insisted, so…
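That customer's rule really is simple enough for regular expressions. A rough sketch in Python, run on a toy Cobol fragment; the paragraph names are invented, and real Cobol (sections, PERFORM ... THRU, etc.) would need more care:

```python
import re

# Simplified Cobol handling, just enough for the illustration:
# a paragraph definition is a name alone on a line, ending with a period;
# a call is a PERFORM followed by that name.
PARAGRAPH_DEF = re.compile(r"^([A-Z0-9-]+)\.\s*$")
PERFORM_CALL = re.compile(r"\bPERFORM\s+([A-Z0-9-]+)")

def forward_performs(source):
    """Return names of paragraphs PERFORMed before their definition."""
    defined, violations = set(), []
    for line in source.splitlines():
        call = PERFORM_CALL.search(line)
        if call and call.group(1) not in defined:
            violations.append(call.group(1))
        match = PARAGRAPH_DEF.match(line.strip())
        if match:
            defined.add(match.group(1))
    return violations

# Toy program: PRINT-TOTAL is called before it is defined.
program = """\
INIT-VARS.
    MOVE ZERO TO WS-TOTAL.
MAIN-LOGIC.
    PERFORM INIT-VARS
    PERFORM PRINT-TOTAL
PRINT-TOTAL.
    DISPLAY WS-TOTAL.
"""
```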
ABAP

Unlike the previous case, SAP has never been a specialist in code quality. Their priority is to sell new versions, not to stabilize the existing ABAP code. It has even happened that SAP recommended the use of statements that represent a risk to the performance or reliability of the application.
Thus, many if not all companies with ABAP programs have their own list of good programming practices. There we often find the main existing standards, but not always in the same form. For example, a common best practice is to always test the return code after a call to another program. But sometimes, with some customers, this rule will be more precise and will define exactly which return code corresponds to which situation (0 = no error, 1 = error of one type, …, X = error of type X, etc.). In such a case, you will probably have to develop more advanced metrics that verify not only the presence of a return-code check but also the values being tested.
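Such a rule could look like the following sketch in Python. Everything here is a simplified assumption: the documented return-code values, the function names, and the idea of checking only the statement immediately after the call; a real implementation would have to handle the many ways sy-subrc can be tested.

```python
import re

# Hypothetical customer standard: these are the sy-subrc values the
# in-house documentation says must be tested after a call.
EXPECTED_CODES = {0, 4}

def unchecked_calls(statements):
    """Return indices of CALL FUNCTION statements whose next statement
    does not test sy-subrc against one of the documented values."""
    violations = []
    for i, stmt in enumerate(statements):
        if not stmt.upper().startswith("CALL FUNCTION"):
            continue
        follower = statements[i + 1] if i + 1 < len(statements) else ""
        match = re.search(r"sy-subrc\s*(?:=|<>)\s*(\d+)",
                          follower, re.IGNORECASE)
        if not match or int(match.group(1)) not in EXPECTED_CODES:
            violations.append(i)
    return violations

# Invented ABAP fragment, one statement per list entry.
abap = [
    "CALL FUNCTION 'Z_GET_CUSTOMER'",
    "IF sy-subrc <> 0",        # tested against a documented value: OK
    "CALL FUNCTION 'Z_GET_ORDER'",
    "WRITE lv_result",         # return code never tested: violation
    "CALL FUNCTION 'Z_GET_ITEM'",
    "IF sy-subrc = 8",         # tested against an undocumented value
]
```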
You will also encounter important standards that are not to be found in any book of custom rules, and you should check with your client whether they are interested in enforcing them. It will often be the case, but the client will then have to explain to all his outsourcers that for years they did bad work by not following these rules, without anyone ever telling them they had to. Also, it is in the ABAP world that you will meet the rules that are the most complex, or even impossible, to automate.
Conclusion

To summarize:
- Java and the newer technologies: unless you want to play the little mad chemist inventing his own recipes, it is difficult to justify the need for new metrics, unless you have…
- Frameworks: beware, the cost of developing, maintaining, and even using custom metrics for a proprietary framework will be high, very high.
- Other languages such as 4GLs: little or no demand.
- Cobol: no need for custom metrics, but you will often have to customize the quality model itself.
- SAP ABAP: probably the only language for which customizing rules can be justified. But not everything can be automated, and this is where you will find the rules that are most complex to implement.

In any case, creating new metrics often costs more than you might think. The next post will identify these hidden costs and summarize all the questions to ask in order to properly assess the actual need and get some return on investment.