10 to 20 metrics

We saw in the previous post ‘Use cases – Working seamlessly together’ which use cases are most often implemented with a code analysis tool and provide the greatest benefits:

  • Quality Gate to validate the delivery of a new application version.
  • Management of SLAs and benchmarking of providers.
  • Continuous Integration / Improvement.

It is recommended to combine these three use cases for better synergy and optimized results. However, their implementation depends on whether you are in a context of:

  • In-house development: Continuous Integration / Improvement + Quality Gate.
  • Outsourced development: Quality Gate + SLAs.

What are the most important metrics for these use cases?

In fact, the target for Continuous Improvement and SLAs should be simple and realistic.
Realistic means: achievable. If you ask your development team for a perfect score on 50 best practices, it is not realistic. Nobody can keep 50 different measures in mind, 50 good programming practices perfectly respected. Even if developers know them all, even if they know they must avoid creating any violation of these quality rules, it is impossible to remember every one, and the occasional lapse of attention cannot be avoided. And if the goal is not attainable, the team quickly becomes discouraged, stops paying attention, and the process fails.

Set an achievable goal with a minimum of 10 to 12 rules, 15 at most.

Simple means: objectively measurable. For example, you decide to set a goal of 99.99% quality. But 99.99% of what? The number of lines of code? The number of objects covered by the metric? Consider a classic rule of exception management: a Java ‘throw’ should not be empty. It obviously applies only to methods that implement business logic. If a class has 100 setters/getters with no business logic, does the rule apply? If this class has 200 lines of code or more, will you include these lines in your calculation? Not only does it get complicated, but you will also penalize someone who develops a class with 20 methods implementing a ‘throw’: forgetting just one means a 5% error.
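To make the counting problem concrete, here is a minimal Java sketch (the class and its content are purely hypothetical): only one method carries business logic, so whether or not the accessor lines enter the denominator completely changes the percentage.

    public class CustomerRecord {

        private String name;

        // Dozens of accessors like these contain no business logic,
        // so the exception-management rule does not apply to them...
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }

        // ...but it does apply here, and the violation is easy to miss:
        // the empty catch block silently swallows the failure, and the
        // caller believes the save succeeded.
        public void save(java.sql.Connection conn) {
            try (java.sql.PreparedStatement stmt = conn.prepareStatement(
                    "UPDATE customer SET name = ? WHERE id = 1")) {
                stmt.setString(1, name);
                stmt.executeUpdate();
            } catch (java.sql.SQLException e) {
                // empty: the error is lost
            }
        }
    }

Counting violations avoids the denominator question entirely: there is nothing to argue about.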

The simplest and most easily measured target is: 0 defect. No bad practice is tolerated, whatever the number of lines, methods, classes, etc.

So 10 or 12 metrics with 0 tolerance.

And what are the most important rules for which we will not accept any failure? Those that constitute a critical error in terms of development, because their impact on application behavior and on end users is critical. Improper error handling can result in a blank page for the user, an error page without any explanation, an interrupted or incomplete transaction, or even data corruption.

This applies to the use of generic exception classes that do not specialize the exception, making it impossible to understand and trace the error that occurred. It also applies to certain syntaxes that are dangerous in terms of data corruption and security.
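As a hedged illustration (the class and method names are invented for the example), compare throwing the generic class with a specialized exception that documents and traces the error:

    public class AccountService {

        // Generic: the caller can only catch Exception and cannot tell an
        // invalid amount from a network failure; no context is preserved.
        public void debitGeneric(String accountId, long amount) throws Exception {
            if (amount <= 0) {
                throw new Exception("error"); // nothing to understand or trace
            }
            // ... perform the debit ...
        }

        // Specialized: the exception type itself identifies the error and
        // carries the context needed to trace it.
        public static class InvalidAmountException extends Exception {
            public InvalidAmountException(String accountId, long amount) {
                super("Invalid amount " + amount + " for account " + accountId);
            }
        }

        public void debit(String accountId, long amount) throws InvalidAmountException {
            if (amount <= 0) {
                throw new InvalidAmountException(accountId, amount);
            }
            // ... perform the debit ...
        }
    }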

Cobol and ABAP do not have the concept of classes or methods, but the principles remain the same: always check the return code when calling a function or a database procedure, and avoid instructions that interrupt processing, such as Break or Stop Run. These technologies are also very sensitive to performance, so we should add rules for expensive operations in SQL code and/or loops.
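Cobol cursors have no direct Java equivalent, but the cost pattern exists in every technology. As an analogous, purely illustrative Java/JDBC sketch (the table and method names are assumptions), compare a query executed inside a loop with a single set-oriented query:

    import java.sql.*;
    import java.util.List;

    public class OrderTotals {

        // Expensive: one database round-trip per customer, the same cost
        // pattern as a cursor or an Open/Close inside a Cobol loop.
        static void totalPerCustomer(Connection conn, List<Integer> customerIds)
                throws SQLException {
            for (int id : customerIds) {
                try (PreparedStatement stmt = conn.prepareStatement(
                        "SELECT total FROM orders WHERE customer_id = ?")) {
                    stmt.setInt(1, id);
                    try (ResultSet rs = stmt.executeQuery()) {
                        while (rs.next()) {
                            // ... process rs.getLong("total") ...
                        }
                    }
                }
            }
        }

        // Cheaper: a single query replaces the loop entirely.
        static void totalForAll(Connection conn) throws SQLException {
            try (PreparedStatement stmt = conn.prepareStatement(
                     "SELECT customer_id, total FROM orders ORDER BY customer_id");
                 ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    // ... process one row at a time ...
                }
            }
        }
    }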

We therefore recommend metrics such as:

  • Java: Empty Try block, Empty Finally block, Empty If statement, Empty While statement, Illegal throws (java.lang.Error, java.lang.RuntimeException), Equals HashCode, Array stored directly, … (a sketch illustrating some of these follows this list)
  • Cobol, ABAP: If without Endif, Others in Case, When Other in Evaluate, Break, Stop Run, Sort, Select *, Select Distinct, Group by, Select in Select, Cursor inside a loop, Open/Close in a loop, …
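As an illustration, here is a compact, hypothetical Java class that violates several of the Java rules listed above:

    import java.util.Arrays;

    public class RuleViolations {

        private String[] items;

        // Violates 'Array stored directly': the caller keeps a reference to
        // the array and can mutate this object's internal state from outside.
        public void setItems(String[] items) {
            this.items = items;            // violation
            // this.items = items.clone(); // compliant: defensive copy
        }

        // Violates 'Equals HashCode': equals is overridden without hashCode,
        // so equal objects may land in different hash buckets.
        @Override
        public boolean equals(Object o) {
            return o instanceof RuleViolations
                && Arrays.equals(items, ((RuleViolations) o).items);
        }
        // missing: @Override public int hashCode() { ... }

        public void process() {
            try {
                // Violates 'Empty Try block': nothing is attempted here.
            } finally {
                // Violates 'Empty Finally block': nothing is released or logged.
            }
            if (items == null) {
                // Violates 'Empty If statement': the branch does nothing.
            }
        }
    }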

These rules have the advantage of being equally acceptable within an SLA with a provider, and a ‘0 defect’ target is easily measurable.

Now, what about the rules for accepting or rejecting an application release at the Quality Gate? First, we will include the previous metrics: if the ‘0 defect’ rule applies to developers, we do not want to find these defects in the application.

The Quality Gate and the SLA should also include some measures of maintainability, that is to say, metrics that assess code quality in terms of maintenance costs, for defects that do not directly endanger the user.

Imagine a J2EE application with 500 classes and an average of 20 methods per class, for a total of 10,000 methods. Let’s assume that Java code of good quality can contain 7.5% complex objects and 2.5% very complex objects: in our example, 750 complex methods and 250 very complex methods. This code is the most costly to maintain because it is difficult to understand, and also the most dangerous, because the risk of introducing a defect when changing it is highest.

Now imagine that the project team adds 5% more complex and very complex methods with each release. This figure seems quite limited: it represents only a dozen very complex methods on top of the existing 250. Yet at a rate of 4 releases a year, the number of very complex methods will have increased by over 20%, and the complexity of the application will double in less than 4 years. Knowing the impact of this complexity on your budget, your schedule, and the number of bugs experienced by users, one quickly understands why it is desirable to monitor any drift in this area.
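A quick back-of-the-envelope check of these figures (the values are the assumptions from the example above):

    public class ComplexityGrowth {
        public static void main(String[] args) {
            double veryComplex = 250;      // very complex methods today
            double ratePerRelease = 0.05;  // +5% per release
            int releasesPerYear = 4;

            // One year: 250 * 1.05^4 ≈ 304 methods, i.e. growth of over 20%.
            double afterOneYear =
                veryComplex * Math.pow(1 + ratePerRelease, releasesPerYear);
            System.out.printf("After 1 year: %.0f methods (+%.1f%%)%n",
                    afterOneYear, (afterOneYear / veryComplex - 1) * 100);

            // Doubling time: 1.05^n = 2  =>  n = ln 2 / ln 1.05 ≈ 14.2 releases,
            // about 3.6 years at 4 releases per year.
            double releasesToDouble = Math.log(2) / Math.log(1 + ratePerRelease);
            System.out.printf("Doubles after %.1f releases (%.1f years)%n",
                    releasesToDouble, releasesToDouble / releasesPerYear);
        }
    }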

We can therefore add this type of metric to an SLA, and thus to a Quality Gate, along with other measures that impact maintainability, such as the % of duplicate code, the % of commented-out code (LOC commented out) … and of course, if the tool can calculate it, the Technical Debt (we will have to take the increase in code size into account for this measure). The percentage of comments may also be an interesting metric when the code is outsourced.

Obviously, it is advisable to check these metrics as soon as possible, which means enforcing them in a Continuous Integration process. However, I do not recommend a ‘0 defect’ policy at this level: what matters is to reach the Quality Gate without any new complex object, not to systematically remove them from every build. Some autonomy may be left to the project team in this area. We will nevertheless make them aware of these rules, which also benefit them.

Note that errors that pose a risk to the user can also significantly impact application maintenance costs. Improper error handling means longer bug-detection times and therefore quite high correction costs.

We do not pretend to list these metrics as the 10 to 20 most important ones, but rather as examples for our three use cases. You can create your own list according to your own criteria: the technologies used in your applications, the maturity level of your team, business/IT alignment (see ‘What is the first question?’), whether applications are maintained in-house or outsourced, etc.

The 10 to 20 most important metrics are those that will help you improve the quality of your applications by maximizing the benefits of these three use cases.

This post is also available in Spanish (Leer este articulo en castellano) and in French (Lire cet article en français).
