Application benchmarking

I thought the previous post on the quality assessment of an application would be the last of our series on the analysis of Cobol code with Sonar. But this week I discovered a new plugin from eXcentia that is very useful as part of an assessment: the Sonar Benchmark Plugin.

This plugin allows a comparative evaluation, or benchmark, of an application against all the code in your Sonar repository.

You may remember that I analyzed several Cobol applications, from which I created a Sonar View. With this View, we carried out a quality evaluation of one application: not the largest one, but one with a significant number of violations.

For this, we highlighted various numbers to support our assessment and propose some recommendations. But how does the quality of this application compare with the rest of our portfolio?

This is what we will verify with this Sonar Benchmark plugin.

Installation

The installation, as with any Sonar plugin, is very simple:

  1. Download the plugin from the ‘Descargar’ (Download) tab and copy it into the directory ../extensions/plugins/ of your Sonar installation.
  2. Restart Sonar… et voilà.

Use

The plugin comes with its own dashboard (at least with Sonar 3.1.1, the version I use): choose the application you want to benchmark and select ‘Benchmark’ in the menu bar.

We have two widgets. The first (Global Benchmark) positions applications relative to the minimum, average, and maximum values.

We can see that:

  • The average project size in our application portfolio is 424,000 lines of code.
  • The smallest application has 482 lines: this is the test file we downloaded from the Sonar site for our initial analysis.
  • The largest application has more than 1.6 million lines of code.

We can see immediately that we have six small projects, two very large ones, and no application of intermediate size.
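The plugin does not document exactly how the widget builds its categories, so here is a minimal sketch of the idea in Python, assuming the min-max range is simply cut into three equal size bands; every per-project figure is hypothetical except the extremes and our application's size quoted above:

```python
# A rough reconstruction of what the Global Benchmark widget shows:
# min / average / max over the portfolio, then three size bands.
# All per-project figures are hypothetical except the extremes and
# our application's size, which are quoted in the post.
projects = {
    "test-file": 482,        # the Sonar sample file from our first analysis
    "app-a": 32_000,         # hypothetical
    "app-b": 58_000,         # hypothetical
    "app-c": 95_000,         # hypothetical
    "app-d": 140_000,        # hypothetical
    "our-app": 185_000,      # the application under assessment
    "big-one": 1_280_000,    # hypothetical
    "biggest": 1_600_000,    # "more than 1.6 million lines of code"
}

lo, hi = min(projects.values()), max(projects.values())
avg = sum(projects.values()) / len(projects)
band = (hi - lo) / 3  # cut the min-max range into three equal bands

print(f"min={lo:,}  avg={avg:,.0f}  max={hi:,}")
for name, ncloc in sorted(projects.items(), key=lambda kv: kv[1]):
    size = ("small", "medium", "large")[min(2, int((ncloc - lo) / band))]
    print(f"{name:10} {ncloc:>10,}  {size}")
```

With these assumed figures the sketch reproduces the picture above: six projects in the small band, two in the large band, and nothing in between.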

Where does our application sit within this portfolio? This is what the second widget shows us.

The shape of this curve reflects the previous graph: most applications are below 200,000 LOC, and with 185,000 lines of code, ours ranks at the top (6/8) of the six smaller projects.

We saw in our assessment that this application had a rather low level of documentation. But this is the case for most projects, with six applications under a 17% comment rate.

You can see that this chart is the exact reverse of the previous one, so the corresponding curve is also the mirror image of the previous one:

With a comment rate of 14.7%, our application is in the middle (5/8): it is the entire application portfolio that suffers from a lack of documentation.
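The ranking itself is simple to reproduce if you ever need it outside the widget. A minimal sketch, with hypothetical comment rates for every project except ours:

```python
# Rank a project on a metric across the portfolio (1 = lowest value).
# All rates are hypothetical except our application's 14.7%.
comment_rates = {
    "test-file": 2.0, "app-a": 9.5, "app-b": 11.0, "app-c": 13.2,
    "our-app": 14.7, "app-d": 16.8, "big-one": 21.0, "biggest": 25.4,
}

def rank(project: str, rates: dict[str, float]) -> str:
    """Return the project's position when sorted from lowest to highest."""
    ordered = sorted(rates, key=rates.get)
    return f"{ordered.index(project) + 1}/{len(ordered)}"

print(rank("our-app", comment_rates))  # -> 5/8, the middle of the pack
```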

You may remember that our application had a high rate of duplicated code.

In fact, this is the case with most of our projects: two with less than 20% duplicated lines and four with a rate between 20% and 40%.

With 50% copy-paste, our project leads on this metric, though not far ahead of the pack.

Finally, we said our application was not complex.

This is indeed the case, with less than 20 points of Cyclomatic Complexity per program, while half of the applications are near or above 100 points. This reflects the distribution of our portfolio, with its predominance of small applications and two very large ones.

The quality of the applications is quite good, with an average of 75% compliance with Cobol good practices …

… and five applications over 80%. Our project is below this average …

… again at the bottom of the list.

Customizing a dashboard

The advantage of this plugin is that you can create your own widgets with the metrics you find interesting.

In the following example, I have customized a dashboard for management with two widgets that show the cost of correcting Blocker and Critical defects, centered on our Quality Model focused on performance and reliability, and thus on the defects that are riskiest for users.

On average, the cost of resolving these defects is 135 days per application, but six projects are under 126 days.

With 56 days, our project is one of the first candidates for refactoring.
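Note that you are not limited to widgets: the same numbers can also be pulled from Sonar's web service API to feed a management spreadsheet. A minimal sketch against the legacy /api/resources service of Sonar 3.x, assuming a server at localhost:9000; the metric keys are standard Sonar keys:

```python
# Pull portfolio metrics from the legacy Sonar 3.x web service
# (GET /api/resources) to build your own benchmark outside the GUI.
# The server URL is an assumption; the metric keys are standard
# Sonar metric keys.
import requests

SONAR_URL = "http://localhost:9000"  # assumed server address
METRICS = ("ncloc,comment_lines_density,duplicated_lines_density,"
           "blocker_violations,critical_violations")

resp = requests.get(
    f"{SONAR_URL}/api/resources",
    params={"metrics": METRICS, "format": "json"},
)
resp.raise_for_status()

for project in resp.json():  # one entry per project in the repository
    values = {m["key"]: m.get("val") for m in project.get("msr", [])}
    print(project["name"], values)
```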

I told you in the previous post that management would appreciate it if you can provide information to facilitate their decision process.

  • Is this project an exception to the trends of the entire application portfolio?
  • Can we discern something specific to this application that would justify special actions?
  • Or can something we discovered here be extended to the entire application portfolio?

For example, this application shows poor numbers in terms of documentation, but it is in fact close to the average. Likewise, although it is high on duplicated code, it is not much higher than the portfolio as a whole (around 40% copy-paste). If management is considering outsourcing all of the applications, you can explain that the cost of such a strategy may be higher than the expected gains.

Another situation: management must deal with disgruntled users unhappy with the number of bugs in these applications or with their poor performance, and asks you to propose a refactoring plan. This application is a good candidate for such an operation: although it is not large, there are nevertheless smaller applications with more complex code and a higher cost of remediation.

Propose a refactoring in two phases: clean up the duplicated programs, then run a new analysis to identify the Blocker and Critical defects to correct.

Selling the value of your analysis means delivering objective information that facilitates the decision process. The Benchmark plugin from eXcentia allows you to refine your assessment of application quality and to propose solutions based on objective, accurate information.

An essential tool for any assessment.

 

This post is also available in Spanish (Leer este articulo en castellano) and in French (Lire cet article en français).
