Use cases – Working seamlessly together

We have seen in the previous two posts that several quite common use cases require only a limited number of metrics – 10 to 20. These use cases are:

  • Validate the delivery of a new version of the application (Quality Gate).
  • Get objective data for SLAs.
  • Manage the Continuous Integration / Improvement processes.

A comment in the first post asked: “What are these 10 or 20 most important metrics?”

Let us first see the different use cases that you can implement.

Quality Gate

This process consists of verifying and validating the quality of an application delivery.

Why the name ‘Gate’?

Because it is a door between two phases, which either lets the output of the previous phase pass to the next one or blocks it. You will usually meet two types of Quality Gate:

  • Pre-QA: there is no need to waste time (and money) in QA if defects can easily be found with an automated analysis of the source code. If these defects are too numerous, the code goes back to the project team, who will correct them. This Quality Gate can be implemented for the in-house teams (development or QA) of a customer or a provider.
  • Pre-Production: aims to check whether bugs remain. Do you want your end users to encounter these bugs? Of course not: the cost is too high for your image and for the IT department.
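In both cases, the gate itself boils down to comparing the results of the automated analysis against agreed thresholds. A minimal sketch of that decision logic (the metric names and threshold values here are hypothetical examples, not taken from any particular tool):

```python
# Minimal sketch of a Quality Gate decision: compare the metrics produced
# by a code analysis against the thresholds agreed for the gate.
# Metric names and thresholds are hypothetical examples.

def quality_gate(metrics, thresholds):
    """Return the list of violated thresholds; an empty list means the gate is passed."""
    violations = []
    for name, limit in thresholds.items():
        value = metrics.get(name, 0)
        if value > limit:
            violations.append(f"{name}: {value} > {limit}")
    return violations

# Example: results of an automated analysis of a delivery
metrics = {"blocker_issues": 2, "critical_issues": 7, "duplication_pct": 12}
thresholds = {"blocker_issues": 0, "critical_issues": 10, "duplication_pct": 15}

violations = quality_gate(metrics, thresholds)
print("Gate passed" if not violations else "Gate failed: " + "; ".join(violations))
```

The output of the gate – pass, or a list of violations – is what travels back to the project team (Pre-QA) or blocks the release (Pre-Production).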

Which of these Quality Gates should you implement? It depends on the context.
If the application is developed internally:

  • Pre-QA Quality Gate strongly recommended.
  • Pre-production Quality Gate to verify that the corrections of defects found during QA have not introduced new ‘bad practices’.

If the application is developed by a provider (Outsourcing):

  • Pre-production Quality Gate required. We will also expect the provider to put its own QA in place.

Continuous Integration / Continuous Improvement

Code analysis is implemented at the level of the development teams, in order to detect defects as early as possible. It is strongly recommended to automate this process to avoid extra work, which requires appropriate tools:

  • A version management tool to track changes and to easily build a new version of the application at any time.
  • A tool to generate these ‘builds’ (compilations) of application versions and to automatically launch the code analysis. We saw an example with Jenkins and Sonar: Sonar – Analysis with Jenkins.
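The automated sequence such a tool runs on each commit is simple: build the application, then launch the analysis, and stop at the first failure. A sketch of that loop (the commands are hypothetical placeholders for your actual build and analysis tools):

```python
# Sketch of the automated sequence a CI server such as Jenkins performs
# on each commit: build the application, then launch the code analysis.
# The shell commands below are hypothetical placeholders.
import subprocess

def ci_job(steps):
    """Run each named step in order; stop and report on the first failure."""
    for name, cmd in steps:
        result = subprocess.run(cmd, shell=True)
        if result.returncode != 0:
            return f"{name} failed"
    return "build and analysis OK"

steps = [
    ("build", "echo compiling the application"),
    ("analysis", "echo launching code analysis"),
]
print(ci_job(steps))
```

The point of the automation is exactly this: nobody has to remember to run the analysis, so it happens on every build.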

The benefits of such a process are numerous:

  • Continuous Integration: correction of defects as soon as possible.
  • Continuous improvement: the frequent repetition of the analysis helps the project teams to become more efficient in respecting good programming practices and architecture.
  • Passing a Quality Gate becomes much easier.

The project team may submit to the QA team a report with the results of the latest source code analysis. Again, depending on the context:

  • Internal development: the QA team may decide to advance to the next phase simply by approving the report.
  • External development: the QA team reviews the report provided by the provider. They may decide that there are too many defects and request a new version, which will require adjusting the schedule. Or they can approve the report and run a Quality Gate in their own environment.

Manage SLAs

A Service Level Agreement (SLA) is a contract between a service provider and its customer, with the objective of defining the quality of that service. The difficulty is to get reliable and objective data to measure the quality of the application, and this is where a code analysis tool proves valuable.

The context here is only Outsourcing. Of course, it is possible to use SLA to measure the performance of in-house project teams, but this is usually found only in very large organizations with many teams acting as internal IT providers.

A service level agreement is evaluated for each delivery of an application, using the results of the Quality Gate. The customer can choose among three options, normally defined in the contract:

  • OK: acceptance of the delivery when all quality objectives are achieved.
  • OK with corrections: acceptance of the delivery, although some objectives were not met; the supplier will deliver a new version with the requested corrections as soon as possible. This happens frequently, to avoid any delay for a critical version, but only if the remaining defects are minor and do not impact the end user (we’ll see this with the metrics for this process).
  • KO: there are too many flaws in the new version, or they are too serious. The delivery is rejected.
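The three contractual options above can be sketched as a simple mapping from the remaining defects to a verdict. A minimal sketch (the severity names and the rule – any serious defect means rejection, minor defects mean conditional acceptance – are hypothetical simplifications of what a real contract would specify):

```python
# Sketch of the SLA decision applied to each delivery, based on the
# defects remaining after the Quality Gate. Severity names and the
# decision rule are hypothetical simplifications.

def sla_verdict(defects):
    """Map remaining defects to one of the three contractual options."""
    majors = defects.get("major", 0)
    minors = defects.get("minor", 0)
    if majors > 0:
        return "KO"                   # serious flaws: delivery rejected
    if minors > 0:
        return "OK with corrections"  # accepted, fixes delivered later
    return "OK"                       # all quality objectives achieved

print(sla_verdict({"major": 0, "minor": 3}))
```

A real contract would of course refine this with thresholds per severity, but the structure of the decision is the same.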

The SLA process is recommended for customers who work with different vendors, because it becomes possible to benchmark providers, resulting in emulation (if not outright competition) between them. You know, when you can say: “You are in last place in the ranking of our suppliers this month.” This can be very effective. It also explains why it is not easy to impose such a process on a single provider: in that case, the customer has little leverage.

It is advisable to implement all three processes together to achieve greater synergy and maximize the benefits, but again it depends on the context:

  • Internal development: continuous integration highly recommended, a Quality Gate simpler and less expensive, no SLA (but the quality data can be integrated into a project control dashboard, especially for large departments).
  • Outsourcing: Quality Gate essential and SLA recommended. It is also recommended that the provider implement a continuous integration process.

The main benefit of a code analysis tool in terms of SLAs comes from the objective metrics that facilitate the relationship with the supplier.

As all these processes work together, they rely on a quite similar range of metrics. We will see that in our next post.

This post is also available in Spanish (Leer este articulo en castellano) and in French (Lire cet article en français).
