Software Quality Maturity Model & Technical Debt

I suppose you all know CMMI? This model, developed by the Software Engineering Institute, defines five levels of maturity to measure the quality of IT services, along with the best practices associated with each level in order to progress through the scale.

I will not write a whole post on this model, as it is not the subject of this article, but I will try to present it simply, the way I would summarize it to someone who knows nothing about project management and application development.

CMMI, the different levels

The model starts at level 1, where no process is defined. This is what I call the level of the champions or the stuff of heroes, because the success of a project depends on the presence of a champion, such as an expert in the technologies used on it or an experienced project manager. In short, at least one person, usually well known and always appreciated, who generally ends up allocated to the projects of highest priority. The failure of other projects does not encourage the organization to improve its practices.

The objective of level 2 (“Managed” in CMMI terminology) is to put in place processes that remove the dependency on champions and ensure that any team can successfully carry out any project by applying them. This is what I call “replicating the success” independently of individuals.

These processes are then tailored (level 3, “Defined”), that is to say, customized according to the project, with improvement loops to capitalize on each experience. Level 4 (“Quantitatively Managed”) implements metrics and statistical measures to control the processes, and thus lays the foundation for level 5 (“Optimizing”), in which the organization works in a constant process of optimization.

CMMI level zero

I have always been surprised that the model starts at level 1, since at this level there is nothing, no process, only perfect amateurism. But it allowed me to define what I call “CMMI level zero”, which well describes some organizations I have known, whose company culture promotes and encourages the emergence of champions. Instead of seeking to progress to level 2, where “indispensable people” are no longer necessary for the success of a project, managers look unfavorably on any time spent setting up any process, even something as simple as light documentation that lets repeatable project tasks be reproduced correctly and ensures their continuity in the absence of “champions”. They may even create inextricable situations of extreme difficulty to prove that they are made of “the stuff of heroes”. Like “Let’s change everything on the last day, work overnight and deliver tomorrow in a hurry”.

Generally, these managers divide the world between “winners”, a small group to which they obviously consider that they belong, and “losers”, those weak and capricious people who are generally reluctant to work in the permanent chaos generated by an elitist hierarchy. These managers certainly do not want to progress up the CMMI scale but, on the contrary, to drift down to a CMMI level zero of quality, where any improvement idea is suspect and whimsical.

A maturity model of software quality

The CMMI model has lost popularity since some companies supposedly certified at the highest level have actually proved unable to deliver proper service quality. But I believe it has the advantage of being logical, easily understandable by everyone, and applicable in many areas.

So I imagined the following model, which I propose to customers who want to measure their level of maturity in Software Quality.

[Figure: Qualilogy Software Quality Maturity Model]

At the first level, there are no processes or structures to manage Software Quality. This does not mean that there are no initiatives in this area, but they come only from a few champions who decide on their own to set up a code analysis tool, version control, or bug and change request tracking on their project.

Level two is characterized by a reactive attitude to quality problems: the organization seeks to identify software defects and non-compliance with good programming practices that might impact users and endanger the business of the company, or cause delays in the delivery of applications and increase maintenance costs. At this level, software quality management is usually done by a dedicated structure, which installs and centrally manages a quality platform that projects can join, through processes that are not always very flexible and are often considered restrictive. This team usually delivers on-demand services such as Quality Gates to approve or reject the delivery of a new release by an outsourcer, and sometimes performs assessments for the management.
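As a minimal sketch of what such a Quality Gate check can look like (the metric names and thresholds below are hypothetical, not taken from any particular tool):

```python
# Minimal Quality Gate sketch: approve or reject a delivery based on
# measured metrics. Metric names and thresholds are hypothetical.

RULES = [
    # (metric, threshold) -- the release is rejected on any failed rule
    ("blocker_violations", 0),
    ("critical_violations", 5),
    ("duplicated_lines_pct", 10.0),
]

def quality_gate(metrics: dict) -> tuple[bool, list[str]]:
    """Return (passed, list of failed rule descriptions)."""
    failures = []
    for name, threshold in RULES:
        value = metrics.get(name)
        if value is None:
            failures.append(f"{name}: missing measure")
        elif value > threshold:
            failures.append(f"{name}: {value} > {threshold}")
    return (not failures, failures)

# Example: metrics delivered by an outsourcer for a new release
passed, failures = quality_gate(
    {"blocker_violations": 2, "critical_violations": 3, "duplicated_lines_pct": 8.2}
)
print("OK" if passed else "KO", failures)
```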

Level three marks the switch to a proactive attitude: we no longer simply identify defects and react to them, but try to prevent them, as early as possible in the project lifecycle. This requires a Continuous Integration process and usually some Agile practices.

At level four, metrics are used to measure and control Software Quality in all aspects of project management. Service providers must comply with SLAs (Service Level Agreements) that incorporate these metrics. Benchmarks are performed to measure the quality of service among providers and compile a ranking highly appreciated by the Purchasing or Procurement department in charge of negotiating prices with them: “Sir, you went from 5th to 7th place this quarter. You have to do something. I am waiting for an effort from you”. IT management uses Quality measures to draw a map of their information systems and to define a strategy for each block of applications.

At level five, projects are encouraged to optimize the management of Quality by pioneering new practices such as code reviews, Test Driven Development or eXtreme Programming (XP). The results on Software Quality are measured and compared to determine which of these practices are most profitable in the short, medium or long term. Training sessions and workshops are conducted to spread the best practices and encourage their adoption.

Maturity and Technical Debt

Measuring Technical Debt is useful at all levels. At level one, we might find a few champions who know the metaphor, but these will be isolated initiatives, and moreover not always understood by management.

Assessments at level 2 may rely on an evaluation of the Technical Debt, especially when management hesitates about the strategy for an application. I have frequently used it this way, weighting the evaluation by criteria such as the criticality of the application and its alignment with the business. For example, within the same company, new high-growth markets will need new applications where time-to-market, reliability and performance will be critical to differentiate from the competition and gain precious market share. In older markets, we will find a strategy of maintaining margins: IT will therefore focus on compliance with budgets, and our assessment of the Technical Debt will then be aligned with the maintainability of applications.
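To illustrate that weighting, here is a minimal sketch, with purely illustrative weights and figures, of how the same raw debt can be read differently depending on the business strategy:

```python
# Hypothetical sketch: aligning a Technical Debt assessment with the
# business strategy of each application. Weights are illustrative only.

STRATEGY_WEIGHTS = {
    # growth market: reliability and performance dominate the assessment
    "growth": {"reliability": 0.4, "performance": 0.4, "maintainability": 0.2},
    # mature market: the focus shifts to maintainability (cost control)
    "mature": {"reliability": 0.2, "performance": 0.1, "maintainability": 0.7},
}

def aligned_debt_score(debt_by_axis: dict, strategy: str) -> float:
    """Weighted debt score (here in remediation days) for one application."""
    weights = STRATEGY_WEIGHTS[strategy]
    return sum(debt_by_axis[axis] * w for axis, w in weights.items())

# Same raw debt, two different strategic readings
debt = {"reliability": 120, "performance": 80, "maintainability": 300}
print(aligned_debt_score(debt, "growth"))   # 140.0
print(aligned_debt_score(debt, "mature"))   # 242.0
```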

It is always important to align Technical Debt with the business when you have to answer management's questions and propose different strategies and associated action plans: outsourcing of an application, knowledge transfer to another internal team (possibly composed of external developers), refactoring, reengineering (see our series of posts on this subject). For a quality assessment, evaluating the Technical Debt is a real added value for the customer.

At level 3, Technical Debt is measured daily, or at least frequently, by the project team in order to catch any inflation of the debt and its interest as early as possible. Technical Debt is thus managed at each iteration of the project, and not only at the end of it (Quality Gate) or occasionally (assessment) as at level 2.
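A minimal sketch of such a per-iteration check, assuming the debt is expressed in remediation days (the figures are invented):

```python
# Sketch of a per-iteration debt check for level 3: fail the build as soon
# as the debt grows, instead of discovering the inflation at the end of the
# project. Measurements are hypothetical figures in remediation days.

def check_debt_trend(history: list[float], tolerance: float = 0.0) -> None:
    """Raise if the latest measurement exceeds the previous one."""
    if len(history) >= 2 and history[-1] > history[-2] + tolerance:
        raise RuntimeError(
            f"Technical Debt grew from {history[-2]} to {history[-1]} days"
        )

debt_per_iteration = [310.0, 305.5, 298.0, 312.5]  # one measure per sprint
check_debt_trend(debt_per_iteration)  # raises: debt jumped at the last sprint
```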

At level 4, the Technical Debt is measured with the same objective of containing it and not letting the interest on the debt grow as at level 3, but in a normalized way across all projects. This translates into SLAs for external suppliers, for example (a sketch of how such rules could be checked follows the list):

  • No additional complex classes, no additional God classes.
  • 0 Blockers (KO on Quality Gate).
  • 0 Critical defects (OK if solved in the next release).
  • Code coverage has not decreased.
  • Documentation has not decreased.
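A minimal sketch, assuming each release delivers a small set of metrics (the names and figures are hypothetical), of how these rules could be checked; the critical-defect rule is omitted here because it needs release-to-release tracking:

```python
# Hypothetical sketch of checking the SLA rules above against the metrics
# of the previous and the new release.

def check_sla(prev: dict, curr: dict) -> list[str]:
    violations = []
    if curr["god_classes"] > prev["god_classes"]:
        violations.append("additional God classes")
    if curr["blockers"] > 0:
        violations.append("blockers present (KO on Quality Gate)")
    if curr["coverage_pct"] < prev["coverage_pct"]:
        violations.append("code coverage decreased")
    if curr["doc_pct"] < prev["doc_pct"]:
        violations.append("documentation decreased")
    return violations

prev = {"god_classes": 12, "blockers": 0, "coverage_pct": 71.0, "doc_pct": 34.0}
curr = {"god_classes": 12, "blockers": 1, "coverage_pct": 72.4, "doc_pct": 34.0}
print(check_sla(prev, curr))  # ['blockers present (KO on Quality Gate)']
```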

Normalized SLAs between outsourcers make it possible to perform the benchmarks I talked about before. Obviously, we can perform these same measures on internal teams, and therefore answer the universal question CIOs always have: “why do some teams deliver their projects on time and on budget while others constantly fail to do so?”
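Once the SLAs are normalized, the ranking itself is trivial; a toy sketch with invented figures:

```python
# Hypothetical benchmark across outsourcers: rank providers by the number
# of SLA violations accumulated over the quarter (fewer is better).

quarter_violations = {"provider_a": 3, "provider_b": 0, "provider_c": 7}
ranking = sorted(quarter_violations, key=quarter_violations.get)
for place, provider in enumerate(ranking, start=1):
    print(f"{place}. {provider} ({quarter_violations[provider]} violations)")
```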

The same measures can be used to build a map of the Technical Debt at the application portfolio level (by business). It is possible to build various kinds of portfolio representations along different axes, such as the following (a small data sketch follows the list):

  • Treemap along the axes Size x Technical Debt.
  • Quadrant along the axes Criticality x Technical Debt.
  • Representation of the portfolio as a 3D city along the axes Complexity x Technical Debt.
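As a hint of what such a map is built from, here is a minimal sketch (with invented applications and figures) of the data behind the quadrant view:

```python
# Sketch of the data behind these portfolio views: one record per
# application, positioned along the axes listed above. Figures are invented.

portfolio = [
    # (application, size_kloc, criticality 1-5, complexity, debt_days)
    ("billing",   450, 5, 8200, 960),
    ("crm",       220, 4, 3100, 410),
    ("intranet",   80, 2,  900, 150),
]

# Quadrant view: Criticality x Technical Debt -- the high-criticality,
# high-debt quadrant is where refactoring budgets should go first.
median_debt = sorted(app[4] for app in portfolio)[len(portfolio) // 2]
for name, _, criticality, _, debt in portfolio:
    quadrant = ("high" if criticality >= 4 else "low",
                "high" if debt >= median_debt else "low")
    print(f"{name}: criticality={quadrant[0]}, debt={quadrant[1]}")
```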

These representations provide valuable input to the management, and believe me, there is nothing that IT management appreciates more than information that helps them make decisions. I think I will dedicate a specific post to this subject: this one is already too long.

I do not make a significant distinction between level 4 and level 5: we will find the same processes, but level 5 generalizes the idea of experimenting with new methods that help to contain or even reduce Technical Debt, and widely spreads successful practices through training sessions and workshops.

Using the model

Please forgive me, CMMI purists, but you will understand that I am using the above model not as a framework for implementing processes or certifications, but as an explanatory model, logical and easy to understand. Besides, I consider it possible to mix levels, that is to say, to have some teams at level 1 and others at level 3 within the same organization.

For example, I often see companies (or public services) that could be considered at level 2, with a specific structure dedicated to Quality management. Because it is centralized, it is not always present on projects. At the same time, I have seen cases where some teams were setting up their own tools and processes regardless of this Quality entity, with the result that this structure eventually finds itself isolated, if not idle.

Another use of this model: it promotes the involvement of the hierarchy, which becomes increasingly important as you progress through the levels. You may well have some teams at level 2 or 3, but without the support of management, you will always remain blocked at level 1 with just a few teams of “champions”.

Another benefit, and final point in conclusion of this post: the model shows that quality comes from the bottom and spreads up to the top of the organization. To reach level 4, where the measures of Quality and Technical Debt are used by IT management to make strategic decisions, you first need to have these measures carried out by in-house projects or by external suppliers. But Quality cannot be decreed. Even if management shows a proactive attitude and acts as a driving force in the implementation of Quality programs, it cannot succeed without the support of the teams, who are often reluctant to face initiatives they consider designed to control or measure their productivity with unclear or even arbitrary rules.

This is where Technical Debt, because it is objective and measurable, is a valuable tool. The estimation of Technical Debt is now implemented on many projects, thanks to code analysis tools that support it. Agile methods, which promote its adoption and use, are also increasingly present. Two indispensable components for successful projects and better decision making by management.

This post is also available in Spanish (Leer este artículo en castellano) and in French (Lire cet article en français).
