Last week I presented the main axes of Capacity Management according to ITIL.
If we try to apply these best practices to the domain of Quality, what lessons can be learned? What would Quality management look like as an analogy of Capacity Management? Is it possible to do “more with less”, as more and more Production teams are asked to do?
The primary objective of Capacity Management is to deliver capacity, that is to say, the resources you need: a development or QA server, a little more disk space for a database, more CPU for a virtual machine, etc.
ITIL also adds that Capacity Management must be ensured in accordance with SLAs, in a timely and cost-effective manner.
If we apply this approach to Quality, we could say that the objective of “Quality Management” is to deliver quality, in accordance with service level agreements, deadlines and budgets. How do we get there?
Know what you have
In order to provide capacity, you need to know what you have. This is the first step and the basis of a Capacity plan. You cannot manage what you do not know.
Virtualization has resulted in the creation of technology silos, and the ‘Prod’ guys must be able to count the number of physical or virtual servers for each OS, virtualization platform, etc. It is the same for application portfolios, which have accumulated various technologies over time: mainframe, client-server, software packages, newer technologies (mostly J2EE).
In addition, over its history, a business will move into different markets, create new lines of business, and sometimes acquire other companies, all of which multiplies the applications supporting the business. The reorganization of the banking sector in Spain is causing mergers and acquisitions between banks, and each time the question arises: which applications to keep and which to throw out? This may seem surprising, but the IT departments of some of these banks do not even know how many applications they have, much less which technologies those applications use or the links between them.
Finally, people come and go, change jobs or leave the company, and application knowledge is gradually lost with each move. A development manager in a government administration told me about a high-level meeting between directors and advisors of the Ministry, held to implement a recent decision from Brussels requiring a change in customs regulations affecting hundreds of millions, even billions, of euros. But nobody could say whether these changes could be implemented, because nobody knew their impact on the systems collecting this money. They had to invite an old Cobol programmer to the meeting to ask him whether it was possible to implement these changes in a timely manner. He told me that such a meeting would not be possible today, and when I asked him why, he replied that the Cobol programmer had since retired and all knowledge of these applications was lost with him.
How many times have I seen a client who wants an application assessment but does not know how many programs or files the application contains? And I am not even talking about lines of code.
The first step of a Quality management plan should therefore be based on an analysis of the application portfolio. A code analysis tool will provide quantitative data on the size of applications, the number of objects, their complexity, the level of documentation (comments) or the amount of duplicated code, for example.
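To make this concrete, here is a minimal sketch in Python of the kind of inventory such a tool automates. The file extensions, the technology labels, the comment markers and the ./portfolio path are all assumptions invented for the example; a real analysis tool recognizes many more languages and computes far richer metrics (complexity, duplication, and so on).

```python
import os

# Hypothetical mapping of file extensions to a technology label;
# a real portfolio analysis tool recognizes far more languages.
TECHNOLOGIES = {".java": "J2EE", ".cbl": "Cobol", ".sql": "Database"}

def inventory(root):
    """Walk a source tree and return per-technology size metrics."""
    stats = {}  # technology -> {"files": n, "loc": n, "comments": n}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            ext = os.path.splitext(name)[1].lower()
            tech = TECHNOLOGIES.get(ext)
            if tech is None:
                continue
            entry = stats.setdefault(tech, {"files": 0, "loc": 0, "comments": 0})
            entry["files"] += 1
            with open(os.path.join(dirpath, name), errors="ignore") as f:
                for line in f:
                    stripped = line.strip()
                    if not stripped:
                        continue
                    entry["loc"] += 1
                    # Crude assumption: '//' or '*' starts a comment line.
                    if stripped.startswith(("//", "*")):
                        entry["comments"] += 1
    return stats

if __name__ == "__main__":
    for tech, m in inventory("./portfolio").items():
        ratio = 100.0 * m["comments"] / max(m["loc"], 1)
        print(f"{tech}: {m['files']} files, {m['loc']} lines, "
              f"{ratio:.1f}% comment lines")
```

Even a crude count like this answers the first questions a portfolio assessment raises: how many applications, how many objects, in which technologies, and how well documented.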
Know the quality of what you have
The analysis of the application portfolio will also provide qualitative measures about two kinds of risks, illustrated in the sketch after this list:
- Risks for the user: bugs, errors, defects preventing or impeding the correct use of the application, performance problems generating high response times, security issues, data corruption, etc.
- Risks for maintainability: some quality defects will not create a risk of failure for the user, but will weigh on the maintainability of the application, with consequences in schedule delays and costs.
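As a hedged illustration of this split, the sketch below buckets hypothetical analysis findings into the two risk families; the rule names and the mapping are invented for the example and do not come from any particular tool's rule set.

```python
# Hypothetical rule-to-risk mapping, for illustration only.
USER_RISK = {"null-dereference", "sql-injection", "unbounded-query"}
MAINTAINABILITY_RISK = {"duplicated-code", "high-complexity", "no-comments"}

def classify(findings):
    """Split analysis findings into user-facing vs maintainability risks."""
    report = {"user": [], "maintainability": [], "unclassified": []}
    for rule, location in findings:
        if rule in USER_RISK:
            report["user"].append((rule, location))
        elif rule in MAINTAINABILITY_RISK:
            report["maintainability"].append((rule, location))
        else:
            report["unclassified"].append((rule, location))
    return report

findings = [("sql-injection", "Orders.java:120"),
            ("duplicated-code", "Billing.java:45")]
print(classify(findings))
```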
It is important to know the level of non-quality and its nature for each application, so that you know how to …
Answer user requests
“More with less” means answering user requests faster and better, without increasing the resources available to address them. The knowledge you gain through a code analysis tool allows you to better estimate the effort required to implement the changes requested by the business.
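As a purely illustrative sketch of that idea, the function below adjusts a baseline estimate using the metrics gathered earlier; the formula and all the coefficients are assumptions made up for the example, not a calibrated estimation model.

```python
def estimate_effort(base_days, avg_complexity, comment_ratio, duplication_ratio):
    """Adjust a baseline estimate using code-quality metrics.

    All weights below are illustrative assumptions, not calibrated values.
    """
    factor = 1.0
    factor += 0.05 * max(avg_complexity - 10, 0)        # penalty for complex code
    factor += 0.5 * duplication_ratio                   # duplicates: fix in N places
    factor += 0.3 * max(0.2 - comment_ratio, 0) / 0.2   # penalty for sparse comments
    return base_days * factor

# A 10-day change in a complex, poorly documented, duplicated code base:
print(estimate_effort(10, avg_complexity=18, comment_ratio=0.05,
                      duplication_ratio=0.15))  # -> 17.0 days
```

The exact numbers matter less than the principle: an estimate grounded in measured facts about the code is more defensible than one based on memory of a code base nobody fully knows anymore.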
I often see users complain about project teams' delays in delivering a new version and about the unreliability of schedule forecasts. These teams often experience high turnover, especially when the work is outsourced to a provider who, trying to match its staffing level to the activity of its various projects, keeps moving developers from one team to another.
So how can teams correctly quantify changes in code that they do not know?
This will be the subject of the next post.