We have seen in the previous post that, in order to provide quality in accordance with Service Level Agreements (SLAs), schedules and budgets, we need to know what we have, that is, the application portfolio, and also its quality.
This knowledge, based on quantitative and qualitative metrics, helps us better meet users' needs, as we will see in this second and final article.
Meet business needs
Doing more, better and faster with less is the main challenge facing project teams today. Management, especially among outsourcers, constantly tries to adjust resources to the level of project activity, and it is not uncommon to see one or more developers move to another team at the end of a project. Knowledge of the code is lost, and with it the reliability of workload estimates and schedules for a new version.
Even when the functional requirements are accurate, implementing them may take twice as long or more, depending on the quality of the code, its size, its complexity, its readability, and the difficulty of introducing defect-free changes.
Knowing the application and its state, from both a quantitative and a qualitative point of view, then allows you to define a strategy:
- Do the requested changes touch an area that presents maintainability risks? If you really have to modify any of these components, plan to test exhaustively in order to avoid regressions.
- A prior refactoring of this part of the application is necessary: there is too much duplicated code, which will make the requested changes costly to implement. It is better to rewrite some components than to waste time identifying all the pieces of copy-pasted code and changing each one. This will also simplify QA and lower the risk of bugs for users.
- This application is poorly documented, with very poor comments, and there are too many new developers on the team. Better to allow a few extra days at the start of the project to discover the code, or even re-document it. This may avoid nasty surprises and save time later, especially if requirements change during the project.
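The decision logic behind this kind of strategy can be sketched in a few lines. This is a minimal illustration, not a real tool's API: the metric names (`maintainability_risk`, `duplicated_lines_pct`, `comment_density_pct`) and the thresholds are hypothetical.

```python
# Minimal sketch: map code-quality metrics to the actions discussed above.
# Metric names and thresholds are hypothetical, chosen for illustration only.

def strategy(metrics):
    """Return the list of recommended actions for a component."""
    actions = []
    if metrics.get("maintainability_risk", 0) > 0.7:
        actions.append("plan exhaustive regression testing")
    if metrics.get("duplicated_lines_pct", 0) > 20:
        actions.append("refactor or rewrite before implementing changes")
    if metrics.get("comment_density_pct", 100) < 10:
        actions.append("budget extra days to discover and re-document the code")
    return actions or ["implement changes as planned"]

# A component with heavy duplication and almost no comments:
print(strategy({"duplicated_lines_pct": 35, "comment_density_pct": 5}))
```

The point is not the thresholds themselves but the principle: once the metrics exist, the strategy becomes an explicit, repeatable decision rather than a guess.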
It is not possible to define an action plan, estimate the level of resources needed, or set a schedule without knowing what you have and the quality of what you have.
Optimize the decision
Another lesson we can draw from the analogy with Capacity Management, one that I find very interesting, is the ability to guide users' requests based on objective data.
ITIL is clear about managing the infrastructure:
- In compliance with service level agreements, most often regarding the availability of resources.
- Cost-effectively: 100% availability is not an objective, because it would be too costly.
Let’s illustrate this with an example.
A marketing department using enterprise CRM (Customer Relationship Management) software complains about poor response times and slowdowns at the end of the year, when the application is most heavily used, and that SLAs are not being met. They ask for a server upgrade at no additional cost.
The Capacity Manager meets with them and explains that the power available on the server is 8,000 MIPS, which is not quite sufficient for optimal availability over the last two weeks of the year, but only 6,000 MIPS are used during the remaining eleven and a half months. Over the year as a whole, the service level agreements are met.
Based on this knowledge, he can offer several solutions:
- Upgrade the server as the users want, but at a higher cost corresponding to the additional resources.
- Provide 6,000 MIPS, or a little more to keep a safety margin, for 50 weeks of the year, and scale up to 9,000 MIPS for the remaining two weeks. The bill will be lower, even if an intervention cost is added to handle this additional flexibility.
- A final possible solution: downgrade the server to deliver only 6,000 MIPS throughout the year, at the cost of even greater degradation of service at the end of the year. This solution could be considered in case of budget restrictions and/or reduced marketing activity.
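The trade-off between these three options comes down to simple arithmetic. Here is a back-of-the-envelope sketch, assuming an invented price per MIPS-week and an invented one-off intervention fee, neither of which comes from the article:

```python
# Hypothetical cost comparison of the three capacity options above.
# The unit price and intervention fee are invented for illustration.
PRICE_PER_MIPS_WEEK = 0.01   # hypothetical cost per MIPS per week
INTERVENTION_FEE = 500       # hypothetical one-off cost to rescale the server

def annual_cost(mips_by_period, interventions=0):
    """Sum the cost over (mips, weeks) periods, plus any rescaling fees."""
    capacity = sum(mips * weeks * PRICE_PER_MIPS_WEEK
                   for mips, weeks in mips_by_period)
    return capacity + interventions * INTERVENTION_FEE

upgrade   = annual_cost([(9000, 52)])                # option 1: bigger server all year
flexible  = annual_cost([(6500, 50), (9000, 2)],     # option 2: 6,500 MIPS with margin,
                        interventions=2)             #           scaled up for 2 weeks
downgrade = annual_cost([(6000, 52)])                # option 3: accept peak degradation

print(upgrade, flexible, downgrade)
```

With these made-up prices, the flexible option beats the permanent upgrade, and the downgrade is cheapest but sacrifices the year-end SLA. The actual numbers would of course depend on the provider's pricing.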
When it comes to application development, we assume that quality must necessarily be maximal. A customer or stakeholder would be horrified if you told them that you cannot deliver 100% quality, even while their most important priorities, deadlines and budgets, keep moving, usually downward. More with less. Yet it is well known, because it is common sense, that fast and cheap do not rhyme with quality, especially since schedules are most often decided at the beginning of a project, when requirements are not yet fixed and workloads will often fluctuate upward.
If this occurs on your project and you expect delays, use a code analysis tool within a Continuous Integration process in order to, like this Capacity Manager:
- Demonstrate with quality measures (number of blocking violations, reviews, etc.) that technical debt is controlled and the level of code quality is maintained.
- Justify that additional time is needed by showing that the additional requirements implemented in the code will increase development complexity and QA testing (on these subjects, see Complexity and QA effort and The ABC metric).
- Propose solutions.
These solutions could be:
- Postpone the delivery date in order to maintain the expected level of quality, keep the technical debt under control, and allow a serious QA plan.
- Respect the original deadlines, but with a risk to quality and potential bugs for users. Refactoring will probably be needed later.
- Deliver a first version of the application on time and with acceptable quality, without excessive risk to users, but without all the expected features. A second version would follow later to complete the requirements and correct the errors found in the first.
“More with less” does not mean producing more code with fewer resources. I see many managers and customers who believe that, but we can learn from Capacity Management and ITIL that the true meaning is being able to respond to requests from more demanding users, in terms of functionality, deadlines, budgets and quality.
Implement a Continuous Integration process with a code analysis tool, Quality Gates before QA / Production, and SLAs (see for instance Use cases – Working seamlessly together), and you will get objective data that allows you to offer alternative strategies to your clients and managers, better answer their requests, give them visibility, and optimize decisions, based on knowledge and optimal control of your application portfolio.
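In practice, a Quality Gate boils down to a set of metric thresholds that a build must satisfy before reaching QA or Production. Here is a minimal sketch of one; the metric names and limits are invented for illustration, and real tools define their own conditions:

```python
# Hypothetical quality gate: metric names and thresholds are invented
# for illustration; real analysis tools define their own conditions.
THRESHOLDS = {
    "blocking_violations": 0,       # no blocking violations allowed
    "duplicated_lines_pct": 5.0,    # at most 5% duplicated code
    "avg_cyclomatic_complexity": 10.0,
}

def quality_gate(metrics):
    """Return the list of failed checks; an empty list means the gate passes."""
    return [name for name, limit in THRESHOLDS.items()
            if metrics.get(name, 0) > limit]

failures = quality_gate({
    "blocking_violations": 2,
    "duplicated_lines_pct": 3.1,
    "avg_cyclomatic_complexity": 12.4,
})
print(failures)
```

Run on every build, such a gate turns "the quality is maintained" from an opinion into a verifiable statement you can show to clients and managers.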
More and better with less: deliver quality on time and on budget.