Best of both worlds

Following the previous post What is the first question? about the two major questions concerning application quality – costs vs. risks – someone asked whether there are processes or best practices that address these two objectives.

Is there a “best of both worlds” way to produce defect-free software without exceeding budget and schedule?


Well, you don’t really need the best of both worlds. For instance, if I have to do an assessment for a banking system, I will first check what the priorities are:

  • Asset management: time to market, no bugs, no performance or security problems. Money is not an issue: they can double the team to stay on schedule. Maintainability is not much of a worry, as the business evolves rapidly and the application could be obsolete within the next 3 years.
  • Retail banking: old systems, mostly Mainframe Cobol, with a lot of outsourcing because budget is the main concern. Customers are used to delays of two or three weeks for every major release, and to some bugs.

It is really important to qualify the business/IT alignment.
Tell the first team (asset management) that you will measure maintainability drift in order to avoid budget drift, or tell the second team (retail banking) that you will put in place a zero-tolerance policy on violations of performance best practices: both would answer, “So what? This does not matter.”

I do not believe in a miracle recipe or a magic potion that could lead to a best-of-both-worlds solution. Now, let’s imagine I am asked to take charge of a project without knowing its ‘alignment’; this is what I would do:

1. ‘You cannot control what you cannot measure’ (Tom DeMarco). I would first put in place a code analysis tool that can give me a map of the quality of the application. See Measure and Control.
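To make this concrete, here is a minimal sketch of what a first, very rough ‘quality map’ could look like: per-file size and comment ratio collected from a source tree. The extensions, the notion of a comment and the "src" directory are assumptions made for this example only; a real code analysis tool goes much further (complexity, violations, architecture checks, etc.).

```python
import os

# Rough per-file indicators: size and comment ratio.
# Extensions and the comment heuristic are assumptions for this sketch only.
SOURCE_EXTENSIONS = {".java", ".py", ".cbl", ".sql"}

def quality_map(root):
    result = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if os.path.splitext(name)[1].lower() not in SOURCE_EXTENSIONS:
                continue
            path = os.path.join(dirpath, name)
            with open(path, errors="ignore") as source:
                lines = [line.strip() for line in source]
            loc = sum(1 for line in lines if line)
            comments = sum(1 for line in lines if line.startswith(("#", "//", "*")))
            result[path] = {"loc": loc, "comments": round(comments / loc, 2) if loc else 0.0}
    return result

if __name__ == "__main__":
    # Print the 20 biggest files first: a crude "where should I look?" map.
    for path, m in sorted(quality_map("src").items(), key=lambda kv: -kv[1]["loc"])[:20]:
        print(f"{m['loc']:6d} lines, {m['comments']:.0%} comments: {path}")
```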

2. I would decide on a strategy for managing this application with its experts: this area is risky, don’t touch it; if you must modify one of its components, test it exhaustively; rewriting that part of the application to eliminate duplicated code and make these components reusable will be a priority, otherwise it will cost us too much to maintain later; documenting this area can wait; how much time would each of these take, etc.
This leads to an action plan with different priorities and different costs over the short, mid and long term. You need to know the application’s ‘alignment’ to build it.
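As one small illustration of an item on such a plan, the duplicated-code question can at least be sized with a rough sketch like the one below: it hashes normalized windows of consecutive lines and reports blocks that appear in more than one place. The window size, the normalization and the file names are arbitrary assumptions; real analysis tools detect clones far more reliably.

```python
import hashlib
from collections import defaultdict

WINDOW = 6  # number of consecutive lines compared; arbitrary for this sketch

def normalize(line):
    # Ignore indentation and case so that trivially reformatted copies still match.
    return " ".join(line.split()).lower()

def duplicated_blocks(files):
    """Map a hash of each WINDOW-line block to the places where it appears."""
    seen = defaultdict(list)
    for path in files:
        with open(path, errors="ignore") as source:
            lines = [normalize(line) for line in source if line.strip()]
        for i in range(len(lines) - WINDOW + 1):
            digest = hashlib.md5("\n".join(lines[i:i + WINDOW]).encode()).hexdigest()
            seen[digest].append((path, i + 1))
    return {h: locations for h, locations in seen.items() if len(locations) > 1}

if __name__ == "__main__":
    # File names are hypothetical examples.
    for locations in duplicated_blocks(["billing.py", "invoicing.py"]).values():
        print("possible duplication:", locations)
```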

3. Code defects and best-practice violations are easy to find and to fix. Run code analysis regularly and put in place a process to fix the new violations it finds, according to your action plan. This is a first step toward a Continuous Improvement process. On this subject, see Continuous Improvement.
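A minimal sketch of the ‘fix the new ones’ part of that process, assuming each analysis run can be exported as a list of (file, rule) findings — the file names and rule identifiers below are invented:

```python
def new_violations(previous_run, current_run):
    """Return findings present in the current analysis but not in the previous one.

    Each run is assumed to be an iterable of (file, rule_id) tuples exported
    from whatever code analysis tool is in place.
    """
    baseline = set(previous_run)
    return [finding for finding in current_run if finding not in baseline]

# Hypothetical example data: only the new finding should be reported and planned for.
previous = [("Order.java", "avoid-sql-in-loop"), ("Customer.java", "empty-catch-block")]
current = [("Order.java", "avoid-sql-in-loop"), ("Invoice.java", "hardcoded-credentials")]

for file_name, rule in new_violations(previous, current):
    print(f"new violation to plan for: {rule} in {file_name}")
```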

4. Requirements defects (missing, erroneous or misunderstood requirements) are very costly and very difficult to identify. This is where you need a methodology involving peer/user reviews, prototyping, agile practices, etc. I will let you choose the one you prefer, but in my opinion it must:

  • Involve the user: he or she has to validate both usability and functionality at some point(s). I mean end users or representative users, but they must be part of the project and take on their responsibilities. How many requirement defects are in fact requirements that have changed? As a project manager, I want it to be clear that I am not responsible for those.
  • Be compatible with the technology. I don’t see many Scrum or agile methods on Mainframe Cobol projects.

5. Tests. They can weigh heavily on budgets and schedules. First, define the level of QA to be done. If the application is small and not very complex, unit and integration tests could be sufficient. For big, complex applications, the most difficult part is to define how much to invest in test plans, QA teams and tools.
This is where collaboration between the project and QA teams is important: you can show them the quality map and your action plan. For instance, a customer asked for an audit of the scalability of an application that had to handle three times the data and three times the users.
The first code analysis identified potential performance issues, which were fixed. The results of subsequent analyses were shown to the QA teams during the development phase to help them plan their tests. It was agreed to automate the performance tests, since that was easy to do, and to focus manual testing on functionality. A Quality Gate was put in place so that the quality of the development could be demonstrated and receive a GO to the next phase.
You can spend a lot of time and money on tests: get as much visibility as you can, so that you can focus your investment on the areas where it pays off most.
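To make the Quality Gate idea concrete, here is a minimal sketch: a few gate criteria that the measured results must satisfy before the GO to the next phase. The metric names and thresholds are invented for the example; a real gate uses whatever criteria the project and QA teams have agreed on.

```python
# Thresholds invented for the example.
GATE = {
    "critical_performance_violations": 0,   # zero tolerance for the scalability audit
    "critical_security_violations": 0,
    "new_violations": 10,                   # at most 10 new findings since the last release
}

def quality_gate(measured):
    """Return (passed, reasons) for a dict of measured values."""
    reasons = [
        f"{name} = {measured.get(name, 0)} (limit {limit})"
        for name, limit in GATE.items()
        if measured.get(name, 0) > limit
    ]
    return (not reasons), reasons

passed, reasons = quality_gate({"critical_performance_violations": 2, "new_violations": 4})
print("GO" if passed else "NO GO", reasons)
```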

6. I like feedback from the field. Collect the number of defects in production and correlate it with the number of violations found by regular code analysis. Show the results to your teams and to your management. Usually you will be able to demonstrate that things are moving in the right direction, and everybody is proud to see it.
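A small sketch of that correlation, with made-up figures per release; seeing the two series decrease together is exactly the kind of result worth showing to the teams and to management.

```python
from math import sqrt

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Made-up figures per release: violations found by code analysis vs. defects in production.
violations_per_release = [420, 380, 350, 300, 260]
production_defects =     [ 31,  28,  24,  19,  15]

print(f"correlation: {pearson(violations_per_release, production_defects):.2f}")
```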

7. Try to have the right tooling: a code analysis tool plugged into the development environment, the change management tool and even some of the testing tools. What is possible depends on the technologies involved.
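As an illustration only, here is a hypothetical sketch of one such plug: pushing a critical finding from the code analysis into the change management tool. The endpoint, payload and field names are pure assumptions, not the API of any real product.

```python
import json
import urllib.request

# Hypothetical endpoint; replace with whatever your change management tool exposes.
CHANGE_MGMT_URL = "https://change-tool.example.com/api/tickets"

def open_ticket(violation):
    """Create a ticket for one analysis finding (payload format is an assumption)."""
    payload = {
        "title": f"Fix {violation['rule']} in {violation['file']}",
        "priority": "high" if violation["critical"] else "normal",
        "source": "code-analysis",
    }
    request = urllib.request.Request(
        CHANGE_MGMT_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.status

# Example call with an invented finding (the host above does not exist; this only
# shows the shape of the integration).
open_ticket({"rule": "hardcoded-credentials", "file": "Invoice.java", "critical": True})
```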

8. You will need teams with a good level of maturity.

This is a long road, and it could take a few years to have all of this in place.
So again, I would choose the best approach according to the IT objectives and alignment, rather than chasing a “best of both worlds” strategy.

This post is also available in Spanish (Leer este articulo en castellano) and in French (Lire cet article en français).
