We saw in the previous post how to initiate a reengineering with SonarQube, starting with a functional redistribution and a redesign of our application.
Now we can go a little deeper into the code to identify processing flows that would be good candidates for restructuring. Continue reading
We defined our reengineering project as rewriting our legacy application in a new language or migrating it to a new technology, as opposed to a refactoring, which involves reorganizing the code and correcting certain defects in order to make it more maintainable and reduce its technical debt.
We saw refactoring plans of varying ambition, with the help of SonarQube and the SQALE plugin, from resolving the most critical defects to reducing the technical debt down to an ‘A’ level (SQALE rating).
However, for the same Legacy application, is it more worthwhile to carry out a reengineering project or ‘just’ a refactoring? Continue reading
Reengineering does not, in everyone’s mind, mean rewriting a Legacy application in another, generally more recent, language, but that is nevertheless the option we chose.
When it is ‘only’ a matter of reorganizing the code to make it more maintainable, without porting it onto a new hardware or software platform – such as migrating a Mainframe Cobol application to a Unix architecture – I prefer to speak of refactoring.
Let me remind you that this blog has no academic ambitions, so I will not worry about meticulously exact definitions, which most often lead to quadripilotectomic (1) discussions among specialists who have nothing better to do than quibble over every comma. Continue reading
When it comes to calculating an ROI, I keep it simple: I assume that we will reduce maintenance costs by an amount equal to the reduction of the technical debt.
Some may find this hypothesis simplistic and therefore debatable, but our ambition is not to produce numbers of absolute, exact precision – that would be pretentious and unrealistic – but to provide management with the elements that will facilitate its decision.
And I think that managers prefer a simple and clear hypothesis, even if not completely accurate, to a complex formula that is not necessarily more realistic. Continue reading
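This simple hypothesis can be sketched in a few lines of code. The function name and all figures below are hypothetical, purely for illustration:

```python
# Sketch of the simple ROI hypothesis described above: maintenance
# savings are assumed equal to the technical debt removed by the
# refactoring. All figures are hypothetical, for illustration only.

def simple_roi(debt_reduction_days, refactoring_cost_days):
    """Return the ROI of a refactoring, assuming maintenance savings
    (in man-days) equal to the technical debt reduction."""
    savings = debt_reduction_days
    return (savings - refactoring_cost_days) / refactoring_cost_days

# Example: removing 300 man-days of debt for a 150 man-day effort
# yields an ROI of 100% under this hypothesis.
print(simple_roi(300, 150))  # 1.0
```

A deliberately crude model, but one a manager can verify in his head – which is exactly the point made above.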
As explained in the previous post, we have not customized the SonarQube Quality Profile and the SQALE model for our Legacy application according to its context, as we normally should.
In fact, I use the ‘out of the box’ results in order to illustrate one possible approach to estimating the cost of refactoring this application, and to present some ideas to the project team and management for further action – or possibly see them rejected.
In other words, it’s the process that interests us more than the results of our application, at least in the context of these articles.
What can we propose, based on the SQALE plugin? Continue reading
In order to estimate the cost of refactoring our Legacy application, I will use the SQALE plugin for SonarQube, more commonly employed to measure technical debt.
We have already presented this plugin with a PL/SQL Legacy application. So just remember that the SQALE plugin is based on the SQALE quality model and, I would add, on a method to adapt that model by aligning it with various business objectives, the technology, or the context of the application. Continue reading
We have seen in the previous post how to use the SonarQube dashboard to estimate the effort of the characterization tests recommended by Michael Feathers in his book ‘Working Effectively with Legacy Code’.
We categorized the various components of our Legacy application (Microsoft Word 1.1a) into different groups: the simplest functions, with a Cyclomatic Complexity (CC) below 20 points; the complex and very complex functions, up to 200 CC points; and finally 6 ‘monster’ components.
We built a formula based on the Cyclomatic Complexity and a readability factor, in order to evaluate the testing effort on each of these groups. Continue reading
In the two previous posts, we presented the definition of characterization tests as proposed by Michael Feathers in his book “Working Effectively with Legacy Code”.
We showed briefly how we can use such tests to acquire knowledge of the application’s behavior. I say briefly because, ideally, we should have developed and presented some tests as examples, but that would require several posts, and this series is already very long. Just have a look at Michael Feathers’ book if you want to go deeper into this.
We just have to remember that these tests will facilitate the transfer of knowledge from our Legacy application (Microsoft Word 1.1a), and any subsequent refactoring or reengineering operation will be faster and safer. Continue reading
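To give at least a flavor of what such a test looks like, here is a minimal sketch in the spirit of Michael Feathers. The `word_count` routine is a hypothetical stand-in for a legacy function, not actual code from Word 1.1a:

```python
# Minimal sketch of a characterization test: instead of asserting what
# the code *should* do, we first run it, observe what it *actually*
# does, then lock that behavior in an assertion.

def word_count(text):
    # Hypothetical legacy routine whose exact behavior we want to
    # characterize (does it count empty strings? multiple spaces?).
    return len(text.split())

# Step 1: run the routine with a representative input and simply
# observe what it returns - no assumption about the 'right' answer.
observed = word_count("  hello   legacy  world ")

# Step 2: freeze the observed behavior (here, 3) in an assertion.
# Any later refactoring that changes this behavior makes the test
# fail, which is exactly the safety net we want.
assert word_count("  hello   legacy  world ") == 3
```

The assertion documents the application as it is, not as its specification says it should be – which is precisely the knowledge transfer discussed above.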
In our previous post, we presented Michael Feathers and his book ‘Working Effectively with Legacy Code’, according to which the absence of unit tests is the defining characteristic of a Legacy application.
He proposes the concept of characterization tests to understand the behavior of the application, in order to qualify what it actually does, which is not exactly the same as discovering through the code what it is supposed to do.
So what about our Legacy application, which does not already have unit tests? Can we address one of our three scenarios – transferring knowledge of the application to another team – with unit tests? Would it be easier, especially if we also have to think about the other two strategies to evaluate: refactoring and reengineering? Continue reading
Back from summer vacation, and back to this series of posts about using SonarQube with a Legacy C application: in this case, the first version of Word published by Microsoft in 1990.
We posed the following hypothesis: Microsoft has just been sold and its new owner asks you, as a Quality consultant, to recommend a strategy for this software.
Do not think that this never happens: software vendors are acquired every day, and R&D teams and software code are at the heart of these acquisitions.