Most organisations reward their project managers for delivering scope within a given timeframe and for a specified budget. While scope is usually measured in terms of user functions, most projects also include the decommissioning of legacy systems. Unfortunately, it is the decommissioning step that is most often compromised in the final stages of a project.
I’ve previously written about the need to measure complexity (see CIOs need to measure the right things). One piece of feedback I have received from a number of CIOs over the past few months is that it is very hard to get a business case for decommissioning over the line from a financial perspective. What’s more, even when decommissioning is included in the business case for a new system, it is very hard to stop it being cut during the inevitable scope and budget management that most major projects go through.
One approach to estimating the benefit of decommissioning is to list the activities that become simpler as a result of removing the system. These can include the removal of duplicated user effort, reduced operational management costs and, most importantly, a reduction in the effort to implement new systems. The problem is that the last of these is the most valuable but is very hard to estimate deterministically. Worse, by the time you do know the cost, it is likely to be too late to actually perform the decommissioning. For that reason, it is better to take a modelling approach across the portfolio rather than try to prove the cost savings using a list of known projects.
The complexity that legacy systems introduce to new system development is largely proportional to the cost of developing information interfaces to those systems. Because every additional system can potentially interface with every other system, the number of interfaces grows much faster than the number of systems, and the complexity they introduce can be modelled as a logarithmic function as shown below.
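As a rough illustration of why that is, if every system in a domain potentially exchanges data with every other (an assumption purely for illustration), then k systems have k × (k − 1) / 2 potential interfaces:

5 systems → 10 potential interfaces
10 systems → 45 potential interfaces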
Any model is only going to provide a basis for estimating, but I outline here a simple and effective approach.
Start by defining c as the investment per new system and n as the number of system builds expected over five years. The investment cost for a domain is therefore c times n. For this example, assume c is $2M and n is 3, giving an investment of $6M.
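Written out with the example figures:

Investment = c × n = $2M × 3 = $6M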
However, the legacy systems (l) add complexity at a rate that rises rapidly at first before trailing off (logarithmic). The complexity factor (f) depends on the ratio of the software development cost per system (c) to the average interface cost (i):
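There are many ways to express such a factor; one simple sketch that fits the worked figures below treats the ratio c/i as the base of the logarithm and scales up from a minimum of one:

f = 1 + log_(c/i) (l + 1)

With this form, each additional legacy system pushes the factor up (with diminishing effect), while a portfolio that is cheaper to interface with (a larger c/i ratio) pulls it down.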
For this example, assume l is 2 and i is $200K:
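Using the sketch above, the ratio c/i is $2M / $200K = 10, so:

f = 1 + log_10 (2 + 1) ≈ 1.48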
The complexity factor can then be applied to the original investment:
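Continuing with the same illustrative figures:

Complexity-adjusted investment = f × c × n ≈ 1.48 × $6M ≈ $8.9M
Saving from decommissioning ≈ $8.9M − $6M ≈ $2.9M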
In this example the five-year saving from decommissioning the legacy systems would be of the order of $2.9M. It is important to note that efficiencies in interfacing provide a similar benefit: as the cost of interfacing drops, the logarithm base increases and the complexity factor naturally decreases. Even if you argue that the ratio itself should be adjusted, doing so does not substantially change the complexity factor.
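For readers who want to test that sensitivity themselves, here is a short Python sketch. It uses the illustrative formula above (one possible model, not the only one) together with the example figures from this post.

import math

def complexity_factor(system_cost, interface_cost, legacy_count):
    """Illustrative complexity factor: logarithmic in the number of legacy
    systems, with the ratio of build cost to average interface cost as the
    logarithm base."""
    ratio = system_cost / interface_cost
    return 1 + math.log(legacy_count + 1, ratio)

def decommissioning_saving(system_cost, builds, interface_cost, legacy_count):
    """Five-year saving from decommissioning: the gap between the
    complexity-adjusted investment and the base investment."""
    base_investment = system_cost * builds
    factor = complexity_factor(system_cost, interface_cost, legacy_count)
    return base_investment * (factor - 1)

# Example figures: c = $2M per build, n = 3 builds, i = $200K per interface, l = 2
print(f"${decommissioning_saving(2_000_000, 3, 200_000, 2):,.0f}")  # roughly $2.9M

# Sensitivity: cheaper interfacing raises the logarithm base and shrinks the benefit
for interface_cost in (400_000, 200_000, 100_000, 50_000):
    saving = decommissioning_saving(2_000_000, 3, interface_cost, 2)
    print(f"interface cost ${interface_cost:,}: saving of roughly ${saving:,.0f}")

In this model the benefit shrinks as interfacing gets cheaper, but it does not disappear.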
While this is only one method of identifying a savings trend, it provides a good starting point for more detailed modelling of benefits. At the very least it supports an argument that the value of decommissioning legacy systems shouldn't be ignored simply because the exact benefits cannot be immediately identified.