16 Apr
The True Cost of Ownership in a Legacy Infrastructure
Modern technology and exciting performance or manageability improvements are a natural reason to upgrade data center systems. For many businesses, this is the driving factor in looking at hyperconvergence as the model for their next platform. They have high-performing applications that regularly demand even more from the infrastructure, and upgrades must provide substantially increased performance for the growing workload.
There is, however, an increasingly prevalent secondary reason for data center upgrades being set in motion: money. Of course, most business decisions eventually come back to money in one way or another. In the case of a looming data center upgrade, there’s often an even more pressing motivator than the potential for increased performance, capacity, or (insert other technical benefit). The individual or committee holding the checkbook has looked at the cost of upkeep on the organization’s existing solutions, and they are shocked.
Any substantial purchase in the data center likely comes with at least two components: the initial capital purchase (hardware, software, professional services), as well as ongoing costs like support from the vendor or man-hours for the staff maintaining the equipment. In some cases, customers today are finding that it costs as much as or more to maintain their current solution than it does to just buy a brand new one!
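That capex-plus-opex comparison can be made concrete with a back-of-the-envelope calculation. The sketch below is a minimal illustration of the idea; every dollar figure and hour count is a hypothetical placeholder, not real vendor pricing.

```python
# Minimal TCO sketch: initial purchase plus yearly upkeep.
# All figures below are hypothetical placeholders.

def tco(capex: float, annual_support: float, annual_admin_hours: float,
        hourly_rate: float, years: int) -> float:
    """Total cost of ownership over a number of years."""
    annual_opex = annual_support + annual_admin_hours * hourly_rate
    return capex + annual_opex * years

# Keeping the legacy array: no new capex, but a steep support renewal
# and lots of specialized admin time.
keep_legacy = tco(capex=0, annual_support=80_000,
                  annual_admin_hours=500, hourly_rate=75, years=3)

# Replacing it: new capex, but cheaper support and far less admin time.
buy_new = tco(capex=150_000, annual_support=20_000,
              annual_admin_hours=100, hourly_rate=75, years=3)

print(f"Keep legacy (3 yr): ${keep_legacy:,.0f}")  # $352,500
print(f"Buy new (3 yr):     ${buy_new:,.0f}")      # $232,500
```

With these made-up numbers, three years of maintaining the old system costs more than buying and running a new one, which is exactly the situation the paragraph above describes.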
The astronomical cost of some support agreements leaves some organizations with no choice but to retire old systems and purchase new ones. This is no accident, of course. I’m no conspiracy theorist; some simple business logic can explain why a vendor would build in a pricing structure that necessitates a hardware refresh every few years. If it didn’t become expensive to keep old hardware, the vendor would lose money by: having to constantly replace failing hardware, burning time troubleshooting outdated models, and not progressing technically because legacy systems have to be accounted for. This is valid, and I’m not opposed to this scheme for keeping business moving forward. It doesn’t seem to be working anymore, though.
While this plan may have worked in the past when the industry giants basically monopolized the data center space, it’s starting to become quite risky. The problem with the model is that every few years when the forced refresh comes, the customer has the opportunity to replace aging systems with another vendor’s product. With myriad choices to suit any need, the incumbent is often left out in the cold after having been replaced with a more impressive solution from a newer, more agile company at less than the cost of maintenance renewal on the old vendor’s system. At the end of the day, business is about the bottom line, and who wouldn’t want a faster, bigger system for less money?
Care and Feeding
An expensive component of the TCO of any system is day-to-day upkeep. Many enterprises employ multiple people to focus solely on their storage systems, for example. Storage has historically been quite a specialized discipline, so from a salary perspective this may be a very expensive team of administrators. In light of recent developments in the storage and hyperconvergence markets, there are a few ways by which legacy systems end up being too expensive to keep.
From an implementation perspective, legacy systems are often non-trivial to expand or upgrade. This means that one or more of the highly paid administrators on that team will have to burn hours upon hours on implementation before the system (or upgrade) is even usable. In a modern data center, this is just unacceptable. The agility available from newer systems allows for changes on the fly, often within a few mouse clicks. It’s not an exaggeration to say that some legacy systems can take a couple of hours at the command line to upgrade, versus a few clicks in the GUI of a newer system. Remember, every hour adds to the TCO of the system.
Many hours are also burned troubleshooting issues in a data center. To some degree, this is unavoidable due to the nature of technology. But some of the time spent troubleshooting can certainly be attributed to unnecessary complexity brought about by a legacy system’s presence in a modern data center. Complexity increases troubleshooting difficulty; therefore, decreasing complexity is quite likely to lead to fewer hours spent troubleshooting. Simplicity rules the modern data center, and many legacy systems just don’t fit the bill.
A final, potentially overlooked cost that chips away at the economy of legacy systems is lost revenue due to outages. This won’t be seen when looking at the budgetary numbers for a given system or project, but it’s almost a sure thing that a production system being unavailable costs the business money.
One must be careful when looking at this cost, because revenue lost when a system is down due to human error or organizational deficiencies doesn’t necessarily impact the TCO. That problem may have existed regardless of what technology was in place. However, if the system is inherently unreliable due to architecture, code, or hardware, outages caused by these components directly increase TCO. Can the organization afford not to replace a legacy system when three outages a year cost as much as an upgrade would, and last year there were five?
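The three-versus-five outage question above is simple arithmetic, and it can help to see it laid out. The following sketch uses entirely hypothetical numbers for outage duration, hourly revenue, and upgrade cost.

```python
# Hedged sketch of the outage-cost reasoning above.
# All numbers are hypothetical.

def annual_outage_cost(outages_per_year: int, hours_per_outage: float,
                       revenue_per_hour: float) -> float:
    """Revenue lost to downtime in one year."""
    return outages_per_year * hours_per_outage * revenue_per_hour

upgrade_cost = 120_000  # hypothetical cost of a full replacement

# Three 8-hour outages at $5,000/hour already equal the upgrade's price...
print(annual_outage_cost(3, 8, 5_000))  # 120000
# ...and last year there were five.
print(annual_outage_cost(5, 8, 5_000))  # 200000
```

Under these assumptions, the "do nothing" option quietly costs more than the upgrade it was meant to avoid, which is the point of the paragraph above.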
Technical types (myself included) will argue all day about which system is technically superior. This is especially true when it comes to the systems that support virtualized workloads. Regardless of which solution wins on technical merit, the bottom line usually carries the most weight. And as long as all relevant factors are considered, this is usually sound decision making. When evaluating potential data center solutions, and especially when it comes to upgrade time, be sure to review the true cost of owning legacy hardware. Once an organization has uncovered what it truly costs them to avoid upgrading, a rip-and-replace upgrade can seem like a pleasant experience in comparison.