10 May
Infrastructure Sizing: Hyperconverged vs. Traditional Infrastructure
When it comes to hyperconverged infrastructure, one of the most common questions is a pretty simple one: how does sizing a traditional data center environment differ from sizing a data center built on hyperconvergence?
Let’s take a look at the way that a traditional data center environment is implemented and maintained, particularly in the SMB and small midmarket space, but also in some enterprises. In most organizations, there is some kind of replacement cycle for all IT equipment, including the systems that comprise the data center. At regular intervals, the IT department is provided capital budget funds to pay for this replacement. The people responsible for the environment consider both current needs as well as any application needs that are expected to arise within the next 3 to 5 years, which is often the range of time that data center equipment is kept around.
The challenge here is that IT pros need to individually size each resource and attempt to predict the future at the same time. In many cases, in order to make sure that the environment can support needs for the foreseeable future, everything is purchased up front for that period. There is a lot of downside to this approach:
- You may overbuy. This leaves money on the table and the organization may not enjoy as positive a return on its investment as it could.
- High sunk costs. By buying so much equipment up front, the organization incurs a great deal of cost that can't be leveraged until it grows into the environment.
- You may underbuy. Mid-cycle upgrades aren't always easy to deal with, but they're really common as organizations either discover that they didn't buy enough infrastructure up front or new business needs emerge.
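The overbuy risk is easy to see with a little arithmetic. Here is a minimal sketch, with purely illustrative numbers (the growth rate, cycle length, and starting capacity are assumptions, not figures from any real environment), of how buying peak projected capacity on day one leaves much of it idle:

```python
# Hypothetical illustration: sizing a legacy environment up front for a
# multi-year cycle means buying peak projected capacity on day one.

def upfront_purchase_tb(current_tb: float, annual_growth: float, years: int) -> float:
    """Storage (TB) bought on day one to cover projected compound growth."""
    return current_tb * (1 + annual_growth) ** years

# Assumed numbers, purely for illustration.
needed_now = 50.0  # TB used today
bought = upfront_purchase_tb(needed_now, annual_growth=0.20, years=5)
unused_day_one = bought - needed_now  # capacity paid for but idle on day one

print(round(bought, 1))          # ~124.4 TB purchased up front
print(round(unused_day_one, 1))  # ~74.4 TB sunk and unleveraged at the start
```

In this made-up scenario, well over half the purchased capacity sits unused at the start of the cycle, and if actual growth falls short of the 20% assumption, it stays unused.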
On the positive side, with hyperconverged infrastructure, you add new resources a node at a time, with each node holding compute, RAM, and disk. Depending on the hyperconverged solution you select, you may have the ability to size – at least to a point – the individual nodes that you add.
On the downside, because you have to add all resources as you add new nodes, you may be forced to add compute or RAM even when you just need more storage capacity. However, this is the nature of a linear scale-out solution; everything is added as the environment scales so that adding a node does not significantly imbalance any one resource. Most vendors that sell hyperconverged appliances provide the ability to configure individual nodes and also offer a variety of node options so that organizations can choose, for example, a storage-heavy node, even though that node may still add some compute and RAM.
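The bundling effect can be sketched in a few lines. The per-node specs below are hypothetical placeholders, not any vendor's actual appliance configuration; the point is only that meeting a storage target in fixed node-sized bundles also grows compute and RAM:

```python
# Hypothetical sketch: hyperconverged resources arrive in fixed per-node
# bundles, so hitting a storage target also adds compute and RAM.
from math import ceil

# Assumed specs for a generic node -- illustrative values only.
NODE = {"cores": 16, "ram_gb": 256, "storage_tb": 40}

def nodes_for_storage(target_tb: float) -> int:
    """Smallest node count whose combined storage meets the target."""
    return ceil(target_tb / NODE["storage_tb"])

def cluster_totals(node_count: int) -> dict:
    """Total resources delivered by that many identical nodes."""
    return {resource: qty * node_count for resource, qty in NODE.items()}

n = nodes_for_storage(100)  # need 100 TB -> 3 nodes
totals = cluster_totals(n)
print(n)       # 3
print(totals)  # storage goal met, but cores and RAM grew along with it
```

A storage-heavy node option changes the ratios in the `NODE` bundle, but the principle holds: every node added moves all three resources at once.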
So, yes, there are some sizing differences between legacy and hyperconverged data center architectures. To summarize:
- Legacy systems are generally sized up front and implemented at the front end of the replacement cycle. Of course, purchases are generally staggered so that the company isn't ripping and replacing the entire data center all at once, but there are prescribed upgrade times based on a predetermined schedule and budget availability.
- Hyperconverged systems are sized minimally up front, and the organization snaps in new nodes as business needs dictate. Sizing is done once up front and again each time a node is added, but it happens across all resources, whereas a legacy environment upgrade may target a single resource.