7 Myths About Hyperconvergence
Hyperconverged systems have quickly come to be seen as the go-to architecture for everything. The industry, however, has seen this kind of enthusiasm for a concept, development approach or overall architecture time and again. Industry research firms rushed in to put their stamp on the emerging trend, coming down either on the side that the approach is good for everything you’re doing, or that it’s still too immature for anything but the most limited workloads.
Here are seven myths about hyperconvergence that have surfaced in the industry over the last couple of years. The list, of course, isn’t exhaustive. Are these beliefs you or your team hold?
Myth No. 1: They’re the only way to go for greenfield solutions. It’s amazing how often and how quickly a new approach comes to be seen as a panacea, that is, the best approach for every workload. While hyperconvergence offers a number of benefits when used correctly, some workloads, often composed of older, monolithic applications, may not use the environment in the most effective way. In those cases, other hardware architectures may be a better fit. At the same time, consider the possibility that these systems might be a good fit for many of today’s applications, not just new ones.
Myth No. 2: All hyperconverged systems are based on open standards. While these systems are built on x86 architectures, their internal bus structures are likely to be proprietary. This often means that processor cards, memory, network adapters and storage components offered by one vendor may not be compatible with systems from another. If that type of plug-and-play interchangeability is important, it’s wise to understand the strengths and limitations of each vendor’s products before wholeheartedly adopting them for every mission.
Myth No. 3: They can’t support critical workloads. Some suppliers will point out that hyperconverged systems can’t be relied on to support the largest and most stressful workloads. They, of course, will then put forward their own systems as the better choice. While it’s true that some workloads are beyond the current capabilities of today’s hyperconverged systems, these systems are powerful and scalable enough for many of the solutions an organization needs, and can safely be used to address those requirements. Like any approach, hyperconvergence needs to be understood, including its strengths, its weaknesses and its projected future.
Myth No. 4: They’re always the most cost-effective approach. Hyperconverged systems from different suppliers have different characteristics, including performance and price. It’s always important to evaluate a vendor’s offerings before jumping in with both feet. One vendor may offer products that are the best fit for one type of workload, while another may shine elsewhere. Take your time and choose wisely.
Myth No. 5: VDI is the only workload they support effectively. Some competitors slam hyperconvergence as nothing more than a solution to a single problem: VDI. Not true; applications built from services, whether deployed as virtual machines (VMs) or in containers, can be equally effective and scalable when hosted on hyperconverged infrastructure. What is likely true, however, is that older, monolithic applications that need more processing, memory, storage or networking than a hyperconverged system currently provides won’t be a good fit today. Tomorrow may well be another story. The industry is innovating rapidly in this area, and some amazing things are on the horizon.
Myth No. 6: They eliminate all interoperability challenges. No architecture solves every hardware or software interoperability issue. If the organization relies on a product or technology that doesn’t currently work with the newest generation of hyperconverged systems, there will still be interoperability challenges. Since so many suppliers are focused on extending what these systems can do, compatibility and interoperability issues are likely to be increasingly overcome, so stay tuned to industry announcements. Understand, though, that no hardware approach will make up for the lack of an overall plan or software architecture.
Myth No. 7: They eliminate management issues. Highly distributed, multi-tier, multi-site applications are challenging to manage and often require many different types of expertise and management tools. While implementing hyperconvergence can address some management issues, others will remain a challenge for the foreseeable future.
The history of information technology is full of stories of a new technology or approach appearing that offers the hope of addressing every enterprise issue. Then, as it’s deployed and the industry gains experience with it, reality sets in, and companies can clearly see where it does and doesn’t fit in the enterprise IT infrastructure.
As with any emerging technology or approach, hyperconvergence doesn’t fit everywhere today, and it won’t solve all of a business’s needs. Where it fits, it’s awesome. Where it doesn’t currently fit, another approach likely will. Keep watching, however, because hyperconvergence is likely to see many enhancements and improvements in the near future. Over time, where it fits is likely to change.