What are the main ideas behind hyperconvergence as an infrastructure model?
Data center infrastructure is constantly improving, but every so often a whole new way of thinking emerges. These new infrastructure paradigms are almost always birthed as a solution to pressing business needs which the existing technology is incapable of meeting. Hyperconvergence is one of these new ways of doing IT.
So, what is hyperconvergence? Fundamentally, hyperconvergence is a way of constructing private data centers that seeks to emulate public cloud consumption in terms of its operational simplicity, economic model, and scaling granularity. And it provides all of this, of course, without sacrificing the performance, reliability, and workload availability that businesses today rely on.
The Benefits of Hyperconverged Infrastructure
The most exciting benefits for adopters of hyperconverged infrastructure are the following:
Focus on the Workload: For too long, infrastructure policy and management have focused on the wrong constructs. Managing LUNs and hosts and clusters is old school. In the post-cloud era, the workload should be the focus. In the hyperconverged model, the application is the focus.
Data Efficiency: The nature of hyperconverged infrastructure lends itself well to a high degree of data reduction by way of deduplication and compression, which leads to more approachable requirements for storage capacity, network bandwidth, and IOPS.
Elasticity: The beauty of the cloud is that if you need to scale out or in, you just click a few times and it’s done. Hyperconvergence focuses heavily on scaling easily, in bite-sized units; this model stands in stark contrast to the 3- or 5-year purchasing model of traditional IT.
Data Protection: Hyperconvergence is about simplifying and unifying infrastructure features. Rather than manage a separate backup and replication product, hyperconverged systems typically have this critical technology built right in.
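To give a feel for why deduplication shrinks capacity requirements, here is a minimal, illustrative sketch of fixed-block, content-hash deduplication. This is a toy model only; real hyperconverged platforms use far more sophisticated inline, global deduplication engines, and the function names here are invented for illustration.

```python
import hashlib

def dedupe_blocks(data: bytes, block_size: int = 4096):
    """Split data into fixed-size blocks and store each unique block once,
    keyed by its SHA-256 digest. Returns (store, recipe), where the recipe
    is the ordered list of digests needed to reconstruct the data."""
    store = {}
    recipe = []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # identical blocks are stored only once
        recipe.append(digest)
    return store, recipe

def restore(store, recipe):
    """Reassemble the original data from the block store and the recipe."""
    return b"".join(store[d] for d in recipe)

# Ten identical 4 KiB blocks (think ten clones of the same VM template)
# deduplicate down to a single stored block.
data = b"\x00" * (4096 * 10)
store, recipe = dedupe_blocks(data)
print(len(recipe), "logical blocks,", len(store), "unique block stored")
```

The same principle is why cloning many similar virtual machines on a deduplicating platform consumes far less physical capacity than the logical total suggests.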
It hasn’t been a short journey to get here. Before hyperconvergence came convergence. Although the converged infrastructure model didn’t take the idea quite as far as hyperconvergence has, it was a fantastic step in the right direction and has been foundational for executing on the vision of hyperconvergence as we see it today. Understanding a little bit about convergence with regard to the way it has informed hyperconvergence can inspire some appreciation for how things got to where they are today.
The Evolution of Converged to Hyperconverged
Convergence is a fancy word for a simple idea: the data center has become too complex and the plethora of vendor relationships has become burdensome to manage; therefore, a solution that removes some complexity and reduces the number of relationships would be welcomed. Several offerings emerged that essentially took known data center technologies like shared storage, virtualization, and networking and combined them into a single solution. The solution is based on enterprise-grade, best-in-class platforms and is pre-validated before it leaves the factory to ensure that everything is functioning properly. Once the order arrives at your data center, you literally roll the rack in, plug it in, and go.
As helpful as convergence was, especially for huge companies like those in the Fortune 500, it was largely inapplicable to the mid-market and below, because converged infrastructure is purchased at the scale of racks and rows, with buy-in starting at hundreds of thousands of dollars. It's easy to spec a small converged system that pushes into the 7-figure range. A more granular approach was needed for the rest of the market.
Likewise, combining existing technologies eased the procurement and deployment burden, but did nothing to address the ongoing operational complexity that plagued many organizations. To address that, an entirely new model was needed.
Hyperconvergence, as an evolution of convergence, addressed both of those challenges by rethinking all the services and hardware that make up a modern infrastructure and building what truly is a single solution out of those disparate parts. The result is a system that is a whole new kind of simple. That simplicity leads to operational efficiencies and a reduction in OPEX spending, because the same number of personnel can accomplish more than they previously could. Adopters also see CAPEX savings in that the initial purchase can be as small as a couple of servers (as opposed to an entire rack packed full of gear). As the infrastructure expands and more resources are needed, businesses buy only what they currently need.
Next: Sprawl and the dreaded “I/O blender”. Read Part 2…