The typical IT organization has, over the past few years, been overwhelmed by the complexity, cost, and sprawl that have overtaken the data center. In response, two important trends have come to the forefront of the IT industry, each attempting to address these issues head-on: the software-defined data center (SDDC) and convergence/hyperconvergence.

From a business standpoint, virtualization brought several game-changing benefits, but over the long term many inefficiencies appeared as well. Server virtualization, despite the gains on the server front, did nothing to address inefficiencies in storage and network constructs. From an administrative standpoint, the overhead of managing 8 to 12 hardware and software products on average has put pressure on budgets, especially from a personnel perspective. Across the IT landscape and regardless of industry vertical, the general trend has been that actual hardware utilization is remarkably low while staff utilization is through the roof.

The rampant overprovisioning of systems, coupled with the fragmentation of management domains, has become frustrating to the point that IT executives and administrators alike agree that something needs to change. This article explores the notion of hyperconvergence as a means of simplification (among other benefits), with software-defined constructs as part of that picture.

Before Hyperconverged: Converged

The precursor to hyperconvergence is a model called convergence. The basic premise is that a convergence vendor uses industry-leading infrastructure components (networking, shared storage, and servers from the usual suspects) to build a single, pre-built and pre-validated solution that you can buy. The draw of a solution like this is that rather than dealing with any number of manufacturers (three or more in most cases), you deal with a single point of contact. This simplifies the sales and procurement process, as well as the support situation.

That said, the downside of a converged solution is that nothing has been fundamentally redesigned from either a technology or a management perspective. Essentially, convergence is the bundling of disparate technologies with a pretty, single-point-of-contact bow on it. Once the solution is in use, the day-to-day experience for the operations team is more or less the same. Completely rethinking that experience is where hyperconvergence comes in.

How Hyperconvergence is Different

Hyperconvergence is a horse of a different color. Rather than a repackaging of classic infrastructure systems, hyperconverged infrastructure (HCI) uses a different model entirely. First and foremost, many of the data center services that would traditionally have been disparate systems are a native part of the HCI platform. Imagine if, instead of the way most data centers look today, all of the following (and more) were part of a single, cohesive platform based solely on x86 servers:

  • Servers + Hypervisors
  • Shared storage systems
  • Data protection products (backup, replication)
  • Deduplication appliances
  • Wide-area network (WAN) optimization appliances
  • Public cloud gateways

Obviously, providing all of this with nothing more than a stack of servers requires a great deal of software. Hyperconvergence as a model is the first step in realizing the ideal of the software-defined data center.