The Layers of Hyperconvergence
When I’m talking with clients about hyperconverged systems, I’m often surprised by their misconceptions about what hyperconvergence really is, especially the belief that new forms of hardware are doing all the heavy lifting. While it’s true that modular system design, very high-speed internal networks, and shared networking, storage, system memory and cache all play important roles, many different categories of software are needed to make hyperconvergence really work.
It's time to add some clarity, so let’s stroll through the stack of software categories that are at the heart of hyperconverged infrastructure.
Starting at the top of the software stack (nearest the folks using the system) and working down to the bottom (nearest the networking and storage that support the gathered workloads), we see the following categories of software in use:
In the past, client-side systems were directly attached to the servers that provided applications and data. Today, client-side devices such as PCs, tablets, smartphones, and Internet of Things devices connect to applications executing in the enterprise data center or to cloud-based applications and services in many ways.
This type of software is common in hyperconverged environments because it makes it easier for enterprises to “glue everything together.”
Software at this layer is available from many vendors. It makes it possible for client-side systems and applications to receive and update data, and use network and cloud-based services without being forced to know where they’re located in the world, whether they’re in the enterprise’s own data center or in the data center of a cloud services provider.
This software also provides information about the network connection in use at the moment, or even which vendor is culpable (err, responsible) for the design of either the client-side system or the server(s) it’s communicating with.
Some industry analysts call this type of software “Access Virtualization.”
Modern client-side and server-side applications may be developed using a large number of development languages, tools and frameworks. Everyone expects applications developed using one language, tool or framework to happily get along with applications developed in other ways.
Enterprises expect applications developed in Java, one of the “C” family of languages, Python, PHP, Ruby, and many other languages to just work together. Unfortunately, vendors change things when they release new versions of their products, changes that sometimes lead to broken functionality. Industry analysts, ever fond of naming things, sometimes speak about “application virtualization” when talking about this type of software.
Everyone expects hyperconverged systems to support many applications and workloads without those applications needing detailed knowledge of what’s supporting them. Software at this level makes it possible for applications to move from place to place as the environment changes or to prevent slowdowns and failures; to live in multiple places at once for high levels of scalability, reliability and availability; and even, in some cases, for applications written for one operating system to execute under the gentle guidance of another.
You know this software layer as the hypervisor, but it increasingly also means containers. It has the capability today to make a single system look like many different systems, or many different systems look like a single resource to applications. It also offers the capability to manage processing so that applications vying for the attention of an overloaded or oversubscribed system can be gracefully moved to another system that isn’t breathing so hard.
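To make the rebalancing idea above concrete, here is a minimal sketch of the kind of placement decision such a layer makes. The host names, the load threshold, and the least-loaded heuristic are all illustrative assumptions; real hypervisor schedulers weigh memory, affinity rules, licensing, and much more.

```python
# Sketch: when a host is oversubscribed, evict its busiest workload
# and place it on the least-loaded other host. Purely illustrative.

OVERLOAD_THRESHOLD = 0.85  # hypothetical fraction of CPU capacity in use

def rebalance(hosts):
    """hosts: dict of host name -> {workload name: cpu fraction}.
    Mutates hosts in place; returns a list of (workload, from, to) moves."""
    moves = []
    if len(hosts) < 2:
        return moves  # nowhere to move anything
    for name, workloads in hosts.items():
        while sum(workloads.values()) > OVERLOAD_THRESHOLD and len(workloads) > 1:
            # Evict the busiest workload from the overloaded host...
            victim = max(workloads, key=workloads.get)
            load = workloads.pop(victim)
            # ...and place it on the currently least-loaded other host.
            target = min((h for h in hosts if h != name),
                         key=lambda h: sum(hosts[h].values()))
            hosts[target][victim] = load
            moves.append((victim, name, target))
    return moves

hosts = {
    "node-a": {"web": 0.5, "db": 0.45},  # oversubscribed: 0.95 total
    "node-b": {"cache": 0.2},
}
print(rebalance(hosts))  # -> [('web', 'node-a', 'node-b')]
```

Production schedulers (DRS-style clustering, Kubernetes' scheduler) solve a far richer version of this problem, but the shape of the decision is the same: detect pressure, pick a candidate, find a better home.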
Most modern data centers use many different types of network media simultaneously. Software makes it possible for applications to use whatever networking links are available to them, without having to know the details of how they work. It can also make it possible for many different network links to be used in parallel to improve performance.
This type of software can also control which other systems applications can speak with by creating an illusory environment in which only the application and its support functions can see one another. This is “software-defined networking,” or SDN.
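A toy reachability check captures the spirit of that isolation: traffic is permitted only between members of the same virtual segment. The segment and service names below are made up for the example; real SDN controllers enforce this in the data plane, not in application code.

```python
# Sketch of micro-segmentation: endpoints can talk only if they share
# a virtual segment. Names are hypothetical.
segments = {
    "app-tier": {"web-frontend", "api", "app-db"},
    "analytics": {"etl", "warehouse"},
}

def can_talk(a, b):
    """True only if both endpoints belong to the same virtual segment."""
    return any(a in members and b in members for members in segments.values())

print(can_talk("web-frontend", "app-db"))    # True  -- same segment
print(can_talk("web-frontend", "warehouse")) # False -- isolated from each other
```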
Modern applications may rely on local storage, shared storage within the hyperconverged system, shared storage within the data center or cloud-based storage. The key, however, is that they access the storage without knowing where it’s located, which vendor designed it, or what technology it’s based on. In many cases, applications may be using different types of storage simultaneously, and data objects may be moved from place to place to improve performance or reliability, or to reduce the costs of storage.
This software-defined storage, or SDS, may do tricky things to reduce the size and the cost of storage objects, such as compressing or deduplicating them. Data objects may also be replicated in many places to prevent data loss or application failure, or to improve application performance.
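Deduplication, mentioned above, is easiest to see in miniature. This toy version splits data into fixed-size chunks, stores each unique chunk exactly once under its content hash, and reads an object back from its list of hashes. The tiny chunk size is an assumption for illustration; real SDS products chunk at kilobyte scale and layer on compression, indexing and replication.

```python
# Toy block-level deduplication: identical chunks are stored once.
import hashlib

CHUNK_SIZE = 4  # bytes, for illustration; real systems use 4 KB or larger

store = {}      # chunk hash -> chunk bytes (each unique chunk kept once)

def write(data):
    """Split data into chunks, store each unique chunk once, and
    return the list of chunk hashes (the 'recipe') for the object."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)   # a repeated chunk costs nothing extra
        recipe.append(digest)
    return recipe

def read(recipe):
    """Reassemble an object from its chunk hashes."""
    return b"".join(store[d] for d in recipe)

r = write(b"AAAABBBBAAAA")          # the chunk "AAAA" appears twice...
assert read(r) == b"AAAABBBBAAAA"   # ...yet the data reads back intact
print(len(store))                    # -> 2: only two unique chunks kept
```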
SDS can be intelligent enough to move data objects from high-performance, high-cost storage media to much lower-cost, slower media to reduce costs. Some software in this category can even move data objects between the enterprise data center and cloud-based storage.
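The tiering decision just described usually reduces to a policy over access recency. Here is a minimal sketch under assumed names: a 30-day inactivity cutoff, an "ssd" fast tier, and an "hdd" cheap tier, none of which reflect any particular vendor's policy.

```python
# Sketch of a storage-tiering policy: objects untouched longer than a
# cutoff are flagged to migrate from fast, costly media to cheaper media.
import time

COLD_AFTER = 30 * 24 * 3600  # assumed cutoff: 30 days of inactivity, in seconds

def plan_migrations(objects, now=None):
    """objects: dict of name -> {'tier': 'ssd'|'hdd', 'last_access': epoch secs}.
    Returns the names that should move from the fast tier to the cheap tier."""
    now = now or time.time()
    return [name for name, meta in objects.items()
            if meta["tier"] == "ssd" and now - meta["last_access"] > COLD_AFTER]

now = time.time()
objects = {
    "invoices-2023.db": {"tier": "ssd", "last_access": now - 90 * 24 * 3600},
    "orders.db":        {"tier": "ssd", "last_access": now - 3600},
}
print(plan_migrations(objects, now))  # -> ['invoices-2023.db']
```

Real implementations track access heat continuously and migrate in the background, but the shape of the policy is this simple comparison.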
A brief walkthrough of the exhibit hall at a VMworld conference demonstrates that SDS is one of the most competitive areas of the industry.
It’s A Software-Driven World
Hyperconverged systems, while they’re clever bits of hardware on their own, are an example of today’s software-driven world. Much of what enterprises expect them to be able to do really comes from intelligent software that helps companies break out of the shackles of traditional data center limitations.