Moving Toward Invisible Infrastructure
The previous post in this 10-part series on hyperconverged infrastructure (HCI) looked at the 5 management pillars. Now we’ll look at how to take the next management leap.
The reality of many infrastructures is that most time is spent on the infrastructure itself, not on the applications running on top. Infrastructures are complex beasts, with many sensitive integration points between the different silos; these points often break, especially during upgrades and configuration changes.
Unfortunately, infrastructure does not bring inherent value to an organization on its own. While important, it is more of a necessary evil: a cost center rather than a revenue generator.
This is why simplification of the day-to-day operations and management tasks via automation, smart workflows, and intelligent operations makes sense. If infrastructure holds little intrinsic value, why not make it simple—even invisible?
Making infrastructure invisible and operations intelligent is the goal of many public cloud and HCI solutions in the market; it’s what sets the current generation of IT infrastructure apart from the virtualization platforms of yore.
Infrastructure management in a hyperconverged world isn’t fundamentally different from the traditional model—the basic components that make up the infrastructure are the same. Many of the responsibilities and tasks are identical, too.
The big differences lie in the level of integration between the components and in the management software that oversees the whole. In the older siloed approach, there was a management tool or suite for each of the pillars: servers, virtualization, storage, networking, backup, disaster recovery, databases, and operating systems.
These disparate tools lacked integration, with little to no contextual information flowing between them; each gave only its own limited view of the infrastructure’s health, performance, and other issues. This made the life of an infrastructure admin more complicated than necessary.
With the infrastructure and virtualization space maturing, the HCI model provided integration of both the hardware and software components into a single-vendor solution, which in turn opened up the opportunity to tightly integrate all the infrastructure management tools into a single solution.
Many HCI players jumped on this opportunity to create a single, unified view of the infrastructure world. They built management software that can manage the entire infrastructure from a single interface, simplifying workflows and adding richer contextual information from different parts of the stack, allowing for comprehensive monitoring and alerting.
Manage Resources in Bulk, Rather Than as Individual Entities
With highly integrated systems like HCI, cluster management is highly integrated as well. Resources are managed as a whole, instead of per-system or per-silo. This vertical (across silos) and horizontal (across many instances) integration allows for much simpler management. See Figure 1.
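The difference between per-system and cluster-wide management can be sketched in a few lines. The `Cluster` class and its `apply` method below are purely hypothetical, not a real HCI API; the point is that one call fans a change out across every node instead of the admin repeating it per silo or per box.

```python
# Hypothetical sketch: bulk (cluster-wide) management vs. per-node tasks.
class Cluster:
    def __init__(self, nodes):
        self.nodes = nodes
        self.settings = {}

    def apply(self, key, value):
        """One call pushes the setting to every node in the cluster."""
        self.settings[key] = value
        return [f"{node}: {key}={value}" for node in self.nodes]

cluster = Cluster(["node-01", "node-02", "node-03"])
results = cluster.apply("ntp_server", "10.0.0.1")
# One operation, three nodes updated -- no per-node console hopping.
```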
With this level of control over hardware, hypervisor, storage, and networking, many of the day-to-day workflows can be automated out of the box, leaving as little manual work as possible for operators.
HCI management is often focused on higher-level workflow management, where system admins follow an automated workflow instead of manually going through various steps. Workflow automation allows admins to do more work that adds business value, instead of bogging them down in technical minutiae.
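As a minimal illustration of this idea, an "add node" workflow might be modeled as an ordered list of steps that run end to end once an admin triggers it. All step names and functions here are assumptions made up for the sketch, not any vendor's actual workflow engine.

```python
# Illustrative sketch of workflow automation: one trigger, many ordered steps.
def image_node(node):
    # Install hypervisor and storage software on the new node
    return f"imaged {node}"

def join_cluster(node):
    # Register the node with the existing cluster
    return f"joined {node}"

def rebalance_storage(node):
    # Spread existing data onto the new node's capacity
    return f"rebalanced onto {node}"

ADD_NODE_WORKFLOW = [image_node, join_cluster, rebalance_storage]

def run_workflow(steps, node):
    """Run every step in order and collect an audit trail of what happened."""
    return [step(node) for step in steps]

trail = run_workflow(ADD_NODE_WORKFLOW, "node-04")
```

The admin initiates one workflow; the ordering, error-prone manual steps, and the audit trail are handled by the system.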
Another HCI advantage is the integration of operational telemetry, which gives admins at-a-glance, system-wide information instead of forcing them to correlate monitoring data from different systems by hand. This allows for much richer monitoring and alerting, which in turn means quicker troubleshooting. Capturing most environmental changes in an audit log is a major part of this.
In addition, admins no longer have to step through multiple (sometimes up to a dozen or so) monitoring consoles to get a view of the environment’s health. This takes much of the guesswork and manual correlation out of the equation: the system surfaces only the alerts that require manual action.
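A tiny sketch of that surfacing behavior, assuming alerts are already correlated into a single feed (the alert fields and examples below are invented for illustration): events the system has auto-resolved are suppressed, and only actionable ones reach the admin.

```python
# Illustrative sketch: surface only alerts that require manual action.
alerts = [
    {"source": "storage", "msg": "failed disk rebuilt automatically", "actionable": False},
    {"source": "node",    "msg": "node-02 unreachable",               "actionable": True},
    {"source": "network", "msg": "transient link flap self-healed",   "actionable": False},
]

def surface(alert_feed):
    """Return only the alerts an admin must act on; the rest stay in the log."""
    return [a for a in alert_feed if a["actionable"]]

for alert in surface(alerts):
    print(f"[{alert['source']}] {alert['msg']}")
```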
This is especially important for correlating performance metrics throughout the infrastructure. Being able to drive granularity down to the virtual machine, virtual disk, or container level is becoming more important, since many of these workloads are ephemeral in nature. Understanding the resource usage patterns of your applications, and knowing how and what to scale, is a crucial capability of infrastructure management.
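A per-VM view of metrics makes "what to scale" a simple query. The metric names, numbers, and thresholds below are illustrative assumptions, not real telemetry:

```python
# Illustrative sketch: VM-level metrics rolled up to spot scaling candidates.
vm_metrics = [
    {"vm": "web-01", "cpu_pct": 92, "disk_iops": 1200},
    {"vm": "web-02", "cpu_pct": 88, "disk_iops": 1100},
    {"vm": "db-01",  "cpu_pct": 35, "disk_iops": 9500},
]

def scale_candidates(metrics, cpu_limit=85, iops_limit=8000):
    """Flag each VM whose CPU or disk usage suggests it needs scaling."""
    flagged = []
    for m in metrics:
        if m["cpu_pct"] > cpu_limit:
            flagged.append((m["vm"], "cpu"))
        elif m["disk_iops"] > iops_limit:
            flagged.append((m["vm"], "disk"))
    return flagged

print(scale_candidates(vm_metrics))
# [('web-01', 'cpu'), ('web-02', 'cpu'), ('db-01', 'disk')]
```

Here the web tier is CPU-bound (scale out) while the database is storage-bound (scale its disk resources)—the kind of distinction that gets lost when metrics live in separate silos.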
There are a number of areas where a holistic view of the hyperconverged environment really stands out. One is data resiliency. Just like any storage environment, data consistency and resiliency are important to monitor. In scale-out architectures like HCI, this is doubly true, as too many node failures lead to data consistency problems.
Monitoring the health states of physical nodes and reporting how many more failures can occur before the critical point is reached gives admins an immediate overview of their cluster’s status.
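The arithmetic behind that "failures remaining" number can be sketched under a simple replication-factor model—each piece of data is stored on RF distinct nodes, so the cluster can absorb RF − 1 failures. This model is an assumption for illustration; real HCI products use their own resiliency schemes.

```python
# Illustrative sketch: remaining node failures a cluster can absorb,
# assuming a simple replication-factor (RF) data placement model.
def failures_remaining(total_nodes, healthy_nodes, replication_factor):
    """Data stays available while at least one replica of everything survives."""
    tolerated = replication_factor - 1           # failures the cluster can absorb
    already_failed = total_nodes - healthy_nodes
    return max(tolerated - already_failed, 0)

# A 4-node cluster with RF=2 tolerates one failure; all nodes healthy:
print(failures_remaining(4, 4, 2))  # 1
# After one node goes down, no further failures can be absorbed:
print(failures_remaining(4, 3, 2))  # 0
```

Surfacing this single number in the dashboard is what turns raw per-node health data into an actionable cluster status.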
Finally, at-a-glance health and capacity planning information is vital for delivering a cloud-like experience to the enterprise. Scaling the infrastructure on a just-in-time basis is an important capability that will make your CFO happy.
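Just-in-time scaling rests on a runway estimate: at the current growth rate, how long until capacity runs out? The linear-growth model and figures below are illustrative assumptions only—real capacity planners fit more sophisticated trends.

```python
# Illustrative sketch: capacity runway under a simple linear-growth assumption.
def days_of_runway(capacity_tb, used_tb, growth_tb_per_day):
    """Days until storage capacity is exhausted at the current growth rate."""
    if growth_tb_per_day <= 0:
        return float("inf")  # no growth means no exhaustion date
    return (capacity_tb - used_tb) / growth_tb_per_day

# 100 TB cluster, 70 TB used, growing 0.5 TB/day:
print(days_of_runway(100.0, 70.0, 0.5))  # 60.0 days of headroom
```

When the runway drops below the hardware procurement lead time, it is time to order nodes—capacity arrives just in time rather than sitting idle for years.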
The management tour continues with the next blog, which is about efficiently managing a virtual environment.