Managing Global Infrastructure
You’ve made it! This is the final installment of our 10-part series on the basics of hyperconverged infrastructure (HCI). The last post dealt with data protection in HCI environments; now we’ll talk about what happens when your operations start expanding.
Managing a global infrastructure is a daunting task. Managing a single location is difficult enough, but managing multiple, geographically dispersed clusters multiplies an admin’s work. Tasks such as upgrading software to a newer version must be repeated for each cluster. Managing applications that circle the globe is nearly impossible without the ability to manage from a single interface.
The generally accepted approach to this is to have a management interface for each cluster, data center, and so forth in smaller environments, but have those rolled up into a global “manager of managers.” The global interface monitors all individual clusters for additional insight across clusters, collating and summarizing information into dashboards as well as providing some distinct features for multi-cluster management.
This approach unifies all assets under management into the proverbial “single pane of glass,” simplifying the day-to-day operations for multi-cluster management.
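To make the "manager of managers" idea concrete, here is a minimal sketch of how a global interface might roll per-cluster status up into a single summary. The status values and roll-up rule are illustrative assumptions, not any vendor's actual API:

```python
def global_summary(clusters):
    """Roll per-cluster status up into one dashboard view.

    Assumes each cluster reports one of three hypothetical
    status values: "healthy", "degraded", or "critical".
    """
    severity = {"healthy": 0, "degraded": 1, "critical": 2}
    counts = {"healthy": 0, "degraded": 0, "critical": 0}
    worst = "healthy"
    for cluster in clusters:
        status = cluster["status"]
        counts[status] += 1
        # The overall state is the worst state of any member cluster.
        if severity[status] > severity[worst]:
            worst = status
    return {"counts": counts, "overall": worst}
```

A real multi-cluster manager would also aggregate capacity, alerts, and version data, but the pattern is the same: collect per-cluster telemetry, then summarize it for a single view.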
In addition to a graphical interface, having command-line interfaces and programmatically accessible interfaces is crucial in further simplifying management and automation of the hyperconverged infrastructure. These are especially important in the infrastructure-as-code paradigm, where the state of the infrastructure is stored as configuration code on a version control repository.
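The core of the infrastructure-as-code workflow is reconciliation: compare the declared (version-controlled) state against what is actually running and derive the actions needed to close the gap. A vendor-neutral sketch of that diffing step, with hypothetical config keys:

```python
def plan_changes(desired, actual):
    """Diff desired config against actual config into an action list.

    Both arguments are flat dicts of setting name -> value; real
    IaC tools work on richer resource graphs, but the idea is the same.
    """
    actions = []
    for key, value in desired.items():
        if key not in actual:
            actions.append(("create", key, value))
        elif actual[key] != value:
            actions.append(("update", key, value))
    for key in actual:
        if key not in desired:
            # Present in reality but absent from the declared state.
            actions.append(("delete", key))
    return actions
```

Because the desired state lives in version control, every change is reviewable and repeatable, and the same plan can be applied to every cluster from one place.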
Machine Learning-Based Capacity and Performance Planning
A highly integrated stack is great, but it isn't a major leap forward unless it can drive itself. Integration is the foundation for the self-healing, self-driving data center that operates itself, detects anomalies, and produces actionable insights for capacity and performance planning.
Capacity planning in traditional virtualization is a nightmare. For each resource type, admins have to dive deeply into each silo to gather capacity telemetry. Trend reports then have to be created manually. Finally, the admin has to manually right-size everything. This is, to understate the case, inefficient. Each task is time-consuming, and overlapping or missing data can lead to errors in analysis. Admins need specialized knowledge of, and experience with, multiple storage and virtualization vendors.
HCI simplifies capacity planning. All the telemetry the software needs is in a single, highly correlated collection. Add in machine learning capabilities and the system will tell you, without any manual effort, when the currently available resources will run out.
Automatically surfacing this information in a visualized “runway” allows plenty of time for the admin to add capacity to the cluster before time runs out. As mentioned before, this adds to your bottom line (and makes your CFO happy) because there’s no need to buy capacity that sits idle. You’ll be able to bring the cloud’s pay-as-you-grow capacity strategy to your data center.
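One common way to estimate such a runway is a linear trend fit over recent usage history. Real systems use far more sophisticated models, but a minimal sketch (daily usage samples in, days of runway out) might look like this:

```python
def days_of_runway(usage_history, capacity):
    """Estimate days until capacity is exhausted via a least-squares trend fit.

    usage_history: one usage sample per day, oldest first.
    capacity: total available capacity in the same units.
    Returns estimated days remaining, or None if usage isn't growing.
    """
    n = len(usage_history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(usage_history) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, usage_history))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var  # estimated growth per day
    if slope <= 0:
        return None  # flat or shrinking usage: no exhaustion forecast
    return (capacity - usage_history[-1]) / slope
```

For example, a cluster growing 10 GB per day with 50 GB of headroom left would report roughly five days of runway, giving the admin time to order and add capacity.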
To further streamline resource utilization, right-sizing tools can reduce unneeded resource claims by workloads to free up resources that can then be used in new or constrained workloads. Right-sizing is a key element of optimizing resource consumption.
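Right-sizing recommendations are typically derived from observed utilization percentiles plus some safety headroom. The percentile and headroom values below are illustrative assumptions, not a standard:

```python
import math

def rightsize_cpu(allocated_cores, samples, percentile=0.95, headroom=1.25):
    """Suggest a core count covering the observed utilization percentile.

    samples: observed core usage over time.
    Returns (suggested cores, cores that could be reclaimed).
    """
    ordered = sorted(samples)
    idx = min(int(percentile * (len(ordered) - 1)), len(ordered) - 1)
    # Size to the chosen percentile plus headroom, never below one core.
    suggested = max(1, math.ceil(ordered[idx] * headroom))
    return suggested, allocated_cores - suggested
```

A VM allocated 16 cores that rarely uses more than 4 could safely shrink to 5, freeing 11 cores for new or constrained workloads.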
Automatic capacity forecasting, planning, and optimization make HCI the foundation of a true self-driving, self-healing data center that optimizes cost, utilization, and infrastructure health.
Traditionally, monitoring systems sent out alerts based on fixed triggers or thresholds. This creates more noise than value: critical alerts can be missed, or even discarded, when admins must weed through too many routine ones.
Machine learning-based algorithms, however, can sift through the noise to find signals that indicate a real issue. Because the system knows expected behavior, it can detect anomalies and serve those up to the admin to address.
Relying on behavioral analysis and predictive monitoring to detect anomalies removes the manual interpretation from troubleshooting and enables finding and remediating bottlenecks automatically.
Every organization is different. Dashboards should be created based on what’s important to your organization. After all, monitoring data just because someone else does isn’t the point. Monitor data you care about with custom dashboards that provide quick, at-a-glance summaries of application and infrastructure status.
Add More Bottom-Line Value
As you’ve seen, infrastructure management is no picnic, no matter what strategy is used. But applying the advantages of HCI can make it more efficient, saving your admins tons of time they can use to work on projects that deliver more bottom-line value to the business. The old ways are hopelessly out-of-date now, especially in the era of public cloud, the Internet of Things, edge computing, artificial intelligence and machine learning, and so on.
The days of silos and static infrastructure that doesn’t leverage automation are gone—or at least, they should be. Doing things the way they’ve always been done because you know how to do it is over. It’s impossible to keep up anymore without streamlining operations and automating as much repetitive work as you can. That is the promise, and reality, of HCI.
We hope you enjoyed this overview of HCI and its many facets. Still, a 10-part series can only offer a glimpse; this website is a great resource for learning about all aspects of HCI, and we hope you spend more time exploring this game-changing technology.