Data Protection in a Hyperconverged Infrastructure Environment
Resource management was the focus of the last article in this 10-part series on hyperconverged infrastructure (HCI) fundamentals. Now it’s time to discuss the crucial topic of protecting the data that’s the lifeblood of your business.
Data protection is a key element of any infrastructure. Each component in the layer cake of the infrastructure requires its own data protection strategy. With the advent of HCI, these capabilities are built into the system natively.
Disks are the smallest entity in the layer cake. Erasure coding protects against the failure of both individual disks within a system and the appliance itself by storing multiple copies or parity checksums on different disks across different systems. This allows a cluster to sustain one or more appliance failures without data loss, depending on the configured redundancy level.
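The parity idea behind erasure coding can be sketched with simple XOR parity (the RAID-5-style single-parity case); this is an illustrative toy, not the actual scheme any HCI product uses:

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings together, column by column."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Data striped across three hypothetical disks, plus one parity block
# stored on a fourth disk (or another appliance in the cluster).
data_blocks = [b"disk0", b"disk1", b"disk2"]
parity = xor_blocks(data_blocks)

# If disk 1 fails, its block is rebuilt from the survivors plus parity.
recovered = xor_blocks([data_blocks[0], data_blocks[2], parity])
assert recovered == data_blocks[1]
```

Real systems use more general codes (e.g., Reed-Solomon) that tolerate multiple simultaneous failures, but the principle is the same: any lost block is recomputable from the remaining blocks.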
Backing up data into a completely separate environment protects against failures in both the data itself (such as accidental deletion) and the primary software stack. No matter how battle-tested, software always carries the risk of catastrophic bugs that make data unreadable, corrupt, or inaccessible. A completely separate stack of software and hardware mitigates that risk.
But the most common use case for backups is the ability to granularly restore items, like files, emails, individual virtual disks, or entire VMs. This protects against accidental deletions, human errors and malicious behavior.
Restoring a backup requires infrastructure to restore the data to. In some failure scenarios, the primary infrastructure is no longer available, and the entire dataset must be restored rather than individual items. For that, standby infrastructure is needed.
Disaster recovery uses different techniques to capture and transport the data to the secondary infrastructure. Replicating and shipping the primary environment’s entire dataset protects against cluster-, rack-, or data center-level failures.
Clusters are usually bound to a physical location such as a single rack or data center. The exception to this rule is a stretched cluster, which spans physical locations. Normal clusters, however, use data replication to another cluster in another physical location to protect against failure within a single location. There are two forms of replication: synchronous and asynchronous.
Each has different characteristics for time to recovery (Recovery Time Objective, or RTO) and the amount of data lost (Recovery Point Objective, or RPO), as well as different associated costs and complexity. Stretched clusters are a special form of synchronous replication that allows for more flexibility and tighter integration between the physical locations.
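The RPO difference between the two forms of replication can be shown with a toy model; the class and function names here are illustrative, not any product's actual API:

```python
class Site:
    """A storage site that commits records to a local log."""
    def __init__(self):
        self.log = []

    def commit(self, record):
        self.log.append(record)

def write_sync(primary, secondary, record):
    # Synchronous: the write is acknowledged only after BOTH sites commit,
    # so RPO is effectively zero, at the cost of cross-site write latency.
    primary.commit(record)
    secondary.commit(record)

def write_async(primary, pending, record):
    # Asynchronous: acknowledged after the local commit alone; the record
    # ships to the secondary later, so RPO equals the shipping interval.
    primary.commit(record)
    pending.append(record)

def ship(pending, secondary):
    # Periodic shipping cycle for asynchronous replication.
    while pending:
        secondary.commit(pending.pop(0))

primary, secondary, pending = Site(), Site(), []
write_sync(primary, secondary, "tx1")        # both logs now hold tx1
write_async(primary, pending, "tx2")         # secondary lags until ship()
lag = len(primary.log) - len(secondary.log)  # data at risk if primary fails now
ship(pending, secondary)                     # secondary catches up
```

The `lag` variable is exactly the data you would lose if the primary site failed before the next shipping cycle: the trade-off asynchronous replication makes in exchange for low write latency.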
Managing Data Protection
Managing data protection in traditional infrastructure requires a multitude of different tooling and solutions from multiple vendors. As it does for other aspects of management, HCI integrates all these tools into the stack, without compromising the level of protection.
From within a single user interface, admins can protect workloads based on their criticality to the business. Some workloads require just a weekly backup, while other, more business-critical workloads require real-time protection using synchronous replication or stretched clusters.
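A criticality-based protection scheme can be sketched as a simple policy table; the tiers and field names below are hypothetical, not any vendor's actual interface:

```python
# Hypothetical mapping from workload criticality to protection settings.
POLICIES = {
    "low":      {"backup": "weekly", "replication": None},
    "standard": {"backup": "daily",  "replication": "asynchronous"},
    "critical": {"backup": "hourly", "replication": "synchronous"},
}

def protect(workload, criticality):
    """Return the protection settings an admin would apply to a workload."""
    return {"workload": workload, **POLICIES[criticality]}

billing = protect("billing-db", "critical")
archive = protect("file-archive", "low")
```

The admin's job reduces to picking the tier that matches the workload's business value; the platform translates that choice into backup schedules and replication configuration.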
The complexity of the underlying infrastructure is hidden by the hyperconverged management stack, freeing admins to configure protection levels for business outcomes, instead of having to translate the business value into many deeply technical constructs, configurations, and automation platforms.
Remember that there is no inherent value in the infrastructure alone. The infrastructure is important only in that the applications that businesses rely on need the infrastructure to operate. Time spent managing the infrastructure is time away from fine-tuning those business value-generating applications. A hyperconverged infrastructure adds the most value by minimizing the time infrastructure admins spend managing the infrastructure itself.
Virtual Machines and Containers
Let’s take this idea a step further. VMs and containers by themselves are not valuable. They’re a technical requirement for running the applications. Minimizing complexity and the time spent managing these should free up admin time to work on the applications themselves.
HCI hides most of the complexity behind the scenes in managing VMs. Storage, networking, and compute are abstracted away. The technical configuration of VMs and their operating systems are automated into easy-to-use workflows.
This paradigm differs massively from traditional virtualization solutions, which put the VM at the center of the universe. While a major step up from the preceding paradigm of physical servers, that approach was still not value-centric.
On-Demand Self-Service for App Owners Speeds Up Software Delivery
Hyperconverged solutions are moving up the stack in recognition of the true value of applications. Not unlike Google’s Play Store or Apple’s App Store, the simplicity brought by HCI minimizes the effort admins have to put into managing the lower-level constructs in the infrastructure.
And this brings us to the last entry in our series: how to manage your infrastructure when it breaks beyond your data center walls.