The public cloud is alluring to the enterprise. The service is easy to activate and elastic, and the cost of entry is low. But it’s not the right tool for all use cases. It presents major challenges for many organizations, including cost predictability, uncertainty around regulatory compliance, and the specter of vendor lock-in.

Even if the public cloud is not the magic solution some would like it to be, its ubiquity is changing how enterprise IT is done.

Major cloud-based companies like Google and Facebook run massive environments that are nothing like legacy data centers. These “hyperscale” data centers are characterized by scalability and radically different economics, achieved in part by running a highly automated, software-defined environment on top of commodity hardware. The success these companies are having with this model at scale is universally changing expectations of how a data center should operate.

Most organizations don’t need their IT footprint measured in hectares. However, the very best design elements from these cloud players have been brought to the hyperconverged world and conveniently packaged in appliances that any company can buy, regardless of size.

Software First

Large-scale cloud providers build their environments without relying on expensive proprietary components. Instead, they buy commodity hardware in bulk.

To some people in IT, “commodity” is a synonym for “cheap” or “unreliable.” They’re not entirely wrong, but this is actually more of a benefit than a detriment.

Keep in mind that in hyperconverged architectures, hardware is less important than software. Hyperconvergence software is built with the understanding that hardware can and inevitably will fail. The software is designed to anticipate and handle these failures with a minimum of vexation.

This creates several advantages. Commodity hardware carries a lower price tag than proprietary hardware. Also, it’s functionally interchangeable. A hyperconvergence vendor can switch its hardware platform without recoding the entire software offering. Because change is quick and easy, hyperconvergence vendors ensure that their customers get affordable hardware without disruption.

It’s About the Workloads

The public cloud is in a state of constant change, so ensuring that changes are made without disruption is critical. No cloud provider wastes time rebuilding individual policies and processes each time it adds equipment to its data center. Automation is key for them, and things should work the same way in the world of enterprise IT. A change in data center hardware shouldn’t necessitate reconfiguration of all your virtual machines and policies.

Suppose, for example, that you define policies that copy workloads between specific logical unit numbers (LUNs) for replication purposes. Now, scale this up by several thousand LUNs. If a storage device dies and needs replacement, you will need to find each individual policy and reconfigure it to point to the new hardware. At scale, this quickly becomes absurd.

Instead, policies should be far more general, allowing the management software to make the granular decisions.

The workload takes center stage in the cloud. In the case of enterprise IT, these workloads are individual VMs and, increasingly, containers. Administrators should be able to define policies as simple as “replicate Accounting apps to colocation facility.” All the proper VMs, containers, and data follow automatically.

Cloud management is all about applying policies to workloads — not to LUNs, shares, data stores, or any other infrastructure constructs. The same is true in hyperconvergence.
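To make the contrast concrete, here is a minimal sketch in Python. It is not any vendor’s actual API; the class names, tags, and LUN identifiers are invented for illustration. It contrasts a policy pinned to a specific LUN with a workload-centric policy that the management layer resolves to hardware on its own:

```python
# Illustrative sketch only -- not a real hyperconvergence API.
from dataclasses import dataclass, field

# Fragile approach: the policy names a specific device. Replacing
# that storage hardware invalidates every policy that references it.
lun_policy = {"source_lun": "lun-0042", "target_lun": "lun-7713"}

@dataclass
class Workload:
    name: str
    tags: set = field(default_factory=set)

@dataclass
class Policy:
    """A general rule; the management software makes the granular
    decisions about which hardware actually carries it out."""
    match_tag: str
    action: str
    destination: str

    def applies_to(self, wl: Workload) -> bool:
        return self.match_tag in wl.tags

# "Replicate Accounting apps to the colocation facility."
policy = Policy(match_tag="accounting", action="replicate",
                destination="colo-facility")

workloads = [
    Workload("erp-vm", {"accounting", "production"}),
    Workload("web-frontend", {"marketing"}),
]

selected = [w.name for w in workloads if policy.applies_to(w)]
print(selected)  # ['erp-vm']
```

Because the general policy references a tag rather than a device, swapping out the underlying storage never requires touching the policy itself; only the LUN-pinned dictionary would need to be hunted down and rewritten.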

Economies of Cloudscale

Cloud providers use very different economic models than enterprise IT organizations. Enterprise IT infrastructure is expected to last many years, so IT teams buy enough capacity and performance to last that long. In many cases, however, the full capability of the infrastructure is never used.

Enterprises may overbuy to ensure that capacity lasts the full life cycle. If they don’t, they must buy individual resources as they begin to run low. This leads to reactive IT: constantly watching resource levels and hoping the existing infrastructure doesn’t reach end-of-life status before it can be replaced.

Now consider cloud vendors, who don’t make one huge purchase every five years or draw up complex infrastructure update plans each time they expand. Doing so would be impractical in a few ways: an enormous amount of hardware would have to be bought up front, and accurately planning three to five years’ worth of resource needs in these kinds of environments may be impossible. Rather, they add standardized units of infrastructure to the data center as necessary, scaling in small increments.

Hyperconverged infrastructure adopts a similar approach to data center growth. Rather than expanding one component or hardware rack at a time, IT teams can simply add another appliance-based node to a homogeneous environment. The entire data center becomes a huge virtualized resource pool. When needed, administrators can expand the pool simply and efficiently.
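As a rough illustration (the class names and node sizes below are invented, not drawn from any product), scaling such a pool amounts to appending standardized units and summing their capacity:

```python
# Sketch of node-at-a-time scaling of a virtualized resource pool.
from dataclasses import dataclass

@dataclass
class Node:
    cpu_cores: int
    ram_gb: int
    storage_tb: int

class ResourcePool:
    def __init__(self):
        self.nodes = []

    def add_node(self, node: Node):
        """Expanding the pool is just adding one more standardized unit."""
        self.nodes.append(node)

    @property
    def capacity(self):
        return {
            "cpu_cores": sum(n.cpu_cores for n in self.nodes),
            "ram_gb": sum(n.ram_gb for n in self.nodes),
            "storage_tb": sum(n.storage_tb for n in self.nodes),
        }

pool = ResourcePool()
for _ in range(3):                    # initial build-out: three identical nodes
    pool.add_node(Node(32, 512, 40))

pool.add_node(Node(32, 512, 40))      # demand grows: add one more node, no redesign
print(pool.capacity)  # {'cpu_cores': 128, 'ram_gb': 2048, 'storage_tb': 160}
```

No forklift upgrade, no re-architecture: each increment is the same unit, and the pool’s aggregate capacity grows linearly with it.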

Hyperconvergence brings many of the advantages of the cloud into the on-premises data center. It changes the economic model to consumption-based planning and purchasing. It adds flexibility to enterprise IT without compromising on performance, reliability, or availability. Rather than planning huge upgrades every few years, IT simply adds appliances to the data center as needed. This approach gives the business much faster time-to-value for the expanded environment, and reduces the likelihood of paying for capacity that will not be needed until years in the future (or ever).

Enterprise IT can have the same agility, automation, and ease of use that people have come to expect from the public cloud. It just takes the right building blocks.