06 Oct
Tier 1/Dedicated Application Support
Not every company needs to tear down their entire data center and replace everything with shiny new hyperconverged infrastructure appliances. The chances are pretty good that you can’t really do that even if you wanted to. However, you may have a single application that’s challenging you and needs to be tamed. Or, perhaps you have a new application that you need to deploy, and you can’t deploy it on your existing data center infrastructure.
For you, hyperconverged infrastructure still might be just the answer. In fact, even if you have only a single application, you might still be able to leverage hyperconvergence.
Enterprise Application Needs and Challenges
Not all enterprise applications are created equal. Every application has a unique performance profile, and each requires a different amount of dedicated resources.
Most traditional data centers are not equipped to handle applications that don’t fit a mainstream operational envelope. That is, most traditional data centers are equipped to operate a broad swath of mainstream applications, but don’t always have the capability to support applications with highly specific resource needs. The kinds of applications that fit into this category will vary dramatically from company to company. For some, the entire centralized IT function consists of just a file server, so even something as common as an Exchange system would place undue stress on the traditional environment. For others, the traditional environment handily supports Exchange, but SQL Server is a step too far.
Every application has some kind of an I/O profile. This I/O profile dictates how the application will perform in various situations and under what kind of load. On top of that, every organization uses their systems a bit differently, so I/O profiles won’t always match between organizations. As you deploy new applications, it might be time to leverage hyperconverged infrastructure.
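As a rough illustration of what an I/O profile captures, the sketch below summarizes a hypothetical I/O trace (the trace data, names, and format here are invented for the example; real traces would come from a tool such as blktrace or an application log) into a read/write mix and a request-size distribution:

```python
from collections import Counter

# Hypothetical I/O trace: (operation, request_size_bytes) tuples.
# In practice these would be sampled from the running application.
trace = [("read", 8192), ("write", 4096), ("read", 8192),
         ("read", 65536), ("write", 8192), ("read", 8192)]

def io_profile(trace):
    """Summarize a trace into a simple I/O profile:
    the read percentage and the distribution of request sizes."""
    ops = Counter(op for op, _ in trace)
    sizes = Counter(size for _, size in trace)
    read_pct = 100 * ops["read"] / sum(ops.values())
    return read_pct, sizes

read_pct, sizes = io_profile(trace)
print(f"{read_pct:.0f}% reads")  # prints "67% reads"
```

Two applications with the same nominal workload (say, a database) can still produce very different profiles here, which is why profiles rarely transfer cleanly between organizations.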
A lot of people worry about virtualizing their resource-hungry applications for fear that they won’t perform well. This is why, even to this day, many companies still deploy physical SQL Server, Exchange, and SharePoint clusters. While physical deployment isn’t “wrong,” the benefits of virtualization are well known and include better overall hardware utilization and stronger data protection capabilities.
Hyperconvergence and Dedicated Applications
The right hyperconverged infrastructure solution can help you to virtualize even the largest of your Tier 1 mission-critical applications while also ensuring that you have sufficient resources to operate these workloads. Plus, don’t forget the major role hardware acceleration plays in some hyperconverged systems.
By offloading the “heavy lifting” operations, you can more confidently virtualize I/O-heavy applications while also reducing the amount of storage capacity those applications require. With deduplication being handled by a hardware card, you can gain the benefits of deduplication without incurring the typical performance penalty that can be introduced when deduplication has to be handled by a commodity processor.
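To make the capacity benefit concrete, here is a minimal sketch of block-level deduplication, the general technique described above (this is an illustration of the concept, not the implementation of any particular vendor’s hardware card): each fixed-size block is identified by a content hash, and duplicate content is stored only once.

```python
import hashlib

BLOCK_SIZE = 4096  # assumed block size for this sketch; real systems vary

def dedupe(blocks):
    """Store a stream of data blocks, keeping only one copy of each
    distinct block. Blocks are identified by their SHA-256 digest;
    in a hardware-accelerated system, this hashing is what gets
    offloaded from the commodity CPU."""
    store = {}
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)
    return store

# Example: 6 logical blocks, only 3 distinct contents
data = [b"A" * BLOCK_SIZE, b"B" * BLOCK_SIZE, b"A" * BLOCK_SIZE,
        b"C" * BLOCK_SIZE, b"B" * BLOCK_SIZE, b"A" * BLOCK_SIZE]
store = dedupe(data)
print(len(data) / len(store))  # dedup ratio: 2.0
```

The performance penalty the text mentions comes from doing this hashing and lookup inline on every write; offloading it to dedicated hardware removes that work from the host CPU.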
Elements of the Microsoft stack, including SQL Server and SharePoint, can be safely virtualized and significantly accelerated by moving to hyperconvergence. The same holds true for Oracle. Other I/O-hungry applications are growing in popularity, too. Two emerging applications that carry pretty significant I/O requirements are Splunk and Hadoop. Splunk is a log analysis tool with punishing write demands, while Hadoop is a big data analytics framework that requires a whole lot of both read and write I/O capability. Both need a lot of storage capacity, too, which is where the aforementioned deduplication features come into play.
Even better, as you need to grow, you just grow. Scalability is a core part of the infrastructure. When you grow, you can add more storage capacity, more storage performance, more CPU, and more RAM as needed, so you don’t need to worry about encountering a resource constraint somewhere along the line. That said, one common misconception about hyperconverged infrastructure is that you are absolutely required to scale all resources at exactly the same rate. This is simply not true. For example, with SimpliVity, you can add compute-only nodes that don’t have any storage. With Nutanix, you can add storage-heavy nodes. It’s not a one-size-fits-all conversation. Even in these cases, scaling is simple.
Moreover, for whichever applications you choose to include in your hyperconverged infrastructure, depending on the solution you select, you can gain comprehensive data protection capabilities that will help you recover more quickly in the event of a disaster or other incident. You can also inherit the ability to manage the hyperconverged environment from a single administrative console.
Finally, if you’re thinking “private cloud” with regard to your data center, you have to virtualize your Tier 1 applications in order to bring them into the centralized, API-driven management fold. A private cloud is a VM-centric construct that requires high levels of virtualization to imbue the environment with the agility and flexibility needed to get things done.
Just when you thought you had everything solved by virtualizing all of your Tier 1 applications and moving them to hyperconvergence, along comes a directive to consolidate your disparate data centers.