Architecting the Hyperconverged Data Center
[vc_row][vc_column][vc_column_text]
[dropcaps type='square' color='#ffffff' background_color='#e04646' border_color='']D[/dropcaps]ata centers are dynamic, complex, and sometimes even chaotic. As business needs evolve, so does the data center, with IT staff working hard to ensure that the operating environment is sufficiently robust. Hyperconverged infrastructure starts to change the mechanics behind how these efforts are carried out. With regard to hyperconvergence, there are a number of architectural elements that must be considered in order to determine the best path forward. But always remember: one of the primary goals of hyperconvergence is to simplify infrastructure decisions in the data center.
You don’t need to worry about buying all kinds of different hardware, because with hyperconvergence the traditional silos of compute and storage resources can be wrapped up into a single hyperconverged appliance. Moreover, with the right hyperconverged infrastructure solution, you can converge far more than just servers and storage. You can also include your entire backup-and-recovery process, your deduplication and WAN acceleration appliances, and much more. Your architectural decisions can revolve around higher-order items, such as those described in the following sections.
Decision 1: Server Support
Not all hyperconverged solutions ship in the same kind of packaging. For example, there are appliance-based hyperconverged solutions from companies such as SimpliVity, Nutanix, Scale Computing, and Maxta. And then there are software-only solutions that you install yourself, which include Stratoscale and Maxta. Maxta appears on both lists because it offers both pre-configured appliances and a software-only option.
With an appliance-based solution, you’re buying the full package, and you just need to plug everything in and turn it on. These are really easy to get going since most things are already done for you. However, with an appliance-based solution, you generally have to live with whatever constraints the vendor has placed on you. You need to remain within their hardware specifications, and you don’t always get to choose your server platform, although many appliance-based solutions do support servers from multiple vendors. For example, SimpliVity solutions can be shipped on SimpliVity’s Dell server platform or on Cisco UCS thanks to SimpliVity’s partnership with Cisco. Nutanix can be purchased on either Dell or Supermicro, and Maxta has relationships with a variety of server vendors.
If you’d rather go your own way with regard to hardware, you can choose a software-based hyperconverged solution. With these products, you buy your own server hardware and configure what you want for each resource, making sure to stay within the minimum guidelines required by the hyperconverged infrastructure solution. Once you have the server hardware delivered, you install the hyperconverged infrastructure software and configure it to meet your needs.
Software-based solutions are really good for larger organizations with sufficient staff to install and support the hyperconverged infrastructure. Hardware-based solutions are often desired by companies that are looking for a more seamless deployment experience or that do not have sufficient staff to handle these tasks.
Decision 2: The Storage Layer
Let’s face facts. One of the main reasons people are dissatisfied with their data centers is that their storage solution has failed to keep pace with the needs of the business. It’s either too slow to support mission-critical applications, or it lacks data efficiency features (deduplication and compression), forcing the company to buy terabyte after terabyte of new capacity.
Many storage devices are not well-designed when it comes to supporting virtualized workloads, either. Traditional SANs are challenged when attempting to support the wide array of I/O types that are inherent in heavily virtualized environments. At the same time, storage has become more complex, often requiring specialized skill sets to keep things running. For some systems, it’s not easy to do the basics, which can include managing LUNs, RAID groups, aggregates and more.
As companies grow and become more dependent on IT, they also start to have more reliance on data mobility services. Legacy storage systems don’t always do a great job enabling data mobility and often don’t even support services like remote replication and cloning or, if they do, it’s a paid upgrade service. Without good local and remote cloning and replication capabilities, ancillary needs like data protection take on new challenges, too.
None of these situations are sustainable for the long term, but companies have spent inordinate sums of cash dragging inadequate storage devices into the future.
Hyperconverged infrastructure aims to solve this storage challenge once and for all. At the most basic level, hyperconverged infrastructure unifies the compute and storage layers and effectively eliminates the need for a monolithic storage array and SAN.
How does the storage component actually work if there is no more SAN? Let’s unveil the storage secrets you’ve been dying to know.
Software-Defined Storage Defined
Abstract. Pool. Automate. That is the mantra by which the software-defined movement attains its success. Consider the SAN. It’s a huge and expensive device. Software-defined storage (SDS) works in a vastly different way. With SDS, storage resources are abstracted from the underlying hardware. In essence, physical storage resources are logically separated from the system via a software layer.
Hyperconverged infrastructure systems operate by returning to an IT environment that leverages direct-attached storage running on commodity hardware, but many solutions go far beyond this baseline. In these baseline systems, there are a multitude of hard drives and solid state disks installed in each of the x86-based server nodes that comprise the environment. Installed on each of these nodes is the traditional hypervisor along with software to create a shared resource pool of compute and storage.
What’s more is that there are vendors who collapse data protection, cloud gateway technologies, and services such as deduplication, compression and WAN optimization into their solution. In essence, hyperconverged infrastructure leverages the concepts behind software-defined storage systems in order to modernize and simplify the data center environment.
With storage hardware fully abstracted into software, it becomes possible to bring policy-based management and APIs to bear in ways that focus management efforts on the virtual machine rather than the LUN. The virtual machine (VM) is really the administrative target of interest, whereas a LUN is just a supporting element that contributes to how the virtual machine functions. By moving administration up to the VM level, policies can be applied more evenly across the infrastructure.
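As a thought experiment, this VM-level policy model can be sketched in a few lines of Python. The class names, policy tiers, and attributes below are purely illustrative and don't correspond to any vendor's actual API:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch: policies attach directly to virtual machines,
# not to the LUNs or RAID groups that happen to back them.
@dataclass
class StoragePolicy:
    name: str
    replicas: int             # copies kept across nodes
    backup_interval_min: int  # how often a recovery point is captured

@dataclass
class VirtualMachine:
    name: str
    policy: Optional[StoragePolicy] = None

gold = StoragePolicy("gold", replicas=3, backup_interval_min=15)
silver = StoragePolicy("silver", replicas=2, backup_interval_min=60)

vms = [VirtualMachine("sql-prod"), VirtualMachine("web-01"), VirtualMachine("web-02")]

# Apply policies per VM -- no LUN mapping or storage-side zoning involved.
vms[0].policy = gold
for vm in vms[1:]:
    vm.policy = silver

print([(vm.name, vm.policy.name) for vm in vms])
```

The point of the sketch is the shape of the administrative model: the policy travels with the VM, so the same rules follow a workload wherever it runs in the cluster.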
To VSA or Not to VSA?
There is a lot being written these days about why virtual storage appliances, or VSAs, (which run in user space) are terrible, why VSAs are awesome, why hypervisor converged (kernel space) storage management is terrible, and why hypervisor converged storage management is awesome. In short, should storage management services run in user space (VSA) or kernel space (kernel integrated)?
Defining VSA and Kernel-Integrated Management
Before examining the facts behind these opinions, let’s take a minute to make sure you understand what constitutes a VSA versus a kernel-integrated storage management system. Bear in mind that both VSAs and kernel-integrated management systems are part of the software-defined storage family of storage systems in which storage resides in the server, not on SANs or separate arrays – at least in general.
A VSA is a virtual machine that runs on a host computer. This virtual machine’s purpose is to manage the storage that is local to that host. The VSAs on individual hosts work together to create a shared storage pool and global namespace. This storage is then presented back to the virtual hosts and used to support virtual machines in the environment. Hyperconverged infrastructure companies such as SimpliVity, Nutanix, and Maxta all use VSAs to support the storage element of the solution.
Figure 1 provides a conceptual look at how VSAs operate. The key point here is to understand that the VSA is a virtual machine just like any other.
Figure 1: This is the general architecture that includes a VSA
Most hyperconverged systems on the market use this VSA method for handling storage abstraction.
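To make the pooling idea concrete, here is a minimal Python sketch of VSAs contributing their local disks to one shared pool with a single global namespace. The node names, disk sizes, and placement logic are invented for illustration; real VSAs handle replication, failure domains, and data placement in vendor-specific ways:

```python
# Illustrative sketch: each node's VSA contributes local disks to one
# shared pool, which presents a single global namespace back to the hosts.
class NodeVSA:
    def __init__(self, name, local_disk_gb):
        self.name = name
        self.local_disk_gb = local_disk_gb  # list of per-disk capacities

class SharedPool:
    def __init__(self, vsas):
        self.vsas = vsas
        self.namespace = {}  # one global namespace across all nodes

    def capacity_gb(self):
        # Raw pooled capacity is just the sum of every node's local disks.
        return sum(sum(v.local_disk_gb) for v in self.vsas)

    def write(self, path, size_gb):
        # Placement and replication are vendor-specific; we only record
        # that the object exists once, cluster-wide.
        self.namespace[path] = size_gb

pool = SharedPool([
    NodeVSA("node-1", [960, 960]),
    NodeVSA("node-2", [960, 960]),
    NodeVSA("node-3", [960, 960]),
])
pool.write("/vmfs/sql-prod.vmdk", 200)
print(pool.capacity_gb())  # 5760
```

A virtual disk written to the pool is addressable from any host, even though the bytes physically live on individual nodes' direct-attached drives.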
However, kernel-integrated storage is another method you should understand. Referred to as kernel-integrated storage management or hypervisor-converged storage, this non-VSA method operates through a kernel-based module that resides in the hypervisor. In other words, instead of a virtual machine handling local storage management, the hypervisor kernel itself handles the job. The most well-known kernel-integrated hyperconverged infrastructure solutions are VMware VSAN/EVO:RAIL and Gridstore, which uses an operating system driver to handle storage needs.
Choosing a Method
So, which method is better? Let’s take a look at both options and how they align with needs around hyperconverged infrastructure.
When considering hyperconvergence design selection, keep the importance of hypervisor choice in mind. If you don’t need multi-hypervisor support, then either a VSA or a kernel-integrated module will work equally well. Multi-hypervisor support is often not a legitimate requirement as long as the intended solution supports the hypervisor you use now, or plan to use in the future. How do we know this? In our market report, ActualTech Media’s 2015 State of Hyperconverged Infrastructure, only 12% of respondents felt that multiple hypervisor support was a critical feature in a hyperconverged solution. Even then, only 26% of respondents felt that support for a specific hypervisor was important, leading one to believe that people would be willing to migrate to an alternative hypervisor if the hyperconverged infrastructure solution made sense.
As soon as you introduce a need for multi-hypervisor support, your only choice is to work with a VSA. Because a VSA is just another virtual machine running on the host, that VSA can be easily transitioned to run on any other hypervisor. When it comes to portability, VSA is king. There are far more VSA-based hyperconverged infrastructure solutions available on the market.
Hypervisor-integrated systems will lock you into the hypervisor to which the kernel module is tied. For some, that’s a big downside. For others, it’s not a problem since they don’t have any plans or desire to move to a different hypervisor.
Finally, let’s talk reality. VMware has spent years tuning its hypervisor for performance and has told customers that it’s more than sufficient for running even their most performance-sensitive applications, including monster databases on Oracle and SQL Server, Exchange, and SAP. It boils down to this: if the hypervisor is good enough for those kinds of I/O-heavy applications, why can’t it also support storage and hyperconvergence?
It’s hard to say that one solution is “better” than the other. Instead, they’re just different ways to achieve the same goal, which is to abstract storage, pool it, and present it back to the hypervisor as a single shared resource pool. The choice really comes down to other goals you may have for your environment.
The Role of Custom Hardware in a Commodity Infrastructure
The first S in SDS stands for software. SDS is very much a software-driven storage architecture. However, this doesn’t mean that custom hardware has no place in the solution. For software-defined purists, having any custom or proprietary hardware anywhere in the software-defined data center might be considered blasphemous. However, don’t forget that we live in a world where not all is black and white. Shades of gray (no reference to the book intended!) permeate everything we do.
The purists are right to a point. Proprietary hardware that doesn’t serve a strategic purpose doesn’t belong in a software-defined data center. However, when proprietary hardware provides a value-add that significantly differentiates a solution, it’s worth a hard look. The vendor isn’t creating that proprietary hardware for no reason.
SimpliVity is one vendor that has made the strategic decision to include a proprietary hardware card in their hyperconverged infrastructure solution. The company’s accelerator card handles much of the heavy lifting when it comes to complex storage operations.
In the modern data center, some storage truths must be observed. The first is that latency is enemy number one. The more latency that is introduced into the equation, the slower that workloads will operate. SimpliVity’s accelerator card is inserted into a commodity server and uses custom engineered chips to provide ultra-fast write caching services that don’t rely on commodity CPUs. Moreover, the accelerator card enables comprehensive data reduction technologies (deduplication and compression) to take place in real-time with no performance penalty in order to massively reduce the total amount of data that has to be stored to disk and the I/O that it takes to carry out operations.
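Inline deduplication and compression can be illustrated with a toy Python sketch. Real systems chunk, hash, and compress at much finer granularity (and, in SimpliVity's case, offload the work to dedicated hardware), but the principle is the same: identical chunks are stored once, and each stored chunk is compressed:

```python
import hashlib
import zlib

# Toy sketch of inline data reduction: fixed-size chunks are hashed,
# and only previously unseen chunks are compressed and stored.
CHUNK = 4096
store = {}  # content hash -> compressed chunk bytes

def write(data):
    """Ingest a write, returning the list of chunk references."""
    refs = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:                # dedup: store each unique chunk once
            store[digest] = zlib.compress(chunk)
        refs.append(digest)
    return refs

data = b"A" * CHUNK * 10  # ten identical chunks of logical data
refs = write(data)
stored = sum(len(c) for c in store.values())
print(f"logical {len(data)} bytes -> {stored} bytes actually stored")
```

Ten logical chunks collapse to one stored chunk, and compression shrinks that one further, which is exactly why data reduction cuts both capacity consumed and the I/O required to persist it.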
Even when there is some custom hardware, “software defined” has nothing to do with hardware. Software defined is about abstraction from the underlying hardware, thereby allowing software to present all services to applications.
Decision 3: Data Protection Services
Data protection shouldn’t be considered an afterthought in your data center. It should be treated as a core service that is central to how IT operates. Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO) should be a key discussion point as you’re considering hyperconverged infrastructure solutions. Bear in mind that not all hyperconverged products come with the same levels of data protection.
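A quick back-of-the-envelope sketch shows why these two objectives matter when comparing products. The backup interval, restore throughput, and VM size below are invented for illustration:

```python
# RPO: how much data you can afford to lose, bounded by how often a
#      recovery point is captured.
# RTO: how long recovery may take before the business is impacted.
# All numbers here are illustrative placeholders.

backup_interval_min = 60       # a recovery point captured every hour
restore_rate_gb_per_min = 5    # observed restore throughput
vm_size_gb = 500

# Worst case, everything since the last recovery point is lost.
worst_case_rpo_min = backup_interval_min

# A rough floor on recovery time: data to restore / restore throughput.
estimated_rto_min = vm_size_gb / restore_rate_gb_per_min

print(f"Worst-case RPO: {worst_case_rpo_min} min")
print(f"Estimated RTO:  {estimated_rto_min:.0f} min")
```

If the business can only tolerate losing 15 minutes of data, an hourly recovery point fails the RPO test no matter how fast the restore is, which is why both numbers need to be pinned down before choosing a solution.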
Decision 4: The Management Layer
The data center has become an ugly place when it comes to management. There are separate administrative consoles for everything in the environment. The result is that administrators have no consistency in their work and are burdened with inefficiencies. To simplify management in the datacenter, admins need as few interfaces as possible. Here are the most common options that you need to be aware of when considering a hyperconverged virtual infrastructure:
Virtualization Layer Management
For those using VMware vSphere, vCenter is going to be the virtualization layer management tool that must be in place. Organizations using Microsoft Hyper-V will use System Center Virtual Machine Manager (SCVMM).
Orchestration and Automation Layer Management
Once the hyperconverged infrastructure is running, common tasks must be automated to gain efficiencies. Common orchestration and automation tools are:
• VMware’s vRealize Automation (vRA) — Provides automated provisioning through a service catalog. With the ability to deploy across multi-vendor cloud and virtual infrastructures, vRA allows you to provide the applications to the business as needed.
• Cisco’s Unified Computing System Director (or UCSD) — For those using Cisco UCS Servers, UCSD offers dynamic provisioning, dynamic hardware, and significant reduction in management points.
• OpenStack — Rapidly growing in popularity, OpenStack is being adopted by enterprises to create private cloud infrastructures.
OpenStack is highly customizable and offers the lowest entry cost of any cloud management platform (CMP) because of its open source price tag. KVM is the default hypervisor for OpenStack, therefore your main concern may be whether or not the HCI solution you choose supports multiple hypervisors, and specifically KVM.
Many hyperconvergence solutions provide you with a whole new management interface for your virtual infrastructure. With most hyperconverged solutions running vSphere today, the idea of creating a whole new management tool for the virtual infrastructure disregards the fact that you already have one: VMware vCenter.
A Representational State Transfer (REST) API provides you with the entry point required to integrate multiple management points and automate your data center.
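As a hedged illustration, here is how an automation script might construct a REST call against a hypothetical HCI management endpoint. The URL, path, payload fields, and token are all made up; consult your vendor's actual API reference for the real ones:

```python
import json
import urllib.request

# Hypothetical management endpoint -- vendors publish their own base URLs,
# resource paths, and authentication schemes.
BASE_URL = "https://hci-mgmt.example.com/api/v1"

def build_clone_request(vm_name, clone_name, token):
    """Construct (but do not send) a REST call that clones a VM."""
    payload = json.dumps({"source": vm_name, "name": clone_name}).encode()
    return urllib.request.Request(
        f"{BASE_URL}/vms/clone",
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )

req = build_clone_request("sql-prod-01", "sql-prod-01-clone", "example-token")
print(req.get_method(), req.full_url)
```

Because the call is plain HTTP plus JSON, the same pattern plugs into vRealize Automation workflows, OpenStack tooling, or a simple cron job, which is exactly why a documented REST API matters when consolidating management points.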
You should ensure that the hyperconvergence solution you choose offers compatibility with the virtualization management, automation, orchestration, and API tools discussed here. Also, ensure that your hyperconvergence solution does whatever is possible to reduce the number of management points and tools that are required for administration and troubleshooting.
By its very nature, hyperconverged infrastructure requires using some kind of hypervisor. The hypervisor has become the standard layer on which most new business applications are deployed. Although there are still services deployed on bare metal servers, they are becoming far less common as virtualization assimilates more and bigger workloads.
With virtualization forming the core for hyperconvergence infrastructure solutions, the question naturally turns to one of hypervisor choice. If there’s one thing IT administrators try to avoid, it’s a lack of choice. Organizations demand choice, and this is also true when considering the server virtualization component of the data center.
Just keep in mind a few key facts when choosing hypervisor support:
• First, although variety in choice is highly desired, it’s not always required for individual hyperconverged infrastructure solutions. There are options on the market today that each support vSphere, KVM, and Hyper-V. If you absolutely demand to be able to use a particular hypervisor, there is likely a solution waiting for you. However, not every hyperconvergence vendor supports every hypervisor.
• Second, for the hyperconverged infrastructure vendors that do support multiple hypervisors, the customer (that’s you!) gets to decide which hypervisor to run on that platform. With that said, we discovered in our 2015 State of Hyperconverged Infrastructure report that people don’t believe that supporting multiple hypervisors is that important.
In terms of which hypervisors are the most popular, it’s pretty common knowledge that the most popular hypervisor on the market today is VMware’s vSphere. With a formidable command of the hypervisor market, vSphere is offered as a primary hypervisor choice on a number of hyperconverged platforms.
Therefore, your decision around hypervisor support is really simple: For which hypervisor do you require support?[/vc_column_text][/vc_column][/vc_row]