Networking Considerations for Hyperconvergence

Hyper-Converged Infrastructure (HCI) has unique networking requirements.  Realizing the full potential of HCI hardware requires paying careful attention to those requirements.  Every solution is different, and every hardware combination presents new challenges, but there are some rules of thumb.

As regular visitors to this site will know, HCI unifies storage located inside individual servers, creating a shared storage solution.  This shared storage enables vital virtualization technologies such as workload migration and High Availability (HA).

HCI’s shared storage is also important to containerization deployments.  For all that containerization is predicated on composable infrastructure and disposable workload instances, the data those workloads operate on has to persist somewhere.  As with many other areas of the data center, HCI is increasingly the storage solution of choice for container administrators.

The special networking considerations come not from what HCI can do, but how it does it.

Storage History 101

Storage comes in two basic forms: local and networked.  Local storage can be disks inside a server, or disks in their own enclosures attached to a server (typically through mini-SAS connectors), known as JBOD (Just a Bunch Of Disks).

Networked storage comes in a few flavours, but it used to be fairly simple.  One used a NAS for file-based storage, typically over an Ethernet network.  SANs were for block storage, with dedicated Fibre Channel networks being all the rage until relatively recently.  While initially viewed as the “poor country cousin”, iSCSI over Ethernet networks eventually gained traction.

On the network side there were – and remain – technology wars around how best to deliver this network-based storage.  Fibre Channel over Ethernet (FCoE) never really caught on, but Remote Direct Memory Access (RDMA) has.  Native Fibre Channel and a few others belong in that mix as well.

Rather than have servers sprout multiple network links, converged Ethernet was born.  Organizations could have all their storage traffic, workload traffic and so forth running through a single NIC.  This worked, but with varying degrees of success.  Quality of Service controls quickly became important, and the complexity of networking configuration increased.

It is against this backdrop that HCI must be considered.  HCI was born partly in response to some of the madness surrounding storage, but it hasn’t managed to avoid that madness entirely.

HCI was born because network storage had largely collapsed into an oligopoly.  Enterprise-class NAS and SAN arrays were prohibitively expensive and the required networking was complex, expensive or both.

HCI was championed and evangelized by many as the low-cost solution to IT ills.  It would allow organizations to use commodity drives in commodity servers with commodity networking, while still running workloads on the same servers that hosted the storage drives.  In reality, HCI ended up occupying a middle ground on price, ease of network design and ease of configuration.

HCI Considerations

In order to unify storage resources located in a cluster’s various hosts, HCI needs a network.  Hypothetically, HCI can be made to work in almost any network environment.  Your humble scribe once built a metro vSAN cluster using IrDA sensors, infrared lasers and a lot of mirrors.  This was done just to spite someone in VMware marketing.  Late-night Slack conversations are to be avoided.

Just because you can do a thing, however, doesn’t mean it’s a good idea.  It would be criminally negligent, for example, to run any production workloads on a franken-IrDA cluster.  The underlying network solution, while perfectly acceptable for sharing basic internet access, is simply not reliable enough for an HCI cluster.

Because most HCI solutions use TCP/IP, they can work over some pretty horrible networks.  If a packet is lost or corrupted, another copy of it can be requested.  This reliability comes at a price, however: as network reliability degrades, latency soars.

In storage, latency matters.  The longer it takes for storage commands to execute, the fewer storage commands can occur per second.  This can be somewhat mitigated by the use of jumbo frames, though many HCI vendors avoid them because of the problems they can cause.
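A rough illustration of both points (the numbers below are assumed for the example, not measurements): for strictly serial, queue-depth-1 I/O, the IOPS ceiling is simply the inverse of round-trip latency, and jumbo frames cut the number of Ethernet frames a given write must be split across:

```python
# Illustrative back-of-envelope math only; real storage stacks pipeline
# commands, so these are ceilings for strictly serial (queue depth 1) I/O.

def qd1_iops(round_trip_latency_s):
    """Max storage commands per second if each must finish before the next starts."""
    return 1.0 / round_trip_latency_s

def frames_per_write(write_bytes, mtu=1500, headers=40):
    """Ethernet frames needed to carry one write at a given MTU.
    'headers' approximates per-frame IP + TCP overhead (assumed 40 bytes)."""
    payload = mtu - headers
    return -(-write_bytes // payload)  # ceiling division

print(qd1_iops(0.001))    # 1 ms round trip  -> 1000 IOPS ceiling
print(qd1_iops(0.0001))   # 100 us round trip -> 10000 IOPS ceiling
print(frames_per_write(64 * 1024))            # standard MTU: 45 frames
print(frames_per_write(64 * 1024, mtu=9000))  # jumbo frames: 8 frames
```

The same arithmetic run backwards shows why soaring latency hurts so much: every extra millisecond of retransmission delay on a serial I/O path costs hundreds of potential operations per second.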

The network performance of a perfectly reliable IrDA network under ideal circumstances would also be an issue.  Across such a low-bandwidth link one would struggle to achieve the full performance of an HCI solution made out of USB 1.0 thumb drives.  Compare this to modern NVMe-based HCI solutions, which can challenge top-of-the-line 100 gigabit networks.
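To put rough numbers on that gap (the drive and link figures here are assumed ballpark values, not benchmarks): USB 1.0 tops out at 12 Mbit/s, while a single PCIe 3.0 NVMe drive can stream roughly 3 GB/s, so only a handful of NVMe drives are needed to fill even a 100 gigabit link:

```python
def drives_to_saturate(link_gbit, drive_gbyte_per_s):
    """How many drives, streaming flat out, fill a given network link."""
    link_gbyte_per_s = link_gbit / 8.0
    return link_gbyte_per_s / drive_gbyte_per_s

def seconds_to_move(num_bytes, link_gbit):
    """Time to push a given amount of data over a link, ignoring overhead."""
    return num_bytes * 8 / (link_gbit * 1e9)

# Assumed figures: NVMe drive ~3 GB/s, USB 1.0 link = 0.012 gigabit.
print(drives_to_saturate(100, 3.0))     # ~4.2 NVMe drives fill 100 GbE
print(seconds_to_move(1e12, 100))       # 1 TB over 100 GbE: 80 seconds
print(seconds_to_move(1e12, 0.012))    # 1 TB over USB 1.0-class IrDA: ~7.7 days
```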

In Practice

Sizing network capacity is the most obvious HCI networking challenge.  Assuming you have dedicated replication or backplane NICs, the general networking rule of thumb is as follows:

1 gigabit networks are enough for nodes using only mechanical disks

10 gigabit networks are enough for hybrid mechanical/flash or SATA all-flash nodes

25 gigabit or better networks are required for hybrid NVMe/SATA flash or all-NVMe flash nodes

Also as a rule of thumb, if you plan to take the converged Ethernet approach – running your VM traffic on the same network as your replication traffic – then you need to bump your network links up by one tier.  This means you’d need 10 gigabit NICs for mechanical-only clusters, 25 gigabit or better for clusters with SATA flash, and so forth.
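The rules of thumb above can be encoded as a small lookup.  The tier names and the 100 gigabit top tier are my own extrapolation of the “and so forth”; treat this as a sketch of the heuristic, not vendor sizing guidance:

```python
# NIC speeds in gigabits, ordered slowest to fastest.
NIC_TIERS = [1, 10, 25, 100]

# Minimum tier for dedicated storage/replication links, per media type.
MEDIA_MINIMUM = {
    "mechanical": 1,   # spinning disks only
    "hybrid": 10,      # mechanical + flash cache
    "sata_flash": 10,  # all-flash SATA
    "nvme": 25,        # any NVMe in the data path
}

def recommended_nic_gbit(media, converged=False):
    """Rule-of-thumb NIC speed; converged Ethernet bumps one tier up."""
    tier = NIC_TIERS.index(MEDIA_MINIMUM[media])
    if converged:
        tier = min(tier + 1, len(NIC_TIERS) - 1)
    return NIC_TIERS[tier]

print(recommended_nic_gbit("mechanical"))                  # 1
print(recommended_nic_gbit("mechanical", converged=True))  # 10
print(recommended_nic_gbit("nvme", converged=True))        # 100
```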

Many HCI vendors require separate VM and storage networks.  Unlike traditional network storage, these don’t have to be physically separate networks; different subnets are fine.  Network resiliency is usually provided by having multiple network cards and using multiple network switches in order to achieve multipath.

This means that your average HCI node will have four network cables: VM LAN and storage LAN links to the primary switch, as well as VM LAN and storage LAN links to the backup switch.  Some HCI solutions will also have a dedicated management network link, making for five network cables per node.

Not exactly converged Ethernet.
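For switch-port planning, that cabling multiplies quickly.  A sketch, assuming the dual-switch, four-or-five-cable layout described above and a management link that lands on the primary switch (that placement is my assumption; some shops use a separate management switch entirely):

```python
def ports_per_switch(nodes, dedicated_mgmt=False):
    """Ports each of the two redundant switches must provide for an HCI cluster.
    Each node runs one VM LAN link and one storage LAN link to each switch;
    the optional management link is assumed to land on the primary switch."""
    per_node = 2  # VM LAN + storage LAN
    primary = nodes * per_node + (nodes if dedicated_mgmt else 0)
    backup = nodes * per_node
    return primary, backup

print(ports_per_switch(8))        # (16, 16): 8 nodes, 4 cables each
print(ports_per_switch(8, True))  # (24, 16): plus 8 management links
```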

TL;DR

Organizations should not buy HCI solutions without giving networking serious thought.  While HCI doesn’t require the physically dedicated networks of yesteryear, nor the network configuration complexity that caused so much grief during the consolidation era, doing HCI by the book can nonetheless require a great many network ports be dedicated to a single host.

The faster and more capable the storage one wishes to pack into a cluster’s nodes, the faster the network will need to be to provide for that storage.  Trying to shoehorn VM traffic and storage traffic onto the same network link will go very badly for high-performance clusters, making lots of high-speed network links even more important.



Trevor Pott

Trevor Pott is a full-time nerd from Edmonton, Alberta, Canada. He splits his time between systems administration, technology writing, and consulting. As a consultant he helps Silicon Valley start-ups better understand systems administrators and how to sell to them.

