Hyperconvergence Explained

How do the storage features of a hyperconverged platform compare to my legacy storage platform?


This question comes, understandably, most often from a storage administrator who is just positive this hyperconverged widget can’t keep up with his or her beloved monolithic storage array. In terms of integrity (storage folks are notoriously, and rightly, sensitive about integrity), efficiency, and features, can HCI platforms really keep up?

Unless an organization is already using an innovative monolithic storage array (like the hybrid arrays that have flooded the market over the past few years), chances are pretty good that the software-defined storage an HCI platform provides actually has more desirable features than the array being replaced. Here are a few examples of where the integrated SDS architecture might have advantages:

  • Inline deduplication – most HCI platforms perform deduplication at the time of ingestion. For a number of reasons, this is more efficient and cost-effective than the post-process deduplication likely to be present on a legacy array. (A post-process job may still run later for enhanced data reduction.) The deduplication sketch after this list illustrates the idea.
  • Advanced snapshots/clones – because of how storage is designed for HCI, snapshots and clones typically operate at virtual machine granularity, which is substantially more efficient than snapshotting an entire LUN or share. At this granularity, cloning a virtual machine is also very efficient: the clone is simply a metadata clone rather than a full copy of all the data that makes up the VM (also shown in the deduplication sketch after this list).
  • Inherent WAN optimization – because deduplication in an HCI platform is global, only unique blocks need to cross the WAN during replication between sites, removing the need for dedicated WAN acceleration appliances. An organization may still need WAN acceleration for application-level purposes, but no longer for storage replication.
  • Data integrity – HCI platforms have as much or more failure tolerance than a monolithic array, thanks to network RAID and RAIN-like protection mechanisms managed by the VSAs or kernel modules. Writes are protected before being acknowledged, just as one would expect from an enterprise-class storage array (see the replicated-write sketch after this list).
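
To make the deduplication and cloning points more concrete, here is a minimal sketch of a content-hash based store that deduplicates inline (at write time) and clones by copying only metadata. The class, method names, and 4 KB block size are assumptions for illustration, not any vendor's actual implementation.

```python
import hashlib

BLOCK_SIZE = 4096  # assumed fixed block size for simplicity

class DedupStore:
    """Illustrative content-addressed store: inline dedup + metadata clones."""

    def __init__(self):
        self.blocks = {}         # content hash -> block data (stored once)
        self.vm_block_maps = {}  # vm name -> ordered list of content hashes

    def write(self, vm, data):
        """Ingest a VM's data, deduplicating at the time of ingestion."""
        block_map = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            if digest not in self.blocks:   # store each unique block only once
                self.blocks[digest] = block
            block_map.append(digest)
        self.vm_block_maps[vm] = block_map

    def clone(self, src_vm, new_vm):
        """A clone copies only the metadata (the block map), not the data."""
        self.vm_block_maps[new_vm] = list(self.vm_block_maps[src_vm])

    def read(self, vm):
        return b"".join(self.blocks[h] for h in self.vm_block_maps[vm])


store = DedupStore()
store.write("vm-01", b"A" * 8192 + b"B" * 4096)  # two identical "A" blocks dedupe to one
store.clone("vm-01", "vm-02")                    # near-instant, no data copied
assert store.read("vm-02") == store.read("vm-01")
print(len(store.blocks))                         # 2 unique blocks despite 12 KB written
```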
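
Likewise, a minimal sketch of the write path described under data integrity: the write is acknowledged to the guest only after the required number of copies exists on distinct nodes. The replication factor of 2 and the node layout are assumptions for illustration, not a description of any specific platform.

```python
REPLICATION_FACTOR = 2  # assumed number of copies required before acknowledging

class Node:
    """A cluster node with its own local store."""

    def __init__(self, name):
        self.name = name
        self.local_store = {}

    def persist(self, key, block):
        self.local_store[key] = block
        return True  # in a real system: True only once the block is durable


def write_block(nodes, key, block):
    """Write a block to REPLICATION_FACTOR distinct nodes before acknowledging."""
    acks = 0
    for node in nodes:
        if node.persist(key, block):
            acks += 1
        if acks == REPLICATION_FACTOR:
            return "ACK"  # safe to acknowledge: copies exist on two nodes
    raise IOError("not enough healthy nodes to protect the write")


cluster = [Node("node-a"), Node("node-b"), Node("node-c")]
print(write_block(cluster, "blk-0001", b"payload"))  # ACK only after two copies land
```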

It may be a challenging paradigm shift for a storage world that can be so focused on hardware, but SDS provides the leverage to accomplish more than ever before, and often to do it with less effort and at lower cost.
