Storage Remains a Factor in Hyper-converged Infrastructure Deployment Decisions

By Jerome M. Wendt | September 9, 2016 | Hyper-converged

Anyone who attended VMworld last week in Las Vegas and walked through the exhibit hall could hardly miss the vast number of hyper-converged infrastructure (HCI) and HCI-like vendors showcasing their wares. Cisco, Dell, EMC, HPE, Nutanix, Maxta, Cohesity, Pivot3, Rubrik, SimpliVity, and Datrium were there, just to name a few, and I am sure there were others. Yet what caught my attention in speaking with their representatives and some of their users is how storage remains a factor in the architecture of HCI solutions.

One of the premises of HCI is that its management should get simpler. By creating a hyper-converged server that contains compute, storage, and memory and then layering software that offers virtualization, networking, data protection, and scale-out clustering across these servers, organizations get the benefits of the traditional server, network, and storage array stack without all of its management complexities. Further, by using flash in part or in total within these servers, organizations get the performance they need at a lower cost.

While this premise largely holds up, cracks may appear in this architecture as organizations look to use HCI more broadly in their environment. Ironically, it is storage and its effective management and use within the HCI that again creates challenges for organizations in two specific areas:

  1. Cost containment
  2. Optimized application performance

In talking with Datrium, it became clear that it seeks to contain both of these costs in HCI deployments. One issue organizations encounter as they look to deploy hyper-converged infrastructure is that the cost of such a solution may rival that of a SAN or NAS solution consisting of servers, networking, and storage. While an HCI solution may arguably be easier to manage than these SAN or NAS solutions, justifying its comparable cost can become a head-scratcher, since these solutions consist of what should be commodity components.

The Datrium DVX solution seeks to address this dilemma with a two-fold approach. It first permits organizations to continue down their existing path of using their existing blade or other installed servers equipped with flash drives and a VMware ESX hypervisor. To these ESX servers it then adds its own software, a Hyperdriver, that stores data both on the local ESX server's flash drives and on a central Datrium DVX NetShelf server/storage appliance. Using this design, writes still occur quickly and are protected across multiple devices, while reads occur much more quickly since data is retrieved from the data store residing on the local server's flash.
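To make the split concrete, here is a minimal conceptual sketch of this design in Python. This is not Datrium's actual code; the class and method names are illustrative assumptions. It captures the two ideas above: a write is persisted to the shared NetShelf before it is acknowledged (so it survives a host failure), while reads are served from the host's local flash whenever possible.

```python
class NetShelf:
    """Shared appliance: the durable, authoritative copy of all data."""
    def __init__(self):
        self.store = {}

    def persist(self, block_id, data):
        # Durable write, visible to every host that shares this appliance.
        self.store[block_id] = data


class HyperdriverHost:
    """ESX host running host-side software with a local flash cache."""
    def __init__(self, netshelf):
        self.flash_cache = {}   # local flash: fast read copy
        self.netshelf = netshelf

    def write(self, block_id, data):
        # Data lands in local flash AND is persisted to the NetShelf
        # before the write is considered complete.
        self.flash_cache[block_id] = data
        self.netshelf.persist(block_id, data)

    def read(self, block_id):
        # Common case: serve the read from local flash, no network hop.
        if block_id in self.flash_cache:
            return self.flash_cache[block_id]
        # Cache miss: fetch from the shared appliance and warm the cache.
        data = self.netshelf.store[block_id]
        self.flash_cache[block_id] = data
        return data
```

A second host attached to the same NetShelf can read data written by the first, which is the property that keeps the shared appliance, rather than any one server, the system of record.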

[Figure: Datrium server-powered storage architecture]

Source: Datrium

The appeal of Datrium's design approach to HCI is that organizations may continue to use their existing servers or blade servers while introducing Datrium's value-add in the NetShelf server/storage appliance that resides on the network and that all servers access. This design approach also helps maintain application performance for the virtual machines (VMs) on each ESX server, both initially and over time.

I also spoke at length with Pivot3 at VMworld about its HCI models. While Pivot3 is not following the same design path as Datrium in terms of using storage-specific servers as part of its hyper-converged architecture, it clearly recognizes that properly deploying and utilizing storage within its HCI solution plays an important role in controlling costs and optimizing application performance.

One of the key ways Pivot3 addresses these dual concerns is by leveraging its acquisition earlier this year of NexGen Storage to take advantage of the quality of service (QoS) features and PCIe flash architecture found on the NexGen arrays. An issue that can emerge in hyper-converged deployments is the inability of applications/VMs to get the levels of performance they require. While there are a number of ad hoc workarounds to this issue, most involve manual intervention and spending more money. By adding granular QoS controls, Pivot3 can instead guarantee application performance to even the most demanding, low-latency workloads.

[Figure: Pivot3 QoS]

Source: Pivot3

Pivot3 elected to automate, in part, the resolution of these issues through its acquisition of NexGen Storage. By giving organizations the flexibility to incorporate NexGen's N5 PCIe Flash Arrays into its HCI solution, organizations get dynamic QoS for their VMs. This approach both lowers costs, since organizations need fewer servers that require less time to manage, and accelerates application performance by using QoS to ensure each VM gets the level of performance its applications demand.
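The idea behind tier-based QoS guarantees can be sketched with a few lines of Python. This is an illustrative model, not Pivot3's implementation: the tier names, priorities, and allocation policy below are assumptions chosen to show how guaranteed performance floors are honored in priority order before any leftover capacity is shared.

```python
# Hypothetical service tiers: higher number = higher priority.
TIERS = {"mission-critical": 3, "business-critical": 2, "non-critical": 1}

def allocate_iops(total_iops, vms):
    """Allocate IOPS to VMs with per-tier guarantees.

    vms: list of (name, tier, guaranteed_iops) tuples.
    Returns a dict mapping VM name -> granted IOPS.
    Guarantees are satisfied in descending tier priority; any capacity
    left over is then split evenly across all VMs.
    """
    alloc = {}
    remaining = total_iops
    # Honor guaranteed floors, highest-priority tier first.
    for name, tier, guaranteed in sorted(vms, key=lambda v: -TIERS[v[1]]):
        grant = min(guaranteed, remaining)
        alloc[name] = grant
        remaining -= grant
    # Distribute any surplus evenly (one simple policy among many).
    if remaining > 0 and vms:
        share = remaining // len(vms)
        for name, _, _ in vms:
            alloc[name] += share
    return alloc
```

With a 10,000 IOPS pool and three VMs guaranteed 6,000, 3,000, and 2,000 IOPS, the mission-critical and business-critical VMs receive their full floors and the non-critical VM absorbs the shortfall, which is exactly the behavior that removes the manual intervention described above.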

DCIG sees HCI becoming the predominant architecture that organizations of all sizes adopt and migrate to over time to host and manage the majority of their virtualized infrastructure. Before they do, however, these solutions need to optimize the storage that is part of their design so that storage remains an enabler, rather than an inhibitor, of continued HCI adoption. Based upon what DCIG saw from providers such as Datrium and Pivot3, these solutions are well on their way to delivering the key features organizations need to scale HCI to meet their application requirements now and into the future.

Note: This blog entry was updated at 9:30 am on September 16, 2016, to properly reflect some technical capabilities of Pivot3’s product.

Jerome M. Wendt

About Jerome M. Wendt

Jerome Wendt is the President and Lead Analyst of DCIG, Inc., an independent storage analyst and consulting firm. He founded the company in September 2006.
