DCIG 2019-20 Enterprise Deduplication Backup Target Appliance Buyer’s Guide Now Available

DCIG is pleased to announce the availability of its 2019-20 Enterprise Deduplication Backup Target Appliance Buyer’s Guide, which helps enterprises assess the enterprise deduplication backup target appliance marketplace and identify which appliance may be the best fit for their environment. This Buyer’s Guide includes data sheets for 19 enterprise deduplication backup target appliances that achieved rankings of Recommended and Excellent. These products are available from five vendors: Cohesity, Dell EMC, ExaGrid, HPE, and NEC.

Enterprises rarely want to talk about the make-up of their data center infrastructure anymore. They prefer to talk about artificial intelligence, cloud adoption, data analytics, machine learning, software-defined data centers, and uninterrupted business operations. As part of those discussions, they want to leverage current technologies to drive new insights into their business and, ultimately, create new opportunities for business growth or cost savings. That agenda assumes their underlying data center technologies simply work as expected.

The operative phrase here is “works as expected,” especially as it relates to enterprise deduplication backup target appliances. Expectations as to the exact features such an appliance should deliver can vary widely.

If an enterprise only wants an enterprise deduplication backup target appliance that meets traditional data center requirements, every appliance covered in this Buyer’s Guide satisfies those needs. Each one can:

  • Serve as a target for backup software.
  • Analyze and break apart data in backup streams to optimize deduplication ratios.
  • Replicate backup data to other sites.
  • Replicate data to the cloud for archive, disaster recovery, and long-term data retention.

While the appliances from each provider use different techniques to accomplish these objectives, and some perform these tasks better than others depending on the use case, each one delivers on these objectives.
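As a generic illustration of the second bullet above (not any vendor's actual implementation), target-based deduplication boils down to splitting a backup stream into chunks and storing each unique chunk only once, keyed by a content hash. The fixed-size chunking below is deliberately simplified; real appliances typically use variable, content-defined chunking:

```python
import hashlib

def dedup_store(stream: bytes, chunk_size: int = 4096):
    """Split a backup stream into fixed-size chunks and store each
    unique chunk once, keyed by its SHA-256 digest."""
    store = {}    # digest -> unique chunk bytes
    recipe = []   # ordered digests needed to rebuild the stream
    for i in range(0, len(stream), chunk_size):
        chunk = stream[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)
        recipe.append(digest)
    return store, recipe

def restore(store, recipe) -> bytes:
    """Rebuild the original stream from the chunk store and recipe."""
    return b"".join(store[d] for d in recipe)

# Two "full backups" of the same data deduplicate almost perfectly:
backup = b"A" * 8192 + b"B" * 4096
store, recipe = dedup_store(backup + backup)
print(len(recipe), len(store))   # 6 logical chunks, 2 unique chunks stored
```

The deduplication ratio in this toy case is 3:1; the repeated full backups the appliances are designed for are exactly what drives the high ratios vendors advertise.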

But for enterprises looking for a solution that enables them to meet their broader, more strategic objectives, only a couple of the providers covered in this Buyer’s Guide appear to be taking the appropriate steps to position enterprises for the software-defined hybrid data center of the future. Appliances from these providers better position enterprises to perform next-generation data lifecycle management tasks while still providing the features necessary to accomplish traditional backup and recovery tasks.

It is in this context that DCIG presents its DCIG 2019-20 Enterprise Deduplication Backup Target Appliance Buyer’s Guide. As in the development of all prior DCIG Buyer’s Guides, DCIG has already done the heavy lifting for enterprise technology buyers by:

  • Identifying a common technology need with competing solutions
  • Scanning the environment to identify available products in the marketplace
  • Gathering normalized data about the features each product supports
  • Providing an objective, third-party evaluation of those features from an end-user perspective
  • Describing key product considerations and important changes in the marketplace
  • Presenting DCIG’s opinions and product feature data in a way that facilitates the rapid comparisons of various products and product features

The products that DCIG ranks as Recommended in this Guide are as follows (in alphabetical order):

Access to this Buyer’s Guide edition is available immediately by following this link to any of the following DCIG partner sites:



Ways Persistent Memory is Showing Up in Enterprise Storage in 2019

Persistent Memory is bringing a revolution in performance, cost and capacity that will change server, storage system, data center and software design over the next decade. This article describes some ways storage vendors are integrating persistent memory into enterprise storage systems in 2019.

Intel Optane DC Persistent Memory Modules (PMM)

As noted in the second article in the series, NVMe-oF Delivering While Persistent Memory Remains Mostly a Promise, the lack of a standard DIMM format for persistent memory is a key barrier to the development of NVDIMMs. Nevertheless, Intel recently announced general availability of pre-standard Optane DIMMs, branded Intel Optane DC Persistent Memory Modules (PMM).

Intel supports multiple modes for accessing Optane PMM, each of which exposes different capabilities for systems to exploit. In “Memory Mode,” DRAM acts as a hot-data cache in front of the Optane capacity tier; somewhat counterintuitively, in this mode the Optane presents itself as one large pool of volatile memory, even though the medium itself is non-volatile. The second mode, “App Direct Mode,” exposes Optane as true persistent memory, and applications write to it using load/store memory semantics.
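The load/store-with-explicit-flush pattern of App Direct Mode can be approximated in ordinary Python with a memory-mapped file: the application writes directly into the mapping, then flushes the range it wants made durable (with real Optane PMM, PMDK's libpmem and cache-line flush instructions play this role). This is only a conceptual sketch of the programming model, not persistent-memory code:

```python
import mmap
import os
import tempfile

# A small file stands in for a persistent memory region.
path = os.path.join(tempfile.mkdtemp(), "pmem_region")
with open(path, "wb") as f:
    f.write(b"\0" * 4096)

with open(path, "r+b") as f:
    region = mmap.mmap(f.fileno(), 4096)
    region[0:5] = b"hello"   # store semantics: write directly into memory
    region.flush()           # make the write durable (analogue of pmem_persist)
    region.close()

# After a "restart", the data persists because it was explicitly flushed.
with open(path, "rb") as f:
    print(f.read(5))   # b'hello'
```

The key contrast with Memory Mode is that here the application, not the memory controller, decides what becomes durable and when.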

NetApp demonstrates one way this technology can be integrated into existing enterprise storage systems. It uses Optane DIMMs in application servers as part of the NetApp Memory Accelerated (MAX) Data solution. MAX Data writes to Optane PMM in App Direct Mode as the hot storage tier. The solution tiers cold data to NetApp AFF all-flash arrays. With NetApp MAX, applications do not need to be rewritten to take advantage of Optane. Instead, the solution presents the Optane memory as POSIX-compliant storage.

Storage Vendors are Using Optane SSDs in Multiple Ways

As noted in the first article in this series, multiple storage system providers are taking advantage of Optane SSDs. Some storage vendors, such as HPE, use the Optane SSDs to provide a large ultra-low-latency read cache. Some vendors, including E8 Storage, use Optane SSDs as primary storage. Still others use Optane SSDs as the highest performing tier of storage in a multi-tiered storage environment.

A startup called VAST Data recently emerged from stealth. Its solution uses Optane SSDs as a write buffer and metadata store in front of the primary storage pool. It uses the least expensive flash memory–currently QLC SSDs–as the only capacity tier. The architecture also disaggregates storage processing from the storage pool by running the logic in containers on servers that talk to the storage nodes via NVMe-oF.

MRAM is Being Embedded Into Storage Components

At the SNIA Persistent Memory Summit, one presenter said that the largest uses of MRAM in the data center are in enterprise SSDs, RAID controllers, storage accelerator add-in cards and network adapters. For example, IBM uses MRAM in its Flashcore Modules, its most recent generation of 2.5-inch U.2 SSDs. The MRAM replaced the supercapacitors plus DRAM used in the prior generation of SSDs, simplifying the design and enabling more capacity in less space without the risk of data loss.

Persistent Memory Will Impact All Aspects of Data Processing

Technology companies have invested many millions of dollars into the development of a variety of persistent memory technologies. Some of these technologies exist only in the laboratories of these companies. But today, multiple vendors are incorporating Intel’s Optane 3D XPoint and MRAM into a variety of data center products.

We are in the very early phases of a persistent-memory-enabled revolution in performance, cost and capacity that will change server, storage system, data center and software design over the next decade. Although some aspects of this revolution are being held back by a lack of standards, multiple vendors are now shipping storage class memory as part of their enterprise storage systems. The revolution has begun.


This is the third in a series of articles about Persistent Memory and its use in enterprise storage. The second article in the series is NVMe-oF Delivering While Persistent Memory Remains Mostly a Promise.

This article was updated on 4/5/2019 to add a link to the prior article in the series.

NVMe-oF Delivering While Persistent Memory Remains Mostly a Promise

The SNIA Persistent Memory Summit held in late January 2019 provided a good view into the current state of the industry. Some key technologies and standards related to persistent memory are moving forward more slowly than expected. Others are finally transitioning from promise to products. This article summarizes a few key takeaways from the event as they relate to enterprise storage systems.

Great Performance Gains Possible Without Modifying Software

One point the presenters at this SNIA-sponsored event took pains to make clear is that great performance gains from storage class memory are possible without making any changes to the software that uses the storage. For example, a machine learning test using Optane to extend server memory capacity allowed a standard host to complete 3x more analytics models.

These results are being obtained due to the efforts of SNIA and its member organizations. They developed the SNIA NVM Programming Model and a set of persistent memory libraries. Both Microsoft Windows and multiple Linux variants take advantage of these libraries to enable any application running on those operating systems to benefit from persistent memory.

Optane is a Gap Filler in the Storage Hierarchy, Not a DRAM Replacement

[Slide from an Intel presentation at the SNIA PM Summit, showing Optane’s place in the storage/memory hierarchy between DRAM and NAND SSD]

One fact made clear across multiple presentations is that Optane (Intel’s brand name for 3D XPoint persistent memory) fills an important gap in the storage hierarchy, but falls short as a non-volatile replacement for DRAM. Every storage medium has strengths and weaknesses. Optane has excellent read latency and bandwidth, so deploying it as a persistent read-cache as HPE is doing may be its primary use case in enterprise storage systems.

MRAM is Shipping Now and Being Embedded Into Many Products

The main surprise for me from the event was the extent to which MRAM has become a real product. In addition to Everspin and Avalanche, both Intel and Samsung have announced that they are ready to ship STT-MRAM (spin-transfer torque magnetic RAM) in commercial production volumes.

MRAM offers read/write speeds similar to DRAM, and enough endurance to be used as a DRAM replacement in many scenarios. The initial focus of MRAM shipments is embedded devices, where the necessary surrounding standards are already in place. MRAM’s capacity, endurance and low power draw make it a great fit with the requirements of next-generation embedded edge devices.

[Photo: Kevin Conley, CEO of Everspin, presenting the memory technology landscape at the PM Summit]

Kevin Conley, CEO of Everspin Technologies, gave an especially helpful presentation describing the characteristics of MRAM and how it fits into the memory technology landscape. He stated that MRAM is currently being used in enterprise SSDs, RAID controllers and storage accelerator cards. His 10-minute presentation begins approximately 13 minutes into this video recording.

Persistent Memory Moving Onto the NIC

One new use case for persistent memory is to place it on network interface cards. The idea is to persist writes on the NIC before the data leaves the host server, eliminating the network and back-end storage system from the write-latency equation. It will be interesting to see how providers will integrate this capability into their storage solutions.

MRAM Memory Sticks Waiting on DDR5 and NVDIMM-P Standards

One factor holding back MRAM and other storage-class memories from appearing in the familiar DIMM format is the lack of critical standards. NVDIMM-P is the standard for placing non-volatile memory on DIMMs; DDR5 is the standard that will permit large-capacity DIMMs. Both were originally expected to be completed in 2018, but that did not happen, and no firm date for their completion was provided at the Summit.

Not all are waiting for the standards to be finalized. Intel is shipping its Optane DC Persistent Memory in DDR4-compatible DIMM format without waiting for the NVDIMM-P standard. The modules are available in capacities of 128, 256 and 512 GB, a foretaste of what NVDIMM-P will do for memory capacities. While it is good to see some pre-standard NVDIMM products being introduced, the NVDIMM-P and DDR5 standards will be key to the broad adoption of persistent memory, just as the CCITT Group 3 and IEEE 802.3 standards were to fax and networking.

NVDIMM-N Remains the Predominant Non-Volatile Memory Technology for 2019 and 2020

The predominant technology for providing non-volatile memory on the memory bus is based on the NVDIMM-N standard. These NVDIMMs pair DRAM with flash memory and a battery or capacitor. The DRAM handles I/O until a shutdown or power loss triggers a copy of the DRAM contents to the flash memory.

NVDIMM-N modules provide the performance of DRAM and the persistence of flash memory. This makes them excellent for use as a write-cache, as iXsystems and Western Digital do in their respective TrueNAS and IntelliFlash enterprise storage arrays.
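The NVDIMM-N save/restore cycle described above can be caricatured in a few lines: serve I/O from DRAM (a dict here), copy the contents to flash (a file here) when power is lost, and restore them at the next boot. All names below are illustrative, not any vendor's API:

```python
import json
import os
import tempfile

class NvdimmN:
    """Toy model of an NVDIMM-N: DRAM serves I/O; on power loss the
    capacitor-backed controller saves DRAM to flash for later restore."""
    def __init__(self, flash_path):
        self.flash_path = flash_path
        self.dram = {}
        if os.path.exists(flash_path):      # "boot": restore saved contents
            with open(flash_path) as f:
                self.dram = json.load(f)

    def write(self, key, value):
        self.dram[key] = value              # DRAM-speed write, no flash I/O

    def on_power_loss(self):
        with open(self.flash_path, "w") as f:
            json.dump(self.dram, f)         # one-shot DRAM -> flash copy

flash = os.path.join(tempfile.mkdtemp(), "backing_flash.json")
a = NvdimmN(flash)
a.write("lba0", "data")
a.on_power_loss()            # supercap triggers the save
b = NvdimmN(flash)           # "reboot": contents come back from flash
print(b.dram["lba0"])        # data
```

The write path never touches flash, which is why NVDIMM-N delivers DRAM performance while still surviving power loss.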

NVMe-oF Delivers in 2019 and 2020

If the DDR5 and NVDIMM-P standards are published by the end of 2019, we may see MRAM and other storage class memory technologies in enterprise storage systems by 2021. In the meantime, enterprise storage providers will focus on integrating NVMe and NVMe-oF into their products to provide advances in storage performance. Multiple vendors are already shipping NVMe-oF compliant products. These include E8 Storage, Pavilion Data Systems, Kaminario, and Pure Storage.

Learn More About Persistent Memory

DCIG focuses most of its efforts on enterprise technology that is currently available in the marketplace. Nevertheless, we believe that persistent memory will have significant implications for servers, storage and data center designs within the technology planning horizons of most enterprises. As such, it is important for anyone involved in enterprise information technology to understand those implications.

You can learn more about persistent memory from the people and organizations that are driving the industry forward. SNIA is making all the presentations from the Persistent Memory Summit available for viewing at https://www.snia.org/pm-summit.

DCIG will continue to cover developments in persistent memory, especially as it makes its way into enterprise technology products. If you haven’t already done so, please sign up for the weekly DCIG Newsletter so that we can keep you informed of these developments.


This is the second in a series of articles about Persistent Memory and its use in enterprise storage. The first article in the series is Caching vs Tiering with Storage Class Memory and NVMe – A Tale of Two Systems. The third article is Ways Persistent Memory is Showing Up in Enterprise Storage in 2019.

This article was updated on 4/1/2019 to add more detail about MRAM and NVDIMM-P, and on 4/5/2019 to add links to the other articles in the series.

DCIG Introduces Two New Offerings in 2019

DCIG often gets so busy covering all the new and emerging technologies in multiple markets that we can neglect to inform our current and prospective clients of new offerings that DCIG has brought to market. Today I address this oversight.

While many of you know DCIG for its Buyer’s Guides, blogs, and executive white papers, DCIG now offers the following two assets that companies can contract DCIG to create:

1.      DCIG Competitive Intelligence Reports. These reports start with a subset of the information we gather as part of creating the DCIG Buyer’s Guides. Each report compares features from two to five selected products and examines how those products deliver on the features. The purpose of these reports is not to declare which feature implementation is “best.” Rather, they examine how each product implements these select features and what the most appropriate use case is for those features.

2.      DCIG Content Bundle. In today’s world, people consume the same content in multiple ways. Some prefer to hear it via podcasts. Some prefer to watch it on video. Some want to digest it in bite size chunks in blog entries. Still others want the whole enchilada in the form of a white paper. To meet these various demands, DCIG delivers the same core set of content in all four of these formats as part of its newly created content bundle.

If any of these new offerings pique your interest, let us know! We would love to have the opportunity to explain how they work and provide you with a sample of these offerings. Simply click on this link to send us an email to inquire about these services.

Caching vs Tiering with Storage Class Memory and NVMe – A Tale of Two Systems

Dell EMC announced that it will soon add Optane-based storage to its PowerMAX arrays, and that PowerMAX will use Optane as a storage tier, not “just” cache. This statement implies using Optane as a storage tier is superior to using it as a cache. But is it?

PowerMAX will use Storage Class Memory as Tier in All-NVMe System

Some people criticized Dell EMC for taking an all-NVMe approach–and therefore eliminating hybrid (flash memory plus HDD) configurations. Yet the all-NVMe decision gave the engineers an opportunity to architect PowerMAX around the inherent parallelism of NVMe. Dell EMC’s design imperative for the PowerMAX is performance over efficiency. And it does perform:

  • 290 microsecond latency
  • 150 GB per second of throughput
  • 10 million IOPS

These results were achieved with standard flash memory NVMe SSDs. The numbers will get even better when Dell EMC adds Optane-based storage class memory (SCM) as a tier. Once SCM has been added to the array, Dell EMC’s fully automated storage tiering (FAST) technology will monitor array activity and automatically move the most active data to the SCM tier and less active data to the flash memory SSDs.
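Conceptually, activity-based tiering of the kind FAST performs boils down to counting accesses per extent and periodically promoting the hottest extents to the scarce SCM tier. The sketch below is a generic illustration of that idea, not Dell EMC's algorithm:

```python
from collections import Counter

def retier(access_counts: Counter, scm_slots: int):
    """Place the most-accessed extents on the SCM tier; everything
    else stays on the flash SSD tier. Returns (scm, flash) sets."""
    ranked = [extent for extent, _ in access_counts.most_common()]
    return set(ranked[:scm_slots]), set(ranked[scm_slots:])

# A scheduled pass tallies recent I/O, then moves data accordingly.
io_trace = ["e1", "e1", "e1", "e2", "e2", "e3", "e4"]
counts = Counter(io_trace)
scm, flash = retier(counts, scm_slots=2)
print(sorted(scm))   # ['e1', 'e2'] -- the hottest extents win the SCM slots
```

Note that promotion only happens when the pass runs; a burst of activity between passes sees no benefit, which is the trade-off against caching discussed later in this article series.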

The intelligence of the tiering algorithms will be key to delivering great results in production environments. Indeed, Dell EMC states that, “Built-in machine learning is the only cost-effective way to leverage SCM”.

HPE “Memory-Driven Flash” uses Storage Class Memory as Cache

HPE is one of many vendors taking the caching path to integrating SCM into their products. It recently began shipping Optane-based read caching via 750 GB NVMe SCM Module add-in cards. In testing, HPE 3PAR 20850 arrays equipped with this “HPE Memory-Driven Flash” delivered:

  • Sub-200 microseconds of latency for most IO
  • Nearly 100% of IO in under 300 microseconds
  • 75 GB per second of throughput
  • 4 million IOPS

These results were achieved with standard 12 Gb SAS SSDs providing the bulk of the storage capacity. HPE Memory-Driven Flash is currently shipping for HPE 3PAR Storage, with availability on HPE Nimble Storage later in 2019.

An advantage of the caching approach is that even a relatively small amount of SCM can enable a storage system to deliver SCM-level performance by dynamically caching hot data, even when most of the data resides on much slower and less expensive media. As with tiering, the intelligence of the algorithms is key to delivering great results in production environments.

The performance HPE is achieving with SCM is good news for other arrays based on caching-oriented storage operating systems. In particular, ZFS-based products such as those offered by Tegile, iXsystems and OpenDrives, should see substantial performance gains when they switch to using SCM for the L2ARC read cache.

What is Best – Tier or Cache?

I favor the caching approach. Caching is more dynamic than tiering, responding to workloads immediately rather than waiting for a tiering algorithm to move active data to the fastest tier on some scheduled basis. A tiering-based system may completely miss out on the opportunity to accelerate some workloads. I also favor caching because I believe it will bring the benefits of SCM within reach of more organizations.
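The immediacy argued for above can be seen in a minimal LRU read cache: every single access adjusts what occupies the fast media, with no scheduled relocation pass. This is a generic sketch, not any vendor's caching engine:

```python
from collections import OrderedDict

class LruReadCache:
    """Minimal LRU read cache: each hit or fill immediately updates
    which blocks occupy the fast tier -- no scheduled tiering pass."""
    def __init__(self, capacity, backing_store):
        self.capacity = capacity
        self.backing = backing_store
        self.cache = OrderedDict()

    def read(self, block):
        if block in self.cache:
            self.cache.move_to_end(block)    # hot data stays resident
            return self.cache[block]
        data = self.backing[block]           # miss: read from slow media
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict the coldest block
        return data

backing = {n: f"data{n}" for n in range(10)}
cache = LruReadCache(capacity=3, backing_store=backing)
for block in [0, 1, 2, 0, 3]:    # block 1 goes cold and is evicted
    cache.read(block)
print(list(cache.cache))         # [2, 0, 3]
```

A newly hot workload starts benefiting on its very next access, whereas a tiering pass would leave it on slow media until the next scheduled move.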

Whether using SCM as a capacity tier or as a cache, the intelligence of the algorithms that automate the placement of data is critical. Many storage vendors talk about using artificial intelligence and machine learning (AI/ML) in their storage systems. SCM provides a new, large, persistent, low-latency class of storage for AI/ML to work with in order to deliver more performance in less space and at a lower cost per unit of performance.

The right way to integrate NVMe and SCM into enterprise storage is to do so–as a tier, as a cache or as both tier and cache–and then use automated intelligent algorithms to make the most of the storage class memory that is available.

Prospective enterprise storage array purchasers should take a close look at how the systems use (or plan to use) storage class memory and how they use AI/ML to inform caching and/or storage tiering decisions to deliver cost-effective performance.


This is the first in a series of articles about Persistent Memory and its use in enterprise storage. The second article in the series is NVMe-oF Delivering While Persistent Memory Remains Mostly a Promise.

Revised on 4/5/2019 to add the link to the next article in the series.

Your Data Center is No Place for a Space Odyssey

The first movie I remember seeing in a theater was 2001: A Space Odyssey. If you saw it, I am guessing that you remember it, too. At the core of the story is HAL, a sophisticated computer that controls everything on a space ship en route to Jupiter. The movie is ultimately a story of artificial intelligence gone awry.

When the astronauts realize that HAL has become dangerous due to a malfunction, they decide they need to turn HAL off. I still recall the chill I experienced when one of the astronauts issues the command, “Open the pod bay doors please, HAL.” And HAL responds with, “I’m sorry, Dave. I’m afraid I can’t do that.”

Artificial Intelligence is Real Today, but not Perfect

Today, we are finally experiencing voice interaction with a computer that feels as sophisticated as what that movie depicted more than 50 years ago. But sometimes with unintended or unexpected consequences.

Artificial intelligence (AI) is great, except when it is not. My sister recently purchased a vehicle with collision avoidance technology built in. Surprisingly, it engaged the emergency stop procedure on a rural highway when no traffic was approaching. Fortunately, there was no vehicle following close behind or this safety feature might have actually caused an accident. (The dealer eventually accepted the return of the vehicle.)

Artificial Intelligence in Data Center Infrastructure Products

Artificial intelligence and machine learning technologies are being incorporated into data center infrastructure products. Some of these implementations are delivering measurable value to the customers who use these products. AI/ML enabled capabilities may include:

  • AI/ML enabled by default… Yay!
  • Cloud-based analytics…Yay!
  • Proactive fault remediation… Yay!
  • Recommendations… Yay!
  • Totally autonomous operations… I’m not sure about that.

Examples of Artificial Intelligence and Machine Learning Done Right

  • HPE InfoSight – all the “Yay!” items above. For example, HPE claims that with InfoSight, 86% of problems are predicted and automatically resolved before customers even realize there is an issue.
  • HPE Memory-Driven Flash is now shipping for HPE 3PAR arrays. It is implemented as a 750 GB NVMe Intel Optane SSD add-in card that provides an extremely low-latency read cache. The read cache uses sophisticated caching algorithms to complete nearly all I/O operations in under 300 microseconds. Yet, system administrators can enable this cache per volume, giving humans the opportunity to specify which workloads are of the highest value to the business.
  • Pivot3 Dynamic QoS provides policy-based quality of service management based on the business value of workloads. The system automatically applies a set of default policies, and dynamically enforces those policies. But administrators can change the policies and change which workloads are assigned to each policy on-the-fly.

When evaluating the AI/ML capabilities of data center infrastructure products, enterprises should look for products that enable AI/ML by default, yet which humans can override based on site-specific priorities, preferably on a granular basis.

After all, when a critical line of business application is not getting the priority it deserves, the last thing you want to hear from your infrastructure is, “I’m sorry, Dave. I’m afraid I can’t do that.”


Number of Appliances Dedicated to Deduplicating Backup Data Shrinks even as Data Universe Expands

One would think that with the continuing explosion in the amount of data being created every year, the number of appliances that can reduce the amount of data stored by deduplicating it would be increasing. That statement is both true and flawed. On one hand, the number of backup and storage appliances that can deduplicate data has never been higher and continues to increase. On the other hand, the number of vendors that create physical target-based appliances dedicated to the deduplication of backup data continues to shrink.

Data Universe Expands

In November 2018, IDC released a report in which it estimated that the amount of data created, captured, and replicated will increase five-fold, from the current 33 zettabytes (ZB) to about 175 ZB in 2025. Whether or not one agrees with that estimate, there is little doubt that there are more ways than ever in which data gets created. These include:

  • Endpoint devices such as PCs, tablets, and smart phones
  • Edge devices such as sensors that collect data
  • Video and audio recording devices
  • Traditional data centers
  • The creation of data through the backup, replication and copying of this created data
  • The creation of metadata that describes, categorizes, and analyzes this data

All these sources and means of creating data mean there is more data than ever under management. But as this occurs, the number of products originally developed to control this data growth – hardware appliances that specialize in deduplicating backup data after it is backed up, such as those from Dell EMC, ExaGrid, and HPE – has shrunk in recent years.

Here are the top five reasons for this trend.

1. Deduplication has Moved onto Storage Arrays.

Many storage arrays, both primary and secondary, give companies the option to deduplicate data. While these arrays may not achieve the same deduplication ratios as appliances purpose-built for the deduplication of backup data, their combination of lower costs and high levels of storage capacity offsets their deduplication software’s inability to fully optimize backup data.

2. Backup software offers deduplication capabilities.

Rather than waiting to deduplicate backup data on a hardware appliance, almost all enterprise backup software products can deduplicate data on either the client or the backup server before storing it. This eliminates the need for a storage device dedicated to deduplicating data.

3. Virtual appliances that perform deduplication are on the rise.

Some providers, such as Quest Software, have exited the physical deduplication backup target appliance market and re-emerged with virtual appliances that deduplicate data. These give companies new flexibility to use hardware from any provider they want and implement their software-defined data center strategy more aggressively.

4. Newly created data may not deduplicate well or at all.

Much of the new data that companies create may not deduplicate well, or at all. Audio and video files may not change and will only deduplicate if full backups are done – which may be rare. Encrypted data will not deduplicate at all. In these circumstances, deduplication appliances are rarely if ever needed.

5. Multiple backup copies of the data may not be needed.

Much of the data collected from edge and endpoint devices may only need a couple of copies, if that. Audio and video files may also fall into this same category of not needing more than a couple of retained copies. To get the full benefit of a target-based deduplication appliance, one needs to back up the same data multiple times – usually at least six times, if not more. This reduced need to back up and retain multiple copies of data diminishes the need for these appliances.

Remaining Deduplication Appliances More Finely Tuned for Enterprise Requirements

The reduction in the number of vendors shipping physical target-based deduplication backup appliances seems almost counter-intuitive in light of the ongoing explosion in data growth that we are witnessing. But when one considers the mix of data being created and its corresponding data protection and retention requirements, the decrease in the number of target-based deduplication appliances available is understandable.

The upside is that the vendors who do remain and the physical target-based deduplication appliances that they ship are more finely tuned for the needs of today’s enterprises. They are larger, better suited for recovery, have more cloud capabilities, and account for some of these other broader trends mentioned above. These factors and others will be covered in the forthcoming DCIG Buyer’s Guide on Enterprise Deduplication Backup Appliances.

Leading Hyperconverged Infrastructure Solutions Diverge Over QoS

Hyperconvergence is Reshaping the Enterprise Data Center

Virtualization largely shaped the enterprise data center landscape for the past ten years. Hyper-converged infrastructure (HCI) is beginning to have the same type of impact, re-shaping the enterprise data center to fully capitalize on the benefits that virtualizing the infrastructure affords them.

Hyperconverged Infrastructure Defined

DCIG defines a hyperconverged infrastructure (HCI) as a solution that pre-integrates virtualized compute, storage and data protection functions along with a hypervisor and scale-out cluster management software. HCI vendors may offer their solutions as turnkey appliances, installable software or as an instance running on public cloud infrastructure. The most common physical instantiation of—and unit of scaling for—hyperconverged infrastructure is a 1U or 2U rack-mountable appliance containing 1–4 cluster nodes.

HCI Adoption Exceeding Analyst Forecasts

Hyperconverged Infrastructure (HCI)–and the software-defined storage (SDS) technology that is a critical component of these solutions–is still in the early stages of adoption. Yet according to IDC data, spending on HCI already exceeds $5 billion annually and is growing at a rate that substantially outpaces many analyst forecasts.

[Graph: analyst forecasts versus actual hyperconverged sales growth]

HCI Requirements for Next-Generation Datacenter Adoption

The success of initial HCI deployments in reducing complexity, speeding time to deployment, and lowering costs compared to traditional architectures has opened the door to an expanded role in the enterprise data center. Indeed, HCI is rapidly becoming the core technology of the next-generation enterprise data center. In order to succeed as a core technology these HCI solutions must meet a new and demanding set of expectations. These expectations include:

  • Simplified management, including at scale
  • Workload consolidation, including mission-critical

The Role of Quality of Service in Simplifying Management and Consolidating Workloads

Three performance elements that are candidates for quality of service (QoS) management are latency, IOPS, and throughput. Some HCI solutions address all three elements, others manage just a single element.

HCI solutions also take varied approaches to managing QoS in terms of fixed assignments versus relative priority. The fixed assignment approach involves assigning minimum, maximum and/or target values per volume. The relative priority approach involves assigning each volume to a priority group–like Gold, Silver or Bronze.
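The fixed-assignment approach described above can be sketched as a per-volume admission counter: each volume is granted its maximum IOPS per interval, and I/O beyond that is throttled even if the cluster is idle. The class below is a generic illustration of that behavior, not any vendor's implementation:

```python
class VolumeQos:
    """Fixed-assignment QoS: a hard per-volume IOPS ceiling,
    enforced every interval regardless of actual contention."""
    def __init__(self, max_iops):
        self.max_iops = max_iops
        self.used = 0

    def new_interval(self):
        self.used = 0                 # refill at the start of each second

    def admit(self) -> bool:
        if self.used < self.max_iops:
            self.used += 1
            return True
        return False                  # throttled: I/O is queued or delayed

# A volume capped at 3 IOPS rejects the 4th and 5th I/O of the interval,
# even if no other workload is competing for resources.
gold = VolumeQos(max_iops=3)
results = [gold.admit() for _ in range(5)]
print(results)   # [True, True, True, False, False]
```

A relative-priority scheme would instead leave `admit()` unconditional until contention is detected, then drain queues in Gold/Silver/Bronze order; that difference is exactly what separates the products compared below.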

Superior QoS technology creates business value by driving down operating expenses (OPEX). It dramatically reduces the amount of time IT staff must spend troubleshooting service level agreement (SLA) related problems.

Superior QoS also creates business value by driving down capital expenses (CAPEX). It enables more workloads to be confidently consolidated onto less hardware. The more intelligent it is, the less over-provisioning (and over-purchasing) of hardware will be required.

Finally, QoS can be applied to workload performance alone or to performance and data protection to meet service level agreements in both domains.

How Some Popular Hyperconverged Infrastructure Solutions Diverge Over QoS

DCIG is in the process of updating its research on hyperconverged infrastructure solutions. In the process we have observed that these solutions take very divergent approaches to quality of service.

Cisco HyperFlex offers QoS on the NIC, which is useful for converged networking, but does not offer storage QoS that addresses application priority within the solution itself.

Dell EMC VxRail QoS is very basic. Administrators can assign fixed IOPS limits per volume. Workloads using those volumes get throttled even when there is no resource contention, yet still compete for IOPS with more important workloads. This approach to QoS does protect a cluster from a rogue application consuming too many resources, but is probably a better fit for managed service providers than for most enterprises.
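The behavior described above is easy to see in a toy model: a fixed cap admits at most the configured IOPS whether or not anything else is competing for the array. The function and numbers below are illustrative, not VxRail internals:

```python
def allowed_iops(demand_iops: int, cap_iops: int) -> int:
    """A fixed per-volume cap admits at most cap_iops,
    even when the rest of the cluster is idle."""
    return min(demand_iops, cap_iops)

# A workload bursting to 15,000 IOPS against a 10,000 IOPS cap is
# throttled to 10,000 regardless of contention elsewhere.
print(allowed_iops(15_000, 10_000))  # 10000
print(allowed_iops(4_000, 10_000))   # 4000
```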

Nutanix “Autonomic QoS” automatically prioritizes user applications over back end operations whenever contention occurs. Nutanix AI/ML technology understands common workloads and prioritizes different kinds of IO from a given application accordingly. This approach offers great appeal because it is fully automatic. However, it is global and not user configurable.

Pivot3 offers intelligent policy-based QoS. Administrators assign one of five QoS policies to each volume when it is created. In addition to establishing priority, each policy assigns targets for latency, IOPS and throughput. Pivot3’s Intelligence Engine then prioritizes workloads in real time based on those policies. The administrator assigning the QoS policy to the volume must know the relative importance of the associated workload; but once the policy has been assigned, performance management is “set it and forget it”. Pivot3 QoS offers other advanced capabilities including applying QoS to data protection and the ability to change QoS settings on-the-fly or on a scheduled basis.

QoS Ideal = Automatic, Intelligent and Configurable

The ideal quality of service technology would be automatic and intelligent, yet configurable. Though none of these hyperconverged solutions may fully realize that ideal, Nutanix and Pivot3 both bring significant elements of this ideal to market as part of their hyperconverged infrastructure solutions.

Enterprises considering HCI as a replacement for existing core data center infrastructure should give special attention to how the solution implements quality of service technology. Superior QoS technology will reduce OPEX by simplifying management and reduce CAPEX by consolidating many workloads onto the solution.

The Early Implications of NVMe/TCP on Ethernet Network Designs

The ratification in November 2018 of the NVMe/TCP standard officially opened the doors for NVMe/TCP to begin to find its way into corporate IT environments. Earlier this week I had the opportunity to listen in on a webinar that SNIA hosted which provided an update on NVMe/TCP’s latest developments and its implications for enterprise IT. Here are four key takeaways from that presentation and how these changes will impact corporate data center Ethernet network designs.

First, NVMe/TCP will accelerate the deployment of NVMe in enterprises.

NVMe is already available in networked storage environments using competing protocols such as RDMA, which ships as RoCE (RDMA over Converged Ethernet). The challenge is that almost no one (well, very few anyway) uses RDMA in any meaningful way in their environment, so running NVMe over RoCE never gained, and will likely never gain, much momentum.

The availability of NVMe over TCP changes that. Companies already understand TCP, deploy it everywhere, and know how to scale and run it over their existing Ethernet networks. NVMe/TCP will build on this legacy infrastructure and knowledge.

Second, any latency that NVMe/TCP introduces still pales in comparison to existing storage networking protocols.

Running NVMe over TCP does introduce latency versus using RoCE. However, the latency that TCP adds is nominal and will likely be measured in microseconds in most circumstances. Most applications will not even detect this level of latency given the substantial jump in performance that running NVMe natively over TCP will provide versus existing storage protocols such as iSCSI and FC.

Third, the introduction of NVMe/TCP will require companies to implement Ethernet network designs that minimize latency.

Ethernet networks may implement buffering in Ethernet switches to handle periods of peak workloads. Companies will need to modify that network design technique when deploying NVMe/TCP as buffering introduces latency into the network and NVMe is highly latency sensitive. Companies will need to more carefully balance how much buffering they introduce on Ethernet switches.
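As a rough sketch of why buffering matters: a full switch buffer of B bytes draining onto a link of R bits per second can add up to B × 8 / R seconds of queuing delay. The buffer size and link speed below are illustrative:

```python
def max_queuing_delay_us(buffer_bytes: int, link_bps: float) -> float:
    """Worst-case time, in microseconds, for a full switch buffer
    to drain onto the wire at the link's line rate."""
    return buffer_bytes * 8 / link_bps * 1_000_000

# A hypothetical 12 MiB shared buffer on a 10 GbE port can hold
# roughly 10 ms of queued data when full -- several orders of
# magnitude above the microsecond latencies NVMe devices deliver.
print(round(max_queuing_delay_us(12 * 1024**2, 10e9), 1))  # 10066.3
```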

Fourth, get familiar with the term “incast collapse” on Ethernet networks and how to mitigate it.

NVMe can support up to 64,000 queues, and every queue that NVMe opens initiates a TCP session. Here is where challenges may eventually surface. Opening multiple queues simultaneously initiates multiple TCP sessions at the same time, and all of these sessions may arrive at a common congestion point in the Ethernet network at the same moment. TCP congestion control then causes all of the sessions to back off simultaneously, a phenomenon known as incast collapse, which introduces latency into the network.
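The synchronized-backoff dynamic can be sketched with a toy model in which many flows share one switch buffer; the flow counts, window sizes, and buffer size are invented for illustration:

```python
def incast_step(windows, buffer_pkts):
    """One round of a toy incast model: if the combined in-flight
    packets overflow the shared switch buffer, every flow backs off
    (halves its congestion window) at the same time; otherwise each
    flow grows additively."""
    if sum(windows) > buffer_pkts:
        return [max(1, w // 2) for w in windows]  # synchronized backoff
    return [w + 1 for w in windows]               # additive increase

# 64 NVMe queues, each backed by a TCP session with 8 packets in
# flight, converge on a 256-packet switch buffer: all 64 flows back
# off at once, halving aggregate throughput in a single round.
windows = incast_step([8] * 64, buffer_pkts=256)
print(windows[0], sum(windows))  # 4 256
```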

Source: University of California-Berkeley

Historically this has been a very specialized and rare occurrence in networking due to the low probability that such an event would ever take place. But the introduction of NVMe/TCP into the network makes the possibility of such an event much more likely, especially as more companies deploy NVMe/TCP into their environments.

The Ratification of the NVMe/TCP Standard

Ratification of the NVMe/TCP standard potentially makes every enterprise data center a candidate for storage systems that can deliver dramatically better performance to their workloads. Until the performance demands of every workload in a data center are met instantaneously, some workload requests will queue up behind a bottleneck in the data center infrastructure.

Just as introducing flash memory into enterprise storage systems revealed bottlenecks in storage operating system software and storage protocols, NVMe/TCP-based storage systems will reveal bottlenecks in data center networks. Enterprises seeking to accelerate their applications by implementing NVMe/TCP-based storage systems may discover bottlenecks in their networks that must be addressed in order to see the full benefits that NVMe/TCP-based storage can deliver.

To view this presentation in its entirety, follow this link.

Three Hallmarks of an Effective Competitive Intelligence System

Across more than twenty years as an IT Director, I had many salespeople incorrectly tell me that their product was the only one that offered a particular benefit. Did their false claims harm their credibility? Absolutely. Were they trying to deceive me? Possibly. But it is far more likely that they sincerely believed their claims.

What they lacked was not truthfulness but accuracy: they did not have accurate, up-to-date information about the current capabilities of competing products in the marketplace. Their competitive intelligence system had failed them.

When DCIG was recruiting me to become an analyst, I asked DCIG’s founder, Jerome Wendt, what were the most surprising things he had learned since founding DCIG. One of the three things he mentioned in his response was the degree to which vendors lack knowledge of the product features and capabilities of their key competitors.

Reasons Vendors Lack Good Competitive Intelligence

There are many reasons why vendors lack good competitive intelligence. These include:

  • They are focused on delivering and enhancing their own product to meet the perceived needs of current and prospective customers.
  • Collecting and maintaining accurate data about even key competitors’ products can be time consuming and challenging.
  • Staff transitions may result in a loss of data continuity.

Benefits of an Effective Competitive Intelligence System

An effective competitive intelligence system increases sales by enabling partners and sales personnel to quickly grasp key product differentiators and how those translate into business benefits. Thus, it enhances the onboarding of new personnel and their opportunity for success.

Three Hallmarks of an Effective Competitive Intelligence System

The hallmarks of an effective competitive intelligence system center around three themes: data, insight and communication.

Regarding Data, the system must:

  • Capture current, accurate data about key competitor products
  • Provide data continuity across staff transitions
  • Provide analyses that surface commonalities and differences between products


Regarding Insight, the system must:

  • Clearly identify product differentiators
  • Clearly articulate the business benefits of those differentiators


Regarding Communication, the system must:

  • Provide concise content that enables partners and sales personnel to quickly grasp key product differentiators and how those translate into business benefits for CxOs and line of business executives
  • Bridge the gap between sales and marketing with messages that are tailored to be consistent with product branding
  • Provide the content at the right time and in the right format

Whatever combination of software, services and competitive intelligence personnel a company employs, an effective competitive intelligence system is an important asset for any company seeking to thrive in a competitive marketplace.

DCIG’s Competitive Intelligence Track Record

DCIG Buyer’s Guides

Since 2010, DCIG Buyer’s Guides have provided hundreds of thousands of readers with an independent look at the many products in each market DCIG covers. Each Buyer’s Guide gives decision makers insight into the features that merit particular attention, what is available now, and key directions in the marketplace. DCIG produces Buyer’s Guides based on our larger bodies of research in data protection, enterprise storage and converged infrastructure.

DCIG Pocket Analyst Reports

DCIG leverages much of the Buyer’s Guide research methodology–and the competitive intelligence platform that supports that research–to create focused reports that highlight the differentiators between two products that are frequently making it onto the same short lists.

Our Pocket Analyst Reports are published and made available for sale on a third party website to substantiate the independence of each report. Vendors can license these reports for use in lead generation, internal sales training and for use with prospective clients. 

DCIG Competitive Intelligence Reports

DCIG also uses its Competitive Intelligence Platform to produce reports for internal use by our clients. These concise reports enable partners and sales personnel to quickly grasp key product differentiators and how those translate into business benefits that make sense to CxOs and line of business executives. Because these reports are for internal use, the client can have substantial input into the messaging.

DCIG Battle Cards

Each DCIG Battle Card is a succinct 2-page document that compares the client’s product or product family to one other product or product family. The client and DCIG collaborate to identify the key product features to compare, the key strengths that the client’s product offers over the competing product, and the appropriate messaging to include on the battle card. Content may be contributed by the client for inclusion on the battle card. The battle card is only for the internal use of the client and its partners and may not be distributed.

DCIG Competitive Intelligence Platform

The DCIG Competitive Intelligence (CI) Platform is a multi-tenant, platform-as-a-service (PaaS) offering backed by support from DCIG analysts. The DCIG Competitive Intelligence Platform offers the flexibility to centrally store data and compare features on competitive products. Licensees receive the ability to centralize competitive intelligence data in the cloud with the data made available internally to their employees and partners via reports prepared by DCIG analysts.

DCIG Competitive Intelligence platform and associated analyst services strengthen the competitive intelligence capabilities of our clients. Sometimes in unexpected ways…

  • Major opportunity against a competitor never faced before
  • Strategic supplier negotiation and positioning of competitor


In each case, DCIG analysis identified differentiators and 3rd party insights that helped close the deal.

HCI Comparison Report Reveals Key Differentiators Between Dell EMC VxRail and Nutanix NX

Many organizations view hyper-converged infrastructure (HCI) as the data center architecture of the future. Dell EMC VxRail and Nutanix NX appliances are two leading options for creating the enterprise hybrid cloud. Visibility into their respective data protection ecosystems, enterprise application certifications, solution integration, support for multiple hypervisors, scalability and maturity should help organizations choose the most appropriate solution for them.

HCI Appliances Deliver Radical Simplicity

Hyper-converged infrastructure appliances radically simplify the data center architecture. These pre-integrated appliances accelerate and simplify infrastructure deployment and management. They combine and virtualize compute, memory, storage and networking functions from a single vendor in a scale-out cluster. Thus, the stakes are high for vendors such as Dell EMC and Nutanix as they compete to own this critical piece of data center real estate.

In the last several years, HCI has also emerged as a key enabler for cloud adoption. These solutions provide connectivity to public and private clouds, and offer their own cloud-like properties. Ease of scaling, simplicity of management, plus non-disruptive hardware upgrades and data migrations are among the features that enterprises love about these solutions.

HCI Appliances Are Not All Created Equal

Many enterprises are considering HCI solutions from providers Dell EMC and Nutanix. A cursory examination of these two vendors and their solutions quickly reveals similarities between them. For example, both companies control the entire hardware and software stacks of their HCI appliances. Also, both providers pretest firmware and software updates and automate cluster-wide roll-outs.

Nevertheless, important differences remain between the products. Due to the high level of interest in these products, DCIG published an initial comparison in November 2017. Both providers recently enhanced their offerings. Therefore, DCIG refreshed its research and has released an updated head-to-head comparison of the Dell EMC VxRail and Nutanix NX appliances.

Updated DCIG Pocket Analyst Report Reveals Key HCI Differentiators

In this updated report, DCIG identifies six ways the HCI solutions from these two providers currently differentiate themselves from one another. This succinct, 4-page report includes a detailed feature matrix as well as insight into key differentiators between these two HCI solutions such as:

  • Breadth of ecosystem
  • Enterprise applications certified
  • Multi-hypervisor flexibility
  • Scalability
  • Solution integration
  • Vendor maturity

DCIG is pleased to make this updated DCIG Pocket Analyst Report available for purchase for $99.95 via the TechTrove marketplace. The report is temporarily also available free of charge with registration from the Unitrends website.

VMware vSphere and Nutanix AHV Hypervisors: An Updated Head-to-Head Comparison

Many organizations view hyper-converged infrastructure appliances (HCIAs) as foundational for the cloud data center architecture of the future. However, as part of an HCIA solution, one must also select a hypervisor to run on this platform. The VMware vSphere and Nutanix AHV hypervisors are two capable choices but key differences exist between them.

In the last several years, HCIAs have emerged as a key enabler for cloud adoption. Aside from the connectivity to public and private clouds that these solutions often provide, they offer their own cloud-like properties. Ease of scaling, simplicity of management, and non-disruptive hardware upgrades and data migrations highlight the list of features that enterprises are coming to know and love about these solutions.

But as enterprises adopt HCIA solutions in general as well as HCIA solutions from providers like Nutanix, they must still evaluate key features in these solutions. One variable that enterprises should pay specific attention to is the hypervisors available to run on these HCIA solutions.

Unlike some other HCIA solutions, Nutanix gives organizations the flexibility to choose which hypervisor they want to run on their HCIA platform. They can choose to run the widely adopted VMware vSphere, or they can run Nutanix’s own Acropolis hypervisor (AHV).

What is not always so clear is which one they should host on the Nutanix platform. Each hypervisor has its own set of benefits and drawbacks. To help organizations make a more informed choice as to which hypervisor is the best one for their environment, DCIG is pleased to make available its updated DCIG Pocket Analyst Report, which does a head-to-head comparison between the VMware vSphere and Nutanix AHV hypervisors.

This succinct, 4-page report includes a detailed product matrix as well as insight into seven key differentiators between these two hypervisors and which one is best positioned to deliver on key cloud and data center considerations such as:

  • Data protection ecosystem
  • Support for Guest OSes
  • Support for VDI platforms
  • Certified enterprise applications
  • Fit with corporate direction
  • More favorable licensing model
  • Simpler management

This DCIG Pocket Analyst Report is available for purchase for $99.95 via the TechTrove marketplace. The report is temporarily also available free of charge with registration from the Unitrends website.

Dell EMC VxRail vs Nutanix NX: Six Key HCIA Differentiators

Many organizations view hyper-converged infrastructure appliances (HCIAs) as the data center architecture of the future. Dell EMC VxRail and Nutanix NX appliances are two leading options for creating the enterprise hybrid cloud. Visibility into their respective data protection ecosystems, enterprise application certifications, solution integration, support for multiple hypervisors, scalability and maturity should help organizations choose the most appropriate solution for them.

Hyper-converged infrastructure appliances (HCIA) radically simplify the next generation of data center architectures. Combining and virtualizing compute, memory, storage, networking, and data protection functions from a single vendor in a scale-out cluster, these pre-integrated appliances accelerate and simplify infrastructure deployment and management. As such, the stakes are high for vendors such as Dell EMC and Nutanix that are competing to own this critical piece of data center infrastructure real estate.

In the last several years, HCIAs have emerged as a key enabler for cloud adoption. These solutions provide connectivity to public and private clouds, and offer their own cloud-like properties. Ease of scaling, simplicity of management, and non-disruptive hardware upgrades and data migrations highlight the list of features that enterprises are coming to know and love about these solutions.

But as enterprises consider HCIA solutions from providers such as Dell EMC and Nutanix, they must still evaluate key features available on these solutions as well as the providers themselves. A cursory examination of these two vendors and their respective solutions quickly reveals similarities between them. For example, both companies control the entire hardware and software stacks of their respective HCIA solutions. Both pre-test firmware and software updates holistically and automate cluster-wide roll-outs.

Despite these similarities, differences between them remain. To help enterprises select the product that best fits their needs, DCIG published its first comparison of these products in November 2017. There is a high level of interest in these products, and both providers recently enhanced their offerings. Therefore, DCIG refreshed its research and has released an updated head-to-head comparison of the Dell EMC VxRail and Nutanix NX appliances.

In this updated report, DCIG identifies six ways the HCIA solutions from these two providers currently differentiate themselves from one another. This succinct, 4-page report includes a detailed product matrix as well as insight into key differentiators between these two HCIA solutions such as:

  • Breadth of ecosystem
  • Enterprise applications certified
  • Multi-hypervisor flexibility
  • Scalability
  • Solution integration
  • Vendor maturity

DCIG is pleased to make this updated DCIG Pocket Analyst Report available for purchase for $99.95 via the TechTrove marketplace.

Purpose-built Backup Cloud Service Providers: A Practical Starting Point for Cloud Backup and DR

The shift is on toward using cloud service providers for an increasing number of production IT functions with backup and DR often at the top of the list of the tasks that companies first want to deploy in the cloud. But as IT staff seeks to “Check the box” that they can comply with corporate directives to have a cloud solution in place for backup and DR, they also need to simultaneously check the “Simplicity,” “Cost-savings,” and “It Works” boxes.

Benefits of Using a Purpose-built Backup Cloud Service Provider

Cloud service providers purpose-built for backup and DR put companies in the best position to check all those boxes for this initial cloud use case. These providers solve their clients’ immediate challenges of easily and cost-effectively moving backup data off-site and then retaining it long term.

Equally important, addressing the off-site backup challenge in this way positions companies to do data recovery with the purpose-built cloud service provider, a general purpose cloud service provider, or on-premises. It also sets the stage for companies to regularly test their disaster recovery capabilities so they can perform them when necessary.

Choosing a Purpose-built Backup Cloud Service Provider

To choose the right cloud service provider for corporate backup and DR requirements, companies want to be cost conscious. But they also want to experience success and not put their corporate data or their broader cloud strategy at risk. A purpose-built cloud service provider such as Unitrends and its Forever Cloud solution frees companies to aggressively and confidently move ahead with a cloud deployment for their backup and DR needs.


Join Our Webinar for a Deeper Look at the Benefits of Using a Purpose-built Backup Cloud Service Provider

Join me next Wednesday, October 17, 2018, at 2 pm EST, for a webinar where I take a deeper look at both purpose-built and general purpose cloud service providers. In this webinar, I examine why service providers purpose-built for cloud backup and DR can save companies both money and time while dramatically improving the odds they can succeed in their cloud backup and DR initiatives. You can register by following this link.


Storage Analytics and Latency Matters

Some pretty amazing storage performance numbers are being bandied about these days. Generally speaking, these heretofore unheard of claims of millions of IOPS and latencies measured in microseconds include references to NVMe and perhaps storage class memories. What ultimately matters to a business is the performance of its applications, not just storage arrays. When an application is performing poorly, identifying the root cause can be a difficult and time-consuming challenge. This is particularly true in virtualized infrastructures. But meaningful help is now available to address this challenge through advances in storage analytics.

Storage Analytics Delivers Quantifiable Value

In a previous blog article about the benefits of Predictive Analytics in Enterprise Storage, I mentioned HPE’s InfoSight predictive analytics and the VMVision cross-stack analytics tool they released in mid-2015. HPE claims its Nimble Storage array customers are seeing the following benefits from InfoSight:

  • 99.9999% of measured availability across its installed base
  • 86% of problems are predicted and automatically resolved before customers even realize there is an issue
  • 85% less time spent managing and resolving storage-related problems
  • 79% savings in operational expense (OpEx)
  • 54% of issues pinpointed are not storage, identified through InfoSight VMVision cross-stack analytics
  • 42 minutes: the average level three engineer time required to resolve an issue
  • 100% of issues go directly to level three support engineers, no time wasted working through level one and level two engineers


Pure Storage also offers predictive analytics, called Pure1 Meta. On September 20, 2018, Pure Storage released an extension of the Pure1 Meta platform called VM Analytics. Even in this first release, VM Analytics is clearly going to simplify and accelerate the process of resolving performance problems for Pure Storage FlashArray customers.

Application Latency is a Systemic Issue

The online demonstration of VM Analytics quickly impressed me with the fact that application latency is a systemic issue, not just a storage performance issue. The partial screen shot from the Pure1 VM Analytics tool included below shows a virtual machine delivering an average latency of 7.4 milliseconds. This view into performance provided by VM Analytics enables IT staff to quickly zero in on the VM itself as the place to focus in resolving the performance issue.

This view also shows that the datastore is responsible for less than 1 millisecond of that 7.4 milliseconds of latency. My point is that application latency depends on factors beyond the storage system. It must be addressed as a systemic issue.
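Using the numbers from this example, a simple breakdown makes the point concrete: the helper function below is purely illustrative, and the 0.9 ms datastore figure is an assumed value consistent with the "less than 1 millisecond" in the screen shot.

```python
def latency_breakdown(total_ms: float, components: dict) -> dict:
    """Attribute end-to-end latency to known components; whatever is
    left over lives outside the storage system (host, hypervisor,
    network)."""
    shares = {k: round(v / total_ms * 100, 1) for k, v in components.items()}
    remainder = total_ms - sum(components.values())
    shares["elsewhere"] = round(remainder / total_ms * 100, 1)
    return shares

# The VM in the example: 7.4 ms end-to-end, with an assumed 0.9 ms
# coming from the datastore. Nearly 88% of the latency is systemic.
print(latency_breakdown(7.4, {"datastore": 0.9}))
```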

Storage Analytics Simplify the Data Center Balancing Act

The key performance resources in a data center include CPU cycles, DRAM, storage systems and the network. Unless a system is dramatically over-provisioned, one of these resources will always constrain the performance of applications. Storage has historically been the limiting factor in application performance but the flash-enabled transformation of the data center has changed that dynamic.
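That balancing act reduces to a simple idea: whichever resource is busiest is the likely constraint on application performance. The sketch below uses invented utilization figures to illustrate the point.

```python
def constraining_resource(utilization: dict) -> str:
    """The busiest resource is the most likely bottleneck."""
    return max(utilization, key=utilization.get)

# Invented utilization percentages for a flash-upgraded data center:
# storage is no longer the constraint; the network now is.
print(constraining_resource(
    {"cpu": 62, "dram": 48, "storage": 35, "network": 81}))  # network
```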

Tools like VMVision and VM Analytics create value by giving data center administrators new levels of visibility into infrastructure performance. Therefore, technology purchasers should carefully evaluate these storage analytics tools as part of the purchase process. IT staff should use these tools to balance the key performance resources in the data center and deliver the best possible application performance to the business.

DCIG’s VMworld 2018 Best of Show

VMworld provides insight into some of the biggest tech trends occurring in the enterprise data center space and, once again, this year did not disappoint. But amid the literally dozens of vendors showing off their wares on the show floor, here are the four stories or products that caught my attention and earned my “VMworld 2018 Best of Show” recognition at this year’s event.

VMworld 2018 Best of Show #1: QoreStor Software-defined Secondary Storage

Software defined dedupe in the form of QoreStor has arrived. A few years ago Dell Technologies sold off its Dell Software division that included an assortment (actually a lot) of software products that emerged as Quest Software. Since then, Quest Software has done nothing but innovate like bats out of hell. One of the products coming out of this lab of mad scientists is QoreStor, a stand-alone software-defined secondary storage solution. QoreStor installs on any server hardware platform and works with any backup software. QoreStor provides a free download that will deduplicate up to 1TB of data at no charge. While at least one other vendor (Dell Technologies) offers a deduplication product that is available as a virtual appliance, only Quest QoreStor gives organizations the flexibility to install it on any hardware platform.

VMworld 2018 Best of Show #2: LoginVSI Load Testing

Stop guessing about the impact of VDI growth with LoginVSI. Right-sizing the hardware for a new VDI deployment from any VDI provider (Citrix, Microsoft, VMware, etc.) is hard enough. But keeping the hardware appropriately sized as more VDIs are added or as patches and upgrades are made to existing VDI instances can be dicey at best and downright impossible at worst.

LoginVSI helps take the guesswork out of any organization’s future VDI growth by:

  • examining the hardware configuration underlying the current VDI environment,
  • comparing that to the changes proposed for the environment, and then
  • making recommendations as to what changes, if any, need to be made to the environment to support
    • more VDI instances or
    • changes to existing VDI instances.

VMworld 2018 Best of Show #3: Runecast vSphere Environment Analyzer

Plug vSphere’s holes and keep it stable with Runecast. Runecast was formed by a group of former IBM engineers who over the years had become experts at two things while working at IBM: (1) Installing and configuring VMware vSphere; and (2) Pulling out their hair trying to keep each installation of VMware vSphere stable and up-to-date. To rectify this second issue, they left IBM and formed Runecast.

Runecast provides organizations with the software and tools they need, with minimal to no manual intervention, to proactively:

  • identify needed vSphere patches and fixes,
  • identify security holes, and even
  • recommend best practices for tuning your vSphere deployments.

VMworld 2018 Best of Show #4: VxRail Hyper-converged Infrastructure

Kudos to Dell Technologies VxRail solution for saving every man, woman, and child in the United States $2/person in 2018. I was part of a breakfast conversation on Wednesday morning with one of the Dell Technologies VxRail product managers. He anecdotally shared with the analysts present that Dell Technologies recently completed two VxRail hyper-converged infrastructure deployments in two US government agencies. Each deployment is saving each agency $350 million annually. Collectively that amounts to $700 million or $2/person for every person residing in the US. Thank you, Dell.

Six Key Differentiators between HPE 3PAR StoreServ and NetApp AFF A-Series All-flash Arrays

Both HPE and NetApp have multiple enterprise storage product lines. Each company also has a flagship product. For HPE it is the 3PAR StoreServ line. For NetApp it is the AFF (all flash FAS) A-Series. DCIG’s latest Pocket Analyst Report examines these flagship all-flash arrays. The report identifies many similarities between the products, including the ability to deliver low latency storage with high levels of availability, and a relatively full set of data management features.

DCIG’s Pocket Analyst Report also identifies six significant differences between the products. These differences include how each product provides deduplication and other data services, hybrid cloud integration, host-to-storage connectivity, scalability, and simplified management through predictive analytics and bundled or all-inclusive software licensing.

DCIG recently updated its research on the dynamic and growing all-flash array marketplace. In so doing, DCIG identified many similarities between the HPE 3PAR StoreServ and NetApp AFF A-Series products including:

  • Unified SAN and NAS protocol support
  • Extensive support for VMware APIs including VMware Virtual Volumes (VVols)
  • Integration with popular virtualization management consoles
  • Rich data replication and data protection offerings

DCIG also identified significant differences between the HPE and NetApp products including:

  • Hardware-accelerated Inline Data Services
  • Predictive analytics
  • Hybrid Cloud Support
  • Host-to-Storage Connectivity
  • Scalability
  • Licensing simplicity


DCIG’s 4-page Pocket Analyst Report on the Six Key Differentiators between HPE 3PAR StoreServ and NetApp AFF A-Series All-flash Arrays analyzes and compares the flagship all-flash arrays from HPE and NetApp. To see which product has the edge in each of the above categories and why, you can purchase the report on DCIG’s partner site: TechTrove. You may also register on the TechTrove website to be notified should this report become available for no charge at some future time.

Data Center Challenges and Technology Advances Revealed at Flash Memory Summit 2018

If you want to get waist-deep in the technologies that will impact the data centers of tomorrow, the Flash Memory Summit 2018 (FMS) held this week in Santa Clara is the place to do it. This is where the flash industry gets its geek on, and everyone on the exhibit floor speaks bits and bytes. There is no better place to learn about advances in flash memory that are sure to show up in products in the very near future and drive further advances in data center infrastructure.

Key themes at the conference include:

  • Processing and storing ever-growing amounts of data is becoming more and more challenging. Faster connections and higher-capacity drives are coming but are not the whole answer. We need to completely rethink data center architecture to meet these challenges.
  • Artificial intelligence and machine learning are expanding beyond their traditional high-performance computing research environments and into the enterprise.
  • Processing must be moved closer to—and perhaps even into—storage.

Multiple approaches to addressing these challenges were championed at the conference that range from composable infrastructure to computational storage. Some of these solutions will complement one another. Others will compete with one another for mindshare.

NVMe and NVMe-oF Get Real

For the near term, NVMe and NVMe over Fabrics (NVMe-oF) are clear mindshare winners. NVMe is rapidly becoming the primary protocol for connecting controllers to storage devices. A clear majority of product announcements involved NVMe.

WD Brings HDDs into the NVMe World

Western Digital announced the OpenFlex™ architecture and storage products. OpenFlex speaks NVMe-oF across an Ethernet network. In concert with OpenFlex, WD announced Kingfish™, an open API for orchestrating and managing OpenFlex™ infrastructures.

Western Digital is the “anchor tenant” for OpenFlex with a total of seventeen (17) data center hardware and software vendors listed as launch partners in the press release. Notably, WD’s NVMe-oF attached 1U storage device provides up to 168TB of HDD capacity. That’s right – the OpenFlex D3000 is filled with hard disk drives.

While NVMe’s primary use case is to connect to flash memory and emerging ultra-fast memory, companies still want their HDDs. With Western Digital, organizations can have their NVMe and still get the lower-cost HDDs they want.
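
For context on the 168TB figure: the press release did not spell out the drive configuration, so the drive capacities below are assumptions based on enterprise HDD sizes common in 2018.

```python
# Rough drive-count math for packing 168TB of HDD into a 1U enclosure.
# Drive sizes are illustrative assumptions, not WD's stated configuration.
total_tb = 168
for drive_tb in (12, 14):  # typical enterprise HDD capacities in 2018
    drives = total_tb // drive_tb
    print(f"{drives} x {drive_tb}TB drives = {drives * drive_tb}TB raw")
```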

Gen-Z Composable Infrastructure Standard Gains Momentum

Gen-Z is an emerging memory-centric architecture designed for nanosecond latencies. Since last year’s FMS, Gen-Z has made significant progress toward this objective. Consider:

  • The consortium publicly released the Gen-Z Core Specification 1.0 on February 13, 2018. Agreement upon a set of 1.0 standards is a critical milestone in the adoption of any new standard. The fact that the consortium’s 54 members agreed to it suggests broad industry adoption.
  • Intel’s adoption of the SFF-TA-1002 “Gen-Z” universal connector for its “Ruler” SSDs reflects increased adoption of the Gen-Z standards. What makes this announcement notable is that Intel is not currently a member of the Gen-Z consortium, which indicates that Gen-Z standards are gaining momentum even outside the consortium.
  • The Gen-Z booth included a working Gen-Z connection between a server and a removable DRAM module in another enclosure. This is the first example of a processor being able to use DRAM that is not local to the processor but is instead coming out of a composable pool. This is a concept similar to how companies access shared storage in today’s NAS and SAN environments.

Other Notable FMS Announcements

Many other innovative solutions to data center challenges were announced at FMS 2018, including:

  • Solarflare NVMe over TCP enables rapid low-latency data movement over standard Ethernet networks.
  • ScaleFlux computational storage avoids the need to move the data by integrating FPGA-based computation into storage devices.
  • Intel announced its 660p Series of SSDs, which employ quad-level cell (QLC) technology. QLC stores more data in less space and at a lower cost.
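
The capacity gain behind QLC comes straight from cell encoding: each extra bit per cell doubles the number of charge levels the flash must distinguish and raises capacity per cell accordingly, at the cost of endurance and write speed. A minimal illustration:

```python
# Bits per cell vs. charge levels and relative capacity for NAND flash.
# SLC = single-level cell, MLC = multi-, TLC = triple-, QLC = quad-level.
nand_types = [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)]
for name, bits in nand_types:
    levels = 2 ** bits  # distinct charge levels each cell must hold
    print(f"{name}: {bits} bit(s)/cell, {levels} levels, {bits}x SLC capacity")
```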


Based on the impressive progress we observed at Flash Memory Summit 2018, we can reaffirm the recommendations we made coming out of last year’s summit:

  • Enterprise technologists should plan technology refreshes through 2020 around NVMe and NVMe-oF. Data center architects and application owners should seek 10:1 improvements in performance, and a similar jump in data center efficiency.
  • Beyond 2020, enterprise technologists should plan their technology refreshes around a composable, data-centric architecture. Data center architects should track the development of the Gen-Z ecosystem as a possible foundation for their next-generation data centers.

Differentiating between High-end HCI and Standard HCI Architectures

Hyper-converged infrastructure (HCI) architectures have staked an early claim as the data center architecture of the future. The ability of HCI solutions to easily scale capacity and performance, simplify ongoing management, and deliver cloud-like features has led many enterprises to view HCI as a logical path forward for building on-premises clouds.

But when organizations deploy HCI at scale (more than eight nodes in a single logical configuration), cracks in its architecture can begin to appear unless organizations carefully examine how well it scales. On the surface, high-end and standard hyper-converged architectures deliver the same benefits. They each virtualize compute, memory, storage networking, and data storage in a scale-out architecture that is simple to deploy and manage. They support standard hypervisor platforms. They provide their own data protection in the form of snapshots and replication. In short, these two architectures mirror each other in many ways.

However, high-end and standard HCI solutions differ in functionality in ways that primarily surface in large virtualized deployments. In these environments, enterprises often need to:

  • Host thousands of VMs in a single, logical environment.
  • Provide guaranteed, sustainable performance for each VM by insulating them from one another.
  • Offer connectivity to multiple public cloud providers for archive and/or backup.
  • Seamlessly move VMs to and from the cloud.

It is in these circumstances that differences between high-end and standard architectures emerge, with the high-end HCI architecture possessing four key advantages over standard HCI architectures. Consider:

  1. Data availability. Both high-end and standard HCI solutions distribute data across multiple nodes to ensure high levels of data availability. However, only high-end HCI architectures use separate, distinct compute and data nodes, with the data nodes dedicated to guaranteeing high levels of data availability for VMs on all compute nodes. This ensures that should any compute node become unavailable, the data associated with any VM on it remains available.
  2. Flash/performance optimization. Both high-end and standard HCI architectures take steps to keep data local to the VM by storing the data of each VM on the same node on which the VM runs. However, solutions based on high-end HCI architectures store a copy of each VM’s data on the VM’s compute node as well as on the high-end HCI architecture’s underlying data nodes to improve and optimize flash performance. High-end HCI solutions such as the Datrium DVX also deduplicate and compress all data while limiting the amount of inter-nodal communication to free resources for data processing.
  3. Platform manageability/scalability. As a whole, HCI architectures are intuitive to size, manage, scale, and understand. In short, if you need more performance and/or capacity, you only need to add another node. However, enterprises often must support hundreds or even thousands of VMs that each must perform well and be protected. Trying to deliver on those requirements at scale using a standard HCI solution, where inter-nodal communication is a prerequisite, becomes almost impossible. High-end HCI solutions facilitate granular deployments of compute and data nodes while limiting inter-nodal communication.
  4. VM insulation. Noisy neighbors—VMs that negatively impact the performance of adjacent VMs—remain a real problem in virtual environments. While both high-end and standard HCI architectures support storing data close to VMs, high-end HCI architectures do not rely on the availability of other compute nodes. This reduces inter-nodal communication and the probability that the noisy neighbor problem will surface.

HCI architectures have resonated with organizations in large part because of their ability to deliver cloud-like features and functionality while maintaining simplicity of deployment and ongoing maintenance. However, next-generation high-end HCI architectures, with solutions available from providers like Datrium, give organizations greater flexibility to deliver cloud-like functionality at scale, including better management and optimization of flash performance, improved data availability, and increased insulation between VM instances. To learn more about how standard and high-end HCI architectures compare, check out this recent DCIG pocket analyst report that is available on the TechTrove website.


Seven Key Differentiators between Dell EMC VMAX and HPE 3PAR StoreServ Systems

Dell EMC and Hewlett Packard Enterprise are enterprise technology stalwarts. Thousands of enterprises rely on VMAX or 3PAR StoreServ arrays to support their most critical applications and to store their most valuable data. Although VMAX and 3PAR predated the rise of the all-flash array (AFA), both vendors have adapted and optimized these products for flash memory. They deliver all-flash array performance without sacrificing the data services enterprises rely on to integrate with a wide variety of enterprise applications and operating environments.

While VMAX and 3PAR StoreServ can both support hybrid flash plus hard disk configurations, the focus is now on all-flash. Nevertheless, the ability of these products to support multiple tiers of storage will be advantageous in a future that includes multiple classes of flash memory along with other non-volatile storage class memory technologies.

Both the VMAX and 3PAR StoreServ can meet the storage requirements of most enterprises, yet differences remain. DCIG compares the current AFA configurations from Dell EMC and HPE in its latest DCIG Pocket Analyst Report. This report will help enterprises determine which product best fits their business requirements.

DCIG updated its research on all-flash arrays during the first half of 2018. In so doing, DCIG identified many similarities between the VMAX and 3PAR StoreServ products including:

  • Extensive support for VMware APIs including VMware Virtual Volumes (VVols)
  • Integration with multiple virtualization management consoles
  • Rich data replication and data protection offerings
  • Support for a wide variety of client operating systems
  • Unified SAN and NAS

DCIG also identified significant differences between the VMAX and 3PAR StoreServ products including:

  • Data center footprint
  • High-end history
  • Licensing simplicity
  • Mainframe connectivity
  • Performance resources
  • Predictive analytics
  • Raw and effective storage density


To see which vendor has the edge in each of these categories and why, you can access the latest 4-page Pocket Analyst Report from DCIG that analyzes and compares all-flash arrays from Dell EMC and HPE. This report is currently available for purchase on DCIG’s partner site: TechTrove. You may also register on the TechTrove website to be notified should this report become available at no charge at some future time.