Number of Appliances Dedicated to Deduplicating Backup Data Shrinks Even as Data Universe Expands

One would think that with the continuing explosion in the amount of data being created every year, the number of appliances that can reduce the amount of data stored by deduplicating it would be increasing. That assumption is both true and flawed. On one hand, the number of backup and storage appliances that can deduplicate data has never been higher and continues to increase. On the other hand, the number of vendors that create physical target-based appliances dedicated to the deduplication of backup data continues to shrink.

Data Universe Expands

In November 2018, IDC released a report estimating that the amount of data created, captured, and replicated will increase more than five-fold, from the current 33 zettabytes (ZBs) to about 175 ZBs in 2025. Whether or not one agrees with that estimate, there is little doubt that there are more ways than ever in which data gets created. These include:

  • Endpoint devices such as PCs, tablets, and smart phones
  • Edge devices such as sensors that collect data
  • Video and audio recording devices
  • Traditional data centers
  • Additional data created through the backup, replication, and copying of this data
  • The creation of metadata that describes, categorizes, and analyzes this data

All these sources and means of creating data mean there is more data than ever under management. But as this occurs, the number of products originally developed to control this data growth – hardware appliances that specialize in deduplicating backup data after it is backed up, such as those from Dell EMC, ExaGrid, and HPE – has shrunk in recent years.

Here are the top five reasons for this trend.

1. Deduplication has Moved onto Storage Arrays.

Many storage arrays, both primary and secondary, give companies the option to deduplicate data. While these arrays may not achieve the same deduplication ratios as appliances purpose-built for deduplicating backup data, their combination of lower costs and high levels of storage capacity offsets the inability of their deduplication software to fully optimize backup data.

2. Backup software offers deduplication capabilities.

Rather than waiting to deduplicate backup data on a hardware appliance, almost all enterprise backup software products can deduplicate on either the client or the backup server before storing it. This eliminates the need to use a storage device dedicated to deduplicating data.
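As a minimal sketch of how source-side deduplication generally works (the fixed 4 KB chunk size, SHA-256 fingerprints, and in-memory index are illustrative assumptions, not any particular vendor's implementation), the backup client splits data into chunks, fingerprints each chunk, and sends only chunks whose fingerprints the backup store has not already seen:

```python
import hashlib

CHUNK_SIZE = 4096  # illustrative fixed chunk size; real products typically use variable-size chunking


def dedupe_stream(data, index):
    """Split the data into chunks, fingerprint each chunk, and return only the
    chunks whose fingerprints are not already in the index, i.e. the bytes
    that actually need to be sent to or stored on the backup target."""
    new_chunks = []
    bytes_skipped = 0
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        fingerprint = hashlib.sha256(chunk).hexdigest()
        if fingerprint in index:
            bytes_skipped += len(chunk)      # duplicate chunk: store only a reference
        else:
            index.add(fingerprint)
            new_chunks.append((fingerprint, chunk))
    return new_chunks, bytes_skipped


# A second "full backup" of unchanged data sends almost nothing new.
index = set()
first_full = b"corporate file data " * 10_000
dedupe_stream(first_full, index)                 # seeds the fingerprint index
_, skipped = dedupe_stream(first_full, index)    # every chunk is already known
print(f"Second full backup skipped {skipped:,} of {len(first_full):,} bytes")
```

Because the second pass over unchanged data finds every fingerprint already in the index, almost nothing new crosses the wire, which is exactly the effect that removes the need for a downstream deduplication appliance.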

3. Virtual appliances that perform deduplication are on the rise.

Some providers, such as Quest Software, have exited the physical deduplication backup target appliance market and re-emerged with virtual appliances that deduplicate data. These give companies new flexibility to use hardware from any provider they want and implement their software-defined data center strategy more aggressively.

4. Newly created data may not deduplicate well or at all.

A lot of the new data that companies create may not deduplicate well, or at all. Audio and video files may not change and will only deduplicate if full backups are done – which may be rare. Encrypted data will not deduplicate at all. In these circumstances, deduplication appliances are rarely if ever needed.

5. Multiple backup copies of the data may not be needed.

Much of the data collected from edge and endpoint devices may only require a couple of copies, if that. Audio and video files may also fall into this category of not needing more than a couple of retained copies. To get the full benefits of a target-based deduplication appliance, one needs to back up the same data multiple times – usually at least six times, if not more. This reduced need to back up and retain multiple copies of data diminishes the need for these appliances.
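A rough back-of-the-envelope model illustrates the point. Assuming periodic full backups, a fixed change rate between them, and perfect deduplication of unchanged data (all simplifying assumptions, not vendor-published figures), the effective reduction ratio grows with the number of fulls retained and collapses toward 1:1 when only one or two copies are kept:

```python
def dedupe_ratio(fulls_retained, dataset_tb, change_rate):
    """Estimate the effective reduction ratio for a retention set of full
    backups, assuming only changed data contributes unique chunks."""
    logical_tb = fulls_retained * dataset_tb                                   # what the backup app writes
    stored_tb = dataset_tb + (fulls_retained - 1) * dataset_tb * change_rate   # what lands on disk
    return logical_tb / stored_tb


# 100 TB dataset with ~5% of the data changing between fulls (assumed values).
for fulls in (1, 2, 6, 12):
    print(f"{fulls:>2} fulls retained -> ~{dedupe_ratio(fulls, 100, 0.05):.1f}:1 reduction")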

Remaining Deduplication Appliances More Finely Tuned for Enterprise Requirements

The reduction in the number of vendors shipping physical target-based deduplication backup appliances seems almost counter-intuitive in light of the ongoing explosion in data growth. But when one considers the nature of much of the data being created and its corresponding data protection and retention requirements, the decrease in the number of target-based deduplication appliances available is understandable.

The upside is that the vendors who do remain and the physical target-based deduplication appliances that they ship are more finely tuned for the needs of today’s enterprises. They are larger, better suited for recovery, have more cloud capabilities, and account for the broader trends mentioned above. These factors and others will be covered in the forthcoming DCIG Buyer’s Guide on Enterprise Deduplication Backup Appliances.




Leading Hyperconverged Infrastructure Solutions Diverge Over QoS

Hyperconvergence is Reshaping the Enterprise Data Center

Virtualization largely shaped the enterprise data center landscape for the past ten years. Hyperconverged infrastructure (HCI) is beginning to have the same type of impact, reshaping the enterprise data center to fully capitalize on the benefits that a virtualized infrastructure affords.

Hyperconverged Infrastructure Defined

DCIG defines a hyperconverged infrastructure (HCI) as a solution that pre-integrates virtualized compute, storage and data protection functions along with a hypervisor and scale-out cluster management software. HCI vendors may offer their solutions as turnkey appliances, installable software or as an instance running on public cloud infrastructure. The most common physical instantiation of—and unit of scaling for—hyperconverged infrastructure is a 1U or 2U rack-mountable appliance containing 1–4 cluster nodes.

HCI Adoption Exceeding Analyst Forecasts

Hyperconverged infrastructure (HCI), and the software-defined storage (SDS) technology that is a critical component of these solutions, is still in the early stages of adoption. Yet according to IDC data, spending on HCI already exceeds $5 billion annually and is growing at a rate that substantially outpaces many analyst forecasts.

[Graph comparing analyst forecasts with actual hyperconverged sales growth]

HCI Requirements for Next-Generation Datacenter Adoption

The success of initial HCI deployments in reducing complexity, speeding time to deployment, and lowering costs compared to traditional architectures has opened the door to an expanded role in the enterprise data center. Indeed, HCI is rapidly becoming the core technology of the next-generation enterprise data center. In order to succeed as a core technology, these HCI solutions must meet a new and demanding set of expectations. These expectations include:

  • Simplified management, including at scale
  • Workload consolidation, including mission-critical

The Role of Quality of Service in Simplifying Management and Consolidating Workloads

Three performance elements that are candidates for quality of service (QoS) management are latency, IOPS, and throughput. Some HCI solutions address all three elements, while others manage just a single element.

HCI solutions also take varied approaches to managing QoS in terms of fixed assignments versus relative priority. The fixed assignment approach involves assigning minimum, maximum, and/or target values per volume. The relative priority approach involves assigning each volume to a priority group such as Gold, Silver, or Bronze.
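The two approaches can be modeled in a few lines of code. The sketch below contrasts a fixed assignment (a hard per-volume IOPS ceiling that applies even when the cluster is idle) with a relative-priority scheme in which Gold/Silver/Bronze weights only take effect under contention. The class names, weights, and IOPS figures are illustrative assumptions rather than any vendor's implementation:

```python
def apply_fixed_limit(requested_iops, max_iops):
    """Fixed assignment: a hard per-volume ceiling that throttles the volume
    even when the cluster has spare capacity."""
    return min(requested_iops, max_iops)


PRIORITY_WEIGHTS = {"Gold": 4, "Silver": 2, "Bronze": 1}  # assumed weights


def share_under_contention(volumes, cluster_iops):
    """Relative priority: weights only matter when aggregate demand exceeds
    cluster capacity. 'volumes' maps name -> (priority_class, requested_iops)."""
    total_requested = sum(req for _, req in volumes.values())
    if total_requested <= cluster_iops:
        return {name: req for name, (_, req) in volumes.items()}   # no contention, no throttling
    total_weight = sum(PRIORITY_WEIGHTS[cls] * req for cls, req in volumes.values())
    return {name: int(cluster_iops * PRIORITY_WEIGHTS[cls] * req / total_weight)
            for name, (cls, req) in volumes.items()}


# Under contention, the Gold volume keeps most of its requested IOPS.
demand = {"sql01": ("Gold", 60_000), "fileshare": ("Bronze", 60_000)}
print(apply_fixed_limit(60_000, max_iops=20_000))            # fixed cap throttles regardless of load
print(share_under_contention(demand, cluster_iops=80_000))   # {'sql01': 64000, 'fileshare': 16000}
```

The example output shows the trade-off: the fixed cap throttles a volume no matter how idle the cluster is, while the priority scheme leaves every volume untouched until aggregate demand actually exceeds cluster capacity.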

Superior QoS technology creates business value by driving down operating expenses (OPEX). It dramatically reduces the amount of time IT staff must spend troubleshooting service level agreement (SLA) related problems.

Superior QoS also creates business value by driving down capital expenses (CAPEX). It enables more workloads to be confidently consolidated onto less hardware. The more intelligent it is, the less over-provisioning (and over-purchasing) of hardware will be required.

Finally, QoS can be applied to workload performance alone or to performance and data protection to meet service level agreements in both domains.

How Some Popular Hyperconverged Infrastructure Solutions Diverge Over QoS

DCIG is in the process of updating its research on hyperconverged infrastructure solutions. In the process we have observed that these solutions take very divergent approaches to quality of service.

Cisco HyperFlex offers QoS on the NIC, which is useful for converged networking, but does not offer storage QoS that addresses application priority within the solution itself.

Dell EMC VxRail QoS is very basic. Administrators can assign fixed IOPS limits per volume. Workloads using those volumes get throttled even when there is no resource contention, yet still compete for IOPS with more important workloads. This approach to QoS does protect a cluster from a rogue application consuming too many resources, but is probably a better fit for managed service providers than for most enterprises.

Nutanix “Autonomic QoS” automatically prioritizes user applications over back-end operations whenever contention occurs. Nutanix AI/ML technology understands common workloads and prioritizes different kinds of IO from a given application accordingly. This approach offers great appeal because it is fully automatic. However, it is global and not user-configurable.

Pivot3 offers intelligent policy-based QoS. Administrators assign one of five QoS policies to each volume when it is created. In addition to establishing priority, each policy assigns targets for latency, IOPS and throughput. Pivot3’s Intelligence Engine then prioritizes workloads in real-time based on those policies. The administrator assigning the QoS policy to the volume must know the relative importance of the associated workload; but once the policy has been assigned, performance management is “set it and forget it”. Pivot3 QoS offers other advanced capabilities including applying QoS to data protection and the ability to change QoS settings on-the-fly or on a scheduled basis.

QoS Ideal = Automatic, Intelligent and Configurable

The ideal quality of service technology would be automatic and intelligent, yet configurable. Though none of these hyperconverged solutions may fully realize that ideal, Nutanix and Pivot3 both bring significant elements of this ideal to market as part of their hyperconverged infrastructure solutions.

Enterprises considering HCI as a replacement for existing core data center infrastructure should give special attention to how the solution implements quality of service technology. Superior QoS technology will reduce OPEX by simplifying management and reduce CAPEX by consolidating many workloads onto the solution.




The Early Implications of NVMe/TCP on Ethernet Network Designs

The ratification in November 2018 of the NVMe/TCP standard officially opened the doors for NVMe/TCP to begin to find its way into corporate IT environments. Earlier this week I had the opportunity to listen in on a SNIA-hosted webinar that provided an update on NVMe/TCP’s latest developments and its implications for enterprise IT. Here are four key takeaways from that presentation and how these changes will impact corporate data center Ethernet network designs.

First, NVMe/TCP will accelerate the deployment of NVMe in enterprises.

NVMe is already available in networked storage environments using competing protocols such as RDMA, which ships as RoCE (RDMA over Converged Ethernet). The challenge is that almost no one uses RDMA in any meaningful way in their environment, so using RoCE to run NVMe never gained, and will likely never gain, much momentum.

The availability of NVMe over TCP changes that. Companies already understand TCP, deploy it everywhere, and know how to scale and run it over their existing Ethernet networks. NVMe/TCP will build on this legacy infrastructure and knowledge.

Second, any latency that NVMe/TCP introduces still pales in comparison to existing storage networking protocols.

Running NVMe over TCP does introduce latency versus using RoCE. However, the latency that TCP introduces is nominal and will likely be measured in microseconds in most circumstances. Most applications will not even detect this level of latency because of the substantial jump in performance that natively running NVMe over TCP provides versus existing storage protocols such as iSCSI and FC.

Third, the introduction of NVMe/TCP will require companies to implement Ethernet network designs that minimize latency.

Ethernet networks may implement buffering in Ethernet switches to handle periods of peak workloads. Companies will need to revisit that network design technique when deploying NVMe/TCP, as buffering introduces latency into the network and NVMe is highly latency-sensitive. Companies will need to more carefully balance how much buffering they configure on their Ethernet switches.
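A quick calculation shows why buffer depth matters so much to NVMe/TCP: the worst-case queuing delay a switch port can add is simply its buffer depth divided by the egress link rate, and even a few megabytes of buffering can add delay that dwarfs the microsecond-class latency of an NVMe device. The buffer sizes and the 25 GbE link rate below are illustrative assumptions:

```python
def worst_case_queuing_delay_us(buffer_mb, link_gbps):
    """Worst-case delay added by a full egress buffer: buffer depth / drain rate."""
    buffer_bits = buffer_mb * 1024 * 1024 * 8
    return buffer_bits / (link_gbps * 1e9) * 1e6   # microseconds


# Illustrative buffer depths on a 25 GbE port.
for buffer_mb in (0.5, 4, 16):
    delay_us = worst_case_queuing_delay_us(buffer_mb, link_gbps=25)
    print(f"{buffer_mb:>4} MB buffer on 25 GbE -> up to ~{delay_us:,.0f} µs of added queuing delay")
```

Even the 4 MB case can add over a millisecond of queuing delay under load, which is why deep-buffered designs that work well for bursty file traffic need to be re-balanced for NVMe/TCP.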

Fourth, get familiar with the term “incast collapse” on Ethernet networks and how to mitigate it.

NVMe can support up to 64,000 queues, and every queue that NVMe opens initiates a TCP session. Here is where challenges may eventually surface. Simultaneously opening multiple queues results in multiple TCP sessions initiating at the same time, which can cause all of these sessions to arrive at a common congestion point in the Ethernet network at once. When that happens, all of the affected TCP sessions back off at the same time, a phenomenon known as incast collapse, which creates latency in the network.

[Figure: TCP incast collapse. Source: University of California-Berkeley]

Historically this has been a very specialized and rare occurrence in networking due to the low probability that such an event would ever take place. But the introduction of NVMe/TCP into the network makes such an event much more likely to occur, especially as more companies deploy NVMe/TCP in their environments.
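The toy model below illustrates the incast mechanism in miniature: many synchronized senders, standing in for NVMe queues that each map to a TCP session, burst into a single egress buffer; once the aggregate burst exceeds the buffer, packets are dropped and the affected sessions all back off together. The buffer size, burst size, and session counts are illustrative assumptions, not measurements:

```python
def incast_round(sessions, burst_kb, buffer_kb):
    """One synchronized burst into a single egress buffer (toy model).
    Anything beyond the buffer is dropped, and those senders back off together."""
    offered_kb = sessions * burst_kb
    dropped_kb = max(0, offered_kb - buffer_kb)
    sessions_backing_off = 0 if dropped_kb == 0 else round(sessions * dropped_kb / offered_kb)
    return {"offered_kb": offered_kb, "dropped_kb": dropped_kb,
            "sessions_backing_off": sessions_backing_off}


# A handful of NVMe/TCP sessions fit in the buffer; hundreds arriving together do not.
for sessions in (8, 64, 512):
    print(f"{sessions:>3} sessions:", incast_round(sessions, burst_kb=64, buffer_kb=4096))
```

With a few sessions nothing is lost, but once hundreds of sessions burst simultaneously most of the offered data is dropped and the bulk of the senders retreat at once, which is the goodput collapse the term describes.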

The Ratification of the NVMe/TCP Standard

Ratification of the NVMe/TCP standard potentially makes every enterprise data center a candidate for storage systems that can deliver dramatically better performance to their workloads. Until the performance demands of every workload in a data center are met instantaneously, some workload requests will queue up behind a bottleneck in the data center infrastructure.

Just as introducing flash memory into enterprise storage systems revealed bottlenecks in storage operating system software and storage protocols, NVMe/TCP-based storage systems will reveal bottlenecks in data center networks. Enterprises seeking to accelerate their applications by implementing NVMe/TCP-based storage systems may discover bottlenecks in their networks that need to be addressed in order to see the full benefits that NVMe/TCP-based storage can deliver.

To view this presentation in its entirety, follow this link.




Three Hallmarks of an Effective Competitive Intelligence System

Across more than twenty years as an IT Director, I had many salespeople incorrectly tell me that their product was the only one that offered a particular benefit. Did their false claims harm their credibility? Absolutely. Were they trying to deceive me? Possibly. But it is far more likely that they sincerely believed their claims.

What they lacked was not truthfulness but accuracy. They lacked accurate and up-to-date information about the current capabilities of competing products in the marketplace. Their competitive intelligence system had failed them.

When DCIG was recruiting me to become an analyst, I asked DCIG’s founder, Jerome Wendt, what the most surprising things he had learned since founding DCIG were. One of the three things he mentioned was the degree to which vendors lack knowledge of the product features and capabilities of their key competitors.

Reasons Vendors Lack Good Competitive Intelligence

There are many reasons why vendors lack good competitive intelligence. These include:

  • They are focused on delivering and enhancing their own product to meet the perceived needs of current and prospective customers.
  • Collecting and maintaining accurate data about even key competitors’ products can be time consuming and challenging.
  • Staff transitions may result in a loss of data continuity.

Benefits of an Effective Competitive Intelligence System

An effective competitive intelligence system increases sales by enabling partners and sales personnel to quickly grasp key product differentiators and how those translate into business benefits. Thus, it enhances the onboarding of new personnel and their opportunity for success.

Three Hallmarks of an Effective Competitive Intelligence System

The hallmarks of an effective competitive intelligence system center around three themes: data, insight and communication.

Regarding Data, the system must:

  • Capture current, accurate data about key competitor products
  • Provide data continuity across staff transitions
  • Provide analyses that surface commonalities and differences between products

 

Regarding Insight, the system must:

  • Clearly identify product differentiators
  • Clearly articulate the business benefits of those differentiators

 

Regarding Communication, the system must:

  • Provide concise content that enables partners and sales personnel to quickly grasp key product differentiators and how those translate into business benefits for CxOs and line of business executives
  • Bridge the gap between sales and marketing with messages that are tailored to be consistent with product branding
  • Provide the content at the right time and in the right format

Whatever combination of software, services and competitive intelligence personnel a company employs, an effective competitive intelligence system is an important asset for any company seeking to thrive in a competitive marketplace.

DCIG’s Competitive Intelligence Track Record

DCIG Buyer’s Guides

Since 2010, DCIG Buyer’s guides have provided hundreds of thousands with an independent look at the many products in each market DCIG covers. Each Buyer’s Guide gives decision makers insight into the features that merit particular attention, what is available now and key directions in the marketplace. DCIG produces Buyer’s Guides based on our larger bodies of research in data protection, enterprise storage and converged infrastructure.

DCIG Pocket Analyst Reports

DCIG leverages much of the Buyer’s Guide research methodology–and the competitive intelligence platform that supports that research–to create focused reports that highlight the differentiators between two products that frequently appear on the same short lists.

Our Pocket Analyst Reports are published and made available for sale on a third-party website to substantiate the independence of each report. Vendors can license these reports for use in lead generation, internal sales training, and for use with prospective clients.

DCIG Competitive Intelligence Reports

DCIG also uses its Competitive Intelligence Platform to produce reports for internal use by our clients. These concise reports enable partners and sales personnel to quickly grasp key product differentiators and how those translate into business benefits that make sense to CxOs and line of business executives. Because these reports are for internal use, the client can have substantial input into the messaging.

DCIG Battle Cards

Each DCIG Battle Card is a succinct 2-page document that compares the client’s product or product family to one other product or product family. The client and DCIG collaborate to identify the key product features to compare, the key strengths that the client’s product offers over the competing product, and the appropriate messaging to include on the battle card. Content may be contributed by the client for inclusion on the battle card. The battle card is only for the internal use of the client and its partners and may not be distributed.

DCIG Competitive Intelligence Platform

The DCIG Competitive Intelligence (CI) Platform is a multi-tenant, platform-as-a-service (PaaS) offering backed by support from DCIG analysts. The DCIG Competitive Intelligence Platform offers the flexibility to centrally store data and compare features on competitive products. Licensees receive the ability to centralize competitive intelligence data in the cloud with the data made available internally to their employees and partners via reports prepared by DCIG analysts.

DCIG Competitive Intelligence platform and associated analyst services strengthen the competitive intelligence capabilities of our clients. Sometimes in unexpected ways…

  • Major opportunity against a competitor never faced before
  • Strategic supplier negotiation and positioning of competitor

 

In each case, DCIG analysis identified differentiators and third-party insights that helped close the deal.




HCI Comparison Report Reveals Key Differentiators Between Dell EMC VxRail and Nutanix NX

Many organizations view hyper-converged infrastructure (HCI) as the data center architecture of the future. Dell EMC VxRail and Nutanix NX appliances are two leading options for creating the enterprise hybrid cloud. Visibility into their respective data protection ecosystems, enterprise application certifications, solution integration, support for multiple hypervisors, scalability and maturity should help organizations choose the most appropriate solution for them.

HCI Appliances Deliver Radical Simplicity

Hyper-converged infrastructure appliances radically simplify the data center architecture. These pre-integrated appliances accelerate and simplify infrastructure deployment and management. They combine and virtualize compute, memory, storage and networking functions from a single vendor in a scale-out cluster. Thus, the stakes are high for vendors such as Dell EMC and Nutanix as they compete to own this critical piece of data center real estate.

In the last several years, HCI has also emerged as a key enabler for cloud adoption. These solutions provide connectivity to public and private clouds, and offer their own cloud-like properties. Ease of scaling, simplicity of management, plus non-disruptive hardware upgrades and data migrations are among the features that enterprises love about these solutions.

HCI Appliances Are Not All Created Equal

Many enterprises are considering HCI solutions from providers such as Dell EMC and Nutanix. A cursory examination of these two vendors and their solutions quickly reveals similarities between them. For example, both companies control the entire hardware and software stacks of their HCI appliances. Also, both providers pretest firmware and software updates and automate cluster-wide roll-outs.

Nevertheless, important differences remain between the products. Due to the high level of interest in these products, DCIG published an initial comparison in November 2017. Both providers recently enhanced their offerings. Therefore, DCIG refreshed its research and has released an updated head-to-head comparison of the Dell EMC VxRail and Nutanix NX appliances.

Updated DCIG Pocket Analyst Report Reveals Key HCI Differentiators

In this updated report, DCIG identifies six ways the HCI solutions from these two providers currently differentiate themselves from one another. This succinct, 4-page report includes a detailed feature matrix as well as insight into key differentiators between these two HCI solutions such as:

  • Breadth of ecosystem
  • Enterprise applications certified
  • Multi-hypervisor flexibility
  • Scalability
  • Solution integration
  • Vendor maturity

DCIG is pleased to make this updated DCIG Pocket Analyst Report available for purchase for $99.95 via the TechTrove marketplace. The report is temporarily also available free of charge with registration from the Unitrends website.




VMware vSphere and Nutanix AHV Hypervisors: An Updated Head-to-Head Comparison

Many organizations view hyper-converged infrastructure appliances (HCIAs) as foundational for the cloud data center architecture of the future. However, as part of an HCIA solution, one must also select a hypervisor to run on this platform. The VMware vSphere and Nutanix AHV hypervisors are two capable choices but key differences exist between them.

In the last several years, HCIAs have emerged as a key enabler for cloud adoption. Aside from the connectivity to public and private clouds that these solutions often provide, they offer their own cloud-like properties. Ease of scaling, simplicity of management, and non-disruptive hardware upgrades and data migrations highlight the list of features that enterprises are coming to know and love about these solutions.

But as enterprises adopt HCIA solutions in general as well as HCIA solutions from providers like Nutanix, they must still evaluate key features in these solutions. One variable that enterprises should pay specific attention to is the hypervisors available to run on these HCIA solutions.

Unlike some other HCIA solutions, Nutanix gives organizations the flexibility to choose which hypervisor they want to run on their HCIA platform. They can choose to run the widely adopted VMware vSphere, or they can choose to run Nutanix’s own Acropolis hypervisor (AHV).

What is not always so clear is which one they should host on the Nutanix platform. Each hypervisor has its own set of benefits and drawbacks. To help organizations make a more informed choice as to which hypervisor is the best one for their environment, DCIG is pleased to make available its updated DCIG Pocket Analyst Report that provides a head-to-head comparison of the VMware vSphere and Nutanix AHV hypervisors.

This succinct, 4-page report includes a detailed product matrix as well as insight into seven key differentiators between these two hypervisors and which one is best positioned to deliver on key cloud and data center considerations such as:

  • Data protection ecosystem
  • Support for Guest OSes
  • Support for VDI platforms
  • Certified enterprise applications
  • Fit with corporate direction
  • More favorable licensing model
  • Simpler management

This DCIG Pocket Analyst Report is available for purchase for $99.95 via the TechTrove marketplace. The report is temporarily also available free of charge with registration from the Unitrends website.




Dell EMC VxRail vs Nutanix NX: Six Key HCIA Differentiators

Many organizations view hyper-converged infrastructure appliances (HCIAs) as the data center architecture of the future. Dell EMC VxRail and Nutanix NX appliances are two leading options for creating the enterprise hybrid cloud. Visibility into their respective data protection ecosystems, enterprise application certifications, solution integration, support for multiple hypervisors, scalability and maturity should help organizations choose the most appropriate solution for them.

Hyper-converged infrastructure appliances (HCIA) radically simplify the next generation of data center architectures. Combining and virtualizing compute, memory, storage, networking, and data protection functions from a single vendor in a scale-out cluster, these pre-integrated appliances accelerate and simplify infrastructure deployment and management. As such, the stakes are high for vendors such as Dell EMC and Nutanix that are competing to own this critical piece of data center infrastructure real estate.

In the last several years, HCIAs have emerged as a key enabler for cloud adoption. These solutions provide connectivity to public and private clouds, and offer their own cloud-like properties. Ease of scaling, simplicity of management, and non-disruptive hardware upgrades and data migrations highlight the list of features that enterprises are coming to know and love about these solutions.

But as enterprises consider HCIA solutions from providers such as Dell EMC and Nutanix, they must still evaluate key features available on these solutions as well as the providers themselves. A cursory examination of these two vendors and their respective solutions quickly reveals similarities between them. For example, both companies control the entire hardware and software stacks of their respective HCIA solutions. Both pre-test firmware and software updates holistically and automate cluster-wide roll-outs.

Despite these similarities, differences between them remain. To help enterprises select the product that best fits their needs, DCIG published its first comparison of these products in November 2017. There is a high level of interest in these products, and both providers recently enhanced their offerings. Therefore, DCIG refreshed its research and has released an updated head-to-head comparison of the Dell EMC VxRail and Nutanix NX appliances.

In this updated report, DCIG identifies six ways the HCIA solutions from these two providers currently differentiate themselves from one another. This succinct, 4-page report includes a detailed product matrix as well as insight into key differentiators between these two HCIA solutions such as:

  • Breadth of ecosystem
  • Enterprise applications certified
  • Multi-hypervisor flexibility
  • Scalability
  • Solution integration
  • Vendor maturity

DCIG is pleased to make this updated DCIG Pocket Analyst Report available for purchase for $99.95 via the TechTrove marketplace.




Purpose-built Backup Cloud Service Providers: A Practical Starting Point for Cloud Backup and DR

The shift is on toward using cloud service providers for an increasing number of production IT functions, with backup and DR often at the top of the list of tasks that companies first want to deploy in the cloud. But as IT staff seek to “check the box” showing they comply with corporate directives to have a cloud solution in place for backup and DR, they also need to simultaneously check the “Simplicity,” “Cost-savings,” and “It Works” boxes.

Benefits of Using a Purpose-built Backup Cloud Service Provider

Cloud service providers purpose-built for backup and DR put companies in the best position to check all those boxes for this initial cloud use case. These providers solve their clients’ immediate challenges of easily and cost-effectively moving backup data off-site and then retaining it long term.

Equally important, addressing the off-site backup challenge in this way positions companies to do data recovery with the purpose-built cloud service provider, a general purpose cloud service provider, or on-premises. It also sets the stage for companies to regularly test their disaster recovery capabilities so they can perform them when necessary.

Choosing a Purpose-built Backup Cloud Service Provider

To choose the right cloud service provider for corporate backup and DR requirements, companies want to be cost conscious. But they also want to experience success and not put their corporate data or their broader cloud strategy at risk. A purpose-built cloud service provider such as Unitrends, with its Forever Cloud solution, frees companies to aggressively and confidently move ahead with a cloud deployment for their backup and DR needs.


Join Our Webinar for a Deeper Look at the Benefits of Using a Purpose-built Backup Cloud Service Provider

Join me next Wednesday, October 17, 2018, at 2 pm EST, for a webinar where I take a deeper look at both purpose-built and general purpose cloud service providers. In this webinar, I examine why service providers purpose-built for cloud backup and DR can save companies both money and time while dramatically improving the odds they can succeed in their cloud backup and DR initiatives. You can register by following this link.

 




Storage Analytics and Latency Matters

Some pretty amazing storage performance numbers are being bandied about these days. Generally speaking, these heretofore unheard of claims of millions of IOPS and latencies measured in microseconds include references to NVMe and perhaps storage class memories. What ultimately matters to a business is the performance of its applications, not just storage arrays. When an application is performing poorly, identifying the root cause can be a difficult and time-consuming challenge. This is particularly true in virtualized infrastructures. But meaningful help is now available to address this challenge through advances in storage analytics.

Storage Analytics Delivers Quantifiable Value

In a previous blog article about the benefits of Predictive Analytics in Enterprise Storage, I mentioned HPE’s InfoSight predictive analytics and the VMVision cross-stack analytics tool they released in mid-2015. HPE claims its Nimble Storage array customers are seeing the following benefits from InfoSight:

  • 99.9999% measured availability across its installed base
  • 86% of problems are predicted and automatically resolved before customers even realize there is an issue
  • 85% less time spent managing and resolving storage-related problems
  • 79% savings in operational expense (OpEx)
  • 54% of pinpointed issues are not storage-related, as identified through InfoSight VMVision cross-stack analytics
  • 42 minutes: the average level three engineer time required to resolve an issue
  • 100% of issues go directly to level three support engineers, with no time wasted working through level one and level two engineers

 

Pure Storage also offers predictive analytics, called Pure1 Meta. On September 20, 2018, Pure Storage released an extension of the Pure1 Meta platform called VM Analytics. Even in this first release, VM Analytics is clearly going to simplify and accelerate the process of resolving performance problems for Pure Storage FlashArray customers.

Application Latency is a Systemic Issue

The online demonstration of VM Analytics quickly impressed me with the fact that application latency is a systemic issue, not just a storage performance issue. The partial screen shot from the Pure1 VM Analytics tool included below shows a virtual machine delivering an average latency of 7.4 milliseconds. This view into performance provided by VM Analytics enables IT staff to quickly zero in on the VM itself as the place to focus in resolving the performance issue.

[Screen shot of Pure1 VM Analytics]

This view also shows that the datastore is responsible for less than 1 millisecond of that 7.4 milliseconds of latency. My point is that application latency depends on factors beyond the storage system. It must be addressed as a systemic issue.
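The same observation can be expressed as simple arithmetic: if end-to-end latency is 7.4 milliseconds and the datastore accounts for less than 1 millisecond of it, more than 6 milliseconds must be accumulating in the guest, hypervisor, and host layers, so tuning the array alone cannot fix the problem. The per-layer breakdown below is a hypothetical illustration consistent with those two figures, not data taken from the Pure1 tool:

```python
# Hypothetical per-layer contributions (milliseconds) consistent with the
# example above: 7.4 ms end-to-end, with under 1 ms attributable to the datastore.
layers_ms = {
    "guest OS / application": 2.9,   # assumed
    "hypervisor / host":      3.6,   # assumed
    "datastore (storage)":    0.9,   # roughly matches the "< 1 ms" reading
}

total_ms = sum(layers_ms.values())
for layer, ms in layers_ms.items():
    print(f"{layer:<24} {ms:>4.1f} ms  ({ms / total_ms:5.1%} of total)")
print(f"{'end-to-end total':<24} {total_ms:>4.1f} ms")
```

In this illustration the storage tier contributes barely an eighth of the total latency, which is why a cross-stack view is needed before deciding where to spend tuning effort.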

Storage Analytics Simplify the Data Center Balancing Act

The key performance resources in a data center include CPU cycles, DRAM, storage systems and the network. Unless a system is dramatically over-provisioned, one of these resources will always constrain the performance of applications. Storage has historically been the limiting factor in application performance but the flash-enabled transformation of the data center has changed that dynamic.

Tools like VMVision and VM Analytics create value by giving data center administrators new levels of visibility into infrastructure performance. Therefore, technology purchasers should carefully evaluate these storage analytics tools as part of the purchase process. IT staff should use these tools to balance the key performance resources in the data center and deliver the best possible application performance to the business.




DCIG’s VMworld 2018 Best of Show

VMworld provides insight into some of the biggest tech trends occurring in the enterprise data center space and, once again, this year did not disappoint. But amid the literally dozens of vendors showing off their wares on the show floor, here are the four stories or products that caught my attention and earned my “VMworld 2018 Best of Show” recognition at this year’s event.

VMworld 2018 Best of Show #1: QoreStor Software-defined Secondary Storage

Software-defined dedupe in the form of QoreStor has arrived. A few years ago, Dell Technologies sold off its Dell Software division, which included an assortment (actually a lot) of software products and emerged as Quest Software. Since then, Quest Software has done nothing but innovate like bats out of hell. One of the products coming out of this lab of mad scientists is QoreStor, a stand-alone software-defined secondary storage solution. QoreStor installs on any server hardware platform and works with any backup software. QoreStor provides a free download that will deduplicate up to 1TB of data at no charge. While at least one other vendor (Dell Technologies) offers a deduplication product that is available as a virtual appliance, only Quest QoreStor gives organizations the flexibility to install it on any hardware platform.

VMworld 2018 Best of Show #2: LoginVSI Load Testing

Stop guessing on the impact of VDI growth with LoginVSI. Right-sizing the hardware for a new VDI deployment from any VDI provider (Citrix, Microsoft, VMware, etc.) is hard enough. But keeping the hardware appropriately sized as more VDI instances are added, or as patches and upgrades are made to existing VDI instances, can be dicey at best and downright impossible at worst.

LoginVSI helps take the guesswork out of any organization’s future VDI growth by:

  • examining the hardware configuration underlying the current VDI environment,
  • comparing that to the changes proposed for the environment, and then
  • making recommendations as to what changes, if any, need to be made to the environment to support
    • more VDI instances or
    • changes to existing VDI instances.

VMworld 2018 Best of Show #3: Runecast vSphere Environment Analyzer

Plug vSphere’s holes and keep it stable with Runecast. Runecast was formed by a group of former IBM engineers who over the years had become experts at two things while working at IBM: (1) installing and configuring VMware vSphere; and (2) pulling out their hair trying to keep each installation of VMware vSphere stable and up-to-date. To rectify this second issue, they left IBM and formed Runecast.

Runecast provides organizations with the software and tools they need to proactively:

  • identify needed vSphere patches and fixes,
  • identify security holes, and even
  • recommend best practices for tuning your vSphere deployments, all with minimal to no manual intervention.

VMworld 2018 Best of Show #4: VxRail Hyper-converged Infrastructure

Kudos to Dell Technologies VxRail solution for saving every man, woman, and child in the United States $2/person in 2018. I was part of a breakfast conversation on Wednesday morning with one of the Dell Technologies VxRail product managers. He anecdotally shared with the analysts present that Dell Technologies recently completed two VxRail hyper-converged infrastructure deployments in two US government agencies. Each deployment is saving each agency $350 million annually. Collectively that amounts to $700 million or $2/person for every person residing in the US. Thank you, Dell.




Six Key Differentiators between HPE 3PAR StoreServ and NetApp AFF A-Series All-flash Arrays

Both HPE and NetApp have multiple enterprise storage product lines. Each company also has a flagship product. For HPE it is the 3PAR StoreServ line. For NetApp it is the AFF (all flash FAS) A-Series. DCIG’s latest Pocket Analyst Report examines these flagship all-flash arrays. The report identifies many similarities between the products, including the ability to deliver low latency storage with high levels of availability, and a relatively full set of data management features.

DCIG’s Pocket Analyst Report also identifies six significant differences between the products. These differences include how each product provides deduplication and other data services, hybrid cloud integration, host-to-storage connectivity, scalability, and simplified management through predictive analytics and bundled or all-inclusive software licensing.

DCIG recently updated its research on the dynamic and growing all-flash array marketplace. In so doing, DCIG identified many similarities between the HPE 3PAR StoreServ and NetApp AFF A-Series products including:

  • Unified SAN and NAS protocol support
  • Extensive support for VMware APIs including VMware Virtual Volumes (VVols)
  • Integration with popular virtualization management consoles
  • Rich data replication and data protection offerings

DCIG also identified significant differences between the HPE and NetApp products including:

  • Hardware-accelerated Inline Data Services
  • Predictive analytics
  • Hybrid Cloud Support
  • Host-to-Storage Connectivity
  • Scalability
  • Licensing simplicity


DCIG’s 4-page Pocket Analyst Report on the Six Key Differentiators between HPE 3PAR StoreServ and NetApp AFF A-Series All-flash Arrays analyzes and compares the flagship all-flash arrays from HPE and NetApp. To see which product has the edge in each of the above categories and why, you can purchase the report on DCIG’s partner site: TechTrove. You may also register on the TechTrove website to be notified should this report become available for no charge at some future time.




Data Center Challenges and Technology Advances Revealed at Flash Memory Summit 2018

If you want to get waist-deep in the technologies that will impact the data centers of tomorrow, the Flash Memory Summit 2018 (FMS) held this week in Santa Clara is the place to do it. This is where the flash industry gets its geek on and everyone on the exhibit floor speaks bits and bytes. Indeed, there is no better place to learn about advances in flash memory that are sure to show up in products in the very near future and drive further advances in data center infrastructure.

Key themes at the conference include:

  • Processing and storing ever growing amounts of data is becoming more and more challenging. Faster connections and higher capacity drives are coming but are not the whole answer. We need to completely rethink data center architecture to meet these challenges.
  • Artificial intelligence and machine learning are expanding beyond their traditional high-performance computing research environments and into the enterprise.
  • Processing must be moved closer to—and perhaps even into—storage.

Multiple approaches to addressing these challenges were championed at the conference that range from composable infrastructure to computational storage. Some of these solutions will complement one another. Others will compete with one another for mindshare.

NVMe and NVMe-oF Get Real

For the near term, NVMe and NVMe over Fabrics (NVMe-oF) are clear mindshare winners. NVMe is rapidly becoming the primary protocol for connecting controllers to storage devices. A clear majority of product announcements involved NVMe.

WD Brings HDDs into the NVMe World

Western Digital announced the OpenFlex™ architecture and storage products. OpenFlex speaks NVMe-oF across an Ethernet network. In concert with OpenFlex, WD announced Kingfish™, an open API for orchestrating and managing OpenFlex™ infrastructures.

Western Digital is the “anchor tenant” for OpenFlex with a total of seventeen (17) data center hardware and software vendors listed as launch partners in the press release. Notably, WD’s NVMe-oF attached 1U storage device provides up to 168TB of HDD capacity. That’s right – the OpenFlex D3000 is filled with hard disk drives.

While NVMe’s primary use case is to connect to flash memory and emerging ultra-fast memory, companies still want their HDDs. Using Western Digital, organizations can have their NVMe and still get the lower-cost HDDs they want.

Gen-Z Composable Infrastructure Standard Gains Momentum

Gen-Z is an emerging memory-centric architecture designed for nanosecond latencies. Since last year’s FMS, Gen-Z has made significant progress toward this objective. Consider:

  • The consortium publicly released the Gen-Z Core Specification 1.0 on February 13, 2018. Agreement upon a set of 1.0 standards is a critical milestone in the adoption of any new standard. The fact that the consortium’s 54 members agreed to it suggests broad industry adoption.
  • Intel’s adoption of the SFF-TA-1002 “Gen-Z” universal connector for its “Ruler” SSDs reflects increased adoption of the Gen-Z standards. Making this announcement notable is that Intel is NOT currently a member of the Gen-Z consortium, which indicates that Gen-Z standards are gaining momentum even outside the consortium.
  • The Gen-Z booth included a working Gen-Z connection between a server and a removable DRAM module in another enclosure. This is the first example of a processor being able to use DRAM that is not local to the processor but is instead coming out of a composable pool. This is a concept similar to how companies access shared storage in today’s NAS and SAN environments.

Other Notable FMS Announcements

Many other innovative announcements addressing data center challenges were made at FMS 2018, including:

  • Solarflare NVMe over TCP enables rapid low-latency data movement over standard Ethernet networks.
  • ScaleFlux computational storage avoids the need to move the data by integrating FPGA-based computation into storage devices.
  • Intel announced its 660P Series of SSDs, which employ quad-level cell (QLC) technology. QLC stores more data in less space and at a lower cost.

Recommendations

Based on the impressive progress we observed at Flash Memory Summit 2018, we can reaffirm the recommendations we made coming out of last year’s summit…

  • Enterprise technologists should plan technology refreshes through 2020 around NVMe and NVMe-oF. Data center architects and application owners should seek 10:1 improvements in performance, and a similar jump in data center efficiency.
  • Beyond 2020, enterprise technologists should plan their technology refreshes around a composable data centric architecture. Data center architects should track the development of the Gen-Z ecosystem as a possible foundation for their next-generation data centers.



Differentiating between High-end HCI and Standard HCI Architectures

Hyper-converged infrastructure (HCI) architectures have staked an early claim as the data center architecture of the future. The ability of HCI solutions to easily scale capacity and performance, simplify ongoing management, and deliver cloud-like features has led many enterprises to view HCI as a logical path forward for building on-premises clouds.

But when organizations deploy HCI at scale (more than eight nodes in a single logical configuration), cracks in its architecture can begin to appear unless organizations carefully examine how well it scales. On the surface, high-end and standard hyper-converged architectures deliver the same benefits. They each virtualize compute, memory, storage networking, and data storage in a simple to deploy and manage scale-out architecture. They support standard hypervisor platforms. They provide their own data protection solutions in the form of snapshots and replication. In short, these two architectures mirror each other in many ways.

However, high-end and standard HCI solutions differ in functionality in ways that primarily surface in large virtualized deployments. In these environments, enterprises often need to:

  • Host thousands of VMs in a single, logical environment.
  • Provide guaranteed, sustainable performance for each VM by insulating them from one another.
  • Offer connectivity to multiple public cloud providers for archive and/or backup.
  • Seamlessly move VMs to and from the cloud.

It is in these circumstances that differences between high-end and standard architectures emerge, with the high-end HCI architecture possessing four key advantages over standard HCI architectures. Consider:

  1. Data availability. Both high-end and standard HCI solutions distribute data across multiple nodes to ensure high levels of data availability. However, only high-end HCI architectures use separate, distinct compute and data nodes with the data nodes dedicated to guaranteeing high levels of data availability for VMs on all compute nodes. This ensures that should any compute node become unavailable, the data associated with any VM on it remains available.
  2. Flash/performance optimization. Both high-end and standard HCI architectures take steps to keep data local to the VM by storing the data of each VM on the same node on which the VM runs. However, solutions based on high-end HCI architectures store a copy of each VM’s data on the VM’s compute node as well as on the high-end HCI architecture’s underlying data nodes to improve and optimize flash performance. High-end HCI solutions such as the Datrium DVX also deduplicate and compress all data while limiting the amount of inter-nodal communication to free resources for data processing.
  3. Platform manageability/scalability. As a whole, HCI architectures are intuitive to size, manage, scale, and understand. In short, if more performance and/or capacity is needed, one only needs to add another node. However, enterprises often must support hundreds or even thousands of VMs that each must perform well and be protected. Trying to deliver on those requirements at scale using a standard HCI solution where inter-nodal communication is a prerequisite becomes almost impossible. High-end HCI solutions facilitate granular deployments of compute and data nodes while limiting inter-nodal communication.
  4. VM insulation. Noisy neighbors—VMs that negatively impact the performance of adjacent VMs—remain a real problem in virtual environments. While both high-end and standard HCI architectures support storing data close to VMs, high-end HCI architectures do not rely on the availability of other compute nodes. This reduces inter-nodal communication and the probability that the noisy neighbor problem will surface.

HCI architectures have resonated with organizations in large part because of their ability to deliver cloud-like features and functionality while maintaining simplicity of deployment and ongoing maintenance. However, next-generation high-end HCI architectures, with solutions available from providers like Datrium, give organizations greater flexibility to deliver cloud-like functionality at scale, including better management and optimization of flash performance, improved data availability, and increased insulation between VM instances. To learn more about how standard and high-end HCI architectures compare, check out this recent DCIG pocket analyst report that is available on the TechTrove website.




Seven Key Differentiators between Dell EMC VMAX and HPE 3PAR StoreServ Systems

Dell EMC and Hewlett Packard Enterprise are enterprise technology stalwarts. Thousands of enterprises rely on VMAX or 3PAR StoreServ arrays to support their most critical applications and to store their most valuable data. Although VMAX and 3PAR predated the rise of the all-flash array (AFA), both vendors have adapted and optimized these products for flash memory. They deliver all-flash array performance without sacrificing the data services enterprises rely on to integrate with a wide variety of enterprise applications and operating environments.

While VMAX and 3PAR StoreServ can both support hybrid flash plus hard disk configurations, the focus is now on all-flash. Nevertheless, the ability of these products to support multiple tiers of storage will be advantageous in a future that includes multiple classes of flash memory along with other non-volatile storage class memory technologies.

Both the VMAX and 3PAR StoreServ can meet the storage requirements of most enterprises, yet differences remain. DCIG compares the current AFA configurations from Dell EMC and HPE in its latest DCIG Pocket Analyst Report. This report will help enterprises determine which product best fits their business requirements.

DCIG updated its research on all-flash arrays during the first half of 2018. In so doing, DCIG identified many similarities between the VMAX and 3PAR StoreServ products including:

  • Extensive support for VMware APIs including VMware Virtual Volumes (VVols)
  • Integration with multiple virtualization management consoles
  • Rich data replication and data protection offerings
  • Support for a wide variety of client operating systems
  • Unified SAN and NAS

DCIG also identified significant differences between the VMAX and 3PAR StoreServ products including:

  • Data center footprint
  • High-end history
  • Licensing simplicity
  • Mainframe connectivity
  • Performance resources
  • Predictive analytics
  • Raw and effective storage density


To see which vendor has the edge in each of these categories and why, you can access the latest 4-page Pocket Analyst Report from DCIG that analyzes and compares all-flash arrays from Dell EMC and HPE. This report is currently available for purchase on DCIG’s partner site: TechTrove. You may also register on the TechTrove website to be notified should this report become available at no charge at some future time.




Five Key Differentiators between the Latest NetApp AFF A-Series and Hitachi Vantara VSP F-Series Platforms

Every enterprise-class all-flash array (AFA) delivers sub-1-millisecond response times on standard 4K and 8K performance benchmarks, high levels of availability, and a relatively full set of core data management features. As such, enterprises must examine AFA products to determine the differentiators between them. When DCIG compared the newest AFAs from leading providers such as Hitachi Vantara and NetApp in its latest DCIG Pocket Analyst Report, differences between them quickly emerged.

Both Hitachi Vantara and NetApp refreshed their respective F-Series and A-Series lines of all-flash arrays (AFAs) in the first half of 2018. Many of the similarities between the products from these providers persist, in that both continue to natively offer:

  • Unified SAN and NAS interfaces
  • Extensive support for VMware APIs including VMware Virtual Volumes (VVols)
  • Integration with popular virtualization management consoles
  • Rich data replication and data protection offerings

However, the latest AFA product refreshes from these two vendors also introduced some key areas where they diverge. While some of these changes reinforce the strengths of their respective product lines, other changes provide insight into how these two vendors see the AFA market shaping up, and they result in key differences in product functionality that will matter in the years to come.


To help enterprises select the solution that best fits their needs, there are five key ways that the latest AFA products from Hitachi Vantara and NetApp differentiate themselves from one another. These five key differentiators include:

  1. Data protection and data reduction
  2. Flash performance optimization
  3. Predictive analytics
  4. Public cloud support
  5. Storage networking protocols

To see which vendor has the edge in each of these categories and why, you can access the latest 4-page Pocket Analyst Report from DCIG that analyzes and compares these newest all-flash arrays from Hitachi Vantara and NetApp. This report is currently available for sale on DCIG’s partner site: TechTrove. You may also register on the TechTrove website to be notified should this report become available at no charge at some future time.




NVMe Unleashing Performance and Storage System Innovation

Mainstream enterprise storage vendors are embracing NVMe. HPE, NetApp, Pure Storage, Dell EMC, Kaminario and Tegile all offer all-NVMe arrays. According to these vendors, the products will soon support storage class memory as well. NVMe protocol access to flash memory SSDs is a big deal. Support for storage class memory may become an even bigger deal.

NVMe Flash Delivers More Performance Than SAS


Using the NVMe protocol to talk to SSDs in a storage system increases the efficiency and effective performance capacity of each processor and of the overall storage system. The slimmed down NVMe protocol stack reduces processing overhead compared to legacy SCSI-based protocols. This yields lower storage latency and more IOPS per processor. This is a good thing.

NVMe also delivers more bandwidth per SSD. Most NVMe SSDs connect via four PCIe lanes. This yields up to 4 GB/s of bandwidth, an increase of more than 50% compared to the 2.4 GB/s maximum of a dual-ported SAS SSD. Since many all-flash arrays can saturate the path to the SSDs, this NVMe advantage translates directly into an increase in overall performance.
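Those figures follow directly from the link arithmetic: PCIe Gen3 delivers roughly 985 MB/s of usable bandwidth per lane after 128b/130b encoding, while a 12 Gb/s SAS-3 port delivers about 1.2 GB/s after 8b/10b encoding. A quick sketch of that math:

```python
# Usable bandwidth after standard encoding overheads.
PCIE_GEN3_MBPS_PER_LANE = 8_000 * (128 / 130) / 8   # 8 GT/s with 128b/130b encoding -> ~985 MB/s
SAS3_MBPS_PER_PORT = 12_000 * (8 / 10) / 8          # 12 Gb/s with 8b/10b encoding  -> 1,200 MB/s

nvme_x4_mbps = 4 * PCIE_GEN3_MBPS_PER_LANE          # typical NVMe SSD: four PCIe Gen3 lanes
sas_dual_mbps = 2 * SAS3_MBPS_PER_PORT              # dual-ported SAS-3 SSD

print(f"NVMe x4 (PCIe Gen3): ~{nvme_x4_mbps / 1000:.1f} GB/s")
print(f"Dual-ported SAS-3:   ~{sas_dual_mbps / 1000:.1f} GB/s")
print(f"NVMe advantage:      ~{nvme_x4_mbps / sas_dual_mbps - 1:.0%}")
```

Running the numbers gives roughly 3.9 GB/s for the x4 NVMe device versus 2.4 GB/s for the dual-ported SAS SSD, about a 60 percent advantage, consistent with the figures quoted above.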

The newest generation of all-flash arrays combines these NVMe benefits with a new generation of Intel processors to deliver more performance in less space. It is this combination that, for example, enables HPE to claim that its new Nimble Storage arrays offer twice the scalability of the prior generation. This is a very good thing.

The early entrants into the NVMe array marketplace charged a substantial premium for NVMe performance. As NVMe goes mainstream, the price gap between NVMe SSDs and SAS SSDs is rapidly narrowing. With many vendors now offering NVMe arrays, competition should soon eliminate the price premium. Indeed, Pure Storage claims to have done so already.

Storage Class Memory is Non-Volatile Memory

Non-volatile memory (NVM) refers to memory that retains data even when power is removed. The term applies to many technologies that have been widely used for decades, including EPROM, ROM, and NAND flash (the type of NVM commonly used in SSDs and memory cards). NVM also refers to newer or less widely used technologies including 3D XPoint, ReRAM, MRAM and STT-RAM.

Because NVM properly refers to such a wide range of technologies, many people use the term Storage Class Memory (SCM) to refer to the emerging byte-addressable non-volatile memory technologies that may soon be used in enterprise storage systems. These SCM technologies include 3D XPoint, ReRAM, MRAM and STT-RAM. SCM offers several advantages compared to NAND flash:

  • Much lower latency
  • Much higher write endurance
  • Byte-addressable (like DRAM memory)

Storage Class Memory Enables Storage System Innovation

Byte-addressable non-volatile memory on NVMe/PCIe opens up a wonderful set of opportunities to system architects. Initially, storage class memory will generally be used as an expanded cache or as the highest performing tier of persistent storage. Thus it will complement rather than replace NAND flash memory in most storage systems. For example, HPE has announced it will use Intel Optane (3D XPoint) as an extension of DRAM cache. Their tests of HPE 3PAR 3D Cache produced a 50% reduction in latency and an 80% increase in IOPS.
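To illustrate what using SCM as an extension of DRAM cache means conceptually, the sketch below models a read path that checks a small DRAM cache first, then a larger SCM cache, and only then reads from flash. This is a simplified conceptual illustration under our own assumptions, not HPE’s 3PAR 3D Cache implementation.

```python
# Conceptual read path with a DRAM cache extended by an SCM cache tier.
# Not any vendor's actual implementation; the backing "flash" is simulated with a dict.

dram_cache: dict[int, bytes] = {}            # small, fastest tier
scm_cache: dict[int, bytes] = {}             # larger, still very low latency
flash_store = {blk: f"data-{blk}".encode() for blk in range(1_000)}  # simulated flash media

def read_block(block_id: int) -> bytes:
    if block_id in dram_cache:               # hit in DRAM: lowest latency
        return dram_cache[block_id]
    if block_id in scm_cache:                # hit in SCM: still avoids flash latency
        data = scm_cache[block_id]
        dram_cache[block_id] = data          # promote a hot block into DRAM
        return data
    data = flash_store[block_id]             # miss: read from flash media
    scm_cache[block_id] = data               # populate the SCM tier on the way back
    return data

print(read_block(42))   # first read comes from flash; subsequent reads hit the caches
```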

Some of the innovative uses of SCM will probably never be mainstream, but will make sense for a specific set of use cases where microseconds can mean millions of dollars. For example, E8 Storage uses 100% Intel Optane SCM in its E8-X24 centralized NVMe appliance to deliver extreme performance.

Remain Calm, Look for Short Term Wins, Anticipate Major Changes

We humans have a tendency to overestimate short-term impacts and underestimate long-term ones. In a recent blog article we asserted that NVMe is an exciting and needed breakthrough, but that differences persist between what NVMe promises for all-flash array and hyperconverged solutions and what they can deliver in 2018. Nevertheless, IT professionals should look for real application- and requirements-based opportunities for NVMe, even in the short term.

Longer term, the emergence of NVMe and storage class memory are steps on the path to a new data centric architecture. As we have previously suggested, enterprise technologists should plan technology refreshes through 2020 around NVMe and NVMe-oF. Beyond 2020, enterprise technologists should plan their technology refreshes around a composable data centric architecture.




Four Implications of Public Cloud Adoption and Three Risks to Address

Businesses are finally adopting the public cloud because a large and rapidly growing catalog of services is now available from multiple cloud providers. The breadth of these catalogs and the number of providers offering them have many implications for businesses. This article addresses four of those implications plus three cloud-specific risks.

Implication #1: No enterprise IT department will be able to keep pace with the level of services innovation available from cloud providers

The battle is over. Cloud wins. Deal with it.

Dealing with it does not necessarily mean that every business will move every workload to the cloud. It does mean that it is time for business IT departments to build awareness of the services available from public cloud providers. One way to do this is to tap into the flow of service updates from one or more of the major cloud providers.

For Amazon Web Services, I like What’s New with AWS. Easy filtering by service category is combined with sections for featured announcements, featured video announcements, and one-line listings of the most recent announcements from AWS. The one-line listings include links to service descriptions and to longer-form articles on the AWS blog.

For Microsoft Azure, I like Azure Updates. As its subtitle says, “One place. All updates.” The Azure Updates site provides easy filtering by product, update type and platform. I especially like the ability to filter by update type for General Availability and for Preview. The site also includes links to the Azure roadmap, blog and other resources. This site is comprehensive without being overwhelming.

For Google Cloud Platform, its blog may be the best place to start. The view can be filtered by label, including by announcements. This site is less functional than the AWS and Microsoft Azure resources cited above.

For IBM Cloud, the primary announcements resource is What’s new with IBM Cloud. Announcements are presented as one-line listings with links to full articles.

Visit these sites, subscribe to their RSS feeds, or follow them via social media. Alternatively, subscribe to their weekly or monthly email newsletters. Once a business has workloads running in one of the public clouds, at a minimum one IT staff member should follow that provider’s updates site; a simple way to automate this is sketched below.
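For teams that prefer automation over bookmarks, the minimal sketch below polls a provider’s announcements feed and prints the most recent entries. It assumes the third-party feedparser package is installed, and the feed URL shown is only an illustrative example; substitute whichever provider feed your team actually follows.

```python
# Minimal sketch: poll a cloud provider's announcements RSS feed and print recent entries.
# Assumes the third-party 'feedparser' package is installed (pip install feedparser).
# The feed URL below is illustrative; substitute the feed your team actually follows.
import feedparser

FEED_URL = "https://aws.amazon.com/about-aws/whats-new/recent/feed/"

def latest_announcements(url: str, limit: int = 10):
    feed = feedparser.parse(url)
    # Each entry carries at least a title and a link back to the full announcement.
    return [(entry.title, entry.link) for entry in feed.entries[:limit]]

if __name__ == "__main__":
    for title, link in latest_announcements(FEED_URL):
        print(f"- {title}\n  {link}")
```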

Implication #2: Pressure will mount on Enterprise IT to connect business data to public cloud services

The benefits of bringing public cloud services to bear on the organization’s data will create pressure on enterprise IT departments to connect business data to those services. There are many options for accomplishing this objective, including:

  1. All-in with one public cloud
  2. Hybrid: on-prem plus one public
  3. Hybrid: on-prem plus multiple public
  4. Multi-cloud (e.g. AWS + Azure)

The design of the organization and the priorities of the business should drive the approach taken to connect business data with cloud services.

Implication #3: Standard data protection requirements now extend to data and workloads in the public cloud

No matter what approach is taken when embracing the public cloud, standard data protection requirements extend to data and workloads in the cloud. Address these requirements up front. Explore alternative solutions and select one that meets the organization’s data protection requirements.

Implication #4: Cloud Data Protection and DRaaS are on-ramps to public cloud adoption

For most organizations the transition to the cloud will be a multi-phased process. Data protection solutions that can send backup data to the cloud are a logical early phase. Disaster recovery as a service (DRaaS) offerings represent another relatively low-risk path to the cloud that may be more robust and/or lower cost than existing disaster recovery setups. These solutions move business data into public cloud repositories. As such, cloud data protection and DRaaS may be considered on-ramps to public cloud adoption.

Once corporate data has been backed up or replicated to the cloud, tools are available to extract and transform the data into formats that make it available for use/analysis by that cloud provider’s services. With proper attention, this can all be accomplished in ways that comply with security and data governance requirements. Nevertheless, there are risks to be addressed.

Risk to Address #1: Loss of change control

The benefit of rapid innovation has a downside. Any specific service may be upgraded or discarded by the provider without much notice. Features used by a business may be enhanced or deprecated. This can force changes in other software that integrates with the service, or in the procedures used by staff and the associated documentation.

For example, Office 365 and Google G Suite features can change without much notice. This creates a “Where did that menu option go?” experience for end users. Some providers reduce this pain by providing a quick tutorial for new features within the application itself. Others provide online learning centers that make new feature tutorials easy to discover.

Accept this risk as an unavoidable downside to rapid innovation. Where possible, manage the timing of these releases to an organization’s users, giving them advance notice of the changes along with access to tutorials.

Risk to Address #2: Dropped by provider

A risk that may not be obvious to many business leaders is that of being dropped by a cloud service provider. A business with unpopular opinions might have services revoked, sometimes with little notice. Consider how quickly the movement to boycott the NRA resulted in severed business-to-business relationships. Even an organization as large as the US Military faces this risk. As was highlighted in recent news, Google will not renew its military AI project due in large part to pressure from Google employees.

Mitigate this risk through contracts and architecture. This is perhaps one argument in favor of a hybrid on-prem plus cloud approach to the public cloud versus an all-in approach.

Risk to Address #3: Unpredictable costs

It can be difficult to predict the costs of running workloads in the public cloud, and these costs can change rapidly. Address this risk by setting cost thresholds that trigger an alert, as sketched below. Also consider subscribing to a service such as Nutanix Beam to gain granular visibility into and optimization of public cloud costs.
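As a minimal illustration of threshold-based cost alerting, the sketch below compares month-to-date spend against a budget threshold and raises an alert when the threshold is crossed. The get_month_to_date_spend function is a hypothetical placeholder, not a real billing API call; in practice it would be wired to the cloud provider’s billing data or cost export.

```python
# Minimal sketch of threshold-based cloud cost alerting.
# get_month_to_date_spend() is a hypothetical placeholder for a provider's billing
# API or cost export; wire it to the real data source in practice.

ALERT_THRESHOLD_USD = 10_000.00   # example monthly budget threshold

def get_month_to_date_spend() -> float:
    """Hypothetical placeholder: in practice, query the provider's billing data."""
    return 10_500.00   # hard-coded example value for illustration only

def check_spend_and_alert(threshold: float = ALERT_THRESHOLD_USD) -> None:
    spend = get_month_to_date_spend()
    if spend >= threshold:
        # In practice, send email, page on-call staff, or open a ticket here.
        print(f"ALERT: month-to-date spend ${spend:,.2f} exceeds budget ${threshold:,.2f}")
    else:
        print(f"OK: month-to-date spend ${spend:,.2f} is within budget ${threshold:,.2f}")

check_spend_and_alert()
```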

It’s time to get real about the public cloud

Many businesses are ready to embrace the public cloud. IT departments should make themselves aware of services that may create value for their business. They should also work through the implications of moving corporate data and workloads to the cloud and make plans for managing the attendant risks.




DCIG 2018-19 All-flash Array Buyer’s Guide Now Available

DCIG is pleased to announce the availability of the DCIG 2018-19 All-flash Array Buyer’s Guide, developed from its enterprise storage array body of research. This 64-page report presents a fresh snapshot of the dynamic all-flash array (AFA) marketplace. It evaluates and ranks thirty-two (32) enterprise-class all-flash arrays that achieved rankings of Recommended or Excellent based on a comprehensive scoring of product features. These products come from seven (7) vendors: Dell EMC, Hitachi Vantara, HPE, Huawei, NetApp, Pure Storage and Tegile.


DCIG’s succinct analysis provides insight into the state of the all-flash array marketplace, the benefits organizations can expect to achieve, and key features organizations should be aware of as they evaluate products. It also provides brief observations about the distinctive features of each product.

The DCIG 2018-19 All-flash Array Buyer’s Guide helps businesses drive time and cost out of the all-flash array selection process by:

  • Describing key product considerations and important changes in the marketplace
  • Gathering normalized data about the features each product supports
  • Providing an objective, third-party evaluation of those features from an end-user perspective
  • Presenting product feature data in standardized one-page data sheets that facilitate rapid feature-based comparisons

It is in this context that DCIG presents the DCIG 2018-19 All-Flash Array Buyer’s Guide. As prior DCIG Buyer’s Guides have done, it puts at the fingertips of enterprises a resource that can assist them in this important buying decision.

Access to this Buyer’s Guide edition is available through DCIG’s partner site, TechTrove.




Seven Significant Trends in the All-Flash Array Marketplace

Much has changed since DCIG published the DCIG 2017-18 All-Flash Array Buyer’s Guide just one year ago. The DCIG analyst team is in the final stages of preparing a fresh snapshot of the all-flash array (AFA) marketplace. As we reflected on the fresh all-flash array data and compared it to the data we collected just a year ago, we observed seven significant trends in the all-flash array marketplace that will influence buying decisions through 2019.

Trend #1: New Entrants, but Marketplace Consolidation Continues

Although new storage providers continue to enter the all-flash array marketplace, primarily focused on NVMe over Fabrics, the larger trend is continued consolidation. HPE acquired Nimble Storage. Western Digital acquired Tegile.

Every well-known provider has made at least one all-flash acquisition. Consequently, some providers are in the process of “rationalizing” their all-flash portfolios. For example, HPE has decided to position Nimble Storage AFAs as “secondary flash”. HPE also announced it will implement Nimble’s InfoSight predictive analytics platform across HPE’s entire portfolio of data center products, beginning with 3PAR StoreServ storage. Dell EMC seems to be positioning VMAX as its lead product for mission critical workloads, Unity for organizations that value simplified operations, XtremIO for VDI/test/dev, and SC for low cost capacity.

Nearly all the AFA providers also offer at least one hyperconverged infrastructure product. These hyperconverged products compete with AFAs for marketing and data center infrastructure budgets. This will create additional pressure on AFA providers and may drive further consolidation in the marketplace.

Trend #2: Flash Capacity is Increasing Dramatically

The raw capacity of the more than 100 all-flash arrays DCIG researched averaged 4.4 petabytes. This is a 5-fold increase compared to the products in the 2017-18 edition. The highest capacity product can provide 70 petabytes (PB) of all-flash capacity. This is a 7-fold increase. Thus, AFAs now offer the capacity required to be the storage resource for all active workloads in any organization.

[Chart: all-flash array capacity. Source: DCIG, n=102]

Trend #3: Storage Density is Increasing Dramatically

The average flash density of the AFAs DCIG researched continues to climb. Fully half of them achieve greater than 50 TB per rack unit (TB/RU), and some provide over 200 TB/RU. The combination of all-flash performance and high storage density means an AFA may be able to meet an organization’s performance and capacity requirements in one-tenth the space of legacy HDD storage systems and the first generation of all-flash arrays; a rough space calculation is sketched after the chart below. This creates an opportunity for many organizations to realize significant data center cost reductions. Some have eliminated data centers. Others have been able to delay building new ones.

[Chart: all-flash array storage density. Source: DCIG, n=102]
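As a rough illustration of the space savings, the sketch below estimates how many rack units are needed to house 1 PB of capacity at different storage densities. The 1 PB requirement and the 20 TB/RU legacy figure are illustrative assumptions; only the 50 TB/RU and 200 TB/RU figures come from the research above.

```python
# Rough rack-space comparison for a hypothetical 1 PB capacity requirement.
# The legacy density figure (20 TB/RU) and the 1 PB requirement are illustrative
# assumptions; only the 50 and 200 TB/RU figures come from the trend above.
import math

REQUIRED_TB = 1000  # 1 PB expressed in TB (illustrative requirement)

densities_tb_per_ru = {
    "legacy HDD array (assumed)": 20,
    "median modern AFA (>50 TB/RU)": 50,
    "densest AFAs (>200 TB/RU)": 200,
}

for label, density in densities_tb_per_ru.items():
    rack_units = math.ceil(REQUIRED_TB / density)
    print(f"{label}: about {rack_units} RU for {REQUIRED_TB} TB")
# A 200 TB/RU array needs roughly 1/10th the rack space of a 20 TB/RU legacy system.
```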

Trend #4: Rapid Uptake in Components that Increase Performance

Increases in flash memory capacity and density are being matched with new components that increase array performance. These components include:

  • a new generation of multi-core CPUs from Intel
  • 32 Gb Fibre Channel and 25/40/100 Gb Ethernet
  • GPUs
  • ASICs to offload storage tasks
  • NVMe connectivity to SSDs

Each of these components can unlock more of the performance available from flash memory. Organizations should assess how well these components are integrated to systemically unlock the performance of flash memory and of their own applications.

[Chart: front-end connectivity percentages. Source: DCIG, n=102]

Trend #5: Unified Storage is the New Normal

The first generations of all-flash arrays were nearly all block-only SAN arrays; Tegile was perhaps the only truly unified AFA provider. Today, more than half of the all-flash arrays DCIG researched support unified storage. This support for multiple concurrent protocols creates an opportunity to consolidate and accelerate more types of workloads.

Trend #6: Most AFAs can use Public Cloud Storage as a Target

Most AFAs can now use public cloud storage as a target for cold data or for snapshots as part of a data protection mechanism. In many cases this target is actually one of the provider’s own arrays running in a cloud data center or a software-defined storage instance of its storage system running in one of the true public clouds.

Trend #7: Predictive Analytics Get Real

Some storage providers can document how predictive storage analytics is enabling increased availability, reliability, and application performance. The promise is huge. Progress varies. Every prospective all-flash array purchaser should incorporate predictive analytics capabilities into their evaluation of these products, particularly if the organization intends to consolidate multiple workloads onto a single all-flash array.

Conclusion: All Active Workloads Belong on All-Flash Storage

Any organization that has yet to adopt an all-flash storage infrastructure for all active workloads is operating at a competitive disadvantage. The current generation of all-flash arrays creates business value by:

  • making existing applications run faster even as data sets grow
  • accelerating application development
  • enabling IT departments to say, “Yes” to new workloads and then get those new workloads producing results in record time
  • driving down data center capital and operating costs

DCIG expects to finalize its analysis of all-flash arrays and present the resulting snapshot of this dynamic marketplace in a series of buyer’s guides during the second quarter of 2018.




Predictive Analytics in Enterprise Storage: More Than Just Highfalutin Mumbo Jumbo

Enterprise storage startups are pushing the storage industry forward faster and in directions it may never have gone without them. It is because of these startups that flash memory is now the preferred place to store critical enterprise data. Startups also advanced the customer-friendly all-inclusive approach to software licensing, evergreen hardware refreshes, and pay-as-you-grow utility pricing. These startup-inspired changes delight customers, who are rewarding the startups with large follow-on purchases and Net Promoter Scores (NPS) previously unseen in this industry. Yet the greatest contribution startups may make to the enterprise storage industry is applying predictive analytics to storage.

The Benefits of Predictive Analytics for Enterprise Storage

[Image: Gilbert advises Anne to stop using “highfalutin mumbo jumbo” in her writing. (Note 1)]

The end goal of predictive analytics for the more visionary startups goes beyond eliminating downtime. Their goal is to enable data center infrastructures to autonomously optimize themselves for application availability, performance and total cost of ownership based on the customer’s priorities.

The vendors that commit to this path and execute better than their competitors are creating value for their customers. They are also enabling their own organizations to scale up revenues without scaling out staff. Vendors that succeed in applying predictive analytics to storage today also position themselves to win tomorrow in the era of software-defined data centers (SDDC) built on top of composable infrastructures.

To some people this may sound like a bunch of “highfalutin mumbo jumbo”, but vendors are making real progress in applying predictive analytics to enterprise storage and other elements of the technical infrastructure. These vendors and their customers are achieving meaningful benefits including:

  • Measurably reducing downtime
  • Avoiding preventable downtime
  • Optimizing application performance
  • Significantly reducing operational expenses
  • Improving NPS

HPE Quantifies the Benefits of InfoSight Predictive Analytics

Incumbent technology vendors are responding to this pressure from startups in a variety of ways. HPE purchased Nimble Storage, the prime mover in this space, and plans to extend the benefits of Nimble’s InfoSight predictive analytics to its other enterprise infrastructure products. HPE claims its Nimble Storage array customers are seeing the following benefits from InfoSight:

  • 99.9999% of measured availability across its installed base
  • 86% of problems are predicted and automatically resolved before customers even realize there is an issue
  • 85% less time spent managing and resolving storage-related problems
  • 79% savings in operational expense (OpEx)
  • 54% of pinpointed issues are not storage-related and are identified through InfoSight cross-stack analytics
  • 42 minutes: the average level three engineer time required to resolve an issue
  • 100% of issues go directly to level three support engineers, no time wasted working through level one and level two engineers

The Current State of Affairs in Predictive Analytics

HPE is certainly not alone on this journey. In fact, vendors are claiming some use of predictive analytics for more than half of the all-flash arrays DCIG researched.

[Chart: predictive analytics adoption among all-flash arrays. Source: DCIG, n=103]

Telemetry Data is the Foundation for Predictive Analytics

Storage array vendors use telemetry data collected from the installed product base in a variety of ways. Most vendors evaluate fault data and advise customers how to resolve problems, or they remotely log in and resolve problems for their customers.

Many all-flash arrays transmit not just fault data, but extensive additional telemetry data about workloads back to the vendors. This data includes the IOPS, bandwidth, and latency associated with workloads, front-end ports, storage pools and more; a simplified telemetry record is sketched below. Some vendors apply predictive analytics and machine learning algorithms to data collected across the entire installed base to identify potential problems and optimization opportunities for each array in the installed base.
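To make the shape of this telemetry concrete, here is a minimal sketch of what a single telemetry sample might contain. The field names are illustrative assumptions, not any vendor’s actual schema.

```python
# Minimal sketch of a per-object telemetry sample an array might send home.
# Field names are illustrative; real vendor schemas differ.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class TelemetrySample:
    timestamp: datetime
    array_id: str
    object_type: str      # e.g. "workload", "front_end_port", "storage_pool"
    object_id: str
    read_iops: float
    write_iops: float
    read_mbps: float      # read bandwidth
    write_mbps: float     # write bandwidth
    avg_latency_ms: float
    capacity_used_tb: float

sample = TelemetrySample(
    timestamp=datetime.utcnow(),
    array_id="array-0042",
    object_type="storage_pool",
    object_id="pool-1",
    read_iops=84_000, write_iops=21_000,
    read_mbps=1_200.0, write_mbps=310.0,
    avg_latency_ms=0.4,
    capacity_used_tb=412.7,
)
print(sample)
```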

Predictive Analytics Features that Matter

Proactive interventions identify something that is going to create a problem and then notify clients about the issue. Interventions may consist of providing guidance on how to avoid the problem or implementing the solution for the client. A wide range of interventions is possible, from identifying the date when an array will reach full capacity (a simple forecasting approach is sketched below) to flagging a network configuration that could create a loop condition.
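As one concrete example of a proactive intervention, the sketch below fits a straight line to historical used-capacity samples and extrapolates the date on which the array would reach its total capacity. Real implementations use far richer models; this is only a minimal illustration, and the sample data is made up.

```python
# Minimal capacity-full forecast: fit a line to daily used-capacity samples and
# extrapolate the day the array reaches its total capacity. Sample data is made up.
from datetime import date, timedelta

TOTAL_CAPACITY_TB = 500.0

# (days_since_start, used_tb) samples; in practice these come from telemetry.
samples = [(0, 300.0), (30, 318.0), (60, 334.0), (90, 352.0)]

def fit_line(points):
    """Ordinary least-squares fit returning (slope, intercept)."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

slope, intercept = fit_line(samples)                 # TB of growth per day
days_to_full = (TOTAL_CAPACITY_TB - intercept) / slope
start = date(2018, 1, 1)                             # arbitrary start date for the samples
print(f"Projected full on {start + timedelta(days=round(days_to_full))}")
```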

Recommended configuration changes enhance application performance at a site. The vendor compares the performance of the same application at similar sites, discovers optimal configurations, and recommends the corresponding changes at each site.

Tailored configuration changes prevent outages or application performance issues. When the vendor sees and fixes a problem caused by a misconfiguration, it deploys the fix to other sites that run the same applications, eliminating potential problems before they occur. The vendor goes beyond recommending changes by packaging them into an installation script the customer can run, or by implementing the changes on the customer’s behalf.

Tailored software upgrades eliminate outages caused by incompatibilities between a software update and specific data center environments. These vendors use analytics to identify similar sites and withhold the software update from those sites until the incompatibilities are resolved. Consequently, site administrators are only presented with software updates that are believed to be safe for their environment; a simplified version of this gating logic is sketched below.
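The sketch below shows, in highly simplified form, the gating idea described above: an update is offered to a site only if no known incompatibility matches anything in that site’s environment profile. The data structures and version strings are illustrative assumptions, not any vendor’s actual implementation.

```python
# Simplified sketch of update gating: only offer an update to sites whose
# environment profile has no known incompatibility with that update version.
# Data structures and version strings are illustrative, not any vendor's implementation.

# Known-bad combinations discovered across the installed base.
known_incompatibilities = {
    ("5.2.1", "fc-switch-firmware-8.1"),
    ("5.2.1", "hypervisor-6.0u2"),
}

sites = {
    "site-a": {"fc-switch-firmware-8.3", "hypervisor-6.5"},
    "site-b": {"fc-switch-firmware-8.1", "hypervisor-6.5"},
    "site-c": {"fc-switch-firmware-8.3", "hypervisor-6.0u2"},
}

def eligible_sites(update_version: str) -> list[str]:
    """Return sites with no known incompatibility for this update version."""
    return [
        name for name, profile in sites.items()
        if not any((update_version, item) in known_incompatibilities for item in profile)
    ]

print(eligible_sites("5.2.1"))   # only 'site-a' is offered the update
```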

Predictive Analytics is a Significant Yet Largely Untapped Opportunity

Vendors are already creating much value by applying predictive analytics to enterprise storage. Yet no vendor or product comes close to delivering all the value that is possible. A huge opportunity remains, especially considering the trends toward software-defined data centers and composable infrastructures. Reflecting for even a few minutes on the substantial benefits that predictive analytics is already delivering should prompt every prospective all-flash array purchaser to incorporate predictive analytics capabilities into their evaluation of these products and the vendors that provide them.

Note 1: Image source: https://jamesmacmillan.wordpress.com/2012/04/02/highfalutin-mumbo-jumbo/