TrueNAS Plugins Converge Services for Simple Hybrid Cloud Enablement

iXsystems is taking simplified service delivery to a new level by enabling a curated set of third-party services to run directly on its TrueNAS arrays. TrueNAS already provides multi-protocol unified storage, including file, block, and S3-compatible object storage. Now preconfigured plugins converge additional services onto TrueNAS for simple hybrid cloud enablement.

TrueNAS Technology Provides a Robust Foundation for Hybrid Cloud Functionality

iXsystems is known for enterprise-class storage software and rock-solid storage hardware. This foundation lets iXsystems customers run select third-party applications as plugins directly on the storage arrays—whether TrueNAS, FreeNAS Mini or FreeNAS Certified. Several of these plugins dramatically simplify the deployment of hybrid public and private clouds.

How it Works

iXsystems works with select technology partners to preconfigure their solutions to run on TrueNAS using FreeBSD jails, iocage plugins, and bhyve virtual machines. By collaborating with these technology partners, iXsystems enables rapid IT service delivery and drives down the total cost of technology infrastructure. The flexibility to extend TrueNAS functionality via these plugins transforms the appliances into complete solutions that streamline common workflows.

Benefits of Curated Third-party Service Plugins

There are many advantages to this pre-integrated plugin approach:

  • Plugins are preconfigured for optimal operation on TrueNAS
  • Services can be added any time through the web interface
  • Simply download the plugin, turn it on, and enter the associated login credentials
  • Plugins reduce network latency by moving processing onto the storage array
  • Third-party applications can be run in a virtual machine without purchasing separate server hardware

Hybrid Cloud Data Protection

The integrated Asigra Cloud Backup software protects cloud, physical, and virtual environments. It is an enterprise-class backup solution that uniquely helps prevent malware from compromising backups. Asigra embeds cybersecurity software in its Cloud Backup software. It goes the extra mile to protect backup repositories, ensuring businesses can recover from malware attacks in their production environments.

Asigra is also one of the only enterprise backup solutions that offers agentless backup support across all types of environments: cloud, physical, and virtual. This flexibility makes adopting and deploying Asigra Cloud Backup easy, with zero disruption to clients and servers. The integration of Asigra with TrueNAS was named Storage Magazine’s Backup Product of the Year for 2018.

Hybrid Cloud Media Management

TrueNAS arrays from iXsystems are heavily used in the media and entertainment industry, including several major film and television studios. iXsystems storage accelerates workflows with any-device file sharing, multi-tier caching technology, and the latest interconnect technologies on the market. iXsystems recently announced a partnership with Cantemo to integrate its iconik software.

iconik is a hybrid cloud-based video and content management hub. Its main purpose is managing processes including ingestion, annotation, cataloging, collaboration, storage, retrieval, and distribution of digital assets. The main strength of the product is its support for managing metadata and transcoding audio, video, and image files, but it can store essentially any file format. Users can choose to keep large original files on-premises yet still view and access the entire library in the cloud using proxy versions where required.

The Cantemo solutions are used to manage media across the entire asset lifecycle, from ingest to archive. iconik is used across a variety of industries including Fortune 500 IT companies, advertising agencies, broadcasters, houses of worship, and media production houses. Cantemo’s clients include BBC Worldwide, Nike, Madison Square Garden, The Daily Telegraph, The Guardian and many other leading media companies.

Enabling iconik on TrueNAS streamlines multimedia workflows and increases productivity for iXsystems customers who choose to enable the Cantemo service.

Cloud Sync

Both Asigra and Cantemo include hybrid cloud data management capabilities within their feature sets. iXsystems also supports file synchronization with many business-oriented and personal public cloud storage services. These enable staff to be productive anywhere—whether working with files locally or in the cloud.

Supported public cloud providers include Amazon Cloud Drive, Amazon S3, Backblaze B2, Box, Dropbox, Google Cloud Storage, Google Drive, Hubic, Mega, Microsoft Azure Blob Storage, Microsoft OneDrive, pCloud and Yandex. The Cloud Sync tool also supports file sync via SFTP and WebDAV.
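To make this concrete, the sketch below shows what a basic one-way push task looks like when written by hand against an S3-compatible target using Python and boto3. It is purely illustrative of what a sync task does (it is not how TrueNAS implements Cloud Sync), and the bucket name and dataset path are hypothetical.

```python
# Minimal one-way "cloud sync" sketch: push files under a local dataset to an
# S3 bucket, skipping objects that already exist with the same size.
# Illustrative only; not the TrueNAS Cloud Sync implementation.
import pathlib
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
BUCKET = "example-sync-bucket"                 # hypothetical bucket
LOCAL = pathlib.Path("/mnt/tank/projects")     # hypothetical dataset path

for path in LOCAL.rglob("*"):
    if not path.is_file():
        continue
    key = str(path.relative_to(LOCAL))
    try:
        head = s3.head_object(Bucket=BUCKET, Key=key)
        if head["ContentLength"] == path.stat().st_size:
            continue                           # same size, assume unchanged
    except ClientError:
        pass                                   # object not in the bucket yet
    s3.upload_file(str(path), BUCKET, key)     # upload new or changed file
    print("uploaded", key)
```

A real sync task also handles deletions, checksums, and per-provider credentials; the point here is simply that a sync job walks the local dataset and reconciles it against the cloud target.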

More Technology Partnerships Planned

According to iXsystems, it will extend TrueNAS pre-integration to more technology partners where such partnerships provide win-win benefits for all involved. This intelligent strategy allows iXsystems to focus on enhancing core TrueNAS storage services, and it enables TrueNAS customers to quickly and confidently implement best-of-breed applications directly on their TrueNAS arrays.

All TrueNAS Owners Benefit

TrueNAS plugins provide a simple and flexible way for all iXsystems customers to add sophisticated hybrid-cloud media management and data protection services to their IT environments. Existing TrueNAS customers can gain the benefits of this plugin capability by updating to the most recent version of the TrueNAS software.




Caching vs Tiering with Storage Class Memory and NVMe – A Tale of Two Systems

Dell EMC announced that it will soon add Optane-based storage to its PowerMAX arrays, and that PowerMAX will use Optane as a storage tier, not “just” cache. This statement implies using Optane as a storage tier is superior to using it as a cache. But is it?

PowerMAX will use Storage Class Memory as Tier in All-NVMe System

Some people criticized Dell EMC for taking an all-NVMe approach–and therefore eliminating hybrid (flash memory plus HDD) configurations. Yet the all-NVMe decision gave the engineers an opportunity to architect PowerMAX around the inherent parallelism of NVMe. Dell EMC’s design imperative for the PowerMAX is performance over efficiency. And it does perform:

  • 290 microsecond latency
  • 150 GB per second of throughput
  • 10 million IOPS

These results were achieved with standard flash memory NVMe SSDs. The numbers will get even better when Dell EMC adds Optane-based storage class memory (SCM) as a tier. Once SCM has been added to the array, Dell EMC’s fully automated storage tiering (FAST) technology will monitor array activity and automatically move the most active data to the SCM tier and less active data to the flash memory SSDs.

The intelligence of the tiering algorithms will be key to delivering great results in production environments. Indeed, Dell EMC states that, “Built-in machine learning is the only cost-effective way to leverage SCM”.
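As a rough illustration of what an automated tiering engine does, the Python sketch below counts accesses per extent over a monitoring interval, then promotes the hottest extents to a small SCM tier and demotes the rest back to flash. This is a minimal sketch of the general technique, not Dell EMC's FAST implementation (which also applies machine learning to predict future activity); the extent names and slot count are made up.

```python
# Toy tiering policy: after each monitoring interval, keep only the most
# frequently accessed extents on the (small) SCM tier.
from collections import Counter

SCM_SLOTS = 4   # extents the SCM tier can hold (illustrative)

def retier(access_counts: Counter, current_scm: set) -> set:
    """Return the new set of extent IDs that should live on the SCM tier."""
    hottest = {extent for extent, _ in access_counts.most_common(SCM_SLOTS)}
    promote = hottest - current_scm      # move up to SCM
    demote = current_scm - hottest       # move back down to flash
    print(f"promote to SCM: {sorted(promote)}, demote to flash: {sorted(demote)}")
    return hottest

# One interval's worth of simulated extent access counts:
counts = Counter({"e1": 120, "e2": 95, "e3": 4, "e4": 88, "e5": 60, "e6": 2})
scm_tier = retier(counts, current_scm={"e3", "e6"})
```

Note that data only moves when the policy runs; a workload that heats up between intervals gets no benefit until the next pass, which is the limitation the caching approach described below avoids.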

HPE “Memory-Driven Flash” uses Storage Class Memory as Cache

HPE is one of many vendors taking the caching path to integrating SCM into their products. It recently began shipping Optane-based read caching via 750 GB NVMe SCM Module add-in cards. In testing, HPE 3PAR 20850 arrays equipped with this “HPE Memory-Driven Flash” delivered:

  • Sub-200 microseconds of latency for most IO
  • Nearly 100% of IO in under 300 microseconds
  • 75 GB per second of throughput
  • 4 million IOPS

These results were achieved with standard 12 Gb SAS SSDs providing the bulk of the storage capacity. HPE Memory-Driven Flash is currently shipping for HPE 3PAR Storage, with availability on HPE Nimble Storage coming later in 2019.

An advantage of the caching approach is that even a relatively small amount of SCM can enable a storage system to deliver SCM performance by dynamically caching hot data, even when most of the data is stored on much slower and less expensive media. As with tiering, the intelligence of the algorithms is key to delivering great results in production environments.
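For contrast with the tiering sketch above, here is a minimal read-cache sketch assuming a plain LRU policy: a miss pulls the block into the SCM cache immediately, so a newly hot block is served at cache speed on its very next read. Real arrays, and ZFS's ARC, use more sophisticated admission and eviction logic; the block IDs and capacity here are illustrative.

```python
# Toy LRU read cache standing in for an SCM caching layer.
from collections import OrderedDict

class ReadCache:
    def __init__(self, capacity, backend):
        self.capacity = capacity      # blocks the SCM cache can hold
        self.cache = OrderedDict()    # block_id -> data, kept in LRU order
        self.backend = backend        # lookup on the slower flash/HDD tier

    def read(self, block_id):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)   # hit: served at SCM latency
            return self.cache[block_id]
        data = self.backend(block_id)          # miss: fetch from slow tier
        self.cache[block_id] = data            # ...and cache it immediately
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict least recently used
        return data

cache = ReadCache(capacity=3, backend=lambda b: f"data-{b}")
for block in ["a", "b", "a", "c", "d", "a"]:   # "a" stays hot, never evicted
    cache.read(block)
print(list(cache.cache))                       # ['c', 'd', 'a']
```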

The performance HPE is achieving with SCM is good news for other arrays based on caching-oriented storage operating systems. In particular, ZFS-based products such as those offered by Tegile, iXsystems and OpenDrives, should see substantial performance gains when they switch to using SCM for the L2ARC read cache.

What is Best – Tier or Cache?

I favor the caching approach. Caching is more dynamic than tiering, responding to workloads immediately rather than waiting for a tiering algorithm to move active data to the fastest tier on some scheduled basis. A tiering-based system may completely miss out on the opportunity to accelerate some workloads. I also favor caching because I believe it will bring the benefits of SCM within reach of more organizations.

Whether using SCM as a capacity tier or as a cache, the intelligence of the algorithms that automate the placement of data is critical. Many storage vendors talk about using artificial intelligence and machine learning (AI/ML) in their storage systems. SCM provides a new, large, persistent, low-latency class of storage for AI/ML to work with in order to deliver more performance in less space and at a lower cost per unit of performance.

The right way to integrate NVMe and SCM into enterprise storage is to do so as a tier, as a cache, or as both tier and cache, and then use automated, intelligent algorithms to make the most of the storage class memory that is available.

Prospective enterprise storage array purchasers should take a close look at how the systems use (or plan to use) storage class memory and how they use AI/ML to inform caching and/or storage tiering decisions to deliver cost-effective performance.

 

This is the first in a series of articles about Persistent Memory and its use in enterprise storage. The second article in the series is NVMe-oF Delivering While Persistent Memory Remains Mostly a Promise.

Revised on 4/5/2019 to add the link to the next article in the series.




Number of Appliances Dedicated to Deduplicating Backup Data Shrinks even as Data Universe Expands

One would think that with the continuing explosion in the amount of data being created every year, the number of appliances that can reduce the amount of data stored by deduplicating it would be increasing. That assumption is both true and flawed. On one hand, the number of backup and storage appliances that can deduplicate data has never been higher and continues to increase. On the other hand, the number of vendors that create physical target-based appliances dedicated to the deduplication of backup data continues to shrink.

Data Universe Expands

In November 2018, IDC released a report estimating that the amount of data created, captured, and replicated will increase five-fold, from the current 33 zettabytes (ZBs) to about 175 ZBs in 2025. Whether or not one agrees with that estimate, there is little doubt that there are more ways than ever in which data gets created. These include:

  • Endpoint devices such as PCs, tablets, and smart phones
  • Edge devices such as sensors that collect data
  • Video and audio recording devices
  • Traditional data centers
  • The creation of data through the backup, replication and copying of this created data
  • The creation of metadata that describes, categorizes, and analyzes this data

All these sources and means of creating data mean there is more data than ever under management. But as this occurs, the number of products originally developed to control this data growth – hardware appliances that specialize in deduplicating backup data after it is backed up, such as those from Dell EMC, ExaGrid, and HPE – has shrunk in recent years.

Here are the top five reasons for this trend.

1. Deduplication has Moved onto Storage Arrays.

Many storage arrays, both primary and secondary, give companies the option to deduplicate data. While these arrays may not achieve the same deduplication ratios as appliances purpose-built for the deduplication of backup data, their combination of lower costs and high levels of storage capacity offsets the inability of their deduplication software to optimize backup data.

2. Backup software offers deduplication capabilities.

Rather than waiting to deduplicate backup data on a hardware appliance, almost all enterprise backup software products can deduplicate on either the client or the backup server before storing it. This eliminates the need to use a storage device dedicated to deduplicating data.
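The Python sketch below illustrates the general idea behind this kind of source-side deduplication: split the backup stream into chunks, hash each chunk, and store only the chunks the repository has not already seen. It uses fixed-size chunking for clarity; real backup products typically use variable-length, content-defined chunking, and this is not any particular vendor's implementation.

```python
# Simplified chunk-and-hash deduplication of a backup stream.
import hashlib

CHUNK_SIZE = 4096
store = {}          # chunk hash -> chunk bytes (the "repository")

def backup(stream: bytes) -> list:
    """Return the recipe (list of chunk hashes) needed to rebuild the stream."""
    recipe = []
    for i in range(0, len(stream), CHUNK_SIZE):
        chunk = stream[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:
            store[digest] = chunk      # new data: store it once
        recipe.append(digest)          # known data: store only a reference
    return recipe

data = b"A" * 8192 + b"B" * 4096
backup(data)                           # first backup stores 2 unique chunks
backup(data)                           # second backup stores nothing new
print(f"unique chunks stored: {len(store)}")   # 2
```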

3. Virtual appliances that perform deduplication are on the rise.

Some providers, such as Quest Software, have exited the physical deduplication backup target appliance market and re-emerged with virtual appliances that deduplicate data. These give companies new flexibility to use hardware from any provider they want and implement their software-defined data center strategy more aggressively.

4. Newly created data may not deduplicate well or at all.

A lot of the new data that companies create may not deduplicate well or at all. Audio or video files may not change and will only deduplicate if full backups are done – which may be rare. Encrypted data will not deduplicate at all. In these circumstances, deduplication appliances are rarely if ever needed.

5. Multiple backup copies of the data may not be needed.

Much of the data collected from edge and endpoint devices may only need a couple of copies, if that. Audio and video files may also fall into this same category of not needing more than a couple of retained copies. To get the full benefits of a target-based deduplication appliance, one needs to back up the same data multiple times – usually at least six times if not more. This reduced need to back up and retain multiple copies of data diminishes the need for these appliances.

Remaining Deduplication Appliances More Finely Tuned for Enterprise Requirements

The reduction in the number of vendors shipping physical target-based deduplication backup appliances seems almost counter-intuitive in light of the ongoing explosion in data growth that we are witnessing. But when one considers much of the data being created and its corresponding data protection and retention requirements, the decrease in the number of target-based deduplication appliances available is understandable.

The upside is that the vendors who do remain and the physical target-based deduplication appliances that they ship are more finely tuned for the needs of today’s enterprises. They are larger, better suited for recovery, have more cloud capabilities, and account for some of these other broader trends mentioned above. These factors and others will be covered in the forthcoming DCIG Buyer’s Guide on Enterprise Deduplication Backup Appliances.




The Early Implications of NVMe/TCP on Ethernet Network Designs

The ratification in November 2018 of the NVMe/TCP standard officially opened the doors for NVMe/TCP to begin to find its way into corporate IT environments. Earlier this week I had the opportunity to listen in on a webinar that SNIA hosted which provided an update on NVMe/TCP’s latest developments and its implications for enterprise IT. Here are four key takeaways from that presentation and how these changes will impact corporate data center Ethernet network designs.

First, NVMe/TCP will accelerate the deployment of NVMe in enterprises.

NVMe is already available in networked storage environments using competing protocols such as RDMA, which ships as RoCE (RDMA over Converged Ethernet). The challenge is that no one (well, very few anyway) uses RDMA in any meaningful way in their environment, so using RoCE to run NVMe never gained, and will likely never gain, much momentum.

The availability of NVMe over TCP changes that. Companies already understand TCP, deploy it everywhere, and know how to scale and run it over their existing Ethernet networks. NVMe/TCP will build on this legacy infrastructure and knowledge.

Second, any latency that NVMe/TCP introduces still pales in comparison to existing storage networking protocols.

Running NVMe over TCP does introduce latency versus using RoCE. However, the latency that TCP introduces is nominal and will likely be measured in microseconds in most circumstances. Most applications will not even detect this level of latency due to the substantial jump in performance that natively running NVMe over TCP will provide versus existing storage protocols such as iSCSI and FC.

Third, the introduction of NVMe/TCP will require companies to implement Ethernet network designs that minimize latency.

Ethernet networks may implement buffering in Ethernet switches to handle periods of peak workloads. Companies will need to modify that network design technique when deploying NVMe/TCP as buffering introduces latency into the network and NVMe is highly latency sensitive. Companies will need to more carefully balance how much buffering they introduce on Ethernet switches.

Fourth, get familiar with the term “incast collapse” on Ethernet networks and how to mitigate it.

NVMe can support up to 64,000 queues. Every queue that NVMe opens up initiates a TCP session. Here is where challenges may eventually surface. Simultaneously opening up multiple queues will result in multiple TCP sessions initiating at the same time. This could, in turn, cause all of these sessions to arrive at a common congestion point in the Ethernet network at the same time. When that happens, all the TCP sessions back off at the same time, an event known as an incast collapse, creating latency in the network.

Source: University of California-Berkeley

Historically this has been a very specialized and rare occurrence in networking due to the low probability that such an event would ever take place. But the introduction of NVMe/TCP into the network makes such an event much more likely to occur, especially as more companies deploy NVMe/TCP in their environments.
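The toy simulation below shows the incast dynamic in miniature: many synchronized flows (standing in for NVMe/TCP sessions) overrun a shared switch buffer, every flow halves its window in the same round trip, and aggregate throughput collapses before the cycle repeats. The flow count, buffer size, and window behavior are illustrative assumptions, not measurements of any real network.

```python
# Toy model of TCP incast: N flows share one egress port with a small buffer.
# A buffer overflow causes synchronized loss, so every flow backs off at once.
N_FLOWS = 32            # e.g. NVMe/TCP queues opened simultaneously (hypothetical)
LINK_CAPACITY = 64      # packets the egress port drains per round trip
BUFFER = 16             # packets of switch buffering
window = [8] * N_FLOWS  # each flow's congestion window, in packets

for rtt in range(1, 11):
    offered = sum(window)                        # packets arriving this RTT
    overflow = offered > LINK_CAPACITY + BUFFER
    delivered = min(offered, LINK_CAPACITY)
    if overflow:
        # Synchronized loss: every flow halves its window (incast collapse).
        window = [max(1, w // 2) for w in window]
    else:
        # No loss: every flow grows its window (simplified additive increase).
        window = [w + 1 for w in window]
    print(f"RTT {rtt:2d}: offered={offered:4d} delivered={delivered:3d} collapse={overflow}")
```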

The Ratification of the NVMe/TCP Standard

Ratification of the NVMe/TCP standard potentially makes every enterprise data center a candidate for storage systems that can deliver dramatically better performance to their work loads. Until the performance demands of every workload in a data center are met instantaneously, some workload requests will queue up behind a bottleneck in the data center infrastructure.

Just as introducing flash memory into enterprise storage systems revealed bottlenecks in storage operating system software and storage protocols, NVMe/TCP-based storage systems will reveal bottlenecks in data center networks. Enterprises seeking to accelerate their applications by implementing NVMe/TCP-based storage systems may discover bottlenecks in their networks that need to be addressed in order to see the full benefits that NVMe/TCP-based storage offers.

To view this presentation in its entirety, follow this link.




All-inclusive Software Licensing: Best Feature Ever … with Caveats

On the surface, all-inclusive software licensing sounds great. You get all the software features that the product offers at no additional charge. You can use them – or not use them – at your discretion. It simplifies product purchases and ongoing licensing.

But what if you opt not to use all the product’s features or only need a small subset of them? In those circumstances, you need to take a hard look at any product that offers all-inclusive software licensing to determine if it will deliver the value that you expect.

Why We Like All-Inclusive Software Licensing

All-inclusive software licensing has taken off in recent years with more enterprise data storage and data protection products than ever delivering their software licensing in this manner. Further, this trend shows no signs of abating for the following reasons:

  • It makes life easier for the procurement team, since they do not have to manage and negotiate software licensing separately.
  • It makes life easier for IT staff, who no longer discover that they cannot use a feature because they lack a license for it.
  • It helps the vendors because their customers use their features. The more they use and like the features, the more apt they are to keep using the product long term.
  • It provides insurance for the companies involved that if they do unexpectedly need a feature, they do not have to go back to the proverbial well and ask for more money to license it.
  • It helps IT be more responsive to changes in business requirements. Business need can change unexpectedly. It happens where IT is assured that a certain feature will never be of interest to the end user. Suddenly, this “never gonna need it” becomes a “gotta have it” requirement.

All-inclusive software licensing solves these dilemmas and others.

The Best Feature Ever … Has Some Caveats

The reasons as to why companies may consider all-inclusive software licensing the best feature ever are largely self-evident. But there are some caveats as to why companies should minimally examine all-inclusive software licensing before they select any product that supports it.

  1. Verify you will use the features offered by the platform. It is great that a storage platform offers deduplication, compression, thin provisioning, snapshots, replication, metro clusters, etc., etc. at no extra charge. But if you do not use these features now and have no plans to use them, guess what? You are still going to indirectly pay for them if you buy the product.
  2. Verify the provider measures and knows which of its features are used. When you buy all-inclusive software licensing, you generally expect the vendor to support it and continue to develop it. But how does the vendor know which of its features are being used, when they are being used, and for what purposes? It makes no sense for the provider to staff its support lines with experts in replication, or to continue developing its replication features, if no one uses them. Be sure you select a product that regularly monitors and reports back to the provider which of its features are used and how, and a provider that actively supports and develops those features.
  3. Match your requirements to the features available on the product. It still pays to do your homework. Know your requirements and then evaluate products with all-inclusive software licensing based upon them.
  4. Verify the software works well in your environment. I have run across a few providers who led the way in providing all-inclusive software licensing. Yet some customers who selected a product based on this offering found the features were not as robust as they anticipated, or were so difficult to use that they had to abandon them. In short, having a license to use software that does not work in your environment does not help anyone.
  5. Try to quantify whether other companies use the specific software features. Ideally, you want to know that others like you use the feature in production. This can help you avoid becoming an unsuspecting beta-tester for that feature.

Be Grateful but Wary

I, for one, am grateful that providers have come around with more of them making all-inclusive software licensing available as a licensing option for their products. But the software features that vendors include with their all-inclusive software licensing vary from product to product. They also differ in their maturity, robustness, and fullness of support.

It behooves everyone to hop on the all-inclusive software licensing bandwagon. But as you do, verify to which train you hitched your wagon and that it will take you to where you want to go.




TrueNAS M-Series Turns Tech Buzz into Music

NVMe and other advances in non-volatile memory technology are generating a lot of buzz in the enterprise technology industry, and rightly so. As providers integrate these technologies into storage systems, they are closing the gap between the dramatic advances in processing power and the performance of the storage systems that support them. The TrueNAS M-Series from iXsystems provides an excellent example of what can be achieved when these technologies are thoughtfully integrated into a storage system.

DCIG Quick Look

In the process of refreshing its research on enterprise midrange arrays, DCIG discovered that the iXsystems TrueNAS M-Series all-flash and hybrid storage arrays leverage many of the latest technologies, including:

  • Intel® Xeon® Scalable Family Processors
  • Large DRAM caches
  • NVDIMMs
  • NVMe SSDs
  • Flash memory
  • High-capacity hard disk drives

The TrueNAS M-Series lineup comprises two models: the M40 and the M50. The M40 has a lower entry cost, scales to 2 PB, and includes 40 GbE connectivity with SAS SSD caching. The M50 scales to 10 PB and adds 100 GbE connectivity with NVMe-based caching.

Both models come standard with redundant storage controllers for high availability and 24×7 service, though single-controller configurations are available for less critical applications.

Advanced Technologies in Perfect Harmony

DCIG analysts are impressed with the way iXsystems engineers have orchestrated the latest technologies in the M50 storage array, achieving maximum end-to-end cost-efficient performance.

The M50 marries 40 Intel® Xeon® Scalable Family Processor cores with up to 3 TB of DRAM, a 32 GB NVDIMM write cache and 15.2 TB of NVMe SSD read-cache in front of up to 10 PB of hard disk storage. (The M-Series can also be configured as an all-flash array.) Moreover, iXsystems attaches each storage expansion shelf directly to each controller via 12 Gb SAS ports. This approach adds back end throughput to the storage system as each shelf is added.

[Image: iXsystems TrueNAS M50 array, rear view]

This well-balanced approach carries through to front-end connectivity. The M50 supports the latest advances in high-speed networking, including up to 4 ports of 40/100 Gb Ethernet and 16/32 Gb Fibre Channel connectivity per controller.

TrueNAS is Enterprise Open Source

TrueNAS is built on BSD and ZFS Open Source technology. iXsystems is uniquely positioned to support the full Open Source stack behind TrueNAS. It has developers and expertise in the operating system, file systems and NAS software.

iXsystems also stewards the popular (>10 million downloads) FreeNAS software-defined storage platform. Among other things, FreeNAS functions as the experimental feature and QA testbed for TrueNAS. TrueNAS can even replicate data to and from FreeNAS. Thus, TrueNAS owners benefit from the huge ZFS and FreeNAS Open Source ecosystems.

NVM Advances are in Tune with the TrueNAS Architecture

The recent advances in non-volatile memory are a perfect fit with the TrueNAS architecture.

Geeking out just a bit…

[Diagram: TrueNAS M50 cache]

ZFS uses DRAM as a read cache to accelerate read operations. This primary read cache is called the ARC. ZFS also supports a secondary read cache called the L2ARC. The M50 can use much of the 1.5 TB of DRAM in each storage controller for the ARC and combine it with up to 15.2 TB of NVMe-based L2ARC to provide a huge low-latency read cache that offers up to 8 GB/s of throughput.

The ZFS Intent Log (ZIL) is where all data to be written is initially stored. These writes are later flushed to disk. The M50 uses NVDIMMs for the ZIL write cache. The NVDIMMs safely provide near-DRAM-speed write caching. This enables the array to quickly acknowledge writes on the front end while efficiently coalescing many random writes into sequential disk operations on the back end.
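The sketch below mimics the write path just described: incoming writes land in a fast persistent log and are acknowledged immediately, while a background task flushes them to the backing pool in batches. It is a simplified illustration of the pattern, not the actual ZFS ZIL implementation; the class and method names are invented for the example.

```python
# Toy "log now, flush later" write path: acknowledge once the write is in the
# fast log (the NVDIMM's role), then coalesce and flush to the pool in batches.
import collections
import threading
import time

class WriteCache:
    def __init__(self, flush_interval=0.5):
        self.log = collections.deque()   # stands in for the NVDIMM write log
        self.backing = {}                # stands in for the disk pool
        self.lock = threading.Lock()
        threading.Thread(target=self._flusher, args=(flush_interval,),
                         daemon=True).start()

    def write(self, key, data):
        with self.lock:
            self.log.append((key, data))  # persist to the fast log...
        return "ack"                      # ...and acknowledge right away

    def _flusher(self, interval):
        while True:
            time.sleep(interval)
            with self.lock:
                batch, self.log = list(self.log), collections.deque()
            for key, data in batch:       # many small writes, one batched pass
                self.backing[key] = data

cache = WriteCache()
print(cache.write("blockA", b"hello"))    # returns "ack" immediately
time.sleep(1)
print(cache.backing)                      # the write has been flushed to the pool
```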

Broad Protocol Support Enables Many Uses

TrueNAS supports AFP, SMB, NFS, iSCSI and FC protocols plus S3-compliant object storage. It also offers Asigra backup as an integrated service that runs natively on the array. This broad protocol support enables the M50 to cost-effectively provide high performance storage for:

  • File sharing
  • Virtual machine storage
  • Cloud-native apps
  • Backup target

 

All-inclusive Licensing Adds Value

TrueNAS software licensing is all-inclusive, with unlimited snapshots, clones, and replication. Thus, there are no add-on license fees to negotiate and no additional POs to wait for. This reduces costs, promotes full utilization of the extensive capabilities of the TrueNAS M-Series, and increases business agility.

TrueNAS M50 Turns Tech Buzz into Music

The TrueNAS M50 integrates multiple buzz-worthy technologies to deliver large amounts of low-latency storage. The M50 accelerates a broad range of workloads–safely and economically. Speaking of economics, according to the iXsystems web site, TrueNAS storage can be expanded for less than $100/TB. That should be music to the ears of business people everywhere.




NVMe: Four Key Trends Set to Drive Its Adoption in 2019 and Beyond

Storage vendors hype NVMe for good reason. It enables all-flash arrays (AFAs) to fully deliver on flash’s performance characteristics. Already NVMe serves as an interconnect between AFA controllers and their back end solid state drives (SSDs) to help these AFAs unlock more of the performance that flash offers. However, the real performance benefits that NVMe can deliver will be unlocked as a result of four key trends set to converge in the 2019/2020 time period. Combined, these will open the doors for many more companies to experience the full breadth of performance benefits that NVMe provides for a much wider swath of applications running in their environment.

Many individuals have heard about the performance benefits of NVMe. Using it, companies can reduce latency, with response times measured in a few hundred microseconds or less. Further, applications can leverage the many more channels that NVMe has to offer to drive throughput to hundreds of GBs per second and achieve millions of IOPS. These types of performance characteristics have many companies eagerly anticipating NVMe’s widespread availability.

To date, however, few companies have experienced the full breadth of performance characteristics that NVMe offers. This stems from:

  • The lack of AFAs on the market that fully support NVMe (only about 20% do).
  • The relatively small performance improvements that NVMe offers over existing SAS-attached solid-state drives (SSDs); and,
  • The high level of difficulty and cost associated with deploying NVMe in existing data centers.

This is set to change in the next 12-24 months as four key trends converge that will open up NVMe to a much wider audience.

  1. Large storage vendors getting ready to enter the NVMe market. AFA providers such as Tegile (Western Digital), iXsystems, Huawei, Lenovo, and others ship products that support NVMe. These vendors represent the leading edge of where NVMe innovation has occurred. However, their share of the storage market remains relatively small compared to providers such as Dell EMC, HPE, IBM, and NetApp. As these large storage providers enter the market with AFAs that support NVMe, expect market acceptance and adoption of NVMe to take off.
  2. The availability of native NVMe drivers on all major operating systems. The only two major enterprise operating systems that currently have native NVMe drivers are Linux and VMware. However, until Microsoft and, to a lesser degree, Solaris offer native NVMe drivers, many companies will have to hold off on deploying NVMe in their environments. The good news is that all these major OS providers are actively working on NVMe drivers. Further, expect that the availability of these drivers will closely coincide with the availability of NVMe AFAs from the major storage providers and the release of the NVMe-oF TCP standard.
  3. NVMe-oF TCP protocol standard set to be finalized before the end of 2018. Connecting the AFA controller to its backend SSDs via NVMe is only one half – and the much easier part – of solving the performance problem. The much larger and more difficult problem is easily connecting hosts to AFAs over existing storage networks, as it is currently difficult to set up and scale NVMe-oF. The establishment of the NVMe-oF TCP standard will remedy this and facilitate the introduction and use of NVMe-oF between hosts and AFAs using TCP/IP over existing Ethernet storage networks.
  4. The general availability of NVMe-oF TCP offload cards. To realize the full performance benefits of NVMe-oF using TCP, companies are advised to use NVMe-oF TCP offload cards. Using standard Ethernet cards with no offload engine, companies will still see high throughput but very high CPU utilization (up to 50 percent). Using the forthcoming NVMe-oF TCP offload cards, performance increases by anywhere from 33 to 150 percent versus native TCP cards, while introducing only nominal amounts of latency (single- to double-digit microseconds).

The business need for NVMe technology is real. While today’s all-flash arrays have tremendously accelerated application performance, NVMe stands poised to unleash another round of up to 10x or more performance improvements. But to do that, a mix of technologies, standards, and programming changes to existing operating systems must converge for mass adoption in enterprises to occur. This combination of events seems poised to happen in the next 12-24 months.




HPE Expands Its Big Tent for Enterprise Data Protection

When it comes to the mix of data protection challenges that exist within enterprises today, these companies would love to identify a single product that they can deploy to solve all their challenges. I hate to be the bearer of bad news, but that single product solution does not yet exist. That said, enterprises will find a steadily improving ecosystem of products that increasingly work well together to address this challenge with HPE being at the forefront of putting up a big tent that brings these products together and delivers them as a single solution.

Having largely solved their backup problems at scale, enterprises have new freedom to analyze and address their broader enterprise data protection challenges. As they look to bring long term data retention, data archiving, and multiple types of recovery (single applications, site failovers, disaster recoveries, and others) under one big tent for data protection, they find they often need to deploy multiple products.

This creates a situation where each product addresses specific pain points that enterprises have. However, multiple products equate to multiple management interfaces that each have their own administrative policies with minimal or no integration between them. This creates a thornier problem – enterprises are left to manage and coordinate the hand-off of the protection and recovery of data between these different individual data protection products.

A few years ago, HPE started to build a “big tent” to tackle these enterprise data protection and recovery issues. It laid the foundation with its HPE 3PAR StoreServ storage arrays, StoreOnce deduplication storage systems, and Recovery Manager Central (RMC) software to help companies coordinate and centrally manage:

  • Snapshots on 3PAR StoreServ arrays
  • Replication between 3PAR StoreServ arrays
  • The efficient movement of data between 3PAR and StoreOnce systems for backup, long term retention, and fast recoveries

This week HPE expanded its big tent of data protection to give companies more flexibility to protect and recover their data more broadly across their enterprise. It did so in the following ways:

  • HPE RMC 6.0 can directly recover data to HPE Nimble storage arrays. Recoveries from backups can be a multi-step process that may require data to pass through the backup software and the application server before it lands on the target storage array. Beginning December 2018, companies can use RMC to directly recover data to HPE Nimble storage arrays from an HPE StoreOnce system without going through the traditional recovery process, just as they can already do with HPE 3PAR StoreServ storage arrays.
  • HPE StoreOnce can directly send and retrieve deduplicated data from multiple cloud providers. Companies sometimes fail to consider that general purpose cloud service providers such as Amazon Web Services (AWS) or Microsoft Azure make no provisions to optimize data stored with them, such as deduplicating it. Using HPE StoreOnce’s new direct support for AWS, Azure, and Scality, companies can use StoreOnce to first compress and deduplicate data before they store it in the cloud.
  • Integration between Commvault and HPE StoreOnce systems. Out of the gate, companies can use Commvault to manage StoreOnce operations such as replicating data between StoreOnce systems as well as moving data directly from StoreOnce systems to the cloud. Moreover, as the relationship between Commvault and HPE matures, companies will also be able to use HPE StoreOnce Catalyst, HPE’s client-based deduplication software agent, in conjunction with Commvault to back up data on server clients where data may not reside on HPE 3PAR or Nimble storage. Using the HPE StoreOnce Catalyst software, Commvault will deduplicate data on the source before sending it to an HPE StoreOnce system.

Source: HPE

Of the three announcements that HPE made this week, the new relationship with Commvault, which accompanies its pre-existing relationships with Micro Focus (formerly HPE Data Protector) and Veritas, demonstrates HPE’s commitment to helping enterprises build a big tent for their data protection and recovery initiatives. Storing data on HPE 3PAR and Nimble arrays and using RMC to manage backups and recoveries on StoreOnce systems certainly accelerates and simplifies these functions when companies can do so. But by working with these other partners, HPE shows it recognizes that companies will not store all their data on its systems, and that it will accommodate them so they can create a single, larger data protection and recovery solution for their enterprise.




Two Hot Technologies to Consider for Your 2019 Budgets

Hard to believe, but the first day of autumn is just two days away, and with fall weather always comes cooler temperatures (which I happen to enjoy!). This means people are staying inside a little more and doing those fun, end-of-year activities that everyone enjoys – such as planning their 2019 budgets. As you do so, solutions from BackupAssist and StorMagic are two hot new technologies for companies to consider making room for in the New Year.

BackupAssist 365

BackupAssist 365 backs up files and emails stored in the cloud. While backup of cloud-based data may seem rather ho-hum in today’s artificial intelligence, blockchain-obsessed, digital transformation-focused world, it solves a real-world problem that organizations of nearly every size face: how to cost-effectively and simply protect all those pesky files and emails that people store in cloud applications such as Dropbox, Office 365, Google Drive, OneDrive, Gmail, Outlook and others.

To do so, BackupAssist 365 adopted two innovative yet practical approaches to protect files and emails.

  • First, it interfaces directly with these various cloud providers to back up this data. Using your login permissions (which you provide when configuring the software), BackupAssist 365 accesses data directly in the cloud. This negates the need for your server, PC, or laptop to be turned on when these backups occur, so backups can occur at any time.
  • Second, it does cloud-to-local backup. In other words, rather than running up the data transfer and network costs that come with backing up to another cloud, it backs the data up to local storage at your site. While that may seem a little odd in today’s cloud-centric world, companies can get a great deal of storage capacity for nominal amounts of money. Since it only does an initial full backup and then differential backups thereafter (see the sketch below), the ongoing data transfer costs are nominal, and the amount of storage capacity needed onsite is equally small.
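As a rough illustration of the full-then-differential pattern described in the second bullet, the sketch below copies everything on the first pass, records a hash per object, and on later passes fetches only objects whose content has changed since that initial full. The `list_cloud_objects` helper is a hypothetical stand-in for a cloud listing call; none of this is BackupAssist 365's actual API.

```python
# Toy cloud-to-local backup: one full pass, then differential passes that only
# copy objects changed since the full.
import hashlib
import json
import pathlib

STATE = pathlib.Path("baseline.json")     # hashes captured by the full backup
DEST = pathlib.Path("local_backup")       # local storage target

def run_backup(list_cloud_objects):
    """list_cloud_objects() -> dict of object name -> bytes (stand-in)."""
    objects = list_cloud_objects()
    is_full = not STATE.exists()
    baseline = {} if is_full else json.loads(STATE.read_text())
    DEST.mkdir(exist_ok=True)
    copied = 0
    for name, data in objects.items():
        digest = hashlib.sha256(data).hexdigest()
        if is_full or baseline.get(name) != digest:   # changed since the full
            (DEST / name).write_bytes(data)
            copied += 1
    if is_full:
        STATE.write_text(json.dumps(
            {n: hashlib.sha256(d).hexdigest() for n, d in objects.items()}))
    return copied

print(run_backup(lambda: {"report.docx": b"v1", "notes.txt": b"hi"}))  # full: 2
print(run_backup(lambda: {"report.docx": b"v2", "notes.txt": b"hi"}))  # diff: 1
```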

Perhaps the best part about BackupAssist 365 is its cost (or lack thereof). BackupAssist 365 licenses its software on a per-user basis, with each user email account counting as one user license. However, this one email account covers the backup of that user’s data in any cloud service used by that user. Further, the cost is only $1/month per user, with decreasing costs for greater numbers of users. In fact, the cost is so low on a per-user basis that companies may not even need to budget for this service. They can just start using it and expense their credit cards to keep it below corporate radar screens.

StorMagic SvSAN

The StorMagic SvSAN touches on another two hot technology trends that I purposefully (or not so purposefully) left out above: hyperconverged infrastructure or HCI and edge computing. However, unlike many of the HCI and edge computing plays in the marketplace such as Cisco HyperFlex, Dell EMC VxRail, and Nutanix, StorMagic has not forgotten about cost constraints that branch, remote, and small offices face.

As Cisco, Dell EMC, Nutanix and others chase the large enterprise data center opportunities, they often leave remote, branch, and small offices with two choices: pay up or find another solution. Many of these size offices are opting to find alternative solutions.

This is where StorMagic primarily plays. For a less well-known player, they play much bigger than they may first appear. Through partnerships with large providers such as Cisco and Lenovo among others, StorMagic comes to market with highly available, two-server systems that scale across dozens, hundreds, or even thousands of remote sites. To get a sense of StorMagic’s scalability, walk into any of the 2,000+ Home Depots in the United States or Mexico and ask to look at the computer system that hosts their compute and storage. If the Home Depot lets you and you can find it, you will find a StorMagic system running somewhere in the store.

The other big challenge that each StorMagic system addresses is security. Because its systems can be deployed almost anywhere in any environment, they are susceptible to theft. In fact, in talking to one of its representatives, he shared a story where someone drove a forklift through the side of a building and stole a computer system at one of its customer sites. Not that it mattered. To counter these types of threats, StorMagic encrypts all the data on its HCI solutions with its own software, which is FIPS 140-2 compliant.

Best of all, to get these capabilities, companies do not have to break the bank to acquire one of these systems. The list price for the Standard Edition of the SvSAN software, which includes 2TB of usable storage, high availability, and remote management, is $2,500.

As companies look ahead and plan their 2019 budgets, they need to take care of their operational requirements but they may also want to dip their toes in the water to get the latest and greatest technologies. These two technologies give companies the opportunities to do both. Using BackupAssist 365, companies can quickly and easily address their pesky cloud file and email backup challenges while StorMagic gives them the opportunity to affordably and safely explore the HCI and edge computing waters.




Six Key Differentiators between HPE 3PAR StoreServ and NetApp AFF A-Series All-flash Arrays

Both HPE and NetApp have multiple enterprise storage product lines. Each company also has a flagship product. For HPE it is the 3PAR StoreServ line. For NetApp it is the AFF (all flash FAS) A-Series. DCIG’s latest Pocket Analyst Report examines these flagship all-flash arrays. The report identifies many similarities between the products, including the ability to deliver low latency storage with high levels of availability, and a relatively full set of data management features.

DCIG’s Pocket Analyst Report also identifies six significant differences between the products. These differences include how each product provides deduplication and other data services, hybrid cloud integration, host-to-storage connectivity, scalability, and simplified management through predictive analytics and bundled or all-inclusive software licensing.

DCIG recently updated its research on the dynamic and growing all-flash array marketplace. In so doing, DCIG identified many similarities between the HPE 3PAR StoreServ and NetApp AFF A-Series products including:

  • Unified SAN and NAS protocol support
  • Extensive support for VMware APIs, including VMware Virtual Volumes (VVols)
  • Integration with popular virtualization management consoles
  • Rich data replication and data protection offerings

DCIG also identified significant differences between the HPE and NetApp products including:

  • Hardware-accelerated Inline Data Services
  • Predictive analytics
  • Hybrid Cloud Support
  • Host-to-Storage Connectivity
  • Scalability
  • Licensing simplicity


DCIG’s 4-page Pocket Analyst Report on the Six Key Differentiators between HPE 3PAR StoreServ and NetApp AFF A-Series All-flash Arrays analyzes and compares the flagship all-flash arrays from HPE and NetApp. To see which product has the edge in each of the above categories and why, you can purchase the report on DCIG’s partner site: TechTrove. You may also register on the TechTrove website to be notified should this report become available for no charge at some future time.




Data Center Challenges and Technology Advances Revealed at Flash Memory Summit 2018

If you want to get waist-deep in the technologies that will impact the data centers of tomorrow, the Flash Memory Summit 2018 (FMS) held this week in Santa Clara is the place to do it. This is where the flash industry gets its geek on and everyone on the exhibit floor speaks bits and bytes. However, there is no better place to learn about advances in flash memory that are sure to show up in products in the very near future and drive further advances in data center infrastructure.

Key themes at the conference include:

  • Processing and storing ever growing amounts of data is becoming more and more challenging. Faster connections and higher capacity drives are coming but are not the whole answer. We need to completely rethink data center architecture to meet these challenges.
  • Artificial intelligence and machine learning are expanding beyond their traditional high-performance computing research environments and into the enterprise.
  • Processing must be moved closer to—and perhaps even into—storage.

Multiple approaches to addressing these challenges were championed at the conference that range from composable infrastructure to computational storage. Some of these solutions will complement one another. Others will compete with one another for mindshare.

NVMe and NVMe-oF Get Real

For the near term, NVMe and NVMe over Fabrics (NVMe-oF) are clear mindshare winners. NVMe is rapidly becoming the primary protocol for connecting controllers to storage devices. A clear majority of product announcements involved NVMe.

WD Brings HDDs into the NVMe World

Western Digital announced the OpenFlex™ architecture and storage products. OpenFlex speaks NVMe-oF across an Ethernet network. In concert with OpenFlex, WD announced Kingfish™, an open API for orchestrating and managing OpenFlex™ infrastructures.

Western Digital is the “anchor tenant” for OpenFlex with a total of seventeen (17) data center hardware and software vendors listed as launch partners in the press release. Notably, WD’s NVMe-oF attached 1U storage device provides up to 168TB of HDD capacity. That’s right – the OpenFlex D3000 is filled with hard disk drives.

While NVMe’s primary use case is to connect to flash memory and emerging ultra-fast memory, companies still want their HDDs. With Western Digital’s OpenFlex, organizations can have their NVMe and still get the lower-cost HDDs they want.

Gen-Z Composable Infrastructure Standard Gains Momentum

Gen-Z is an emerging memory-centric architecture designed for nanosecond latencies. Since last year’s FMS, Gen-Z has made significant progress toward this objective. Consider:

  • The consortium publicly released the Gen-Z Core Specification 1.0 on February 13, 2018. Agreement upon a set of 1.0 standards is a critical milestone in the adoption of any new standard. The fact that the consortium’s 54 members agreed to it suggests broad industry adoption.
  • Intel’s adoption of the SFF-TA-1002 “Gen-Z” universal connector for its “Ruler” SSDs reflects increased adoption of the Gen-Z standards. Making this announcement notable is that Intel is NOT currently a member of the Gen-Z consortium which indicates that Gen-Z standards are gaining momentum even outside of the consortium.
  • The Gen-Z booth included a working Gen-Z connection between a server and a removable DRAM module in another enclosure. This is the first example of a processor being able to use DRAM that is not local to the processor but is instead coming out of a composable pool. This is a concept similar to how companies access shared storage in today’s NAS and SAN environments.

Other Notable FMS Announcements

Many other innovative solutions to data center challenges were also made at the FMS 2018 which included:

  • Solarflare NVMe over TCP enables rapid low-latency data movement over standard Ethernet networks.
  • ScaleFlux computational storage avoids the need to move the data by integrating FPGA-based computation into storage devices.
  • Intel’s announcement of its 660P Series of SSDs that employ quad level cell (QLC) technology. QLC stores more data in less space and at a lower cost.

Recommendations

Based on the impressive progress we observed at Flash Memory Summit 2018, we can reaffirm the recommendations we made coming out of last year’s summit…

  • Enterprise technologists should plan technology refreshes through 2020 around NVMe and NVMe-oF. Data center architects and application owners should seek 10:1 improvements in performance, and a similar jump in data center efficiency.
  • Beyond 2020, enterprise technologists should plan their technology refreshes around a composable data centric architecture. Data center architects should track the development of the Gen-Z ecosystem as a possible foundation for their next-generation data centers.



Seven Key Differentiators between Dell EMC VMAX and HPE 3PAR StoreServ Systems

Dell EMC and Hewlett Packard Enterprise are enterprise technology stalwarts. Thousands of enterprises rely on VMAX or 3PAR StoreServ arrays to support their most critical applications and to store their most valuable data. Although VMAX and 3PAR predated the rise of the all-flash array (AFA), both vendors have adapted and optimized these products for flash memory. They deliver all-flash array performance without sacrificing the data services enterprises rely on to integrate with a wide variety of enterprise applications and operating environments.

While VMAX and 3PAR StoreServ can both support hybrid flash plus hard disk configurations, the focus is now on all-flash. Nevertheless, the ability of these products to support multiple tiers of storage will be advantageous in a future that includes multiple classes of flash memory along with other non-volatile storage class memory technologies.

Both the VMAX and 3PAR StoreServ can meet the storage requirements of most enterprises, yet differences remain. DCIG compares the current AFA configurations from Dell EMC and HPE in its latest DCIG Pocket Analyst Report. This report will help enterprises determine which product best fits their business requirements.

DCIG updated its research on all-flash arrays during the first half of 2018. In so doing, DCIG identified many similarities between the VMAX and 3PAR StoreServ products including:

  • Extensive support for VMware APIs, including VMware Virtual Volumes (VVols)
  • Integration with multiple virtualization management consoles
  • Rich data replication and data protection offerings
  • Support for a wide variety of client operating systems
  • Unified SAN and NAS

DCIG also identified significant differences between the VMAX and 3PAR StoreServ products including:

  • Data center footprint
  • High-end history
  • Licensing simplicity
  • Mainframe connectivity
  • Performance resources
  • Predictive analytics
  • Raw and effective storage density


To see which vendor has the edge in each of these categories and why, you can access the latest 4-page Pocket Analyst Report from DCIG that analyzes and compares all-flash arrays from Dell EMC and HPE. This report is currently available for purchase on DCIG’s partner site: TechTrove. You may also register on the TechTrove website to be notified should this report become available at no charge at some future time.




Five Key Differentiators between the Latest NetApp AFF A-Series and Hitachi Vantara VSP F-Series Platforms

Every enterprise-class all-flash array (AFA) delivers sub-1 millisecond response times using standard 4K & 8K performance benchmarks, high levels of availability, and a relatively full set of core data management features. As such, enterprises must examine AFA products to determine the differentiators between them. It is when DCIG compared the newest AFAs from leading providers such as Hitachi Vantara and NetApp in its latest DCIG Pocket Analyst Report that differences between them quickly emerged.

Both Hitachi Vantara and NetApp refreshed their respective F-Series and A-Series lines of all-flash arrays (AFAs) in the first half of 2018. In so doing, many of the similarities between the products from these providers persisted in that they both continue to natively offer:

  • Unified SAN and NAS interfaces
  • Extensive support for VMware APIs, including VMware Virtual Volumes (VVols)
  • Integration with popular virtualization management consoles
  • Rich data replication and data protection offerings

However, the latest AFA product refreshes from each of these two vendors also introduced some key areas where they diverge. While some of these changes reinforced the strengths of their respective product lines, other changes provided key insights into how these two vendors see the AFA market shaping up in the years to come. The result is a set of key differences in product functionality between the two product lines.


To help enterprises select the solution that best fits their needs, there are five key ways that the latest AFA products from Hitachi Vantara and NetApp differentiate themselves from one another. These five key differentiators include:

  1. Data protection and data reduction
  2. Flash performance optimization
  3. Predictive analytics
  4. Public cloud support
  5. Storage networking protocols

To see which vendor has the edge in each of these categories and why, you can access the latest 4-page Pocket Analyst Report from DCIG that analyzes and compares these newest all-flash arrays from Hitachi Vantara and NetApp. This report is currently available for sale on DCIG’s partner site: TechTrove. You may also register on the TechTrove website to be notified should this report become available at no charge at some future time.




NVMe Unleashing Performance and Storage System Innovation

Mainstream enterprise storage vendors are embracing NVMe. HPE, NetApp, Pure Storage, Dell EMC, Kaminario and Tegile all offer all-NVMe arrays. According to these vendors, the products will soon support storage class memory as well. NVMe protocol access to flash memory SSDs is a big deal. Support for storage class memory may become an even bigger deal.

NVMe Flash Delivers More Performance Than SAS


Using the NVMe protocol to talk to SSDs in a storage system increases the efficiency and effective performance capacity of each processor and of the overall storage system. The slimmed down NVMe protocol stack reduces processing overhead compared to legacy SCSI-based protocols. This yields lower storage latency and more IOPS per processor. This is a good thing.

NVMe also delivers more bandwidth per SSD. Most NVMe SSDs connect via four (4) PCIe channels. This yields up to 4 GB/s bandwidth, an increase of more than 50% compared to the 2.4 GB/s maximum of a dual-ported SAS SSD. Since many all-flash arrays can saturate the path to the SSDs, this NVMe advantage translates directly to an increase in overall performance.
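A quick back-of-the-envelope check of those figures, assuming PCIe 3.0 NVMe SSDs and 12 Gb/s dual-ported SAS with their published link encoding overheads:

```python
# Effective per-device bandwidth: PCIe 3.0 x4 NVMe vs. dual-ported 12 Gb SAS.
pcie3_lane = 8 * (128 / 130) / 8   # 8 GT/s, 128b/130b encoding -> ~0.98 GB/s
nvme_x4 = 4 * pcie3_lane           # ~3.9 GB/s for a x4 NVMe SSD

sas3_port = 12 * (8 / 10) / 8      # 12 Gb/s, 8b/10b encoding -> 1.2 GB/s
sas_dual = 2 * sas3_port           # 2.4 GB/s for a dual-ported SAS SSD

print(f"NVMe x4 : {nvme_x4:.2f} GB/s")
print(f"SAS x2  : {sas_dual:.2f} GB/s")
print(f"increase: {(nvme_x4 / sas_dual - 1) * 100:.0f}%")   # roughly 64%
```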

The newest generation of all-flash arrays combine these NVMe benefits with a new generation of Intel processors to deliver more performance in less space. It is this combination that, for example, enables HPE to claim that its new Nimble Storage arrays offer twice the scalability of the prior generation of arrays. This is a very good thing.

The early entrants into the NVMe array marketplace charged a substantial premium for NVMe performance. As NVMe goes mainstream, the price gap between NVMe SSDs and SAS SSDs is rapidly narrowing. With many vendors now offering NVMe arrays, competition should soon eliminate the price premium. Indeed, Pure Storage claims to have done so already.

Storage Class Memory is Non-Volatile Memory

Non-volatile memory (NVM) refers to memory that retains data even when power is removed. The term applies to many technologies that have been widely used for decades, including EPROM, ROM, and NAND flash (the type of NVM commonly used in SSDs and memory cards). NVM also refers to newer or less widely used technologies including 3D XPoint, ReRAM, MRAM and STT-RAM.

Because NVM properly refers to such a wide range of technologies, many people use the term Storage Class Memory (SCM) to refer to emerging byte-addressable non-volatile memory technologies that may soon be used in enterprise storage systems. These SCM technologies include 3D XPoint, ReRAM, MRAM and STT-RAM. SCM offers several advantages compared to NAND flash:

  • Much lower latency
  • Much higher write endurance
  • Byte-addressable (like DRAM memory)

Storage Class Memory Enables Storage System Innovation

Byte-addressable non-volatile memory on NVMe/PCIe opens up a wonderful set of opportunities to system architects. Initially, storage class memory will generally be used as an expanded cache or as the highest performing tier of persistent storage. Thus it will complement rather than replace NAND flash memory in most storage systems. For example, HPE has announced it will use Intel Optane (3D XPoint) as an extension of DRAM cache. Their tests of HPE 3PAR 3D Cache produced a 50% reduction in latency and an 80% increase in IOPS.
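To make the cache-extension idea concrete, here is a conceptual sketch of a tiered read path with DRAM in front of an SCM cache extension and NAND flash behind both. It is illustrative only and is not HPE 3PAR 3D Cache or any vendor's actual implementation.

```python
# Conceptual read path only (not any vendor's code): check DRAM first, then the
# SCM cache extension, and fall back to the NAND flash tier on a miss.
dram_cache, scm_cache, nand_tier = {}, {}, {}

def read_block(lba):
    """Return the data for a logical block address, promoting hits into DRAM."""
    if lba in dram_cache:          # fastest path: DRAM hit
        return dram_cache[lba]
    if lba in scm_cache:           # next: byte-addressable SCM extension of the cache
        data = scm_cache[lba]
    else:                          # miss: read from the NAND flash tier
        data = nand_tier.get(lba)
    dram_cache[lba] = data         # promote for subsequent reads (eviction omitted)
    return data

# Example: a block resident only in SCM is served without touching NAND.
scm_cache["lba-1001"] = b"hot-but-not-hottest-data"
print(read_block("lba-1001"))
```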

Some of the innovative uses of SCM will probably never be mainstream, but will make sense for a specific set of use cases where microseconds can mean millions of dollars. For example, E8 Storage uses 100% Intel Optane SCM in its E8-X24 centralized NVMe appliance to deliver extreme performance.

Remain Calm, Look for Short Term Wins, Anticipate Major Changes

We humans have a tendency to overestimate short term and underestimate long term impacts. In a recent blog article we asserted that NVMe is an exciting and needed breakthrough, but that differences persist between what NVMe promises for all-flash array and hyperconverged solutions and what they can deliver in 2018. Nevertheless, IT professionals should look for real application and requirements-based opportunities for NVMe, even in the short term.

Longer term, the emergence of NVMe and storage class memory are steps on the path to a new data centric architecture. As we have previously suggested, enterprise technologists should plan technology refreshes through 2020 around NVMe and NVMe-oF. Beyond 2020, enterprise technologists should plan their technology refreshes around a composable data centric architecture.




DCIG 2018-19 All-flash Array Buyer’s Guide Now Available

DCIG is pleased to announce the availability of the DCIG 2018-19 All-flash Array Buyer’s Guide edition developed from its enterprise storage array body of research. This 64-page report presents a fresh snapshot of the dynamic all-flash array (AFA) marketplace. It evaluates and ranks thirty-two (32) enterprise class all-flash arrays that achieved rankings of Recommended or Excellent based on a comprehensive scoring of product features. These products come from seven (7) vendors including Dell EMC, Hitachi Vantara, HPE, Huawei, NetApp, Pure Storage and Tegile.

DCIG’s succinct analysis provides insight into the state of the all-flash array marketplace, the benefits organizations can expect to achieve, and key features organizations should be aware of as they evaluate products. It also provides brief observations about the distinctive features of each product.

The DCIG 2018-19 All-flash Array Buyer’s Guide helps businesses drive time and cost out of the all-flash array selection process by:

  • Describing key product considerations and important changes in the marketplace
  • Gathering normalized data about the features each product supports
  • Providing an objective, third-party evaluation of those features from an end-user perspective
  • Presenting product feature data in standardized one-page data sheets that facilitate rapid feature-based comparisons

It is in this context that DCIG presents the DCIG 2018-19 All-Flash Array Buyer’s Guide. As prior DCIG Buyer’s Guides have done, it puts at the fingertips of enterprises a resource that can assist them in this important buying decision.

Access to this Buyer’s Guide edition is available through the following DCIG partner site: TechTrove.




DCIG 2018-19 Enterprise General Purpose All-flash Array Buyer’s Guide Now Available

DCIG is pleased to announce the availability of the DCIG 2018-19 Enterprise General Purpose All-flash Array Buyer’s Guide developed from its enterprise storage array body of research. This 72-page report presents a fresh snapshot of the dynamic all-flash array (AFA) marketplace. It evaluates and ranks thirty-eight (38) enterprise class all-flash arrays that achieved rankings of Recommended or Excellent. These products come from nine (9) vendors including Dell EMC, Hitachi Vantara, HPE, Huawei, IBM, Kaminario, NetApp, Pure Storage and Tegile.

DCIG’s succinct analysis provides insight into the state of the enterprise all-flash storage array marketplace, the benefits organizations can expect to achieve, and key features organizations should be aware of as they evaluate products. It also provides brief observations about the distinctive features of each product.

The DCIG 2018-19 Enterprise General Purpose All-flash Array Buyer’s Guide helps businesses drive time and cost out of the all-flash array selection process by:

  • Describing key product considerations and important changes in the marketplace
  • Gathering normalized data about the features each product supports
  • Providing an objective, third-party evaluation of those features from an end-user perspective
  • Presenting product feature data in standardized one-page data sheets that facilitate rapid feature-based comparisons

It is in this context that DCIG presents the DCIG 2018-19 Enterprise General Purpose All-Flash Array Buyer’s Guide. As prior DCIG Buyer’s Guides have done, it puts at the fingertips of enterprises a resource that can assist them in this important buying decision.

Access to this Buyer’s Guide is available through the following DCIG partner site: TechTrove.




Six Best Practices for Implementing All-flash Arrays

Almost any article published today about enterprise data storage touts the benefits of flash memory. However, while many organizations now use flash somewhere in the enterprise, most are only beginning to deploy it at a scale where it hosts more than a handful of their applications. As organizations look to deploy flash more broadly, here are six best practices to keep in mind.

The six best practices outlined below are united by a single overarching principle: the data center is not merely a collection of components; it is an interdependent system. Therefore, the results achieved by changing any key component will be constrained by its interactions with the performance limits of other components. Optimal results come from optimizing the data center as a system.

Photo by Dan Gold on Unsplash

Best Practice #1: Focus on Accelerating Applications

Business applications are the reason businesses run data centers. Therefore, accelerating applications is a useful focus in evaluating data center infrastructure investments. Eliminating storage performance bottlenecks by implementing an all-flash array (AFA) may reveal bottlenecks elsewhere in the infrastructure, including in the applications themselves.

Getting the maximum performance benefit from an AFA may require more or faster connections to the data center network, changes to how the network is structured and other network configuration details. Application servers may require new network adapters, more DRAM, adjustments to cache sizes and other server configuration details. Applications may require configuration changes or even some level of recoding. Some AFAs include utilities that will help identify the bottlenecks wherever they occur along the data path.

Best Practice #2: Mind the Failure Domain

Consolidation can yield dramatic savings, but it is prudent to consider the failure domain, and how much of an organization’s infrastructure should depend on any one component—including an all-flash array. While all the all-flash arrays that DCIG covers in its All-flash Array Buyer’s Guides are “highly available” by design, some are better suited to deliver high availability than others. Be sure the one you select matches your requirements and your data center design.

Best Practice #3: Use Quality of Service Features and Multi-tenancy to Consolidate Confidently

Quality of Service (QoS) features enable an array to give critical business applications priority access to storage resources. Multi-tenancy allocates resources to specific business units and/or departments and limits the percentage of resources that they can consume on the all-flash array at one time. Together, these features protect the array from being monopolized by any one application or bad actor.
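As a mental model of how an IOPS cap protects the array, consider the minimal sketch below. It is a generic token-bucket illustration with hypothetical names (Tenant, allow_io), not any vendor's QoS API.

```python
# Generic token-bucket illustration of per-tenant IOPS caps; class and method
# names are hypothetical, not any vendor's QoS implementation.
import time

class Tenant:
    def __init__(self, name, max_iops):
        self.name = name
        self.max_iops = max_iops        # IOPS ceiling for this tenant
        self.tokens = float(max_iops)   # one token admits one I/O
        self.last_refill = time.monotonic()

    def allow_io(self):
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, never exceeding the cap.
        self.tokens = min(self.max_iops,
                          self.tokens + (now - self.last_refill) * self.max_iops)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True                 # I/O admitted
        return False                    # I/O throttled; queue or retry later

# The ERP tenant cannot be starved by a runaway test/dev workload,
# because test/dev is capped at a tenth of the IOPS.
erp, testdev = Tenant("erp", 50_000), Tenant("testdev", 5_000)
print(erp.allow_io(), testdev.allow_io())
```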

Best Practice #4: Pursue Automation

Automation can dramatically reduce the amount of time spent on routine storage management and enable new levels of IT agility. This is where features such as predictive analytics come into play. They help remove the risk associated with managing all-flash arrays in complex, consolidated environments. For instance, they can intervene proactively, identifying problems before they impact production applications and taking steps to resolve them.

Best Practice #5: Realign Roles and Responsibilities

Implementing an all-flash storage strategy involves more than technology. It can, and should, reshape roles and responsibilities within the central IT department and between central IT, develop­ers and business unit technologists. Thinking through the possible changes with the various stakeholders can reduce fear, eliminate obstacles, and uncover opportunities to create additional value for the business.

Best Practice #6: Conduct a Proof of Concept Implementation

A good proof-of-concept can validate feature claims and uncover performance-limiting bottlenecks elsewhere in the infrastructure. The key to a good proof-of-concept is an environment in which you can accurately host and test your production workloads on the AFA.

A Systems Approach Will Yield the Best Result

Organizations that approach the AFA evaluation from a systems perspective (recognizing and honoring the fact that the data center is an interdependent system that includes hardware, software and people) and that apply these six best practices during an all-flash array purchase decision are far more likely to achieve the objectives that prompted them to look at all-flash arrays in the first place.

DCIG is preparing a series of all-flash array buyer’s guides that will help organizations considering the purchase of an all-flash array. DCIG buyer’s guides accelerate the evaluation process and facilitate better-informed decisions. Look for these buyer’s guides beginning in the second quarter of 2018. Visit the DCIG web site to discover more articles that provide actionable analysis for your data center infrastructure decisions.




Seven Significant Trends in the All-Flash Array Marketplace

Much has changed since DCIG published the DCIG 2017-18 All-Flash Array Buyer’s Guide just one year ago. The DCIG analyst team is in the final stages of preparing a fresh snapshot of the all-flash array (AFA) marketplace. As we reflected on the fresh all-flash array data and compared it to the data we collected just a year ago, we observed seven significant trends in the all-flash array marketplace that will influence buying decisions through 2019.

Trend #1: New Entrants, but Marketplace Consolidation Continues

Although new storage providers continue to enter the all-flash array marketplace, primarily focused on NVMe over Fabrics, the larger trend is continued consolidation. HPE acquired Nimble Storage. Western Digital acquired Tegile.

Every well-known provider has made at least one all-flash acquisition. Consequently, some providers are in the process of “rationalizing” their all-flash portfolios. For example, HPE has decided to position Nimble Storage AFAs as “secondary flash”. HPE also announced it will implement Nimble’s InfoSight predictive analytics platform across HPE’s entire portfolio of data center products, beginning with 3PAR StoreServ storage. Dell EMC seems to be positioning VMAX as its lead product for mission critical workloads, Unity for organizations that value simplified operations, XtremIO for VDI/test/dev, and SC for low cost capacity.

Nearly all the AFA providers also offer at least one hyperconverged infrastructure product. These hyperconverged products compete with AFAs for marketing and data center infrastructure budgets. This will create additional pressure on AFA providers and may drive further consolidation in the marketplace.

Trend #2: Flash Capacity is Increasing Dramatically

The raw capacity of the more than 100 all-flash arrays DCIG researched averaged 4.4 petabytes. This is a 5-fold increase compared to the products in the 2017-18 edition. The highest capacity product can provide 70 petabytes (PB) of all-flash capacity. This is a 7-fold increase. Thus, AFAs now offer the capacity required to be the storage resource for all active workloads in any organization.

Graph of all-flash array raw capacity (Source: DCIG, n=102)

Trend #3: Storage Density is Increasing Dramatically

The average flash density of the AFAs continues to climb. Fully half of the AFAs that DCIG researched achieve greater than 50 TB per rack unit (TB/RU). Some AFAs can provide over 200 TB/RU. The combination of all-flash performance and high storage density means that an AFA may be able to meet an organization’s performance and capacity requirements in 1/10th the space of legacy HDD storage systems and the first generation of all-flash arrays (see the sizing sketch below). This creates an opportunity for many organizations to realize significant data center cost reductions. Some have eliminated data centers. Others have been able to delay building new data centers.

Graph of all-flash array storage density (Source: DCIG, n=102)
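To show what those density figures mean in practice, here is a minimal sizing sketch with illustrative numbers: it computes the rack units needed to reach a 2 PB raw capacity target at a few assumed densities.

```python
# Minimal sizing sketch with illustrative numbers: rack units needed to reach a
# 2 PB raw capacity target at several assumed storage densities (TB per rack unit).
REQUIRED_TB = 2_000   # 2 PB target

for density_tb_per_ru in (20, 50, 200):                  # assumed densities for illustration
    rack_units = -(-REQUIRED_TB // density_tb_per_ru)    # ceiling division
    print(f"{density_tb_per_ru:>3} TB/RU -> {rack_units} RU")
# 20 TB/RU needs 100 RU (more than two full racks); 200 TB/RU fits in 10 RU.
```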

Trend #4: Rapid Uptake in Components that Increase Performance

Increases in flash memory capacity and density are being matched with new components that increase array performance. These components include:

  • a new generation of multi-core CPUs from Intel
  • 32 Gb Fibre Channel and 25/40/100 Gb Ethernet
  • GPUs
  • ASICs to offload storage tasks
  • NVMe connectivity to SSDs

Each of these components can unlock more of the performance available from flash memory. Organizations should assess how well these components are integrated to systemically unlock the performance of flash memory and of their own applications.

Chart of front-end connectivity percentages (Source: DCIG, n=102)

Trend #5: Unified Storage is the New Normal

The first generations of all-flash arrays were nearly all block-only SAN arrays. Tegile was perhaps the only truly unified AFA provider. Today, more than half of the all-flash arrays DCIG researched support unified storage. This support for multiple concurrent protocols creates an opportunity to consolidate and accelerate more types of workloads.

Trend #6: Most AFAs can use Public Cloud Storage as a Target

Most AFAs can now use public cloud storage as a target for cold data or for snapshots as part of a data protection mechanism. In many cases this target is actually one of the provider’s own arrays running in a cloud data center or a software-defined storage instance of its storage system running in one of the true public clouds.

Trend #7: Predictive Analytics Get Real

Some storage providers can document how predictive storage analytics is enabling increased availability, reliability, and application performance. The promise is huge. Progress varies. Every prospective all-flash array purchaser should incorporate predictive analytics capabilities into their evaluation of these products, particularly if the organization intends to consolidate multiple workloads onto a single all-flash array.

Conclusion: All Active Workloads Belong on All-Flash Storage

Any organization that has yet to adopt an all-flash storage infrastructure for all active workloads is operating at a competitive disadvantage. The current generation of all-flash arrays creates business value by:

  • making existing applications run faster even as data sets grow
  • accelerating application development
  • enabling IT departments to say, “Yes” to new workloads and then get those new workloads producing results in record time
  • driving down data center capital and operating costs

DCIG expects to finalize our analysis of all-flash arrays and present the resulting snapshot of this dynamic marketplace in a series of buyer’s guides during the second quarter of 2018.




Two Most Disruptive Storage Technologies at the NAB 2018 Show

The exhibit halls at the annual National Association of Broadcasters (NAB) show in Las Vegas always contain eye-popping displays highlighting recent technological advances as well as what is coming down the path in the world of media and entertainment. But behind NAB’s glitz and glamour lurks a hard, cold reality: every word recorded, every picture taken, and every scene filmed must be stored somewhere, usually multiple times, and be available at a moment’s notice. It was in these halls at the NAB show that DCIG identified two start-ups with storage technologies poised to disrupt business as usual.

Storbyte. While I was walking the floor at NAB, a tall, blond individual literally yanked me by the arm and asked if I had ever heard of Storbyte. Truthfully, the answer was no. This person turned out to be Steve Groenke, Storbyte’s CEO, and what ensued was a great series of conversations while at NAB.

Storbyte has come to market with an all-flash array. However, it took a very different approach to solving the problems of longevity, availability and sustainable high write performance in SSDs and the storage systems built with them. What makes it so disruptive is that Storbyte meets the demand for extreme sustained write performance by slowing flash down, and it does so at a fraction of the cost of other all-flash arrays.

In today’s all-flash designs, every flash vendor is actively pursuing high-performance storage. The approach they take is to maximize the bandwidth to each SSD. This means their systems must use PCIe-attached SSDs addressed via the new NVMe protocol.

Storbyte chose to tackle the problem differently. Its initial target customers had continuous, real-time capture and analysis requirements and routinely burned through the most highly regarded enterprise-class SSDs in about seven months. Two things killed NAND flash in these environments: heat and writes.

To address this problem, Storbyte reduces heat and the number of writes that each flash module experiences by incorporating sixteen mSATA SSDs into each of its Eco*Flash SSDs. Further, Storbyte slows down the CPUs in each of the mSATA modules in its system and then wide-stripes writes across all of them. According to Storbyte, this requires only about 25% of the available CPU on each mSATA module, so the modules use less power. By also managing the writes, Storbyte simultaneously extends the life of each mSATA module in its Eco*Flash drives.
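The wide-striping idea can be pictured with the simplified sketch below. It is a generic round-robin illustration across sixteen modules, not Storbyte's actual firmware logic.

```python
# Simplified illustration of wide-striping (not Storbyte's firmware): write blocks
# are distributed round-robin across sixteen mSATA modules so no single module
# absorbs a disproportionate share of the writes or the heat.
NUM_MODULES = 16
modules = [[] for _ in range(NUM_MODULES)]   # stand-in for per-module write queues

def wide_stripe_write(blocks):
    """Spread a burst of write blocks evenly across all modules."""
    for i, block in enumerate(blocks):
        modules[i % NUM_MODULES].append(block)

# A 64-block burst lands as exactly four blocks per module.
wide_stripe_write([f"block-{n}" for n in range(64)])
print([len(q) for q in modules])   # -> [4, 4, 4, ..., 4]
```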

The end result is a low-cost, high-performance, very dense, power-efficient all-flash array built from flash cards that rely on “older,” “slower,” consumer-grade mSATA flash memory modules, yet it can drive 1.6 million IOPS in a 4U system. More notably, its systems cost about a quarter of what competitive “high-performance” all-flash arrays cost while packing more than a petabyte of raw flash memory capacity into 4U of rack space and using less power than almost any other all-flash array.

Wasabi. Greybeards in the storage world may recognize the Wasabi name as a provider of iSCSI SANs. Well, right name but different company. The new Wasabi recently came out of stealth mode as a low-cost, high-performance cloud storage provider. By low cost and high performance, we mean one-fifth the cost of Amazon’s slowest offering (Glacier) at six times the speed of Amazon’s highest-performing S3 offering. In other words, you can have your low-cost cloud storage and eat it too.

What makes its offering so compelling is that it offers storage capacity at $4.99/TB per month. That’s it. No additional egress charges every time you download files. No complicated monthly statements to decipher to figure out how much you are spending and where. No costly storage architects to hire to figure out how to tier data to optimize performance and costs. This translates into one fast cloud storage tier at a much lower cost than the Big 3 (Amazon AWS, Google Cloud, and Microsoft Azure).
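To see why flat capacity pricing matters, the sketch below compares a month of storage under the article's $4.99/TB/month figure against a hypothetical hyperscaler tier. The per-GB capacity and egress rates used for the hyperscaler are assumed example figures for illustration, not quoted prices.

```python
# Illustrative monthly cost comparison. Wasabi's $4.99/TB/month comes from the
# article; the hyperscaler capacity and egress rates below are assumed example
# figures for the sketch, not quoted prices.
STORED_TB = 100   # data kept in the cloud
EGRESS_TB = 20    # data read back out each month

wasabi_cost = STORED_TB * 4.99   # flat capacity price, no egress fees

assumed_capacity_per_gb = 0.023  # hypothetical per-GB-month capacity rate
assumed_egress_per_gb = 0.09     # hypothetical per-GB egress rate
hyperscaler_cost = (STORED_TB * 1000 * assumed_capacity_per_gb
                    + EGRESS_TB * 1000 * assumed_egress_per_gb)

print(f"Wasabi              : ${wasabi_cost:,.2f}/month")
print(f"Example hyperscaler : ${hyperscaler_cost:,.2f}/month")
```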

Granted, Wasabi is a cloud storage provider start-up so there is an element of buyer beware. However, it is privately owned and well-funded. It is experiencing explosive growth with over 1600 customers in just its few months of operation. It anticipates raising another round of funding. It already has data centers scattered throughout the United States and around the world with more scheduled to open.

Even so, past horror stories about cloud providers shutting their doors give every company pause about using a relatively unknown quantity to store their data. In these cases, Wasabi recommends that companies use its solution as their secondary cloud.

Its cloud offering is fully S3-compatible, and most companies want a cloud alternative anyway. In this instance, store copies of your data with both Amazon and Wasabi. Once stored, run any queries, production workloads, and the like against the Wasabi cloud. The Amazon egress charges that your company avoids by accessing its data on the Wasabi cloud will more than justify the risk of storing the data you routinely access on Wasabi. Then, in the unlikely event Wasabi does go out of business (not that it has any plans to do so), companies still have a copy of their data with Amazon that they can fail back to.

This argument seems to resonate well with prospects. While I could not substantiate these claims, Wasabi said it is seeing multi-petabyte deals come its way on the NAB show floor. By using Wasabi instead of Amazon in the use case just described, these companies can save hundreds of thousands of dollars per month just by avoiding Amazon’s egress charges while mitigating the risk associated with using a start-up cloud provider such as Wasabi.

Editor’s Note: The spelling of Storbyte was corrected on 4/24.




Predictive Analytics in Enterprise Storage: More Than Just Highfalutin Mumbo Jumbo

Enterprise storage startups are pushing the storage industry forward faster and in directions it may never have gone without them. It is because of these startups that flash memory is now the preferred place to store critical enterprise data. Startups also advanced the customer-friendly all-inclusive approach to software licensing, evergreen hardware refreshes, and pay-as-you-grow utility pricing. These startup-inspired changes delight customers, who are rewarding the startups with large follow-on purchases and Net Promoter Scores (NPS) previously unseen in this industry. Yet the greatest contribution startups may make to the enterprise storage industry is applying predictive analytics to storage.

The Benefits of Predictive Analytics for Enterprise Storage

Gilbert advises Anne to stop using “highfalutin mumbo jumbo” in her writing. (Note 1)

The end goal of predictive analytics for the more visionary startups goes beyond eliminating downtime. Their goal is to enable data center infrastructures to autonomously optimize themselves for application availability, performance and total cost of ownership based on the customer’s priorities.

The vendors that commit to this path and execute better than their competitors are creating value for their customers. They are also enabling their own organizations to scale up revenues without scaling out staff. Vendors that succeed in applying predictive analytics to storage today also position themselves to win tomorrow in the era of software-defined data centers (SDDC) built on top of composable infrastructures.

To some people this may sound like a bunch of “highfalutin mumbo jumbo”, but vendors are making real progress in applying predictive analytics to enterprise storage and other elements of the technical infrastructure. These vendors and their customers are achieving meaningful benefits including:

  • Measurably reducing downtime
  • Avoiding preventable downtime
  • Optimizing application performance
  • Significantly reducing operational expenses
  • Improving NPS

HPE Quantifies the Benefits of InfoSight Predictive Analytics

Incumbent technology vendors are responding to this pressure from startups in a variety of ways. HPE purchased Nimble Storage, the prime mover in this space, and plans to extend the benefits of Nimble’s InfoSight predictive analytics to its other enterprise infrastructure products. HPE claims its Nimble Storage array customers are seeing the following benefits from InfoSight:

  • 99.9999% of measured availability across its installed base
  • 86% of problems are predicted and automatically resolved before customers even realize there is an issue
  • 85% less time spent managing and resolving storage-related problems
  • 79% savings in operational expense (OpEx)
  • 54% of issues pinpointed are not storage, identified through InfoSight cross-stack analytics
  • 42 minutes: the average level three engineer time required to resolve an issue
  • 100% of issues go directly to level three support engineers, no time wasted working through level one and level two engineers

The Current State of Affairs in Predictive Analytics

HPE is certainly not alone on this journey. In fact, vendors are claiming some use of predictive analytics for more than half of the all-flash arrays DCIG researched (Source: DCIG, n=103).

Telemetry Data is the Foundation for Predictive Analytics

Storage array vendors use telemetry data collected from the installed product base in a variety of ways. Most vendors evaluate fault data and advise customers how to resolve problems, or they remotely log in and resolve problems for their customers.

Many all-flash arrays transmit not just fault data, but extensive additional telemetry data about workloads back to the vendors. This data includes IOPS, bandwidth, and latency associated with workloads, front end ports, storage pools and more. Some vendors apply predictive analytics and machine learning algorithms to data collected across the entire installed base to identify potential problems and optimization opportunities for each array in the installed base.
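For a sense of what such telemetry might look like, here is an illustrative sample record and a trivial fleet comparison. The field names and the fleet statistic are assumptions made for this sketch, not any vendor's actual schema.

```python
# Illustrative shape of one telemetry sample from an array; every field name and
# the fleet statistic below are assumptions for this sketch, not a vendor schema.
telemetry_sample = {
    "array_id": "afa-0042",
    "timestamp": "2018-06-01T12:00:00Z",
    "workloads": [
        {"name": "oracle-prod", "iops": 85_000, "bandwidth_mbps": 900, "latency_ms": 0.4},
        {"name": "vdi-pool-1",  "iops": 30_000, "bandwidth_mbps": 250, "latency_ms": 0.9},
    ],
    "front_end_ports": [{"port": "fc0", "utilization_pct": 62}],
    "storage_pools": [{"pool": "pool-1", "used_pct": 71}],
    "faults": [],
}

# A fleet-wide analytics pipeline might flag workloads whose latency drifts well
# above the norm observed for similar workloads across the installed base.
fleet_p95_latency_ms = 0.6   # assumed fleet-wide statistic
outliers = [w["name"] for w in telemetry_sample["workloads"]
            if w["latency_ms"] > fleet_p95_latency_ms]
print(outliers)   # -> ['vdi-pool-1']
```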

Predictive Analytics Features that Matter

Proactive interventions identify something that is going to create a problem and then notify clients about the issue. Interventions may consist of providing guidance in how to avoid the problem or implementing the solution for the client. A wide range of interventions are possible including identifying the date when an array will reach full capacity or identifying a network configuration that could create a loop condition.
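One concrete example of a proactive intervention is capacity-full prediction. The hypothetical sketch below fits a line to recent utilization samples to estimate the date an array will run out of space; real implementations are considerably more sophisticated.

```python
# Hypothetical sketch of one proactive intervention: estimate the date an array
# reaches full capacity by fitting a line to recent utilization samples.
from datetime import date, timedelta

def predict_full_date(samples, capacity_tb):
    """samples: list of (day_index, used_tb) pairs, oldest first."""
    n = len(samples)
    mean_x = sum(x for x, _ in samples) / n
    mean_y = sum(y for _, y in samples) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in samples)
             / sum((x - mean_x) ** 2 for x, _ in samples))   # TB of growth per day
    if slope <= 0:
        return None                      # capacity is flat or shrinking
    days_left = (capacity_tb - samples[-1][1]) / slope
    return date.today() + timedelta(days=round(days_left))

# 90 days of history growing ~0.5 TB/day on a 500 TB array -> full in roughly 10 months.
history = [(d, 300 + 0.5 * d) for d in range(90)]
print(predict_full_date(history, capacity_tb=500))
```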

Recommending configuration changes enhances application performance at a site by comparing the performance of the same application at similar sites, discovering optimal configurations, and recommending changes at each site.

Tailored configuration changes prevent outages or application performance issues based on the vendor seeing and fixing problems caused by misconfigurations. The vendor deploys the fix to other sites that run the same applications, eliminating potential problems. The vendor goes beyond recommending changes by packaging the changes into an installation script that the customer can run, or by implementing the recommended changes on the customer’s behalf.

Tailored software upgrades eliminate outages based on the vendor seeing and fixing incompatibilities they discover between a software update and specific data center environments. These vendors use analytics to identify similar sites and avoid making the software update available to those other sites until they have resolved the incompatibilities. Consequently, site administrators are only presented with software updates that are believed to be safe for their environment.

Predictive Analytics is a Significant Yet Largely Untapped Opportunity

Vendors are already creating much value by applying predictive analytics to enterprise storage. Yet no vendor or product comes close to delivering all the value that is possible. A huge opportunity remains, especially considering the trends toward software-defined data centers and composable infrastructures. Reflecting for even a few minutes on the substantial benefits that predictive analytics is already delivering should prompt every prospective all-flash array purchaser to incorporate predictive analytics capabilities into their evaluation of these products and the vendors that provide them.

Note 1: Image source: https://jamesmacmillan.wordpress.com/2012/04/02/highfalutin-mumbo-jumbo/