HCI Comparison Report Reveals Key Differentiators Between Dell EMC VxRail and Nutanix NX

Many organizations view hyper-converged infrastructure (HCI) as the data center architecture of the future. Dell EMC VxRail and Nutanix NX appliances are two leading options for creating the enterprise hybrid cloud. Visibility into their respective data protection ecosystems, enterprise application certifications, solution integration, support for multiple hypervisors, scalability and maturity should help organizations choose the most appropriate solution for them.

HCI Appliances Deliver Radical Simplicity

Hyper-converged infrastructure appliances radically simplify the data center architecture. These pre-integrated appliances accelerate and simplify infrastructure deployment and management. They combine and virtualize compute, memory, storage and networking functions from a single vendor in a scale-out cluster. Thus, the stakes are high for vendors such as Dell EMC and Nutanix as they compete to own this critical piece of data center real estate.

In the last several years, HCI has also emerged as a key enabler for cloud adoption. These solutions provide connectivity to public and private clouds, and offer their own cloud-like properties. Ease of scaling, simplicity of management, plus non-disruptive hardware upgrades and data migrations are among the features that enterprises love about these solutions.

HCI Appliances Are Not All Created Equal

Many enterprises are considering HCI solutions from providers Dell EMC and Nutanix. A cursory examination of these two vendors and their solutions quickly reveals similarities between them. For example, both companies control the entire hardware and software stacks of their HCI appliances. Also, both providers pretest firmware and software updates and automate cluster-wide roll-outs.

Nevertheless, important differences remain between the products. Due to the high level of interest in these products, DCIG published an initial comparison in November 2017. Both providers recently enhanced their offerings. Therefore, DCIG refreshed its research and has released an updated head-to-head comparison of the Dell EMC VxRail and Nutanix NX appliances.

Updated DCIG Pocket Analyst Report Reveals Key HCI Differentiators

In this updated report, DCIG identifies six ways the HCI solutions from these two providers currently differentiate themselves from one another. This succinct, 4-page report includes a detailed feature matrix as well as insight into key differentiators between these two HCI solutions such as:

  • Breadth of ecosystem
  • Enterprise applications certified
  • Multi-hypervisor flexibility
  • Scalability
  • Solution integration
  • Vendor maturity

DCIG is pleased to make this updated DCIG Pocket Analyst Report available for purchase for $99.95 via the TechTrove marketplace. The report is temporarily also available free of charge with registration from the Unitrends website.




VMware vSphere and Nutanix AHV Hypervisors: An Updated Head-to-Head Comparison

Many organizations view hyper-converged infrastructure appliances (HCIAs) as foundational for the cloud data center architecture of the future. However, as part of an HCIA solution, one must also select a hypervisor to run on this platform. The VMware vSphere and Nutanix AHV hypervisors are two capable choices but key differences exist between them.

In the last several years, HCIAs have emerged as a key enabler for cloud adoption. Aside from the connectivity to public and private clouds that these solutions often provide, they offer their own cloud-like properties. Ease of scaling, simplicity of management, and non-disruptive hardware upgrades and data migrations highlight the list of features that enterprises are coming to know and love about these solutions.

But as enterprises adopt HCIA solutions in general as well as HCIA solutions from providers like Nutanix, they must still evaluate key features in these solutions. One variable that enterprises should pay specific attention to is the hypervisors available to run on these HCIA solutions.

Unlike some other HCIA solutions, Nutanix gives organizations the flexibility to choose which hypervisor they want to run on their HCIA platform. They can choose to run the widely adopted VMware vSphere, or Nutanix’s own Acropolis hypervisor (AHV).

What is not always so clear is which one they should host on the Nutanix platform. Each hypervisor has its own set of benefits and drawbacks. To help organizations make a more informed choice as to which hypervisor is the best one for their environment, DCIG is pleased to make available its updated DCIG Pocket Analyst Report, a head-to-head comparison of the VMware vSphere and Nutanix AHV hypervisors.

This succinct, 4-page report includes a detailed product matrix as well as insight into seven key differentiators between these two hypervisors and which one is best positioned to deliver on key cloud and data center considerations such as:

  • Data protection ecosystem
  • Support for Guest OSes
  • Support for VDI platforms
  • Certified enterprise applications
  • Fit with corporate direction
  • More favorable licensing model
  • Simpler management

This DCIG Pocket Analyst Report is available for purchase for $99.95 via the TechTrove marketplace. The report is temporarily also available free of charge with registration from the Unitrends website.




Dell EMC VxRail vs Nutanix NX: Six Key HCIA Differentiators

Many organizations view hyper-converged infrastructure appliances (HCIAs) as the data center architecture of the future. Dell EMC VxRail and Nutanix NX appliances are two leading options for creating the enterprise hybrid cloud. Visibility into their respective data protection ecosystems, enterprise application certifications, solution integration, support for multiple hypervisors, scalability and maturity should help organizations choose the most appropriate solution for them.

Hyper-converged infrastructure appliances (HCIA) radically simplify the next generation of data center architectures. Combining and virtualizing compute, memory, storage, networking, and data protection functions from a single vendor in a scale-out cluster, these pre-integrated appliances accelerate and simplify infrastructure deployment and management. As such, the stakes are high for vendors such as Dell EMC and Nutanix that are competing to own this critical piece of data center infrastructure real estate.

In the last several years, HCIAs have emerged as a key enabler for cloud adoption. These solutions provide connectivity to public and private clouds, and offer their own cloud-like properties. Ease of scaling, simplicity of management, and non-disruptive hardware upgrades and data migrations highlight the list of features that enterprises are coming to know and love about these solutions.

But as enterprises consider HCIA solutions from providers such as Dell EMC and Nutanix, they must still evaluate key features available on these solutions as well as the providers themselves. A cursory examination of these two vendors and their respective solutions quickly reveals similarities between them. For example, both companies control the entire hardware and software stacks of their respective HCIA solutions. Both pre-test firmware and software updates holistically and automate cluster-wide roll-outs.

Despite these similarities, differences between them remain. To help enterprises select the product that best fits their needs, DCIG published its first comparison of these products in November 2017. There is a high level of interest in these products, and both providers recently enhanced their offerings. Therefore, DCIG refreshed its research and has released an updated head-to-head comparison of the Dell EMC VxRail and Nutanix NX appliances.

In this updated report, DCIG identifies six ways the HCIA solutions from these two providers currently differentiate themselves from one another. This succinct, 4-page report includes a detailed product matrix as well as insight into key differentiators between these two HCIA solutions such as:

  • Breadth of ecosystem
  • Enterprise applications certified
  • Multi-hypervisor flexibility
  • Scalability
  • Solution integration
  • Vendor maturity

DCIG is pleased to make this updated DCIG Pocket Analyst Report available for purchase for $99.95 via the TechTrove marketplace.




Storage Analytics and Latency Matters

Some pretty amazing storage performance numbers are being bandied about these days. Generally speaking, these heretofore unheard-of claims of millions of IOPS and latencies measured in microseconds include references to NVMe and perhaps storage class memories. What ultimately matters to a business, though, is the performance of its applications, not just of its storage arrays. When an application is performing poorly, identifying the root cause can be a difficult and time-consuming challenge. This is particularly true in virtualized infrastructures. But meaningful help is now available to address this challenge through advances in storage analytics.

Storage Analytics Delivers Quantifiable Value

In a previous blog article about the benefits of Predictive Analytics in Enterprise Storage, I mentioned HPE’s InfoSight predictive analytics and the VMVision cross-stack analytics tool they released in mid-2015. HPE claims its Nimble Storage array customers are seeing the following benefits from InfoSight:

  • 99.9999% of measured availability across its installed base
  • 86% of problems are predicted and automatically resolved before customers even realize there is an issue
  • 85% less time spent managing and resolving storage-related problems
  • 79% savings in operational expense (OpEx)
  • 54% of issues pinpointed are not storage-related, as identified through InfoSight VMVision cross-stack analytics
  • 42 minutes: the average level three engineer time required to resolve an issue
  • 100% of issues go directly to level three support engineers, no time wasted working through level one and level two engineers

 

Pure Storage also offers predictive analytics, called Pure1 Meta. On September 20, 2018, Pure Storage released an extension of the Pure1 Meta platform called VM Analytics. Even in this first release, VM Analytics is clearly going to simplify and accelerate the process of resolving performance problems for Pure Storage FlashArray customers.

Application Latency is a Systemic Issue

The online demonstration of VM Analytics quickly impressed me with the fact that application latency is a systemic issue, not just a storage performance issue. The partial screen shot from the Pure1 VM Analytics tool included below shows a virtual machine delivering an average latency of 7.4 milliseconds. This view into performance provided by VM Analytics enables IT staff to quickly zero in on the VM itself as the place to focus in resolving the performance issue.

Screen shot: Pure1 VM Analytics

This view also shows that the datastore is responsible for less than 1 millisecond of that 7.4 milliseconds of latency. My point is that application latency depends on factors beyond the storage system. It must be addressed as a systemic issue.
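To make the systemic view concrete, here is a minimal sketch (illustrative only, not how Pure1 VM Analytics itself works) that attributes a VM's total observed latency to the layers for which measurements exist and treats the remainder as host-side or queueing delay. The layer names and values are assumptions chosen to mirror the scenario above.

```python
# Illustrative sketch: attribute a VM's observed latency to layers of the stack.
# The layer names and numbers are assumptions for demonstration, not values
# reported by any specific analytics tool.

def attribute_latency(total_ms: float, measured_ms: dict) -> dict:
    """Return each measured layer's share of total latency, plus the unattributed remainder."""
    breakdown = dict(measured_ms)
    breakdown["unattributed (host/CPU/queueing)"] = max(
        total_ms - sum(measured_ms.values()), 0.0
    )
    return breakdown

# Example mirroring the scenario above: 7.4 ms total, under 1 ms from the datastore.
vm_total_latency_ms = 7.4
measured = {"datastore": 0.9, "network": 0.5}   # assumed per-layer measurements

for layer, ms in attribute_latency(vm_total_latency_ms, measured).items():
    print(f"{layer:35s} {ms:4.1f} ms ({ms / vm_total_latency_ms:5.1%})")
```

Run against these assumed numbers, most of the 7.4 ms lands in the unattributed bucket, which is exactly the signal that points the investigation away from the storage system.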

Storage Analytics Simplify the Data Center Balancing Act

The key performance resources in a data center include CPU cycles, DRAM, storage systems and the network. Unless a system is dramatically over-provisioned, one of these resources will always constrain the performance of applications. Storage has historically been the limiting factor in application performance but the flash-enabled transformation of the data center has changed that dynamic.

Tools like VMVision and VM Analytics create value by giving data center administrators new levels of visibility into infrastructure performance. Therefore, technology purchasers should carefully evaluate these storage analytics tools as part of the purchase process. IT staff should use these tools to balance the key performance resources in the data center and deliver the best possible application performance to the business.




Six Key Differentiators between HPE 3PAR StoreServ and NetApp AFF A-Series All-flash Arrays

Both HPE and NetApp have multiple enterprise storage product lines. Each company also has a flagship product. For HPE it is the 3PAR StoreServ line. For NetApp it is the AFF (all flash FAS) A-Series. DCIG’s latest Pocket Analyst Report examines these flagship all-flash arrays. The report identifies many similarities between the products, including the ability to deliver low latency storage with high levels of availability, and a relatively full set of data management features.

DCIG’s Pocket Analyst Report also identifies six significant differences between the products. These differences include how each product provides deduplication and other data services, hybrid cloud integration, host-to-storage connectivity, scalability, and simplified management through predictive analytics and bundled or all-inclusive software licensing.

DCIG recently updated its research on the dynamic and growing all-flash array marketplace. In so doing, DCIG identified many similarities between the HPE 3PAR StoreServ and NetApp AFF A-Series products including:

  • Unified SAN and NAS protocol support
  • Extensive support for VMware APIs including VMware Virtual Volumes (VVols)
  • Integration with popular virtualization management consoles
  • Rich data replication and data protection offerings

DCIG also identified significant differences between the HPE and NetApp products including:

  • Hardware-accelerated Inline Data Services
  • Predictive analytics
  • Hybrid Cloud Support
  • Host-to-Storage Connectivity
  • Scalability
  • Licensing simplicity


DCIG’s 4-page Pocket Analyst Report on the Six Key Differentiators between HPE 3PAR StoreServ and NetApp AFF A-Series All-flash Arrays analyzes and compares the flagship all-flash arrays from HPE and NetApp. To see which product has the edge in each of the above categories and why, you can purchase the report on DCIG’s partner site: TechTrove. You may also register on the TechTrove website to be notified should this report become available for no charge at some future time.




Data Center Challenges and Technology Advances Revealed at Flash Memory Summit 2018

If you want to get waist-deep in the technologies that will impact the data centers of tomorrow, the Flash Memory Summit 2018 (FMS) held this week in Santa Clara is the place to do it. This is where the flash industry gets its geek on and everyone on the exhibit floor speaks bits and bytes. There is no better place to learn about advances in flash memory that are sure to show up in products in the very near future and drive further advances in data center infrastructure.

Key themes at the conference include:

  • Processing and storing ever growing amounts of data is becoming more and more challenging. Faster connections and higher capacity drives are coming but are not the whole answer. We need to completely rethink data center architecture to meet these challenges.
  • Artificial intelligence and machine learning are expanding beyond their traditional high-performance computing research environments and into the enterprise.
  • Processing must be moved closer to—and perhaps even into—storage.

Multiple approaches to addressing these challenges, ranging from composable infrastructure to computational storage, were championed at the conference. Some of these solutions will complement one another. Others will compete with one another for mindshare.

NVMe and NVMe-oF Get Real

For the near term, NVMe and NVMe over Fabrics (NVMe-oF) are clear mindshare winners. NVMe is rapidly becoming the primary protocol for connecting controllers to storage devices. A clear majority of product announcements involved NVMe.

WD Brings HDDs into the NVMe World

Western Digital announced the OpenFlex™ architecture and storage products. OpenFlex speaks NVMe-oF across an Ethernet network. In concert with OpenFlex, WD announced Kingfish™, an open API for orchestrating and managing OpenFlex™ infrastructures.

Western Digital is the “anchor tenant” for OpenFlex with a total of seventeen (17) data center hardware and software vendors listed as launch partners in the press release. Notably, WD’s NVMe-oF attached 1U storage device provides up to 168TB of HDD capacity. That’s right – the OpenFlex D3000 is filled with hard disk drives.

While NVMe’s primary use case is to connect to flash memory and emerging ultra-fast memory, companies still want their HDDs. With Western Digital’s approach, organizations can have their NVMe and still get the lower-cost HDDs they want.

Gen-Z Composable Infrastructure Standard Gains Momentum

Gen-Z is an emerging memory-centric architecture designed for nanosecond latencies. Since last year’s FMS, Gen-Z has made significant progress toward this objective. Consider:

  • The consortium publicly released the Gen-Z Core Specification 1.0 on February 13, 2018. Agreement upon a set of 1.0 standards is a critical milestone in the adoption of any new standard. The fact that the consortium’s 54 members agreed to it suggests broad industry adoption.
  • Intel’s adoption of the SFF-TA-1002 “Gen-Z” universal connector for its “Ruler” SSDs reflects increased adoption of the Gen-Z standards. Making this announcement notable is that Intel is NOT currently a member of the Gen-Z consortium, which indicates that Gen-Z standards are gaining momentum even outside of the consortium.
  • The Gen-Z booth included a working Gen-Z connection between a server and a removable DRAM module in another enclosure. This is the first example of a processor being able to use DRAM that is not local to the processor but is instead coming out of a composable pool. This is a concept similar to how companies access shared storage in today’s NAS and SAN environments.

Other Notable FMS Announcements

Many other innovative solutions to data center challenges were also announced at FMS 2018, including:

  • Solarflare NVMe over TCP enables rapid low-latency data movement over standard Ethernet networks.
  • ScaleFlux computational storage avoids the need to move the data by integrating FPGA-based computation into storage devices.
  • Intel announced its 660p Series of SSDs that employ quad-level cell (QLC) technology. QLC stores more data in less space and at a lower cost.

Recommendations

Based on the impressive progress we observed at Flash Memory Summit 2018, we can reaffirm the recommendations we made coming out of last year’s summit…

  • Enterprise technologists should plan technology refreshes through 2020 around NVMe and NVMe-oF. Data center architects and application owners should seek 10:1 improvements in performance, and a similar jump in data center efficiency.
  • Beyond 2020, enterprise technologists should plan their technology refreshes around a composable data centric architecture. Data center architects should track the development of the Gen-Z ecosystem as a possible foundation for their next-generation data centers.



Seven Key Differentiators between Dell EMC VMAX and HPE 3PAR StoreServ Systems

Dell EMC and Hewlett Packard Enterprise are enterprise technology stalwarts. Thousands of enterprises rely on VMAX or 3PAR StoreServ arrays to support their most critical applications and to store their most valuable data. Although VMAX and 3PAR predated the rise of the all-flash array (AFA), both vendors have adapted and optimized these products for flash memory. They deliver all-flash array performance without sacrificing the data services enterprises rely on to integrate with a wide variety of enterprise applications and operating environments.

While VMAX and 3PAR StoreServ can both support hybrid flash plus hard disk configurations, the focus is now on all-flash. Nevertheless, the ability of these products to support multiple tiers of storage will be advantageous in a future that includes multiple classes of flash memory along with other non-volatile storage class memory technologies.

Both the VMAX and 3PAR StoreServ can meet the storage requirements of most enterprises, yet differences remain. DCIG compares the current AFA configurations from Dell EMC and HPE in its latest DCIG Pocket Analyst Report. This report will help enterprises determine which product best fits their business requirements.

DCIG updated its research on all-flash arrays during the first half of 2018. In so doing, DCIG identified many similarities between the VMAX and 3PAR StoreServ products including:

  • Extensive support for VMware APIs including VMware Virtual Volumes (VVols)
  • Integration with multiple virtualization management consoles
  • Rich data replication and data protection offerings
  • Support for a wide variety of client operating systems
  • Unified SAN and NAS

DCIG also identified significant differences between the VMAX and 3PAR StoreServ products including:

  • Data center footprint
  • High-end history
  • Licensing simplicity
  • Mainframe connectivity
  • Performance resources
  • Predictive analytics
  • Raw and effective storage density


To see which vendor has the edge in each of these categories and why, you can access the latest 4-page Pocket Analyst Report from DCIG that analyzes and compares all-flash arrays from Dell EMC and HPE. This report is currently available for purchase on DCIG’s partner site: TechTrove. You may also register on the TechTrove website to be notified should this report become available at no charge at some future time.




NVMe Unleashing Performance and Storage System Innovation

Mainstream enterprise storage vendors are embracing NVMe. HPE, NetApp, Pure Storage, Dell EMC, Kaminario and Tegile all offer all-NVMe arrays. According to these vendors, the products will soon support storage class memory as well. NVMe protocol access to flash memory SSDs is a big deal. Support for storage class memory may become an even bigger deal.

NVMe Flash Delivers More Performance Than SAS

NVM express logo

Using the NVMe protocol to talk to SSDs in a storage system increases the efficiency and effective performance capacity of each processor and of the overall storage system. The slimmed down NVMe protocol stack reduces processing overhead compared to legacy SCSI-based protocols. This yields lower storage latency and more IOPS per processor. This is a good thing.

NVMe also delivers more bandwidth per SSD. Most NVMe SSDs connect via four (4) PCIe lanes. This yields up to 4 GB/s bandwidth, an increase of more than 50% compared to the 2.4 GB/s maximum of a dual-ported SAS SSD. Since many all-flash arrays can saturate the path to the SSDs, this NVMe advantage translates directly to an increase in overall performance.
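A rough back-of-the-envelope check of those figures, assuming PCIe Gen 3 lanes (about 985 MB/s each after encoding overhead) and 12 Gb/s SAS ports (about 1.2 GB/s each after encoding overhead), looks like this:

```python
# Back-of-envelope comparison of NVMe (PCIe Gen3 x4) vs. dual-ported SAS-3 SSD bandwidth.
# Per-lane and per-port figures are approximations that assume typical encoding overheads.

PCIE_GEN3_LANE_GBPS = 0.985   # ~985 MB/s per Gen3 lane after 128b/130b encoding
SAS3_PORT_GBPS = 1.2          # 12 Gb/s SAS port after 8b/10b encoding ~= 1.2 GB/s

nvme_bw = 4 * PCIE_GEN3_LANE_GBPS   # four PCIe lanes
sas_bw = 2 * SAS3_PORT_GBPS         # two SAS ports on a dual-ported SSD

print(f"NVMe x4 : {nvme_bw:.2f} GB/s")
print(f"SAS dual: {sas_bw:.2f} GB/s")
print(f"Increase: {(nvme_bw / sas_bw - 1):.0%}")   # roughly a 60-65% increase
```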

The newest generation of all-flash arrays combine these NVMe benefits with a new generation of Intel processors to deliver more performance in less space. It is this combination that, for example, enables HPE to claim that its new Nimble Storage arrays offer twice the scalability of the prior generation of arrays. This is a very good thing.

The early entrants into the NVMe array marketplace charged a substantial premium for NVMe performance. As NVMe goes mainstream, the price gap between NVMe SSDs and SAS SSDs is rapidly narrowing. With many vendors now offering NVMe arrays, competition should soon eliminate the price premium. Indeed, Pure Storage claims to have done so already.

Storage Class Memory is Non-Volatile Memory

Non-volatile memory (NVM) refers to memory that retains data even when power is removed. The term applies to many technologies that have been widely used for decades. These include EPROM, ROM, and NAND flash (the type of NVM commonly used in SSDs and memory cards). NVM also refers to newer or less widely used technologies including 3D XPoint, ReRAM, MRAM and STT-RAM.

Because NVM properly refers to such a wide range of technologies, many people are using the term Storage Class Memory (SCM) to refer to emerging byte-addressable non-volatile memory technologies that may soon be used in enterprise storage systems. These SCM technologies include 3D XPoint, ReRAM, MRAM and STT-RAM. SCM offers several advantages compared to NAND flash:

  • Much lower latency
  • Much higher write endurance
  • Byte-addressable (like DRAM memory)

Storage Class Memory Enables Storage System Innovation

Byte-addressable non-volatile memory on NVMe/PCIe opens up a wonderful set of opportunities to system architects. Initially, storage class memory will generally be used as an expanded cache or as the highest performing tier of persistent storage. Thus it will complement rather than replace NAND flash memory in most storage systems. For example, HPE has announced it will use Intel Optane (3D XPoint) as an extension of DRAM cache. Their tests of HPE 3PAR 3D Cache produced a 50% reduction in latency and an 80% increase in IOPS.

Some of the innovative uses of SCM will probably never be mainstream, but will make sense for a specific set of use cases where microseconds can mean millions of dollars. For example, E8 Storage uses 100% Intel Optane SCM in its E8-X24 centralized NVMe appliance to deliver extreme performance.

Remain Calm, Look for Short Term Wins, Anticipate Major Changes

We humans have a tendency to overestimate short-term and underestimate long-term impacts. In a recent blog article we asserted that NVMe is an exciting and needed breakthrough, but that differences persist between what NVMe promises for all-flash array and hyperconverged solutions and what they can deliver in 2018. Nevertheless, IT professionals should look for real application and requirements-based opportunities for NVMe, even in the short term.

Longer term, the emergence of NVMe and storage class memory are steps on the path to a new data centric architecture. As we have previously suggested, enterprise technologists should plan technology refreshes through 2020 around NVMe and NVMe-oF. Beyond 2020, enterprise technologists should plan their technology refreshes around a composable data centric architecture.




Four Implications of Public Cloud Adoption and Three Risks to Address

Businesses are finally adopting public cloud because a large and rapidly growing catalog of services is now available from multiple cloud providers. These two factors have many implications for businesses. This article addresses four of these implications plus several cloud-specific risks.

Implication #1: No enterprise IT dept will be able to keep pace with the level of services innovation available from cloud providers

The battle is over. Cloud wins. Deal with it.

Dealing with it does not necessarily mean that every business will move every workload to the cloud. It does mean that it is time for business IT departments to build awareness of the services available from public cloud providers. One way to do this is to tap into the flow of service updates from one or more of the major cloud providers.

For Amazon Web Services, I like What’s New with AWS. Easy filtering by service category is combined with sections for featured announcements, featured video announcements, and one-line listings of the most recent announcements from AWS. The one-line listings include links to service descriptions and to longer form articles on the AWS blog.

For Microsoft Azure, I like Azure Updates. As its subtitle says, “One place. All updates.” The Azure Updates site provides easy filtering by product, update type and platform. I especially like the ability to filter by update type for General Availability and for Preview. The site also includes links to the Azure roadmap, blog and other resources. This site is comprehensive without being overwhelming.

For Google Cloud Platform, its blog may be the best place to start. The view can be filtered by label, including by announcements. This site is less functional than the AWS and Microsoft Azure resources cited above.

For IBM Cloud, the primary announcements resource is What’s new with IBM Cloud. Announcements are presented as one-line listings with links to full articles.

Visit these sites, subscribe to their RSS feeds, or follow them via social media platforms. Alternatively, subscribe to their weekly or monthly newsletters via email. Once a business has workloads running in one of the public clouds, at a minimum an IT staff member should follow that provider’s updates site.

Implication #2: Pressure will mount on Enterprise IT to connect business data to public cloud services

The benefits of bringing public cloud services to bear on the organization’s data will create pressure on enterprise IT departments to connect business data to those services. There are many options for accomplishing this objective, including:

  1. All-in with one public cloud
  2. Hybrid: on-prem plus one public
  3. Hybrid: on-prem plus multiple public
  4. Multi-cloud (e.g. AWS + Azure)

The design of the organization and the priorities of the business should drive the approach taken to connect business data with cloud services.

Implication #3: Standard data protection requirements now extend to data and workloads in the public cloud

No matter what approach is taken when embracing the public cloud, standard data protection requirements extend to data and workloads in the cloud. Address these requirements up front. Explore alternative solutions and select one that meets the organization’s data protection requirements.

Implication #4: Cloud Data Protection and DRaaS are on-ramps to public cloud adoption

For most organizations the transition to the cloud will be a multi-phased process. Data protection solutions that can send backup data to the cloud are a logical early phase. Disaster recovery as a service (DRaaS) offerings represent another relatively low-risk path to the cloud that may be more robust and/or lower cost than existing disaster recovery setups. These solutions move business data into public cloud repositories. As such, cloud data protection and DRaaS may be considered on-ramps to public cloud adoption.

Once corporate data has been backed up or replicated to the cloud, tools are available to extract and transform the data into formats that make it available for use/analysis by that cloud provider’s services. With proper attention, this can all be accomplished in ways that comply with security and data governance requirements. Nevertheless, there are risks to be addressed.

Risk to Address #1: Loss of change control

The benefit of rapid innovation has a downside. Any specific service may be upgraded or discarded by the provider without much notice. Features used by a business may be enhanced, deprecated, or removed. This can force changes in other software that integrates with the service or in procedures used by staff and the associated documentation for those procedures.

For example, Office365 and Google G Suite features can change without much notice. This creates a “Where did that menu option go?” experience for end users. Some providers reduce this pain by providing a quick tutorial for new features within the application itself. Others provide online learning centers that make new feature tutorials easy to discover.

Accept this risk as an unavoidable downside to rapid innovation. Where possible, manage the timing of these releases to an organization’s users, giving them advance notice of the changes along with access to tutorials.

Risk to Address #2: Dropped by provider

A risk that may not be obvious to many business leaders is that of being dropped by a cloud service provider. A business with unpopular opinions might have services revoked, sometimes with little notice. Consider how quickly the movement to boycott the NRA resulted in severed business-to-business relationships. Even an organization as large as the US Military faces this risk. As was highlighted in recent news, Google will not renew its military AI project due in large part to pressure from Google employees.

Mitigate this risk through contracts and architecture. This is perhaps one argument in favor of a hybrid on-prem plus cloud approach to the public cloud versus an all-in approach.

Risk to Address #3: Unpredictable costs

It can be difficult to predict the costs of running workloads in the public cloud, and these costs can change rapidly. Address this risk by setting cost thresholds that trigger an alert. Consider subscribing to a service such as Nutanix Beam to gain granular visibility into and optimization of public cloud costs.
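As a simple illustration of the threshold-alert approach, the sketch below checks month-to-date spend against a budget at several alert levels; the spend feed is a hypothetical stand-in for whichever provider's billing API an organization actually uses.

```python
# Illustrative cost-threshold check. The spend feed passed in is a hypothetical
# stand-in for a real billing API call (AWS, Azure, GCP, IBM Cloud, etc.).

ALERT_THRESHOLDS = [0.5, 0.8, 1.0]   # alert at 50%, 80%, and 100% of budget

def check_budget(monthly_budget: float, get_month_to_date_spend) -> list[str]:
    """Return an alert message for each budget threshold the current spend has crossed."""
    spend = get_month_to_date_spend()
    ratio = spend / monthly_budget
    return [
        f"ALERT: spend ${spend:,.0f} has crossed {t:.0%} of the ${monthly_budget:,.0f} budget"
        for t in ALERT_THRESHOLDS
        if ratio >= t
    ]

# Example with a stubbed spend feed: $8,700 spent against a $10,000 budget
for alert in check_budget(10_000, lambda: 8_700):
    print(alert)
```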

It’s time to get real about the public cloud

Many businesses are ready to embrace the public cloud. IT departments should make themselves aware of services that may create value for their business. They should also work through the implications of moving corporate data and workloads to the cloud, and make plans for managing the attendant risks.




DCIG 2018-19 All-flash Array Buyer’s Guide Now Available

DCIG is pleased to announce the availability of the DCIG 2018-19 All-flash Array Buyer’s Guide edition developed from its enterprise storage array body of research. This 64-page report presents a fresh snapshot of the dynamic all-flash array (AFA) marketplace. It evaluates and ranks thirty-two (32) enterprise class all-flash arrays that achieved rankings of Recommended or Excellent based on a comprehensive scoring of product features. These products come from seven (7) vendors: Dell EMC, Hitachi Vantara, HPE, Huawei, NetApp, Pure Storage and Tegile.


DCIG’s succinct analysis provides insight into the state of the all-flash array marketplace, the benefits organizations can expect to achieve, and key features organizations should be aware of as they evaluate products. It also provides brief observations about the distinctive features of each product.

The DCIG 2018-19 All-flash Array Buyer’s Guide helps businesses drive time and cost out of the all-flash array selection process by:

  • Describing key product considerations and important changes in the marketplace
  • Gathering normalized data about the features each product supports
  • Providing an objective, third-party evaluation of those features from an end-user perspective
  • Presenting product feature data in standardized one-page data sheets that facilitate rapid feature-based comparisons

It is in this context that DCIG presents the DCIG 2018-19 All-Flash Array Buyer’s Guide. As prior DCIG Buyer’s Guides have done, it puts at the fingertips of enterprises a resource that can assist them in this important buying decision.

Access to this Buyer’s Guide edition is available through the DCIG partner site TechTrove.




DCIG 2018-19 Enterprise General Purpose All-flash Array Buyer’s Guide Now Available

DCIG is pleased to announce the availability of the DCIG 2018-19 Enterprise General Purpose All-flash Array Buyer’s Guide developed from its enterprise storage array body of research. This 72-page report presents a fresh snapshot of the dynamic all-flash array (AFA) marketplace. It evaluates and ranks thirty-eight (38) enterprise class all-flash arrays that achieved rankings of Recommended or Excellent. These products come from nine (9) vendors including Dell EMC, Hitachi Vantara, HPE, Huawei, IBM, Kaminario, NetApp, Pure Storage and Tegile.

DCIG’s succinct analysis provides insight into the state of the enterprise all-flash storage array marketplace, the benefits organizations can expect to achieve, and key features organizations should be aware of as they evaluate products. It also provides brief observations about the distinctive features of each product.

The DCIG 2018-19 Enterprise General Purpose All-flash Array Buyer’s Guide helps businesses drive time and cost out of the all-flash array selection process by:

  • Describing key product considerations and important changes in the marketplace
  • Gathering normalized data about the features each product supports
  • Providing an objective, third-party evaluation of those features from an end-user perspective
  • Presenting product feature data in standardized one-page data sheets that facilitate rapid feature-based comparisons

It is in this context that DCIG presents the DCIG 2018-19 Enterprise General Purpose All-Flash Array Buyer’s Guide. As prior DCIG Buyer’s Guides have done, it puts at the fingertips of enterprises a resource that can assist them in this important buying decision.

Access to this Buyer’s Guide is available through the DCIG partner site TechTrove.




Seven Significant Trends in the All-Flash Array Marketplace

Much has changed since DCIG published the DCIG 2017-18 All-Flash Array Buyer’s Guide just one year ago. The DCIG analyst team is in the final stages of preparing a fresh snapshot of the all-flash array (AFA) marketplace. As we reflected on the fresh all-flash array data and compared it to the data we collected just a year ago, we observed seven significant trends in the all-flash array marketplace that will influence buying decisions through 2019.

Trend #1: New Entrants, but Marketplace Consolidation Continues

Although new storage providers continue to enter the all-flash array marketplace, primarily focused on NVMe over Fabrics, the larger trend is continued consolidation. HPE acquired Nimble Storage. Western Digital acquired Tegile.

Every well-known provider has made at least one all-flash acquisition. Consequently, some providers are in the process of “rationalizing” their all-flash portfolios. For example, HPE has decided to position Nimble Storage AFAs as “secondary flash”. HPE also announced it will implement Nimble’s InfoSight predictive analytics platform across HPE’s entire portfolio of data center products, beginning with 3PAR StoreServ storage. Dell EMC seems to be positioning VMAX as its lead product for mission critical workloads, Unity for organizations that value simplified operations, XtremIO for VDI/test/dev, and SC for low cost capacity.

Nearly all the AFA providers also offer at least one hyperconverged infrastructure product. These hyperconverged products compete with AFAs for marketing and data center infrastructure budgets. This will create additional pressure on AFA providers and may drive further consolidation in the marketplace.

Trend #2: Flash Capacity is Increasing Dramatically

The raw capacity of the more than 100 all-flash arrays DCIG researched averaged 4.4 petabytes. This is a 5-fold increase compared to the products in the 2017-18 edition. The highest capacity product can provide 70 petabytes (PB) of all-flash capacity. This is a 7-fold increase. Thus, AFAs now offer the capacity required to be the storage resource for all active workloads in any organization.

Graph: all-flash array capacity (Source: DCIG, n=102)

Trend #3: Storage Density is Increasing Dramatically

The average AFA flash density of the products continues to climb. Fully half of the AFAs that DCIG researched achieve greater than 50 TB/RU. Some AFAs can provide over 200 TB/RU. The combination of all-flash performance and high storage density means that an AFA may be able to meet an organization’s performance and capacity requirements in 1/10th the space of legacy HDD storage systems and the first generation of all-flash arrays. This creates an opportunity for many organizations to realize significant data center cost reductions. Some have eliminated data centers. Others have been able to delay building new data centers.

Graph: all-flash array storage density (Source: DCIG, n=102)

Trend #4: Rapid Uptake in Components that Increase Performance

Increases in flash memory capacity and density are being matched with new components that increase array performance. These components include:

  • a new generation of multi-core CPUs from Intel
  • 32 Gb Fibre Channel and 25/40/100 Gb Ethernet
  • GPUs
  • ASICs to offload storage tasks
  • NVMe connectivity to SSDs

Each of these components can unlock more of the performance available from flash memory. Organizations should assess how well these components are integrated to systemically unlock the performance of flash memory and of their own applications.

Chart: front end connectivity percentages (Source: DCIG, n=102)

Trend #5: Unified Storage is the New Normal

The first generations of all-flash arrays were nearly all block-only SAN arrays. Tegile was perhaps the only truly unified AFA provider. Today, more than half of all all-flash arrays DCIG researched support unified storage. This support for multiple concurrent protocols creates an opportunity to consolidate and accelerate more types of workloads.

Trend #6: Most AFAs can use Public Cloud Storage as a Target

Most AFAs can now use public cloud storage as a target for cold data or for snapshots as part of a data protection mechanism. In many cases this target is actually one of the provider’s own arrays running in a cloud data center or a software-defined storage instance of its storage system running in one of the true public clouds.

Trend #7: Predictive Analytics Get Real

Some storage providers can document how predictive storage analytics is enabling increased availability, reliability, and application performance. The promise is huge. Progress varies. Every prospective all-flash array purchaser should incorporate predictive analytics capabilities into their evaluation of these products, particularly if the organization intends to consolidate multiple workloads onto a single all-flash array.

Conclusion: All Active Workloads Belong on All-Flash Storage

Any organization that has yet to adopt an all-flash storage infrastructure for all active workloads is operating at a competitive disadvantage. The current generation of all-flash arrays creates business value by…

  • making existing applications run faster even as data sets grow
  • accelerating application development
  • enabling IT departments to say, “Yes” to new workloads and then get those new workloads producing results in record time
  • driving down data center capital and operating costs

DCIG expects to finalize our analysis of all-flash arrays and present the resulting snapshot of this dynamic marketplace in a series of buyer’s guides during the second quarter of 2018.




Predictive Analytics in Enterprise Storage: More Than Just Highfalutin Mumbo Jumbo

Enterprise storage startups are pushing the storage industry forward faster and in directions it may never have gone without them. It is because of these startups that flash memory is now the preferred place to store critical enterprise data. Startups also advanced the customer-friendly all-inclusive approach to software licensing, evergreen hardware refreshes, and pay-as-you-grow utility pricing. These startup-inspired changes delight customers, who are rewarding the startups with large follow-on purchases and Net Promoter Scores (NPS) previously unseen in this industry. Yet the greatest contribution startups may make to the enterprise storage industry is applying predictive analytics to storage.

The Benefits of Predictive Analytics for Enterprise Storage

Image: Gilbert advises Anne to stop using “highfalutin mumbo jumbo” in her writing. (Note 1)

The end goal of predictive analytics for the more visionary startups goes beyond eliminating downtime. Their goal is to enable data center infrastructures to autonomously optimize themselves for application availability, performance and total cost of ownership based on the customer’s priorities.

The vendors that commit to this path and execute better than their competitors are creating value for their customers. They are also enabling their own organizations to scale up revenues without scaling out staff. Vendors that succeed in applying predictive analytics to storage today also position themselves to win tomorrow in the era of software-defined data centers (SDDC) built on top of composable infrastructures.

To some people this may sound like a bunch of “highfalutin mumbo jumbo”, but vendors are making real progress in applying predictive analytics to enterprise storage and other elements of the technical infrastructure. These vendors and their customers are achieving meaningful benefits including:

  • Measurably reducing downtime
  • Avoiding preventable downtime
  • Optimizing application performance
  • Significantly reducing operational expenses
  • Improving NPS

HPE Quantifies the Benefits of InfoSight Predictive Analytics

Incumbent technology vendors are responding to this pressure from startups in a variety of ways. HPE purchased Nimble Storage, the prime mover in this space, and plans to extend the benefits of Nimble’s InfoSight predictive analytics to its other enterprise infrastructure products. HPE claims its Nimble Storage array customers are seeing the following benefits from InfoSight:

  • 99.9999% of measured availability across its installed base
  • 86% of problems are predicted and automatically resolved before customers even realize there is an issue
  • 85% less time spent managing and resolving storage-related problems
  • 79% savings in operational expense (OpEx)
  • 54% of issues pinpointed are not storage-related, as identified through InfoSight cross-stack analytics
  • 42 minutes: the average level three engineer time required to resolve an issue
  • 100% of issues go directly to level three support engineers, no time wasted working through level one and level two engineers

The Current State of Affairs in Predictive Analytics

HPE is certainly not alone on this journey. In fact, vendors are claiming some use of predictive analytics for more than half of the all-flash arrays DCIG researched.

Chart: predictive analytics support among researched all-flash arrays (Source: DCIG, N = 103)

Telemetry Data is the Foundation for Predictive Analytics

Storage array vendors use telemetry data collected from the installed product base in a variety of ways. Most vendors evaluate fault data and advise customers how to resolve problems, or they remotely log in and resolve problems for their customers.

Many all-flash arrays transmit not just fault data, but extensive additional telemetry data about workloads back to the vendors. This data includes IOPS, bandwidth, and latency associated with workloads, front end ports, storage pools and more. Some vendors apply predictive analytics and machine learning algorithms to data collected across the entire installed base to identify potential problems and optimization opportunities for each array in the installed base.
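The sketch below illustrates the general idea with invented field names and values; it is not any vendor's actual telemetry schema or analytics pipeline. Each array reports workload telemetry, and arrays whose latency stands out against an installed-base baseline get flagged for a closer look.

```python
# Illustrative telemetry records and a naive installed-base comparison.
# Field names, values, and the outlier rule are assumptions for demonstration only.

from statistics import median

telemetry = [
    # one record per array, per collection interval
    {"array": "afa-01", "workload": "oltp", "iops": 180_000, "latency_ms": 0.6},
    {"array": "afa-02", "workload": "oltp", "iops": 175_000, "latency_ms": 0.7},
    {"array": "afa-03", "workload": "oltp", "iops": 172_000, "latency_ms": 2.9},
]

def flag_latency_outliers(records, factor=2.0):
    """Flag arrays whose latency exceeds `factor` times the installed-base median."""
    baseline = median(r["latency_ms"] for r in records)
    return [r["array"] for r in records if r["latency_ms"] > factor * baseline]

print(flag_latency_outliers(telemetry))   # -> ['afa-03'], an array worth a closer look
```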

Predictive Analytics Features that Matter

Proactive interventions identify something that is going to create a problem and then notify clients about the issue. Interventions may consist of providing guidance in how to avoid the problem or implementing the solution for the client. A wide range of interventions are possible including identifying the date when an array will reach full capacity or identifying a network configuration that could create a loop condition.

Recommending configuration changes enhances application performance at a site by comparing the performance of the same application at similar sites, discovering optimal configurations, and recommending configuration changes at each site.

Tailored configuration changes prevent outages or application performance issues based on the vendor seeing and fixing problems caused by misconfigurations. The vendor deploys the fix to other sites that run the same applications, eliminating potential problems. The vendor goes beyond recommending changes by packaging the changes into an installation script that the customer can run, or by implementing the recommended changes on the customer’s behalf.

Tailored software upgrades eliminate outages based on the vendor seeing and fixing incompatibilities they discover between a software update and specific data center environments. These vendors use analytics to identify similar sites and avoid making the software update available to those other sites until they have resolved the incompatibilities. Consequently, site administrators are only presented with software updates that are believed to be safe for their environment.
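A minimal sketch of that gating logic follows; the configuration fingerprints and the incompatibility list are invented for the example and do not reflect any vendor's actual implementation. An update is only offered to a site when none of its configuration fingerprints match a known-bad combination.

```python
# Illustrative sketch of gating a software update on known incompatibilities.
# The fingerprint strings and blocklist entries are invented for this example.

KNOWN_INCOMPATIBILITIES = {
    ("array-os-5.2", "fc-hba-firmware-1.14"),   # combination observed to cause issues
    ("array-os-5.2", "hypervisor-6.0u1"),
}

def update_is_safe(site_fingerprints: set[str], target_release: str) -> bool:
    """Offer `target_release` only if the site matches no known-bad combination."""
    return not any(
        release == target_release and component in site_fingerprints
        for release, component in KNOWN_INCOMPATIBILITIES
    )

site_a = {"fc-hba-firmware-1.14", "hypervisor-6.5"}
site_b = {"fc-hba-firmware-2.01", "hypervisor-6.5"}

print(update_is_safe(site_a, "array-os-5.2"))   # False: hold the update back
print(update_is_safe(site_b, "array-os-5.2"))   # True: safe to present the update
```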

Predictive Analytics is a Significant Yet Largely Untapped Opportunity

Vendors are already creating much value by applying predictive analytics to enterprise storage. Yet no vendor or product comes close to delivering all the value that is possible. A huge opportunity remains, especially considering the trends toward software-defined data centers and composable infrastructures. Reflecting for even a few minutes on the substantial benefits that predictive analytics is already delivering should prompt every prospective all-flash array purchaser to incorporate predictive analytics capabilities into their evaluation of these products and the vendors that provide them.

Note 1: Image source: https://jamesmacmillan.wordpress.com/2012/04/02/highfalutin-mumbo-jumbo/




All-inclusive Licensing is All the Rage in All-flash Arrays

Early in my IT career, a friend who owns a software company told me he had been informed by a peer that he wasn’t charging enough for his software. This peer advised him to adopt a “flinch-based” approach to pricing. He said my friend should start with a base licensing cost that meets margin requirements, and then keep adding on other costs until the prospective customer flinches. My friend found that approach offensive, and so do I. I don’t know how common the “flinch-based” approach is, but as a purchaser of technology goods and services I learned to flinch early and often. I was reminded of this “flinch-based” approach when evaluating some traditional enterprise storage products. Every capability was an extra-cost “option”: each protocol, each client connection, each snapshot feature, each integration point. Happily, this a-la-carte approach to licensing is becoming a thing of the past as vendors embrace all-inclusive licensing for their all-flash array products.

The Trend Toward All-inclusive Licensing in All-Flash Arrays

In the process of updating DCIG’s research on all-flash arrays, we discovered a clear trend toward all-inclusive software feature licensing. This trend was initiated by all-flash array startups. Now even the largest traditional vendors are moving toward all-inclusive licensing. HPE made this change in 2017 for its 3PAR StoreServ products. Now Dell EMC is moving this direction with its all-flash Unity products.

Drivers of All-inclusive Licensing in All-Flash Arrays

Competition from storage startups has played an important role in moving the storage industry toward all-inclusive software feature licensing. Some startups embraced all-inclusive licensing because they knew prospective customers were frustrated by the a-la-carte approach. Others, such as Tegile, embraced all-inclusive licensing from the beginning because many of the software features were inherent to the design of their storage systems. Whatever the motivation, the availability of all-inclusive software feature licensing from these startups put pressure on other vendors to adopt the approach.

Technology advances are also driving the movement toward all-inclusive licensing. Advances in multi-core, multi-gigahertz CPUs from Intel make it practical to incorporate features such as in-line compression and in-line deduplication into storage systems. These in-line data efficiency features are a good fit with the wear and performance characteristics of NAND-flash, and help to reduce the overall cost and data center footprint of an all-flash array.

The Value of All-inclusive Licensing for All-Flash Array Adopters

All-inclusive licensing is one of the five features that contribute to delivering simplicity on all-flash arrays. Vendors that include all software features fully licensed as part of the standard array package create extra value for purchasers by reducing the number of decision points in the purchasing process and smoothing the path to full utilization of the array’s capabilities.

All-inclusive licensing also enables agility. Separate license fees for software features reduce the agility of the IT department in responding to changing business requirements because the ordering and purchasing processes add weeks or even months to the implementation process. All-inclusive licensing eliminates this purchasing delay.

The Value of All-inclusive Licensing for All-flash Array Vendors

All-inclusive licensing translates to more sales. Each decision point during the purchase process slows down the process and creates another opportunity for a customer to say, “No.” All-inclusive licensing smooths the path to purchase. Since all-inclusive licensing also fosters full use of the product’s features and the value customers derive from the product, it should also smooth the path to follow-on sales.

Happier engineers. This benefit may be more abstract, but the best engineers want what they create to actually get used and make a difference. All-inclusive licensing makes it more likely that the features engineers create actually get used.

Bundles May Make Sense for Legacy Solutions

Based on the rationale described above, all-inclusive software feature licensing provides a superior approach to creating value in all-flash arrays. But for vendors seeking to transition from an a-la-carte model, bundles may be a more palatable approach. Bundles enable the vendor to offer some of the benefits of true all-inclusive licensing to new customers without offending existing customers. In cases where a feature depends on technology licensed from another vendor, bundling also offers a way to pass 3rd party licensing costs through to the customer.

Vendors that offer all-inclusive software feature licenses or comprehensive bundles add real value to their all-flash array products, and deserve priority consideration from organizations seeking maximum value, simplicity and agility from their all-flash array purchase.

 




Why Tiering and Quality of Service Matter in All-Flash Array Selection

Many organizations are using all-flash arrays in their data centers today. When asked about the benefits they have achieved, two benefits are almost always top of mind. The first benefit mentioned is the increase in application performance. Indeed, increased performance was the primary rationale for the purchase of the all-flash array. The second benefit came as an unexpected bonus; the decrease in time spent managing storage.

Based on these initial wins, organizations are seeking to extend these benefits across the entire application portfolio. However, consolidating many applications onto each all-flash array (AFA) can create resource contention, reducing the performance of other applications that share the array.

Sophisticated data tiering and quality of service (QoS) features mitigate the impact of resource contention, giving business-critical applications a greater portion of the array’s performance resources. Thus, data tiering and QoS features enable organizations to accelerate more applications without reintroducing storage management overhead.

Why QoS Matters in an All-Flash Array

Quality of service (QoS) features come into play when multiple applications share the same storage system. Just as thin provisioning virtually expands the capacity of a storage system, QoS virtually expands the performance of a storage system. It is like thin provisioning for performance.

AFAs implement quality of service features in a variety of ways. Some implement static maximums or minimums in terms of latency, bandwidth and IOPS. Other AFAs implement QoS policies as dynamic attributes that give priority to workloads based on classifications such as High, Medium and Low.

Different applications have different storage requirements, and some applications are more important than others to an organization. QoS features that map to these differing application requirements and business priorities add real value to an AFA.
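The two QoS styles described above (static limits versus priority classes) can be sketched roughly as follows; the limits, weights, and volume names are assumptions for illustration, not any vendor's implementation.

```python
# Illustrative sketch of the two QoS styles described above.
# Limits, weights, and volume names are assumptions for demonstration only.

# Style 1: static per-volume ceilings, e.g. an IOPS cap per volume.
STATIC_LIMITS = {"erp-db": {"max_iops": 100_000}, "dev-test": {"max_iops": 10_000}}

def cap_iops(volume: str, requested_iops: int) -> int:
    """Throttle a volume's request to its configured IOPS ceiling."""
    return min(requested_iops, STATIC_LIMITS[volume]["max_iops"])

# Style 2: dynamic priority classes that split contended performance by weight.
PRIORITY_WEIGHTS = {"High": 4, "Medium": 2, "Low": 1}

def share_of_contended_iops(array_iops: int, volumes: dict[str, str]) -> dict[str, int]:
    """volumes maps volume name -> priority class; returns each volume's share of array IOPS."""
    total_weight = sum(PRIORITY_WEIGHTS[p] for p in volumes.values())
    return {v: array_iops * PRIORITY_WEIGHTS[p] // total_weight for v, p in volumes.items()}

print(cap_iops("dev-test", 25_000))              # -> 10000
print(share_of_contended_iops(700_000,
      {"erp-db": "High", "analytics": "Medium", "dev-test": "Low"}))
```

The first style is predictable but rigid; the second adapts as workloads come and go, which is why QoS policies that map to business priority classes tend to scale better as more applications are consolidated onto one array.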

Why Tiering Matters in an All-Flash Array

Tiering matters in an AFA because persistent storage media are proliferating based on characteristics including latency, bandwidth, durability and cost. A storage system that can manage data placement based on these characteristics can deliver optimal performance at the optimal cost. Each storage tier can also function as another tool in the QoS tool box.

How Many Tiers is Enough in an All-Flash Array?

Some all-flash arrays provide a single tier of storage. Other AFAs provide two tiers: a performance tier, and a capacity tier. Other AFAs provide three or more tiers. I asked an experienced storage engineer how many tiers were required to optimize performance and cost. His answer, “Fourteen.” That number may raise some eyebrows, but when one takes the varied characteristics of storage media into account, it is not hard to get to fourteen data tiers. Those performance characteristics include:

  • Latency
  • Bandwidth
  • IOPS
  • Durability
  • Cost

Advances in NAND flash, storage class memories, and the adoption of the NVMe protocol create significant new options along each of these characteristics.

Accelerating All Applications Without Compromise

All-flash arrays that provide sophisticated tiering and quality of service features can take greater advantage of advances in storage technology. Organizations implementing these AFAs can accelerate all applications without compromising the benefits they achieved with their initial all-flash array implementations: faster application performance and reduced storage management overhead.

DCIG is refreshing its research on all-flash arrays, and is taking these data tiering and quality of service capabilities into account as it evaluates these products. DCIG expects to publish reports based on its updated all-flash array research beginning in the first quarter of 2018.

Note: This blog entry was updated on January 4, 2018.




Keys to Unlocking Business Value in Next-Generation All-Flash Arrays

Next-generation all-flash arrays will provide dramatic improvements in performance and density over the prior generation of all-flash arrays. These new levels of performance and density will bring the benefits of real-time analysis to a whole new set of problems and organizations, creating tremendous value. They will also enable organizations to achieve significant budget savings through a fresh wave of data center consolidations. But unlocking the ability of any next-generation array to deliver these savings depends on a key set of features that enable workload consolidation and simplified management.

Next-generation All-flash Arrays Provide Massive Performance in Micro Form Factors

The prior generation of all-flash arrays provided up to several hundred thousand IOPS at latencies of around one millisecond. As revealed at the recent Flash Memory Summit, the next generation of all-flash arrays, enabled by NVMe, will provide millions of IOPS at latencies under 200 microseconds. For example, E8 Storage showed its E8-D24, a 2RU appliance with claims of 10 million IOPS and 40 GB/second of bandwidth at a read latency of 120 microseconds.

[Image: E8 Storage E8-D24 NVMe array, with a chart of its performance results]

New SSD Form Factors Will Contribute to a 3x to 5x Increase in Storage Density

The highest raw storage density achieved by any product in the DCIG 2017-18 All-Flash Array Buyer’s Guide was 192 TB/RU. This will double once the recently announced 32 TB SSDs are qualified for existing all-flash arrays. But new SSD form factors will also contribute to increased storage density. At the recent Flash Memory Summit, Samsung showed its new 16TB NGSFF SSD and described an NGSFF-based reference system that can utilize 36 NGSFF modules to provide 576TB of raw flash capacity in a one rack unit (1RU) appliance.

[Image: Samsung NGSFF SSD shown next to a US quarter]

Intel showed its new “ruler” form factor SSD and a 1RU 32-slot design that can achieve 1 PB per rack unit based on its 32TB “ruler” SSDs.

[Image: Intel SSD DC P4500 Series “Ruler” SSD and 1RU server (Credit: Intel Corporation)]

Features that Enable Workload Consolidation and Simplification are Key to Creating Business Value

The greatest value of all-flash storage is that it enables organizations to move faster. And moving faster than one’s competitors creates wins. As Eric Pearson, the CIO of InterContinental Hotels Group, has said, “It’s no longer the big beating the small. It’s the fast beating the slow.” [1]

Consolidating many workloads onto an all-flash array accelerates all those workloads, and helps create competitive wins. It also enables significant reductions in overall data center costs. This flash-enabled consolidation extends beyond storage consolidation to include server and even data center consolidation.

As noted above, the next generation of all-flash arrays clearly has the storage capacity, density and low-latency performance to handle many workloads concurrently. Therefore, features that enable workload consolidation are key to unlocking business value, and are a reasonable focus for evaluation.

Features that Enable Consolidation

  • Concurrent multi-protocol support (unified SAN and NAS) accelerates both block and file-based workloads
  • High-speed Ethernet and/or Fibre Channel (FC) connectivity to application servers for maximum front-end bandwidth
  • Non-disruptive Upgrades (NDU) and redundancy features that maximize up-time availability
  • Quality of Service (QoS) features, especially QoS based on predefined service levels, enabling an administrator to quickly and easily assign each application or volume to a priority classification
  • Multi-tenancy that enables distributed administration and secure sharing of the array’s physical resources
  • Certified support for enterprise applications
  • REST API to enable integration into automation frameworks that are the foundation for public cloud-like self-service capabilities (a brief sketch follows this list)

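To illustrate the REST API item above, the sketch below shows how an automation framework might provision a volume and attach it to a predefined service level in a single call. The endpoint, payload fields and token are hypothetical; they are not taken from any specific vendor’s API.

```python
import requests

ARRAY_API = "https://array.example.com/api/v1"   # hypothetical management endpoint
HEADERS = {"Authorization": "Bearer <token>", "Content-Type": "application/json"}

# Create a volume, assign it a predefined QoS service level, and charge it
# against a tenant's quota -- the building blocks of self-service provisioning.
volume = {
    "name": "erp-prod-01",
    "size_gb": 2048,
    "service_level": "gold",   # maps to a predefined QoS classification
    "tenant": "finance",       # multi-tenancy: draws from this tenant's allocation
}

response = requests.post(f"{ARRAY_API}/volumes", json=volume, headers=HEADERS, timeout=30)
response.raise_for_status()
print("Created volume:", response.json().get("id"))
```
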
Features that enable simplification can significantly improve IT agility and enable the entire organization to move faster. Therefore, these features also deserve careful consideration.

Features that Enable Simplification

  • Automated intelligent caching and/or storage tiering that keeps the hottest data on the lowest latency media without manual tuning
  • Automated, policy-based provisioning that eliminates much routine storage administration
  • Integration into hypervisor management consoles that empowers application and server administrators to quickly allocate and assign storage to new virtual machines
  • Proactive remediation based on fault data that prevents component failures from becoming system failures
  • Proactive intervention based on storage analytics that optimizes performance and avoids service interruptions

Cautions and Best Practice Recommendations for Next-generation All-flash Arrays

Mind the failure domain. Consolidation can yield dramatic savings; but it is prudent to consider the failure domain, and how much of an organization’s infrastructure should depend on any one component–including an all-flash array.

Focus on accelerating apps. Eliminating storage bottlenecks may reveal other bottlenecks in the application path. Getting the maximum performance benefit from an AFA may require more or faster network connections to application servers and/or the storage system, more server DRAM, and adjustments to cache sizes and other server and network configuration details. Some AFAs include utilities that help identify bottlenecks wherever they occur along the data path.

Revisit assumptions. Optimal configuration changes may not be obvious. For example, one all-flash proof of concept revealed that a database application performed much better when local DRAM caching was reduced to less than one-quarter of the existing best-practice guideline. This discovery resulted in both higher performance and greater server consolidation savings.

Leverage multi-tenancy features. Use multi-tenancy to enable secure sharing of the array while limiting the percentage of array resources any server administrator or software developer can allocate.

Pursue automation. Automation can dramatically reduce the amount of time spent on storage management and enable new levels of enterprise agility. This is another place where multi-tenancy and/or robust QoS capabilities add a layer of safety.

Conduct a proof of concept implementation. This can validate feature claims and uncover performance-limiting bottlenecks elsewhere in the infrastructure.

 

Footnotes:

1. Pat Gelsinger on Stage at VMworld 2015, 15:50. YouTube. YouTube, 01 Sept. 2015. <https://www.youtube.com/watch?v=U6aFO0M0bZA&list=PLeFlCmVOq6yt484cUB6N4LhXZnOso5VC7&index=3>.




Four Flash Memory Trends Influencing the Development of Tomorrow’s All-flash Arrays

The annual Flash Memory Summit is where vendors reveal to the world the future of storage technology. Many companies announced innovative products and technical advances at last week’s 2017 Flash Memory Summit, giving enterprises a good understanding of what to expect from today’s all-flash products as well as a glimpse into tomorrow’s. These previews into the next generation of flash products revealed four flash memory trends sure to influence the development of the next generation of all-flash arrays.

Flash Memory Trend #1: Storage class memory is real, and it is really impressive. Storage class memory (SCM) is a term applied to several different technologies that share two important characteristics. Like flash memory, storage class memory is non-volatile; it retains data after power is shut off. Like DRAM, storage class memory offers very low latency and is byte-addressable, meaning the CPU can address it directly, just as it does DRAM. Together, these characteristics enable greater-than-10x improvements in system and application performance.

Two years ago, Intel and Micron rocked the conference with the announcement of 3D XPoint storage class memory. In the run up to this year’s Flash Memory Summit, Intel announced both consumer and enterprise SSDs based on 3D XPoint technology under the Optane brand. These products are shipping now for $2.50 to $5.00 per GB. Initial capacities are reminiscent of 10K and 15K enterprise hard drives. SCM-based SSDs outperform flash memory SSDs in terms of consistent low latency and high bandwidth.

[Image: Everspin nvNITRO bandwidth test results]

Other storage class memory technologies also moved out of the lab and into products. Everspin announced 1 Gb MRAM chips, quadrupling the density of last year’s 256 Mb chip. Everspin demonstrated the performance of a single ST-MRAM SSD in a standard desktop PC: the nvNITRO PCIe card achieved a sustained write bandwidth of 5.8 GB/second and nearly 1.5 million IOPS. Everspin nvNITRO cards are available in 1 GB and 2 GB capacities today, with 16 GB PCIe cards expected by the end of the year.

CROSSBAR announced that it has licensed its ReRAM technology to multiple memory manufacturers. CROSSBAR displayed sample wafers that were produced by two different licensees. Products based on the technology are in development.

DRAM and flash memory will continue to play important roles for the foreseeable future. Nevertheless, each type of SCM enables the greater-than-10x improvements in performance that inspire new system designs. In the near term, storage class memory will be used as a cache, a write buffer, or as a small pool of high performance storage for database transaction logs. In some cases it will also be used as an expanded pool of system memory. SCM may also replace DRAM in many SSDs.

[Image: NAND flash technology roadmap]

Flash Memory Trend #2: There is still a lot of room for innovation in flash memory. Every flash memory manufacturer announced advances in flash memory technology. Manufacturers provided roadmaps showing that flash memory will remain the predominant storage technology for years to come.

Samsung’s keynote presenter brandished the 32 TB 2.5” SSD it announced at the conference. This doubled the 16 TB capacity Samsung announced on the same stage just one year ago. Although the presenter was rightly proud of the achievement, the response of the audience was muted, even mild. I hope our response wasn’t discouraging; but frankly, we expected Samsung to pull this off. The presenter reaffirmed our expectations by telling us that Samsung will continue this pace of advancement in NAND flash for at least the next five years.

Flash Memory Trend #3: NVMe and NVMe-oF are important steps on the path to the future. NVMe is the new standard protocol for talking to flash memory and SCM-based storage, and it appears that every enterprise vendor is incorporating NVMe into its products. The availability of dual-ported NVMe SSDs from multiple suppliers is hastening the transition to NVMe in enterprise storage systems, as will the hot-swap capability for NVMe SSDs announced at the event.

NVMe-over-Fabrics (NVMe-oF) is the new standard for accessing storage across a network. Pure Storage recently announced the all-NVMe FlashArray//X. At FMS, AccelStor announced its second-generation all-NVMe AccelStor NeoSapphire H810 array. E8 Storage and Kaminario also announced NVMe-based arrays.

Micron discussed its Solid Scale scale-out all-flash array with us. Solid Scale is based on Micron’s new NVMe 9200 SSDs and Excelero’s NVMesh software. NVMesh creates a server SAN using the same underlying technology as NVMe-oF. In the case of Solid Scale, the servers are dedicated storage nodes.

Other vendors told us about their forthcoming NVMe and NVMe-oF arrays. In every case, these products will deliver substantial improvements in latency and throughput compared to existing all-flash arrays, and should deliver millions of IOPS.

[Image: Gen-Z concept chassis]

Flash Memory Trend #4: The future is data centric, not processor centric. Ongoing advances in flash memory and storage class memory are vitally important, yet they introduce new challenges for storage system designers and data center architects. Although NVMe over PCIe can deliver 10x improvements in some storage metrics, PCIe is already a bottleneck that limits overall system performance.

We ultimately need a new data access technology, one that will enable much higher performance. Gen-Z promises to be exactly that. According to the Gen-Z Consortium, Gen-Z is “an open systems interconnect that enables memory access to data and devices via direct-attached, switched, or fabric topologies. This means Gen-Z will allow any device to communicate with any other device as if it were communicating with its local memory.”

[Image: Barry McAuliffe (HPE) and Kurtis Bowman (Dell EMC)]

I spent a couple hours with the Gen-Z Consortium folks and came away impressed. The consortium is working to enable a composable infrastructure in which every type of performance resource becomes a virtualized pool that can be allocated to tasks as needed. The technology was ready to be demonstrated in an FPGA-based implementation, but a fire in the exhibit hall prevented access. Instead, we saw a conceptual representation of a Gen-Z based system.

The Gen-Z Consortium is creating an open interconnect technology on top of which participating organizations can innovate. There are already more than 40 participating organizations including Dell EMC, HPE, Huawei, IBM, Broadcom and Mellanox. I found it refreshing to observe staff from HPE (Barry McAuliffe, VP and Secretary of Gen-Z) and Dell EMC (Kurtis Bowman, President of Gen-Z) working together to advance this data centric architecture.

Implications of These Flash Memory Trends for Enterprise IT

Vendors are shipping storage class memory products today, with more to come by the end of the year. Flash memory manufacturers continue to innovate, and will extend the viability of flash memory as a core data center technology for at least another five years. NVMe and NVMe-oF are real today, and are key technologies for the next generation of storage systems.

Enterprise technologists should plan 2017 through 2020 technology refreshes around NVMe and NVMe-oF. Data center architects and application owners should seek 10:1 improvements in performance, and a similar jump in data center efficiency.

Beyond 2020, enterprise technologists should plan their technology refreshes around a composable data centric architecture. Data center architects should track the development of the Gen-Z ecosystem as a possible foundation for their next-generation data centers.




DCIG Quick Look: iXsystems TrueNAS X10 Offers an Affordable Offramp from Public Cloud Storage

For many of us, commuting in rush hour with its traffic jams is an unpleasant fact of life. But I once had a job on the outer edge of a metropolitan area. I was westbound when most were eastbound. I often felt a little sorry for the mass of people stuck in traffic as I zoomed–with a smile on my face–in the opposite direction. Today there is a massive flow of workloads and their associated storage to the public cloud. But there are also a lot of companies moving workloads off the public cloud, and their reason is cloud economics.

Cloud Economics Are Not Always Economical

In a recent conversation, iXsystems indicated that many of its new customers come to it in search of lower-than-public-cloud costs. Gary Archer, Director of Storage Marketing at iXsystems, met with DCIG earlier this month to brief us on a forthcoming product. It turns out the product was not the rumored hyperconverged infrastructure appliance. Instead, he told us iXsystems was about to reach a new low: a new low starting price and cost per gigabyte for enterprise-grade storage.

A lot of companies look at iXsystems because they want to reduce costs by migrating workloads off the public cloud. These customers find the Z-Series enterprise-grade open source storage attractive, but asked for a lower entry price and lower cost per GB.

iXsystems TrueNAS X10 is Economical by Design

To meet this demand, iXsystems chose current enterprise-grade, but not the highest-end, hardware for its new TrueNAS X10. For example, each controller features a single 6-core Intel Broadwell Xeon CPU. In an era of ever-larger DRAM caches, each X10 controller has just 32GB of ECC DRAM. Dual one-gigabit Ethernet is built in. 10 GbE is optional. Storage capacity is provided exclusively by SAS-attached hard drives. Flash memory is used, but only as cache.

The TrueNAS X10 retains all the redundancy and reliability features of the Z-Series, but at a starting price of just $5,500. A 20 TB system costs less than $10,000, and a 120 TB system costs less than $18,000 street price. So the X10 starts at $0.50/GB and ranges down to $0.15/GB. Expansion via disk shelves should drive the cost per GB even lower.
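
Those per-gigabyte figures follow directly from the quoted prices; a quick check (treating the quoted capacities as decimal gigabytes):

```python
# Street prices and raw capacities as quoted above: (price in USD, capacity in GB)
configs = {"20 TB": (10_000, 20_000), "120 TB": (18_000, 120_000)}
for name, (price, capacity_gb) in configs.items():
    print(f"{name}: ${price / capacity_gb:.2f}/GB")
# 20 TB: $0.50/GB
# 120 TB: $0.15/GB
```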

iXsystems positions the TrueNAS X10 as entry-level, enterprise-grade unified storage. As such, the TrueNAS X10 will make a cost-effective storage target for backup, video surveillance and file sharing workloads, but not for workloads characterized by random writes. Although iXsystems lists in-line deduplication and compression on its spec sheet, the relatively limited DRAM cache and CPU performance mean deduplication should be enabled only with caution. Tellingly, deduplication is turned off by default.

In the TrueNAS X10, iXsystems delivers enterprise-grade storage for companies that want to save money by moving off the public cloud. The X10 will also be attractive to companies that have outgrown the performance, capacity or limited data services offered by SMB-focused NAS boxes.

The TrueNAS X10 is not for every workload. But companies with monthly public cloud bills that have climbed into the tens of thousands of dollars may find that “cloud economics” are driving them to seek out affordable on-premises alternatives. Seek and ye shall find.




Blockchain Technology Being Used to Protect Data as Opposed to Holding It Ransom

Blockchain technology holds the potential to dramatically enhance global commerce and every supply chain. Unfortunately, the first real-world experience many organizations have had with it is using its best-known implementation, Bitcoin, to pay a ransom to cybercriminals who have encrypted their company’s files. The good news is that vendors like Nexsan see the upside of blockchain and are using it for more noble purposes: protecting files stored on its Unity Active Archive appliances.

Blockchain technology is on the verge of becoming really big. As in HUGE big. In a 2016 TED Talk, Don Tapscott referred to it as the technology that is likely to have the greatest impact in the next few decades for one simple reason: it facilitates the creation of trust. In fact, in the video he calls it “the trust protocol.”

This brings me to Nexsan and its use of blockchain technology in its Active Archive product. Why does Nexsan use blockchain? So users can trust that when they go to retrieve a file, they know it will be available in its original, undefiled state.

In the case of the Unity Active Archive, whenever it ingests a file, it stores two copies of the file and generates two cryptographic file hashes, or digital fingerprints. It stores those fingerprints separately, in a hardened private blockchain internal to the device.

These digital fingerprints are more than a “just-in-case” technology. Rather, they are used in automated file integrity audits. These audits guard the data from silent data corruption. When it discovers a mismatch between an original fingerprint and a fingerprint generated during an audit, it replaces the corrupted file using the other copy of the file from the archive’s object store.
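
A minimal sketch of the concept is shown below. This is not Nexsan’s implementation; the hash algorithm, ledger structure and field names are assumptions made for illustration. Each ingested file’s fingerprint is appended to a hash-chained ledger, and a later audit recomputes the fingerprint and compares it against the ledger entry.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class LedgerEntry:
    file_id: str
    fingerprint: str   # SHA-256 of the file contents at ingest time
    prev_hash: str     # hash of the previous entry, chaining the ledger together
    entry_hash: str    # hash over (file_id, fingerprint, prev_hash)

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def append_entry(ledger: list, file_id: str, data: bytes) -> None:
    prev = ledger[-1].entry_hash if ledger else "0" * 64
    fp = fingerprint(data)
    entry_hash = hashlib.sha256(f"{file_id}{fp}{prev}".encode()).hexdigest()
    ledger.append(LedgerEntry(file_id, fp, prev, entry_hash))

def audit(ledger: list, file_id: str, current_data: bytes) -> bool:
    """Return True if the stored copy still matches its ingest-time fingerprint."""
    entry = next(e for e in ledger if e.file_id == file_id)
    return fingerprint(current_data) == entry.fingerprint

ledger = []
append_entry(ledger, "invoice-2017.pdf", b"original file bytes")
print(audit(ledger, "invoice-2017.pdf", b"original file bytes"))  # True: file intact
print(audit(ledger, "invoice-2017.pdf", b"silently corrupted"))   # False: restore the second copy
```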

If you are like me, you are sick and tired of seeing criminals use the latest, greatest technologies like blockchain for nefarious purposes. Nexsan’s use of blockchain technology to protect files in its Active Archive is particularly satisfying to me on two levels. Not only does it provide a great way to verify file authenticity, it does it using the very technology that cybercriminals are using to get paid and avoid detection by authorities. I cannot think of a better way to capitalize on blockchain technology while playing turnabout on cybercriminals at the same time. Kudos to Nexsan for doing so!




DCIG 2017-18 Small/Midsize Enterprise All-Flash Array Buyer’s Guide Now Available

DCIG is pleased to announce the availability of the DCIG 2017-18 Small/Midsize Enterprise All-flash Array Buyer’s Guide, developed from its enterprise storage array body of research.

The DCIG 2017-18 Small/Midsize Enterprise All-flash Array Buyer’s Guide weights, scores and ranks more than 100 features of twenty-four (24) small/midsize enterprise-class all-flash arrays that achieved rankings of Recommended or Excellent. These products come from eleven (11) vendors including Dell EMC, Fujitsu, iXsystems, Kaminario, NEC, NetApp, Nimble Storage, Pivot3, Pure Storage, Tegile and Tintri. This Buyer’s Guide offers much of the information an organization should need to make a highly informed decision as to which all-flash storage array will best suit its needs.

Each array included in the DCIG 2017-18 Small/Midsize Enterprise All-flash Array Buyer’s Guide had to meet the following criteria:

  • Must be available as an appliance: a single SKU that includes its own hardware and software.
  • Must be marketed as an all-flash array (AFA). The best evidence of meeting this criterion is the existence of a specific all-flash SKU.
  • Must use flash memory as primary storage, not merely as an extended cache.
  • May permit storage expansion with disk shelves that contain HDDs or the virtualization of external disk-based arrays—essentially converting the all-flash array into a hybrid storage array.
  • Must support one or more of the following storage networking protocols: iSCSI, Fibre Channel, InfiniBand, NFS.
  • Provides features and capacities appropriate for small/midsize enterprises.
  • There must be sufficient information available to DCIG to make meaningful decisions. DCIG makes a good faith effort to reach out and obtain information from as many storage providers as possible. However, products may be excluded because of a lack of sufficient reliable data.
  • Must be formally announced and/or generally available for purchase as of February 28, 2017.

DCIG’s succinct analysis provides insight into the state of the all-flash storage array marketplace. The Buyer’s Guide identifies the specific benefits organizations can expect to achieve using an all-flash storage array and key features organizations should be aware of as they evaluate products. It also provides brief observations about the distinctive features of each product. Ranking tables enable organizations to get an “at-a-glance” overview of the products, while DCIG’s standardized one-page data sheets facilitate side-by-side comparisons that help organizations quickly create a short list of products that may meet their requirements.

End users registering to access this report via the DCIG Competitive Intelligence Portal also gain access to the DCIG Interactive Buyer’s Guide (IBG). The IBG enables organizations to take the next step in the product selection process by generating custom reports, including comprehensive side-by-side feature comparisons of the products in which the organization is most interested.