Fast Network Connectivity Key to Unlocking All-flash Array Performance

The current generation of all-flash arrays offers enough performance to saturate the network connections between the arrays and application servers in the data center. In many scenarios, the key limiter to all-flash array performance is storage network bandwidth. Therefore, all-flash array vendors have been quick to adopt the latest advances in storage network connectivity.

Fast Networks are Here, and Faster Networks are Coming

Chart showing current and future Ethernet speeds

Ethernet is now available with connection speeds up to 400 Gb per second. Fibre Channel now reaches speeds up to 128 Gb per second. As discussed during a recent SNIA presentation, the roadmaps for both technologies forecast another 2x to 4x increase in performance.

While the fastest connections are generally used to create a storage network fabric among data center switches, many all-flash arrays support fast storage network connectivity.

All-flash Arrays Embrace Fast Network Connectivity

DCIG’s research into all-flash arrays identified thirty-seven (37) models that support 32 Gb FC, seventeen (17) that support 100 Gb Ethernet, and ten (10) that support 100 Gb InfiniBand connectivity. These include products from Dell EMC, FUJITSU Storage, Hitachi Vantara, Huawei, Kaminario, NEC Storage, NetApp, Nimbus Data, Pure Storage and Storbyte.

Summary chart of AFA connectivity support

Source: DCIG

Other Drivers of Fast Network Connectivity

Although all-flash storage is a key driver behind fast network connectivity, there are also several other significant drivers. Each of these has implications for the optimal balance between compute, storage, network bandwidth, and the cost of creating and managing the infrastructure.

These other drivers of fast networking include:

  • Faster servers that offer more capacity and performance density per rack unit
  • Increasing volumes of data that require increasing bandwidth
  • Increasing east-west traffic between servers in the data center due to scale-out infrastructure and distributed cloud-native applications
  • The growth of GPU-enabled AI and data mining
  • Larger data centers, especially cloud and co-location facilities that may house tens of thousands of servers
  • Fatter pipes that yield more efficient fabrics with fewer switches and cables

Predominant All-Flash Array Connectivity Use Cases

How an all-flash array connects to the network is frequently based on the type of organization deploying the array. While there are certainly exceptions to the rule, the predominant connection methods and use cases can be summarized as follows:

  • Ethernet = Cloud and Service Provider data centers
  • Fibre Channel = Enterprise data centers
  • InfiniBand = HPC environments

Recent advances in network connectivity–and the adoption of these advances by all-flash array providers–create new opportunities to increase the amount of work that can be accomplished by an all-flash array. Therefore, organizations intending to acquire all-flash storage should consider each product’s embrace of fast network connectivity as an important part of the evaluation process.




HPE Predicts Sunny Future for Cloudless Computing

Antonio Neri, CEO of HPE, declared at its Discover event last week that HPE is transforming into a consumption-driven company that will deliver “Everything as a Service” within three years. In addition, Neri put forward the larger concept of “cloudless” computing. Are these announcements a tactical response to the recent wave of public cloud adoption by enterprises, or are they something more strategic?

“Everything as a Service” is Part of a Larger Cloudless Computing Strategy

“Everything as a Service” is, in fact, part of a larger “cloudless” computing strategy that Neri put forth. Cloudless. Do we really need to add yet another term to our technology dictionaries? Yes, we probably do.

picture of Antonio Neri with the word Cloudless in the background

HPE CEO, Antonio Neri, describing Cloudless Computing at HPE Discover

“Cloudless” is intentionally jarring, just like the term “serverless”. And just as “serverless” applications actually rely on servers, so also “cloudless” computing will rely on public clouds. The point is not that cloud goes away, but that it will no longer be consumed as a set of walled gardens requiring individual management by enterprises and applications.

Enterprises are indeed migrating to the cloud, massively. Attractions of the cloud include flexibility, scalability of performance and capacity, access to innovation, and its pay-per-use operating cost model. But managing and optimizing the hybrid and multi-cloud estate is challenging on multiple fronts including security, compliance and cost.

Cloudless computing is more than a management layer on top of today’s multi-cloud environment. The cloudless future HPE envisions is one where the walls between the clouds are gone, replaced by a service mesh that will provide an entirely new way of consuming and paying for resources in a truly open marketplace.

Insecure Infrastructure is a Barrier to a Cloudless Future

Insecure infrastructure is a huge issue. We recently learned that more than a dozen of the largest global telecom firms were compromised for as long as seven years without knowing it. This was more than a successful spearphishing expedition. Bad actors compromised the infrastructure at a deeper level. In light of such revelations, how can we safely move toward a cloudless future?

Foundations of a Cloudless Future

Trust based on zero trust. The trust fabric is really about confidence. Confidence that infrastructure is secure. HPE has long participated in the Trusted Computing Group (TCG), developing open standards for hardware-based root of trust technology and the creation of interoperable trusted computing platforms. At HPE they call the result “silicon root of trust” technology. This technology is incorporated into HPE ProLiant Gen10 servers.

Memory-driven computing. Memory-driven computing will be important to cloudless computing because it is necessary for real-time supply chain, customer and financial status integration.

Instrumented infrastructure. Providers of services in the mesh must have an instrumented infrastructure. Providers will use the machine data in multiple ways, including analytics, automation, and billing. After all, you have to see it in order to measure it, manage it and bill for it.

Infrastructure providers have created multiple ways to instrument their systems. Lenovo TruScale measures and bills based on power consumption. In HPE’s case, it uses embedded instrumentation and the resulting machine data for predictive analytics (HPE InfoSight), billing (HPE GreenLake) and cost optimization (HPE Consumption Analytics Portal).

Cloudless Computing Coming Next Year

HPE is well positioned to deliver on the “everything as a service” commitment. It has secure hardware. It has memory-driven composable infrastructure. It has an instrumented infrastructure across the entire enterprise stack. It has InfoSight analytics. It has consumption analytics. It has its Pointnext services group.

However, achieving the larger vision of a cloudless future will involve tearing down some walls with participation from a wide range of participants. Neri acknowledged the challenges, yet promised that HPE will deliver cloudless computing just one year from now. Stay tuned.




TrueCommand Brings Unified Management and Predictive Analytics to ZFS Storage

Many businesses are embarking on digital transformation initiatives that will put technology at the core of business value creation. At the same time, many of these same businesses are seeking to reduce or eliminate the cost of managing IT infrastructure. Storage vendors are addressing these seemingly incompatible goals by investing in new storage management capabilities including unified management, automation, predictive analytics, and proactive support.

iXsystems already offered API-based integration into automation frameworks and proactive support for TrueNAS. Now iXsystems has released TrueCommand to bring the benefits of unified storage management with predictive analytics to owners of its ZFS-based TrueNAS and FreeNAS arrays.

Key Business Benefits of TrueCommand Unified Management and Predictive Analytics

  • Unifies the management of primary and secondary storage
  • Increases uptime while decreasing storage management costs
  • Empowers storage administrators and managed service providers
  • Enables team-based global operations and security

Unifies the Management of Primary and Secondary Storage

infographic showing that TrueCommand can provide unified management of all TrueNAS and FreeNAS systems

TrueCommand provides unified management of both TrueNAS and FreeNAS storage systems. Many TrueNAS customers were introduced to iXsystems via FreeNAS, later upgrading to TrueNAS to run key business applications on fault-tolerant appliances with enterprise-level support. Customers with TrueNAS for mission-critical applications and FreeNAS systems for backups, replication targets, or less critical workloads can manage both seamlessly via TrueCommand.

Increases Uptime While Reducing Storage Management Costs

TrueCommand takes the complexity out of managing large storage environments with multiple NAS systems in multiple locations. The robust functionality of TrueCommand increases uptime while reducing storage management costs.

Centralized alerts. TrueCommand centralizes the management of alerts. In addition to the standard system alerts, storage administrators can define custom alerts. The alerts for all managed systems show up on the web-based dashboard. Administrators can also define notification groups to receive specific alerts via email. Thus, TrueCommand keeps the right people informed of any current or potential storage system problems.

Predictive analytics. TrueCommand provides predictive analytics focused on array health and capacity planning. Administrators can define thresholds that will trigger alerts based on these predictive analytics. For example, the system can issue alerts when certain capacity utilization thresholds are reached in a storage pool. This gives administrators needed lead time to add capacity or move workloads to less heavily utilized arrays.
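
To make the idea concrete, here is a minimal sketch of how a capacity-planning alert of this kind might work. It is purely illustrative; the sample data, thresholds, and simple linear projection are assumptions, not TrueCommand's actual analytics model.

```python
def days_until_full(history, pool_size_tib):
    """Project days until a storage pool fills from (day, used_tib) samples.

    A simple linear fit between the first and last samples stands in for
    whatever model TrueCommand actually uses internally (assumption).
    """
    (d0, u0), (d1, u1) = history[0], history[-1]
    daily_growth = (u1 - u0) / max(d1 - d0, 1)
    if daily_growth <= 0:
        return None  # pool is not growing
    return (pool_size_tib - u1) / daily_growth

# Hypothetical samples: 40 TiB used on day 0, 66 TiB used on day 30.
history = [(0, 40.0), (30, 66.0)]
pool_size_tib = 80.0

used_pct = history[-1][1] / pool_size_tib * 100
days_left = days_until_full(history, pool_size_tib)

# Alert when utilization crosses 80% or projected exhaustion is under 30 days.
if used_pct >= 80 or (days_left is not None and days_left < 30):
    print(f"ALERT: pool at {used_pct:.0f}% used, ~{days_left:.0f} days until full")
```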

In addition, TrueCommand analytics can run locally on the array or on a local server. Consequently, this benefit is available even on air-gapped systems, a requirement for many TrueNAS customers.

Proactive support. Storage management overhead can be further reduced by sending alerts to iXsystems USA-based support engineers for expert, proactive intervention. As many others have discovered, the combination of predictive analytics and proactive support is a potent weapon for increasing uptime and reducing storage administration costs. Proactive support is included in all Silver or above support entitlements.

Integration. Many iXsystems customers that could gain the most benefit from TrueCommand have already made substantial investments in infrastructure management tools and processes. TrueCommand employs REST and WebSocket APIs to provide real-time monitoring of TrueNAS and FreeNAS storage systems, collect performance statistics, enable and disable services, and even configure and monitor TrueCommand. Customers can use these same APIs to integrate these TrueCommand capabilities with their existing infrastructure management tools and processes.
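
As a rough illustration of what such an integration could look like, the sketch below polls a monitoring API and forwards open alerts to an existing tool. The base URL, endpoint paths, token, and response fields are hypothetical; the real TrueCommand REST and WebSocket interfaces should be taken from the iXsystems documentation.

```python
import requests

# Hypothetical endpoint and token; consult the TrueCommand API documentation
# for the actual paths, authentication scheme, and response fields.
BASE_URL = "https://truecommand.example.local/api"
HEADERS = {"Authorization": "Bearer <api-token>"}

def list_systems():
    """Fetch the systems TrueCommand is monitoring (illustrative endpoint)."""
    resp = requests.get(f"{BASE_URL}/systems", headers=HEADERS, timeout=10)
    resp.raise_for_status()
    return resp.json()

def open_alerts(system_id):
    """Fetch open alerts for one system so an external tool can ingest them."""
    resp = requests.get(f"{BASE_URL}/systems/{system_id}/alerts",
                        headers=HEADERS, params={"state": "open"}, timeout=10)
    resp.raise_for_status()
    return resp.json()

for system in list_systems():
    for alert in open_alerts(system["id"]):
        # Forward to an existing monitoring pipeline (ticketing, chat, etc.)
        print(system["name"], alert["severity"], alert["message"])
```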

Empowers Storage Administrators and Managed Service Providers

TrueCommand empowers enterprise and managed services provider storage administrators by enabling each administrator to proactively manage a large number of storage systems.

TrueCommand dashboard shows unified management of TrueNAS and FreeNAS systems

TrueCommand Dashboard

Visibility. The TrueCommand dashboard provides visibility into an entire organization’s TrueNAS and FreeNAS storage systems. It includes an auto-discovery tool that expedites the process of identifying and integrating systems into TrueCommand.

Customizable reports. Administrators can create graphical reports and add them to the reporting page. Reports are configurable and can span any group of systems or set of metrics. This enables the administrator and any sub-admins to view the storage system data that they deem most relevant to their administrative duties. They can also export chart data in CSV or JSON format for external use.

Single sign-on. Once a storage system appears on the dashboard, authorized administrators can log in by clicking on the system name. This feature is faster, simpler and more secure than looking up IP addresses and login credentials in a separate document or using a single password across multiple systems.

Enables Team-based Global Operations and Security

Role-based Access Control (RBAC). TrueCommand administrators can specify different levels of system visibility by assigning arrays to system groups, and individuals to teams and/or departments. By assigning different levels of access to each group, the administrator creates the level of access appropriate to each individual in a manageable, granular fashion. These RBAC controls can leverage existing LDAP and Active Directory identities and groups, eliminating redundant effort, error, and management overhead.
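
The sketch below shows one way group-based access of this kind can be modeled in code. The group names, access levels, and mapping structure are all invented for illustration; TrueCommand defines its own roles and manages them through its UI and APIs.

```python
# Hypothetical mapping of directory groups to access levels; the actual RBAC
# model and level names belong to TrueCommand, not to this sketch.
ACCESS_LEVELS = ("read-only", "operator", "administrator")

team_policies = {
    # LDAP/AD group          -> (system group,  access level)
    "storage-admins":           ("all-systems",  "administrator"),
    "emea-operations":          ("emea-arrays",  "operator"),
    "audit-and-compliance":     ("all-systems",  "read-only"),
}

def access_for(user_groups, system_group):
    """Return the highest access a user's directory groups grant on a system group."""
    granted = [
        level for grp, (sys_grp, level) in team_policies.items()
        if grp in user_groups and sys_grp in (system_group, "all-systems")
    ]
    return max(granted, key=ACCESS_LEVELS.index) if granted else None

print(access_for({"emea-operations"}, "emea-arrays"))   # -> operator
```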

Audit logs. TrueCommand records all storage administration actions in secure audit logs. This helps to quickly identify what changed and who changed it when troubleshooting any storage issues.

TrueCommand Brings Unified Management and Predictive Analytics to FreeNAS and TrueNAS

Many enterprises and managed service providers are seeking to reduce the cost of managing IT infrastructure. But until now they have been forced to purchase proprietary storage systems or to go through extensive development efforts to create these capabilities in-house. Now iXsystems is bringing these benefits to the TrueNAS product family, including Open Source FreeNAS, through the simple to implement, yet powerful, TrueCommand storage management utility.

TrueCommand will add significant value to any organization that is managing multiple TrueNAS and/or FreeNAS storage systems. It should also put TrueNAS on more short lists as companies refresh their IT infrastructures with cost-effective enterprise infrastructure in mind.

Availability and licensing. TrueCommand is available now. TrueNAS and FreeNAS customers can manage up to 50 total drives across multiple storage systems without any purchase or contract. Beyond 50 drives, customers can purchase licenses based on the total number of drives and desired support level.




Lenovo TruScale and Nutanix Enterprise Cloud Accelerate Enterprise Transformation

Digital transformation is an enterprise imperative. Enabling that transformation is the focus of Lenovo’s TruScale data center infrastructure services. The combination of TruScale infrastructure services and Nutanix application services creates a powerful accelerant for enterprise transformation.

Cloud is the Transformation Trigger

Many enterprises are seeking to go to the cloud, or at least to gain the benefits associated with the cloud. These benefits include:

  • pay-as-you-go operational costs instead of large capital outlays
  • agility to rapidly deploy new applications
  • flexibility to adapt to changing business requirements

For many IT departments, the trigger for serious consideration of a move to the cloud is when the CFO no longer wants to approve IT acquisitions. Unfortunately, the journey to the cloud often comes with a loss of control over both costs and data assets. Thus, many enterprise IT leaders are seeking a path to cloud benefits without sacrificing control of costs and data.

TruScale Brings True Utility Computing to Data Center Infrastructure

The Lenovo Data Center Group focused on the needs of these enterprise customers by asking themselves:

  • What are customers trying to do?
  • What would be a winning consumption model for customers?

The answer they came up with is Lenovo TruScale Infrastructure Services.

Nutanix invited DCIG analysts to attend the recent .NEXT conference. While there we met with many participants in the Nutanix ecosystem, including an interview with Laura Laltrello, VP and GM of Lenovo Data Center Services. This article, and DCIG’s selection of Lenovo TruScale as one of three Best of Show products at the conference, is based largely on that interview.

As noted in the DCIG Best of Show at Nutanix .NEXT article, TruScale brings true utility computing to data center infrastructure. Lenovo bills TruScale clients a monthly management fee plus a utilization charge. It bases this charge on the power consumed by the Lenovo-managed IT infrastructure. Clients can commit to a certain level of usage and be billed a lower rate for that baseline. This is similar to reserved instances on Amazon Web Services, except that customers only pay for actual usage, not reserved capacity.
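
A simplified sketch of this commit-plus-overage model appears below. The rates, the per-kWh structure, and the management fee are invented for illustration; actual TruScale pricing is negotiated per engagement.

```python
def monthly_truscale_bill(kwh_used, baseline_kwh, baseline_rate,
                          overage_rate, management_fee):
    """Sketch of a commit-plus-overage utility bill based on metered power.

    All values and the structure itself are illustrative assumptions; usage
    below the baseline is billed only for what was actually consumed.
    """
    committed = min(kwh_used, baseline_kwh) * baseline_rate   # discounted baseline
    overage = max(kwh_used - baseline_kwh, 0) * overage_rate  # on-demand remainder
    return management_fee + committed + overage

# Example: 12,000 kWh metered against a 10,000 kWh baseline commitment.
print(monthly_truscale_bill(kwh_used=12_000, baseline_kwh=10_000,
                            baseline_rate=0.30, overage_rate=0.40,
                            management_fee=2_000))
```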

infographic summarizing Lenovo TruScale features

Source: Lenovo

This power consumption-based approach is especially appealing to enterprises and service providers for which one or more of the following holds true:

  • Their data center workloads tie directly to revenue.
  • They want IT to focus on enabling digital transformation, not infrastructure management.
  • They need to retain possession of, or secure control over, their data.

Lenovo TruScale Offers Everything as a Service

TruScale can manage everything as a service, including both hardware and software. Lenovo works with its customers to figure out which licensing programs make the most sense for the customer. Where feasible, TruScale includes software licensing as part of the service.

Lenovo Monitors and Manages Data Center Infrastructure

TruScale does not require companies to install any extra software. Instead, it gets its power utilization data from the management processor already embedded in Lenovo servers. It then passes this power consumption data to the Lenovo operations center(s) along with alerts and other sensor data.
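
Most modern servers expose this kind of power telemetry through their baseboard management controller, typically via the DMTF Redfish API. The sketch below shows a generic Redfish query; the hostname, credentials, and chassis ID are placeholders, and Lenovo's actual collection mechanism for TruScale may differ.

```python
import requests
from requests.auth import HTTPBasicAuth

# Generic DMTF Redfish query against a server's management processor (BMC).
# Treat this as an illustration of the kind of telemetry a service like
# TruScale can meter, not as Lenovo's actual implementation.
BMC = "https://bmc.example.local"
AUTH = HTTPBasicAuth("monitor", "<password>")

def power_consumed_watts(chassis_id="1"):
    url = f"{BMC}/redfish/v1/Chassis/{chassis_id}/Power"
    # verify=False only because many BMCs ship with self-signed certificates.
    resp = requests.get(url, auth=AUTH, verify=False, timeout=10)
    resp.raise_for_status()
    # PowerConsumedWatts is part of the standard Redfish Power schema.
    return resp.json()["PowerControl"][0]["PowerConsumedWatts"]

print(power_consumed_watts())
```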

Lenovo uses the data it collects to trigger support interventions. Lenovo services professionals handle all routine maintenance including installing firmware updates and replacing failed components to ensure maximum uptime. Thus, Lenovo manages data center infrastructure below the application layer.

Lenovo Provides Continuous Infrastructure (and Cost) Visibility

Lenovo also uses the data it collects to provide near real-time usage data to customers via a dashboard. This dashboard graphically presents performance against key metrics, including actual versus budgeted usage. In short, Lenovo’s approach to utility data center computing provides a distinctive and easy means to deploy and manage infrastructure across its entire lifecycle.

Lenovo Integrates with Nutanix Prism

Lenovo TruScale infrastructure services cover the entire range of Lenovo ThinkSystem and ThinkAgile products. The software-defined infrastructure products include pre-integrated solutions for Nutanix, Azure HCI, Azure Stack and VMware.

Lenovo has taken extra steps to integrate its products with Nutanix. These include:

  • ThinkAgile XClarity Integrator for Nutanix is available via the Nutanix Calm marketplace. It works in concert with Prism to integrate server data and alerts into the Prism management console.
  • ThinkAgile Network Orchestrator is an industry-first integration between Lenovo switches and Prism. It reduces error and downtime by automatically changing physical switch configurations when changes are made to virtual Nutanix networks.

Nutanix Automates the Application Layer

Nutanix software simplifies the deployment and management of enterprise applications at scale. The following graphic, taken from the opening keynote, lists each Nutanix component and summarizes its function.

image showing summary list of Nutanix services

Source: Nutanix

The Nutanix .NEXT conference featured many customers describing how Nutanix has transformed their data center operations. Their statements about Nutanix include:

“stable and reliable virtual desktop infrastructure”

“a private cloud with all the benefits of public, under our roof and able to keep pace with our ambitions”

“giving me irreplaceable time and memories with family”

“simplicity, ease of use, scale”

Lenovo TruScale + Nutanix = Accelerated Enterprise Transformation

I was not initially a fan of the term “digital transformation.” It felt like yet another slogan that really meant, “Buy more of my stuff.” But practical applications of machine learning and artificial intelligence are here now and truly do present significant new opportunities (or threats) for enterprises in every industry. Consequently, and more than at any time in the past, the IT department has a crucial role to play in the success of every company.

Enterprises need their IT departments to transition from being “Information Technology” departments to “Intelligent Transformation” departments. TruScale and Nutanix each enable such a transition by freeing up IT staff to focus on the business rather than on technology. Together, the combination of TruScale infrastructure services and Nutanix application services creates a powerful accelerant for enterprise transformation.

Transform and thrive.

 

Disclosure: As noted above, Nutanix invited DCIG analysts to attend the .NEXT conference. Nutanix covered most of my travel expenses. However, neither Nutanix nor Lenovo sponsored this article.

Updated on 5/24/2019.




DCIG’s ISC West 2019 Best of Show in Video Surveillance

ISC West logo

ISC West—the International Security Conference and Exposition—provides insight into some of the biggest trends in the security industry. The conference attracted more than 30,000 attendees and nearly 1,000 vendors earlier this month. DCIG analysts planned our attendance at this year’s conference with a focus on video surveillance, especially video analytics. We had an eye-opening experience.

Artificial Intelligence is More Than a Buzzword

Artificial intelligence was one of the major themes of the conference. We saw some disappointing examples of vendors stretching to apply the artificial intelligence (AI) label to their products. Other vendors dismissed AI as just a buzzword, with no real projects demonstrating value in the field.

From what we gleaned from our experience at ISC West, video analytics in the field has advanced significantly. AI is more than a buzzword. The ability of AI to generate insight and value from surveillance video has moved from a diamond in the rough to a multi-faceted gem.

DCIG identified three companies for “Best of Show” awards in various facets of video surveillance infrastructure.

Briefcam Proves the Benefits of Video Analytics in the Field

BriefCam logo

The best example that we encountered of video analytics yielding actionable intelligence is Briefcam. At their booth, former law enforcement officer Johnmichael O’Hare demonstrated how he had used Briefcam to quickly condense four hours of surveillance video to a heat map that instantly revealed a house being used to sell drugs. He said they raided the house the next day, resulting in multiple arrests and the seizure of a significant quantity of dangerous drugs.

Briefcam is a multi-faceted tool. For example, Johnmichael demonstrated how Briefcam could be used to rapidly analyze traffic flows and add value to video for law enforcement and city planners.

Pivot3 Hyperconverged Infrastructure Handles Video Surveillance Workloads at Enterprise Scale

Pivot3 company logo

We found Briefcam through a mention at the Pivot3 booth. It turns out that Briefcam and Pivot3 have been partnering since 2011 to deliver integrated surveillance video storage and analytics. Pivot3 provides a hyperconverged infrastructure (HCI) that can handle video surveillance workloads at scale. The Pivot3 solution incorporates NVIDIA GPUs into its intelligent storage architecture to accelerate video analytics. The scalability of the Pivot3 HCI is important to deployments that may scale to include thousands of cameras and other IoT endpoints.

Razberi Technologies EndpointDefender Secures IoT Infrastructure

razberi technologies logo

Speaking of IoT, an important element of deploying and managing a video surveillance infrastructure is securing that infrastructure. Hackers gaining control of security cameras in homes is creepy. Hackers gaining control of security cameras within enterprise networks and critical facilities is spooky on a whole different level.

Cameras destined for enterprise deployments are supposed to be more secure than devices intended for the home. Nevertheless, they depend on manufacturers to incorporate appropriate security features in firmware, and on installers to properly configure the devices during installation. These and other IoT devices also depend on the surrounding infrastructure to protect them from attacks. Prudent companies likewise protect the infrastructure from attacks that improperly secured IoT devices make possible. This is where EndpointDefender from Razberi Technologies comes in.

photo of awards on display at razberi booth

EndpointDefender secures cameras and other IoT devices on the edge, even those that were deemed insecure. A gentleman at the Razberi Technologies booth told of a situation where an integrator had installed hundreds of video cameras just before a federal agency issued a warning that the cameras were not secure. Rather than replacing all the video cameras, the integrator was able to replace the standard Ethernet switches with razberi EndpointDefender network appliances to harden the connected cameras and protect the network from cybersecurity threats posed by the cameras.

Apparently, we were not the only ones who were impressed by Razberi Technologies and the EndpointDefender. The technology won SIA’s 2019 Cybersecurity New Product Showcase Award at the conference.

Analytics Can Move Video Surveillance from Cost Center to Strategic Asset

Many organizations implemented video surveillance as an operational tool and are now managing more than a petabyte of video surveillance storage. AI-enabled analytics tools are now available that in many cases can turn that operational cost center into a strategic asset. It is time to up-level our thinking and discussions within the enterprise about video surveillance.

DCIG will continue to cover developments in video surveillance and cybersecurity. If you haven’t already done so, please sign up for the weekly DCIG Newsletter so that we can keep you informed of these developments.




TrueNAS Plugins Converge Services for Simple Hybrid Cloud Enablement

iXsystems is taking simplified service delivery to a new level by enabling a curated set of third-party services to run directly on its TrueNAS arrays. TrueNAS already provided multi-protocol unified storage to include file, block and S3-compatible object storage. Now preconfigured plugins converge additional services onto TrueNAS for simple hybrid cloud enablement.

TrueNAS Technology Provides a Robust Foundation for Hybrid Cloud Functionality

iXsystems is known for enterprise-class storage software and rock-solid storage hardware. This foundation lets iXsystems customers run select third-party applications as plugins directly on the storage arrays—whether TrueNAS, FreeNAS Mini or FreeNAS Certified. Several of these plugins dramatically simplify the deployment of hybrid public and private clouds.

How it Works

iXsystems works with select technology partners to preconfigure their solutions to run on TrueNAS using FreeBSD jails, iocage plugins, and bhyve virtual machines. By collaborating with these technology partners, iXsystems enables rapid IT service delivery and drives down the total cost of technology infrastructure. The flexibility to extend TrueNAS functionality via these plugins transforms the appliances into complete solutions that streamline common workflows.

Benefits of Curated Third-party Service Plugins

There are many advantages to this pre-integrated plugin approach:

  • Plugins are preconfigured for optimal operation on TrueNAS
  • Services can be added any time through the web interface
  • Simply turn it on, download the plugin and enter the associated login credentials
  • Plugins reduce network latency by moving processing to the storage array
  • Third party applications can be run in a virtual machine without purchasing separate server hardware

Hybrid Cloud Data Protection

The integrated Asigra Cloud Backup software protects cloud, physical, and virtual environments. It is an enterprise-class backup solution that uniquely helps prevent malware from compromising backups. Asigra embeds cybersecurity software in its Cloud Backup software. It goes the extra mile to protect backup repositories, ensuring businesses can recover from malware attacks in their production environments.

Asigra is also one of the few enterprise backup solutions that offer agentless backup support across all types of environments: cloud, physical, and virtual. This flexibility makes adopting and deploying Asigra Cloud Backup easy, with zero disruption to clients and servers. The integration of Asigra with TrueNAS was named Storage Magazine’s Backup Product of the Year for 2018.

Hybrid Cloud Media Management

TrueNAS arrays from iXsystems are heavily used in the media and entertainment industry, including at several major film and television studios. iXsystems storage accelerates workflows with any-device file sharing, multi-tier caching technology, and the latest interconnect technologies on the market. iXsystems recently announced a partnership with Cantemo to integrate its iconik software.

iconik is a hybrid cloud-based video and content management hub. Its main purpose is managing processes including ingestion, annotation, cataloging, collaboration, storage, retrieval, and distribution of digital assets. The product’s main strength is its support for managing metadata and transcoding audio, video, and image files, but it can store essentially any file format. Users can choose to keep large original files on-premises yet still view and access the entire library in the cloud using proxy versions where required.

The Cantemo solutions are used to manage media across the entire asset lifecycle, from ingest to archive. iconik is used across a variety of industries including Fortune 500 IT companies, advertising agencies, broadcasters, houses of worship, and media production houses. Cantemo’s clients include BBC Worldwide, Nike, Madison Square Garden, The Daily Telegraph, The Guardian and many other leading media companies.

Enabling iconik on TrueNAS streamlines multimedia workflows and increases productivity for iXsystems customers who choose to enable the Cantemo service.

Cloud Sync

Both Asigra and Cantemo include hybrid cloud data management capabilities within their feature sets. iXsystems also supports file synchronization with many business-oriented and personal public cloud storage services. These enable staff to be productive anywhere—whether working with files locally or in the cloud.

Supported public cloud providers include Amazon Cloud Drive, Amazon S3, Backblaze B2, Box, Dropbox, Google Cloud Storage, Google Drive, Hubic, Mega, Microsoft Azure Blob Storage, Microsoft OneDrive, pCloud and Yandex. The Cloud Sync tool also supports file sync via SFTP and WebDAV.

More Technology Partnerships Planned

According to iXsystems, they will extend TrueNAS pre-integration to more technology partners where such partnerships provide win-win benefits for all involved. This intelligent strategy allows iXsystems to focus on enhancing core TrueNAS storage services, and it enables TrueNAS customers to quickly and confidently implement best-of-breed applications directly on their TrueNAS arrays.

All TrueNAS Owners Benefit

TrueNAS plugins provide a simple and flexible way for all iXsystems customers to add sophisticated hybrid-cloud media management and data protection services to their IT environments. Existing TrueNAS customers can gain the benefits of this plugin capability by updating to the most recent version of the TrueNAS software.




Ways Persistent Memory is Showing Up in Enterprise Storage in 2019

Persistent Memory is bringing a revolution in performance, cost and capacity that will change server, storage system, data center and software design over the next decade. This article describes some ways storage vendors are integrating persistent memory into enterprise storage systems in 2019.

Intel Optane DC Persistent Memory Modules (PMM)

picture of an Intel® Optane™ DC persistent memory stick

As noted in the second article in the series–NVMe-oF Delivering While Persistent Memory Remains Mostly a Promise–the lack of a standard DIMM format for persistent memory is a key barrier to the development of NVDIMMs. Nevertheless, Intel recently announced general availability of pre-standard Optane DIMMs, branded Intel Optane DC Persistent Memory Modules (PMM).

Intel supports multiple modes for accessing Optane PMM. Each mode exposes different capabilities for systems to exploit. In “Memory Mode,” DRAM acts as a hot-data cache in front of the Optane capacity tier; somewhat strangely, in this mode the Optane media presents as a large pool of volatile memory rather than persistent memory. A second mode, “App Direct Mode,” treats Optane as true persistent memory, and applications write to it using load/store memory semantics.
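
To illustrate what load/store access means in practice, here is a minimal sketch that assumes the operating system exposes the Optane modules as a DAX-mounted filesystem (the /mnt/pmem path is an assumption). Production code would typically use Intel's PMDK libraries to control cache flushing and guarantee that stores are actually persistent.

```python
import mmap
import os

# Minimal sketch of load/store access to App Direct persistent memory,
# assuming a DAX-mounted filesystem at /mnt/pmem (assumption, not a given).
PATH = "/mnt/pmem/example.dat"
SIZE = 4096

fd = os.open(PATH, os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, SIZE)

with mmap.mmap(fd, SIZE) as pmem:
    pmem[0:13] = b"hello, optane"   # a store directly into the mapped region
    pmem.flush()                    # request write-back of the mapped range
    print(pmem[0:13])               # a load from the same region

os.close(fd)
```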

NetApp demonstrates one way this technology can be integrated into existing enterprise storage systems. It uses Optane DIMMs in application servers as part of the NetApp Memory Accelerated (MAX) Data solution. MAX Data writes to Optane PMM in App Direct Mode as the hot storage tier and tiers cold data to NetApp AFF all-flash arrays. With MAX Data, applications do not need to be rewritten to take advantage of Optane. Instead, the solution presents the Optane memory as POSIX-compliant storage.

Storage Vendors are Using Optane SSDs in Multiple Ways

As noted in the first article in this series, multiple storage system providers are taking advantage of Optane SSDs. Some storage vendors, such as HPE, use the Optane SSDs to provide a large ultra-low-latency read cache. Some vendors, including E8 Storage, use Optane SSDs as primary storage. Still others use Optane SSDs as the highest performing tier of storage in a multi-tiered storage environment.

A startup called VAST Data recently emerged from stealth. Its solution uses Optane SSDs as a write buffer and metadata store in front of the primary storage pool. It uses the least expensive flash memory–currently QLC SSDs–as the only capacity tier. The architecture also disaggregates storage processing from the storage pool by running the logic in containers on servers that talk to the storage nodes via NVMe-oF.

MRAM is Being Embedded Into Storage Components

At the SNIA Persistent Memory Summit, one presenter said that the largest uses of MRAM in the data center are in enterprise SSDs, RAID controllers, storage accelerator add-in cards and network adapters. For example, IBM uses MRAM in its FlashCore Modules, its most recent generation of 2.5-inch U.2 SSDs. The MRAM replaced the supercapacitors and DRAM used in the prior generation of SSDs, simplifying the design and enabling more capacity in less space without the risk of data loss.

Persistent Memory Will Impact All Aspects of Data Processing

Technology companies have invested many millions of dollars into the development of a variety of persistent memory technologies. Some of these technologies exist only in the laboratories of these companies. But today, multiple vendors are incorporating Intel’s Optane 3D XPoint and MRAM into a variety of data center products.

We are in the very early phases of a persistent-memory-enabled revolution in performance, cost and capacity that will change server, storage system, data center and software design over the next decade. Although some aspects of this revolution are being held back by a lack of standards, multiple vendors are now shipping storage class memory as part of their enterprise storage systems. The revolution has begun.

 

This is the third in a series of articles about Persistent Memory and its use in enterprise storage. The second article in the series is NVMe-oF Delivering While Persistent Memory Remains Mostly a Promise.

This article was updated on 4/5/2019 to add a link to the prior article in the series.




NVMe-oF Delivering While Persistent Memory Remains Mostly a Promise

logo of the persistent memory summit

The SNIA Persistent Memory Summit held in late January 2019 provided a good view into the current state of the industry. Some key technologies and standards related to persistent memory are moving forward more slowly than expected. Others are finally transitioning from promise to products. This article summarizes a few key takeaways from the event as they relate to enterprise storage systems.

Great Performance Gains Possible Without Modifying Software

One point the presenters at this SNIA-sponsored event took pains to make clear is that great performance gains from storage class memory are possible without making any changes to the software that uses the storage. For example, in a machine learning test that used Optane to extend server memory capacity, a standard host completed 3x as many analytics models.

These results are being obtained due to the efforts of SNIA and its member organizations. They developed the SNIA NVM Programming Model and a set of persistent memory libraries. Both Microsoft Windows and multiple Linux variants take advantage of these libraries to enable any application running on those operating systems to benefit from persistent memory.

Optane is a Gap Filler in the Storage Hierarchy, Not a DRAM Replacement

chart showing place of optane in storage memory hierarchy between DRAM and NAND SSD

Slide from Intel Presentation at SNIA PM Summit

One fact made clear across multiple presentations is that Optane (Intel’s brand name for 3D XPoint persistent memory) fills an important gap in the storage hierarchy, but falls short as a non-volatile replacement for DRAM. Every storage medium has strengths and weaknesses. Optane has excellent read latency and bandwidth, so deploying it as a persistent read cache, as HPE is doing, may be its primary use case in enterprise storage systems.

MRAM is Shipping Now and Being Embedded Into Many Products

The main surprise for me from the event was the extent to which MRAM has become a real product. In addition to Everspin and Avalanche, both Intel and Samsung have announced that they are ready to ship STT-MRAM (spin-transfer torque magnetic RAM) in commercial production volumes.

MRAM offers read/write speeds similar to DRAM, and enough endurance to be used as a DRAM replacement in many scenarios. The initial focus of MRAM shipments is embedded devices, where the necessary surrounding standards are already in place. MRAM’s capacity, endurance and low power draw make it a great fit with the requirements of next-generation embedded edge devices.

photo of Kevin Conley CEO of Everspin and the memory landscape

Kevin Conley presenting at the PM Summit

Kevin Conley, CEO of Everspin Technologies, gave an especially helpful presentation describing the characteristics of MRAM and how it fits into the memory technology landscape. He stated that MRAM is currently being used in enterprise SSDs, RAID controllers and storage accelerator cards. His 10-minute presentation begins approximately 13 minutes into this video recording.

Persistent Memory Moving Onto the NIC

One new use case for persistent memory is to place it on network interface cards. The idea is to persist writes on the NIC before the data leaves the host server, eliminating the network and back-end storage system from the write-latency equation. It will be interesting to see how providers will integrate this capability into their storage solutions.

MRAM Memory Sticks Waiting on DDR5 and NVDIMM-P Standards

One factor holding back MRAM and other storage-class memories from being used in the familiar DIMM format is the lack of critical standards. NVDIMM-P is the standard for placing non-volatile memory on DIMMs. The DDR5 standard will permit large-capacity DIMMs. Both standards were originally expected to be completed in 2018, but that did not happen. No firm date for their completion was provided at the Summit.

Not everyone is waiting for the standards to be finalized. Intel is shipping its Optane DC Persistent Memory in DDR4-compatible DIMM format without waiting for the NVDIMM-P standard. The modules are available in capacities of 128 GB, 256 GB and 512 GB–a foretaste of what NVDIMM-P will do for memory capacities. While it is good to see some pre-standard NVDIMM products being introduced, the NVDIMM-P and DDR5 standards will be key to the broad adoption of persistent memory, just as the CCITT Group 3 and IEEE 802.3 standards were to fax and networking.

NVDIMM-N Remains the Predominant Non-Volatile Memory Technology for 2019 and 2020

The predominant technology for providing non-volatile memory on the memory bus is based on the NVDIMM-N standard. These NVDIMMs pair DRAM with flash memory and a battery or capacitor. The DRAM handles I/O until a shutdown or power loss triggers the contents of DRAM to be copied to the flash memory.

NVDIMM-N modules provide the performance of DRAM and the persistence of flash memory. This makes them excellent for use as a write-cache, as iXsystems and Western Digital do in their respective TrueNAS and IntelliFlash enterprise storage arrays.

NVMe-oF Delivers in 2019 and 2020

If the DDR5 and NVDIMM-P standards are published by the end of 2019, we may see MRAM and other storage class memory technologies in enterprise storage systems by 2021. In the meantime, enterprise storage providers will focus on integrating NVMe and NVMe-oF into their products to provide advances in storage performance. Multiple vendors are already shipping NVMe-oF compliant products. These include E8 Storage, Pavilion Data Systems, Kaminario, and Pure Storage.

Learn More About Persistent Memory

DCIG focuses most of its efforts on enterprise technology that is currently available in the marketplace. Nevertheless, we believe that persistent memory will have significant implications for servers, storage and data center designs within the technology planning horizons of most enterprises. As such, it is important for anyone involved in enterprise information technology to understand those implications.

You can learn more about persistent memory from the people and organizations that are driving the industry forward. SNIA is making all the presentations from the Persistent Memory Summit available for viewing at https://www.snia.org/pm-summit.

DCIG will continue to cover developments in persistent memory, especially as it makes its way into enterprise technology products. If you haven’t already done so, please sign up for the weekly DCIG Newsletter so that we can keep you informed of these developments.

 

This is the second in a series of articles about Persistent Memory and its use in enterprise storage. The first article in the series is Caching vs Tiering with Storage Class Memory and NVMe – A Tale of Two Systems. The third article is Ways Persistent Memory is Showing Up in Enterprise Storage in 2019.

This article was updated on 4/1/2019 to add more detail about MRAM and NVDIMM-P, and on 4/5/2019 to add links to the other articles in the series.




Caching vs Tiering with Storage Class Memory and NVMe – A Tale of Two Systems

Dell EMC announced that it will soon add Optane-based storage to its PowerMAX arrays, and that PowerMAX will use Optane as a storage tier, not “just” cache. This statement implies using Optane as a storage tier is superior to using it as a cache. But is it?

PowerMAX will use Storage Class Memory as Tier in All-NVMe System

Some people criticized Dell EMC for taking an all-NVMe approach–and therefore eliminating hybrid (flash memory plus HDD) configurations. Yet the all-NVMe decision gave the engineers an opportunity to architect PowerMAX around the inherent parallelism of NVMe. Dell EMC’s design imperative for the PowerMAX is performance over efficiency. And it does perform:

  • 290 microsecond latency
  • 150 GB per second of throughput
  • 10 million IOPS

These results were achieved with standard flash memory NVMe SSDs. The numbers will get even better when Dell EMC adds Optane-based storage class memory (SCM) as a tier. Once SCM has been added to the array, Dell EMC’s fully automated storage tiering (FAST) technology will monitor array activity and automatically move the most active data to the SCM tier and less active data to the flash memory SSDs.

The intelligence of the tiering algorithms will be key to delivering great results in production environments. Indeed, Dell EMC states that, “Built-in machine learning is the only cost-effective way to leverage SCM”.

HPE “Memory-Driven Flash” uses Storage Class Memory as Cache

HPE is one of many vendors taking the caching path to integrating SCM into their products. It recently began shipping Optane-based read caching via 750 GB NVMe SCM Module add-in cards. In testing, HPE 3PAR 20850 arrays equipped with this “HPE Memory-Driven Flash” delivered:

  • Sub-200 microseconds of latency for most IO
  • Nearly 100% of IO in under 300 microseconds
  • 75 GB per second of throughput
  • 4 million IOPS

These results were achieved with standard 12 Gb SAS SSDs providing the bulk of the storage capacity. HPE Memory-Driven Flash is currently shipping for HPE 3PAR Storage, with availability on HPE Nimble Storage coming later in 2019.

An advantage of the caching approach is that even a relatively small amount of SCM can enable a storage system to deliver SCM performance by dynamically caching hot data, even when most of the data resides on much slower and less expensive media. As with tiering, the intelligence of the algorithms is key to delivering great results in production environments.

The performance HPE is achieving with SCM is good news for other arrays based on caching-oriented storage operating systems. In particular, ZFS-based products, such as those offered by Tegile, iXsystems and OpenDrives, should see substantial performance gains when they switch to using SCM for the L2ARC read cache.

What is Best – Tier or Cache?

I favor the caching approach. Caching is more dynamic than tiering, responding to workloads immediately rather than waiting for a tiering algorithm to move active data to the fastest tier on some scheduled basis. A tiering-based system may completely miss out on the opportunity to accelerate some workloads. I also favor caching because I believe it will bring the benefits of SCM within reach of more organizations.
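
The difference is easy to see in a toy model. In the sketch below, the cache admits a block to SCM on its very next access, while the tiering engine only promotes blocks when its scheduled rebalance runs. Neither class reflects any vendor's actual algorithm; both are illustrative assumptions.

```python
from collections import OrderedDict, Counter

class ScmReadCache:
    """Toy LRU read cache: a hot block lands in SCM on the very next access."""
    def __init__(self, capacity):
        self.capacity, self.blocks = capacity, OrderedDict()

    def read(self, block):
        if block in self.blocks:
            self.blocks.move_to_end(block)
            return "scm"                      # cache hit at SCM latency
        self.blocks[block] = True             # admit immediately
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)   # evict the least-recently used
        return "flash"                        # first access served from flash

class TieringEngine:
    """Toy scheduled tiering: promotions happen only when rebalance() runs."""
    def __init__(self, capacity):
        self.capacity, self.heat, self.scm = capacity, Counter(), set()

    def read(self, block):
        self.heat[block] += 1
        return "scm" if block in self.scm else "flash"

    def rebalance(self):                      # e.g. run hourly or nightly
        hottest = [b for b, _ in self.heat.most_common(self.capacity)]
        self.scm = set(hottest)

cache, tiers = ScmReadCache(capacity=2), TieringEngine(capacity=2)
print(cache.read("A"), cache.read("A"))   # flash, then scm on the next access
print(tiers.read("A"), tiers.read("A"))   # flash, flash (until rebalance runs)
```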

Whether using SCM as a capacity tier or as a cache, the intelligence of the algorithms that automate the placement of data is critical. Many storage vendors talk about using artificial intelligence and machine learning (AI/ML) in their storage systems. SCM provides a new, large, persistent, low-latency class of storage for AI/ML to work with in order to deliver more performance in less space and at a lower cost per unit of performance.

The right way to integrate NVMe and SCM into enterprise storage is to do so–as a tier, as a cache, or as both tier and cache–and then use automated, intelligent algorithms to make the most of the storage class memory that is available.

Prospective enterprise storage array purchasers should take a close look at how the systems use (or plan to use) storage class memory and how they use AI/ML to inform caching and/or storage tiering decisions to deliver cost-effective performance.

 

This is the first in a series of articles about Persistent Memory and its use in enterprise storage. The second article in the series is NVMe-oF Delivering While Persistent Memory Remains Mostly a Promise.

Revised on 4/5/2019 to add the link to the next article in the series.




Your Data Center is No Place for a Space Odyssey

The first movie I remember seeing in a theater was 2001: A Space Odyssey. If you saw it, I am guessing that you remember it, too. At the core of the story is HAL, a sophisticated computer that controls everything on a space ship en route to Jupiter. The movie is ultimately a story of artificial intelligence gone awry.

When the astronauts realize that HAL has become dangerous due to a malfunction, they decide they need to turn HAL off. I still recall the chill I experienced when one of the astronauts issues the command, “Open the pod bay doors please, HAL.” And HAL responds with, “I’m sorry, Dave. I’m afraid I can’t do that.”

Artificial Intelligence is Real Today, but not Perfect

Today, we are finally experiencing voice interaction with a computer that feels as sophisticated as what that movie depicted more than 50 years ago. But sometimes with unintended or unexpected consequences.

Artificial intelligence (AI) is great, except when it is not. My sister recently purchased a vehicle with collision avoidance technology built in. Surprisingly, it engaged the emergency stop procedure on a rural highway when no traffic was approaching. Fortunately, there was no vehicle following close behind or this safety feature might have actually caused an accident. (The dealer eventually accepted the return of the vehicle.)

Artificial Intelligence in Data Center Infrastructure Products

Artificial intelligence and machine learning technologies are being incorporated into data center infrastructure products. Some of these implementations are delivering measurable value to the customers who use these products. AI/ML enabled capabilities may include:

  • AI/ML enabled by default… Yay!
  • Cloud-based analytics…Yay!
  • Proactive fault remediation… Yay!
  • Recommendations… Yay!
  • Totally autonomous operations… I’m not sure about that.

Examples of Artificial Intelligence and Machine Learning Done Right

  • HPE InfoSight – all the “Yay!” items above. For example, HPE claims that with InfoSight, 86% of problems are predicted and automatically resolved before customers even realize there is an issue.
  • HPE Memory-Driven Flash is now shipping for HPE 3PAR arrays. It is implemented as a 750 GB NVMe Intel Optane SSD add-in card that provides an extremely low-latency read cache. The read cache uses sophisticated caching algorithms to complete nearly all I/O operations in under 300 microseconds. Yet, system administrators can enable this cache per volume, giving humans the opportunity to specify which workloads are of the highest value to the business.
  • Pivot3 Dynamic QoS provides policy-based quality of service management based on the business value of workloads. The system automatically applies a set of default policies, and dynamically enforces those policies. But administrators can change the policies and change which workloads are assigned to each policy on-the-fly.

When evaluating the AI/ML capabilities of data center infrastructure products, enterprises should look for products that enable AI/ML by default, yet which humans can override based on site-specific priorities, preferably on a granular basis.

After all, when a critical line of business application is not getting the priority it deserves, the last thing you want to hear from your infrastructure is, “I’m sorry, Dave. I’m afraid I can’t do that.”

 




Leading Hyperconverged Infrastructure Solutions Diverge Over QoS

Hyperconvergence is Reshaping the Enterprise Data Center

Virtualization has largely shaped the enterprise data center landscape over the past ten years. Hyperconverged infrastructure (HCI) is beginning to have the same type of impact, reshaping the enterprise data center to fully capitalize on the benefits that a virtualized infrastructure affords.

Hyperconverged Infrastructure Defined

DCIG defines a hyperconverged infrastructure (HCI) as a solution that pre-integrates virtualized compute, storage and data protection functions along with a hypervisor and scale-out cluster management software. HCI vendors may offer their solutions as turnkey appliances, installable software or as an instance running on public cloud infrastructure. The most common physical instantiation of—and unit of scaling for—hyperconverged infrastructure is a 1U or 2U rack-mountable appliance containing 1–4 cluster nodes.

HCI Adoption Exceeding Analyst Forecasts

Hyperconverged Infrastructure (HCI)–and the software-defined storage (SDS) technology that is a critical component of these solutions–is still in the early stages of adoption. Yet according to IDC data, spending on HCI already exceeds $5 billion annually and is growing at a rate that substantially outpaces many analyst forecasts.

Graph comparing analyst forecasts with actual hyperconverged sales growth

HCI Requirements for Next-Generation Datacenter Adoption

The success of initial HCI deployments in reducing complexity, speeding time to deployment, and lowering costs compared to traditional architectures has opened the door to an expanded role in the enterprise data center. Indeed, HCI is rapidly becoming the core technology of the next-generation enterprise data center. In order to succeed as a core technology, these HCI solutions must meet a new and demanding set of expectations. These expectations include:

  • Simplified management, including at scale
  • Workload consolidation, including mission-critical workloads

The Role of Quality of Service in Simplifying Management and Consolidating Workloads

Three performance elements that are candidates for quality of service (QoS) management are latency, IOPS, and throughput. Some HCI solutions address all three elements, while others manage just a single element.

HCI solutions also take varied approaches to managing QoS in terms of fixed assignments versus relative priority. The fixed assignment approach involves assigning minimum, maximum and/or target values per volume. The relative priority approach involves assigning each volume to a priority group–like Gold, Silver or Bronze.
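
The two approaches can be pictured as two different per-volume policy shapes, sketched below. The class names, level names and values are invented for illustration; each HCI platform defines its own policy model.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FixedAssignment:
    """Explicit per-volume limits/targets (the 'fixed assignment' approach)."""
    min_iops: Optional[int] = None
    max_iops: Optional[int] = None
    target_latency_ms: Optional[float] = None

@dataclass
class RelativePriority:
    """Membership in a priority group (the 'relative priority' approach)."""
    group: str   # e.g. "gold", "silver", "bronze"
    weight: int  # share of resources when contention occurs

# Hypothetical volume-to-policy assignments.
volumes = {
    "erp-db":      RelativePriority(group="gold", weight=100),
    "file-share":  RelativePriority(group="bronze", weight=10),
    "msp-tenant1": FixedAssignment(max_iops=20_000),  # hard cap, MSP-style
}
```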

Superior QoS technology creates business value by driving down operating expenses (OPEX). It dramatically reduces the amount of time IT staff must spend troubleshooting service level agreement (SLA) related problems.

Superior QoS also creates business value by driving down capital expenses (CAPEX). It enables more workloads to be confidently consolidated onto less hardware. The more intelligent it is, the less over-provisioning (and over-purchasing) of hardware will be required.

Finally, QoS can be applied to workload performance alone or to performance and data protection to meet service level agreements in both domains.

How Some Popular Hyperconverged Infrastructure Solutions Diverge Over QoS

DCIG is in the process of updating its research on hyperconverged infrastructure solutions. In the process we have observed that these solutions take very divergent approaches to quality of service.

Cisco HyperFlex offers QoS on the NIC, which is useful for converged networking, but does not offer storage QoS that addresses application priority within the solution itself.

Dell EMC VxRail QoS is very basic. Administrators can assign fixed IOPS limits per volume. Workloads using those volumes get throttled even when there is no resource contention, yet still compete for IOPS with more important workloads. This approach to QoS does protect a cluster from a rogue application consuming too many resources, but is probably a better fit for managed service providers than for most enterprises.

Nutanix “Autonomic QoS” automatically prioritizes user applications over back end operations whenever contention occurs. Nutanix AI/ML technology understands common workloads and prioritizes different kinds of IO from a given application accordingly. This approach offers great appeal because it is fully automatic. However, it is global and not user configurable.

Pivot3 offers intelligent policy-based QoS. Administrators assign one of five QoS policies to each volume when it is created. In addition to establishing priority, each policy assigns targets for latency, IOPS and throughput. Pivot3’s Intelligence Engine then prioritizes workloads in real-time based on those policies. The administrator assigning the QoS policy to the volume must know the relative importance of the associated workload; but once the policy has been assigned, performance management is “set it and forget it”. Pivot3 QoS offers other advanced capabilities including applying QoS to data protection and the ability to change QoS settings on-the-fly or on a scheduled basis.

QoS Ideal = Automatic, Intelligent and Configurable

The ideal quality of service technology would be automatic and intelligent, yet configurable. Though none of these hyperconverged solutions may fully realize that ideal, Nutanix and Pivot3 both bring significant elements of this ideal to market as part of their hyperconverged infrastructure solutions.

Enterprises considering HCI as a replacement for existing core data center infrastructure should give special attention to how the solution implements quality of service technology. Superior QoS technology will reduce OPEX by simplifying management and reduce CAPEX by consolidating many workloads onto the solution.




Three Hallmarks of an Effective Competitive Intelligence System

Across more than twenty years as an IT Director, I had many salespeople incorrectly tell me that their product was the only one that offered a particular benefit. Did their false claims harm their credibility? Absolutely. Were they trying to deceive me? Possibly. But it is far more likely that they sincerely believed their claims.

What they lacked was not truthfulness but accuracy. They lacked accurate, up-to-date information about the current capabilities of competing products in the marketplace. Their competitive intelligence system had failed them.

When DCIG was recruiting me to become an analyst, I asked DCIG’s founder, Jerome Wendt, about the most surprising things he had learned since founding DCIG. One of the three things he mentioned was the degree to which vendors lack knowledge of the product features and capabilities of their key competitors.

Reasons Vendors Lack Good Competitive Intelligence

There are many reasons why vendors lack good competitive intelligence. These include:

  • They are focused on delivering and enhancing their own product to meet the perceived needs of current and prospective customers.
  • Collecting and maintaining accurate data about even key competitors’ products can be time consuming and challenging.
  • Staff transitions may result in a loss of data continuity.

Benefits of an Effective Competitive Intelligence System

An effective competitive intelligence system increases sales by enabling partners and sales personnel to quickly grasp key product differentiators and how those translate into business benefits. Thus, it enhances the onboarding of new personnel and their opportunity for success.

Three Hallmarks of an Effective Competitive Intelligence System

The hallmarks of an effective competitive intelligence system center around three themes: data, insight and communication.

Regarding Data, the system must:

  • Capture current, accurate data about key competitor products
  • Provide data continuity across staff transitions
  • Provide analyses that surface commonalities and differences between products

Regarding Insight, the system must:

  • Clearly identify product differentiators
  • Clearly articulate the business benefits of those differentiators

Regarding Communication, the system must:

  • Provide concise content that enables partners and sales personnel to quickly grasp key product differentiators and how those translate into business benefits for CxOs and line of business executives
  • Bridge the gap between sales and marketing with messages that are tailored to be consistent with product branding
  • Provide the content at the right time and in the right format

Whatever combination of software, services and competitive intelligence personnel a company employs, an effective competitive intelligence system is an important asset for any company seeking to thrive in a competitive marketplace.

DCIG’s Competitive Intelligence Track Record

DCIG Buyer’s Guides

Since 2010, DCIG Buyer’s Guides have provided hundreds of thousands of readers with an independent look at the many products in each market DCIG covers. Each Buyer’s Guide gives decision makers insight into the features that merit particular attention, what is available now, and key directions in the marketplace. DCIG produces Buyer’s Guides based on our larger bodies of research in data protection, enterprise storage and converged infrastructure.

DCIG Pocket Analyst Reports

DCIG leverages much of the Buyer’s Guide research methodology–and the competitive intelligence platform that supports that research–to create focused reports that highlight the differentiators between two products that frequently make it onto the same short lists.

Our Pocket Analyst Reports are published and made available for sale on a third-party website to substantiate the independence of each report. Vendors can license these reports for lead generation, internal sales training, and use with prospective clients.

DCIG Competitive Intelligence Reports

DCIG also uses its Competitive Intelligence Platform to produce reports for internal use by our clients. These concise reports enable partners and sales personnel to quickly grasp key product differentiators and how those translate into business benefits that make sense to CxOs and line of business executives. Because these reports are for internal use, the client can have substantial input into the messaging.

DCIG Battle Cards

Each DCIG Battle Card is a succinct 2-page document that compares the client’s product or product family to one other product or product family. The client and DCIG collaborate to identify the key product features to compare, the key strengths that the client’s product offers over the competing product, and the appropriate messaging to include on the battle card. Content may be contributed by the client for inclusion on the battle card. The battle card is only for the internal use of the client and its partners and may not be distributed.

DCIG Competitive Intelligence Platform

The DCIG Competitive Intelligence (CI) Platform is a multi-tenant, platform-as-a-service (PaaS) offering backed by support from DCIG analysts. The DCIG Competitive Intelligence Platform offers the flexibility to centrally store data and compare features on competitive products. Licensees receive the ability to centralize competitive intelligence data in the cloud with the data made available internally to their employees and partners via reports prepared by DCIG analysts.

The DCIG Competitive Intelligence Platform and associated analyst services strengthen the competitive intelligence capabilities of our clients, sometimes in unexpected ways:

  • Major opportunity against a competitor never faced before
  • Strategic supplier negotiation and positioning of competitor

In each case, DCIG analysis identified differentiators and third-party insights that helped close the deal.




TrueNAS M-Series Turns Tech Buzz into Music

NVMe and other advances in non-volatile memory technology are generating a lot of buzz in the enterprise technology industry, and rightly so. As providers integrate these technologies into storage systems, they are closing the gap between the dramatic advances in processing power and the performance of the storage systems that support them. The TrueNAS M-Series from iXsystems provides an excellent example of what can be achieved when these technologies are thoughtfully integrated into a storage system.

DCIG Quick Look

In the process of refreshing its research on enterprise midrange arrays, DCIG discovered that the iXsystems TrueNAS M-Series all-flash and hybrid storage arrays leverage many of the latest technologies, including:

  • Intel® Xeon® Scalable Family Processors
  • Large DRAM caches
  • NVDIMMs
  • NVMe SSDs
  • Flash memory
  • High-capacity hard disk drives

The TrueNAS M-Series lineup comprises two models: the M40 and the M50. The M40 offers a lower entry cost, scales to 2 PB, and includes 40 GbE connectivity with SAS SSD caching. The M50 scales to 10 PB and adds 100 GbE connectivity with NVMe-based caching.

Both models come standard with redundant storage controllers for high availability and 24×7 service, though single-controller configurations are available for less critical applications.

Advanced Technologies in Perfect Harmony

DCIG analysts are impressed with the way iXsystems engineers have orchestrated the latest technologies in the M50 storage array to achieve cost-efficient, end-to-end performance.

The M50 marries 40 Intel® Xeon® Scalable Family Processor cores with up to 3 TB of DRAM, a 32 GB NVDIMM write cache and 15.2 TB of NVMe SSD read-cache in front of up to 10 PB of hard disk storage. (The M-Series can also be configured as an all-flash array.) Moreover, iXsystems attaches each storage expansion shelf directly to each controller via 12 Gb SAS ports. This approach adds back end throughput to the storage system as each shelf is added.

image of TrueNAS M50 array rear view
iXsystems TrueNAS M50

This well-balanced approach carries through to front-end connectivity. The M50 supports the latest advances in high-speed networking, including up to 4 ports of 40/100 Gb Ethernet and 16/32 Gb Fibre Channel connectivity per controller.

TrueNAS is Enterprise Open Source

TrueNAS is built on BSD and ZFS Open Source technology. iXsystems is uniquely positioned to support the full Open Source stack behind TrueNAS. It has developers and expertise in the operating system, file systems and NAS software.

iXsystems also stewards the popular (>10 million downloads) FreeNAS software-defined storage platform. Among other things, FreeNAS functions as the experimental feature and QA testbed for TrueNAS. TrueNAS can even replicate data to and from FreeNAS. Thus, TrueNAS owners benefit from the huge ZFS and FreeNAS Open Source ecosystems.

NVM Advances are in Tune with the TrueNAS Architecture

The recent advances in non-volatile memory are a perfect fit with the TrueNAS architecture.

Geeking out just a bit…

diagram of TrueNAS M50 cache

ZFS uses DRAM as a read cache to accelerate read operations. This primary read cache is called the ARC. ZFS also supports a secondary read cache called L2ARC. The M50 can use much of the 1.5 TB of DRAM in each storage controller for the ARC, and combine it with up to 15.2 TB of NVMe-based L2ARC to provide a huge low-latency read cache that offers up to 8 GB/s throughput.

The ZFS Intent Log (ZIL) is where all data to be written is initially stored. These writes are later flushed to disk. The M50 uses NVDIMMs for the ZIL write cache. The NVDIMMs safely provide near-DRAM-speed write caching. This enables the array to quickly acknowledge writes on the front end while efficiently coalescing many random writes into sequential disk operations on the back end.
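
The sketch below is a simplified, conceptual model of the read path just described, written in Python purely for illustration. It is not ZFS code, and the class and method names are invented; it simply shows why pairing a DRAM cache with a large NVMe cache keeps most reads off the hard disks.

    # Conceptual model of the tiered read path: DRAM-based ARC first,
    # NVMe-based L2ARC next, and the disk pool only on a miss in both caches.
    class TieredReadCache:
        def __init__(self, pool):
            self.arc = {}      # primary read cache in DRAM (ARC)
            self.l2arc = {}    # secondary read cache on NVMe SSD (L2ARC)
            self.pool = pool   # backing disk pool, e.g. {block_id: data}

        def read(self, block_id):
            if block_id in self.arc:          # fastest path: DRAM hit
                return self.arc[block_id]
            if block_id in self.l2arc:        # next: NVMe SSD hit
                data = self.l2arc[block_id]
                self.arc[block_id] = data     # promote back into DRAM
                return data
            data = self.pool[block_id]        # slowest path: read from disk
            self.arc[block_id] = data         # cache it for subsequent reads
            return data

The larger the combined ARC and L2ARC, the more reads are satisfied before they ever reach the hard disks, which is exactly the effect the M50's DRAM-plus-NVMe design is aiming for.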

Broad Protocol Support Enables Many Uses

TrueNAS supports AFP, SMB, NFS, iSCSI and FC protocols plus S3-compliant object storage. It also offers Asigra backup as an integrated service that runs natively on the array. This broad protocol support enables the M50 to cost-effectively provide high performance storage for:

  • File sharing
  • Virtual machine storage
  • Cloud-native apps
  • Backup target

All-inclusive Licensing Adds Value

TrueNAS software licensing is all-inclusive, with unlimited snapshots, clones, and replication. Thus, there are no add-on license fees to negotiate and no additional POs to wait for. This reduces costs, promotes full utilization of the extensive capabilities of the TrueNAS M-Series, and increases business agility.

TrueNAS M50 Turns Tech Buzz into Music

The TrueNAS M50 integrates multiple buzz-worthy technologies to deliver large amounts of low-latency storage. The M50 accelerates a broad range of workloads–safely and economically. Speaking of economics, according to the iXsystems web site, TrueNAS storage can be expanded for less than $100/TB. That should be music to the ears of business people everywhere.




HCI Comparison Report Reveals Key Differentiators Between Dell EMC VxRail and Nutanix NX

Many organizations view hyper-converged infrastructure (HCI) as the data center architecture of the future. Dell EMC VxRail and Nutanix NX appliances are two leading options for creating the enterprise hybrid cloud. Visibility into their respective data protection ecosystems, enterprise application certifications, solution integration, support for multiple hypervisors, scalability and maturity should help organizations choose the most appropriate solution for them.

HCI Appliances Deliver Radical Simplicity

Hyper-converged infrastructure appliances radically simplify the data center architecture. These pre-integrated appliances accelerate and simplify infrastructure deployment and management. They combine and virtualize compute, memory, storage and networking functions from a single vendor in a scale-out cluster. Thus, the stakes are high for vendors such as Dell EMC and Nutanix as they compete to own this critical piece of data center real estate.

In the last several years, HCI has also emerged as a key enabler for cloud adoption. These solutions provide connectivity to public and private clouds, and offer their own cloud-like properties. Ease of scaling, simplicity of management, plus non-disruptive hardware upgrades and data migrations are among the features that enterprises love about these solutions.

HCI Appliances Are Not All Created Equal

Many enterprises are considering HCI solutions from providers Dell EMC and Nutanix. A cursory examination of these two vendors and their solutions quickly reveals similarities between them. For example, both companies control the entire hardware and software stacks of their HCI appliances. Also, both providers pretest firmware and software updates and automate cluster-wide roll-outs.

Nevertheless, important differences remain between the products. Due to the high level of interest in these products, DCIG published an initial comparison in November 2017. Both providers recently enhanced their offerings. Therefore, DCIG refreshed its research and has released an updated head-to-head comparison of the Dell EMC VxRail and Nutanix NX appliances.

blurred image of first page of HCI comparison report

Updated DCIG Pocket Analyst Report Reveals Key HCI Differentiators

In this updated report, DCIG identifies six ways the HCI solutions from these two providers currently differentiate themselves from one another. This succinct, 4-page report includes a detailed feature matrix as well as insight into key differentiators between these two HCI solutions such as:

  • Breadth of ecosystem
  • Enterprise applications certified
  • Multi-hypervisor flexibility
  • Scalability
  • Solution integration
  • Vendor maturity

DCIG is pleased to make this updated DCIG Pocket Analyst Report available for purchase for $99.95 via the TechTrove marketplace. The report is temporarily also available free of charge with registration from the Unitrends website.




VMware vSphere and Nutanix AHV Hypervisors: An Updated Head-to-Head Comparison

Many organizations view hyper-converged infrastructure appliances (HCIAs) as foundational for the cloud data center architecture of the future. However, as part of an HCIA solution, one must also select a hypervisor to run on this platform. The VMware vSphere and Nutanix AHV hypervisors are two capable choices but key differences exist between them.

In the last several years, HCIAs have emerged as a key enabler for cloud adoption. Aside from the connectivity to public and private clouds that these solutions often provide, they offer their own cloud-like properties. Ease of scaling, simplicity of management, and non-disruptive hardware upgrades and data migrations highlight the list of features that enterprises are coming to know and love about these solutions.

But as enterprises adopt HCIA solutions in general as well as HCIA solutions from providers like Nutanix, they must still evaluate key features in these solutions. One variable that enterprises should pay specific attention to is the hypervisors available to run on these HCIA solutions.

Unlike some other HCIA solutions, Nutanix gives organizations the flexibility to choose which hypervisor they want to run on their HCIA platform. They can run the widely adopted VMware vSphere, or they can run Nutanix’s own Acropolis hypervisor (AHV).

What is not always so clear is which one they should host on the Nutanix platform. Each hypervisor has its own set of benefits and drawbacks. To help organizations make a more informed choice as to which hypervisor is the best one for their environment, DCIG is pleased to make available its updated DCIG Pocket Analyst Report that provides a head-to-head comparison of the VMware vSphere and Nutanix AHV hypervisors.

blurred image of first page of report

This succinct, 4-page report includes a detailed product matrix as well as insight into seven key differentiators between these two hypervisors and which one is best positioned to deliver on key cloud and data center considerations such as:

  • Data protection ecosystem
  • Support for Guest OSes
  • Support for VDI platforms
  • Certified enterprise applications
  • Fit with corporate direction
  • More favorable licensing model
  • Simpler management

This DCIG Pocket Analyst Report is available for purchase for $99.95 via the TechTrove marketplace. The report is temporarily also available free of charge with registration from the Unitrends website.




Dell EMC VxRail vs Nutanix NX: Six Key HCIA Differentiators

Many organizations view hyper-converged infrastructure appliances (HCIAs) as the data center architecture of the future. Dell EMC VxRail and Nutanix NX appliances are two leading options for creating the enterprise hybrid cloud. Visibility into their respective data protection ecosystems, enterprise application certifications, solution integration, support for multiple hypervisors, scalability and maturity should help organizations choose the most appropriate solution for them.

Hyper-converged infrastructure appliances (HCIA) radically simplify the next generation of data center architectures. Combining and virtualizing compute, memory, storage, networking, and data protection functions from a single vendor in a scale-out cluster, these pre-integrated appliances accelerate and simplify infrastructure deployment and management. As such, the stakes are high for vendors such as Dell EMC and Nutanix that are competing to own this critical piece of data center infrastructure real estate.

In the last several years, HCIAs have emerged as a key enabler for cloud adoption. These solutions provide connectivity to public and private clouds, and offer their own cloud-like properties. Ease of scaling, simplicity of management, and non-disruptive hardware upgrades and data migrations highlight the list of features that enterprises are coming to know and love about these solutions.

But as enterprises consider HCIA solutions from providers such as Dell EMC and Nutanix, they must still evaluate key features available on these solutions as well as the providers themselves. A cursory examination of these two vendors and their respective solutions quickly reveals similarities between them. For example, both companies control the entire hardware and software stacks of their respective HCIA solutions. Both pre-test firmware and software updates holistically and automate cluster-wide roll-outs.

Despite these similarities, differences between them remain. To help enterprises select the product that best fits their needs, DCIG published its first comparison of these products in November 2017. There is a high level of interest in these products, and both providers recently enhanced their offerings. Therefore, DCIG refreshed its research and has released an updated head-to-head comparison of the Dell EMC VxRail and Nutanix NX appliances.

blurred image of first page

In this updated report, DCIG identifies six ways the HCIA solutions from these two providers currently differentiate themselves from one another. This succinct, 4-page report includes a detailed product matrix as well as insight into key differentiators between these two HCIA solutions such as:

  • Breadth of ecosystem
  • Enterprise applications certified
  • Multi-hypervisor flexibility
  • Scalability
  • Solution integration
  • Vendor maturity

DCIG is pleased to make this updated DCIG Pocket Analyst Report available for purchase for $99.95 via the TechTrove marketplace.




Storage Analytics and Latency Matters

Some pretty amazing storage performance numbers are being bandied about these days. Generally speaking, these heretofore unheard-of claims of millions of IOPS and latencies measured in microseconds include references to NVMe and perhaps storage class memories. What ultimately matters to a business is the performance of its applications, not just its storage arrays. When an application is performing poorly, identifying the root cause can be a difficult and time-consuming challenge. This is particularly true in virtualized infrastructures. But meaningful help is now available to address this challenge through advances in storage analytics.

Storage Analytics Delivers Quantifiable Value

In a previous blog article about the benefits of Predictive Analytics in Enterprise Storage, I mentioned HPE’s InfoSight predictive analytics and the VMVision cross-stack analytics tool they released in mid-2015. HPE claims its Nimble Storage array customers are seeing the following benefits from InfoSight:

  • 99.9999% of measured availability across its installed base
  • 86% of problems are predicted and automatically resolved before customers even realize there is an issue
  • 85% less time spent managing and resolving storage-related problems
  • 79% savings in operational expense (OpEx)
  • 54% of the issues pinpointed are not storage-related; these are identified through InfoSight VMVision cross-stack analytics
  • 42 minutes: the average level three engineer time required to resolve an issue
  • 100% of issues go directly to level three support engineers, with no time wasted working through level one and level two engineers

Pure Storage also offers predictive analytics, called Pure1 Meta. On September 20, 2018, Pure Storage released an extension of the Pure1 Meta platform called VM Analytics. Even in this first release, VM Analytics is clearly going to simplify and accelerate the process of resolving performance problems for Pure Storage FlashArray customers.

Application Latency is a Systemic Issue

The online demonstration of VM Analytics quickly impressed me with the fact that application latency is a systemic issue, not just a storage performance issue. The partial screen shot from the Pure1 VM Analytics tool included below shows a virtual machine delivering an average latency of 7.4 milliseconds. This view into performance provided by VM Analytics enables IT staff to quickly zero in on the VM itself as the place to focus in resolving the performance issue.

screen shot of vm analytics

This view also shows that the datastore is responsible for less than 1 millisecond of that 7.4 milliseconds of latency. My point is that application latency depends on factors beyond the storage system. It must be addressed as a systemic issue.
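
A quick bit of arithmetic makes the point. The VM latency figure comes from the screenshot described above; the datastore contribution is an assumed value consistent with the "less than 1 millisecond" reading.

    # Illustrative only: the datastore value below is assumed (< 1 ms).
    vm_latency_ms = 7.4         # average latency observed at the virtual machine
    datastore_latency_ms = 0.9  # assumed storage contribution

    non_storage_ms = vm_latency_ms - datastore_latency_ms
    share = non_storage_ms / vm_latency_ms
    print(f"{non_storage_ms:.1f} ms ({share:.0%}) of the latency arises outside storage")
    # Prints: 6.5 ms (88%) of the latency arises outside storage

In other words, under this reading roughly nine-tenths of the observed latency originates somewhere other than the storage system, which is why the problem must be diagnosed across the whole stack.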

Storage Analytics Simplify the Data Center Balancing Act

The key performance resources in a data center include CPU cycles, DRAM, storage systems and the network. Unless a system is dramatically over-provisioned, one of these resources will always constrain the performance of applications. Storage has historically been the limiting factor in application performance but the flash-enabled transformation of the data center has changed that dynamic.

Tools like VMVision and VM Analytics create value by giving data center administrators new levels of visibility into infrastructure performance. Therefore, technology purchasers should carefully evaluate these storage analytics tools as part of the purchase process. IT staff should use these tools to balance the key performance resources in the data center and deliver the best possible application performance to the business.




Six Key Differentiators between HPE 3PAR StoreServ and NetApp AFF A-Series All-flash Arrays

Both HPE and NetApp have multiple enterprise storage product lines. Each company also has a flagship product. For HPE it is the 3PAR StoreServ line. For NetApp it is the AFF (all flash FAS) A-Series. DCIG’s latest Pocket Analyst Report examines these flagship all-flash arrays. The report identifies many similarities between the products, including the ability to deliver low latency storage with high levels of availability, and a relatively full set of data management features.

DCIG’s Pocket Analyst Report also identifies six significant differences between the products. These differences include how each product provides deduplication and other data services, hybrid cloud integration, host-to-storage connectivity, scalability, and simplified management through predictive analytics and bundled or all-inclusive software licensing.

DCIG recently updated its research on the dynamic and growing all-flash array marketplace. In so doing, DCIG identified many similarities between the HPE 3PAR StoreServ and NetApp AFF A-Series products including:

  • Unified SAN and NAS protocol support
  • Extensive support for VMware APIs, including VMware Virtual Volumes (VVols)
  • Integration with popular virtualization management consoles
  • Rich data replication and data protection offerings

DCIG also identified significant differences between the HPE and NetApp products including:

  • Hardware-accelerated Inline Data Services
  • Predictive analytics
  • Hybrid Cloud Support
  • Host-to-Storage Connectivity
  • Scalability
  • Licensing simplicity

Blurred image of pocket analyst report first page

DCIG’s 4-page Pocket Analyst Report on the Six Key Differentiators between HPE 3PAR StoreServ and NetApp AFF A-Series All-flash Arrays analyzes and compares the flagship all-flash arrays from HPE and NetApp. To see which product has the edge in each of the above categories and why, you can purchase the report on DCIG’s partner site: TechTrove. You may also register on the TechTrove website to be notified should this report become available for no charge at some future time.




Data Center Challenges and Technology Advances Revealed at Flash Memory Summit 2018

If you want to get waist-deep in the technologies that will impact the data centers of tomorrow, the Flash Memory Summit 2018 (FMS) held this week in Santa Clara is the place to do it. This is where the flash industry gets its geek on and everyone on the exhibit floor speaks bits and bytes. However, there is no better place to learn about advances in flash memory that are sure to show up in products in the very near future and drive further advances in data center infrastructure.

Flash Memory Summit logo

Key themes at the conference include:

  • Processing and storing ever growing amounts of data is becoming more and more challenging. Faster connections and higher capacity drives are coming but are not the whole answer. We need to completely rethink data center architecture to meet these challenges.
  • Artificial intelligence and machine learning are expanding beyond their traditional high-performance computing research environments and into the enterprise.
  • Processing must be moved closer to—and perhaps even into—storage.

Multiple approaches to addressing these challenges, ranging from composable infrastructure to computational storage, were championed at the conference. Some of these solutions will complement one another. Others will compete with one another for mindshare.

NVMe and NVMe-oF Get Real

For the near term, NVMe and NVMe over Fabrics (NVMe-oF) are clear mindshare winners. NVMe is rapidly becoming the primary protocol for connecting controllers to storage devices. A clear majority of product announcements involved NVMe.

WD Brings HDDs into the NVMe World

Western Digital announced the OpenFlex™ architecture and storage products. OpenFlex speaks NVMe-oF across an Ethernet network. In concert with OpenFlex, WD announced Kingfish™, an open API for orchestrating and managing OpenFlex™ infrastructures.

Western Digital is the “anchor tenant” for OpenFlex with a total of seventeen (17) data center hardware and software vendors listed as launch partners in the press release. Notably, WD’s NVMe-oF attached 1U storage device provides up to 168TB of HDD capacity. That’s right – the OpenFlex D3000 is filled with hard disk drives.

While NVMe’s primary use case is to connect to flash memory and emerging ultra-fast memory, companies still want their HDDs. With OpenFlex, organizations can have their NVMe and still get the lower-cost HDDs they want.

Gen-Z Composable Infrastructure Standard Gains Momentum

Gen-Z is an emerging memory-centric architecture designed for nanosecond latencies. Since last year’s FMS, Gen-Z has made significant progress toward this objective. Consider:

  • The consortium publicly released the Gen-Z Core Specification 1.0 on February 13, 2018. Agreement upon a set of 1.0 standards is a critical milestone in the adoption of any new standard. The fact that the consortium’s 54 members agreed to it suggests broad industry adoption.
  • Intel’s adoption of the SFF-TA-1002 “Gen-Z” universal connector for its “Ruler” SSDs reflects increased adoption of the Gen-Z standards. Making this announcement notable is that Intel is NOT currently a member of the Gen-Z consortium, which indicates that Gen-Z standards are gaining momentum even outside of the consortium.
  • The Gen-Z booth included a working Gen-Z connection between a server and a removable DRAM module in another enclosure. This is the first example of a processor being able to use DRAM that is not local to the processor but is instead coming out of a composable pool. This is a concept similar to how companies access shared storage in today’s NAS and SAN environments.

Other Notable FMS Announcements

Many other innovative solutions to data center challenges were also announced at FMS 2018, including:

  • Solarflare NVMe over TCP enables rapid low-latency data movement over standard Ethernet networks.
  • ScaleFlux computational storage avoids the need to move the data by integrating FPGA-based computation into storage devices.
  • Intel announced its 660P Series of SSDs, which employ quad-level cell (QLC) technology. QLC stores more data in less space and at a lower cost.

Recommendations

Based on the impressive progress we observed at Flash Memory Summit 2018, we can reaffirm the recommendations we made coming out of last year’s summit…

  • Enterprise technologists should plan technology refreshes through 2020 around NVMe and NVMe-oF. Data center architects and application owners should seek 10:1 improvements in performance, and a similar jump in data center efficiency.
  • Beyond 2020, enterprise technologists should plan their technology refreshes around a composable data centric architecture. Data center architects should track the development of the Gen-Z ecosystem as a possible foundation for their next-generation data centers.



Seven Key Differentiators between Dell EMC VMAX and HPE 3PAR StoreServ Systems

Dell EMC and Hewlett Packard Enterprise are enterprise technology stalwarts. Thousands of enterprises rely on VMAX or 3PAR StoreServ arrays to support their most critical applications and to store their most valuable data. Although VMAX and 3PAR predated the rise of the all-flash array (AFA), both vendors have adapted and optimized these products for flash memory. They deliver all-flash array performance without sacrificing the data services enterprises rely on to integrate with a wide variety of enterprise applications and operating environments.

While VMAX and 3PAR StoreServ can both support hybrid flash plus hard disk configurations, the focus is now on all-flash. Nevertheless, the ability of these products to support multiple tiers of storage will be advantageous in a future that includes multiple classes of flash memory along with other non-volatile storage class memory technologies.

Both the VMAX and 3PAR StoreServ can meet the storage requirements of most enterprises, yet differences remain. DCIG compares the current AFA configurations from Dell EMC and HPE in its latest DCIG Pocket Analyst Report. This report will help enterprises determine which product best fits their business requirements.

DCIG updated its research on all-flash arrays during the first half of 2018. In so doing, DCIG identified many similarities between the VMAX and 3PAR StoreServ products including:

  • Extensive support for VMware APIs, including VMware Virtual Volumes (VVols)
  • Integration with multiple virtualization management consoles
  • Rich data replication and data protection offerings
  • Support for a wide variety of client operating systems
  • Unified SAN and NAS

DCIG also identified significant differences between the VMAX and 3PAR StoreServ products including:

  • Data center footprint
  • High-end history
  • Licensing simplicity
  • Mainframe connectivity
  • Performance resources
  • Predictive analytics
  • Raw and effective storage density

blurred image of the front page of the report

To see which vendor has the edge in each of these categories and why, you can access the latest 4-page Pocket Analyst Report from DCIG that analyzes and compares all-flash arrays from Dell EMC and HPE. This report is currently available for purchase on DCIG’s partner site: TechTrove. You may also register on the TechTrove website to be notified should this report become available at no charge at some future time.
