Hyper-converged infrastructure (HCI) appliances radically simplify the data center architecture. These pre-integrated appliances accelerate and simplify infrastructure deployment and management. They combine and virtualize compute, memory, storage and networking functions from a single vendor in a scale-out cluster. As such, the stakes are high for vendors such as Dell EMC and Nutanix that are competing to own this critical piece of data center real estate.
Many organizations view hyper-converged infrastructure appliances (HCIAs) as foundational for the cloud data center architecture of the future. However, as part of an HCIA solution, one must also select a hypervisor to run on this platform. The VMware vSphere and Nutanix AHV hypervisors are two capable choices but key differences exist between them.
The shift is on toward using cloud service providers for a growing number of production IT functions, with backup and DR often at the top of the list of tasks that companies first want to deploy in the cloud. But as IT staff seek to check the box confirming they comply with corporate directives to have a cloud solution in place for backup and DR, they also need to check the "simplicity," "cost savings," and "it works" boxes at the same time.
Some pretty amazing storage performance numbers are being bandied about these days. Generally speaking, these heretofore-unheard-of claims of millions of IOPS and latencies measured in microseconds reference NVMe and perhaps storage class memory. But what ultimately matters to a business is the performance of its applications, not just its storage arrays. When an application is performing poorly, identifying the root cause can be a difficult and time-consuming challenge, particularly in virtualized infrastructures. Meaningful help is now available to address this challenge through advances in storage analytics.
VMworld provides insight into some of the biggest tech trends occurring in the enterprise data center space and, once again, this year did not disappoint. But amid the dozens of vendors showing off their wares on the show floor, here are the four stories and products that caught my attention and earned my "VMworld 2018 Best of Show" recognition at this year's event.
DCIG’s latest Pocket Analyst Report examines the flagship all-flash arrays from HPE and NetApp. The report identifies many similarities between the HPE 3PAR StoreServ and NetApp AFF A-Series products, including the ability to deliver low latency storage with high levels of availability, and a relatively full set of data management features. DCIG’s Pocket Analyst Report also identifies six significant differences between the products. These differences include how each product provides deduplication and other data services, hybrid cloud integration, host-to-storage connectivity, scalability, and simplified management through predictive analytics and bundled or all-inclusive software licensing.
If you want to get waist-deep in the technologies that will impact the data centers of tomorrow, the Flash Memory Summit 2018 (FMS) held this week in Santa Clara is the place to do it. This is where the flash industry gets its geek on, and everyone on the exhibit floor speaks bits and bytes. Indeed, there is no better place to learn about advances in flash memory that are sure to show up in products in the very near future and drive further advances in data center infrastructure.
When organizations deploy HCI at scale (more than eight nodes in a single logical configuration), cracks in its architecture can begin to appear unless organizations carefully examine how well it scales. On the surface, high-end and standard hyper-converged architectures deliver the same benefits. They each virtualize compute, memory, storage, and networking in a simple-to-deploy-and-manage scale-out architecture. They support standard hypervisor platforms. They provide their own data protection solutions in the form of snapshots and replication. In short, these two architectures mirror each other in many ways. However, high-end and standard HCI solutions differ in functionality in ways that primarily surface in large virtualized deployments.
Dell EMC VMAX and HPE 3PAR StoreServ arrays can meet the storage requirements of most enterprises, yet differences remain. DCIG compares the current AFA configurations from Dell EMC and HPE in its latest DCIG Pocket Analyst Report. This report will help enterprises determine which product best fits their business requirements. Features such as data center footprint, licensing simplicity, mainframe connectivity, performance resources, predictive analytics, raw storage density, and effective storage density are key areas where these two products differentiate themselves.
Both Hitachi Vantara and NetApp refreshed their respective F-Series and A-Series lines of all-flash arrays (AFAs) in the first half of 2018. While some of these changes reinforced the respective strengths of each of their product lines, other changes provided some key insights into how these two vendors see the AFA market shaping up in the years to come. Features such as host-to-storage networking connectivity, predictive analytics, support for public clouds, and data protection and flash performance optimization are key areas where these two products differentiate themselves.
Mainstream enterprise storage vendors are embracing NVMe. HPE, NetApp, Pure Storage, Dell EMC, Kaminario and Tegile all offer all-NVMe arrays. According to these vendors, the products will soon support storage class memory as well. NVMe protocol access to flash memory SSDs is a big deal. Support for storage class memory may become an even bigger deal.
Businesses are finally adopting the public cloud because a large and rapidly growing catalog of services is now available from multiple cloud providers. These two factors have many implications for businesses. This article addresses four of these implications plus several cloud-specific risks.
DCIG is pleased to announce the availability of the DCIG 2018-19 All-Flash Array Buyer's Guide edition developed from its enterprise storage array body of research. This 64-page report presents a fresh snapshot of the dynamic all-flash array (AFA) marketplace. It evaluates and ranks thirty-two (32) enterprise-class all-flash arrays that achieved rankings of Recommended or Excellent based on a comprehensive scoring of product features. These products come from seven (7) vendors including Dell EMC, Hitachi Vantara, HPE, Huawei, NetApp, Pure Storage and Tegile.
Much has changed since DCIG published the DCIG 2017-18 All-Flash Array Buyer’s Guide just one year ago. The DCIG analyst team is in the final stages of preparing a fresh snapshot of the all-flash array (AFA) marketplace. As we reflected on the fresh all-flash array data and compared it to the data we collected just a year ago, we observed seven significant trends in the all-flash array marketplace that will influence buying decisions through 2019.
Enterprise storage startups are pushing the storage industry forward faster and in directions it may never have gone without them. It is because of these startups that flash memory is now the preferred place to store critical enterprise data. Startups also advanced the customer-friendly all-inclusive approach to software licensing, evergreen hardware refreshes, and pay-as-you-grow utility pricing. These startup-inspired changes delight customers, who are rewarding these startups with large follow-on purchases and Net Promoter Scores (NPS) previously unseen in this industry. Yet the greatest contribution startups may make to the enterprise storage industry is applying predictive analytics to storage.
Early in my IT career, a friend who owns a software company told me he had been informed by a peer that he wasn’t charging enough for his software. This peer advised him to adopt a “flinch-based” approach to pricing. He said my friend should start with a base licensing cost that meets margin requirements, and then keep adding on other costs until the prospective customer flinches. My friend found that approach offensive, and so do I.
Many organizations are using all-flash arrays in their data centers today. When asked about the benefits they have achieved, two benefits are almost always top of mind. The first benefit mentioned is the increase in application performance. Indeed, increased performance was the primary rationale for the purchase of the all-flash array. The second benefit came as an unexpected bonus: the decrease in time spent managing storage. As organizations consolidate many applications on each all-flash array, they are discovering that data tiering and quality of service features are important for preserving these benefits.
When one examines enterprise data protection and data storage products through the lens of hyper-converged infrastructure (HCI) designs, one might assume each product either supports an HCI architecture or it does not. But as one begins to scrutinize this topic, it becomes clear that the answer is not a simple "yes" or "no." To assess how well, or whether, a product fits into an HCI design, one first needs to identify the question, or even the series of questions, that he or she should ask to properly make this assessment.
Every organization, consciously or unconsciously, views and evaluates new technologies through a lens. In recent years, organizations have largely evaluated new data center technologies through the lens of virtualization and how easily each technology helped them achieve that end. That viewpoint has begun to change. Having largely virtualized their infrastructure, organizations increasingly view and evaluate new and existing data center technologies through the lens of hyper-converged infrastructure and how well these technologies support and enable the adoption of hyper-converged infrastructure platforms in their environments.