The first movie I remember seeing in a theater was 2001: A Space Odyssey. If you saw it, I am guessing that you remember it, too. At the core of the story is HAL, a sophisticated computer that controls everything on a space ship en route to Jupiter. The movie is ultimately a story of artificial intelligence gone awry.
The cloud has gone mainstream, with more companies than ever looking to host their production applications with general-purpose cloud providers such as the Google Cloud Platform (GCP). As this occurs, companies must identify backup solutions architected for the cloud, ones that capitalize on each provider's native features to best protect the virtual machines (VMs) they host there.
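A backup solution that truly capitalizes on GCP's native features will, for example, drive the platform's own disk snapshot capability rather than reinvent it. As a minimal sketch, assuming the google-cloud-compute Python client library and hypothetical project, zone, and disk names, triggering such a snapshot might look like this:

```python
from google.cloud import compute_v1


def snapshot_disk(project_id: str, zone: str, disk_name: str, snapshot_name: str) -> None:
    """Create a point-in-time snapshot of a GCP persistent disk."""
    disk_client = compute_v1.DisksClient()
    snapshot = compute_v1.Snapshot(name=snapshot_name)
    # create_snapshot returns a long-running operation; result() blocks until it completes.
    operation = disk_client.create_snapshot(
        project=project_id, zone=zone, disk=disk_name, snapshot_resource=snapshot
    )
    operation.result(timeout=300)


# Placeholder names; substitute a real project, zone, and disk.
snapshot_disk("my-project", "us-central1-a", "prod-vm-boot-disk", "prod-vm-backup-001")
```

A cloud-architected backup product would layer scheduling, retention, and cataloging on top of primitives like this rather than streaming full VM images out of the provider's network.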
One would think that with the continuing explosion in the amount of data being created every year, the number of appliances that can reduce the amount of data stored by deduplicating it would be increasing. That statement is both true and flawed. On one hand, the number of backup and storage appliances that can deduplicate data has never been higher and continues to increase. On the other hand, the number of vendors that create physical target-based appliances dedicated to the deduplication of backup data continues to shrink.
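For readers new to the technique itself, deduplication works by splitting an incoming backup stream into chunks, fingerprinting each chunk with a cryptographic hash, and storing only the chunks it has not seen before. The Python sketch below is purely illustrative; real appliances use variable-length chunking and far more robust indexing:

```python
import hashlib

CHUNK_SIZE = 4096  # fixed-size chunks for simplicity


def deduplicate(stream, chunk_store: dict) -> list:
    """Split a byte stream into chunks, store only unseen chunks,
    and return the fingerprint list that reconstructs the stream."""
    recipe = []
    while True:
        chunk = stream.read(CHUNK_SIZE)
        if not chunk:
            break
        fingerprint = hashlib.sha256(chunk).hexdigest()
        # Only new chunks consume storage; duplicates add just an index entry.
        chunk_store.setdefault(fingerprint, chunk)
        recipe.append(fingerprint)
    return recipe


def restore(recipe: list, chunk_store: dict) -> bytes:
    """Reassemble the original stream from its chunk fingerprints."""
    return b"".join(chunk_store[fp] for fp in recipe)
```

Dividing the logical bytes ingested by the bytes actually held in chunk_store yields the deduplication ratio that appliance vendors quote.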
Virtualization has largely shaped the enterprise data center landscape over the past ten years. Hyper-converged infrastructure (HCI) is beginning to have the same type of impact, re-shaping the enterprise data center so that enterprises fully capitalize on the benefits that virtualizing the infrastructure affords them. Enterprises considering HCI as a replacement for existing core data center infrastructure should give special attention to how the solution implements quality of service (QoS) technology. Superior QoS technology will reduce OPEX by simplifying management and reduce CAPEX by consolidating many workloads onto the solution.
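Mechanically, storage QoS means rationing IOPS or bandwidth per workload so that no single noisy neighbor starves the rest. A token bucket is one common building block for such limiters; the sketch below is illustrative only, not any vendor's actual implementation:

```python
import time


class TokenBucket:
    """Token-bucket limiter: each I/O spends one token; tokens refill
    at the workload's provisioned IOPS rate, with bursts up to `burst`."""

    def __init__(self, iops_limit: float, burst: int):
        self.rate = iops_limit
        self.capacity = burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow_io(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # I/O is queued or throttled until tokens refill


# Example: cap a dev/test VM at 500 IOPS with a burst allowance of 100.
limiter = TokenBucket(iops_limit=500, burst=100)
```

The quality of an HCI platform's QoS shows up in how intelligently it sets and adjusts these limits per workload; the cruder the controls, the more manual tuning the administrator inherits.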
The ratification in November 2018 of the NVMe/TCP standard officially opened the door for NVMe/TCP to begin to find its way into corporate IT environments. Earlier this week I had the opportunity to listen in on a SNIA-hosted webinar that provided an update on NVMe/TCP's latest developments and its implications for enterprise IT. Here are four key takeaways from that presentation and how these changes will impact corporate data center Ethernet network designs.
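Part of NVMe/TCP's appeal is that attaching a host to a remote namespace requires only a standard Ethernet network and, on Linux, the nvme-cli utility. The sketch below automates discovery and connection from Python; the target address and NQN are placeholders:

```python
import subprocess

TARGET_ADDR = "192.0.2.10"  # placeholder target IP
TARGET_PORT = "4420"        # IANA-assigned NVMe/TCP port
SUBSYS_NQN = "nqn.2019-01.com.example:storage01"  # placeholder subsystem NQN

# Ask the target which NVMe subsystems it exports.
subprocess.run(
    ["nvme", "discover", "-t", "tcp", "-a", TARGET_ADDR, "-s", TARGET_PORT],
    check=True,
)

# Connect to a subsystem; its namespaces then appear as /dev/nvmeXnY block devices.
subprocess.run(
    ["nvme", "connect", "-t", "tcp", "-a", TARGET_ADDR, "-s", TARGET_PORT,
     "-n", SUBSYS_NQN],
    check=True,
)
```

Because this runs over ordinary TCP/IP, no special host bus adapters are required, which is precisely why NVMe/TCP puts new pressure on data center Ethernet designs.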
On the surface, all-inclusive software licensing sounds great. You get all the software features that the product offers at no additional charge. You can use them – or not use them – at your discretion. It simplifies product purchases and ongoing licensing. But what if you opt not to use all the product’s features or only need a small subset of them? In those circumstances, you need to take a hard look at any product that offers all-inclusive software licensing to determine if it will deliver the value that you expect.
In 2019, the level of interest that companies expressed in using artificial intelligence (AI) and machine learning (ML) exploded. Their interest is justifiable. These technologies gather the almost endless streams of data coming out of the scads of devices that companies deploy everywhere, analyze them, and then turn them into useful information. But time is the secret ingredient that companies must look for as they select an effective AI or ML product.
Across more than twenty years as an IT Director, I had many salespeople incorrectly tell me that their product was the only one that offered a particular benefit. Did their false claims harm their credibility? Absolutely. Were they trying to deceive me? Possibly. But it is far more likely that they lacked accurate, up-to-date information about the current capabilities of competing products in the marketplace. Their competitive intelligence system had failed them.
Vendors are finding multiple ways to enter the scale-out hyper-converged infrastructure (HCI) backup conversation. Some acquire other companies, as StorageCraft did in early 2017 with its acquisition of Exablox. Others build their own, as Cohesity and Commvault did. Yet among these many iterations of scale-out, HCI-based backup systems, HYCU's decision to piggyback its new HYCU-X on top of existing HCI offerings, starting with Nutanix's AHV HCI Platform, represents one of the more insightful ways to deliver backup using a scale-out architecture.
NVMe and other advances in non-volatile memory technology are generating a lot of buzz in the enterprise technology industry, and rightly so. As providers integrate these technologies into storage systems, they are closing the gap between the dramatic advances in processing power and the performance of the storage systems that support them. The TrueNAS M-Series from iXsystems provides an excellent example of what can be achieved when these technologies are thoughtfully integrated into a storage system.
Ensuring that an application migration to the cloud goes well, or even determining whether a company should migrate a specific application to the cloud at all, requires a thorough understanding of each application. This understanding should encompass what resources the application currently uses as well as how it behaves over time. Here is a list of best practices that a company can put in place for its on-premises applications, before it moves any of them to the cloud, to gather the information it needs about each one.
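As one concrete starting point, even a simple collector that samples a server's resource usage at a fixed interval will surface the behavior-over-time data these best practices depend on. A minimal sketch using the psutil Python library follows; the sampling interval and output file name are arbitrary choices:

```python
import csv
import time

import psutil  # pip install psutil

SAMPLE_INTERVAL = 60  # seconds; sample more finely around known busy periods

with open("app_server_metrics.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "cpu_pct", "mem_pct", "disk_read_mb", "disk_write_mb"])
    while True:  # run until stopped; in practice, schedule as a service
        disk = psutil.disk_io_counters()
        writer.writerow([
            time.strftime("%Y-%m-%d %H:%M:%S"),
            psutil.cpu_percent(interval=1),   # CPU utilization over a 1-second window
            psutil.virtual_memory().percent,  # memory currently in use
            disk.read_bytes / 2**20,          # cumulative MB read since boot
            disk.write_bytes / 2**20,         # cumulative MB written since boot
        ])
        f.flush()
        time.sleep(SAMPLE_INTERVAL)
```

Weeks of samples like these reveal peak-versus-average demand, which in turn drives both cloud instance sizing and the go/no-go migration decision for each application.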
There is little dispute that tomorrow's data center will become software-defined, for reasons no one entirely anticipated even as recently as a few years ago. While companies have long understood the benefits of virtualizing the infrastructure of their data centers, the complexities and costs of integrating and managing data center hardware far exceeded whatever benefits virtualization delivered. Now, thanks to technologies such as the Internet of Things (IoT), machine intelligence, and analytics, among others, companies may pursue software-defined strategies more aggressively.
Deduplication appliances remain a foundational technology in corporate data centers for cost-effective short-term backup storage, disaster recoveries, and long-term data retention. The HPE StoreOnce 5650 and Dell EMC Data Domain 9300, along with their respective virtual appliances, are two product lines to which companies often turn to host their backup data. While these two product lines share some common functionality, six key points of differentiation persist between them, which DCIG examines in its most recently released Pocket Analyst Report.
Hyper-converged infrastructure (HCI) appliances radically simplify the data center architecture. These pre-integrated appliances accelerate and simplify infrastructure deployment and management. They combine and virtualize compute, memory, storage, and networking functions from a single vendor in a scale-out cluster. As such, the stakes are high for vendors such as Dell EMC and Nutanix that are competing to own this critical piece of data center real estate.
Storage vendors hype NVMe for good reason. It enables all-flash arrays (AFAs) to fully deliver on flash's performance characteristics. Already NVMe serves as an interconnect between AFA controllers and their back-end solid state drives (SSDs), helping these AFAs unlock more of the performance that flash offers. However, NVMe's real performance benefits will be unlocked by four key trends set to converge in the 2019/2020 time period. Combined, these will open the door for many more companies to experience the full breadth of performance benefits that NVMe provides across a much wider swath of the applications running in their environments.
Many organizations view hyper-converged infrastructure appliances (HCIAs) as foundational for the cloud data center architecture of the future. However, as part of an HCIA solution, one must also select a hypervisor to run on this platform. The VMware vSphere and Nutanix AHV hypervisors are two capable choices, but key differences exist between them.
Hyper-converged infrastructure appliances (HCIAs) radically simplify the next generation of data center architectures. Combining and virtualizing compute, memory, storage, networking, and data protection functions from a single vendor in a scale-out cluster, these pre-integrated appliances accelerate and simplify infrastructure deployment and management. As such, the stakes are high for vendors such as Dell EMC and Nutanix that are competing to own this critical piece of data center infrastructure real estate.
Mention data management to almost any seasoned IT professional and they will immediately greet the term with skepticism. While organizations have found they can manage their data within certain limits, when they remove those boundaries and attempt to do so at scale, those initiatives have historically fallen far short, if not outright failed. It is time for that perception to change. Twenty years in the making, Commvault Activate puts organizations in a position to finally manage their data at scale.
The shift is on toward using cloud service providers for an increasing number of production IT functions, with backup and DR often at the top of the list of tasks that companies first want to deploy in the cloud. But as IT staff seek to "check the box" confirming they comply with corporate directives to have a cloud solution in place for backup and DR, they also need to check the "simplicity," "cost savings," and "it works" boxes.
When it comes to the mix of data protection challenges that exist within enterprises today, these companies would love to identify a single product that they can deploy to solve them all. I hate to be the bearer of bad news, but that single-product solution does not yet exist. That said, enterprises will find a steadily improving ecosystem of products that increasingly work well together to address these challenges, with HPE at the forefront of putting up a big tent that brings these products together and delivers them as a single solution.