The DCIG 2019-20 Enterprise Deduplication Backup Target Appliance Buyer’s Guide helps enterprises assess the enterprise deduplication backup target appliance marketplace and identify which appliance may be the best fit for their environment. This Buyer’s Guide includes data sheets for 19 enterprise deduplication backup target appliances that achieved rankings of Recommended and Excellent. These products are available from five vendors: Cohesity, Dell EMC, ExaGrid, HPE, and NEC.
iXsystems is taking simplified service delivery to a new level by enabling a curated set of third-party services to run directly on its TrueNAS arrays. TrueNAS already provided multi-protocol unified storage, including file, block, and S3-compatible object storage. Now pre-configured plugins converge additional services onto TrueNAS for simple hybrid cloud enablement.
Persistent Memory is bringing a revolution in performance, cost and capacity that will change server, storage system, data center and software design over the next decade. This article describes some ways storage vendors are integrating persistent memory into enterprise storage systems in 2019.
The SNIA Persistent Memory Summit held in late January 2019 provided a good view into the current state of the industry. Some key technologies and standards related to persistent memory are moving forward more slowly than expected. Others are finally transitioning from promise to products. This article summarizes a few key takeaways from the event as they relate to enterprise storage systems.
Companies of all sizes pay more attention to their backup and recovery infrastructure than perhaps ever before. While they still rightfully prioritize their production infrastructure over their backup one, companies seem to recognize that they can use backups as more than just insurance policies to recover their production data. This is resulting in cutting-edge innovations such as analytics, microservices, and scalable storage finding their way into backup solutions in general and backup appliances specifically.
Companies are always on the lookout for simpler, more cost-effective methods to manage their infrastructure. This explains, in part, the emergence of scale-out architectures over the last few years as a preferred means for implementing backup appliances. As scale-out architectures gain momentum, it behooves companies to take a closer look at the benefits and drawbacks of both scale-out and scale-up architectures to make the best choice for their environment.
Dell EMC announced that it will soon add Optane-based storage to its PowerMax arrays, and that PowerMax will use Optane as a storage tier, not “just” a cache. This statement implies that using Optane as a storage tier is superior to using it as a cache. But is it?
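The distinction behind that question can be illustrated with a toy model. Everything below is a hypothetical sketch, not PowerMax internals, and the latency figures are illustrative placeholders: a cache holds temporary copies of hot blocks in front of slower media, so misses and evictions occur, while a tier holds the authoritative copy of each block, placed there by policy.

```python
from collections import OrderedDict

# Illustrative placeholder latencies, not measured PowerMax figures.
FAST_NS, SLOW_NS = 350, 90_000  # hypothetical Optane vs. NAND read latency

class CachedStore:
    """Optane as cache: every block lives on NAND; hot copies sit in Optane."""
    def __init__(self, cache_blocks):
        self.cache = OrderedDict()
        self.capacity = cache_blocks

    def read(self, block):
        if block in self.cache:              # cache hit: served from Optane copy
            self.cache.move_to_end(block)
            return FAST_NS
        self.cache[block] = True              # miss: fetch from NAND, keep a copy
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict the least-recently-used copy
        return SLOW_NS

class TieredStore:
    """Optane as tier: each block lives in exactly one tier, placed by policy."""
    def __init__(self, hot_blocks):
        self.hot = set(hot_blocks)            # authoritative copies pinned to Optane

    def read(self, block):
        return FAST_NS if block in self.hot else SLOW_NS

cached = CachedStore(cache_blocks=2)
tiered = TieredStore(hot_blocks={0, 1})
workload = [0, 1, 0, 1, 5, 0]                 # mostly re-reads of two hot blocks
print(sum(cached.read(b) for b in workload))
print(sum(tiered.read(b) for b in workload))
```

In this toy run the tier wins because the cold read of block 5 evicts a hot block from the small cache, while the tier's policy-placed hot set is undisturbed; a cache-friendlier workload would flip the result, which is why the article's question has no one-word answer.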
Malware – and specifically ransomware – regularly makes headlines, with businesses around the world reporting that their data has been encrypted by it. Given how routine these attacks have become, companies need to acknowledge that standard first-line defenses such as cybersecurity and backup software no longer suffice to detect malware. To shore up these traditional defenses, companies need to take new steps which, for many, will start with creating a secondary perimeter around their backup stores to detect the presence of malware.
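One hedged illustration of what such a secondary perimeter might check (the threshold and function names below are assumptions for this sketch, not any product's method) is scanning backup content for data that is suddenly high-entropy, since ransomware-encrypted files approach the entropy of random data while ordinary documents sit far lower:

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte; random/encrypted data approaches 8.0, plain text sits far lower."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical threshold: flag backup content whose entropy looks encrypted.
SUSPICIOUS_ENTROPY = 7.5

def looks_encrypted(data: bytes) -> bool:
    return shannon_entropy(data) >= SUSPICIOUS_ENTROPY

print(looks_encrypted(b"quarterly sales report, plain text " * 100))  # False
print(looks_encrypted(os.urandom(4096)))  # almost certainly True
```

A real perimeter would combine signals like this with others (sudden drops in deduplication ratios, mass file renames) rather than relying on entropy alone, since compressed media is also high-entropy.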
The first movie I remember seeing in a theater was 2001: A Space Odyssey. If you saw it, I am guessing that you remember it, too. At the core of the story is HAL, a sophisticated computer that controls everything on a space ship en route to Jupiter. The movie is ultimately a story of artificial intelligence gone awry.
The cloud has gone mainstream with more companies than ever looking to host their production applications with general-purpose cloud providers such as the Google Cloud Platform (GCP). As this occurs, companies must identify backup solutions architected for the cloud that capitalize on the native features of each provider’s cloud offering to best protect their virtual machines (VMs) hosted in the cloud.
One would think that with the continuing explosion in the amount of data being created every year, the number of appliances that can reduce the amount of data stored by deduplicating it would be increasing. That statement is both true and flawed. On one hand, the number of backup and storage appliances that can deduplicate data has never been higher and continues to increase. On the other hand, the number of vendors that create physical target-based appliances dedicated to the deduplication of backup data continues to shrink.
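The core idea behind a target-based deduplication appliance can be sketched in a few lines. Fixed-size chunking and SHA-256 fingerprints are illustrative choices for this sketch; production appliances typically use variable-size chunking and store chunks compressed on disk:

```python
import hashlib

class DedupeStore:
    """Minimal content-addressed store: identical chunks are kept only once."""
    def __init__(self, chunk_size=4096):
        self.chunk_size = chunk_size
        self.chunks = {}       # fingerprint -> chunk bytes
        self.logical = 0       # bytes written by backup clients
        self.physical = 0      # bytes actually stored

    def write(self, data: bytes) -> list:
        recipe = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            fp = hashlib.sha256(chunk).hexdigest()
            if fp not in self.chunks:      # store only the first copy of a chunk
                self.chunks[fp] = chunk
                self.physical += len(chunk)
            self.logical += len(chunk)
            recipe.append(fp)
        return recipe                      # fingerprint list reconstructs the stream

    def read(self, recipe: list) -> bytes:
        return b"".join(self.chunks[fp] for fp in recipe)

store = DedupeStore()
backup = b"A" * 8192 + b"B" * 4096         # two identical 4 KiB "A" chunks
recipe = store.write(backup)
assert store.read(recipe) == backup
print(store.logical, store.physical)       # 12288 8192 -> a 1.5:1 dedupe ratio
```

Because repeated full backups of mostly unchanged data produce mostly duplicate chunks, real-world ratios on backup workloads run far higher than this toy 1.5:1 example.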
Virtualization largely shaped the enterprise data center landscape for the past ten years. Hyper-converged infrastructure (HCI) is beginning to have the same type of impact, re-shaping the enterprise data center to fully capitalize on the benefits that virtualizing the infrastructure affords. Enterprises considering HCI as a replacement for existing core data center infrastructure should give special attention to how the solution implements quality of service (QoS) technology. Superior QoS technology will reduce OPEX by simplifying management and reduce CAPEX by consolidating many workloads onto the solution.
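What QoS technology does in practice can be illustrated with a minimal token-bucket rate limiter of the kind commonly used to cap a workload's IOPS so a noisy neighbor cannot starve consolidated workloads. The class below is a generic sketch, not any HCI vendor's implementation:

```python
class TokenBucket:
    """Generic token-bucket limiter: caps a workload at `rate` ops per second."""
    def __init__(self, rate: float, burst: float):
        self.rate = rate            # tokens added per second
        self.burst = burst          # maximum tokens the bucket can hold
        self.tokens = burst
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # Refill tokens for the elapsed time, capped at the burst capacity.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0      # spend one token per admitted I/O
            return True
        return False                # over the cap: queue or throttle this I/O

# Four back-to-back I/Os against a bucket that allows a burst of three:
bucket = TokenBucket(rate=100.0, burst=3.0)
print([bucket.allow(now=0.0) for _ in range(4)])  # [True, True, True, False]
```

Per-VM or per-volume buckets like this are one common way a platform enforces minimum and maximum performance levels; how (and whether) a given HCI solution exposes such controls is exactly what the article suggests evaluating.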
The ratification in November 2018 of the NVMe/TCP standard officially opened the doors for NVMe/TCP to begin to find its way into corporate IT environments. Earlier this week I had the opportunity to listen in on a webinar that SNIA hosted which provided an update on NVMe/TCP’s latest developments and its implications for enterprise IT. Here are four key takeaways from that presentation and how these changes will impact corporate data center Ethernet network designs.
On the surface, all-inclusive software licensing sounds great. You get all the software features that the product offers at no additional charge. You can use them – or not use them – at your discretion. It simplifies product purchases and ongoing licensing. But what if you opt not to use all the product’s features or only need a small subset of them? In those circumstances, you need to take a hard look at any product that offers all-inclusive software licensing to determine if it will deliver the value that you expect.
In 2019 the level of interest that companies expressed in using artificial intelligence (AI) and machine learning (ML) exploded. Their interest is justifiable. These technologies gather the almost endless streams of data coming out of the scads of devices that companies deploy everywhere, analyze it, and then turn it into useful information. But time is the secret ingredient that companies must consider as they select an effective AI or ML product.
Across more than twenty years as an IT Director, I had many salespeople incorrectly tell me that their product was the only one that offered a particular benefit. Did their false claims harm their credibility? Absolutely. Were they trying to deceive me? Possibly. But it is far more likely they lacked accurate and up-to-date information about the current capabilities of competing products in the marketplace. Their competitive intelligence system had failed them.
Vendors are finding multiple ways to enter the scale-out hyper-converged infrastructure (HCI) backup conversation. Some acquire other companies, as StorageCraft did in early 2017 with its acquisition of Exablox. Others build their own, as Cohesity and Commvault did. Yet among these many iterations of scale-out, HCI-based backup systems, HYCU’s decision to piggyback its new HYCU-X on top of existing HCI offerings, starting with Nutanix’s AHV HCI platform, represents one of the better and more insightful ways to deliver backup using a scale-out architecture.
NVMe and other advances in non-volatile memory technology are generating a lot of buzz in the enterprise technology industry, and rightly so. As providers integrate these technologies into storage systems they are closing the gap between the dramatic advances in processing power and the performance of the storage systems that support them. The TrueNAS M-Series from iXsystems provides an excellent example of what can be achieved when these technologies are thoughtfully integrated into a storage system.
Ensuring that an application migration to the cloud goes well – or determining whether a company should migrate a specific application to the cloud at all – requires a thorough understanding of each application. This understanding should encompass what resources the application currently uses as well as how it behaves over time. To gather the information it needs about each application, here is a list of best practices that a company can put in place for its on-premises applications before it moves any of them to the cloud.
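Understanding what resources an application uses over time usually starts with baselining: recording periodic samples of each metric and reducing them to the statistics cloud sizing needs. The class, metric name, and sample values below are illustrative assumptions for this sketch, not part of the article's list:

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class ResourceBaseline:
    """Collects periodic samples of one metric (e.g., CPU %) for an application."""
    metric: str
    samples: list = field(default_factory=list)

    def record(self, value: float) -> None:
        self.samples.append(value)

    def summary(self) -> dict:
        # Peak drives instance sizing; p95 avoids over-sizing for rare spikes.
        ordered = sorted(self.samples)
        p95 = ordered[int(0.95 * (len(ordered) - 1))]
        return {
            "metric": self.metric,
            "avg": mean(self.samples),
            "p95": p95,
            "peak": ordered[-1],
        }

# Hypothetical hourly CPU samples for an on-premises app over one day:
cpu = ResourceBaseline("cpu_percent")
for sample in [12, 15, 11, 14, 13, 16, 40, 85, 90, 88, 60, 30,
               25, 28, 27, 24, 22, 35, 55, 70, 45, 20, 15, 12]:
    cpu.record(sample)
print(cpu.summary())
```

Collected over weeks rather than a single day, summaries like this capture the "how it behaves over time" dimension: the business-hours spike in the sample data is exactly the pattern that distinguishes an application that needs a large fixed instance from one suited to autoscaling.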