More companies than ever want to use the cloud as part of their overall IT strategy. To do so, they often look to achieve some quick wins in the cloud to demonstrate its value. Achieving these quick wins also gives them practical hands-on experience in the cloud. Incorporating the cloud into your backup and disaster recovery (DR) processes may serve as the best way to get these wins.
Scalable data protection appliances have arguably emerged as one of the hottest backup trends in quite some time, possibly since the introduction of deduplication into the backup process. These appliances offer backup software, cloud connectivity, replication, and scalable storage in a single converged or hyperconverged infrastructure platform. This combination simplifies backup while positioning a company to seamlessly incorporate the appliance into its disaster recovery strategy, or even to create a DR solution for the first time.
Companies are always on the lookout for simpler, more cost-effective methods to manage their infrastructure. This explains, in part, the emergence of scale-out architectures over the last few years as a preferred means for implementing backup appliances. As scale-out architectures gain momentum, it behooves companies to take a closer look at the benefits and drawbacks of both scale-out and scale-up architectures so they can make the best choice for their environment.
One would think that with the continuing explosion in the amount of data being created every year, the number of appliances that can reduce the amount of data stored by deduplicating it would be increasing. That statement is both true and flawed. On one hand, the number of backup and storage appliances that can deduplicate data has never been higher and continues to increase. On the other hand, the number of vendors that create physical target-based appliances dedicated to the deduplication of backup data continues to shrink.
There is little dispute that tomorrow’s data center will become software-defined, for reasons no one entirely anticipated even as recently as a few years ago. While companies have long understood the benefits of virtualizing the infrastructure of their data centers, the complexities and costs of integrating and managing data center hardware far exceeded whatever benefits virtualization delivered. Now, thanks to technologies such as the Internet of Things (IoT), machine intelligence, and analytics, among others, companies may pursue software-defined strategies more aggressively.
Deduplication appliances remain a foundational technology in corporate data centers for cost-effective short-term backup storage, disaster recoveries, and long-term data retention. The HPE StoreOnce 5650 and Dell EMC Data Domain 9300, along with their respective virtual appliances, are two product lines to which companies often turn to host their backup data. While these two product lines share some common functionality, six key points of differentiation between them persist, which DCIG examines in its most recently released Pocket Analyst Report.
In the last few years, all-flash arrays have taken enterprise data centers by storm. As that has occurred, the criteria by which organizations should evaluate storage arrays from competing vendors have changed substantially. Features that once mattered considerably now barely get anyone’s attention, while features that no one had knowledge of a few years ago are closely scrutinized. Here are three features that organizations should examine on all-flash arrays and one feature that has largely dropped off the radar screen in terms of importance.
One of the more perplexing challenges that Nutanix administrators face is how to protect the data in their Nutanix deployments. Granted, Nutanix natively offers its own data protection utilities. However, these utilities leave gaps that enterprises are unlikely to find palatable when protecting their production applications. This is where Comtrade Software’s HYCU and ExaGrid come into play as their combined solutions provide a more affordable and elegant approach to protecting Nutanix environments.
Deduplication backup target appliances remain a critical component of the data protection infrastructure for many enterprises. While storing protected data in the cloud may be fine for very small businesses, or even as a final resting place for enterprise data, deduplication backup target appliances continue to function as the primary backup target and primary source for recovering data. It is for these reasons that enterprises frequently turn to deduplication backup target appliances from Dell EMC and ExaGrid to meet the specific needs covered in a recent DCIG Pocket Analyst Report.
Hybrid and all-disk arrays still have their place in enterprise data centers, but all-flash arrays are “where it’s at” when it comes to hosting and accelerating the performance of production applications. Once reserved only for applications that could cost-justify these arrays, continuing price erosion in the underlying flash media, coupled with technologies such as compression and deduplication, has put these arrays at a price point within reach of almost any size enterprise. As that occurs, the Dell EMC XtremIO and Pure Storage all-flash arrays are often on the buying short lists of many companies. Companies considering these two products can turn to a recent DCIG Pocket Analyst Report that compares them to help make an informed buying decision.
DCIG Pocket Analyst Report Compares Dell EMC Data Domain and ExaGrid Product Families
Technology conversations within enterprises increasingly focus on the “data center stack” with an emphasis on cloud enablement. While I agree with this shift in thinking, one can too easily overlook the merits of underlying individual technologies when only considering the “Big Picture”. Such is happening with deduplication technology. Deduplication is a key enabler of enterprise archiving, data protection, and disaster recovery solutions, and vendors such as Dell EMC and ExaGrid deliver it in different ways, as DCIG’s most recent 4-page Pocket Analyst Report reveals, making each product family better suited for specific use cases.
Vendors first started bandying about the phrase “cloud data management” a year or so ago. While that phrase caught my attention, specifics as to what one should expect when acquiring a “cloud data management” solution remained nebulous at best. Fast forward to this week’s Veritas Vision 2017, and I finally encountered a vendor that was providing meaningful details as to what cloud data management encompasses while simultaneously performing a 180 behind the scenes.
IT professionals need to exercise caution before making two assumptions when evaluating cloud data protection products. One is to assume all products share some feature or features in common. The other is to assume that one product possesses some feature or characteristic that no other product on the market offers. As DCIG reviews its recent research into cloud data protection products, one cannot make either of these assumptions, even on features such as deduplication, encryption, and replication that one might expect to be universally adopted by these products in comparable ways.
Today’s backup mantra seems to be backup to the cloud or bust! But backup to the cloud is more than just redirecting backup streams from a local file share to a file share presented by a cloud storage provider and clicking the “Start” button. Organizations must examine which cloud storage providers they can send their data to, as well as how their backup software packages and sends the data to the cloud. BackupAssist 10.0 answers many of these tough questions about cloud data protection that businesses face while providing welcome flexibility in their choice of cloud storage providers.
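To illustrate what “packaging and sending” involves beyond redirecting a backup stream, here is a minimal, hypothetical sketch in Python. It assumes the backup software compresses the stream and computes a checksum so the cloud copy can be verified after upload; the `upload_to_provider` call is a stand-in for whatever provider-specific API (S3, Azure Blob, and so on) a given product supports, not a real function.

```python
import gzip
import hashlib

def package_backup(payload: bytes) -> tuple[bytes, str]:
    """Compress a backup stream and compute a SHA-256 checksum so the
    cloud copy can be verified after upload."""
    compressed = gzip.compress(payload)
    digest = hashlib.sha256(compressed).hexdigest()
    # upload_to_provider(compressed, digest)  # hypothetical provider-specific call
    return compressed, digest

# Package a sample backup stream; the original data must survive a round trip.
blob, digest = package_backup(b"example backup data" * 1000)
assert gzip.decompress(blob) == b"example backup data" * 1000
```

The point of the sketch is that compression, integrity checking, and the provider-specific transfer step are each decisions the backup software makes on your behalf, which is why provider flexibility matters.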
If you assume that leading enterprise midrange all-flash arrays (AFAs) support deduplication, your assumption would be correct. But if you assume that these arrays implement and deliver deduplication in the same way, you would be mistaken. These differences should influence any all-flash array buying decision, as deduplication’s implementation affects the array’s total effective capacity, performance, usability, and, ultimately, your bottom line.
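As a rough illustration of why implementation details matter, the following Python sketch shows one simple approach, fixed-block deduplication keyed by a SHA-256 fingerprint, and how the resulting dedupe ratio translates into effective capacity. This is an assumption-laden toy model: real arrays may use variable-length chunking, different hash functions, and inline versus post-process designs, which is exactly where the products diverge.

```python
import hashlib

BLOCK_SIZE = 4096  # fixed-block chunking; many arrays use variable-length chunks instead

def dedupe(data: bytes, block_size: int = BLOCK_SIZE):
    """Split data into fixed-size blocks and store only the unique ones,
    keyed by a SHA-256 fingerprint of each block."""
    store = {}    # fingerprint -> unique block
    recipe = []   # ordered fingerprints needed to rebuild the stream
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        fp = hashlib.sha256(block).hexdigest()
        store.setdefault(fp, block)
        recipe.append(fp)
    return store, recipe

def dedupe_ratio(data: bytes, store) -> float:
    """Logical bytes written divided by physical bytes actually stored."""
    stored = sum(len(b) for b in store.values())
    return len(data) / stored if stored else 1.0

# Twenty logical blocks but only two unique ones: a 10:1 reduction,
# so 10 TB of raw flash would present roughly 100 TB of effective capacity.
data = b"A" * 4096 * 10 + b"B" * 4096 * 10
store, recipe = dedupe(data)
```

Two arrays with identical raw capacity can therefore advertise very different effective capacities depending on how aggressively (and at what performance cost) their deduplication works.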
In early November DCIG finalized its research into all-flash arrays and, in the coming weeks and months, will be announcing its rankings in its various Buyer’s Guide Editions as well as in its new All-flash Array Product Ranking Bulletins. It is as DCIG prepares to release its all-flash array rankings that we also find ourselves remarking on just how quickly interest in HDD-based arrays has declined this year alone. While we are not ready to declare HDDs dead by any stretch, finding any sparks of interest or innovation in hard disk drives (HDDs) is getting increasingly difficult.
DCIG is pleased to announce the availability of the 2016-17 Hybrid Cloud Backup Appliance Buyer’s Guide developed from the backup appliance body of research. As core business processes become digitized, the ability to keep services online and to rapidly recover from any service interruption becomes a critical need. Given the growth and maturation of cloud services, many organizations are exploring the advantages of storing application data with cloud providers and even recovering applications in the cloud.
Enterprises now demand higher levels of automation, integration, simplicity, and scalability from every component deployed into their IT infrastructure, and the integrated backup appliances found in DCIG’s forthcoming Buyer’s Guide Editions covering integrated backup appliances are a clear output of those expectations. Intended for organizations that want to protect applications and data and keep them behind corporate firewalls, these backup appliances come fully equipped from both hardware and software perspectives to do so.
Usually when I talk to backup and system administrators, they willingly talk about how great a product installation was. But it then becomes almost impossible to find anyone who wants to comment on what life is like after their backup appliance is installed. This blog entry represents a bit of an anomaly in that someone willingly pulled back the curtain on his experience after the appliance was installed. In this third installment of my interview series, system architect Fidel Michieli describes how the implementation of Cohesity went in his environment and how Cohesity responded to issues that arose.
Evaluating product features, comparing prices, and conducting proofs of concept are important steps in the process of adopting almost any new product. But once one completes those steps, the time arrives to roll the product out and implement it. In this second installment of my interview series with system architect Fidel Michieli, he shares how his company gained a comfort level with Cohesity for backup and disaster recovery (DR) and how broadly it decided to deploy the product in its primary and secondary data centers.