Rethinking Your Data Deduplication Strategy in the Software-defined Data Center

Data Centers Going Software-defined

There is little dispute that tomorrow’s data center will become software-defined, for reasons no one entirely anticipated even as recently as a few years ago. While companies have long understood the benefits of virtualizing the infrastructure of their data centers, the complexities and costs of integrating and managing data center hardware far exceeded whatever benefits that virtualization delivered. Now, thanks to technologies such as the Internet of Things (IoT), machine intelligence, and analytics, among others, companies can pursue software-defined strategies more aggressively.

The introduction of technologies that can monitor, report on, analyze, and increasingly manage and optimize data center hardware frees organizations from performing housekeeping tasks such as:

  • Verifying hardware firmware compatibility with applications and operating systems
  • Troubleshooting hot spots in the infrastructure
  • Identifying and repairing failing hardware components

Automating these tasks does more than change how organizations manage their data center infrastructures. It reshapes how they can think about their entire IT strategy. Rather than adapting their business to match the limitations of the hardware they choose, they can now pursue business objectives and expect their IT hardware infrastructure to support those initiatives.

This change in perspective has already led to the availability of software-defined compute, networking, and storage solutions. Further, software-defined applications such as databases, firewalls, and other applications that organizations commonly deploy have also emerged. These virtual appliances enable companies to quickly deploy entire application stacks. While it is premature to say that organizations can immediately virtualize their entire data center infrastructure, the foundation exists for them to do so.

Software-defined Storage Deduplication Targets

As they do, data protection software, like any other application, needs to be part of this software-defined conversation. In this regard, backup software finds itself well-positioned to capitalize on this trend. It can be installed on either physical or virtual machines (VMs) and already ships from many providers as a virtual appliance. But storage software that functions primarily as a deduplication storage target already finds itself being boxed out of the broader software-defined conversation.

Software-defined storage (SDS) deduplication targets exist and have significantly increased their storage capabilities. By the end of 2018, a few of these software-defined virtual appliances scaled to support about 100TB or more of capacity. But organizations must exercise caution when looking to position these available solutions as a cornerstone in a broader software-defined deduplication storage target strategy.

This caution, in many cases, stems less from the technology itself and more from the vendors who provide these SDS deduplication target solutions. In every case, save one, these solutions originate with providers who focus on selling hardware solutions.

Foundation for Software-defined Data Centers Being Laid Today

Companies are putting plans in place right now to build the data center of tomorrow. That data center will be a largely software-defined data center with solutions that span both on-premises and cloud environments. To achieve that end, companies need to select solutions that have a software-defined focus and that meet their current needs while positioning them for tomorrow’s requirements.

Most layers in the data center stack, to include compute, networking, storage, and even applications, are already well down the road of transforming from hardware-centric to software-centric offerings. Yet in the face of this momentous shift in corporate data center environments, SDS deduplication target solutions have been slow to adapt.

It is this gap that SDS deduplication products such as Quest QoreStor look to fill. Coming from a company with “software” in its name, Quest carries none of the hardware baggage that other SDS providers must balance. More importantly, Quest QoreStor offers a feature-rich set of services, ranging from deduplication to replication to support for all major cloud, hardware, and backup software platforms, that draws on 10 years of experience in delivering deduplication software.

Free to focus solely on delivering an SDDC solution, Quest QoreStor represents the type of SDS deduplication target that truly meets the needs of today’s enterprises while positioning them to realize the promise of tomorrow’s software-defined data center.

To read more of DCIG’s thoughts about using SDS deduplication targets in the software-defined data center of tomorrow, follow this link.




Key Differentiators between the HPE StoreOnce 5650 and Dell EMC Data Domain 9300 Systems

Companies have introduced a plethora of technologies into their core enterprise infrastructures in recent years that include all-flash arrays, cloud, hyper-converged infrastructures, object-based storage, and snapshots, just to name a few. But as they do, a few constants remain. One is the need to back up and recover all the data they create.

Deduplication appliances remain one of the primary means for companies to store this data for short-term recovery, disaster recoveries, and long-term data retention. To fulfill these various roles, companies often select either the HPE StoreOnce 5650 or the Dell EMC Data Domain 9300. (To obtain a complimentary DCIG report that compares these two products, follow this link.)

Their respective deduplication appliance lines share many features in common. They both perform inline deduplication. They both offer client software to do source-side deduplication, which reduces data sent over the network and improves backup throughput rates. They both provide companies with the option to back up data over NAS or SAN interfaces.
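For readers unfamiliar with how source-side deduplication reduces network traffic, the sketch below illustrates the general idea: the backup client fingerprints each chunk of data and transfers only the chunks the target has never seen. It is a minimal illustration in Python, not either vendor’s client software; the 4MB chunk size, SHA-256 fingerprints, and the DedupTarget class are assumptions made purely for the example.

```python
import hashlib

CHUNK_SIZE = 4 * 1024 * 1024  # assume fixed 4 MB chunks purely for illustration

class DedupTarget:
    """Stand-in for a deduplicating backup target (hypothetical, in-memory)."""
    def __init__(self):
        self.store = {}  # fingerprint -> chunk bytes

    def missing(self, fingerprints):
        """Report which fingerprints the target has never seen."""
        return [fp for fp in fingerprints if fp not in self.store]

    def put(self, fingerprint, chunk):
        self.store[fingerprint] = chunk

def source_side_backup(data, target):
    """Send only the chunks the target lacks; return the bytes actually transferred."""
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    fingerprints = [hashlib.sha256(c).hexdigest() for c in chunks]
    needed = set(target.missing(fingerprints))
    sent = 0
    for fp, chunk in zip(fingerprints, chunks):
        if fp in needed:
            target.put(fp, chunk)
            sent += len(chunk)
            needed.discard(fp)  # each unique chunk crosses the network only once
    return sent
```

A second full backup of mostly unchanged data would transfer only the handful of chunks the target has not already stored, which is why source-side deduplication improves effective backup throughput over the network.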

Despite these similarities, key areas of differentiation between these two product lines remain, including the following:

  1. Cloud support. Every company either has or anticipates using a hybrid cloud configuration as part of its production operations. These two product lines differ in their levels of cloud support.
  2. Deduplication technology. Data Domain was arguably the first to popularize widespread use of deduplication for backup. Since then, others such as the HPE StoreOnce 5650 have come on the scene that compete head-to-head with Data Domain appliances.
  3. Breadth of application integration. Software plug-ins that work with applications and understand their data formats prior to deduplicating the data provide tremendous benefits as they improve data reduction rates and decrease the amount of data sent over the network during backups. The software that accompanies the appliances from these two providers has varying degrees of integration with leading enterprise applications.
  4. Licensing. The usefulness of any product hinges on the features it offers, their viability, and which ones are available to use. Clear distinctions between the HPE StoreOnce and Dell EMC Data Domain solutions exist in this area.
  5. Replication. Copying data off-site for disaster recovery and long-term data retention is paramount in comprehensive enterprise disaster recovery strategies. Products from each of these providers offer replication, but they differ in the number of replication features they deliver.
  6. Virtual appliance. As more companies adopt software-defined data center strategies, virtual appliances have increased appeal.

In the latest DCIG Pocket Analyst Report, DCIG compares the HPE StoreOnce 5650 and Dell EMC Data Domain 9300 product lines and examines how well each fares in its support of these six areas, looking at nearly 100 features to draw its conclusions. This report is currently available at no charge for a limited time on DCIG’s partner website, TechTrove. To receive complimentary access to this report, complete the registration form that you can find at this link.




Three Features that Matter on All-flash Arrays and One that Matters Not So Much

In the last few years all-flash arrays have taken enterprise data centers by storm but, as that has occurred, the criteria by which organizations should evaluate storage arrays from competing vendors have changed substantially. Features that once mattered considerably now barely get anyone’s attention while features that no one had knowledge of a few years ago are closely scrutinized. Here are three features that organizations should examine on all-flash arrays and one feature that has largely dropped off the radar screen in terms of importance.

Performance and throughput seem to be top of mind with every organization when it comes to evaluating all-flash arrays, and all-flash arrays certainly differ in their ability to deliver on those attributes depending on the applications they are intended to host. However, most organizations will find that many all-flash arrays will provide the levels of performance and throughput that their applications require. As such, there are three other features to which they should pay attention when evaluating different products. These include:

I. Storage capacity optimization technologies. Of the 90+ enterprise all-flash arrays that DCIG recently evaluated in anticipation of its forthcoming All-flash Array Buyer’s Guides, over 75% supported some type of storage capacity optimization technology, whether compression, deduplication, or both. While DCIG generally believes these technologies positively influence an organization’s ability to maximize available storage capacity, organizations should verify their environment will benefit from their use. Organizations that plan to host virtual machines and/or databases on all-flash arrays will almost always recognize the benefits of these technologies.

Source: DCIG

II. Breadth of VMware vSphere API Support. The tremendous storage capacity optimization benefits that all-flash arrays can offer for virtualized environments can be further amplified by the level of VMware vSphere API support that the array offers. However, the level of support for these APIs varies significantly, even among arrays from the same vendor. The chart below shows the level of support that one vendor offers for these APIs across its multiple AFA products (more than 10). If organizations plan to use these APIs and the features they offer, they should determine which ones they want to use and ensure that the array they select offers them.

Source: DCIG

III. Non-disruptive upgrades. Who has time for outages or maintenance windows anymore? Or, maybe better phrased, who wants to explain to their management why they had an outage or needed an extended maintenance window? In short, no one wants to have those conversations, and the good news is that many of today’s all-flash arrays offer non-disruptive upgrades and maintenance. The bad news is that there are still some gaps in the non-disruptive nature of today’s enterprise AFA models, as the chart below illustrates. If you are ready to put application outages in your past, verify that the all-flash array you select has all the non-disruptive features your company requires.

Source: DCIG

All the changes in storage arrays are not without their upside, and one feature that organizations need to be less concerned about, or perhaps not concerned about at all, is the RAID options on today’s all-flash arrays. As many of today’s all-flash arrays are re-inventions of yesterday’s hard disk drive arrays, they carried forward the RAID methodologies offered on them. The good news is that SSD failures are few and far between, so almost any RAID implementation will work equally well. If anything, organizations should give preference to those all-flash arrays that offer their own proprietary RAID implementation with a flash-first design that does not carry forward the HDD baggage those older RAID implementations were designed to address.




A More Elegant (and Affordable) Approach to Nutanix Backups

One of the more perplexing challenges that Nutanix administrators face is how to protect the data in their Nutanix deployments. Granted, Nutanix natively offers its own data protection utilities. However, these utilities leave gaps that enterprises are unlikely to find palatable when protecting their production applications. This is where Comtrade Software’s HYCU and ExaGrid come into play as their combined solutions provide a more affordable and elegant approach to protecting Nutanix environments.

One of the big appeals of using a hyperconverged solution such as Nutanix is its inclusion of basic data protection utilities. Using its Time Stream and Cloud Connect technologies, Nutanix makes it easy and practical for organizations to protect applications hosted on VMs running on Nutanix deployments.

The issue becomes: how does one affordably deliver and manage data protection in Nutanix environments at scale? This is a tougher question for Nutanix to answer because using its data protection technologies at scale requires running the Nutanix platform to host the secondary/backup copies of data. While that is certainly doable, that approach is likely not the most affordable way to tackle this challenge.

This is where a combined data protection solution from Comtrade Software and ExaGrid for the protection of Nutanix environments makes sense. Comtrade Software’s HYCU was the first backup software product to come to market purpose-built to protect Nutanix environments. Like Nutanix’s native data protection utilities, Nutanix administrators can manage HYCU and their VM backups from within the Nutanix PRISM management console. Unlike Nutanix’s native data protection utilities, HYCU auto-detects applications running within VMs and configures them for protection.

Further distinguishing HYCU from other competitive backup software products mentioned on Nutanix’s web page, HYCU is the only one currently listed that can run as a VM in an existing Nutanix implementation. The other products listed require organizations to deploy a separate physical machine to run their software, which adds cost and complexity to the backup equation.

Of course, once HYCU protects the data, the question becomes where to store the backup copies of data for fast recoveries and long-term retention. While one can certainly keep these backup copies on the existing Nutanix deployment or on a separate deployment, this creates two issues.

  • One, if there is some issue with the current Nutanix deployment, you may not be able to recover the data.
  • Two, there are more cost-effective solutions for the storage and retention of backup copies of data.

ExaGrid addresses these two issues. Its scale-out architecture resembles Nutanix’s architecture enabling an ExaGrid deployment to start small and then easily scale to greater amounts of capacity and throughput. However, since it is a purpose-built backup appliance intended to store secondary copies of data, it is more affordable than deploying a second Nutanix cluster. Further, the Landing Zones that are uniquely found on ExaGrid deduplication systems facilitate near instantaneous recovery of VMs.

Adding to the appeal of ExaGrid’s solutions in enterprise environments is its recently announced EX63000E appliance. This appliance has 58% more capacity than its predecessor, allowing for a 63TB full backup. Up to thirty-two (32) EX63000E appliances can be combined in a single scale-out system to allow for a 2PB full backup. Per ExaGrid’s published performance benchmarks, each EX63000E appliance has a maximum ingest rate of 13.5TB/hr, enabling thirty-two (32) EX63000Es combined in a single system to achieve a maximum ingest rate of 432TB/hr.
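Those aggregate numbers follow directly from the per-appliance figures. A quick back-of-the-envelope check (using only the published 63TB full backup and 13.5TB/hr ingest numbers; the rounding of 2,016TB down to roughly 2PB is ExaGrid’s):

```python
appliances = 32
full_backup_tb_per_appliance = 63       # published full-backup capacity of one EX63000E
ingest_tb_per_hr_per_appliance = 13.5   # published maximum ingest rate of one EX63000E

total_full_backup_tb = appliances * full_backup_tb_per_appliance       # 2,016 TB, marketed as ~2PB
total_ingest_tb_per_hr = appliances * ingest_tb_per_hr_per_appliance   # 432 TB/hr

print(total_full_backup_tb, total_ingest_tb_per_hr)  # 2016 432.0
```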

Hyperconverged infrastructure solutions are in general poised to re-shape enterprise data center landscapes with solutions from Nutanix currently leading the way. As this data center transformation occurs, organizations need to make sure that the data protection solutions that they put in place offer both the same ease of management and scalability that the primary hyperconverged solution provides. Using Comtrade Software HYCU and ExaGrid, organizations get the affordable yet elegant data protection solution that they seek for this next generation data center architecture.




Differentiating between the Dell EMC Data Domain and ExaGrid EX Systems

Deduplication backup target appliances remain a critical component of the data protection infrastructure for many enterprises. While storing protected data in the cloud may be fine for very small businesses or even as a final resting place for enterprise data, deduplication backup target appliances continue to function as their primary backup target and primary source for recovering data. It is for these reasons that enterprises frequently turn to deduplication backup target appliances from Dell EMC and ExaGrid to meet these specific needs, which are covered in a recent DCIG Pocket Analyst Report.

The Dell EMC Data Domain and ExaGrid families of deduplication backup target appliances appear on the short lists for many enterprises. While both these providers offer systems for small, midsize, and large organizations, the underlying architecture and features on the systems from these two providers make them better suited for specific use cases.

Their respective data center efficiency, deduplication, networking, recoverability, replication, and scalability features (to include recently announced enhancements) provide insight into the best use cases for the systems from these two vendors.

Purpose-built deduplication systems from both Dell EMC Data Domain and ExaGrid have widespread appeal as they expedite backups, increase backup and recovery success rates, and simplify existing backup environments. They offer appliances in various physical configurations to meet the specific backup needs of small, midsize, and large enterprises while providing virtual appliances that can run in private clouds, public clouds, or virtualized remote and branch offices.

Their systems significantly reduce backup data stores and offer concurrent backup and replication. They also limit the number of backup streams, display real-time deduplication ratios, and do capacity analysis and trending. Despite the similarities that the systems from these respective vendors share, six differences exist between them in their underlying features that impact their ability to deliver on key end-user expectations. These include:

  1. Data center efficiency to include how much power they use and the size of their data center footprint.
  2. Data reduction to include what deduplication options they offer and how they deliver them.
  3. Networking protocols to include connectivity for NAS and SAN environments.
  4. Recoverability to include how quickly, how easily, and where recoveries may be performed.
  5. Replication to include copying data offsite as well as protecting data in remote and branch offices.
  6. Scalability to include total amount of capacity as well as ease and simplicity of scaling.

DCIG is pleased to make a recent DCIG Pocket Analyst Report that compares these two families of deduplication backup target appliances available for a complimentary download for a limited time. This succinct, 4-page report includes a detailed product matrix as well as insight into these six differentiators between these two solutions and which one is best positioned to deliver on these six key data center considerations.

To access and download this report at no charge for a limited time, simply follow this link and complete the registration form on the landing page.




Data Center Efficiency, Performance, Scalability: How Dell EMC XtremIO, Pure Storage Flash Arrays Differ

Latest DCIG Pocket Analyst Report Compares Dell EMC XtremIO and Pure Storage All-flash Product Families

Hybrid and all-disk arrays still have their place in enterprise data centers but all-flash arrays are “where it’s at” when it comes to hosting and accelerating the performance of production applications. Once reserved only for applications that could cost-justify these arrays, continuing price erosion in the underlying flash media coupled with technologies such as compression and deduplication have put these arrays at a price point within reach of almost any size enterprise. As that occurs, flash arrays from Dell EMC XtremIO and Pure Storage are often on the buying short lists for many companies.

When looking at all-flash arrays, it is easy to fall into the trap of assuming they are all created equal. While it can be truthfully said that every all-flash array is faster and will outperform any of its all-disk or hybrid storage array predecessors, there can be significant differences in how effectively and efficiently each one delivers that performance.

Consider product families from leaders in the all-flash array market: Dell EMC XtremIO and Pure Storage. When you look at their published performance specifications, they both scale to offer hundreds of thousands of IOPS, achieve sub one millisecond response times, and offer capacity optimization features such as compression and deduplication.

It is only when you start to pull back the covers on these two respective product lines that substantial differences between them start to emerge such as:

  • Their data center efficiency in areas such as power consumption and data center footprint
  • How much flash capacity they can ultimately hold
  • What storage protocols they support

This recently published 4-page DCIG Pocket Analyst Report analyzes these attributes and others on all-flash arrays from these two providers. It examines how well their features support these key data center considerations and includes analyst commentary on which product has the edge in these specific areas. This report also contains a feature comparison matrix to support this analysis.

This report provides the key insight in a concise manner that enterprises need to make the right choice in an all-flash array solution for the rapidly emerging all-flash array data center. This report may be purchased for $19.95 at TechTrove, a new third-party site that hosts and makes independently developed analyst content available for sale.

All-flash data centers are coming, and with every all-flash array providing higher levels of performance than previous generations of storage arrays, enterprises need to examine key underlying features that go deeper than simply how fast they perform. Their underlying architecture, the storage protocols they support, and the software they use to deliver these features all impact how effective and efficient the array will be in your environment. This DCIG Pocket Analyst Report makes plain some of the key ways that the all-flash arrays from Dell EMC and Pure Storage differentiate themselves from one another. Follow this link to purchase this report.

Author’s Note: The link to the DCIG Pocket Analyst Report comparing the Dell EMC XtremIO and Pure Storage FlashArrays was updated and correct at 12:40 pm CT on 10/18/2017 to point to the correct page on the TechTrove website. Sorry for any confusion!




Deduplication Still Matters in Enterprise Clouds as Data Domain and ExaGrid Prove

Technology conversations within enterprises increasingly focus on the “data center stack” with an emphasis on cloud enablement. While I agree with this shift in thinking, one can too easily overlook the merits of underlying individual technologies when only considering the “Big Picture.” Such is happening with deduplication technology. A key enabler of enterprise archiving, data protection, and disaster recovery solutions, deduplication is delivered in different ways by vendors such as Dell EMC and ExaGrid, and as DCIG’s most recent 4-page Pocket Analyst Report reveals, those differences make each product family better suited for specific use cases.

It seemed for too many years enterprise data centers focused too much on the vendor name on the outside of the box as opposed to what was inside the box – the data and the applications. Granted, part of the reason for their focus on the vendor name is they wanted to demonstrate they had adopted and implemented the best available technologies to secure the data and make it highly available. Further, some of the emerging technologies necessary to deliver a cloud-like experience with the needed availability and performance characteristics did not yet exist, were not yet sufficiently mature, or were not available from the largest vendors.

That situation has changed dramatically. Now the focus is almost entirely on software that provides enterprises with cloud-like experiences that enable them to more easily and efficiently manage their applications and data. While this change is positive, enterprises should not lose sight of the technologies that make up their emerging data center stack, as those technologies are not all equipped to deliver these experiences in the same way.

A key example is deduplication. While this technology has existed for years and has become very mature and stable during that time, the ways in which enterprises can implement it and the benefits they will realize from it vary greatly. The deduplication solutions from Dell EMC Data Domain and ExaGrid illustrate these differences very well.

DCIG Pocket Analyst Report Compares Dell EMC Data Domain and ExaGrid Product Families

Deduplication systems from both Dell EMC Data Domain and ExaGrid have widespread appeal as they expedite backups, increase backup and recovery success rates, and simplify existing backup environments. They also both offer appliances in various physical configurations to meet the specific backup needs of small, midsize, and large enterprises while providing virtual appliances that can run in private clouds, public clouds, or virtualized remote and branch offices.

However, their respective systems also differ in key areas that will impact the overall effectiveness these systems will have in the emerging cloud data stacks that enterprises are putting in place. The six areas in which they differ include:

  1. Data center efficiency
  2. Deduplication methodology
  3. Networking protocols
  4. Recoverability
  5. Replication
  6. Scalability

The most recent 4-page DCIG Pocket Analyst Report analyzes these six attributes on the systems from these two providers of deduplication systems and compares their underlying features that deliver on these six attributes. Further, this report identifies which product family has the advantage in each area and provides a feature comparison matrix to support these claims.

This report provides the key insight in a concise manner that enterprises need to make the right choice in deduplication solutions for their emerging cloud data center stack. This report may be purchased for $19.95 at TechTrove, a new third-party site that hosts and makes independently developed analyst content available for sale.

Cloud-like data center stacks that provide application and data availability, mobility, and security are rapidly becoming a reality. But as enterprises adopt these new enterprise clouds, they ignore or overlook technologies such as deduplication at their own peril, as the underlying technologies they implement directly impact the overall efficiency and effectiveness of the cloud they are building.




Veritas Delivering on its 360 Data Management Strategy While Performing a 180

Vendors first started bandying about the phrase “cloud data management” a year or so ago. While that phrase caught my attention, specifics as to what one should expect when acquiring a “cloud data management” solution remained nebulous at best. Fast forward to this week’s Veritas Vision 2017, where I finally encountered a vendor providing meaningful details as to what cloud data management encompasses while simultaneously performing a 180 behind the scenes.

Ever since I heard the term cloud data management a year or so ago, I loved it. If there was ever a marketing phrase that captured the essence of how every end-user secretly wants to manage all of its data while the vendor or vendors promising to deliver it commit to absolutely nothing, this phrase nailed it. A vendor could shape and mold that definition however it wanted and know that end-users would listen to the pitch even if deep down the users knew it was marketing spin at its best.

Of course, Veritas promptly blew up these pre-conceived notions of mine this week at Vision 2017. While at the event, Veritas provided specifics about its cloud data management strategy that rang true if for no other reason than that they had a high degree of veracity to them. Sure, Veritas may refer to its current strategy as “360 Data Management.” But to my ears it sure sounded like someone had finally articulated, in a meaningful way, what cloud data management means and the way in which they could deliver on it.

Source: Veritas

The above graphic is the one that Veritas repeatedly rolls out when it discusses its 360 Data Management strategy. While Veritas is notable in that it is one of the few vendors that can articulate the particulars of its data management strategy, more importantly, that strategy has three components that currently make it more viable than many of its competitors’ strategies. Consider:

  1. Its existing product portfolio maps very neatly into its 360 Data Management strategy. One might argue (probably rightfully so) that Veritas derived its 360 Data Management strategy from the product portfolio it has built up over the years. However, many of these same critics have also contended that Veritas has been nothing but a company with an amalgamation of point products with no comprehensive vision. Well, guess what, the world changed over the past 12-24 months and it bent decidedly in the direction of software. Give Veritas some credit. It astutely recognized this shift, saw that its portfolio aligned damn well with how enterprises want to manage their data going forward, and had the chutzpah to craft a vision that it could deliver based upon the products it had in-house.
  2. It is not resting on its laurels. Last year when Veritas first announced its 360 Data Management strategy, I admit, I inwardly groaned a bit. In its first release, all it did was essentially mine the data in its own NetBackup catalogs. Hello, McFly! Veritas is only now thinking of this? To its credit, this past week it expanded the list of products that its Information Map connectors can access to over 20. These include Microsoft Exchange, Microsoft SharePoint, and Google Cloud among others. Again, I must applaud Veritas for its efforts on this front. While this news may not be momentous or earth-shattering, it visibly reflects a commitment to delivering on and expanding the viability of its 360 Data Management strategy beyond just NetBackup catalogs.
  3. The cloud plays very well in this strategy. Veritas knows that it plays in the enterprise space, and it also knows that enterprises want to go to the cloud. While nowhere in its vision image above does it overtly say “cloud,” guess what? It doesn’t have to. It screams, “Cloud!” This is why many of its announcements at Veritas Vision around its CloudMobility, Information Map, NetBackup Catalyst, and other products talk about efficiently moving data to and from the cloud and then monitoring and managing it whether it resides on-premises, in the cloud, or both.

One other change it has made internally (and this is where the 180 initially comes in) is how it communicates this vision. When Veritas was part of Symantec, it stopped sharing its roadmap with current and prospective customers. In this area, Veritas has made a 180: customers who ask and sign a non-disclosure agreement (NDA) with Veritas can gain access to this roadmap.

Veritas may communicate that the only 180 turn it has made in the last 18 months or so since it was spun out of Symantec is its new freedom to communicate its roadmap to current and/or prospective customers. While that may be true, the real 180 it has made entails successfully putting together a cohesive vision that articulates the value of the products in its portfolio in a context that enterprises are desperate to hear. Equally impressive, Veritas’ software-first focus better positions it than its competitors to enable enterprises to realize this ideal.

 




Exercise Caution Before Making Any Assumptions about Cloud Data Protection Products

There are two assumptions about which IT professionals need to exercise caution when evaluating cloud data protection products. One is to assume all products share some feature or features in common. The other is to assume that one product possesses some feature or characteristic that no other product on the market offers. As DCIG reviews its recent research into cloud data protection products, one cannot make either of these assumptions, even for features such as deduplication, encryption, and replication that one might expect to be universally adopted by these products in comparable ways.

The feature that best illustrates this point is deduplication. One would almost surely think that after the emphasis put on deduplication over the past decade, every product would now support it. That conclusion would be true. But how each product implements deduplication can vary greatly. For example (a brief sketch after this list illustrates the file-level versus block-level distinction):

  1. Block-level deduplication is still not universally adopted by all products. A few products still only deduplicate at the file level.
  2. In-line deduplication is also not universally available on all products. Further, post-process deduplication is becoming more readily available as organizations want to do more with their copies of data after they back it up.
  3. Only about 2 in 5 products offer the flexibility to recognize data in backup streams and apply the most appropriate deduplication algorithm.
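The file-level versus block-level distinction in the first point comes down to the granularity at which duplicates are detected. The sketch below is a minimal Python illustration of that difference only; the 128KB block size and SHA-256 fingerprints are assumptions for the example and do not reflect how any evaluated product is implemented.

```python
import hashlib, os

def file_level_dedup(files):
    """Store one copy per unique whole-file hash; return the bytes actually stored."""
    stored = {}
    for data in files.values():
        stored.setdefault(hashlib.sha256(data).hexdigest(), len(data))
    return sum(stored.values())

def block_level_dedup(files, block=128 * 1024):
    """Store one copy per unique fixed-size block; catches duplicates across similar files."""
    stored = {}
    for data in files.values():
        for i in range(0, len(data), block):
            chunk = data[i:i + block]
            stored.setdefault(hashlib.sha256(chunk).hexdigest(), len(chunk))
    return sum(stored.values())

# Two backups of the same 1 MB file in which only the last byte changed:
v1 = os.urandom(1024 * 1024)
v2 = v1[:-1] + bytes([v1[-1] ^ 1])                  # flip one bit in the last byte
print(file_level_dedup({"f.v1": v1, "f.v2": v2}))   # ~2 MB stored: the whole files differ
print(block_level_dedup({"f.v1": v1, "f.v2": v2}))  # ~1.1 MB stored: only the changed block is new
```

Because block-level deduplication detects the unchanged blocks shared by the two versions, it stores far less data than file-level deduplication, which treats the two slightly different files as entirely distinct.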

Source: DCIG; 175 products

Deduplication is not the only feature that differs between these products. As organizations look to centralize data protection in their infrastructure and then keep a copy of data offsite with cloud providers, features such as encryption and replication have taken on greater importance in these products and are more readily available than ever before. However, here again one cannot assume that all cloud data protection products support each of these features.

On the replication side, DCIG found this feature to be universally supported across the products it evaluated. Further, these products all offer the option for organizations to schedule replication to occur at certain times (every five minutes, on the hour, etc.).

However, when organizations get beyond this baseline level of replication, differences again immediately appear. For instance, just over 75 percent of the products perform continuous data replication (replicating data immediately after the write occurs at the primary site) while less than 20 percent support synchronous replication.

All organizations need to pay attention to the fan-in and fan-out options that these products provide. While all support 1:1 replication configurations, only 75 percent of the products support fan-in replication (N:1) and only 71 percent support fan-out replication (1:N). The number of products that support replication across multiple hops drops even further, down to less than 40 percent.
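For those less familiar with the terminology, fan-in, fan-out, and multi-hop simply describe the shape of the replication topology. The sketch below models each shape with hypothetical site names and a rough classifier; it is an illustration of the concepts, not how any of the evaluated products represent their replication configurations.

```python
# Hypothetical replication topologies expressed as {source_site: [target_sites]} maps.
fan_in   = {"branch-a": ["hq"], "branch-b": ["hq"], "branch-c": ["hq"]}   # N:1
fan_out  = {"hq": ["dr-east", "dr-west", "cloud"]}                        # 1:N
multihop = {"branch-a": ["regional-dc"], "regional-dc": ["cloud"]}        # cascaded hops

def classify(topology):
    """Roughly classify a replication topology; illustration only."""
    sources = set(topology)
    targets = {t for target_list in topology.values() for t in target_list}
    if sources & targets:
        return "multi-hop"                     # some site both receives and forwards replicas
    if len(sources) > 1 and len(targets) == 1:
        return "fan-in (N:1)"
    if len(sources) == 1 and len(targets) > 1:
        return "fan-out (1:N)"
    return "1:1"

for topology in (fan_in, fan_out, multihop):
    print(classify(topology))
```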

Source: DCIG; 176 products

Encryption is another feature that has become widely used in recent years as organizations have sought to centralize backup storage in their data centers as well as store data with cloud providers. In support of these initiatives, over 95 percent of the products support AES-256 encryption for data at rest while nearly 80 percent of them support this level of encryption for data in flight.
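For readers who want to see what AES-256 protection of a backup object looks like in practice, the sketch below encrypts a backup blob with AES-256-GCM using the widely available Python cryptography package before it would be written to disk or shipped to a cloud provider. This is a generic illustration rather than any evaluated product’s implementation, and in-flight protection in these products is typically provided by TLS rather than by application code like this.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

def encrypt_backup(plaintext: bytes, key: bytes) -> bytes:
    """Encrypt a backup blob with AES-256-GCM; the random nonce is prepended to the ciphertext."""
    nonce = os.urandom(12)                    # a unique nonce per object is required for GCM
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt_backup(blob: bytes, key: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

key = AESGCM.generate_key(bit_length=256)     # 256-bit key held by the organization, not the cloud
protected = encrypt_backup(b"backup image bytes", key)
assert decrypt_backup(protected, key) == b"backup image bytes"
```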

Deduplication, encryption, and replication are features that organizations of almost any size almost universally expect to find on any cloud data protection product they are considering for their environment. Further, as DCIG’s research into these products reveals, nearly all of them support these features in some capacity. However, they certainly do not give organizations the same number of options to deploy and leverage these features, and it is these differences in the breadth of feature functionality that organizations need to be keenly aware of as they make their buying decisions.




BackupAssist 10.0 Brings Welcomed Flexibility for Cloud Backup to Windows Shops

Today’s backup mantra seems to be backup to the cloud or bust! But backup to the cloud is more than just redirecting backup streams from a local file share to a file share presented by a cloud storage provider and clicking the “Start” button. Organizations must examine to which cloud storage providers they can send their data as well as how their backup software packages and sends the data to the cloud. BackupAssist 10.0 answers many of these tough questions about cloud data protection that businesses face while providing them some welcomed flexibility in their choice of cloud storage providers.

Recently I was introduced to BackupAssist, a backup software company that hails from Australia, and had the opportunity to speak with its founder and CEO, Linus Chang, about BackupAssist’s 10.0 release. The big news in this release was BackupAssist’s introduction of cloud-independent backup, which gives organizations the freedom to choose any cloud storage provider to securely store their Windows backup data.

The flexibility to choose from multiple cloud storage providers as a target when doing backup in today’s IT environment has become almost a prerequisite. Organizations increasingly want the ability to choose between one or more cloud storage providers for cost and redundancy reasons.

Further, availability, performance, reliability, and support can vary widely by cloud storage provider. These features may even vary by the region of the country in which an organization resides as large cloud storage providers usually have multiple data centers located in different regions of the country and world. This can result in organizations having very different types of backup and recovery experiences depending upon which cloud storage provider they use and the data center to which they send their data.

These factors and others make it imperative that today’s backup software give organizations more freedom of choice in cloud storage providers, which is exactly what BackupAssist 10.0 provides. By giving organizations the freedom to choose from Amazon S3 and Microsoft Azure among others, they can select the “best” cloud storage provider for them. However, since the factors that constitute the “best” cloud storage provider can and probably will change over time, BackupAssist 10.0 gives organizations the flexibility to adapt to any changes in conditions as the situation warrants.

Source: BackupAssist

To ensure organizations experience success when they back up to the cloud, BackupAssist has also introduced three other cloud-specific features, which include:

  1. Compresses and deduplicates data. Capacity usage and network bandwidth consumption are the two primary factors that drive up cloud storage costs. By introducing compression and deduplication into this release, BackupAssist 10.0 helps organizations better keep these variable costs associated with using cloud storage under control.
  2. Insulated encryption. Every so often stories leak out about how government agencies subpoena cloud providers and ask for the data of their clients. Using this feature, organizations can fully encrypt their backup data to make it inaccessible to anyone who lacks the encryption key.
  3. Resilient transfers. Nothing is worse than having a backup two-thirds or three-quarters complete only to have a hiccup in the network connection or on the server itself interrupt the backup and force one to restart it from the beginning. Minimally, this is annoying and disruptive to business operations. Over time, restarting backup jobs and resending the same backup data to the cloud can run up networking and storage costs. BackupAssist 10.0 ensures that if a backup job gets interrupted, it can resume from the point where it stopped while sending only the amount of data required to complete the backup. (A conceptual sketch of this resume-from-checkpoint pattern follows this list.)
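Resilient transfer of this kind generally amounts to uploading in chunks and recording which chunks have already landed, so an interrupted job resumes at the last confirmed chunk instead of starting over. The sketch below illustrates the general pattern only; the 8MB chunk size, the upload_chunk callable, and the local JSON checkpoint file are hypothetical and are not BackupAssist’s actual mechanism.

```python
import json, os

CHUNK = 8 * 1024 * 1024  # assume 8 MB upload chunks purely for illustration

def resumable_upload(path, upload_chunk, checkpoint):
    """Upload `path` chunk by chunk, recording progress so an interrupted job can resume."""
    done = 0
    if os.path.exists(checkpoint):
        with open(checkpoint) as f:
            done = json.load(f)["chunks_done"]         # resume point saved by the last attempt
    with open(path, "rb") as src:
        src.seek(done * CHUNK)
        index = done
        while chunk := src.read(CHUNK):
            upload_chunk(index, chunk)                  # hypothetical call that sends one chunk
            index += 1
            with open(checkpoint, "w") as f:
                json.dump({"chunks_done": index}, f)    # persist progress after every chunk
    if os.path.exists(checkpoint):
        os.remove(checkpoint)                           # a completed upload needs no checkpoint
```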

In its 10.0 release, BackupAssist makes needed enhancements to ensure it remains a viable, cost-effective backup solution for businesses wishing to protect their applications running on Windows Server. While these businesses should keep some copies of data on local disk for faster backups and recoveries, the value of efficiently and cost-effectively keeping copies of their data offsite with cloud storage providers cannot be ignored. The 10.0 version of BackupAssist gives them the versatility to store data locally, in the cloud, or both with new flexibility to choose a cloud storage provider at any time that most closely aligns with their business and technical requirements.




Deduplicate Differently with Leading Enterprise Midrange All-flash Arrays

If you assume that leading enterprise midrange all-flash arrays (AFAs) support deduplication, your assumption would be correct. But if you assume that these arrays implement and deliver deduplication’s features in the same way, you would be mistaken. These differences in deduplication should influence any all-flash array buying decision as deduplication’s implementation affects the array’s total effective capacity, performance, usability, and, ultimately, your bottom line.

The availability of deduplication technology on all leading enterprise midrange AFAs comes as a relief to many organizations. The raw price per GB of AFAs often precludes organizations from deploying them widely in their environments. However, deduplication’s presence enables organizations to deploy AFAs more broadly since it may increase an AFA’s total effective capacity by 2-3x over its total usable capacity.

The operative word in that previous sentence is “may.” Selecting an enterprise midrange all-flash array model from Dell EMC, HDS, HPE, Huawei, or NetApp only guarantees that you will get an array that supports deduplication. One should not automatically assume that any of these vendors will deliver it in the way that your organization can best capitalize on it.

For instance, if you only want to do post-process deduplication, a model from only one of those five vendors listed above supports that option. If you want deduplication included when you buy the array and not have to license it separately, only three of the vendors support that option. If you want to do inline deduplication of production data, then only two of those vendors support that option.

Deduplication on all-flash arrays is highly desirable as it helps drive the price point of flash down to the point where organizations can look to cost-effectively use it more widely in production. However, deduplication only makes sense if the vendor delivers deduplication in a manner that matches the needs of your organization.
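To see how deduplication drives down the effective price of flash, consider the quick arithmetic below, which applies the 2-3x range cited earlier to an assumed 20TB of usable flash (a figure chosen purely for illustration). The caveat remains the word “may”: the effective capacity, and therefore the effective cost per GB, depends entirely on the reduction ratio your data actually achieves.

```python
usable_tb = 20                       # assumed usable flash capacity, chosen purely for illustration
for reduction_ratio in (2.0, 3.0):   # the 2-3x range cited above
    effective_tb = usable_tb * reduction_ratio
    print(f"{reduction_ratio:.0f}x reduction -> {effective_tb:.0f} TB effective capacity, "
          f"so each effective GB costs {1 / reduction_ratio:.2f}x the raw $/GB")
```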

To get a glimpse into how these five vendors deduplicate data differently, check out this short, two-page report* from DCIG that examines 14 deduplication features on five different products. This concise, easy-to-understand report provides you with an at-a-glance snapshot of which products support the key deduplication features that organizations need to make the right all-flash array buying decision.

Access to this report* is available through the DCIG Competitive Intelligence Portal and is limited to subscribers to it. However, Unitrends is currently providing complimentary access to the DCIG Competitive Intelligence Portal for end-users. Once registered, individuals may download this report as well as the latest DCIG All-flash Array Buyer’s Guide.

If you are not already a subscriber, register now to get this report for free and obtain the information you need to get a better grasp on more than whether these arrays deduplicate data. Rather, learn how they do it differently!

* This report is only available for a limited time to subscribers of the DCIG Competitive Intelligence (CI) Portal. Individuals who work for manufacturers, resellers, or vendors must pay to subscribe to the DCIG CI Portal. All information accessed and reports downloaded from the DCIG CI Portal are for individual, confidential use and may not be publicly disseminated.




Difficult to Find any Sparks of Interest or Innovation in HDDs Anymore

In early November DCIG finalized its research into all-flash arrays and, in the coming weeks and months, will be announcing its rankings in its various Buyer’s Guide Editions as well as in its new All-flash Array Product Ranking Bulletins. As DCIG prepares to release its all-flash array rankings, we also find ourselves remarking on just how quickly interest in HDD-based arrays has declined this year alone. While we are not ready to declare HDDs dead by any stretch, finding any sparks that represent interest or innovation in hard disk drives (HDDs) is getting increasingly difficult.


The rapid decline of interest in HDDs over the last 18 months, and certainly the last six months, is stunning. When flash first started gaining market acceptance in enterprise storage arrays around 2010, there was certainly speculation that flash could replace HDDs. But the disparity in price per GB between disk and flash was great at the time and forecast to remain that way for many years. As such, I saw no viable path for flash to replace disk in the near term.

Fast forward to late 2016, and flash’s drop in price per GB coupled with the introduction of technologies such as compression and deduplication in enterprise storage arrays has brought its price down to where it now approaches HDDs. Then factor in the reduced power and cooling costs, flash’s increased life span (5 years or longer in many cases), the improved performance, and intangibles such as the elimination of noise in data centers, and suddenly the feasibility of all-flash data centers does not seem so far-fetched.

Some vendors are even working behind the scenes to make the case for flash even more compelling. They plan to eliminate the upfront capital costs associated with deploying flash and are instead working on flash deployments that charge monthly based on how much capacity your organization uses.

Recent statistics support this rapid adoption. Trendfocus announced that it found a 101% quarter-over-quarter increase in the number of enterprise PCIe units shipped, total capacity for all shipped SSDs approaching 14 exabytes, and the total number of SATA and SAS SSDs shipped topping 4 million units. Couple those numbers with CEOs from providers such as Kaminario (link) and Nimbus Data (link) publicly saying that list prices for their all-flash units have dropped below the $1/GB price point, and it is no wonder that flash is dousing any sparks of interest that companies have in buying HDDs or that vendors have in innovating in HDD technology.

Is DCIG declaring disk dead? Absolutely not. In talking with providers of integrated and hybrid cloud backup appliances, deduplicating backup appliances, and archiving appliances, they still cannot justify replacing HDDs with flash. Or at least not yet.

One backup appliance provider tells me his company watches the prices of flash like a hawk and re-evaluates the price of flash versus HDDs about every six months to see if it makes sense to replace HDDs with flash. The threshold that makes it compelling for his company to use flash in lieu of HDDs has not yet been crossed and may still be some time away.

While flash has certainly dropped in price even as it simultaneously increases in capacity, companies should not expect to store their archive and backup data on flash in the next few years. The recently announced Samsung 15.36TB SSD, available for around $10,000, is ample proof of that. Despite its huge capacity, it still costs around 65 cents/GB, compared to roughly a nickel per GB for 8TB HDDs, which makes the HDD less than one-tenth the cost of flash.
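The math behind those per-GB figures is straightforward. The snippet below uses the cited $10,000 price for the 15.36TB SSD; the roughly $400 street price assumed for an 8TB HDD is simply the nickel-per-GB figure cited above worked backward, not a quoted price.

```python
ssd_price_usd, ssd_capacity_gb = 10_000, 15.36 * 1000   # Samsung 15.36 TB SSD at roughly $10,000
hdd_price_usd, hdd_capacity_gb = 400, 8 * 1000          # assumed ~$400 8 TB HDD (~a nickel per GB)

ssd_per_gb = ssd_price_usd / ssd_capacity_gb   # ~$0.65/GB
hdd_per_gb = hdd_price_usd / hdd_capacity_gb   # ~$0.05/GB
print(round(ssd_per_gb, 2), round(hdd_per_gb, 2), round(hdd_per_gb / ssd_per_gb, 2))
# 0.65 0.05 0.08 -> the HDD costs well under a tenth as much per GB as the SSD
```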

That said, circle the year 2020 as a potential tipping point. That year, Samsung anticipates releasing a 100TB flash drive. If that flash drive stays at the same $10,000 price point, it will put flash within striking range of HDDs on a price-per-GB basis, or make its cost per GB so low that most shops will no longer care about the slight price differential between HDDs and flash. That price point, coupled with flash’s lower operating costs and longer life, may finally put out whatever sparks of interest or innovation are left in HDDs.




DCIG 2016-17 Hybrid Cloud Backup Appliance Buyer’s Guide Now Available

DCIG is pleased to announce the availability of the 2016-17 Hybrid Cloud Backup Appliance Buyer’s Guide developed from the backup appliance body of research. Other Buyer’s Guides based on this body of research include the recent DCIG 2016-17 Deduplicating Backup Appliance Buyer’s Guide and the forthcoming 2016-17 Integrated Backup Appliance Buyer’s Guide.

As core business processes become digitized, the ability to keep services online and to rapidly recover from any service interruption becomes a critical need. Given the growth and maturation of cloud services, many organizations are exploring the advantages of storing application data with cloud providers and even recovering applications in the cloud.

Hybrid cloud backup appliances (HCBA) are deduplicating backup appliances that include pre-integrated data protection software and integration with at least one cloud-based storage provider. An HCBA’s ability to replicate backups to the cloud supports disaster recovery needs and provides essentially infinite storage capacity.

The DCIG 2016-17 Hybrid Cloud Backup Appliance Buyer’s Guide weights, scores, and ranks more than 100 features of twenty-three (23) products from six (6) different providers. Using ranking categories of Recommended, Excellent, and Good, this Buyer’s Guide offers much of the information an organization needs to make a highly informed decision as to which hybrid cloud backup appliance will suit its needs.

Each backup appliance included in the DCIG 2016-17 Hybrid Cloud Backup Appliance Buyer’s Guide meets the following criteria:

  • Be available as a physical appliance
  • May also ship as a virtual appliance
  • Includes backup and recovery software that enables seamless integration into an existing infrastructure
  • Stores backup data on the appliance via on-premises DAS, NAS, or SAN-attached storage
  • Enables connectivity with at least one cloud-based storage provider for remote backups and long-term retention of backups in a secure/encrypted fashion
  • Provides the ability to connect the cloud-based backup images on more than one geographically dispersed appliance
  • Be formally announced or generally available for purchase on July 1, 2016

It is within this context that DCIG introduces the DCIG 2016-17 Hybrid Cloud Backup Appliance Buyer’s Guide. DCIG’s succinct analysis provides insight into the state of the hybrid cloud backup appliance marketplace. The Buyer’s Guide identifies the specific benefits organizations can expect to achieve using a hybrid cloud backup appliance, and key features organizations should be aware of as they evaluate products. It also provides brief observations about the distinctive features of each product. Ranking tables enable organizations to get an “at-a-glance” overview of the products, while DCIG’s standardized one-page data sheets facilitate side-by-side comparisons, assisting organizations to quickly create a short list of products that may meet their requirements.

End users registering to access this report via the DCIG Analysis Portal also gain access to the DCIG Interactive Buyer’s Guide (IBG). The IBG enables organizations to take the next step in the product selection process by generating custom reports, including comprehensive side-by-side feature comparisons of the products in which the organization is most interested.

By using the DCIG Analysis Portal and applying the hybrid cloud backup appliance criteria to the backup appliance body of research, DCIG analysts were able to quickly create a short list of products that meet these requirements which was then, in turn, used to create this Buyer’s Guide Edition. DCIG plans to use this same process to create future Buyer’s Guide Editions that further examine the backup appliance marketplace.

Additional information about each buyer’s guide edition, including a download link, is available on the DCIG Buyer’s Guides page.




Data Visualization, Recovery, and Simplicity of Management Emerging as Differentiating Features on Integrated Backup Appliances

Enterprises now demand higher levels of automation, integration, simplicity, and scalability from every component deployed into their IT infrastructure, and the integrated backup appliances found in DCIG’s forthcoming Buyer’s Guide Editions covering integrated backup appliances are a clear output of those expectations. Intended for organizations that want to protect applications and data and keep them behind corporate firewalls, these backup appliances come fully equipped from both hardware and software perspectives to do so.

Once largely assembled and configured by either IT staff or value added resellers (VARs), integrated backup appliances have gone mainstream and are available for use in almost any size organization. By bundling together both hardware and software, large enterprises get the turnkey backup appliance solution that just a few years ago was primarily reserved for smaller organizations. In so doing, large enterprises eliminate the days, weeks, or even months they previously had to spend configuring and deploying these solutions into their infrastructure.

The evidence of the demand for backup appliances at all levels of the enterprise is made plain by the providers who bring them to market. Once the domain of providers such as STORServer and Unitrends, the backup appliance market now includes “software only” companies such as Commvault and Veritas, both of which have responded to the demand for turnkey backup appliance solutions by offering backup appliances under their own brand names.

Commvault Backup Appliance

Veritas NetBackup Appliance

In so doing, any size organization may get one of the most feature-rich enterprise backup software solutions on the market, whether IBM Tivoli Storage Manager (STORServer), Commvault (Commvault and STORServer), Unitrends, or Veritas NetBackup, delivered as a backup appliance. Yet while traditional all-software providers have entered the backup appliance market, behind the scenes new business demands are driving further changes in backup appliances that organizations should consider as they contemplate future backup appliance acquisitions.

  • First, organizations expect successful recoveries. A few years ago, the concept of all backup jobs completing successfully was enough to keep everyone happy and giving high-fives to one another. No more. Organizations recognize that they have reliable backups residing on a backup appliance and that these appliances largely sit idle during off-backup hours. This gives the enterprise some freedom to do more with these backup appliances during these periods, such as testing recoveries, recovering applications on the appliance itself, or even presenting these backup copies of data to other applications to use as sources for internal testing and development. DCIG found that a large number of backup appliances support one or more vCenter Instant Recovery features, and the emerging crop of backup appliances can also host virtual machines and recover applications on them.
  • Second, organizations want greater visibility into their data to justify business decisions. The amount of data residing in enterprise backup repositories is staggering. Yet the lack of value that organizations derive from that stored data, combined with the potential risk that retaining it presents, is equally staggering. Features that provide greater visibility into the metadata of these backups, analyze it, and help turn it into measurable value for the business are already starting to find their way onto these appliances. Expect these features to become more prevalent in the years to come.
  • Third, enterprises want backup appliances to expand their value proposition. Backup appliances are already easy to deploy, but maintaining and upgrading them or deploying them for other use cases gets more complicated over time. Emerging providers such as Cohesity, which makes its first appearance in DCIG Buyer’s Guides as an integrated backup appliance, directly address these concerns. Available as a scale-out backup appliance that can function as a hybrid cloud backup appliance, a deduplicating backup appliance, and/or an integrated backup appliance, Cohesity illustrates how an enterprise can more easily scale and maintain a single solution over time while retaining the flexibility to use it internally in multiple ways.
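To make the off-hours recovery testing described in the first bullet above more concrete, below is a minimal sketch of a scheduled recovery-verification job. It assumes a purely hypothetical appliance REST API; the host name, the /api/v1/recoveries endpoint, and the verify_only flag are illustrative inventions, not any vendor’s documented interface.

```python
# Hypothetical sketch: run verify-only test recoveries against a backup
# appliance during off-backup hours. The endpoint path and fields are
# illustrative assumptions, not any vendor's actual API.
import datetime
import requests

APPLIANCE = "https://backup-appliance.example.com"   # hypothetical host
OFF_HOURS = range(8, 18)  # backups run overnight, so test during the day

def run_test_recoveries(vm_names, api_token):
    """Kick off verify-only recoveries for a list of protected VMs."""
    if datetime.datetime.now().hour not in OFF_HOURS:
        return []  # stay out of the backup window
    results = []
    for vm in vm_names:
        resp = requests.post(
            f"{APPLIANCE}/api/v1/recoveries",           # hypothetical endpoint
            headers={"Authorization": f"Bearer {api_token}"},
            json={"vm": vm, "verify_only": True},       # hypothetical fields
            timeout=30,
        )
        results.append((vm, resp.status_code == 200))
    return results

if __name__ == "__main__":
    for vm, ok in run_test_recoveries(["exchange01", "sql02"], "TOKEN"):
        print(f"{vm}: {'recovery verified' if ok else 'verification failed'}")
```

The point of the sketch is simply that idle daytime cycles on a backup appliance can be put to work proving that last night’s backups actually restore.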

The forthcoming DCIG 2016-17 Integrated Backup Appliance Buyer’s Guide Editions highlight the most robust and feature-rich integrated backup appliances available on the market today. As such, organizations should consider the backup appliances covered in these Buyer’s Guides as having many of the features they need to protect both their physical and virtual environments. Further, a number of these appliances give them early access to the features that will position them to meet their next set of recovery challenges, satisfy rising expectations for visibility into their corporate data, and simplify ongoing management so they may derive additional value from their backup data.




SaaS Provider Pulls Back the Curtain on its Backup Experience with Cohesity; Interview with System Architect, Fidel Michieli, Part 3

Usually when I talk to backup and system administrators, they willingly talk about how great a product installation was. But it then becomes almost impossible to find anyone who wants to comment about what life is like after their backup appliance is installed. This blog entry represents a bit of an anomaly in that someone willingly pulled back the curtain on that experience. In this third installment of my interview series, system architect Fidel Michieli describes how the implementation of Cohesity went in his environment and how Cohesity responded to issues that arose.

Jerome:  Once you had Cohesity deployed in your environment, can you provide some insights into how it operated and how upgrades went?

Fidel:  We have been through the upgrade process and the process of adding nodes twice. Those were the scary milestones that we did not test during the proof of concept (POC). Well, we did cover the upgrade process, but we did not cover adding nodes.

Jerome:  How did those upgrades go? Seamlessly?

Fidel:  The fact that our backup windows are small and we can run during the night essentially leaves all of our backup infrastructure idle during the day. If we take down one node at a time, we barely notice because we do not have anything running. But as a software company ourselves, we expected there to be a few bumps along the way, and we did encounter some.

Jerome:  Can you describe a bit about the “bumps” that you encountered?

Fidel:  We filled up the Cohesity cluster much faster than we expected, which sent its metadata sprawling. We went to 90-92 percent full very quickly, so we had to add nodes in order to get back the capacity being taken up by the metadata.

Jerome:  Do you control how much metadata the Cohesity cluster creates?

Fidel:  The metadata size is associated with the amount of deduplicated data it holds. As that grew, we started seeing some services restart and we got alerts of services restarting.

Jerome:  You corrected the out of capacity condition by adding more nodes?

Fidel:   Temporarily, yes.  Cohesity recognized we were not in a stable state and they did not want us to have a problem so they shipped us eight more nodes for us to create a new cluster.  [Editor’s Note:  Cohesity subsequently issued a new software release to store dedupe metadata more efficiently, which has since been implemented at this SaaS provider’s site.]

Jerome:  That means a lot that Cohesity stepped up to the plate to support its product.

Fidel:   It did. But while it was great that they shipped us the new cluster, I did not have the additional Ethernet ports in our infrastructure to connect these new nodes. To resolve this, Cohesity agreed to ship us the networking gear we needed. It talked to my network architect, found out what networking gear we liked, agreed to buy it, and then shipped the gear to us overnight.

Further, my Cohesity system engineer calls me every time I open a support ticket and shows up here. He replies and makes sure that my ticket moves through the support queue. He came down to install the original Cohesity cluster and the upgrades to the cluster, which we have been through twice already. The support experience has been fantastic, and Cohesity has taken all of my requests into consideration as it has released software upgrades to its product, which is great.

Jerome:  Can you share one of your requests that Cohesity has implemented into its software?

Fidel:  We needed to have connectivity to Iron Mountain’s cloud. Cohesity got that certified with Iron Mountain so it works in a turnkey fashion. We also needed support for SQL Server, which Cohesity put into its road map at the time and has since delivered. We also needed Cohesity to certify support for Exchange 2016, which it expedited, so that is now certified as well.

In part 1 of this interview series, Fidel shares the challenges that his company faced with its existing backup configuration as well as the struggles it encountered in identifying a backup solution that scaled to meet a dynamically changing and growing environment.

In part 2 of this interview series Fidel shares how he gained a comfort level with Cohesity prior to rolling it out enterprise-wide in his organization.

In part 4 of this interview series Fidel shares how Cohesity functions as both an integrated backup software appliance and a deduplicating target backup appliance in his company’s environment.




SaaS Provider Decides to Roll Out Cohesity for Backup and DR; Interview with System Architect, Fidel Michieli, Part 2

Evaluating product features, comparing prices, and doing proofs of concept are important steps in the process of adopting almost any new product. But once one completes those steps, the time arrives to roll the product out and implement it. In this second installment of my interview series with system architect Fidel Michieli, he shares how his company gained a comfort level with Cohesity for backup and disaster recovery (DR) and how broadly it decided to deploy the product in its primary and secondary data centers.

Jerome: How did you come to gain a comfort level for introducing Cohesity into your production environment?

Fidel: We first did a proof of concept (POC).  We liked what we saw about Cohesity but we had a set of target criteria based on the tests we had previously run using our existing backup software and the virtual machine backup software. As such, we had a matrix of what numbers were good and what numbers were bad. Cohesity’s numbers just blew them out of the water.

Jerome:  How much faster was Cohesity than the other solutions you had tested?

Fidel: Probably 250 percent or more. Cohesity does a metadata snapshot where it essentially uses VMware’s technology, but the way that it ingests the data and the amount of compute that it has available to do the backups creates the difference, if that makes sense. We really liked the performance for both backups and restores.

We had two requirements. On the Exchange side we needed to do granular message restores. Cohesity was able to help us achieve that objective by using an external tool that it licensed and which works. Our second objective was to get out of the tape business. We wanted to go to cloud. Unfortunately for us we are constrained to a single vendor. So we needed to work with that vendor.

Jerome: You mean single cloud vendor?

Fidel: Well it’s a tape vendor, Iron Mountain. We are constrained to them by contract. If we were going to shift to the cloud, it had to be to Iron Mountain’s cloud. But Cohesity, during the POC level, got the data to Iron Mountain.

Jerome: How many VMs?

Fidel: We probably have around 1,400 in our main data center and about 120 hosts. We have a two-site disaster recovery (DR) strategy with a primary and a backup. Obviously it was important to have replication for DR. That was part of the plan even before considering the 3-2-1 rule of backup. We wanted to cover that.

Jerome: So you have Cohesity at both your production and DR sites replicating between them?

Fidel: Correct.

Jerome: How many Cohesity nodes at each site?

Fidel: We have eight nodes at each site. After the POC we started to recognize a lot of the efficiencies from a management perspective. We knew that object storage was the way we wanted to go, the obvious reason being the metadata.

What the metadata means to us is that we can have a lot of efficiencies sit on top of our data. When you are analyzing or creating objects on your metadata, you can more efficiently manage your data. You can create objects that do compression, objects that do deduplication, objects that do analysis, and objects that hold policies. It’s more of a software-defined data approach, if you will. With that metadata and the object storage behind it, our maintenance windows and backup windows started getting lower and lower.
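To illustrate the general idea Fidel describes here, the following is a minimal sketch of backup objects whose metadata carries policy (compression, deduplication, retention, replication). The class and field names are illustrative assumptions only and do not reflect Cohesity’s actual object model.

```python
# Illustrative sketch only: backup objects whose metadata drives policy.
# These classes and fields are invented for this example and do not
# represent Cohesity's (or any vendor's) internal design.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Policy:
    compress: bool = True
    deduplicate: bool = True
    retention_days: int = 90
    replicate_to: Optional[str] = None   # e.g. a DR site or cloud target

@dataclass
class BackupObject:
    name: str
    size_bytes: int
    metadata: dict = field(default_factory=dict)
    policy: Policy = field(default_factory=Policy)

def objects_due_for_expiry(objects, age_in_days):
    """Decide what to expire from metadata alone -- no backup data is read."""
    return [o for o in objects if age_in_days(o) > o.policy.retention_days]

# Example: a 2 GB Exchange backup with a 30-day retention policy, now 45 days old
obj = BackupObject("exchange-2016-full", 2_000_000_000,
                   metadata={"application": "Exchange"},
                   policy=Policy(retention_days=30, replicate_to="dr-site"))
print(objects_due_for_expiry([obj], lambda o: 45))   # expired: 45 days > 30
```

The design point is the one Fidel makes: once policy and analysis hang off the metadata, management operations touch the metadata rather than the underlying data.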

In part 1 of this interview series, Fidel shares the challenges that his company faced with its existing backup configuration as well as the struggles it encountered in identifying a backup solution that scaled to meet a dynamically changing and growing environment.

In part 3 of this interview series, Fidel shares some of the challenges he encountered while rolling out Cohesity and the steps that Cohesity took to address them.

In the fourth and final installment of this interview series, Fidel describes how he leverages Cohesity’s backup appliance for both VM protection and as a deduplicating backup target for his NetBackup backup software.




DCIG 2016-17 Deduplicating Backup Appliance Buyer’s Guide Editions Now Available

DCIG is pleased to announce the availability of the following DCIG 2016-17 Deduplicating Backup Appliance Buyer’s Guide Editions developed from the backup appliance body of research. Other Buyer’s Guide Editions based on this body of research will be published in the coming weeks and months, including the 2016-17 Integrated Backup Appliance Buyer’s Guide and 2016-17 Hybrid Cloud Backup Appliance Buyer’s Guide Editions.

Buyer’s Guide Editions being released on September 20, 2016:

  • DCIG 2016-17 Sub-$100K Deduplicating Backup Appliance Buyer’s Guide
  • DCIG 2016-17 Sub-$75K Deduplicating Backup Appliance Buyer’s Guide
  • DCIG 2016-17 Sub-$50K Deduplicating Backup Appliance Buyer’s Guide
  • DCIG 2016-17 US Enterprise Deduplicating Backup Appliance Buyer’s Guide

To be included in a DCIG 2016-17 Deduplicating Backup Appliance Buyer’s Guide Edition, a product had to meet the following criteria:

  • Be intended for the deduplication of backup data, primarily target-based deduplication
  • Include a NAS (network attached storage) interface
  • Support the CIFS (Common Internet File System) or NFS (Network File System) protocols
  • Support a minimum of two (2) hard disk drives and/or a minimum raw capacity of eight terabytes
  • Be formally announced or generally available for purchase as of July 1, 2016

The various Deduplicating Backup Appliance Buyer’s Guide Editions are based on at least one additional criterion, whether list price (Sub-$100K, Sub-$75K and Sub-$50K) or being from a US-based provider.

By using the DCIG Analysis Portal to apply these criteria to its body of research into backup appliances, DCIG analysts quickly created a short list of products that meet these requirements, which was then used to create the Buyer’s Guide Editions for publication. DCIG plans to use this same process to create future Buyer’s Guide Editions that examine hybrid cloud and integrated backup appliances, among others.

End users registering to access any of these reports via the DCIG Analysis Portal also gain access to the DCIG Interactive Buyer’s Guide (IBG). The IBG enables organizations to take the next step in the product selection process by generating custom reports, including comprehensive side-by-side feature comparisons of the products in which the organization is most interested.

Additional information about each buyer’s guide edition, including a download link, is available on the DCIG Buyer’s Guides page.




Feature Consolidation on Backup Appliances Currently Under Way

Integrating backup software, cloud services support, deduplication, and virtualization into a single hardware appliance remains a moving target. Even as backup appliance providers merge these technologies onto their respective appliances, the methodologies they employ to do so can vary significantly. This becomes very apparent when one looks at the growing number of backup appliances on the market today and the various ways that they offer these features.


Some providers, such as Cohesity, provide options to configure their appliances to satisfy the demands of three different backup appliance roles. Their appliances may be configured as a target-based deduplication appliance, as an integrated backup appliance (offering both storage and backup software for data protection behind the firewall), or as a hybrid cloud backup appliance that backs up data locally and stores data with cloud services providers.

By offering options to configure their appliances this way, these providers open the door for their products to address multiple use cases over time. In a recent conversation with DCIG, Cohesity indicated that it often initially positions its product as a target deduplication appliance, a non-disruptive way to get a foothold in organizations, with the hope that they will eventually start to use its backup software as well.

Cohesity’s scale-out design also makes it an appealing alternative to competitors such as EMC Data Domain. By scaling out, organizations can eliminate the backup silos that result from deploying multiple instances of EMC Data Domain. Using Cohesity, organizations can instead create one central backup repository, making its solution a more scalable and easier-to-manage deduplicating backup target than EMC Data Domain.

Further, once Cohesity has a foothold, organizations can begin to test and use Cohesity’s backup software in lieu of their existing software. A number have already found that Cohesity’s software is sufficiently robust to meet the needs of their backup environment. This frees organizations to save even more money and further consolidate their backup infrastructure on a single solution.

Other providers also bundle deduplication with virtualization and connectivity to cloud services providers as part of their backup appliance offering in order to deliver instant and cloud recovery. One specific area in which these appliances differentiate themselves is their ability to deliver instant recoveries on the appliance and even with cloud services providers.

Many providers now make virtual machines (VMs) available on their backup appliances to host application recoveries, and some even make VMs available with cloud services providers. The VMs that reside locally on the backup appliance give organizations access to recoveries of applications such as Microsoft Exchange or SQL Server, or may be used for test and development. DCIG has found that appliances from Barracuda, Datto, Dell, and Unitrends all support these types of capabilities.

In evaluating these features across different backup appliances, DCIG finds that the Dell DL4300 Backup and Recovery Appliance sets itself apart from the others with its Virtual Standby feature, which includes fully licensed VMs from Microsoft. Its VMs run in the background in standby mode and receive continuous application data updates. In this way, they are ready for access and use at any time should they be called upon. This compares to the others, where VMs on the appliance take time to set up. While organizations may also want to bring up production-level applications on the VMs of other backup appliances, doing so takes more time and may require the intervention of backup administrators.

However, other providers also give organizations a means to access and recover their data and applications.

  • Using Barracuda, organizations can recover from a replicated site using a Local Control appliance and Local LiveBoot. Once accessed, administrators may recover to the local appliance using virtual machines.
  • Datto offers instant restore capabilities where VMs may be set up locally on the appliance for instant recovery. If the Datto appliance connects to the cloud, users also have the option to run VMs in the cloud, which gives organizations time to fix a local server outage while maintaining business continuity.
  • Unitrends lets users mount VMs for instant recovery on the appliance and in the cloud. Users that opt in to its Disaster Recovery Service gain access to up to five VMs, depending on the size of the appliance, or they may acquire VMs in the cloud if needed.

The consolidation of deduplication, virtualization, and cloud connectivity, coupled with new scale-out capabilities, gives organizations more reasons than ever to purchase a single appliance to protect their applications and data. Buying a single backup appliance not only provides a smart data protection plan but affords them new opportunities to introduce new technologies into their environment.

The means by which providers incorporate these new technologies into their backup appliances is one of many components to consider when selecting any of today’s backup appliances. However, cloud connectivity, instant recovery, feature consolidation, and scale-out capabilities are becoming the new set of criteria that organizations should examine on the latest generation of backup appliances. Look for the release of a number of DCIG Buyer’s Guide Editions on backup appliances in the weeks and months to come that provide the guidance and insight you need to make these all-important decisions about these products.




Small, Smaller and Smallest Have Become the New SLAs for Recovery Windows; Interview with Dell’s Michael Grant, Part 2

Small, smaller and smallest. Those three words pretty well describe the application and file recovery windows that organizations of all sizes must meet with growing regularity. The challenge is finding tools and solutions that enable them to satisfy these ever-shrinking recovery windows. In this second part of my interview series with Michael Grant, director of data protection product marketing for Dell’s systems and information management group, he elaborates upon how the latest features available in Dell’s data protection line enable organizations to meet the shrinking SLAs associated with these new recovery objectives.

Jerome: When organizations go to restore data or an application, restores can actually take longer to complete because the data is stored in a deduplicated state and they have to re-hydrate it. How does Dell Data Protection | Rapid Recovery manage to achieve these 15-minute to two-hour recovery windows?

Michael: We see prospective customers challenged with these lengthy recovery times as well. If you are moving data a long distance, particularly if you have deduplicated it, you have now added re-hydration and latency to the equation.
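A back-of-envelope calculation helps show why re-hydration and WAN transfer stretch restore times. All of the numbers below are assumptions chosen purely for illustration, not measurements of Rapid Recovery or any other product.

```python
# Back-of-envelope illustration of why restoring deduplicated data from a
# remote site takes longer than the logical data size alone suggests.
# Every number here is an assumption chosen for illustration.
def restore_hours(logical_tb, dedupe_ratio, wan_gbps, rehydrate_tb_per_hour):
    stored_tb = logical_tb / dedupe_ratio                 # deduplicated bytes on the wire
    transfer_h = (stored_tb * 8000) / (wan_gbps * 3600)   # TB -> gigabits, Gbps -> Gb/hour
    rehydrate_h = logical_tb / rehydrate_tb_per_hour      # rebuilding full copies at the target
    return transfer_h + rehydrate_h                       # assumes the two steps run serially

# 10 TB of logical data, 10:1 dedupe, a 1 Gbps WAN link, 2 TB/hour re-hydration
print(round(restore_hours(10, 10, 1.0, 2.0), 1), "hours")   # roughly 7.2 hours
```

Even with a strong deduplication ratio shrinking the transfer, the re-hydration step alone can dominate, which is why hybrid designs keep a full repository and a standby copy close to the workload.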

At the same time, their onsite server recovery service level agreements (SLAs) have gotten small. We have already seen a lot of mid-market customers turning to Rapid Recovery to deal with this challenge. What they are doing is building something of a hybrid environment. Now, long-term, they tell us in no uncertain terms that when they find the ways and the means to get all of their data protection off site, they would like to do that. Will they really do that? I don’t know. But that’s long term. In the short-term, they are focused on building these hybrid environments.


Source: Dell

When I say building a hybrid environment, typically that means they run a Rapid Recovery media server on site, and keep a full repository there. Then they replicate to public or private cloud. As part of what Rapid Recovery does, it spawns a hot standby virtual machine (VM), which is always running and available.

It updates as frequently as you take snapshots of your environment, and then replicates it automatically. For users, that means they can recover on site within literally minutes. They can recover offsite depending upon the latency. It is deduplicated throughout. But they can also access that media server directly.

In the event they have an outage where recovery time would be too onerous, they can access the media server directly running on the data that is minutes to no more than a half hour old, and work there while the IT team takes time to decide how they want to restore and where they want to restore. This is how we bridge the two, so that you get the data back, get the application back, and get the workforce up and running even though you may not have yet completed your entire restore.
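The following is a schematic sketch of the hybrid workflow Michael describes: snapshot the protected machine, replicate off site, and refresh a hot standby VM from each snapshot. The classes and method names are illustrative stand-ins, not Rapid Recovery’s actual components or API.

```python
# Schematic sketch of the hybrid workflow described above. The classes
# below are invented stand-ins for illustration, not Rapid Recovery's
# components or any real API.
import time

class LocalRepository:
    def take_snapshot(self, machine):
        return f"{machine}-snap-{int(time.time())}"

class CloudTarget:
    def replicate(self, snapshot):
        print(f"replicated {snapshot} off site (deduplicated)")

class StandbyVM:
    def apply_incremental(self, snapshot):
        print(f"standby VM refreshed from {snapshot}")

def protect(machine, repo, cloud, standby, cycles=3, interval_s=1):
    """One protection loop: snapshot -> replicate -> refresh the standby VM."""
    for _ in range(cycles):                  # a few cycles for demonstration
        snap = repo.take_snapshot(machine)
        cloud.replicate(snap)
        standby.apply_incremental(snap)
        time.sleep(interval_s)               # in practice, the snapshot schedule

protect("erp-server", LocalRepository(), CloudTarget(), StandbyVM())
```

Because the standby copy is refreshed on every snapshot cycle, the recovery point users fall back to is only ever minutes old, which is the behavior Michael describes.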

In the DR Series we see something similar. Re-hydration and line latency seem to take a bit of time. We are watching a lot of customers at this point put one or sometimes several DR appliances on site and then replicate between them. In that scenario, you are not dealing as much with the re-hydration and latency.

Right now, given current capacities, customers easily keep 90 to 120 days of data next to the end users and systems that need it, so they may restore at line speeds within their environment, versus having to get it from off site.

But if you talk about the future of data protection, the next big challenge I think all of us face is, “How do we effectively put stuff at a distance?” So think away from the CPU, away from the storage, away from the end users, but still in a way that we can retrieve it almost instantly. That will be the new challenge that DR-as-a-service, backup-as-a-service, anything-as-a-service, really, has to tackle and solve. It has to perform these tasks to the satisfaction of the admin whose job is on the line if that application does not get back online and running.

Right now, I see a growing interest from customers and partners in getting as much data off site as they can. We continue to work with MSPs and with other partners to take a look at how can we do this most effectively while offering the same very low service level agreements (SLAs). We want to do so without unnecessarily being bounded by legacy architectures that many people have inherited, are stuck with, and have to figure out a way to augment and manage.

Something that also ties into this is that we have put out a free edition of our Endpoint Recovery offering. As part of the digital transformation many of these businesses are going through, there’s a lot of IP that is getting created on the laptop while on the train ride to work or on the airplane to the next destination.


Source: Dell

Our Endpoint Recovery solution is designed with the same principles in mind as Rapid Recovery. It’s a snapshot based technology that can frequently snap your entire image. In today’s version, it’s user driven, so an end user is responsible for both backing up and then restoring his or her data. You can restore granularly or up to a full image if you choose.

Later this year, we are going to put a management interface on that. For firms that are interested in managing everything holistically, we will provide that type of interface. The free edition is available for folks to experiment with and also to give us feedback through our data collection process so that we can understand what they are doing there.

If you think about the endpoint all the way out to the cloud, it is not hard to see how we connect the dots. We have got to protect the endpoint. We have got to help you get out to the cloud. The only way to do that, frankly, is to do it in cooperation with our customers, having them tell us what they need rather than us telling them what they need. That’s the impetus behind both this release and other features we plan to release later this year.

In Part 1 of this series, Michael Grant summarizes some of the latest features available in Dell’s data protection line and why organizations are laser-focused on recovery like never before.

In Part 3 of this series, Michael Grant shares about some of the new development going on in the NetVault and vRanger product lines.




The Four (4) Behind-the-Scenes Forces that Drive Many of Today’s Technology Infrastructure Buying Decisions

It is almost a given in today’s world that for almost any organization to operate at peak efficiency and achieve optimal results, it has to acquire and use multiple forms of technology as part of its business processes. However, what is not always so clear is the set of forces at work both inside and outside of the business that drive its technology acquisitions. While by no means a complete list, here are four (4) forces that DCIG often sees at work behind the scenes that influence and drive many of today’s technology infrastructure buying decisions.

  1. Keep Everything (All Data). Many organizations start with the best of intentions when it comes to reining in data growth by deleting their aging or unwanted data. Then reality sets in as they consider the cost and time associated with managing this data in an optimal manner. At that point, they often find it easier, simpler, less risky, and more cost-effective to just keep the data.

New technologies heavily contribute to them arriving at this decision. Data compression and data deduplication minimize or eliminate redundant data, while ever higher capacity hard disk drives (HDDs) make it practical to store more data in the same data center footprint, with each technology amplifying the benefits of the other (a minimal, generic sketch of content-hash deduplication appears after this list). Further, with IT staffing levels staying flat or even dropping in many organizations, no one has the time to manage the data or wants to risk deleting data that is later deemed necessary.

  2. Virtualize Everything. An initial motivation for many organizations to virtualize applications in the data center was to reduce both capital and operational expenditures. While those reasons persist, organizations now recognize that virtualizing everything pays many other dividends as well. These include faster application recoveries; better access to copies of production data; the elimination of backup windows; and new opportunities for testing and developing existing and new applications.
  3. Instant Recovery. Almost all users expect continuous availability from all of their applications, regardless of the application’s tier within the organization. Further, instant recovery is now a realistic expectation on the part of most end users. By virtualizing applications, using data protection solutions that offer continuous data protection for applications residing on physical machines, or deploying clustering software, organizations can make applications that cannot be recovered in seconds or minutes the exception rather than the rule.
  4. Real-Time Analytics and Performance. As evidenced by the prior three points, organizations have more data at their fingertips than ever before and should be able to make better decisions in real time using that data. While this force is still in the early stages of becoming a reality, DCIG sees more evidence of it all the time, thanks in large part to the growing adoption of open source computing, the use of commodity or inexpensive hardware for mission-critical processing, and the growing availability of software that can leverage these resources to deliver on these new business requirements.
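Because deduplication and compression do so much of the heavy lifting behind the “keep everything” force above, a minimal, generic sketch of content-hash deduplication follows. It uses fixed-size blocks and SHA-256 purely for illustration; shipping products use variable-length chunking, compression, and far richer metadata.

```python
# Minimal, generic sketch of block-level deduplication: identical blocks are
# stored once and referenced by their content hash. Real products use variable
# block sizes, compression, and far more metadata; this only shows the idea.
import hashlib

BLOCK_SIZE = 4096  # fixed-size blocks, purely for illustration

def deduplicate(data: bytes, store: dict) -> list:
    """Split data into blocks, store each unique block once, return a recipe."""
    recipe = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)      # only new content consumes space
        recipe.append(digest)
    return recipe

def rehydrate(recipe: list, store: dict) -> bytes:
    """Reassemble the original data from its block recipe."""
    return b"".join(store[d] for d in recipe)

store = {}
original = b"same data " * 10000             # highly redundant sample data
recipe = deduplicate(original, store)
print(len(original), "bytes logical ->", sum(len(b) for b in store.values()), "bytes stored")
assert rehydrate(recipe, store) == original
```

Redundant data collapses into a handful of stored blocks while the recipe preserves the ability to rebuild every copy, which is precisely why keeping everything has become economically tolerable.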

Technology infrastructure buying decisions are never easy and always carry some risk, but if organizations are to remain efficient and competitive, not having the right technologies is not an option. Understanding these seen and unseen forces at work behind the scenes can help organizations better prioritize which technologies they should buy, as well as help quantify the business benefits they should expect to see after acquiring them and putting them in place.