Four Ways to Achieve Quick Wins in the Cloud

More companies than ever want to use the cloud as part of their overall IT strategy. To do so, they often look to achieve some quick wins in the cloud to demonstrate its value. Achieving these quick wins also gives them practical, hands-on experience in the cloud. Incorporating the cloud into your backup and disaster recovery (DR) processes may serve as the best way to get these wins.

Any company hoping to get some quick wins in the cloud should first define what a “win” looks like. For the purposes of this blog entry, a win consists of:

  • Fast, easy deployments of cloud resources
  • Minimal IT staff involvement
  • Improved application processes or workflows
  • The same or lower costs

Here are four ways for companies to achieve quick wins in the cloud through their backup and DR processes:

#1 – Take a Non-disruptive Approach

When possible, leverage your company’s existing backup infrastructure to store copies of data in the cloud. Nearly all enterprise backup products, whether backup software or deduplication backup appliances, interface with public clouds. These products can store backup data in the cloud without disrupting your existing environment.

Using these products, companies can get exposure to the public cloud’s core compute and storage services. These are the cloud services companies are most apt to use initially and represent the most mature of the public cloud offerings.

#2 – Deduplicate Backup Data Whenever Possible

Public cloud providers charge monthly for every GB of data that companies store in their respective clouds. The more data that your company stores in the cloud, the higher these charges become.

Deduplicating data reduces the amount of data that your company stores in the cloud. In so doing, it also helps to control and reduce your company’s monthly cloud storage costs.
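
To see the effect, consider a rough back-of-the-envelope calculation. The Python sketch below uses a hypothetical per-GB price and a hypothetical 10:1 deduplication ratio; neither figure comes from any specific provider or product.

```python
# Hypothetical monthly cloud storage cost, with and without deduplication.
# The price and deduplication ratio below are illustrative assumptions only.

PRICE_PER_GB_MONTH = 0.023   # assumed price per GB-month (USD)
BACKUP_DATA_GB = 50_000      # assumed logical backup data: 50 TB
DEDUP_RATIO = 10             # assumed 10:1 deduplication ratio

cost_raw = BACKUP_DATA_GB * PRICE_PER_GB_MONTH
cost_dedup = (BACKUP_DATA_GB / DEDUP_RATIO) * PRICE_PER_GB_MONTH

print(f"Without deduplication: ${cost_raw:,.2f}/month")
print(f"With 10:1 deduplication: ${cost_dedup:,.2f}/month")
# Without deduplication: $1,150.00/month
# With 10:1 deduplication: $115.00/month
```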

#3 – Tier Your Backup Data

Many public cloud storage providers offer multiple tiers of storage. The default storage tier, however, does not represent their most cost-effective option; it is designed for data that needs high levels of availability and moderate levels of performance.

Backup data tends to need these features only for the first 24 to 72 hours after it is backed up. After that, companies can often move it to lower-cost tiers of cloud storage. Note that these lower-cost tiers come with decreasing levels of availability and performance. While the vast majority of backups (over 99 percent) fall into this category, check whether any application recoveries have required data more than three days old before moving backups to lower tiers of storage.
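
How that movement gets automated varies by provider. As one hedged illustration, the sketch below assumes AWS S3 as the backup target and uses boto3 to add a lifecycle rule that transitions objects under a hypothetical backups/ prefix to an archive tier after three days; the bucket name, prefix, and day counts are illustrative assumptions only.

```python
# Sketch: automate tiering of backup data on AWS S3 with a lifecycle rule.
# The bucket name, prefix, and day thresholds are illustrative assumptions.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-backup-bucket",              # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-old-backups",
                "Status": "Enabled",
                "Filter": {"Prefix": "backups/"},  # hypothetical prefix
                # After 3 days, move backups to a lower-cost archive tier.
                "Transitions": [
                    {"Days": 3, "StorageClass": "GLACIER"}
                ],
                # Optionally expire backups once retention ends (e.g., 90 days).
                "Expiration": {"Days": 90},
            }
        ]
    },
)
```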

#4 – Actively Manage Your Cloud Backup Environment

Applications and data residing in the cloud differ from those in your production environment in one important way: every GB of data consumed and every hour that an application runs incurs a cost. This differs from on-premises environments, where existing hardware represents a sunk cost. As such, there is less incentive to actively manage on-premises hardware resources, since any resources recouped represent only “soft” savings.

This does not apply in the cloud. Proactively managing and conserving cloud resources translate into real savings. To realize these savings, companies need to look to products such as Quest Foglight. It helps them track where their backup data resides in the cloud and identify the application processes they have running. This, in turn, helps them manage and control their cloud costs.
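
Even without a purpose-built tool, a short script illustrates the kind of visibility involved. The sketch below assumes AWS S3 and boto3 and simply totals backup capacity by storage class under a hypothetical bucket and prefix; it illustrates the tracking concept and is not a substitute for a product such as Foglight.

```python
# Sketch: summarize how much backup data sits in each S3 storage class.
# Bucket and prefix are hypothetical; requires boto3 and AWS credentials.
from collections import defaultdict
import boto3

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

bytes_by_class = defaultdict(int)
for page in paginator.paginate(Bucket="example-backup-bucket", Prefix="backups/"):
    for obj in page.get("Contents", []):
        # StorageClass may be omitted for STANDARD objects in some responses.
        bytes_by_class[obj.get("StorageClass", "STANDARD")] += obj["Size"]

for storage_class, total in sorted(bytes_by_class.items()):
    print(f"{storage_class}: {total / 1024**3:.1f} GB")
```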

Companies rightfully want to adopt the cloud for the many benefits that it offers and, ideally, achieve a quick win in the process. Storing backup data in the cloud and moving DR processes to the cloud provide the quick wins that many companies initially seek. As they do so, they should also ensure they put the appropriate processes and software in place to manage and control their usage of cloud resources.




Breaking Down Scalable Data Protection Appliances

Scalable data protection appliances have arguably emerged as one of the hottest backup trends in quite some time, possibly since the introduction of deduplication into the backup process. These appliances offer backup software, cloud connectivity, replication, and scalable storage in a single, logical converged or hyperconverged infrastructure platform. This combination simplifies backup while positioning a company to seamlessly make the appliance part of its disaster recovery strategy, or even to create a DR solution for the first time.

As the popularity of these appliances increases, so do the number of product offerings and the differences between them. To help a company break down these scalable data protection appliances, here are some commonalities between them as well as three key features on which to evaluate how they differ.

Features in Common

At a high level, these products generally share the following seven features:

  1. All-inclusive licensing that includes most if not all the software features available on the appliance.
  2. Backup software that an organization can use to protect applications in its environment.
  3. Connectivity to general-purpose clouds for off-site long-term data retention.
  4. Deduplication technologies to reduce the amount of data stored on the appliance.
  5. Replication to other appliances on-premises, off-site, and even in the cloud to lay the foundation for disaster recovery.
  6. Rapid application recovery, which often includes the ability to host one or more virtual machines (VMs) directly on the appliance.
  7. Scalable storage that enables a company to quickly and easily add more storage capacity to the appliance.


It is only when a company begins to drill down into each of these features that it starts to observe noticeable differences in how each provider delivers them.

All-inclusive Licensing

For instance, on the surface, all-inclusive licensing sounds straightforward. If a company buys the appliance, it obtains the software with it. That part of the statement holds true. The key question a company must ask is, “How much capacity does the all-inclusive licensing cover before I have to start paying more?”

That answer will vary by provider. Some providers, such as Cohesity and Rubrik, charge by the terabyte. As the amount of data under management on the appliance grows, so do the licensing costs. In contrast, StorageCraft licenses the software on its OneXafe appliance by the node. Once a company licenses StorageCraft’s software for a node, the license covers all data stored on that node (up to 204TB raw).
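
A simple projection shows how the two models diverge as data grows. The sketch below uses made-up placeholder prices, not actual Cohesity, Rubrik, or StorageCraft pricing; only the 204TB-per-node figure comes from the paragraph above.

```python
# Sketch: compare hypothetical per-TB vs. per-node licensing as data grows.
# All prices below are illustrative placeholders, not actual vendor pricing.

PRICE_PER_TB = 400           # assumed per-TB license cost
PRICE_PER_NODE = 25_000      # assumed per-node license cost
NODE_RAW_CAPACITY_TB = 204   # raw capacity covered by one node license

for data_tb in (50, 100, 200, 400):
    per_tb_cost = data_tb * PRICE_PER_TB
    nodes_needed = -(-data_tb // NODE_RAW_CAPACITY_TB)  # ceiling division
    per_node_cost = nodes_needed * PRICE_PER_NODE
    print(f"{data_tb:>4} TB -> per-TB: ${per_tb_cost:,}  "
          f"per-node ({nodes_needed} node(s)): ${per_node_cost:,}")
```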

Deduplication

Deduplication software is another technology available on these appliances that a company might assume is implemented essentially the same way across all these available offerings. That assumption would be incorrect.

Each of these appliances implements deduplication in slightly different ways. Cohesity gives a company a few options: it can deduplicate data as it backs it up using Cohesity’s own backup software, or deduplicate data that other backup software writes to its appliance. In the latter case, a company may, at its discretion, choose to deduplicate either inline or post-process.

StorageCraft deduplicates data using its backup software on the client and also offers inline deduplication for data that other backup software writes to its appliance. Rubrik only deduplicates data backed up by its own Cloud Data Management software. HYCU uses the deduplication technology natively found in the Nutanix AHV hypervisor.
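
Underneath these variations, the core technique is the same: split the backup stream into chunks, fingerprint each chunk, and store only the chunks not already seen. The sketch below is a deliberately simplified, fixed-size-chunk illustration of that idea and not a reproduction of any vendor’s implementation; production systems typically use variable-size chunking and persistent, distributed indexes.

```python
# Simplified inline deduplication: fixed-size chunks, SHA-256 fingerprints.
# Illustrative only; real appliances use variable-size chunking and
# persistent, distributed indexes.
import hashlib

CHUNK_SIZE = 4096          # assumed chunk size
chunk_index = {}           # fingerprint -> stored chunk

def store_backup_stream(data: bytes) -> list[str]:
    """Deduplicate a backup stream; return the list of chunk fingerprints."""
    recipe = []
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        fingerprint = hashlib.sha256(chunk).hexdigest()
        # Only new chunks consume storage; duplicates are just referenced.
        if fingerprint not in chunk_index:
            chunk_index[fingerprint] = chunk
        recipe.append(fingerprint)
    return recipe

# Two "backups" with mostly identical content share most of their chunks.
monday = b"A" * 16384 + b"changed block"
tuesday = b"A" * 16384 + b"another change"
store_backup_stream(monday)
store_backup_stream(tuesday)
print(f"Unique chunks stored: {len(chunk_index)}")
```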

Scalable Storage

A third area of differentiation between these appliances shows up in how they scale storage. While scale-out architectures get a lot of the press, that is only one scalable storage option available to a company. A scale-out architecture, such as that employed by Cohesity, HYCU, and Rubrik, entails adding more nodes to an existing configuration.

Using a scale-up architecture, such as is available on the Asigra TrueNAS appliance from iXsystems, a company can add more disk drives to an existing chassis. Still another provider, StorageCraft, uses a combination of both architectures in its OneXafe appliance. One can add more drives to an existing node or add more nodes to an existing OneXafe deployment.

Scalable data protection appliances are changing the backup and recovery landscape by delivering both the simplicity of management and the breadth of features that companies have long sought. However, as a cursory examination of three of their features illustrates, the differences between the features on these appliances can be significant. This makes it imperative that a company break down the features on any scalable data protection appliance it is considering for purchase to ensure it obtains the appliance with the most appropriate feature set for its requirements.




Make the Right Choice between Scale-out and Scale-up Backup Appliance Architectures

Companies are always on the lookout for simpler, more cost-effective methods to manage their infrastructure. This explains, in part, the emergence of scale-out architectures over the last few years as a preferred means of implementing backup appliances. As scale-out architectures gain momentum, it behooves companies to take a closer look at the benefits and drawbacks of both scale-out and scale-up architectures to make the best choice for their environment.

Backup appliances primarily ship in two architectures: scale-out and scale-up. A scale-out architecture consists of nodes that are logically grouped together using software that the vendor provides. Each node ships with preconfigured amounts of memory, compute, network ports, and storage capacity. The maximum raw capacities of scale-out backup appliances range from a few dozen terabytes to nearly twelve petabytes.

In contrast, a scale-up architecture places a controller with compute, memory, and network ports in front of storage shelves. A storage shelf may be internal or external to the appliance. Each storage shelf holds a fixed number of disk drives.

Backup appliances based on a scale-up architecture usually start with lower amounts of storage capacity in an initial deployment. If an organization needs more capacity, it adds more disk drives to these storage shelves, up to some predetermined, fixed upper limit. Backup appliances that use this scale-up architecture range from a few terabytes to multiple petabytes of maximum raw capacity.

Scale-out Benefits and Limitations

A scale-out architecture, sometimes referred to as a hyper-converged infrastructure (HCI), enables a company to purchase more nodes as it needs them. Each node it acquires adds more memory, compute, network interfaces, and storage capacity to the existing solution. This approach addresses the enterprise need to complete growing backup workloads within the same backup window, since more hardware resources become available with each node.

This approach also addresses concerns about product upgrades. Because all nodes reside in a single configuration, new nodes with higher levels of performance and more capacity can be introduced into the scale-out architecture as existing nodes age or run out of capacity.

Additionally, an organization may account for and depreciate each node individually. While the solution’s software can logically group physical nodes together, there is no requirement to treat all the physical nodes as a single entity. By treating each node as its own physical entity, an organization can depreciate each node over a three- to five-year period (or whatever period its accounting rules allow). This approach mitigates the need to depreciate newly added capacity over a shorter time frame, as is sometimes required when adding capacity to scale-up appliances.

The flexibility of scale-out solutions can potentially create some management overhead. When using a scale-out architecture, an enterprise should verify that, as the number of nodes in the configuration increases, the solution can automatically load balance workloads and store backup data across all available nodes. If not, an enterprise may find it spends an increasing amount of time balancing backup jobs across its nodes.

An enterprise should also verify that all the nodes work together as one collective entity. For instance, an enterprise should verify that the scale-out solution offers “global deduplication”. This feature deduplicates data across all the nodes in the system, regardless of which node the data resides on. If the solution does not offer this feature, it will still deduplicate the data, but only on each individual node.

Finally, an enterprise should keep its eye on the possibility of “node sprawl” when using these solutions. These solutions make it easy to grow but an enterprise needs to plan for the optimal way to add each node as individual nodes can vary widely in their respective capacity and performance characteristics.

Scale-up Benefits and Limitations

Backup appliances that use a scale-up architecture have their own sets of benefits and limitations. Three characteristics currently work in their favor. These appliances are:

  1. Mature
  2. Well-understood
  3. Widely adopted and used

One broader backup industry trend also currently works in favor of scale-up architectures: more enterprises use snapshots as their primary backup technique. Using these snapshots as the source for backups frees enterprises to run backups at almost any time of day. This helps mitigate the night and weekend performance bottleneck that can occur when all backups run at the same time against one of these appliances as the backup target.

A company may encounter the following challenges when working with scale-up appliances:

First, it must size and configure the appliance correctly. This requires an enterprise to have a good understanding of its current and anticipated backup workloads, the total amount of data it needs to back up, and its data retention requirements. Should it overestimate its requirements, it may end up with an appliance oversized for its environment. Should it underestimate its requirements, backup jobs may not complete on time or it may run out of capacity, requiring it to buy another appliance sooner than it anticipated.

Second, all storage capacity sits behind a single controller. This architecture necessitates that the controller be sufficiently sized to meet all current and future backup workloads. Even though the appliance may support the addition of more disk drives, all backup jobs will still need to run through the same controller. Depending on the amount of data and how quickly backup jobs need to complete, this could bottleneck performance and slow backup and recovery jobs.
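
A quick, hypothetical calculation shows why controller sizing matters so much in a scale-up design. The ingest rate, data volume, and growth rate below are illustrative assumptions, not measurements of any particular appliance.

```python
# Sketch: estimate the backup window imposed by a single controller.
# Ingest rate, data volume, and growth rate are illustrative assumptions.

CONTROLLER_INGEST_TB_PER_HR = 20   # assumed sustained ingest rate
FULL_BACKUP_TB = 150               # assumed full backup size today
ANNUAL_GROWTH = 0.30               # assumed 30% annual data growth

for year in range(4):
    data_tb = FULL_BACKUP_TB * (1 + ANNUAL_GROWTH) ** year
    window_hr = data_tb / CONTROLLER_INGEST_TB_PER_HR
    print(f"Year {year}: {data_tb:.0f} TB -> {window_hr:.1f} hour backup window")
# The controller does not get faster as shelves are added, so the window grows.
```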

Make the Right Backup Appliance Choice

The right choice between these two architectures may come down to how well a company understands its own environment. If a company expects to experience periods of rapid or unexpected data growth, a scale-out appliance will often be the better approach. In these scenarios, look to appliances from Cohesity, Commvault, ExaGrid, NEC, and StorageCraft.

If a company expects more predictable or minimal data growth in its environment, scale-up backup solutions such as the Asigra TrueNAS and Unitrends appliances will likely better match its requirements.




Number of Appliances Dedicated to Deduplicating Backup Data Shrinks even as Data Universe Expands

One would think that with the continuing explosion in the amount of data being created every year, the number of appliances that can reduce the amount of data stored by deduplicating it would be increasing. That statement is both true and flawed. On one hand, the number of backup and storage appliances that can deduplicate data has never been higher and continues to increase. On the other hand, the number of vendors that create physical target-based appliances dedicated to the deduplication of backup data continues to shrink.

Data Universe Expands

In November 2018, IDC released a report in which it estimated that the amount of data created, captured, and replicated will increase five-fold, from the current 33 zettabytes (ZBs) to about 175 ZBs in 2025. Whether or not one agrees with that estimate, there is little doubt that there are more ways than ever in which data gets created. These include:

  • Endpoint devices such as PCs, tablets, and smart phones
  • Edge devices such as sensors that collect data
  • Video and audio recording devices
  • Traditional data centers
  • The creation of data through the backup, replication and copying of this created data
  • The creation of metadata that describes, categorizes, and analyzes this data

All these sources and means of creating data mean there is more data than ever under management. But as this occurs, the number of products originally developed to control this data growth – hardware appliances that specialize in deduplicating backup data after it is backed up, such as those from Dell EMC, ExaGrid, and HPE – has shrunk in recent years.

Here are the top five reasons for this trend.

1. Deduplication has moved onto storage arrays.

Many storage arrays, both primary and secondary, give companies the option to deduplicate data. While these arrays may not achieve the same deduplication ratios as appliances purpose-built for the deduplication of backup data, their combination of lower costs and high levels of storage capacity offsets the inability of their deduplication software to fully optimize backup data.

2. Backup software offers deduplication capabilities.

Rather than waiting to deduplicate backup data on a hardware appliance, almost all enterprise backup software products can deduplicate data on either the client or the backup server before storing it. This eliminates the need for a storage device dedicated to deduplicating data.

3. Virtual appliances that perform deduplication are on the rise.

Some providers, such as Quest Software, have exited the physical deduplication backup target appliance market and re-emerged with virtual appliances that deduplicate data. These give companies new flexibility to use hardware from any provider they want and implement their software-defined data center strategy more aggressively.

4. Newly created data may not deduplicate well or at all.

A lot of the new data that companies create may not deduplicate well or at all. Audio and video files rarely change and will only deduplicate if repeated full backups are done – which may be rare. Encrypted data will not deduplicate at all. In these circumstances, deduplication appliances are rarely, if ever, needed.
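
The encryption point is easy to demonstrate. In the sketch below, which assumes the third-party cryptography package, the same block encrypted twice with AES-256-GCM and fresh nonces produces completely different ciphertext, so chunk fingerprints never match and a deduplication engine finds nothing to eliminate.

```python
# Why pre-encrypted data defeats deduplication: identical plaintext blocks
# yield different ciphertext (and different fingerprints) on every encryption.
# Requires the third-party "cryptography" package.
import hashlib
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)
block = b"identical backup block" * 100

ct1 = aesgcm.encrypt(os.urandom(12), block, None)
ct2 = aesgcm.encrypt(os.urandom(12), block, None)

print(hashlib.sha256(ct1).hexdigest()[:16])  # two different fingerprints
print(hashlib.sha256(ct2).hexdigest()[:16])  # for the very same plaintext
print("Deduplicable:", ct1 == ct2)           # False
```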

5. Multiple backup copies of the data may not be needed.

Much of the data collected from edge and endpoint devices may only need a couple of copies, if that. Audio and video files may also fall into this same category of not needing more than a couple of retained copies. To get the full benefits of a target-based deduplication appliance, one needs to back up the same data multiple times – usually at least six times, if not more. This reduced need to back up and retain multiple copies of data diminishes the need for these appliances.

Remaining Deduplication Appliances More Finely Tuned for Enterprise Requirements

The reduction in the number of vendors shipping physical target-based deduplication backup appliances seems almost counter-intuitive in light of the ongoing explosion in data growth that we are witnessing. But when one considers much of the data being created and its corresponding data protection and retention requirements, the decrease in the number of target-based deduplication appliances available is understandable.

The upside is that the vendors who do remain and the physical target-based deduplication appliances that they ship are more finely tuned for the needs of today’s enterprises. They are larger, better suited for recovery, have more cloud capabilities, and account for some of these other broader trends mentioned above. These factors and others will be covered in the forthcoming DCIG Buyer’s Guide on Enterprise Deduplication Backup Appliances.




Rethinking Your Data Deduplication Strategy in the Software-defined Data Center

Data Centers Going Software-defined

There is little dispute tomorrow’s data center will become software-defined, for reasons no one entirely anticipated even as recently as a few years ago. While companies have long understood the benefits of virtualizing the infrastructure of their data centers, the complexities and costs of integrating and managing data center hardware far exceeded whatever benefits virtualization delivered. Now, thanks to technologies such as the Internet of Things (IoT), machine intelligence, and analytics, among others, companies may pursue software-defined strategies more aggressively.

The introduction of technologies that can monitor, report on, analyze, and increasingly manage and optimize data center hardware frees organizations from performing housekeeping tasks such as:

  • Verifying hardware firmware compatibility with applications and operating systems
  • Troubleshooting hot spots in the infrastructure
  • Identifying and repairing failing hardware components

Automating these tasks does more than change how organizations manage their data center infrastructures. It reshapes how they can think about their entire IT strategy. Rather than adapting their business to match the limitations of the hardware they choose, they can now pursue business objectives where they expect their IT hardware infrastructure to support these business initiatives.

This change in perspective has already led to the availability of software-defined compute, networking, and storage solutions. Further, software-defined applications such as databases, firewalls, and other applications that organizations commonly deploy have also emerged. These virtual appliances enable companies to quickly deploy entire application stacks. While it is premature to say that organizations can immediately virtualize their entire data center infrastructure, the foundation exists for them to do so.

Software-defined Storage Deduplication Targets

As they do, data protection software, like any other application, needs to be part of this software-defined conversation. In this regard, backup software finds itself well-positioned to capitalize on this trend. It can be installed on either physical or virtual machines (VMs) and already ships from many providers as a virtual appliance. But storage software that functions primarily as a deduplication storage target already finds itself being boxed out of the broader software-defined conversation.

Software-defined storage (SDS) deduplication targets do exist, and their storage capabilities have increased significantly. By the end of 2018, a few of these software-defined virtual appliances scaled to support about 100TB or more of capacity. But organizations must exercise caution when looking to position these available solutions as a cornerstone in a broader software-defined deduplication storage target strategy.

This caution, in many cases, stems less from the technology itself and more from the vendors who provide these SDS deduplication target solutions. In every case, save one, these solutions originate with providers who focus on selling hardware solutions.

Foundation for Software-defined Data Centers Being Laid Today

Companies are putting plans in place right now to build the data center of tomorrow. That data center will be a largely software-defined data center with solutions that span both on-premises and cloud environments. To achieve that end, companies need to select solutions with a software-defined focus that meet their current needs while positioning them for tomorrow’s requirements.

Most layers in the data center stack, to include compute, networking, storage, and even applications, are already well down the road of transforming from hardware-centric to software-centric offerings. Yet in the face of this momentous shift in corporate data center environments, SDS deduplication target solutions have been slow to adapt.

It is this gap that SDS deduplication products such as Quest QoreStor look to fill. Coming from a company with “software” in its name, Quest comes without the hardware baggage that other SDS providers must balance. More importantly, Quest QoreStor offers a feature-rich set of services, ranging from deduplication and replication to support for all major cloud, hardware, and backup software platforms, that draws on 10 years of experience in delivering deduplication software.

Free to focus solely on delivering an SDDC solution, Quest QoreStor represents the type of SDS deduplication target that truly meets the needs of today’s enterprises while positioning them to realize the promise of tomorrow’s software-defined data center.

To read more of DCIG’s thoughts about using SDS deduplication targets in the software-defined data center of tomorrow, follow this link.




Key Differentiators between the HPE StoreOnce 5650 and Dell EMC Data Domain 9300 Systems

Companies have introduced a plethora of technologies into their core enterprise infrastructures in recent years that include all-flash arrays, cloud, hyper-converged infrastructures, object-based storage, and snapshots, just to name a few. But as they do, a few constants remain. One is the need to back up and recover all the data they create.

Deduplication appliances remain one of the primary means for companies to store this data for short-term recovery, disaster recoveries, and long-term data retention. To fulfill these various roles, companies often select either the HPE StoreOnce 5650 or the Dell EMC Data Domain 9300. (To obtain a complimentary DCIG report that compares these two products, follow this link.)

Their respective deduplication appliance lines share many features in common. They both perform inline deduplication. They both offer client software to do source-side deduplication that reduces data sent over the network and improves backup throughput rates. They both provide companies with the option to back up data over NAS or SAN interfaces.

Despite these similarities, key areas of differentiation between these two product lines remain which include the following:

  1. Cloud support. Every company either uses or anticipates using a hybrid cloud configuration as part of its production operations. These two product lines differ in their levels of cloud support.
  2. Deduplication technology. Data Domain was arguably the first to popularize widespread use of deduplication for backup. Since then, others such as the HPE StoreOnce 5650 have come on the scene that compete head-to-head with Data Domain appliances.
  3. Breadth of application integration. Software plug-ins that work with applications and understand their data formats prior to deduplicating the data provide tremendous benefits as they improve data reduction rates and decrease the amount of data sent over the network during backups. The software that accompanies the appliances from these two providers has varying degrees of integration with leading enterprise applications.
  4. Licensing. The usefulness of any product hinges on the features it offers, their viability, and which ones are available to use. Clear distinctions between the HPE StoreOnce and Dell EMC Data Domain solutions exist in this area.
  5. Replication. Copying data off-site for disaster recovery and long-term data retention is paramount in comprehensive enterprise disaster recovery strategies. Products from each of these providers offer this but they differ in the number of features they offer.
  6. Virtual appliance. As more companies adopt software-defined data center strategies, virtual appliances have increased appeal.

In the latest DCIG Pocket Analyst Report, DCIG compares the HPE StoreOnce 5650 and Dell EMC Data Domain 9300 product lines and examines how well each product fares in its support of these six areas, drawing on nearly 100 features to reach its conclusions. This report is currently available at no charge for a limited time on DCIG’s partner website, TechTrove. To receive complimentary access to this report, complete a registration form that you can find at this link.




Three Features that Matter on All-flash Arrays and One that Matters Not So Much

In the last few years all-flash arrays have taken enterprise data centers by storm but, as that has occurred, the criteria by which organizations should evaluate storage arrays from competing vendors have changed substantially. Features that once mattered considerably now barely get anyone’s attention while features that no one had knowledge of a few years ago are closely scrutinized. Here are three features that organizations should examine on all-flash arrays and one feature that has largely dropped off the radar screen in terms of importance.

Performance and throughput seem to be top of mind with every organization when it comes to evaluating all-flash arrays, and all-flash arrays certainly differ in their ability to deliver on those attributes depending on the applications they are intended to host. However, most organizations will find that many all-flash arrays provide the levels of performance and throughput that their applications require. As such, there are three other features to which they should pay attention when evaluating different products. These include:

I. Storage capacity optimization technologies. Of the 90+ enterprise all-flash arrays that DCIG recently evaluated in anticipation of its forthcoming All-flash Array Buyer’s Guides, it found that over 75% of these all-flash arrays supported some type of storage capacity optimization technology, whether compression, deduplication, or both. While DCIG generally believes these technologies positively influence an organization’s ability to maximize available storage capacity, organizations should verify their environment will benefit from their use. Organizations that plan to host virtual machines and/or databases on all-flash arrays will almost always recognize the benefits of these technologies.

Source: DCIG

II. Breadth of VMware vSphere API Support. The tremendous storage capacity optimization benefits that all-flash arrays can offer for virtualized environments can be further amplified by the level of VMware vSphere API support that the array offers. However, the level of support for these APIs varies significantly from array to array, even between arrays from the same vendor. The chart below shows the level of support that one vendor offers for these APIs across its multiple AFA products (more than 10). If organizations plan to use these APIs and the features they offer, they should ideally determine which ones they want to use and ensure that the array they select offers them.

Source: DCIG

III. Non-disruptive upgrades. Who has time for outages or maintenance windows anymore? Or, maybe better phrased, who wants to explain to their management why they had an outage or needed an extended maintenance window? In short, no one wants to have those conversations and the good news is that many of today’s all-flash arrays offer non-disruptive upgrades and maintenance windows. The bad news is that there are still some gaps in the non-disruptive nature of today’s enterprise AFA models as the chart below illustrates. If ready to put application outages in your past, verify the all-flash array you select has all the non-disruptive features that your company requires.

Source: DCIG

All this change in storage arrays is not without its upside, and one feature that organizations need to be less concerned about, or perhaps not concerned about at all, is the RAID options on today’s all-flash arrays. As many of today’s all-flash arrays are re-inventions of yesterday’s hard disk drive arrays, they carried forward the RAID methodologies offered on them. The good news is that SSD failures are few and far between, so almost any RAID implementation will work equally well. If anything, organizations should give preference to those all-flash arrays that offer their own proprietary RAID implementation with a flash-first design that did not carry forward some of the HDD baggage these RAID schemes were originally designed to address.




A More Elegant (and Affordable) Approach to Nutanix Backups

One of the more perplexing challenges that Nutanix administrators face is how to protect the data in their Nutanix deployments. Granted, Nutanix natively offers its own data protection utilities. However, these utilities leave gaps that enterprises are unlikely to find palatable when protecting their production applications. This is where Comtrade Software’s HYCU and ExaGrid come into play as their combined solutions provide a more affordable and elegant approach to protecting Nutanix environments.

One of the big appeals of using hyperconverged solutions such as Nutanix is their inclusion of basic data protection utilities. Using its Time Stream and Cloud Connect technologies, Nutanix makes it easy and practical for organizations to protect applications hosted on VMs running on Nutanix deployments.

The issue becomes how one affordably delivers and manages data protection in Nutanix environments at scale. This is a tougher question for Nutanix to answer because using its data protection technologies at scale requires running the Nutanix platform to host the secondary/backup copies of data. While that is certainly doable, that approach is likely not the most affordable way to tackle this challenge.

This is where a combined data protection solution from Comtrade Software and ExaGrid for the protection of Nutanix environments makes sense. Comtrade Software’s HYCU was the first backup software product to come to market purpose-built to protect Nutanix environments. Like Nutanix’s native data protection utilities, Nutanix administrators can manage HYCU and their VM backups from within the Nutanix PRISM management console. Unlike Nutanix’s native data protection utilities, HYCU auto-detects applications running within VMs and configures them for protection.

Further distinguishing HYCU from other competitive backup software products mentioned on Nutanix’s web page, HYCU is the only one currently listed that can run as a VM in an existing Nutanix implementation. The other products listed require organizations to deploy a separate physical machine to run their software, which adds cost and complexity to the backup equation.

Of course, once HYCU protects the data, the issue becomes where to store the backup copies of data for fast recoveries and long-term retention. While one can certainly keep these backup copies on the existing Nutanix deployment or on a separate one, this creates two issues.

  • One, if there is some issue with the current Nutanix deployment, you may not be able to recover the data.
  • Two, there are more cost-effective solutions for the storage and retention of backup copies of data.

ExaGrid addresses these two issues. Its scale-out architecture resembles Nutanix’s architecture enabling an ExaGrid deployment to start small and then easily scale to greater amounts of capacity and throughput. However, since it is a purpose-built backup appliance intended to store secondary copies of data, it is more affordable than deploying a second Nutanix cluster. Further, the Landing Zones that are uniquely found on ExaGrid deduplication systems facilitate near instantaneous recovery of VMs.

Adding to the appeal of ExaGrid’s solutions in enterprise environments is its recently announced EX63000E appliance. This appliance has 58% more capacity than its predecessor, allowing for a 63TB full backup. Up to thirty-two (32) EX63000E appliances can be combined in a single scale-out system to allow for a 2PB full backup. Per ExaGrid’s published performance benchmarks, each EX63000E appliance has a maximum ingest rate of 13.5TB/hr, enabling thirty-two (32) EX63000Es combined in a single system to achieve a maximum ingest rate of 432TB/hr.
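
Those published figures hold up to simple arithmetic; the short check below uses only the numbers cited in the paragraph above.

```python
# Quick check of the scale-out math cited above.
appliances = 32
full_backup_per_appliance_tb = 63
ingest_per_appliance_tb_hr = 13.5

print(f"Full backup: {appliances * full_backup_per_appliance_tb} TB (~2 PB)")
print(f"Aggregate ingest: {appliances * ingest_per_appliance_tb_hr} TB/hr")
# Full backup: 2016 TB (~2 PB)
# Aggregate ingest: 432.0 TB/hr
```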

Hyperconverged infrastructure solutions are in general poised to re-shape enterprise data center landscapes with solutions from Nutanix currently leading the way. As this data center transformation occurs, organizations need to make sure that the data protection solutions that they put in place offer both the same ease of management and scalability that the primary hyperconverged solution provides. Using Comtrade Software HYCU and ExaGrid, organizations get the affordable yet elegant data protection solution that they seek for this next generation data center architecture.




Differentiating between the Dell EMC Data Domain and ExaGrid EX Systems

Deduplication backup target appliances remain a critical component of the data protection infrastructure for many enterprises. While storing protected data in the cloud may be fine for very small businesses, or even as a final resting place for enterprise data, deduplication backup target appliances continue to function as the primary backup target and primary source for recovering data. It is for these reasons that enterprises frequently turn to deduplication backup target appliances from Dell EMC and ExaGrid to meet the specific needs covered in a recent DCIG Pocket Analyst Report.

The Dell EMC Data Domain and ExaGrid families of deduplication backup target appliances appear on the short lists for many enterprises. While both these providers offer systems for small, midsize, and large organizations, the underlying architecture and features on the systems from these two providers make them better suited for specific use cases.

Their respective data center efficiency, deduplication, networking, recoverability, replication, and scalability features (to include recently announced enhancements) provide insight into the best use cases for the systems from these two vendors.

Purpose-built deduplication systems from both Dell EMC Data Domain and ExaGrid have widespread appeal as they expedite backups, increase backup and recovery success rates, and simplify existing backup environments. They offer appliances in various physical configurations to meet the specific backup needs of small, midsize, and large enterprises while providing virtual appliances that can run in private clouds, public clouds, or virtualized remote and branch offices.

Their systems significantly reduce backup data stores and offer concurrent backup and replication. They also limit the number of backup streams, display real-time deduplication ratios, and do capacity analysis and trending. Despite the similarities that the systems from these respective vendors share, six differences exist between them in their underlying features that impact their ability to deliver on key end-user expectations. These include:

  1. Data center efficiency to include how much power they use and the size of their data center footprint.
  2. Data reduction to include what deduplication options they offer and how they deliver them.
  3. Networking protocols to include connectivity for NAS and SAN environments.
  4. Recoverability to include how quickly, how easily, and where recoveries may be performed.
  5. Replication to include copying data offsite as well as protecting data in remote and branch offices.
  6. Scalability to include total amount of capacity as well as ease and simplicity of scaling.

DCIG is pleased to make a recent DCIG Pocket Analyst Report that compares these two families of deduplication backup target appliances available for a complimentary download for a limited time. This succinct, 4-page report includes a detailed product matrix as well as insight into these six differentiators between these two solutions and which one is best positioned to deliver on these six key data center considerations.

To access and download this report at no charge for a limited time, simply follow this link and complete the registration form on the landing page.




Data Center Efficiency, Performance, Scalability: How Dell EMC XtremIO, Pure Storage Flash Arrays Differ

Latest DCIG Pocket Analyst Report Compares Dell EMC XtremIO and Pure Storage All-flash Product Families

Hybrid and all-disk arrays still have their place in enterprise data centers but all-flash arrays are “where it’s at” when it comes to hosting and accelerating the performance of production applications. Once reserved only for applications that could cost-justify these arrays, continuing price erosion in the underlying flash media coupled with technologies such as compression and deduplication have put these arrays at a price point within reach of almost any size enterprise. As that occurs, flash arrays from Dell EMC XtremIO and Pure Storage are often on the buying short lists for many companies.

When looking at all-flash arrays, it is easy to fall into the trap that they are all created equal. While it can be truthfully said that every all-flash array is faster and will outperform any of its all-disk or hybrid storage array predecessors, there can be significant differences in how effectively and efficiently each one delivers that performance.

Consider product families from leaders in the all-flash array market: Dell EMC XtremIO and Pure Storage. When you look at their published performance specifications, they both scale to offer hundreds of thousands of IOPS, achieve sub one millisecond response times, and offer capacity optimization features such as compression and deduplication.

It is only when you start to pull back the covers on these two respective product lines that substantial differences between them start to emerge such as:

  • Their data center efficiency in areas such as power consumption and data center footprint
  • How much flash capacity they can ultimately hold
  • What storage protocols they support

This recently published 4-page DCIG Pocket Analyst Report analyzes these attributes and others on all-flash arrays from these two providers. It examines how well their features support these key data center considerations and includes analyst commentary on which product has the edge in these specific areas. This report also contains a feature comparison matrix to support this analysis.

This report provides the key insight in a concise manner that enterprises need to make the right choice in an all-flash array solution for the rapidly emerging all-flash array data center. This report may be purchased for $19.95 at TechTrove, a new third-party site that hosts and makes independently developed analyst content available for sale.

All-flash data centers are coming, and with every all-flash array providing higher levels of performance than previous generations of storage arrays, enterprises need to examine key underlying features that go deeper than simply how fast they perform. An array’s underlying architecture, the storage protocols it supports, and the software it uses to deliver these features are all factors that impact how effective and efficient the array will be in your environment. This DCIG Pocket Analyst Report makes plain some of the key ways that the all-flash arrays from Dell EMC and Pure Storage differentiate themselves from one another. Follow this link to purchase this report.

Author’s Note: The link to the DCIG Pocket Analyst Report comparing the Dell EMC XtremIO and Pure Storage FlashArrays was updated and correct at 12:40 pm CT on 10/18/2017 to point to the correct page on the TechTrove website. Sorry for any confusion!




Deduplication Still Matters in Enterprise Clouds as Data Domain and ExaGrid Prove

Technology conversations within enterprises increasingly focus on the “data center stack” with an emphasis on cloud enablement. While I agree with this shift in thinking, one can too easily overlook the merits of the underlying individual technologies when considering only the “Big Picture”. Such is happening with deduplication technology. Deduplication is a key enabler of enterprise archiving, data protection, and disaster recovery solutions, and vendors such as Dell EMC and ExaGrid deliver it in different ways that, as DCIG’s most recent 4-page Pocket Analyst Report reveals, make each product family better suited for specific use cases.

For too many years, it seemed enterprise data centers focused too much on the vendor name on the outside of the box as opposed to what was inside the box – the data and the applications. Granted, part of the reason for their focus on the vendor name was that they wanted to demonstrate they had adopted and implemented the best available technologies to secure the data and make it highly available. Further, some of the emerging technologies necessary to deliver a cloud-like experience with the needed availability and performance characteristics did not yet exist, were not yet sufficiently mature, or were not available from the largest vendors.

That situation has changed dramatically. Now the focus is almost entirely on software that provides enterprises with cloud-like experiences that enables them to more easily and efficiently manage their applications and data. While this change is positive, enterprises should not lose sight of the technologies that make up their emerging data center stack as they are not all equally equipped to deliver them in the same way.

A key example is deduplication. While this technology has existed for years and has become very mature and stable during that time, the ways in which enterprises can implement it and the benefits they will realize from it vary greatly. The deduplication solutions from Dell EMC Data Domain and ExaGrid illustrate these differences very well.

DCIG Pocket Analyst Report Compares Dell EMC Data Domain and ExaGrid Product Families

Deduplication systems from both Dell EMC Data Domain and ExaGrid have widespread appeal as they expedite backups, increase backup and recovery success rates, and simplify existing backup environments. They also both offer appliances in various physical configurations to meet the specific backup needs of small, midsize, and large enterprises while providing virtual appliances that can run in private clouds, public clouds, or virtualized remote and branch offices.

However, their respective systems also differ in key areas that will impact the overall effectiveness these systems will have in the emerging cloud data stacks that enterprises are putting in place. The six areas in which they differ include:

  1. Data center efficiency
  2. Deduplication methodology
  3. Networking protocols
  4. Recoverability
  5. Replication
  6. Scalability

The most recent 4-page DCIG Pocket Analyst Report analyzes these six attributes on the systems from these two providers and compares the underlying features that deliver on them. Further, this report identifies which product family has the advantage in each area and provides a feature comparison matrix to support these claims.

This report provides the key insight in a concise manner that enterprises need to make the right choice in deduplication solutions for their emerging cloud data center stack. This report may be purchased for $19.95 at TechTrove, a new third-party site that hosts and makes independently developed analyst content available for sale.

Cloud-like data center stacks that provide application and data availability, mobility, and security are rapidly becoming a reality. But as enterprises adopt these new enterprise clouds, they ignore or overlook technologies such as deduplication that make up these stacks at their own peril, as the underlying technologies they implement can directly impact the overall efficiency and effectiveness of the cloud they are building.




Veritas Delivering on its 360 Data Management Strategy While Performing a 180

Vendors first started bandying about the phrase “cloud data management” a year or so ago. While that phrase caught my attention, specifics as to what one should expect when acquiring a “cloud data management” solution remained nebulous at best. Fast forward to this week’s Veritas Vision 2017, and I finally encountered a vendor that was providing meaningful details as to what cloud data management encompasses while simultaneously performing a 180 behind the scenes.

Ever since I heard the term cloud data management a year or so ago, I loved it. If there was ever a marketing phrase that captured the essence of how every end-user secretly wants to manage all its data while the vendor or vendors promising to deliver it commits to absolutely nothing, this phrase nailed it. A vendor could shape and mold that definition however it wanted and know that end-users would listen to the pitch even if deep down the users knew it was marketing spin at its best.

Of course, Veritas promptly blew up these pre-conceived notions of mine this week at Vision 2017. While at the event, Veritas provided specifics about its cloud data management strategy that rang true if for no other reason than that they had a high degree of veracity to them. Sure, Veritas may refer to its current strategy as “360 Data Management.” But to my ears it sure sounded like someone had finally articulated, in a meaningful way, what cloud data management means and the way in which they could deliver on it.

Source: Veritas

The above graphic is the one that Veritas repeatedly rolls out when it discusses its 360 Data Management strategy. While Veritas is notable in that it is one of the few vendors that can articulate the particulars of its data management strategy, more importantly the strategy has three components that currently make it more viable than many of its competitors’. Consider:

  1. Its existing product portfolio maps very neatly into its 360 Data Management strategy. One might argue (probably rightfully so) that Veritas derived its 360 Data Management strategy from the existing product portfolio that it has built up over the years. However, many of these same critics have also contended that Veritas has been nothing but a company with an amalgamation of point products with no comprehensive vision. Well, guess what, the world changed over the past 12-24 months and it bent decidedly in the direction of software. Give Veritas some credit. It astutely recognized this shift, saw that its portfolio aligned damn well with how enterprises want to manage their data going forward, and had the chutzpah to craft a vision that it could deliver based upon the products it had in-house.
  2. It is not resting on its laurels. Last year when Veritas first announced its 360 Data Management strategy, I admit, I inwardly groaned a bit. In its first release, all it did was essentially mine the data in its own NetBackup catalogs. Hello, McFly! Veritas is only now thinking of this? To its credit, this past week it expanded the list of products that its Information Map connectors can access to over 20. These include Microsoft Exchange, Microsoft SharePoint, and Google Cloud, among others. Again, I must applaud Veritas for its efforts on this front. While this news may not be momentous or earth-shattering, it visibly reflects a commitment to delivering on and expanding the viability of its 360 Data Management strategy beyond just NetBackup catalogs.
  3. The cloud plays very well in this strategy. Veritas knows that it plays in the enterprise space, and it also knows that enterprises want to go to the cloud. While nowhere in its vision image above does it overtly say “cloud”, guess what? It doesn’t have to. It screams, “Cloud!” This is why many of its announcements at Veritas Vision around its CloudMobility, Information Map, NetBackup Catalyst, and other products talk about efficiently moving data to and from the cloud and then monitoring and managing it whether it resides on-premises, in the cloud, or both.

One other change it has made internally (and this is where the 180 initially comes in) is how it communicates this vision. When Veritas was part of Symantec, it stopped sharing its roadmap with current and prospective customers. In this area, Veritas has done a 180: customers who ask and sign a non-disclosure agreement (NDA) with Veritas can gain access to this roadmap.

Veritas may communicate that the only 180 it has made in the 18 months or so since it was spun out of Symantec is its new freedom to share its roadmap with current and prospective customers. While that may be true, the real 180 it has made entails successfully putting together a cohesive vision that articulates the value of the products in its portfolio in a context that enterprises are desperate to hear. Equally impressive, Veritas’ software-first focus positions it better than its competitors to enable enterprises to realize this ideal.





Exercise Caution Before Making Any Assumptions about Cloud Data Protection Products

There are two assumptions that IT professionals should exercise caution before making when evaluating cloud data protection products. One is to assume all products share some feature or features in common. The other is to assume that one product possesses some feature or characteristic that no other product on the market offers. As DCIG reviews its recent research into cloud data protection products, one cannot make either of these assumptions, even on features such as deduplication, encryption, and replication that one might expect to be universally adopted by these products in comparable ways.

The feature that best illustrates this point is deduplication. One would almost surely think that after the emphasis put on deduplication over the past decade, every product would now support deduplication. That conclusion would be true. But how each product implements deduplication can vary greatly. For example:

  1. Block-level deduplication is still not universally adopted by all products. A few products still only deduplicate at the file level.
  2. In-line deduplication is also not universally available on all products. Further, post-process deduplication is becoming more readily available as organizations want to do more with their copies of data after they back it up.
  3. Only about 2 in 5 products offer the flexibility to recognize data in backup streams and apply the most appropriate deduplication algorithm.

Source: DCIG; 175 products

Deduplication is not the only feature that differs between these products. As organizations look to centralize data protection in their infrastructure and then keep a copy of data offsite with cloud providers, features such as encryption and replication have taken on greater importance in these products and are more readily available than ever before. However, here again one cannot assume that all cloud data protection products support each of these features.

On the replication side, DCIG found this feature to be universally supported across the products it evaluated. Further, these products all offer the option for organizations to schedule replication to occur at certain times (every five minutes, on the hour, etc.).

However, when organizations get beyond this baseline level of replication, differences again immediately appear. For instance, just over 75 percent of the products perform continuous data replication (replicating data immediately after the write occurs at the primary site) while less than 20 percent support synchronous replication.

Organizations also need to pay attention to the fan-in and fan-out options that these products provide. While all support 1:1 replication configurations, only 75 percent of the products support fan-in replication (N:1) and only 71 percent support fan-out replication (1:N). The number of products that support replication across multiple hops drops even further – down to less than 40 percent.

Source: DCIG; 176 products

Encryption is another feature that has become widely used in recent years as organizations have sought to centralize backup storage in their data centers as well as store data with cloud providers. In support of these initiatives, over 95 percent of the products support AES-256 bit encryption for data at-rest while nearly 80 percent of them support this level of encryption for data in-flight.

Deduplication, encryption, and replication are features that organizations of almost any size universally expect to find on any cloud data protection product that they are considering for their environment. Further, as DCIG’s research into these products reveals, nearly all of them support these features in some capacity. However, they certainly do not give organizations the same number of options to deploy and leverage these features, and it is these differences in the breadth of feature functionality that organizations need to be keenly aware of as they make their buying decisions.




BackupAssist 10.0 Brings Welcomed Flexibility for Cloud Backup to Windows Shops

Today's backup mantra seems to be backup to the cloud or bust! But backing up to the cloud involves more than redirecting backup streams from a local file share to one presented by a cloud storage provider and clicking the "Start" button. Organizations must examine which cloud storage providers they can send their data to as well as how their backup software packages and sends that data to the cloud. BackupAssist 10.0 answers many of the tough questions about cloud data protection that businesses face while providing them some welcome flexibility in their choice of cloud storage providers.

Recently I was introduced to BackupAssist, a backup software company that hails from Australia, and had the opportunity to speak with its founder and CEO, Linus Chang, about BackupAssist's 10.0 release. The big news in this release is BackupAssist's introduction of cloud-independent backup, which gives organizations the freedom to choose any cloud storage provider to securely store their Windows backup data.

In today's IT environment, the flexibility to choose from multiple cloud storage providers as a backup target has become almost a prerequisite. Organizations increasingly want the ability to choose among cloud storage providers for cost and redundancy reasons.

Further, availability, performance, reliability, and support can vary widely by cloud storage provider. They may even vary by region, as large cloud storage providers usually operate multiple data centers in different regions of the country and the world. As a result, organizations can have very different backup and recovery experiences depending upon which cloud storage provider they use and the data center to which they send their data.

These factors and others make it imperative that today's backup software give organizations more freedom in their choice of cloud storage providers, which is exactly what BackupAssist 10.0 provides. By giving organizations the freedom to choose from Amazon S3 and Microsoft Azure, among others, they can select the "best" cloud storage provider for them. However, since the factors that constitute the "best" cloud storage provider can and probably will change over time, BackupAssist 10.0 gives organizations the flexibility to adapt as the situation warrants.

Source: BackupAssist

To help ensure organizations experience success when they back up to the cloud, BackupAssist 10.0 also introduces three other cloud-specific features:

  1. Compression and deduplication. Capacity usage and network bandwidth consumption are the two primary factors that drive up cloud storage costs. By introducing compression and deduplication in this release, BackupAssist 10.0 helps organizations keep these variable costs of using cloud storage under control.
  2. Insulated encryption. Every so often stories leak out about government agencies subpoenaing cloud providers for their clients' data. Using this feature, organizations can fully encrypt their backup data, making it inaccessible to anyone who lacks the encryption key.
  3. Resilient transfers. Nothing is worse than having a backup two-thirds or three-quarters complete only to have a hiccup in the network connection or on the server itself interrupt the backup and force a restart from the beginning. Minimally, this is annoying and disruptive to business operations. Over time, restarting backup jobs and resending the same backup data to the cloud can also run up networking and storage costs. BackupAssist 10.0 ensures that if a backup job gets interrupted, it can resume from the point where it stopped, sending only the data required to complete the backup. (A generic sketch of this resume-from-checkpoint approach appears after this list.)
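Here is a minimal sketch of the general resume-from-checkpoint pattern referenced above. The chunk size, checkpoint file, and upload_chunk helper are all assumptions for illustration; this is not BackupAssist's actual implementation.

```python
import json
import os

CHUNK_SIZE = 8 * 1024 * 1024  # 8 MB chunks; an assumed value for illustration

def upload_chunk(data: bytes, index: int) -> None:
    """Placeholder for the provider-specific upload call (S3, Azure, etc.)."""
    ...

def resumable_upload(path: str, checkpoint_path: str) -> None:
    """Upload a backup file in chunks, resuming from the last completed chunk."""
    done = 0
    if os.path.exists(checkpoint_path):
        with open(checkpoint_path) as f:
            done = json.load(f)["chunks_completed"]

    with open(path, "rb") as backup:
        backup.seek(done * CHUNK_SIZE)          # skip chunks that were already sent
        index = done
        while chunk := backup.read(CHUNK_SIZE):
            upload_chunk(chunk, index)
            index += 1
            with open(checkpoint_path, "w") as f:
                json.dump({"chunks_completed": index}, f)  # record progress after each chunk

    os.remove(checkpoint_path)                  # transfer finished; clear the checkpoint
```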

In its 10.0 release, BackupAssist makes needed enhancements to ensure it remains a viable, cost-effective backup solution for businesses wishing to protect applications running on Windows Server. While these businesses should keep some copies of data on local disk for faster backups and recoveries, the value of efficiently and cost-effectively keeping copies of their data offsite with cloud storage providers cannot be ignored. The 10.0 version of BackupAssist gives them the versatility to store data locally, in the cloud, or both, along with new flexibility to choose, at any time, the cloud storage provider that most closely aligns with their business and technical requirements.




Deduplicate Differently with Leading Enterprise Midrange All-flash Arrays

If you assume that leading enterprise midrange all-flash arrays (AFAs) support deduplication, your assumption would be correct. But if you assume that these arrays implement and deliver deduplication’s features in the same way, you would be mistaken. These differences in deduplication should influence any all-flash array buying decision as deduplication’s implementation affects the array’s total effective capacity, performance, usability, and, ultimately, your bottom line.

The availability of deduplication technology on all leading enterprise midrange AFAs comes as a relief to many organizations. The raw price per GB of AFAs often precludes organizations from deploying them widely in their environments. Deduplication's presence, however, enables organizations to deploy AFAs more widely since it may increase an AFA's total effective capacity by 2-3x over its total usable capacity.
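A quick back-of-the-envelope calculation, using hypothetical prices and capacities, shows why that 2-3x multiplier matters to the bottom line:

```python
usable_tb = 50             # hypothetical usable flash capacity
array_price_usd = 150_000  # hypothetical array price

for dedup_ratio in (1, 2, 3):
    effective_tb = usable_tb * dedup_ratio
    cost_per_effective_gb = array_price_usd / (effective_tb * 1_000)
    print(f"{dedup_ratio}x dedup: {effective_tb} TB effective capacity, "
          f"${cost_per_effective_gb:.2f} per effective GB")
```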

The operative word in that claim is "may." Selecting an enterprise midrange all-flash array model from Dell EMC, HDS, HPE, Huawei, or NetApp only guarantees that you will get an array that supports deduplication. One should not automatically assume that any of these vendors delivers it in a way that your organization can best capitalize on.

For instance, if you only want to do post-process deduplication, only one of the five vendors listed above offers a model that supports that option. If you want deduplication included when you buy the array, rather than licensed separately, only three of the vendors support that option. If you want to do inline deduplication of production data, then only two of those vendors support that option.

Deduplication on all-flash arrays is highly desirable as it helps drive the price point of flash down to the point where organizations can look to cost-effectively use it more widely in production. However, deduplication only makes sense if the vendor delivers deduplication in a manner that matches the needs of your organization.

To get a glimpse into how these five vendors deduplicate data differently, check out this short, two-page report* from DCIG that examines 14 deduplication features on five different products. This concise, easy-to-understand report provides you with an at-a-glance snapshot of which products support the key deduplication features that organizations need to make the right all-flash array buying decision.

Access to this report* is available through the DCIG Competitive Intelligence Portal and is limited to its subscribers. However, Unitrends is currently providing complimentary access to the DCIG Competitive Intelligence Portal for end users. Once registered, individuals may download this report as well as the latest DCIG All-flash Array Buyer's Guide.

If you are not already a subscriber, register now to access this report for free and obtain the information you need to understand more than whether these arrays deduplicate data. Rather, learn how they do it differently!

* This report is only available for a limited time to subscribers of the DCIG Competitive Intelligence (CI) Portal. Individuals who work for manufacturers, resellers, or vendors must pay to subscribe to the DCIG CI Portal. All information accessed and reports downloaded from the DCIG CI Portal are for individual, confidential use and may not be publicly disseminated.




Difficult to Find any Sparks of Interest or Innovation in HDDs Anymore

In early November DCIG finalized its research into all-flash arrays and, in the coming weeks and months, will be announcing its rankings in its various Buyer's Guide Editions as well as in its new All-flash Array Product Ranking Bulletins. It is as DCIG prepares to release these all-flash array rankings that we also find ourselves remarking on just how quickly interest in HDD-based arrays has declined this year alone. While we are not ready to declare HDDs dead by any stretch, finding any sparks of interest or innovation in hard disk drives (HDDs) is getting increasingly difficult.


The rapid decline of interest in HDDs over the last 18 months, and certainly the last six months, is stunning. When flash first started gaining market acceptance in enterprise storage arrays around 2010, there was certainly speculation that flash could replace HDDs. But the disparity in price per GB between disk and flash was great at the time and forecast to remain that way for many years. As such, I saw no viable path for flash to replace disk in the near term.

Fast forward to late 2016, and flash's drop in price per GB, coupled with the introduction of technologies such as compression and deduplication in enterprise storage arrays, has brought its price down to where it now approaches HDDs. Then factor in the reduced power and cooling costs, flash's increased life span (5 years or longer in many cases), the improved performance, and intangibles such as the elimination of noise in data centers, and suddenly the feasibility of all-flash data centers does not seem so far-fetched.

Some vendors are even working behind the scenes to make the case for flash even more compelling. They plan to eliminate the upfront capital costs associated with deploying flash and are instead working on flash deployments that charge monthly based on how much capacity your organization uses.

Recent statistics support this rapid adoption. Trendfocus announced that it found a 101% quarter-over-quarter increase in the number of enterprise PCIe units shipped, total capacity for all shipped SSDs approaching 14 exabytes, and more than 4 million SATA and SAS SSDs shipped. Couple those numbers with the CEOs of providers such as Kaminario (link) and Nimbus Data (link) publicly saying that list prices for their all-flash units have dropped below the $1/GB price point, and it is no wonder that flash is dousing any sparks of interest that companies have in buying HDDs or that vendors have in innovating in HDD technology.

Is DCIG declaring disk dead? Absolutely not. In talking with providers of integrated and hybrid cloud backup appliances, deduplicating backup appliances, and archiving appliances, I find they still cannot justify replacing HDDs with flash. Or at least not yet.

One backup appliance provider tells me his company watches the prices of flash like a hawk and re-evaluates the price of flash versus HDDs about every six months to see if it makes sense to replace HDDs with flash. The threshold that makes it compelling for his company to use flash in lieu of HDDs has not yet been crossed and may still be some time away.

While flash has certainly dropped in price even as it simultaneously increases in capacity, companies should not expect to store their archive and backup data on flash in the next few years. The recently announced Samsung 15.36TB SSD, available for around $10,000, is ample proof of that. Despite its huge capacity, it still costs around 65 cents per GB, compared to roughly a nickel per GB for 8TB HDDs, which makes HDDs less than one-tenth the cost of flash.
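The arithmetic behind that comparison is straightforward; the figures below simply restate the approximate prices cited above, with the 8TB HDD priced at a nickel per GB:

```python
ssd_price_usd, ssd_capacity_gb = 10_000, 15_360   # Samsung 15.36TB SSD, approximate price
hdd_price_usd, hdd_capacity_gb = 400, 8_000       # 8TB HDD at roughly a nickel per GB

ssd_per_gb = ssd_price_usd / ssd_capacity_gb      # about $0.65 per GB
hdd_per_gb = hdd_price_usd / hdd_capacity_gb      # about $0.05 per GB
print(f"SSD: ${ssd_per_gb:.2f}/GB, HDD: ${hdd_per_gb:.2f}/GB, "
      f"flash costs about {ssd_per_gb / hdd_per_gb:.0f}x more per GB")
```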

That said, circle the year 2020 as a potential tipping point. That year, Samsung anticipates releasing a 100TB flash drive. If that drive stays at the same $10,000 price point, it will put flash within striking range of HDDs on a price per GB basis, or make its cost per GB so low that most shops will no longer care about the slight price differential between HDDs and flash. That price point, coupled with flash's lower operating costs and longer life, may finally put out whatever sparks of interest or innovation are left in HDDs.




DCIG 2016-17 Hybrid Cloud Backup Appliance Buyer’s Guide Now Available

DCIG is pleased to announce the availability of the 2016-17 Hybrid Cloud Backup Appliance Buyer’s Guide developed from the backup appliance body of research. Other Buyer’s Guides based on this body of research include the recent DCIG 2016-17 Deduplicating Backup Appliance Buyer’s Guide and the forthcoming 2016-17 Integrated Backup Appliance Buyer’s Guide.

As core business processes become digitized, the ability to keep services online and to rapidly recover from any service interruption becomes a critical need. Given the growth and maturation of cloud services, many organizations are exploring the advantages of storing application data with cloud providers and even recovering applications in the cloud.

Hybrid cloud backup appliances (HCBA) are deduplicating backup appliances that include pre-integrated data protection software and integration with at least one cloud-based storage provider. An HCBA’s ability to replicate backups to the cloud supports disaster recovery needs and provides essentially infinite storage capacity.

The DCIG 2016-17 Hybrid Cloud Backup Appliance Buyer’s Guide weights, scores and ranks more than 100 features of twenty-three (23) products from six (6) different providers. Using ranking categories of Recommended, Excellent and Good, this Buyer’s Guide offers much of the information an organization should need to make a highly informed decision as to which hybrid cloud backup appliance will suit their needs.

Each backup appliance included in the DCIG 2016-17 Hybrid Cloud Backup Appliance Buyer’s Guide meets the following criteria:

  • Is available as a physical appliance
  • May also ship as a virtual appliance
  • Includes backup and recovery software that enables seamless integration into an existing infrastructure
  • Stores backup data on the appliance via on-premises DAS, NAS, or SAN-attached storage
  • Enables connectivity with at least one cloud-based storage provider for remote backups and long-term retention of backups in a secure/encrypted fashion
  • Provides the ability to connect to the cloud-based backup images from more than one geographically dispersed appliance
  • Was formally announced or generally available for purchase as of July 1, 2016

It is within this context that DCIG introduces the DCIG 2016-17 Hybrid Cloud Backup Appliance Buyer's Guide. DCIG's succinct analysis provides insight into the state of the hybrid cloud backup appliance marketplace. The Buyer's Guide identifies the specific benefits organizations can expect to achieve using a hybrid cloud backup appliance, and key features organizations should be aware of as they evaluate products. It also provides brief observations about the distinctive features of each product. Ranking tables enable organizations to get an "at-a-glance" overview of the products, while DCIG's standardized one-page data sheets facilitate side-by-side comparisons, assisting organizations to quickly create a short list of products that may meet their requirements.

End users registering to access this report via the DCIG Analysis Portal also gain access to the DCIG Interactive Buyer's Guide (IBG). The IBG enables organizations to take the next step in the product selection process by generating custom reports, including comprehensive side-by-side feature comparisons of the products in which the organization is most interested.

By using the DCIG Analysis Portal and applying the hybrid cloud backup appliance criteria to the backup appliance body of research, DCIG analysts were able to quickly create a short list of products that meet these requirements, which was then used to create this Buyer's Guide Edition. DCIG plans to use this same process to create future Buyer's Guide Editions that further examine the backup appliance marketplace.

Additional information about each buyer’s guide edition, including a download link, is available on the DCIG Buyer’s Guides page.




Data Visualization, Recovery, and Simplicity of Management Emerging as Differentiating Features on Integrated Backup Appliances

Enterprises now demand higher levels of automation, integration, simplicity, and scalability from every component deployed into their IT infrastructure, and the integrated backup appliances found in DCIG's forthcoming Buyer's Guide Editions are a clear product of those expectations. Intended for organizations that want to protect applications and data and keep them behind corporate firewalls, these backup appliances come fully equipped, from both hardware and software perspectives, to do so.

Once largely assembled and configured by either IT staff or value added resellers (VARs), integrated backup appliances have gone mainstream and are available for use in almost any size organization. By bundling together both hardware and software, large enterprises get the turnkey backup appliance solution that just a few years ago was primarily reserved for smaller organizations. In so doing, large enterprises can eliminate the days, weeks, or even months they previously spent configuring and deploying these solutions in their infrastructure.

The demand for backup appliances at all levels of the enterprise is made plain by the providers who bring them to market. Once the domain of providers such as STORServer and Unitrends, the backup appliance market now includes "software only" companies such as Commvault and Veritas, which have responded to the demand for turnkey solutions by offering their own backup appliances under their respective brand names.


Commvault Backup Appliance


Veritas NetBackup Appliance

In so doing, organizations of any size may get one of the most feature-rich enterprise backup software solutions on the market, whether it is IBM Tivoli Storage Manager (STORServer), Commvault (Commvault and STORServer), Unitrends, or Veritas NetBackup, delivered to them as a backup appliance. Yet even as traditional all-software providers enter the backup appliance market, new business demands are driving further changes in backup appliances that organizations should consider as they contemplate future backup appliance acquisitions.

  • First, organizations expect successful recoveries. A few years ago, all backup jobs completing successfully was enough to keep everyone happy and giving high-fives to one another. No more. Organizations recognize that they have reliable backups residing on a backup appliance and that these appliances may largely sit idle during off-backup hours. This gives the enterprise some freedom to do more with these backup appliances during those periods, such as testing recoveries, recovering applications on the appliance itself, or even presenting these backup copies of data to other applications to use as sources for internal testing and development. DCIG found that a large number of backup appliances support one or more vCenter Instant Recovery features, and the emerging crop of backup appliances can also host virtual machines and recover applications on them.
  • Second, organizations want greater visibility into their data to justify business decisions. The amount of data residing in enterprise backup repositories is staggering. Yet the lack of value that organizations derive from that stored data, combined with the potential risk of retaining it, is equally staggering. Features that provide greater visibility into the metadata of these backups, analyze it, and help turn it into measurable value for the business are already starting to find their way onto these appliances. Expect these features to become more prevalent in the years to come.
  • Third, enterprises want backup appliances to expand their value proposition. Backup appliances are already easy to deploy, but maintaining and upgrading them or deploying them for other use cases gets more complicated over time. Emerging providers such as Cohesity, which makes its first appearance in DCIG Buyer's Guides as an integrated backup appliance, directly address these concerns. Available as a scale-out backup appliance that can function as a hybrid cloud backup appliance, a deduplicating backup appliance, and/or an integrated backup appliance, Cohesity provides an example of how enterprises can more easily scale and maintain an appliance over time while retaining the flexibility to use it internally in multiple ways.

The forthcoming DCIG 2016-17 Integrated Backup Appliance Buyer's Guide Editions highlight the most robust and feature-rich integrated backup appliances available on the market today. As such, organizations should consider the backup appliances covered in these Buyer's Guides as having many of the features they need to protect both their physical and virtual environments. Further, a number of these appliances give them early access to the set of features that will position them to meet their next set of recovery challenges, satisfy rising expectations for visibility into their corporate data, and simplify ongoing management so organizations may derive additional value from these appliances.




SaaS Provider Pulls Back the Curtain on its Backup Experience with Cohesity; Interview with System Architect, Fidel Michieli, Part 3

Usually when I talk to backup and system administrators, they willingly talk about how great a product installation was. It then becomes almost impossible, however, to find anyone who wants to comment on what life is like after their backup appliance is installed. This blog entry represents a bit of an anomaly in that someone willingly pulled back the curtain on that experience. In this third installment of my interview series, system architect Fidel Michieli describes how the implementation of Cohesity went in his environment and how Cohesity responded to issues that arose.

Jerome:  Once you had Cohesity deployed in your environment, can you provide some insights into how it operated and how upgrades went?

Fidel:  We have been through the upgrade process and the process of adding nodes twice. Those were the scary milestones that we did not test during the proof of concept (POC). Well, we did cover the upgrade process, but we did not cover adding nodes.

Jerome:  How did those upgrades go? Seamlessly?

Fidel:  The fact that our backup windows are small and we can run during the night essentially leaves all of our backup infrastructure idle during the day. If we take down one node at a time, we barely notice as we do not have anything running. But as a software company ourselves, we expected there to be a few bumps along the way, which we encountered.

Jerome:  Can you describe a bit about the “bumps” that you encountered?

Fidel:  We filled up the Cohesity cluster much faster than we expected, which set its metadata sprawling. We went to 90-92 percent very quickly, so we had to add nodes in order to get back the capacity that was being taken up by its metadata.

Jerome:  Do you control how much metadata the Cohesity cluster creates?

Fidel:  The metadata size is associated with the amount of deduplicated data it holds. As that grew, we started seeing some services restart and we got alerts of services restarting.

Jerome:  You corrected the out of capacity condition by adding more nodes?

Fidel:   Temporarily, yes.  Cohesity recognized we were not in a stable state and they did not want us to have a problem so they shipped us eight more nodes for us to create a new cluster.  [Editor’s Note:  Cohesity subsequently issued a new software release to store dedupe metadata more efficiently, which has since been implemented at this SaaS provider’s site.]

Jerome:  That means a lot that Cohesity stepped up to the plate to support its product.

Fidel:   It did. But while it was great that they shipped us the new cluster, I did not have any additional Ethernet ports to connect these new nodes as we did not have the additional port count in our infrastructure. To resolve this, Cohesity agreed to ship us the networking gear we needed. It talked to my network architect, found out what networking gear we liked, agreed to buy it and then shipped the gear to us overnight.

Further, my Cohesity system engineer calls me every time I open a support ticket and shows up here. He replies and makes sure that my ticket moves through the support queue. He came down to install the original Cohesity cluster and the upgrades to the cluster, which we have been through twice already. The support experience has been fantastic, and Cohesity has taken all of my requests into consideration as it has released software upgrades to its product, which is great.

Jerome:  Can you share one of your requests that Cohesity has implemented into its software?

Fidel:  We needed to have connectivity to Iron Mountain's cloud. Cohesity got that certified with Iron Mountain so it works in a turnkey fashion. We also needed support for SQL Server, which Cohesity added to its road map at the time and recently delivered. We also needed Cohesity to certify support for Exchange 2016; it expedited that work, and Exchange 2016 is now certified as well.

In part 1 of this interview series, Fidel shares the challenges that his company faced with its existing backup configuration as well as the struggles it encountered in identifying a backup solution that scaled to meet a dynamically changing and growing environment.

In part 2 of this interview series Fidel shares how he gained a comfort level with Cohesity prior to rolling it out enterprise-wide in his organization.

In part 4 of this interview series Fidel shares how Cohesity functions as both an integrated backup software appliance and a deduplicating target backup appliance in his company’s environment.




SaaS Provider Decides to Roll Out Cohesity for Backup and DR; Interview with System Architect, Fidel Michieli, Part 2

Evaluating product features, comparing prices, and doing proofs of concept are important steps in the process of adopting almost any new product. But once one completes those steps, the time arrives to roll the product out and implement it. In this second installment of my interview series with system architect Fidel Michieli, he shares how his company gained a comfort level with Cohesity for backup and disaster recovery (DR) and how broadly it decided to deploy the product in its primary and secondary data centers.

Jerome: How did you come to gain a comfort level for introducing Cohesity into your production environment?

Fidel: We first did a proof of concept (POC).  We liked what we saw about Cohesity but we had a set of target criteria based on the tests we had previously run using our existing backup software and the virtual machine backup software. As such, we had a matrix of what numbers were good and what numbers were bad. Cohesity’s numbers just blew them out of the water.

Jerome:  How much faster was Cohesity than the other solutions you had tested?

Fidel: Probably 250 percent or more. Cohesity does a metadata snapshot where it essentially uses VMware’s technology, but the way that it ingests the data and the amount of compute that it has available to do the backups creates the difference, if that makes sense. We really liked the performance for both backups and restores.

We had two requirements. On the Exchange side we needed to do granular message restores. Cohesity was able to help us achieve that objective by using an external tool that it licensed and which works. Our second objective was to get out of the tape business. We wanted to go to cloud. Unfortunately for us we are constrained to a single vendor. So we needed to work with that vendor.

Jerome: You mean single cloud vendor?

Fidel: Well it’s a tape vendor, Iron Mountain. We are constrained to them by contract. If we were going to shift to the cloud, it had to be to Iron Mountain’s cloud. But Cohesity, during the POC level, got the data to Iron Mountain.

Jerome: How many VMs?

Fidel: We probably have around 1,400 in our main data center and about 120 hosts. We have a two-site disaster recovery (DR) strategy with a primary and a backup. Obviously it was important to have replication for DR. That was part of the plan, in keeping with the 3-2-1 rule of backup. We wanted to cover that.

Jerome: So you have Cohesity at both your production and DR sites replicating between them?

Fidel: Correct.

Jerome: How many Cohesity nodes at each site?

Fidel: We have eight nodes at each site. After the POC we started to recognize a lot of the efficiencies from a management perspective. We knew that object storage was the way we wanted to go, the obvious reason being the metadata.

What the metadata means to us is that we can have a lot of efficiencies sit on top of our data. When you are analyzing or creating objects on your metadata, you can manage your data more efficiently. You can create objects that do compression, objects that do deduplication, objects that do analysis, and objects that hold policies. It's more of a software-defined data approach, if you will. Obviously, with that metadata and the object storage behind it, our maintenance windows and backup windows started getting lower and lower.

In part 1 of this interview series, Fidel shares the challenges that his company faced with its existing backup configuration as well as the struggles it encountered in identifying a backup solution that scaled to meet a dynamically changing and growing environment.

In part 3 of this interview series, Fidel shares some of the challenges he encountered while rolling out Cohesity and the steps that Cohesity took to address them.

In the fourth and final installment of this interview series, Fidel describes how he leverages Cohesity’s backup appliance for both VM protection and as a deduplicating backup target for his NetBackup backup software.
