Key Differentiators between the HPE StoreOnce 5650 and Dell EMC Data Domain 9300 Systems

Companies have introduced a plethora of technologies into their core enterprise infrastructures in recent years that include all-flash arrays, cloud, hyper-converged infrastructures, object-based storage, and snapshots, just to name a few. But as they do, a few constants remain. One is the need to back up and recover all the data they create.

Deduplication appliances remain one of the primary means for companies to store this data for short-term recovery, disaster recovery, and long-term data retention. To fulfill these various roles, companies often select either the HPE StoreOnce 5650 or the Dell EMC Data Domain 9300. (To obtain a complimentary DCIG report that compares these two products, follow this link.)

Their respective deduplication appliance lines share many features in common. Both perform inline deduplication. Both offer client software that performs source-side deduplication, which reduces data sent over the network and improves backup throughput rates. Both provide companies with the option to back up data over NAS or SAN interfaces.
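Source-side (client-side) deduplication of the sort both vendors offer generally works by chunking the backup stream, fingerprinting each chunk, and sending only chunks the target does not already hold. Here is a minimal sketch of the idea, not either vendor's actual protocol:

```python
import hashlib

def source_side_backup(data: bytes, known_hashes: set, chunk_size: int = 4096):
    """Chunk the stream, hash each chunk, and send only unseen chunks.

    Returns the list of (hash, chunk-or-None) records sent over the wire;
    None means the target already stores that chunk, so only the hash is sent.
    """
    sent = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest in known_hashes:
            sent.append((digest, None))        # reference only: no payload crosses the network
        else:
            known_hashes.add(digest)
            sent.append((digest, chunk))       # new chunk: payload must be transferred
    return sent

# A backup stream with heavy repetition deduplicates well:
stream = b"A" * 4096 * 3 + b"B" * 4096
records = source_side_backup(stream, set())
payloads = sum(1 for _, c in records if c is not None)
print(payloads, "of", len(records), "chunks sent")  # 2 of 4 chunks sent
```

Because only the two unique chunks cross the network, both the network load and the backup window shrink, which is exactly the benefit the client software aims for.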

Despite these similarities, key areas of differentiation between these two product lines remain, including the following:

  1. Cloud support. Every company either has or anticipates using a hybrid cloud configuration as part of its production operations. These two product lines differ in their levels of cloud support.
  2. Deduplication technology. Data Domain was arguably the first to popularize widespread use of deduplication for backup. Since then, others such as the HPE StoreOnce 5650 have come on the scene that compete head-to-head with Data Domain appliances.
  3. Breadth of application integration. Software plug-ins that work with applications and understand their data formats prior to deduplicating the data provide tremendous benefits as they improve data reduction rates and decrease the amount of data sent over the network during backups. The software that accompanies the appliances from these two providers has varying degrees of integration with leading enterprise applications.
  4. Licensing. The usefulness of any product hinges on the features it offers, their viability, and which ones are available to use. Clear distinctions between the HPE StoreOnce and Dell EMC Data Domain solutions exist in this area.
  5. Replication. Copying data off-site for disaster recovery and long-term data retention is paramount in comprehensive enterprise disaster recovery strategies. Products from each of these providers support replication, but they differ in the number of features they offer.
  6. Virtual appliance. As more companies adopt software-defined data center strategies, virtual appliances have increased appeal.

In the latest DCIG Pocket Analyst Report, DCIG compares the HPE StoreOnce 5650 and Dell EMC Data Domain 9300 product lines and examines how well these two products fare in their support of these six areas, looking at nearly 100 features to draw its conclusions. This report is currently available at no charge for a limited time on DCIG’s partner website, TechTrove. To receive complimentary access to this report, complete the registration form that you can find at this link.




A More Elegant (and Affordable) Approach to Nutanix Backups

One of the more perplexing challenges that Nutanix administrators face is how to protect the data in their Nutanix deployments. Granted, Nutanix natively offers its own data protection utilities. However, these utilities leave gaps that enterprises are unlikely to find palatable when protecting their production applications. This is where Comtrade Software’s HYCU and ExaGrid come into play as their combined solutions provide a more affordable and elegant approach to protecting Nutanix environments.

One of the big appeals of hyperconverged solutions such as Nutanix is their inclusion of basic data protection utilities. Using its Time Stream and Cloud Connect technologies, Nutanix makes it easy and practical for organizations to protect applications hosted on VMs running on Nutanix deployments.

The issue becomes: how does one affordably deliver and manage data protection in Nutanix environments at scale? This is a tougher question for Nutanix to answer because using its data protection technologies at scale requires running the Nutanix platform to host the secondary/backup copies of data. While that is certainly doable, that approach is likely not the most affordable way to tackle this challenge.

This is where a combined data protection solution from Comtrade Software and ExaGrid for the protection of Nutanix environments makes sense. Comtrade Software’s HYCU was the first backup software product to come to market purpose-built to protect Nutanix environments. Like Nutanix’s native data protection utilities, Nutanix administrators can manage HYCU and their VM backups from within the Nutanix PRISM management console. Unlike Nutanix’s native data protection utilities, HYCU auto-detects applications running within VMs and configures them for protection.

Further distinguishing HYCU from other competitive backup software products mentioned on Nutanix’s web page, HYCU is the only one currently listed that can run as a VM in an existing Nutanix implementation. The other products listed require organizations to deploy a separate physical machine to run their software, which adds cost and complexity to the backup equation.

Of course, once HYCU protects the data, the issue becomes: where does one store the backup copies of data for fast recoveries and long-term retention? While one can certainly keep these backup copies on the existing Nutanix deployment or on a separate deployment, this creates two issues.

  • One, if there is some issue with the current Nutanix deployment, you may not be able to recover the data.
  • Two, there are more cost-effective solutions for the storage and retention of backup copies of data.

ExaGrid addresses these two issues. Its scale-out architecture resembles Nutanix’s architecture, enabling an ExaGrid deployment to start small and then easily scale to greater amounts of capacity and throughput. However, since it is a purpose-built backup appliance intended to store secondary copies of data, it is more affordable than deploying a second Nutanix cluster. Further, the Landing Zones uniquely found on ExaGrid deduplication systems facilitate near-instantaneous recovery of VMs.

Adding to the appeal of ExaGrid’s solutions in enterprise environments is its recently announced EX63000E appliance. This appliance has 58% more capacity than its predecessor, allowing for a 63TB full backup. Up to thirty-two (32) EX63000E appliances can be combined in a single scale-out system to allow for a 2PB full backup. Per ExaGrid’s published performance benchmarks, each EX63000E appliance has a maximum ingest rate of 13.5TB/hr, enabling thirty-two (32) EX63000Es combined in a single system to achieve a maximum ingest rate of 432TB/hr.
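As a sanity check, the scale-out figures above multiply out as stated:

```python
appliances = 32
full_backup_tb = 63        # per-appliance full backup capacity
ingest_tb_per_hr = 13.5    # per-appliance maximum ingest rate

system_full_backup_tb = appliances * full_backup_tb      # 2016 TB, i.e. roughly 2 PB
system_ingest_tb_per_hr = appliances * ingest_tb_per_hr  # 432 TB/hr

print(system_full_backup_tb, system_ingest_tb_per_hr)  # 2016 432.0
```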

Hyperconverged infrastructure solutions are poised to reshape enterprise data center landscapes, with solutions from Nutanix currently leading the way. As this data center transformation occurs, organizations need to make sure that the data protection solutions they put in place offer the same ease of management and scalability that the primary hyperconverged solution provides. Using Comtrade Software HYCU and ExaGrid, organizations get the affordable yet elegant data protection solution they seek for this next-generation data center architecture.




Differentiating between the Dell EMC Data Domain and ExaGrid EX Systems

Deduplication backup target appliances remain a critical component of the data protection infrastructure for many enterprises. While storing protected data in the cloud may be fine for very small businesses or even as a final resting place for enterprise data, deduplication backup target appliances continue to function as enterprises’ primary backup target and primary source for recovering data. It is for these reasons that enterprises frequently turn to deduplication backup target appliances from Dell EMC and ExaGrid to meet these specific needs, which are covered in a recent DCIG Pocket Analyst Report.

The Dell EMC Data Domain and ExaGrid families of deduplication backup target appliances appear on the short lists for many enterprises. While both these providers offer systems for small, midsize, and large organizations, the underlying architecture and features on the systems from these two providers make them better suited for specific use cases.

Their respective data center efficiency, deduplication, networking, recoverability, replication, and scalability features (to include recently announced enhancements) provide insight into the best use cases for the systems from these two vendors.

Purpose-built deduplication systems from both Dell EMC Data Domain and ExaGrid have widespread appeal as they expedite backups, increase backup and recovery success rates, and simplify existing backup environments. They offer appliances in various physical configurations to meet the specific backup needs of small, midsize, and large enterprises while providing virtual appliances that can run in private clouds, public clouds, or virtualized remote and branch offices.

Their systems significantly reduce backup data stores and offer concurrent backup and replication. They also limit the number of backup streams, display real-time deduplication ratios, and perform capacity analysis and trending. Despite the similarities that the systems from these respective vendors share, six differences exist between them in their underlying features that impact their ability to deliver on key end-user expectations. These include:

  1. Data center efficiency to include how much power they use and the size of their data center footprint.
  2. Data reduction to include what deduplication options they offer and how they deliver them.
  3. Networking protocols to include connectivity for NAS and SAN environments.
  4. Recoverability to include how quickly, how easily, and where recoveries may be performed.
  5. Replication to include copying data offsite as well as protecting data in remote and branch offices.
  6. Scalability to include total amount of capacity as well as ease and simplicity of scaling.

DCIG is pleased to make a recent DCIG Pocket Analyst Report that compares these two families of deduplication backup target appliances available for a complimentary download for a limited time. This succinct, 4-page report includes a detailed product matrix as well as insight into these six differentiators between these two solutions and which one is best positioned to deliver on these six key data center considerations.

To access and download this report at no charge for a limited time, simply follow this link and complete the registration form on the landing page.




Deduplication Still Matters in Enterprise Clouds as Data Domain and ExaGrid Prove

Technology conversations within enterprises increasingly focus on the “data center stack” with an emphasis on cloud enablement. While I agree with this shift in thinking, one can too easily overlook the merits of individual underlying technologies when considering only the “Big Picture.” Such is happening with deduplication technology. A key enabler of enterprise archiving, data protection, and disaster recovery solutions, deduplication is delivered by vendors such as Dell EMC and ExaGrid in different ways that, as DCIG’s most recent 4-page Pocket Analyst Report reveals, make each product family better suited for specific use cases.

For too many years, it seemed, enterprise data centers focused too much on the vendor name on the outside of the box as opposed to what was inside the box: the data and the applications. Granted, part of the reason for their focus on the vendor name is that they wanted to demonstrate they had adopted and implemented the best available technologies to secure the data and make it highly available. Further, some of the emerging technologies necessary to deliver a cloud-like experience with the needed availability and performance characteristics did not yet exist, were not yet sufficiently mature, or were not available from the largest vendors.

That situation has changed dramatically. Now the focus is almost entirely on software that provides enterprises with cloud-like experiences, enabling them to more easily and efficiently manage their applications and data. While this change is positive, enterprises should not lose sight of the technologies that make up their emerging data center stack, as those technologies are not all equally equipped to deliver in the same way.

A key example is deduplication. While this technology has existed for years and has become very mature and stable during that time, the options by which enterprises can implement it and the benefits they will realize from it vary greatly. The deduplication solutions from Dell EMC Data Domain and ExaGrid illustrate these differences very well.

DCIG Pocket Analyst Report Compares Dell EMC Data Domain and ExaGrid Product Families

Deduplication systems from both Dell EMC Data Domain and ExaGrid have widespread appeal as they expedite backups, increase backup and recovery success rates, and simplify existing backup environments. They also both offer appliances in various physical configurations to meet the specific backup needs of small, midsize, and large enterprises while providing virtual appliances that can run in private clouds, public clouds, or virtualized remote and branch offices.

However, their respective systems also differ in key areas that will impact the overall effectiveness these systems will have in the emerging cloud data stacks that enterprises are putting in place. The six areas in which they differ include:

  1. Data center efficiency
  2. Deduplication methodology
  3. Networking protocols
  4. Recoverability
  5. Replication
  6. Scalability

The most recent 4-page DCIG Pocket Analyst Report analyzes these six attributes of the systems from these two providers and compares the underlying features that deliver on them. Further, this report identifies which product family has the advantage in each area and provides a feature comparison matrix to support these claims.

This report provides the key insight in a concise manner that enterprises need to make the right choice in deduplication solutions for their emerging cloud data center stack. This report may be purchased for $19.95 at TechTrove, a new third-party site that hosts and makes independently developed analyst content available for sale.

Cloud-like data center stacks that provide application and data availability, mobility, and security are rapidly becoming a reality. But as enterprises adopt these new enterprise clouds, they ignore or overlook at their own peril technologies such as deduplication that make up these stacks, as the underlying technologies they implement can directly impact the overall efficiency and effectiveness of the cloud they are building.




The End Game for Cloud Data Protection Appliances is Recovery


The phrase “Cloud Data Protection Appliance” is included in the name of DCIG’s forthcoming Buyer’s Guide, but the end game of each appliance covered in that Guide is squarely recovery. While successful recoveries have theoretically always been the objective of backup appliances, vendors too often only paid lip service to that ideal, as most of their new product features centered on providing better means of doing backups. Recent technology advancements have flipped this premise on its head.

Multiple reasons exist as to why these appliances can focus more fully on this end game of recovery though five key ones have emerged in the last few years that have enabled it. These include:

  1. The low price point of using disk as a backup target (as opposed to tape)
  2. The general availability of private and public cloud providers
  3. The use of deduplication to optimize storage capacity
  4. The widespread availability of snapshot technologies on hypervisors, operating systems, and storage arrays
  5. The widespread enterprise adoption of hypervisors like VMware ESX and Microsoft Hyper-V, as well as the growing adoption of container technologies such as Docker and Kubernetes

While there are other contributing technologies, these five more so than the others give these appliances new freedom to deliver on backup’s original promise: successful recoveries. By way of example:

  • The backup appliance is used for local application recoveries. Over 80 percent of the appliances that DCIG evaluated now support the instant recovery of an application on a virtual machine on the appliance. This frees enterprises to start the recovery of the application on the appliance itself before moving the application to its primary host. Enterprises can even opt to recover and run the application on the appliance for an extended time for test and development or to simply host the application until the production physical machine on which the application resides recovers.
  • Application conversions and migrations. All these appliances support the backup of virtual machines and their recovery as virtual machines, but fully 88 percent of the software on these appliances supports the backup of a physical machine and its recovery to a virtual machine. This feature gives enterprises access to a tool that they can use to migrate applications from physical to virtual machines as a matter of course or in the event of a disaster. Further, 77 percent of them support recovery of virtual machines to physical machines. While that may seem counterintuitive, not every application runs well on a virtual machine, and some may need functionality found only when running on a physical machine.
  • Location of backup data. By storing data in the cloud (even if only using it as a backup target), enterprises know where their backup data is located. This is not trivial. Too many enterprises do not even know exactly what physical gear they have in their data center, much less where their data is located. While many enterprises still need to concern themselves with various international regulations governing the data’s physical location when storing data in the cloud, at least they know with which cloud provider they stored the data and how to access it. As anyone who uses or has used tape may recall, tracking down lost, misplaced, or even existing tapes can quickly become like trying to find a needle in a haystack. Even using disk is not without its challenges. Many enterprises may have to use multiple disk targets to store their backup data, and identifying exactly which disk device holds what data may not be as simple as it sounds.
  • Recovering in the cloud. This end game of recovering in the cloud, whether it is recovering a single file, a single application, or an entire data center, may appeal to enterprises more so than any other option on these appliances. The ability to virtually create and have access to a secondary site from which they can recover data or even perform a disaster recovery and run one or more applications removes a dark cloud of unspoken worry that hangs over many enterprises today. The fact that they can use that recovery in the cloud as a stepping stone to potentially hosting applications or their entire data center in the cloud is an added benefit.

Enterprises should be very clear as to what opportunities today’s cloud data protection appliances offer them. Near term, these appliances provide a means to easily connect to one or more cloud providers, get backup data offsite, and even recover data or applications in the cloud. But the long-term ramifications of using these appliances to store data in the cloud are much more significant. They represent the bridge to recovering and even potentially hosting more applications and data with one or more cloud providers. Organizations should therefore give this end game of recovery specific attention both when they choose a cloud data protection appliance and when they choose the cloud provider(s) to which the appliance connects.

To receive regular updates when blog entries like this are posted on DCIG’s website, follow this link to subscribe to DCIG’s newsletter.




Full Potential of Disk-based Backup Finally Becoming a Reality with Cohesity DataPlatform 4.0

Organizations have come to the realization that using disk as a backup storage target does more than simply solve backup problems. It creates entirely new possibilities for recovery. But as they recognize these new opportunities, they also see the need for backup solutions that offer them new options for application availability and recoverability backed by ease of management. The latest DataPlatform 4.0 release from Cohesity moves organizations closer to this ideal.

Using tape as a primary backup target is largely dead, but the best practices, technologies, and possibilities to capitalize on using disk as a backup target and as a source for recoveries are still emerging. For instance, secondary storage solutions that only offer “scale-up” architectures create management problems. Additionally, organizations want to do more with their long-neglected second or third copies of data, so they want to use these secondary storage solutions to host applications or VMs for the purposes of recovery.

Cohesity’s latest DataPlatform 4.0 release illustrates the potential of what the current generation of secondary storage targets can do for organizations to improve their abilities to recover while simultaneously making it easier for them to manage and scale their infrastructure.

Source: Cohesity

Consider:

  • Integration with the Pure Storage FlashArray//M series. Making snapshots of applications and/or virtual machines (VMs) on your Pure Storage production arrays is a great approach to data protection and instant recovery, until one starts to run out of capacity on these arrays. Aggravating this situation, flash costs money. Through its recently announced integration with Pure Storage, organizations can seamlessly move snapshots via SAN or NAS protocols from Pure Storage FlashArray//M arrays to the Cohesity DataPlatform. This frees up available capacity on Pure Storage arrays while making it possible for organizations to retain snapshots for longer periods of time.
  • More usable capacity from the same amount of raw capacity. Everyone ideally wants something for nothing, and Cohesity’s latest DataPlatform 4.0 release delivers on this ideal. Previously, it mirrored data between disk drives for data redundancy. Using its new erasure coding technology, organizations can achieve 40% or more storage efficiency when compared to its previous generation product. Further, organizations can achieve this increase in storage capacity by installing this latest software release on their existing platform.
  • New options for remote and branch office locations. Remote and branch offices are not going away anytime soon, yet organizations do not have any more time to manage and protect them. To provide them with higher levels of protection while reducing the time required to manage them, Cohesity introduced its smaller C2100 appliance and rolled out a Virtual Edition of its software. The Virtual Edition can be used on traditional backup servers to support current backup and recovery operations or can even operate in the cloud, where it can serve as a backup target.
  • Your choice of cloud providers. The Cohesity Virtual Edition can operate with multiple cloud providers, including Microsoft Azure and Amazon. In this way, organizations can extend their Cohesity deployment into the cloud to provide instant backup and recovery and ensure uninterrupted operations.
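The usable-capacity gain from replacing mirroring with erasure coding can be illustrated with simple arithmetic. The 5+2 stripe layout below is a hypothetical assumption for illustration; Cohesity's actual layout is not stated here:

```python
def usable_fraction_mirror(copies: int = 2) -> float:
    # N-way mirroring stores every block `copies` times
    return 1.0 / copies

def usable_fraction_ec(data: int, parity: int) -> float:
    # Erasure coding stores `data` data stripes plus `parity` parity stripes
    return data / (data + parity)

mirror = usable_fraction_mirror(2)          # 0.50 of raw capacity usable
ec = usable_fraction_ec(5, 2)               # ~0.714 of raw capacity usable
gain = ec / mirror - 1
print(f"{gain:.0%} more usable capacity")   # 43% more usable capacity
```

Under this assumed layout, the same raw disks yield roughly 40% more usable capacity than 2-way mirroring, consistent with the figure cited above, while still tolerating two simultaneous drive failures.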

Organizations are now quite acquainted with using disk as a backup target, but many still find themselves on the outside looking in when it comes to realizing disk’s full potential, such as offering fast, simple recoveries delivered at enterprise scale. The Cohesity DataPlatform 4.0 changes that perspective. Cohesity’s use of hyperconverged technology as part of a secondary storage offering solves the key pain points organizations have for quickly recovering either locally or in the cloud while simultaneously making their backups easier to manage.




DCIG 2016-17 Small/Midsize Enterprise Integrated Backup Appliance Buyer’s Guide Now Available

DCIG is pleased to announce the availability of the DCIG 2016-17 Small/Midsize Enterprise Integrated Backup Appliance Buyer’s Guide developed from DCIG’s backup appliance body of research.

Integrated backup appliances address enterprise data protection challenges by pre-integrating backup software with self-contained, purpose-built backup appliances. Because integrated backup appliances include backup software, they displace both legacy backup hardware and legacy backup software.

The DCIG 2016-17 Small/Midsize Enterprise Integrated Backup Appliance Buyer’s Guide weights, scores and ranks more than 100 features of twenty-nine (29) products from seven (7) different providers. Using ranking categories of Recommended, Excellent and Good, this Buyer’s Guide offers much of the information an organization should need to make a highly informed decision as to which integrated backup appliance will suit their needs.
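A weighting-and-ranking scheme like the one described can be sketched as follows; the feature names, weights, and category thresholds here are entirely hypothetical and not DCIG's actual methodology:

```python
def score_product(features: dict, weights: dict) -> float:
    """Weighted sum of per-feature scores (a 0-5 scale is assumed)."""
    return sum(weights[f] * s for f, s in features.items())

def rank(score: float, max_score: float) -> str:
    # Hypothetical thresholds for the three ranking categories
    pct = score / max_score
    if pct >= 0.80:
        return "Recommended"
    if pct >= 0.60:
        return "Excellent"
    return "Good"

weights = {"dedup": 3.0, "cloud": 2.0, "replication": 2.5}
max_score = 5 * sum(weights.values())          # best possible score: 37.5
product = {"dedup": 5, "cloud": 4, "replication": 4}
s = score_product(product, weights)            # 15 + 8 + 10 = 33.0
print(rank(s, max_score))                      # Recommended
```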

Each backup appliance included in the DCIG 2016-17 Small/Midsize Enterprise Integrated Backup Appliance Buyer’s Guide meets the following criteria:

  • Must be available as a physical appliance that includes backup and recovery software as a combined bundle under one SKU
  • Provides features and capacities appropriate for small/midsize enterprises
  • Must store backup data on the appliance via on-premises DAS, NAS, or SAN-attached storage
  • May connect to a public storage cloud
  • Sufficient information provided to reach meaningful conclusions
  • Must be formally announced or generally available for purchase on July 1, 2016

DCIG’s succinct analysis provides insight into the state of the integrated backup appliance marketplace. The Buyer’s Guide identifies the specific benefits organizations can expect to achieve using an integrated backup appliance, and key features organizations should be aware of as they evaluate products. It also provides brief observations about the distinctive features of each product. Ranking tables enable organizations to get an “at-a-glance” overview of the products; while DCIG’s standardized one-page data sheets facilitate side-by-side comparisons, assisting organizations to quickly create a short list of products that may meet their requirements.

End users registering to access this report via the DCIG Analysis Portal also gain access to the DCIG Interactive Buyer’s Guide (IBG). The IBG enables organizations to take the next step in the product selection process by generating custom reports, including comprehensive side-by-side feature comparisons of the products in which the organization is most interested.




DCIG 2016-17 Integrated Backup Appliance Buyer’s Guide Now Available

DCIG is pleased to announce the availability of the 2016-17 Integrated Backup Appliance Buyer’s Guide developed from DCIG’s backup appliance body of research.

Integrated backup appliances address enterprise data protection challenges by pre-integrating backup software with self-contained, purpose-built backup appliances. Because integrated backup appliances include backup software, they displace both legacy backup hardware and legacy backup software.

The DCIG 2016-17 Integrated Backup Appliance Buyer’s Guide ranks more than 100 features of thirty-three (33) products from ten (10) different providers. Using ranking categories of Recommended, Excellent and Good, this Buyer’s Guide offers much of the information an organization should need to make a highly informed decision as to which integrated backup appliance will suit their needs.

Each backup appliance included in the DCIG 2016-17 Integrated Backup Appliance Buyer’s Guide meets the following criteria:

  • Must be available as a physical appliance that includes backup and recovery software as a combined bundle under one SKU
  • Must store backup data on the appliance via on-premises DAS, NAS, or SAN-attached storage
  • May connect to a public storage cloud
  • Sufficient information provided to reach meaningful conclusions
  • Must be formally announced or generally available for purchase on July 1, 2016

DCIG’s succinct analysis provides insight into the state of the integrated backup appliance marketplace. The Buyer’s Guide identifies the specific benefits organizations can expect to achieve using an integrated backup appliance, and key features organizations should be aware of as they evaluate products. It also provides brief observations about the distinctive features of each product. Ranking tables enable organizations to get an “at-a-glance” overview of the products; while DCIG’s standardized one-page data sheets facilitate side-by-side comparisons, assisting organizations to quickly create a short list of products that may meet their requirements.

End users registering to access this report via the DCIG Analysis Portal also gain access to the DCIG Interactive Buyer’s Guide (IBG). The IBG enables organizations to take the next step in the product selection process by generating custom reports, including comprehensive side-by-side feature comparisons of the products in which the organization is most interested.




Modern Backup Appliances Function as Integrated Backup Appliance, Deduplicating Backup Appliance or Both; Interview with System Architect, Fidel Michieli, Part 4

When most organizations look at backup appliances, they have to segregate them into one of two categories: those that function as integrated backup appliances (which include backup software) and those that function as target-based deduplicating backup appliances. Cohesity effectively blurs these lines by giving organizations the option to use its appliances to satisfy either or both of these use cases in their environment. In this fourth and final installment in my interview series with system architect Fidel Michieli, he describes how he leverages Cohesity’s backup appliance both for VM protection and as a deduplicating backup target for his NetBackup backup software.

Jerome:  Can you provide some insight into how much data you protect in your environment and how you are using Cohesity to back it up and protect it?

Fidel:  We have 2.8 PB of protected data, which has a long retention requirement. We use Cohesity as a backup target for Veritas NetBackup to protect data stored on our traditional NAS arrays with NFS and CIFS shares. Currently there is no way for Cohesity to back those file shares up, so we leverage NetBackup to protect that data and use Cohesity as a backup target.

Jerome:  What percentage of your current environment is protected by NetBackup?

Fidel: I would say about seven percent of our environment which includes those CIFS shares and a couple of physical servers. In those cases, we use Cohesity as a backup target.

One feature of Cohesity that is very helpful is its views concept, which is very cool. Cohesity has three deduplication settings:

  1. Inline deduplication
  2. No data deduplication
  3. Post-process deduplication

Sometimes I do not want to deduplicate data because deduplication is very processor intensive and I know that not all of my data deduplicates and compresses well. So Cohesity gives me the option to turn these features off; I have never heard of any other storage device on which you can optionally turn off deduplication and compression if you do not want them.

The other setting is post-process deduplication. That frees the appliance to ingest backup data quickly and defer deduplication until later.

I prefer to do inline deduplication for the majority of my data as it comes in but, if my backup windows do not allow for it or the data does not deduplicate well, it is nice to have the options to do post process deduplication or even turn deduplication off.
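The tradeoff Fidel describes can be sketched with a toy model: inline deduplication shrinks data as it lands but costs CPU during the backup window, while post-process (or no) deduplication ingests at full speed. All numbers below, including the inline throughput penalty, are hypothetical assumptions for illustration, not Cohesity figures.

```python
# Back-of-the-envelope model of the three deduplication settings
# described above. Every figure here is a made-up assumption.

def backup_window_hours(data_tb, ingest_tb_per_hr, inline_penalty=0.6):
    """Estimate the backup window (hours) for each deduplication mode.

    inline_penalty: fraction of raw ingest speed retained when
    deduplicating inline (assumed value, not a vendor figure).
    """
    return {
        "inline": data_tb / (ingest_tb_per_hr * inline_penalty),
        "post_process": data_tb / ingest_tb_per_hr,  # dedupe deferred
        "none": data_tb / ingest_tb_per_hr,          # no dedupe at all
    }

# Hypothetical: 50 TB nightly backup, 10 TB/hr raw ingest rate.
windows = backup_window_hours(data_tb=50, ingest_tb_per_hr=10)
for mode, hours in windows.items():
    print(f"{mode:>12}: {hours:.1f} hours")
```

Under these assumed numbers, post-process and no-dedupe modes finish the ingest faster, which matches Fidel's point that they are the options to reach for when the backup window is tight or the data does not reduce well.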

Jerome:  Do you schedule the time post process deduplication occurs and how much do you use it?

Fidel:  We do not use the no deduplication option though it is a very cool feature. We see basically what we would expect from using post process deduplication. Backups go really fast as Cohesity ingests the data really quickly.

When we use inline deduplication, we experience tremendous dedupe and compression rates that allow us to store large amounts of data on Cohesity, and the recoveries are awesome. Basically, you tell Cohesity to recover a VM and it mounts the VM to vCenter and spins up the VM from its location on the Cohesity appliance. You do not even need to move data in order to recover it. That’s very, very powerful.

We have started leveraging this capability for our test and dev environment. We have an identity management application from Dell that we are rolling out, and our architect was having a lot of issues deploying it because it is very closely tied to Active Directory (AD). Since he does not have a real test environment, we use Cohesity to take and store a backup of our production AD. Then we restore it on Cohesity for test and development so he can connect to this test DR environment using yesterday’s backup. We have a script that changes the IPs to make it kosher for him to use, plus we have a process of refreshing that environment weekly.

Jerome:  Do you back up your VMs using Cohesity?

Fidel:  NetBackup is only used for physical servers and CIFS shares, neither of which Cohesity protects right now. However, by moving most of our VM backups to Cohesity, our licensing costs for NetBackup have dropped by over 97 percent.

Jerome:  Was the amount saved enough to pay for the Cohesity solution?

Fidel:  Yes. When we originally architected our backup environment, we used raw disk in the design, and its cost per TB could not compete with Cohesity, which achieves deduplication ratios of 20:1. That is pretty amazing considering we could get half a PB of raw storage for around 50 grand. Even taking those numbers into consideration, raw disk could not compete with Cohesity, and that fails to take into consideration the cooling, power, and rack space costs associated with all that extra disk.
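The arithmetic behind Fidel's comparison is straightforward: roughly $50K for 0.5 PB of raw disk works out to about $100 per TB, but without deduplication every logical TB of backups consumes a full physical TB, while 20:1 deduplication cuts the physical footprint twentyfold. The figures (2.8 PB protected, $50K per 0.5 PB, 20:1) come from the interview; the rest is simple arithmetic, sketched here for illustration.

```python
# Rough storage-footprint math behind the interview's cost comparison.
# Input figures come from the interview; this is illustrative only.

RAW_COST_PER_TB = 50_000 / 500   # ~$100/TB: 0.5 PB raw for ~$50K
DEDUP_RATIO = 20                 # 20:1, as cited by Fidel

def physical_tb_needed(logical_tb, dedup_ratio=1):
    """Physical capacity required to hold `logical_tb` of backup data."""
    return logical_tb / dedup_ratio

logical = 2_800                                      # ~2.8 PB protected
raw_tb = physical_tb_needed(logical)                 # no deduplication
dedup_tb = physical_tb_needed(logical, DEDUP_RATIO)  # 20:1 deduplication

print(f"raw disk:    {raw_tb:,.0f} TB physical, ~${raw_tb * RAW_COST_PER_TB:,.0f}")
print(f"20:1 dedupe: {dedup_tb:,.0f} TB physical")
```

Even before counting the cooling, power, and rack space Fidel mentions, holding 2.8 PB on raw disk would take twenty times the physical capacity that a 20:1 deduplicating appliance needs.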

In part 1 of this interview series, Fidel shares the challenges that his company faced with its existing backup configuration as well as the struggles it encountered in identifying a backup solution that scaled to meet a dynamically changing and growing environment.

In part 2 of this interview series, Fidel shares how he gained a comfort level with Cohesity prior to rolling it out enterprise-wide in his organization.

In part 3 of this interview series, Fidel shares some of the challenges he encountered while rolling out Cohesity and the steps that Cohesity took to address them.




DCIG 2016-17 Hybrid Cloud Backup Appliance Buyer’s Guide Now Available

DCIG is pleased to announce the availability of the 2016-17 Hybrid Cloud Backup Appliance Buyer’s Guide developed from the backup appliance body of research. Other Buyer’s Guides based on this body of research include the recent DCIG 2016-17 Deduplicating Backup Appliance Buyer’s Guide and the forthcoming 2016-17 Integrated Backup Appliance Buyer’s Guide.

As core business processes become digitized, the ability to keep services online and to rapidly recover from any service interruption becomes a critical need. Given the growth and maturation of cloud services, many organizations are exploring the advantages of storing application data with cloud providers and even recovering applications in the cloud.

Hybrid cloud backup appliances (HCBA) are deduplicating backup appliances that include pre-integrated data protection software and integration with at least one cloud-based storage provider. An HCBA’s ability to replicate backups to the cloud supports disaster recovery needs and provides essentially infinite storage capacity.

The DCIG 2016-17 Hybrid Cloud Backup Appliance Buyer’s Guide weights, scores and ranks more than 100 features of twenty-three (23) products from six (6) different providers. Using ranking categories of Recommended, Excellent and Good, this Buyer’s Guide offers much of the information an organization should need to make a highly informed decision as to which hybrid cloud backup appliance will suit their needs.

Each backup appliance included in the DCIG 2016-17 Hybrid Cloud Backup Appliance Buyer’s Guide meets the following criteria:

  • Is available as a physical appliance
  • May also ship as a virtual appliance
  • Includes backup and recovery software that enables seamless integration into an existing infrastructure
  • Stores backup data on the appliance via on-premise DAS, NAS, or SAN-attached storage
  • Enables connectivity with at least one cloud-based storage provider for remote backups and long-term retention of backups in a secure/encrypted fashion
  • Provides the ability to access the cloud-based backup images from more than one geographically dispersed appliance
  • Was formally announced or generally available for purchase by July 1, 2016

It is within this context that DCIG introduces the DCIG 2016-17 Hybrid Cloud Backup Appliance Buyer’s Guide. DCIG’s succinct analysis provides insight into the state of the hybrid cloud backup appliance marketplace. The Buyer’s Guide identifies the specific benefits organizations can expect to achieve using a hybrid cloud backup appliance, and key features organizations should be aware of as they evaluate products. It also provides brief observations about the distinctive features of each product. Ranking tables enable organizations to get an “at-a-glance” overview of the products; while DCIG’s standardized one-page data sheets facilitate side-by-side comparisons, assisting organizations to quickly create a short list of products that may meet their requirements.

End users registering to access this report via the DCIG Analysis Portal also gain access to the DCIG Interactive Buyer’s Guide (IBG). The IBG enables organizations to take the next step in the product selection process by generating custom reports, including comprehensive side-by-side feature comparisons of the products in which the organization is most interested.

By using the DCIG Analysis Portal and applying the hybrid cloud backup appliance criteria to the backup appliance body of research, DCIG analysts were able to quickly create a short list of products that meet these requirements, which was then, in turn, used to create this Buyer’s Guide Edition. DCIG plans to use this same process to create future Buyer’s Guide Editions that further examine the backup appliance marketplace.

Additional information about each buyer’s guide edition, including a download link, is available on the DCIG Buyer’s Guides page.




Data Visualization, Recovery, and Simplicity of Management Emerging as Differentiating Features on Integrated Backup Appliances

Enterprises now demand higher levels of automation, integration, simplicity, and scalability from every component deployed into their IT infrastructure, and the integrated backup appliances found in DCIG’s forthcoming Buyer’s Guide Editions covering this category are a clear product of those expectations. Intended for organizations that want to protect applications and data and keep them behind corporate firewalls, these backup appliances come fully equipped from both hardware and software perspectives to do so.

Once largely assembled and configured by either IT staff or value added resellers (VARs), integrated backup appliances have gone mainstream and are available for use in almost any size organization. By bundling together both hardware and software, large enterprises get the turnkey backup appliance solution that just a few years ago was primarily reserved for smaller organizations. In so doing, large enterprises eliminate the days, weeks, or even months they previously had to spend configuring and deploying these solutions into their infrastructure.

The evidence of the demand for backup appliances at all levels of the enterprise is made plain by the providers who bring them to market. Backup appliances were once the domain of providers such as STORServer and Unitrends. Now “software only” companies such as Commvault and Veritas have responded to the demand for turnkey backup appliance solutions, with both offering their own backup appliances under their respective brand names.


Commvault Backup Appliance


Veritas NetBackup Appliance

In so doing, any size organization may get one of the most feature-rich enterprise backup software solutions on the market, whether it is IBM Tivoli Storage Manager (STORServer), Commvault (Commvault and STORServer), Unitrends, or Veritas NetBackup, delivered to them as a backup appliance. Yet while traditional all-software providers have entered the backup appliance market, behind the scenes new business demands are driving further changes in backup appliances that organizations should consider as they contemplate future backup appliance acquisitions.

  • First, organizations expect successful recoveries. A few years ago, the concept of all backup jobs completing successfully was enough to keep everyone happy and giving high-fives to one another. No more. Organizations recognize that they have reliable backups residing on a backup appliance and that these appliances may largely sit idle during off-backup hours. This gives the enterprise some freedom to do more with these backup appliances during these periods, such as testing recoveries, recovering applications on the appliance itself, or even presenting these backup copies of data to other applications to use as sources for internal testing and development. DCIG found that a large number of backup appliances support one or more vCenter Instant Recovery features, and the emerging crop of backup appliances can also host virtual machines and recover applications on them.
  • Second, organizations want greater visibility into their data to justify business decisions. The amount of data residing in enterprise backup repositories is staggering. Yet the lack of value that organizations derive from that stored data combined with the potential risk it presents to them by retaining it is equally staggering. Features that provide greater visibility into the metadata of these backups which then analyze it and help turn it into measurable value for the business are already starting to find their way onto these appliances. Expect these features to become more prevalent in the years to come.
  • Third, enterprises want backup appliances to expand their value proposition. Backup appliances are already easy to deploy, but maintaining and upgrading them over time or deploying them for other use cases grows more complicated. Emerging providers such as Cohesity, which is making its first appearance in DCIG Buyer’s Guides as an integrated backup appliance, directly address these concerns. Available as a scale-out backup appliance that can function as a hybrid cloud backup appliance, a deduplicating backup appliance, and/or an integrated backup appliance, Cohesity provides an example of how an enterprise can more easily scale and maintain a backup appliance over time while retaining the flexibility to use it internally in multiple different ways.

The forthcoming DCIG 2016-17 Integrated Backup Appliance Buyer’s Guide Editions highlight the most robust and feature-rich integrated backup appliances available on the market today. As such, organizations should consider the backup appliances covered in these Buyer’s Guides as having many of the features they need to protect both their physical and virtual environments. Further, a number of these appliances give them early access to the set of features that will position them to meet their next set of recovery challenges, satisfy rising expectations for visibility into their corporate data, and simplify their ongoing management so they may derive additional value from it.




SaaS Provider Pulls Back the Curtain on its Backup Experience with Cohesity; Interview with System Architect, Fidel Michieli, Part 3

Usually when I talk to backup and system administrators, they willingly talk about how great a product installation was. But it then becomes almost impossible to find anyone who wants to comment about what life is like after their backup appliance is installed. This blog entry represents a bit of an anomaly in that someone willingly pulled back the curtain on what the experience was like after the appliance was installed. In this third installment in my interview series, system architect Fidel Michieli describes how the implementation of Cohesity went in his environment and how Cohesity responded to issues that arose.

Jerome:  Once you had Cohesity deployed in your environment, can you provide some insights into how it operated and how upgrades went?

Fidel:  We have been through the upgrade process and the process of adding nodes twice. Those were the scary milestones that we did not test during the proof of concept (POC). Well, we did cover the upgrade process, but we did not cover adding nodes.

Jerome:  How did those upgrades go? Seamlessly?

Fidel:  The fact that our backup windows are small and we can run during the night essentially leaves all of our backup infrastructure idle during the day. If we take down one node at a time, we barely notice as we do not have anything running. But as a software company ourselves, we expect there to be a few bumps along the way, which we encountered.

Jerome:  Can you describe a bit about the “bumps” that you encountered?

Fidel:  We filled up the Cohesity cluster much faster than we expected, which sent its metadata sprawling. We went to 90-92 percent full very quickly, so we had to add nodes in order to get back the capacity that was being taken up by its metadata.

Jerome:  Do you control how much metadata the Cohesity cluster creates?

Fidel:  The metadata size is associated with the amount of deduplicated data the cluster holds. As that grew, we started seeing some services restart and we received alerts about services restarting.

Jerome:  You corrected the out of capacity condition by adding more nodes?

Fidel:  Temporarily, yes. Cohesity recognized we were not in a stable state and they did not want us to have a problem, so they shipped us eight more nodes to create a new cluster. [Editor’s Note: Cohesity subsequently issued a new software release to store dedupe metadata more efficiently, which has since been implemented at this SaaS provider’s site.]

Jerome:  That means a lot that Cohesity stepped up to the plate to support its product.

Fidel:  It did. But while it was great that they shipped us the new cluster, I did not have any additional Ethernet ports to connect these new nodes, as we did not have the additional port count in our infrastructure. To resolve this, Cohesity agreed to ship us the networking gear we needed. They talked to my network architect, found out what networking gear we liked, agreed to buy it, and then shipped the gear to us overnight.

Further, my Cohesity system engineer calls me every time I open a support ticket and shows up here. He replies and makes sure that my ticket moves through the support queue. He came down to install the original Cohesity cluster and the upgrades to the cluster, which we have been through twice already. The support experience has been fantastic, and Cohesity has taken all of my requests into consideration as it has released software upgrades to its product, which is great.

Jerome:  Can you share one of your requests that Cohesity has implemented into its software?

Fidel:  We needed to have connectivity to Iron Mountain’s cloud. Cohesity got that certified with Iron Mountain so it works in a turnkey fashion. We also needed support for SQL Server, which Cohesity had on its road map at the time and which it recently delivered. We also needed Cohesity to certify support for Exchange 2016, so they expedited support for that and it is also now certified.

In part 1 of this interview series, Fidel shares the challenges that his company faced with its existing backup configuration as well as the struggles it encountered in identifying a backup solution that scaled to meet a dynamically changing and growing environment.

In part 2 of this interview series, Fidel shares how he gained a comfort level with Cohesity prior to rolling it out enterprise-wide in his organization.

In part 4 of this interview series Fidel shares how Cohesity functions as both an integrated backup software appliance and a deduplicating target backup appliance in his company’s environment.




SaaS Provider Decides to Roll Out Cohesity for Backup and DR; Interview with System Architect, Fidel Michieli, Part 2

Evaluating product features, comparing prices, and doing proofs of concept are important steps in the process of adopting almost any new product. But once one completes those steps, the time arrives to roll the product out and implement it. In this second installment of my interview series with system architect Fidel Michieli, he shares how his company gained a comfort level with Cohesity for backup and disaster recovery (DR) and how broadly it decided to deploy the product in its primary and secondary data centers.

Jerome: How did you come to gain a comfort level for introducing Cohesity into your production environment?

Fidel: We first did a proof of concept (POC).  We liked what we saw about Cohesity but we had a set of target criteria based on the tests we had previously run using our existing backup software and the virtual machine backup software. As such, we had a matrix of what numbers were good and what numbers were bad. Cohesity’s numbers just blew them out of the water.

Jerome:  How much faster was Cohesity than the other solutions you had tested?

Fidel: Probably 250 percent or more. Cohesity does a metadata snapshot where it essentially uses VMware’s technology, but the way that it ingests the data and the amount of compute that it has available to do the backups creates the difference, if that makes sense. We really liked the performance for both backups and restores.

We had two requirements. On the Exchange side we needed to do granular message restores. Cohesity was able to help us achieve that objective by using an external tool that it licensed and which works. Our second objective was to get out of the tape business. We wanted to go to cloud. Unfortunately for us we are constrained to a single vendor. So we needed to work with that vendor.

Jerome: You mean single cloud vendor?

Fidel: Well it’s a tape vendor, Iron Mountain. We are constrained to them by contract. If we were going to shift to the cloud, it had to be to Iron Mountain’s cloud. But Cohesity, during the POC level, got the data to Iron Mountain.

Jerome: How many VMs?

Fidel: We probably have around 1,400 in our main data center and about 120 hosts. We have a two-site disaster recovery (DR) strategy with a primary and a backup. Obviously it was important to have replication for DR. That was part of the plan before the 3-2-1 rule of backup. We wanted to cover that.

Jerome: So you have Cohesity at both your production and DR sites replicating between them?

Fidel: Correct.

Jerome: How many Cohesity nodes at each site?

Fidel: We have eight nodes at each site. After the POC we started to recognize a lot of the efficiencies from a management perspective. We knew that object storage was the way we wanted to go, the obvious reason being the metadata.

What the metadata means to us is that we can have a lot of efficiencies sitting on top of our data. When you are analyzing or creating objects on your metadata, you can more efficiently manage your data. You can create objects that do compression, objects that do deduplication, objects that do analysis, and objects that hold policies. It’s more of a software-defined data approach, if you will. With that metadata and the object storage behind it, our maintenance windows and backup windows started getting lower and lower.

In part 1 of this interview series, Fidel shares the challenges that his company faced with its existing backup configuration as well as the struggles it encountered in identifying a backup solution that scaled to meet a dynamically changing and growing environment.

In part 3 of this interview series, Fidel shares some of the challenges he encountered while rolling out Cohesity and the steps that Cohesity took to address them.

In the fourth and final installment of this interview series, Fidel describes how he leverages Cohesity’s backup appliance for both VM protection and as a deduplicating backup target for his NetBackup backup software.




If We Cannot Scale Our Backup Solution, We Die; Interview with SaaS Provider System Architect Fidel Michieli, Part I

Every year at VMworld I have conversations that broaden my understanding and appreciation for new products on the market. This year was no exception as I had the opportunity to talk at length with Fidel Michieli, a System Architect at a SaaS provider, who shared his experiences with me about his challenges with backup and recovery and how he came to choose Cohesity. In this first installment in my interview series with Fidel, he shared the challenges that his company was facing with his existing backup configuration as well as the struggles that he had in identifying a backup solution that scaled to meet his dynamically changing and growing environment.

Jerome: Fidel, thanks for taking time out of your schedule here at VMworld to meet and talk with me about how you came to choose Cohesity for your environment. To begin, please tell me about your role at your company.


Fidel:    I work as a system architect at a software-as-a-service (SaaS) provider that pursues a very innovative, agile course of development which is very good at adopting new technology and trends.

My job is on the corporate infrastructure side. I do not work with the software delivery to our customers. Our software is more of a cookie cutter environment. It is very scalable but it is restricted to our application stack. I work on the corporate side where we have all the connectivity, email, financial, and other applications that the enterprise needs, including some of our customers’ applications.

I am responsible for choosing and deploying the technology and the strategy to get us where we need to go, and for developing this division and the strategy to see where things are going. We used Veritas NetBackup and Dell deduplication appliances for backups. Using these solutions, we were constrained as they did not scale to match the demands of a business growing at our rate.

One of the biggest things that we have to worry about is scale. Often by the time we architect and set up a new solution, we always end up short. If we do not scale, we die.

We were at a crossroads with our previous strategy where we did not scale. It was very expensive to grow and manage. The criticality of the restore is huge and we had horrible restore times. We had a tape strategy. The tape guy came once a week. You could ask for a tape and it would come the next time he stopped by so we would potentially wait six days for the tape to get there. Then you had to move the data, get it off of tape, and convert it to a disk format. Our recovery SLAs were horrible.

I was tasked with finding a new solution. For back-end storage, we looked at Data Domain as we were an EMC shop.  For backup software, we looked at Gartner and their magic quadrant and we chose the first three.  With EMC (now Dell Technologies) we saw what the ecosystem looked like 12 years ago. A bunch of acquisitions integrated into one solution. It does not get one out of the silo scaling. There were some efficiencies but, honestly, we were not impressed with the price.

Jerome: Did you find EMC expensive for what it offered?

Fidel: Yes. It was ridiculously expensive. We also looked at Commvault, and just with the first quote we realized this is way too complicated. We are a smaller organization, so we do not have people dedicated to jobs. Commvault quoted us 30 days for implementation engineers. We would have a guy from Commvault in our office for 30 days implementing and migrating jobs. That speaks to the complexity of what we would be doing, and it raises the questions of who takes on those responsibilities once their implementation engineer leaves and how long that would take.

We decided that we should find a more sensible approach.

Jerome: How virtualized is your environment?

Fidel:  98 percent. All VMware. This led us to look at a virtual machine backup solution. We had heard very good things about this product, but the only problem we had was the back-end storage. How do we tackle the back-end storage? My background is on the storage side, so I started looking at solutions like Swift, which is open source object-based storage, as well as ScaleIO. Yet when we evaluated this virtual machine backup solution using this storage, we were not impressed.

Jerome: Why was that? Those solutions are specifically tailored for backup of virtual machines.

Fidel: To be very honest, NetBackup performed better which I did not expect. I was very invested in the virtual machine backup solution. We did a full analysis on times and similar testing using different back ends. We found that the virtual machine backup software was up to 37 percent slower and more expensive because of its licensing model so it was not going to work for us.

Jerome: What did you decide to do at that point?

Fidel: We talked with our SHI International representative. We explained that we experienced a very high rate of change and that we needed to invest in a solution that in 2-3 years could be supporting an environment that may look radically different than it does today. Further, we did not want to delay deploying it because we were concerned about how competitive we would be. If we delayed, the impact could be huge.

He recommended Cohesity. We recognized that it was obviously scale-out. One of the things that I particularly liked about its scale-out architecture is that since you originate all of your data copies from the storage, you can have multiple streams from all your nodes. In this way, you scale out not only capacity, but also performance and the number of data streams that you can have.

In part 2 of this interview series, Fidel shares how he gained a comfort level with Cohesity prior to rolling it out enterprise-wide in his organization.

In part 3 of this interview series, Fidel shares some of the challenges he encountered while rolling out Cohesity and the steps that Cohesity took to address them.

In the fourth and final installment of this interview series, Fidel describes how he leverages Cohesity’s backup appliance for both VM protection and as a deduplicating backup target for his NetBackup backup software.




DCIG 2016-17 Deduplicating Backup Appliance Buyer’s Guide Editions Now Available

DCIG is pleased to announce the availability of the following DCIG 2016-17 Deduplicating Backup Appliance Buyer’s Guide Editions developed from the backup appliance body of research. Other Buyer’s Guide Editions based on this body of research will be published in the coming weeks and months, including the 2016-17 Integrated Backup Appliance Buyer’s Guide and 2016-17 Hybrid Cloud Backup Appliance Buyer’s Guide Editions.

Buyer’s Guide Editions being released on September 20, 2016:

  • DCIG 2016-17 Sub-$100K Deduplicating Backup Appliance Buyer’s Guide
  • DCIG 2016-17 Sub-$75K Deduplicating Backup Appliance Buyer’s Guide
  • DCIG 2016-17 Sub-$50K Deduplicating Backup Appliance Buyer’s Guide
  • DCIG 2016-17 US Enterprise Deduplicating Backup Appliance Buyer’s Guide

Each appliance included in a DCIG 2016-17 Deduplicating Backup Appliance Buyer’s Guide Edition had to meet the following criteria:

  • Is intended for the deduplication of backup data, primarily target-based deduplication
  • Includes a NAS (network attached storage) interface
  • Supports CIFS (Common Internet File System) or NFS (Network File System) protocols
  • Supports a minimum of two (2) hard disk drives and/or a minimum raw capacity of eight terabytes
  • Was formally announced or generally available for purchase by July 1, 2016

The various Deduplicating Backup Appliance Buyer’s Guide Editions are based on at least one additional criterion, whether list price (Sub-$100K, Sub-$75K and Sub-$50K) or being from a US-based provider.

By using the DCIG Analysis Portal and applying these criteria to its body of research into backup appliances, DCIG analysts were able to quickly create a short list of products that meet these requirements, which was then, in turn, used to create Buyer’s Guide Editions to publish and release. DCIG plans to use this same process to create future Buyer’s Guide Editions that examine hybrid cloud and integrated backup appliances, among others.

End users registering to access any of these reports via the DCIG Analysis Portal also gain access to the DCIG Interactive Buyer’s Guide (IBG). The IBG enables organizations to take the next step in the product selection process by generating custom reports, including comprehensive side-by-side feature comparisons of the products in which the organization is most interested.

Additional information about each buyer’s guide edition, including a download link, is available on the DCIG Buyer’s Guides page.




Latest Enhancements to Dell DL1300 Provide the Out-of-the-box Backup, Recovery, and DR Experience that Mid-market Companies Demand

More data to backup, less time to recover it, heightened recovery expectations, and limited time to dedicate to managing these tasks. These are the dilemmas that every mid-market business faces when backing up and recovering its data. The good news is that the Dell DL1300 Backup and Recovery Appliance offers the specific features that mid-market companies need to address these issues. Delivered as a turn-key, easy-to-deploy solution, the DL1300 offers the comprehensive set of features that mid-market companies need to reduce the time spent on backups, replication, and/or archiving data to low-cost 3rd-party cloud locations.

Faster, Easier Recoveries Top the List of Mid-market Company Needs

Today, perhaps more so than ever, organizations want a backup appliance that makes all of the tasks associated with managing backup and recovery faster and easier. This includes making the appliance easier to deploy and manage, minimizing the amount of time and IT manpower needed to maintain and scale it.

Foremost, organizations want a backup appliance that possesses the features to quickly and easily recover their applications and data. Most mid-market organizations already rely on disk-based backup in some form as part of their existing backup process to improve backup success rates as well as shorten backup windows. As such, one might assume that faster, easier recoveries go hand-in-glove with faster, easier backups.

Unfortunately, this rarely holds true, as recovery has largely lagged behind backup in its ability to deliver on either “faster” or “easier”. Many organizations grapple with aggressive recovery point objectives (RPOs) and recovery time objectives (RTOs) for their applications and data, putting added pressure on them and on any solution they evaluate to:

  • Recover their mission critical applications in minutes
  • Quickly restore any of their applications and data (under an hour)
  • Scale to keep pace with their growing storage capacity requirements for their applications and data
  • Update or upgrade the appliance with minimal to no impact to business operations
  • Utilize 3rd party cloud services providers for archive and disaster recovery

In short, they want a solution that maximizes their investment in a backup appliance and offers them the best possible backup, recovery, and archive experience for their ever-changing environment.

The Fast, Easy Backup Experience that Mid-market Companies Expect

The DL1300 Backup and Recovery Appliance positions mid-market organizations to address these key challenges. The DL1300 is built on the Dell 13G PowerEdge Server platform, which hosts the DL1300’s backup software and provides the resources that organizations need to consistently and quickly back up and recover large amounts of data. Marrying disk and application software in a turn-key solution, the DL1300:

  • Offers configuration wizards so organizations may go from box-to-backup in approximately 20 minutes
  • Takes the guesswork out of deployment and ongoing management and maintenance
  • Delivers data deduplication and compression to reduce data footprints and shorten backup windows
  • Scales to quickly and easily add capacity with in-box capacity upgrades as well as supports external storage arrays for greater capacity


An initial deployment of a DL1300 Backup and Recovery Appliance may be future-proofed to meet an organization’s ever-growing and difficult-to-predict capacity needs without replacing the entire appliance.

Leveraging this option, mid-market organizations may size the DL1300 to solve their immediate backup challenges and then easily scale it up if and when the amount of data they back up increases. The DL1300 ships in three capacities: 2TB, 3TB and 4TB, with two VMs included with the 3TB and 4TB models. All three models can be expanded to 8TB of capacity inside the appliance, and the 4TB DL1300 model can be increased to a total capacity of 18TB using one MD1400 external storage array. Additionally, the RAM on all models may be upgraded to 64GB.

These extra levels of availability, capacity and horsepower particularly come into play for mid-market organizations looking to consolidate and centralize backup, recovery, and cloud archive across their environment.

As many mid-market organizations have a mix of Linux, Windows and VMware in their environment, the DL1300 provides them the solution they need to back up and recover all of these operating systems as well as protect both physical and virtual machines. Equally important, it integrates tightly with leading Windows applications such as Active Directory (AD), SharePoint and SQL Server to provide the application-consistent backups that each of these applications needs.

Once set up and configured, organizations may run up to 60 concurrent backup streams. The DL1300 then optimizes its available disk capacity using RAID 5. This RAID configuration uses the DL1300’s available disk capacity more efficiently than a RAID 1 (mirrored disk drive) configuration while still protecting the backup data against data loss should a drive fail.
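
The capacity advantage of RAID 5 over RAID 1 is simple arithmetic: a mirror sacrifices half of the raw capacity, while parity sacrifices roughly one drive’s worth. A short, illustrative Python sketch (the four-drive, 2TB-per-drive set is a hypothetical example, not the DL1300’s actual drive layout):

```python
def usable_capacity_tb(num_drives: int, drive_size_tb: float, raid_level: int) -> float:
    """Return usable capacity in TB for a RAID 1 (mirrored) or RAID 5 (parity) set."""
    if raid_level == 1:
        return num_drives * drive_size_tb / 2       # half the raw capacity is lost to mirroring
    if raid_level == 5:
        return (num_drives - 1) * drive_size_tb     # one drive's worth of capacity holds parity
    raise ValueError("only RAID 1 and RAID 5 are modeled here")

# Hypothetical set of four 2TB drives (8TB raw):
print(usable_capacity_tb(4, 2, 1))  # 4.0  -> RAID 1 yields 4TB usable
print(usable_capacity_tb(4, 2, 5))  # 6.0  -> RAID 5 yields 6TB usable
```

The gap widens as the drive count grows, which is why RAID 5 is the more capacity-efficient choice for a backup repository.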

More Recovery Options for All Applications and Data

The DL1300’s increased amount of disk storage capacity coupled with its RAID 5 data protection scheme facilitates the ability of organizations to centralize the backup of all of their applications and data onto a single platform in one location. This serves an important purpose: organizations may then leverage the DL1300 to quickly, easily, and centrally recover all of their applications and data.

The DL1300 arguably does a better job of recovery than any other backup appliance in its class, as organizations may recover across both physical and virtual machines, including P2V, V2P, P2P and V2V conversions. Its Live Recovery feature can recover any protected application—physical or virtual—back to the production machine, usually in under an hour and often within minutes.

To perform the restore, Live Recovery restores data from the application’s most recent backup checkpoint. As backups typically occur each hour, when an organization initiates a restore the organization can reasonably expect to recover application data that is less than an hour old.

Organizations that need even higher levels of application availability (no more than minutes of downtime) for their mission-critical applications, such as Microsoft Exchange or SQL Server, may leverage the optional Virtual Standby feature found on the DL1300 3TB and 4TB models.

Using this feature, an organization may create a couple of virtual machines (VMs) on the DL1300. Then, for up to two selected protected servers, it will create a standby VM on the DL1300. Once set up, if the production physical or virtual machine goes offline, a virtual copy may be started almost immediately on the DL1300 appliance.

Finally, mid-market organizations increasingly need a means to get their data offsite for archive and to prepare an offsite disaster recovery (DR) plan. This is where the DL1300 shines. It supports multiple cloud storage providers, including Amazon, eFolder, Microsoft Azure, Rackspace and Zerolag. In so doing, it positions mid-market organizations to store data with their provider of choice, giving them cost-effective, non-proprietary options to move data and prepare their DR readiness plan.

DL1300 Provides the Out-of-the-Box Backup, Recovery and DR Solution that Mid-market Organizations Demand

Many mid-market organizations find themselves at a crossroads. They need a solution that offers the scalability and technical features to protect their ever-changing environment in a turn-key and easy-to-manage package. Organizations need to improve application recovery, shorten backup windows, archive to non-proprietary third-party cloud locations and prepare a cohesive DR readiness plan. Further, they need to do so without breaking their budget or stretching their technical limits.

The DL1300 meets these objectives by providing the out-of-the-box backup, recovery, and DR solution that organizations demand. Starting at an attractive price point under $5,000 and available from Dell and its partners, the DL1300 puts mid-market organizations on a path toward consolidating and simplifying their backup operations even as they get new flexibility to recover and scale going forward.




Expanded Use Cases for Hybrid Cloud Backup Appliances

Viewing hybrid cloud backup appliances strictly in the context of “backup and recovery” is a mindset that organizations must strive to overcome. While these appliances certainly fulfill this traditional role, new use cases are constantly emerging. Hybrid cloud backup appliances have now matured to the point where organizations may use them in multiple roles besides just backup.

Hybrid cloud backup appliances minimally solve a challenge that confronts many organizations. They provide onsite backups and give them the flexibility to store backup copies of their data in the cloud for data recovery purposes. Instead of needing to purchase and install backup appliances at two or more locations for data recovery, organizations may use a hybrid cloud backup appliance in conjunction with a public cloud storage provider as a means to:

  • Move and store data offsite
  • Keep a retention copy or copies of the backup data with the provider long term
  • Set the stage to recover their applications at the provider’s site

Once these standard data protection requirements are met, organizations may now look to leverage some of the other features that a number of hybrid cloud backup appliances offer to address their broader business continuity needs.

For example, some hybrid cloud backup appliances give organizations the flexibility to create one or more VMs on the appliance that can host the protected applications and/or their data. Using this feature, these organizations can, with comparative ease and without disrupting their production environment, test and verify that they can restore protected applications and data.

Some appliances even offer the flexibility to run these applications on a VM in a standby state. In this configuration, if the production application goes offline, the application running on the standby VM on the hybrid cloud backup appliance can keep the application operational until the production server or VM comes back online.

Restoring applications on a standby VM also gives organizations new flexibility to test application and operating system fixes, patches and upgrades before they apply them on the production server. An organization may bring up an application on a VM on the hybrid cloud backup appliance in a configuration that mimics their production environment.

Fixes, patches or upgrades may then be applied to either the OS and/or application to verify that they work. This technique also gives administrators some practice on how to apply the patch and grants them visibility and understanding into what occurs on the system when the fix or patch is applied such as seeing what alerts are generated (if any) and how much time it takes to complete.

Organizations using public cloud storage providers that offer cloud recovery options may even be able to go so far as to simulate a disaster recovery (DR) at the provider’s site. Granted, no organization should expect any of the appliances evaluated in DCIG’s recently released Hybrid Cloud Backup Appliance Buyer’s Guide to provide an out-of-the-box, turnkey DR solution. However, using these appliances and the partnerships they have built with various public cloud storage providers, organizations may realistically look toward creating a viable DR solution much more easily than they could in the past.

Most hybrid cloud backup appliances provide the out-of-the-box backup experience that organizations expect when they acquire them. However, for organizations to view and use these appliances strictly in that context is to fail to fully realize the additional value these appliances now bring to the table. By giving organizations the flexibility to stand up VMs, test fixes, patches and upgrades, and even simulate disaster recoveries, these appliances give organizations the opportunity and foundation to begin to implement some level of business continuity in their day-to-day operations.




DCIG 2015-16 Hybrid Cloud Backup Appliance Buyer’s Guide Now Available

DCIG is pleased to announce the availability of its 2015-16 Hybrid Cloud Backup Appliance Buyer’s Guide that evaluates and ranks more than 100 features from nearly 60 different hybrid cloud backup appliances from ten (10) different providers.


DCIG’s goal in preparing this Buyer’s Guide is to evaluate and rank each appliance based upon a comprehensive list of features that reflects the needs of the widest range of organizations. The Buyer’s Guide rankings enable “at-a-glance” comparisons between many different appliance models and its standardized data sheets facilitate side-by-side reviews to quickly enable organizations to examine products in greater detail.

Hybrid cloud backup appliances are particularly well-suited for organizations that need:

  • A turnkey backup and recovery solution to replace or upgrade their existing backup software
  • To keep their backup data both locally and with an off-site cloud provider
  • To do fast application recoveries
  • To set the stage for implementing an offsite disaster recovery solution

The DCIG 2015-16 Hybrid Cloud Backup Appliance Buyer’s Guide Top 10 solutions include (in alphabetical order):

  • Cobalt Iron Vault – Enterprise
  • Cobalt Iron Vault – Large
  • Cobalt Iron Vault – Medium
  • STORServer EBA 1202-CV
  • STORServer EBA 2502-CV
  • Unitrends Recovery 823S
  • Unitrends Recovery 824S
  • Unitrends Recovery 933S
  • Unitrends Recovery 936S
  • Unitrends Recovery 943S

The Unitrends Recovery 943S earned the Best-in-Class ranking among all hybrid cloud backup appliances evaluated in this Buyer’s Guide. The Recovery 943S stood out by offering the following capabilities:

  • Connectivity to multiple cloud providers
  • Heightened levels of virtual server data protection
  • Robust encryption options for data at-rest and in-flight
  • Scales to offer high levels of cache, processing power and storage capacity
  • Support for multiple networking storage protocols and dozens of operating systems

About the DCIG 2015-16 Hybrid Cloud Backup Appliance Buyer’s Guide

DCIG creates Buyer’s Guides in order to help organizations accelerate their product research and selection process by driving the cost, time and effort out of the research process while simultaneously increasing confidence in the results.

The DCIG 2015-16 Hybrid Cloud Backup Appliance Buyer’s Guide achieves the following objectives:

  • Provides an objective, third party evaluation of products that evaluates and scores their features from an end user’s perspective
  • Ranks each appliance in each category
  • Provides a standardized data sheet for each appliance so organizations may quickly do side-by-side product comparisons
  • Provides insights into what options these appliances offer to integrate with third party cloud storage providers
  • Provides insight into which features will result in improved performance
  • Provides a solid foundation for getting competitive bids from different providers with products that are based on “apples-to-apples” comparisons

The DCIG 2015-16 Hybrid Cloud Backup Appliance Buyer’s Guide is available immediately to subscribing users of the DCIG Analysis Portal. Individuals who have not yet subscribed to the DCIG Analysis Portal may test drive the DCIG Analysis Portal as well as download this Guide by following this link.




The Dell DL4300 Puts the Type of Thrills into Backup and Recovery that Organizations Really Want

Organizations have long wanted to experience the thrills of non-disruptive backups and instant application recoveries. Yet the solutions delivered to date have largely been the exact opposite offering only unwanted backup pain with very few of the types of recovery thrills that organizations truly desire. The new Dell DL4300 Backup and Recovery Appliance successfully takes the pain out of daily backup and puts the right types of thrills into the backup and recovery experience.

Everyone enjoys a thrill now and then. However, individuals should get their thrills at an amusement park, not when they back up or recover applications or manage the appliance that hosts their backup software. In cases like these, boring is the goal when performing backups and managing the appliance, with the excitement and thrills appropriately reserved for fast, successful application recoveries. This is where the latest Dell DL4300 Backup and Recovery Appliance introduces the right mix of boring and excitement into today’s organizations.

Show Off

Being a show-off is rarely, if ever, perceived as a “good thing.” However, IT staff can now in good conscience show off a bit by demonstrating the DL4300’s value to the business as it quickly backs up and recovers applications without putting business operations at risk. The Dell DL4300 Backup and Recovery Appliance’s AppAssure software provides the following five (5) key features to give them this ability:

  • Near-continuous backups. The Dell DL4300 may perform application backups as frequently as every five (5) minutes for both physical and virtual machines. During the short period of time it takes to complete a backup, it only consumes a minimal amount of system resources – no more than 2 percent. Since the backups occur so quickly, organizations have the flexibility to schedule as many as 288 backups in a 24-hour period, which helps to minimize the possibility of data loss so organizations can achieve near-real-time recovery point objectives (RPOs).
  • Near-instantaneous recoveries. The Dell DL4300 complements its near-continuous backup functionality by also offering near-instantaneous application recoveries. Its Live Recovery feature works across both physical and virtual machines and is intended for use in situations where application data is corrupted or becomes unavailable. In those circumstances, Live Recovery can within minutes present data residing on non-system volumes to a physical or virtual machine. The application may then access that data and resume operations until the data is restored and/or available locally.
  • Virtual Standby. The Dell DL4300’s Virtual Standby feature complements its Live Recovery feature by providing an even higher level of availability and recovery for those physical or virtual machines that need this level of recovery. To take advantage of this feature, organizations identify production applications that need instant recovery. Once identified, these applications are associated with the up to four (4) virtual machines (VMs) that may be hosted by a Dell DL4300 appliance and which are kept in a “standby” state. While in this state, the Standby VM on the DL4300 is kept updated with changes on the production physical or virtual VM. Then should the production server ever go offline, the standby VM on the Dell DL4300 will promptly come online and take over application operations.
  • Helps to ensure application-consistent recoveries. Simply being able to bring up a Standby VM on a moment’s notice may be insufficient for some production applications. Applications such as Microsoft Exchange create checkpoints to ensure they are brought up in an application-consistent state. In cases such as these, the DL4300 integrates with applications such as Exchange by regularly performing mount checks for specific Exchange server recovery points. These mount checks help to guarantee the recoverability of Microsoft Exchange.
  • Open Cloud support. As more organizations keep their backup data on disk in their data center, many still need to retain copies of data offsite without either moving it to tape or needing to set up a secondary site to which to replicate the data. This makes integration with public cloud storage providers to archive retention backup copies an imperative. The Dell DL4300 meets this requirement by providing one of the broadest levels of public cloud storage integration available as it natively integrates with Amazon S3, Microsoft Azure, OpenStack and Rackspace Cloud Block storage.

The Thrill of Having Peace of Mind

The latest Dell DL4300 series goes a long way towards introducing the type of excitement that organizations really want to experience when they use an integrated backup appliance. It also goes an equally long way toward providing the type of peace of mind that organizations want when implementing a backup appliance or managing it long term.

For instance, the Dell DL4300 gives organizations the flexibility to start small and scale as needed in both its Standard and High Capacity models with their capacity-on-demand license features. The Dell DL4300 Standard comes equipped with 5TB of licensed capacity and a total of 13TB of usable capacity. Similarly, the Dell DL4300 High Capacity ships with 40TB of licensed capacity and 78TB of usable capacity.

Configured in this fashion, the DL4300 series minimizes or even eliminates the need for organizations to install additional storage capacity at a later date should existing licensed capacity ever run out of room. If the 5TB threshold is reached on the DL4300 Standard or the 40TB limit is reached on the DL4300 High Capacity, organizations only need to acquire an upgrade license to access and use the pre-installed additional capacity. This takes away the unwanted worry about later upgrades, as organizations may easily and non-disruptively add 5TB of additional capacity to the DL4300 Standard or 20TB to the DL4300 High Capacity.

Similarly the DL4300’s Rapid Appliance Software Recovery (RASR) removes the shock of being unable to recover the appliance should it fail. RASR improves the reliability and recoverability of the appliance by taking regularly scheduled backups of the appliance. Then should the appliance itself ever experience data corruption or fail, organizations may first do a default restore to the original backup appliance configuration from an internal SD card and then restore from a recent backup to bring the appliance back up-to-date.

The Dell DL4300 Provides the Types of Thrills that Organizations Want

Organizations want the sizzle that today’s latest technologies have to offer without the unexpected worries that can too often accompany them. The Dell DL4300 provides this experience. It makes its ongoing management largely a non-issue so organizations may experience the thrills of near-continuous backup and near-instantaneous recovery of data and applications across their physical, virtual and/or cloud infrastructures.

It also delivers the new type of functionality that organizations want to meet their needs now and into the future. Through its native integration with multiple public cloud storage providers and its Virtual Standby feature, which enables enhanced testing to ensure consistent and timely recovery of data, organizations get the type of thrills they want and should rightfully expect from a solution such as the Dell DL4300 Backup and Recovery Appliance, with its industry-leading self-recovery features and enhanced appliance management.




Advanced Encryption and VTL Features Give Organizations New Impetus to Use the Dell DR Series as their “One Stop Shop” Backup Target

To simplify their backup environments, organizations desire backup solutions that essentially function as “one-stop shops” to satisfy their multiple backup requirements. To succeed in this role, these solutions should provide the needed software, offer NAS and virtual tape library (VTL) interfaces, scale to high capacities and deliver advanced encryption capabilities to secure backup data. By introducing advanced encryption and VTL options into the latest 3.2 OS software release for its DR Series, Dell delivers the “one-stop shop” experience that organizations want in their backup infrastructure.

The More Backup Changes, the More It Stays the Same

Deduplicating backup appliances have replaced tape as a backup target in many organizations. By accelerating backups and restores, increasing backup success rates and making disk-based backup economical, these appliances have fundamentally transformed backup.

Yet their introduction does not always change the underlying backup processes. Backup jobs may still occur daily; are configured as differential, incremental or full; and, are managed centrally. The only real change is using disk in lieu of tape as a target.

Even with these appliances in place, many organizations still move backup data to tape for long-term data retention and/or offsite disaster recovery. Further, organizations in the finance, government and healthcare sectors typically encrypt data, as SEC Rule 17a-4 specifies and as the 2003 HIPAA Security Rule and the more recent 2009 HITECH Act strongly encourage.

Continued Relevance of Encryption and VTLs in Enterprises

This continued widespread use of tape as a final resting place for backup data leads organizations to keep current backup processes in place. While they want to use deduplicating backup appliances, they simply want to swap out existing tape libraries for these solutions. This has given rise to the need for deduplicating backup appliances to emulate physical tape libraries as virtual tape libraries (VTLs).

A VTL requires minimal to no changes to existing backup-to-tape processes nor does it require many changes to how the backup data is managed after backup. The backup software now backs up data to the VTL’s virtual tape drives where the data is stored on virtual tape cartridges. Storing data this way facilitates its movement from virtual to real or physical tape cartridges and enables the backup software to track its location regardless of where it resides.

VTLs also accelerate backups. They give organizations more flexibility to keep data on existing SANs which negates the need to send data over corporate LANs where it has to contend with other network traffic. SAN protocols also better support the movement of larger block sizes of data which are used during backup.

Finally, VTLs free backup from the constraints of physical tape libraries. Creating new tape drives and tape cartridges on a VTL may be done with the click of a button. In this way organizations may quickly create multiple new backup targets to facilitate scheduling multiple, concurrent backup jobs.

Encrypting backup data is also of greater concern to organizations as data breaches occur both inside and outside of corporate firewalls. This behooves organizations to encrypt backup data in the most secure manner, regardless of whether the data resides on disk or tape.

Advanced Encryption and VTL Functionality Central to Dell DR Series 3.2 OS Release

Advanced encryption capabilities and VTL functionality are two new features central to Dell’s 3.2 operating system (OS) release for its DR Series of deduplicating backup appliances. The 3.2 OS release provides organizations a key advantage over competitive solutions as Dell makes all of its software features available without requiring additional licensing fees. This applies to both new DR Series appliances as well as existing Dell DR Series appliances which may be upgraded to this release to gain full access to these features at no extra cost.

The 3.2 OS release’s advanced encryption capabilities use the FIPS 140-2 compliant 256-bit Advanced Encryption Standard (AES) to encrypt data. Encrypting data in conformance with this standard ensures that it is acceptable to federal agencies in both Canada and the United States. This also means that organizations in these countries that need to comply with their regulations are typically, by extension, in compliance when they use the DR Series to encrypt their backup data.

The 3.2 OS release implements this advanced encryption capability by encrypting data after inline deduplication is complete. In this way, each DR Series appliance running the 3.2 OS release deduplicates backup data as it is ingested to achieve the highest possible deduplication ratio, since encrypting data prior to deduplication negatively impacts deduplication’s effectiveness. Encrypting the data after it is deduplicated also reduces the overhead associated with encryption, since there is less data to encrypt, while keeping that overhead on the DR Series appliance. In cases where existing DR4100s are upgraded to the 3.2 OS release, encryption may be done post-process on data volumes previously stored unencrypted in the DR4100’s storage repository.
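
Why the dedup-then-encrypt ordering matters can be shown with a hypothetical Python sketch. Content-defined deduplication identifies identical chunks by hash; strong encryption (modeled here by a toy XOR cipher with a random nonce, standing in for AES-256, which the DR Series actually uses) makes every ciphertext unique, so encrypting first leaves nothing for deduplication to find. All names below are illustrative, not Dell’s implementation:

```python
import hashlib
import os

def chunks(data: bytes, size: int = 4096):
    """Split a backup stream into fixed-size chunks for deduplication."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def toy_encrypt(chunk: bytes) -> bytes:
    # Toy stand-in for AES-256: the random nonce makes each ciphertext
    # unique -- exactly why encrypting before deduplicating is wasteful.
    nonce = os.urandom(16)
    keystream = nonce * (len(chunk) // len(nonce) + 1)
    return nonce + bytes(b ^ k for b, k in zip(chunk, keystream))

backup = (b"A" * 4096) * 100  # 100 identical chunks, as repeated backups produce

# Dedup first (the 3.2 OS release's approach), then encrypt only unique chunks.
unique = {hashlib.sha256(c).digest(): c for c in chunks(backup)}
stored_dedup_first = [toy_encrypt(c) for c in unique.values()]

# Encrypt first: every ciphertext hashes differently, so nothing deduplicates.
encrypted = [toy_encrypt(c) for c in chunks(backup)]
stored_encrypt_first = {hashlib.sha256(c).digest(): c for c in encrypted}

print(len(stored_dedup_first))    # 1 chunk stored
print(len(stored_encrypt_first))  # 100 chunks stored
```

The same logic explains the overhead savings the paragraph above describes: with one chunk stored instead of 100, there is correspondingly less data to run through the cipher.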

The VTL functionality that is part of the 3.2 OS release includes options to present a VTL interface on either corporate LANs or SANs. If connected to a corporate LAN, the NDMP protocol is used to send data to the DR Series while, if it is connected to a corporate SAN, the iSCSI protocol is used.

Every DR Series appliance running the 3.2 OS release may be configured to present up to four (4) containers that each operate as a separate VTL. Each of these VTL containers may emulate one (1) StorageTek STK L700 tape library (or an OEM version of the STK L700); up to ten (10) IBM ULT3580-TD4 tape drives; and up to 10,000 tape cartridges that may each range in size from 10GB to 800GB.

As each individual VTL container on the DR Series appears to backup software as an STK L700 library, the backup software manages the VTL the same way it does a physical tape library: it copies the data residing on virtual tape cartridges to physical tape cartridges and back again, if necessary. This functionality is available with leading enterprise backup software products such as Dell NetVault, CommVault Simpana, EMC NetWorker, IBM TSM, Microsoft Data Protection Manager (iSCSI only), Symantec Backup Exec and Symantec NetBackup. Each of these can recognize and manage the Dell DR Series VTL as a physical STK L700 tape library, carry forward existing tape copy processes, implement new ones if required, and manage where copies of tape cartridges—physical or virtual—reside.
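
The emulation limits above imply a very large nominal address space per appliance. A quick back-of-the-envelope calculation (note this is the logical capacity of the emulated cartridges as presented to backup software, not the physical deduplicated capacity of the appliance):

```python
GB = 10**9  # decimal gigabyte, as tape capacities are typically quoted

containers_per_appliance = 4       # VTL containers per DR Series appliance
cartridges_per_library = 10_000    # emulated cartridges per STK L700 container
max_cartridge_gb = 800             # cartridges range from 10GB to 800GB

# Nominal capacity per VTL container, in TB.
per_library_tb = cartridges_per_library * max_cartridge_gb * GB / 10**12
print(per_library_tb)                                 # 8000.0 TB per container
print(per_library_tb * containers_per_appliance)      # 32000.0 TB across all four
```

In practice, deduplication is what makes presenting such a large virtual tape pool from a far smaller physical repository feasible.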

Dell’s 3.2 OS Release Gives Organizations New Impetus to Make Dell DR Series Their “One Stop Shop” Backup Target

Organizations of all sizes want to consolidate and simplify their backup environments, and using a common deduplicating backup appliance platform is one excellent way to do so. Dell’s 3.2 OS release for its DR Series gives organizations new impetus to start down that path. The introduction of advanced encryption and VTL features, along with 6TB HDDs on expansion shelves for the DR6000 and the availability of Rapid NFS/Rapid CIFS protocol accelerators for the DR4100, provides the additional motivation that organizations need to non-disruptively introduce the DR Series in this broader role and improve their backup environments even as they keep existing backup processes in place.