Four Ways to Achieve Quick Wins in the Cloud

More companies than ever want to use the cloud as part of their overall IT strategy. To do so, they often look to achieve some quick wins in the cloud to demonstrate its value. Achieving these quick wins also gives them practical, hands-on experience in the cloud. Incorporating the cloud into your backup and disaster recovery (DR) processes may be the best way to get these wins.

Any company hoping to get some quick wins in the cloud should first define what a “win” looks like. For the purposes of this blog entry, a win consists of:

  • Fast, easy deployments of cloud resources
  • Minimal IT staff involvement
  • Improved application processes or workflows
  • The same or lower costs

Here are four ways for companies to achieve quick wins in the cloud through their backup and DR processes:

#1 – Take a Non-disruptive Approach

When possible, leverage your company’s existing backup infrastructure to store copies of data in the cloud. Nearly all enterprise backup products, whether backup software or deduplicating backup appliances, interface with public clouds. These products can store backup data in the cloud without disrupting your existing environment.

Using these products, companies can get exposure to the public cloud’s core compute and storage services. These are the cloud services companies are most apt to use initially and represent the most mature of the public cloud offerings.

#2 – Deduplicate Backup Data Whenever Possible

Public cloud providers charge monthly for every GB of data that companies store in their respective clouds. The more data that your company stores in the cloud, the higher these charges become.

Deduplicating data reduces the amount of data that your company stores in the cloud. In so doing, it also helps to control and reduce your company’s monthly cloud storage costs.
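As a rough illustration of how this compounds, the short calculation below assumes a hypothetical price of $0.023 per GB-month and a 10:1 deduplication ratio; actual rates and ratios vary by provider and by data set.

    # Rough illustration of how deduplication reduces monthly cloud storage costs.
    # The price per GB-month and the deduplication ratio are assumptions for
    # illustration only; substitute your provider's rates and your own ratio.
    logical_backup_tb = 100          # backup data before deduplication (TB)
    dedupe_ratio = 10                # assumed 10:1 reduction
    price_per_gb_month = 0.023       # assumed standard-tier price, USD per GB-month

    logical_gb = logical_backup_tb * 1024
    stored_gb = logical_gb / dedupe_ratio

    cost_without = logical_gb * price_per_gb_month
    cost_with = stored_gb * price_per_gb_month
    print(f"Without deduplication: ${cost_without:,.2f}/month")
    print(f"With {dedupe_ratio}:1 deduplication: ${cost_with:,.2f}/month")
    print(f"Monthly savings: ${cost_without - cost_with:,.2f}")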

#3 – Tier Your Backup Data

Many public cloud storage providers offer multiple tiers of storage. The default storage tier is not, however, their most cost-effective option; it is designed for data that needs high availability and moderate performance.

Backup data tends to need these features only for the first 24 to 72 hours after it is created. After that, companies can often move it to lower-cost tiers of cloud storage. Note that these lower-cost tiers come with decreasing levels of availability and performance. While the vast majority of backups (over 99 percent) fall into this category, check whether any application recoveries required data more than three days old before moving backups to lower tiers of storage.
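For companies using Amazon S3 as their cloud backup target, a lifecycle rule can automate this tiering. The sketch below, written with boto3, is a minimal example only; the bucket name, prefix, and three-day cutoff are assumptions, and other cloud providers offer equivalent lifecycle controls.

    # Minimal sketch: shift aged backup objects to a lower-cost tier automatically.
    # Bucket name, prefix, and schedule are placeholders; align the transition
    # window with how far back your recoveries actually reach.
    import boto3

    s3 = boto3.client("s3")
    s3.put_bucket_lifecycle_configuration(
        Bucket="example-backup-bucket",            # hypothetical bucket
        LifecycleConfiguration={
            "Rules": [{
                "ID": "tier-aged-backups",
                "Filter": {"Prefix": "backups/"},
                "Status": "Enabled",
                # After 3 days, backups rarely serve recoveries, so archive them;
                # expire them entirely after one year of retention.
                "Transitions": [{"Days": 3, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 365},
            }]
        },
    )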

#4 – Actively Manage Your Cloud Backup Environment

Applications and data residing in the cloud differ from those in your on-premises production environment in one important way: every GB of data consumed and every hour that an application runs incur costs. In on-premises environments, by contrast, all existing hardware represents a sunk cost. As such, there is less incentive to actively manage existing hardware resources, since any resources recouped only represent “soft” savings.

This does not apply in the cloud. Proactively managing and conserving cloud resources translates into real savings. To realize these savings, companies need to look to products such as Quest Foglight, which helps them track where their backup data resides in the cloud and identify the application processes they have running. This, in turn, helps them manage and control their cloud costs.
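Even without a full management product, a short script can provide a first approximation of where backup data resides and in which tiers. The boto3 sketch below assumes a hypothetical S3 bucket and simply totals object sizes by storage class; dedicated tools such as Quest Foglight provide far richer reporting than this.

    # Minimal sketch: summarize how much backup data sits in each S3 storage tier
    # so unexpected growth (and cost) surfaces early. Bucket and prefix are
    # placeholders for illustration.
    from collections import defaultdict
    import boto3

    s3 = boto3.client("s3")
    totals = defaultdict(int)

    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket="example-backup-bucket", Prefix="backups/"):
        for obj in page.get("Contents", []):
            totals[obj.get("StorageClass", "STANDARD")] += obj["Size"]

    for storage_class, size_bytes in sorted(totals.items()):
        print(f"{storage_class}: {size_bytes / 1024**3:,.1f} GiB")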

Companies rightfully want to adopt the cloud for the many benefits that it offers and, ideally, achieve a quick win in the process. Storing backup data in the cloud and moving DR processes there provide the quick wins that many companies initially seek. As they do so, they should also ensure they put the appropriate processes and software in place to manage and control their usage of cloud resources.




Purpose-built Backup Cloud Service Providers: A Practical Starting Point for Cloud Backup and DR

The shift is on toward using cloud service providers for an increasing number of production IT functions, with backup and DR often at the top of the list of tasks that companies first want to deploy in the cloud. But as IT staff seek to check the box that they comply with corporate directives to have a cloud solution in place for backup and DR, they also need to simultaneously check the “Simplicity,” “Cost-savings,” and “It Works” boxes.

Benefits of Using a Purpose-built Backup Cloud Service Provider

Cloud service providers purpose-built for backup and DR put companies in the best position to check all those boxes for this initial cloud use case. These providers solve their clients’ immediate challenges of easily and cost-effectively moving backup data off-site and then retaining it long term.

Equally important, addressing the off-site backup challenge in this way positions companies to do data recovery with the purpose-built cloud service provider, a general purpose cloud service provider, or on-premises. It also sets the stage for companies to regularly test their disaster recovery capabilities so they can perform them when necessary.

Choosing a Purpose-built Backup Cloud Service Provider

To choose the right cloud service provider for corporate backup and DR requirements, companies want to be cost conscious. But they also want to experience success and not put their corporate data or their broader cloud strategy at risk. A purpose-built cloud service provider such as Unitrends and its Forever Cloud solution frees companies to aggressively and confidently move ahead with a cloud deployment for their backup and DR needs.


Join Our Webinar for a Deeper Look at the Benefits of Using a Purpose-built Backup Cloud Service Provider

Join me next Wednesday, October 17, 2018, at 2 pm EST, for a webinar where I take a deeper look at both purpose-built and general purpose cloud service providers. In this webinar, I examine why service providers purpose-built for cloud backup and DR can save companies both money and time while dramatically improving the odds they can succeed in their cloud backup and DR initiatives. You can register by following this link.

 




CloudShift Puts Flawless DR on Corporate Radar Screens

Hyper-converged Infrastructure (HCI) solutions excel at delivering on many of the attributes commonly associated with private clouds. Consequently, the concepts of hyper-convergence and private clouds have become, in many respects, almost inextricably linked.

But for an HCI solution to not have a clear path forward for public cloud support … well, that’s almost anathema in the increasingly hybrid cloud environments found in today’s enterprises. That’s what makes this week’s CloudShift announcement from Datrium notable – it begins to clarify how Datrium will go beyond backup to the public cloud as part of its DVX solution, and it puts the concept of flawless DR on corporate radar screens.

Source: Datrium

HCI is transforming how organizations manage their on-premises infrastructure. By combining compute, data protection, networking, storage, and server virtualization into a single pre-integrated solution, HCI solutions eliminate many of the headaches associated with traditional IT infrastructures while delivering the “cloud-like” speed and ease of deployment that enterprises want.

However, enterprises increasingly want more than “cloud-like” capabilities from their on-premises HCI solution. They also want the flexibility to move the virtual machines (VMs) hosted on their HCI solution into public cloud environments when needed. Whether they run disaster recovery (DR) tests, perform an actual DR, or need to move a workload experiencing high throughput into the public cloud, having the flexibility to move VMs into and out of the cloud is highly desirable.

Datrium answers the call for public cloud integration with its recent CloudShift announcement. However, Datrium did not just come out with a me-too answer for public clouds by announcing it will support the AWS cloud. Rather, it delivered what most enterprises are looking for at this stage in their journey to a hybrid cloud environment: a means to seamlessly incorporate the cloud into their overall DR strategy.

The goals behind its CloudShift announcement are three-fold:

  1. Build on the existing Datrium DVX platform, which already manages the primary copy of data as well as its backups. With the forthcoming availability of CloudShift in the first half of 2019, it will complete the primary-to-backup-to-cloud circle that companies want.
  2. Make DR work flawlessly. If there are two words that together often form an oxymoron, they are “flawless DR.” By bringing primary, backup, and cloud together and managing them as one holistic piece, companies can begin, someday soon (ideally in this lifetime), to view flawless DR as the norm instead of the exception.
  3. Orchestrate DR failover and failback. “DR failover and failback” rolls off the tongue; it is simple to say and everyone understands what it means. But executing a successful DR failover and failback in today’s world tends to get very pricey and very complex. By rolling the management of primary, backup, and cloud under one roof and then continually performing compliance checks on the execution environment to ensure it meets the RPOs and RTOs of the DR plan, Datrium gives companies a higher degree of confidence that DR failovers and failbacks occur only when they are supposed to and that, when they do occur, they will succeed.

Despite many technology advancements in recent years, enterprise-wide, turnkey DR capabilities with orchestrated failover and failback between on-premises and the cloud are still largely the domain of high-end enterprises that have the expertise to pull it off and are willing to commit large amounts of money to establish and maintain a (hopefully) functional DR capability. Datrium’s CloudShift announcement puts the industry on notice that reliable, flawless DR that will meet the budget and demands of a larger number of enterprises is on its way.




Four Implications of Public Cloud Adoption and Three Risks to Address

Businesses are finally adopting the public cloud because a large and rapidly growing catalog of services is now available from multiple cloud providers. These two factors (service breadth and provider choice) have many implications for businesses. This article addresses four of those implications plus several cloud-specific risks.

Implication #1: No enterprise IT department will be able to keep pace with the level of service innovation available from cloud providers

The battle is over. Cloud wins. Deal with it.

Dealing with it does not necessarily mean that every business will move every workload to the cloud. It does mean that it is time for business IT departments to build awareness of the services available from public cloud providers. One way to do this is to tap into the flow of service updates from one or more of the major cloud providers.

For Amazon Web Services, I like What’s New with AWS. Easy filtering by service category is combined with sections for featured announcements, featured video announcements, and one-line listings of the most recent announcements from AWS. The one-line listings include links to service descriptions and to longer form articles on the AWS blog.

For Microsoft Azure, I like Azure Updates. As its subtitle says, “One place. All updates.” The Azure Updates site provides easy filtering by product, update type and platform. I especially like the ability to filter by update type for General Availability and for Preview. The site also includes links to the Azure roadmap, blog and other resources. This site is comprehensive without being overwhelming.

For Google Cloud Platform, its blog may be the best place to start. The view can be filtered by label, including by announcements. This site is less functional than the AWS and Microsoft Azure resources cited above.

For IBM Cloud, the primary announcements resource is What’s new with IBM Cloud. Announcements are presented as one-line listings with links to full articles.

Visit these sites, subscribe to their RSS feeds, or follow them via social media platforms. Alternatively, subscribe to their weekly or monthly newsletters via email. Once a business has workloads running in one of the public clouds, at a minimum one IT staff member should follow that provider’s updates site.
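For teams that prefer to automate this, the update feeds can be polled with a few lines of code. The sketch below uses the third-party feedparser package and the AWS What’s New RSS feed; the feed URL is correct as of this writing but may change, and the same pattern applies to the other providers’ feeds.

    # Minimal sketch: print the five most recent AWS announcements from its RSS feed.
    # Requires the third-party "feedparser" package (pip install feedparser).
    import feedparser

    FEEDS = {
        "AWS": "https://aws.amazon.com/about-aws/whats-new/recent/feed/",
        # Add the Azure, Google Cloud, and IBM Cloud feeds here as desired.
    }

    for provider, url in FEEDS.items():
        feed = feedparser.parse(url)
        print(f"--- {provider} ---")
        for entry in feed.entries[:5]:
            print(f"{entry.title}\n  {entry.link}")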

Implication #2: Pressure will mount on Enterprise IT to connect business data to public cloud services

The benefits of bringing public cloud services to bear on the organization’s data will create pressure on enterprise IT departments to connect business data to those services. There are many options for accomplishing this objective, including:

  1. All-in with one public cloud
  2. Hybrid: on-prem plus one public
  3. Hybrid: on-prem plus multiple public
  4. Multi-cloud (e.g. AWS + Azure)

The design of the organization and the priorities of the business should drive the approach taken to connect business data with cloud services.

Implication #3: Standard data protection requirements now extend to data and workloads in the public cloud

No matter which approach is taken when embracing the public cloud, standard data protection requirements extend to data and workloads in the cloud. Address these requirements up front. Explore alternative solutions and select one that meets the organization’s data protection requirements.

Implication #4: Cloud Data Protection and DRaaS are on-ramps to public cloud adoption

For most organizations the transition to the cloud will be a multi-phased process. Data protection solutions that can send backup data to the cloud are a logical early phase. Disaster recovery as a service (DRaaS) offerings represent another relatively low-risk path to the cloud that may be more robust and/or lower cost than existing disaster recovery setups. These solutions move business data into public cloud repositories. As such, cloud data protection and DRaaS may be considered on-ramps to public cloud adoption.

Once corporate data has been backed up or replicated to the cloud, tools are available to extract and transform the data into formats that make it available for use/analysis by that cloud provider’s services. With proper attention, this can all be accomplished in ways that comply with security and data governance requirements. Nevertheless, there are risks to be addressed.

Risk to Address #1: Loss of change control

The benefit of rapid innovation has a downside. Any specific service may be upgraded or discarded by the provider without much notice. Features used by a business may be enhanced or deprecated. This can force changes in other software that integrates with the service, or in procedures used by staff and the associated documentation for those procedures.

For example, Office 365 and Google G Suite features can change without much notice. This creates a “Where did that menu option go?” experience for end users. Some providers reduce this pain by providing a quick tutorial for new features within the application itself. Others provide online learning centers that make new feature tutorials easy to discover.

Accept this risk as an unavoidable downside to rapid innovation. Where possible, manage the timing of these releases to an organization’s users, giving them advance notice of the changes along with access to tutorials.

Risk to Address #2: Dropped by provider

A risk that may not be obvious to many business leaders is that of being dropped by a cloud service provider. A business with unpopular opinions might have services revoked, sometimes with little notice. Consider how quickly the movement to boycott the NRA resulted in severed business-to-business relationships. Even an organization as large as the US Military faces this risk. As was highlighted in recent news, Google will not renew its military AI project due in large part to pressure from Google employees.

Mitigate this risk through contracts and architecture. This is perhaps one argument in favor of a hybrid on-prem plus cloud approach to the public cloud versus an all-in approach.

Risk to Address #3: Unpredictable costs

It can be difficult to predict the costs of running workloads in the public cloud, and these costs can change rapidly. Address this risk by setting cost thresholds that trigger an alert. Consider subscribing to a service such as Nutanix Beam to gain granular visibility into and optimization of public cloud costs.
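As one example of a cost threshold, AWS exposes an EstimatedCharges billing metric that CloudWatch can alarm on. The boto3 sketch below is illustrative only; billing metrics must be enabled in the account, they are published in the us-east-1 region, and the threshold and SNS topic ARN shown are placeholders.

    # Minimal sketch: raise an alert when estimated monthly AWS charges exceed $500.
    # The threshold and SNS topic ARN are placeholders for illustration.
    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
    cloudwatch.put_metric_alarm(
        AlarmName="monthly-spend-over-500-usd",
        Namespace="AWS/Billing",
        MetricName="EstimatedCharges",
        Dimensions=[{"Name": "Currency", "Value": "USD"}],
        Statistic="Maximum",
        Period=21600,                    # evaluate every six hours
        EvaluationPeriods=1,
        Threshold=500.0,                 # assumed monthly budget in USD
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],
    )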

It’s time to get real about the public cloud

Many businesses are ready to embrace the public cloud. IT departments should make themselves aware of services that may create value for their business. They should also work through the implications of moving corporate data and workloads to the cloud, and make plans for managing the attendant risks.




A More Elegant (and Affordable) Approach to Nutanix Backups

One of the more perplexing challenges that Nutanix administrators face is how to protect the data in their Nutanix deployments. Granted, Nutanix natively offers its own data protection utilities. However, these utilities leave gaps that enterprises are unlikely to find palatable when protecting their production applications. This is where Comtrade Software’s HYCU and ExaGrid come into play as their combined solutions provide a more affordable and elegant approach to protecting Nutanix environments.

One of the big appeals of hyperconverged solutions such as Nutanix is their inclusion of basic data protection utilities. Using its Time Stream and Cloud Connect technologies, Nutanix makes it easy and practical for organizations to protect applications hosted on VMs running on Nutanix deployments.

The issue becomes: how does one affordably deliver and manage data protection in Nutanix environments at scale? This is a tougher question for Nutanix to answer because using its data protection technologies at scale requires running the Nutanix platform to host the secondary/backup copies of data. While that is certainly doable, it is likely not the most affordable way to tackle this challenge.

This is where a combined data protection solution from Comtrade Software and ExaGrid for the protection of Nutanix environments makes sense. Comtrade Software’s HYCU was the first backup software product to come to market purpose-built to protect Nutanix environments. Like Nutanix’s native data protection utilities, Nutanix administrators can manage HYCU and their VM backups from within the Nutanix PRISM management console. Unlike Nutanix’s native data protection utilities, HYCU auto-detects applications running within VMs and configures them for protection.

Further distinguishing HYCU from the other competitive backup software products mentioned on Nutanix’s web page, HYCU is the only one currently listed that can run as a VM in an existing Nutanix implementation. The other products listed require organizations to deploy a separate physical machine to run their software, which adds cost and complexity to the backup equation.

Of course, once HYCU protects the data, the issue becomes where to store the backup copies for fast recoveries and long-term retention. While one can certainly keep these backup copies on the existing Nutanix deployment or on a separate one, this creates two issues:

  • One, if there is some issue with the current Nutanix deployment, you may not be able to recover the data.
  • Two, there are more cost-effective solutions for the storage and retention of backup copies of data.

ExaGrid addresses these two issues. Its scale-out architecture resembles Nutanix’s architecture, enabling an ExaGrid deployment to start small and then easily scale to greater amounts of capacity and throughput. However, since it is a purpose-built backup appliance intended to store secondary copies of data, it is more affordable than deploying a second Nutanix cluster. Further, the Landing Zones uniquely found on ExaGrid deduplication systems facilitate near-instantaneous recovery of VMs.

Adding to the appeal of ExaGrid’s solutions in enterprise environments is its recently announced EX63000E appliance. This appliance has 58% more capacity than its predecessor, allowing for a 63TB full backup. Up to thirty-two (32) EX63000E appliances can be combined in a single scale-out system to allow for a 2PB full backup. Per ExaGrid’s published performance benchmarks, each EX63000E appliance has a maximum ingest rate of 13.5TB/hr, enabling thirty-two (32) EX63000Es combined in a single system to achieve a maximum ingest rate of 432TB/hr.
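The scale-out arithmetic behind those published figures is straightforward, as the short calculation below shows using ExaGrid’s stated per-appliance numbers.

    # Aggregate capacity and ingest for a fully populated EX63000E system,
    # based on ExaGrid's published per-appliance figures.
    appliances = 32
    full_backup_tb_each = 63
    ingest_tb_per_hr_each = 13.5

    print(f"Full backup capacity: {appliances * full_backup_tb_each} TB (~2 PB)")
    print(f"Maximum ingest rate: {appliances * ingest_tb_per_hr_each} TB/hr")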

Hyperconverged infrastructure solutions are poised to reshape enterprise data center landscapes, with solutions from Nutanix currently leading the way. As this data center transformation occurs, organizations need to make sure that the data protection solutions they put in place offer the same ease of management and scalability that the primary hyperconverged solution provides. Using Comtrade Software HYCU and ExaGrid, organizations get the affordable yet elegant data protection solution they seek for this next-generation data center architecture.




Glitch is the Next Milestone in Recoveries

No business – and I mean no business – regardless of its size ever wants to experience an outage for any reason or duration. However, completely avoiding outages means spending money and, in most cases, a lot of money. That is why it struck a chord with me when someone shared earlier this week that one of their clients has put in place a solution that, for a nominal cost, keeps its downtime to what appears as a mere glitch to end-users.

The word outage does not sit well with anyone in any size organization. It conjures up images of catastrophes, chaos, costs, lost data, screaming clients, and uncertainty. Further, anyone who could have possibly been involved with causing the outage often takes the time to make sure they have their bases covered or their resumes updated. Regardless of the scenario, very little productive work gets done as everyone scrambles to first diagnose the root cause of the outage, fix it, and then take steps to prevent it from ever happening again.

Here’s the rub in this situation: only large enterprises have the money to buy top-notch hardware and software, backed by elite staff, to put solutions in place that come anywhere near guaranteeing this type of availability. Even then, these solutions are usually reserved for a handful of mission-critical and maybe business-critical applications. The rest of their applications remain subject to outages of varying lengths and causes.

Organizations other than large enterprises face this fear daily. While their options for speed of recovery have certainly improved in recent years thanks to disk-based backup and virtualization, recovering an application from a major outage such as a hardware failure, a ransomware attack, or plain old unexpected human error may still take hours or longer. Perhaps worse, everyone knows about it and curses out the IT staff for this unexpected and prolonged interruption in their work day.

Here’s what caught my attention on the phone call I had this week. While the company in question retains uninterrupted availability for its end-users as its end-game ideal, its immediate milestone is to reduce the impact of outages to a glitch from the perspective of those end-users.

Granted, a temporary outage of any application for even a few minutes is not ideal, nor will end-users or management greet any outage with cheers. However, recovering an application in a few minutes (say, 5-10 minutes) will be better received than communicating that the recovery will take hours or days, or replying with an ambiguous “we are making a best faith effort to fix the problem.”

This is where setting a milestone of having any application recovery appear as a glitch to the organization starts to make sense. Solutions that provide uninterrupted availability and instant recoveries often remain out of reach financially for all but the wealthiest enterprises. However, solutions that provide recoveries that can make outages appear as only a glitch to end-users are now within reach of almost any size business.

No one likes outages of any type. However, if IT can in the near term turn outages into glitches from a corporate visibility perspective, IT will have achieved a lot. The good news is that data protection solutions that span on-premises and the cloud are readily available now and, when properly implemented, can turn many application outages into a glitch.




Deduplication Still Matters in Enterprise Clouds as Data Domain and ExaGrid Prove

Technology conversations within enterprises increasingly focus on the “data center stack” with an emphasis on cloud enablement. While I agree with this shift in thinking, one can too easily overlook the merits of underlying individual technologies when only considering the “Big Picture.” Such is happening with deduplication technology. A key enabler of enterprise archiving, data protection, and disaster recovery solutions, deduplication is delivered by vendors such as Dell EMC and ExaGrid in different ways that, as DCIG’s most recent 4-page Pocket Analyst Report reveals, make each product family better suited for specific use cases.

For too many years, it seemed, enterprise data centers focused too much on the vendor name on the outside of the box as opposed to what was inside the box: the data and the applications. Granted, part of the reason for their focus on the vendor name is that they wanted to demonstrate they had adopted and implemented the best available technologies to secure the data and make it highly available. Further, some of the emerging technologies necessary to deliver a cloud-like experience with the needed availability and performance characteristics did not yet exist, were not yet sufficiently mature, or were not available from the largest vendors.

That situation has changed dramatically. Now the focus is almost entirely on software that provides enterprises with cloud-like experiences, enabling them to manage their applications and data more easily and efficiently. While this change is positive, enterprises should not lose sight of the technologies that make up their emerging data center stack, as not all of them deliver these experiences in the same way.

A key example is deduplication. While this technology has existed for years and has become very mature and stable during that time, the ways in which enterprises can implement it and the benefits they will realize from it vary greatly. The deduplication solutions from Dell EMC Data Domain and ExaGrid illustrate these differences very well.
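To make the underlying idea concrete, the sketch below shows fixed-block, hash-based deduplication in its simplest form: identical blocks are stored once and referenced by fingerprint thereafter. This is an illustration only; products such as Data Domain and ExaGrid layer far more sophisticated chunking, compression, and index management on top of this basic concept.

    # Minimal sketch of fixed-block, hash-based deduplication.
    import hashlib

    BLOCK_SIZE = 4096        # assumed block size for illustration
    store = {}               # fingerprint -> unique block data

    def deduplicate(data: bytes) -> list:
        """Split data into blocks, store each unique block once, return the recipe."""
        recipe = []
        for offset in range(0, len(data), BLOCK_SIZE):
            block = data[offset:offset + BLOCK_SIZE]
            fingerprint = hashlib.sha256(block).hexdigest()
            store.setdefault(fingerprint, block)   # store only unseen blocks
            recipe.append(fingerprint)
        return recipe

    backup_stream = b"A" * 16384 + b"B" * 4096     # toy backup with repeated blocks
    recipe = deduplicate(backup_stream)
    print(f"Logical blocks: {len(recipe)}, unique blocks stored: {len(store)}")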

DCIG Pocket Analyst Report Compares Dell EMC Data Domain and ExaGrid Product Families

Deduplication systems from both Dell EMC Data Domain and ExaGrid have widespread appeal as they expedite backups, increase backup and recovery success rates, and simplify existing backup environments. They also both offer appliances in various physical configurations to meet the specific backup needs of small, midsize, and large enterprises while providing virtual appliances that can run in private clouds, public clouds, or virtualized remote and branch offices.

However, their respective systems also differ in key areas that will impact the overall effectiveness these systems will have in the emerging cloud data stacks that enterprises are putting in place. The six areas in which they differ include:

  1. Data center efficiency
  2. Deduplication methodology
  3. Networking protocols
  4. Recoverability
  5. Replication
  6. Scalability

The most recent 4-page DCIG Pocket Analyst Report analyzes how the systems from these two providers deliver on these six attributes and compares the underlying features behind them. Further, this report identifies which product family has the advantage in each area and provides a feature comparison matrix to support these claims.

This report concisely provides the key insight that enterprises need to make the right choice in deduplication solutions for their emerging cloud data center stack. The report may be purchased for $19.95 at TechTrove, a new third-party site that hosts and makes independently developed analyst content available for sale.

Cloud-like data center stacks that provide application and data availability, mobility, and security are rapidly becoming a reality. But as enterprises adopt these new enterprise clouds, they ignore or overlook technologies such as deduplication at their own peril, as the underlying technologies they implement directly impact the overall efficiency and effectiveness of the cloud they are building.




The End Game for Cloud Data Protection Appliances is Recovery


The phrase “Cloud Data Protection Appliance” is included in the name of DCIG’s forthcoming Buyer’s Guide, but the end game of each appliance covered in that Guide is squarely recovery. While successful recoveries have theoretically always been the objective of backup appliances, vendors too often paid only lip service to that ideal, as most of their new product features centered on providing better means for doing backups. Recent technology advancements have flipped this premise on its head.

Multiple reasons exist as to why these appliances can now focus more fully on this end game of recovery, though five key enablers have emerged in the last few years. These include:

  1. The low price point of using disk as a backup target (as opposed to tape)
  2. The general availability of private and public cloud providers
  3. The use of deduplication to optimize storage capacity
  4. The widespread availability of snapshot technologies on hypervisors, operating systems, and storage arrays
  5. The widespread enterprise adoption of hypervisors like VMware ESX and Microsoft Hyper-V, as well as the growing adoption of container technologies such as Docker and Kubernetes

While there are other contributing technologies, these five more so than the others give these appliances new freedom to deliver on backup’s original promise: successful recoveries. By way of example:

  • The backup appliance is used for local application recoveries. Over 80 percent of the appliances that DCIG evaluated now support the instant recovery of an application on a virtual machine on the appliance. This frees enterprises to start the recovery of the application on the appliance itself before moving the application to its primary host. Enterprises can even opt to recover and run the application on the appliance for an extended time for test and development or to simply host the application until the production physical machine on which the application resides recovers.
  • Application conversions and migrations. All these appliances support the backup of virtual machines and their recovery as virtual machines, but fully 88 percent of the software on these appliances supports the backup of a physical machine and its recovery to a virtual machine. This feature gives enterprises access to a tool that they can use to migrate applications from physical to virtual machines as a matter of course or in the event of a disaster. Further, 77 percent of them support recovery of virtual machines to physical machines. While that may seem counterintuitive, not every application runs well on a virtual machine, and some need functionality only found when running on a physical machine.
  • Location of backup data. By storing data in the cloud (even if only using it as a cloud target), enterprises know where their backup data is located. This is not trivial. Too many enterprises do not even know exactly what physical gear they have in their data center, much less where their data is located. While many enterprises still need to concern themselves with various international regulations governing the data’s physical location when storing data in the cloud, at least they know with which cloud provider they stored the data and how to access it. As anyone who uses or has used tape may recall, tracking down lost, misplaced, or even existing tapes can quickly become like trying to find a needle in a haystack. Even using disk is not without its challenges. Many enterprises may have to use multiple disk targets to store their backup data, and identifying exactly which disk device holds what data may not be as simple as it sounds.
  • Recovering in the cloud. This end game of recovering in the cloud, whether it is recovering a single file, a single application, or an entire data center, may appeal to enterprises more so than any other option on these appliances. The ability to virtually create and have access to a secondary site from which they can recover data or even perform a disaster recovery and run one or more applications removes a dark cloud of unspoken worry that hangs over many enterprises today. The fact that they can use that recovery in the cloud as a stepping stone to potentially hosting applications or their entire data center in the cloud is an added benefit.

Enterprises should be very clear about the opportunities that today’s cloud data protection appliances offer them. Near term, these appliances provide a means to easily connect to one or more cloud providers, get backup data offsite, and even recover data or applications in the cloud. But the long-term ramifications of using these appliances to store data in the cloud are much more significant. They represent a bridge to recovering, and even potentially hosting, more of their applications and data with one or more cloud providers. Organizations should therefore give this end game of recovery specific attention both when they choose a cloud data protection appliance and when they choose the cloud provider(s) to which the appliance connects.

To receive regular updates when blog entries like this are posted on DCIG’s website, follow this link to subscribe to DCIG’s newsletter.




DCIG 2016-17 Hybrid Cloud Backup Appliance Buyer’s Guide Now Available

DCIG is pleased to announce the availability of the 2016-17 Hybrid Cloud Backup Appliance Buyer’s Guide developed from the backup appliance body of research. Other Buyer’s Guides based on this body of research include the recent DCIG 2016-17 Deduplicating Backup Appliance Buyer’s Guide and the forthcoming 2016-17 Integrated Backup Appliance Buyer’s Guide.

As core business processes become digitized, the ability to keep services online and to rapidly recover from any service interruption becomes a critical need. Given the growth and maturation of cloud services, many organizations are exploring the advantages of storing application data with cloud providers and even recovering applications in the cloud.

Hybrid cloud backup appliances (HCBA) are deduplicating backup appliances that include pre-integrated data protection software and integration with at least one cloud-based storage provider. An HCBA’s ability to replicate backups to the cloud supports disaster recovery needs and provides essentially infinite storage capacity.

The DCIG 2016-17 Hybrid Cloud Backup Appliance Buyer’s Guide weights, scores and ranks more than 100 features of twenty-three (23) products from six (6) different providers. Using ranking categories of Recommended, Excellent and Good, this Buyer’s Guide offers much of the information an organization should need to make a highly informed decision as to which hybrid cloud backup appliance will suit their needs.

Each backup appliance included in the DCIG 2016-17 Hybrid Cloud Backup Appliance Buyer’s Guide meets the following criteria:

  • Be available as a physical appliance
  • May also ship as a virtual appliance
  • Includes backup and recovery software that enables seamless integration into an existing infrastructure
  • Stores backup data on the appliance via on-premises DAS, NAS, or SAN-attached storage
  • Enables connectivity with at least one cloud-based storage provider for remote backups and long-term retention of backups in a secure/encrypted fashion
  • Provides the ability to connect to the cloud-based backup images from more than one geographically dispersed appliance
  • Be formally announced or generally available for purchase as of July 1, 2016

It is within this context that DCIG introduces the DCIG 2016-17 Hybrid Cloud Backup Appliance Buyer’s Guide. DCIG’s succinct analysis provides insight into the state of the hybrid cloud backup appliance marketplace. The Buyer’s Guide identifies the specific benefits organizations can expect to achieve using a hybrid cloud backup appliance, and key features organizations should be aware of as they evaluate products. It also provides brief observations about the distinctive features of each product. Ranking tables enable organizations to get an “at-a-glance” overview of the products, while DCIG’s standardized one-page data sheets facilitate side-by-side comparisons, assisting organizations to quickly create a short list of products that may meet their requirements.

End users registering to access this report via the DCIG Analysis Portal also gain access to the DCIG Interactive Buyer’s Guide (IBG). The IBG enables organizations to take the next step in the product selection process by generating custom reports, including comprehensive side-by-side feature comparisons of the products in which the organization is most interested.

By using the DCIG Analysis Portal and applying the hybrid cloud backup appliance criteria to the backup appliance body of research, DCIG analysts were able to quickly create a short list of products that meet these requirements which was then, in turn, used to create this Buyer’s Guide Edition. DCIG plans to use this same process to create future Buyer’s Guide Editions that further examine the backup appliance marketplace.

Additional information about each buyer’s guide edition, including a download link, is available on the DCIG Buyer’s Guides page.




Data Visualization, Recovery, and Simplicity of Management Emerging as Differentiating Features on Integrated Backup Appliances

Enterprises now demand higher levels of automation, integration, simplicity, and scalability from every component deployed into their IT infrastructure, and the appliances found in DCIG’s forthcoming Buyer’s Guide Editions that cover integrated backup appliances are a clear outgrowth of those expectations. Intended for organizations that want to protect applications and data and keep them behind corporate firewalls, these backup appliances come fully equipped from both hardware and software perspectives to do so.

Once largely assembled and configured by either IT staff or value added resellers (VARs), integrated backup appliances have gone mainstream and are available for use in almost any size organization. By bundling together both hardware and software, large enterprises get the turnkey backup appliance solution that just a few years ago was primarily reserved for smaller organizations. In so doing, large enterprises eliminate the days, weeks, or even months they previously had to spend configuring and deploying these solutions in their infrastructure.

The evidence of the demand for backup appliances at all levels of the enterprise is made plain by the providers who bring them to market. Once the domain of providers such as STORServer and Unitrends, “software only” companies such as Commvault and Veritas have responded to the demand for turnkey backup appliance solutions with both now offering their own backup appliances under their respective brand names.


Commvault Backup Appliance


Veritas NetBackup Appliance

In so doing, any size organization may get any of the most feature-rich enterprise backup software solutions on the market, whether IBM Tivoli Storage Manager (STORServer), Commvault (Commvault and STORServer), Unitrends, or Veritas NetBackup, delivered as a backup appliance. Yet while traditional all-software providers have entered the backup appliance market, behind the scenes new business demands are driving further changes that organizations should consider as they contemplate future backup appliance acquisitions.

  • First, organizations expect successful recoveries. A few years ago, the concept of all backup jobs completing successfully was enough to keep everyone happy and giving high-fives to one another. No more. Organizations recognize that they have reliable backups residing on a backup appliance and that these appliances may largely sit idle during off-backup hours. This gives enterprises some freedom to do more with these appliances during those periods, such as testing recoveries, recovering applications on the appliance itself, or even presenting backup copies of data to other applications as sources for internal testing and development. DCIG found that a large number of backup appliances support one or more vCenter Instant Recovery features, and the emerging crop of backup appliances can also host virtual machines and recover applications on them.
  • Second, organizations want greater visibility into their data to justify business decisions. The amount of data residing in enterprise backup repositories is staggering. Yet the lack of value that organizations derive from that stored data, combined with the potential risk retaining it presents, is equally staggering. Features that provide greater visibility into the metadata of these backups, analyze it, and help turn it into measurable value for the business are already starting to find their way onto these appliances. Expect these features to become more prevalent in the years to come.
  • Third, enterprises want backup appliances to expand their value proposition. Backup appliances are already easy to deploy, but maintaining and upgrading them or deploying them for other use cases gets more complicated over time. Emerging providers such as Cohesity, which makes its first appearance in DCIG Buyer’s Guides as an integrated backup appliance, directly address these concerns. Available as a scale-out backup appliance that can function as a hybrid cloud backup appliance, a deduplicating backup appliance, and/or an integrated backup appliance, Cohesity’s platform shows how enterprises can more easily scale and maintain an appliance over time while gaining the flexibility to use it internally in multiple ways.

The forthcoming DCIG 2016-17 Integrated Backup Appliance Buyer’s Guide Editions highlight the most robust and feature-rich integrated backup appliances available on the market today. As such, organizations should consider the backup appliances covered in these Buyer’s Guides as having many of the features they need to protect both their physical and virtual environments. Further, a number of these appliances give them early access to the features that will position them to meet their next set of recovery challenges, satisfy rising expectations for visibility into their corporate data, and simplify ongoing management so they may derive additional value from it.




SaaS Provider Decides to Roll Out Cohesity for Backup and DR; Interview with System Architect, Fidel Michieli, Part 2

Evaluating product features, comparing prices, and doing proofs of concept are important steps in the process of adopting almost any new product. But once one completes those steps, the time arrives to roll the product out and implement it. In this second installment of my interview series with System Architect Fidel Michieli, he shares how his company gained a comfort level with Cohesity for backup and disaster recovery (DR) and how broadly it decided to deploy the product in its primary and secondary data centers.

Jerome: How did you come to gain a comfort level for introducing Cohesity into your production environment?

Fidel: We first did a proof of concept (POC).  We liked what we saw about Cohesity but we had a set of target criteria based on the tests we had previously run using our existing backup software and the virtual machine backup software. As such, we had a matrix of what numbers were good and what numbers were bad. Cohesity’s numbers just blew them out of the water.

Jerome:  How much faster was Cohesity than the other solutions you had tested?

Fidel: Probably 250 percent or more. Cohesity does a metadata snapshot where it essentially uses VMware’s technology, but the way that it ingests the data and the amount of compute that it has available to do the backups creates the difference, if that makes sense. We really liked the performance for both backups and restores.

We had two requirements. On the Exchange side we needed to do granular message restores. Cohesity was able to help us achieve that objective by using an external tool that it licensed and which works. Our second objective was to get out of the tape business. We wanted to go to cloud. Unfortunately for us we are constrained to a single vendor. So we needed to work with that vendor.

Jerome: You mean single cloud vendor?

Fidel: Well it’s a tape vendor, Iron Mountain. We are constrained to them by contract. If we were going to shift to the cloud, it had to be to Iron Mountain’s cloud. But Cohesity, during the POC level, got the data to Iron Mountain.

Jerome: How many VMs?

Fidel: We probably have around 1,400 in our main data center and about 120 hosts. We have a two-site disaster recovery (DR) strategy with a primary and a backup. Obviously it was important to have replication for DR. That was part of the plan before the 3-2-1 rule of backup. We wanted to cover that.

Jerome: So you have Cohesity at both your production and DR sites replicating between them?

Fidel: Correct.

Jerome: How many Cohesity nodes at each site?

Fidel: We have 8 and 8 at both sites. After the POC we started to recognize a lot of the efficiencies from a management perspective. We knew that object storage was the way we wanted to go, the obvious reason being the metadata.

What the metadata means to us is that we can have a lot of efficiencies sit on top of your data. When you are analyzing or creating objects on your metadata, you can more efficiently manage your data. You can create objects that do compression, deduplication, objects that do analysis, and objects that hold policies. It’s more of a software defined data, if you will. Obviously with that metadata and the object storage behind it, our maintenance windows and backups windows started getting lower and lower.

In part 1 of this interview series, Fidel shares the challenges that his company faced with its existing backup configuration as well as the struggles it encountered in identifying a backup solution that scaled to meet a dynamically changing and growing environment.

In part 3 of this interview series, Fidel shares some of the challenges he encountered while rolling out Cohesity and the steps that Cohesity took to address them.

In the fourth and final installment of this interview series, Fidel describes how he leverages Cohesity’s backup appliance for both VM protection and as a deduplicating backup target for his NetBackup backup software.




SimpliVity Hyper-converged Solution Already Driving Enterprises to Complete Do-overs of their Data Center Infrastructures

Technologies regularly come along that prompt enterprises to re-do their existing data center infrastructure. Whether it is improved performance, lower costs, more flexibility, improved revenue opportunities, or some combination of all of these factors, they drive enterprises to update or change the technologies they use to support their business.

But every now and then a technology comes along that prompts enterprises to undertake a complete do-over of their existing data center infrastructures. This type of dramatic change is already occurring within organizations of all sizes that are adopting and implementing SimpliVity.

Hyper-converged infrastructures have had my attention for some time now. Combining servers, storage, networking, virtualization, and data protection into a single solution and delivering it as an easy-to-manage and scale appliance or software solution, hyper-converged infrastructures minimally struck me as novel and even disruptive for small and midsized business (SMB) environment. But as enterprise play … well, let’s just say I was more than a bit dubious.

OmniStack Technology Snapshot

Source: SimpliVity

That viewpoint changed after attending SimpliVity Connect last week in San Francisco. At an intimate event with maybe 30 people attending (including SimpliVity employees, partners, and customers, along with some analysts and press), I had unfettered access to the people within SimpliVity making the decisions and building the product, as well as to the partners and customers responsible for implementing and supporting SimpliVity’s OmniStack Technology.

Unlike too many analyst events where I sometimes sense customers, partners, and even the vendor feel obligated to curtail their answers or refrain from commenting, I saw very little of that at this event. If anything, when I challenged the customers on why they made the decision to implement SimpliVity, or the partners on why they elected to recommend SimpliVity over traditional distributed architectures (servers, storage, and networking), their answers were surprisingly candid and unrestrained.

One customer I spoke to at length over dinner was Blake Soiu, the IT director of Interland Corp, who thought at the outset that SimpliVity was simply too good to be true. After all, it promised to deliver servers, storage, networking, data protection, virtualization, and disaster recovery (<- yes, disaster recovery!) for less than what he would spend on refreshing his existing distributed architecture. Further, a refresh of his distributed architecture would only include the foundation for DR but not an actual working implementation of it. By choosing SimpliVity, he allegedly would also get DR.

Having heard promises like this in the past, his skepticism was palpable. But after testing SimpliVity’s product in his environment with his applications and then sharing the financial and technical benefits with Interland’s management team, the decision to switch to SimpliVity became remarkably easy to make.

As he privately told me over dinner, the primary concerns of the CEO and CFO are making money. The fact that they could lower their costs, improve the availability and recoverability of the applications in their infrastructures, and lower their risks was all that it took to convince them. On his side, he has realized a significant improvement in the quality of his life with the luxury of going home without being regularly called out. Further, he has a viable and working DR solution that was included as part of the overall implementation of SimpliVity.

Equally impressive were the responses from some of the value added resellers in attendance. One I spoke to at length was Ken Payne, the CTO of Abba Technologies. Abba is a former (and maybe still current) EMC reseller that offers SimpliVity as part of its product portfolio. However, Abba does more than offer technology products and services; it also consumes them as part of its CloudWorks offering.

Resellers such as Abba have a lot on the line, especially when they have partnerships with providers such as EMC. However, after evaluating SimpliVity both for internal use and as a potential offering to its customers, Payne felt Abba almost had no choice but to adopt it to stay at the front end of the technology curve, though the change was difficult to say the least. He says, “It is akin to throwing out everything you ever knew and believed in about IT and starting over.”

Abba has since brought SimpliVity in-house to use as the foundation of its cloud offering and is offering it to its customers. The benefits from using SimpliVity have been evident almost from the outset. One of Abba’s customers, after using SimpliVity for three months, finally gave up on trying to monitor the status of backups using SimpliVity’s native data protection feature.

However, he gave up not because they failed all of the time. Rather, they never failed, and he was wasting his time monitoring them. On the status of Abba using SimpliVity internally, Payne says, “The amount of time that Abba spends managing and monitoring its own infrastructure has dropped from 45 percent to five percent. On some weeks, it is zero.”

To suggest a do-over of how one does everything is never easy and, to do it successfully, requires a certain amount of faith and, at this stage, a high degree of technical aptitude and an appreciation of how complex today’s distributed environments truly are. In spite of these obstacles, organizations such as Interland Corp and Abba Technologies are making this leap forward and executing do-overs of their data center infrastructures to simplify them, lower their costs, and gain new levels of flexibility and opportunities to scale that existing distributed architectures could not easily provide.

But perhaps more impressive is the fact that SimpliVity is already finding its way into Global 50 enterprise accounts and displacing working, mission-critical applications. These types of events suggest that SimpliVity is ready for more than do-overs in SMB or even small and midsized enterprise (SME) data centers. It tells me that leading-edge, large enterprises are ready for this type of do-over in their data center infrastructures and have the budget and, maybe more importantly, the fortitude and desire to do so.




Dell NetVault and vRanger are Alive and Kicking; Interview with Dell’s Michael Grant, Part 3

Every now and then I hear rumors in the marketplace that the only backup software product that Dell puts any investment into is Dell Data Protection | Rapid Recovery while it lets NetVault and vRanger wither on the vine. Nothing could be further from the truth. In this third and final part of my interview series with Michael Grant, director of data protection product marketing for Dell’s systems and information management group, he refutes those rumors and illustrates how both the NetVault and vRanger products remain alive and kicking within Dell’s software portfolio.

Jerome: Can you talk about the newest release of NetVault?

Michael: Dell Data Protection | NetVault Backup, as we now call it, continues to be an important part of our portfolio, especially if you are an enterprise shop that protects more than Linux, Windows and VMware. If you have a heterogeneous, cross-platform environment, NetVault does the job incredibly effectively and at a very good price. NetVault development keeps up with all the revs of the various operating systems. This is not a small list of to-dos. Every time anybody revs anything, we rev additional agents and provide updates to support them.


Source: Dell

In this current rev we also improved the speed and performance of NetVault. We now have a protocol accelerator, so we can keep less data on the wire. Within the media server itself, we also had to improve the speed, and we wanted to address more clients. Customers protect thousands of clients using NetVault, and they want to add even more. To accommodate them, we automate the installation so that it’s effective, easily scalable, and not a burden to the administrator.

To speed up protection of the file system, we put multi-stream capability into the product, so one can break up bigger backup images into smaller images and then simultaneously stream those to the target of your choice. Obviously, we love to talk to organizations about putting the DR deduplication appliances in as that target, but because we believe in giving customers flexibility and choice, you can multi-stream to just about any target.

Restartable VMware backup addresses another big pain point for a lot of our customers. They really bent our development team’s ear and said, “Listen, going back and restarting the backup of an entire VMDK file is a pain if it doesn’t complete. You guys need to put an automatic restart in the product.”

Think about watching a show on DVR. If you did not make it all the way through the show in the first sitting, you don’t want to have to go back to the beginning and re-watch the entire thing the next time you watch it. You want to pick up where you left off.

Well, we actually put a similar capability in NetVault. We can restart the VM backup from wherever the backup ended. Then you can just pick back up, knowing that you have the last mountable restore point from the point in time when the backup trailed off. Just restart the VM backup and get the whole job done. That cuts hours out of your day if you did not get a full backup of a VM.

Sadly, backing up VMDK files, particularly in a dynamic environment, can be a real challenge. It is not unusual to have one fail midway through the job or not have a full job when you go to look in the queue. Restarting that VM backup just made a lot of sense for the IT teams.
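
The re-start capability Michael describes can be pictured as checkpointing: record how far the last attempt got, and resume from that offset on the next run instead of starting over. The sketch below is a hypothetical illustration of that pattern, not NetVault’s implementation.

```python
# Hypothetical illustration of a re-startable backup, not NetVault's code.
# A checkpoint file records how far the last attempt got; a retry resumes
# from that offset instead of re-reading the whole VMDK-sized image.
import json
import os

BLOCK = 64 * 1024 * 1024  # 64 MiB blocks (arbitrary)

def load_checkpoint(path):
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)["offset"]
    return 0

def save_checkpoint(path, offset):
    with open(path, "w") as f:
        json.dump({"offset": offset}, f)

def restartable_backup(source, target, checkpoint="backup.ckpt"):
    offset = load_checkpoint(checkpoint)          # pick up where we left off
    mode = "r+b" if os.path.exists(target) else "wb"
    with open(source, "rb") as src, open(target, mode) as dst:
        src.seek(offset)
        dst.seek(offset)
        while True:
            block = src.read(BLOCK)
            if not block:
                break
            dst.write(block)
            offset += len(block)
            save_checkpoint(checkpoint, offset)   # durable progress marker
    if os.path.exists(checkpoint):
        os.remove(checkpoint)                     # done; clear the checkpoint
```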

Those new features really highlight what is new in the NetVault 11 release that we just announced. Later in the first half of this year, you will see the accompanying updates to the agents for NetVault 11 so that we remain in sync with the latest releases from everybody from Oracle through Citrix and VMware, as well as any other agents that need to be updated to align with this NetVault 11 release.

Jerome: Is the functionality of vRanger and AppAssure now being folded under the Rapid Recovery brand?

Michael: That’s a little too far. We are blending the technologies, to be sure. But we are still very much investing in vRanger and it remains a very active part of our portfolio. To quote the famous Mark Twain line, “the tales of vRanger’s death are greatly exaggerated.”

dell_vranger_image

Source: Dell

We are still investing in it and it’s still very popular with customers. In fact, we made an aggressive price change in the fall to combine vRanger Pro with the standard vRanger offering. We just rolled in three years of service and made it all vRanger Pro. Then we dropped the price point down several hundred dollars, so that it’s less than any of the other entry-level price points for virtualized backup in the industry. We will continue to invest in that product for dynamic virtual environments.

So, yes, you will absolutely still see it as a standalone product. However, even with that being the case, there is no reason that we should not reach in there and get some amazing code and start to meld that with Rapid Recovery. As DCIG has pointed out in its research and, as our customers tell us frequently, they would like to have as few backup tools in their arsenal as possible, so we will continue to blend those products to simplify data protection for our customers. The bottom line for us is, wherever the customer wants to go, we can meet them there with a solution that fits.

Jerome: How are you positioning each of these three products in terms of market segment?

Michael: I do want to emphasize that we focus very much on the midmarket. We define midmarket as 500 to 5,000 employees. When we took a look at who really buys these products, we found that 90-plus percent of our solutions are being deployed by midmarket firms. The technologies that we have just talked about are well aligned to that market, and that makes them pretty unique. The midmarket is largely underserved when it comes to IT solutions in general, but especially when it comes to backup and recovery. We are focusing on filling a need that has gone unfilled for too long.

In Part 1 of this interview series, Michael shares some details on the latest features available in Dell’s data protection line and why organizations are laser-focused on recovery like never before.

In Part 2 of this interview series, Michael elaborates upon how the latest features available in Dell’s data protection line enable organizations to meet the shrinking SLAs associated with these new recovery objectives.




Small, Smaller and Smallest Have Become the New SLAs for Recovery Windows; Interview with Dell’s Michael Grant, Part 2

Small, smaller and smallest. Those three words pretty well describe the application and file recovery windows that organizations of all sizes must meet with growing regularity. The challenge is finding tools and solutions that enable them to satisfy these ever-shrinking recovery windows. In this second part of my interview series with Michael Grant, director of data protection product marketing for Dell’s systems and information management group, he elaborates upon how the latest features available in Dell’s data protection line enable organizations to meet the shrinking SLAs associated with these new recovery objectives.

Jerome: When organizations go to restore data or an application, restores can actually take longer to complete because the data is stored in a deduplicated state and they have to re-hydrate it. How does Dell Data Protection | Rapid Recovery manage to achieve these 15-minute to two-hour recovery windows?

Michael: We see prospective customers challenged with these lengthy recovery times as well. If you are moving data a long distance, particularly if you have deduplicated it, you have now added re-hydration and latency to the equation.
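
A quick back-of-envelope model helps show why re-hydration plus distance stretches recovery windows. The ratios and link speeds below are my own illustrative assumptions, not Dell measurements.

```python
# Back-of-envelope restore-time estimate. All figures are illustrative
# assumptions, not measurements of any Dell product.
def restore_hours(logical_gb, dedupe_ratio, wan_mbps, rehydrate_gbps_local):
    """Estimate hours to restore deduplicated data pulled across a WAN."""
    stored_gb = logical_gb / dedupe_ratio                 # what actually sits remotely
    transfer_s = stored_gb * 8 * 1000 / wan_mbps          # GB -> Gb -> Mb over WAN Mb/s
    rehydrate_s = logical_gb * 8 / rehydrate_gbps_local   # rebuild the full data locally
    return (transfer_s + rehydrate_s) / 3600

# Example: 2 TB logical, 10:1 dedupe, 200 Mb/s WAN, 4 Gb/s local rehydration
print(round(restore_hours(2000, 10, 200, 4), 1), "hours")  # roughly 3.3 hours
```

Even with generous assumptions, pulling deduplicated data back across a WAN and re-hydrating it consumes hours, which is exactly the gap the hybrid approach described next is meant to close.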

At the same time, their onsite server recovery service level agreements (SLAs) have gotten small. We have already seen a lot of mid-market customers turning to Rapid Recovery to deal with this challenge. What they are doing is building something of a hybrid environment. Now, long-term, they tell us in no uncertain terms that when they find the ways and the means to get all of their data protection off site, they would like to do that. Will they really do that? I don’t know. But that’s long term. In the short-term, they are focused on building these hybrid environments.

StorageReview-Dell-Data-Protection-Rapid-Recovery

Source: Dell

When I say building a hybrid environment, typically that means they run a Rapid Recovery media server on site, and keep a full repository there. Then they replicate to public or private cloud. As part of what Rapid Recovery does, it spawns a hot standby virtual machine (VM), which is always running and available.

It updates as frequently as you take snapshots of your environment, and then replicates it automatically. For users, that means they can recover on site within literally minutes. They can recover offsite depending upon the latency. It is deduplicated throughout. But they can also access that media server directly.

In the event they have an outage where recovery time would be too onerous, they can access the media server directly running on the data that is minutes to no more than a half hour old, and work there while the IT team takes time to decide how they want to restore and where they want to restore. This is how we bridge the two, so that you get the data back, get the application back, and get the workforce up and running even though you may not have yet completed your entire restore.
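
Boiled down to pseudocode, the hybrid pattern is a repeating loop: snapshot locally, replicate the snapshot off site, and keep a standby VM current. The sketch below uses hypothetical function names to show the shape of that workflow; it is not Rapid Recovery’s API.

```python
# Generic illustration of the hybrid protection loop described above. The
# function names are hypothetical stand-ins, not Rapid Recovery's API.
import time

def take_local_snapshot(vm_id):
    """Capture a point-in-time snapshot on the on-site media server."""
    return {"vm": vm_id, "taken_at": time.time()}

def replicate_to_cloud(snapshot):
    """Ship the (deduplicated) snapshot to the cloud repository."""
    print(f"replicated snapshot of {snapshot['vm']} off site")

def update_standby_vm(snapshot):
    """Apply the snapshot to the hot standby VM so it stays current."""
    print(f"standby VM for {snapshot['vm']} is current as of {snapshot['taken_at']:.0f}")

def protection_loop(vm_id, interval_s=900, iterations=3):
    """Snapshot, replicate, and refresh the standby on a fixed cadence."""
    for _ in range(iterations):
        snap = take_local_snapshot(vm_id)
        replicate_to_cloud(snap)      # off-site copy for disaster recovery
        update_standby_vm(snap)       # keeps the recovery point minutes old
        time.sleep(interval_s)

# protection_loop("erp-server", interval_s=900)  # e.g. every 15 minutes
```

The shorter the interval, the smaller the window of data that can be lost, which is why the snapshot cadence effectively sets the recovery point.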

In the DR series we see something similar. Re-hydration and line latency seem to take a bit of time. We are watching a lot of customers at this point put one or sometimes several DRs on site and then replicate between the two. In that scenario, you are not dealing as much with the re-hydration and latency between them.

Right now, given current capacities, customers easily keep 90 to 120 days of data next to the end users and systems that need it, so they may restore at line speeds within their environment, versus having to get it from off site.

But if you talk about the future of data protection, the next big challenge I think all of us face is, “How do we effectively put stuff at a distance?” So think away from the CPU, away from the storage, away from the end users, but still in a way that we can retrieve it almost instantly. That will be the new challenge that DR-as-a-service, backup-as-a-service, anything-as-a-service, really, has to tackle and solve. It has to perform these tasks to the satisfaction of the admin whose job is on the line if that application does not get back online and running.

Right now, I see a growing interest from customers and partners in getting as much data off site as they can. We continue to work with MSPs and with other partners to take a look at how can we do this most effectively while offering the same very low service level agreements (SLAs). We want to do so without unnecessarily being bounded by legacy architectures that many people have inherited, are stuck with, and have to figure out a way to augment and manage.

Something that also ties into this is that we have put out a free edition of our Endpoint Recovery offering. As part of the digital transformation many of these businesses are going through, there’s a lot of IP that is getting created on the laptop while on the train ride to work or on the airplane to the next destination.

dwuf15-introducing-dell-data-protection-endpoint-recovery-4-638

Source: Dell

Our Endpoint Recovery solution is designed with the same principles in mind as Rapid Recovery. It’s a snapshot-based technology that can frequently snap your entire image. In today’s version, it’s user driven, so an end user is responsible for both backing up and then restoring his or her data. You can restore granularly or up to a full image if you choose.

Later this year, we are going to put a management interface on that. For firms that are interested in managing everything holistically, we will provide that type of interface. The free edition is available for folks to experiment with and also to give us feedback through our data collection process so that we can understand what they are doing there.

If you think about end point all the way out to cloud, it is not hard to see how we connect the dots. We have got to protect the end point. We have got to help you get out to the cloud. The only way to do that, frankly, is to do it in cooperation with our customers. Having them tell us what they need, rather than us telling them what they need. That’s the impetus behind both this release and other features we plan to release later this year.

In Part 1 of this series, Michael Grant summarizes some of the latest features available in Dell’s data protection line and why organizations are laser-focused on recovery like never before.

In Part 3 of this series, Michael Grant shares about some of the new development going on in the NetVault and vRanger product lines.




The Inability to do a Server Recovery in Minutes May Become a Resume Generating Event; Interview with Dell’s Michael Grant, Part 1

In the last few years, anytime I get an update on new features from almost any provider of data protection products, I can almost guarantee they will talk about how they have improved their ability to do recovery. But perhaps no one better articulated why they need to improve recovery than Michael Grant, director of data protection product marketing for Dell’s systems and information management group. In this first installment in my interview series with Michael, he summarizes some of the latest features available in Dell’s data protection line and why organizations are laser-focused on recovery like never before.

Jerome: Michael, good to speak with you and thanks for taking time out of your schedule to join me today. Can you begin by sharing what new features are available as part of the latest releases to Dell’s existing line of data protection products?

Michael: Jerome, good to speak with you again, as well. At a high level, the feature enhancements to all of our solutions are aimed at helping midmarket organizations tackle their digital initiatives. It’s very much the same story across the board when we talk to customers in that they consistently tell us about their challenges with legacy architecture.

MG Head_Feb14

They have inherited digital architecture that they need to continue to manage and maintain. They have new applications that they are trying to put into their environments using their existing IT staff. A lot of these companies want to get further connected to their customers and many also want to get connected to the cloud in some way. And that’s really what’s driving all the enhancements we recently announced.

In the DR series line of disk backup appliances, we released three new versions, including the DR4300, which is an upgrade over the existing DR4000, and an entry-level offering called the DR4300e, which is designed for a smaller scale environment and comes with a price point that makes it more affordable for that small- to mid-size organization. Dell also released an upgrade to the DR6000 series, the DR6300, which features the increased capacity and scale that organizations in the upper midmarket to lower end of the enterprise are looking for from their backup appliances.

We also launched NetVault 11, and the focus there was really on more speed and more scalability. For instance, we added support for more clients for broader levels of data protection. We also took a look at the file system and how we can multi-stream it to achieve faster performance. In this case, what we’ve done is chunked up backup jobs to increase performance. That helps with the I/O load and facilitates putting in re-startable VM backups. Our customers were pretty emphatic about this one: don’t make me go back and restart an entire backup of a VM if I have a failure; just let me restart from wherever I was.

As excited as we are about the enhancements to NetVault and the DR Line, the big headline in this latest round of releases is general availability of Dell Data Protection | Rapid Recovery, an entirely new product that integrates proven IP from AppAssure and other Dell solutions. For example, it brings in some of vRanger’s capabilities. It also brings in some capabilities from the continuous data protection product from the SonicWALL acquisition.

As you know, Jerome, from our work with DCIG, Dell’s plan is to judiciously select capabilities from the key products in our portfolio and combine them to create new, more versatile solutions that meet our customers’ quickly changing requirements. The Rapid Recovery product is a great example of this work and is very much aimed at that zero-impact recovery that customers with new digital projects tell us is essential to have.

One last point. We have consistently seen server service level agreements (SLAs) go down. As customers put in more servers – virtual or physical – a lot of times these are aimed at the x86 or Linux market. Now the SLAs for recovering those servers, if they have an event, are anywhere from two hours down to 15 minutes.

The SLAs used to be a lot longer. According to our research, if you go back no more than five years, customers said they could get away with four hours on some critical workloads and a little bit longer on others. We now see those SLAs reduced to literally minutes. If a server recovery takes longer, it becomes a resume generating event for some member of the IT team. This entire announcement is focused on that portion of the market, very much with those customers in mind.

In Part 2 of this interview series, Michael shares how Rapid Recovery positions an organization to recover a server in minutes.

In Part 3 of this interview series, Michael shares how both the NetVault and vRanger products are alive and kicking within Dell’s software portfolio.




DCIG 2015-16 Hybrid Cloud Backup Appliance Buyer’s Guide Now Available

DCIG is pleased to announce the availability of its 2015-16 Hybrid Cloud Backup Appliance Buyer’s Guide that evaluates and ranks more than 100 features from nearly 60 different hybrid cloud backup appliances from ten (10) different providers.

DCIG-2015-16-Hybrid-Cloud-Backup-Appliance-Icon-200x200

DCIG’s goal in preparing this Buyer’s Guide is to evaluate and rank each appliance based upon a comprehensive list of features that reflects the needs of the widest range of organizations. The Buyer’s Guide rankings enable “at-a-glance” comparisons between many different appliance models and its standardized data sheets facilitate side-by-side reviews to quickly enable organizations to examine products in greater detail.

Hybrid cloud backup appliances are particularly well-suited for organizations that need:

  • A turnkey backup and recovery solution to replace or upgrade their existing backup software
  • To keep their backup data both locally and with an off-site cloud provider
  • To do fast application recoveries
  • To set the stage for implementing an offsite disaster recovery solution

The DCIG 2015-16 Hybrid Cloud Backup Appliance Buyer’s Guide Top 10 solutions include (in alphabetical order):

  • Cobalt Iron Vault – Enterprise
  • Cobalt Iron Vault – Large
  • Cobalt Iron Vault – Medium
  • STORServer EBA 1202-CV
  • STORServer EBA 2502-CV
  • Unitrends Recovery 823S
  • Unitrends Recovery 824S
  • Unitrends Recovery 933S
  • Unitrends Recovery 936S
  • Unitrends Recovery 943S

The Unitrends Recovery 943S earned the Best-in-Class ranking among all hybrid cloud backup appliances evaluated in this Buyer’s Guide. The Recovery 943S stood out by offering the following capabilities:

  • Connectivity to multiple cloud providers
  • Heightened levels of virtual server data protection
  • Robust encryption options for data at-rest and in-flight
  • Scales to offer high levels of cache, processing power and storage capacity
  • Support for multiple networking storage protocols and dozens of operating systems

About the DCIG 2015-16 Hybrid Cloud Backup Appliance Buyer’s Guide

DCIG creates Buyer’s Guides in order to help organizations accelerate their product research and selection process by driving the cost, time and effort out of the research process while simultaneously increasing confidence in the results.

The DCIG 2015-16 Hybrid Cloud Backup Appliance Buyer’s Guide achieves the following objectives:

  • Provides an objective, third party evaluation of products that evaluates and scores their features from an end user’s perspective
  • Ranks each appliance in each category
  • Provides a standardized data sheet for each appliance so organizations may quickly do side-by-side product comparisons
  • Provides insights into what options these appliances offer to integrate with third party cloud storage providers
  • Provides insight into which features will result in improved performance
  • Provides a solid foundation for getting competitive bids from different providers with products that are based on “apples-to-apples” comparisons

The DCIG 2015-16 Hybrid Cloud Backup Appliance Buyer’s Guide is available immediately to subscribing users of the DCIG Analysis Portal. Individuals who have not yet subscribed to the DCIG Analysis Portal may test drive the DCIG Analysis Portal as well as download this Guide by following this link.




The Dell DL4300 Puts the Type of Thrills into Backup and Recovery that Organizations Really Want

Organizations have long wanted to experience the thrills of non-disruptive backups and instant application recoveries. Yet the solutions delivered to date have largely been the exact opposite, offering unwanted backup pain with very few of the recovery thrills that organizations truly desire. The new Dell DL4300 Backup and Recovery Appliance takes the pain out of daily backup and puts the right types of thrills into the backup and recovery experience.

Everyone enjoys a thrill now and then. However, individuals should get their thrills at an amusement park, not when they back up or recover applications or manage the appliance that hosts their backup software. In cases like these, boring is the goal when it comes to performing backups and managing the appliance that hosts the software, with the excitement and thrills appropriately reserved for fast, successful application recoveries. This is where the latest Dell DL4300 Backup and Recovery Appliance introduces the right mix of boring and exciting into today’s organizations.

Show Off

Being a show-off is rarely, if ever, perceived as a “good thing.” However, IT staff can now in good conscience show off a bit by demonstrating the DL4300’s value to the business as it quickly backs up and recovers applications without putting business operations at risk. The Dell DL4300 Backup and Recovery Appliance’s AppAssure software provides the following five (5) key features to give them this ability:

  • Near-continuous backups. The Dell DL4300 may perform application backups as frequently as every five (5) minutes for both physical and virtual machines. During the short time it takes to complete a backup, it consumes only a minimal amount of system resources – no more than 2 percent. Since the backups occur so quickly, organizations have the flexibility to schedule as many as 288 backups in a 24-hour period, which helps to minimize the possibility of data loss so organizations can achieve near-real-time recovery point objectives (RPOs). (A short illustrative calculation after this list shows the arithmetic.)
  • Near-instantaneous recoveries. The Dell DL4300 complements its near-continuous backup functionality by also offering near-instantaneous application recoveries. Its Live Recovery feature works across both physical and virtual machines and is intended for use in situations where application data is corrupted or becomes unavailable. In those circumstances, Live Recovery can within minutes present data residing on non-system volumes to a physical or virtual machine. The application may then access that data and resume operations until the data is restored and/or available locally.
  • Virtual Standby. The Dell DL4300’s Virtual Standby feature complements its Live Recovery feature by providing an even higher level of availability and recovery for those physical or virtual machines that need this level of recovery. To take advantage of this feature, organizations identify production applications that need instant recovery. Once identified, these applications are associated with the up to four (4) virtual machines (VMs) that may be hosted by a Dell DL4300 appliance and which are kept in a “standby” state. While in this state, the Standby VM on the DL4300 is kept updated with changes on the production physical or virtual VM. Then should the production server ever go offline, the standby VM on the Dell DL4300 will promptly come online and take over application operations.
  • Helps to ensure application-consistent recoveries. Simply being able to bring up a Standby VM on a moment’s notice may be insufficient for some production applications. Applications such as Microsoft Exchange create checkpoints to ensure they are brought up in an application-consistent state. In cases such as these, the DL4300 integrates with applications such as Exchange by regularly performing mount checks for specific Exchange server recovery points. These mount checks help to guarantee the recoverability of Microsoft Exchange.
  • Open Cloud support. As more organizations keep their backup data on disk in their data centers, many still need to retain copies of data offsite without either moving it to tape or setting up a secondary site to which to replicate the data. This makes integration with public cloud storage providers to archive and retain backup copies an imperative. The Dell DL4300 meets this requirement by providing one of the broadest levels of public cloud storage integration available as it natively integrates with Amazon S3, Microsoft Azure, OpenStack and Rackspace Cloud Block storage.
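
As noted in the first bullet above, the worst-case recovery point falls directly out of the backup interval. The short calculation below is illustrative arithmetic, not Dell tooling.

```python
# Illustrative RPO arithmetic for interval-based backups (not Dell code).
def backups_per_day(interval_minutes):
    """How many backups fit in a 24-hour period at a fixed interval."""
    return 24 * 60 // interval_minutes

def worst_case_rpo_minutes(interval_minutes):
    # At most one interval's worth of changes can be lost between backups.
    return interval_minutes

print(backups_per_day(5))             # 288 backups in a 24-hour period
print(worst_case_rpo_minutes(5))      # data loss bounded at roughly 5 minutes
```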

The Thrill of Having Peace of Mind

The latest Dell DL4300 series goes a long way towards introducing the type of excitement that organizations really want to experience when they use an integrated backup appliance. It also goes an equally long way toward providing the type of peace of mind that organizations want when implementing a backup appliance or managing it long term.

For instance, the Dell DL4300 gives organizations the flexibility to start small and scale as needed in both its Standard and High Capacity models with their capacity-on-demand license features. The Dell DL4300 Standard comes equipped with 5TB of licensed capacity and a total of 13TB of usable capacity. Similarly, the Dell DL4300 High Capacity ships with 40TB of licensed capacity and 78TB of usable capacity.

Configured in this fashion, the DL4300 series minimizes or even eliminates the need for organizations to install additional storage capacity at a later date should their licensed capacity ever run out of room. If the 5TB threshold is reached on the DL4300 Standard or the 40TB limit is reached on the DL4300 High Capacity, organizations only need to acquire an upgrade license to access and use the pre-installed additional capacity. This takes away the unwanted worry about later upgrades, as organizations may easily and non-disruptively add 5TB of additional capacity to the DL4300 Standard or 20TB of additional capacity to the DL4300 High Capacity.
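
The capacity-on-demand model amounts to checking consumption against the licensed ceiling rather than the physical one. The sketch below illustrates that logic with the DL4300 Standard figures cited above; it is a hypothetical model, not Dell’s licensing code.

```python
# Hypothetical illustration of capacity-on-demand licensing (not Dell code).
# The appliance ships with more usable capacity than is licensed; hitting the
# licensed ceiling calls for a license upgrade, not a hardware install.
from dataclasses import dataclass

@dataclass
class Appliance:
    usable_tb: float      # physically installed capacity
    licensed_tb: float    # capacity unlocked by the current license
    used_tb: float = 0.0

    def can_store(self, new_tb: float) -> bool:
        return self.used_tb + new_tb <= self.licensed_tb

    def apply_upgrade_license(self, extra_tb: float) -> None:
        # Unlock pre-installed capacity, capped by what is physically there.
        self.licensed_tb = min(self.licensed_tb + extra_tb, self.usable_tb)

# DL4300 Standard figures from the text: 13 TB usable, 5 TB licensed.
dl4300_std = Appliance(usable_tb=13, licensed_tb=5, used_tb=4.8)
print(dl4300_std.can_store(0.5))        # False: licensed ceiling reached
dl4300_std.apply_upgrade_license(5)     # non-disruptive 5 TB license upgrade
print(dl4300_std.can_store(0.5))        # True: more licensed headroom
```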

Similarly, the DL4300’s Rapid Appliance Software Recovery (RASR) removes the shock of being unable to recover the appliance should it fail. RASR improves the reliability and recoverability of the appliance by taking regularly scheduled backups of the appliance itself. Should the appliance ever experience data corruption or fail, organizations may first do a default restore to the original backup appliance configuration from an internal SD card and then restore from a recent backup to bring the appliance back up to date.
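
The RASR flow is essentially a two-stage recovery: reset the appliance to its factory image from the internal SD card, then apply the most recent scheduled appliance backup. The outline below is a hypothetical illustration of that sequence, not Dell’s tooling.

```python
# Hypothetical outline of a two-stage appliance recovery like the RASR flow
# described above; function names are illustrative, not Dell's tooling.
def factory_restore_from_sd_card():
    """Stage 1: return the appliance to its original software configuration."""
    print("appliance reset to factory image from internal SD card")

def restore_latest_appliance_backup(backup_catalog):
    """Stage 2: apply the most recent scheduled appliance backup."""
    latest = max(backup_catalog, key=lambda b: b["taken_at"])
    print(f"appliance configuration restored to {latest['taken_at']}")

def recover_appliance(backup_catalog):
    factory_restore_from_sd_card()
    restore_latest_appliance_backup(backup_catalog)

recover_appliance([{"taken_at": "2016-01-10"}, {"taken_at": "2016-01-17"}])
```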

The Dell DL4300 Provides the Types of Thrills that Organizations Want

Organizations want the sizzle that today’s latest technologies have to offer without the unexpected worries that can too often accompany them. The Dell DL4300 provides this experience. It makes its ongoing management largely a non-issue so organizations may experience the thrills of near-continuous backup and near-instantaneous recovery of data and applications across their physical, virtual and/or cloud infrastructures.

It also delivers the new type of functionality that organizations want to meet their needs now and into the future. Through its native integration with multiple public cloud storage providers and the flexibility of its virtual standby feature for enhanced testing to ensure consistent and timely recovery of data, the Dell DL4300 Backup and Recovery Appliance delivers the type of thrills organizations want and should rightfully expect, backed by industry-leading self-recovery features and enhanced appliance management.




A Glimpse into the Next Decade of Backup and Recovery; Interview with Dell Software’s General Manager, Data Protection, Brett Roscoe, Part IX

Today backup and recovery looks almost nothing like it did 10 years ago. But as one looks at all of the changes still going on in backup and recovery, one can only guess what backup and recovery might look like in another 5-10 years. In this ninth and final installment of my interview series with Brett Roscoe, General Manager, Data Protection for Dell Software, Brett provides some insight into where he sees backup and recovery going over the next decade.

Jerome: There is a lot of excitement out there right now around data protection and how much backup and recovery has changed in the last 5 – 10 years. To a certain degree, it does not even look like it did 10 years ago. It makes me wonder what it is going to look like in 5 or 10 more years in terms of what new technologies are going to come to market or how they are going to take advantage of new technologies. Do you have any thoughts about what the future of backup and recovery looks like, and is it even going to be called backup and recovery?

Brett: That’s a great question, and boy, if I could accurately predict what’s going to happen ten years from now, that would be something! But you’re absolutely right in saying that this is a market that’s quickly evolving and changing. That is one nice thing about the software side of the business: you can quickly change with the market and meet these changing customer needs.

But if I had to predict what I think is going to happen, it is clear to me that we are going to continue to move to real-time or near real-time data protection. That world of 10 years ago, where you scheduled backup jobs at night, is just going to fade away. The real-time backup means that as soon as something changes in your environment, it’s immediately protected. I do not think we are going to get away from that. I think it’s going to be driven by the technology, as well as the demands of our customers.

Data is becoming more and more critical. More and more of a company depends on this data being available and being recoverable. If a business has a disaster and loses its data, a large percentage of them never recover and never stay in business. That is just becoming a reality.

In addition, I think you’re going to see more and more cloud and backup-and-recovery-as-a-service models, especially in the smaller SMB side of the market. Customers are looking to offload and maybe simplify their IT infrastructure. When you can start using very efficient technologies such as WAN optimization, deduplication and compression to minimize the bandwidth required to move data into the cloud, that reduces costs while making bandwidth more efficient. This makes the cloud much more usable as a backup and recovery option.
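
To put rough numbers on that point, the illustrative calculation below (my assumptions, not Dell figures) compares how long a day’s changed data takes to reach the cloud with and without deduplication and compression.

```python
# Illustrative bandwidth arithmetic; the ratios and link speed are assumptions.
def hours_to_upload(changed_gb, dedupe_ratio, compression_ratio, uplink_mbps):
    """Hours needed to push a day's changed data to the cloud."""
    reduced_gb = changed_gb / (dedupe_ratio * compression_ratio)
    return reduced_gb * 8 * 1000 / uplink_mbps / 3600   # GB -> Gb -> Mb over Mb/s

# 500 GB of daily change, 5:1 dedupe, 2:1 compression, 100 Mb/s uplink
print(round(hours_to_upload(500, 5, 2, 100), 2), "hours vs",
      round(hours_to_upload(500, 1, 1, 100), 2), "hours unreduced")
```

With those assumptions the reduced transfer fits comfortably inside an overnight window, while the unreduced one does not, which is why data reduction is what makes cloud backup practical on ordinary links.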

Further, virtual standby technology is coming into its own. Using this technology, you can actually run your applications in the cloud for some period of time. Using this approach, you may lease some additional compute and storage resources in the cloud to deliver on this capability. However, it is a temporal thing and allows you to meet an SLA which would normally require much more cost if you did it locally. So, in that vein, I expect to see expanded use of the cloud and a continuation of today’s hybrid cloud environment IT initiatives.

Another trend is using backup for additional IT functions like analytics, testing, and data migration. We have all this great data that we’ve captured through the backup process. Now there is a new push to create more value from this idle data and explore what else we can do with it.

There are all kinds of great tools coming out in terms of analytics and data mining capabilities that can provide additional value to any company that can mine through that data and use it in some other manner.

I also expect to see tighter integration of backup to traditional management infrastructure. This idea that backup becomes a little bit more of the mainstream tool set that you see in IT through your larger IT frameworks or management infrastructures is going to continue. Many big companies are starting to build backup tools into their applications or into their hypervisors. We will continue to see that.

Lastly, there’s the trend we hit on previously when we talked about the Dell Backup & Disaster Recovery Suite, which is the desire for more flexibility to leverage Dell or whichever vendor’s IP across the portfolio. That will continue to grow. How do you get customers to a place where they feel like they can leverage more and more of your IP no matter what product they buy? Dell is leading the charge on that. We really have a vision for making sure all of our IP is available to our customers.

In Part I of this interview series, Brett and I discussed the biggest backup and recovery challenges that organizations face today.
In Part II of this interview series, Brett and I discussed the imperative to move ahead with next gen backup and recovery tools.
In Part III of this interview series, Brett and I discussed four (4) best practices that companies should be implementing now to align the new capabilities in next gen backup and recovery tools with internal business processes.
In Part IV of this interview series, Brett and I discussed the main technologies in which customers are currently expressing the most interest.
In Part V of this interview series, Brett and I examine whether or not one backup software product can “do it all” from a backup and recovery perspective.
In Part VI of this interview series, Brett and I discuss Dell’s growing role as a software provider.
In Part VII of this interview series, Brett provides an in-depth look at Dell’s new Backup and Disaster Recovery Suite.
In Part VIII of this interview series, Brett explains how Dell now provides a single backup solution for multiple backup and recovery challenges.




Answering The Question of Whether One Backup Product Can Do It All; Interview with Dell Software’s General Manager, Data Protection, Brett Roscoe Part V

Data protection has evolved well beyond the point where one can back up and recover data with once-a-day backups. Continuous data protection, array-based snapshots, asynchronous replication, high availability, disaster recovery, backup and recovery in the cloud and long-term backup retention are now all part of managing backup.

However, the real question becomes, “Can one product even manage all of these different facets of backup and recovery? Or should a backup solution even try to accomplish this feat?” In this fifth installment of my interview series with Brett Roscoe, General Manager, Data Protection for Dell Software, we discuss this very important question of whether one backup product can do it all in today’s data center.

Jerome: There are a lot of demands being placed on backup and recovery software these days, so the question I have for you is this: can one backup software product still do it all to meet these different customer demands? If so, why? If not, why not?

Brett: That’s a great question, and it is hard to provide a yes or no answer to it. Due to the rapid pace of change in IT, we see lots of variables that are changing the landscape, including the software-defined data center, container-based application rollout and the ongoing trend of virtualization and cloud adoption. As a result, customers’ requirements for data protection are changing, and that in turn is changing what they look for and need from data protection vendors like Dell and others.

Given that the needs of customers are rapidly evolving, we as a company spend a lot of time working to make sure we provide the new technologies and unique capabilities that can help them meet those needs. That’s one of the core things Dell drives for with every decision we make. As a general manager, I need to make sure that my development teams are constantly working to ensure that our technologies keep up with the changing marketplace.

To tie it back to the initial question, there are certainly ways to consolidate and simplify data protection and disaster recovery. So, for example, I talked about our DR line of target-based disk backup and deduplicating appliances. Those products today not only work seamlessly with the other products in our portfolio, but they can also work with all other backup products a customer might already have in their environment.

If customers want to look for ways to consolidate technology, the DR series is a great place to start. The DR products are designed to run in a heterogeneous environment with all applications, any OS, and all backup software. But there are certainly advantages to starting to consolidate in some of those areas. We have a broad portfolio, which really offers one of the broadest sets of capabilities in the industry, and we’ve really worked to tune those products to work better together. Many of our products, like AppAssure and vRanger, provide very rapid recovery times and native replication tools that can extend traditional backup and recovery to more of a business continuity solution.

We are also really driving to integrate across that product line. You are starting to see more and more capabilities of each of these different products within each of the other product lines. We have a lot of integration going on between those products and, over time, you will be able to do more and more to address different use case scenarios within these products.

When we talk to customers, we certainly see an interest in consolidation. Customers are moving away from individual replication tools, high availability tools, and tools that they use for offsite data management, and at Dell, we’ve moved to a place where we can now provide all of that in one tool.

We can do things like data protection using traditional backup and recovery. We can replicate each of those snapshots to an offsite location. We can stand up each of those snapshots in an offsite location or onsite. You can see how that might start moving you toward centralizing more of your capabilities in the Dell data protection tool set. To that end, we recently introduced our backup and disaster recovery suite, which provides a capacity-based license by which you can use all of the products in our portfolio and consolidate their respective capabilities.

In Part I of this interview series, Brett and I discussed the biggest backup and recovery challenges that organizations face today.
In Part II of this interview series, Brett and I discussed the imperative to move ahead with next gen backup and recovery tools.
In Part III of this interview series, Brett and I discussed four (4) best practices that companies should be implementing now to align the new capabilities in next gen backup and recovery tools with internal business processes.
In Part IV of this interview series, Brett and I discussed the main technologies in which customers are currently expressing the most interest.

In Part VI of this interview series, Brett and I discuss Dell’s growing role as a software provider.

In Part VII of this interview series, Brett provides an in-depth explanation of Dell’s data protection portfolio.

In Part VIII of this interview series, Brett and I discuss the trend of vendors bundling different but complementary data protections products together in a single product suite.