Four Ways to Achieve Quick Wins in the Cloud

More companies than ever want to use the cloud as part of their overall IT strategy. To do so, they often look to achieve some quick wins in the cloud to demonstrate its value. Achieving these quick wins also gives them practical, hands-on experience in the cloud. Incorporating the cloud into your backup and disaster recovery (DR) processes may be the best way to get these wins.

Any company hoping to get some quick wins in the cloud should first define what a “win” looks like. For the purposes of this blog entry, a win consists of:

  • Fast, easy deployments of cloud resources
  • Minimal IT staff involvement
  • Improved application processes or workflows
  • The same or lower costs

Here are four ways for companies to achieve quick wins in the cloud through their backup and DR processes:

#1 – Take a Non-disruptive Approach

When possible, leverage your company’s existing backup infrastructure to store copies of data in the cloud. Nearly all enterprise backup products, whether backup software or deduplication backup appliances, interface with public clouds. These products can store backup data in the cloud without disrupting your existing environment.

Using these products, companies can get exposure to the public cloud’s core compute and storage services. These are the cloud services companies are most apt to use initially and represent the most mature of the public cloud offerings.

#2 – Deduplicate Backup Data Whenever Possible

Public cloud providers charge monthly for every GB of data that companies store in their respective clouds. The more data that your company stores in the cloud, the higher these charges become.

Deduplicating data reduces the amount of data that your company stores in the cloud. In so doing, it also helps to control and reduce your company’s monthly cloud storage costs.
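To make the cost impact concrete, the short Python sketch below estimates monthly storage charges with and without deduplication. The per-GB price and the 10:1 deduplication ratio are illustrative assumptions, not figures from any particular provider.

```python
# Rough estimate of monthly cloud storage charges with and without deduplication.
# The price and deduplication ratio below are hypothetical, not provider quotes.

PRICE_PER_GB_MONTH = 0.023   # assumed $/GB-month for a standard storage tier
DEDUPE_RATIO = 10            # assumed 10:1 reduction for typical backup data

def monthly_storage_cost(logical_gb: float, dedupe_ratio: float = 1.0) -> float:
    """Return the monthly charge for the physical data actually stored."""
    physical_gb = logical_gb / dedupe_ratio
    return physical_gb * PRICE_PER_GB_MONTH

backup_gb = 50_000  # 50 TB of logical backup data
raw_cost = monthly_storage_cost(backup_gb)
deduped_cost = monthly_storage_cost(backup_gb, DEDUPE_RATIO)

print(f"Without deduplication: ${raw_cost:,.2f}/month")
print(f"With {DEDUPE_RATIO}:1 deduplication: ${deduped_cost:,.2f}/month")
```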

#3 – Tier Your Backup Data

Many public cloud storage providers offer multiple tiers of storage. Their default storage tier does not, however, represent their most cost-effective option. It is designed for data that needs high levels of availability and moderate levels of performance.

Backup data tends to only need these features for the first 24 – 72 hours after it is backed up. After that, companies can often move it to lower cost tiers of cloud storage. Note that these lower cost tiers come with decreasing levels of availability and performance. While the vast majority of backups (over 99%) fall into this category, check whether any application recoveries have required data more than three days old before moving it to lower tiers of storage.
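As one hedged example of what such tiering can look like in practice, the sketch below uses boto3 to apply an S3 lifecycle rule that moves objects under a backups/ prefix to a colder storage class after three days. The bucket name, prefix, and schedule are hypothetical, and other clouds expose equivalent lifecycle controls under different names.

```python
# A minimal sketch of automated tiering using an S3 lifecycle rule via boto3.
# Bucket name, prefix, and transition schedule are hypothetical examples.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-backup-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "age-out-backup-data",
                "Filter": {"Prefix": "backups/"},
                "Status": "Enabled",
                "Transitions": [
                    # Keep the first 3 days on the default tier for fast restores,
                    # then shift older backups to a cheaper, slower tier.
                    {"Days": 3, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)
```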

#4 – Actively Manage Your Cloud Backup Environment

Applications and data residing in the cloud differ from your production environment in one important way. Every GB of data consumed and every hour that an application runs incur costs. This differs from on-premises environments where all existing hardware represents a sunk cost. As such, there is less incentive to actively manage existing hardware resources since any resources recouped only represent a “soft” savings.

This does not apply in the cloud. Proactively managing and conserving cloud resources translate into real savings. To realize these savings, companies need to look to products such as Quest Foglight. It helps them track where their backup data resides in the cloud and identify the application processes they have running. This, in turn, helps them manage and control their cloud costs.
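The sketch below illustrates, in a generic way, the kind of visibility such tools provide: it totals how much backup data sits in each storage class of a single bucket. It is not how Quest Foglight works internally, and the bucket name and prefix are placeholders.

```python
# A generic illustration of backup storage visibility in the cloud:
# summarize how much backup data sits in each storage class of a bucket.
from collections import defaultdict
import boto3

s3 = boto3.client("s3")
bytes_per_class = defaultdict(int)

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="example-backup-bucket", Prefix="backups/"):
    for obj in page.get("Contents", []):
        bytes_per_class[obj.get("StorageClass", "STANDARD")] += obj["Size"]

for storage_class, total_bytes in sorted(bytes_per_class.items()):
    print(f"{storage_class:15s} {total_bytes / 1024**3:10.1f} GB")
```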

Companies rightfully want to adopt the cloud for the many benefits that it offers and, ideally, achieve a quick win in the process. Storing backup data in the cloud and moving DR processes to the cloud provides the quick win in the cloud that many companies initially seek. As they do so, they should also ensure they put the appropriate processes and software in place to manage and control their usage of cloud resources.




Best of Show at Nutanix .NEXT

Every time DCIG attends a conference, we attempt to meet with as many exhibitors as possible to get an overview of their solutions and the key business challenges they solve. We then identify three that best address these challenges. In attending the Nutanix .NEXT event last week in Anaheim, CA, DCIG recognized the following three products as Best of Show.

Best of Show Award Winner #1: Nutanix Mine

Nutanix Mine was one of the announcements made during the opening keynote at the Nutanix .NEXT conference that prompted spontaneous applause from the audience in attendance. That applause came for good reason.

Source: Nutanix

Companies that standardize on Nutanix do not really want to introduce another HCI platform just to host their data protection solution. With Nutanix Mine, companies get all the HCI benefits that Nutanix offers and can use them to host the data protection solution of their choice.

Data protection providers such as HYCU, Commvault, Unitrends, and Veritas all announced their intentions to use Nutanix Mine as an option to host their data protection software. Further, other data protection providers in attendance at Nutanix .NEXT privately shared with DCIG that they plan to adopt Mine as a platform hosting option at some point in the future, one even going so far as to say it views Mine as a platform it must adopt.

Best of Show Award Winner #2: Lenovo TruScale

Lenovo TruScale introduces enterprises to utility data center computing. Lenovo bills TruScale clients a monthly management fee plus a utilization charge. It bases this charge on the power consumed by the Lenovo-managed IT infrastructure.

Source: Lenovo

This power consumption-based approach is especially appealing to enterprises and service providers for which one or more of the following holds true:

  • Their data center workloads tie directly to revenue.
  • They want IT to focus on enabling digital transformation, not infrastructure management.
  • They need to retain possession or secure control of their data.

TruScale does not require companies to install any extra software. TruScale gets its power utilization data from the management processor already embedded in Lenovo servers. It then passes this power consumption data to the Lenovo operations center(s) along with alerts and other sensor data.

Lenovo uses the data to trigger support interventions and to provide near real-time usage data to customers via a portal. The portal graphically presents performance against key metrics, including actual versus budgeted usage. Lenovo’s approach to utility data center computing provides a distinctive and easy means of adopting this technology while simultaneously simplifying billing. (Note: DCIG will be publishing another blog entry very shortly that more thoroughly examines Lenovo TruScale.)
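For a rough sense of how such a consumption-based bill might be computed, consider the sketch below. The management fee and per-kWh rate are invented placeholders, not Lenovo’s actual pricing.

```python
# Purely illustrative sketch of a consumption-based bill of the kind the
# TruScale model describes: a flat management fee plus a charge tied to the
# power the managed infrastructure consumed. All rates here are made up.

MANAGEMENT_FEE = 5_000.00   # hypothetical flat monthly fee, in dollars
RATE_PER_KWH = 0.40         # hypothetical utilization rate per kWh consumed

def monthly_bill(kwh_consumed: float) -> float:
    """Return the total monthly charge for the power actually consumed."""
    return MANAGEMENT_FEE + kwh_consumed * RATE_PER_KWH

print(f"Bill for 20,000 kWh: ${monthly_bill(20_000):,.2f}")
```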

Best of Show Award Winner #3: HYCU 4.0

I have known about HYCU for a while so its tight integration with Nutanix AHV is not the motivation for DCIG awarding HYCU Best of Show. Rather, the testimony that a staff member from Nutanix’s internal IT department shared about Nutanix’s own experience running HYCU to protect its data center caught my attention.

Source: HYCU

Nutanix internally deployed HYCU in three data centers across the United States and in its small data centers in its global offices. HYCU protects over 3,500 VMs, including both Linux and Windows VMs, with no agents installed. It provides both file and VM level restores and uses Active Directory for its role-based access control (RBAC).

Nutanix evaluated data protection products from other data protection providers and chose HYCU over all of them. That is a strong testimonial and endorsement of HYCU when almost any other data protection provider would give its eye teeth to be Nutanix’s internal go-to provider of backup software.




HYCU Continues Its March Towards Becoming the Default Nutanix Backup Solution

Any time a new operating system platform comes to market, one backup solution tends to lead in providing a robust set of data protection features that companies can quickly, easily, and economically deploy. It happened with Unix. It happened with Windows and VMware. Now it is happening again with the Nutanix Acropolis operating system (AOS) as HYCU continues to make significant product enhancements in its march to become the default backup solution for Nutanix-centric environments.

I greatly respect any emerging technology provider that can succeed at any level in the hyper-competitive enterprise space. To compete and win in the enterprise market, it must execute simultaneously on multiple levels. Minimally, it must have solid technology, a compelling message, and competent engineering, marketing, management, sales, and support teams to back the product up. Nutanix delivers on all these fronts.

However, companies can sometimes overlook the value of the partner community that must simultaneously develop when a new platform such as Nutanix AOS comes to market. If companies such as HYCU, Intel, Microsoft, SAP and others did not commit resources to form technology alliances with Nutanix, it would impede Nutanix’s ability to succeed in the marketplace.

Of these alliances, Nutanix’s alliance with HYCU merits attention. While Nutanix does have technology alliances with other backup providers, HYCU is the only one of these providers that has largely hitched its wagon to the Nutanix train. As a result, as Nutanix goes, so largely goes HYCU.

Given that Nutanix continues to rock the hyperconverged infrastructure (HCI) market space, this bodes well for HYCU – assuming HYCU matches Nutanix’s pace of innovation step-for-step. Based upon the announcement that HYCU made at this week’s Nutanix .NEXT conference in Anaheim, CA, it is clear that HYCU fully understands the opportunities in front of it and capitalizes on them in its latest 4.0 release. Consider:

  • HYCU supports and integrates with Nutanix Mine beginning in the second half of 2019. Emerging data protection providers such as Cohesity and Rubrik have (rightfully) made a lot of noise about using HCI platforms (and especially theirs) for data protection use cases. In the face of this noise, HYCU, with its HYCU-X announcement in late 2018, grasped that it could use Nutanix to meet this use case. The question was, “Did Nutanix want to position AOS as a platform for data protection software and secondary enterprise workloads?”

The short answer is Yes. The May 8 Nutanix Mine announcement makes it clear that Nutanix has no intention of conceding the HCI platform space to competitors that focus primarily on data protection. Further, Nutanix’s technology alliance with HYCU immediately pays dividends. Companies can select backup software that is fully integrated with Nutanix AOS, obtaining it and managing it in almost the same way as if Nutanix had built its own backup software. Moreover, HYCU is the only data protection solution ready to ship now when Nutanix goes GA with Mine in the second half of 2019.

  • Manage HYCU through the Nutanix Prism management interface. Nutanix Prism is the Nutanix interface used to manage Nutanix AOS environments. With the forthcoming release of HYCU 4.0, companies may natively administer HYCU through the Nutanix Prism interface as part of their overall Nutanix AOS management experience.
  • Support for Nutanix Files. The scale-out characteristics of Nutanix make it very appealing for companies to use it for purposes other than simply hosting their VMs. Nutanix Files is a perfect illustration as companies can use Nutanix to host their unstructured data to get the availability, performance, and flexibility that traditional NAS providers increasingly struggle to deliver in a cost-effective manner.

HYCU 4.0’s support for Nutanix Files includes NFS support and changed file tracking. This feature eliminates the overhead of file system scans, automates protection of newly created VMs with a default policy, and should serve to accelerate the speed of incremental backups. (A conceptual sketch of change tracking follows this list.)

  • Protects physical Windows servers. Like it or not, physical Windows servers remain a fixture in many corporate environments, and companies must protect them. To address this persistent need, HYCU 4.0 introduces protection for physical Windows servers. As companies look to adopt HYCU to protect their expanding Nutanix environments, they can “check the box,” so to speak, and extend their use of HYCU to protect their physical Windows environments as well.
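As referenced above, here is a conceptual sketch of why change tracking accelerates incremental backups. The change-log reader is a hypothetical stand-in for whatever interface the file service exposes; it is not HYCU’s or Nutanix Files’ actual API.

```python
# A conceptual sketch contrasting a full metadata scan with consuming a
# change log for incremental backups. Generic illustration only.
import os

def incremental_by_full_scan(root: str, last_index: dict) -> list:
    """Walk the whole share and compare metadata: cost grows with total files."""
    changed = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            if last_index.get(path) != (st.st_mtime, st.st_size):
                changed.append(path)
    return changed

def incremental_by_change_log(change_log: list) -> list:
    """Consume a change log supplied by the filer: cost grows only with changes."""
    return [entry["path"] for entry in change_log
            if entry["op"] in ("create", "modify")]
```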

The Nutanix Mine announcement represents yet another marketplace into which Nutanix will extend the reach of its AOS platform to provide a consistent, single cloud platform that companies may use. As Nutanix makes its Mine offering available, companies may note that Nutanix mentions multiple data protection providers who plan to come to market with solutions running on Nutanix Mine.

However, “running on Nutanix Mine” and “optimized and fully integrated with Nutanix” are two very different phrases. Of the providers who were mentioned by Nutanix that will run on Nutanix Mine, only HYCU has moved in lockstep with Nutanix AOS almost since HYCU’s inception. In so doing, HYCU has well positioned itself to become the default backup solution for Nutanix environments due to the many ways HYCU has adopted and deeply ingrained Nutanix’s philosophy of simplicity into its product’s design.




Breaking Down Scalable Data Protection Appliances

Scalable data protection appliances have arguably emerged as one of the hottest backup trends in quite some time, possibly since the introduction of deduplication into the backup process. These appliances offer backup software, cloud connectivity, replication, and scalable storage in a single, logical converged or hyperconverged infrastructure platform. This offering simplifies backup while positioning a company to seamlessly implement the appliance as part of its disaster recovery strategy or even create a DR solution for the first time.

As the popularity of these appliances increases, so do the number of product offerings and the differences between them. To help a company break down the differences between these scalable data protection appliances, here are some commonalities between them as well as three key features to evaluate how they differ.

Features in Common

At a high level these products all generally share the following seven features in common:

  1. All-inclusive licensing that includes most if not all the software features available on the appliance.
  2. Backup software that an organization can use to protect applications in its environment.
  3. Connectivity to general-purpose clouds for off-site long-term data retention.
  4. Deduplication technologies to reduce the amount of data stored on the appliance.
  5. Replication to other appliances on-premises, off-site, and even in the cloud to lay the foundation for disaster recovery.
  6. Rapid application recovery which often includes the appliance’s ability to host one or more virtual machines (VMs) on the appliance.
  7. Scalable storage that enables a company to quickly and easily add more storage capacity to the appliance.

 

It is only when a company begins to drill down into each of these features that it starts to observe noticeable differences between the offerings from each provider.

All-inclusive Licensing

For instance, on the surface, all-inclusive licensing sounds straightforward. If a company buys the appliance, it obtains the software with it. That part of the statement holds true. The key question that a company must ask is, “How much capacity does the all-inclusive licensing cover before I have to start paying more?”

That answer will vary by provider. Some providers such as Cohesity and Rubrik charge by the terabyte. As the amount of data on the appliance under management by their software grows, so do the licensing costs. In contrast, StorageCraft licenses the software on its OneXafe appliance by the node. Once a company licenses StorageCraft’s software for a node, the software license covers all data stored on that node (up to 204TB raw).
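The toy calculation below shows how the two licensing models can diverge as data under management grows. All prices are made-up placeholders rather than actual vendor quotes; only the 204TB-per-node figure comes from the text above.

```python
# Illustrative comparison of per-terabyte vs. per-node licensing models.
# All prices below are hypothetical placeholders, not actual vendor pricing.
import math

PRICE_PER_TB = 300.0          # hypothetical annual cost per TB under management
PRICE_PER_NODE = 15_000.0     # hypothetical annual cost per licensed node
NODE_RAW_CAPACITY_TB = 204    # raw capacity covered by one node license

def per_tb_license_cost(tb_under_management: float) -> float:
    return tb_under_management * PRICE_PER_TB

def per_node_license_cost(tb_under_management: float) -> float:
    nodes_needed = math.ceil(tb_under_management / NODE_RAW_CAPACITY_TB)
    return nodes_needed * PRICE_PER_NODE

for tb in (50, 200, 400):
    print(f"{tb:4d} TB  per-TB: ${per_tb_license_cost(tb):>10,.0f}"
          f"   per-node: ${per_node_license_cost(tb):>10,.0f}")
```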

Deduplication

Deduplication software is another technology available on these appliances that a company might assume is implemented essentially the same way across all these available offerings. That assumption would be incorrect.

Each of these appliances implements deduplication in slightly different ways. Cohesity gives a company a few ways to implement deduplication. These include deduplicating data as it backs up data using its own backup software or deduplicating data backed up to its appliance by another backup product. In this latter use case, a company may, at its discretion, choose to deduplicate either inline or post-process.

StorageCraft deduplicates data using its backup software on the client and also offers inline deduplication for data backed up to its appliance by other backup software. Rubrik only deduplicates data backed up by its Cloud Data Management software. HYCU uses the deduplication technology natively found in the Nutanix AHV hypervisor.
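For readers unfamiliar with the distinction, here is a minimal fixed-block sketch of inline versus post-process deduplication. It is a generic illustration of the two approaches, not any vendor’s implementation.

```python
# A minimal fixed-block deduplication sketch contrasting inline and
# post-process approaches. Generic illustration, not a vendor implementation.
import hashlib

BLOCK_SIZE = 4096

def inline_dedupe(stream: bytes, store: dict) -> list:
    """Deduplicate as data arrives: only unseen blocks are written to the store."""
    recipe = []
    for i in range(0, len(stream), BLOCK_SIZE):
        block = stream[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)   # write only if this block is new
        recipe.append(digest)
    return recipe  # ordered list of block hashes needed to rebuild the data

def post_process_dedupe(landing_zone: list, store: dict) -> None:
    """Land raw backups first, then deduplicate them into the store later."""
    for raw_backup in landing_zone:
        inline_dedupe(raw_backup, store)
    landing_zone.clear()  # reclaim the staging space once deduplicated
```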

Scalable Storage

A third area of differentiation between these appliances shows up in how they scale storage. While scale-out architectures get a lot of the press, that is only one scalable storage option available to a company. A scale-out architecture, such as that employed by Cohesity, HYCU, and Rubrik, entails adding more nodes to an existing configuration.

Using a scale-up architecture, such as is available on the Asigra TrueNAS appliance from iXsystems, a company can add more disk drives to an existing chassis. Still another provider, StorageCraft, uses a combination of both architectures in its OneXafe appliance. One can add more drives to an existing node or add more nodes to an existing OneXafe deployment.

Scalable data protection appliances are changing the backup and recovery landscape by delivering both the simplicity of management and the breadth of features that companies have long sought. However, as a cursory examination into three of their features illustrates, the differences between the features on these appliances can be significant. This makes it imperative that a company first break down the features on any of these scalable data protection appliances that it is considering for purchase to ensure it obtains the appliance with the most appropriate feature set for its requirements.




Make the Right Choice between Scale-out and Scale-up Backup Appliance Architectures

Companies are always on the lookout for simpler, more cost-effective methods to manage their infrastructure. This explains, in part, the emergence of scale-out architectures over the last few years as a preferred means for implementing backup appliances. As scale-out architectures gain momentum, it behooves companies to take a closer look at the benefits and drawbacks of both scale-out and scale-up architectures so they can make the best choice for their environment.

Backup appliances primarily ship in two architectures: scale-out and scale-up. A scale-out architecture is comprised of nodes that are logically grouped together using software that the vendor provides. Each node ships with preconfigured amounts of memory, compute, network ports, and storage capacity. The maximum raw capacities of these backup appliances range from a few dozen terabytes to nearly twelve petabytes.

In contrast, a scale-up architecture places a controller with compute, memory and network ports in front of storage shelves. A storage shelf may be internal or external to the appliance. Each storage shelf holds a fixed number of disk drives.

Backup appliances based on a scale-up architecture usually require lower amounts of storage capacity for an initial deployment. If an organization needs more capacity, it adds more disk drives to these storage shelves, up to some predetermined, fixed, hard upper limit. Backup appliances that use this scale-up architecture range from a few terabytes of maximum raw capacity to multiple petabytes.

Scale-out Benefits and Limitations

A scale-out architecture, sometimes referred to as a hyper-converged infrastructure (HCI), enables a company to purchase more nodes as it needs them. Each time it acquires another node, it provides more memory, compute, network interfaces, and storage capacity to the existing solutions. This approach addresses enterprise needs to complete increased backup workloads in the same window of time since they have more hardware resources available to them.

This approach also addresses concerns about product upgrades. By placing all nodes in a single configuration, as existing nodes age or run out of capacity, new nodes with higher levels of performance and more capacity can be introduced into the scale-out architecture.

Additionally, an organization may account for and depreciate each node individually. While the solution’s software can logically group physical nodes together, there is no requirement to treat all the physical nodes as a single entity. By treating each node as its own physical entity, an organization can depreciate it over a three- to five-year period (or whatever period its accounting rules allow for). This approach mitigates the need to depreciate newly added appliances in a shorter time frame, as is sometimes required when adding capacity to scale-up appliances.

The flexibility of scale-out solutions can potentially create some management overhead. Using a scale-out architecture, an enterprise should verify that as the number of nodes in the scale-out configuration increases, the solution has a means to automatically load balance the workloads and store backup data across all its available nodes. If not, an enterprise may find it spends an increasing amount of time balancing backup jobs across its available nodes.

An enterprise should also verify that all the nodes work together as one collective entity. For instance, an enterprise should verify that the scale-out solution offers “global deduplication”. This feature deduplicates data across all the nodes in the system, regardless of on which node the data resides. If it does not offer this feature, the solution will still deduplicate the data but only on each individual node.
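A toy example makes the difference concrete: with a global index shared by all nodes, a block already stored anywhere in the cluster is never stored again, while per-node indexes can store the same block once per node. This is a conceptual sketch, not any vendor’s design.

```python
# Toy illustration of "global" vs. per-node deduplication across a cluster.
import hashlib

def store_block(block: bytes, index: set, stored: list) -> None:
    """Store the block only if its hash has not been seen in this index."""
    digest = hashlib.sha256(block).hexdigest()
    if digest not in index:
        index.add(digest)
        stored.append(block)

blocks = [b"A" * 4096, b"B" * 4096, b"A" * 4096, b"A" * 4096]

# Global deduplication: one index shared by all nodes.
global_index, global_store = set(), []
for block in blocks:
    store_block(block, global_index, global_store)

# Per-node deduplication: each node keeps its own index.
node_indexes = [set(), set()]
node_stores = [[], []]
for i, block in enumerate(blocks):
    node = i % 2  # naive round-robin placement across two nodes
    store_block(block, node_indexes[node], node_stores[node])

print("blocks stored with global dedup:", len(global_store))           # 2
print("blocks stored with per-node dedup:",
      sum(len(s) for s in node_stores))                                # 3
```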

Finally, an enterprise should keep its eye on the possibility of “node sprawl” when using these solutions. These solutions make it easy to grow but an enterprise needs to plan for the optimal way to add each node as individual nodes can vary widely in their respective capacity and performance characteristics.

Scale-up Benefits and Limitations

Backup appliances that use a scale-up architecture have their own sets of benefits and limitations. Three characteristics currently work in their favor:

  1. Mature
  2. Well-understood
  3. Widely adopted and used

One other broader backup industry trend also currently works in favor of scale-up architectures. More enterprises use snapshots as their primary backup technique. Using these snapshots as the source for backups frees enterprises to run backups at almost any time of the day. This helps to mitigate the night and weekend performance bottleneck that can occur when all backups must run at the same time using one of these appliances as the backup target.

A company may encounter the following challenges when working with scale-up appliances:

First, it must size and configure the appliance correctly. This requires an enterprise to have a good understanding of its current and anticipated backup workloads, its total amount of data to backup, and its data retention requirements. Should it overestimate its requirements, it may end up with an appliance oversized for its environment. Should it underestimate its requirements, backup jobs may not complete on time or it may run out of capacity, requiring it to buy another appliance sooner than it anticipated.

Second, all storage capacity sits behind a single controller. This architecture necessitates that the controller be sufficiently sized to meet all current and future backup workloads. Even though the appliance may support the addition of more disk drives, all backup jobs will still need to run through the same controller. Depending on the amount of data and how quickly backup jobs need to complete, this could bottleneck performance and slow backup and recovery jobs.
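To make the first challenge, sizing, more concrete, here is a back-of-the-envelope sketch. Every input (change rate, retention, growth, and deduplication ratio) is an assumption the company must supply from its own environment.

```python
# A back-of-the-envelope sizing sketch for a scale-up backup appliance.
# All inputs are assumptions the company must supply from its own environment.

def required_capacity_tb(full_backup_tb: float,
                         daily_change_rate: float,
                         retention_days: int,
                         annual_growth: float,
                         years: int,
                         dedupe_ratio: float) -> float:
    """Estimate usable capacity needed at the end of the planning horizon."""
    future_full = full_backup_tb * (1 + annual_growth) ** years
    logical = future_full + future_full * daily_change_rate * retention_days
    return logical / dedupe_ratio

needed = required_capacity_tb(
    full_backup_tb=100, daily_change_rate=0.03, retention_days=30,
    annual_growth=0.25, years=3, dedupe_ratio=8)
print(f"Estimated usable capacity needed: {needed:.1f} TB")
```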

Make the Right Backup Appliance Choice

The right choice between these two architectures may come down to how well a company understands its own environment. If a company expects to experience periods of rapid or unexpected data growth, a scale-out appliance will often be the better approach. In these scenarios, look to appliances from Cohesity, Commvault, ExaGrid, NEC, and StorageCraft.

If a company expects more predictable or minimal data growth in its environment, scale-up backup solutions such as the Asigra TrueNAS and Unitrends appliances will likely better match its requirements.




Tips to Selecting the Best Cloud Backup Solution

The cloud has gone mainstream with more companies than ever looking to host their production applications with general-purpose cloud providers such as the Google Cloud Platform (GCP). As this occurs, companies must identify backup solutions architected for the cloud that capitalize on the native features of each provider’s cloud offering to best protect their virtual machines (VMs) hosted in the cloud.

Companies that move their applications and data to the cloud must orchestrate the protection of those applications and data once they move them there. GCP and other cloud providers offer highly available environments and replicate data between data centers in the same region. They also provide options in their clouds for companies to configure their applications to automatically fail over, fail back, scale up, and scale back down as well as create snapshots of their data.

To fully leverage these cloud features, companies must identify an overarching tool that orchestrates the management of these availability, backup, and recovery features as well as integrates with their applications to create application-consistent backups. Here are a few tips to help companies select the right cloud backup solution.

Simple to Start and Stop

The cloud gives companies the flexibility and freedom to start and stop services as needed and then only pay for these services as they use them. The backup solution should give companies the same ease to start and stop these services. It should only bill companies for the applications it protects during the time it protects them.

The simplicity of the software’s deployment should also extend to its configuration and ongoing management. Companies can quickly select and deploy the compute, networking, storage, and security services cloud providers offer. The backup software should make it just as easy for companies to select and configure it for the backup of VMs, and to optionally turn it off if needed.

Takes Care of Itself

When companies select any cloud provider’s service, they get the benefits of the service without the maintenance headaches associated with owning it. For example, when companies choose to host data on GCP’s Cloud Storage service, they do not need to worry about administering Google’s underlying IT infrastructure. The tasks of replacing faulty HDDs, maintaining HDD firmware, keeping the Cloud Storage OS patched, and so on fall to Google.

In the same way, when companies select backup software, they want its benefits without the overhead of patching it, updating it, and managing it long term. The backup software should be available and run as any other cloud service. However, in the background, the backup software provider should take care of its software’s ongoing maintenance and updates.

Integrates with the Cloud Provider’s Identity Management Services

Companies use services such as LDAP or Microsoft AD to control access to corporate IT resources. Cloud providers also have their own identity management services that companies can use to control their employees’ access to cloud resources.

The backup software will ideally integrate with the cloud provider’s native identity management services to simplify its management and ensure that those who administer the backup solution have permission to access VMs and data in the cloud.

Integrates with the Cloud Provider’s Management Console

Companies want to make their IT environments easier to manage. For many, that begins with a single pane of glass to manage their infrastructure. In cloud environments, companies must adhere to this philosophy as cloud providers offer dozens of cloud services that individuals can view and access through that cloud provider’s management console.

To ensure cloud administrators remain aware that the backup software is available as an option, much less use it, it must integrate with the cloud provider’s default management console. In this way, these individuals can remember to use it and easily incorporate its management into their overall job responsibilities.

Controls Cloud Costs

It should come as no great surprise that cloud providers make their money when companies use their services. The more of their services that companies use, the more the cloud providers charge. It should also not shock anyone that the default services cloud providers offer may be among their most expensive.

The backup software can help companies avoid racking up unneeded costs in the cloud. The backup software will primarily consume storage capacity in the cloud. The software should offer features that help manage these costs. Aside from having policies in place to tier backup data as it ages across these different storage types, it should also provide options to archive, compress, deduplicate, and even delete data. Ideally, it will also spin up cloud compute resources when needed and shut them down once backup jobs complete to further control costs in the cloud.

HYCU Brings the Benefits of Cloud to Backup

Companies choose the cloud for simple reasons: flexibility, scalability, and simplicity. They already experience these benefits when they choose the cloud’s existing compute, networking, storage, and security services. So, they may rightfully wonder, why should the software service they use to orchestrate their backup experience in the cloud be any different?

In short, it should not be any different. As companies adopt and adapt to the cloud’s consumption model, they will expect all services they consume in the cloud to follow its billing and usage model. Companies should not give backup a pass on this growing requirement.

HYCU is the first backup and recovery solution for protecting applications and data on the Google Cloud Platform that follows these basic principles of consuming cloud services. By integrating with GCP’s identity management services, being simple to start and stop, and helping companies control their costs, among other features, HYCU exemplifies how easy backup and recovery can and should be in the cloud. HYCU provides companies with the breadth of backup services that their applications and data hosted in the cloud need while relieving them of the responsibility to manage and maintain it.




Number of Appliances Dedicated to Deduplicating Backup Data Shrinks even as Data Universe Expands

One would think that with the continuing explosion in the amount of data being created every year, the number of appliances that can reduce the amount of data stored by deduplicating it would be increasing. That statement is both true and flawed. On one hand, the number of backup and storage appliances that can deduplicate data has never been higher and continues to increase. On the other hand, the number of vendors that create physical target-based appliances dedicated to the deduplication of backup data continues to shrink.

Data Universe Expands

In November 2018 IDC released a report where it estimated the amount of data that will be created, captured, and replicated will increase five-fold from the current 33 zettabytes (ZBs) to about 175 ZBs in 2025. Whether one agrees with that estimate, there is little doubt that there are more ways than ever in which data gets created. These include:

  • Endpoint devices such as PCs, tablets, and smart phones
  • Edge devices such as sensors that collect data
  • Video and audio recording devices
  • Traditional data centers
  • The creation of data through the backup, replication and copying of this created data
  • The creation of metadata that describes, categorizes, and analyzes this data

All these sources and means of creating data mean there is more data than ever under management. But as this occurs, the number of products originally developed to control this data growth – hardware appliances that specialize in the deduplication of backup data after it is backed up, such as those from Dell EMC, ExaGrid, and HPE – has shrunk in recent years.

Here are the top five reasons for this trend.

1. Deduplication has Moved onto Storage Arrays.

Many storage arrays, both primary and secondary, give companies the option to deduplicate data. While these arrays may not achieve the same deduplication ratios as appliances purpose-built for the deduplication of backup data, their combination of lower costs and high levels of storage capacity offsets the inability of their deduplication software to optimize backup data.

2. Backup software offers deduplication capabilities.

Rather than waiting to deduplicate backup data on a hardware appliance, almost all enterprise backup software products can deduplicate data on either the client or the backup server before storing it. This eliminates the need to use a storage device dedicated to deduplicating data.

3. Virtual appliances that perform deduplication are on the rise.

Some providers, such as Quest Software, have exited the physical deduplication backup target appliance market and re-emerged with virtual appliances that deduplicate data. These give companies new flexibility to use hardware from any provider they want and implement their software-defined data center strategy more aggressively.

4. Newly created data may not deduplicate well or at all.

A lot of the new data that companies create may not deduplicate well or at all. Audio or video files may not change and will only deduplicate if full backups are done – which may be rare. Encrypted data will not deduplicate at all. In these circumstances, deduplication appliances are rarely if ever needed.

5. Multiple backup copies of the data may not be needed.

Much of the data collected from edge and endpoint devices may only need a couple of copies, if that. Audio and video files may also fall into this same category of not needing more than a couple of retained copies. To get the full benefits of a target-based deduplication appliance, one needs to back up the same data multiple times – usually at least six times if not more. This reduced need to back up and retain multiple copies of data diminishes the need for these appliances.

Remaining Deduplication Appliances More Finely Tuned for Enterprise Requirements

The reduction in the number of vendors shipping physical target-based deduplication backup appliances seems almost counter-intuitive in light of the ongoing explosion in data growth that we are witnessing. But when one considers much of the data being created and its corresponding data protection and retention requirements, the decrease in the number of target-based deduplication appliances available is understandable.

The upside is that the vendors who do remain and the physical target-based deduplication appliances that they ship are more finely tuned for the needs of today’s enterprises. They are larger, better suited for recovery, have more cloud capabilities, and account for some of these other broader trends mentioned above. These factors and others will be covered in the forthcoming DCIG Buyer’s Guide on Enterprise Deduplication Backup Appliances.




HYCU-X Piggybacks on Existing HCI Platforms to Put Itself in the Scale-out Backup Conversation

Vendors are finding multiple ways to enter the scale-out hyper-converged infrastructure (HCI) backup conversation. Some acquire other companies such as StorageCraft did in early 2017 with its acquisition of ExaBlox. Others build their own such as Cohesity and Commvault did. Yet among these many iterations of scale-out, HCI-based backup systems, HYCU’s decision to piggyback its new HYCU-X on top of existing HCI offerings, starting with Nutanix’s AHV HCI Platform, represents one of the better and more insightful ways to deliver backup using a scale-out architecture.

To say that HYCU and Nutanix were inextricably linked before the HYCU-X announcement almost goes without saying. HYCU was the first to market in June 2017 with a backup solution specifically targeted and integrated with the Nutanix AHV HCI Platform. Since then, HYCU has been a leader in providing backup solutions targeted at Nutanix AHV environments.

In coming out with HYCU-X, HYCU addresses an overlooked segment in the HCI backup space. Companies looking for a scale-out secondary storage system to use as their backup solution typically had to go with a product that was:

  1. New to the backup market
  2. New to the HCI market; or
  3. New to both the backup and HCI markets.

Of these three, a backup provider that fell into either the 2nd or 3rd category where it was or is in any way new to the HCI market is less than ideal. Unfortunately, this is where most backup products fall as the HCI market itself is still relatively new and maturing.

However, this scenario puts these vendors in a tenuous position when it comes to optimizing their backup product. They must continue to improve and upgrade their backup solution even as they try to build and maintain an emerging and evolving HCI platform that supports it. This is not an ideal situation for most backup providers as it can sap their available resources.

By initially delivering HYCU-X built on Nutanix’s AHV Platform, HYCU avoids having to create and maintain separate teams to build separate backup and HCI solutions. Rather, HYCU can rely upon Nutanix’s pre-existing and proven AHV HCI Platform and focus on building HYCU-X to optimize the Nutanix AHV Platform for use as a scale-out HCI backup platform. In so doing, both HYCU and Nutanix can strive to continue to deliver features and functions that can be invoked in as little as one click.

Now could companies use Nutanix or other HCI platforms as a scale-out storage target without HYCU-X? Perhaps. But with HYCU-X, companies get the backup engine they need to manage the snapshot and replication features natively found on the HCI platform.

By HYCU starting with Nutanix, companies can leverage the Nutanix AHV HCI Platform as a backup target. They can then use HYCU-X to manage the data once it lands there. Further, companies can then potentially use HYCU-X to back up other applications in their environment.

While some may argue that using Nutanix instead of purpose-built scale-out secondary HCI solutions from other backup providers will cost more, the feedback that HYCU has received from its current and prospective customer base suggests the opposite is true. Companies find that by the time they deploy these other providers’ backup and HCI solutions, their costs could exceed the costs of a Nutanix solution running HYCU-X.

The scale-out backup HCI space continues to gain momentum for good reason. Companies want the ease of management, flexibility, and scalability that these solutions provide along with the promise that they give for them to make disaster recoveries much simpler to adopt and easier to manage over time.

By HYCU piggybacking initially on the Nutanix AHV HCI Platform to deliver a scale-out backup solution, companies get the reliability and stability of one of the largest, established HCI providers and access to a backup solution that runs natively on the Nutanix AHV HCI Platform. That will be a hard combination to beat.




Rethinking Your Data Deduplication Strategy in the Software-defined Data Center

Data Centers Going Software-defined

There is little dispute that tomorrow’s data center will become software-defined, for reasons no one entirely anticipated even as recently as a few years ago. While companies have long understood the benefits of virtualizing the infrastructure of their data centers, the complexities and costs of integrating and managing data center hardware far exceeded whatever benefits virtualization delivered. Now, thanks to technologies such as the Internet of Things (IoT), machine intelligence, and analytics, among others, companies may pursue software-defined strategies more aggressively.

The introduction of technologies that can monitor, report on, analyze, and increasingly manage and optimize data center hardware frees organizations from performing housekeeping tasks such as:

  • Verifying hardware firmware compatibility with applications and operating systems
  • Troubleshooting hot spots in the infrastructure
  • Identifying and repairing failing hardware components

Automating these tasks does more than change how organizations manage their data center infrastructures. It reshapes how they can think about their entire IT strategy. Rather than adapting their business to match the limitations of the hardware they choose, they can now pursue business objectives where they expect their IT hardware infrastructure to support these business initiatives.

This change in perspective has already led to the availability of software-defined compute, networking, and storage solutions. Further, software-defined applications such as databases, firewalls, and other applications that organizations commonly deploy have also emerged. These virtual appliances enable companies to quickly deploy entire application stacks. While it is premature to say that organizations can immediately virtualize their entire data center infrastructure, the foundation exists for them to do so.

Software-defined Storage Deduplication Targets

As they do, data protection software, like any other application, needs to be part of this software-defined conversation. In this regard, backup software finds itself well-positioned to capitalize on this trend. It can be installed on either physical or virtual machines (VMs) and already ships from many providers as a virtual appliance. But storage software that functions primarily as a deduplication storage target already finds itself being boxed out of the broader software-defined conversation.

Software-defined storage (SDS) deduplication targets exist, and their storage capabilities have increased significantly. By the end of 2018, a few of these software-defined virtual appliances scaled to support about 100TB or more of capacity. But organizations must exercise caution when looking to position these available solutions as a cornerstone in a broader software-defined deduplication storage target strategy.

This caution, in many cases, stems less from the technology itself and more from the vendors who provide these SDS deduplication target solutions. In every case, save one, these solutions originate with providers who focus on selling hardware solutions.

Foundation for Software-defined Data Centers Being Laid Today

Companies are putting plans in place right now to build the data center of tomorrow. That data center will be a largely software-defined data center with solutions that span both on-premises and cloud environments. To achieve that end, companies need to select solutions that have a software-defined focus which meet their current needs while positioning them for tomorrow’s requirements.

Most layers in the data center stack, to include compute, networking, storage, and even applications, are already well down the road of transforming from hardware-centric to software-centric offerings. Yet in the face of this momentous shift in corporate data center environments, SDS deduplication target solutions have been slow to adapt.

It is this gap that SDS deduplication products such as Quest QoreStor look to fill. Coming from a company with “software” in its name, Quest comes without the hardware baggage that other SDS providers must balance. More importantly, Quest QoreStor offers a feature-rich set of services that range from deduplication to replication to support for all major cloud, hardware, and backup software platforms, capabilities that come from 10 years of experience in delivering deduplication software.

Free to focus solely on delivering a SDDC solution, Quest QoreStor represents the type of SDS deduplication target that truly meets the needs of today’s enterprises while positioning them to realize the promise of tomorrow’s software-defined data center.

To read more of DCIG’s thoughts about using SDS deduplication targets in the software-defined data center of tomorrow, follow this link.




Key Differentiators between the HPE StoreOnce 5650 and Dell EMC Data Domain 9300 Systems

Companies have introduced a plethora of technologies into their core enterprise infrastructures in recent years that include all-flash arrays, cloud, hyper-converged infrastructures, object-based storage, and snapshots, just to name a few. But as they do, a few constants remain. One is the need to back up and recover all the data they create.

Deduplication appliances remain one of the primary means for companies to store this data for short-term recovery, disaster recoveries, and long-term data retention. To fulfill these various roles, companies often select either the HPE StoreOnce 5650 or the Dell EMC Data Domain 9300. (To obtain a complimentary DCIG report that compares these two products, follow this link.)

Their respective deduplication appliance lines share many features in common. They both perform inline deduplication. They both offer client software to do source-side deduplication that reduces data sent over the network and improves backup throughput rates. They both provide companies with the option to backup data over NAS or SAN interfaces.

Despite these similarities, key areas of differentiation between these two product lines remain which include the following:

  1. Cloud support. Every company either has or anticipates using a hybrid cloud configuration as part of its production operations. These two product lines differ in their levels of cloud support.
  2. Deduplication technology. Data Domain was arguably the first to popularize widespread use of deduplication for backup. Since then, others such as the HPE StoreOnce 5650 have come on the scene that compete head-to-head with Data Domain appliances.
  3. Breadth of application integration. Software plug-ins that work with applications and understand their data formats prior to deduplicating the data provide tremendous benefits as they improve data reduction rates and decrease the amount of data sent over the network during backups. The software that accompanies the appliances from these two providers has varying degrees of integration with leading enterprise applications.
  4. Licensing. The usefulness of any product hinges on the features it offers, their viability, and which ones are available to use. Clear distinctions between the HPE StoreOnce and Dell EMC Data Domain solutions exist in this area.
  5. Replication. Copying data off-site for disaster recovery and long-term data retention is paramount in comprehensive enterprise disaster recovery strategies. Products from each of these providers offer this but they differ in the number of features they offer.
  6. Virtual appliance. As more companies adopt software-defined data center strategies, virtual appliances have increased appeal.

In the latest DCIG Pocket Analyst Report, DCIG compares the HPE StoreOnce 5650 and Dell EMC Data Domain 9300 product lines and examines how well these two products fare in their support of these six areas, looking at nearly 100 features to draw its conclusions. This report is currently available at no charge for a limited time on DCIG’s partner website, TechTrove. To receive complimentary access to this report, complete the registration form that you can find at this link.




20 Years in the Making, the Future of Data Management Has Arrived

Mention data management to almost any seasoned IT professional and they will almost immediately greet the term with skepticism. While organizations have found they can manage their data within certain limits, when they remove those boundaries and attempt to do so at scale, those initiatives have historically fallen far short if not outright failed. It is time for that perception to change. 20 years in the making, Commvault Activate puts organizations in a position to finally manage their data at scale.

Those who work in IT are loath to say any feat in technology is impossible. If one looks at the capabilities of any handheld device, one can understand why they have this belief. People can pinpoint exactly where they are almost anywhere in the world to within a few feet. They can take videos, pictures, check the status of their infrastructure, text, … you name it, handheld devices can do it.

By way of example, as I write this at Commvault GO, I have just watched YY Lee, SVP and Chief Strategy Officer of Anaplan, onstage. She explained how systems using artificial intelligence (AI) were able, within a very short time, sometimes days, to become experts at playing games such as Texas Hold’em and beat the best players in the world at them.

Despite advances such as these in technology, data management continues to bedevil large and small organizations alike. Sure, organizations may have some level of data management in place for certain applications (think email, file servers, or databases,) but when it comes to identifying and leveraging a tool to deploy data management across an enterprise at scale, that tool has, to date, eluded organizations. This often includes the technology firms that are responsible for producing so much of the hardware that stores this data and software that produces it.

The end for this vexing enterprise challenge finally came into view with Commvault’s announcement of Activate. What makes Activate different from other products that promise to provide data management at scale is that Commvault began development on this product 20 years ago in 1998.

During that time, Commvault became proficient in:

  • Archiving
  • Backup
  • Replication
  • Snapshots
  • Indexing data
  • Supporting multiple different operating systems and file systems
  • Gathering and managing metadata

Perhaps most importantly, it established relationships and gained a foothold in enterprise organizations around the globe. This alone is what differentiates it from almost every other provider of data management software. Commvault has 20+ years of visibility into the behavior and requirements of protecting, moving, and migrating data in enterprise organizations. This insight becomes invaluable when viewed in the context of enterprise data management which has been Commvault’s end game since its inception.

Activate builds on Commvault’s 20 years of product development with Activate’s main differentiator being its ability to stand alone apart from other Commvault software. In other words, companies do not first have to deploy Commvault’s Complete Backup and Recovery or any of its other software to utilize Activate.

They can deploy Activate regardless of whatever other backup, replication, snapshot, or similar software products they may have. But because Activate draws from the same code base as the rest of Commvault’s software, companies can deploy it with a great deal of confidence because of the stability of Commvault’s existing code base.

Once deployed, Activate scans and indexes the data across the company’s environment, which can include its archives, backups, file servers, and/or data stored in the cloud. Once indexed, companies can assess the data in their environment in anticipation of taking next steps such as preparing for eDiscovery, remediating data privacy risks, and indexing and analyzing data based upon their own criteria.

Today more so than ever companies recognize they need to manage their data across the entirety of their enterprise. Delivering on this requirement requires a tool appropriately equipped and sufficiently mature to meet enterprise requirements. Commvault Activate answers this call as a software product that has been 20 years in the making to provide enterprises with the foundation they need to manage their data going forward.




Purpose-built Backup Cloud Service Providers: A Practical Starting Point for Cloud Backup and DR

The shift is on toward using cloud service providers for an increasing number of production IT functions with backup and DR often at the top of the list of the tasks that companies first want to deploy in the cloud. But as IT staff seeks to “Check the box” that they can comply with corporate directives to have a cloud solution in place for backup and DR, they also need to simultaneously check the “Simplicity,” “Cost-savings,” and “It Works” boxes.

Benefits of Using a Purpose-built Backup Cloud Service Provider

Cloud service providers purpose-built for backup and DR put companies in the best position to check all those boxes for this initial cloud use case. These providers solve their clients’ immediate challenges of easily and cost-effectively moving backup data off-site and then retaining it long term.

Equally important, addressing the off-site backup challenge in this way positions companies to do data recovery with the purpose-built cloud service provider, a general purpose cloud service provider, or on-premises. It also sets the stage for companies to regularly test their disaster recovery capabilities so they can perform them when necessary.

Choosing a Purpose-built Backup Cloud Service Provider

To choose the right cloud service provider for corporate backup and DR requirements companies want to be cost conscious. But they also want to experience success and not put their corporate data or their broader cloud strategy at risk. A purpose-built cloud service provider such as Unitrends and its Forever Cloud solution frees companies to aggressively and confidently move ahead with a cloud deployment for its backup and DR needs.


Join Our Webinar for a Deeper Look at the Benefits of Using a Purpose-built Backup Cloud Service Provider

Join me next Wednesday, October 17, 2018, at 2 pm EST, for a webinar where I take a deeper look at both purpose-built and general purpose cloud service providers. In this webinar, I examine why service providers purpose-built for cloud backup and DR can save companies both money and time while dramatically improving the odds they can succeed in their cloud backup and DR initiatives. You can register by following this link.

 




HPE Expands Its Big Tent for Enterprise Data Protection

When it comes to the mix of data protection challenges that exist within enterprises today, these companies would love to identify a single product that they can deploy to solve all their challenges. I hate to be the bearer of bad news, but that single product solution does not yet exist. That said, enterprises will find a steadily improving ecosystem of products that increasingly work well together to address this challenge with HPE being at the forefront of putting up a big tent that brings these products together and delivers them as a single solution.

Having largely solved their backup problems at scale, enterprises have new freedom to analyze and address their broader enterprise data protection challenges. As they look to bring long term data retention, data archiving, and multiple types of recovery (single applications, site fail overs, disaster recoveries, and others) under one big tent for data protection, they find they often need to deploy multiple products.

This creates a situation where each product addresses specific pain points that enterprises have. However, multiple products equate to multiple management interfaces that each have their own administrative policies with minimal or no integration between them. This creates a thornier problem – enterprises are left to manage and coordinate the hand-off of the protection and recovery of data between these different individual data protection products.

A few years ago HPE started to build a “big tent” to tackle these enterprise data protection and recovery issues. It laid the foundation with its HPE 3PAR StoreServ storage arrays, StoreOnce deduplication storage systems, and Recovery Manager Central (RMC) software to help companies coordinate and centrally manage:

  • Snapshots on 3PAR StoreServ arrays
  • Replication between 3PAR StoreServ arrays
  • The efficient movement of data between 3PAR and StoreOnce systems for backup, long term retention, and fast recoveries

This week HPE expanded its big tent of data protection to give companies more flexibility to protect and recover their data more broadly across their enterprise. It did so in the following ways:

  • HPE RMC 6.0 can directly recover data to HPE Nimble storage arrays. Recoveries from backups can be a multi-step process that may require data to pass through the backup software and the application server before it lands on the target storage array. Beginning December 2018, companies can use RMC to directly recover data to HPE Nimble storage arrays from an HPE StoreOnce system without going through the traditional recovery process, just as they can already do with HPE 3PAR StoreServ storage arrays.
  • HPE StoreOnce can directly send and retrieve deduplicated data from multiple cloud providers. Companies sometimes fail to consider that general purpose cloud service providers such as Amazon Web Services (AWS) or Microsoft Azure make no provisions to optimize the data stored with them, such as deduplicating it. Using HPE StoreOnce’s new direct support for AWS, Azure, and Scality, companies can use StoreOnce to first compress and deduplicate data before storing it in the cloud (a simplified sketch of the technique follows this list).
  • Integration between Commvault and HPE StoreOnce systems. Out of the gate, companies can use Commvault to manage StoreOnce operations such as replicating data between StoreOnce systems as well as moving data directly from StoreOnce systems to the cloud. Moreover, as this relationship between Commvault and HPE matures, companies will also be able to use HPE’s StoreOnce Catalyst, HPE’s client-based deduplication software agent, in conjunction with Commvault to back up data on server clients where data may not reside on HPE 3PAR or Nimble storage. Using the HPE StoreOnce Catalyst software, Commvault will deduplicate data on the source before sending it to an HPE StoreOnce system.
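The value of deduplicating before data ever leaves the site is easy to illustrate. The following is a minimal sketch, not HPE StoreOnce’s actual implementation: it splits a backup stream into fixed-size chunks, hashes each chunk, and stores only chunks the object store has not already seen. The chunk size and the in-memory stand-in for the cloud store are assumptions for illustration; a real deployment would target S3, Azure Blob storage, or Scality.

    import hashlib

    CHUNK_SIZE = 4 * 1024 * 1024  # hypothetical 4 MB fixed-size chunks

    # Stand-in for the cloud object store: maps chunk hash -> chunk bytes.
    # A real appliance would write to S3, Azure Blob, or Scality instead.
    cloud_store = {}

    def backup_deduplicated(path):
        """Store only chunks not already in the store; return a rebuild recipe."""
        recipe = []  # ordered list of chunk hashes needed to rebuild the file
        with open(path, "rb") as f:
            while True:
                chunk = f.read(CHUNK_SIZE)
                if not chunk:
                    break
                digest = hashlib.sha256(chunk).hexdigest()
                if digest not in cloud_store:      # new, unique data
                    cloud_store[digest] = chunk    # "upload" it exactly once
                recipe.append(digest)              # duplicate chunks cost nothing
        return recipe

    def restore(recipe, path):
        """Rebuild the original file from its chunk recipe."""
        with open(path, "wb") as f:
            for digest in recipe:
                f.write(cloud_store[digest])

Because repeated chunks are stored once and merely referenced thereafter, monthly per-GB cloud charges grow with the amount of unique data rather than with total backup volume.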

Source: HPE

Of these three announcements that HPE made this week, the new relationship with Commvault, which accompanies its pre-existing relationships with Micro Focus (formerly HPE Data Protector) and Veritas, best demonstrates HPE’s commitment to helping enterprises build a big tent for their data protection and recovery initiatives. Storing data on HPE 3PAR and Nimble arrays and using RMC to manage backups and recoveries on StoreOnce systems certainly accelerates and simplifies these functions when companies can do so. But working with these other partners illustrates that HPE recognizes companies will not store all their data on its systems, and that HPE will accommodate them so they can create a single, larger data protection and recovery solution for their enterprise.




Two Hot Technologies to Consider for Your 2019 Budgets

Hard to believe, but the first day of autumn is just two days away, and with fall weather always comes cooler temperatures (which I happen to enjoy!). This means people are staying inside a little more and doing those fun, end-of-year activities that everyone enjoys – such as planning their 2019 budgets. As you do so, solutions from BackupAssist and StorMagic are two hot new technologies for companies to consider making room for in the New Year.

BackupAssist 365

BackupAssist 365 backs up files and emails stored in the cloud. While backup of cloud-based data may seem rather ho-hum in today’s artificial intelligence, blockchain, and digital transformation obsessed world, it solves a real-world problem that organizations of nearly every size face: how to cost-effectively and simply protect all those pesky files and emails that people store in cloud applications such as DropBox, Office 365, Google Drive, OneDrive, Gmail, Outlook, and others.

To do so, BackupAssist 365 adopted two innovative yet practical approaches to protect files and emails.

  • First, it interfaces directly with these various cloud providers to back up this data. Using your login permissions (which you provide when configuring the software), BackupAssist 365 accesses data directly in the cloud. This negates the need for your server, PC, or laptop to be turned on when these backups occur, so backups can run at any time.
  • Second, it does cloud-to-local backup. In other words, rather than running up the data transfer and network costs that come with backing up to another cloud, it backs the data up to local storage at your site. While that may seem a little odd in today’s cloud-centric world, companies can get a great deal of storage capacity for nominal amounts of money. Since it only does an initial full backup and then differential backups thereafter, the ongoing data transfer costs are nominal and the amount of storage capacity needed onsite is equally small (a simplified sketch of this approach follows this list).
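To make the full-plus-differential approach concrete, here is a minimal sketch of the general technique; it is not BackupAssist 365’s code, and the state file and folder names are hypothetical. After an initial full copy, each subsequent run copies only files whose modification time is newer than the previous run, which keeps both the ongoing transfer volume and the local capacity requirement small.

    import os
    import shutil
    import time

    STATE_FILE = "last_backup_timestamp.txt"   # hypothetical bookkeeping file

    def backup(source_dir, target_dir):
        """Full backup on the first run, differential copies thereafter."""
        last_run = 0.0
        if os.path.exists(STATE_FILE):
            with open(STATE_FILE) as f:
                last_run = float(f.read())

        for root, _dirs, files in os.walk(source_dir):
            for name in files:
                src = os.path.join(root, name)
                if os.path.getmtime(src) > last_run:   # new or changed since last run
                    rel = os.path.relpath(src, source_dir)
                    dst = os.path.join(target_dir, rel)
                    os.makedirs(os.path.dirname(dst), exist_ok=True)
                    shutil.copy2(src, dst)             # copy data plus metadata

        with open(STATE_FILE, "w") as f:
            f.write(str(time.time()))

    # Example invocation with hypothetical folder names:
    # backup("synced_cloud_files", "local_backup")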

Perhaps the best part about BackupAssist 365 is its cost (or lack thereof). BackupAssist 365 licenses its software on a per-user basis, with each user email account counting as one user license. However, that one email account covers the backup of that user’s data in any cloud service the user uses. Further, the cost is only $1/month per user, with a decreasing cost for greater numbers of users. In fact, the cost is so low on a per-user basis that companies may not even need to budget for this service. They can just start using it and expense their credit cards to keep it below corporate radar screens.

StorMagic SvSAN

The StorMagic SvSAN touches on two other hot technology trends that I purposefully (or not so purposefully) left out above: hyperconverged infrastructure (HCI) and edge computing. However, unlike many of the HCI and edge computing plays in the marketplace, such as Cisco HyperFlex, Dell EMC VxRail, and Nutanix, StorMagic has not forgotten about the cost constraints that branch, remote, and small offices face.

As Cisco, Dell EMC, Nutanix, and others chase the large enterprise data center opportunities, they often leave remote, branch, and small offices with two choices: pay up or find another solution. Many offices of this size are opting to find alternative solutions.

This is where StorMagic primarily plays. For a less well-known player, they play much bigger than they may first appear. Through partnerships with large providers such as Cisco and Lenovo among others, StorMagic comes to market with highly available, two-server systems that scale across dozens, hundreds, or even thousands of remote sites. To get a sense of StorMagic’s scalability, walk into any of the 2,000+ Home Depots in the United States or Mexico and ask to look at the computer system that hosts their compute and storage. If the Home Depot lets you and you can find it, you will find a StorMagic system running somewhere in the store.

The other big challenge that each StorMagic system addresses is security. Because its systems can be deployed almost anywhere in any environment, they are susceptible to theft. In fact, in talking to one of its representatives, he shared a story where someone drove a forklift through the side of a building and stole a computer system at one of its customer sites. Not that it mattered. To counter these types of threats, StorMagic encrypts all the data on its HCI solutions with its own software that is FIPS 140-2 compliant.

Best of all, to get these capabilities, companies do not have to break the bank to acquire one of these systems. The list price for the Standard Edition of the SvSAN software, which includes 2TB of usable storage, high availability, and remote management, is $2,500.

As companies look ahead and plan their 2019 budgets, they need to take care of their operational requirements, but they may also want to dip their toes in the water to get the latest and greatest technologies. These two technologies give companies the opportunity to do both. Using BackupAssist 365, companies can quickly and easily address their pesky cloud file and email backup challenges, while StorMagic gives them the opportunity to affordably and safely explore the HCI and edge computing waters.




CloudShift Puts Flawless DR on Corporate Radar Screens

Hyper-converged Infrastructure (HCI) solutions excel at delivering on many of the attributes commonly associated with private clouds. Consequently, the concepts of hyper-convergence and private clouds have become, in many respects, almost inextricably linked.

But for an HCI solution to not have a clear path forward for public cloud support … well, that’s almost anathema in the increasingly hybrid cloud environments found in today’s enterprises. That’s what makes this week’s CloudShift announcement from Datrium notable – it begins to clarify how Datrium will go beyond backup to the public cloud as part of its DVX solution, and it puts the concept of flawless DR on corporate radar screens.

Source: Datrium

HCI is transforming how organizations manage their on-premises infrastructure. By combining compute, data protection, networking, storage, and server virtualization into a single pre-integrated solution, HCI solutions eliminate many of the headaches associated with traditional IT infrastructures while delivering the “cloud-like” speed and ease of deployment that enterprises want.

However, enterprises increasingly want more than “cloud-like” abilities from their on-premises HCI solution. They also want the flexibility to move the virtual machines (VMs) they host on their HCI solution into public cloud environments if needed. Specifically, if they run disaster recovery (DR) tests, perform an actual DR, or need to move a specific workload that is experiencing high throughput into the public cloud, having the flexibility to move VMs into and out of the cloud as needed is highly desirable.

Datrium answers the call for public cloud integration with its recent CloudShift announcement. However, Datrium did not just come out with a me-too answer for public clouds by announcing it will support the AWS cloud. Rather, it delivered what most enterprises are looking for at this stage in their journey to a hybrid cloud environment: a means to seamlessly incorporate the cloud into their overall DR strategy.

The goals behind its CloudShift announcement are three-fold:

  1. Build on the existing Datrium DVX platform that already manages the primary copy of data as well as its backups. With the forthcoming availability of CloudShift in the first half of 2019, it will complete the primary-to-backup-to-cloud circle that companies want.
  2. Make DR work flawlessly. If two words together often represent an oxymoron, they are “flawless DR.” By bringing primary, backup, and cloud together and managing them as one holistic piece, companies can begin to someday soon (ideally in this lifetime) view flawless DR as the norm instead of the exception.
  3. Orchestrated DR failover and failback. “DR failover and failback” rolls off the tongue – it is simple to say and everyone understands what it means. But executing a successful DR failover and failback in today’s world tends to get very pricey and very complex. By rolling the management of primary, backup, and cloud under one roof and then continually performing compliance checks on the execution environment to ensure it meets the RPO and RTO of the DR plan, Datrium gives companies a higher degree of confidence that DR failovers and failbacks only occur when they are supposed to and that, when they occur, they will succeed (a simplified sketch of such a compliance check follows this list).
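Datrium has not published the internals of its compliance checks, so the sketch below is only a simplified illustration of the concept: given a hypothetical DR plan that records each workload’s RPO and the timestamp of its most recent successfully replicated recovery point, it flags any workload whose newest recovery point is older than its RPO allows. Continuous verification of this kind is what turns a DR runbook into something a company can trust during an actual failover.

    from datetime import datetime, timedelta

    # Hypothetical DR plan: each protected workload, its RPO, and the time of
    # its last successfully replicated recovery point.
    dr_plan = [
        {"vm": "erp-db",   "rpo": timedelta(minutes=15),
         "last_recovery_point": datetime(2018, 8, 29, 9, 50)},
        {"vm": "web-tier", "rpo": timedelta(hours=4),
         "last_recovery_point": datetime(2018, 8, 29, 6, 0)},
    ]

    def check_rpo_compliance(plan, now=None):
        """Return workloads whose newest recovery point is older than their RPO."""
        now = now or datetime.utcnow()
        violations = []
        for entry in plan:
            age = now - entry["last_recovery_point"]
            if age > entry["rpo"]:
                violations.append((entry["vm"], age))
        return violations

    for vm, age in check_rpo_compliance(dr_plan):
        print(f"RPO violation: {vm} last protected {age} ago")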

Despite many technology advancements in recent years, enterprise-wide, turnkey DR capabilities with orchestrated failover and failback between on-premises and the cloud are still largely the domain of high-end enterprises that have the expertise to pull it off and are willing to commit large amounts of money to establish and maintain a (hopefully) functional DR capability. Datrium’s CloudShift announcement puts the industry on notice that reliable, flawless DR that will meet the budget and demands of a larger number of enterprises is on its way.




Orchestrated Backup IN the Cloud Arrives with HYCU for GCP

Companies are either moving or have moved to the cloud, with backup TO the cloud being one of the primary ways they plan to get their data and applications into the cloud. But orchestrating the backup of their applications and data once they reside IN the cloud… well, that requires an entirely different set of tools, with few, if any, backup providers yet offering features in their respective products that deliver on this requirement. That ends today with the introduction of HYCU for GCP (Google Cloud Platform).

Listen to the podcast associated with this blog entry.

Regardless of which public cloud platform you use to host your data and/or applications, whether Amazon Web Services (AWS), Microsoft Azure, GCP, or some other, they all provide companies with multiple native backup utilities to protect the data that resides in their cloud. The primary tools include the likes of snapshots, replication, and versioning, with GCP being no different.

What makes these tools even more appealing is that they are available at a cloud user’s fingertips, they can be turned on with the click of a button, and users only pay for what they use. They give organizations access to levels of data availability, data protection, and even disaster recovery that they previously had no easy means to deliver, and they can do so for any data or application hosted with the cloud provider.

But the problem in this scenario is not application and/or data backup. The catch is: how does an organization do this at scale, in such a way that it can orchestrate and manage the backups of all its applications and data on a cloud platform such as GCP for all its users? The short answer is: it cannot.

This is a problem that HYCU for GCP addresses head-on. HYCU has previously established a beachhead in Nutanix environments thanks to its tight integration with AHV. This integration well positions HYCU to extend those same benefits to any public cloud partner of Nutanix. The fact that Nutanix and Google announced a strategic alliance last year at the Nutanix .NEXT conference to build and operate hybrid clouds certainly helped HYCU prioritize GCP over the other public cloud providers for backup orchestration.

Leveraging HYCU in the GCP, companies immediately gain three benefits:

  1. Subscribe to HYCU directly from the GCP Marketplace. Rather than having to first acquire HYCU separately and then install it in GCP, companies can buy it in the GCP Marketplace. This accelerates and simplifies HYCU’s deployment in GCP while simultaneously giving companies access to a corporate-grade backup solution that orchestrates and protects VMs in GCP.
  2. Take advantage of the native backup features in GCP. GCP has its own native snapshots that can be used for backup and recovery. HYCU capitalizes on them and puts them at the fingertips of admins, who can then manage and orchestrate backups and recoveries for all corporate VMs residing in GCP (a simple illustration of this underlying snapshot primitive follows this list).
  3. Free organizations to confidently expand their deployment of applications and data in GCP. While GCP has the tools to back up and recover data and applications, managing them at scale was going to be, at best, cumbersome and, at worst, impossible. HYCU for GCP frees companies to more aggressively deploy applications and data at scale in GCP knowing that they can centrally manage their protection and recovery.
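For reference, the native primitive HYCU builds on is exposed directly by GCP. The sketch below simply drives the gcloud CLI from Python to snapshot a single persistent disk; the project, zone, and disk names are hypothetical, and this illustrates the underlying capability rather than how HYCU itself invokes it.

    import subprocess
    from datetime import datetime

    def snapshot_disk(project, zone, disk):
        """Create a GCP persistent disk snapshot via the gcloud CLI."""
        snapshot_name = f"{disk}-{datetime.utcnow():%Y%m%d-%H%M%S}"
        subprocess.run(
            [
                "gcloud", "compute", "disks", "snapshot", disk,
                "--project", project,
                "--zone", zone,
                "--snapshot-names", snapshot_name,
            ],
            check=True,
        )
        return snapshot_name

    # Hypothetical values for illustration only.
    # snapshot_disk("my-gcp-project", "us-central1-a", "app-server-disk")

Creating one snapshot is trivial; scheduling, cataloging, and expiring thousands of them across projects and users is the orchestration gap HYCU for GCP is designed to fill.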

Backup TO the cloud is great, and almost every backup provider offers that functionality. But backup IN the cloud, where the backup and recovery of a company’s applications and data in the cloud is centrally managed… now, that is something that stands apart from the competition. Thanks to HYCU for GCP, companies no longer have to deploy data and applications in the Google Cloud Platform in a way that requires each of their users or admins to assume backup and recovery responsibilities for their own applications and data. Instead, companies can deploy knowing they have a tool in place that centrally manages their backups and recoveries.




Four Implications of Public Cloud Adoption and Three Risks to Address

Businesses are finally adopting the public cloud because a large and rapidly growing catalog of services is now available from multiple cloud providers. These two factors have many implications for businesses. This article addresses four of these implications plus several cloud-specific risks.

Implication #1: No enterprise IT department will be able to keep pace with the level of services innovation available from cloud providers

The battle is over. Cloud wins. Deal with it.

Dealing with it does not necessarily mean that every business will move every workload to the cloud. It does mean that it is time for business IT departments to build awareness of the services available from public cloud providers. One way to do this is to tap into the flow of service updates from one or more of the major cloud providers.

For Amazon Web Services, I like What’s New with AWS. Easy filtering by service category is combined with sections for featured announcements, featured video announcements, and one-line listings of the most recent announcements from AWS. The one-line listings include links to service descriptions and to longer form articles on the AWS blog.

For Microsoft Azure, I like Azure Updates. As its subtitle says, “One place. All updates.” The Azure Updates site provides easy filtering by product, update type and platform. I especially like the ability to filter by update type for General Availability and for Preview. The site also includes links to the Azure roadmap, blog and other resources. This site is comprehensive without being overwhelming.

For Google Cloud Platform, its blog may be the best place to start. The view can be filtered by label, including by announcements. This site is less functional than the AWS and Microsoft Azure resources cited above.

For IBM Cloud, the primary announcements resource is What’s new with IBM Cloud. Announcements are presented as one-line listings with links to full articles.

Visit these sites, subscribe to their RSS feeds, or follow them via social media platforms. Alternatively, subscribe to their weekly or monthly newsletters via email. Once a business has workloads running in one of the public clouds, at a minimum an IT staff member should follow that provider’s updates site.
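One low-effort way to follow these update streams is to poll their RSS feeds programmatically. Below is a minimal sketch using the third-party feedparser library; the feed URLs shown are placeholders to be replaced with the RSS links published on the sites above.

    import feedparser  # third-party: pip install feedparser

    # Placeholder feed URLs; substitute the RSS links published on each
    # provider's updates page.
    FEEDS = {
        "AWS": "https://example.com/aws-whats-new.rss",
        "Azure": "https://example.com/azure-updates.rss",
    }

    def latest_announcements(feeds, per_feed=5):
        """Print the newest entries from each provider's update feed."""
        for provider, url in feeds.items():
            parsed = feedparser.parse(url)
            print(f"--- {provider} ---")
            for entry in parsed.entries[:per_feed]:
                print(f"{entry.get('published', 'n/a')}  {entry.get('title', '')}")

    latest_announcements(FEEDS)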

Implication #2: Pressure will mount on Enterprise IT to connect business data to public cloud services

The benefits of bringing public cloud services to bear on the organization’s data will create pressure on enterprise IT departments to connect business data to those services. There are many options for accomplishing this objective, including:

  1. All-in with one public cloud
  2. Hybrid: on-prem plus one public
  3. Hybrid: on-prem plus multiple public
  4. Multi-cloud (e.g. AWS + Azure)

The design of the organization and the priorities of the business should drive the approach taken to connect business data with cloud services.

Implication #3: Standard data protection requirements now extend to data and workloads in the public cloud

No matter what approach is taken when embracing the public cloud, standard data protection requirements extend to data and workloads in the cloud. Address these requirements up front. Explore alternative solutions and select one that meets the organization’s data protection requirements.

Implication #4: Cloud Data Protection and DRaaS are on-ramps to public cloud adoption

For most organizations, the transition to the cloud will be a multi-phased process. Data protection solutions that can send backup data to the cloud are a logical early phase. Disaster recovery as a service (DRaaS) offerings represent another relatively low-risk path to the cloud that may be more robust and/or lower cost than existing disaster recovery setups. These solutions move business data into public cloud repositories. As such, cloud data protection and DRaaS may be considered on-ramps to public cloud adoption.

Once corporate data has been backed up or replicated to the cloud, tools are available to extract and transform the data into formats that make it available for use/analysis by that cloud provider’s services. With proper attention, this can all be accomplished in ways that comply with security and data governance requirements. Nevertheless, there are risks to be addressed.

Risk to Address #1: Loss of change control

The benefit of rapid innovation has a downside. Any specific service may be upgraded or discarded by the provider without much notice. Features used by a business may be enhanced or deprecated. This can force changes in other software that integrates with the service, or in procedures used by staff and the associated documentation for those procedures.

For example, Office 365 and Google G Suite features can change without much notice. This creates a “Where did that menu option go?” experience for end users. Some providers reduce this pain by providing a quick tutorial for new features within the application itself. Others provide online learning centers that make new feature tutorials easy to discover.

Accept this risk as an unavoidable downside to rapid innovation. Where possible, manage the timing of these releases to an organization’s users, giving them advance notice of the changes along with access to tutorials.

Risk to Address #2: Dropped by provider

A risk that may not be obvious to many business leaders is that of being dropped by a cloud service provider. A business with unpopular opinions might have services revoked, sometimes with little notice. Consider how quickly the movement to boycott the NRA resulted in severed business-to-business relationships. Even an organization as large as the US Military faces this risk. As was highlighted in recent news, Google will not renew its military AI project due in large part to pressure from Google employees.

Mitigate this risk through contracts and architecture. This is perhaps one argument in favor of a hybrid on-prem plus cloud approach to the public cloud versus an all-in approach.

Risk to Address #3: Unpredictable costs

It can be difficult to predict the costs of running workloads in the public cloud, and these costs can change rapidly. Address this risk by setting cost thresholds that trigger an alert. Consider subscribing to a service such as Nutanix Beam to gain granular visibility into and optimization of public cloud costs.
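As a simple example of such a guardrail, the sketch below uses the AWS Cost Explorer API via boto3 to compare month-to-date spend against a threshold; the threshold amount is an arbitrary assumption, and Azure and GCP expose comparable billing APIs for the same purpose.

    import boto3
    from datetime import date

    THRESHOLD_USD = 5000.0  # hypothetical monthly budget ceiling

    def month_to_date_spend():
        """Return month-to-date AWS spend using the Cost Explorer API."""
        ce = boto3.client("ce")
        today = date.today()
        result = ce.get_cost_and_usage(
            TimePeriod={
                "Start": today.replace(day=1).isoformat(),
                "End": today.isoformat(),
            },
            Granularity="MONTHLY",
            Metrics=["UnblendedCost"],
        )
        return float(result["ResultsByTime"][0]["Total"]["UnblendedCost"]["Amount"])

    spend = month_to_date_spend()
    if spend > THRESHOLD_USD:
        print(f"ALERT: month-to-date spend ${spend:,.2f} exceeds ${THRESHOLD_USD:,.2f}")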

It’s time to get real about the public cloud

Many businesses are ready to embrace the public cloud. IT departments should make themselves aware of services that may create value for their business. They should also work through the implications of moving corporate data and workloads to the cloud, and make plans for managing the attendant risks.




Hackers Say Goodbye to Ransomware and Hello to Bitcoin Mining

Ransomware gets a lot of press – and for good reason – because when hackers break through your firewalls, encrypt your data, and make you pay up or else lose your data, it rightfully gets people’s attention. But hackers probably have less desire than most to be in the public eye and sensationalized ransomware headlines bring them unwanted attention. That’s why some hackers have said goodbye to the uncertainty of a payout associated with getting a ransom for your data and instead look to access your servers to do some bitcoin mining using your CPUs.

A week or so ago a friend of mine who runs an Amazon Web Services (AWS) consultancy and reseller business shared a story with me about one of his clients who hosts a large SaaS platform in AWS.

His client had mentioned to him in the middle of the week that the applications on one of his test servers were running slow. While my friend was intrigued, he did not give it much thought at the time. This client was not using his managed services offering, which meant that he was not necessarily responsible for troubleshooting its performance issues.

Then the next day his client called him back and said that now all of his servers hosting this application – test, dev, client acceptance, and production – were running slow. This piqued his interest, so he offered resources to help troubleshoot the issue. The client then allowed his staff to log into these servers to investigate.

Upon logging into these servers, they discovered that all of the instances running at 100% CPU also ran a Drupal web application. This did not seem right, especially considering that it was early on a Saturday morning when the applications should have been mostly idle.

After doing a little more digging on each server, they discovered a mysterious multi-threaded process running on each one that was consuming all of its CPU resources. Further, the process had opened a network connection to a server located in Europe. Even more curious, the executable that launched the process had been deleted after the process started. It was as if someone was trying to cover their tracks.
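The symptoms they were chasing (sustained CPU saturation, an outbound connection to an unfamiliar host, and an executable deleted after launch) are all visible from standard process tooling. Below is a minimal sketch using the third-party psutil library to flag such processes; the CPU threshold is an arbitrary assumption.

    import os
    import psutil  # third-party: pip install psutil

    CPU_THRESHOLD = 90.0  # arbitrary: flag processes pegging a core

    def find_suspicious_processes():
        """Flag busy processes whose binary is missing or that talk to remote hosts."""
        suspects = []
        for proc in psutil.process_iter(["pid", "name"]):
            try:
                cpu = proc.cpu_percent(interval=0.5)
                if cpu < CPU_THRESHOLD:
                    continue
                exe_missing = not os.path.exists(proc.exe())  # deleted after launch?
                remote_peers = [
                    c.raddr for c in proc.connections(kind="inet") if c.raddr
                ]
                suspects.append((proc.pid, proc.info["name"], cpu,
                                 exe_missing, remote_peers))
            except (psutil.NoSuchProcess, psutil.AccessDenied):
                continue
        return suspects

    for pid, name, cpu, exe_missing, peers in find_suspicious_processes():
        print(f"PID {pid} ({name}): {cpu:.0f}% CPU, "
              f"binary missing: {exe_missing}, remote peers: {peers}")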

At this point, suspecting the servers had all been hacked, they checked to see if there were any recent security alerts. Sure enough. On March 28, 2018, Drupal issued a security advisory that if you were not running Drupal 7.58 or Drupal 8.5.1, your servers were vulnerable to hackers who could remotely execute code on your server.

However, what got my friend’s attention is that these hackers did not want his client’s data. Rather, they wanted his client’s processing power to do bitcoin mining, which is exactly what these servers had been doing for a few days on behalf of these hackers. To help the client, his staff killed the bitcoin mining process on each of these servers before calling the client to advise them to patch Drupal ASAP.

The story does not end there. In this case, his client did not patch Drupal quickly enough. Sometime after they killed the bitcoin mining processes, another hacker leveraged that same Drupal security flaw and performed the same hack. By the time his client came to work on Monday, there were bitcoin mining processes running on those servers that again consumed all their CPU cycles.

What they found especially interesting was how the executable file that the new hackers had installed worked. In reviewing their code, the first thing it did was to kill any pre-existing bitcoin mining processes started by other hackers. This freed all the CPU resources to handle bitcoin mining processes started by the new hackers. The hackers were literally fighting each other over access to the compromised system’s resources.

Two takeaways from this story:

  1. Everyone is rightfully worried about ransomware but bitcoin mining may not hit corporate radar screens. I doubt that hackers want the FBI, CIA, Interpol, MI6, Mossad, or any other criminal justice agency hunting them down any more than you or I do. While hacking servers and “stealing” CPU cycles is still a crime, it probably is much further down on the priority list of most companies as well as these agencies.

A bitcoin mining hack may go unnoticed for long periods of time and may not be reported by companies or prosecuted by these criminal justice agencies even when reported because it is easy to perceive this type of hack as a victimless crime. Yet every day the hacker’s bitcoin mining processes go unnoticed and remain active, the more bitcoin the hackers earn. Further, one should assume hackers will only become more sophisticated going forward. Expect hackers to figure out how to install bitcoin mining processes that run without consuming all CPU cycles so these processes remain running and unnoticed for longer periods of time.

  2. Hosting your data and processes in the cloud does not protect your data and your processes against these types of attacks. AWS has all the utilities available to monitor and detect these rogue processes. That said, organizations still need someone to implement these tools and then monitor and manage them.

Companies may be relieved to hear that some hackers have stopped targeting their data and are instead targeting their processors to use them for bitcoin mining. However, there are no victimless crimes. Your pocketbook will still take a hit in cases like this, as Amazon will bill you for the resources consumed.

In cases like this, if companies start to see their AWS bills going through the roof, it may not be the result of growth in their businesses. It may be that their servers have been hacked and they are paying to finance some hacker’s bitcoin mining operation. To avoid this scenario, companies should ensure they have the right internal people and processes in place to keep their applications up-to-date, to protect infrastructure from attacks, and to monitor their infrastructure whether hosted on-premises or in the cloud.




Nutanix Backup Software: Make the Best Choice Between HYCU and Rubrik

Organizations of all sizes now look to hyper-converged infrastructure solutions such as the Nutanix Enterprise Cloud Platform to provide them with their next generation of data center IT infrastructure services. As they do, they need backup software optimized for protecting Nutanix environments. HYCU, Inc. and Rubrik are two early leaders in this space. Each possesses distinctive attributes that make one or the other better suited for providing data protection services, depending on the conditions that exist in your environment.

Get the DCIG Pocket Analyst Report comparing these two products by following this link.

Hyper-converged infrastructure solutions such as the Nutanix Enterprise Cloud Platform stand poised to fundamentally change how enterprises manage their IT infrastructure. They simplify and automate long-standing tasks such as ensuring application availability, performing data migrations and hardware refreshes, and integrating with leading public cloud providers. But this looming changeover in IT infrastructure still leaves organizations with the responsibility to protect the data hosted on these solutions. This is where products such as those from HYCU, Inc. and Rubrik come into play.

HYCU (pronounced “hī-Q”) for Nutanix and Rubrik Cloud Data Management are two data protection software products that protect virtual machines (VMs) and offer features optimized for the protection of Nutanix environments. HYCU and Rubrik Cloud Data Management share some similarities, as both support:

  • Application and file level restores for Windows applications and operating systems
  • Concurrent backups
  • Full recovery of VMs from backups
  • Multiple cloud providers for application recovery and/or long-term data retention
  • Protection of VMs on non-Nutanix platforms
  • Snapshots to perform incremental backups

Despite these similarities, differences between these two products remain. To help enterprises select the product that best fits their needs for protecting their Nutanix environment, DCIG’s newest Pocket Analyst Report identifies seven factors that differentiate these two products so enterprises can evaluate them and choose the most appropriate one for their environment. Some of these factors include:

  1. Depth of Nutanix integration
  2. Breadth of application support
  3. Breadth of public cloud support
  4. Vendor stability

This four-page DCIG Pocket Analyst Report contains analyst commentary about each of these factors, identifies which product has strengths in each of these areas, and contains multiple pages of side-by-side feature comparisons to support these conclusions. Follow this link to download this newest DCIG Pocket Analyst Report, which is available at no charge for a limited time.

 




HYCU Branches Out to Tackle ESX Backups in non-Nutanix Shops

A virtualization focused backup software play may be perceived as “too little, too late” with so many players in today’s backup space. However, many former virtualization centric backup software plays (PHD Virtual and vRanger come to mind) have largely disappeared while others got pricier and/or no longer do just VM backups. These changes have once again created a need for a virtualization centric backup software solution. This plays right into the hands of the newly created HYCU as it formally tackles the job of ESX virtual machine (VM) backups in non-Nutanix shops.

Virtualization centric backup software has almost disappeared in the last few years. Some products were acquired and became part of larger entities (PHD Virtual was acquired by Unitrends, while AppAssure and vRanger both ended up with Quest Software), while others diversified into providing both physical and virtual backups. But as these changes have occurred, the need for a virtualization focused backup software solution has not necessarily diminished. If anything, the rise of hyper-converged platforms such as those that Dell EMC, Nutanix, HPE SimpliVity, Pivot3, and others offer has created a new need for a backup software product designed for these environments.

Enter HYCU. HYCU as a brand originally surfaced mid-last year from Comtrade Software. Today it takes on the name of its flagship HYCU backup software product and becomes a standalone company. By adopting the corporate name of HYCU, it completes the break from its parent company, Comtrade Group, as well as from the Comtrade Software name under which it has operated for the past nine months.

During its initial nine-month existence, HYCU focused on tackling VM backups in Nutanix environments. It started out by protecting VMs running on Nutanix Acropolis hypervisor (AHV) environments and then expanded to protect VMs running on ESX in Nutanix environments.

Today HYCU takes a logical and necessary leap to ensure its VM-centric backup software finds a home in a broader number of enterprises. While HYCU may arguably do the best job of any backup software product available when it comes to protecting VMs in Nutanix environments, most organizations do not yet host all their VMs on Nutanix.

To address this larger market, HYCU is broadening its capabilities to tackle the protection of VMs on non-Nutanix platforms. There is some significance to HYCU taking this step. Up to this point, HYCU leveraged the native data protection capabilities found on Nutanix’s platform to negate the possibility of VM stuns. This approach worked whether it protected VMs running on AHV or ESX, as both were hosted on the Nutanix platform and HYCU could call on Nutanix’s native snapshot capabilities.

Source: HYCU

By porting its software to protect VMs running on non-Nutanix platforms, HYCU by necessity must use the native VMware APIs for Data Protection (VADP) to protect these VMs. As VADP does not offer the same level of data protection against VM stuns that the native Nutanix platform offers, users on non-Nutanix platforms remain exposed to the possibility of VM stuns.

That said, organizations do gain three advantages by using HYCU on non-Nutanix platforms:

  1. They obtain a common solution to protect VMs on both their Nutanix and non-Nutanix platforms. HYCU provides them with one interface to manage the protection of all VMs.
  2. Affordable VM backups. HYCU prices its backup software very aggressively with list prices of about $1500/socket.
  3. They can more easily port VMs from non-Nutanix to Nutanix platforms. Once they begin to protect VMs on non-Nutanix platforms, they can restore them to Nutanix platforms. Once ported, they can replace the VM’s underlying data protection methodology with Nutanix’s native data protection capabilities to negate the possibility of VM stuns.

In today’s highly virtualized world a virtualization centric backup software play may seem late to market. However, backup software consolidations and mergers coupled with the impact that hyper-converged infrastructures are having on enterprise data centers have created an opening for an affordable virtualization centric backup software play.

HYCU has rightfully discerned that such an opportunity exists. By now extending the capabilities of its product to protect non-Nutanix environments, it knocks down the barriers and objections for these environments to adopt its software while simultaneously easing their path to eventually transition to Nutanix and address the VM stun challenges that persist in non-Nutanix environments.
