Tips for Selecting the Best Cloud Backup Solution

The cloud has gone mainstream with more companies than ever looking to host their production applications with general-purpose cloud providers such as the Google Cloud Platform (GCP). As this occurs, companies must identify backup solutions architected for the cloud that capitalize on the native features of each provider’s cloud offering to best protect their virtual machines (VMs) hosted in the cloud.

Companies that move their applications and data to the cloud must orchestrate the protection of those applications and data once they arrive there. GCP and other cloud providers offer highly available environments and replicate data between data centers in the same region. They also provide options in their clouds for companies to configure their applications to automatically fail over, fail back, scale up, and scale back down, as well as create snapshots of their data.
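
To make this concrete, the snippet below is a minimal sketch of what orchestrating one of those native features, a persistent disk snapshot, looks like against GCP's API. It assumes the google-cloud-compute Python client library and that the returned long-running operation can be waited on as shown; the project, zone, and disk names are placeholders. A real backup tool would schedule such calls, track the resulting snapshots, and tie them to application-consistent checkpoints.

    from google.cloud import compute_v1

    def snapshot_disk(project: str, zone: str, disk: str, snapshot_name: str) -> None:
        # Ask GCP to create a native snapshot of a persistent disk.
        disks_client = compute_v1.DisksClient()
        snapshot = compute_v1.Snapshot(name=snapshot_name)
        operation = disks_client.create_snapshot(
            project=project, zone=zone, disk=disk, snapshot_resource=snapshot
        )
        operation.result()  # block until the long-running snapshot operation completes

    # Placeholder names; a backup tool would drive this from its own schedules and policies.
    snapshot_disk("my-project", "us-central1-a", "app-server-disk", "app-server-disk-nightly")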

To fully leverage these cloud features, companies must identify an overarching tool that orchestrates the management of these availability, backup, and recovery features and integrates with their applications to create application-consistent backups. Here are a few tips to help companies select the right cloud backup solution.

Simple to Start and Stop

The cloud gives companies the flexibility and freedom to start and stop services as needed and to pay for these services only as they use them. The backup solution should give companies the same ease in starting and stopping its services. It should bill companies only for the applications it protects and only during the time it protects them.

The simplicity of the software's deployment should also extend to its configuration and ongoing management. Companies can quickly select and deploy the compute, networking, storage, and security services cloud providers offer. The backup software should make it just as easy for companies to select and configure it for the backup of VMs, and to turn the software off if it is no longer needed.

Takes Care of Itself

When companies select any cloud provider's service, they get the benefits of the service without the maintenance headaches associated with owning it. For example, when companies choose to host data on GCP's Cloud Storage service, they do not need to worry about administering Google's underlying IT infrastructure. The tasks of replacing faulty HDDs, maintaining HDD firmware, keeping the Cloud Storage OS patched, and so on fall to Google.

In the same way, when companies select backup software, they want its benefits without the overhead of patching it, updating it, and managing it long term. The backup software should be available and run like any other cloud service, while in the background the backup software provider takes care of its ongoing maintenance and updates.

Integrates with the Cloud Provider’s Identity Management Services

Companies use services such as LDAP or Microsoft AD to control access to corporate IT resources. Cloud providers also have their own identity management services that companies can use to control their employees’ access to cloud resources.

The backup software will ideally integrate with the cloud provider’s native identity management services to simplify its management and ensure that those who administer the backup solution have permission to access VMs and data in the cloud.
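
As an illustration of that integration, the sketch below grants a hypothetical backup service account read access to a Cloud Storage bucket through GCP's native IAM rather than through a separate identity silo. It assumes the google-cloud-storage Python client library; the bucket and service account names are placeholders.

    from google.cloud import storage

    client = storage.Client()
    bucket = client.bucket("corp-backup-bucket")  # placeholder bucket holding backup data

    # Fetch the bucket's IAM policy, add a role binding for the backup service
    # account, and write the policy back.
    policy = bucket.get_iam_policy(requested_policy_version=3)
    policy.bindings.append({
        "role": "roles/storage.objectViewer",
        "members": {"serviceAccount:backup-svc@my-project.iam.gserviceaccount.com"},
    })
    bucket.set_iam_policy(policy)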

Integrates with the Cloud Provider’s Management Console

Companies want to make their IT environments easier to manage. For many, that begins with a single pane of glass to manage their infrastructure. This philosophy matters even more in cloud environments, where cloud providers offer dozens of services that individuals view and access through the provider's management console.

To ensure cloud administrators remain aware that the backup software is even available as an option, let alone use it, the backup software must integrate with the cloud provider's default management console. In this way, these individuals can remember to use it and easily incorporate its management into their overall job responsibilities.

Controls Cloud Costs

It should come as no great surprise that cloud providers make their money when companies use their services. The more of their services that companies use, the more the cloud providers charge. It should also not shock anyone that the default services cloud providers offer may be among their most expensive.

The backup software can help companies avoid racking up unneeded costs in the cloud. The backup software will primarily consume storage capacity in the cloud, so it should offer features that help manage these costs. Aside from having policies in place to tier backup data as it ages across these different storage types, it should also provide options to archive, compress, deduplicate, and even delete data. Ideally, it will also spin up cloud compute resources when needed and shut them down once backup jobs complete to further control costs in the cloud.
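
The following is a minimal sketch, assuming backup data lands in a GCP Cloud Storage bucket, of the kind of lifecycle policy that implements this tiering: move backups to colder, cheaper storage classes as they age and delete them once they pass their retention period. It uses the google-cloud-storage Python client; the bucket name and ages are placeholders.

    from google.cloud import storage

    client = storage.Client()
    bucket = client.get_bucket("corp-backup-bucket")  # placeholder backup bucket

    # Tier backups to cheaper storage classes as they age, then expire them.
    bucket.add_lifecycle_set_storage_class_rule("NEARLINE", age=30)
    bucket.add_lifecycle_set_storage_class_rule("COLDLINE", age=90)
    bucket.add_lifecycle_delete_rule(age=365)
    bucket.patch()  # push the updated lifecycle rules to the bucket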

HYCU Brings the Benefits of Cloud to Backup

Companies choose the cloud for simple reasons: flexibility, scalability, and simplicity. They already experience these benefits when they choose the cloud’s existing compute, networking, storage, and security services. So, they may rightfully wonder, why should the software service they use to orchestrate their backup experience in the cloud be any different?

In short, it should not be any different. As companies adopt and adapt to the cloud’s consumption model, they will expect all services they consume in the cloud to follow its billing and usage model. Companies should not give backup a pass on this growing requirement.

HYCU is the first backup and recovery solution for protecting applications and data on the Google Cloud Platform that follows these basic principles of consuming cloud services. By integrating with GCP's identity management services, being simple to start and stop, and helping companies control their costs, among other traits, HYCU exemplifies how easy backup and recovery can and should be in the cloud. HYCU provides companies with the breadth of backup services that their applications and data hosted in the cloud need while relieving them of the responsibility to manage and maintain it.




Number of Appliances Dedicated to Deduplicating Backup Data Shrinks even as Data Universe Expands

One would think that with the continuing explosion in the amount of data being created every year, the number of appliances that can reduce the amount of data stored by deduplicating it would be increasing. That statement is both true and flawed. On one hand, the number of backup and storage appliances that can deduplicate data has never been higher and continues to increase. On the other hand, the number of vendors that create physical target-based appliances dedicated to the deduplication of backup data continues to shrink.

Data Universe Expands

In November 2018, IDC released a report in which it estimated that the amount of data created, captured, and replicated will increase five-fold from the current 33 zettabytes (ZBs) to about 175 ZBs in 2025. Whether or not one agrees with that estimate, there is little doubt that there are more ways than ever in which data gets created. These include:

  • Endpoint devices such as PCs, tablets, and smart phones
  • Edge devices such as sensors that collect data
  • Video and audio recording devices
  • Traditional data centers
  • The creation of data through the backup, replication and copying of this created data
  • The creation of metadata that describes, categorizes, and analyzes this data

All these sources and means of creating data mean there is more data than ever under management. But as this occurs, the number of products originally developed to control this data growth – hardware appliances that specialize in the deduplication of backup data after it is backed up, such as those from Dell EMC, ExaGrid, and HPE – has shrunk in recent years.

Here are the top five reasons for this trend.

1. Deduplication has Moved onto Storage Arrays.

Many storage arrays, both primary and secondary, give companies the option to deduplicate data. While these arrays may not achieve the same deduplication ratios as appliances purpose-built for the deduplication of backup data, their combination of lower costs and high levels of storage capacity offsets the inability of their deduplication software to optimize backup data.

2. Backup software offers deduplication capabilities.

Rather than waiting to deduplicate backup data on a hardware appliance, almost all enterprise backup software products can deduplicate data on either the client or the backup server before storing it. This eliminates the need to use a storage device dedicated to deduplicating data.
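
The toy example below illustrates the basic idea, not any vendor's implementation: the backup client or server splits the data stream into blocks, fingerprints each block, and stores only the blocks it has not seen before.

    import hashlib

    BLOCK_SIZE = 4 * 1024 * 1024  # fixed 4 MiB blocks keep the example simple

    def dedupe_backup(path: str, block_store: dict) -> list:
        """Return the recipe (ordered fingerprints) needed to rebuild the file."""
        recipe = []
        with open(path, "rb") as f:
            while block := f.read(BLOCK_SIZE):
                fingerprint = hashlib.sha256(block).hexdigest()
                if fingerprint not in block_store:  # only previously unseen data is stored
                    block_store[fingerprint] = block
                recipe.append(fingerprint)
        return recipe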

3. Virtual appliances that perform deduplication are on the rise.

Some providers, such as Quest Software, have exited the physical deduplication backup target appliance market and re-emerged with virtual appliances that deduplicate data. These give companies new flexibility to use hardware from any provider they want and implement their software-defined data center strategy more aggressively.

4. Newly created data may not deduplicate well or at all.

A lot of the new data that companies create may not deduplicate well or at all. Audio and video files may not change and will only deduplicate if full backups are done – which may be rare. Encrypted data will not deduplicate at all. In these circumstances, deduplication appliances are rarely if ever needed.

5. Multiple backup copies of the data may not be needed.

Much of the data collected from edge and endpoint devices may only need a couple of copies, if that. Audio and video files may also fall into this same category of not needing more than a couple of retained copies. To get the full benefits of a target-based deduplication appliance, one needs to back up the same data multiple times – usually at least six times if not more. This reduced need to back up and retain multiple copies of data diminishes the need for these appliances.

Remaining Deduplication Appliances More Finely Tuned for Enterprise Requirements

The reduction in the number of vendors shipping physical target-based deduplication backup appliances seems almost counter-intuitive in light of the ongoing explosion in data growth that we are witnessing. But when one considers much of the data being created and its corresponding data protection and retention requirements, the decrease in the number of target-based deduplication appliances available is understandable.

The upside is that the vendors who do remain and the physical target-based deduplication appliances that they ship are more finely tuned for the needs of today’s enterprises. They are larger, better suited for recovery, have more cloud capabilities, and account for some of these other broader trends mentioned above. These factors and others will be covered in the forthcoming DCIG Buyer’s Guide on Enterprise Deduplication Backup Appliances.




The Early Implications of NVMe/TCP on Ethernet Network Designs

The ratification in November 2018 of the NVMe/TCP standard officially opened the doors for NVMe/TCP to begin to find its way into corporate IT environments. Earlier this week I had the opportunity to listen in on a webinar that SNIA hosted which provided an update on NVMe/TCP’s latest developments and its implications for enterprise IT. Here are four key takeaways from that presentation and how these changes will impact corporate data center Ethernet network designs.

First, NVMe/TCP will accelerate the deployment of NVMe in enterprises.

NVMe is already available in networked storage environments using competing protocols such as RDMA, which ships as RoCE (RDMA over Converged Ethernet). The challenge is that almost no one (well, very few anyway) uses RDMA in any meaningful way in their environment, so using RoCE to run NVMe never gained and will likely never gain any momentum.

The availability of NVMe over TCP changes that. Companies already understand TCP, deploy it everywhere, and know how to scale and run it over their existing Ethernet networks. NVMe/TCP will build on this legacy infrastructure and knowledge.

Second, any latency that NVMe/TCP introduces still pales in comparison to existing storage networking protocols.

Running NVMe over TCP does introduce latency versus using RoCE. However, the latency that TCP introduces is nominal and will likely be measured in microseconds in most circumstances. Most applications will not even detect this level of latency due to the substantial jump in performance that natively running NVMe over TCP will provide versus existing storage protocols such as iSCSI and FC.

Third, the introduction of NVMe/TCP will require companies to implement Ethernet network designs that minimize latency.

Ethernet networks may implement buffering in Ethernet switches to handle periods of peak workloads. Companies will need to modify that network design technique when deploying NVMe/TCP as buffering introduces latency into the network and NVMe is highly latency sensitive. Companies will need to more carefully balance how much buffering they introduce on Ethernet switches.

Fourth, get familiar with the term “incast collapse” on Ethernet networks and how to mitigate it.

NVMe can support up to 64,000 queues. Every queue that NVMe opens initiates a TCP session. Here is where challenges may eventually surface. Simultaneously opening multiple queues will result in multiple TCP sessions initiating at the same time. This could, in turn, have all these sessions arrive at a common congestion point in the Ethernet network at the same time. When that happens, all the TCP sessions back off at the same time – an incast collapse – creating latency in the network.
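
The toy sketch below illustrates the mapping, not an actual NVMe/TCP implementation: each queue the host opens gets its own TCP connection, so a host that brings up many queues at once starts many TCP sessions at once, and those sessions can all converge on the same switch port. The target address is a placeholder; 4420 is the port assigned to NVMe/TCP.

    import socket

    TARGET = ("192.0.2.10", 4420)  # placeholder storage target; 4420 is the NVMe/TCP port
    NUM_QUEUES = 32                # NVMe permits up to 64K queues per controller

    connections = []
    for _ in range(NUM_QUEUES):
        s = socket.create_connection(TARGET, timeout=5)
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # latency-sensitive traffic
        connections.append(s)
    # All of these sessions can hit the same congestion point at the same moment,
    # which is the traffic pattern behind an incast collapse.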


Historically, this has been a very specialized and rare occurrence in networking due to the low probability that such an event would ever take place. But the introduction of NVMe/TCP into the network makes such an event much more likely to occur, especially as more companies deploy NVMe/TCP into their environments.

The Ratification of the NVMe/TCP Standard

Ratification of the NVMe/TCP standard potentially makes every enterprise data center a candidate for storage systems that can deliver dramatically better performance to their work loads. Until the performance demands of every workload in a data center are met instantaneously, some workload requests will queue up behind a bottleneck in the data center infrastructure.

Just as introducing flash memory into enterprise storage systems revealed bottlenecks in storage operating system software and storage protocols, NVMe/TCP-based storage systems will reveal bottlenecks in data center networks. Enterprises seeking to accelerate their applications by implementing NVMe/TCP-based storage systems may discover bottlenecks in their networks that need to be addressed in order to see the full benefits that NVMe/TCP-based storage systems offer.

To view this presentation in its entirety, follow this link.




All-inclusive Software Licensing: Best Feature Ever … with Caveats

On the surface, all-inclusive software licensing sounds great. You get all the software features that the product offers at no additional charge. You can use them – or not use them – at your discretion. It simplifies product purchases and ongoing licensing.

But what if you opt not to use all the product’s features or only need a small subset of them? In those circumstances, you need to take a hard look at any product that offers all-inclusive software licensing to determine if it will deliver the value that you expect.

Why We Like All-Inclusive Software Licensing

All-inclusive software licensing has taken off in recent years with more enterprise data storage and data protection products than ever delivering their software licensing in this manner. Further, this trend shows no signs of abating for the following reasons:

  • It makes life easier for the procurement team since they do not have to manage and negotiate software licensing separately.
  • It makes life easier for IT staff who otherwise want to use a product's features only to find out they cannot because they do not have a license for them.
  • It helps the vendors because their customers use their features. The more they use and like the features, the more apt they are to keep using the product long term.
  • It provides insurance for the companies involved that if they do unexpectedly need a feature, they do not have to go back to the proverbial well and ask for more money to license it.
  • It helps IT be more responsive to changes in business requirements. Business needs can change unexpectedly. It happens that IT is assured a certain feature will never be of interest to the end user. Suddenly, this "never gonna need it" becomes a "gotta have it" requirement.

All-inclusive software licensing solves these dilemmas and others.

The Best Feature Ever … Has Some Caveats

The reasons as to why companies may consider all-inclusive software licensing the best feature ever are largely self-evident. But there are some caveats as to why companies should minimally examine all-inclusive software licensing before they select any product that supports it.

  1. Verify you will use the features offered by the platform. It is great that a storage platform offers deduplication, compression, thin provisioning, snapshots, replication, metro clusters, etc., etc. at no extra charge. But if you do not use these features now and have no plans to use them, guess what? You are still going to indirectly pay for them if you buy the product.
  2. Verify the provider measures and knows which of its features are used. When you buy all-inclusive software licensing, you generally expect the vendor to support it and continue to develop it. But how does the vendor know which of its features are being used, when they are being used, and for what purposes? It makes no sense for the provider to staff its support lines with experts in replication or continue developing its replication features if no one uses them. Be sure you select a product whose provider regularly monitors and reports back on which of its features are used and how they are used, and that actively supports and develops them.
  3. Match your requirements to the features available on the product. It still pays to do your homework. Know your requirements and then evaluate products with all-inclusive software licensing based upon them.
  4. Verify the software works well in your environment. I have run across a few providers who led the way in providing all-inclusive software licensing. Yet some companies that selected a product based on this offering found the features were not as robust as they anticipated or were so difficult to use that they had to abandon them. In short, having a license to use software that does not work in your environment does not help anyone.
  5. Try to quantify whether other companies use the specific software features. Ideally, you want to know that others like you use the feature in production. This can help you avoid becoming an unsuspecting beta-tester for that feature.

Be Grateful but Wary

I, for one, am grateful that providers have come around with more of them making all-inclusive software licensing available as a licensing option for their products. But the software features that vendors include with their all-inclusive software licensing vary from product to product. They also differ in their maturity, robustness, and fullness of support.

It behooves everyone to hop on the all-inclusive software licensing bandwagon. But as you do, verify to which train you hitched your wagon and that it will take you to where you want to go.




Time: The Secret Ingredient behind an Effective AI or ML Product

In 2019 the level of interest that companies expressed in using artificial intelligence (AI) and machine learning (ML) exploded. Their interest is justifiable. These technologies gather the almost endless streams of data coming out of the scads of devices that companies deploy everywhere, analyze it, and then turn it into useful information. But time is the secret ingredient that companies must look for when selecting an effective AI or ML product.

Data Collection Must Precede AI and ML

The premise behind the deployment of AI and ML technologies is sound. Every device that a company deploys, in whatever form it takes (video camera, storage array, server, network switch, automatic door opener, whatever) has some type of software on it. This software serves two purposes:

  1. Operates the device
  2. Gathers data about the device’s operations, health, and potentially even the environment in which it operates

Option 1 initially drove the development and deployment of the device's software, while Option 2 sometimes got characterized as a necessary evil to identify and resolve issues with the device before its operations were impacted. But with more devices Internet enabled, the data each device gathers no longer needs to remain stranded on each device. It can be centralized.

Devices can now send their data to a central data repository. This is often hosted and supported by the device manufacturer though companies can do this data collection and aggregation on their own.

This is where AI and ML come into the picture. Once the data is collected, the manufacturers use AI or ML software to analyze it in aggregate. This analysis can reveal broader trends and patterns that would otherwise be undetectable if the data remained on the devices.
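
A toy example of the kind of fleet-wide analysis this enables appears below: comparing one device's readings against the aggregate of all devices, something no single device could do on its own. The data and threshold are illustrative only.

    from statistics import mean, stdev

    # Temperatures reported by a fleet of devices; the last reading is suspect.
    fleet_temps = [41.2, 40.8, 42.0, 41.5, 40.9, 41.7, 55.3]

    avg, sd = mean(fleet_temps), stdev(fleet_temps)
    for device_id, temp in enumerate(fleet_temps):
        z = (temp - avg) / sd
        if abs(z) > 2:  # flag readings more than two standard deviations from the fleet mean
            print(f"device {device_id}: temperature {temp} looks anomalous (z={z:.1f})")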

Only Time Can Deliver an Effective AI or ML Strategy

But here is a key to choosing a product that is truly effective at delivering AI and ML. The value that AI and ML technologies bring relies upon having devices deployed and in production in the field for some time. New vendors, products, and even new deployments, even when they offer AI and ML features, may not provide meaningful insights until a large amount of data has been collected and analyzed from these devices over time. This can take months or perhaps even years to accomplish.

Only after data is collected and analyzed will the full value of AI or ML technologies become evident. Initially, they may help anticipate and prevent some issues. But their effectiveness at anticipating and predicting issues will be limited until they have months' or years' worth of data at their disposal to analyze.

The evidence of this can be seen in companies such as HPE Nimble and Unitrends, among others. Each has improved its ability to better support its clients and resolve issues before companies even know they have them. For example, HPE Nimble and Unitrends each use their respective technologies to identify and resolve many hardware issues before they impact production.

In each example, each provider needed to collect a great deal of data over multiple years and analyze it before they could proactively and confidently take the appropriate actions to predict and resolve specific issues.

This element of time gives the manufacturers who have large numbers of devices already in the field and who offer AI and ML such a substantial head start in the race to be the leaders in AI and ML. Those just deploying these technologies will still need to gather data from multiple data points for some time before they can provide the broad type of analytics that companies need and are coming to expect.




HYCU-X Piggybacks on Existing HCI Platforms to Put Itself in the Scale-out Backup Conversation

Vendors are finding multiple ways to enter the scale-out hyper-converged infrastructure (HCI) backup conversation. Some acquire other companies such as StorageCraft did in early 2017 with its acquisition of ExaBlox. Others build their own such as Cohesity and Commvault did. Yet among these many iterations of scale-out, HCI-based backup systems, HYCU’s decision to piggyback its new HYCU-X on top of existing HCI offerings, starting with Nutanix’s AHV HCI Platform, represents one of the better and more insightful ways to deliver backup using a scale-out architecture.

That HYCU and Nutanix were inextricably linked before the HYCU-X announcement almost goes without saying. HYCU was the first to market in June 2017 with a backup solution specifically targeted at and integrated with the Nutanix AHV HCI Platform. Since then, HYCU has been a leader in providing backup solutions targeted at Nutanix AHV environments.

In coming out with HYCU-X, HYCU addresses an overlooked segment in the HCI backup space. Companies looking for a scale-out secondary storage system to use as their backup solution typically had to go with a product that was:

  1. New to the backup market
  2. New to the HCI market; or
  3. New to both the backup and HCI markets.

Of these three, a backup provider that fell into either the second or third category, where it was or is in any way new to the HCI market, is less than ideal. Unfortunately, this is where most backup products fall, as the HCI market itself is still relatively new and maturing.

However, this scenario puts these vendors in a tenuous position when it comes to optimizing their backup product. They must continue to improve and upgrade their backup solution even as they try to build and maintain an emerging and evolving HCI platform that supports it. This is not an ideal situation for most backup providers as it can sap their available resources.

By initially delivering HYCU-X built on Nutanix's AHV Platform, HYCU avoids having to create and maintain separate teams to build separate backup and HCI solutions. Rather, HYCU can rely upon Nutanix's pre-existing and proven AHV HCI Platform and focus on building HYCU-X to optimize the Nutanix AHV Platform for use in this role as a scale-out HCI backup platform. In so doing, both HYCU and Nutanix can strive to continue to deliver features and functions that can be delivered in as little as one click.

Now could companies use Nutanix or other HCI platforms as a scale-out storage target without HYCU-X? Perhaps. But with HYCU-X, companies get the backup engine they need to manage the snapshot and replication features natively found on the HCI platform.

Because HYCU started with Nutanix, companies can leverage the Nutanix AHV HCI Platform as a backup target. They can then use HYCU-X to manage the data once it lands there. Further, companies can then potentially use HYCU-X to back up other applications in their environment.

While some may argue that using Nutanix instead of purpose-built scale-out secondary HCI solutions from other backup providers will cost more, the feedback that HYCU has received from its current and prospective customer base suggests the opposite is true. Companies find that by the time they deploy these other providers' backup and HCI solutions, their costs can exceed the costs of a Nutanix solution running HYCU-X.

The scale-out backup HCI space continues to gain momentum for good reason. Companies want the ease of management, flexibility, and scalability that these solutions provide, along with their promise of making disaster recovery much simpler to adopt and easier to manage over time.

By HYCU piggybacking initially on the Nutanix AHV HCI Platform to deliver a scale-out backup solution, companies get the reliability and stability of one of the largest, established HCI providers and access to a backup solution that runs natively on the Nutanix AHV HCI Platform. That will be a hard combination to beat.




Best Practices for Getting Ready to Go “All-in” on the Cloud

Ensuring that an application migration to the cloud goes well, or even deciding whether a company should migrate a specific application to the cloud at all, requires a thorough understanding of each application. This understanding should encompass what resources the application currently uses as well as how it behaves over time. Here is a list of best practices that a company can put in place for its on-premises applications before it moves any of them to the cloud.

  1. Identify all applications running on-premises. A company may assume it knows what applications are running in its data center environment. However, it is better to be safe than sorry. The company should take inventory and actively monitor its on-premises environment to establish a baseline. During this time, it should identify any new virtual or physical machines that come online.
  2. Quantify the resources used by these applications and when and how they use them. This step ensures that a company has a firm handle on the resources each application will need in the cloud, how much of these resources each one will need, and what types of resources it will need. For instance, simply knowing one needs to move a virtual machine (VM) to the cloud is insufficient. A company needs to know how much CPU, memory, and storage each VM needs; when the application runs; its run-time behavior; and, its periods of peak performance to choose the most appropriate VM instance type in the cloud to host it.
  3. Identify which applications will move and which will stay. Test and development applications will generally top the list of applications that a company will move to the cloud first. This approach gives a company the opportunity to become familiar with the cloud, its operations, and billing. Then a company should prioritize production applications starting with the ones that have the lowest level of impact to the business. Business and mission critical applications should be some of the last ones that a company moves. Applications that will stay on-premises are often legacy applications or those that cloud providers do not support.
  4. Map each application to the appropriate VM instance in the cloud. To make the best choice requires that a company knows both its application requirements and the offerings available from the cloud provider. This can take some time to quantify, as Amazon Web Services (AWS) offers over 90 different VM instance types on which a company may choose to host an application while Microsoft Azure offers over 150 VM instance types. Further, each of these providers' VMs may be deployed as an on-demand, reserved, or spot instance, each with access to multiple types of storage. A company may even look to move to serverless compute. To select the most appropriate VM instance type for each application requires that a company know at the outset the capacity and performance requirements of each VM as well as its data protection requirements. This information will ensure a company can select the best VM to host it as well as appropriately configure the VM's CPU, data protection, memory, and storage settings. (A minimal illustration of this mapping appears after this list.)
  5. Determine which general-purpose cloud provider to use. Due to the multiple VM instance types each cloud provider offers and the varying costs of each VM instance type, it behooves a company to explore which cloud provider can best deliver the hosting services it needs. This decision may come down to price. Once it maps each of its applications to a cloud provider's VM instance type, a company should be able to get an estimate of what its monthly cost will be to host its applications in each provider's cloud.
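
As referenced in step 4 above, the sketch below shows the basic shape of that mapping exercise using a made-up instance catalog: take the measured peak CPU and memory of a VM and find the smallest instance type that satisfies both. Real catalogs from AWS, Azure, or GCP add many more dimensions, such as price, network, and storage.

    # (instance type, vCPUs, memory in GiB) - made-up values, ordered smallest to largest
    CATALOG = [
        ("small",   2,  8),
        ("medium",  4, 16),
        ("large",   8, 32),
        ("xlarge", 16, 64),
    ]

    def pick_instance(peak_vcpus: int, peak_mem_gib: int) -> str:
        for name, vcpus, mem in CATALOG:
            if vcpus >= peak_vcpus and mem >= peak_mem_gib:
                return name
        raise ValueError("no instance type in the catalog fits this workload")

    print(pick_instance(peak_vcpus=6, peak_mem_gib=24))  # -> "large"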

Companies have good reasons for wanting to go "all-in" on the cloud as part of their overall business and IT strategies. But integral to both these strategies, a company must also have a means to ensure the stability of this new hybrid cloud environment as well as provide assurances that its cloud costs will be managed and controlled over time. By going "all-in" on software such as Quest Software's Foglight, a company can have confidence that its decision to go "all-in" on the cloud will succeed initially and then continue to pay off over time.

A recent white paper by DCIG provides more considerations for going all-in on the cloud to succeed both initially and over time. This paper is available to download by following this link to Quest Software’s website.




Rethinking Your Data Deduplication Strategy in the Software-defined Data Center

Data Centers Going Software-defined

There is little dispute that tomorrow's data center will become software-defined, for reasons no one entirely anticipated even as recently as a few years ago. While companies have long understood the benefits of virtualizing the infrastructure of their data centers, the complexities and costs of integrating and managing data center hardware far exceeded whatever benefits virtualization delivered. Now, thanks to technologies such as the Internet of Things (IoT), machine intelligence, and analytics, among others, companies may pursue software-defined strategies more aggressively.

The introduction of technologies that can monitor, report on, analyze, and increasingly manage and optimize data center hardware frees organizations from performing housekeeping tasks such as:

  • Verifying hardware firmware compatibility with applications and operating systems
  • Troubleshooting hot spots in the infrastructure
  • Identifying and repairing failing hardware components

Automating these tasks does more than change how organizations manage their data center infrastructures. It reshapes how they can think about their entire IT strategy. Rather than adapting their business to match the limitations of the hardware they choose, they can now pursue business objectives where they expect their IT hardware infrastructure to support these business initiatives.

This change in perspective has already led to the availability of software-defined compute, networking, and storage solutions. Further, software-defined applications such as databases, firewalls, and other applications that organizations commonly deploy have also emerged. These virtual appliances enable companies to quickly deploy entire application stacks. While it is premature to say that organizations can immediately virtualize their entire data center infrastructure, the foundation exists for them to do so.

Software-defined Storage Deduplication Targets

As they do, data protection software, like any other application, needs to be part of this software-defined conversation. In this regard, backup software finds itself well-positioned to capitalize on this trend. It can be installed on either physical or virtual machines (VMs) and already ships from many providers as a virtual appliance. But storage software that functions primarily as a deduplication storage target already finds itself being boxed out of the broader software-defined conversation.

Software-defined storage (SDS) deduplication targets exist, and their storage capabilities have increased significantly. By the end of 2018, a few of these software-defined virtual appliances scaled to support about 100TB or more of capacity. But organizations must exercise caution when looking to position these available solutions as a cornerstone in a broader software-defined deduplication storage target strategy.

This caution, in many cases, stems less from the technology itself and more from the vendors who provide these SDS deduplication target solutions. In every case, save one, these solutions originate with providers who focus on selling hardware solutions.

Foundation for Software-defined Data Centers Being Laid Today

Companies are putting plans in place right now to build the data center of tomorrow. That data center will be a largely software-defined data center with solutions that span both on-premises and cloud environments. To achieve that end, companies need to select solutions that have a software-defined focus and that meet their current needs while positioning them for tomorrow's requirements.

Most layers in the data center stack, to include compute, networking, storage, and even applications, are already well down the road of transforming from hardware-centric to software-centric offerings. Yet in the face of this momentous shift in corporate data center environments, SDS deduplication target solutions have been slow to adapt.

It is this gap that SDS deduplication products such as Quest QoreStor look to fill. Coming from a company with "software" in its name, Quest comes without the hardware baggage that other SDS providers must balance. More importantly, Quest QoreStor offers a feature-rich set of services, ranging from deduplication to replication to support for all major cloud, hardware, and backup software platforms, that draws on 10 years of experience in delivering deduplication software.

Free to focus solely on delivering a software-defined solution, Quest QoreStor represents the type of SDS deduplication target that truly meets the needs of today's enterprises while positioning them to realize the promise of tomorrow's software-defined data center.

To read more of DCIG’s thoughts about using SDS deduplication targets in the software-defined data center of tomorrow, follow this link.




Key Differentiators between the HPE StoreOnce 5650 and Dell EMC Data Domain 9300 Systems

Companies have introduced a plethora of technologies into their core enterprise infrastructures in recent years, including all-flash arrays, cloud, hyper-converged infrastructure, object-based storage, and snapshots, just to name a few. But as they do, a few constants remain. One is the need to back up and recover all the data they create.

Deduplication appliances remain one of the primary means for companies to store this data for short-term recovery, disaster recoveries, and long-term data retention. To fulfill these various roles, companies often select either the HPE StoreOnce 5650 or the Dell EMC Data Domain 9300. (To obtain a complimentary DCIG report that compares these two products, follow this link.)

Their respective deduplication appliance lines share many features in common. Both perform inline deduplication. Both offer client software that does source-side deduplication, which reduces data sent over the network and improves backup throughput rates. Both provide companies with the option to back up data over NAS or SAN interfaces.

Despite these similarities, key areas of differentiation between these two product lines remain which include the following:

  1. Cloud support. Every company either has or anticipates using a hybrid cloud configuration as part of its production operations. These two product lines differ in their levels of cloud support.
  2. Deduplication technology. Data Domain was arguably the first to popularize widespread use of deduplication for backup. Since then, others such as the HPE StoreOnce 5650 have come on the scene that compete head-to-head with Data Domain appliances.
  3. Breadth of application integration. Software plug-ins that work with applications and understand their data formats prior to deduplicating the data provide tremendous benefits as they improve data reduction rates and decrease the amount of data sent over the network during backups. The software that accompanies the appliances from these two providers has varying degrees of integration with leading enterprise applications.
  4. Licensing. The usefulness of any product hinges on the features it offers, their viability, and which ones are available to use. Clear distinctions between the HPE StoreOnce and Dell EMC Data Domain solutions exist in this area.
  5. Replication. Copying data off-site for disaster recovery and long-term data retention is paramount in comprehensive enterprise disaster recovery strategies. Products from each of these providers offer this but they differ in the number of features they offer.
  6. Virtual appliance. As more companies adopt software-defined data center strategies, virtual appliances have increased appeal.

In the latest DCIG Pocket Analyst Report, DCIG compares the HPE StoreOnce 5650 and Dell EMC Data Domain 9300 product lines and examines how well each of these two products fares in its support of these six areas, looking at nearly 100 features to draw its conclusions. This report is currently available at no charge for a limited time on DCIG's partner website, TechTrove. To receive complimentary access to this report, complete a registration form that you can find at this link.




NVMe: Four Key Trends Set to Drive Its Adoption in 2019 and Beyond

Storage vendors hype NVMe for good reason. It enables all-flash arrays (AFAs) to fully deliver on flash’s performance characteristics. Already NVMe serves as an interconnect between AFA controllers and their back end solid state drives (SSDs) to help these AFAs unlock more of the performance that flash offers. However, the real performance benefits that NVMe can deliver will be unlocked as a result of four key trends set to converge in the 2019/2020 time period. Combined, these will open the doors for many more companies to experience the full breadth of performance benefits that NVMe provides for a much wider swath of applications running in their environment.

Many individuals have heard about the performance benefits of NVMe. Using it, companies can reduce latency, with response times measured in a few hundred microseconds or less. Further, applications can leverage the many more channels that NVMe has to offer to drive throughput to hundreds of GBs per second and achieve millions of IOPS. These types of performance characteristics have many companies eagerly anticipating NVMe's widespread availability.

To date, however, few companies have experienced the full breadth of performance characteristics that NVMe offers. This stems from:

  • The lack of AFAs on the market that fully support NVMe (about 20%).
  • The relatively small performance improvements that NVMe offers over existing SAS-attached solid-state drives (SSDs); and,
  • The high level of difficulty and cost associated with deploying NVMe in existing data centers.

This is poised to change in the next 12-24 months with four key trends poised to converge that will open up NVMe to a much wider audience.

  1. Large storage vendors getting ready to enter the NVMe market. AFA providers such as Tegile (Western Digital), iXsystems, Huawei, Lenovo, and others ship products that support NVMe. These vendors represent the leading edge of where NVMe innovation has occurred. However, their share of the storage market remains relatively small compared to providers such as Dell EMC, HPE, IBM, and NetApp. As these large storage providers enter the market with AFAs that support NVMe, expect market acceptance and adoption of NVMe to take off.
  2. The availability of native NVMe drivers on all major operating systems. The only two major enterprise operating systems that currently have native NVMe drivers are Linux and VMware. Until Microsoft and, to a lesser degree, Solaris offer native NVMe drivers, many companies will have to hold off on deploying NVMe in their environments. The good news is that all these major OS providers are actively working on NVMe drivers. Further, expect that the availability of these drivers will closely coincide with the availability of NVMe AFAs from the major storage providers and the release of the NVMe-oF TCP standard.
  3. NVMe-oF TCP protocol standard set to be finalized yet in 2018. Connecting the AFA controller to its backend SSDs via NVMe is only one half – and the much easier part – of solving the performance problem. The much larger and more difficult problem is easily connecting hosts to AFAs over existing storage networks, as it is currently difficult to set up and scale NVMe-oF. The establishment of the NVMe-oF TCP standard will remedy this and facilitate the introduction and use of NVMe-oF between hosts and AFAs using TCP/IP over existing Ethernet storage networks.
  4. The general availability of NVMe-oF TCP offload cards. To realize the full performance benefits of NVMe-oF using TCP, companies are advised to use NVMe-oF TCP offload cards. Using standard Ethernet cards with no offload engine, companies will still see high throughput but very high CPU utilization (up to 50 percent). Using the forthcoming NVMe-oF TCP offload cards, performance increases by anywhere from 33 to 150 percent versus native TCP cards while only introducing nominal amounts of latency (single- to double-digit microseconds).

The business need for NVMe technology is real. While today’s all-flash arrays have tremendously accelerated application performance, NVMe stands poised to unleash another round of up to 10x or more performance improvements. But to do that, a mix of technologies, standards, and programming changes to existing operating systems must converge for mass adoption in enterprises to occur. This combination of events seems poised to happen in the next 12-24 months.




20 Years in the Making, the Future of Data Management Has Arrived

Mention data management to almost any seasoned IT professional and they will almost immediately greet the term with skepticism. While organizations have found they can manage their data within certain limits, when they remove those boundaries and attempt to do so at scale, those initiatives have historically fallen far short if not outright failed. It is time for that perception to change. 20 years in the making, Commvault Activate puts organizations in a position to finally manage their data at scale.

Those who work in IT are loath to say any feat in technology is impossible. If one looks at the capabilities of any handheld device, one can understand why they have this belief. People can pinpoint exactly where they are almost anywhere in the world to within a few feet. They can take videos, pictures, check the status of their infrastructure, text, … you name it, handheld devices can do it.

By way of example, as I write this, I was present to watch YY Lee, SVP and Chief Strategy Officer of Anaplan, onstage at Commvault GO. She explained how systems using artificial intelligence (AI) were able, within a very short time, sometimes days, to become experts at playing games such as Texas Hold'em and beat the best players in the world at them.

Despite advances such as these in technology, data management continues to bedevil large and small organizations alike. Sure, organizations may have some level of data management in place for certain applications (think email, file servers, or databases), but when it comes to identifying and leveraging a tool to deploy data management across an enterprise at scale, that tool has, to date, eluded organizations. This often includes the technology firms responsible for producing so much of the hardware that stores this data and the software that produces it.

The end for this vexing enterprise challenge finally came into view with Commvault’s announcement of Activate. What makes Activate different from other products that promise to provide data management at scale is that Commvault began development on this product 20 years ago in 1998.

During that time, Commvault became proficient in:

  • Archiving
  • Backup
  • Replication
  • Snapshots
  • Indexing data
  • Supporting multiple different operating systems and file systems
  • Gathering and managing metadata

Perhaps most importantly, it established relationships and gained a foothold in enterprise organizations around the globe. This alone is what differentiates it from almost every other provider of data management software. Commvault has 20+ years of visibility into the behavior and requirements of protecting, moving, and migrating data in enterprise organizations. This insight becomes invaluable when viewed in the context of enterprise data management which has been Commvault’s end game since its inception.

Activate builds on Commvault’s 20 years of product development with Activate’s main differentiator being its ability to stand alone apart from other Commvault software. In other words, companies do not first have to deploy Commvault’s Complete Backup and Recovery or any of its other software to utilize Activate.

They can deploy Activate regardless of whatever other backup, replication, snapshot, or similar software products they may have. And because Activate draws from the same code base as the rest of Commvault's software, companies can deploy it with a great deal of confidence given the stability of Commvault's existing code base.

Once deployed, Activate scans and indexes the data across the company's environment, which can include its archives, backups, file servers, and/or data stored in the cloud. Once indexed, companies can assess the data in their environment in anticipation of taking next steps such as preparing for eDiscovery, remediating data privacy risks, and indexing and analyzing data based upon their own criteria.
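
The sketch below is not Commvault's implementation, just a minimal illustration of what "scan, index, and assess" means in practice: walk a data source, record metadata per file, and then query that index against your own criteria, such as file age. The source path is a placeholder.

    import os, sqlite3, time

    index = sqlite3.connect("file_index.db")
    index.execute("CREATE TABLE IF NOT EXISTS files (path TEXT, size INTEGER, mtime REAL)")

    for root, _, names in os.walk("/data"):  # "/data" is a placeholder data source
        for name in names:
            p = os.path.join(root, name)
            st = os.stat(p)
            index.execute("INSERT INTO files VALUES (?, ?, ?)", (p, st.st_size, st.st_mtime))
    index.commit()

    # Example assessment: how many files have sat untouched for more than three years?
    cutoff = time.time() - 3 * 365 * 24 * 3600
    stale = index.execute("SELECT COUNT(*) FROM files WHERE mtime < ?", (cutoff,)).fetchone()[0]
    print(f"{stale} files have not been modified in three years")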

Today more so than ever companies recognize they need to manage their data across the entirety of their enterprise. Delivering on this requirement requires a tool appropriately equipped and sufficiently mature to meet enterprise requirements. Commvault Activate answers this call as a software product that has been 20 years in the making to provide enterprises with the foundation they need to manage their data going forward.




Purpose-built Backup Cloud Service Providers: A Practical Starting Point for Cloud Backup and DR

The shift is on toward using cloud service providers for an increasing number of production IT functions with backup and DR often at the top of the list of the tasks that companies first want to deploy in the cloud. But as IT staff seeks to “Check the box” that they can comply with corporate directives to have a cloud solution in place for backup and DR, they also need to simultaneously check the “Simplicity,” “Cost-savings,” and “It Works” boxes.

Benefits of Using a Purpose-built Backup Cloud Service Provider

Cloud service providers purpose-built for backup and DR put companies in the best position to check all those boxes for this initial cloud use case. These providers solve their clients’ immediate challenges of easily and cost-effectively moving backup data off-site and then retaining it long term.

Equally important, addressing the off-site backup challenge in this way positions companies to do data recovery with the purpose-built cloud service provider, a general purpose cloud service provider, or on-premises. It also sets the stage for companies to regularly test their disaster recovery capabilities so they can perform them when necessary.

Choosing a Purpose-built Backup Cloud Service Provider

To choose the right cloud service provider for corporate backup and DR requirements companies want to be cost conscious. But they also want to experience success and not put their corporate data or their broader cloud strategy at risk. A purpose-built cloud service provider such as Unitrends and its Forever Cloud solution frees companies to aggressively and confidently move ahead with a cloud deployment for its backup and DR needs.


Join Our Webinar for a Deeper Look at the Benefits of Using a Purpose-built Backup Cloud Service Provider

Join me next Wednesday, October 17, 2018, at 2 pm EST, for a webinar where I take a deeper look at both purpose-built and general purpose cloud service providers. In this webinar, I examine why service providers purpose-built for cloud backup and DR can save companies both money and time while dramatically improving the odds they can succeed in their cloud backup and DR initiatives. You can register by following this link.

 




HPE Expands Its Big Tent for Enterprise Data Protection

When it comes to the mix of data protection challenges that exist within enterprises today, these companies would love to identify a single product that they can deploy to solve all their challenges. I hate to be the bearer of bad news, but that single product solution does not yet exist. That said, enterprises will find a steadily improving ecosystem of products that increasingly work well together to address this challenge with HPE being at the forefront of putting up a big tent that brings these products together and delivers them as a single solution.

Having largely solved their backup problems at scale, enterprises have new freedom to analyze and address their broader enterprise data protection challenges. As they look to bring long-term data retention, data archiving, and multiple types of recovery (single applications, site failovers, disaster recoveries, and others) under one big tent for data protection, they find they often need to deploy multiple products.

This creates a situation where each product addresses specific pain points that enterprises have. However, multiple products equate to multiple management interfaces that each have their own administrative policies with minimal or no integration between them. This creates a thornier problem – enterprises are left to manage and coordinate the hand-off of the protection and recovery of data between these different individual data protection products.

A few years ago HPE started to build a "big tent" to tackle these enterprise data protection and recovery issues. It laid the foundation with its HPE 3PAR StoreServ storage arrays, StoreOnce deduplication storage systems, and Recovery Manager Central (RMC) software to help companies coordinate and centrally manage:

  • Snapshots on 3PAR StoreServ arrays
  • Replication between 3PAR StoreServ arrays
  • The efficient movement of data between 3PAR and StoreOnce systems for backup, long term retention, and fast recoveries

This week HPE expanded its big tent of data protection to give companies more flexibility to protect and recover their data more broadly across their enterprise. It did so in the following ways:

  • HPE RMC 6.0 can directly recover data to HPE Nimble storage arrays. Recoveries from backups can be a multi-step process that may require data to pass through the backup software and the application server before it lands on the target storage array. Beginning December 2018, companies can use RMC to directly recover data to HPE Nimble storage arrays from an HPE StoreOnce system without going through the traditional recovery process, just as they can already do to HPE 3PAR StoreServ storage arrays.
  • HPE StoreOnce can directly send and retrieve deduplicated data from multiple cloud providers. Companies sometimes fail to consider that general purpose cloud service providers such as Amazon Web Services (AWS) or Microsoft Azure make no provisions to optimize data stored with them, such as deduplicating it. Using HPE StoreOnce's new direct support for AWS, Azure, and Scality, companies can use StoreOnce to first compress and deduplicate data before they store the data in the cloud.
  • Integration between Commvault and HPE StoreOnce systems. Out of the gate, companies can use Commvault to manage StoreOnce operations such as replicating data between StoreOnce systems as well as moving data directly from StoreOnce systems to the cloud. Moreover, as this relationship between Commvault and HPE matures, companies will also be able to use HPE's StoreOnce Catalyst, HPE's client-based deduplication software agent, in conjunction with Commvault to back up data on server clients where data may not reside on HPE 3PAR or Nimble storage. Using the HPE StoreOnce Catalyst software, Commvault will deduplicate data on the source before sending it to an HPE StoreOnce system.


Of the three announcements that HPE made this week, the new relationship with Commvault, which accompanies HPE's pre-existing relationships with Micro Focus (formerly HPE Data Protector) and Veritas, demonstrates HPE's commitment to helping enterprises build a big tent for their data protection and recovery initiatives. Storing data on HPE 3PAR and Nimble arrays and using RMC to manage backups and recoveries on StoreOnce systems certainly accelerates and simplifies these functions when companies can do so. But by working with these other partners, HPE shows that it recognizes companies will not store all their data on its systems and that it will accommodate them so they can create a single, larger data protection and recovery solution for their enterprise.




Two Hot Technologies to Consider for Your 2019 Budgets

Hard to believe but the first day of autumn is just two days away and with fall weather always comes cooler temperatures (which I happen to enjoy!) This means people are staying inside a little more and doing those fun, end of year activities that everyone enjoys – such as planning their 2019 budgets. As you do so, solutions from BackupAssist and StorMagic are two hot new technologies for companies to consider making room for in the New Year.

BackupAssist 365

BackupAssist 365 backs up files and emails stored in the cloud. While backup of cloud-based data may seem rather ho-hum in today’s artificial intelligence, blockchain, and digital-transformation-obsessed world, it solves a real-world problem that organizations of nearly every size face: how to cost-effectively and simply protect all those pesky files and emails that people store in cloud applications such as Dropbox, Office 365, Google Drive, OneDrive, Gmail, Outlook, and others.

To do so, BackupAssist 365 adopted two innovative yet practical approaches to protect files and emails.

  • First, it interfaces directly with these various cloud providers to back up this data. Using your login permissions (which you provide when configuring the software), BackupAssist 365 accesses data directly in the cloud. This negates the need for your server, PC, or laptop to be turned on when backups occur, so backups can run at any time.
  • Second, it does cloud-to-local backup. In other words, rather than running up the additional data transfer and network costs that come with backing up to another cloud, it backs the data up to local storage at your site. While that may seem a little odd in today’s cloud-centric world, companies can get a great deal of storage capacity for a nominal amount of money. Since it only does an initial full backup and then differential backups thereafter, the ongoing data transfer costs are nominal and the amount of onsite storage capacity needed is equally small (see the rough sizing sketch after this list).
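A minimal back-of-the-envelope sketch, using purely hypothetical data sizes and change rates rather than BackupAssist figures, shows why the ongoing transfer and onsite capacity stay modest once the initial full backup is done and only changed data moves each day.

    # Hypothetical sizing for a cloud-to-local backup scheme: one initial full
    # backup, then only the changed data each day. All numbers are assumptions.
    protected_data_gb = 500       # total cloud data being protected (assumed)
    daily_change_rate = 0.02      # 2% of the data changes per day (assumed)
    retained_days = 30            # days of change history kept onsite (assumed)

    daily_transfer_gb = protected_data_gb * daily_change_rate
    monthly_transfer_gb = daily_transfer_gb * retained_days
    onsite_capacity_gb = protected_data_gb + monthly_transfer_gb

    print(f"Initial full backup transfer: {protected_data_gb:.0f} GB (one time)")
    print(f"Ongoing transfer per day:     {daily_transfer_gb:.0f} GB")
    print(f"Ongoing transfer per month:   {monthly_transfer_gb:.0f} GB")
    print(f"Onsite capacity needed:       {onsite_capacity_gb:.0f} GB")

With these assumptions the ongoing transfer is roughly 10 GB per day and the onsite footprint stays under 1 TB, which is why inexpensive local storage is usually sufficient.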

Perhaps the best part about BackupAssist 365 is its cost (or lack thereof). BackupAssist 365 licenses its software on a per-user basis, with each user email account counting as one user license. However, that one email account covers the backup of the user’s data in any cloud service the user uses. Further, the cost is only $1/month per user, with the per-user cost decreasing as the number of users grows. In fact, the cost is so low on a per-user basis that companies may not even need to budget for this service. They can just start using it and expense it to their credit cards to keep it below corporate radar screens.

StorMagic SvSAN

The StorMagic SvSAN touches on two other hot technology trends that I purposefully (or not so purposefully) left out above: hyperconverged infrastructure (HCI) and edge computing. However, unlike many of the HCI and edge computing plays in the marketplace, such as Cisco HyperFlex, Dell EMC VxRail, and Nutanix, StorMagic has not forgotten about the cost constraints that branch, remote, and small offices face.

As Cisco, Dell EMC, Nutanix, and others chase large enterprise data center opportunities, they often leave remote, branch, and small offices with two choices: pay up or find another solution. Many offices of this size are opting to find alternative solutions.

This is where StorMagic primarily plays. For a less well-known player, it punches much bigger than it may first appear. Through partnerships with large providers such as Cisco and Lenovo, among others, StorMagic comes to market with highly available, two-server systems that scale across dozens, hundreds, or even thousands of remote sites. To get a sense of StorMagic’s scalability, walk into any of the 2,000+ Home Depots in the United States or Mexico and ask to look at the computer system that hosts the store’s compute and storage. If the Home Depot lets you and you can find it, you will find a StorMagic system running somewhere in the store.

The other big challenge StorMagic addresses is security. Because its systems can be deployed almost anywhere, in any environment, they are susceptible to theft. In fact, one of its representatives shared a story in which someone drove a forklift through the side of a building and stole a computer system at one of its customer sites. Not that it mattered. To counter these types of threats, StorMagic encrypts all the data on its HCI solutions with its own FIPS 140-2 compliant software.

Best of all, to get these capabilities, companies do not have to break the bank to acquire one of these systems. The list price for the Standard Edition of the SvSAN software, which includes 2TB of usable storage, high availability, and remote management, is $2,500.

As companies look ahead and plan their 2019 budgets, they need to take care of their operational requirements, but they may also want to dip their toes in the water and try the latest and greatest technologies. These two technologies give companies the opportunity to do both. Using BackupAssist 365, companies can quickly and easily address their pesky cloud file and email backup challenges, while StorMagic gives them the opportunity to affordably and safely explore the HCI and edge computing waters.




Analytics, Automation and Hybrid Clouds among the Key Takeaways from VMworld 2018

At early VMworld shows, stories emerged of attendees scurrying from booth to booth on the exhibit floor looking for VM data protection and hardware solutions to address the early challenges that VMware ESXi presented. Fast forward to the 2018 VMworld show and the motivation behind attending training sessions and visiting vendor booths has changed significantly. Now attendees want solutions that bring together their private and public clouds, offer better ways to analyze and automate their virtualized environments, and deliver demonstrable cost savings and/or revenue opportunities after deployment.

The entrance to the VMworld 2018 exhibit hall greeted attendees a little differently this year than in years past. Granted, there were still some of the usual suspects, such as Dell EMC and HPE, that have reserved booths at this show for many years. But right alongside them were relative newcomers (to the VMworld show, anyway) such as Amazon Web Services and OVHcloud.

Then, as one traversed the exhibit hall floor and visited the booths of the vendors immediately behind them, the data protection and hardware themes of the early VMworld shows persisted, though the messaging and many of the vendor names have changed since the early days of this show.

Companies such as Cohesity, Druva, and Rubrik represent the next generation of data protection solutions for vSphere, while companies such as Intel and Trend Micro have a more pronounced presence on the VMworld show floor. Together these exhibitors reflect the changing dynamics of today’s data centers and what organizations now look for vendors to provide for their increasingly virtualized environments. Consider:

  1. Private and public cloud are coming together to become hybrid. The theme of hybrid clouds with applications that can span both public and private clouds began with VMworld’s opening keynote announcing the availability of Amazon Relational Database Service (Amazon RDS) on VMware. Available in the coming months, this functionality will free organizations to automate the setup of Microsoft SQL Server, Oracle, PostgreSQL, MariaDB and MySQL databases in their traditional VMware environments and then migrate them to the AWS cloud. Those interested in trying out this new service can register here for a preview.
  2. Analytics will pave the way for increasing levels of automation. As organizations of all sizes adopt hybrid environments, the only way they can effectively manage those environments at scale is to automate their management. This begins with analytics tools that capture the data points coming in from the underlying hardware, the operating systems, the applications, the public clouds to which they attach, the databases, the devices that feed them data, and more.

Evidence of the growing presence of these analytics tools that enable this automation was everywhere at VMworld. One good example is Runecast, which analyzes the logs of these environments and then scours blogs, white papers, forums, and other online sources for best practices to advise companies on how to best configure their environments. Another is Login VSI, which does performance benchmarking and forecasting to anticipate how VDI patches and upgrades will impact the current infrastructure.

  3. The cost savings and revenue opportunities for these hybrid environments promise to be staggering. One of the more compelling segments of the keynotes covered the savings that many companies initially achieved deploying vSphere. Below is one graphic that appeared at the 8:23 mark in the video of the second day’s keynote, in which a company reduced its spend on utility charges by over $60,000 per month, an 84% reduction in cost. Granted, this example was for illustration purposes, but it seemed in line with other stories I have anecdotally heard.

Source: VMware

But as companies move into this hybrid world that combines private and public clouds, the value proposition changes. While companies may still see cost savings going forward, it is more likely that they will realize new opportunities that were simply not possible before. For instance, they may deliver automated disaster recovery and high availability for many more, or all, of their applications. Alternatively, they will be able to bring new products and services to market much more quickly, or perform analysis that simply could not have been done before, because they have access to resources that previously were not available to them in a cost-effective or timely manner.




DCIG’s VMworld 2018 Best of Show

VMworld provides insight into some of the biggest tech trends occurring in the enterprise data center space and, once again, this year did not disappoint. But amid the literally dozens of vendors showing off their wares on the show floor, here are the four stories or products that caught my attention and earned my “VMworld 2018 Best of Show” recognition at this year’s event.

VMworld 2018 Best of Show #1: QoreStor Software-defined Secondary Storage

Software-defined dedupe in the form of QoreStor has arrived. A few years ago, Dell Technologies sold off its Dell Software division, which included an assortment (actually a lot) of software products, and that division emerged as Quest Software. Since then, Quest Software has done nothing but innovate like bats out of hell. One of the products coming out of this lab of mad scientists is QoreStor, a stand-alone software-defined secondary storage solution. QoreStor installs on any server hardware platform and works with any backup software. QoreStor is available as a free download that deduplicates up to 1TB of data. While at least one other vendor (Dell Technologies) offers a deduplication product that is available as a virtual appliance, only Quest QoreStor gives organizations the flexibility to install it on any hardware platform.

VMworld 2018 Best of Show #2: LoginVSI Load Testing

Stop guessing about the impact of VDI growth with LoginVSI. Right-sizing the hardware for a new VDI deployment from any VDI provider (Citrix, Microsoft, VMware, etc.) is hard enough. But keeping the hardware appropriately sized as more VDI instances are added, or as patches and upgrades are made to existing instances, can be dicey at best and downright impossible at worst.

LoginVSI helps take the guesswork out of any organization’s future VDI growth by:

  • examining the hardware configuration underlying the current VDI environment,
  • comparing that to the changes proposed for the environment, and then
  • making recommendations as to what changes, if any, need to be made to the environment to support
    • more VDI instances or
    • changes to existing VDI instances.

VMworld 2018 Best of Show #3: Runecast vSphere Environment Analyzer

Plug vSphere’s holes and keep it stable with Runecast. Runecast was formed by a group of former IBM engineers who, over the years, had become experts at two things while working at IBM: (1) installing and configuring VMware vSphere; and (2) pulling out their hair trying to keep each installation of VMware vSphere stable and up-to-date. To rectify this second issue, they left IBM and formed Runecast.

Runecast provides organizations with the software and tools they need to proactively:

  • identify needed vSphere patches and fixes,
  • identify security holes, and even
  • recommend best practices for tuning your vSphere deployments,

all with minimal to no manual intervention.

VMworld 2018 Best of Show #4: VxRail Hyper-converged Infrastructure

Kudos to Dell Technologies’ VxRail solution for saving every man, woman, and child in the United States $2 per person in 2018. I was part of a breakfast conversation on Wednesday morning with one of the Dell Technologies VxRail product managers. He anecdotally shared with the analysts present that Dell Technologies recently completed VxRail hyper-converged infrastructure deployments at two US government agencies. Each deployment is saving its agency $350 million annually. Collectively that amounts to $700 million, or roughly $2 per person for every person residing in the US. Thank you, Dell.




CloudShift Puts Flawless DR on Corporate Radar Screens

Hyper-converged Infrastructure (HCI) solutions excel at delivering on many of the attributes commonly associated with private clouds. Consequently, the concepts of hyper-convergence and private clouds have become, in many respects, almost inextricably linked.

But for an HCI solution to not have a clear path forward for public cloud support … well, that’s almost anathema in the increasingly hybrid cloud environments found in today’s enterprises. That’s what makes this week’s CloudShift announcement from Datrium notable – it begins to clarify how Datrium will go beyond backup to the public cloud as part of its DVX solution, and it puts the concept of flawless DR on corporate radar screens.

Source: Datrium

HCI is transforming how organizations manage their on-premise infrastructure. By combining compute, data protection, networking, storage, and server virtualization into a single pre-integrated solution, HCI solutions eliminate many of the headaches associated with traditional IT infrastructures while delivering the “cloud-like” speed and ease of deployment that enterprises want.

However, enterprises increasingly want more than “cloud-like” abilities from their on-premise HCI solution. They also want the flexibility to move the virtual machines (VMs) they host on their HCI solution into public cloud environments if needed. Specifically, if they run disaster recovery (DR) tests, perform an actual DR, or need to move a specific workload that is experiencing high throughput into the public cloud, having the flexibility to move VMs into and out of the cloud as needed is highly desirable.

Datrium answers the call for public cloud integration with its recent CloudShift announcement. However, Datrium did not just come out with a me-too answer for public clouds by announcing it will support the AWS cloud. Rather, it delivered what most enterprises are looking for at this stage in their journey to a hybrid cloud environment: a means to seamlessly incorporate the cloud into their overall DR strategy.

The goals behind its CloudShift announcement are three-fold:

  1. Build on the existing Datrium DVX platform that already manages the primary copy of data as well as its backups. With the forthcoming availability of CloudShift in the first half of 2019, it will complete the primary to backup to cloud circle that companies want.
  2. Make DR work flawlessly. If there are two words together that often represent an oxymoron, it is “flawless DR”. By bringing all primary, backup and cloud together and managing them as one holistic piece, companies can begin to someday soon (ideally in this lifetime) view flawless DR as the norm instead of the exception.
  3. Orchestrated DR failover and failback. “DR failover and failback” just rolls off the tongue – it is simple to say and everyone understands what it means. But executing a successful DR failover and failback in today’s world tends to get very pricey and very complex. By rolling the management of primary, backup, and cloud under one roof and then continually performing compliance checks on the execution environment to ensure it meets the RPO and RTO of the DR plan, Datrium gives companies a higher degree of confidence that DR failovers and failbacks only occur when they are supposed to and that, when they occur, they will succeed.

Despite many technology advancements in recent years, enterprise-wide, turnkey DR capabilities with orchestrated failover and failback between on-premises and the cloud are still largely the domain of high-end enterprises that have the expertise to pull it off and are willing to commit large amounts of money to establish and maintain a (hopefully) functional DR capability. Datrium’s CloudShift announcement puts the industry on notice that reliable, flawless DR that will meet the budget and demands of a larger number of enterprises is on its way.




Orchestrated Backup IN the Cloud Arrives with HYCU for GCP

Companies are either moving or have moved to the cloud with backup TO the cloud being one of the primary ways they plan to get their data and applications into the cloud. But orchestrating the backup of their applications and data once they reside IN the cloud… well, that requires an entirely different set of tools with few, if any, backup providers yet offering features in their respective products that deliver on this requirement. That ends today with the introduction of HYCU for GCP (Google Cloud Platform).

Listen to the podcast associated with this blog entry.

Whether you host your data and applications on Amazon Web Services (AWS), Microsoft Azure, GCP, or some other public cloud platform, each provides companies with multiple native backup utilities to protect data that resides in its cloud. The primary tools include snapshots, replication, and versioning, and GCP is no different.

What makes these tools even more appealing is that they are available at a cloud user’s fingertips: users can turn them on with the click of a button and only pay for what they use. Available for any data or application hosted with the cloud provider, they give organizations access to levels of data availability, data protection, and even disaster recovery that they previously had no easy means to deliver.
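To make the “click of a button” point concrete, the sketch below creates a snapshot of a single GCP persistent disk through the Compute Engine API using the google-api-python-client library. It assumes Application Default Credentials are already configured, and the project, zone, disk, and snapshot names are placeholders, not values from this announcement.

    # Minimal sketch: snapshot one GCP persistent disk via the Compute Engine API.
    # Assumes Application Default Credentials (e.g., `gcloud auth application-default login`).
    from googleapiclient import discovery

    compute = discovery.build("compute", "v1")

    operation = compute.disks().createSnapshot(
        project="my-project",                  # placeholder project ID
        zone="us-central1-a",                  # placeholder zone
        disk="my-app-disk",                    # placeholder disk name
        body={"name": "my-app-disk-nightly"},  # snapshot name
    ).execute()

    print("Snapshot operation started:", operation["name"])

Running this once for one disk is trivial; orchestrating it consistently across every disk, VM, and user in an organization is the gap described next.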

But the problem in this scenario is not application and/or data backup. The catch is how an organization does this at scale, in such a way that it can orchestrate and manage the backups of all its applications and data on a cloud platform such as GCP for all its users. The short answer is: it cannot.

This is a problem that HYCU for GCP addresses head-on. HYCU previously established a beachhead in Nutanix environments thanks to its tight integration with AHV. This integration positions HYCU well to extend those same benefits to any public cloud partner of Nutanix. The fact that Nutanix and Google announced a strategic alliance last year at the Nutanix .NEXT conference to build and operate hybrid clouds certainly helped HYCU prioritize GCP over the other public cloud providers for backup orchestration.

Leveraging HYCU in the GCP, companies immediately gain three benefits:

  1. Subscribe to HYCU directly from the GCP Marketplace. Rather than having to first acquire HYCU separately and then install it in the GCP, companies can buy it in the GCP Marketplace. This accelerates and simplifies HYCU’s deployment in the GCP while simultaneously giving companies access to a corporate grade backup solution that orchestrates and protects VMs in the GCP.
  2. Takes advantage of the native backup features in the GCP. GCP provides its own native snapshots that can be used for backup and recovery. HYCU capitalizes on them and puts them at the fingertips of admins, who can then manage and orchestrate backups and recoveries for all corporate VMs residing in the GCP.
  3. Frees organizations to confidently expand their deployment of applications and data in GCP. While GCP obviously has the tools to back up and recover data and applications in GCP, managing them at scale was going to be, at best, cumbersome and, at worst, impossible. HYCU for GCP frees companies to more aggressively deploy applications and data at scale in GCP, knowing that they can centrally manage their protection and recovery.

Backup TO the cloud is great, and almost every backup provider offers that functionality. But backup IN the cloud, where the backup and recovery of a company’s applications and data in the cloud is centrally managed… now, that is something that stands apart from the competition. Thanks to HYCU for GCP, companies no longer have to deploy data and applications in the Google Cloud Platform in a way that requires each of their users or admins to assume backup and recovery responsibilities for their own applications and data. Instead, companies can deploy knowing they have a tool in place that centrally manages their backups and recoveries.




Differentiating between High-end HCI and Standard HCI Architectures

Hyper-converged infrastructure (HCI) architectures have staked an early claim as the data center architecture of the future. The ability of HCI solutions to easily scale capacity and performance, simplify ongoing management, and deliver cloud-like features has led many enterprises to view HCI as a logical path forward for building on-premise clouds.

But when organizations deploy HCI at scale (more than eight nodes in a single logical configuration), cracks in its architecture can begin to appear unless organizations carefully examine how well it scales. On the surface, high-end and standard hyper-converged architectures deliver the same benefits. They each virtualize compute, memory, storage networking, and data storage in a scale-out architecture that is simple to deploy and manage. They support standard hypervisor platforms. They provide their own data protection in the form of snapshots and replication. In short, these two architectures mirror each other in many ways.

However, high-end and standard HCI solutions differ in functionality in ways that primarily surface in large virtualized deployments. In these environments, enterprises often need to:

  • Host thousands of VMs in a single, logical environment.
  • Provide guaranteed, sustainable performance for each VM by insulating them from one another.
  • Offer connectivity to multiple public cloud providers for archive and/or backup.
  • Seamlessly move VMs to and from the cloud.

It is in these circumstances that the differences between high-end and standard architectures emerge, with the high-end HCI architecture possessing four key advantages over standard HCI architectures. Consider:

  1. Data availability. Both high-end and standard HCI solutions distribute data across multiple nodes to ensure high levels of data availability. However, only high-end HCI architectures use separate, distinct compute and data nodes, with the data nodes dedicated to guaranteeing high levels of data availability for VMs on all compute nodes. This ensures that should any compute node become unavailable, the data associated with any VM on it remains available.
  2. Flash/performance optimization. Both high-end and standard HCI architectures keep data local to the VM by storing each VM’s data on the same node on which the VM runs. However, solutions based on high-end HCI architectures store a copy of each VM’s data on the VM’s compute node as well as on the architecture’s underlying data nodes to improve and optimize flash performance (a toy sketch after this list illustrates this placement). High-end HCI solutions such as the Datrium DVX also deduplicate and compress all data while limiting the amount of inter-nodal communication to free resources for data processing.
  3. Platform manageability/scalability. As a whole, HCI architectures are intuitive to size, manage, scale, and understand. In short, if you need more performance and/or capacity, you add another node. However, enterprises often must support hundreds or even thousands of VMs that each must perform well and be protected. Trying to deliver on those requirements at scale using a standard HCI solution, where inter-nodal communication is a prerequisite, becomes almost impossible. High-end HCI solutions facilitate granular deployments of compute and data nodes while limiting inter-nodal communication.
  4. VM insulation. Noisy neighbors—VMs that negatively impact the performance of adjacent VMs—remain a real problem in virtual environments. While both high-end and standard HCI architectures support storing data close to VMs, high-end HCI architectures do not rely on the availability of other compute nodes. This reduces inter-nodal communication and the probability that the noisy neighbor problem will surface.
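The following deliberately simplified toy model, not Datrium’s implementation, illustrates the placement scheme described in items 1 and 2: each VM keeps a working copy on its compute node’s local flash while durable replicas live on dedicated data nodes, so losing a compute node loses only the local copy, never the data.

    # Toy model of the high-end HCI data placement described above (illustrative only).
    REPLICAS_ON_DATA_NODES = 2  # assumed replica count for the illustration

    class Cluster:
        def __init__(self, compute_nodes, data_nodes):
            self.flash = {node: {} for node in compute_nodes}       # compute node -> {vm: data}
            self.data_nodes = {node: {} for node in data_nodes}     # data node -> {vm: data}

        def write(self, vm, compute_node, data):
            self.flash[compute_node][vm] = data                     # local copy for fast reads
            for node in list(self.data_nodes)[:REPLICAS_ON_DATA_NODES]:
                self.data_nodes[node][vm] = data                    # durable copies

        def fail_compute_node(self, compute_node):
            self.flash.pop(compute_node)                            # local flash copy is lost...

        def read(self, vm, compute_node=None):
            local = self.flash.get(compute_node, {})
            if vm in local:
                return local[vm]
            for node_data in self.data_nodes.values():              # ...but data nodes still serve it
                if vm in node_data:
                    return node_data[vm]
            raise KeyError(vm)

    cluster = Cluster(["compute-1", "compute-2"], ["data-1", "data-2", "data-3"])
    cluster.write("vm-42", "compute-1", b"application data")
    cluster.fail_compute_node("compute-1")
    print(cluster.read("vm-42"))  # still served from the data nodes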

HCI architectures have resonated with organizations in large part because of their ability to deliver cloud-like features and functionality while maintaining simplicity of deployment and ongoing maintenance. However, next-generation high-end HCI architectures, with solutions available from providers like Datrium, give organizations greater flexibility to deliver cloud-like functionality at scale, including better management and optimization of flash performance, improved data availability, and increased insulation between VM instances. To learn more about how standard and high-end HCI architectures compare, check out this recent DCIG pocket analyst report that is available on the TechTrove website.




Too Many Fires, Poor Implementations, and Cost Overruns Impeding Broader Public Cloud Adoption

DCIG’s analysts (myself included) have lately spent a great deal of time getting up close and personal with the capabilities of public cloud providers such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud. We have also spent time talking to individuals deploying cloud solutions. In doing so, we have come to recognize that the capabilities of these cloud offerings should meet and exceed the expectations of most organizations, regardless of their size. However, three concerns that have little to do with the technical capabilities of these public cloud solutions are impeding cloud adoption.

Anyone who spends any time studying the capabilities of any of these cloud offerings for the first time will walk away impressed. Granted, each offering has its respective strengths and weaknesses. However, when one examines each of these public cloud offerings and their respective infrastructures and compares them to the data centers that most companies own and manage, the differences are stark. The offerings from these public cloud providers win hands down. This might explain why organizations of all sizes are adopting the cloud at some level.

The more interesting dilemma is why organizations are not adopting public cloud offerings at a faster pace and why some early adopters are even starting to leave the cloud. While this is not an exhaustive list of reasons, here are three key concerns from our conversations and observations that are impeding cloud adoption.

Too many fires. Existing data centers are a constant target for budget cutbacks, are chronically understaffed, and too often lack any clear, long-term vision to guide their development. This combination of factors has led to costly, highly complex, inflexible data centers that need a lot of people to manage them. This situation exists at the exact moment when the business side of the house expects the data center to become simpler, more cost-effective, and more flexible to manage. While in-house data center IT staff may want to respond to these business requests, they are often consumed with putting out the fires caused by the complexity of the existing data center. This leaves them little or no time to explore and investigate new solutions.

Poor implementations. The good news is that public cloud offerings have very robust feature sets. The bad news is that all these features make them daunting to learn and easy to set up incorrectly. If anything, the ease and low initial cost of most public cloud providers may work against the adoption of public cloud solutions. They have made it so easy and inexpensive for companies to get into the cloud that companies may try it out without really understanding all the options available to them and the ramifications of the decisions they make. This can easily lead to poor application implementations in the cloud and potentially introduce more cost and complexity, not less. The main upside here is that because creating and tearing down virtual private clouds with these providers is relatively easy, even a poor setup can be rectified by creating a new virtual private cloud that better meets your needs.

Cloud cost overruns. Part of the reason companies live with, and even mask, the complexity of their existing data centers is that they can control their costs. Even if an application needs more storage, compute, networking, power – whatever – they can sometimes move hardware and software around on the back end to mask these costs until the next fiscal quarter or year rolls around, when they go to the business to ask for approval to buy more. Once applications and data are in the cloud and start to grow, these costs become exposed almost immediately. Since cloud providers bill based upon monthly usage, companies need to closely monitor their applications and data in the cloud: identifying which ones are starting to incur additional charges, knowing what options are available to lower those charges, and assessing the practicality of making those changes (a simple sketch of such a check follows this paragraph).
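As a simple illustration of that kind of monitoring, the sketch below flags services whose month-over-month spend grows beyond a chosen threshold. The service names, dollar amounts, and threshold are entirely made up, and a real deployment would pull these figures from the provider’s billing export or cost-management tooling rather than hard-coded dictionaries.

    # Illustrative month-over-month cost check against hypothetical billing figures.
    GROWTH_ALERT_THRESHOLD = 0.20  # flag anything growing more than 20% month over month

    last_month = {"object-storage": 1200.0, "vm-instances": 5400.0, "egress": 300.0}
    this_month = {"object-storage": 1250.0, "vm-instances": 7100.0, "egress": 640.0}

    for service, current in sorted(this_month.items()):
        previous = last_month.get(service)
        if not previous:
            print(f"{service}: new charge of ${current:,.2f} this month")
            continue
        growth = (current - previous) / previous
        flag = "  <-- investigate" if growth > GROWTH_ALERT_THRESHOLD else ""
        print(f"{service}: ${previous:,.2f} -> ${current:,.2f} ({growth:+.0%}){flag}")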

Anyone who honestly assesses the capabilities available from the major public cloud providers will find they deliver next-gen features better than most organizations can on their own. That said, companies either need to find the time to educate themselves about these cloud providers or identify someone they trust to help them down the cloud path. While these three issues are impeding cloud adoption, they should not be stopping it, as they still too often do. The good news is that even if a company does implement its environment in the cloud poorly the first time around (and few will), the speed and flexibility with which public cloud providers let them build out new virtual private clouds and tear down existing ones means they can cost-effectively improve it.