Rethinking Your Data Deduplication Strategy in the Software-defined Data Center

Data Centers Going Software-defined

There is little dispute that tomorrow’s data center will become software-defined, for reasons no one entirely anticipated even as recently as a few years ago. While companies have long understood the benefits of virtualizing the infrastructure of their data centers, the complexities and costs of integrating and managing data center hardware far exceeded whatever benefits virtualization delivered. Now, thanks to technologies such as the Internet of Things (IoT), machine intelligence, and analytics, among others, companies may pursue software-defined strategies more aggressively.

The introduction of technologies that can monitor, report on, analyze, and increasingly manage and optimize data center hardware frees organizations from performing housekeeping tasks such as:

  • Verifying hardware firmware compatibility with applications and operating systems
  • Troubleshooting hot spots in the infrastructure
  • Identifying and repairing failing hardware components

Automating these tasks does more than change how organizations manage their data center infrastructures. It reshapes how they can think about their entire IT strategy. Rather than adapting their business to match the limitations of the hardware they choose, they can now pursue business objectives and expect their IT hardware infrastructure to support those initiatives.

This change in perspective has already led to the availability of software-defined compute, networking, and storage solutions. Further, software-defined versions of databases, firewalls, and other applications that organizations commonly deploy have also emerged. These virtual appliances enable companies to quickly deploy entire application stacks. While it is premature to say that organizations can immediately virtualize their entire data center infrastructure, the foundation exists for them to do so.

Software-defined Storage Deduplication Targets

As they do so, data protection software, like any other application, needs to be part of the software-defined conversation. In this regard, backup software finds itself well-positioned to capitalize on this trend. It can be installed on either physical or virtual machines (VMs) and already ships from many providers as a virtual appliance. But storage software that functions primarily as a deduplication storage target already finds itself being boxed out of that broader conversation.

Software-defined storage (SDS) deduplication targets do exist, and their storage capabilities have increased significantly. By the end of 2018, a few of these software-defined virtual appliances scaled to support 100TB or more of capacity. But organizations must exercise caution when looking to position these available solutions as a cornerstone in a broader software-defined deduplication storage target strategy.

This caution, in many cases, stems less from the technology itself and more from the vendors who provide these SDS deduplication target solutions. In every case, save one, these solutions originate with providers who focus on selling hardware solutions.

Foundation for Software-defined Data Centers Being Laid Today

Companies are putting plans in place right now to build the data center of tomorrow. That data center will be a largely software-defined data center with solutions that span both on-premises and cloud environments. To achieve that end, companies need to select solutions that have a software-defined focus and that meet their current needs while positioning them for tomorrow’s requirements.

Most layers in the data center stack, including compute, networking, storage, and even applications, are already well down the road of transforming from hardware-centric to software-centric offerings. Yet in the face of this momentous shift in corporate data center environments, SDS deduplication target solutions have been slow to adapt.

It is this gap that SDS deduplication products such as Quest QoreStor look to fill. Coming from a company with “software” in its name, Quest comes without the hardware baggage that other SDS providers must balance. More importantly, Quest QoreStor offers a feature-rich set of services, ranging from deduplication and replication to support for all major cloud, hardware, and backup software platforms, that draws on 10 years of experience in delivering deduplication software.

Free to focus solely on delivering a software-defined data center (SDDC) solution, Quest QoreStor represents the type of SDS deduplication target that truly meets the needs of today’s enterprises while positioning them to realize the promise of tomorrow’s software-defined data center.

To read more of DCIG’s thoughts about using SDS deduplication targets in the software-defined data center of tomorrow, follow this link.




Key Differentiators between the HPE StoreOnce 5650 and Dell EMC Data Domain 9300 Systems

Companies have introduced a plethora of technologies into their core enterprise infrastructures in recent years, including all-flash arrays, cloud, hyper-converged infrastructures, object-based storage, and snapshots, just to name a few. But as they do, a few constants remain. One is the need to back up and recover all the data they create.

Deduplication appliances remain one of the primary means for companies to store this data for short-term recovery, disaster recoveries, and long-term data retention. To fulfill these various roles, companies often select either the HPE StoreOnce 5650 or the Dell EMC Data Domain 9300. (To obtain a complimentary DCIG report that compares these two products, follow this link.)

Their respective deduplication appliance lines share many features in common. They both perform inline deduplication. They both offer client software to do source-side deduplication that reduces data sent over the network and improves backup throughput rates. They both provide companies with the option to back up data over NAS or SAN interfaces.
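
To illustrate the basic idea behind deduplication, the minimal sketch below hashes fixed-size chunks of a backup file and stores each unique chunk only once. It is a conceptual example only, not how StoreOnce or Data Domain implement deduplication (both use far more sophisticated, typically variable-length, chunking), and the file name is a placeholder.

```python
import hashlib

CHUNK_SIZE = 128 * 1024  # fixed-size chunks; real appliances typically use variable-length chunking

def deduplicate(path, chunk_store):
    """Split a file into chunks, store only chunks not seen before,
    and return the list of chunk fingerprints that describe the file."""
    recipe = []
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            fingerprint = hashlib.sha256(chunk).hexdigest()
            if fingerprint not in chunk_store:   # new data: keep the chunk
                chunk_store[fingerprint] = chunk
            recipe.append(fingerprint)           # duplicate data: keep only a reference
    return recipe

if __name__ == "__main__":
    store = {}
    recipe = deduplicate("backup.img", store)    # placeholder file name
    logical = len(recipe) * CHUNK_SIZE
    physical = sum(len(c) for c in store.values())
    print(f"Logical {logical} bytes, stored {physical} bytes, "
          f"ratio {logical / max(physical, 1):.1f}:1")
```

Source-side deduplication applies the same fingerprinting on the client so that chunks already known to the appliance never cross the network.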

Despite these similarities, key areas of differentiation between these two product lines remain, including the following:

  1. Cloud support. Every company either has or anticipates using a hybrid cloud configuration as part of its production operations. These two product lines differ in their levels of cloud support.
  2. Deduplication technology. Data Domain was arguably the first to popularize widespread use of deduplication for backup. Since then, others such as the HPE StoreOnce 5650 have come on the scene that compete head-to-head with Data Domain appliances.
  3. Breadth of application integration. Software plug-ins that work with applications and understand their data formats prior to deduplicating the data provide tremendous benefits as they improve data reduction rates and decrease the amount of data sent over the network during backups. The software that accompanies the appliances from these two providers has varying degrees of integration with leading enterprise applications.
  4. Licensing. The usefulness of any product hinges on the features it offers, their viability, and which ones are available to use. Clear distinctions between the HPE StoreOnce and Dell EMC Data Domain solutions exist in this area.
  5. Replication. Copying data off-site for disaster recovery and long-term data retention is paramount in comprehensive enterprise disaster recovery strategies. Products from each of these providers deliver this capability, but they differ in the number of replication features they offer.
  6. Virtual appliance. As more companies adopt software-defined data center strategies, virtual appliances have increased appeal.

In the latest DCIG Pocket Analyst Report, DCIG compares the HPE StoreOnce 5650 and Dell EMC Data Domain 9300 product lines and examines how well each of these two products fares in its support of these six areas, looking at nearly 100 features to draw its conclusions. This report is currently available at no charge for a limited time on DCIG’s partner website, TechTrove. To receive complimentary access to this report, complete a registration form that you can find at this link.




20 Years in the Making, the Future of Data Management Has Arrived

Mention data management to almost any seasoned IT professional and they will almost immediately greet the term with skepticism. While organizations have found they can manage their data within certain limits, when they remove those boundaries and attempt to do so at scale, those initiatives have historically fallen far short, if not failed outright. It is time for that perception to change. 20 years in the making, Commvault Activate puts organizations in a position to finally manage their data at scale.

Those who work in IT are loath to say any feat in technology is impossible. If one looks at the capabilities of any handheld device, one can understand why they have this belief. People can pinpoint exactly where they are almost anywhere in the world to within a few feet. They can take videos, pictures, check the status of their infrastructure, text, … you name it, handheld devices can do it.

By way of example, as I write this I am at Commvault GO, where I watched YY Lee, SVP and Chief Strategy Officer of Anaplan, onstage. She explained how systems using artificial intelligence (AI) were able, within a very short time, sometimes days, to become experts at games such as Texas Hold’em and beat the best players in the world at them.

Despite advances such as these in technology, data management continues to bedevil large and small organizations alike. Sure, organizations may have some level of data management in place for certain applications (think email, file servers, or databases), but when it comes to identifying and leveraging a tool to deploy data management across an enterprise at scale, that tool has, to date, eluded organizations. This often includes the technology firms responsible for producing so much of the hardware that stores this data and the software that produces it.

The end for this vexing enterprise challenge finally came into view with Commvault’s announcement of Activate. What makes Activate different from other products that promise to provide data management at scale is that Commvault began development on this product 20 years ago in 1998.

During that time, Commvault became proficient in:

  • Archiving
  • Backup
  • Replication
  • Snapshots
  • Indexing data
  • Supporting multiple different operating systems and file systems
  • Gathering and managing metadata

Perhaps most importantly, it established relationships and gained a foothold in enterprise organizations around the globe. This alone is what differentiates it from almost every other provider of data management software. Commvault has 20+ years of visibility into the behavior and requirements of protecting, moving, and migrating data in enterprise organizations. This insight becomes invaluable when viewed in the context of enterprise data management which has been Commvault’s end game since its inception.

Activate builds on Commvault’s 20 years of product development, with its main differentiator being its ability to stand alone apart from other Commvault software. In other words, companies do not first have to deploy Commvault’s Complete Backup and Recovery or any of its other software to utilize Activate.

They can deploy Activate regardless of whatever other backup, replication, or snapshot software products they may have. But because Activate draws from the same code base as the rest of Commvault’s software, companies can deploy it with a great deal of confidence in the stability of that existing code base.

Once deployed, Activate scans and indexes the data across the company’s environment, which can include its archives, backups, file servers, and/or data stored in the cloud. Once the data is indexed, companies can assess the data in their environment in anticipation of taking next steps such as preparing for eDiscovery, remediating data privacy risks, and indexing and analyzing data based upon their own criteria.
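
As a rough illustration of what a scan-and-index pass looks like, the sketch below walks a file share, records basic metadata in SQLite, and then queries for stale data. This is a conceptual example, not Commvault’s implementation; the directory path and the seven-year cutoff are arbitrary placeholders.

```python
import os
import sqlite3
import time

def index_files(root, db_path="file_index.db"):
    """Walk a directory tree and record basic metadata for each file
    so the data set can be queried later (age, type, size)."""
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS files
                    (path TEXT PRIMARY KEY, size INTEGER, mtime REAL, ext TEXT)""")
    for dirpath, _dirs, names in os.walk(root):
        for name in names:
            full = os.path.join(dirpath, name)
            try:
                st = os.stat(full)
            except OSError:
                continue  # skip files that vanish or are unreadable
            ext = os.path.splitext(name)[1].lower()
            conn.execute("INSERT OR REPLACE INTO files VALUES (?, ?, ?, ?)",
                         (full, st.st_size, st.st_mtime, ext))
    conn.commit()
    return conn

if __name__ == "__main__":
    conn = index_files("/data/fileserver")  # hypothetical share to assess
    cutoff = time.time() - 7 * 365 * 24 * 3600
    stale = conn.execute("SELECT COUNT(*), SUM(size) FROM files WHERE mtime < ?",
                         (cutoff,)).fetchone()
    print("Files untouched for 7+ years (count, total bytes):", stale)
```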

Today more so than ever companies recognize they need to manage their data across the entirety of their enterprise. Delivering on this requirement requires a tool appropriately equipped and sufficiently mature to meet enterprise requirements. Commvault Activate answers this call as a software product that has been 20 years in the making to provide enterprises with the foundation they need to manage their data going forward.




Purpose-built Backup Cloud Service Providers: A Practical Starting Point for Cloud Backup and DR

The shift is on toward using cloud service providers for an increasing number of production IT functions, with backup and DR often at the top of the list of the tasks that companies first want to deploy in the cloud. But as IT staff seek to “check the box” that they comply with corporate directives to have a cloud solution in place for backup and DR, they also need to simultaneously check the “Simplicity,” “Cost-savings,” and “It Works” boxes.

Benefits of Using a Purpose-built Backup Cloud Service Provider

Cloud service providers purpose-built for backup and DR put companies in the best position to check all those boxes for this initial cloud use case. These providers solve their clients’ immediate challenges of easily and cost-effectively moving backup data off-site and then retaining it long term.

Equally important, addressing the off-site backup challenge in this way positions companies to do data recovery with the purpose-built cloud service provider, a general purpose cloud service provider, or on-premises. It also sets the stage for companies to regularly test their disaster recovery capabilities so they can perform them when necessary.

Choosing a Purpose-built Backup Cloud Service Provider

To choose the right cloud service provider for corporate backup and DR requirements, companies want to be cost conscious. But they also want to experience success and not put their corporate data or their broader cloud strategy at risk. A purpose-built cloud service provider such as Unitrends, with its Forever Cloud solution, frees companies to aggressively and confidently move ahead with a cloud deployment for their backup and DR needs.


Join Our Webinar for a Deeper Look at the Benefits of Using a Purpose-built Backup Cloud Service Provider

Join me next Wednesday, October 17, 2018, at 2 pm EST, for a webinar where I take a deeper look at both purpose-built and general purpose cloud service providers. In this webinar, I examine why service providers purpose-built for cloud backup and DR can save companies both money and time while dramatically improving the odds they can succeed in their cloud backup and DR initiatives. You can register by following this link.

 




HPE Expands Its Big Tent for Enterprise Data Protection

When it comes to the mix of data protection challenges that exist within enterprises today, these companies would love to identify a single product that they can deploy to solve all their challenges. I hate to be the bearer of bad news, but that single product does not yet exist. That said, enterprises will find a steadily improving ecosystem of products that increasingly work well together to address this challenge, with HPE at the forefront of putting up a big tent that brings these products together and delivers them as a single solution.

Having largely solved their backup problems at scale, enterprises have new freedom to analyze and address their broader enterprise data protection challenges. As they look to bring long-term data retention, data archiving, and multiple types of recovery (single applications, site failovers, disaster recoveries, and others) under one big tent for data protection, they find they often need to deploy multiple products.

This creates a situation where each product addresses specific pain points that enterprises have. However, multiple products equate to multiple management interfaces that each have their own administrative policies with minimal or no integration between them. This creates a thornier problem – enterprises are left to manage and coordinate the hand-off of the protection and recovery of data between these different individual data protection products.

A few years ago HPE started to build a “big tent” to tackle these enterprise data protection and recovery issues. It laid the foundation with its HPE 3PAR StoreServ storage arrays, StoreOnce deduplication storage systems, and Recovery Manager Central (RMC) software to help companies coordinate and centrally manage:

  • Snapshots on 3PAR StoreServ arrays
  • Replication between 3PAR StoreServ arrays
  • The efficient movement of data between 3PAR and StoreOnce systems for backup, long term retention, and fast recoveries

This week HPE expanded its big tent of data protection to give companies more flexibility to protect and recover their data more broadly across their enterprise. It did so in the following ways:

  • HPE RMC 6.0 can directly recover data to HPE Nimble storage arrays. Recoveries from backups can be a multi-step process that may require data to pass through the backup software and the application server before it lands on the target storage array. Beginning December 2018, companies can use RMC to directly recover data to HPE Nimble storage arrays from an HPE StoreOnce system without going through the traditional recovery process, just as they can already do to HPE 3PAR StoreServ storage arrays.
  • HPE StoreOnce can directly send and retrieve deduplicated data from multiple cloud providers. Companies sometimes fail to consider that general purpose cloud service providers such as Amazon Web Services (AWS) or Microsoft Azure make no provisions to optimize data stored with them, such as by deduplicating it. Using HPE StoreOnce’s new direct support for AWS, Azure, and Scality, companies can use StoreOnce to first compress and deduplicate data before they store the data in the cloud.
  • Integration between Commvault and HPE StoreOnce systems. Out of the gate, companies can use Commvault to manage StoreOnce operations such as replicating data between StoreOnce systems as well as moving data directly from StoreOnce systems to the cloud. Moreover, as this relationship between Commvault and HPE matures, companies will also be able to use HPE’s StoreOnce Catalyst, HPE’s client-based deduplication software agent, in conjunction with Commvault to back up data on server clients where data may not reside on HPE 3PAR or Nimble storage. Using the HPE StoreOnce Catalyst software, Commvault will deduplicate data on the source before sending it to an HPE StoreOnce system.

Source: HPE

Of the three announcements that HPE made this week, this new relationship with Commvault, which accompanies its pre-existing relationships with Micro Focus (formerly HPE Data Protector) and Veritas, demonstrates HPE’s commitment to helping enterprises build a big tent for their data protection and recovery initiatives. Storing data on HPE 3PAR and Nimble arrays and using RMC to manage backups and recoveries on StoreOnce systems certainly accelerates and simplifies these functions when companies can do so. But by working with these other partners, HPE shows that it recognizes companies will not store all their data on its systems and that it will accommodate them so they can create a single, larger data protection and recovery solution for their enterprise.




Two Hot Technologies to Consider for Your 2019 Budgets

Hard to believe, but the first day of autumn is just two days away, and with fall weather always come cooler temperatures (which I happen to enjoy!). This means people are staying inside a little more and doing those fun, end-of-year activities that everyone enjoys – such as planning their 2019 budgets. As you do so, solutions from BackupAssist and StorMagic are two hot new technologies for companies to consider making room for in the New Year.

BackupAssist 365

BackupAssist 365 backs up files and emails stored in the cloud. While backup of cloud-based data may seem rather ho-hum in today’s artificial intelligence and blockchain obsessed, digital transformation focused world, it solves a real-world problem that organizations of nearly every size face: how to cost-effectively and simply protect all those pesky files and emails that people store in cloud applications such as DropBox, Office 365, Google Drive, OneDrive, Gmail, Outlook and others.

To do so, BackupAssist 365 adopted two innovative yet practical approaches to protect files and emails.

  • First, it interfaces directly with these various cloud providers to back up this data. Using your login permissions (which you provide when configuring the software), BackupAssist 365 accesses data directly in the cloud. This negates the need for your server, PC, or laptop to be turned on when these backups occur, so backups can occur at any time.
  • Second, it does cloud-to-local backup (see the sketch below). In other words, rather than running up more data transfer and network costs that come with backing up to another cloud, it backs the data up to local storage on your site. While that may seem a little odd in today’s cloud-centric world, companies can get a great deal of storage capacity for nominal amounts of money. Since it only does an initial full backup and then differential backups thereafter, the ongoing data transfer costs are nominal and the amount of storage capacity needed onsite is equally small.
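
The sketch below illustrates the differential, cloud-to-local pattern described above: after an initial full backup, only files modified since that full are copied to inexpensive local storage. It is a conceptual example rather than BackupAssist’s code; a locally mounted or synced folder stands in for data pulled from a cloud account, and the paths are placeholders.

```python
import os
import shutil

def differential_backup(source_root, backup_root, last_full_time):
    """Copy only files modified since the last full backup.
    source_root stands in for data pulled from a cloud account;
    backup_root is inexpensive local storage."""
    copied = 0
    for dirpath, _dirs, names in os.walk(source_root):
        for name in names:
            src = os.path.join(dirpath, name)
            if os.path.getmtime(src) <= last_full_time:
                continue                      # unchanged since the full backup
            rel = os.path.relpath(src, source_root)
            dst = os.path.join(backup_root, rel)
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.copy2(src, dst)            # copy and preserve timestamps
            copied += 1
    return copied

# Example usage: after an initial full backup recorded at full_time,
# nightly runs transfer only the changes (placeholder paths).
# changed = differential_backup("/mnt/cloud_export", "/backups/cloud", full_time)
```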

Perhaps the best part about BackupAssist 365 is its cost (or lack thereof). BackupAssist 365 licenses its software on a per-user basis, with each user email account counting as one user license. However, this one email account covers the backup of that user’s data in any cloud service used by that user. Further, the cost is only $1/month per user, with a decreasing cost for greater numbers of users. In fact, the cost is so low on a per-user basis that companies may not even need to budget for this service. They can just start using it and expense it to their credit cards to keep it below corporate radar screens.

StorMagic SvSAN

The StorMagic SvSAN touches on two other hot technology trends that I purposefully (or not so purposefully) left out above: hyperconverged infrastructure (HCI) and edge computing. However, unlike many of the HCI and edge computing plays in the marketplace such as Cisco HyperFlex, Dell EMC VxRail, and Nutanix, StorMagic has not forgotten about the cost constraints that branch, remote, and small offices face.

As Cisco, Dell EMC, Nutanix and others chase the large enterprise data center opportunities, they often leave remote, branch, and small offices with two choices: pay up or find another solution. Many offices of this size are opting to find alternative solutions.

This is where StorMagic primarily plays. For a less well-known player, they play much bigger than they may first appear. Through partnerships with large providers such as Cisco and Lenovo among others, StorMagic comes to market with highly available, two-server systems that scale across dozens, hundreds, or even thousands of remote sites. To get a sense of StorMagic’s scalability, walk into any of the 2,000+ Home Depots in the United States or Mexico and ask to look at the computer system that hosts their compute and storage. If the Home Depot lets you and you can find it, you will find a StorMagic system running somewhere in the store.

The other big challenge that each StorMagic system addresses is security. Because its systems can be deployed almost anywhere in any environment, they are susceptible to theft. In fact, in talking to one of its representatives, he shared a story where someone drove a forklift through the side of a building and stole a computer system at one of its customer sites. Not that it mattered. To counter these types of threats, StorMagic encrypts all the data on its HCI solutions with its own software that is FIPS 140-2 compliant.

Best of all, to get these capabilities, companies do not have to break the bank to acquire one of these systems. The list price for the Standard Edition of the SvSAN software, which includes 2TB of usable storage, high availability, and remote management, is $2,500.

As companies look ahead and plan their 2019 budgets, they need to take care of their operational requirements but they may also want to dip their toes in the water to get the latest and greatest technologies. These two technologies give companies the opportunities to do both. Using BackupAssist 365, companies can quickly and easily address their pesky cloud file and email backup challenges while StorMagic gives them the opportunity to affordably and safely explore the HCI and edge computing waters.




CloudShift Puts Flawless DR on Corporate Radar Screens

Hyper-converged Infrastructure (HCI) solutions excel at delivering on many of the attributes commonly associated with private clouds. Consequently, the concepts of hyper-convergence and private clouds have become, in many respects, almost inextricably linked.

But for an HCI solution to not have a clear path forward for public cloud support … well, that’s almost anathema in the increasingly hybrid cloud environments found in today’s enterprises. That’s what makes this week’s CloudShift announcement from Datrium notable – it begins to clarify how Datrium is going to go beyond backup to the public cloud as part of its DVX solution, and it puts the concept of flawless DR on corporate radar screens.

Source: Datrium

HCI is transforming how organizations manage their on-premises infrastructure. By combining compute, data protection, networking, storage and server virtualization into a single pre-integrated solution, HCI solutions eliminate many of the headaches associated with traditional IT infrastructures while delivering the “cloud-like” speed and ease of deployment that enterprises want.

However, enterprises increasingly want more than “cloud-like” abilities from their on-premises HCI solution. They also want the flexibility to move the virtual machines (VMs) they host on their HCI solution into public cloud environments if needed. Specifically, if they run disaster recovery (DR) tests, perform an actual DR, or need to move a specific workload experiencing high throughput into the public cloud, having the flexibility to move VMs into and out of the cloud as needed is highly desirable.

Datrium answers the call for public cloud integration with its recent CloudShift announcement. However, Datrium did not just come out with a me-too answer for public clouds by announcing it will support the AWS cloud. Rather, it delivered what most enterprises are looking for at this stage in their journey to a hybrid cloud environment: a means to seamlessly incorporate the cloud into their overall DR strategy.

The goals behind its CloudShift announcement are three-fold:

  1. Build on the existing Datrium DVX platform that already manages the primary copy of data as well as its backups. With the forthcoming availability of CloudShift in the first half of 2019, it will complete the primary to backup to cloud circle that companies want.
  2. Make DR work flawlessly. If there are two words together that often represent an oxymoron, it is “flawless DR”. By bringing primary, backup and cloud together and managing them as one holistic piece, companies can soon (ideally in this lifetime) begin to view flawless DR as the norm instead of the exception.
  3. Orchestrated DR failover and failback. “DR failover and failback” just rolls off the tongue – it is simple to say and everyone understands what it means. But executing a successful DR failover and failback in today’s world tends to get very pricey and very complex. By rolling the management of primary, backup and cloud under one roof and then continually performing compliance checks on the execution environment to ensure it meets the RPO and RTO of the DR plan, Datrium gives companies a higher degree of confidence that DR failovers and failbacks occur only when they are supposed to and that, when they occur, they will succeed.

Despite many technology advancements in recent years, enterprise-wide, turnkey DR capabilities with orchestrated failover and failback between on-premises and the cloud are still largely the domain of high-end enterprises that have the expertise to pull it off and are willing to commit large amounts of money to establish and maintain a (hopefully) functional DR capability. Datrium’s CloudShift announcement puts the industry on notice that reliable, flawless DR that will meet the budget and demands of a larger number of enterprises is on its way.




Orchestrated Backup IN the Cloud Arrives with HYCU for GCP

Companies are either moving or have moved to the cloud with backup TO the cloud being one of the primary ways they plan to get their data and applications into the cloud. But orchestrating the backup of their applications and data once they reside IN the cloud… well, that requires an entirely different set of tools with few, if any, backup providers yet offering features in their respective products that deliver on this requirement. That ends today with the introduction of HYCU for GCP (Google Cloud Platform).

Listen to the podcast associated with this blog entry.

Regardless of which public cloud platform companies use to host their data and/or applications, whether Amazon Web Services (AWS), Microsoft Azure, GCP, or some other platform, these providers all offer multiple native backup utilities to protect data that resides on their cloud. The primary tools include the likes of snapshots, replication, and versioning, with GCP being no different.

What makes these tools even more appealing to use is that they are available at a cloud user’s fingertips; they can turn them on with the click of a button; and they only pay for what they use. Available for any data or application hosted with the cloud provider, they give organizations access to levels of data availability, data protection, and even disaster recovery that they previously had no easy means to deliver.
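
As an example of the kind of native primitive these platforms expose, the sketch below triggers a Compute Engine disk snapshot through the public GCP API using the google-api-python-client library. The project, zone, disk, and snapshot names are placeholders, the call assumes application default credentials are configured, and it simply shows the building block that a backup orchestrator must invoke for every VM disk.

```python
from googleapiclient import discovery  # pip install google-api-python-client

def snapshot_disk(project, zone, disk, snapshot_name):
    """Trigger a native Compute Engine snapshot of a persistent disk.
    This is the per-disk primitive a backup orchestrator would call."""
    compute = discovery.build("compute", "v1")  # uses application default credentials
    operation = compute.disks().createSnapshot(
        project=project,
        zone=zone,
        disk=disk,
        body={"name": snapshot_name},
    ).execute()
    return operation  # poll the zone operation to wait for completion

# Placeholder identifiers for illustration only:
# snapshot_disk("my-project", "us-central1-a", "app-server-disk", "app-server-nightly")
```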

But the problem in this scenario is not application and/or data backup. The catch is how an organization does this at scale, in such a way that it can orchestrate and manage the backups of all its applications and data on a cloud platform such as GCP for all its users. The short answer is: it cannot.

This is a problem that HYCU for GCP addresses head-on. HYCU has previously established a beachhead in Nutanix environments thanks to its tight integration with AHV. This integration well positions HYCU to extend those same benefits to any public cloud partner of Nutanix. The fact that Nutanix and Google announced a strategic alliance last year at the Nutanix .NEXT conference to build and operate hybrid clouds certainly helped HYCU prioritize GCP over the other public cloud providers for backup orchestration.

Leveraging HYCU in the GCP, companies immediately gain three benefits:

  1. Subscribe to HYCU directly from the GCP Marketplace. Rather than having to first acquire HYCU separately and then install it in the GCP, companies can buy it in the GCP Marketplace. This accelerates and simplifies HYCU’s deployment in the GCP while simultaneously giving companies access to a corporate grade backup solution that orchestrates and protects VMs in the GCP.
  2. Takes advantage of the native backup features in the GCP. GCP has its own native snapshots that can be used for backup and recovery. HYCU capitalizes on these and puts them at the fingertips of admins, who can then manage and orchestrate backups and recoveries for all corporate VMs residing in the GCP.
  3. Frees organizations to confidently expand their deployment of applications and data in GCP. While GCP obviously has the tools to back up and recover data and applications in GCP, managing them at scale was going to be, at best, cumbersome and, at worst, impossible. HYCU for GCP frees companies to begin to more aggressively deploy applications and data at scale in GCP, knowing that they can centrally manage their protection and recovery.

Backup TO the cloud is great, and almost every backup provider offers that functionality. But backup IN the cloud, where the backup and recovery of a company’s applications and data in the cloud is centrally managed… now, that is something that stands apart from the competition. Thanks to HYCU for GCP, companies no longer have to deploy data and applications in the Google Cloud Platform in a way that requires each of their users or admins to assume backup and recovery responsibilities for their applications and data. Instead, companies can do so knowing they now have a tool in place that can centrally manage their backups and recoveries.




Four Implications of Public Cloud Adoption and Three Risks to Address

Businesses are finally adopting the public cloud because a large and rapidly growing catalog of services is now available from multiple cloud providers. These two factors have many implications for businesses. This article addresses four of these implications plus several cloud-specific risks.

Implication #1: No enterprise IT department will be able to keep pace with the level of services innovation available from cloud providers

The battle is over. Cloud wins. Deal with it.

Dealing with it does not necessarily mean that every business will move every workload to the cloud. It does mean that it is time for business IT departments to build awareness of the services available from public cloud providers. One way to do this is to tap into the flow of service updates from one or more of the major cloud providers.

For Amazon Web Services, I like What’s New with AWS. Easy filtering by service category is combined with sections for featured announcements, featured video announcements, and one-line listings of the most recent announcements from AWS. The one-line listings include links to service descriptions and to longer form articles on the AWS blog.

For Microsoft Azure, I like Azure Updates. As its subtitle says, “One place. All updates.” The Azure Updates site provides easy filtering by product, update type and platform. I especially like the ability to filter by update type for General Availability and for Preview. The site also includes links to the Azure roadmap, blog and other resources. This site is comprehensive without being overwhelming.

For Google Cloud Platform, its blog may be the best place to start. The view can be filtered by label, including by announcements. This site is less functional than the AWS and Microsoft Azure resources cited above.

For IBM Cloud, the primary announcements resource is What’s new with IBM Cloud. Announcements are presented as one-line listings with links to full articles.

Visit these sites, subscribe to their RSS feeds, or follow them via social media platforms. Alternatively, subscribe to their weekly or monthly newsletters via email. Once a business has workloads running in one of the public clouds, at a minimum an IT staff member should follow that provider’s updates site.
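
For those who prefer to automate this, the sketch below polls one of these update feeds with the feedparser library and prints the most recent announcements. The AWS feed URL shown is an assumption; confirm the current RSS address on the provider’s site.

```python
import feedparser  # pip install feedparser

# Assumed AWS "What's New" RSS feed URL; confirm on the AWS site.
AWS_WHATS_NEW = "https://aws.amazon.com/about-aws/whats-new/recent/feed/"

def latest_announcements(feed_url, limit=5):
    """Return the most recent announcement titles and links from an update feed."""
    feed = feedparser.parse(feed_url)
    return [(entry.title, entry.link) for entry in feed.entries[:limit]]

if __name__ == "__main__":
    for title, link in latest_announcements(AWS_WHATS_NEW):
        print(f"- {title}\n  {link}")
```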

Implication #2: Pressure will mount on Enterprise IT to connect business data to public cloud services

The benefits of bringing public cloud services to bear on the organization’s data will create pressure on enterprise IT departments to connect business data to those services. There are many options for accomplishing this objective, including:

  1. All-in with one public cloud
  2. Hybrid: on-prem plus one public
  3. Hybrid: on-prem plus multiple public
  4. Multi-cloud (e.g. AWS + Azure)

The design of the organization and the priorities of the business should drive the approach taken to connect business data with cloud services.

Implication #3: Standard data protection requirements now extend to data and workloads in the public cloud

No matter what approach is taken when embracing the public cloud, standard data protection requirements extend to data and workloads in the cloud. Address these requirements up front. Explore alternative solutions and select one that meets the organization’s data protection requirements.

Implication #4: Cloud Data Protection and DRaaS are on-ramps to public cloud adoption

For most organizations the transition to the cloud will be a multi-phased process. Data protection solutions that can send backup data to the cloud are a logical early phase. Disaster recovery as a service (DRaaS) offerings represent another relatively low-risk path to the cloud that may be more robust and/or lower cost than existing disaster recovery setups. These solutions move business data into public cloud repositories. As such, cloud data protection and DRaaS may be considered on-ramps to public cloud adoption.

Once corporate data has been backed up or replicated to the cloud, tools are available to extract and transform the data into formats that make it available for use/analysis by that cloud provider’s services. With proper attention, this can all be accomplished in ways that comply with security and data governance requirements. Nevertheless, there are risks to be addressed.

Risk to Address #1: Loss of change control

The benefit of rapid innovation has a downside. Any specific service may be upgraded or discarded by the provider without much notice. Features used by a business may be enhanced or deprecated. This can force changes in other software that integrates with the service or in procedures used by staff and the associated documentation for those procedures.

For example, Office 365 and Google G Suite features can change without much notice. This creates a “Where did that menu option go?” experience for end users. Some providers reduce this pain by providing a quick tutorial for new features within the application itself. Others provide online learning centers that make new feature tutorials easy to discover.

Accept this risk as an unavoidable downside to rapid innovation. Where possible, manage the timing of these releases to an organization’s users, giving them advance notice of the changes along with access to tutorials.

Risk to Address #2: Dropped by provider

A risk that may not be obvious to many business leaders is that of being dropped by a cloud service provider. A business with unpopular opinions might have services revoked, sometimes with little notice. Consider how quickly the movement to boycott the NRA resulted in severed business-to-business relationships. Even an organization as large as the US Military faces this risk. As was highlighted in recent news, Google will not renew its military AI project due in large part to pressure from Google employees.

Mitigate this risk through contracts and architecture. This is perhaps one argument in favor of a hybrid on-prem plus cloud approach to the public cloud versus an all-in approach.

Risk to Address #3: Unpredictable costs

It can be difficult to predict the costs of running workloads in the public cloud, and these costs can change rapidly. Address this risk by setting cost thresholds that trigger an alert. Consider subscribing to a service such as Nutanix Beam to gain granular visibility into and optimization of public cloud costs.
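
As one way to set such a threshold, the sketch below uses the AWS Budgets API via boto3 to create a monthly cost budget that emails an alert at 80% of the limit; Azure and GCP offer comparable budget alert services. The account ID, dollar amount, and email address are placeholders, and IAM permission to call the Budgets API is assumed.

```python
import boto3

def create_cost_alert(account_id, monthly_limit_usd, email):
    """Create a monthly cost budget that emails a subscriber
    when actual spend crosses 80% of the limit."""
    budgets = boto3.client("budgets")
    budgets.create_budget(
        AccountId=account_id,
        Budget={
            "BudgetName": "monthly-cloud-spend",
            "BudgetLimit": {"Amount": str(monthly_limit_usd), "Unit": "USD"},
            "TimeUnit": "MONTHLY",
            "BudgetType": "COST",
        },
        NotificationsWithSubscribers=[{
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [{"SubscriptionType": "EMAIL", "Address": email}],
        }],
    )

# Placeholder values for illustration:
# create_cost_alert("123456789012", 5000, "cloud-costs@example.com")
```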

It’s time to get real about the public cloud

Many businesses are ready to embrace the public cloud. IT departments should make themselves aware of services that may create value for their business. They should also work through the implications of moving corporate data and workloads to the cloud, and make plans for managing the attendant risks.




Hackers Say Goodbye to Ransomware and Hello to Bitcoin Mining

Ransomware gets a lot of press – and for good reason – because when hackers break through your firewalls, encrypt your data, and make you pay up or else lose your data, it rightfully gets people’s attention. But hackers probably have less desire than most to be in the public eye and sensationalized ransomware headlines bring them unwanted attention. That’s why some hackers have said goodbye to the uncertainty of a payout associated with getting a ransom for your data and instead look to access your servers to do some bitcoin mining using your CPUs.

A week or so ago a friend of mine who runs an Amazon Web Services (AWS) consultancy and reseller business shared a story with me about one of his clients who hosts a large SaaS platform in AWS.

His client had mentioned to him in the middle of the week that the applications on one of his test servers were running slow. While my friend was intrigued, he did not give it much thought at the time. This client was not using his managed services offering, which meant that he was not necessarily responsible for troubleshooting performance issues.

Then the next day his client called him back and said that now all his servers hosting this application – test, dev, client acceptance, and production – were running slow. This piqued his interest, so he offered resources to help troubleshoot the issue. The client then allowed his staff to log into these servers to investigate.

Upon logging into these servers, they discovered that all the instances running at 100% also ran a Drupal web application. This did not seem right, especially considering that it was early on a Saturday morning when the applications should mostly be idle.

After doing a little more digging around on each server, they discovered a mysterious multi-threaded process running on each one that was consuming all its CPU resources. Further, the process had also opened a network connection to a server located in Europe. Even more curious, the executable that launched the process had been deleted after the process started. It was as if someone was trying to cover their tracks.
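
A check along these lines can be scripted. The sketch below (an illustration, not the commands they actually ran) uses psutil on Linux to flag processes that are pegging the CPU and whose executable has been deleted from disk.

```python
import os
import psutil  # pip install psutil

def find_suspicious_processes(cpu_threshold=80.0):
    """Flag Linux processes that are consuming heavy CPU and whose
    on-disk executable has been deleted (a common cover-your-tracks sign)."""
    suspects = []
    procs = list(psutil.process_iter(["pid", "name"]))
    for p in procs:
        p.cpu_percent(None)           # prime the per-process CPU counters
    psutil.cpu_percent(interval=1.0)  # wait one second to build a sample window
    for p in procs:
        try:
            cpu = p.cpu_percent(None)
            exe_link = os.readlink(f"/proc/{p.pid}/exe")
        except (psutil.NoSuchProcess, psutil.AccessDenied, OSError):
            continue  # process exited or we lack permission to inspect it
        if cpu >= cpu_threshold and exe_link.endswith(" (deleted)"):
            suspects.append((p.pid, p.info["name"], cpu, exe_link))
    return suspects

if __name__ == "__main__":
    for pid, name, cpu, exe in find_suspicious_processes():
        print(f"PID {pid} ({name}) at {cpu:.0f}% CPU, exe: {exe}")
```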

At this point, suspecting the servers had all been hacked, they checked to see if there were any recent security alerts. Sure enough. On March 28, 2018, Drupal issued a security advisory stating that if you were not running Drupal 7.58 or Drupal 8.5.1, your servers were vulnerable to hackers who could remotely execute code on them.

However, what got my friend’s attention is that these hackers did not want his client’s data. Rather, they wanted his client’s processing power to do bitcoin mining which is exactly what these servers had been doing for a few days now on behalf of these hackers. To help their client, they killed the bitcoin mining process on each of these servers before calling his client to advise them to patch Drupal ASAP.

The story does not end there. In this case, his client did not patch Drupal quickly enough. Sometime after they killed the bitcoin mining processes, another hacker leveraged that same Drupal security flaw and performed the same hack. By the time his client came to work on Monday, there were bitcoin mining processes running on those servers that again consumed all their CPU cycles.

What they found especially interesting was how the executable file that the new hackers had installed worked. In reviewing their code, the first thing it did was to kill any pre-existing bitcoin mining processes started by other hackers. This freed all the CPU resources to handle bitcoin mining processes started by the new hackers. The hackers were literally fighting each other over access to the compromised system’s resources.

Two takeaways from this story:

  1. Everyone is rightfully worried about ransomware, but bitcoin mining may not hit corporate radar screens. I doubt that hackers want the FBI, CIA, Interpol, MI6, Mossad, or any other law enforcement or intelligence agency hunting them down any more than you or I do. While hacking servers and “stealing” CPU cycles is still a crime, it probably is much further down on the priority list of most companies as well as these agencies.

A bitcoin mining hack may go unnoticed for long periods of time and may not be reported by companies or prosecuted by these agencies even when reported, because it is easy to perceive this type of hack as a victimless crime. Yet the longer the hackers’ bitcoin mining processes go unnoticed and remain active, the more bitcoin the hackers earn. Further, one should assume hackers will only become more sophisticated going forward. Expect hackers to figure out how to install bitcoin mining processes that run without consuming all CPU cycles so these processes remain running and unnoticed for longer periods of time.

  2. Hosting your data and processes in the cloud does not protect them against these types of attacks. AWS has all the utilities available to monitor and detect these rogue processes. That said, organizations still need someone to implement these tools and then monitor and manage them.

Companies may be relieved to hear that some hackers have stopped targeting their data and are instead targeting their processors to use them for bitcoin mining. However, there are no victimless crimes. Your pocketbook will still get hit in cases like this, as Amazon will bill you for using these resources.

In cases like this, if companies start to see their AWS bills going through the roof, it may not be the result of increased business activity. It may be that their servers have been hacked and they are paying to finance some hacker’s bitcoin mining operation. To avoid this scenario, companies should ensure they have the right internal people and processes in place to keep their applications up-to-date, to protect infrastructure from attacks, and to monitor their infrastructures whether hosted on-premises or in the cloud.




Nutanix Backup Software: Make the Best Choice Between HYCU and Rubrik

Organizations of all sizes now look to hyper-converged infrastructure solutions such as the Nutanix Enterprise Cloud Platform to provide them with their next generation of data center IT infrastructure services. As they do, they need software optimized for protecting Nutanix environments. HYCU, Inc., and Rubrik are two early leaders in this space. Each possesses distinctive attributes that make one or the other better suited for providing data protection services, depending on the conditions that exist in your environment.

Get the DCIG Pocket Analyst Report comparing these two products by following this link.

Hyper-converged infrastructure solutions such as the Nutanix Enterprise Cloud Platform stand poised to fundamentally change how enterprises manage their IT infrastructure. They simplify and automate long-standing tasks such as ensuring application availability, performing data migrations and hardware refreshes, and integrating with leading public cloud providers. But this looming changeover in IT infrastructure still leaves organizations with the responsibility to protect the data hosted on these solutions. This is where products such as those from HYCU, Inc., and Rubrik come into play.

HYCU (pronounced “hī-Q”) for Nutanix and Rubrik Cloud Data Management are two data protection software products that protect virtual machines (VMs) but also offer features optimized for the protection of Nutanix environments. HYCU and Rubrik Cloud Data Management share some similarities, as they both support:

  • Application and file level restores for Windows applications and operating systems
  • Concurrent backups
  • Full recovery of VMs from backups
  • Multiple cloud providers for application recovery and/or long-term data retention
  • Protection of VMs on non-Nutanix platforms
  • Snapshots to perform incremental backups

Despite these similarities, differences between these two products remain. To help enterprises select the product that best fits their needs to protect their Nutanix environment, DCIG’s newest Pocket Analyst Report identifies seven factors that differentiate these two products. Some of these factors include:

  1. Depth of Nutanix integration
  2. Breadth of application support
  3. Breadth of public cloud support
  4. Vendor stability

This four-page DCIG Pocket Analyst Report contains analyst commentary about each of these features, identifies which product has strengths in each of these areas, and contains multiple pages of side-by-side feature comparisons to support these conclusions. Follow this link to download and access this newest DCIG Pocket Analyst Report, which is available at no charge for a limited time.

 




HYCU Branches Out to Tackle ESX Backups in non-Nutanix Shops

A virtualization-focused backup software play may be perceived as “too little, too late” with so many players in today’s backup space. However, many former virtualization-centric backup software plays (PHD Virtual and vRanger come to mind) have largely disappeared, while others got pricier and/or no longer do just VM backups. These changes have once again created a need for a virtualization-centric backup software solution. This plays right into the hands of the newly created HYCU as it formally tackles the job of ESX virtual machine (VM) backups in non-Nutanix shops.

Virtualization-centric backup software products have almost disappeared in the last few years. Some have been acquired and become part of larger entities (PHD Virtual was acquired by Unitrends, while AppAssure and vRanger both ended up with Quest Software), while others have diversified into providing both physical and virtual backups. But as these changes have occurred, the need for a virtualization-focused backup software solution has not necessarily diminished. If anything, the rise of hyper-converged platforms such as those that Dell EMC, Nutanix, HPE SimpliVity, Pivot3, and others offer has created a new need for a backup software product designed for these environments.

Enter HYCU. HYCU as a brand originally surfaced mid-last year from Comtrade Software. Today it takes on the name of its flagship HYCU backup software product and becomes a standalone company. By adopting the corporate name of HYCU, it completes the break from its parent company, Comtrade Group, as well as the Comtrade Software name under which it has operated for the past nine months.

During its initial nine-month existence, HYCU focused on tackling VM backups in Nutanix environments. It started out by protecting VMs running on Nutanix Acropolis hypervisor (AHV) environments and then expanded to protect VMs running on ESX in Nutanix environments.

Today HYCU takes a logical and necessary leap to ensure its VM-centric backup software finds a home in a broader number of enterprises. While HYCU may arguably do the best job of any backup software product available when it comes to protecting VMs in Nutanix environments, most organizations do not yet host all their VMs on Nutanix.

To address this larger market, HYCU is broadening its capabilities to tackle the protection of VMs on non-Nutanix platforms. There is some significance in HYCU taking this step. Up to this point, HYCU leveraged the native data protection capabilities found on Nutanix’s platform to negate the possibility of VM stuns. This approach worked whether it protected VMs running on AHV or ESX, as both were hosted on the Nutanix platform and HYCU could call on Nutanix’s native snapshot capabilities.

Source: HYCU

By porting its software to protect VMs running on non-Nutanix platforms, HYCU by necessity must use the native VMware APIs for Data Protection (VADP) to protect these VMs. As VADP does not offer the same level of data protection against VM stuns that the native Nutanix platform offers, users on non-Nutanix platforms remain exposed to the possibility of VM stuns.

That said, organizations do gain three advantages by using HYCU on non-Nutanix platforms:

  1. They obtain a common solution to protect VMs on both their Nutanix and non-Nutanix platforms. HYCU provides them with one interface to manage the protection of all VMs.
  2. Affordable VM backups. HYCU prices its backup software very aggressively with list prices of about $1500/socket.
  3. They can more easily port VMs from non-Nutanix to Nutanix platforms. Once they begin to protect VMs on non-Nutanix platforms, they can restore them to Nutanix platforms. Once ported, they can replace the VM’s underlying data protection methodology with Nutanix’s native data protection capabilities to negate the possibility of VM stuns.

In today’s highly virtualized world a virtualization centric backup software play may seem late to market. However, backup software consolidations and mergers coupled with the impact that hyper-converged infrastructures are having on enterprise data centers have created an opening for an affordable virtualization centric backup software play.

HYCU has rightfully discerned that such an opportunity exists. By now extending the capabilities of its product to protect non-Nutanix environments, it knocks down the barriers and objections to these environments adopting its software while simultaneously easing their path to eventually transition to Nutanix and address the VM stun challenges that persist in non-Nutanix environments.




Seven Key Differentiators between the Cohesity DataPlatform and Rubrik Cloud Data Management HCIA Solutions

Hyper-converged infrastructure architectures (HCIA) are foundational for the next generation of data centers. Key to realizing that vision is to implement HCIA solutions for both primary and secondary storage. The Cohesity DataPlatform and Rubrik Cloud Data Management solutions have emerged as the early leaders in this rapidly growing market segment. While these two products share many features in common, seven key points of differentiation between them still exist, as the latest DCIG Pocket Analyst Report reveals.

Rubrik and Cohesity are in an intense battle, with their respective hyper-converged infrastructure architectures (HCIAs) representing the next generation of cloud data management, data protection, and secondary storage, and with merit. Initially, HCIAs were viewed only in the context of consolidating software and hardware running in production. However, these HCIA solutions show great promise for data protection and recovery.

By virtualizing compute, memory, storage networking, data storage, and data protection in a simple to deploy and manage scale-out architecture, these solutions solve the same problems as those HCIAs targeted at production environments. In addition, these solutions give organizations a clear path toward cost-effectively and simply implementing data protection, data recovery, and connectivity to the cloud, among many other features. In the race to deliver on the promise of this architecture, Cohesity and Rubrik have emerged as the early leaders in this emerging market.

On the surface, Cohesity and Rubrik appear to share many features in common. Both entered the market with HCIA appliances before later rolling out software running on third-party hardware and virtual appliances capable of running onsite or in the cloud. The product feature sets and capabilities from the two companies often mirror each other, with rapid product release schedules and updates where what may appear to be a gap in one product’s lineup one quarter is filled the next.

Despite these similarities, differences between them remain. To help enterprises make the most appropriate choice between these two solutions, DCIG’s latest Pocket Analyst Report examines seven key features to consider when choosing between these two products, including:

  1. Breadth of hypervisor support
  2. Breadth of supported cloud providers
  3. Breadth of industry standard server hardware support
  4. Data protection and replication capabilities
  5. Flexibility of deduplication deployment options
  6. Proven scale-out capabilities
  7. vCenter backup monitoring and management

This four-page DCIG Pocket Analyst Report contains analyst commentary about each of these features, identifies which product has strengths in each of these areas, and contains 2+ pages of side-by-side feature comparisons to support these conclusions. Follow this link to register to access this newest DCIG Pocket Analyst Report at no charge for a limited time.

 




A More Elegant (and Affordable) Approach to Nutanix Backups

One of the more perplexing challenges that Nutanix administrators face is how to protect the data in their Nutanix deployments. Granted, Nutanix natively offers its own data protection utilities. However, these utilities leave gaps that enterprises are unlikely to find palatable when protecting their production applications. This is where Comtrade Software’s HYCU and ExaGrid come into play as their combined solutions provide a more affordable and elegant approach to protecting Nutanix environments.

One of the big appeals of hyperconverged solutions such as Nutanix is the inclusion of basic data protection utilities. Using its Time Stream and Cloud Connect technologies, Nutanix makes it easy and practical for organizations to protect applications hosted on VMs running on Nutanix deployments.

The issue becomes: how does one affordably deliver and manage data protection in Nutanix environments at scale? This is a tougher question for Nutanix to answer because using its data protection technologies at scale requires running the Nutanix platform to host the secondary/backup copies of data. While that is certainly doable, that approach is likely not the most affordable way to tackle this challenge.

This is where a combined data protection solution from Comtrade Software and ExaGrid for the protection of Nutanix environments makes sense. Comtrade Software’s HYCU was the first backup software product to come to market purpose-built to protect Nutanix environments. As with Nutanix’s native data protection utilities, Nutanix administrators can manage HYCU and their VM backups from within the Nutanix PRISM management console. Unlike those native utilities, HYCU auto-detects applications running within VMs and configures them for protection.

Further distinguishing HYCU from the other competitive backup software products mentioned on Nutanix’s web page, HYCU is the only one currently listed that can run as a VM in an existing Nutanix implementation. The other products listed require organizations to deploy a separate physical machine to run their software, which adds cost and complexity to the backup equation.

Of course, once HYCU protects the data, the issue becomes: where does one store the backup copies of data for fast recoveries and long-term retention? While one can certainly keep these backup copies on the existing Nutanix deployment or on a separate deployment of it, this creates two issues.

  • One, if there is some issue with the current Nutanix deployment, you may not be able to recover the data.
  • Two, there are more cost-effective solutions for the storage and retention of backup copies of data.

ExaGrid addresses these two issues. Its scale-out architecture resembles Nutanix’s architecture, enabling an ExaGrid deployment to start small and then easily scale to greater amounts of capacity and throughput. However, since it is a purpose-built backup appliance intended to store secondary copies of data, it is more affordable than deploying a second Nutanix cluster. Further, the Landing Zones uniquely found on ExaGrid deduplication systems facilitate near-instantaneous recovery of VMs.
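For readers less familiar with the landing-zone concept, a minimal sketch in Python follows, assuming a simplified model in which the newest backup of each VM stays whole (so a restore is a straight copy) while older backups are deduplicated into a chunk repository. The class, its methods, and the aging policy are hypothetical simplifications for illustration, not ExaGrid’s actual implementation.

```python
# Simplified, illustrative model of a "landing zone" style backup target:
# the newest backup of each VM is kept whole for fast restores, while older
# backups are reduced to deduplicated chunks in a shared repository.
# Conceptual sketch only, not ExaGrid's actual implementation.
import hashlib


class LandingZoneTarget:
    def __init__(self) -> None:
        self.landing_zone = {}  # vm_name -> latest backup bytes, kept whole
        self.chunk_store = {}   # chunk digest -> chunk bytes (stored once)
        self.retention = {}     # vm_name -> older backups as lists of digests

    def ingest(self, vm_name: str, backup: bytes, chunk_size: int = 4096) -> None:
        # Age the previous full copy out of the landing zone by deduplicating it.
        previous = self.landing_zone.get(vm_name)
        if previous is not None:
            digests = []
            for i in range(0, len(previous), chunk_size):
                chunk = previous[i:i + chunk_size]
                digest = hashlib.sha256(chunk).hexdigest()
                self.chunk_store.setdefault(digest, chunk)  # unique chunks stored once
                digests.append(digest)
            self.retention.setdefault(vm_name, []).append(digests)
        # The newest backup stays whole, so a restore is a straight copy.
        self.landing_zone[vm_name] = backup

    def restore_latest(self, vm_name: str) -> bytes:
        # "Near-instantaneous" recovery: no rehydration needed for the newest copy.
        return self.landing_zone[vm_name]


if __name__ == "__main__":
    target = LandingZoneTarget()
    target.ingest("vm01", b"monday-full-backup " * 1000)
    target.ingest("vm01", b"tuesday-full-backup " * 1000)
    print(len(target.restore_latest("vm01")))  # restores Tuesday's copy directly
```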

Adding to the appeal of ExaGrid’s solutions in enterprise environments is its recently announced EX63000E appliance. This appliance has 58% more capacity than its predecessor, allowing for a 63TB full backup. Up to thirty-two (32) EX63000E appliances can be combined in a single scale-out system to allow for a 2PB full backup. Per ExaGrid’s published performance benchmarks, each EX63000E appliance has a maximum ingest rate of 13.5TB/hr, enabling thirty-two (32) EX63000Es combined in a single system to achieve a maximum ingest rate of 432TB/hr.
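Because the scale-out math is linear, the published system-level numbers follow directly from the per-appliance figures. The quick sanity check below, written in Python, simply multiplies ExaGrid’s stated per-appliance capacity and ingest rate by the 32-appliance system maximum.

```python
# Back-of-the-envelope check of ExaGrid's published EX63000E scale-out figures.
per_appliance_full_backup_tb = 63    # TB in a full backup per EX63000E
per_appliance_ingest_tb_hr = 13.5    # maximum ingest (TB/hr) per EX63000E
appliances_per_system = 32           # maximum appliances in one scale-out system

system_full_backup_tb = per_appliance_full_backup_tb * appliances_per_system
system_ingest_tb_hr = per_appliance_ingest_tb_hr * appliances_per_system

print(f"Full backup per system: {system_full_backup_tb} TB (~{system_full_backup_tb / 1000:.1f} PB)")
print(f"Maximum ingest per system: {system_ingest_tb_hr} TB/hr")
# Prints 2016 TB (~2.0 PB) and 432.0 TB/hr, matching the published figures.
```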

Hyperconverged infrastructure solutions are, in general, poised to reshape enterprise data center landscapes, with solutions from Nutanix currently leading the way. As this data center transformation occurs, organizations need to make sure that the data protection solutions they put in place offer the same ease of management and scalability that the primary hyperconverged solution provides. Using Comtrade Software HYCU and ExaGrid, organizations get the affordable yet elegant data protection solution they seek for this next-generation data center architecture.




Differentiating between the Dell EMC Data Domain and ExaGrid EX Systems

Deduplication backup target appliances remain a critical component of the data protection infrastructure for many enterprises. While storing protected data in the cloud may be fine for very small businesses or even as a final resting place for enterprise data, deduplication backup target appliances continue to function as enterprises’ primary backup target and primary source for recovering data. It is for these reasons that enterprises frequently turn to deduplication backup target appliances from Dell EMC and ExaGrid to meet these specific needs, which are covered in a recent DCIG Pocket Analyst Report.

The Dell EMC Data Domain and ExaGrid families of deduplication backup target appliances appear on the short lists for many enterprises. While both these providers offer systems for small, midsize, and large organizations, the underlying architecture and features on the systems from these two providers make them better suited for specific use cases.

Their respective data center efficiency, deduplication, networking, recoverability, replication, and scalability features (to include recently announced enhancements) provide insight into the best use cases for the systems from these two vendors.

Purpose-built deduplication systems from both Dell EMC Data Domain and ExaGrid have widespread appeal as they expedite backups, increase backup and recovery success rates, and simplify existing backup environments. They offer appliances in various physical configurations to meet the specific backup needs of small, midsize, and large enterprises while providing virtual appliances that can run in private clouds, public clouds, or virtualized remote and branch offices.

Their systems significantly reduce backup data stores and offer concurrent backup and replication. They also limit the number of backup streams, display real-time deduplication ratios, and do capacity analysis and trending. Despite the similarities that the systems from these respective vendors share, six differences exist between them in their underlying features that impact their ability to deliver on key end-user expectations. These include:

  1. Data center efficiency to include how much power they use and the size of their data center footprint.
  2. Data reduction to include what deduplication options they offer and how they deliver them.
  3. Networking protocols to include connectivity for NAS and SAN environments.
  4. Recoverability to include how quickly, how easily, and where recoveries may be performed.
  5. Replication to include copying data offsite as well as protecting data in remote and branch offices.
  6. Scalability to include total amount of capacity as well as ease and simplicity of scaling.

DCIG is pleased to make available, as a complimentary download for a limited time, a recent DCIG Pocket Analyst Report that compares these two families of deduplication backup target appliances. This succinct, 4-page report includes a detailed product matrix as well as insight into these six differentiators between these two solutions and which one is best positioned to deliver on these six key data center considerations.

To access and download this report at no charge for a limited time, simply follow this link and complete the registration form on the landing page.




Glitch is the Next Milestone in Recoveries

No business – and I mean no business – regardless of its size ever wants to experience an outage for any reason or duration. However, to completely avoid outages means spending money and, in most cases, a lot of money. That is why, when someone shared with me earlier this week that one of their clients has put in place a solution that, for a nominal cost, keeps their period of downtime to what appears as a glitch to their end-users, it struck a chord with me.

The word outage does not sit well with anyone in any size organization. It conjures up images of catastrophes, chaos, costs, lost data, screaming clients, and uncertainty. Further, anyone who could possibly have been involved with causing the outage often takes the time to make sure they have their bases covered or their resumes updated. Regardless of the scenario, very little productive work gets done as everyone scrambles to first diagnose the root cause of the outage, fix it, and then take steps to prevent it from ever happening again.

Here’s the rub in this situation: only large enterprises have the money to buy top-notch hardware and software, backed by elite staff, to put solutions in place that come anywhere near guaranteeing this type of availability. Even then, these solutions are usually reserved for a handful of mission-critical and maybe business-critical applications. The rest of their applications remain subject to outages of varying lengths and causes.

Organizations other than large enterprises face this fear daily. While their options for speed of recovery have certainly improved in recent years thanks to disk-based backup and virtualization, recovering any of their applications from a major outage such as a hardware failure, ransomware attack, or just plain old unexpected human error may still take hours or longer. Perhaps worse, everyone knows about it and curses out the IT staff for this unexpected and prolonged interruption in their work day.

Here’s what caught my attention on the phone call I had this week. While the company in question retains its ideal of providing uninterrupted availability for its end-users as its end game, its immediate milestone is to reduce the impact of outages down to a glitch from the perspective of those end-users.

Granted, a temporary outage of any application for even a few minutes is neither ideal, nor will end-users or management greet any outage with cheers. However, recovering an application in a few minutes (say, 5-10 minutes) will be far better received than communicating that the recovery will take hours or days, or replying with an ambiguous “we are making a best-faith effort to fix the problem.”

This is where setting a milestone of having any application recovery appear as a glitch to the organization starts to make sense. Solutions that provide uninterrupted availability and instant recoveries often remain out of reach financially for all but the wealthiest enterprises. However, solutions that provide recoveries that can make outages appear as only a glitch to end-users are now within reach of almost any size business.

No one likes outages of any type. However, if IT can in the near term turn outages into glitches from a corporate visibility perspective, IT will have achieved a lot. The good news is that data protection solutions that span on-premises and the cloud are readily available now and, when properly implemented, can well turn many application outages into a glitch.




2017 Reflects the Tipping Point in IT Infrastructure Design and Protection

At the end of the year people naturally reflect on the events of the past year and look forward to the new. I am no different. As I reflect on the past year and look ahead at how IT infrastructures within organizations have changed and will change, 2017 strikes me as being as transformative as any year in the past decade, if not the past 50 years. While that may sound presumptuous, 2017 seems to be the year that reflects the tipping point in how organizations will build out and protect their infrastructures going forward.

Over the last few years, technologies have been coming to market that challenge two long-standing assumptions regarding the build-out of IT infrastructures and the protection of the data stored in that infrastructure.

  1. The IT infrastructure stack consists of a server with its own CPU, memory, networking, and storage stack (or derivations thereof) to support it
  2. The best means of protecting data stored in that stack is done at the file level

Over the last two decades, organizations of all sizes have been grappling with how best to accommodate and manage the introduction of applications into their environment that automate everything. They have been particularly stressed on the IT infrastructure side with each application needing its own supporting server stack. While managing one or even a few (less than 5) applications may be adequately achieved using the original physical server stack, more than that starts to break the stack and create new inefficiencies.

These inefficiencies gave rise to virtualization at the server, networking, and storage levels, which helped to somewhat alleviate them. However, at the end of the day, one still had multiple physical servers, storage arrays, and networking switches that now hosted virtual servers, storage arrays, and fabrics. This virtualization solved some problems but created its own set of complexities that made managing these virtualized infrastructures even worse if one did not proactively put in place, or already have in place, frameworks to automate their management.

Further aggravating this situation, organizations also needed to protect the data residing on this IT infrastructure. In protecting it, one of the underlying assumptions made by both providers of data protection software and those who acquired it was that data was best protected at the file level. While this premise largely worked well when applications resided on physical servers, it begins to break down in virtualized environments and almost completely falls apart in virtualized environments with hundreds or thousands of virtual machines (VMs).

These inefficiencies associated with very large (and even not so large) virtualized environments have resulted in the following two trends coming to the forefront and transforming how organizations manage their IT infrastructures going forward.

  1. Hyper-converged infrastructures will become the predominant way that organizations will deploy, host, and manage applications going forward
  2. Data protection will predominantly occur at the volume level as opposed to the file level

I call out hyper-converged infrastructures as this architecture provides organizations the means to successfully manage and scale their IT infrastructure. It does so with minimal to no compromise on any of the features that organizations want their IT infrastructure to provide: affordability, availability, manageability, reliability, scalability, or any of the other abilities I mentioned in my blog entry from last week.

The same holds true with protecting applications at the volume level. By primarily creating copies of data at the volume level (aka virtual machine level) instead of the file level, organizations get the level of recoverability that they need with the ease and speed at which they need it.

I call out 2017 as a tipping point in the deployment of IT infrastructures in large part because the combination of hyper-converged infrastructures and the protection of data at the volume level enables the IT infrastructure to finally get out of the way of organizations easily and quickly deploying more applications. Too often organizations hit a wall of sorts that precluded them from adopting new applications as quickly, easily, and cost-effectively as they wanted because the existing IT infrastructures only scaled up to a point. Thanks to the availability and broad acceptance of hyper-converged infrastructures and volume level data protection, it appears the internal IT infrastructure wall that prevented the rapid adoption of new technologies has finally fallen.




Comtrade Software Goes beyond AHV, Adds ESX Support

Every vendor new to a market generally starts by introducing a product that satisfies a niche to gain a foothold in that market. Comtrade Software exemplified this premise earlier this year by coming to market with its HYCU software that targets the protection of VMs hosted on the Nutanix AHV hypervisor. But to grow in a market, especially in the hyper-competitive virtual machine (VM) data protection space, one must expand to protect all market-leading hypervisors. Comtrade Software’s most recent HYCU release achieves that goal with its new support for VMware ESX.

In any rapidly growing market – and few markets currently experience faster growth than the VM data protection space – there will be opportunities to enter it that existing players overlook or cannot respond to in a timely manner. Such an entry point occurred earlier this year.

Comtrade Software recognized that no vendor had yet released purpose-built software targeted at protecting VMs hosted on the Nutanix AHV hypervisor. By coming to market with its HYCU software when it did (June 2017), it was able to gain a foothold in customer accounts already using AHV that needed a simpler and more intuitive data protection solution.

But being a one-trick pony only works for so long in this space. Other vendors have since come to market with features that compete head-to-head with HYCU by enabling their software to more effectively protect VMs hosted on the Nutanix AHV hypervisor. Remaining viable and relevant in this space demanded that Comtrade Software expand its support to VMs running on other hypervisors.

Comtrade Software answered that challenge this month. Its current release adds VMware ESX support to give organizations the freedom to use HYCU to protect VMs running on AHV, ESX, or both. However, Comtrade Software tackled its support of ESX in a manner different than many of its counterparts.

Comtrade Software does NOT rely on the VMware APIs for Data Protection (VADP), which have become almost the default industry standard for protecting VMs. It instead leverages Nutanix snapshots to protect VMs running on the Nutanix cluster regardless of whether the underlying hypervisor is AHV or ESX. The motives behind this decision are two-fold, as this technique minimizes if not eliminates:

  1. Application impact
  2. VM stuns

A VM stun, or the quiescing of a virtual machine, is done to create a snapshot that contains a consistent or recoverable backup of the application and/or data residing on the VM. Under normal conditions, this VM stun poses minimal or no risk to an organization as it typically completes in under a second.

However, hyper-converged environments are becoming anything but normal. As organizations continue to increase VM density, virtualize more I/O-intensive applications, and/or retain more snapshots for longer periods of time on their Nutanix cluster, the length and impact of VM stuns increase when using VMware’s native VADP, as other authors have discussed. To counter this, HYCU leverages the native snapshot functionality found in Nutanix to offset this known deficiency of VMware VADP wherever any of these three conditions exists.
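To make the distinction concrete, the sketch below shows in a few lines of Python what it looks like to ask the Nutanix cluster itself for a storage-level VM snapshot rather than asking the hypervisor for a quiesced VADP snapshot. The Prism host, credentials, endpoint path, and payload shape are illustrative assumptions modeled on Nutanix’s v2-style REST API; this is a conceptual sketch, not HYCU’s actual code.

```python
# Illustrative only: request a storage-level VM snapshot from a Nutanix
# cluster over its REST API instead of asking the hypervisor for a quiesced
# (VADP-style) snapshot. The endpoint path and payload follow the general
# shape of Nutanix's Prism v2-style REST API but are assumptions here.
import requests
from requests.auth import HTTPBasicAuth

PRISM_HOST = "https://prism.example.local:9440"   # hypothetical cluster address
AUTH = HTTPBasicAuth("admin", "password")         # hypothetical credentials


def take_cluster_snapshot(vm_uuid: str, snapshot_name: str) -> dict:
    """Ask the Nutanix cluster (not the hypervisor) to snapshot a VM's storage."""
    payload = {"snapshot_specs": [{"vm_uuid": vm_uuid, "snapshot_name": snapshot_name}]}
    response = requests.post(
        f"{PRISM_HOST}/PrismGateway/services/rest/v2.0/snapshots",
        json=payload,
        auth=AUTH,
        verify=False,   # lab-style example; verify certificates in production
        timeout=30,
    )
    response.raise_for_status()
    # The call returns a task reference; the snapshot completes asynchronously
    # at the storage layer, so the VM is not paused ("stunned") the way a
    # quiesced VADP snapshot can be under heavy load.
    return response.json()


if __name__ == "__main__":
    print(take_cluster_snapshot("replace-with-vm-uuid", "storage-level-backup-001"))
```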

Comtrade Software rightly recognizes what it is up against as it seeks to establish a larger footprint in the broader VM data protection space. While its initial release of HYCU enabled it to establish a footprint with some organizations, to keep that footprint with its existing customers as well as attract new customers going forward, it needed to introduce support for other hypervisors.

Its most recent release accomplishes that objective and its choice of ESX exposes it to many more opportunities with nearly 70 percent of Nutanix installations currently using ESX as their preferred hypervisor. However, Comtrade Software offers support for ESX in a very clever way that differentiates it from its competitors.

By leveraging Nutanix snapshots instead of VMware VADP, it capitalizes on its existing tight relationship with Nutanix by giving organizations new opportunities to improve the availability and protection of applications already running on Nutanix. Further, it gives them greater confidence to scale their Nutanix implementation to host more applications and/or higher performance applications going into the future.





Deduplication Still Matters in Enterprise Clouds as Data Domain and ExaGrid Prove

Technology conversations within enterprises increasingly focus on the “data center stack” with an emphasis on cloud enablement. While I agree with this shift in thinking, one can too easily overlook the merits of the underlying individual technologies when only considering the “Big Picture”. Such is happening with deduplication technology. A key enabler of enterprise archiving, data protection, and disaster recovery solutions, deduplication is delivered by vendors such as Dell EMC and ExaGrid in different ways that, as DCIG’s most recent 4-page Pocket Analyst Report reveals, make each product family better suited for specific use cases.

It seemed that for too many years enterprise data centers focused too much on the vendor name on the outside of the box as opposed to what was inside the box – the data and the applications. Granted, part of the reason for their focus on the vendor name is that they wanted to demonstrate they had adopted and implemented the best available technologies to secure the data and make it highly available. Further, some of the emerging technologies necessary to deliver a cloud-like experience with the needed availability and performance characteristics did not yet exist, were not yet sufficiently mature, or were not available from the largest vendors.

That situation has changed dramatically. Now the focus is almost entirely on software that provides enterprises with cloud-like experiences that enable them to more easily and efficiently manage their applications and data. While this change is positive, enterprises should not lose sight of the technologies that make up their emerging data center stack, as the products delivering those technologies are not all equally equipped to do so in the same way.

A key example is deduplication. While this technology has existed for years and has become very mature and stable during that time, the options by which enterprises can implement it and the benefits they will realize from it vary greatly. The deduplication solutions from Dell EMC Data Domain and ExaGrid illustrate these differences very well.

DCIG Pocket Analyst Report Compares Dell EMC Data Domain and ExaGrid Product Families

Deduplication systems from both Dell EMC Data Domain and ExaGrid have widespread appeal as they expedite backups, increase backup and recovery success rates, and simplify existing backup environments. They also both offer appliances in various physical configurations to meet the specific backup needs of small, midsize, and large enterprises while providing virtual appliances that can run in private clouds, public clouds, or virtualized remote and branch offices.

However, their respective systems also differ in key areas that will impact the overall effectiveness these systems will have in the emerging cloud data stacks that enterprises are putting in place. The six areas in which they differ include:

  1. Data center efficiency
  2. Deduplication methodology
  3. Networking protocols
  4. Recoverability
  5. Replication
  6. Scalability

The most recent 4-page DCIG Pocket Analyst Report analyzes these six attributes for the systems from these two providers and compares the underlying features that deliver on them. Further, this report identifies which product family has the advantage in each area and provides a feature comparison matrix to support these claims.

This report provides the key insight in a concise manner that enterprises need to make the right choice in deduplication solutions for their emerging cloud data center stack. This report may be purchased for $19.95 at TechTrove, a new third-party site that hosts and makes independently developed analyst content available for sale.

Cloud-like data center stacks that provide application and data availability, mobility, and security are rapidly becoming a reality. But as enterprises adopt these new enterprise clouds, they ignore or overlook technologies such as deduplication that make up these stacks at their own peril, as the underlying technologies they implement can directly impact the overall efficiency and effectiveness of the cloud they are building.




Veritas Delivering on its 360 Data Management Strategy While Performing a 180

Vendors first started bandying about the phrase “cloud data management” a year or so ago. While that phrase caught my attention, specifics as to what one should expect when acquiring a “cloud data management” solution remained nebulous at best. Fast forward to this week’s Veritas Vision 2017, where I finally encountered a vendor that was providing meaningful details as to what cloud data management encompasses while simultaneously performing a 180 behind the scenes.

Ever since I heard the term cloud data management a year or so ago, I have loved it. If there was ever a marketing phrase that captured the essence of how every end-user secretly wants to manage all of its data while the vendor or vendors promising to deliver it commit to absolutely nothing, this phrase nailed it. A vendor could shape and mold that definition however it wanted and know that end-users would listen to the pitch even if deep down the users knew it was marketing spin at its best.

Of course, Veritas promptly blew up these pre-conceived notions of mine this week at Vision 2017. While at the event, Veritas provided specifics about its cloud data management strategy that rang true if for no other reason than that they had a high degree of veracity to them. Sure, Veritas may refer to its current strategy as “360 Data Management.” But to my ears it sure sounded like someone had finally articulated, in a meaningful way, what cloud data management means and the way in which they could deliver on it.

[Graphic: Veritas 360 Data Management strategy. Source: Veritas]

The above graphic is the one that Veritas repeatedly rolls out when it discusses its 360 Data Management strategy. While notable in that Veritas is one of the few vendors that can articulate the particulars of its data management strategy, the strategy more importantly has three components that currently make it more viable than many of its competitors’. Consider:

  1. Its existing product portfolio maps very neatly into its 360 Data Management strategy. One might argue (probably rightfully so) that Veritas derived its 360 Data Management strategy from the product portfolio it has built up over the years. However, many of these same critics have also contended that Veritas has been nothing but a company with an amalgamation of point products and no comprehensive vision. Well, guess what, the world changed over the past 12-24 months and it bent decidedly in the direction of software. Give Veritas some credit. It astutely recognized this shift, saw that its portfolio aligned damn well with how enterprises want to manage their data going forward, and had the chutzpah to craft a vision that it could deliver based upon the products it had in-house.
  2. It is not resting on its laurels. Last year when Veritas first announced its 360 Data Management strategy, I admit, I inwardly groaned a bit. In its first release, all it did was essentially mine the data in its own NetBackup catalogs. Hello, McFly! Veritas is only now thinking of this? To its credit, this past week it expanded the list of products that its Information Map connectors can access to over 20. These include Microsoft Exchange, Microsoft SharePoint, and Google Cloud, among others. Again, I must applaud Veritas for its efforts on this front. While this news may not be momentous or earth-shattering, it visibly reflects a commitment to delivering on and expanding the viability of its 360 Data Management strategy beyond just NetBackup catalogs.
  3. The cloud plays very well in this strategy. Veritas knows that it plays in the enterprise space, and it also knows that enterprises want to go to the cloud. While nowhere in its vision image above does it overtly say “cloud”, guess what? It doesn’t have to. It screams, “Cloud!” This is why many of its announcements at Veritas Vision around its CloudMobility, Information Map, NetBackup Catalyst, and other products talk about efficiently moving data to and from the cloud and then monitoring and managing it whether it resides on-premises, in the cloud, or both.

One other change it has made internally (and this is where the 180 initially comes in) is how it communicates this vision. When Veritas was part of Symantec, it stopped sharing its roadmap with current and prospective customers. In this area, Veritas has made a 180: customers who ask and sign a non-disclosure agreement (NDA) with Veritas can gain access to this roadmap.

Veritas may communicate that the only 180-degree turn it has made in the 18 months or so since it was spun out of Symantec is its new freedom to communicate its roadmap to current and/or prospective customers. While that may be true, the real 180 it has made entails successfully putting together a cohesive vision that articulates the value of the products in its portfolio in a context that enterprises are desperate to hear. Equally impressive, Veritas’ software-first focus better positions it than its competitors to enable enterprises to realize this ideal.