HCI Comparison Report Reveals Key Differentiators Between Dell EMC VxRail and Nutanix NX

Many organizations view hyper-converged infrastructure (HCI) as the data center architecture of the future. Dell EMC VxRail and Nutanix NX appliances are two leading options for creating the enterprise hybrid cloud. Visibility into their respective data protection ecosystems, enterprise application certifications, solution integration, support for multiple hypervisors, scalability and maturity should help organizations choose the most appropriate solution for them.

HCI Appliances Deliver Radical Simplicity

Hyper-converged infrastructure appliances radically simplify the data center architecture. These pre-integrated appliances accelerate and simplify infrastructure deployment and management. They combine and virtualize compute, memory, storage and networking functions from a single vendor in a scale-out cluster. Thus, the stakes are high for vendors such as Dell EMC and Nutanix as they compete to own this critical piece of data center real estate.

In the last several years, HCI has also emerged as a key enabler for cloud adoption. These solutions provide connectivity to public and private clouds, and offer their own cloud-like properties. Ease of scaling, simplicity of management, plus non-disruptive hardware upgrades and data migrations are among the features that enterprises love about these solutions.

HCI Appliances Are Not All Created Equal

Many enterprises are considering HCI solutions from Dell EMC and Nutanix. A cursory examination of these two vendors and their solutions quickly reveals similarities between them. For example, both companies control the entire hardware and software stacks of their HCI appliances. Also, both providers pretest firmware and software updates and automate cluster-wide roll-outs.

Nevertheless, important differences remain between the products. Due to the high level of interest in these products, DCIG published an initial comparison in November 2017. Both providers recently enhanced their offerings. Therefore, DCIG refreshed its research and has released an updated head-to-head comparison of the Dell EMC VxRail and Nutanix NX appliances.

Updated DCIG Pocket Analyst Report Reveals Key HCI Differentiators

In this updated report, DCIG identifies six ways the HCI solutions from these two providers currently differentiate themselves from one another. This succinct, 4-page report includes a detailed feature matrix as well as insight into key differentiators between these two HCI solutions such as:

  • Breadth of ecosystem
  • Enterprise applications certified
  • Multi-hypervisor flexibility
  • Scalability
  • Solution integration
  • Vendor maturity

DCIG is pleased to make this updated DCIG Pocket Analyst Report available for purchase for $99.95 via the TechTrove marketplace. The report is temporarily also available free of charge with registration from the Unitrends website.




VMware vSphere and Nutanix AHV Hypervisors: An Updated Head-to-Head Comparison

Many organizations view hyper-converged infrastructure appliances (HCIAs) as foundational for the cloud data center architecture of the future. However, as part of an HCIA solution, one must also select a hypervisor to run on this platform. The VMware vSphere and Nutanix AHV hypervisors are two capable choices but key differences exist between them.

In the last several years, HCIAs have emerged as a key enabler for cloud adoption. Aside from the connectivity to public and private clouds that these solutions often provide, they offer their own cloud-like properties. Ease of scaling, simplicity of management, and non-disruptive hardware upgrades and data migrations highlight the list of features that enterprises are coming to know and love about these solutions.

But as enterprises adopt HCIA solutions in general as well as HCIA solutions from providers like Nutanix, they must still evaluate key features in these solutions. One variable that enterprises should pay specific attention to is the hypervisors available to run on these HCIA solutions.

Unlike some other HCIA solutions, Nutanix gives organizations the flexibility to choose which hypervisor they want to run on their HCIA platform. They can choose to run the widely adopted VMware vSphere, or they can run Nutanix’s own Acropolis hypervisor (AHV).

What is not always so clear is which one they should host on the Nutanix platform. Each hypervisor has its own set of benefits and drawbacks. To help organizations make a more informed choice as to which hypervisor is the best one for their environment, DCIG is pleased to make available its updated DCIG Pocket Analyst Report that provides a head-to-head comparison of the VMware vSphere and Nutanix AHV hypervisors.

This succinct, 4-page report includes a detailed product matrix as well as insight into seven key differentiators between these two hypervisors and which one is best positioned to deliver on key cloud and data center considerations such as:

  • Data protection ecosystem
  • Support for Guest OSes
  • Support for VDI platforms
  • Certified enterprise applications
  • Fit with corporate direction
  • More favorable licensing model
  • Simpler management

This DCIG Pocket Analyst Report is available for purchase for $99.95 via the TechTrove marketplace. The report is temporarily also available free of charge with registration from the Unitrends website.




Dell EMC VxRail vs Nutanix NX: Six Key HCIA Differentiators

Many organizations view hyper-converged infrastructure appliances (HCIAs) as the data center architecture of the future. Dell EMC VxRail and Nutanix NX appliances are two leading options for creating the enterprise hybrid cloud. Visibility into their respective data protection ecosystems, enterprise application certifications, solution integration, support for multiple hypervisors, scalability and maturity should help organizations choose the most appropriate solution for them.

Hyper-converged infrastructure appliances (HCIA) radically simplify the next generation of data center architectures. Combining and virtualizing compute, memory, storage, networking, and data protection functions from a single vendor in a scale-out cluster, these pre-integrated appliances accelerate and simplify infrastructure deployment and management. As such, the stakes are high for vendors such as Dell EMC and Nutanix that are competing to own this critical piece of data center infrastructure real estate.

In the last several years, HCIAs have emerged as a key enabler for cloud adoption. These solutions provide connectivity to public and private clouds, and offer their own cloud-like properties. Ease of scaling, simplicity of management, and non-disruptive hardware upgrades and data migrations highlight the list of features that enterprises are coming to know and love about these solutions.

But as enterprises consider HCIA solutions from providers such as Dell EMC and Nutanix, they must still evaluate key features available on these solutions as well as the providers themselves. A cursory examination of these two vendors and their respective solutions quickly reveals similarities between them. For example, both companies control the entire hardware and software stacks of their respective HCIA solutions. Both pre-test firmware and software updates holistically and automate cluster-wide roll-outs.

Despite these similarities, differences between them remain. To help enterprises select the product that best fits their needs, DCIG published its first comparison of these products in November 2017. There is a high level of interest in these products, and both providers recently enhanced their offerings. Therefore, DCIG refreshed its research and has released an updated head-to-head comparison of the Dell EMC VxRail and Nutanix NX appliances.

In this updated report, DCIG identifies six ways the HCIA solutions from these two providers currently differentiate themselves from one another. This succinct, 4-page report includes a detailed product matrix as well as insight into key differentiators between these two HCIA solutions such as:

  • Breadth of ecosystem
  • Enterprise applications certified
  • Multi-hypervisor flexibility
  • Scalability
  • Solution integration
  • Vendor maturity

DCIG is pleased to make this updated DCIG Pocket Analyst Report available for purchase for $99.95 via the TechTrove marketplace.




Two Hot Technologies to Consider for Your 2019 Budgets

Hard to believe, but the first day of autumn is just two days away, and with fall weather always comes cooler temperatures (which I happen to enjoy!). This means people are staying inside a little more and doing those fun end-of-year activities that everyone enjoys, such as planning their 2019 budgets. As you do so, solutions from BackupAssist and StorMagic are two hot new technologies for companies to consider making room for in the New Year.

BackupAssist 365

BackupAssist 365 backs up files and emails stored in the cloud. While backup of cloud-based data may seem rather ho-hum in today’s artificial intelligence, blockchain, and digital transformation obsessed world, it solves a real-world problem that organizations of nearly every size face: how to cost-effectively and simply protect all those pesky files and emails that people store in cloud applications such as Dropbox, Office 365, Google Drive, OneDrive, Gmail, Outlook and others.

To do so, BackupAssist 365 adopted two innovative yet practical approaches to protect files and emails.

  • First, it interfaces directly with these various cloud providers to back up this data. Using your login permissions (which you provide when configuring the software), BackupAssist 365 accesses data directly in the cloud. This negates the need for your server, PC, or laptop to be turned on when these backups occur, so backups can run at any time.
  • Second, it does cloud-to-local backup (see the sketch below). In other words, rather than running up the data transfer and network costs that come with backing up to another cloud, it backs the data up to local storage at your site. While that may seem a little odd in today’s cloud-centric world, companies can get a great deal of storage capacity for nominal amounts of money. Since it only does an initial full backup and then differential backups thereafter, the ongoing data transfer costs are nominal and the amount of storage capacity that one should need onsite is equally small.
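
The sketch below is a minimal illustration of the cloud-to-local pattern just described: one initial full copy, then differentials of whatever changed. It is written under stated assumptions; the fetch_changed_files helper, the paths, and the marker file are hypothetical stand-ins, not BackupAssist 365’s actual API, and it assumes the cloud service can report items changed since a given timestamp.

    import os
    import time

    # Hypothetical cloud-to-local backup cycle: an initial full copy,
    # then differentials of anything that changed since the last run.
    # fetch_changed_files() stands in for whatever API a given cloud
    # service exposes; it is not BackupAssist 365's actual interface.

    LOCAL_TARGET = "/backups/cloud"          # inexpensive local storage
    LAST_RUN_MARKER = os.path.join(LOCAL_TARGET, ".last_run")

    def fetch_changed_files(provider, since):
        """Placeholder: yield (path, bytes) pairs changed since 'since'."""
        raise NotImplementedError("depends on the provider's API")

    def run_backup(provider):
        os.makedirs(LOCAL_TARGET, exist_ok=True)
        # A missing marker means no prior backup, so everything is copied
        # (full backup); otherwise only changed items are copied (differential).
        since = 0.0
        if os.path.exists(LAST_RUN_MARKER):
            with open(LAST_RUN_MARKER) as f:
                since = float(f.read())
        for path, data in fetch_changed_files(provider, since):
            dest = os.path.join(LOCAL_TARGET, provider, path.lstrip("/"))
            os.makedirs(os.path.dirname(dest), exist_ok=True)
            with open(dest, "wb") as f:
                f.write(data)
        with open(LAST_RUN_MARKER, "w") as f:
            f.write(str(time.time()))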

Perhaps the best part about BackupAssist 365 is its cost (or lack thereof). BackupAssist 365 licenses its software on a per-user basis, with each user email account counting as one user license. However, this one email account covers the backup of that user’s data in any cloud service used by that user. Further, the cost is only $1/month per user, with a decreasing cost for greater numbers of users. In fact, the cost is so low on a per-user basis that companies may not even need to budget for this service. They can just start using it, put it on a credit card, and expense it to keep it below corporate radar screens.

StorMagic SvSAN

The StorMagic SvSAN touches on two other hot technology trends that I purposefully (or not so purposefully) left out above: hyper-converged infrastructure (HCI) and edge computing. However, unlike many of the HCI and edge computing plays in the marketplace such as Cisco HyperFlex, Dell EMC VxRail, and Nutanix, StorMagic has not forgotten about the cost constraints that branch, remote, and small offices face.

As Cisco, Dell EMC, Nutanix and others chase the large enterprise data center opportunities, they often leave remote, branch, and small offices with two choices: pay up or find another solution. Many offices of this size are opting to find alternative solutions.

This is where StorMagic primarily plays. For a less well-known player, they play much bigger than they may first appear. Through partnerships with large providers such as Cisco and Lenovo among others, StorMagic comes to market with highly available, two-server systems that scale across dozens, hundreds, or even thousands of remote sites. To get a sense of StorMagic’s scalability, walk into any of the 2,000+ Home Depots in the United States or Mexico and ask to look at the computer system that hosts their compute and storage. If the Home Depot lets you and you can find it, you will find a StorMagic system running somewhere in the store.

The other big challenge that each StorMagic system addresses is security. Because their systems can be deployed almost anywhere in any environment, they are susceptible to theft. In fact, one of its representatives shared a story with me about someone who drove a forklift through the side of a building and stole a computer system at one of its customer sites. Not that it mattered. To counter these types of threats, StorMagic encrypts all the data on its HCI solutions with its own software that is FIPS 140-2 compliant.

Best of all, to get these capabilities, companies do not have to break the bank to acquire one of these systems. The list price for the Standard Edition of the SvSAN software, which includes 2TB of usable storage, high availability, and remote management, is $2,500.

As companies look ahead and plan their 2019 budgets, they need to take care of their operational requirements but they may also want to dip their toes in the water to get the latest and greatest technologies. These two technologies give companies the opportunities to do both. Using BackupAssist 365, companies can quickly and easily address their pesky cloud file and email backup challenges while StorMagic gives them the opportunity to affordably and safely explore the HCI and edge computing waters.




DCIG’s VMworld 2018 Best of Show

VMworld provides insight into some of the biggest tech trends occurring in the enterprise data center space and, once again, this year did not disappoint. But amid the literally dozens of vendors showing off their wares on the show floor, here are the four stories or products that caught my attention and earned my “VMworld 2018 Best of Show” recognition at this year’s event.

VMworld 2018 Best of Show #1: QoreStor Software-defined Secondary Storage

Software-defined dedupe in the form of QoreStor has arrived. A few years ago Dell Technologies sold off its Dell Software division, which included an assortment (actually a lot) of software products, and that business re-emerged as Quest Software. Since then, Quest Software has done nothing but innovate like bats out of hell. One of the products coming out of this lab of mad scientists is QoreStor, a stand-alone software-defined secondary storage solution. QoreStor installs on any server hardware platform and works with any backup software. QoreStor provides a free download that will deduplicate up to 1TB of data at no charge. While at least one other vendor (Dell Technologies) offers a deduplication product that is available as a virtual appliance, only Quest QoreStor gives organizations the flexibility to install it on any hardware platform.
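
For readers less familiar with what deduplication software actually does, here is a minimal, generic sketch of block-level deduplication: identical chunks of data are stored only once and referenced by their hash. It illustrates the general technique only and says nothing about how QoreStor itself is implemented; the chunk size and structures are assumptions for the example.

    import hashlib

    # Generic block-level deduplication sketch (not QoreStor's implementation):
    # split a stream into fixed-size chunks, store each unique chunk once,
    # and represent the stream as a list of chunk hashes.

    CHUNK_SIZE = 4096
    chunk_store = {}          # hash -> chunk bytes (the deduplicated pool)

    def dedupe(stream_bytes):
        """Return the list of chunk hashes that reconstructs stream_bytes."""
        recipe = []
        for i in range(0, len(stream_bytes), CHUNK_SIZE):
            chunk = stream_bytes[i:i + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            # Only new chunks consume storage; duplicates just add a reference.
            chunk_store.setdefault(digest, chunk)
            recipe.append(digest)
        return recipe

    def restore(recipe):
        """Reassemble the original bytes from the stored chunks."""
        return b"".join(chunk_store[d] for d in recipe)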

VMworld 2018 Best of Show #2: LoginVSI Load Testing

Stop guessing about the impact of VDI growth with LoginVSI. Right-sizing the hardware for a new VDI deployment from any VDI provider (Citrix, Microsoft, VMware, etc.) is hard enough. But keeping the hardware appropriately sized as more VDI instances are added, or as patches and upgrades are made to existing instances, can be dicey at best and downright impossible at worst.

LoginVSI helps take the guesswork out of any organization’s future VDI growth by:

  • examining the hardware configuration underlying the current VDI environment,
  • comparing that to the changes proposed for the environment, and then
  • making recommendations as to what changes, if any, need to be made to the environment to support
    • more VDI instances or
    • changes to existing VDI instances.

VMworld 2018 Best of Show #3: Runecast vSphere Environment Analyzer

Plug vSphere’s holes and keep it stable with Runecast. Runecast was formed by a group of former IBM engineers who over the years had become experts at two things while working at IBM: (1) installing and configuring VMware vSphere; and (2) pulling out their hair trying to keep each installation of VMware vSphere stable and up-to-date. To rectify this second issue, they left IBM and formed Runecast.

Runecast provides organizations with the software and tools they need to proactively:

  • identify needed vSphere patches and fixes,
  • identify security holes, and even
  • recommend best practices for tuning your vSphere deployments, all with minimal to no manual intervention.

VMworld 2018 Best of Show #4: VxRail Hyper-converged Infrastructure

Kudos to the Dell Technologies VxRail solution for saving every man, woman, and child in the United States $2/person in 2018. I was part of a breakfast conversation on Wednesday morning with one of the Dell Technologies VxRail product managers. He anecdotally shared with the analysts present that Dell Technologies recently completed two VxRail hyper-converged infrastructure deployments in two US government agencies. Each deployment is saving each agency $350 million annually. Collectively that amounts to $700 million, or about $2 for every person residing in the US. Thank you, Dell.
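
For what it is worth, the arithmetic behind that anecdote roughly checks out. Here is a quick back-of-the-envelope calculation, using an approximate 2018 U.S. population of 330 million (my own rough figure, not one from the conversation):

    # Back-of-the-envelope check of the VxRail savings anecdote.
    savings_per_agency = 350_000_000      # $350 million annually, per agency
    agencies = 2
    us_population = 330_000_000           # rough 2018 U.S. population

    total_savings = savings_per_agency * agencies       # $700 million
    per_person = total_savings / us_population
    print(f"${total_savings:,} total, about ${per_person:.2f} per person")
    # -> roughly $2 per person, matching the figure cited above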




Differentiating between High-end HCI and Standard HCI Architectures

Hyper-converged infrastructure (HCI) architectures have staked an early claim as the data center architecture of the future. The ability of all HCI solutions to easily scale capacity and performance, simplify ongoing management, and deliver cloud-like features has led many enterprises to view HCI as a logical path forward for building on-premises clouds.

But when organizations deploy HCI at scale (more than eight nodes in a single logical configuration), cracks in its architecture can begin to appear unless organizations carefully examine how well it scales. On the surface, high-end and standard hyper-converged architectures deliver the same benefits. They each virtualize compute, memory, storage networking, and data storage in a simple to deploy and manage scale-out architecture. They support standard hypervisor platforms. They provide their own data protection solutions in the form of snapshots and replication. In short, these two architectures mirror each other in many ways.

However, high-end and standard HCI solutions differ in functionality in ways that primarily surface in large virtualized deployments. In these environments, enterprises often need to:

  • Host thousands of VMs in a single, logical environment.
  • Provide guaranteed, sustainable performance for each VM by insulating them from one another.
  • Offer connectivity to multiple public cloud providers for archive and/or backup.
  • Seamlessly move VMs to and from the cloud.

It is in these circumstances that differences between the two architectures emerge, with the high-end HCI architecture possessing four key advantages over standard HCI architectures. Consider the following (a conceptual sketch follows the list):

  1. Data availability. Both high-end and standard HCI solutions distribute data across multiple nodes to ensure high levels of data availability. However, only high-end HCI architectures use separate, distinct compute and data nodes with the data nodes dedicated to guaranteeing high levels of data availability for VMs on all compute nodes. This ensures that should any compute node become unavailable, the data associated with any VM on it remains available.
  2. Flash/performance optimization. Both high-end and standard HCI architectures take steps to keep data local to the VM by storing the data of each VM on the same node on which the VM runs. However, solutions based on high-end HCI architectures store a copy of each VM’s data on the VM’s compute node as well as on the high-end HCI architecture’s underlying data nodes to improve and optimize flash performance. High-end HCI solutions such as the Datrium DVX also deduplicate and compress all data while limiting the amount of inter-nodal communication to free resources for data processing.
  3. Platform manageability/scalability. As a whole, HCI architectures are intuitive to size, manage, scale, and understand. In short, if an organization needs more performance and/or capacity, it only needs to add another node. However, enterprises often must support hundreds or even thousands of VMs that each must perform well and be protected. Trying to deliver on those requirements at scale using a standard HCIA solution where inter-nodal communication is a prerequisite becomes almost impossible to achieve. High-end HCI solutions facilitate granular deployments of compute and data nodes while limiting inter-nodal communication.
  4. VM insulation. Noisy neighbors—VMs that negatively impact the performance of adjacent VMs—remain a real problem in virtual environments. While both high-end and standard HCI architectures support storing data close to VMs, high-end HCI architectures do not rely on the availability of other compute nodes. This reduces inter-nodal communication and the probability that the noisy neighbor problem will surface.
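
To make the architectural contrast above more concrete, the following sketch models, in a deliberately simplified way, where copies of a VM's data live in each approach: in a standard HCI cluster, replicas land on peer compute nodes, so losing a compute node removes both a VM host and a data copy; in a high-end design, durable copies live on dedicated data nodes while the compute node holds only a local performance copy. This is a conceptual illustration of the architectures described in the report, not code from any vendor's product.

    # Conceptual data-placement sketch: standard vs. high-end HCI.
    # Not vendor code; it only illustrates where copies of a VM's data live.

    def standard_hci_placement(vm_node, compute_nodes, copies=2):
        """Standard HCI: replicas spread across peer compute nodes."""
        peers = [n for n in compute_nodes if n != vm_node]
        # The VM's node holds one copy; peers hold the remaining replicas,
        # so compute nodes also carry the cluster's data-availability burden.
        return [vm_node] + peers[:copies - 1]

    def high_end_hci_placement(vm_node, data_nodes, copies=2):
        """High-end HCI: durable copies live on dedicated data nodes only."""
        # The compute node keeps a local (e.g., flash) copy purely for speed;
        # losing that node never reduces the number of durable copies.
        return {"performance_copy": vm_node, "durable_copies": data_nodes[:copies]}

    compute = ["compute-1", "compute-2", "compute-3"]
    data = ["data-1", "data-2"]
    print(standard_hci_placement("compute-1", compute))
    print(high_end_hci_placement("compute-1", data))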

HCI architectures have resonated with organizations in large part because of their ability to deliver cloud-like features and functionality while maintaining their simplicity of deployment and ongoing maintenance. However, the next-generation high-end HCI architecture, with solutions available from providers like Datrium, gives organizations greater flexibility to deliver cloud-like functionality at scale, including better management and optimization of flash performance, improved data availability, and increased insulation between VM instances. To learn more about how standard and high-end HCI architectures compare, check out this recent DCIG pocket analyst report that is available on the TechTrove website.




Nutanix Backup Software: Make the Best Choice Between HYCU and Rubrik

Organizations of all sizes now look to hyper-converged infrastructure solutions such as the Nutanix Enterprise Cloud Platform to provide them with their next generation of data center IT infrastructure services. As they do, they need software optimized for protecting Nutanix environments. HYCU, Inc., and Rubrik are two early leaders in this space. Each possesses distinctive attributes that make one or the other better suited for providing data protection services, depending on the conditions in your environment.

Get the DCIG Pocket Analyst Report comparing these two products by following this link.

Hyper-converged infrastructure solutions such as the Nutanix Enterprise Cloud Platform stand poised to fundamentally change how enterprises manage their IT infrastructure. They simplify and automate long standing problems such as application availability, data migrations, and hardware refreshes as well as integration with leading public cloud providers. But this looming changeover in IT infrastructure still leaves organizations with the responsibility to protect the data hosted on these solutions. This is where products such as those from HYCU, Inc., and Rubrik come into play.

HYCU (pronounced “hī Q”) for Nutanix and Rubrik Cloud Data Management are two data protection software products that protect virtual machines (VMs) but also offer features optimized for the protection of Nutanix environments. HYCU and Rubrik Cloud Data Management share some similarities, as both support:

  • Application and file level restores for Windows applications and operating systems
  • Concurrent backups
  • Full recovery of VMs from backups
  • Multiple cloud providers for application recovery and/or long-term data retention
  • Protection of VMs on non-Nutanix platforms
  • Snapshots to perform incremental backups

Despite these similarities, differences between these two products remain. To help enterprises select the product that best fits their needs for protecting their Nutanix environment, DCIG’s newest Pocket Analyst Report identifies seven factors that differentiate these two products so enterprises can evaluate them and choose the most appropriate one for their environment. Some of these factors include:

  1. Depth of Nutanix integration
  2. Breadth of application support
  3. Breadth of public cloud support
  4. Vendor stability

This four-page DCIG Pocket Analyst Report contains analyst commentary about each of these features, identifies which product has strengths in each of these areas, and contains multiple pages of side-by-side feature comparisons to support these conclusions. Follow this link to download and access this newest DCIG Pocket Analyst Report that is available at no charge for a limited time.

 




Seven Key Differentiators between the Cohesity DataPlatform and Rubrik Cloud Data Management HCIA Solutions

Hyper-converged infrastructure architectures (HCIA) are foundational for the next generation of data centers. Key to realizing that vision is to implement HCIA solutions for both primary and secondary storage. The Cohesity DataPlatform and Rubrik Cloud Data Management solutions have emerged as the early leaders in this rapidly growing market segment. While these two products share many features in common, seven key points of differentiation still exist between them, as the latest DCIG Pocket Analyst Report reveals.

Rubrik and Cohesity are in an intense battle, with their respective hyper-converged infrastructure architectures (HCIAs) representing the next generation of cloud data management, data protection, and secondary storage, and with merit. Initially, HCIAs were viewed only in the context of consolidating the software and hardware running in production. However, these HCIA solutions show great promise for data protection and recovery.

Virtualizing compute, memory, storage networking, data storage, and data protection in a simple to deploy and manage scale-out architecture, these solutions solve the same problems as those HCIAs targeted at production environments. However, these solutions give organizations a clear path toward cost-effectively and simply implementing data protection, data recovery, and connectivity to the cloud among many other features. In the race to deliver on the promise of this approach, Cohesity and Rubrik have emerged as the early leaders in this rapidly growing market.

On the surface, Cohesity and Rubrik appear to share many features in common. Both entered the market with HCIA appliances before later rolling out software running on 3rd party hardware and virtual appliances capable of running onsite or in the cloud. The product feature sets and capabilities from the two companies often mirror each other, with rapid product release schedules and updates where what may appear to be a gap in one product’s lineup one quarter is filled the next.

Despite these similarities, differences between them remain. To help enterprises make the most appropriate choice between these two solutions, DCIG’s latest Pocket Analyst Report examines seven key features to consider when choosing between these two products, including:

  1. Breadth of hypervisor support
  2. Breadth of supported cloud providers
  3. Breadth of industry standard server hardware support
  4. Data protection and replication capabilities
  5. Flexibility of deduplication deployment options
  6. Proven scale-out capabilities
  7. vCenter backup monitoring and management

This four-page DCIG Pocket Analyst Report contains analyst commentary about each of these features, identifies which product has strengths in each of these areas, and contains 2+ pages of side-by-side feature comparisons to support these conclusions. Follow this link to register to access this newest DCIG Pocket Analyst Report at no charge for a limited time.

 




Nuancing the Management of HCI Deployments

When one examines enterprise data protection and data storage products through the lens of hyper-converged infrastructure (HCI) designs, one would think each product either supports an HCI architecture or it does not. But as one begins to see when one scrutinizes this topic, the answer is not a simple “Yes” or “No”. To nuance how well, or whether, a product fits into an HCI design, one first needs to think about the question, or even the series of questions, that he or she should ask to properly make this assessment.

As DCIG has begun to re-position its analysis to look at the emerging and existing set of data storage and data protection products in the context of HCI, one can quickly see many of them introducing or coming to market with HCI architectures. Consider:

  • Cohesity and Rubrik have come to market with data protection solutions built upon hyper-converged architectures
  • Deduplication backup appliance provider ExaGrid has been using the phrase “hyper-converged” more frequently when referencing the architecture of its product
  • Commvault recently announced ScaleProtect that, delivered along with Cisco’s HyperScale Software, provides an HCI data management and protection software solution
  • This year Comtrade Software introduced its HYCU software that specifically targets the protection of Nutanix Acropolis Hypervisor (AHV) environments running on Nutanix HCI platforms
  • Pivot3 acquired NexGen Storage in 2016 and has since incorporated NexGen’s Quality of Service technology into Pivot3’s HCI Acuity platform
  • NetApp, a long-time enterprise storage player, in mid-2017 announced its own HCI platform aptly named HCI

Looking at these multiple announcements from long-standing players in the enterprise data protection and storage spaces who are adopting hyper-converged solutions, as well as how many of the new entrants into the market are delivering their solutions on HCI architectures, one would think checking the HCI check box would be straightforward.

In one respect, it is. Many of the emerging and existing data protection and data storage solutions can now run on HCI platforms.  You will get no argument from me on that point.

However, the bigger question is, does a product running on its own HCI platform really solve the bigger management problem that enterprises should ultimately seek to address? Or do all these latest and greatest product iterations just create another new rat’s nest of management complexity that enterprises need to deal with?

When I look at what HCI platforms should deliver in a perfect world, organizations should need only one solution that spans on-premises, off-premises, and cloud environments and gives them the flexibility to:

  • Run applications and/or VMs where they are best suited to run
  • Provide the analytics needed to optimize data placement and performance and to quickly troubleshoot issues
  • Minimize or eliminate downtime associated with upgrades and patches
  • Manage data holistically across the environment

Granted, no such environment existed in any organization for which I have ever worked in either the public or private sector. However, it would be awesome if such a solution existed that could deliver on this ideal. In short, one could put all these products together to create a cohesive, single architecture that organizations could use to build their underlying IT infrastructure for tomorrow’s data centers.

Here’s the problem that emerges with HCI solutions as they stand today. If one selects an HPE SimpliVity, Nutanix, VMware VxRail, Pivot3, Scale Computing, or any other HCI platform as their primary production HCI platform, it is unlikely – and I would even say improbable – that any of these new secondary HCI solutions will integrate into one of these primary HCI platforms.

Granted, these other solutions may provide primary or secondary storage for the primary HCI environment or protect the applications and/or data residing on them. However, can they plug into and/or be managed as part of the broader HCI enterprise environment that enterprises want and need? Or do any of these other solutions once again just become another island that organizations must manage outside of their primary HCI platform?

To help mitigate these management pitfalls that can result from using multiple HCI-based solutions, alternative solutions such as those from Datrium have already emerged. Like other HCI platforms intended for use in primary, production environments, Datrium’s software offers:

  • Archiving
  • Disaster recovery workflows
  • Data encryption
  • Flexibility to run on inexpensive x86 servers
  • High availability
  • Snapshot and replication capabilities for VMs
  • Support for leading hypervisors such as VMware vSphere and Linux KVM

Datrium also gives organizations the option to create two types of nodes: Compute and Data Nodes. Compute nodes provide the high levels of performance that production applications require while its Data Nodes provide cost-effective options to optimize storage capacity housed in server-size footprints.

The wave of adoption for hyper-converged platforms is just getting started and much yet needs to be sorted out in terms of how well these HCI platforms interact with one another. However, any organization that adopts a primary HCI platform and then expects any of its data storage and/or data protection products running on adjacent HCI platforms to seamlessly plug into their primary one may be in for a rude awakening. Data storage and data protection products running on secondary HCI platforms do individually become easier to deploy, manage, and upgrade. However, unless each of the HCI solutions connects and runs as one cohesive solution, they still create their own silos of data that organizations must manage apart from their primary HCI platform.

Note: This blog entry was updated on January 2, 2018.




2017 Reflects the Tipping Point in IT Infrastructure Design and Protection

At the end of the year people naturally reflect on the events of the past year and look forward to the new. I am no different. As I reflect on the past year and look ahead at how IT infrastructures within organizations have changed and will change, 2017 has been as transformative as any year in the past decade, if not the past 50 years. While that may sound presumptuous, 2017 seems to be the year that reflects the tipping point in how organizations will build out and protect their infrastructures going forward.

Over the last few years technologies have been coming to market that challenge two long standing assumptions regarding the build out of IT infrastructures and the protection of the data stored in that infrastructure.

  1. The IT infrastructure stack consists of a server with its own CPU, memory, networking, and storage stack (or derivations thereof) to support it
  2. The best means of protecting data stored in that stack is done at the file level

Over the last two decades, organizations of all sizes have been grappling with how best to accommodate and manage the introduction of applications into their environment that automate everything. They have been particularly stressed on the IT infrastructure side with each application needing its own supporting server stack. While managing one or even a few (less than 5) applications may be adequately achieved using the original physical server stack, more than that starts to break the stack and create new inefficiencies.

These inefficiencies gave rise to virtualization at the server, networking, and storage levels, which helped to somewhat alleviate these inefficiencies. However, at the end of the day, one still had multiple physical servers, storage arrays, and networking switches that now hosted virtual servers, storage arrays, and fabrics. This virtualization solved some problems but created its own set of complexities that made managing these virtualized infrastructures even worse if one did not proactively put in place frameworks to automate the management of these virtualized infrastructures.

Further aggravating this situation, organizations also needed to protect the data residing on this IT infrastructure. In protecting it, one of the underlying assumptions made by both providers of data protection software and those who acquired it was that data was best protected at the file level. While this premise largely worked well when applications resided on physical servers, it begins to break down in virtualized environments and almost completely falls apart in virtualized environments with hundreds or thousands of virtual machines (VMs).

These inefficiencies associated with very large (and even not so large) virtualized environments have resulted in the following two trends coming to the forefront and transforming how organizations manage their IT infrastructures going forward.

  1. Hyper-converged infrastructures will become the predominant way that organizations will deploy, host, and manage applications going forward
  2. Data protection will predominantly occur at the volume level as opposed to the file level

I call out hyper-converged infrastructures as this architecture provides organizations the means to successfully manage and scale their IT infrastructure. It does so with minimal to no compromise on any of the features that organizations want their IT infrastructure to provide: affordability, availability, manageability, reliability, scalability, or any of the other abilities I mentioned in my blog entry from last week.

The same holds true with protecting applications at the volume level. By primarily creating copies of data at the volume level (aka virtual machine level) instead of the file level, organizations get the level of recoverability that they need with the ease and speed at which they need it.
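
A small sketch may help make the file-level versus volume-level distinction concrete: file-level protection walks and copies every file a guest exposes, while volume-level (VM-level) protection captures the virtual disk as a single object, typically via a snapshot, and then tracks changed blocks. The helper names below are illustrative only, not any particular backup product's API.

    # Illustrative contrast between file-level and volume-level protection.
    # The helper callbacks are hypothetical, not a specific product's API.

    import os

    def file_level_backup(mount_point, copy_file):
        """Walk the guest file system and copy each file individually."""
        for root, _dirs, files in os.walk(mount_point):
            for name in files:
                copy_file(os.path.join(root, name))   # one operation per file

    def volume_level_backup(vm, snapshot_volume, copy_changed_blocks):
        """Snapshot the VM's virtual disk and copy only changed blocks."""
        snap = snapshot_volume(vm)            # one consistent point in time
        copy_changed_blocks(snap)             # block-level, not per-file
        return snap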

I call out 2017 as a tipping point in the deployment of IT infrastructures in large part because the combination of hyper-converged infrastructures and the protection of data at the volume level enables the IT infrastructure to finally get out of the way of organizations easily and quickly deploying more applications. Too often organizations hit a wall of sorts that precluded them from adopting new applications as quickly, easily, and cost-effectively as they wanted because the existing IT infrastructures only scaled up to a point. Thanks to the availability and broad acceptance of hyper-converged infrastructures and volume level data protection, it appears the internal IT infrastructure wall that prevented the rapid adoption of new technologies has finally fallen.




Dell EMC VxRail vs Nutanix NX: Eight Key HCIA Differentiators

Hyper-converged infrastructure appliances (HCIA) radically simplify the next generation of data center architectures. Combining and/or virtualizing compute, memory, storage, networking, and data protection functions from a single vendor in a scale-out cluster, these pre-integrated appliances accelerate and simplify infrastructure deployments and management. As such, the stakes are high for Dell EMC and Nutanix who are competing to own this critical piece of data center infrastructure real estate.

In the last couple of years, HCIAs have emerged as a key enabler for cloud adoption. Aside from the connectivity to public and private clouds that these solutions often provide, they offer their own cloud-like properties. Ease of scaling, simplicity of management, and non-disruptive hardware upgrades and data migrations highlight the list of features that enterprises are coming to know and love about these solutions.

But as enterprises adopt HCIA solutions from providers such as Dell EMC and Nutanix, they must still evaluate key features available on these solutions as well as the providers themselves. A cursory examination of these two vendors and their respective solutions quickly reveals similarities between them.

Both companies control the entire hardware and software stacks of their respective HCIA solutions as they pre-test firmware and software updates holistically and automate cluster-wide roll-outs. Despite these similarities, differences between them remain. To help enterprises select the product that best fits their needs, DCIG has identified eight ways the HCIA solutions from these two companies currently differentiate themselves from one another.

DCIG is pleased to make a recent DCIG Pocket Analyst Report that compares these two HCIA products available for a complimentary download. This succinct, 4-page report includes a detailed product matrix as well as insight into eight key differentiators between these two HCIA solutions and which one is best positioned to deliver on key data center considerations such as:

  1. Breadth of ecosystem
  2. Data center storage integration
  3. Enterprise applications certified
  4. Licensing
  5. Multi-hypervisor flexibility
  6. Scaling options
  7. Solution integration
  8. Vendor stability

To access and download this report at no charge for a limited time, simply follow this link and complete the registration form on the landing page.




VMware vSphere and Nutanix AHV Hypervisors: A Head-to-Head Comparison

Many organizations view hyper-converged infrastructure appliances (HCIAs) as foundational for the cloud data center architecture of the future. However, as part of an HCIA solution, one must also select a hypervisor to run on this platform. The VMware vSphere and Nutanix AHV hypervisors are two common choices but key differences between them persist.

In the last couple of years, HCIAs have emerged as a key enabler for cloud adoption. Aside from the connectivity to public and private clouds that these solutions often provide, they offer their own cloud-like properties. Ease of scaling, simplicity of management, and non-disruptive hardware upgrades and data migrations highlight the list of features that enterprises are coming to know and love about these solutions.

But as enterprises adopt HCIA solutions in general as well as HCIA solutions from providers like Nutanix, they must still evaluate key features in these solutions. One variable that enterprises should pay specific attention to is the hypervisors available to run on these HCIA solutions.

Unlike some other HCIA solutions, Nutanix gives organizations the flexibility to choose which hypervisor they want to run on their HCIA platform. They can choose to run the widely adopted VMware vSphere, or they can run Nutanix’s own Acropolis hypervisor (AHV).

What is not always so clear is which one they should host on the Nutanix platform as each hypervisor has its own set of benefits and drawbacks. To help organizations make a more informed choice as to which hypervisor is the best one for their environment, DCIG is pleased to make its most recent DCIG Pocket Analyst Report that does a head-to-head comparison between the VMware vSphere and Nutanix AHV hypervisors available for a complimentary download.

This succinct, 4-page report includes a detailed product matrix as well as insight into seven key differentiators between these two hypervisors and which one is best positioned to deliver on key cloud and data center considerations such as:

  1. Breadth of partner ecosystem
  2. Enterprise application certification
  3. Guest OS support
  4. HCIA management capabilities
  5. Overall corporate direction
  6. Software licensing options
  7. Virtual desktop infrastructure (VDI) support

To access and download this report at no charge for a limited time, simply follow this link and complete the registration form on the landing page.




Full Potential of Disk-based Backup Finally Becoming a Reality with Cohesity DataPlatform 4.0

Organizations have come to the realization that using disk as a backup storage target does more than simply solve backup problems. It creates entirely new possibilities for recovery. But as they recognize these new opportunities, they also see the need for backup solutions that offer them new options for application availability and recoverability backed by ease of management. The latest DataPlatform 4.0 release from Cohesity moves organizations closer to this ideal.

Using tape as a primary backup target is largely dead, but the best practices, technologies, and possibilities to capitalize on using disk as a backup target and as a source for recoveries are still emerging. For instance, secondary storage solutions that only offer “scale-up” architectures create management problems. Additionally, organizations want to do more with their long-neglected second or third copies of data, such as using these secondary storage solutions to host applications or VMs for the purposes of recovery.

Cohesity’s latest DataPlatform 4.0 release illustrates the potential of what the current generation of secondary storage targets can do for organizations to improve their ability to recover while simultaneously making it easier for them to manage and scale their infrastructure.


Consider:

  • Integration with the Pure Storage FlashArray//M series. Making snapshots of applications and/or virtual machines (VMs) on your Pure Storage production arrays is a great approach to data protection and instant recovery until one starts to run out of capacity on these arrays. Aggravating this situation, flash costs money. Through its recently announced integration with Pure Storage, organizations can seamlessly move snapshots via SAN or NAS protocols from Pure Storage FlashArray//M arrays to the Cohesity DataPlatform. This frees up available capacity on Pure Storage arrays while making it possible for organizations to retain snapshots for longer periods of time.
  • More usable capacity using the same amount of raw capacity. Everyone ideally wants something for nothing and Cohesity’s latest 4.0 DataPlatform release delivers on this ideal. Previously, it mirrored data between disk drives for data redundancy. Using its new erasure coding technology, organizations can achieve 40% or more storage efficiency when compared to its previous generation product (a rough example of the math follows this list). Further, organizations can achieve this increase in storage capacity by installing this latest software release on their existing platforms.
  • New options for remote and branch office locations. Remote and branch offices are not going away anytime soon, yet organizations do not have any more time to manage and protect them. To provide them with higher levels of protection while reducing the time required to manage them, Cohesity introduced its smaller C2100 appliance as well as rolled out a Virtual Edition of its software. The Virtual Edition can be used on traditional backup servers to support current backup and recovery operations, or it can even operate in the cloud, where it can serve as a backup target.
  • Your choice of cloud providers. The Cohesity Virtual Edition can operate with multiple cloud providers, including Microsoft Azure and Amazon. In this way, organizations can extend their Cohesity deployment into the cloud to provide instant backup and recovery to ensure uninterrupted operations.
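
As promised in the second bullet above, here is a rough illustration of where an efficiency gain on the order of 40% can come from when moving from mirroring to erasure coding. The 4+2 layout is an assumption used only to make the arithmetic concrete, not a statement of Cohesity's actual configuration.

    # Rough arithmetic: mirroring vs. an assumed 4+2 erasure-coding layout.
    raw_tb = 100                        # raw capacity in the cluster (TB)

    mirrored_usable = raw_tb / 2        # two full copies -> 50 TB usable
    data_frag, parity_frag = 4, 2       # assumed 4 data + 2 parity fragments
    ec_usable = raw_tb * data_frag / (data_frag + parity_frag)   # ~66.7 TB usable

    gain = (ec_usable - mirrored_usable) / mirrored_usable
    print(f"usable: {mirrored_usable:.1f} TB -> {ec_usable:.1f} TB "
          f"({gain:.0%} more from the same raw capacity)")
    # -> roughly 33% more in this layout; wider layouts (more data fragments
    #    per parity fragment) push the gain to 40% or beyond.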

Organizations are now quite acquainted with using disk as a backup target but many still find themselves on the outside looking in when it comes to realizing disk’s full potential as a backup target… such as offering fast, simple recoveries that they can deliver at an enterprise scale. The Cohesity DataPlatform 4.0 changes that perspective.  Cohesity’s use of hyperconverged technology as part of a secondary storage offering solves the key pain points that organizations have for quickly recovering either locally or in the cloud while simultaneously making their backups easier to manage.




Comtrade Software HYCU Serves as a Bellwether for Accelerated Adoption of Hyperconverged Platforms

In today’s business world where new technologies constantly come to market, there are signs that indicate when certain ones are gaining broader market adoption and are ready to go mainstream. Such an event occurred this month when a backup solution purpose-built for Nutanix was announced.

This product minimizes the need for users of Nutanix’s hyperconverged infrastructure solution to parse through multiple products to find the right backup solution for them. Now they can turn to Comtrade Software’s HYCU software confident that they will get a backup solution purpose-built to protect VMs and applications residing on the Nutanix Acropolis hyperconverged infrastructure platform.

In the history of every new platform that comes to market, certain tipping points occur that validate and accelerate its adoption. One such event is the availability of other products built specifically to run on that platform that make it more practical and/or easier for users of that platform to derive more value from it. Such an event for the Nutanix Acropolis platform occurred this month when Comtrade Software brought to market its HYCU backup software, which is specifically designed to protect VMs and applications running on the Nutanix Acropolis hyperconverged platform.

The availability of this purpose-built data protection solution from Comtrade Software for Nutanix is significant in three ways.

  • It signifies that the number of companies adopting hyperconverged infrastructure solutions has reached critical mass in the marketplace and that this technology is poised for larger growth.
  • It would suggest that current backup solutions do not deliver the breadth of functionality that administrators of hyperconverged infrastructure solutions need; that they cost too much; that they are too complicated to use; or some combination of all three.
  • It indirectly validates that Nutanix is the market leader in providing hyperconverged infrastructure solutions as Comtrade placed its bets on first bringing a solution to market that addresses the specific backup and recovery challenges that Nutanix users face.

Considering that Comtrade Software’s HYCU is just out of the gate, it offers a significant amount of functionality that makes it a compelling data protection solution for any Nutanix deployment. One of Comtrade’s design goals was to make it as simple as possible to deploy and manage backup over time in Nutanix environments. While this is typically the goal of every product that comes to market, Comtrade Software’s HYCU stands apart with its ability to detect the application running inside of each VM.

One of the challenges that administrators routinely face is the inability to easily discern what applications run inside a VM without first tracking down the owner of that VM and/or the application owner to obtain that information. In the demo I saw of HYCU, it mitigates the need to chase down these individuals as it can look inside of a VM to identify which application and operating system it hosts. Once it has this information, the most appropriate backup policies for that VM may be assigned.
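
To illustrate the general idea only, application-aware protection boils down to mapping whatever is detected inside a VM to a suitable policy. Everything below is a hypothetical sketch; the function names, policy fields, and application list are my own illustrations, not HYCU's actual interfaces or defaults.

    # Hypothetical illustration of application-aware policy assignment.
    # detect_guest_application() stands in for whatever introspection a
    # product like HYCU performs; none of these names are its real API.

    DEFAULT_POLICY = {"schedule": "daily", "retention_days": 14}

    POLICY_BY_APP = {
        "SQL Server": {"schedule": "hourly", "retention_days": 35, "app_consistent": True},
        "Exchange": {"schedule": "every 4h", "retention_days": 30, "app_consistent": True},
        "Active Directory": {"schedule": "daily", "retention_days": 60, "app_consistent": True},
    }

    def detect_guest_application(vm):
        """Placeholder for looking inside the VM to see what it hosts."""
        raise NotImplementedError

    def assign_policy(vm):
        """Pick a backup policy based on the application detected in the VM."""
        app = detect_guest_application(vm)
        return POLICY_BY_APP.get(app, DEFAULT_POLICY)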

Equally notable about the Comtrade Software HYCU product is its management interface. Rather than requiring administrators to learn a new management interface to perform backups and recoveries, it presents a management interface that closely, if not exactly, replicates the one used by Nutanix.

Every platform that attains broad acceptance in the marketplace reaches a point where partners come alongside it and begin to offer solutions either built upon it or that do a better job of performing certain tasks such as data protection and recovery. Comtrade Software offering its HYCU data protection software serves as a bellwether for where Nutanix sits in the hyperconverged marketplace. By coming to market when it has, Comtrade Software positions its HYCU offering as the front runner in this emerging space, as it currently has no competitors that offer purpose-built backup software for Nutanix hyperconverged infrastructure deployments.




An Alternative Approach for Anytime, Anyplace, Anywhere Insight into IT Infrastructure Solutions

The Internet has eliminated any excuses for not having access to the information that individuals need to make informed buying decisions about products and/or services. However, providing easy, ready insight that quickly and easily compares IT infrastructure solutions… well, let’s just say Google does not address that challenge. Using DCIG and its Competitive Intelligence Suite, organizations get the tools and services they need to first aggregate research on IT infrastructure solutions and then quickly and easily generate reports that compare product features and services.

DCIG regularly speaks with individuals in organizations that research IT infrastructure solutions. For instance, vendors want to know which features IT products do and do not support. Likewise, companies, prior to buying IT products or services, may independently research them to determine which features they offer and which one best aligns with their needs.

Many assume this research is a straightforward process, but the devil is in the details. The reality is that the processes employed to do the research, and even the resulting research itself, are often deeply flawed because the research is the result of ad hoc data collection processes. Even assuming one follows a stringent data collection process, the methodologies available for distributing the data to those who need it are problematic at best. Consider:

  1. Prior to starting the product research, one must ask the right questions. Developing a list of questions to evaluate a product or service can take hours, days, or even weeks depending on the complexity of the environment into which the product is being deployed. Further, one must take the time to carefully word each question to obtain the desired response.
  2. Create answers to the questions. While you may construct all questions with short, fill-in-the-blank answers, it is better to create questions with “yes-no” or “multiple-choice” responses. These questions create objective evaluation criteria, remove subjectivity, and expedite product evaluations. However, this process can again take hours, days, or weeks to complete.
  3. Do the product research. Once the survey is complete, companies move to the research stage which they tackle in one or more ways. They may send out surveys for vendors to complete. They may do the research in house. They may hire consultants. They may perform each of these tasks. Again, hours, days, or weeks may go by while research is done.
  4. Aggregating, analyzing, and distributing the data. Organizations often create and store this research in Microsoft Excel or Word documents and then distribute the research via email. These tools create data silos with the research inaccessible to those in the organization who most need it.

Perhaps the worst part of this process is that organizations rarely have the time to thoroughly complete any stage of the research. Even should they do so, few if any organizations have a mechanism to distribute the research and make it accessible to those who need it. As a result, they rely upon inaccurate and/or incomplete data when making complex decisions or, if they do go through this entire research and analysis process to make the “best” decision, their window to act on the opportunity may have passed.

DCIG, using its Competitive Intelligence Suite and accompanying services, changes this process. DCIG expedites and simplifies the survey creation, the research, and the distribution of the data. It does so in the following ways.

  1. DCIG offers pre-existing research for IT infrastructure products that companies commonly evaluate and buy. This includes cloud data protection, all-flash arrays, and hyperconverged infrastructure, among others.
  2. Organizations can leverage DCIG’s research and augment it with data on IT infrastructure solutions that they have collected.
  3. The data is put in a central database and made available via a web browser which only individuals in their organization can access anytime and anywhere. Then, using DCIG’s visualization tool, they can simply create multiple views into the data that compare product features.

In today’s world, organizations want instant access to information that they want and need to make better decisions. While the Internet certainly grants organizations “instant” access to information, the abilities to easily compare, analyze, and distribute that information do not come along for the ride.

DCIG and its Competitive Intelligence Suite provide organizations with the tools and services they need to aggregate data and create meaningful reports that compare features on IT infrastructure solutions. By providing preexisting questions, answers, and research on many technology topics, DCIG gives organizations both the means to accelerate their research and the flexibility for anyone in the organization to access the data and generate the reports they need anywhere at any time.




DCIG 2017-18 Hyperconverged Infrastructure Appliance Buyer’s Guide Now Available

DCIG is pleased to announce the availability of the DCIG 2017-18 Hyperconverged Infrastructure Appliance Buyer’s Guide developed from the converged infrastructure body of research.

The DCIG 2017-18 Hyperconverged Infrastructure Appliance Buyer’s Guide weights, scores, and ranks more than 100 features of twenty-four (24) products from five (5) vendors. Using ranking categories of Recommended and Excellent, this Buyer’s Guide offers much of the information an organization needs to make a highly informed decision as to which hyperconverged appliance will best suit its needs.
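DCIG does not publish the exact formula behind its weighting and scoring, but the general technique of weighting, scoring, and ranking features can be sketched as follows. The feature names, weights, scores, and ranking thresholds below are purely illustrative assumptions, not DCIG’s actual methodology.

```python
# Illustrative-only sketch of weighting and scoring product features.
# Feature names, weights, scores, and ranking thresholds are assumptions,
# not DCIG's actual methodology.

WEIGHTS = {"data_protection": 3.0, "scalability": 2.0, "management": 1.5}

def weighted_score(feature_scores, weights=WEIGHTS):
    """Sum each raw feature score multiplied by its weight."""
    return sum(weights[f] * s for f, s in feature_scores.items() if f in weights)

def rank(score, thresholds=((90, "Recommended"), (70, "Excellent"))):
    """Map a numeric score onto a ranking category."""
    for floor, label in thresholds:
        if score >= floor:
            return label
    return "Not ranked"

product = {"data_protection": 20, "scalability": 15, "management": 10}
score = weighted_score(product)   # 3.0*20 + 2.0*15 + 1.5*10 = 105.0
print(score, rank(score))         # 105.0 Recommended
```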

Each appliance included in the DCIG 2017-18 Hyperconverged Infrastructure Appliance Buyer’s Guide had to meet the following criteria:

  • Must be available (orderable) as a single SKU and include its own hardware and software
  • Must be marketed as a hyperconverged appliance
  • Must support at least one hypervisor (Xen, Hyper-V, VMware, KVM, etc.)
  • Must provide compute and storage in the same infrastructure solution (i.e. the appliance can host multiple virtual machines and use local direct attached storage as the storage layer)
  • Must not require an external storage appliance (i.e. SAN/NAS)
  • Must cluster nodes together
  • Must support a centralized management and reporting structure
  • Must provide data protection features
  • There must be sufficient information available to DCIG to make meaningful decisions. DCIG makes a good faith effort to reach out and obtain information from as many providers as possible. However, products may be excluded because of a lack of sufficient reliable data.
  • Must be formally announced and/or generally available for purchase as of April 28, 2017.

DCIG’s succinct analysis provides insight into the state of the hyperconverged appliance marketplace. The Buyer’s Guide identifies the specific benefits organizations can expect to achieve using a hyperconverged appliance and the key features organizations should be aware of as they evaluate products. It also provides brief observations about the distinctive features of each product. Ranking tables enable organizations to get an “at-a-glance” overview of the products, while DCIG’s standardized one-page data sheets facilitate side-by-side comparisons, helping organizations quickly create a short list of products that may meet their requirements.

End users registering to access this report via the DCIG Competitive Intelligence Portal also gain access to the DCIG Interactive Buyer’s Guide (IBG). The IBG enables organizations to take the next step in the product selection process by generating custom reports, including comprehensive side-by-side feature comparisons of the products in which the organization is most interested.




Data Visualization, Recovery, and Simplicity of Management Emerging as Differentiating Features on Integrated Backup Appliances

Enterprises now demand higher levels of automation, integration, simplicity, and scalability from every component deployed into their IT infrastructure, and the integrated backup appliances found in DCIG’s forthcoming Buyer’s Guide Editions are a clear output of those expectations. Intended for organizations that want to protect applications and data and keep them behind corporate firewalls, these backup appliances come fully equipped, from both hardware and software perspectives, to do so.

Once largely assembled and configured by either IT staff or value-added resellers (VARs), integrated backup appliances have gone mainstream and are available for use in almost any size organization. By bundling together both hardware and software, large enterprises get the turnkey backup appliance solution that just a few years ago was primarily reserved for smaller organizations. In so doing, large enterprises can eliminate the days, weeks, or even months they previously spent configuring and deploying these solutions in their infrastructure.

The demand for backup appliances at all levels of the enterprise is made plain by the providers who bring them to market. Once the domain of providers such as STORServer and Unitrends, the backup appliance market now includes “software only” companies such as Commvault and Veritas, both of which have responded to the demand for turnkey solutions by offering backup appliances under their respective brand names.

Commvault Backup Appliance

Veritas NetBackup Appliance

In so doing, any size organization may get one of the most feature-rich enterprise backup software solutions on the market, whether it is IBM Tivoli Storage Manager (STORServer), Commvault (Commvault and STORServer), Unitrends, or Veritas NetBackup, delivered as a backup appliance. Yet while traditional all-software providers have entered the backup appliance market, behind the scenes new business demands are driving further changes in backup appliances that organizations should consider as they contemplate future acquisitions.

  • First, organizations expect successful recoveries. A few years ago, the concept of all backup jobs completing successfully was enough to keep everyone happy and giving high-fives to one another. No more. Organizations now recognize that they have reliable backups residing on a backup appliance and that these appliances may largely sit idle during off-backup hours. This gives the enterprise some freedom to do more with these backup appliances during these periods, such as testing recoveries, recovering applications on the appliance itself, or even presenting these backup copies of data to other applications to use as sources for internal testing and development. DCIG found that a large number of backup appliances support one or more vCenter Instant Recovery features, and the emerging crop of backup appliances can also host virtual machines and recover applications on them.
  • Second, organizations want greater visibility into their data to justify business decisions. The amount of data residing in enterprise backup repositories is staggering. Yet the lack of value that organizations derive from that stored data, combined with the potential risk of retaining it, is equally staggering. Features that provide greater visibility into the metadata of these backups, analyze it, and help turn it into measurable value for the business are already starting to find their way onto these appliances. Expect these features to become more prevalent in the years to come.
  • Third, enterprises want backup appliances to expand their value proposition. Backup appliances are already easy to deploy, but maintaining and upgrading them or deploying them for other use cases gets more complicated over time. Emerging providers such as Cohesity, which is making its first appearance in DCIG Buyer’s Guides as an integrated backup appliance, directly address these concerns. Available as a scale-out backup appliance that can function as a hybrid cloud backup appliance, a deduplicating backup appliance, and/or an integrated backup appliance, it provides an example of how an enterprise can more easily scale and maintain a backup appliance over time while retaining the flexibility to use it internally in multiple different ways.

The forthcoming DCIG 2016-17 Integrated Backup Appliance Buyer’s Guide Editions highlight the most robust and feature-rich integrated backup appliances available on the market today. As such, organizations should consider the backup appliances covered in these Buyer’s Guides as having many of the features they need to protect both their physical and virtual environments. Further, a number of these appliances give them early access to the set of features that will position them to meet their next set of recovery challenges, satisfy rising expectations for visibility into their corporate data, and simplify their ongoing management so they may derive additional value from these appliances.




SaaS Provider Decides to Roll Out Cohesity for Backup and DR; Interview with System Architect, Fidel Michieli, Part 2

Evaluating product features, comparing prices, and doing proofs of concept are important steps in the process of adopting almost any new product. But once one completes those steps, the time arrives to roll the product out and implement it. In this second installment of my interview series with System Architect Fidel Michieli, he shares how his company gained a comfort level with Cohesity for backup and disaster recovery (DR) and how broadly it decided to deploy the product in its primary and secondary data centers.

Jerome: How did you come to gain a comfort level for introducing Cohesity into your production environment?

Fidel: We first did a proof of concept (POC). We liked what we saw about Cohesity, but we had a set of target criteria based on the tests we had previously run using our existing backup software and the virtual machine backup software. As such, we had a matrix of what numbers were good and what numbers were bad. Cohesity’s numbers just blew them out of the water.

Jerome:  How much faster was Cohesity than the other solutions you had tested?

Fidel: Probably 250 percent or more. Cohesity does a metadata snapshot where it essentially uses VMware’s technology, but the way that it ingests the data and the amount of compute that it has available to do the backups creates the difference, if that makes sense. We really liked the performance for both backups and restores.

We had two requirements. On the Exchange side we needed to do granular message restores. Cohesity was able to help us achieve that objective by using an external tool that it licensed and which works. Our second objective was to get out of the tape business. We wanted to go to cloud. Unfortunately for us we are constrained to a single vendor. So we needed to work with that vendor.

Jerome: You mean single cloud vendor?

Fidel: Well it’s a tape vendor, Iron Mountain. We are constrained to them by contract. If we were going to shift to the cloud, it had to be to Iron Mountain’s cloud. But Cohesity, during the POC level, got the data to Iron Mountain.

Jerome: How many VMs?

Fidel: We probably have around 1,400 in our main data center and about 120 hosts. We have a two-site disaster recovery (DR) strategy with a primary and a backup. Obviously it was important to have replication for DR. That was part of the plan before the 3-2-1 rule of backup. We wanted to cover that.

Jerome: So you have Cohesity at both your production and DR sites replicating between them?

Fidel: Correct.

Jerome: How many Cohesity nodes at each site?

Fidel: We have 8 and 8 at both sites. After the POC we started to recognize a lot of the efficiencies from a management perspective. We knew that object storage was the way we wanted to go, the obvious reason being the metadata.

What the metadata means to us is that we can have a lot of efficiencies sit on top of our data. When you are analyzing or creating objects on your metadata, you can more efficiently manage your data. You can create objects that do compression, deduplication, objects that do analysis, and objects that hold policies. It’s more of a software-defined data, if you will. Obviously with that metadata and the object storage behind it, our maintenance windows and backup windows started getting lower and lower.

In part 1 of this interview series, Fidel shares the challenges that his company faced with its existing backup configuration as well as the struggles it encountered in identifying a backup solution that scaled to meet a dynamically changing and growing environment.

In part 3 of this interview series, Fidel shares some of the challenges he encountered while rolling out Cohesity and the steps that Cohesity took to address them.

In the fourth and final installment of this interview series, Fidel describes how he leverages Cohesity’s backup appliance for both VM protection and as a deduplicating backup target for his NetBackup backup software.




Storage Remains a Factor in Hyper-converged Infrastructure Deployment Decisions

Anyone in attendance at VMworld last week in Las Vegas and walking through the exhibit hall where all of the vendors showcased their wares could hardly miss the vast number of hyper-converged infrastructure and hyper-converged-like vendors present: Cisco, Dell, EMC, HPE, Nutanix, Maxta, Cohesity, Pivot3, Rubrik, SimpliVity, and Datrium, just to name a few, and I am sure there were others. Yet what caught my attention in speaking to their representatives and some of their users is how storage remains a factor in the architecture of hyper-converged infrastructure (HCI) solutions.

One of the premises of HCI is that its management should get simpler. By creating a hyper-converged server that contains compute, storage, and memory and then layering software that offers virtualization, networking, data protection, and scale-out clustering across these servers, organizations get the benefits of the traditional server, network, and storage array stack without all of its management complexities. Further, by using flash in part or in total within these servers, organizations get the performance they need at a lower cost.

While this premise largely holds up, cracks may appear in this architecture as organizations look to use HCI more broadly in their environment. Ironically, it is storage and its effective management and use within the HCI that again creates challenges for organizations in two specific areas:

  1. Cost containment
  2. Optimized application performance

In talking with Datrium, I learned how it seeks to address these two concerns in HCI deployments. One issue organizations encounter as they look to deploy hyper-converged infrastructure solutions is that the cost of such a solution may rival that of a SAN or NAS environment consisting of servers, networking, and storage. While an HCI solution may arguably be easier to manage than these SAN or NAS environments, justifying the comparable cost of a hyper-converged solution can become a head-scratcher since it consists of what should be commodity components.

The Datrium DVX solution seeks to address this dilemma using a two-fold approach. It first permits organizations to continue down their existing path of using blade or already-installed servers equipped with flash drives and a VMware ESX hypervisor. To these ESX servers it then adds its own Hyperdriver software, which stores data both on the local ESX server flash drives and on a central Datrium DVX NetShelf server/storage appliance. Using this design, writes still occur quickly and are protected across multiple devices, while reads occur much more quickly since data is retrieved from the local server’s flash-based data store.
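Conceptually, this split between host-side flash for reads and a shared, protected appliance for durable writes can be sketched as follows. This is a simplified illustration of the general “server-powered storage” idea only; the class and method names are invented and do not represent Datrium’s actual software or APIs.

```python
# Simplified conceptual sketch of a split read/write data path:
# reads are served from a local host flash cache, while writes are
# acknowledged only after landing on durable shared storage.
# Illustration only; this is not Datrium's code or API.

class SplitDataPath:
    def __init__(self, durable_store):
        self.local_flash = {}                # host-side flash cache (fast reads)
        self.durable_store = durable_store   # shared, protected appliance

    def write(self, key, data):
        self.durable_store[key] = data       # durable, protected copy first
        self.local_flash[key] = data         # then populate the local cache

    def read(self, key):
        if key in self.local_flash:          # fast path: local flash
            return self.local_flash[key]
        data = self.durable_store[key]       # slow path: fetch from the appliance
        self.local_flash[key] = data         # warm the cache for next time
        return data

shared = {}
host = SplitDataPath(shared)
host.write("vm-disk-block-42", b"...")
assert host.read("vm-disk-block-42") == b"..."
```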

Source: Datrium

The appeal of Datrium’s design approach to HCI is that organizations may continue to use their existing servers or blade servers while introducing its value-add in the NetShelf server/storage appliance that resides on the network and that all servers access. Further, this design approach also helps maintain application performance for the virtual machines (VMs) on each ESX server, both initially and over time.

I also spoke at length with Pivot3 while at VMworld about its HCI models. While Pivot3 is not following the same design path as Datrium in terms of using storage-specific servers as part of its hyper-converged architecture, it clearly recognizes that properly deploying and utilizing storage as part of its HCI solution plays an important role in controlling costs and optimizing application performance.

One of the key ways Pivot3 addresses these dual concerns is by leveraging its acquisition earlier this year of NexGen Storage to take advantage of some of the key quality of service (QoS) features and the PCIe flash architecture found on the NexGen arrays. An issue that can emerge in hyper-converged deployments is the inability of applications/VMs to get the levels of performance that a particular VM requires. By adding granular QoS controls, Pivot3 can guarantee application performance to even the most demanding low-latency workloads. While there are a number of organic workarounds to this issue, most involve manual intervention and spending more money.

Source: Pivot3

Pivot3 elected to, in part, automate the resolution to these issues with its acquisition of NexGen Storage. By giving organizations the flexibility to incorporate its N5 PCIe Flash Arrays into its HCI solution, organizations get dynamic QoS for their VMs. This approach serves to both lower costs (since they need fewer servers that require less time to manage) and accelerate application performance by using QoS to ensure each VM gets the level of performance its applications demand.

DCIG sees HCI becoming the predominant architecture that organizations of all sizes adopt and migrate to over time as a means to host and manage the majority of their virtualized infrastructure. But before they do, these solutions need to take into account how to optimize the storage that is part of their design so they remain an enabler (as opposed to an inhibitor) for continued HCI adoption. Based upon what DCIG saw in providers such as Datrium and Pivot3, solutions such as these are well on their way to delivering the key features that organizations need to enable them to scale HCI solutions to meet their application requirements now and into the future.

Note: This blog entry was updated at 9:30 am on September 16, 2016, to properly reflect some technical capabilities of Pivot3’s product.




Pivot3 QoS Positions Organizations to Introduce and Maintain Order in their Hyperconverged Deployments

Hyperconverged infrastructure solutions stand poised to disrupt traditional IT architectures in every way possible. Combining compute, data protection, networking, memory, scale-out, storage, and virtualization on a single platform, they deliver the benefits of traditional IT infrastructures without their associated complexities. But as organizations look to consolidate on hyperconverged infrastructure solutions, they need data protection services such as the Quality of Service (QoS) feature now found on Pivot3’s vSTAC SLX Hyperconverged product, which enables organizations to better protect their applications.

Hyperconverged infrastructure solutions provide organizations with the opportunity to consolidate and then centrally manage disparate applications on a single platform. However, consolidating applications presents its own set of challenges, namely ensuring that each application hosted by the hyperconverged infrastructure solution receives the appropriate level of service.

Absent any policies to guide its actions, the hyperconverged solution will simply attempt to service application requests in the order in which they arrive. While this approach may work fine if the hyperconverged solution only hosts a few virtual machines (VMs), or even a few dozen VMs, organizations want to deploy hyperconverged infrastructure solutions that host potentially hundreds or even thousands of VMs with different priorities. In these situations, the hyperconverged solution needs policies to help it prioritize which actions to take on which VMs first.

This is where the data protection QoS feature now available on Pivot3’s vSTAC SLX Hyperconverged product comes into play by offering the following benefits to applications hosted on the Pivot3 platform:

  1. Through its policies, organizations may pre-configure performance targets for applications that ensure minimum and maximum performance for them.
  2. Once set, the Pivot3 QoS uses adaptive bandwidth throttling and adaptive queuing to prioritize which application workloads have access to specific hyperconverged resources and when.
  3. Organizations have the flexibility to change these performance QoS policies on-the-fly as the needs of specific applications change.
  4. It can prioritize data protection operations by putting replication jobs for mission critical applications at the head of the line ahead of previously queued up business critical and non-critical replication jobs.

What makes Pivot3’s QoS notable is two-fold.

First, as it moves tasks associated with mission critical applications to the head of the line, it does not forget to service the tasks associated with business critical and non-critical applications. Every time tasks associated with these lower priority applications get bypassed to service a task from a mission critical application, Pivot3’s QoS slightly upgrades their priority in the background. In this way, should these tasks get bypassed too many times, the Pivot3 QoS will eventually prioritize them the same as mission critical tasks and they will get serviced.
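The bypass-and-promote behavior described above is a form of priority aging. The sketch below shows one simple way such a scheme could work in principle; the tier names, promotion threshold, and data structures are assumptions for illustration, not Pivot3’s actual algorithm.

```python
# Illustrative priority-aging scheduler: lower-priority tasks that are
# repeatedly bypassed in favor of mission-critical tasks are gradually
# promoted so they eventually get serviced. Tier names and the bypass
# threshold are assumptions, not Pivot3's actual implementation.

from collections import deque

TIERS = ["mission-critical", "business-critical", "non-critical"]
PROMOTE_AFTER = 3  # bypasses tolerated before a task moves up one tier

class AgingScheduler:
    def __init__(self):
        self.queues = {tier: deque() for tier in TIERS}

    def submit(self, task, tier):
        self.queues[tier].append({"task": task, "bypassed": 0})

    def next_task(self):
        # Service the highest-priority non-empty queue...
        for i, tier in enumerate(TIERS):
            if self.queues[tier]:
                entry = self.queues[tier].popleft()
                # ...and count a bypass for everything that was skipped over.
                for lower in TIERS[i + 1:]:
                    for waiting in self.queues[lower]:
                        waiting["bypassed"] += 1
                self._promote()
                return entry["task"]
        return None

    def _promote(self):
        # Move any entry that has been bypassed too often up one tier.
        for i, tier in enumerate(TIERS[1:], start=1):
            promoted = [e for e in self.queues[tier] if e["bypassed"] >= PROMOTE_AFTER]
            for entry in promoted:
                self.queues[tier].remove(entry)
                entry["bypassed"] = 0
                self.queues[TIERS[i - 1]].append(entry)
```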

Second, organizations may set different QoS levels for data protection and performance. By way of example, organizations may need to provide an application with a high level of performance (such as one they are testing or only need temporarily) but that same application may only need minimal or no data protection. Conversely, another application may need only nominal levels of performance but it may need very high levels of data protection. Using these varying QoS levels that Pivot3 makes available, organizations can appropriately address these different data protection and performance needs that each application may have.
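One way to picture these independent performance and data protection dimensions is as per-application policies like the hypothetical ones below; the field names and values are invented for illustration and are not Pivot3’s policy syntax.

```python
# Hypothetical per-application QoS policies showing that performance
# and data protection levels can be set independently of one another.
# Field names and values are illustrative assumptions only.

qos_policies = {
    "test-analytics-app": {
        "performance": {"min_iops": 20000, "max_iops": 50000},
        "data_protection": {"replication": "none", "snapshot_interval_min": None},
    },
    "billing-app": {
        "performance": {"min_iops": 2000, "max_iops": 5000},
        "data_protection": {"replication": "sync", "snapshot_interval_min": 15},
    },
}

for app, policy in qos_policies.items():
    perf, prot = policy["performance"], policy["data_protection"]
    print(f"{app}: {perf['min_iops']}-{perf['max_iops']} IOPS, "
          f"replication={prot['replication']}")
```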

Source: Pivot3

Hyperconverged infrastructure solutions bring much needed affordability and simplicity to an architecture that has grown far more costly and complex than companies ever wanted. Yet as this transition to hyperconverged occurs, organizations need new tools to keep these hyperconverged infrastructures simple and easy to manage. Pivot3 QoS is an excellent example of the type of feature organizations should look for in a hyperconverged infrastructure solution to ensure that these solutions deliver on their promise initially and over time.