VMware vSphere and Nutanix AHV Hypervisors: An Updated Head-to-Head Comparison

Many organizations view hyper-converged infrastructure appliances (HCIAs) as foundational for the cloud data center architecture of the future. However, as part of an HCIA solution, one must also select a hypervisor to run on this platform. The VMware vSphere and Nutanix AHV hypervisors are two capable choices but key differences exist between them.

In the last several years, HCIAs have emerged as a key enabler for cloud adoption. Aside from the connectivity to public and private clouds that these solutions often provide, they offer their own cloud-like properties. Ease of scaling, simplicity of management, and non-disruptive hardware upgrades and data migrations highlight the list of features that enterprises are coming to know and love about these solutions.

But as enterprises adopt HCIA solutions in general as well as HCIA solutions from providers like Nutanix, they must still evaluate key features in these solutions. One variable that enterprises should pay specific attention to is the hypervisors available to run on these HCIA solutions.

Unlike some other HCIA solutions, Nutanix gives organizations the flexibility to choose which hypervisor they want to run on their HCIA platform. They can choose to run the widely adopted VMware vSphere, or they can run Nutanix’s own Acropolis hypervisor (AHV).

What is not always so clear is which one they should host on the Nutanix platform. Each hypervisor has its own set of benefits and drawbacks. To help organizations make a more informed choice as to which hypervisor is the best one for their environment, DCIG is pleased to announce its updated DCIG Pocket Analyst Report that does a head-to-head comparison between the VMware vSphere and Nutanix AHV hypervisors.

This succinct, 4-page report includes a detailed product matrix as well as insight into seven key differentiators between these two hypervisors and which one is best positioned to deliver on key cloud and data center considerations such as:

  • Data protection ecosystem
  • Support for Guest OSes
  • Support for VDI platforms
  • Certified enterprise applications
  • Fit with corporate direction
  • More favorable licensing model
  • Simpler management

This DCIG Pocket Analyst Report is available for purchase for $99.95 via the TechTrove marketplace. The report is temporarily also available free of charge with registration from the Unitrends website.




Differentiating between High-end HCI and Standard HCI Architectures

Hyper-converged infrastructure (HCI) architectures have staked an early claim as the data center architecture of the future. The ability of HCI solutions to easily scale capacity and performance, simplify ongoing management, and deliver cloud-like features has led many enterprises to view HCI as a logical path forward for building on-premises clouds.

But when organizations deploy HCI at scale (more than eight nodes in a single logical configuration), cracks in its architecture can begin to appear unless organizations carefully examine how well it scales. On the surface, high-end and standard hyper-converged architectures deliver the same benefits. They each virtualize compute, memory, storage networking, and data storage in a simple to deploy and manage scale-out architecture. They support standard hypervisor platforms. They provide their own data protection solutions in the form of snapshots and replication. In short, these two architectures mirror each other in many ways.

However, high-end and standard HCI solutions differ in functionality in ways that primarily surface in large virtualized deployments. In these environments, enterprises often need to:

  • Host thousands of VMs in a single, logical environment.
  • Provide guaranteed, sustainable performance for each VM by insulating them from one another.
  • Offer connectivity to multiple public cloud providers for archive and/or backup.
  • Seamlessly move VMs to and from the cloud.

It is in these circumstances that differences between high-end and standard architectures emerge, with the high-end HCI architecture possessing four key advantages over standard HCI architectures. Consider:

  1. Data availability. Both high-end and standard HCI solutions distribute data across multiple nodes to ensure high levels of data availability. However, only high-end HCI architectures use separate, distinct compute and data nodes with the data nodes dedicated to guaranteeing high levels of data availability for VMs on all compute nodes. This ensures that should any compute node become unavailable, the data associated with any VM on it remains available.
  2. Flash/performance optimization. Both high-end and standard HCI architectures take steps to keep data local to the VM by storing the data of each VM on the same node on which the VM runs. However, solutions based on high-end HCI architectures store a copy of each VM’s data on the VM’s compute node as well as on the high-end HCI architecture’s underlying data nodes to improve and optimize flash performance (a toy sketch of this write path follows this list). High-end HCI solutions such as the Datrium DVX also deduplicate and compress all data while limiting the amount of inter-nodal communication to free resources for data processing.
  3. Platform manageability/scalability. As a whole, HCI architectures are intuitive to size, manage, scale, and understand. In short, if an organization needs more performance and/or capacity, it only needs to add another node. However, enterprises often must support hundreds or even thousands of VMs that each must perform well and be protected. Trying to deliver on those requirements at scale using a standard HCIA solution where inter-nodal communication is a prerequisite becomes almost impossible to achieve. High-end HCI solutions facilitate granular deployments of compute and data nodes while limiting inter-nodal communication.
  4. VM insulation. Noisy neighbors—VMs that negatively impact the performance of adjacent VMs—remain a real problem in virtual environments. While both high-end and standard HCI architectures support storing data close to VMs, high-end HCI architectures do not rely on the availability of other compute nodes. This reduces inter-nodal communication and the probability that the noisy neighbor problem will surface.
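
To make the data placement described in points 1 and 2 concrete, below is a minimal, vendor-neutral sketch of such a write path: a copy of each block stays on the VM’s compute node for fast local reads while replicas land on dedicated data nodes for availability. The class names, functions, and replica count are hypothetical illustrations, not any product’s actual implementation.

```python
from dataclasses import dataclass, field

REPLICA_COUNT = 2  # assumed durability policy on the data-node tier

@dataclass
class ComputeNode:
    name: str
    flash_cache: dict = field(default_factory=dict)  # block_id -> bytes, local to the VM's host

@dataclass
class DataNode:
    name: str
    store: dict = field(default_factory=dict)        # durable copies live here

def write_block(vm_node: ComputeNode, data_nodes: list, block_id: str, data: bytes) -> None:
    # Keep a copy on the VM's own compute node so reads stay local (flash optimization).
    vm_node.flash_cache[block_id] = data
    # Persist replicas on dedicated data nodes so the block survives loss of the compute node.
    for dn in sorted(data_nodes, key=lambda d: len(d.store))[:REPLICA_COUNT]:
        dn.store[block_id] = data

def read_block(vm_node: ComputeNode, data_nodes: list, block_id: str) -> bytes:
    # Serve reads from local flash when possible; fall back to the data-node tier.
    if block_id in vm_node.flash_cache:
        return vm_node.flash_cache[block_id]
    return next(dn.store[block_id] for dn in data_nodes if block_id in dn.store)
```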

HCI architectures have resonated with organizations in large part because of their ability to deliver cloud-like features and functionality while maintaining simplicity of deployment and ongoing maintenance. However, next-generation high-end HCI architectures, with solutions available from providers like Datrium, provide organizations greater flexibility to deliver cloud-like functionality at scale, offering better management and optimization of flash performance, improved data availability, and increased insulation between VM instances. To learn more about how standard and high-end HCI architectures compare, check out this recent DCIG pocket analyst report that is available on the TechTrove website.




HYCU Branches Out to Tackle ESX Backups in non-Nutanix Shops

A virtualization-focused backup software play may be perceived as “too little, too late” with so many players in today’s backup space. However, many former virtualization-centric backup software plays (PHD Virtual and vRanger come to mind) have largely disappeared while others got pricier and/or no longer do just VM backups. These changes have once again created a need for a virtualization-centric backup software solution. This plays right into the hands of the newly created HYCU as it formally tackles the job of ESX virtual machine (VM) backups in non-Nutanix shops.

Virtualization-centric backup software has almost disappeared in the last few years. Some products have been acquired and become part of larger entities (PHD Virtual was acquired by Unitrends while AppAssure and vRanger both ended up with Quest Software) while others have diversified into providing both physical and virtual backups. But as these changes have occurred, the need for a virtualization-focused backup software solution has not necessarily diminished. If anything, the rise of hyper-converged platforms such as those that Dell EMC, Nutanix, HPE SimpliVity, Pivot3, and others offer has created a new need for a backup software product designed for these environments.

Enter HYCU. HYCU as a brand originally surfaced mid-last year from Comtrade Software. Today it takes on the name of its flagship HYCU backup software product and becomes a standalone company. By adopting the corporate name of HYCU, it completes the break from its parent company, Comtrade Group, as well as from the Comtrade Software name under which it has operated for the past nine months.

During its initial nine-month existence, HYCU focused on tackling VM backups in Nutanix environments. It started out by protecting VMs running on Nutanix Acropolis hypervisor (AHV) environments and then expanded to protect VMs running on ESX in Nutanix environments.

Today HYCU takes a logical and necessary leap to ensure its VM-centric backup software finds a home in a broader number of enterprises. While HYCU may arguably do the best job of any backup software product available when it comes to protecting VMs in Nutanix environments, most organizations do not yet host all their VMs on Nutanix.

To address this larger market, HYCU is broadening its capabilities to tackle the protection of VMs on non-Nutanix platforms. There is some significance in HYCU taking this step. Up to this point, HYCU leveraged the native data protection capabilities found on Nutanix’s platform to negate the possibility of VM stuns. This approach worked whether it protected VMs running on AHV or ESX as both were hosted on the Nutanix platform and HYCU could call on Nutanix’s native snapshot capabilities.


By porting its software to protect VMs running on non-Nutanix platforms, HYCU by necessity must use the native VMware APIs for Data Protection (VADP) to protect these VMs. As VADP does not offer the same level of data protection against VM stuns that the native Nutanix platform offers, users on non-Nutanix platforms remain exposed to the possibility of VM stuns.

That said, organizations do gain three advantages by using HYCU on non-Nutanix platforms:

  1. They obtain a common solution to protect VMs on both their Nutanix and non-Nutanix platforms. HYCU provides them with one interface to manage the protection of all VMs.
  2. Affordable VM backups. HYCU prices its backup software very aggressively with list prices of about $1500/socket.
  3. They can more easily port VMs from non-Nutanix to Nutanix platforms. Once they begin to protect VMs on non-Nutanix platforms, they can restore them to Nutanix platforms. Once ported, they can replace the VM’s underlying data protection methodology with Nutanix’s native data protection capabilities to negate the possibility of VM stuns.

In today’s highly virtualized world a virtualization centric backup software play may seem late to market. However, backup software consolidations and mergers coupled with the impact that hyper-converged infrastructures are having on enterprise data centers have created an opening for an affordable virtualization centric backup software play.

HYCU has rightfully discerned that such an opportunity exists. By now extending the capabilities of its product to protect non-Nutanix environments, it knocks down the barriers and objections to these environments adopting its software while simultaneously easing their path to eventually transition to Nutanix and address the VM stun challenges that persist in non-Nutanix environments.




Comtrade Software goes beyond AHV, Adds ESX Support

Every vendor new to a market generally starts by introducing a product that satisfies a niche to gain a foothold in that market. Comtrade Software exemplified this premise by coming to market earlier this year with its HYCU software that targets the protection of VMs hosted on the Nutanix AHV hypervisor. But to grow in a market, especially in the hyper-competitive virtual machine (VM) data protection space, one must expand to protect all market-leading hypervisors. Comtrade Software’s most recent HYCU release achieves that goal with its new support for VMware ESX.

In any rapidly growing market – and few markets currently experience faster growth than the VM data protection space – there will be opportunities to enter it that existing players overlook or cannot respond to in a timely manner. Such an entry point occurred earlier this year.

Comtrade Software recognized that no vendor had yet released purpose-built software targeted at protecting VMs hosted on the Nutanix AHV hypervisor. By coming to market with its HYCU software when it did (June 2017), it was able to gain a foothold in customer accounts already using AHV who needed a simpler and more intuitive data protection solution.

But being a one-trick pony only works so long in this space. Other vendors have since come to market with features that compete head-to-head with HYCU by enabling their software to more effectively protect VMs hosted on the Nutanix AHV hypervisor. Remaining viable and relevant in this space demanded that Comtrade Software expand its support to VMs running on other hypervisors.

Comtrade Software answered that challenge this month. Its current release adds VMware ESX support to give organizations the freedom to use HYCU to protect VMs running on AHV, ESX, or both. However, Comtrade Software tackled its support of ESX in a manner different than many of its counterparts.

Comtrade Software does NOT rely on the VMware APIs for Data Protection (VADP), which have become almost the default industry standard for protecting VMs. It instead leverages Nutanix snapshots to protect VMs running on the Nutanix cluster regardless of whether the underlying hypervisor is AHV or ESX. The motives behind this decision are two-fold, as this technique minimizes if not eliminates:

  1. Application impact
  2. VM stuns

A VM stun, or quiescing of a virtual machine (VM), is done to create a snapshot that contains a consistent or recoverable backup of the application and/or data residing on the VM. Under normal conditions, this VM stun poses minimal or no risk to an organization as it typically completes in under a second.

However, hyper-converged environments are becoming anything but normal. As organizations continue to increase VM density, virtualize more I/O intensive applications, and/or retain more snapshots for longer periods of time on their Nutanix cluster, the length and impact of VM stuns increase when using VMware’s native VADP, as other authors have discussed. To counter this, HYCU leverages the native snapshot functionality found in Nutanix to offset this known deficiency of VMware VADP wherever any of these three conditions exists.
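
To illustrate the difference in approach, the sketch below shows roughly what a storage-side snapshot request could look like when a backup product asks the cluster, rather than the hypervisor, to capture a VM. The endpoint, payload, port, and response field are assumptions for illustration only; they do not represent the documented Nutanix or HYCU APIs.

```python
import requests

CLUSTER = "https://prism.example.local:9440"   # hypothetical cluster address
AUTH = ("backup_svc", "secret")                # hypothetical service account

def snapshot_vm(vm_uuid: str, snapshot_name: str) -> str:
    """Ask the cluster to snapshot a VM natively instead of quiescing it through the hypervisor."""
    resp = requests.post(
        f"{CLUSTER}/api/snapshots",                            # assumed endpoint, for illustration only
        json={"vm_uuid": vm_uuid, "snapshot_name": snapshot_name},
        auth=AUTH,
        verify=False,                                          # lab shortcut; use proper CA certificates in production
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["task_uuid"]                            # assumed response field

# A backup job would then read changed blocks from the storage-level snapshot while
# the VM keeps running, rather than holding the VM quiesced for the duration of the copy.
```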

Comtrade Software rightly recognizes what it is up against as it seeks to establish a larger footprint in the broader VM data protection space. While its initial release of HYCU enabled it to establish a footprint with some organizations, to keep that footprint with its existing customers as well as attract new customers going forward, it needed to introduce support for other hypervisors.

Its most recent release accomplishes that objective and its choice of ESX exposes it to many more opportunities with nearly 70 percent of Nutanix installations currently using ESX as their preferred hypervisor. However, Comtrade Software offers support for ESX in a very clever way that differentiates it from its competitors.

By leveraging Nutanix snapshots instead of VMware VADP, it capitalizes on its existing tight relationship with Nutanix by giving organizations new opportunities to improve the availability and protection of applications already running on Nutanix. Further, it gives them greater confidence to scale their Nutanix implementation to host more applications and/or higher performance applications going into the future.





VMware vSphere and Nutanix AHV Hypervisors: A Head-to-Head Comparison

Many organizations view hyper-converged infrastructure appliances (HCIAs) as foundational for the cloud data center architecture of the future. However, as part of an HCIA solution, one must also select a hypervisor to run on this platform. The VMware vSphere and Nutanix AHV hypervisors are two common choices but key differences between them persist.

In the last couple of years, HCIAs have emerged as a key enabler for cloud adoption. Aside from the connectivity to public and private clouds that these solutions often provide, they offer their own cloud-like properties. Ease of scaling, simplicity of management, and non-disruptive hardware upgrades and data migrations highlight the list of features that enterprises are coming to know and love about these solutions.

But as enterprises adopt HCIA solutions in general as well as HCIA solutions from providers like Nutanix, they must still evaluate key features in these solutions. One variable that enterprises should pay specific attention to is the hypervisors available to run on these HCIA solutions.

Unlike some other HCIA solutions, Nutanix gives organizations the flexibility to choose which hypervisor they want to run on their HCIA platform. They can choose to run the widely adopted VMware vSphere, or they can run Nutanix’s own Acropolis hypervisor (AHV).

What is not always so clear is which one they should host on the Nutanix platform as each hypervisor has its own set of benefits and drawbacks. To help organizations make a more informed choice as to which hypervisor is the best one for their environment, DCIG is pleased to make its most recent DCIG Pocket Analyst Report, which does a head-to-head comparison between the VMware vSphere and Nutanix AHV hypervisors, available for a complimentary download.

This succinct, 4-page report includes a detailed product matrix as well as insight into seven key differentiators between these two hypervisors and which one is best positioned to deliver on key cloud and data center considerations such as:

  1. Breadth of partner ecosystem
  2. Enterprise application certification
  3. Guest OS support
  4. HCIA management capabilities
  5. Overall corporate direction
  6. Software licensing options
  7. Virtual desktop infrastructure (VDI) support

To access and download this report at no charge for a limited time, simply follow this link and complete the registration form on the landing page.




VMware Shows New Love for Public Clouds and Containers

In recent months and years, many have come to question VMware’s commitment to public clouds and containers used by enterprise data centers (EDCs). No one disputes that VMware has a solid footprint in EDCs and that it is in no immediate danger of being displaced. However, many have wondered how or if it will engage with public cloud providers such as Amazon as well as how it would address threats posed by Docker. At VMworld 2017, VMware showed new love for these two technologies that should help to alleviate these concerns.

Public cloud offerings such as those available from Amazon and container technologies such as what Docker offers have captured the fancy of enterprise organizations, and for good reasons. Public clouds provide an ideal means for organizations of all sizes to practically create hybrid private-public clouds for disaster recovery and failover. Similarly, container technologies expedite and simplify application testing and development as well as provide organizations new options to deploy applications into production with even fewer resources and overhead than what virtual machines require.

However, the rapid adoption and growth of these two technologies in the last few years among enterprises had left VMware somewhat on the outside looking in. While VMware had its own public cloud offering, vCloud Air, it did not compete very well with the likes of Amazon Web Services (AWS) and Microsoft Azure as vCloud Air was primarily a virtualization platform. This feature gap probably led to VMware’s decision to create a strategic alliance with Amazon in October 2016 to run its vSphere-based cloud services on AWS and its subsequent decision in May 2017 to divest itself of vCloud Air altogether and sell it to OVH.

This strategic partnership between AWS and VMware became a reality at VMworld 2017 with the announcement of the initial availability of VMware Cloud on AWS. Using VMware Cloud Foundation, administrators can use a single interface to manage their vSphere deployments whether they reside locally or in Amazon’s cloud. The main caveat is that this service is currently only available in the AWS US West region. VMware expects to roll this program out throughout the rest of AWS’s regions worldwide in 2018.

VMware’s pricing for this offering is as follows:

Region: US West (Oregon)     On-Demand (hourly)   1 Year Reserved*   3 Year Reserved*
List Price ($ per host)      $8.3681              $51,987            $109,366
Effective Monthly**          $6,109               $4,332             $3,038
Savings Over On-Demand       --                   30%                50%

*Coming Soon. Pricing Option Available at Initial Availability: Redeem HPP or SPP credits for on-demand consumption model.
**Effective monthly pricing is shown to help you calculate the amount of money that a 1-year and 3-year term commitment will save you over on-demand pricing. When you purchase a term commitment, you are billed for every hour during the entire term that you select, regardless of whether the instances are running or not.

Source: VMware
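
For readers who want to see where the “Effective Monthly” figures above come from, the short check below reproduces them from the list prices. It assumes a 730-hour billing month, which is inferred from the published numbers rather than stated by VMware.

```python
HOURS_PER_MONTH = 730  # assumption inferred from the table, not stated by VMware

on_demand_hourly = 8.3681
one_year_reserved = 51_987
three_year_reserved = 109_366

on_demand_monthly = on_demand_hourly * HOURS_PER_MONTH   # ~$6,109
one_year_monthly = one_year_reserved / 12                # ~$4,332
three_year_monthly = three_year_reserved / 36            # ~$3,038

for label, monthly in (("1-year", one_year_monthly), ("3-year", three_year_monthly)):
    savings = 1 - monthly / on_demand_monthly
    print(f"{label}: ${monthly:,.0f}/month, {savings:.0%} savings vs. on-demand")
# Prints roughly 29% and 50%, in line with the advertised 30% and 50% after rounding.
```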

The other big news coming out of VMworld was VMware’s response to the threat/opportunity presented by container technologies. To tackle this issue, it partnered with Pivotal Software, Inc., and collaborated with Google Cloud to offer the new Pivotal Container Service (PKS) that combines Pivotal Cloud Foundry and VMware’s software-defined data center infrastructure offerings.


One of the major upsides of this offering is a defined, supported code level for use by enterprises for testing and development. Container technologies are experiencing a tremendous amount of change and innovation. While this may foretell great things for container platforms, this degree of innovation makes it difficult for enterprises to do predictable and meaningful application testing and development when the underlying code base is changing so swiftly.

With Google, Pivotal, and VMware partnering to deliver this platform, enterprises have access to a more predictable, stable, and supported container code base than what they might obtain independently. Further, they can have more confidence that the platform on which they test their code will work in VMware environments in the months and years to come.

VMware’s commitment to public cloud and container providers has been somewhat unclear over the past few years. But what VMware made clear at this year’s VMworld is that it no longer views cloud and container providers such as Amazon and Google as threats. Rather, it finally embraced what its customers already understood: VMware excels at virtualization while Amazon and Google excel at cloud and container technologies. At VMworld 2017, it admitted to itself and the whole world that if you cannot beat them, join them, which was the right move for VMware and the customers it seeks to serve.




Software-defined Data Centers Have Arrived – Sort of

Today, more than ever, organizations are looking to move to software-defined data centers. Whether they adopt software-defined storage, networking, computing, servers, security, or all of them as part of this initiative, they are starting to conclude that a software-defined world trumps the existing hardware-defined one. While I agree with this philosophy in principle, organizations need to carefully dip their toe into the software-defined waters rather than dive in head-first.

The concept of software-defined data centers is really nothing new. This topic has been discussed for decades and was the subject of one of the first articles I ever published 15 years ago (though the technology was more commonly called virtualization at that time). What is new, however, is the fact that the complementary, supporting set of hardware technologies needed to enable the software-defined data center now exists.

More powerful processors, higher capacity memory, higher bandwidth networks, scale-out architectures, and other technologies have each contributed, in part, to making software-defined data centers a reality. The recent availability of solid state drives (SSDs) was perhaps the technology that ultimately enabled this concept to go from the drawing boards into production. SSDs reduce data access times from milliseconds to microseconds, helping to remove one of the last remaining performance bottlenecks to making software-defined data centers a reality.

Yet as organizations look to replace their hardware-defined infrastructure with a software-defined data center, they must still proceed carefully. Hardware-defined infrastructures may currently cost a lot more than software-defined data centers but they do offer distinct benefits that software-defined solutions are still hard-pressed to match.

For instance, the vendors who offer the purpose-built appliances for applications, backup, networking, security, or storage used in hardware-defined infrastructures typically provide hardware compatibility lists (HCLs). Each HCL names the applications, operating systems, firmware, etc., with which the appliance is certified to interact and for which the vendor will provide support. Deviate from that HCL and your ability to get support suddenly gets sketchy.
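
As a concrete, if simplified, illustration of what an HCL check amounts to, the sketch below flags any deployed component whose version falls outside the vendor’s published list. All component names and version strings are made up for the example.

```python
# Hypothetical vendor-published compatibility list: component -> supported versions.
HCL = {
    "fc_hba_firmware": {"4.2.1", "4.3.0"},
    "hypervisor": {"ESXi 6.5", "ESXi 6.7"},
    "storage_array_os": {"9.1"},
}

# Hypothetical inventory pulled from a deployed stack.
deployed = {
    "fc_hba_firmware": "4.1.9",   # the sort of stale FC driver described in the SAN story below
    "hypervisor": "ESXi 6.5",
    "storage_array_os": "9.1",
}

unsupported = {k: v for k, v in deployed.items() if v not in HCL.get(k, set())}
if unsupported:
    print("Outside the HCL (vendor support may be sketchy):", unsupported)
```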

Even HCLs are problematic due to the impossibly large number of possible configurations that exist in enterprise environments which vendors can never thoroughly vet and test.

This has led to the emergence of converged infrastructures. Using these, vendors guarantee that all components in the stack (applications, servers, network, and storage along with their firmware and software) are tested and certified to work together. So long as organizations use the vendor-approved and tested hardware and software components in this stack and keep them in sync with the vendor specifications, they should have a reliable solution.

Granted, obtaining solutions that satisfy these converged infrastructure requirements costs more. But for many enterprises paying the premium was worth it. This testing helps to eliminate situations such as one I experienced many years ago.

We discovered in the middle of a system wide SAN upgrade that a FC firmware driver on all the UNIX systems could not detect the LUNs on the new storage systems. Upgrading this driver required us to spend nearly two months with individuals coming in every weekend to apply this fix across all these servers before we could implement and use the new storage systems.

Software-defined data centers may still encounter these types of problems. Even though the software itself may work fine, it cannot account for all the hardware in the environment or guarantee interoperability with it. Further, since software-defined solutions tend to go into low cost and/or rapidly changing environments, there is a good possibility the HCLs and/or converged solutions they do offer are limited in their scope and may not have been subjected to the extensive testing that production environments require.

The good news is that software-defined data centers are highly virtualized environments. As such, copies of production environments can be made and tested very quickly. This flexibility mitigates the dangers of creating unsupported, untested production environments. It also provides organizations an easier, faster means to fail back to the original configuration should the new configuration not work as expected.

But here’s the catch. While software-defined data centers provide flexibility, someone must still possess the skills and knowledge to make the copies, perform the tests, and do the failbacks and recoveries if necessary. Further, software-defined data centers eliminate neither their reliance on underlying hardware components nor the individuals who create and manage them.

Because interoperability with the hardware is not a given and people are known to be unpredictable and/or unreliable from time to time, the whole system could go down or function unpredictably without a clear path to resolution. Further, if one encounters interoperability issues initially or at some point in the future, the situation may get thornier. Organizations may have to ask and answer questions such as:

  1. When the vendors start finger pointing, who owns the problem and who will fix it?
  2. What is the path to resolution?
  3. Who has tested the proposed solution?
  4. How do you back out if the proposed solution goes awry?

Software-defined data centers are rightfully creating a lot of buzz but they are still not the be-all and end-all. While the technology now exists at all levels of the data center to make it practical to deploy this architecture and for companies to realize significant hardware savings in their data center budgets, the underlying best practices and support needed to successfully implement software-defined data centers are still playing catch-up. Until those are fully in place or you have full assurances of support by a third party, organizations are advised to proceed with caution on any software-defined initiative, data center or otherwise.




Comtrade Software HYCU Serves as a Bellwether for Accelerated Adoption of Hyperconverged Platforms

In today’s business world where new technologies constantly come to market, there are signs that indicate when certain ones are gaining broader market adoption and are ready to go mainstream. Such an event occurred this month when a backup solution purpose-built for Nutanix was announced.

This product minimized the need for users of Nutanix’s hyperconverged infrastructure solution to parse through multiple products to find the right backup solution for them. Now they can turn to Comtrade Software’s HYCU software confident that they will get a backup solution purpose-built to protect VMs and applications residing on the Nutanix Acropolis hyperconverged infrastructure platform.

In the history of every new platform that comes to market, certain tipping points occur that validate and accelerate its adoption. One such event is the availability of other products built specifically to run on that platform that make it more practical and/or easier for users of that platform to derive more value from it. Such an event for the Nutanix Acropolis platform occurred this month when Comtrade Software brought to market its HYCU backup software, which is specifically designed to protect VMs and applications running on the Nutanix Acropolis hyperconverged platform.

The availability of this purpose-built data protection solution from Comtrade Software for Nutanix is significant in three ways.

  • It signifies that the number of companies adopting hyperconverged infrastructure solutions has reached critical mass in the marketplace and that this technology is poised for larger growth.
  • It would suggest that current backup solutions do not deliver the breadth of functionality that administrators of hyperconverged infrastructure solutions need; that they cost too much; that they are too complicated to use; or some combination of all three.
  • It indirectly validates that Nutanix is the market leader in providing hyperconverged infrastructure solutions as Comtrade placed its bets on first bringing a solution to market that addresses the specific backup and recovery challenges that Nutanix users face.

Considering that Comtrade Software’s HYCU is just out of the gate, it offers a significant amount of functionality that makes it a compelling data protection solution for any Nutanix deployment. One of Comtrade’s design goals was to make it as simple as possible to deploy and manage backup over time in Nutanix environments. While this is typically the goal of every product that comes to market, Comtrade Software’s HYCU stands apart with its ability to detect the application running inside of each VM.

One of the challenges that administrators routinely face is the lack of ability to easily discern what applications run inside a VM without first tracking down the owner of that VM and/or the application owner to obtain that information. In the demo I saw of HYCU, it mitigates the need to chase down these individuals as it can look inside of a VM to identify which application and operating system it hosts. Once it has this information, the most appropriate backup policies for that VM may be assigned.

Equally notable about the Comtrade Software HYCU product is its management interface. Rather than requiring administrators to learn a new management interface to perform backups and recoveries, it presents a management interface that closely, if not exactly, replicates the one used by Nutanix.

Every platform that attains broad acceptance in the marketplace reaches a point where partners come alongside it and begin to offer solutions either built upon it or that do a better job of performing certain tasks such as data protection and recovery. Comtrade Software offering its HYCU data protection software serves as a bellwether for where Nutanix sits in the hyperconverged marketplace. By coming to market when it has, Comtrade Software positions its HYCU offering as the front runner in this emerging space as it currently has no competitors that offer purpose-built backup software for Nutanix hyperconverged infrastructure deployments.




If We Cannot Scale Our Backup Solution, We Die; Interview with SaaS Provider System Architect Fidel Michieli, Part I

Every year at VMworld I have conversations that broaden my understanding and appreciation for new products on the market. This year was no exception as I had the opportunity to talk at length with Fidel Michieli, a System Architect at a SaaS provider, who shared his experiences with me about his challenges with backup and recovery and how he came to choose Cohesity. In this first installment in my interview series with Fidel, he shared the challenges that his company was facing with his existing backup configuration as well as the struggles that he had in identifying a backup solution that scaled to meet his dynamically changing and growing environment.

Jerome: Fidel, thanks for taking time out of your schedule here at VMworld to meet and talk with me about how you came to choose Cohesity for your environment. To begin, please tell me about your role at your company.


Fidel:    I work as a system architect at a software-as-a-service (SaaS) provider that pursues a very innovative, agile course of development which is very good at adopting new technology and trends.

My job is on the corporate infrastructure side. I do not work with the software delivery to our customers. Our software is more of a cookie cutter environment. It is very scalable but it is restricted to our application stack. I work on the corporate side where we have all the connectivity, email, financial, and other applications that the enterprise needs, including some of our customers’ applications.

I am responsible for choosing and deploying the technology and the strategy to get us to where we need to go, and I try to develop this division and the strategy to see where things are going. We used Veritas NetBackup and Dell deduplication appliances for backups. Using these solutions, we were constrained as they did not scale to match the demands of a business growing at the rate ours was.

One of the biggest things that we have to worry about is scale. Often by the time we architect and set up a new solution, we always end up short. If we do not scale, we die.

We were at a crossroads with our previous strategy where we did not scale. It was very expensive to grow and manage. The criticality of the restore is huge and we had horrible restore times. We had a tape strategy. The tape guy came once a week. You could ask for a tape and it would come the next time he stopped by so we would potentially wait six days for the tape to get there. Then you had to move the data, get it off of tape, and convert it to a disk format. Our recovery SLAs were horrible.

I was tasked with finding a new solution. For back-end storage, we looked at Data Domain as we were an EMC shop.  For backup software, we looked at Gartner and their magic quadrant and we chose the first three.  With EMC (now Dell Technologies) we saw what the ecosystem looked like 12 years ago. A bunch of acquisitions integrated into one solution. It does not get one out of the silo scaling. There were some efficiencies but, honestly, we were not impressed with the price.

Jerome: Did you find EMC expensive for what it offered?

Fidel: Yes. It was ridiculously expensive. We also looked at Commvault and just with the first quote we realized this is way too complicated. We are a smaller organization, so we do not have people dedicated to jobs. Commvault quoted us 30 days for implementation engineers. We would have a guy from Commvault in our office for 30 days implementing and migrating jobs. That speaks about the complexity about what we are doing and it speaks to how, when their implementation engineer leaves, who is going to take on these responsibilities and how long is that going to take.

We decided that we should find a more sensible approach.

Jerome: How virtualized is your environment?

Fidel:  98 percent. All VMware. This led us to look at a virtual machine backup solution. We had heard very good things about this product but the only problem we had was the back-end storage. How do we tackle the back-end storage? My background is on the storage side so I started looking at solutions like Swift, which is an open source object-based storage as well as ScaleIO.  Yet when we evaluated this virtual machine backup solution using this storage, we were not impressed with it.

Jerome: Why was that? Those solutions are specifically tailored for backup of virtual machines.

Fidel: To be very honest, NetBackup performed better which I did not expect. I was very invested in the virtual machine backup solution. We did a full analysis on times and similar testing using different back ends. We found that the virtual machine backup software was up to 37 percent slower and more expensive because of its licensing model so it was not going to work for us.

Jerome: What did you decide to do at that point?

Fidel: We talked with our SHI International representative. We explained that we experienced a very high rate of change and that we needed to invest in a solution that in 2-3 years could be supporting an environment that may look radically different than today. Further, we did not want to delay deploying it because we were concerned how competitive we would be. If we delayed, the impact could be huge.

He recommended Cohesity. We recognized that it was obviously scale-out. One of the things that I particularly really liked about its scale-out architecture is that since you originate all of your data copies from the storage, you can have multiple streams from all your nodes. In this way, you are not only scale-out on capacity, but also performance and the amount of data streams that you can have.

In part 2 of this interview series Fidel shares how he gained a comfort level with Cohesity prior to rolling it out enterprise-wide in his organization.

In part 3 of this interview series, Fidel shares some of the challenges he encountered while rolling out Cohesity and the steps that Cohesity took to address them.

In the fourth and final installment of this interview series, Fidel describes how he leverages Cohesity’s backup appliance for both VM protection and as a deduplicating backup target for his NetBackup backup software.




Feature Consolidation on Backup Appliances Currently Under Way

Integrating backup software, cloud services support, deduplication, and virtualization into a single hardware appliance remains a moving target. Even as backup appliance providers merge these technologies onto their respective appliances, the methodologies they employ to do so can vary significantly between them. This becomes very apparent when one looks at the growing number of backup appliances from the providers in the market today and the various ways that they offer these features.


Some providers such as Cohesity provide options in their appliances to satisfy the demands of three different backup appliance configurations. Their appliances may be configured as a target-based deduplication appliance, as an integrated backup appliance (offering both storage and backup software for data protection behind the firewall), or as a hybrid cloud backup appliance, which enables their appliances to back up data locally and store data with cloud services providers.

By offering options to configure their appliances this way, these providers open the door for their products to address multiple use cases over time. In a recent conversation with DCIG, Cohesity explained that it often initially positions its product as a target deduplication appliance as a means to non-disruptively get a foothold in organizations, with the hope that those organizations will eventually start to use its backup software as well.

Cohesity’s scale-out design also makes it an appealing alternative to competitors such as EMC Data Domain. By scaling out, organizations can eliminate the backup silos that result from deploying multiple instances of EMC Data Domain. Using Cohesity, organizations can instead create one central backup repository, making its solution a more scalable and easier-to-manage deduplicating backup target than EMC Data Domain.

Further, now that Cohesity has a foothold, organizations can begin to test and use Cohesity’s backup software in lieu of their existing software. A number have already found that Cohesity’s software is sufficiently robust to meet the needs of their backup environment. This frees organizations to save even more money and further consolidate their backup infrastructure on a single solution.

Other providers also bundle deduplication along with virtualization and connectivity to cloud services providers as part of their backup appliance offering in order to deliver instant and cloud recovery as part of their solution. One specific area in which these appliances differentiate themselves is their ability to deliver instant recoveries on the appliance and even with cloud services providers.

Many providers now make virtual machines (VMs) available on their backup appliances to host application recoveries and some even make VMs available with cloud services providers. These VMs, which reside locally on the backup appliance, give organizations a way to recover applications such as Microsoft Exchange or SQL Server, or to use the VMs for test and development. DCIG has found that appliances from Barracuda, Datto, Dell, and Unitrends all support these types of capabilities.

In evaluating these features across different backup appliances, DCIG finds that the Dell DL4300 Backup and Recovery Appliance sets itself apart from the others with its Virtual Standby feature that includes fully licensed VMs from Microsoft. Its VMs run in the background in standby mode and are kept current with application data. In this way, they are ready for access and use at any time should they be called upon. This compares to the others where VMs on the appliance take time to set up. While organizations may also want to bring up production-level applications on the VMs of other backup appliances, doing so takes more time and may require the intervention of backup administrators.

However other providers also give organizations a means to access and recover their data and applications.

  • Using Barracuda organizations can recover from a replicated site using a Local Control appliance and Local LiveBoot. Once accessed, administrators may recover to the local appliance using virtual machines.
  • Datto offers instant restore capabilities where VMs may be set up locally on the appliance for instant recovery. If the Datto appliance connects to the cloud, users also have the option to run VMs in the cloud, which gives organizations time to fix a local server outage while providing business continuity during this time.
  • Unitrends lets users mount VMs for instant recovery on the appliance and in the cloud. Users that opt-in to its Disaster Recovery Service gain access to up to five VMs depending on the size of the appliance or they may also acquire VMs in the cloud if needed.

The consolidation of deduplication, virtualization, and cloud connectivity coupled with new scale-out capabilities provides organizations more reasons than ever to purchase a single appliance to protect their applications and data. Buying a single backup appliance not only provides a smart data protection plan but also affords organizations new opportunities to introduce new technologies into their environment.

The means by which providers incorporate these new technologies into their backup appliances is one of many components to consider when selecting any of today’s backup appliances. However, their cloud connectivity, instant recovery, consolidated feature sets, and scale-out capabilities are becoming the new set of features that organizations should examine on the latest generation of backup appliances. Look for the release of a number of DCIG Buyer’s Guide Editions on backup appliances in the weeks and months to come that provide the guidance and insight you need to make these all-important decisions about these products.




SDS’s Impact on the Storage Landscape; Interview with Nexenta Chairman and CEO, Tarkan Maner, Part 2

In the last 12-18 months, software-only software-defined storage (SDS) seems to be on the tip of everyone’s tongue as the “next big thing” in storage. However, getting some agreement as to what features constitute SDS software, who offers it and even who competes against who, can be a bit difficult to ascertain as provider allegiances and partnerships quickly evolve. In this second installment of my interview series with Nexenta’s Chairman and CEO, Tarkan Maner, he provides his views into how SDS software is impacting the competitive landscape, and how Nexenta seeks to differentiate itself.

Jerome: You have made comments along the lines that VMware views companies such as DataCore and Falconstor as competitors as opposed to Nexenta. Can you elaborate upon your comment?

Tarkan:  There are three categories of companies. One is storage vendors that take an old school approach with legacy systems and hardware appliances that have some storage software. Although they call themselves software-defined storage (SDS), at the end of the day, the customer buys a box.

A second category is the companies that come from the old legacy storage resource management (SRM) and storage virtualization worlds. Although there are some great companies that have done some great things, their solutions are a little bit more static and the way they deliver software-defined storage is not pure software.

Then the third category is true software-only software-defined storage. These are companies that are 100 percent software companies. Their software runs on any infrastructure, on any server, on any flash, and can integrate with any kind of a stack.

[Figure: NexentaStor. Source: Nexenta]

Nexenta’s software-defined storage runs with Red Hat, Canonical, Docker, VMware, Citrix, and Microsoft. We also work with VMware hand-in-hand as partners in the marketplace. Rather than being a software defined storage solution for a specific stack, our vision is a much more expansive, open one.

Our vision is keeping our software-defined storage platform truly independent. Nexenta is the only private company with a customer base of almost 6,000 customers with a very large partner base and OEM relationships that can deliver on any stack, on any server, any disk. That’s our differentiation from the rest of the world.

Jerome:  What challenges does that create for you?

Tarkan: We are still a small company. Although we are growing at a healthy rate (with 80-plus gross margins), we still fight a big fight with very large vendors. Some of these large companies might see open source as a little niche play. But let me tell you, we have a strategic solution supporting a lot of our partner systems like server vendors, disk vendors, flash vendors, and cloud platform vendors. Our open model is expansive; but, at the same time, it is very open and aggressive in the marketplace through our partnerships.

Ultimately, our customers tell us this is the way to go. That is the reason we are not shooting for the sun but for the Milky Way. Hopefully we are going to minimally end up at Mars, not in San Carlos.

In Part 1 of this interview series, Tarkan provides his definition of Software-Defined Storage (SDS) and then calls out storage providers for holding their customers hostage with overpriced and inflexible storage solutions.

In the next and final installment in this interview series, Tarkan elaborates on how SDS is moving up to host more tier one applications.




Server-based Storage Makes Accelerating Application Performance Insanely Easy

In today’s enterprise data centers, when one thinks performance, one thinks flash. That’s great. But that thought process can lead organizations to think that “all-flash arrays” are the only option they have to get high levels of performance for their applications. That thinking is now outdated. The latest server-based storage solution from Datrium illustrates how accelerating application performance just became insanely easy: simply click a button rather than upgrade hardware in the environment.

As flash transforms the demands of application owners, organizations want more options to cost-effectively deploy and manage it. These include:

  • Putting lower cost flash on servers as it performs better on servers than across a SAN.
  • Hyper-converged solutions have become an interesting approach to server-based storage. However, concerns remain about fixed compute/capacity scaling requirements and server hardware lock-in.
  • All-flash arrays have taken off in large part because they provide a pool of shared flash storage accessible to multiple servers.

Now a fourth, viable flash option has appeared on the market. While I have always had some doubts about server-based storage solutions that employ server-side software, today I changed my viewpoint after reviewing Datrium’s DVX Server-powered Storage System.

Datrium has obvious advantages over arrays as it leverages vast, affordable, and often under-utilized server resources. But unlike hyper-converged systems, it scales flexibly and does not require a material change in server sourcing.

To achieve this end, Datrium has taken a very different approach with its “server-powered” storage system design. In effect, Datrium split speed from durable capacity in a single end-to-end system. Storage performance and data services tap host compute and flash cache, driven by Datrium software that is uploaded to the virtual host. Datrium then employs its DVX appliance, an integrated external storage appliance, that permanently holds data and orchestrates how the DVX system protects application data in the event of server or flash failure.

This approach has a couple of meaningful takeaways versus traditional arrays:

  • Faster flash-based performance given it is local to the server versus accessed across a SAN
  • Lower cost since server flash drives cost far less than flash drives found on an all-flash array.

But it also addresses some concerns that have been raised about hyper-converged systems:

  • Organizations may independently scale compute and capacity
  • Plugs into an organization’s existing infrastructure.

Datrium Offers a New Server-based Storage Paradigm

[Figure: Datrium DVX stateless server architecture. Source: Datrium]

Datrium DVX provides the different approach needed to create a new storage paradigm. It opens new doors for organizations to:

  1. Leverage excess CPU cycles and flash capacity on ESX servers. ESX servers now exhibit the same characteristics that the physical servers they replaced once did: they have excess, idle CPU. By deploying server-based storage software at the hypervisor level, organizations can harness this excess, idle CPU to improve application performance.
  2. Capitalize on lower-cost server-based flash drives. Regardless of where flash drives reside (server-based or array-based), they deliver high levels of performance. However, server-based flash costs much less than array-based flash while providing greater flexibility to add more capacity going forward.

Accelerating Application Performance Just Became Insanely Easy

Access to excess server-based memory, CPU, and flash combines to offer another feature that array-based flash can never deliver: push-button application performance. By default, when the Datrium storage software installs on the ESX hypervisor, it limits itself to 20 percent of the vCPU available to each VM. However, not every VM uses all of its available vCPU, with many VMs using only 10-40 percent of their available resources.

Using Datrium’s DIESL Hyperdriver Software version 1.0.6.1, VM administrators can non-disruptively tap into these latent vCPU cycles. Using Datrium’s new Insane Mode, they may increase the available vCPU cycles a VM can access from 20 to 40 percent with the click of a button. While the host VM must have latent vCPU cycles available to accomplish this task, this is a feature that array-based flash would be hard-pressed to ever offer and almost certainly could never do with the click of a button.
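
The arithmetic behind the latent vCPU cycles claim is easy to check. The sketch below uses hypothetical numbers (a 4-vCPU VM running at 30 percent utilization) to show how much headroom remains when the storage software’s share grows from 20 to 40 percent; it is an illustration of the reasoning, not Datrium’s actual admission logic.

```python
# Illustrative arithmetic only; the VM size and utilization figure are hypothetical,
# not Datrium measurements.
VM_VCPUS = 4
vm_busy_fraction = 0.30   # the article notes many VMs use only 10-40% of their vCPU

def idle_vcpus_after(storage_share: float) -> float:
    """vCPUs still idle after the guest workload and the storage software take their shares."""
    return VM_VCPUS * (1 - vm_busy_fraction - storage_share)

for mode, share in (("default 20%", 0.20), ("Insane Mode 40%", 0.40)):
    headroom = idle_vcpus_after(share)
    verdict = "latent cycles available" if headroom >= 0 else "insufficient latent cycles"
    print(f"{mode}: {headroom:.1f} vCPUs idle -> {verdict}")
```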

Server-based storage designs have shown a lot of promise over the years but have not really had the infrastructure available to them to build a runway to success. That has essentially changed and Datrium is one of the first solutions to come to market that recognizes this fundamental change in the infrastructure of data centers and has brought a product to market to capitalize on it. As evidenced by the Insane Mode in its latest software release, organizations may now harness next generation server-based storage designs and accelerate application performance while dramatically lowering complexity and costs in their environment.




Continued Benefits and Need for Agent-based Backup

The advent of agent-less backup makes it easy to believe that the end of agent-based backup is nigh. Nothing is further from the truth. While agent-less backup addresses many challenges around the protection and recovery of virtual machines (VMs), it is no panacea, and compelling reasons persist for organizations to continue to offer agent-based backup as an alternative to agent-less backup. Consider:

  1. Agents can better monitor server CPU and tune backup performance accordingly. Agent-less backups still incur overhead on the underlying physical machine as they consume server CPU and memory while the backup occurs. The consumption of these resources may, in turn, impact other VMs running on the host. Using an agent-based approach, organizations may better monitor the CPU and/or memory consumption that the backup process places on the host. If the backup process does impact applications on other VMs on that host, an agent-based backup is better positioned to throttle the backup process to lessen its effect on other VMs running on that host (a minimal sketch of this throttling approach follows this list).
  2. Agents better facilitate understanding of database or email applications as well as creating application-consistent backups. Almost every database and email application generates logs that must be captured and then applied to ensure their successful recovery. Capturing all of this metadata almost always necessitates the deployment of some type of agent on the VM to ensure this recovery. Even a number of backup solutions that promote themselves as “agent-less” still employ the use of an agent at the beginning of a backup of these types of applications to capture the data necessary to ensure a successful recovery. The main difference is that at the end of the backup they remove their agent while agent-based solutions have an agent permanently residing on the VM.
  3. Agent deployment and management is less of an issue. Agent-less backup certainly does eliminate the time, effort, and management overhead associated with deploying agents initially and then managing them long term. However, agent-based backup solutions have taken numerous steps over the years to automate the initial deployment of agents and then maintain them long term. Through their integration with management consoles such as Microsoft Management Console (MMC) and/or VMware vSphere vCenter, organizations may now deploy agents on VMs managed by this software as part of each VM’s setup and/or ongoing management.
  4. Agents gather more detailed, technical information about the volumes and applications within the VM. A key drawback of agent-less backup is its lack of visibility into the data contained within each VM. As such, when backups complete, they often lack the metadata needed to quickly restore specific components within the VM, such as specific files or folders. Rather, the entire VM must first be restored and mounted before the administrator can navigate to and recover the data in question. Agent-based approaches capture and retain this type of metadata so administrators may perform such detailed recoveries more quickly.
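To make the first point above more concrete, here is a minimal sketch of how an in-guest backup agent might pace its data transfer based on the CPU load it observes. The sampling library, thresholds, and helper functions are illustrative assumptions only and do not reflect any particular backup product.

```python
# Illustrative sketch: an in-guest agent throttling its backup stream when
# CPU utilization climbs. All names and thresholds are hypothetical.
import time
import psutil  # assumed available for CPU sampling

MAX_RATE_MBPS = 200        # full-speed transfer rate to the backup target
MIN_RATE_MBPS = 25         # floor so the backup still makes progress
CPU_HIGH_WATERMARK = 75.0  # percent CPU at which throttling begins

def next_transfer_rate(current_rate_mbps: float) -> float:
    """Return the transfer rate to use for the next backup chunk."""
    cpu = psutil.cpu_percent(interval=1.0)  # sample CPU over one second
    if cpu > CPU_HIGH_WATERMARK:
        # Other workloads need the CPU: back off by 25 percent.
        return max(MIN_RATE_MBPS, current_rate_mbps * 0.75)
    # Headroom available: ramp back up gradually.
    return min(MAX_RATE_MBPS, current_rate_mbps * 1.10)

def send_at_rate(chunk: bytes, rate_mbps: float) -> None:
    # Placeholder: a real agent would pace writes to the backup target here.
    time.sleep(len(chunk) / (rate_mbps * 1024 * 1024))

def run_backup(chunks) -> None:
    rate = MAX_RATE_MBPS
    for chunk in chunks:
        rate = next_transfer_rate(rate)  # re-evaluate before every chunk
        send_at_rate(chunk, rate)
```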



Why Everyone Needs to Watch VMware CEO Pat Gelsinger’s Portion of the VMworld 2015 Opening Keynote

VMware and its suite of products have largely been designed by geeks, for geeks, with VMware pulling no punches about this claim. VMware’s CEO, Pat Gelsinger, is himself a self-professed geek which is made evident a couple of times in his VMworld keynote. But where he personally and VMware corporately have made big steps forward in the last few years is stripping out the technical mumbo-jumbo that can so easily beset VMware’s product suite and better translating its value proposition into “business speak.” This change in focus and language was put on full display during Gelsinger’s portion of the opening keynotes that kicked off the VMworld 2015 conference.

Virtualization in general and VMware in particular have been driving forces behind many of the changes that have occurred over the last 5-10 years behind the scenes in enterprise data centers. Through virtualization, VMware has eliminated the need to deploy hundreds of physical servers and tens or hundreds of terabytes of storage. VMware then optimizes these virtualized physical resources, enabling organizations to fundamentally transform their IT operations through better utilization, greater application mobility, and faster recovery.

But while VMware and its products have resonated with folks on the IT side of the house, when you mention VMware to people on the business side of the house, the response is often, “What is VMware and why should I care from a broader business perspective?”

Those are good questions to which VMware did not really have good, succinct answers. While it could explain all of its technical benefits to the folks on the business side of the house and how it lowered costs, VMware could not address how it could help the business grow revenue and sales. In other words, if VMware was ever going to be relevant to people outside of the data center, it needed to change its messaging and add components to its product suite to do the same.

The speakers who preceded Pat Gelsinger in VMworld’s opening keynotes did that in part. One focused on VMware’s Airwatch, which gives individuals the flexibility to use almost any type of device (desktop, laptop, tablet or smart phone) they want to access whatever enterprise applications they need. To do so, Airwatch provides identity management, which gives individuals accessing enterprise applications the flexibility and single sign-on capabilities they want while providing enterprise IT the control and security it needs.

But it was Gelsinger’s keynote that really set the table as to why VMware is beginning to establish and differentiate itself as an entity that should be taken seriously outside of the data center. Beginning at the 54:00 minute marker in this video of the opening keynotes at VMworld, Pat Gelsinger explains how the world has changed and why VMware needs to change to remain relevant. Consider:

  • In 1995, 16 million people were connected to the Internet. Today over 3 billion people are connected in some way to the Internet, with that number forecast to double to 6 billion by 2030, which will represent over 80 percent of the world’s population.
  • Mobile infrastructure and wireless technologies are enabling emerging economies to connect much more quickly to the Internet. These technologies allow third world countries to leapfrog or bypass the need to build out hard-wired infrastructures.
  • The average number of connected devices per person is steadily increasing. In 1995 only 1 in 10 people had a connected device. Now each person has an average of 3 connected devices. By 2020 that will double to roughly 6 connected devices per person.
  • There are over 7,200 objects orbiting Earth. The vast majority are dedicated to delivering mobile connectivity.

These points illustrate that solving infrastructure issues inside the data center, as VMware has done, was key to creating and supporting these types of technologies. However, these new technologies are creating their own set of challenges, and VMware must adapt both its technology to address these new requirements and its messaging to explain the benefits.

It was at this point that Gelsinger drew some good analogies between the changes occurring in the tech world today and how they are comparable to major points of transition in the past.

For instance, it never occurred to me to compare the Revolutionary War of 1776 between the US and Britain with some of the changes going on in technology today. However, that illustration in his presentation exemplified how the technology changes going on in the world today are very similar to that time, and that VMware, unless it proceeds carefully, could be usurped by an upstart in much the same way that England was beaten back by the United States.

This point was probably best illustrated by Eric Pearson, the CIO of InterContinental Hotels Group, who appeared in one of the videos during Gelsinger’s portion of the keynote. In his video appearance, Pearson articulates the changes going on in the world and what they mean to businesses when he says, “It is no longer the big beating the small. It is the fast beating the slow.”

That viewpoint might explain what I detected in VMware as a whole and in Gelsinger in particular: a new willingness to take greater risks. Over the last few years, even during Gelsinger’s tenure as CEO, I sensed that VMware was not making the bold, forward-looking moves it needed to remain at the forefront of technology.

While VMware was not falling behind or becoming irrelevant, it was becoming a follower rather than a leader in virtualization. This left doors open through which upstarts were taking the initiative and walking, in large part because they were faster and VMware was slower. VMware now seems to grasp that the least risky approach to growing its business is to take risks, as not taking the right technology risks is simply too risky.

This gets to the heart of why I encourage people to watch Gelsinger’s portion of the opening series of keynotes at VMworld. Yes, you will learn more about VMware, many of the new technologies it is bringing to market and how Gelsinger foresees these technologies helping VMware remain relevant in a rapidly changing world. Yet maybe more importantly, his presentation provides some sound advice for each of us to follow. By identifying the broader trends and changes going on in the world around us and then having the courage to make the changes in our lives and/or businesses, we can remain relevant as these changes occur and ideally even position ourselves as leaders as they take place.




SimpliVity OmniStack 3.0 Illustrates Why Hyper-converged Infrastructures are Experiencing Hyper-Growth

As the whole technology world (or at least those intimately involved with the enterprise data center space) takes a breath before diving head first into VMworld next week, a few vendors are jumping the gun and making product announcements in advance of it. One of those is SimpliVity which announced its latest hyper-converged offering, OmniStack 3.0, this past Wednesday. In so doing, it continues to put a spotlight on why hyper-converged infrastructures and the companies delivering them are experiencing hyper-growth even in a time of relative market and technology uncertainty.

The business and technology benefits that organizations already using hyper-converged infrastructure solutions are experiencing are pretty stunning. SimpliVity shared that among the organizations already using its solutions, one of the largest benefits they realize is a reduction in the amount of storage they need to procure and manage across their enterprise.

In that vein, about 33 percent or 180 of its 550+ customers achieve 100:1 data efficiency. In layman’s terms, for each 1TB of storage that they deploy as part of the SimpliVity OmniCube family, they eliminate the need to deploy an additional 99TBs of storage.

In calculating this ratio, SimpliVity measures the additional storage capacity across production, archive and backup that organizations normally would have had to procure using traditional data center management architectures and methods. By instead deploying and managing this storage capacity as part of a hyper-converged infrastructure and then deduplicating and compressing the data stored in it, many of its customers report hyper-storage reductions accompanied by similar cost savings.
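As a simple illustration of the arithmetic behind such a ratio, the sketch below computes a data efficiency figure from logical versus physical capacity. The capacity values are assumptions chosen to mirror the 100:1 example, not actual SimpliVity customer data.

```python
# Minimal illustration of how a data efficiency ratio is computed.
# The capacity figures below are assumed for the example only.
logical_tb = 100.0   # capacity that would be needed across production,
                     # backup and archive using traditional methods
physical_tb = 1.0    # capacity actually consumed after dedupe/compression

efficiency_ratio = logical_tb / physical_tb
print(f"Data efficiency: {efficiency_ratio:.0f}:1")
print(f"Storage avoided: {logical_tb - physical_tb:.0f} TB per TB deployed")
```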

In its OmniStack 3.0 announcement from earlier this week, SimpliVity builds upon this foundation so organizations and/or enterprises may more fully experience its benefits regardless of their size or location. Two key new features that its OmniStack 3.0 release delivers:

  • Right-sized, right-priced product for ROBOs. The OmniCube CN-1200 delivers most if not all of the software functionality that SimpliVity’s larger models offer and does so in a form factor (~2.7TB usable) appropriately sized for remote and branch offices (ROBOs). The more intriguing part of this story, however, is that the CN-1200 may be managed centrally alongside all of the other SimpliVity models in a common console. In this way, ROBOs can get the benefits of having a hyper-converged solution in their environment without needing to manage it. They can instead leave those management responsibilities to the experts back in the corporate data center.
  • Centralized, automated data protection and recovery for ROBOs. I have personally always found it perplexing that application data management and data protection and recovery are largely treated as two separate, discrete tasks within data centers when they are so interrelated. Hyper-converged infrastructure solutions as a whole have been actively breaking down this barrier with the OmniStack 3.0 blasting another sizeable hole in this wall in two different ways as it pertains to ROBOs.

First, SimpliVity has created a hub and spoke architecture. Using this topology, the hub or central management console dynamically probes the enterprise network, detects models in these ROBO locations and then adds them to its database of managed devices. This is done without requiring any user input at the ROBO locations.

Second, data protection and recovery are handled in its central management console, so no additional backup software is necessarily required. The new feature in its OmniStack 3.0 release is the option to change backup policies in bulk. In this way, organizations with ROBOs across dozens of offices, holding perhaps hundreds or even thousands of VMs, can centrally add, change or update a backup or restore policy and then apply that change across all of the protected VMs in as little as a minute.
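The sketch below illustrates the general idea of a bulk policy change applied from a central point across a fleet of protected ROBO VMs. The data model and function names are hypothetical; they are not SimpliVity’s actual OmniStack interfaces.

```python
# Hypothetical sketch of a bulk backup-policy change applied centrally
# to every protected VM across many ROBO sites. The classes and function
# names are illustrative only, not OmniStack's real data model or API.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class BackupPolicy:
    name: str
    frequency_minutes: int
    retention_days: int

@dataclass
class ProtectedVM:
    name: str
    site: str
    policy: Optional[BackupPolicy] = None

def apply_policy_in_bulk(vms: List[ProtectedVM], site_prefix: str,
                         new_policy: BackupPolicy) -> List[str]:
    """Assign new_policy to every VM whose site matches site_prefix."""
    updated = []
    for vm in vms:
        if vm.site.startswith(site_prefix):
            vm.policy = new_policy
            updated.append(vm.name)
    return updated

# Example: tighten the backup schedule for every VM in all remote offices.
hourly = BackupPolicy("robo-hourly", frequency_minutes=60, retention_days=30)
fleet = [ProtectedVM(f"vm-{i:04d}", site=f"robo-{i % 40:02d}")
         for i in range(1200)]
changed = apply_policy_in_bulk(fleet, site_prefix="robo-", new_policy=hourly)
print(f"Updated {len(changed)} VMs in a single operation")
```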

Using this built-in data protection feature, SimpliVity reports that 63 percent, or approximately 345 of its customers, can now perform recoveries of any of their applications across their enterprises in minutes as opposed to hours or days.

VMworld 2015 may have the industry as a whole hitting the pause button until everyone sees what types of announcements VMware makes. However, hyper-converged infrastructure providers such as SimpliVity are hitting the fast-forward button by bringing solutions to market that are forcing organizations of all sizes to re-think how they are going to deploy and implement virtualized infrastructures going forward.

No longer can or should organizations treat hypervisors, data management and data protection software, and server, storage and networking hardware as separate purchases that are then left to IT managers to configure and make work. With these delivered as single, comprehensive hyper-converged solutions, and with SimpliVity in particular making its OmniCube models more price competitive and even easier to deploy and manage in ROBOs, it is no wonder that more organizations are taking a hard look at deploying this type of solution in their environments.




Hyper-converged Infrastructures Poised to Go Hyper-Growth in 2016

Hyper-converged infrastructures are quickly capturing the fancy of end-user organizations everywhere. They bundle hypervisor, server and storage in a single node and provide the flexibility to scale-out to form a single logical entity. In this configuration, they offer a very real opportunity for organizations to economically and practically collapse their existing infrastructure of servers and storage arrays into one that is much easier to implement, manage and upgrade over time.

These benefits have many contemplating whether hyper-converged infrastructures foretell the end of big box servers and storage arrays in favor of a data center that is solely hyper-converged. There is a great deal of merit in this viewpoint, as an infrastructure shift to hyper-converged is already occurring with a high likelihood that it will go hyper-growth as soon as 2016.

The driving force behind the implementation and adoption of hyper-converged infrastructures is largely two-fold: the advent of server virtualization, which has already been going on for the better part of a decade, and the more recent rise of flash memory as a storage medium.

Server virtualization has reduced the number of physical servers that organizations have had to implement as, using this technology, they can host multiple virtual machines or VMs on a single physical server. The downside of server virtualization to date has been the inability of storage media – primarily internal hard disk drives (HDDs) – to meet the performance demands of hosting multiple VMs on a single physical server.

This has led many organizations to use externally attached storage arrays that can provide the levels of performance that multiple VMs hosted on a single server require. Unfortunately, externally attached storage arrays, especially when they are networked together, become very costly to implement and then manage.

In fact, many find that the costs of creating and managing this networked storage infrastructure can offset whatever cost savings they realized by virtualizing their servers in the first place. Then when one starts to factor in the new levels of complexity that networked storage introduces, the headaches associated with data migration and the difficulty in getting hypervisors to optimally work with the underlying storage arrays, it makes many wonder why they went down this path in the first place.

The recent rise of flash memory – typically sold as solid state drives (SSD) – changes the conversation. Now for the first time organizations can get the type of performance once only available in a storage array in the form factor of a disk drive that may be inserted into any server. Further, by putting the storage capacity back inside the server, they eliminate the complexity and costs associated with creating a networked storage environment.

These two factors have led to the birth and rapid rise of hyper-converged infrastructures. Organizations can forgo today’s networked server and storage solutions in favor of a single hyper-converged solution. Further, due to their scale-out capabilities, as more computing resources or storage capacity are needed, additional nodes may be added to an existing hyper-converged implementation.

These benefits of hyper-convergence have every major big box IT vendor from Dell and HP to Cisco and EMC proclaiming that they now have a hyper-converged story that competes with up-and-comers like Maxta, Nutanix, Simplivity and Springpath. These big box vendors recognize that a hyper-converged solution from any of these emerging providers has the potential to make their existing big box server, networking or storage array story one that no one wants to hear any longer.

One question that organizations need to answer is, “How quickly will hyper-converged solutions go hyper-growth?” Near term (0-12 months), I see hyper-converged infrastructures cutting into the respective market shares of these different systems as organizations test drive them in their remote and branch offices as well as in their test and development environments. Once these trials are complete, and based upon what I am hearing from enterprise shops and the level of interest they are displaying in this technology, 2016 could well be the year that hyper-converged goes hyper-growth.

At this point, it is still too early to definitively conclude the full impact that hyper-converged infrastructures will ultimately have on today’s existing data center infrastructures and how quickly that impact will be felt. But when one looks at how many new vendors are coming out of the woodwork, how quickly existing vendors are bringing hyper-converged infrastructure solutions to market and how much end-user interest there is in this technology, I am of the mindset that the transition to hyper-converged infrastructures may happen much faster than anyone anticipates.




HP 3PAR StoreServ’s VVols Integration Brings Long Awaited Storage Automation, Optimization and Simplification to Virtualized Environments

VMware Virtual Volumes (VVols) stands poised to fundamentally and positively change storage management in highly virtualized environments that use VMware vSphere. However, enterprises will only realize the full benefits that VVols have to offer by implementing a backend storage array that stands ready to take advantage of the VVols architecture. The HP 3PAR StoreServ family of arrays provides the virtualization-first architecture, along with the simplicity of implementation and ongoing management, that organizations need to realize the benefits the VVols architecture provides both short and long term.

VVols Changes the Storage Management Conversation

VVols eliminate many of the undesirable aspects associated with managing external storage array volumes in networked virtualized infrastructures today. Using storage arrays that are externally attached to ESXi servers over either Ethernet or Fibre Channel (FC) storage networks, organizations currently struggle with issues such as:

  • Deciding on the optimal block-based protocol to achieve the best mix of cost and performance
  • Provisioning storage to ESXi servers
  • Lack of visibility into the data placed on LUNs assigned to specific VMs on ESXi servers
  • Identifying and reclaiming stranded storage capacity
  • Optimizing application performance on these storage arrays

The VVols architecture changes the storage management conversation in virtualized environments that use VMware in the following ways:

  • Protocol agnostic. VVols minimize or even eliminate the need to decide which protocol is “best,” as VVols work the same way whether block or file-based protocols are used.
  • Uses pools of storage. Storage arrays make raw capacity available in a unit known as a VVol Storage Container to one or more ESXi servers. As each VM is created, the VMware ESXi server allocates the proper amount of array capacity that is part of the VVol Storage Container to the VM.
  • Heightened visibility. Using the latest VMware APIs for Storage Awareness (VASA 2.0), the ESXi server lets the storage array know exactly which array capacity is assigned to and used by each VM.
  • Automated storage management. Knowing where each VM resides on the array facilitates the implementation of automated storage reclamation routines as well as performance management software. Organizations may also offload functions such as snapshots, thin provisioning and the overhead associated with these tasks onto the storage array.

The availability of VVols makes it possible for organizations to move much closer to achieving the automated, non-disruptive, hassle-free storage array management experience in virtualized environments that they have wanted and waited years to implement.
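To ground the concepts above, here is a minimal conceptual model of a VVol Storage Container: the array exposes a pool of raw capacity, a per-VM virtual volume is carved from it as each VM is created, and deleting the VM returns the capacity to the pool. This is an illustrative sketch, not VMware’s VASA API or any array vendor’s implementation.

```python
# Conceptual sketch of a VVol Storage Container: raw array capacity is
# pooled, per-VM virtual volumes are carved from the pool, and capacity
# is reclaimed when a VM is deleted. Illustrative only.
class StorageContainer:
    def __init__(self, name: str, capacity_gb: int):
        self.name = name
        self.capacity_gb = capacity_gb
        self.vvols = {}  # vm_name -> allocated GB

    def free_gb(self) -> int:
        return self.capacity_gb - sum(self.vvols.values())

    def allocate_vvol(self, vm_name: str, size_gb: int) -> str:
        if size_gb > self.free_gb():
            raise RuntimeError("container out of capacity")
        # The array now knows exactly which capacity belongs to which VM.
        self.vvols[vm_name] = size_gb
        return f"{self.name}/{vm_name}"

    def reclaim_vvol(self, vm_name: str) -> int:
        # Deleting the VM frees its capacity; nothing is left stranded.
        return self.vvols.pop(vm_name, 0)

container = StorageContainer("gold-tier", capacity_gb=10_240)
container.allocate_vvol("sql-prod-01", 500)
container.allocate_vvol("web-frontend-02", 120)
container.reclaim_vvol("web-frontend-02")
print(container.free_gb())  # 9740 GB: reclaimed capacity returns to the pool
```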

Robust, VMware ESXi-aligned Storage Platform a Prerequisite to Realizing VVols Potential

Yet the availability of VVols from VMware does not automatically translate into organizations being able to implement them by simply purchasing and installing any storage array. To realize the potential storage management benefits that VVols offer requires deploying a properly architected storage platform that is aligned with and integrated with VMware ESXi. These requirements make it a prerequisite for organizations to select a storage array that:

  • Is highly virtualized. Each time array capacity is allocated to a VM, a virtual volume must be created on the storage array. Allocating a virtual volume that performs well and uses the most appropriate tier of storage for each VM requires a highly virtualized array.
  • Supports VVols. VVols represent a significant departure from how storage capacity has been managed to date in VMware environments. As such, the storage array must support VVols.
  • Tightly integrates with VMware VASA. Simplifying storage management only occurs if a storage array tightly integrates with VMware VASA. This integration automates tasks such as allocating virtual volumes to specific VMs, monitoring and managing performance on individual virtual volumes and reclaiming freed and stranded capacity on those volumes.

HP 3PAR StoreServ: Locked and Loaded with VVols Support

The HP 3PAR StoreServ family of arrays comes locked and loaded with VVols support. This enables any virtualized environment running VMware vSphere 6.0 on its ESXi hosts to use a VVol protocol endpoint to communicate directly with HP 3PAR StoreServ storage arrays running the HP 3PAR OS 3.2.1 MU2 P12 or later software.

Using FC protocols, the ESXi server(s) integrate with the HP 3PAR StoreServ array using the various APIs natively found in VMware vSphere. A VASA Provider that recognizes vSphere commands is built directly into HP 3PAR StoreServ arrays. It then automatically performs the appropriate storage management operations, such as carving up and allocating a portion of the HP 3PAR StoreServ storage array capacity to a specific VM or reclaiming the capacity associated with a VM that has been deleted and is no longer needed.

Yet perhaps what makes HP 3PAR StoreServ’s support of VVols most compelling is that the pre-existing HP 3PAR OS software carries forward. This gives the VMs created on a VVols Storage Container on the HP 3PAR StoreServ array access to all of the same, powerful data management services that were previously only available at the VMFS level on HP 3PAR StoreServ LUNs. These services include:

  • Adaptive Flash Cache that dedicates a portion of the HP 3PAR StoreServ’s available SSD capacity to augment its available primary cache and then accelerates response times for applications with read-intensive I/O workloads.
  • Adaptive Optimization that optimizes service levels by matching data with the most cost-efficient resource on the HP 3PAR StoreServ system to meet that application’s service level agreement (SLA).
  • Priority Optimization that identifies exactly what storage capacity is being utilized by each VM and then places that data on the most appropriate storage tier according to each application’s SLA so a minimum performance goal for each VM is assured and maintained.
  • Thin Deduplication that first assigns a unique hash to each incoming write I/O. It then leverages HP 3PAR’s Thin Provisioning metadata lookup table to quickly compare hashes, identify duplicate data and, when matches are found, deduplicate like data (a simplified sketch of this hash-and-lookup pattern appears after this list).
  • Thin Provisioning that only allocates very small chunks of capacity (16 KB) when writes actually occur.
  • Thin Persistence that reclaims allocated but unused capacity on virtual volumes without manual intervention or VM timeouts.
  • Virtual Copy that can create up to 2,048 point-in-time snapshots of each virtual volume with up to 256 of them being available for read-write access.
  • Virtual Domains, also known as virtual private arrays, offer secure multi-tenancy for different applications and/or user groups. Each Virtual Domain may then be assigned its own service level.
  • Zero Detect that is used when migrating volumes from other storage arrays to HP 3PAR arrays. The Zero Detect technology identifies “zeroes” on existing volumes which represent allocated but unused space on those volumes. As HP 3PAR migrates these external volumes to HP 3PAR volumes, the zeroes are identified but not migrated so the space may be reclaimed on the new HP 3PAR volume.
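The Thin Deduplication and Thin Provisioning bullets above describe a hash-and-lookup pattern applied to small allocation units. The sketch below shows that pattern in simplified form on 16 KB chunks; the data structures are illustrative and are not HP 3PAR’s metadata format.

```python
# Simplified sketch of hash-based deduplication on fixed 16 KB chunks.
# Illustrative data structures only; not HP 3PAR's metadata format.
import hashlib

CHUNK_SIZE = 16 * 1024  # 16 KB write granularity

dedupe_table = {}    # chunk hash -> physical block id
physical_store = []  # unique chunks that actually consume capacity

def write_chunk(data: bytes) -> int:
    """Return the physical block id for this chunk, storing it only if new."""
    digest = hashlib.sha256(data).hexdigest()
    if digest in dedupe_table:        # duplicate: reference the existing block
        return dedupe_table[digest]
    physical_store.append(data)       # unique: consume real capacity
    dedupe_table[digest] = len(physical_store) - 1
    return dedupe_table[digest]

def write_stream(payload: bytes) -> list:
    return [write_chunk(payload[i:i + CHUNK_SIZE])
            for i in range(0, len(payload), CHUNK_SIZE)]

# Ten identical 16 KB writes consume a single physical chunk.
blocks = write_stream(b"A" * CHUNK_SIZE * 10)
print(len(blocks), "logical chunks ->", len(physical_store), "physical chunk(s)")
```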

HP 3PAR StoreServ and VVols Bring Together Storage Automation, Optimization and Simplification

HP 3PAR StoreServ arrays are architected and built from the ground up to meet the specific storage requirements of virtualized environments. However VMware’s introduction of VVols further affirms this virtualization-first design of the HP 3PAR StoreServ storage arrays as together they put storage automation, optimization and simplification within an organization’s reach.

HP 3PAR StoreServ frees organizations to immediately implement the new VVols storage architecture and take advantage of the granularity of storage management that it offers. By immediately integrating with and supporting VVols while bringing forward its existing, mature set of data management services, HP 3PAR StoreServ lets organizations take a long awaited step forward in automating and simplifying the deployment and ongoing storage management of VMs in their VMware environments.




The Four (4) Behind-the-Scenes Forces that Drive Many of Today’s Technology Infrastructure Buying Decisions

It is almost a given in today’s world that for almost any organization to operate at peak efficiency and achieve optimal results, it has to acquire and use multiple forms of technology as part of its business processes. However, what is not always so clear is the set of forces at work both inside and outside of the business that drive its technology acquisitions. While by no means a complete list, here are four (4) forces that DCIG often sees at work behind the scenes that influence and drive many of today’s technology infrastructure buying decisions.

  1. Keep Everything (All Data). Many organizations often start with the best of intentions when it comes to reining in data growth by deleting their aging or unwanted data. Then reality sets in as they consider the cost and time associated with managing this data in an optimal manner. At that point, they often find it easier, simpler, less risky and more cost effective to just keep the data.

New technologies heavily contribute to them arriving at this decision. Data compression and data deduplication minimize or eliminate redundant data. Ever higher capacity hard disk drives (HDDs) facilitate storing more data in the same data center footprint. These technologies amplify each other’s benefits. Further, with IT staffing levels staying flat or even dropping in many organizations, no one has the time to manage the data or wants to risk deleting data that is later deemed needed.
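One way to see how these technologies amplify each other is that their reduction ratios multiply. The ratios in the sketch below are assumptions for illustration, not measured figures.

```python
# Reduction ratios from deduplication and compression multiply.
# The 4:1 and 2.5:1 figures below are assumptions for the example.
dedupe_ratio = 4.0        # e.g., 4:1 from deduplication alone
compression_ratio = 2.5   # e.g., 2.5:1 from compression alone

combined = dedupe_ratio * compression_ratio
raw_tb = 100.0            # logical data kept under a "keep everything" policy
stored_tb = raw_tb / combined
print(f"Combined reduction: {combined:.0f}:1")
print(f"{raw_tb:.0f} TB of logical data fits in {stored_tb:.0f} TB of capacity")
```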

  2. Virtualize Everything. An initial motivation for many organizations to virtualize applications in the data center was to reduce both capital and operational expenditures. While those reasons persist, organizations now recognize that virtualizing everything pays many other dividends as well. These include faster application recoveries; better access to copies of production data; eliminating backup windows; and new opportunities for testing and developing existing and new applications.
  3. Instant Recovery. Almost all users within organizations expect continuous availability from all of their applications, regardless of the application’s tier within the organization. Moreover, instant recovery is now a realistic expectation on the part of most end-users. By virtualizing applications, using data protection solutions that offer continuous data protection for applications residing on physical machines, or deploying clustering software, applications that cannot be recovered in seconds or minutes should be becoming the exception rather than the rule.
  4. Real Time Analytics and Performance. As evidenced by the prior three points, organizations have more data than ever before at their fingertips and should be able to make better decisions in real time using that data. While this force is still in the early stages of becoming a reality, DCIG sees more evidence of it happening all the time, thanks in large part to the growing adoption of open source computing, the use of commodity or inexpensive hardware for mission critical processing and the growing availability of software that can leverage these resources and deliver on these new business requirements.

Technology infrastructure buying decisions are never easy and always carry some risk, but if organizations are to remain at peak efficiency and competitive, not having the right technologies is NOT an option. Understanding the seen and unseen forces that are often at work behind the scenes can help organizations better understand and prioritize which technologies they should buy, as well as help quantify the business benefits they should expect to see after acquiring them and putting them in place.




Three Specific Use Cases for the Successful Implementation of Software-defined Storage

The introduction of first generation software-defined storage solutions (often implemented as appliance and storage controller-based virtualization) went terribly awry when they were originally introduced years ago for reasons that the industry probably only now fully understands and can articulate well. While the value of software-defined storage has never been disputed, best practices associated with its implementation, management and support short and long term took time to develop. We are now seeing the fruits of these efforts as evidenced by some of the successful ways in which software-defined storage solutions are packaged and shipped.

The impact that software-defined storage solutions are poised to have on the traditional storage market is significant. Recent IDC research suggests that traditional stand-alone hybrid systems (a mix of disk and flash) are expected to decline at a 13 percent compound annual rate while new system (all-flash, hyperconverged and software-defined) adoption will grow at a 22 percent clip from 2014 to 2018.

The exact percentage that software-defined storage solutions will contribute to this overall 22% growth rate is unclear. However it is clear that doubts about their short and long term viability have largely evaporated.
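For readers who want to translate those rates into cumulative terms, the short calculation below compounds the cited figures over the 2014-2018 window (treated here as four compounding periods). The rates come from the IDC figures quoted above; the rest is simple arithmetic.

```python
# Compound the cited annual rates over 2014-2018 (four periods assumed).
years = 4
new_system_growth = (1 + 0.22) ** years    # all-flash, hyperconverged, SDS
traditional_decline = (1 - 0.13) ** years  # stand-alone hybrid systems

print(f"New systems grow to roughly {new_system_growth:.1f}x their 2014 level")
print(f"Traditional hybrids shrink to roughly {traditional_decline:.2f}x")
```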

Contributing to this increased confidence in using software-defined storage is the growing number of successful implementations of this technology on appliances and storage controllers. While software-defined storage has had a presence on these devices for well over a decade, the increased availability of software-defined storage solutions from vendors and growing adoption by end-users stem from an improved ability to mitigate the issues associated with software-defined storage, along with better best practices for its initial implementation and ongoing management that play to its inherent strengths.

Specific use cases where DCIG is aware of software-defined storage (SDS) solutions being successfully implemented and used on appliance and storage controller-based devices include:

  • Non-disruptive (or near non-disruptive) data migrations. This is historically where appliance and storage controller-based SDS solutions have been used successfully for years. Once the appliance or storage controller SDS solution is inserted into an existing storage network between the server and back-end storage, it is used to virtualize the storage volumes on both existing and new storage arrays and then migrate the data from the existing array to the new one.

The appeal of using this approach was that the appliance or storage controller could be inserted non-disruptively or nearly non-disruptively (application downtime of only seconds or minutes) into the environment. Data may then be migrated from one storage array to another while the application continues to operate unaware that a data migration is occurring.

The HP 3PAR StoreServ storage arrays with their SDS solution now provide such an option. When migrating from an existing HP 3PAR, EMC VNX or EMC VMAX array to a new HP 3PAR StoreServ array, organizations may deploy the new HP 3PAR StoreServ, virtualize the volumes on the existing storage arrays, non-disruptively migrate the data to the storage on the new HP 3PAR StoreServ array and then cut the application(s) over to the new HP 3PAR StoreServ array with minimal to no application downtime.
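The sketch below outlines, at a very high level, the insert / virtualize / migrate / cut-over sequence described above. Every function and name here is a placeholder for steps an administrator or the SDS layer would perform; none are real HP 3PAR or EMC commands.

```python
# High-level, hypothetical sketch of a non-disruptive SDS-based migration:
# virtualize the source volume, copy data in the background, then flip the
# backing device. Placeholders only; not real array commands.

def migrate_volume(volume: str, source_array: str, target_array: str):
    # 1. Present the source volume through the SDS layer so the host now
    #    sees a virtualized volume rather than the raw LUN.
    virtual_vol = {"name": volume, "backing": source_array}

    # 2. Copy extents in the background while the application keeps running
    #    against the virtualized volume, unaware a migration is underway.
    copy_passes = [f"copy {volume} extent {i}: {source_array} -> {target_array}"
                   for i in range(4)]

    # 3. Once the copy is in sync, flip the backing device and retire the
    #    old path; the application sees at most seconds of disruption.
    virtual_vol["backing"] = target_array
    return virtual_vol, copy_passes

vol, passes = migrate_volume("oracle_data_01", "legacy-VNX", "3PAR-StoreServ")
print(vol["backing"], f"after {len(passes)} background copy passes")
```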

  • Better managing deployments of utility storage. Many if not most organizations have a growing need to deploy large amounts of utility storage in their environments. Organizations increasingly have vast amounts of data whose value they cannot quantify but know is sufficiently valuable that they cannot easily or justifiably delete it. In these cases they often want to use storage arrays that are reliable, stable, economical (e.g., providing storage capacity at well under $1/GB), perform moderately well and remain easy to manage and scale.

The storage upon which this data resides needs relatively few bells and whistles. In other words, it typically does not need integration with any VMware APIs, will not host any Oracle databases, does not need any flash nor will it need any special automated storage tiering features. In short, the storage array deployed needs to be cheap and deep.

SDS solutions play nicely in these environments. Whether the SDS software resides on a storage controller (such as on a Dell EqualLogic, EMC Isilon, ExaBlox OneBlox or HP P4000 array) or on an appliance (DataCore SANSymphony, FalconStor FreeStor or IBM SVC), more storage capacity can be quickly and easily added to these environments and then just as easily managed and scaled since many of the interoperability and performance issues that have hindered SDS deployments in the past do not really come into play in these situations.

  • Heterogeneous vendor multi-tiered storage environments. One of the big issues with appliance and storage controller-based SDS solutions is that they attempted to do it all by virtualizing every vendor’s storage arrays. But by attempting to do it all, they often failed to deliver on one of the biggest benefits that SDS has to offer – creating a single pane of glass to manage all of the storage capacity and provide a common, standardized set of storage management features. Virtualizing all storage from all vendors made it too complicated to implement all of the features associated with each of the underlying arrays being virtualized.

IBM with its SAN Volume Controller (SVC) has smartly avoided this pitfall. Rather than trying to virtualize every vendor’s storage arrays and deliver all of their respective capabilities, its primary focus is to virtualize the various IBM storage arrays and deliver their respective capabilities. While organizations arguably sacrifice some choice and flexibility to buy from any storage vendor, many would rather have less choice with a more predictable environment than more choices with more risk. Further, IBM provides organizations with a sufficient number of storage array options (flash, hybrid, disk, etc.) that they get most if not all of the tiers of disk that they will need, the flexibility to manage all of this storage capacity centrally and the ability to present a common set of storage array features to all attached applications.

Software-defined storage may not yet be fully mature but neither is it a half-baked or poorly thought out solution anymore. Vendors have largely figured out how to best implement it so they can take advantage of its strengths while mitigating its risks and have developed best practices to do so. Ultimately, this developing and maturing set of best practices will probably contribute more to SDS’s long term success than any other new features that SDS solutions may offer now or in the future.



