When organizations deploy HCI at scale (more than eight nodes in a single logical configuration), cracks in its architecture can begin to appear unless they carefully examine how well it scales. On the surface, high-end and standard hyper-converged architectures deliver the same benefits. Each virtualizes compute, memory, storage networking, and data storage in a scale-out architecture that is simple to deploy and manage. They support standard hypervisor platforms. They provide their own data protection in the form of snapshots and replication. In short, the two architectures mirror each other in many ways. However, high-end and standard HCI solutions differ in functionality in ways that primarily surface in large virtualized deployments.
A virtualization-focused backup software play may be perceived as “too little, too late” with so many players in today’s backup space. However, many former virtualization-centric backup software plays (PHD Virtual and vRanger come to mind) have largely disappeared, while others got pricier and/or no longer do just VM backups. These changes have once again created a need for a virtualization-centric backup software solution. This plays right into the hands of the newly created HYCU as it formally tackles the job of ESX virtual machine (VM) backups in non-Nutanix shops.
Every vendor new to a market generally starts by introducing a product that satisfies a niche to gain a foothold in that market. Comtrade Software exemplified this premise earlier this year when it came to market with its HYCU software, which targets the protection of VMs hosted on the Nutanix AHV hypervisor. But to grow in a market, especially in the hyper-competitive virtual machine (VM) data protection space, one must expand to protect all market-leading hypervisors. Comtrade Software’s most recent HYCU release achieves that goal with its new support for VMware ESX.
Many organizations view hyper-converged infrastructure appliances (HCIAs) as foundational for the cloud data center architecture of the future. However, as part of an HCIA solution, one must also select a hypervisor to run on this platform. The VMware vSphere and Nutanix AHV hypervisors are two common choices, but key differences between them persist.
In recent months and years, many have come to question VMware’s commitment to the public clouds and containers used by enterprise data centers (EDCs). No one disputes that VMware has a solid footprint in EDCs and that it is in no immediate danger of being displaced. However, many have wondered how, or if, it will engage with public cloud providers such as Amazon, as well as how it will address the threats posed by Docker. At VMworld 2017, VMware showed new love for these two technologies in ways that should help alleviate these concerns.
Today, more than ever, organizations are looking to move to software-defined data centers. Whether they adopt software-defined storage, networking, computing, servers, security, or all of them as part of this initiative, they are starting to conclude that a software-defined world trumps the existing hardware-defined one. While I agree with this philosophy in principle, organizations need to dip their toes carefully into the software-defined waters rather than dive in head-first.
In today’s business world, where new technologies constantly come to market, there are signs that indicate when certain ones are gaining broader market adoption and are ready to go mainstream. Such an event occurred this month when Comtrade Software announced a backup solution purpose-built for Nutanix.
Every year at VMworld I have conversations that broaden my understanding of and appreciation for new products on the market. This year was no exception, as I had the opportunity to talk at length with Fidel Michieli, a System Architect at a SaaS provider, who shared with me his challenges with backup and recovery and how he came to choose Cohesity. In this first installment of my interview series with Fidel, he shares the challenges his company faced with its existing backup configuration as well as the struggles he had in identifying a backup solution that scaled to meet his dynamically changing and growing environment.
Integrating backup software, cloud services support, deduplication, and virtualization into a single hardware appliance remains a moving target. Even as backup appliance providers merge these technologies into their respective appliances, the methodologies they employ to do so can differ significantly. This becomes very apparent when one looks at the growing number of backup appliances from the providers in the market today.
In the last 12-18 months, software-only software-defined storage (SDS) seems to be on the tip of everyone’s tongue as the “next big thing” in storage. However, reaching agreement on which features constitute SDS software, who offers it, and even who competes against whom can be difficult, as provider allegiances and partnerships quickly evolve. In this second installment of my interview series with Nexenta’s Chairman and CEO, Tarkan Maner, he provides his views on how SDS software is impacting the competitive landscape and how Nexenta seeks to differentiate itself.
In today’s enterprise data centers, when one thinks performance, one thinks flash. That’s great. But that thought process can lead organizations to believe that “all-flash arrays” are the only option they have for delivering high levels of performance to their applications. That thinking is now outdated. The latest server-based storage solution from Datrium illustrates how accelerating application performance just became insanely easy, requiring only the click of a button rather than a hardware upgrade in the environment.
The advent of agent-less backup makes it easy to believe that the end of agent-based backup is nigh. Nothing could be further from the truth. While agent-less backup addresses many challenges around the protection and recovery of VMs, it is no panacea, and compelling reasons persist for organizations to continue to offer agent-based backup as an alternative to agent-less backup. Consider:
VMware and its suite of products have largely been designed by geeks, for geeks, and VMware pulls no punches about this claim. VMware’s CEO, Pat Gelsinger, is himself a self-professed geek, which was made evident a couple of times in his VMworld keynote. But where he personally and VMware corporately have made big steps forward in the last few years is in stripping out the technical mumbo-jumbo that can so easily beset VMware’s product suite and better translating its value proposition into “business speak.” This change in focus and language was on full display during Gelsinger’s portion of the opening keynotes that kicked off the VMworld 2015 conference.
As the whole technology world (or at least those intimately involved with the enterprise data center space) takes a breath before diving head first into VMworld next week, a few vendors are jumping the gun and making product announcements in advance of it. One of those is SimpliVity which announced its latest hyper-converged offering, OmniStack 3.0, this past Wednesday. In so doing, it continues to put a spotlight on why hyper-converged infrastructures and the companies delivering them are experiencing hyper-growth even in a time of relative market and technology uncertainty.
Hyper-converged infrastructures are quickly capturing the fancy of end-user organizations everywhere. They bundle hypervisor, server, and storage in a single node and provide the flexibility to scale out to form a single logical entity. In this configuration, they offer a very real opportunity for organizations to economically and practically collapse their existing infrastructure of servers and storage arrays into one that is much easier to implement, manage, and upgrade over time.
VMware Virtual Volumes (VVols) stands poised to fundamentally and positively change storage management in highly virtualized environments that use VMware vSphere. However, enterprises will only realize the full benefits that VVols have to offer by implementing a backend storage array that stands ready to take advantage of the VVols architecture. The HP 3PAR StoreServ family of arrays provides the virtualization-first architecture, along with the simplicity of implementation and ongoing management, that organizations need to realize the benefits of the VVols architecture, both short and long term.
It is almost a given in today’s world that, for an organization to operate at peak efficiency and achieve optimal results, it has to acquire and use multiple forms of technology as part of its business processes. However, what is not always so clear are the forces at work, both inside and outside of the business, that drive its technology acquisitions. While by no means a complete list, here are four forces that DCIG often sees at work behind the scenes that influence and drive many of today’s technology infrastructure buying decisions.
The introduction of first-generation software-defined storage solutions (often implemented as appliance- and storage controller-based virtualization) went terribly awry years ago, for reasons that the industry probably only now fully understands and can articulate well. While the value of software-defined storage has never been disputed, the best practices associated with its implementation, management, and support, both short and long term, took time to develop. We are now seeing the fruits of these efforts, as evidenced by some of the successful ways in which software-defined storage solutions are packaged and shipped.
In the early 2000s I was a big believer in appliance- and/or array-based storage virtualization technology. To me, it seemed like the most logical way to solve some of the most pressing problems confronting the deployment of storage networks in enterprise data centers, such as data migrations, storage optimization, and storage networking’s overall management complexity. Yet here we find ourselves in 2015 and, while appliance- and storage array-based storage virtualization still exists, it certainly never became the runaway success that many envisioned at the time. Here are my top 3 reasons as to what went wrong with this technology and why it has yet to fully realize its promise.

It did not and still does not sufficiently scale to meet enterprise requirements. The big appeal to me of storage virtualization appliances and/or array controllers was that they could aggregate all of an infrastructure’s storage arrays and their capacity into one giant pool of storage which could then…