In the last 12-18 months, software-only software-defined storage (SDS) seems to be on the tip of everyone’s tongue as the “next big thing” in storage. However, pinning down which features constitute SDS software, who offers it, and even who competes against whom can be difficult as provider allegiances and partnerships quickly evolve. In this second installment of my interview series with Nexenta’s Chairman and CEO, Tarkan Maner, he provides his views on how SDS software is impacting the competitive landscape and how Nexenta seeks to differentiate itself.
Category Archives: Virtualization
In today’s enterprise data centers, when one thinks performance, one thinks flash. That’s great. But that thought process can lead organizations to believe that “all-flash arrays” are the only option they have to get high levels of performance for their applications. That thinking is now outdated. The latest server-based storage solution from Datrium illustrates how accelerating application performance just became insanely easy: simply click a button rather than upgrade hardware in the environment.
The advent of agent-less backup makes it easy to believe that the end of agent-based backup is nigh. Nothing is further from the truth. While agent-less backup addresses many challenges around the protection and recovery of VMs, agent-less backup is no panacea, as compelling reasons persist for organizations to continue to offer agent-based backup as an alternative to agent-less backup. Consider:
VMware and its suite of products have largely been designed by geeks, for geeks, with VMware pulling no punches about this claim. VMware’s CEO, Pat Gelsinger, is himself a self-professed geek, as was evident a couple of times in his VMworld keynote. But where he personally and VMware corporately have made big steps forward in the last few years is stripping out the technical mumbo-jumbo that can so easily beset VMware’s product suite and better translating its value proposition into “business speak.” This change in focus and language was put on full display during Gelsinger’s portion of the opening keynotes that kicked off the VMworld 2015 conference.
As the whole technology world (or at least those intimately involved with the enterprise data center space) takes a breath before diving head first into VMworld next week, a few vendors are jumping the gun and making product announcements in advance of it. One of those is SimpliVity which announced its latest hyper-converged offering, OmniStack 3.0, this past Wednesday. In so doing, it continues to put a spotlight on why hyper-converged infrastructures and the companies delivering them are experiencing hyper-growth even in a time of relative market and technology uncertainty.
Hyper-converged infrastructures are quickly capturing the fancy of end-user organizations everywhere. They bundle hypervisor, server and storage in a single node and provide the flexibility to scale-out to form a single logical entity. In this configuration, they offer a very real opportunity for organizations to economically and practically collapse their existing infrastructure of servers and storage arrays into one that is much easier to implement, manage and upgrade over time.
VMware Virtual Volumes (VVols) stands poised to fundamentally and positively change storage management in highly virtualized environments that use VMware vSphere. However, enterprises will only realize the full benefits that VVols have to offer by implementing a backend storage array that stands ready to take advantage of the VVols architecture. The HP 3PAR StoreServ family of arrays provides the virtualization-first architecture along with the simplicity of implementation and ongoing management that organizations need to realize the benefits that the VVols architecture provides, both short and long term.
It is almost a given in today’s world that for almost any organization to operate at peak efficiency and achieve optimal results, it has to acquire and use multiple forms of technology as part of its business processes. However, what is not always so clear are the forces at work both inside and outside of the business that drive its technology acquisitions. While by no means a complete list, here are four (4) forces that DCIG often sees at work behind the scenes that influence and drive many of today’s technology infrastructure buying decisions.
The introduction of first generation software-defined storage solutions (often implemented as appliance and storage controller-based virtualization) went terribly awry when they were originally introduced years ago for reasons that the industry probably only now fully understands and can articulate well. While the value of software-defined storage has never been disputed, best practices associated with its implementation, management and support short and long term took time to develop. We are now seeing the fruits of these efforts as evidenced by some of the successful ways in which software-defined storage solutions are packaged and shipped.
In the early 2000’s I was a big believer in appliance and/or array-based storage virtualization technology. To me, it seemed like the most logical choice to solve some of the most pressing problems such as data migrations, storage optimization and reducing storage networking’s overall management complexity that were confronting the deployment of storage networks in enterprise data centers. Yet here we find ourselves in 2015 and, while appliance and storage array-based storage virtualization still exists, it certainly never became the runaway success that many envisioned at the time. Here are my top 3 reasons as to what went wrong with this technology and why it has yet to fully realize its promise. It did not and still does not sufficiently scale to meet enterprise requirements. The big appeal to me of storage virtualization appliances and/or array controllers was that they could aggregate all of an infrastructure’s storage arrays and their capacity into one giant pool of storage which could then…
Facebook is turning to a disaggregated racks strategy to create a next gen cloud computing data center infrastructure
Physical, purpose-built deduplicating backup appliances have found their way into many enterprise data centers as they expedite installation and simplify ongoing management of backup data. However, there is a growing business case for virtual appliances that offer the benefits of deduplication without the associated hardware costs. To determine when and if a virtual appliance is the correct choice, enterprises must evaluate several key factors to arrive at the right decision for a specific office or environment.
Organizations are becoming increasingly virtualized within their data center infrastructures, which is leading them to aggressively virtualize the storage arrays in their infrastructure to complement their already virtualized server environment. As they do so, it behooves them to distinguish between, and have a clear understanding of, each virtual component that makes up their newly virtualized storage infrastructure. The need to clarify this terminology comes clearly into focus as organizations evaluate the multi-tenancy and virtual storage array capabilities found on many high end storage arrays.
Choosing the right backup appliance – physical or virtual – does not have to be complicated so long as an organization knows the right questions to ask and gathers the appropriate information. However, as organizations are gathering this information, most conclude that a virtual backup appliance is NOT the right answer in most circumstances. In this fifth and final installment of DCIG’s interview with STORServer President Bill Smoldt, he explains how to choose the most appropriate backup appliance for your environment and why a virtual backup appliance is probably not the choice you will be making.
VMware® VMmark® has quickly become a performance benchmark to which many organizations turn to quantify how many virtual machines (VMs) they can realistically expect to host and then perform well on a cluster of physical servers. Yet a published VMmark score for a specified hardware configuration may overstate or, conversely, fail to fully reflect the particular solution’s VM consolidation and performance capabilities. The HP ProLiant BL660c published VMmark performance benchmarks using a backend HP 3PAR StoreServ 7450 all-flash array provide the relevant, real-world results that organizations need to achieve maximum VM density levels, maintain or even improve VM performance as they scale and control costs as they grow.
Delivering always-on application availability accompanied by the highest levels of capacity, management and performance is what historically distinguishes high end storage arrays from other storage arrays available on the market. But even these arrays struggle to easily deliver on a fundamental data center task: migrating data from one physical array to another. The introduction of the storage virtual array feature into the new HP XP7 dramatically eases this typically complex task as it facilitates data consolidations and migrations by moving entire storage virtual arrays from one physical array frame to another while simplifying array management in the process.
ITaaS is the new Holy Grail, with 75 percent of IT managers saying ITaaS aligns with their organization’s philosophy and needs. For organizations accustomed to living in a world where each application had dedicated servers, networking and storage, ITaaS eliminates these silos. It aggregates these resources into a common pool accessible by all virtual machines (VMs) and their hosted applications, which may be owned by multiple different departments or even different organizations. These resources may then be allocated to them at any time.
Anytime DCIG prepares a Buyer’s Guide – whether a net new Buyer’s Guide or a refresh of an existing Buyer’s Guide – it always uncovers a number of interesting trends and developments about that technology. Therefore it is no surprise (at least to us anyway) that as DCIG prepares to release its DCIG 2014 Enterprise Midrange Array Buyer’s Guide, it has observed a number of interesting data points about enterprise midrange arrays. As DCIG looks forward to releasing this Buyer’s Guide, we wanted to share some of the observations and insights we gained as we prepared this Guide, as well as why we reached some of the conclusions that we did.